I’m at the point in my career where keeping things simple is a top priority, even if that means a little sacrifice in other areas. In this particular case, as part of redesigning rpheath.com, I wanted to pull in the latest N posts from this blog. Given that I’m working with a static site, that meant using JavaScript. A young whippersnapper may immediately turn to React, Vue, Svelte, or (gasp!) Angular, which is reasonable. Those frameworks make this type of thing clean and easy. But I did not want to deal with integrating webpack and/or babel into my builds, as I’ve been down that road plenty, and it’s (shockingly!) still pretty annoying to deal with. Besides, each redesign of my personal site strives for more and more simplicity, so that would go against the grain.
I chose to use vanilla JS for this. Let’s talk about it! It all starts with a fetch-based function to grab the data from the blog’s API:
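In rough form, it's something like this (the endpoint URL below is just a stand-in for the real API, and the JSON-parsing and error-handling details may differ):

const fetchPosts = (callback) => {
  // hit the blog's API (placeholder URL) and parse the JSON response
  fetch("https://rpheath.com/api/posts.json")
    .then((response) => response.json())
    // hand the parsed data to whatever callback the caller provides
    .then((data) => callback(data))
    .catch((error) => console.error("Could not fetch posts:", error));
};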
The fetchPosts() function grabs data from an API and hands it over to a callback function that I define for processing. This is all so simple because fetch() is built on Promises, which in modern JavaScript are the expectation for asynchronous calls.
Since we don’t have a framework to do stuff for us, the next “problem” we need to take care of is ensuring that our code runs at the right time — meaning, after the DOM has loaded. Old school folks may remember jQuery’s $(document).ready(...) or $(function() { ... }) helpers! Yeah, it’s that, but with vanilla JavaScript. There are two ways to approach it:
// (1) wait for the DOMContentLoaded event
document.addEventListener("DOMContentLoaded", (_e) => {
  console.log("The DOM is ready!");
});

// (2) a self-executing function
(function() {
  console.log("The DOM is ready!");
  console.warn("...but only if this JS is included at the bottom of the page");
  console.warn("...also, I ignore deferred scripts!");
})();
Both work, but notice the caveats with option (2) above. If you go with a self-executing function, it runs as soon as the browser reaches it, so the script has to sit at the bottom of the page, after your HTML has been parsed top-down (and even then, it still ignores deferred scripts). We’ll go with option (1) for our purposes.
The next bit to work out is what to do with our data once we get it back from the API. In React, for example, we’d make a component that spits out some HTML for the browser. Well, that’s exactly what we’ll do, only we’ll use JavaScript’s Template Strings instead.
const renderPosts = (data, container) => {
  let posts = [];

  data.forEach((post) => {
    const html = `
      <div class="post">
        <div class="post-date">
          <time>${post.date}</time>
        </div>
        <div class="post-link">
          <a href="${post.url}">${post.title}</a>
        </div>
        <p class="post-caption">${post.summary}</p>
      </div>`;

    posts.push(html);
  });

  container.innerHTML = posts.join("\n");
};
I bet any youngsters who stumble onto this won’t even know what innerHTML is. But essentially, we just take the data and iterate over the response, creating an array of string-interpolated items. Once we’re done “looping”, we just set the container’s HTML to render our posts.
It’s a nice touch to put some kind of default text or loading indicator into your “container” element:
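Something along these lines does the trick (the id and class names here are just placeholders for whatever your markup actually uses):

<div id="posts">
  <p class="post-loading">Loading the latest posts…</p>
</div>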
Then once the DOM loads and your function executes, innerHTML will replace your loading content with the interpolated response from the API.
Now that all of our pieces are in place, the result looks like this:
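Roughly speaking, it’s just the DOMContentLoaded listener calling fetchPosts() and handing the data off to renderPosts(); the #posts selector below is a stand-in for whatever your container element actually is.

document.addEventListener("DOMContentLoaded", () => {
  // grab the container that's currently showing the loading placeholder
  const container = document.querySelector("#posts");

  // fetch the latest posts, then render them into the container
  fetchPosts((data) => renderPosts(data, container));
});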
You can see this in action by going to rpheath.com/blog.
The important lesson here, if there is one, is to avoid defaulting to complexity because “that’s what the internet does”. If I wanted to use React for this, I totally could, and the outcome would look very clean and satisfying. But the complexity involved with integrating React into my builds was just not worth it. I have the same result, but with zero configuration or headache. I’m just using what browsers already know and understand. And for me, that’s the reward.