Only when all scripts encountered have been downloaded and executed, and the entire contents of the HTML have been parsed, will the page render. Users cannot interact with an empty page, so rendering as quickly as possible is critical to user experience. The good news is that we can do quite a bit to limit the render blocking effects. More on this later.
Scripts all the way down
Some third party scripts load additional scripts onto the page using document.write(). This technique simply writes more scripts to your page after the initial script is run.
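As a sketch, the pattern often looks something like this (the URLs and file names are hypothetical):

```html
<!-- The tag you add to your page -->
<script src="https://thirdparty.example.com/loader.js"></script>

<!-- Inside loader.js, something like this runs, writing yet another
     (often much larger) script into the document as it parses: -->
<script>
  document.write('<script src="https://thirdparty.example.com/widget.js"><\/script>');
</script>
```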
The technique is clever, but in the end the rendering of the page is still blocked. Even worse, use of document.write() may be disallowed entirely by the browser. The image below includes a request map illustrating what this technique looks like:
We can see that ZenDesk, a popular support tool, initially loads up a tiny bit of JS on the main application (small circle bottom right), but quickly loads up other, much larger assets (bigger circles).
Worse still, sometimes these scripts load dependencies that the main application is already serving. We have seen 3rd party chat and review services load the entire jQuery library onto the page, even though the main application was already doing so! Often, these dependencies aren’t even optional. We like to avoid using jQuery where possible, but loading it twice is just terrible for performance.
How To Limit Third Party Influence on Performance
Async & Defer
Enter the async and defer attributes. As mentioned above, scripts are render blocking, and thus problematic for performance. async and defer allow us to limit that render blocking effect in different ways. First, a visualization of how the browser behaves when encountering scripts that use neither attribute:
Short for “asynchronous”, adding the async attribute to a script declaration essentially tells the browser to load the script in the background, avoiding render blocking. When the script is ready, the browser is free to execute the file. If the page contains multiple scripts with the async attribute, they can be downloaded in whatever order the parser deems necessary, and each will be executed as soon as its download completes. Use async when the file in question has no dependencies, such as the DOM being fully parsed or another library being loaded.
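For example (analytics.js here is a hypothetical, standalone script with no dependencies):

```html
<!-- Downloads in the background while parsing continues;
     executes as soon as the download finishes -->
<script async src="analytics.js"></script>
```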
With the defer attribute, the browser still downloads the script asynchronously, but its execution is deferred until parsing of the document is complete. A side effect of waiting until the DOM is fully parsed is that deferred scripts are executed in the order in which they are declared. Use defer when the script in question either relies on a fully parsed DOM or its execution isn’t high priority.
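A sketch of the defer pattern (file names are hypothetical); app.js can safely rely on framework.js because deferred scripts execute in declaration order after parsing completes:

```html
<!-- Both download in the background, but execute in declared order
     only after the document has been fully parsed -->
<script defer src="framework.js"></script>
<script defer src="app.js"></script>
```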
For a deeper understanding of async and defer, be sure to visit Ire Aderinokun’s guide. It helped tremendously with my own understanding and inspired the animations I created above to help visualize their differences.
Prioritize critical assets via preloading and resource hinting
Using “preloading” is another great way to improve the overall performance of your website. While it is most commonly used for critical assets like webfonts (which are often blocked until the CSS has been processed) and images, this technique can be useful for loading critical JS as well.
When your site has a snippet of JS that needs to be loaded sooner rather than later, preloading and resource hinting can be a nice win.
<link rel="preload" as="script" href="critical.js">
By adding a <link> element to the <head> of the document, we can inform the browser that we’d like to prioritize the loading of critical.js, and we also help the parser by preemptively “hinting” at its type with as="script".
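When the script lives on a third party origin you can’t preload outright, lighter-weight resource hints can still warm things up (the origin below is hypothetical):

```html
<!-- Open a connection (DNS + TCP + TLS) to the origin ahead of time -->
<link rel="preconnect" href="https://thirdparty.example.com">
<!-- Cheaper fallback: resolve DNS only -->
<link rel="dns-prefetch" href="https://thirdparty.example.com">
```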
You can read more about preloading critical path requests on Google’s Web.dev blog.
Self host where possible
Another option to help limit the impact of third party JS is to self host it. When you must load some third party code, there is usually nothing stopping you from hosting it yourself. Doing so avoids the network round trips necessary to retrieve and load a file from another server. Coupled with defer, this can really help with those key Lighthouse metrics.
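As a sketch (the vendor URL and local path are hypothetical), the swap looks like this:

```html
<!-- Before: an extra DNS lookup, connection, and TLS handshake -->
<!-- <script src="https://cdn.vendor.example.com/widget.js"></script> -->

<!-- After: a copy served from your own origin, deferred -->
<script defer src="/js/vendor/widget.js"></script>
```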
Cautiously use a tag manager like Google Tag Manager
Tag managers, like Google Tag Manager, are commonplace for teams with many marketing and tracking needs. They allow marketers to deploy marketing tags (scripts, pixels, etc.) without developer intervention: marketers control which scripts are loaded, when they load, and which user events trigger them, all without needing a developer to code around those conditions. In theory, they make a lot of sense. In practice, they are often abused, with competing stakeholders loading scripts upon scripts because it is so easy to do. While GTM loads scripts asynchronously, if your website is loading dozens of scripts and multiple megabytes of data, your performance is still going to be hosed. We recommend that teams use tag managers cautiously and routinely audit what is being loaded on key pages.
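One low-tech way to start such an audit is from the browser console. As a sketch (the helper name is mine, not part of any library), a small function like this summarizes which third party origins are serving scripts, when fed the URLs gathered from document.querySelectorAll('script[src]'):

```javascript
// Given a list of script src values and the page's origin, return the
// unique third party origins serving scripts to the page.
function extractThirdPartyOrigins(scriptUrls, pageOrigin) {
  const origins = scriptUrls
    // Resolve relative srcs against the page's origin
    .map((src) => new URL(src, pageOrigin).origin)
    // Drop first party scripts
    .filter((origin) => origin !== pageOrigin);
  // De-duplicate while preserving first-seen order
  return [...new Set(origins)];
}
```

In the console you might call it as `extractThirdPartyOrigins([...document.querySelectorAll('script[src]')].map(s => s.src), location.origin)` and compare the result against the list of vendors you actually expect on the page.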