Front-end Performance Notes

On average, 80% of the end-user response time is spent on the front end [The Performance Golden Rule].

Please take the time to look at the original sources of information used when compiling this document.
Credit also to Steve for contributions.

Changelog

  1. 20/03/2013 - information added regarding 404s for non-page assets
  2. 20/03/2013 - information added regarding reducing cookie size
  3. 20/03/2013 - information added regarding using cookie-free domains for components

Styles at the top, scripts at the bottom

CSS blocks rendering, so we should include all stylesheets straight away in the <head>, allowing the page to render progressively. A browser won't render a page until it has all style information; if we put that information at the bottom of the page, we make the browser wait before rendering content.

A browser will download as many assets as it can from a single domain in parallel. JS blocks parallel downloads, both to ensure the browser knows how the JavaScript affects the page and to preserve script order. Therefore we should deal with scripts last, so they don't delay any other assets on our page from loading.

If we have no choice but to reference JS in the <head>, we should place it after the references to CSS files. Because JS blocks downloads, scripts should be referenced after ALL CSS files unless the CSS relies upon the JS. In short: optimise the order of resources.
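
As a minimal sketch, a <head> that must carry a script would be ordered like this (filenames illustrative):

<head>
  <link rel="stylesheet" href="styles.css">
  <script src="must-run-early.js"></script>
</head>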

Make Fewer Requests

Every page asset requires an HTTP request. Minimising these requests reduces the amount of work a browser needs to complete whilst rendering a page. Assess each and every HTTP request to ensure it is necessary and, if so, optimised. The key to faster pages is reducing the number of components, which in turn reduces the number of HTTP requests required to render the page.

Maximise Parallel Downloads

The number of assets a browser can download in parallel from the same domain is limited. To increase the number of parallel downloads we can serve assets from different domains or subdomains. Combining this with CDN technology can give us additional benefits such as serving assets from an optimal physical location.

Yahoo's guidelines say to split assets across at least two but no more than four hostnames - a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
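
As a sketch, with illustrative hostnames that would typically point at the same server or a CDN:

<img src="//static1.example.org/img/logo.png" alt="Logo">
<img src="//static2.example.org/img/hero.png" alt="Hero banner">
<script src="//static1.example.org/js/site.js"></script>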

Use a Content Delivery Network

If the project justifies it, use a CDN. The user's proximity to the web server has an impact on response times. Deploying content across multiple, geographically dispersed servers will make pages load faster from the user's perspective.

HTTP requests and DNS lookups

The problem with serving assets from multiple domains, however, is DNS lookup. DNS lookups are expensive, typically costing tens of milliseconds each. Each time (from a cold cache) a new domain is referenced, the HTTP request is subject to a DNS lookup in which the actual location of the asset is resolved. Consideration needs to be given on a per-site basis as to whether a browser can fetch several under-parallelised assets from one domain quicker than it can perform DNS lookups to multiple domains and then parallelise those requests.

A DNS lookup is only incurred on the first request; subsequent requests to the same domain do not incur another lookup whilst the result remains cached.

DNS Prefetching

For third-party additions to our pages, e.g. Twitter widgets, we incur DNS lookups to external domains. We can speed up this process using DNS prefetching: adding a tag to our <head> tells the browser to start resolving each external hostname's DNS before it's actually needed, so by the time it reaches the actual element in our page the lookup is already under way. It's a head start.
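
For example (hostname illustrative):

<link rel="dns-prefetch" href="//platform.twitter.com">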

Gzipping & Minifying

For our text assets (HTML, CSS, JS), we should be both minifying them (to remove extra whitespace and comments) to save bytes and gzipping them to compress them further still. In some cases it's possible to see upwards of 90% reduction in file size by minifying and gzipping.

Gzipping resources can be handled at the server level automatically.
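
As a sketch, the negotiation uses two standard HTTP headers - the browser advertises support and the server responds with a compressed body:

Request:  Accept-Encoding: gzip, deflate
Response: Content-Encoding: gzip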

Cache Assets

By telling the browser which of our assets can be cached, and for how long (using the Expires or Cache-Control headers), we can reduce the number of HTTP requests made by the browser when downloading resources upon return visits. We should cache CSS, JS, and images, thereby reducing HTTP requests.
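
For example, a far-future caching policy on a static asset might look like this (values illustrative):

Cache-Control: public, max-age=31536000
Expires: Thu, 20 Mar 2014 12:00:00 GMT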

Appending a query string to our static assets will break the cache and force a download if we need to push changes within the expiry time. Alternatively, our build process could embed a version number into the file name.
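
Both approaches as a sketch (version numbers illustrative):

<link rel="stylesheet" href="styles.css?v=42">  <!-- query-string cache bust -->
<link rel="stylesheet" href="styles.42.css">    <!-- version embedded in the file name -->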

(02) We can use cookies for various reasons - authentication, personalisation etc. Information about cookies is exchanged in the HTTP headers between web servers and browsers. Cookies themselves originate from web servers when browsers request a page. Browsers send back the cookie in future requests. It's important to keep the size of cookies as low as possible, so we can minimise the impact on the user's response time.

(02) We should: eliminate unnecessary cookies, keep cookie size as low as possible, set cookies at the appropriate domain level (see below), set an expires date appropriately.
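
As a sketch (values illustrative): a cookie set from www.example.org without a Domain attribute stays scoped to that host, and the Expires attribute keeps its lifetime bounded:

Set-Cookie: session=abc123; Path=/; Expires=Thu, 20 Mar 2014 12:00:00 GMT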

Use Cookie-free Domains for Components

(03) Once the server creates a cookie for a particular domain, all subsequent HTTP requests for that domain include the cookie - even requests for static assets like images. The server has no need for cookies sent with requests for static assets, so this is network traffic for no good reason. We should make sure we don't serve static content like images and stylesheets from a domain that sets cookies, to reduce network traffic.

(03) A solution, therefore, is to request all of our static assets from a different domain which is cookie-free. This doesn't mean our assets need to "live" at a separate location, just that they are accessible via a different domain. We could use a subdomain, e.g. static.example.org. However, note that if we have already set cookies on the bare domain example.org, as opposed to www.example.org, these cookies will be included with requests to any subdomain. In this scenario we should use a completely separate domain and keep it cookie-free. Using a CDN can take care of this for us.

(03) Decision making: running a website without the www. leaves no choice but to write cookies to *.example.org, meaning we can't use a cookie-free subdomain for static assets. For performance, therefore, it's best to serve our site from the www. subdomain and write our cookies to that subdomain.

Make CSS and JS External

By storing CSS and JS in external files, we can make use of the browser cache and keep our code better organised. The only exception may be pages with one view per session, which may benefit from inline code to save on HTTP requests.

Combine External CSS & JS

Combining multiple files means fewer HTTP requests. Whilst we can author in as many CSS and JS files as we feel necessary to build in a modular, component-based approach, we should concatenate and minify these into a small number of output files for production. This should be handled automatically during the build process.

Combining files is straightforward for global files which are applied site-wide. Section- or page-specific files, code with different versioning needs, or files from separate domains obviously require more thought.

Google recommends a maximum of 3, but preferably 2, JS files.

A good way to partition JavaScript is into 2 files: one containing the minimal code needed to render the page at startup; and one file containing the code that isn't needed until the page load has completed.
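
A sketch of that split (filenames illustrative) - the second file is injected only once the page has loaded:

<script src="startup.js"></script>
<script>
// lazy-load the code that isn't needed until after onload
window.addEventListener('load', function () {
  var script = document.createElement('script');
  script.src = 'deferred.js';
  document.body.appendChild(script);
});
</script>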

JavaScript for rarely visited components may be best served in its own file, loaded only when that component is requested by a user. Also, for small bits of JavaScript that shouldn't be cached, consider inlining that JavaScript in the HTML page itself.

Prefer Asynchronous Loading

Fetching resources asynchronously prevents those resources from blocking the page load. For JS resources that aren't needed to construct the initial view of the web page, loading them asynchronously means the browser can continue parsing and rendering HTML that comes after the script without waiting for that script to be downloaded, parsed and executed.

Note: Multiple async scripts will be executed in no specific order.

<script async src="example.js"></script>

Third-party scripts are not always delivered efficiently, so it's important to load these scripts asynchronously to prevent them from slowing down the rest of our page.

Conditional Loading For Responsive Designs

Using conditional loading we can ensure small-screen devices do not download assets intended for larger screens. We should avoid using display:none to hide content when designing responsively - the hidden assets are still downloaded, so we end up serving unnecessary assets to some devices and increasing download times. Build mobile first and use conditional loading to serve demanding content to more capable devices.
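
A minimal sketch using matchMedia (breakpoint and filename illustrative):

<script>
// only fetch the heavyweight enhancements on larger screens
if (window.matchMedia('(min-width: 768px)').matches) {
  var script = document.createElement('script');
  script.src = 'desktop-enhancements.js';
  document.head.appendChild(script);
}
</script>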

Defer Loading / Delayed Content

Similarly, by reviewing what on our page is absolutely required in order to render it, we can decide whether any components can be post-loaded - for example, hidden content that requires user interaction, images below the fold, or JS not called at startup. Deferred loading reduces the initial download size and allows other resources to be downloaded in parallel.

We could use HTML5 data attributes to store reference data for elements we decide are peripheral (keeping those elements out of the initial HTML), then load them progressively using JavaScript once the window has finished loading.
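
A sketch of the pattern (attribute name illustrative): the real image URL sits in a data attribute, and a small script promotes it after onload:

<img data-src="gallery/photo-large.jpg" alt="Gallery photo">
<script>
// once the page has loaded, swap the data attribute into the real src
window.addEventListener('load', function () {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    images[i].src = images[i].getAttribute('data-src');
  }
});
</script>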

For applications with heavy use of JS we should consider splitting our JS into components and using deferred loading to ensure startup time is optimised. JS triggered by user interaction is often not required for the initial onload and can therefore be deferred until it's actually needed - "lazy loading". Remember, a browser cannot download any other assets until it has finished downloading any requested JS files.

CSS Performance

CSS blocks progressive page rendering. This is to prevent the browser from having to redraw elements of the page if their style changes. Therefore it's crucial that browsers get hold of our CSS files as soon as possible - we should avoid extra DNS lookups on this critical path. This means that, in an ideal world, CSS files shouldn't be on our static-assets subdomain alongside our JS/images/fonts etc.

From a cold cache, the DNS lookup required to grab CSS files could well slow down initial page rendering. Despite best practice suggesting static assets should be served over subdomains, CSS may be an exception - it's on the critical path (the code and resources required to render the initial view of a web page).

A browser will download all CSS files before it begins to render a page. This includes stylesheets for other media types such as print, along with stylesheets wrapped in a media query - even if they're not needed.

We should: never serve CSS from a static/asset domain, serve it as early as possible, concatenate it (as a browser will download everything anyway), gzip and minify it, cache it.

Note: There may be an exception to the "never serve CSS from a static/asset domain" claim - if optimal-location serving is offered by CDN technology, this may outweigh the negative impact of the extra DNS lookup.

Efficient CSS

We should use as few CSS rules as possible, and avoid inefficient selectors. The key to efficient selectors is to define rules that are as specific as possible and that avoid unnecessary redundancy.

Descendant selectors are inefficient - for each element that matches the key (the rightmost selector) the browser has to traverse up the DOM tree evaluating every ancestor until it finds a match or reaches the root element. We should make the key as specific as possible.

Overly qualified selectors are inefficient - if we have a unique ID or class we don't need to qualify this with a tag.

We should: make rules as specific as possible, remove redundant qualifiers, avoid descendant selectors and redundant ancestors, use class selectors instead of descendant selectors.
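
For example (class names illustrative):

/* inefficient: for every <a> the browser must walk up the tree checking for li and #nav */
#nav li a { color: #333; }

/* efficient: a single class selector matches the element directly */
.nav-link { color: #333; }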

When stylesheets are loaded with the @import directive, the browser is unable to load those files in parallel. This blocks the download of other assets and, on large-scale sites, may have a noticeable negative impact on performance.
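
Prefer <link> elements for this reason:

/* avoid: @import serialises stylesheet downloads */
@import url("base.css");

<!-- prefer: <link> elements can download in parallel -->
<link rel="stylesheet" href="base.css">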

Reduce the Number of DOM Elements

By following web standards and building pages with semantic markup, we can reduce bloat and optimise the number of DOM elements. This means fewer bytes to download and quicker DOM access in JavaScript.

No 404s

HTTP requests are expensive, so making a request only to receive a 404 is pure waste. Ensure there are no 404s, and therefore no useless HTTP requests.

(01) For good measure, we should ensure that we serve a very basic 404 for non-page assets - e.g. "Image not found". This means that if a few resources aren't found, we minimise the impact these requests have on performance. If we serve our standard user-focused 404 page (which may also query a database) for non-page assets, we increase response time and download wasted bytes. A simple plain-text file is sufficient for missing non-page assets.

(01) Simple 404s for non-page assets may also benefit local development. In this scenario some assets are very likely to be missing, and a lightweight 404 ensures we have optimised the handling of these redundant requests whilst we develop.

Minimise Redirects

Reducing HTTP redirects from one URL to another cuts out additional round-trip time (RTT) and wait time for users. We should reserve redirects for technical necessity only, and certainly not use them on popular, high-traffic pages.

Spriting Images

Sprites should be an integral part of our performance strategy: loading one larger image over a single HTTP request is better than several images over several requests. Sprites are tricky when used on non-fixed-dimension elements, however. We should avoid adding extra whitespace to our sprite just to cater for fluid elements, as the browser would then require more memory to decompress the image into a pixel map.

A solution to this is to use an empty element to hold our background image: we place an empty element inside the fluid element and fix its dimensions so that it can then be 'sprited'.
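
A sketch of the technique (class names, dimensions and sprite offsets illustrative):

<div class="fluid-module">
  <span class="icon"></span>
  Fluid content here
</div>

.icon {
  display: inline-block;  /* the empty element has fixed dimensions */
  width: 16px;
  height: 16px;
  background: url("sprite.png") no-repeat -32px 0;
}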

Progressive JPGs

Progressive JPGs load differently from baseline JPGs: the whole image appears immediately, pixellated, before slowly coming into focus. This is in contrast to the typical behaviour whereby a JPG loads top-to-bottom in a jerky manner. There is a perceived performance improvement with progressive JPGs, even when they are a little larger than baseline JPGs.

To enable progressive JPGs, we can simply tick the Progressive option when saving in Photoshop.

Losslessly Optimise Images

By running our images through an optimisation tool (e.g. ImageOptim or OptiPNG) after exporting them from Photoshop, we can reduce file size without compromising quality - lossless optimisation. This is a no-brainer and offers significant file-size improvements.

Avoid Images Where Possible

If we can replicate images using just CSS, and as long as it doesn't introduce shed loads more code, we should favour this approach: it means fewer HTTP requests.

Don't Scale Images in HTML

We shouldn't serve bigger images than we need and then scale them down in the browser - that downloads wasted bytes. Always define the width and height of our images to avoid unnecessary reflows and repaints during rendering.
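
For example, serve the image at its display size and declare that size (dimensions illustrative):

<img src="thumb.jpg" width="120" height="90" alt="Product thumbnail">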

Consistent Resource URLs

We should serve a resource, e.g. images, from a consistent URL to eliminate duplicate download bytes and additional RTTs. If an image is served from different URLs duplicate requests are made for the same resource. If an image is served from a different domain, an extra DNS lookup may also be incurred.

Use Diagnostic Tools

yslow.org and developers.google.com/speed/pagespeed/insights_extensions are tools we can use to assess the performance of our websites, collect invaluable feedback, and monitor the impact of any changes we make.
