Category Archives: WebPageTest

WebPageTest Private Instance in 5 Minutes or Less

Posted in Front End Engineering, Open Source, Web Performance, WebPageTest on by Rick Viscomi.

If you’re like me and you have ideas to help improve the WebPageTest public instance UI or you want to crush some bugs, you have a couple of development options. If it’s just a tweak to the styles or similar, you could verify the change live in browser developer tools and submit a small pull request to the GitHub repository. New features that are more complex can’t be done as easily on the fly. A development server should be used to ensure the changes work as intended and don’t break anything. So what are the options for testing WebPageTest locally?

Running WebPageTest on your laptop

My first choice is to run the development server on my laptop running OSX. For how easy it is, there are surprisingly few guides written on getting WebPageTest set up for ad hoc development. Here’s an overview of the process:

  1. Get a clean copy of the code.
  2. Configure the private instance.
  3. Configure Apache to point to the server.

The goal is to accomplish these three steps in under five minutes, so I may take some shortcuts or make assumptions about your development environment. I’ll be sure to note these along the way.

1. Clone the latest version from GitHub

If you want simple, this is it. Just download the master branch from the GitHub repository and save it anywhere.

Note: If you already have the GitHub Desktop application installed, here’s a shortcut that will clone the repository for you: github-mac://openRepo/

It would also be proper version control etiquette to create a new branch for your changes rather than work in the master branch.

2. Point the private instance to the public test agents

This is key. If we’re only interested in making changes to the UI, it’s totally unnecessary to spin up our own test agents. If your change doesn’t depend on live data at all, you could even skip this step. For example, the home page is static, and as long as you don’t have to submit a test, you can work entirely locally. However, most of the site involves configuring and analyzing tests, which definitely requires live data. The trick is that you can let the public instance do the actual testing workload while your private instance simply pulls in the data and displays it.

The WebPageTest API

Now is a good time to bring up the API. Of course, if you already have an API key you can skip this part.

If you want to communicate with the public instance, you’ll need to be authorized. All API users have a unique key, which can be obtained immediately by filling out a short form on the WebPageTest site. After submitting, you’ll be emailed a key. Save this for the next part.
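To illustrate how the key is used, here is a minimal Python sketch that builds a request to the public API’s runtest.php endpoint. The parameter names follow the public WebPageTest API; the page URL and YOUR_API_KEY are placeholders.

```python
# Sketch: constructing a test-submission URL for the public
# WebPageTest API. Substitute your own API key and page URL.
from urllib.parse import urlencode

def build_test_request(page_url, api_key, location="Dulles:Chrome"):
    """Build the URL that submits a test to the public instance."""
    params = {
        "url": page_url,       # the page to test
        "k": api_key,          # your API key
        "location": location,  # test agent location and browser
        "f": "json",           # response format
    }
    return "https://www.webpagetest.org/runtest.php?" + urlencode(params)

request_url = build_test_request("https://example.com/", "YOUR_API_KEY")
print(request_url)
# → https://www.webpagetest.org/runtest.php?url=https%3A%2F%2Fexample.com%2F&k=YOUR_API_KEY&location=Dulles%3AChrome&f=json
```

Fetching that URL submits the test and returns JSON containing the test ID and result URLs.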

Using the public test agents

If the goal is simply to get test results from an arbitrary test agent, any of the Dulles browsers will do, as they’re provisioned for a healthy amount of traffic — not that we’ll be producing much. To use one of these test agents, we’ll need to create a file at www/settings/locations.ini. Note that the source code is bundled with a locations.ini.sample file for reference.

The contents of this file should be:


label="Dulles, VA"

label="Dulles, VA - Chrome"

The relay server and location ensure that we use the public instance and the relay key authorizes the requests. You can skip the overhead of creating a blank file and copy/pasting the configuration above by downloading it as a gist from GitHub and saving it to the www/settings/ directory. Remember to use your own API key.
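For reference, here is a hedged sketch of what a complete relay-style locations.ini might look like, checked with Python’s configparser before pointing Apache at it. The section names and values are placeholders modeled on the bundled locations.ini.sample, not taken verbatim from it.

```python
# Sanity check: parse locations.ini content before starting Apache.
# The sections and relay values below are illustrative placeholders;
# substitute your own API key.
import configparser

SAMPLE = """\
[locations]
1=Dulles

[Dulles]
1=Dulles_Chrome
label="Dulles, VA"

[Dulles_Chrome]
browser=Chrome
label="Dulles, VA - Chrome"
relayServer=http://www.webpagetest.org/
relayLocation=Dulles:Chrome
relayKey=YOUR_API_KEY
"""

config = configparser.ConfigParser()
config.optionxform = str  # preserve the case of keys like relayServer
config.read_string(SAMPLE)

# A relay location needs a server, a remote location name, and a key.
for key in ("relayServer", "relayLocation", "relayKey"):
    assert key in config["Dulles_Chrome"], f"missing {key}"
print("locations.ini parses OK")
```

Running this against your real file (via config.read) is a quick way to catch a typo before restarting Apache.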

3. Configure Apache

OSX already includes Apache, which is needed to run the server, and PHP, which is needed to render the pages. This last step is only a matter of flipping switches and linking the two with WebPageTest.

Open the Apache configuration file at /etc/apache2/httpd.conf and update the following settings.

Enable PHP

#LoadModule php5_module libexec/apache2/

By default, PHP is disabled. Enable it by removing the leading # to uncomment the LoadModule line.

Give Apache read/write permissions

User _www
Group _www

Apache needs file permissions to save the test data. Instead of granting the Apache user (_www) these permissions, just run Apache as yourself. Change this to:

User yourusername
Group staff

In case you’re not sure, your username is the output of the whoami command.

Point to the server code

DocumentRoot "/foo"
<Directory "/foo">

This sets up the base file path from which requests will be served. By default it points to some local path (like /foo). To point localhost to the WebPageTest web server, change both paths to the www directory of your copy of the code:

DocumentRoot "/path/to/webpagetest/www"
<Directory "/path/to/webpagetest/www">

Note: If you’ve already got Apache configured for something else, you may instead want to serve WebPageTest’s www directory from a separate virtual host on a unique port number.
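One way to do that, sketched here with a placeholder path and port (adjust both to your setup; on older Apache 2.2 installs the access-control directives differ):

```apache
# Serve WebPageTest's www directory on port 8080, leaving the
# default site on port 80 untouched.
Listen 8080
<VirtualHost *:8080>
    DocumentRoot "/path/to/webpagetest/www"
    <Directory "/path/to/webpagetest/www">
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

With this in place, the private instance lives at localhost:8080 and your existing site is untouched.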

And that should be it! If you start Apache and navigate to localhost in your browser, you should see the WebPageTest home page. Now you’re ready to start the real work of making the UI changes.

You can find out more about how to use WebPageTest to analyze the performance of websites in my new book, Using WebPageTest.

Cursory Perf Audit: The Hateful Eight

Posted in Front End Engineering, Web Fonts, Web Performance, WebPageTest on by Rick Viscomi.

I’m a huge fan of Quentin Tarantino, so I was excited to find out earlier today that the locations of the large-format showings of his new film, The Hateful Eight, were just released. Luckily, the theater a few miles away will be showing it in GLORIOUS 70MM. However, I was intrigued by their promotional website. I wondered about the performance of this site and whether there were any interesting takeaways. So I spun up WebPageTest and here’s what I learned.

What it’s made of

The Hateful Eight website screenshot.

The home page is 7,079 KB fully loaded, so it’s a heavy page, especially considering that the average page size is about 2,200 KB. The page is adorned with a very seasonally appropriate falling-snow effect. I can feel the chills running down my spine — not from the freezing weather but from the JavaScript that must be crippling the CPU.

Behind the festive snow, the hero graphic (pun intended) scrolls with a parallax effect. When done right, this actually looks nice considering the depth of the images. Unfortunately, you probably won’t have the requisite CPU remaining to get the full smooth-scrolling effect. When you do manage to scroll down, you’ll find some lovely portraits of the main characters (which have a tilt effect on scroll) and press clippings. The trouble here, in addition to the Web 2.0 effects, is mostly the images. Because of the parallax effect, the landing graphic is actually composed of three images: the background image of the cabin and woods, the midground image of the characters wading through the snow, and the foreground image of the gunslinging hero. In total, these images are 1,904 KB. Those images alone are almost as big as the average web page!

It’s also worth mentioning that none of the text above the fold is actually text but rather static images. Not only is having to load extra images bad for performance, but it’s also terrible for accessibility as visually impaired users will be unable to have the images’ content read back to them via assistive technology. Speaking of the text, or lack thereof, two custom web fonts are included on the page: Rockwell W01 and Rockwell W01 Bold.

Load time

The WebPageTest results speak for themselves. The most horrifying statistic is the fully loaded time of about 16 seconds: because the content above the fold is entirely composed of images, the onload event waits for all of them before firing. As the filmstrip shows, the page doesn’t appear usable until at least 15 seconds, after the images have loaded and their slide transition completes.

How to do better

15 seconds is an order of magnitude slower than what I would be willing to tolerate for a static page like this. There are many performance optimizations that could significantly help this site. Being a cursory perf audit, here are just a few that stand out:

1. Enable Keep-Alive

Seriously. It’s almost 2016 and people still haven’t heard about persistent connections. It’s one of the easiest optimizations out there and it takes less than a minute to implement. Do it.
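In Apache, for example, persistent connections come down to a few directives in httpd.conf; the values shown here are common defaults, not a tuned recommendation:

```apache
# httpd.conf — reuse each TCP connection for multiple requests
KeepAlive On
MaxKeepAliveRequests 100   # requests served per connection before closing
KeepAliveTimeout 5         # seconds an idle connection stays open
```

Without this, every one of the page’s requests pays the full TCP (and SSL, where applicable) connection cost again.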

2. Prioritize the first byte

The WPT “First Byte Time” grade is an F, and rightfully so. It takes almost 2,400 ms just to get the first byte of the HTML response. A page in decent shape would have already rendered by now. There are some nasty server-side issues going on here that WPT doesn’t have the visibility to show us. Along with the connection issue above, these need to be fixed before all other optimizations because they directly set the lower limit of how fast this page can be. Even with a perfectly optimized client-side page, it will still be no faster than 2,400 ms, and that’s a big problem.

3. Delay loading content below the fold

Fully loaded, the site has 47 requests, which isn’t too bad. But considering that there are 15 requests made before the first paint and 45 before the DOM Content Loaded event (more than 10 seconds later!), there are some serious prioritization issues. The content above the fold, groovy snow/parallax effects aside, is really just a group of images. Get these images onto the screen as soon as possible by avoiding contention with content below the fold. Don’t load the subsequent images until the user starts to scroll them into view. And don’t load the web fonts at all if there isn’t any text above the fold!

Remember to prioritize the critical path, which is the series of network requests that must be made in order to complete the page above the fold. In this case, anything that is not the hero image or masthead graphic should be deferred until visible.

4. Optimize the content above the fold

Getting the secondary content out of the way of the primary content is just the first step. The site will seem faster because the important stuff will load sooner, but as a whole you’ve just rearranged the slow parts to load later. This is the step where you actually make things faster. This is kind of easy because we’re only dealing with images.

Most importantly, use the right image format. I’m going to go out on a limb and guess that the photographic images wouldn’t be nearly 2 MB if they weren’t in PNG format. JPEG is much better suited to photographic images, and I bet their size would be greatly reduced if properly formatted. Fewer bytes to download directly correlates with faster download times. One caveat is transparency: the layered images needed for the parallax effect require it, and JPEG doesn’t support it. In that case, I would use WebP as the image format to get transparency along with photographic quality.

Also, use appropriately sized images. The images are 1800 by 1700 pixels. My retina MacBook Pro browser window is about 1400 pixels wide, so this isn’t terribly inappropriate. However, the same images are used when the browser window is only 800 pixels wide! It’s incredibly wasteful to load an image and hide most of it, and it’s only exacerbated by the fact that all three background/midground/foreground images do the same thing.
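A sketch of one fix, with hypothetical filenames: generate resized variants and let the browser pick one via srcset, so an 800-pixel window never downloads the 1800-pixel original:

```html
<!-- The browser chooses the smallest variant that covers the viewport;
     only large or high-density screens fetch the full-size image. -->
<img src="hero-800.jpg"
     srcset="hero-800.jpg 800w,
             hero-1400.jpg 1400w,
             hero-1800.jpg 1800w"
     sizes="100vw"
     alt="Snowy cabin scene from The Hateful Eight">
```

The same approach applies to each of the three parallax layers, multiplying the savings.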


I don’t care how slow The Hateful Eight website is because I love the director and I’m going to see his film regardless. This was just an exercise in using WebPageTest to analyze a page and demonstrating what can go wrong with web performance. But to the movie studios, remember that users hate slow websites and the abandonment rate increases with slowness. Some users might leave the site out of frustration for how slow it loads or how janky it is to use due to the excessive effects. The studios might as well consider these users as being dollars never spent on tickets for their film. Optimizing web performance is more than just a fun exercise; it’s a powerful tool that ensures that everyone who wants to visit a site can do it without having to wait, which can mean better conversion and better marketing reach.

Oh, and don’t get me started on its mobile web performance.

You can find out more about how to use WebPageTest to analyze the performance of websites in my new book, Using WebPageTest.

Using WebPageTest is complete!

Posted in Front End Engineering, Open Source, Web Performance, WebPageTest on by Rick Viscomi.

After two years from inception to production, I’m proud to say that the book I’ve coauthored with Andy Davies and Marcel Duran is finally complete.

We owe huge thanks to the production team at O’Reilly for their hard work whipping the book into shape and making this a reality. Most importantly, we honor Pat Meenan for tirelessly maintaining the tool for nearly 10 years. We’d also like to thank Ilya Grigorik, Lara Hogan, and Tim Kadlec for generously allowing us to quote them on the back cover of the book. We are also in debt to Steve Souders, who has done so much for the web performance community and is even responsible for many key WebPageTest features including the filmstrip. As Steve has kindly written in the foreword of the book:

WebPageTest is the leading web performance tool in the world today. It is easy to use, provides the performance metrics that matter, and is pioneering new ways to measure the actual user experience that websites deliver. In 2009’s Even Faster Web Sites, I wrote that WebPageTest “hasn’t gotten the wide adoption it deserves.” Fortunately, that’s no longer true. In fact, now there’s even a book about it! Read on and find out how to get the most out of WebPageTest to help you deliver a web experience that is fast and enjoyable.

It’s our hope that this book will invigorate readers into leveraging the power of WebPageTest to more effectively improve the performance of their sites. To get an ebook or softcover copy, check out the book’s page on the O’Reilly website.

If you’ll be attending the Velocity Conference in Amsterdam this week, Andy Davies and I will be signing free copies of the book. Come say hello!

“Using WebPagetest” Early Release TODAY

Posted in Front End Engineering, Web Performance, WebPageTest on by Rick Viscomi.

Just in time for Velocity Santa Clara! I’m really excited to share that the book Using WebPagetest, which I’ve coauthored with Andy Davies and Marcel Duran, is now available for early release online at O’Reilly. Here’s a snippet from the preface that helps to set the stage for the rest of the book:

We all know bad web performance when we see it. When something takes too long to load or become interactive we start to get bored, impatient, or even angry. The speed of a web page has the ability to evoke negative feelings and actions from us. When we lose interest, wait too long, or get mad, we may not behave as expected: to consume more content, see more advertisements, or purchase more products.

The web as a whole is getting measurably slower. Rich media like photos and videos are cheaper to download thanks to faster Internet connections, but they are more prevalent than ever. Expectations of performance are high and the bar is being raised ever higher.

By reading this book, chances are you’re not only a user but, more importantly, someone who can do something about this problem. There are many tools at your disposal that specialize in web performance optimizations. However, none is more venerable than WebPagetest. WebPagetest is a free, open source web application that audits the speed of web pages. In this book, we will walk you through using this tool to test the performance of web pages so that you can diagnose the signs of slowness and get your users back on track.

I really hope that this book will help the web performance community come to a greater understanding of how to use WebPagetest and more effectively lift the quality of performance on the web.

If you’d like to access the early release or preorder the physical book, check out the book’s page on the O’Reilly website.

Multivariate Testing with WebPagetest

Posted in Front End Engineering, Open Source, Web Performance, WebPageTest on by Rick Viscomi.

For your advanced web performance testing needs, WebPagetest’s out-of-the-box functionality can do most of the heavy lifting for you. The scripting interface alone is powerful enough to handle most advanced use cases. For everything else, we have some other strong tools at our disposal, including multi-location and bulk testing. These tools, however, are mutually exclusive. In other words, you can test an array of pages on a single location/browser setup or you could test a single page on an array of location/browser setups. An experimental new feature called multivariate testing (MVT) hopes to change the way you think about bulk testing. Continue reading

Higher Quality Screenshots with WebPagetest

Posted in Front End Engineering, Open Source, Web Performance, WebPageTest on by Rick Viscomi.

WebPagetest default image quality vs pngss=1

The advanced settings section of WebPagetest (WPT) includes a “Capture Video” option, which will record the page load of the tested web page as the user would see it. This makes it possible to provide visual performance metrics, such as visually complete time, visual progress percentage, and the filmstrip/video features on the Visual Comparison page. This is disabled by default, but once set it will be remembered the next time you visit the page. Even without this option, WPT still captures screenshots of the page at the start render, document complete, and fully loaded events.

These screenshots are saved as low-resolution JPEG images, which is usually good enough for most uses. However, sometimes you need to get a better look at what’s going on. Fortunately, WPT has you covered. Continue reading

It’s time to bridge the gap between front and back end performance testing

Posted in Web Performance, WebPageTest on by Rick Viscomi.

Front end performance tools like WebPagetest are limited in the level of visibility they can offer. Usually we blame the browser vendors for not providing the APIs to dig deeper, but that’s not the case anymore. The Navigation Timing API is a huge step forward to standardize the performance data available in JavaScript. What’s really missing now is the visibility into what is plaguing many unoptimized sites: back end performance.

90% of #webperf posts to WebPagetest forums last week were first-byte time issues. Scary how bad it frequently is (10+ sec)

Patrick Meenan, the creator of WebPagetest, noted in December 2012 that 90% of issues on the forums are time to first byte (TTFB) problems. TTFB is the time from making the initial request until the first byte of the response is received. This includes DNS resolution, TCP connection, and SSL negotiation, but also server processing time. “Server processing time” sounds vague, because it is. What exactly is going on here? Why is it taking so long? Continue reading