by Josh Fraser on September 24, 2012
I finally got around to reading the Steve Jobs biography by Walter Isaacson that’s been sitting on my bookshelf for months. It’s a great read and I’ve found myself captivated by the stories and lessons that can be found in Steve Jobs’ life. One story in particular jumped out at me:
One day Jobs came into the cubicle of Larry Kenyon, an engineer who was working on the Macintosh operating system, and complained that it was taking too long to boot up. Kenyon started to explain, but Jobs cut him off. “If it could save a person’s life, would you find a way to shave ten seconds off the boot time?” he asked. Kenyon allowed that he probably could. Jobs went to a whiteboard and showed that if there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year that people would save, which was the equivalent of at least one hundred lifetimes saved per year. “Larry was suitably impressed, and a few weeks later he came back and it booted up twenty-eight seconds faster,” Atkinson recalled. “Steve had a way of motivating by looking at the bigger picture.”
At Torbit, we believe that speed really matters. We have a simple, but audacious goal. We think the internet is too slow and we’re doing our best to fix it. It’s humbling to think about the collective amount of time (and lives) we’ve already helped save. It’s the reason why we founded this company. It’s the motivation behind what we do every day.
by Austin Hallock on September 19, 2012
Last week our CEO, Josh Fraser, gave a presentation at the San Francisco Web Performance Meetup cleverly titled “Yo ho ho and a few billion pageviews of RUM” – quite relevant for today, International Talk Like a Pirate Day. If you have some spare time, it’s definitely worth watching! (video and slides). In preparation for the talk, Josh and I gathered some intriguing statistics from the terabytes of data Torbit has collected in the last 4 months. Much of that data is presented below by category, using a sample of 1,000 sites representing 6.7 billion pageviews.
Frontend vs Backend
As a developer, I spend a good deal of time making my backend code efficient. While that certainly does matter, the vast majority of time users spend waiting is due to frontend loading. Steve Souders’ Golden Rule for Performance states that 80-90% of the end-user response time is spent on the frontend. Across Torbit’s data, that number is actually 93%. We measure frontend vs backend timing based on “time to first byte” (TTFB) and on average 7% of load time is spent on the backend compared to a whopping 93% on the frontend.
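To make that split concrete, here is a minimal sketch of how a frontend/backend share can be derived from W3C Navigation Timing fields. The field names come from the spec, but the numbers and the calculation below are purely illustrative; this is not Torbit’s actual pipeline.

```python
# Hypothetical Navigation Timing values for one pageview, expressed as
# millisecond offsets from navigationStart (illustrative numbers only).
timing = {
    "navigationStart": 0,
    "responseStart": 350,   # first byte of the HTML response arrives (TTFB)
    "loadEventEnd": 5000,   # onload fires
}

backend_ms = timing["responseStart"] - timing["navigationStart"]   # server + network time to first byte
frontend_ms = timing["loadEventEnd"] - timing["responseStart"]     # everything the browser does after that
total_ms = timing["loadEventEnd"] - timing["navigationStart"]

print(f"backend:  {backend_ms / total_ms:.0%}")   # 7%
print(f"frontend: {frontend_ms / total_ms:.0%}")  # 93%
```

With these example numbers the split happens to match the averages above: 7% backend, 93% frontend.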
Have a look over some of our blog posts on performance for tips on how to reduce frontend load time.
The following values are for the onload time across our sample set, as measured on desktop browsers.
- Median: 2.53s
- Average: 4.97s
- Geometric Mean: 2.19s
- 90th Percentile: 10.38s
- 95th percentile: 16.86s
- 99th percentile: 43.73s
You’ll see that with our mobile data, everything is shifted to the right on the histogram. This is caused by a myriad of factors, including slower processors, higher latency and slower connections like EDGE and 3G.
- Median: 3.87s
- Average: 6.23s
- Geometric Mean: 3.12s
- 90th Percentile: 12.07s
- 95th percentile: 18.11s
- 99th percentile: 44.42s
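For readers who want to compute the same summary statistics on their own RUM data, here is a rough sketch. The sample is randomly generated log-normal data (not Torbit’s dataset), and the nearest-rank percentile method is just one reasonable choice among several.

```python
import math
import random

random.seed(0)
# Simulated onload times in seconds; real RUM data is similarly right-skewed,
# so a log-normal sample makes a reasonable stand-in.
samples = sorted(random.lognormvariate(1.0, 0.8) for _ in range(10_000))

def percentile(xs, p):
    """Nearest-rank percentile; xs must already be sorted."""
    return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]

median = percentile(samples, 50)
mean = sum(samples) / len(samples)
# Geometric mean via the log-sum trick (avoids overflowing the product).
geo_mean = math.exp(sum(math.log(x) for x in samples) / len(samples))

print(f"median {median:.2f}s, average {mean:.2f}s, geomean {geo_mean:.2f}s")
print(f"p90 {percentile(samples, 90):.2f}s, p95 {percentile(samples, 95):.2f}s")
```

As with the real numbers above, the skew shows up as an average well above the median, with the geometric mean close to the median.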
Taking a closer look at latency, the average response transfer time (the time from the first byte to the last byte of the HTML response) is 0.30s on desktop browsers and 1.30s on mobile browsers; that’s over 4 times slower! The most important thing you can do to improve your performance on mobile is to reduce the number of requests that you make.
There are many factors that impact the performance experienced by end users in varying locations. When it comes to load times on the web, geography matters a lot.
Where’s the US? The US is the 22nd fastest country. Hopefully Google Fiber will help US cable companies to get with the times.
By US State
As the map visualization shows, the southern and rural states are the slowest, which is not terribly surprising.
Of the cities for which we have at least 100,000 data points, below are the fastest and slowest.
- Slowest: Johannesburg, South Africa
- Fastest: University Park, USA
By US City
Fastest US Cities:
- University Park, PA
- College Park, MD
- Notre Dame, IN
- Stony Brook, NY
- Princeton Junction, NJ
I can’t see a good reason why Independence, Ohio would be so quick, but what most of the fastest cities have in common is a major university (Penn State, University of Maryland, Notre Dame, Stanford, etc.).
Safari as the quickest browser might be a little shocking… As Josh mentioned in his talk, it’s hard to tell what specifically leads to the faster speeds: Safari users are typically on a Mac, a pricier (and likely higher-performance) machine, and can probably afford higher-speed internet.
Quickest mobile browser: Chrome on Android
Read into that as you wish…
Bounce Rate (Desktop)
Load time plays a huge role in bounce rate. Not even 10 years ago most people were used to waiting 10-30 seconds for a page to load on dial-up. These days pages are expected to load right away, or the consumer will lose interest. The graph above is more proof of that fact.
Bounce Rate (Mobile)
Mobile devices suffer the same fate, just shifted to the right some. One interesting thing to note is how high the bounce rate is for pages that load in one second. With a bit of context behind the graph, the explanation becomes clear: typically, the only pages that load in 1 second are error pages.
Not only are consumers more likely to leave your page due to slow load times, they’re also much less likely to be engaged users. I know that if I have to wait 6 or 7 seconds for a page to load, I’m not going to stick around that site for long; this graph suggests most people have the same mindset. Notice that engagement doubles when onload time drops from 6 seconds to 2 seconds.
If your site loads in 7 seconds on average, clearly that means you should add in a few more requests to bump that up to 9 seconds… In all seriousness though, mobile shows the same overall trend of less engagement with higher load times as expected. It has been suggested that the bimodal nature of this graph with the bump at 9 seconds might represent the difference between pages viewed on 3G versus wifi. In other words, perhaps we’re more patient if we know we’re on 3G and 9 seconds feels more reasonable to us.
Hopefully these statistics are of value to you, or at least somewhat entertaining. If you would like to see how your own site fares, sign up for Torbit Insight, where we provide all these statistics and more!
by Josh Fraser on August 29, 2012
On September 5th, I’ll be speaking at the San Francisco Web Performance Meetup. I’ll be revealing some never before shared data from measuring billions of pageviews with our Real User Measurement product, Torbit Insight. We’ll be looking at lots of fascinating performance trends, comparing browsers and geographical performance. If you like performance data make sure you sign up today. You’re not going to want to miss this one.
The event starts at 7pm and will be taking place at 1 Market Plaza, Steuart Tower, 5th Floor in San Francisco. Check out the Meetup page to learn more and reserve your spot.
by Josh Fraser on August 28, 2012
One of the key differentiators of Torbit Insight is that we offer the ability to quantify the value of your website performance. No other product on the market allows you to correlate website speed and revenue. For years we’ve heard stories from companies like Amazon that tell us that one tenth of a second equals one percent of sales. These public case studies are great, but what about your website? We built Torbit Insight because we wanted an easy way for everyone to know how much speed matters. There’s something incredibly powerful about seeing key metrics based on your own data from your own visitors. We’ve now helped hundreds of companies change performance from being just a technical metric (something your engineers worry about) to a business metric that influences real decisions at your organization.
If you use Insight you may have recently noticed a new addition to your dashboard. We added a graph that shows how user engagement is affected by the speed of your website as measured by the number of pages visited per session. Once again, the data tells a clear story: speed matters.
Showing the correlation between speed and revenue has always been a key focus for us. For example, we have a graph showing the correlation between your site speed and your bounce rate. For internet retailers, we offer conversion tracking that makes it easy for e-commerce sites to track the correlation between website performance and actual sales. Other sites have used our conversion tracking to track other activities like someone requesting a demo or signing up for a mailing list. Advertising-based businesses tend to care about other metrics like the number of pages viewed per session. More pageviews means more ad impressions which means more revenue. This new view should help give more visibility into this important user engagement metric. On the x-axis you can see the various speeds at which visitors experienced your site. On the y-axis you can see how many pages your visitors viewed across their session at each of those speeds. In the example above, you can see that there is a huge advantage in having 1 or 2 second load time. For this particular site, a 1 or 2 second load time ensures that an average visitor will view 23 pages per session. In contrast, if the site takes 15 seconds or longer to load, an average visitor will only view 5 pages per session.
The exact numbers and corresponding graph will vary from site to site, but most sites will see a strong correlation. When your site loads faster, people stick around longer and view more pages. Once you know how much speed matters for your site, you have a much better way of knowing how much to invest in site performance, whether that’s the money you spend on developers or on performance-related technology like a CDN or Dynamic Content Optimization.
We’re excited to offer this new feature to all of our Premium and Enterprise customers. Sign up for a free 14 day trial of Premium Torbit Insight and find out how speed affects your user engagement today.
by Josh Fraser on August 24, 2012
I am excited to announce that starting today we have lifted all pageview limits for Torbit Insight, including those of you on our free plan. We believe it’s important for our customers to be able to see performance data for 100% of their traffic. We’ve grown to a scale where we can handle the extra traffic and we’re happy to offer our service to every site regardless of their size. It’s our small contribution to making the internet a faster and better place for everyone.
We have also made some changes to our pricing that we want to let you know about. Since launching Torbit Insight, we’ve had a chance to work with hundreds of sites and have gotten lots of great feedback about the functionality you love the most. Over the last few months we have added a lot of features without raising our prices at all. Our goals with these changes are to make things simpler, continue adding more value to our product and ultimately make it easier for more people to use Torbit.
Here is an overview of what is changing today:
No more pageview limits. No asterisk.
We will be including our drill down capabilities with our free plan. This includes our loading timeline, map view and browser breakdown.
For the sake of simplicity, we are saying goodbye to our Standard plan. Customers who were on our Standard plan have been automatically upgraded to Premium accounts for the same price.
We’re raising the price of our Premium plan to $499 / month. We know this is a big price increase, but it’s an important change that will allow us to focus more of our energy on the businesses that find the most value in our product today.
Existing Premium customers will be grandfathered in at the previous price. Feel free to upgrade or downgrade your plan as it makes sense for your business.
Please contact us about an Enterprise plan if you need more than 5 domains, page tagging, extended data retention or 24/7 support. If you would like to discuss any of these changes, please feel free to contact us.
by Josh Fraser on August 23, 2012
It’s important to look at the distribution of your data when considering your performance. I’ve written before about the dangers of only looking at your average loading time. Averages can be very misleading. I’ve seen plenty of sites that have a 4 second average loading time, but a 20 second 90th percentile loading time. That’s why we offer a histogram view and always encourage our customers to track their goals using their 90th or 95th percentile loading time.
We’ve also had requests to include the geometric mean as one of our featured metrics. We thought that was a great idea and geometric mean is now featured on your Torbit dashboard along with your existing metrics (Median, Average, 90th Percentile, 95th Percentile, and 99th Percentile).
For those of you who are unfamiliar with the geometric mean, here is a quick explanation of what this new metric means for you and your performance data.
As you know, there are a lot of different factors that influence how fast your website loads. The geographic proximity of your server to your visitors has a big impact on your speed. It also makes a difference which browser each visitor is using and whether they are on a fast internet connection or not. When you look at your performance data as a whole, you are seeing the combination of many independent variables. When looking at end user performance data, it usually looks like the graph below. The data does not take a normal distribution shape, as it is skewed to the right. However, if you took the logarithm of all the data and re-graphed, you would have a normal distribution, or the standard bell curve. Thus, this is called a log-normal distribution.
The arithmetic mean (what we usually think of as an average) is very susceptible to outliers. With pageload times, it’s easy to have a few really slow data points that skew your data. That’s not a problem if you have a normal distribution, since the outliers balance each other out (both visually and mathematically). The problem is that we don’t have a normal distribution; we have a log-normal distribution. As it turns out, when you have a log-normal distribution, the geometric mean is a much better way of representing the central tendency of your data.
A geometric mean is calculated by multiplying your data points together and taking the nth root (n being the number of data points you have) of the resulting product. With this calculation, the geometric mean normalizes the ranges being averaged, so that no range dominates the weighting and a given percentage change in any of the properties has the same effect on the geometric mean. In this way, the geometric mean tempers outliers so they don’t carry undue weight. To learn more about the geometric mean, I’d recommend heading to Wikipedia for a more in-depth explanation of how it is calculated and when it’s most useful.
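As an illustration, here is the calculation on a handful of made-up load times (not data from any real site), where one outlier drags the arithmetic mean way up. Computing the geometric mean via logarithms is the standard trick to avoid overflowing the product on large datasets.

```python
import math

# Illustrative onload times in seconds; the last value is an outlier.
times = [1.8, 2.1, 2.4, 2.6, 3.0, 45.0]

arith_mean = sum(times) / len(times)

# Geometric mean: the nth root of the product, computed via logs
# so the intermediate product can never overflow.
geo_mean = math.exp(sum(math.log(t) for t in times) / len(times))

print(f"arithmetic mean: {arith_mean:.2f}s")  # dragged up by the outlier (~9.5s)
print(f"geometric mean:  {geo_mean:.2f}s")    # closer to the typical experience (~3.8s)
```

One slow sample nearly quintuples the arithmetic mean, while the geometric mean stays near the values most visitors actually saw.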
Your geometric mean will likely be the lowest value on your dashboard, but we didn’t just add this to make you feel better about your site speed. Our goal is always to give you more transparency and a more holistic view into your website performance.
by Josh Fraser on July 31, 2012
We’ve been growing rapidly since launching Torbit Insight at the end of April. We are already processing billions of pageviews every month and are currently processing about 15,000 metrics every second. If you’re curious, we’ve added a real time counter to the bottom of our homepage that shows the live number.
We know these numbers can get a bit mind boggling and they’re not showing signs of slowing down anytime soon. As part of our growth plan, our engineers have completely rebuilt the backend of Torbit Insight. Our new “big data” store will allow us to continue our rapid growth while also making it easier to add new features at scale.
As some of you noticed, our old backend was starting to struggle a bit under the load. I apologize to those of you who reported missing data or other weird issues in the last couple weeks. Thank you for bearing with us. The new backend should bring a lot more stability and reliability going forward. Otherwise, your experience should be largely unaffected. We’ve kept most of the features the same and your data has already been imported into the new system. The conversion tab will look a little different for now as it’s being revamped to work with our new collection system. If you see anything else unusual, please let us know.
We’ve been gathering feature requests from our customers for a while. With this launch, our team will be able to focus again on rolling out the features you’ve been waiting for. If you have other suggestions you want us to consider, feel free to send them directly to me at firstname.lastname@example.org. We love having customers engaged early in the development process as it helps keep us on track.
A huge thanks to our team and especially Jon and Mike on this important accomplishment.
by Josh Fraser on July 23, 2012
Jonathan Klein from Wayfair wrote a post a few weeks ago about using WebPageTest to measure the performance of their CDN. The results were surprising. Wayfair found that their CDN was delivering minimal performance gains. As you would expect, the post generated a lot of lively discussion with lots of ideas about different variables that could be affecting the outcome of the test. Several people (myself included) recommended they use Real User Measurement to see how much of an improvement their actual visitors are experiencing.
Last week, Klein posted the results from the Real User Measurement test. After using the tagging feature of our Insight product to run an A/B on their production site, the results told much the same story as the synthetic test. Wayfair saw no major performance improvement due to the use of a CDN.
The results of these two tests are quite surprising. A CDN is a well-known tool that improves the performance of most websites. You can’t change the speed of light, but you can make sure your content is delivered from servers closer to your visitors. Although disappointed with the results, Klein was careful to point out other benefits of using a CDN: he said the ability to offload origin bandwidth and tolerate traffic spikes was enough to justify the cost of their CDN.
Performance guru Steve Souders took a look at the results and reminded people in the comments that:
There are numerous performance best practices. Not all of them apply to every site. But that doesn’t mean the best practice is bad – it just might not be relevant at that time for that particular site.
Souders was able to trace the problem back to several large images that were being loaded from their CDN but appeared to be taking far longer than expected to load. I’m confident Wayfair will be able to take this data to their CDN and get this particular issue resolved. Many sites like Wayfair are spending thousands of dollars on their CDN, but have never taken the time to really evaluate what sort of performance gains they are receiving for their money. I love that Jonathan was willing to set up this test and share the results with the world. It’s a great example of how you can use Real User Measurement to keep your vendors accountable for the performance gains they promise.
Using Torbit Insight, Wayfair was able to set up this test and get meaningful data back in a very short period of time. It’s a great example of how easy it is to use our tagging feature to do a performance-related A/B test. If you haven’t already, be sure to read Jonathan Klein’s full post for all the details on how he conducted the experiment. For anyone else interested in conducting a similar test, send us a note; we’d love to help.
by Josh Fraser on July 12, 2012
In the last few months since we launched Torbit Insight, hundreds of top retailers and large media properties have adopted Real User Measurement on their sites. In fact, we’ve measured over 3 billion page views for retailers like Wayfair, CafePress and Build.com. As we’ve had the privilege of working with some of the largest sites around, we’ve noticed an ongoing trend. Our customers are starting to depend on the Real User Measurement (RUM) data we give them as their primary source for monitoring their website performance.
Performance measurement has traditionally been done using synthetic testing (sometimes also referred to as active monitoring). Synthetic testing is when you load a website on a regular interval from one or more locations around the world to see how fast it loads. This data is then used to generate reports or trigger alarms when there are performance issues with your site. While synthetic testing is certainly useful, hundreds of top sites are turning to Real User Measurement as a source of more accurate data.
While synthetic testing is valuable for deep analysis and debugging, it has a few shortcomings. With synthetic testing you only get visibility into the specific pages that you test. This is typically a small fraction of the pages your customers actually visit, leaving large sections of your site without monitoring. Synthetic testing also gets expensive, especially if you try to increase your coverage to more pages across your site. You’re also putting more stress on your servers, taking valuable capacity away from your actual visitors. Of course, the main problem with synthetic testing is that it makes so many assumptions about your visitors. There are dozens of factors that affect the speed at which someone is able to access your site. Where are they geographically located? What is their connection speed? Which browser are they using? Are they visiting for the first time, or are they a repeat visitor? All of these variables affect the loading experience for your visitors. If you want to know what your visitors are actually experiencing, you have to use Real User Measurement.
It’s impossible to test every variation of location, network connection speed, OS, browser & add-on. That’s not to say synthetic testing is bad. There’s a place for both.
There are a few key factors that are accelerating the adoption of Real User Measurement. The web timing spec is now supported in all of the major browsers. This allows us to collect highly accurate timing data from the browser itself, starting even before the page is loaded, and to time things like DNS lookups and the TCP handshake. One of the challenges of implementing RUM in the past has simply been the massive amount of data it generates. With the explosion of “big data” tools, it’s now feasible to collect billions of samples and make sense of them. Thankfully, you don’t have to build it yourself: we offer a great Real User Measurement tool at Torbit, and we’ve even made it free for people to get started.
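To sketch the kind of arithmetic involved: the field names below follow the W3C Navigation Timing spec, but the values and the beacon shape are invented for illustration; a real RUM beacon carries many more fields.

```python
# Hypothetical Navigation Timing beacon for one pageview,
# with millisecond offsets (illustrative values only).
t = {
    "domainLookupStart": 5,
    "domainLookupEnd": 40,
    "connectStart": 40,
    "connectEnd": 110,
    "requestStart": 110,
    "responseStart": 350,
}

dns_ms = t["domainLookupEnd"] - t["domainLookupStart"]  # DNS lookup time
tcp_ms = t["connectEnd"] - t["connectStart"]            # TCP handshake time
ttfb_ms = t["responseStart"] - t["requestStart"]        # request sent to first response byte

print(dns_ms, tcp_ms, ttfb_ms)  # 35 70 240
```

None of these phases are visible to JavaScript timers alone; it’s the browser exposing them through the timing spec that makes this level of RUM detail possible.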
Every visitor matters. If your site is slow, chances are you are leaving visitors and revenue on the table. The first step in making your site faster is making sure you have an accurate way to measure your speed.
by Josh Fraser on June 25, 2012
The Velocity Conference is always a fun event for us and I doubt this year will be an exception. It’s always a great time to catch up with our friends, customers and lots of other smart people who care about performance on the web.
This year we are co-sponsoring a party with our friends at Dyn. If you’re attending, I hope you’ll stop by the Dyn Music + Tech party on Tuesday night. We’ll be handing out free Torbit shirts and other swag. Come have some free food and drinks on us! Hope to see you there!