Archive for November, 2011

Psychology of Software Performance: the Just Noticeable Difference

by Steve Seow on November 29, 2011

The following article is a guest post by Dr. Steve Seow. Seow is an architect with Microsoft, has a Ph.D. in Psychology from Brown University and is the author of ‘Designing and Engineering Time’. We thought it would be interesting to get his perspective on performance and the psychology behind how we perceive time.

We can safely assume that many, if not all, the readers of this blog understand that performance is critical to the user experience of any software solution. Many of us are trained to tweak code to optimize performance, and we measure and express the deltas in percentages or time units to quantify the optimization. These metrics are important because any time, money and effort expended needs to be justified and someone (perhaps the person who signs the check) needs to be assured that there is a favorable return on investment (ROI).

Let’s make this more interesting. Suppose you estimate that it will cost $30,000 (over half of your budget), to reduce the processing time of a particular feature from 20 to 17 seconds. A full 3 seconds! Do you pull the trigger? What if the delta is from 10 to 7 seconds? Is there a net positive ROI in each case?

This hypothetical situation is what got me interested in performance, and more specifically, the psychology and economics that go into software engineering practices and decisions. In my first year at Microsoft, an engineering director, knowing my psychology background, asked me a really simple question: how much do we need to improve the timing of X in order for users to even notice the difference? The question was clearly a psychological one. We’re no longer talking about ones and zeros here. We’re talking sensation, perception, and psychophysics. We’re not forgetting the economics piece. We’ll come back to that.

Psychologists have measured human sensation and perception for over a century. Without going into details, suffice it to say that we are wired to detect differences in the magnitude of a property in a systematic way. The property of something (say, the brightness of a light) will need to increase in magnitude by a certain percentage before we go “ah, that’s different from before”. This is known as the j.n.d., or just noticeable difference.

Drawing on a large body of psychophysical research on time perception and other modalities of perception, it became clear that a 20% j.n.d. will, probabilistically speaking, ensure that users detect a difference in timing. What does this mean for the two scenarios above? In the first case, the 3-second improvement over 20 seconds is a 15% delta. This is below the rule-of-thumb j.n.d. of 20%. In the second case, however, the same 3-second improvement is a 30% delta of 10 seconds. Now we can have some confidence that the difference will be detected.
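The rule of thumb above reduces to a one-line calculation. Here is a minimal sketch in JavaScript; the function names are my own, purely illustrative:

```javascript
// Relative improvement as a fraction of the original timing.
function relativeDelta(oldSeconds, newSeconds) {
  return (oldSeconds - newSeconds) / oldSeconds;
}

// Compare against the 20% rule-of-thumb j.n.d. threshold.
function isNoticeable(oldSeconds, newSeconds, jnd = 0.20) {
  return relativeDelta(oldSeconds, newSeconds) >= jnd;
}

console.log(isNoticeable(20, 17)); // 15% delta → false
console.log(isNoticeable(10, 7));  // 30% delta → true
```

The same 3-second saving flips from undetectable to detectable purely because the baseline changed.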

An important thing to remember is that this doesn’t suggest that deltas below 20% are not worth the investment. Recall that at the beginning we weighed the investment against the proportion of the budget it consumes, so now we’re talking economics. If a feature of your website can be tweaked from 6 to 5 seconds, a 17% improvement, at relatively low cost, you would be foolish not to go ahead and optimize. The correlation between performance and revenue has been well documented, and shaving a second off your load time can have a meaningful impact on your revenue. Depending on your scale, even a mere 5% delta improvement for your website could easily be worth the investment.

This is merely the simplest application of j.n.d. in the world of software performance. I bring readers further down the rabbit hole in Chapter 5 of my book, Designing and Engineering Time, and on my site.

CNBC discusses retailers’ need for tech speed

by Josh Fraser on November 21, 2011

It’s always great to see website performance getting mainstream coverage, and with Cyber Monday quickly approaching, awareness is at an all-time high. Today, CNBC covered retailers preparing for the holiday season and discussed the impact of website performance on sales.

Here are a few key quotes:

“According to Compuware’s Gomez division, 71% of consumers expect websites to load as quickly on their mobile devices as they load at home but very few retailers can meet that need for speed. Speed is critical and we’re getting ready to go into a time of year where the web traffic can grow by ten times or more.”

“Speed is critical. After three seconds, abandonment rates for mobile websites are nearly identical to that of desktop browsing at home. The simple equation is that speed equals sales. Consumers will buy more from faster websites and they’ll abandon slower websites in favor of alternatives.”

You can watch the full clip below or on the CNBC website.

A better way to load CSS

by Jon Fox on November 17, 2011

Here at Torbit we’re always looking for ways to improve the performance of our clients’ websites in an automated way. Our latest optimization, called CSS Smart Loader, is a better way to load and manage CSS.

When dealing with CSS and performance there are a few rules to consider:
1) Minify / compress your CSS
2) Combine your CSS into one request
3) Put your CSS on a CDN
4) Optimize for best caching
5) Make sure your CSS is non-blocking

Our new CSS Smart Loader handles all of these issues beautifully in order to get the best performance possible. Let’s go through them one at a time and cover the details of how this new optimization works.

Minify / compress your CSS
In order to reduce the overall download size it’s important to minify and compress your CSS resources. Minifying refers to removing excess white space and simplifying syntax in order to reduce the number of characters in the CSS resource. Compressing, in this case, refers to re-encoding the contents of a file via a common standard (GZIP) to reduce the number of bytes in transit. All modern browsers support GZIP compression. Torbit will automatically handle minifying and gzip’ing all the CSS on your site, as well as static 3rd party CSS you include, in order to minimize the overall download of the CSS on the page.

Combine your CSS into one request
Another important best practice for performance is to combine all of your CSS into one HTTP request. The idea here is that every request the browser makes has some overhead (headers and connection costs). By reducing the number of requests we can reduce these overhead costs and improve the overall performance. Our new CSS Smart Loader does this by including all your CSS as strings in a JavaScript file, then attaching new style elements to the DOM to apply these styles to the page. This allows us to deliver all the CSS in a single request while still treating the individual CSS files you originally included as individual attachments to the DOM. This makes it easier to preserve the proper media attributes for each file, ensures the CSS is included in the same place in the DOM it was originally included (preserving the proper order), has some caching benefits (which we’ll get to shortly), and prevents an error at the bottom of one CSS file from breaking the CSS in the files that follow it. This gives us much better control while still reducing the number of connections for CSS down to only one.
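A sketch of the idea, assuming a hypothetical payload format (an array of { media, css } objects; this is my own illustration, not Torbit’s actual format):

```javascript
// Each original stylesheet travels as a string inside one JavaScript
// payload; we attach one <style> element per original file so order
// and media attributes survive the combining step.
var cssPayload = [
  { media: 'all',   css: 'body{margin:0}' },
  { media: 'print', css: 'nav{display:none}' }
];

function applyStyles(payload, doc) {
  var head = doc.getElementsByTagName('head')[0];
  payload.forEach(function (file) {
    var style = doc.createElement('style');
    style.setAttribute('media', file.media);
    style.appendChild(doc.createTextNode(file.css));
    head.appendChild(style); // one <style> per original file, in order
  });
}

// In the browser: applyStyles(cssPayload, document);
```

Because each file stays a separate string, a parse error in one stylesheet is confined to its own style element instead of breaking everything concatenated after it.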

Put your CSS on a CDN
In case you’re not already familiar with the term, a CDN is a content delivery network. A CDN works by reducing the latency of a request: it puts servers closer (geographically or in network hops) to your site’s visitors. It’s faster to send a packet over a shorter distance or fewer hops, and this leads to faster downloads for your site’s visitors. In general it’s good practice to put all static files on a CDN (something Torbit can automate for you). Our new CSS Smart Loader also pushes the JavaScript file containing all your CSS to a CDN (one provided by us, or yours if you already have one), which allows us to get the data to the browser even faster.

Optimize for best caching
By optimizing for the best caching you can reduce how often your site’s visitors have to download all of your CSS. Without caching, every visitor would be forced to download all the CSS on every pageview. Obviously this is not ideal, and you can improve performance by keeping the contents of these files in the browser as long as possible. There are several parts to how we improve browser caching. The first is using far-future expires headers and versioning. This amounts to adding a special header in the CSS response that tells the browser it’s safe to cache the file for a very long time (several years). Normally this could be a problem if you change your CSS, as the browser won’t know to re-download the file (because we’ve told it to cache the file for a very long time), so to address this we also do versioning. Versioning refers to changing the filename whenever the file contents change. This prevents pages from using out-of-date files, while allowing us to keep a long cache life.

The other part of our improved browser caching involves using HTML5 local storage. HTML5 local storage is a key/value data store that lives in the client’s browser. The overall size of this storage varies, but is generally around 5 MB per domain. Most modern browsers now support local storage, so where available we use it as a smarter cache. Local storage allows us to store the individual CSS files (because we pushed them down to the browser as strings) so that we can include only the CSS files needed on subsequent pages. It also means that if one of your files changes we can push only the updated file, or only the new file if another CSS file is requested on a future pageview. You can also generally get a longer cache life by using local storage because the space is specific to that domain, so you’re not competing with items cached from other sites. And lastly, local storage is generally cleared less often than the browser cache, which means the files can stay in the user’s browser longer and it’s more likely that returning visitors won’t have a completely empty cache.
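A minimal sketch of such a cache, with an illustrative key scheme (filename plus version) that is an assumption on my part, not Torbit’s actual one:

```javascript
// Wrap a localStorage-like store in a tiny CSS cache. Keys combine
// filename and version, so a changed file gets a fresh key and stale
// entries simply stop being read.
function cssCache(store) {
  return {
    get: function (name, version) {
      return store.getItem('css:' + name + ':' + version);
    },
    put: function (name, version, css) {
      try {
        store.setItem('css:' + name + ':' + version, css);
      } catch (e) {
        // Quota exceeded (~5 MB per domain): fall back to re-downloading.
      }
    }
  };
}

// In the browser: var cache = cssCache(window.localStorage);
```

On a repeat pageview the loader can check the cache first and only fetch the files (or versions) it doesn’t already have.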

We also include your inline styles in these optimizations, which allows us to minify them and cache them in the browser (or store them in the browser’s local storage) so that they won’t have to be downloaded on every page view, reducing the overall size of every page.

Make sure your CSS is non-blocking
And finally, we need to make sure our CSS is non-blocking. In some cases, CSS can block the browser from downloading any other resources. This means the browser has to wait for the CSS to finish downloading before it can start downloading images and other resources for the rest of the page. Obviously, the more we can download in parallel the better, so we include the CSS in a non-blocking manner via a non-blocking JavaScript request. This ensures that we don’t keep the rest of the resources on the page from downloading, while still giving us all the performance benefits of the other optimizations mentioned above.
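A common pattern for a non-blocking request, sketched here (not necessarily Torbit’s exact loader), is to inject the CSS-carrying script element asynchronously:

```javascript
// Inject the script that carries the CSS payload without blocking the
// parser or other downloads. The URL here is a placeholder.
function loadCssPayload(url, doc) {
  var script = doc.createElement('script');
  script.src = url;
  script.async = true; // don't block parsing or other downloads
  var first = doc.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(script, first);
}

// In the browser: loadCssPayload('https://cdn.example.com/css-payload.js', document);
```

Because the script is asynchronous, images and other resources keep downloading in parallel while the CSS payload is in flight.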

We’re really proud of our new CSS Smart Loader. It’s an important step in improving the overall performance of a webpage, and especially important for improving onready time and perceived performance, since CSS blocks rendering. We’ve got a lot of other exciting performance optimizations in the works. As always, you can do these optimizations yourself, but why bother when we can automate them for you with Torbit!

Upcoming web performance events

by Josh Fraser on November 2, 2011

There are several web performance events taking place at around the same time next week that are worth highlighting.

1. Europe Velocity Conference
The Velocity Conference is expanding to Europe (Berlin to be exact). We had a blast at the last Velocity Conference in the States and expect this one to be another great event. We’re sad to be missing it, but we did manage to get our hands on a discount code, which I hope you will take advantage of. You can use the code “veu11sts” to take 20% off your total price when registering.

Steve and his team have once again put together an amazing agenda, which includes:

  • Jon Jenkins talking about Amazon Silk
  • Browser sessions from Chrome, Firefox, and Opera
  • David Mandelin discussing JavaScript engines
  • Jeff Veen talking about Designing for Disaster
  • Estelle Weyl presenting on Mobile UI Performance

2. SF Web Performance Meetup and Panel
For those of you who can’t make it to Europe, I hope you’ll join us for a panel discussion at Google. The SF Web Performance Group is hosting a panel discussion titled “Automated Web & Mobile Perf: Pipe dreams or Piece of Cake?”. Jon Fox and I are honored to be included on the panel along with some of the other leaders in the performance space. Our friends Aaron Kulick and Hans Brough are organizing the event and Arvind Jain (the leader of Google’s “Make the web faster” initiative) will be moderating.

The panel is being held on the Google Campus at 7pm on Thursday Nov 10th and is free to the public. Last I checked, there was already a waiting list for this event, but it’s worth signing up as there may be last minute cancellations.

I hope to see you there!