The following article is a guest post by Dr. Steve Seow. Seow is an architect with Microsoft, has a Ph.D. in Psychology from Brown University and is the author of ‘Designing and Engineering Time’. We thought it would be interesting to get his perspective on performance and the psychology behind how we perceive time.
We can safely assume that many, if not all, the readers of this blog understand that performance is critical to the user experience of any software solution. Many of us are trained to tweak code to optimize performance, and we measure and express the deltas in percentages or time units to quantify the optimization. These metrics are important because any time, money and effort expended needs to be justified and someone (perhaps the person who signs the check) needs to be assured that there is a favorable return on investment (ROI).
Let’s make this more interesting. Suppose you estimate that it will cost $30,000 (over half of your budget) to reduce the processing time of a particular feature from 20 to 17 seconds. A full 3 seconds! Do you pull the trigger? What if the delta is from 10 to 7 seconds? Is there a net positive ROI in each case?
This hypothetical situation is what got me interested in performance, and more specifically, the psychology and economics that go into software engineering practices and decisions. In my first year at Microsoft, an engineering director, knowing my psychology background, asked me a really simple question: how much do we need to improve the timing of X in order for users to even notice the difference? The question was clearly a psychological one. We’re no longer talking about ones and zeros here. We’re talking sensation, perception, and psychophysics. We’re not forgetting the economics piece. We’ll come back to that.
Psychologists have measured human sensation and perception for over a century. Without going into details, suffice it to say that we are wired to detect differences in the magnitude of a property in a systematic way. The property of something (say, the brightness of a light) will need to increase its magnitude by a certain percentage before we go “ah, that’s different than before”. This is known as the j.n.d., or just noticeable difference.
Pooling from a ton of psychophysical research on time perception and other modalities of perception, it became clear that a 20% j.n.d. will, probabilistically speaking, ensure that users will detect a difference in timing. What does this mean for the two scenarios above? In the first case, the 3-second improvement over 20 seconds is a 15% delta. This is below the rule-of-thumb j.n.d. of 20%. In the second case, however, the same 3-second improvement is a 30% delta of 10 seconds. Now we can have some confidence that the difference will be detected.
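The rule of thumb above reduces to a one-line calculation. Here is a minimal sketch in Python (the function name and the 20% threshold parameter are illustrative, not from any library):

```python
def is_noticeable(old_seconds, new_seconds, jnd=0.20):
    """Return True if the timing change meets the ~20% just-noticeable-difference rule of thumb."""
    delta = abs(old_seconds - new_seconds) / old_seconds
    return delta >= jnd

# The two scenarios from the article:
print(is_noticeable(20, 17))  # 3s off 20s is a 15% delta -> False, likely unnoticed
print(is_noticeable(10, 7))   # 3s off 10s is a 30% delta -> True, likely noticed
```

Note that the same absolute saving of 3 seconds lands on opposite sides of the threshold, which is exactly the point: users perceive relative, not absolute, change.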
An important thing to remember is that this doesn’t suggest that deltas below 20% are not worth the investment. Recall that at the beginning we were weighing the investment against the proportion of the budget it consumes, so now we’re talking economics. If a feature of your website can be tweaked from 6 to 5 seconds, which is a 17% improvement, at relatively low cost, you would be foolish not to go ahead and optimize. The correlation between performance and revenue has been well documented, and shaving a second off your load time can have a meaningful impact on your revenue. Depending on your scale, even a mere 5% delta improvement for your website could easily be worth the investment.
This is merely the simplest application of j.n.d. in the world of software performance. I bring readers further down the rabbit hole in Chapter 5 of my book, Designing and Engineering Time, and on my site PerfScience.com.