Real User Measurement – A tool for the whole business

A key trend in web performance measurement is the drive to make Real User Measurement (RUM) part of a web performance measurement strategy. As someone who cut their teeth on synthetic measurements using distributed robots and repeatable scripts, it took me a long time to see the light of RUM, but I am now a complete convert – I understand that the richness and completeness of RUM provides data that I was blocked from seeing with synthetic data alone.

[UPDATE: I work for Akamai focusing on the mPulse RUM tool.]

The key for organizations is realizing that RUM is complementary to Synthetic Measurements. The two work together when identifying and solving tricky external web performance issues that can be missed by using a single measurement perspective.

The best way to adopt RUM is to use the dimensions already in place to segment and analyze visitors in traditional web analytics tools. The time and effort already invested in that work can inform RUM configuration by determining:

  • Unique customer populations – registered users, loyalty program levels, etc.
  • Geography
  • Browser and Device
  • Pages and site categories visited
  • Etc.

This information needs to carry through so that it can be linked directly to the components of the infrastructure and codebase that were used when the customer made their visit. Limiting this data pool to the identification and solving of infrastructure, application, and operations issues isolates the information from a potentially huge population of hungry RUM consumers – the business side of any organization.

The business users who fed their web analytics data into the setup of RUM need to see the benefit of their efforts. By sharing RUM with the teams that use web analytics and aligning the two strategies, companies can directly tie detailed performance data to existing customer analytics. With this combination, they can begin to truly understand the effects of A/B testing, marketing campaigns, and performance changes on business success and health. But business users need a different language to understand the data that web performance professionals consume so naturally.

I don’t know what that language is, but developing it means taking the data to the business teams and seeing how it works for them. What companies will find is that the data used by one group won’t be the same as the data used by the other, but there will be enough shared characteristics to allow the groups to speak a common dialect of performance with each other.

This new audience presents the challenge of clearly presenting the data in a form that is easily consumed by business teams alongside existing analytics data. Providing yet another tool or interface will not drive adoption. Adoption will be driven by attaching RUM to the multi-billion dollar analytics industry so that the value of these critical metrics is easily understood by, and made actionable to, the business side of any organization.

So, as the proponents of RUM in web performance, the question we need to ask is not “Should we do this?”, but rather “Why aren’t we doing this already?”.

The Rule of Thirds: The Web Performance Analyst

[Image: Blurry Man - Brian Auer - http://www.flickr.com/photos/brianauer/2929494868/]

Recently, there has been a big push for the Dev/Ops culture, an integrated blending of development and operations teams who work closely together to ensure that poorly performing web and mobile applications don’t make it out the door. They have become the rockstars of the conference circuit and the employment boards.

I fit into neither of these categories. I have never run anything more than a couple of Linux servers with Apache and MySQL. I write code because I’m curious, not because I’m good at it – in fact, I write the worst code in the world and I am willing to prove it!

I am a member of a web and mobile performance culture that is language and platform independent, to use some buzzwords.

I am a web and mobile performance consultant and analyst.

I can take apart reams of data to find statistical patterns and anomalies. I believe that averages are evil, and have believed this for more than a decade. I have been using frequency and percentile distributions for almost as long and watched as the industry finally caught up.

I can link the business issue that faces your company with the technical concerns you are facing and help guide you to the middle ground where performance and the balance sheet are in careful equilibrium.

I don’t care what you write your code in. I don’t care what you run it on. Now, don’t get me wrong: I respect and admire the Dev/Ops folks I have met and know. I am just not in their tribe.

HTTP Compression – Have you checked ALL your browsers?

Apache has been my web server of choice for more than a decade. It was one of the first things I learned to compile and manage properly on Linux, so I have a great affinity for it. However, there are still a few gotchas out there that make me grateful that I still know my way around the httpd.conf file.

HTTP compression is something I have advocated for a long time (just Google my name and compression – I wrote some of that stuff?) as just basic common sense.
Make Stuff Smaller. Go Faster. Use Less Bandwidth. Lower CDN Charges. [Ok, I can’t be sure of the last one.]

But browsers haven’t always played nice – at least up until about 2008. Since then, I can be pretty safe in saying that even the most brain-damaged web and mobile browsers can handle pretty much any compressed content we throw at them.

Oh, Apache! But where were you? There is an old rule that is still out there, buried deep in the httpd.conf file, that can shoot you in the foot. I actually caught it yesterday when looking at a site using IE8 and Firefox 8 measurement agents at work. The Firefox measurement was about 570K while IE was nearly 980K. It turns out that the server was not compressing CSS and JS files sent to IE due to this little gem:

 BrowserMatch \bMSIE !no-gzip gzip-only-text/html

This was in response to some issues with HTTP Compression in IE 5 and early versions of IE6 – remember them? – and was appropriate then. Guess what? If you still have this buried in your Apache configuration (or any web server or hardware device that does compression for you), break out the chisels: it’s likely your httpd.conf file hasn’t been touched since the stone age.

Take. It. Out. NOW!

Your site shouldn’t see traffic from any browsers that don’t support compression (unless they’re robots and then, oh well!), so having rules that might accidentally deny compression can cause trouble. Turn the old security ACL rule around for HTTP compression:

Allow everything, then explicitly disable compression.
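
In httpd.conf terms, that might look something like the sketch below – mod_deflate is assumed, and the version pattern in the BrowserMatch line is only an illustration, so adjust it to the (hopefully empty) set of problem agents you actually see in your traffic:

 # Compress all text content by default (assumes mod_deflate is loaded)
 AddOutputFilterByType DEFLATE text/html text/css text/xml application/javascript
 # Then explicitly opt out only the specific, ancient user-agents that cannot cope
 BrowserMatch "MSIE [1-5]\." no-gzip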

That should help prevent any accidents. Or higher bandwidth bills due to IE traffic.

OCSP and the GoDaddy Event

The GoDaddy DNS event (which I wrote about here) has been the subject of many a post-mortem and water-cooler conversation in the web performance world for the last week. In addition to the many well-publicized issues that have been discussed, there was one more, hidden effect that most folks may not have noticed – unless you use Firefox.

Firefox uses OCSP lookups to validate SSL certificates. If you go to a new site and connect using SSL, Firefox has a process to check the validity of the SSL cert. The results of the lookup are cached and stored for some time (I have heard 3 days, but this could be incorrect) before checking again.

Before the security wonks in the audience get upset, realize I’m not an OCSP or SSL expert, and would love some comments and feedback that help the rest of us understand exactly how this works. What I do know is that anyone who came to a site that relied on an SSL cert provided and/or signed by GoDaddy at some point in its cert validation path discovered a nasty side-effect of this really great idea when the GoDaddy DNS outage occurred: if you can’t reach the cert signer, the performance of your site will be significantly delayed.

Remember this: It was GoDaddy this time; next time, it could be your cert signing authority.

How did this happen? Performing an OCSP lookup requires opening a new TCP connection so that an HTTP request can be made to the OCSP provider. A new TCP connection requires a DNS lookup. If you can’t perform a successful DNS lookup to find the IP address of the OCSP host…well, I think you can guess the rest.
Unlike other third-party outages, these are not ones that can be shrugged off. These are ones that will affect page rendering by blocking the download of the mobile or web application content you present to customers.
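
One server-side mitigation worth investigating (I haven’t battle-tested it myself) is OCSP stapling: the server fetches and caches the OCSP response and hands it to browsers that support stapling as part of the TLS handshake, so the extra lookup moves from your visitor’s browser to a host you control and can monitor. A minimal sketch, assuming Apache 2.4 with mod_ssl (certificate and key directives omitted):

 # The stapling cache must be defined outside any VirtualHost
 SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"
 <VirtualHost *:443>
     SSLEngine on
     # Attach the cached OCSP response to the TLS handshake
     SSLUseStapling on
 </VirtualHost>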

I am not someone who can comment on the effectiveness of OCSP lookups in increasing web and mobile security. OCSP lookups in Firefox are simply one more indication of how complex the design and management of modern online applications has become.

Learning from the near-disaster state and preventing it from happening again is more important than a disaster post-mortem. The signs of potential complexity collapse exist throughout your applications, if you take the time to look. And while something like OCSP may look like a minor inconvenience, when it affects a discernible portion of your Firefox users, it becomes a very large mouse scaring a very jumpy elephant.

Web Performance: Your opinion is only somewhat relevant

Context is everything. Where you stand when reading or watching something shapes the way you experience it. Just as Einstein explained to us in the Train/Platform Thought Experiment, the position of the observer dictates how the event is described and recorded.

There is no difference with web performance. When a company develops an online application and presents it to customers (it doesn’t matter if they are outside/retail or inside/partner/employee), the perspective of the team that approved, created, tested, and released the application becomes, as a VP at a previous company explained to me, “interesting, but irrelevant”.

Step away from the world of online application performance for a minute, and put yourself in the shoes of the customer; become a consumer. How do you feel when a site, application, or mobile app is slow to give you what you want? I’ll give you some idea:

The stress levels of volunteers who took part in the study rose significantly when they were confronted with a poor online shopping experience, proving the existence of ‘Web Stress’. Brain wave analysis from the experiment revealed that participants had to concentrate up to 50% more when using badly performing websites, while EOG technology* and behavioural analysis of the subjects also revealed greater agitation and stress in these periods. (“Web Stress: A Wake Up Call for European Business”, emphasis mine)

I know it comes from a competitor, but it is true. It applies to me; it applies to you. And web performance professionals need to step away from the screens for a minute and put themselves in the shoes of the people standing on the platform.

Every day, your online applications change, grow, fail, falter, and evolve – the train is always moving. To the people on the platform, all they see is your train and how it’s moving compared to the other trains they have watched go by. You worked hard on your train, polishing the brass, adding new cars, even upgrading the engine. To you, the train is a magnificent achievement that everyone should admire, especially now that the new engine makes it so much faster!

The customer on the platform is measuring how your updated train is moving compared to the MAGLEV bullet train on the super-conducting rail next to you and asking “How come this train is so slow?”

The complexity of a modern web site is astounding, and improving performance by 0.4 seconds is often a feat worthy of applause…among web performance professionals. From the perspective of your customers, that 0.4 second improvement is still not enough.

Web performance is a numbers game. As an industry, we have been focused on one set of numbers for too long. The customer experience, not the stopwatch, has to drive your company to the next level of performance maturity. To do that, you have to step off your online application train and take a cold hard look at what you deliver to your customers, alongside them down on the platform.

Web Performance: The Myopia of Speed

In February 2010, Fred Wilson spoke to the Future of Web Apps Conference. He delivered a speech emphasizing 10 things that make a Web application successful.

The one that seems to have stuck in everyone’s mind is the first of these. People have focused, quoted, and written almost exclusively about number one:

First and foremost, we believe that speed is more than a feature. Speed is the most important feature.

Fred Wilson

Strong words.

Fred has worked with Web and mobile companies for many years, so he comes at this with a modicum of experience. And for years, I would have agreed with this. But Fred goes on to describe 9 other items that don’t get the same Google-juice that this one quote does. There are probably 10 more that companies could come up with.

But a maniacal focus on speed means that in some companies, all else is tossed aside in pursuit of some insane, straight-line, one-dimensional goal. Some companies are likely investigating faster-than-light technologies to make the delivery of online applications even faster.

Can you base your entire business on having the fastest online application? What do you have to do to be fast?

Strip it down. Lose the weight, the bloat, the features. And what’s left is a powerful beast designed to do one party trick, likely at the expense of some other aspect of the business that supports the application.

If a company focuses on a few metrics, a few key indicators, they might evolve up to NASCAR, where it is not just speed, but cornering, that matters. Only left-hand corners, mind you, but corners nonetheless. Here speed is important, but is balanced against availability and consistency to ensure that a complete view of the value of the site is understood.

But is that enough? Do your customers always want to go left in your application? What happens if you are asked to allow some customers to go right? Do all of the other performance factors that you have worked on suddenly collapse?

As you can tell, growing up means that my taste in fast cars and racing forms has evolved, become more complex. Straight-line speed, followed by multi-dimensional perspectives have led me to realize that speed is only one feature.

So, if top-fuel and stock-car racing aren’t my gig, what is?

For a number of years in the 1980s and again since 2008, I have had a love of Formula One. The complexity of what these machines are trying to achieve boggles the mind.
Formula One is speed, of that there is no doubt. But there is cornering (left and right), weight distribution, brake temperature, fuel mix, traffic, uphill (and downhill, sometimes with corners!), street courses and track courses. And there are 24 answers to the same question in every race.

And then, there is a driver. In Formula One, a driver with an “inferior” car can win the day if that inferiority happens to suit that particular course, in the hands of a skilled manager.

There is no doubt that, as in Formula One, speed is key to coming out on top. But if the organization is focused solely on speed, then your view of performance will never evolve. The key to ensuring a complete Web performance experience is a maniacal focus on a matrix of items: speed, complexity, third-parties, availability, server uptime, network reliability, design, product, supply-chain, inventory management integration, authentication, security, and on and on.

The Web application is just that: a web. Multiple dependent factors and performance indicators must be weighed, balanced, and prioritized to succeed. No web application, no online application, fixed or mobile, will survive without speed.

However, if speed is all you have, is that enough to keep someone coming back?
Is your organization saying that speed is all there is to performance?

Customer Experience: The Vanishing Reviews

SJE is an excellent supporter of the online economy. However, she is also very focused on the experience she suffers through on many online retail applications. The question I get frequently from the other end of the living room (Retail and Wardrobe Management Control Center – see image) is: “Is Company X a customer? Because their site (is slow | is badly designed | doesn’t work | sucks)!”.
Most of the time, there isn’t much to do, and the site usually responds and SJE is able to complete the task she is focused on.

Last night, however, a retailer did something that strayed into new territory. This company unwittingly affected the customer experience to such a degree that they actually destroyed the trust of a long-term customer.

This isn’t good for me, as I wear a lot of fine products from this retailer. But even in my eyes, they committed a grievous sin.

This retailer decided, for reasons that are known only to them, to delete a number of negative comments, reviews, and ratings for a product that they have for sale.
I just checked, and sure enough, all of the comments, including my wife’s very strong negative feedback about the quality, are gone.

I can think of a number of really devious and greedy reasons why a company might do this. It could also be an accident. If it was an accident, you might want to post a note saying that reviews and comments for this product were accidentally lost.

Now, if you went to a retailer and saw that your comments and reviews had been deleted, how would you feel? Would you trust that retailer ever again? What would happen if the twittering masses picked up the meme and started to add fuel to the bonfire?

A strong business, a solid design, an amazing presentation, and unrivaled delivery aren’t enough for some businesses. As a company, there is substantial effort, time, and treasure dedicated to converting visitors into customers. And it sometimes takes only one boneheaded move to turn a customer into the anti-customer.

Overcoming the Momentum of Traditional Web Performance

When I asked if traditional Web performance still mattered, the post generated a flurry of comments and questions that I haven’t seen in a long time.

After some reflection and discussions with people who have been tackling this problem for longer than I have, the answer is yes, it does matter. However, synthetic Web performance measurement will not matter the way it does now. The synthetic approach will decrease in importance within fully evolved companies, organizations that have strong cultures of Web performance.

In these organizations, the questions change as the approach becomes foundational and integral to the operation of the online business. Ways of examining competition and performance improvement evolve, and the focus moves – from the perspective of We have a problem to one of Our customers / visitors have a problem.

The shift is fundamental and critical. For as long as I have been in the business, synthetic measurements have served as a proxy for customer experience. But unless you get into the browser, out to where and how the customer uses the online application, the margin of error will remain large.

The customer is not an operational issue. There is no technical fix for perceived performance.

There is no easy solution for evolving the experience of performance.

Compression and the Browser – Who Supports What?

The title is a question I ask because I hear so many different views and perspectives about HTTP compression from the people I work with, colleagues and customers alike.

There appears to be no absolute statement about the compression capabilities of all current (or in-use) browsers anywhere on the Web.
My standard line is: If your customers are using modern browsers, compress all text content — HTML (dynamic and static), CSS, XML, and Javascript. If you find that a subset of your customers have challenges with compression (I suggest using a cross-browser testing tool to determine this before your customers do), write very explicit regular expressions into your Web server or compression device configuration to filter the user-agent string in a targeted, not a global, way.
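
As a sketch of what “targeted, not global” might look like in Apache terms – the pattern below is illustrative, and leans on the fact that IE6 on Windows XP SP2 added an “SV1” token to its user-agent string, so its absence is a rough proxy for the pre-SP2 installs (the real situation is apparently murkier, as the customer below found):

 # Assume IE6 cannot take gzip, then re-enable it for XP SP2+ installs (SV1 token)
 BrowserMatch "MSIE 6\.0" no-gzip
 BrowserMatch "\bSV1\b" !no-gzip

That keeps compression on for every other browser and for the vast majority of IE6 visitors, instead of penalizing them all.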

For example, last week I was on a call with a customer and they disabled compression for all versions of Internet Explorer 6, as the Windows XP pre-SP2 version (which they say you could not easily identify) did not handle it well. My immediate response (in my head, not out loud) was that if you had customers using Windows XP pre-SP2, those machines were likely pwned by the Russian Mob. I find it very odd that an organization would disable HTTP compression for all Internet Explorer 6 visitors for the benefit of a very small number of ancient Windows XP installations.

Feedback from readers, experts, and browser manufacturers that would allow me to compile a list of compatible browsers, and any known issues or restrictions with browsers, would go a long way to resolving this ongoing debate.

UPDATE: Aaron Peters pointed me in the direction of BrowserScope which has an extensive (exhaustive?) list of browsers and their capabilities. If you are seeking the final word, this is a good place to start, as it tests real browsers being used by real people in the real world.

UPDATE – 09/24/2012: I found a site today that was still configured incorrectly. Please, please check your HTTP compression settings for ALL browsers your customers use. Including your MOBILE clients.
