Tuesday, March 26, 2013

Tracking When Users Print Pages

A few months ago I had the pleasure of writing a piece for .net Magazine about print styles (Make your website printable with CSS). It was posted to .net's web site last month and received an overwhelming one comment. That comment, however, summed up something I hear all the time:

Would be interesting to see some statistics on how many people actually print websites.

For years I have argued that the best user statistics are those for the site you are building. In the absence of global numbers for how many users print web pages, in this post I'm going to show you how you can measure how many (and which) pages get printed from your site by using Google Analytics. I am also hoping those who know everything about Analytics can answer some of my questions.

The Concept

While looking around for existing solutions to track printed pages, I found this article: Use Google Analytics to Track When People Print your Web Pages (written exactly one year before I got my own code working). While there doesn't appear to be anything wrong with that approach (I did not try it), the way it both produces the tracking code (with JavaScript) and presents the data in Analytics (differently than how I report on custom events) doesn't match my preferred approach.

I want to be able to call the Google Analytics tracking image (__utm.gif) only when the page is going to be printed, skipping unnecessary HTTP calls and the resulting image download (brief though it is). I rely on the CSS @media print declaration to call the image. I also don't want to write that image call to the page with yet more client-side script when I can assemble it all right on the server.

Since my post Calling QR in Print CSS Only When Needed already outlines the general flow (presuming I only want to support Internet Explorer 9 and greater), I can lean on the CSS syntax there.

To reiterate: this technique will not work in Internet Explorer 8 and earlier.

Constructing the Query String

I had a heck of a time finding information on how the Analytics query string needs to be constructed, and when I did find information it didn't always explain the values in much detail.

Google's developer site has information on all the query string parameters for the GIF request, but no information on what is required or what all the possible values might be. I did find a list of what may be the required parameters in a thread on tracking emails with Analytics. Through a good deal of experimentation I came up with the following minimum list for my purpose:

  • utmac: Account string. Appears on all requests. This is your UA-#######-# ID.
  • utmwv: Tracking code version. While my standard GA requests use 5.4.0, I opted to use 4.3 for reasons I no longer recall.
  • utmn: Unique ID generated for each GIF request to prevent caching of the GIF image. I just concatenate the current year, month, day, hour, minute, and second.
  • utmhn: Host name of your site, as a URL-encoded string.
  • utmr: Referral, complete URL. In this case I just insert a dash so it is not blank.
  • utmp: Page request of the current page.
  • utmt: Indicates the type of request, which is one of: event, transaction, item, or a custom variable. If you leave it blank, it defaults to page. Because I am tracking events, I use event.
  • utme: Extensible parameter. This is where you write your event. I use 5(Print*{page address}). See below for why.
  • utmcc: Cookie values. This request parameter sends all the cookies requested from the page. It can get pretty long. It must be URL-encoded and must include the __utma and __utmz values.
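To make the assembly concrete, here is a hedged JavaScript sketch of how those parameters can come together. The function name buildUtmGifUrl is my own invention, this is not an official Google API, and the URL-encoded cookie value is assumed to be built elsewhere:

```javascript
// Sketch only: assembles a minimal __utm.gif query string from the
// parameters described above. The account ID, host, and page values
// are placeholders.
function buildUtmGifUrl(account, host, page, cookieValue) {
  // utmn: a cache-busting number; here, year+month+day+hour+minute+second
  var d = new Date();
  var utmn = '' + d.getFullYear() + (d.getMonth() + 1) + d.getDate() +
             d.getHours() + d.getMinutes() + d.getSeconds();
  var params = [
    'utmac=' + account,                  // your UA-#######-# ID
    'utmwv=4.3',                         // tracking code version
    'utmn=' + utmn,                      // unique per-request number
    'utmhn=' + encodeURIComponent(host), // host name, URL-encoded
    'utmr=-',                            // referral; a dash so it is not blank
    'utmp=' + page,                      // page being tracked
    'utmt=event',                        // we are tracking an event
    'utme=5%28Print*' + page + '%29',    // the event, parentheses pre-encoded
    'utmcc=' + cookieValue               // URL-encoded __utma/__utmz cookies
  ];
  return 'http://www.google-analytics.com/__utm.gif?' + params.join('&');
}
```

On my site this assembly happens on the server, so treat this purely as an illustration of the string being built.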

Because the whole point of this exercise is to track the event in Google Analytics, it was important to understand how to construct the event for the query string. I struggled a bit.

I still haven't figured out what the number 5 maps to, but it works. I also found that I need an asterisk as a separator, though I found no documentation explaining it. In the end, the only way a print event tracked as I wanted was when I constructed it as: 5(Print*/Accessibility). In this example, /Accessibility is the address of the page I am tracking.

The other tricky bit is pulling the cookie value and stuffing it into the string. Conveniently I can get to this within our content management system (QuantumCMS, which you should use) on the server side. Many others (if not most or all) have a similar ability. At the very least you have to include the __utma and __utmz values, passed as encoded parameters for utmcc. Without these, my tracking would not fire.
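As an illustration only (I do this server-side in QuantumCMS, and the function names here are invented), the same cookie extraction could look like this in client-side JavaScript, where in a browser you would pass document.cookie as the cookieString:

```javascript
// Sketch: pull the __utma and __utmz Google Analytics cookie values out of
// a cookie string and build the URL-encoded utmcc parameter. Assumes the
// GA cookies already exist for the page.
function getCookie(cookieString, name) {
  var match = cookieString.match(new RegExp('(?:^|;\\s*)' + name + '=([^;]*)'));
  return match ? match[1] : '';
}

function buildUtmcc(cookieString) {
  // The raw (unencoded) form looks like: __utma=<value>;+__utmz=<value>
  var raw = '__utma=' + getCookie(cookieString, '__utma') +
            ';+__utmz=' + getCookie(cookieString, '__utmz');
  return encodeURIComponent(raw); // produces %3D, %3B, %2B, and so on
}
```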

The Completed Query String

For ease of reading, I will break the string to a new line at each &. This represents what is generated when I visit the careers page on the Algonquin Studios site using Opera.

http://www.google-analytics.com/__utm.gif
?utmac=UA-1464893-3
&utmwv=4.3
&utmn=2013326124551
&utmhn=algonquinstudios.com
&utmr=-
&utmp=/Engage/Careers
&utmt=event
&utme=5%28Print*/Engage/Careers%29
&utmcc=__utma%3D267504222.1477743002.1364314722.1364314722.1364314722.1%3B%2B__utmb%3D267504222.17.7.1364314901604%3B%2B__utmz%3D267504222.1364314722.1.1.utmcsr%3D%28direct%29|utmccn%3D%28direct%29|utmcmd%3D%28none%29

Constructing the CSS

Now that you have the query string and the Google Analytics tracking image, you just need to call the image when the page is printed. All you need to do is embed a style block at the top of your page with the print media query, and call the image within it:

@media print {
  header::after
    { content: url(http://www.google-analytics.com/__utm.gif?utmac=UA-1464893-3&utmwv=4.3&utmn=2013326124551&utmhn=algonquinstudios.com&utmr=-&utmp=/Engage/Careers&utmt=event&utme=5%28Print*/Engage/Careers%29&utmcc=__utma%3D267504222.1477743002.1364314722.1364314722.1364314722.1%3B%2B__utmb%3D267504222.17.7.1364314901604%3B%2B__utmz%3D267504222.1364314722.1.1.utmcsr%3D%28direct%29|utmccn%3D%28direct%29|utmcmd%3D%28none%29); }
}

If you read my post on embedding QR codes, then this code will be familiar; I use header::before in that example, so I use header::after here, letting you key both off the same element (header) without conflict.

If you look closely, you may have noticed that my event parameter appears as 5%28Print*/Engage/Careers%29 instead of 5(Print*/Engage/Careers). I URL-encoded the parentheses to make certain that they do not conflict with the parentheses of the CSS url() function. If you don't do that, the browser will get confused and fail to load the image.
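The parenthesis-encoding step itself is trivial; here it is as a hedged JavaScript sketch (the function name is my own):

```javascript
// Encode only the parentheses in the utme value so they cannot be
// mistaken for the closing parenthesis of the CSS url() function.
function encodeUtmeParens(utme) {
  return utme.replace(/\(/g, '%28').replace(/\)/g, '%29');
}
```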

Once you have the CSS in place, I recommend opening HttpFox or the Chrome Developer Tools to make sure the image is called when you fire a print preview (save paper!), and then to make sure it has the parameters you expect, particularly the utme value:

Screen shot of Chrome Dev Tools showing the query string parameters for the tracking GIF.

Checking Your Google Analytics Report

Assuming you've verified all is working well, you just need to run a report for events in Google Analytics. Bear in mind that Analytics isn't up-to-the-minute, so you may need to give it some time to capture all the data.

Log into your Analytics account and make sure you set the report date to the time period where you rolled out these changes. Choose "Content" from the "Standard Reports" on the left side. From there, expand "Events" and then select "Top Events." You should see "Print" as one of the items in the "Event Category" column (you may need to show more rows).

Screen capture from Google Analytics
After you click "Top Events," you will see all of the events you are tracking (if you have any others).

Click on the word "Print" in that grid and you will see all the pages that were tracked (ostensibly because you or a user printed the page).

Screen capture from Google Analytics
The report is handy if you know the page addresses, but Analytics doesn't think of them as such. As a result, clicking the addresses will not take you to the page.

From here you can add a secondary dimension to cross-reference this with more information. In my example, I tested different pages in different browsers so I could quickly verify the cross-browser support. You can add screen resolution, landing page, or any other dimension that you think might be handy to compare.

Screen capture from Google Analytics
An example comparing the printed pages with the browser as a secondary dimension of the report.

Conclusion

I am just adding this to my own site, so I don't have any numbers to offer as part of this post. However, if you implement this please feel free to let me (and everyone) know how many users you have who print and for what site. I don't expect the numbers to be high, but I do expect to see it happen here and there.

If you have any additions, corrections or suggestions, please let me know. I am still unclear how all the Google Analytics query string parameters come together and exactly what they all mean, so there may be some optimizations I can work into it.


Update: October 16, 2014

Over at Smashing Magazine, Krasimir Tsonev appears to have independently developed the same method in his post, CSS-Only Solution For UI Tracking. I left a comment pointing folks here and to my Web Standards Sherpa article covering the same technique.

Sunday, March 24, 2013

Women in Technology

Portrait of Augusta Ada King, Countess of Lovelace
Augusta Ada King, Countess of Lovelace (1815–1852), considered by many to be the first computer programmer (of any gender). Portrait by Alfred Edward Chalon.

Lately you've probably heard plenty about education in the US and the renewed push for STEM (science, technology, engineering, math). As STEM education gets attention, it has reminded us all that there is a shortage of women in STEM-related fields as well as STEM-related courses and programs.

As someone in the technology industry, I can see this disparity when I go to conferences, when I speak at classes, when I review job applications, and when I talk to women in my life who are interested in technology.

That's why it was heartening to hear about a local young woman spinning up a chapter of Girl Develop It here in Buffalo (also on Twitter at @gdiBuffalo).

I hope this group pans out. I think it can benefit both men and women.

Challenges in Tech

I hate to blunt this positive by bringing in negatives, but it's because of these negatives that I see such value in this new local group.

So many pithy, rambling, angry things are written daily about gender in technology that I'd rather not add to the noise. I will, however, link to examples of why I feel there is a need for resources for women in our industry. There are far, far more examples out there.

Back on a positive note, there are other resources on the web for women, such as Ladies in Tech, Girls Who Code, and Black Girls Code.

Tuesday, March 12, 2013

WebKit Will and Won't Be the New IE

Web developers have been looking to call everything the new Internet Explorer for a while now. With Opera's recent move to WebKit as its rendering engine (replacing Presto), even more developers are suggesting that WebKit is becoming the new IE.

I think they are right, but for the wrong reasons.

How WebKit Won't Be the New IE

Unlike Trident (Internet Explorer's rendering engine), WebKit can be wielded in many different ways by many different browsers. It's less a singular rendering engine and more a collection of pieces and parts that can be assembled in different ways. The tweet above demonstrates that there can be different WebKit implementations.

You may argue that the example above is just a case of the first implementation of an update that will make it into all browsers that use WebKit. That may very well be true (we don't know yet), but there are many more examples of differences as Paul Irish painstakingly details in his post WebKit for Developers. I encourage you to read it because it's an almost hilarious dive into the rabbit hole of WebKit. The most salient and clear point is the bullet list of what is not shared in WebKit ports:

  • Anything on the GPU
    • 3D Transforms
    • WebGL
    • Video decoding
  • 2D drawing to the screen
    • Antialiasing approaches
    • SVG & CSS gradient rendering
  • Text rendering & hyphenation
  • Network stack (SPDY, prerendering, WebSocket transport)
  • A JavaScript engine
    • JavaScriptCore is in the WebKit repo. There are bindings in WebKit for both it and V8
  • Rendering of form controls
  • video & audio element behavior (and codec support)
  • Image decoding
  • Navigating back/forward
    • The navigation parts of pushState()
  • SSL features like Strict Transport Security and Public Key Pins

That amounts to quite a lot of potential variance between WebKit-powered browsers, which is how WebKit is not the new IE.

How WebKit Will Be the New IE

Far too many developers are always looking for ways to justify testing less. It could be laziness, it could be lack of access to enough device configurations, it could be … well, frankly, I think it's the first one. My IE10 tweet above was based on watching people as thrilled to take a shot at Internet Explorer as they were to feel they could test on one fewer browser.

Developers may use the common engine in one browser as justification for not testing on all the WebKit browsers, or they just may not know about the potential for dramatic differences between implementations. What happens when everyone builds for one awesome browser engine (or one perceived common engine)? We end up with a browser monoculture, devoid of testing for other variations.

As a web developer since the dawn of the web, I've seen it happen before. I remember surfing on Netscape only to find a site didn't take into account my screen colors or resolution, or on Internet Explorer only to find a site didn't take into account my laptop pixels-per-inch default. I was using the popular rendering engine of the day, but the assumption that all users on that engine would all have the same experience was wrong then, and it will be even more wrong now.

WebKit will become the new IE if web developers continue these false assumptions and fail to test other WebKit implementations. Developers will build for one implementation, test for it and maybe a couple variations, and call it a day.

Conclusion

My fear is that all the gains we've made in the last few years toward a more interoperable, standards-based web (leaving behind the Internet Explorer monoculture) will fall away as we unwittingly move toward a web tweaked for WebKit yet not interoperable across its many variants.

In short, we'll test for one WebKit, not realize (or care) about all the other variations, and end up breaking the web all over again.

WebKit won't be the new IE, but WebKit will be the new IE.

Friday, March 8, 2013

Calling QR in Print CSS Only When Needed

For those of us who put together print styles for our sites, we've probably tossed around the idea of embedding QR codes so that users can quickly get back to a page they have printed. In the hardcopy version of my article for .net Magazine, "Make your website printable with CSS," I show how you can embed QR codes in your page (it's not included in the online version).

In my example I use the Google Charts API to generate the QR code on the fly. The problem in my example is that the QR code image gets called whether or not you print the page. Not only is this an additional HTTP request, it's also an additional download that immediately gets hidden. This puts a bandwidth burden on users who aren't printing, but it's also the only way to support your users on Internet Explorer 8 and below (who may be the ones trapped at the office who want to bring the document home).

If you truly have no IE8 or below users, then the less bandwidth-hoggy approach is rather simple, if a bit inelegant.

Since each call to the Google Charts API to get the QR code must include the full address of the page, I cannot leave this to my linked CSS file (which is static, not run through any server-side processing), nor would I want to push every URL for every page of my site into that file. Initially I wanted to use a data- attribute to hold the URL and then, using the generated content feature of CSS, feed that value into the content declaration to generate the image. Except that's not how CSS works: you cannot feed an attribute's value into url() to produce an image.
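One way around that, sketched here in JavaScript (the function name is my own invention), is to build the print-only rule as a string on the server or the client and then write it into the page as a style block:

```javascript
// Sketch: build the print-only style rule that pulls a QR code for the
// given page address from the Google Charts API. In the browser you could
// inject the returned string via a <style> element; it is kept as a pure
// string builder here so server-side code can use the same approach.
function buildQrPrintCss(pageUrl) {
  var chartUrl = 'http://chart.apis.google.com/chart?chs=120x120&cht=qr&chl=' +
                 encodeURIComponent(pageUrl);
  return '@media print { header::before { content: url(' + chartUrl + '); } }';
}
```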

The easiest solution is to put a style block at the top of your page (something I hate doing) and feed the current page's URL into the Google Charts API query string to dynamically draw the image. The rest of the styles that affect placement, spacing, etc. should already be in your print stylesheet. The example:

@media print {
  header::before
    { content: url(http://chart.apis.google.com/chart?chs=120x120&cht=qr&chl=http%3A%2F%2Falgonquinstudios.com/Engage/Careers); }
}

That's it. Now when (and only when) you call the print styles, the image will load. As proof, here is a screen shot from HttpFox showing the page before and after the print styles were called; you can clearly see the QR code is requested only when the print styles fire.

Screen shots of the list of HTTP requests before and after the print styles were fired.
Screen shot of the print preview with the generated QR code in place.

Note: This technique will not work in any version of Internet Explorer that doesn't support CSS generated content, which includes IE 8 and below. Internet Explorer 9 and above happily include the QR code generated with this method.

Update: March 26, 2013

I build on this technique to show you how you can use Google Analytics to track which pages of your site are printed, and when: Tracking When Users Print Pages.

Thursday, March 7, 2013

Observing Users with Mobile Devices

Nuns taking photos of each other at the Peak on Hong Kong island with their Hello Kitty iPad (which could result in a niche Tumblr).

I had the pleasure of traveling to Hong Kong for the UXHK conference just last week (the conference was the week prior, but I stayed around to be a tourist). While there I decided to spend some time observing how people used their mobile devices and what devices they used. Far from scientific and probably highly tainted by my own assumptions, it was still an interesting experiment.

When I got back I stumbled across some articles discussing how people use their mobile devices and was pleased to find a lot of commonality.

My Own Observations

It seemed like everyone in Hong Kong was using a smartphone. Not everyone was, but given how often my movement on densely-packed streets was stymied because a texting twenty-something or an elderly Bejeweled player was slowly meandering through the chaos, it certainly felt that way.

What I did track is that once on the MTR (the Hong Kong subway), about 8 out of 10 people pulled out a smartphone and started to do something. Occasionally it was someone using it to talk, but usually it was some awkward one-handed wholly-attention-grabbing activity. For the cases where I could see, a bit more than half the people appeared to be playing games. Age did not seem to be a determining factor for games versus non-games. For the remainder, I saw lots of what may have been texting or tweeting. I am guessing this because I primarily saw people selecting Chinese characters as part of some sort of text-based input for an app.

I was most struck by how few iPhones I saw. It felt that most of the people who I took to be locals were using Android devices. I was also surprised at the number of phablets (large smartphones, but not large enough to be tablets) I saw. In particular, every time I looked I saw at least one Samsung Galaxy Note II.

Among tourists I saw a different breakdown. I am guessing who the tourists were, but camera-toting white people seemed an easy fit, with tourist traps, accents, and personal gear helping to suggest others. I saw many iPhones in their midst, and more than a few tourists taking photos with their iPads, Smart Covers dangling in the breeze.

The Apple store in Hong Kong. 5 people were taking photos as I walked by: 1 was using a digital camera, 1 was using an iPad, the other 3 were using Android phones.

How Do Users Really Hold Mobile Devices?

Over at the UX Matters site, Steven Hoober asked How Do Users Really Hold Mobile Devices? His approach was similar to mine in that he and his team observed users "in the wild," but they actually tracked data points as they went (instead of relying on memory, as I did). They got some interesting results from their observations:

In over 40% of our observations, a user was interacting with a mobile phone without inputting any data via key or screen.

The users who we observed touching their phone’s screens or buttons held their phones in three basic ways:

  • one handed—49%
  • cradled—36%
  • two handed—15%
Pie chart of breakdown of how users hold mobile phones.

While I don't have numbers from my own casual observations to back up my opinions (they are just opinions, after all), I feel like the breakdown for how people held their phones when using touch input was similar to what I saw. However, I saw nowhere near 40% of smartphone users talking/listening to their phones.

One thing this study cannot capture is how people hold their phones for more specific tasks. For example, I saw lots of people taking photos with their phones, phablets, and tablets. Other than awkward arms-length self-portraits (with either the front- or rear-facing camera), I always saw them use both hands. This doesn't surprise me and it's probably not worth measuring, but it would be interesting if it turned out that my assumption was totally wrong.

Observations on use of mobile devices at airports and train stations

Maish Nichani and Bernie Quah at Pebble Road illustrated some casual observations on use of mobile devices at airports and train stations. While this isn't a scientific study, it's interesting to see that the poses seem to be universal.

For my observations, which were primarily on busy sidewalks, at subway stations and on the subway trains themselves, there would be additional sketches. Even the train sketches don't all apply (densely-packed subways are a bit different than trains with enough room you can sit sideways and that have access to power outlets). I spent far less time observing at the Hong Kong airport because I was either arriving and trying to get clear, or departing and trying to find food.

Illustration of mobile user sitting sideways on train seat next to power outlet.

Your Own Observations

These two articles illustrate how easy it can be to see how people interact with their mobile devices. An advantage to this passive approach is that you catch people behaving as they normally do, without subconsciously modifying their behavior because they are being observed. Anybody who has run any kind of user group testing knows that can be a problem.

A disadvantage to this observational approach is that you don't know what people are doing — you have no context. While you might be able to quickly tell when someone is taking a photo, it's harder to tell if someone is checking in on Foursquare, playing Tetris, texting, tweeting, or looking up directions to a restaurant. This lack of context will always make your observations useful in only the most basic way.

Regardless, these observations might be enough for you to devise your own testing methodology as you build apps, make mobile-friendly sites, develop interfaces in general, or even work on hardware.

Tangentially Related

All those nifty touch-screen laptops have their own interesting challenges. Not only are they touch screen, they are mouse- and keyboard-driven at the same time. Boris Smus shows examples of how user expectations may pan out in Interactive Touch Laptop Experiments.

Monday, March 4, 2013

UX Hong Kong 2013 Recap

Panoramic view from second row (1st was reserved) at #UXHK. See the check-in at Foursquare.

I had the pleasure of returning to Hong Kong in late February to attend the third (my first) UX Hong Kong, a two-day conference. A combination of speakers, subject matter, my desire to return to Hong Kong, and timing came together to make this conference a good fit.

Day One

Morning Session

The first half of the first day was a series of six speakers (Michael Davis-Burchat, Jeff Gothelf, Timothy Loo, Will Evans, Marcel Takagi, and Josh Seiden) who each had about 15 minutes to seemingly pitch their half-day workshops the following day (for which attendees had already signed up). The benefit for attendees is that they received an overview of each of the sessions and got some of the highlights as a result.

It also felt to me that the conference was taking a decidedly process-oriented focus on UX, specifically around agile, lean, scrum, and related practices.

Timothy Loo spoke about an overall company-wide UX strategy, framing it in examples of business and brand strategies. Jeff Gothelf addressed agile and lean processes and how to apply them to UX. Marcel Takagi spoke about regional experiences in Asia, balancing local needs with universal design. Josh Seiden spoke about how to apply Agile to existing and new businesses. Will Evans introduced me to the Cynefin model, and I think confused some non-native speakers by using "ontologies" and "epistemology" in his presentation. Michael Davis-Burchat discussed simplified research and studies to apply reason to the overall process.

Afternoon Session

The afternoon discussion group I selected was "Making a User-Centered Product Company," led by Andrew Mayfield. He started off by having our individual tables (now our group) come up with a definition of "minimally viable product." In fact, for each question he asked, he had our groups come up with definitions and he would simply validate our statements. When he started asking about stand-up meetings, I realized I was in an Agile discussion group (it became clear when I noted that stand-up meetings have existed for decades and he had to qualify that he meant a particular kind of stand-up meeting).

While I felt the session was more about Agile than UX, and the web site did not make any mention of Agile for this session, I had some very good discussions with the folks at my table, partly because none of them were sold on looking solely at the process. We spoke about what we each did at our jobs and what aspects of UX we felt we touched on a day-to-day basis.

Miscellany

For breaks and lunch, there was a nice selection of foods and drinks, even if there weren't enough chairs or tables. Conversations with other attendees were easy, and as the seemingly token American, it was a great opportunity for me to get face-to-face insights on all sorts of software and web topics from the other side of the world.

The post-conference mixer, which I accidentally attended because I just kept chatting with people, was well attended. It helped that it was held in the same atrium as the meal and was the only way out of the building.

Day Two

Morning Session

I attended "Lean UX: Agility Through Cross-Functional Collaboration" by Jeff Gothelf (slides from the same presentation he gave in May 2012) partly because when I signed up I thought it was the only Agile session and I wanted to see how a specific process had been applied to UX. I figured seeing UX in a different way might give me more insight into its application. It turns out with the seemingly Agile focus for the conference, I had already received my introduction to that process.

This session seemed to target the Lean start-up model in particular, which may work well for granular features and products that are well-suited to a two-week sprint cycle. Applying this to a multi-month or multi-year project seems a bit trickier and even Jeff acknowledged that the Lean process may not easily slide into a long-term project or a waterfall model.

During the course of the session, he provided us with a general set of steps to follow and gave us an example product to which we could apply it. The steps were:

  1. Goal setting
  2. Declare assumptions
  3. Hypothesis
    • "We believe that [building this feature] for [this audience] will achieve [this outcome]."
  4. Identify the smallest thing you can do to test that hypothesis (minimally viable product or experiment)

Interestingly, this entire process seems to closely resemble the rapid prototyping approach we have been practicing at Algonquin Studios since the start of the company 15 years ago, itself borrowed from many smarter people before us. When I made this connection in my head, the session ended up applying very neatly to my world.

Afternoon Session

My afternoon discussion group was "UX Strategy: Redesigning Business" by Tim Loo (Slideshare of his presentation). Being the last session on the last day, some folks were getting a little antsy to head out, and the session was clearly running over its allotted time, which didn't help. However, when we were focused our group had some great discussions about how to get UX into our existing business models.

The UX strategy Tim presented seemed straightforward, if wordy: "Long-term vision, roadmaps and key performance indicators to align every customer touch-point with your brand position and business strategy."

He presented a general framework for this strategy, challenges to UX acceptance, and then ran us through a "shit-storming" session, where we mapped pain-points for users. In particular we logged an emotion someone might feel when using a product along with what caused it and used those to help prioritize what we would address.

Conveniently, this session wasn't bogged down in a process, but it was a little tricky to understand how to plan for particular outcomes (emotions, customer stories) instead of outputs (features, bug fixes).

Wrap-up

There are some wrap-up pages made up of tweets, pictures and posts. One is Lanyrd's Coverage of UX Hong Kong 2013 and another is Eventifier's collection. Other than Tim Loo's slides, I found no others online. I opted to exclude the sketch notes that popped up because they didn't seem to capture what I thought were the salient points of the presentations.