Sunday, March 16, 2014

Make Getty Embeds Responsive

In my post What to Consider before Using Free Getty Images, one of the many caveats I outlined was the lack of responsive support in Getty's iframe code. Of all the issues I raised, this one is actually pretty easy to get around.


While the other points still preclude me from readily using Getty image embeds, I recognize its value to clients and want to be able to allow them to use these images without fear of breaking the responsive sites we've coded. I also wanted a solution that won't require my clients to do any extra work, such as embedding HTML wrappers, adding classes or IDs, inline script, or generally mucking about in code.

If you've read at least a few of my last 15 years of writing, you might know I have a general aversion to scripted solutions and generally start by trying a CSS solution. At first glance, there is an example dating back five years at A List Apart that I thought might work, Creating Intrinsic Ratios for Video.

Another post tackled other embeds (Vimeo, Slideshare) in 2011 using the same technique, expanded just a bit. Five years on, a new article popped up (just a couple weeks ago, Making Embedded Content Work In Responsive Design) that targets more third-party embeds (calendars, Google Maps) but leans on the A List Apart technique — and by leans on I mean just uses it outright, but for more services.
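For context, the A List Apart intrinsic-ratio technique looks roughly like this. The class name is mine, and the 56.25% padding-bottom assumes a known 16:9 ratio (9 ÷ 16 = 0.5625):

```css
/* The wrapper's padding-bottom sets its height as a percentage of its width,
   so the box keeps a fixed aspect ratio as it scales. */
.embed-wrapper {
  position: relative;
  height: 0;
  padding-bottom: 56.25%; /* 16:9; a 4:3 embed would use 75% */
}

/* The iframe is stretched to fill the wrapper. */
.embed-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
```

Note that this only works when you know the ratio up front and can wrap the embed in extra HTML, which is exactly where it falls down for my requirements.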

The A List Apart solution and its variations don't work for two reasons: 1) they violate my requirement of not making my authors create HTML and 2) they rely on knowing the ratios. Every Getty image can have its own aspect ratio that I can't expect my authors to calculate.

The Getty embed has another factor not accounted for in these other solutions — regardless of the image aspect ratio, there is always a credit box below the image at a seemingly fixed height. This bar will always occupy that height, so scaling the image's height based directly on its width ends up leaving an ugly vertical white bar on the right of the image. This precludes any simple ratio as a solution.

My Solution

I made a demo to show my solution (does not open in a new window).

I decided the best approach was to write a JavaScript function that accounted for the height of the image credit as it calculated the image ratio. Then it would apply width and height styles that would scale the embed without leaving the ugly white gap on the right (barring rounding errors, which are evident in the portrait images).

I opted for JavaScript instead of a block of jQuery because I knew this would be maybe a dozen lines of code in total, and requiring an additional 29-82KB (depending on your minification and zippage) for the jQuery library is, well, absurd. Also, I am not a fan of dependencies, particularly when most developers rely on hosted libraries.

I did some screen captures of Getty image embeds and identified the image credit bar is 69 pixels tall. That number may (will) change in the future. You may want to populate that variable from your chosen CMS so you don't have to do a full testing and deployment pass just to update one variable in your JavaScript functions file or page template (across all your sites) when Getty inevitably changes it.

The Getty iframe has no unique ID or class to make it easy to identify on the page, nor any other unique attributes, with the exception of the src attribute. So I loop through all iframes on the page and only grab those with the Getty URL.

I then get the iframe's width and height attributes, subtracting 69 from the latter, and calculate the ratio. From there I scale the iframe to 100% width and then get its new pixel width to feed to the ratio to calculate what its new height should be, finally adding 69 to it.
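As a sanity check on that arithmetic, here it is factored into a small pure function. The function name is mine, for illustration only:

```javascript
// Given the embed's original width/height attributes, the new rendered
// width, and the credit bar height, return the new iframe height.
function scaledEmbedHeight(origWidth, origHeight, newWidth, crHeight) {
  // Ratio of the image alone, with the fixed-height credit bar excluded.
  var picRatio = (origHeight - crHeight) / origWidth;

  // Scale only the image portion, then add the credit bar back.
  return Math.round((picRatio * newWidth) + crHeight);
}
```

For example, a 478 × 428 embed scaled to fill a 540-pixel column comes out to a height of about 475 pixels, not the 483 pixels you would get from a naive proportional scale.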

In my example page, I call the function at the bottom of the page and also in an onload in the body. There are better ways to do this, but given all the variations that are already out there (and that you may already employ), I leave it to you to figure out the best approach to handle users who resize windows or rotate their phones and tablets.
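One possible wiring for those resize and rotation cases, sketched here with a debounce so the recalculation doesn't run on every pixel of a drag. The 100ms delay and helper names are mine, and it assumes the responsiveGetty() function from this post is already defined:

```javascript
// Return a wrapped function that waits until calls stop arriving for
// `delay` milliseconds before running `fn` once.
function debounce(fn, delay) {
  var timer;
  return function () {
    clearTimeout(timer);
    timer = setTimeout(fn, delay);
  };
}

// Guarded so the snippet is harmless outside a browser; in a page this
// re-runs responsiveGetty() after resizes or device rotation settle.
if (typeof window !== 'undefined') {
  var onEmbedResize = debounce(responsiveGetty, 100);
  window.addEventListener('resize', onEmbedResize);
  window.addEventListener('orientationchange', onEmbedResize);
}
```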

What is compelling to me about this solution is that my clients (site authors) don't need to worry about adding or modifying any HTML on the page (most don't know HTML anyway), let alone CSS or script. When they paste the embed code, it should just work.

The Code

function responsiveGetty() {

  try {
    // Get all the iframes on the page.
    var iframes = document.getElementsByTagName('iframe');

    // Height in pixels of the Getty credits/share bar at the time of this writing.
    var crHeight = 69;

    for (var i = 0; i < iframes.length; ++i) {

      // Check to see if it's a Getty embed using the only attribute that's unique, the src.
      if (iframes[i].src.indexOf("//") != -1) {

        var eachIframe = iframes[i];

        // Get the current ratio after excluding the credit bar.
        var picHeight = eachIframe.height - crHeight;
        var picRatio = picHeight / eachIframe.width;

        // Set the iframe to fill the container width. = "100%";

        // Set the iframe height to correspond to the ratio, adding back the credit bar height. = ((picRatio * eachIframe.offsetWidth) + crHeight) + "px";
      }
    }
  } catch (e) {}
}



There are probably ways to optimize my code or factors I did not consider. If you see something wrong or that could be improved, leave a comment.

Friday, March 14, 2014

I Don't Care What Google Did, Just Keep Underlining Links

Screen shots of Google results page with two kinds of color-blindness simulated.
Screen shots of Google search results showing protanopia (middle) and deuteranopia (right) forms of color-blindness. Click/tap/select the image for a full-size view.

I figured I'd lead with my argument right in the title. Even if you read no further, you know where I stand. I'm just going to fill up the rest of this space explaining why anyway.


The Verge posted an article (Google removes underlined links, says goodbye to 1996) telling us Google is removing underlines on hyperlinks in search results, and also suggesting that underlines are oh-so-18-years-ago.

It's that sentiment (echoed in the article with the phrase "'90s-style underlined links are being removed from Google search results") that makes me worry The Verge is being snide about a usability feature it doesn't understand. The original heads-up from a Googler wasn't quite so focused on the underlines.

Why Google's Almost Plan Works

Google's search results are almost completely hyperlinks. Google retains a classic indicator of the hyperlink and keeps them all blue (a color contrast ratio of 11.2:1 to the white background and 1.5:1 with the body text) so that users don't have to learn a color scheme unique to Google. In this context, when users know the page is full of links and the colors are consistent, coupled with Google's position as a top site on the web, users aren't likely to get confused about what to click/tap/follow.

Similarly, The Verge has no underlines on its hyperlinks, whether in the navigation or in the content, until the link gets focus or the mouse hovers. This likely isn't an issue for most users as the in-content links are orangey-red within otherwise black text (a 3.7:1 color contrast ratio to the body text and 3.5:1 contrast ratio to the background). The non-inline links are pretty much all navigation anyway, and removing underlines from navigation links is a de facto standard. In this case, the links can be anywhere in the page content — they don't benefit from consistent positioning on the page as Google's links do.

How It Won't Work for You

My concern is that the average web developer may see Google dropping underlines as an excuse to do it on their own projects, without the context. For example, an article or blog post may be littered with links throughout the content. This doesn't correspond to the same type of content or organization that you see on the Google search results page. That same article or blog post may also not have a color scheme that makes it appropriate to remove the underlines.

Google misses the mark in that the blue hyperlinks don't have sufficient contrast with the rest of the text on the page. The layout Google uses, and has used for years, mitigates this as users will quickly (re)discover how links are organized on the page regardless of color or underline.

I mention The Verge's color contrast ratio above because its orangey-red links will fail Web Content Accessibility Guidelines 2.0 (WCAG) level AA compliance. I am not trying to pick on The Verge here — I can find many sites that will fail that check, including some of my own. But it is worth understanding that removing underlines, to meet even basic accessibility compliance, will require you to step up your game on understanding color contrast.

Screen shots showing links on The Verge with different forms of color-blindness.
Screen shots of hyperlinks on The Verge showing deuteranopia (top) and protanopia (bottom) forms of color-blindness. Click/tap/select the image for a full-size view.

What You'll Need to Do

To make it easy, I'll link to the WCAG notes with a quick description of what you have to do.

Guideline 1.4.1 states that you cannot rely on color alone to convey information (such as when text is a hyperlink).

If you do rely on color, contrast is imperative. Use only colors that would provide 3:1 contrast with black words and 4.5:1 contrast with a white background. I've included links to contrast checkers below.
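If you would rather check your colors in code than in one of the linked tools, the WCAG 2.0 contrast ratio can be computed from relative luminance. This sketch follows the formula in the WCAG definition; the function names are mine:

```javascript
// Linearize one sRGB channel (0-255) per the WCAG relative luminance definition.
function channel(c) {
  c /= 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an RGB color.
function luminance(r, g, b) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio between two colors, each given as [r, g, b].
// Ranges from 1:1 (identical) to 21:1 (black on white).
function contrastRatio(rgb1, rgb2) {
  var l1 = luminance(rgb1[0], rgb1[1], rgb1[2]);
  var l2 = luminance(rgb2[0], rgb2[1], rgb2[2]);
  var lighter = Math.max(l1, l2);
  var darker = Math.min(l1, l2);
  return (lighter + 0.05) / (darker + 0.05);
}
```

Black on white computes to 21:1; compare your link color against both the background and the surrounding body text.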

You can read more about how to meet WCAG item 1.4.1, including sample scenarios and yet more links, in the Understanding SC 1.4.1 document.

My Recommendation

Unless you plan to run the necessary color contrast tests, just keep the underlines on your hyperlinks.


Update: 4:50pm

I think this point is worth considering:

Update: March 17, 2014

On the WebAIM mailing list Elizabeth J. Pyatt points out that Google's link underlines don't work for keyboard users. The underlines appear when you hover over a link, but if you tab through the links no underlines appear. I'm a twit for missing this, but Google is committing a grave accessibility mistake by not including a :focus selector in its CSS.

Again, please don't follow Google's lead.

Wednesday, March 12, 2014

Web Turns 25, Seems Popular

Logo for The Web at 25

The world wide web has officially lasted 25 consecutive years, which means it's catching up to its parent, the Internet, which itself is bearing down on 45. That's an important distinction: the Internet is not the web; it is the foundation on which the web was born.

In honor of the web's quarter century bringing us all manner of useful and useless information via the lowly hyperlink, the World Wide Web Consortium (the standards body behind HTML and CSS, among other standards) and the World Wide Web Foundation have teamed up to create the site

The site includes a link to Tim Berners-Lee's 1989 proposal for the web, news on upcoming events, and plenty of factoids. In addition, there is a Twitter account (@Web25) that has been collecting people's memories of the early days of the web with the hashtag #web25. There is even a Storify collecting many of the tweets (which I have embedded below).

Some other sites talking about the web's anniversary:

For good measure, I've included Tim Berners-Lee's video talking a bit about where the web will continue to go:

If you want to pretend that you are enjoying the early days of the web again, head on over to the browser archive, which I started building in 1994 (two years after my first foray onto the web), to download the earliest releases of Netscape Navigator or browsers you've never heard of. You can also wander over to the W3C Web History Community Group, where some folks have started to gather early documents.

You can also head over to CERN's World Wide Web project site, dating back to 1993 and the first time HTML documentation was made generally available.

Some other historical bits I have covered on my blog:

And now that embedded Storify I threatened earlier:


Somebody posed the following question to Tim Berners-Lee in his Reddit AMA:

What was one of the things you never thought the internet would be used for, but has actually become one of the main reasons people use the internet?

Tim Berners-Lee's answer:


It's taken 25 years, but the reign of cats on the web is complete.

Saturday, March 8, 2014

What to Consider before Using Free Getty Images

There was quite a lot of chatter this week over Getty's move to make its image library (ok, only 40 million of its images) free for non-commercial use on the web. Some might think they can now just start taking images from the Getty site, but not quite. Getty requires you use its embed tool.


If you're reading this blog, odds are that you are comfortable with HTML. While I cannot deny the value of being able to drop quality images into your site with hardly any effort, there are things that you, as a site builder, need to know so you can best deploy them — or decide if it's even worth using the Getty images.


Accessibility

Because the image is embedded via an iframe, there is no opportunity for you to insert a text alternative, short of using ARIA (such as an aria-describedby attribute). In addition, the img element within the iframe has no alt attribute.

This feels like a miss to me because Getty has the image description — it's part of how search results are generated. For example, on this image page within Getty there is a caption: View from Giotto's Bell Tower (Campanile di Giotto) on the dome of Florence.

For some reason Getty doesn't make this available as a text alternative. I'm not expecting a longdesc here, but the text alternative seems like low-hanging fruit to me.
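If you control the surrounding page, one stopgap is to supply the description yourself and point the iframe at it with aria-describedby. This is a hypothetical sketch: the ids, the class, and the title wording are mine, and .visually-hidden is assumed to be a CSS class that moves text off-screen:

```html
<!-- Hypothetical markup: the description text is borrowed from Getty's
     own caption since the embed itself exposes no text alternative. -->
<p id="getty-desc-1" class="visually-hidden">
  View from Giotto's Bell Tower (Campanile di Giotto) on the dome of Florence.
</p>
<iframe src="//…" width="478" height="428"
  title="Embedded Getty image" aria-describedby="getty-desc-1"
  frameborder="0" scrolling="no"></iframe>
```

This only helps when you are pasting the embed into a page you can edit, which defeats the paste-and-go simplicity Getty is selling.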

All of this ignores any possible issues assistive technology (AT) might have with embedded iframes. In that case, I'll have to defer to an AT user who has some experience navigating into and out of an iframe within a page of content. My guess is that it isn't much different from other embedding techniques, such as YouTube or SlideShare.

Web Beacons

In the example at the start of this post you can see there is a Twitter icon and a Tumblr icon. The Twitter button is loaded via (yet another) iframe. Calls are made to both Twitter's and Tumblr's servers.

Every time you embed an image, not only are your users making a call for the image itself, but also calls to Twitter and Tumblr. Getty is probably more than happy to have these tracking beacons in place. While there are only two today, it is not unreasonable to expect to see more social platform buttons and additional beacons appear.

Conveniently, running a blocker like Ghostery won't block the image just because you block the share buttons.

Keep in mind, though, that Getty has full control over what appears in that iframe. While two share buttons may be innocuous today, it is possible (however unlikely) that Getty could choose to start serving advertisements in those iframes.

Depending on your paranoia level for your own content presentation, or the paranoia level you presume from your users, this may be an issue. It may also not matter in the slightest.

Link Rot

Once you embed the image and walk away, you have no control. If Getty changes its licensing, if the photo owner rescinds his/her licensing to Getty, if Getty decides to throttle its service in the case of great demand, and so on, your site may end up with a big blank hole. Or perhaps an ugly 404.

I have blog posts that go back to 1999. Services I might have used then to embed content would likely be gone now. Services I have used in just the last few years have gone away. A List Apart has had to make some efforts to combat the risk of link rot when deciding to use embeds on its site. You may want to consider if you or your clients/authors are technically savvy enough or have the time to account for that eventuality.

If you embed YouTube videos, you are already taking a risk with link rot. YouTube, I would argue, has more staying power than Getty's service. I could be wrong, but Getty is just getting its feet wet and finding its way with this service.

Getty's Embed Tool

Screen shot of the embed dialog.
The Getty image embed dialog just gives you the HTML, no features to adjust the size for those who don't know HTML.

Today there is a simple tool to provide the code for embedding on your site. At the time of this writing, there is no option to generate a shortcode for WordPress or similar sites. This may not be an issue, but it's worth noting in case you have an editor configuration that requires one.

If you manage to find the perfect photo after navigating screenfuls of images, make sure you either control-click or right-click-new-window the learn more or terms links, otherwise you'll lose your place when the page reloads. That happened to me once for each before I learned my lesson.


You'll note in the dialog box that there is no way to control the size of the image unless you know HTML and can do some quick math to maintain the aspect ratio. Even then, the addition of the photo credit and share buttons throws off a quick resize, since the bar doesn't scale at the same ratio as the image.

The image at the start of this post uses the default embed code, which relies on an iframe to get the job done. Here is the code used above so you can see it for yourself:

<iframe src="//…" width="478" height="428" frameborder="0" scrolling="no"></iframe>

If I want to make the image fill the width of this column (540px), then I need to scale it up a bit, which gets me extra space below the bar that I don't need:

Scaling it down to 200 pixels in width and maintaining the aspect ratio gets me a minimum iframe height despite my code in order to preserve the now truncated credit bar:

No matter what size you embed the image, the file will always be the same dimensions. Embedding the file smaller doesn't result in a smaller payload delivered to the browser. The image dimensions don't change, so scaling the image up also doesn't give you a higher quality image, just a potentially blurry image.

This is important to consider if you want to make the iframe responsive. You don't get a larger image or a higher quality image as the iframe scales. The image itself isn't high resolution, so scaling up at all will get muddy on high PPI devices.


Alternatives

You can continue to use no images. You could steal images from the web (which is wrong, so don't do that). You can, of course, always find free stock imagery. It typically sucks. However, there is a list of resources over at Medium of stock photography that doesn't suck. You can also look at the CC-BY licensing terms at Flickr and pull images from there (with no troublesome embed).

Alternatively, you can go take the photo you need. Granted, if Getty is appealing to you it may very well be because you don't have the time, skill, or inclination to shoot, edit and embed your own images. There is nothing wrong with that.

However, using your own images gives you many benefits. The image below is mine. I think it looks better than the Getty image I use at the start of this post (which is rarely the case). It is responsive. It is high resolution. It utilizes accessibility features. It has no web beacons. It's a smaller file (42.1KB at 1,000 × 664 versus the Getty image at 91.9KB at 478 × 359).

It also required far more effort to optimize and embed.

The Duomo in Firenze, Italy as seen from the Campanile.
Example of my own photo that scales, is high-DPI, and contains basic accessibility features.


Update: March 13, 2014

Electronic Frontier Foundation has weighed in: Getty Images Allows Free Embedding, but at What Cost to Privacy? It goes deeper into my concerns about privacy and tracking, and notes that because the images are not served via HTTPS, users may be exposing their surfing behavior even when browsing a site over HTTPS.

Update: March 16, 2014

I addressed one of my caveats above, the only one over which I have any control and by no means the most important one. I wrote a script to Make Getty Embeds Responsive. Maybe it will help you, maybe you come up with a better variation. Either way, at least you won't have to worry about clients embedding fixed-width Getty iframes that blow up your responsive layouts.

Monday, March 3, 2014

On Screen Reader Detection


The latest WebAIM screen reader survey results came out last week, and I had been looking forward to the results of the questions related to screen reader detection. I can say I was a bit surprised by the answers to both. To make it easy, I'll reprint the questions and answers here.

Screen Reader Detection

Pie chart from the answers.

How comfortable would you be with allowing web sites to detect whether you are using a screen reader? (See the question on the WebAIM site.)

The vast majority (78.4%) of screen reader users are very or somewhat comfortable with allowing screen reader detection. 55.4% of those with disabilities indicated they were very comfortable with screen reader detection compared to 31.4% of respondents without disabilities.

Screen Reader Detection for Better Accessibility

Pie chart from the answers.

How comfortable would you be with allowing web sites to detect whether you are using a screen reader if doing so resulted in a more accessible experience? (See the question on the WebAIM site.)

86.5% of respondents were very or somewhat comfortable with allowing screen reader detection if it resulted in better accessibility. Historically, there has generally been resistance to web technologies that would detect assistive technologies - primarily due to privacy concerns and fear of discrimination. These responses clearly indicate that the vast majority of users are comfortable with revealing their usage of assistive technologies, especially if it results in a more accessible experience.

My Opinion

I think the wrong question is being asked on the survey.

Detecting a screen reader is akin to detecting a browser. If you've been doing this long enough, you know that on the whole browser detection is a bad idea. It is often wrong and doesn't necessarily equate to what features really exist, which is why feature detection evolved as a best practice. You can read my rant from 2011 where web devs were making the same mistake trying to detect mobile devices.
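To make the distinction concrete, here is an illustrative sketch; the helper names are mine, and in a browser the host object would be window, document, or navigator:

```javascript
// Agent sniffing infers capability from identity. Fragile: the string
// lies, changes, and says nothing about what actually works.
function looksLikeOldIE(userAgent) {
  return userAgent.indexOf('MSIE') !== -1;
}

// Feature detection tests the capability you actually need.
function hasFeature(host, name) {
  return !!host && typeof host[name] !== 'undefined';
}

// In a page you would branch on the feature, not the brand:
//   if (hasFeature(document, 'querySelectorAll')) { ... }
```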

Detecting the features of a screen reader is different, however. Here you may be able to actually get somewhere. But this is where different risks come in. I'll focus on three that come to mind immediately.

Double Effort

The first is what happens once you have detected a user with a screen reader. Do you detect for other accessibility tools or ignore those? Do you serve different content? Different mark-up? Do users get shunted to different URLs?

Evidence suggests this doesn't ever go well. Even today, the UK Home Office Cyber Streetwise site is a perfect example — the user is provided a link that cannot be activated sans mouse, which in turn points to a text-only version of the site. It is not truly accessible and assumes only visual disabilities.

Any organization charged with maintaining two sites will ultimately fail at doing so as resources are prioritized to targeting the primary site. Eventually you get an atrophied site in the best case, and a complete failure in the worst case.

It opens the door to separate-but-equal thinking. Patrick Lauke captured this idea nicely on Twitter, which I re-tweeted with this longdesc (because I am that guy):

Selection Bias

A second risk with this detection approach is that selection bias will taint your perspective (I've written about this before). Just as web devs would build features that blocked, say, IE6, and then turn around to point out that IE6 usage had dropped on their sites, we can expect to see the same thing happen here.

Poorly-written detection scripts will set the expectation that site owners are getting a view of who is using their site, but will end up showing the opposite. Not only that, low numbers can be used to justify not supporting those users, especially if those numbers come in below the IE6 or IE8 or whatever-is-the-current-most-hated-IE numbers that you've been arguing are too low to support. Roger Johansson sums it up nicely:


Privacy

We already know that assorted web beacons can cross-reference your social media profiles to your gender to your geographic location to your age to your shoe size. There is already plenty of personally-identifiable information about you available to every site. Is it right to allow those sites to know you have a disability?

This is the kind of information that in the United States you might think is covered by HIPAA, only to find that as a general web surfer you are handing it over to anyone who asks. Certainly no registry can be trusted with managing that when even the UK's NHS uploads not-really-anonymized patient data to the cloud (Google servers outside the UK in this case).

Consider also what happens when the site has a different URL for every page targeted specifically at disabled users. Now when those users share a URL to a page, they are in effect telling the world they have a disability, even if which disability isn't clear.

There is a privacy risk here that I don't think those who took the survey were in a position to consider, and I don't think those who asked the question were able to contextualize appropriately.

Other Responses

Marco Zehe jumps on this pretty quickly with his post Why screen reader detection on the web is a bad thing. In addition to raising points why he thinks this is bad, he points out where the survey takers might not understand the scope of the question:

Funny enough, the question about plain text alternatives was answered with “seldom or never” by almost 30% of respondents, so the desire to use such sites in general is much lower than the two screen reader detection questions might suggest. So I again submit that only the lack of proper context made so many people answer those questions differently than the one about plain text alternatives.

Léonie Watson also responded quickly in her post Thoughts on screen reader detection with her own reasons, which I am breaking down into a bullet list here (on her post these are headings, each followed by more detail):

  • I don’t want to share personal information with websites I visit
  • I don’t want to be relegated to a ghetto
  • I don’t want design decisions to be based on the wrong thing
  • I don’t want old mistakes to be repeated
  • I don’t want things to be hard work
  • I do want much more conversation about screen reader detection

Karl Groves points out some facts about disability that the general public often forgets in his post “Should we detect screen readers?” is the wrong question:

  • There are more people who are low-vision than who are blind
  • There are more people who are hard of hearing than who are visually impaired
  • There are more people who are motor impaired than who are hard of hearing
  • There are more people who are cognitively impaired than all of the above

Dennis Lembree covers reasons against over at WebAxe in the post Detecting Screen Readers – No:

  • Text-only websites didn’t work before and you know devs will do this if a mechanism is provided.
  • Screen reader detection is eerily similar to the browser-sniffing technique which has proven to be a poor practice.
  • Maintaining separate channels of code is a nightmare; developers overloaded already with supporting multiple browsers, devices, etc (via RWD). And if done, it will many times become outdated if not entirely forgotten about.
  • Why screen reader detection? If you follow that logic, then detection should be provided for screen magnifiers, braille output devices, onscreen keyboards, voice-recognition, etc. That’s just crazy.

Dylan Barrell is (so far as I have found) the sole voice saying maybe this isn't so bad, in his post Assistive Technology Detection: It can be done right. He argues for some benefits and then proposes a couple of possible approaches to deal with the concerns he is hearing:

  1. Allow the web site to request the information, and the user to allow/disallow this on a per-website/domain basis. I.e. the web site requests and the user decides. […]
  2. A second approach is to put the control in the hands of a registry. This registry would store the domain names of the organizations who have signed a contract that explicitly binds them into a code of conduct regarding the use of the data. […]

Update: March 5, 2014

Marco Zehe, Mozilla accessibility QA engineer and evangelist, has opened a bug with Mozilla asking for a privacy review of the overall idea of screen reader detection: Bug 979298 - Screen reader detection heuristics: Privacy review

Update: March 6, 2014

Along the lines of separate-but-equal text-only sites being anything but equal, Safeway appears to have bought into that concept and is eliminating its text-only site in favor of making the overall site more accessible.

Update: March 19, 2014

In the post Am I Vision Impaired? Who Wants to Know? the author points out that by installing apps on your phone, you are already giving the app makers access to whether or not you use AT. There is an API for this information, and its use is not indicated to end users the way the phone tells/asks you about access to your camera or GPS.