
Wednesday, March 25, 2015

Twitter App Sets Browsers Back 10 Versions

Screen shot of a web page as seen in the Twitter app, with a menu showing the option to open in the user's default web browser.

The title of this post may be a bit of hyperbole for some, but it is completely true for me.

Sometime over the course of the last week, Twitter changed what happens when I tap links in the native Twitter app on Android. Links now open within an embedded web view, not in my default browser.

I have Chrome 40 installed on my Android phone. The built-in web view on my phone is 10 releases back, at Chrome 30. Normally this isn't a concern of mine, but when a good deal of my Twitter timeline consists of bleeding edge web development techniques, I want to view those on a current release of Chrome.

The first image shows that the user agent string within the Twitter app includes Chrome 30. The second image shows my default browser user agent string is Chrome 40.
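If you are curious how a page can tell the two apart, the Android 4.4 web view adds a Version/4.0 token to its user agent string that standalone Chrome omits. A rough detection sketch (fine for logging, but please don't build site behavior on it):

var ua = navigator.userAgent;
// The KitKat web view reports something like "Version/4.0 Chrome/30.0.0.0";
// standalone Chrome reports only its real version, e.g. "Chrome/40.0.2214.89".
var isOldWebView = /Version\/4\.0/.test(ua) && /Chrome\/30\./.test(ua);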

This change appeared while I was traveling internationally, which means I had a slower connection than usual as well as a data cap. Not only do I have to view content in an old browser, I have to know that the web view is older so that I then know to open it in my default browser.

That's at least two more taps, plus the burden of an unwanted download that has already started in the web view. That extra download also counts against my data cap, which is an even bigger issue if I have chosen to surf with Opera Mini to make the most of my limited data (you know, data budgeting).

Not only did I never enable this feature, I cannot disable it. It appeared three weeks after my last Twitter app update (see the caption below).

The first image shows the settings screen in the Android Twitter app, version 5.48.0. You can see there is no option to disable the in-app browser, though it has been enabled. The second image is the note in the Google Play store that tells me the only change in the new release is updated profiles so it's easier to view bios, Tweets and photos. The final image shows the option to disable the in-app browser, but only because I updated to version 5.51.0 (when I returned home and shed my data cap).

So What?

A couple months ago Peter-Paul Koch wrote about the massive fragmentation in the world of Chrome (Chrome continues to fall apart at brisk pace), something to which Twitter is now contributing en masse.

In the modern world of rapidly updating browsers, 10 releases may not seem like a big deal. I guess it comes down to what you want to see, or more importantly, what you want your users to see. Can I Use provides a quick way to compare Chrome 30 and Chrome 40 to see which features you may be missing. Here's a short list:

  • The ability to discard many -webkit- prefixes,
  • Font unicode-range subsetting,
  • matches() DOM method,
  • CSS touch-action property,
  • CSS Font Loading,
  • Custom Elements,
  • picture element,
  • Web Cryptography,
  • WOFF 2.0 - Web Open Font Format.

If you rely on any of these (or many other) features of the open web platform, and you receive traffic from Twitter, I suggest you monitor your logs to see if the most common version of Chrome drops.
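If you would rather degrade gracefully than just watch your logs, most of the items in that list can be feature-detected. A minimal sketch (the property names are the standard ones; note that Chrome 30 exposes matches() only as the prefixed webkitMatchesSelector):

// Check for a few Chrome 40-era features before relying on them.
var hasMatches = 'matches' in Element.prototype;   // unprefixed matches()
var hasPicture = 'HTMLPictureElement' in window;   // picture element
var hasFontLoading = 'fonts' in document;          // CSS Font Loading API
// Fall back to the prefixed method where needed.
var matchesFn = Element.prototype.matches || Element.prototype.webkitMatchesSelector;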

As for user experience, if you plan to allow users to toggle a new "feature," don't push that feature to them without the toggle, especially when you exclude it from your update notes within the app store.

Saturday, February 14, 2015

Using Bookmarklets on Mobile

Viewing comments on Medium (first image), then being prompted to log in to view comment replies (second image). Both images are from the current version of Chrome on Android.

This is a follow-up to my post CSS Bookmarklets for Testing and Fixing.

While surfing Medium the other day I chose to read a comment. As usual, the comment overlay came up at the bottom of my screen with an option to see replies. When I tapped the replies link, I was immediately prompted to log in. This was new.

In the time between me tweeting Medium to complain, and them responding that it was a bug, I wrote a bookmarklet to remove that login overlay.

This was the easy part. The hard part was using the bookmarklet on my mobile.

As you may already know, there is no bookmark bar in the average mobile browser (at least not on smaller screens). Viewing bookmarks will generally take you to a new tab or screen, meaning a bookmarklet cannot affect the page you were viewing.

Conveniently, once you create a bookmark it becomes available through the auto-complete feature of the browser address bar. In this case, while viewing the page I tapped the address bar and started typing the name of my new bookmarklet. It helps that I remembered this, otherwise it might have taken more time.

Typing the name of the bookmarklet into the address bar as it shows options from auto-complete (first image); once the bookmarklet fires I can see the comment replies (second image). Both images are from the current version of Chrome on Android.
This allows you to use bookmarklets you have specifically crafted to improve your mobile experience, or just general bookmarklets that you might not have thought would work on mobile.

Related

Fix Medium Bookmarklet

Hopefully by the time you read this Medium will have fixed the issue. If not, here is the bookmarklet I use:

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('.overlay{display:none !important;}',0);})()

Of note: after you do this, the hit state of the View n replies link is partially blocked. You need to tap at the very top of the link. If that requires too much precision, then zoom in until it wraps to two lines and tap the top line of text.

What I Was Reading on Medium

Christian Heilmann wrote a great post on the web application myth, which may be the title, though I can't be sure because Medium's URLs never match what may be the page title, which is denoted by an h3 because there is no h1 nor h2 on the page...

Anyway, regardless of title, go read what I'll title The Web Application Myth: Web applications don’t follow new rules.

Tuesday, February 3, 2015

Best Viewed in 1 of 11 Flavors of Chrome!

Make sure you view this on Google's flavor of Chrome, otherwise, well, I have no idea what will happen.

Sometimes it's frustrating being a developer who's been around to see Mosaic supplanted by Netscape Navigator supplanted by Internet Explorer supplanted by Chrome/WebKit. Developers just love dumping one platform for the new shiny.

As I said last week, all of this has happened before and will happen again. The difference with this post is that I am not going to rant about lazy developers whining over a world that will still contain Internet Explorer and its offspring.

Instead, let's ask the average anti-IE / pro-WebKit developer a very simple question — on how many flavors of Chrome do you test?

I don't mean how many versions of Chrome. I also don't mean how many different WebKit-based browsers. No, how many flavors of Chrome?

I'll guess probably not more than a couple. I have four that I can, but typically don't, use. Even at four that's far too few.

Today Peter-Paul Koch pointed out that there are eleven (11!) flavors of Chrome (Chromia, if you will). All of them built on Chromium. Here's the breakdown from his article:

Vendor     Version    Tested  Default  Remarks
Google     40         Yes     Yes
Opera      39         Yes     No
Yandex     38         Yes     No
Xiaomi     34 or 35   Yes     Yes      Zoom reflow
HTC        33         Yes     Yes      Zoom reflow
Cyanogen   33         Yes     Yes
LG         30         Yes     Yes      Mid-range
Puffin     30         Yes     No       Proxy
Samsung    28         Yes     Yes
Amazon     37         No      Yes      Silk
LG         34         No      Yes      High-end

You may have noticed that this only accounts for mobile devices. Some on Twitter also noted Chrome on Google TV, or on Android TV, which doesn't account for the Samsung Android TV nor the Sony Android TV.

So maybe it's fifteen (15!) flavors of Chrome. Either way, I suspect that number will continue to grow.

Even if I include IE6, I only have to worry about 5 versions of Internet Explorer across mobile and desktop. If I want the idyllic WebKit-only world so many seem to crave, then I need about a dozen flavors of Chrome before I can get started with the Operas, Safaris, Yandexes, and Vivaldis (plural because those forks of WebKit also have their own versions to support).

All of this is written against the backdrop of a Medium post claiming it won't consider IE11 a Tier 1 browser because of what it considers an ugly border in the editor view. Unable to find IE developers anywhere, nor to figure out where to file a bug, Medium just browser-sniffs IE11 into a second tier. I'm sure Medium tested across eleven flavors of Chrome, though.

Please read PPK's piece: Chrome continues to fall apart at brisk pace

Monday, January 26, 2015

All of This Has Happened Before and Will Happen Again

Jacob Rossi from Microsoft put together an article for Smashing Magazine that discusses Microsoft's Project Spartan web browser, Inside Microsoft’s New Rendering Engine For The “Project Spartan”.

Unlike other click-bait efforts that only speculated that perhaps Spartan was going to be WebKit-based, showing their own preference instead of any real understanding of the browser world, this one is filled with lots of great information. You should read it.

The first few comments, on the other hand, started off a mess (with many more on Twitter since the initial announcement). Two examples from the article:

So here was the opportunity to swallow their pride and join WebKit to make the internet a better place

…and they built *another* closed-source, proprietary rendering engine.

[Slow sarcastic clap]

« IE did shape the web in a positive way »

This made me laugh more than it should. You seem to forget why Internet Explorer has felt the need to change its name in the first place. And it’s not because it was «too good» or «too innovative»…

Many folks jumped in and corrected, down-voted, and generally balanced the insipid whining. Christian Heilmann, who has logged more years working for Firefox than most devs have logged using it, waded in to challenge many of the incorrect assertions.

Bruce Lawson, who happens to work for another browser vendor (Opera) noted all the things Internet Explorer did for the web in his five-year-old post In praise of Internet Explorer 6. It's also a cautionary tale about where reliance on a single rendering engine will take us.

What these two guys have in common, besides working for the competition, is that they have been on the web since its dawn. They've seen what happens when one browser gets too big (Internet Explorer) and how we spend the next decade-plus digging out from the mess.

How did we get into that mess? By people coding for one rendering engine.

Everyone who calls for WebKit in Internet Explorer is exactly the same kind of developer who would have coded to Internet Explorer 15 years ago (and probably happily displayed the best viewed in badge).

If you are that developer, then it will all be your fault when it happens again. When WebKit is no longer the hot engine. When Chrome loses its dominance. When Apple's market share falls to match the developing world. You will be to blame.

Do you think that won't happen? Just look to Android browser fragmentation, or WebKit failing to support a standard that Firefox and IE have nailed, or Chrome introducing its own proprietary features (can't find the link; it's coming), or failing to use best practices as it tries to carry the next big thing forward, or the complete lack of developer relations from Apple. We've had over half a decade of warning signs.

It's happening again, and every petulant, lazy developer who calls for a WebKit-only world is responsible.

Related

Update: February 3, 2015

My rant continues in my post Best Viewed in 1 of 11 Flavors of Chrome! It's built off PPK's Chrome continues to fall apart at brisk pace. Even I didn't know there are so many Chromium variants.

Saturday, January 24, 2015

CSS Bookmarklets for Testing and Fixing

Animated image showing the Pinterest site and its infuriating blocking overlay, which is removed with the bookmarklet below.

I regularly have to test sites in development, review some third-party site, or just use a site in my day-to-day time wasting (and banking) rituals. I've relied on viewing the page's source or popping into my browser's dev tools to find a missing element, copy un-transformed text, check for inline styles, and so on. Typically I am relying on CSS and not JavaScript, as that is where I excel.

I got a little annoyed doing that all the time, and this morning I had reason to visit Pinterest and mostly lost my marbles at its login overlay and refusal to scroll. So I channeled that rage and taught myself to build a bookmarklet to dump that Pinterest overlay crap. I have created a few more that include my standard styles for testing, styles that perhaps you (dear reader) will find useful.

I'll have basic instructions below showing you how you can build your own and/or modify the ones I've provided.

Bookmarklets You May Steal

Note that I say may steal. That's me giving you permission. Note that I call them bookmarklets. That's me not giving in to the term favelets or whatever HotJava called them (was it hot links?).

Restore Link Underlines

You know what's cool? Removing link underlines and providing terrible link color contrast. It's so cool, in fact, that I want to make those sites less cool. As well as usable. Read my rant on this.

This bookmarklet restores link underlines across the board. Every link. After all, if you want the link underlines, you probably don't care that the designer would freak out at the noise it adds to the page.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('a[href]{text-decoration:underline !important}',0);})()

Restore Focus Outlines (or Fix Virgin America)

Just as cool as removing link underlines is removing the outline on elements that get focus as you tab through a page. After all, if you've hidden the links, why not hide when the links are selected. Virgin America tends to agree.

This bookmarklet not only restores the outline (in the form of the two-pixel dotted blue line), but also adds a drop shadow for those cases where the blue is lost against the surrounding colors.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('*:focus{outline:2px dotted #00f !important;box-shadow: 0 0 2em rgba(0,0,0,.75) !important;}',0);})()

Find Inline Styles

Over at Algonquin Studios we have worked in the content management space for, well, since the dawn of content management systems. One of the risks of using a CMS is that your authors may accidentally (or intentionally) embed styles whether by pasting rich-text from elsewhere or by features built into the WYSIWYG editor within the tool. This is most common with text styles.

Sometimes it is faster to just find the elements that have a style attribute on them, as that's the first clue that there may be a conflict that needs to be corrected. This option will find any of those elements and give them a yellow background along with a two-pixel dotted red border (like the Windows "hot dog" theme from the previous century).

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('*[style]{border:2px dotted #f00 !important;background-color:#ff0 !important;}',0);})()

Find Duplicate ARIA Roles

In ARIA, there are a few instances of roles that should only appear once on a page. These landmark roles are banner, contentinfo, and main. In addition, the W3C HTML5.1 specification notes that there must be only one main element per page.

This bookmarklet will identify any additional instances of any of the once-per-page items above. If you know enough about coding ARIA, then you probably know enough about finding which of the roles/elements is on repeat. Offending items will have a two-pixel dotted red border and red background.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('*[role=main]:nth-of-type(n+2),*[role=banner]:nth-of-type(n+2),*[role=contentinfo]:nth-of-type(n+2),main:nth-of-type(n+2){border:2px dotted #f00 !important;background-color:#f00;}',0);})()

Find Missing Alt Attributes

An image without an alt attribute can be anything from an annoyance to a barrier for those using assistive technologies. Being able to quickly identify those images on a page can save time when figuring out where to focus your efforts.

This bookmarklet will find those images and give them a two-pixel dotted red border. Note that it only looks for images with a missing alt, as a blank alt attribute is often perfectly valid.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('img:not([alt]){border:2px dotted #f00 !important;background-color:#f00;}',0);})()

Reset Text Size (Added January 30)

Sadly, it is not uncommon for sites to reset the default size of the text on the page. Too often that is done to satisfy a design change. One site where I find the text too small to read comfortably, or at all, is Daring Fireball. I know I am not the only one to feel this way.

This bookmarklet will resize the text on the body element to 100%, ideally conforming to whatever your default browser preferences are. It works great on Daring Fireball, but could easily be overridden on sites that set the text size in other ways and/or on other elements.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('body{font-size:100% !important;}',0);})()

Find Empty Elements (Added May 6)

It is not uncommon for a WYSIWYG editor in a CMS or on a comment site to throw extra empty p elements into the content. While I once wrote a style into my development CSS to highlight these issues, I was reminded of the potential utility by a Happy Cog post on pseudo classes.

This bookmarklet will find elements that are empty — no content, no whitespace. It will not highlight images (by excluding elements with a src attribute) nor form inputs (by excluding elements with a type attribute), two common self-terminating elements that would otherwise trigger it. It isn't perfect, but you are welcome to make it your own.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('*:not([src]):not([type]):empty{border:2px dotted #00f !important;background-color:#00f;}',0);})()

Fix Pinterest

When you visit Pinterest without a Pinterest account, or without being logged in, you are prompted to sign up/in by a terrible overlay. In addition, the page won't scroll past a certain point. This annoys me. So I made a bookmarklet to remove the two overlays and re-enable scrolling. You can test it on my abandoned Pinterest page.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule('.Modal, .UnauthBanner {display: none !important;}',0);b.insertRule('.hasFooter.Grid.Module{overflow-y:visible !important;}',0);b.insertRule('.noScroll{overflow:auto !important;}',0);})()

Make/Modify Your Own Bookmarklet

The Virgin America site is made usable for those who navigate with a keyboard by restoring link underlines and adding focus styles to elements.

If you look at the code chunks above, you'll see I am doing the same thing over and over. I am using the JavaScript CSSStyleSheet.insertRule() method to insert a new style rule into the page's stylesheet. Not only does the Mozilla Developer Network have a great overview with sample code, but David Walsh shows similar code with some minor tweaks.

This approach allows me to leverage my CSS skills to write selectors to find and style elements on the page. Since CSS has so many powerful selectors, I find this easier to quickly repurpose. In addition to adding a new style, I always include !important with each so that it will override any inline styles.

If you are writing a function from scratch, make sure you minify it to take up less space (you may bump into character limits for a bookmarklet). Pre-pend javascript: and make it the href value of a link and you are done.

Here is a sample block of code you can use; the two style rules are the parts to replace with your favorite selectors. I have included two rules in this example so you can see how to add additional selectors.

javascript:(function(){var a=document.createElement('style'),b;document.head.appendChild(a);b=a.sheet;b.insertRule("a[href]{text-decoration:underline !important}",0);b.insertRule("*[style]{border:2px dotted #f00 !important;background-color:#f00;}",0);})()
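If the minified version is hard to follow, here is roughly the same thing expanded and commented. Collapse the whitespace, strip the comments, and pre-pend javascript: to turn it back into a bookmarklet:

(function () {
  // Create an empty style element; attaching it to the head is what
  // gives us a usable CSSStyleSheet object to insert rules into.
  var styleEl = document.createElement('style');
  document.head.appendChild(styleEl);
  var sheet = styleEl.sheet;
  // Insert each rule at index 0; !important helps it win over inline styles.
  sheet.insertRule('a[href]{text-decoration:underline !important}', 0);
  sheet.insertRule('*[style]{border:2px dotted #f00 !important;background-color:#f00;}', 0);
})();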

And with that you should be off to the races.

Related

Links to my posts referenced above:

Update: February 14, 2015

It's hard to use bookmarklets on mobile devices, but I have a solution.

Monday, December 15, 2014

20 Years Since Netscape Navigator 1.0

Screen shot of the Netscape 1.0N browser information page.

The creepy pulsing N.
Twenty years ago today, Netscape Communications Corporation released version 1.0 of Navigator, the browser that became synonymous with the web (for the general public). Well, really the general public (and most developers) referred to the browser as Netscape, not by its real name, Navigator.

The Navigator broken image icon.
Based on Mosaic, Navigator quickly replaced the now not-so-cool Mosaic on my work and personal computers, and made Lynx look downright boring. It also presented the world with the creepy pulsing N, which was thankfully replaced pretty quickly. The first release also provided us with the familiar broken image icon that would persist until Internet Explorer's ubiquity usurped it.

Navigator persisted for more than thirteen years after that release, through the ups and downs of the oddly-named browser wars, until it was finally scuttled by its last owner, AOL, on December 28, 2007. AOL released security updates until March 1, 2008, marking the last update Navigator would ever see.

In honor of the browser where I cut my teeth learning all about the web, I grabbed the Navigator 1.0 release from the evolt.org browser archive (Mac and Windows 16-bit only, sorry, and it's the 1.0N release) and installed it on a shaky Windows XP virtual machine. Unsurprisingly, trying to surf anywhere with it was a mess. The browser pre-dated frames, HTML tables (support came in 1.1), JavaScript, and support for any of the robust features of HTTP.

Screen capture of Wikipedia in Netscape 1.0.
Screen capture of Yahoo in Netscape 1.0.

Interestingly, Navigator was first released as free software, only to walk it back a couple months later. The Wikipedia post spends a couple sentences on this:

Netscape announced in its first press release (13 October 1994) that it would make Navigator available without charge to all non-commercial users, and beta versions of version 1.0 and 1.1 were indeed freely downloadable in November 1994 and March 1995, with the full version 1.0 available in December 1994. Netscape's initial corporate policy regarding Navigator is interesting, as it claimed that it would make Navigator freely available for non-commercial use in accordance with the notion that Internet software should be distributed for free.

However, within 2 months of that press release, Netscape apparently reversed its policy on who could freely obtain and use version 1.0 by only mentioning that educational and non-profit institutions could use version 1.0 at no charge.

If the history of browsers is something you find interesting, Wikipedia has this handy timeline of web browsers dating back to 1990. On the plus side, this is an SVG file, so you can zoom in to read it. Eric Meyer has a more structured browser timeline, but it doesn't start until 1996.

"Timeline of web browsers" by ADeveria - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

Interestingly, I don't use Netscape Navigator (any release) at all anymore, but I do still fire up Lynx a few times a month.

Thursday, October 30, 2014

Linear Gradient Problems in Chrome

Detail of the effect I wanted to re-create with a linear gradient — a gray column, a white narrow gutter, a black vertical line, and the rest as white.

I'm going to tell you up front that I don't have a fix for the issue I am raising, though there are bugs filed against it.

I wanted to create equal-height columns that don't use tables, piles of JavaScript, background images, or many of the other code-heavy techniques out there today. I just wanted a CSS-only option. I have played around with CSS gradients to define columns before, something it turns out was covered in 2010 at CSS Tricks, and I decided browser support had come along enough that I could make a prefix-free solution.

In the image above I show an example where I want a vertical line between two columns, along with a narrow gutter. This is pretty straightforward, though you're better off doing it by hand than using any of the gradient generators out there right now. Ultimately I needed a step that is one pixel wide (yes, I am using pixels for this example) that is also a solid color. Easy enough.
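To sketch the idea (the measurements here are illustrative, not the exact values from my pen), the whole effect is a single linear-gradient with hard stops — each color starts exactly where the previous one ends, and the one-pixel band is the 204px-to-205px step:

.columns {
  /* gray column | 4px white gutter | 1px black line | white for the rest */
  background-image: linear-gradient(
    to right,
    #ddd 0,     #ddd 200px,  /* left column background */
    #fff 200px, #fff 204px,  /* narrow gutter */
    #000 204px, #000 205px,  /* the one-pixel vertical line */
    #fff 205px               /* remainder of the row */
  );
}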

It turns out that Chrome, Firefox and Internet Explorer 11 just don't seem to dig making a one pixel gradient step. That's ok. I can work with that. What I wasn't prepared for was how Chrome (38 as of this writing, though this appeared in prior versions) opted to handle it.

At some window sizes, Chrome displays no step at all. At other window sizes, it's 5 pixels. Sometimes the widths of the other steps change as well. This means some Chrome users will see nothing, others will see something five times wider than I want. The animated GIF below shows what happens to the line (in red) as I scale the window width. I think you can agree that it can be a pretty jarring experience for users (part of me worries that this kind of rapid flashing on the whole page can also overwhelm some users).

Animated screen capture of the CodePen in Google Chrome 38 showing the stuttering width of the "columns" and the inability to handle a one pixel band/step.

The animated screen shot is from a Pen that I created to show the effect. I have embedded it below, though you can visit (and fork) the pen directly at CodePen.io.

There are also two open Chromium bugs and one Stack Overflow discussion that are related, though not just with single-pixel gradients:

Notes on the first bug offer an explanation of sorts:

skia discretizes the colors into 256 levels for (lots of) speed. hard-edged gradients like this (where there are two colors at the same color-stop) definitely show up this limitation. We can look at ways to increase precision, but there will be a real performance cost, so we have to decide how important this particular behavior is in practice.

Essentially the argument is that this is a performance trade-off. One that both Firefox and Internet Explorer seem more than capable of handling, which means I'm not buying this excuse a year and a half after it was offered. It just feels like a cop-out.

If you think that your work could benefit from having these bugs fixed, please go star them. Otherwise we may not use that awesome CSS feature, and by extension we're enabling the browser monoculture that is Chrome.

Update below the pen

See the Pen Testing Gradients as Column BG by Adrian Roselli (@aardrian) on CodePen.

Update: 10 Minutes After Posting

I posted a link to my pen and this post to the bugs, and in both cases I later got email bounce notifications ("The email account that you tried to reach is disabled."). The address srsrid...@chromium.org, the only CC on 281489 and one of two on 233879, is gone which makes me think nobody is listening on at least one of the bugs.

Thursday, May 22, 2014

HTML5 Developer Conference Slides: Selfish Accessibility

2014 HTML5 Developer Conference

Today I had the pleasure of speaking at the HTML5 Developer Conference in lovely San Francisco. I presented on accessibility and how it relates to you as a current and future user with my presentation Selfish Accessibility. The full abstract:

We can all pretend that we're helping others by making web sites accessible, but we are really making the web better for our future selves. Learn some fundamentals of web accessibility and how it can benefit you (whether future you from aging or you after something else limits your abilities). We'll review simple testing techniques, basic features and enhancements, coming trends, and where to get help. This isn't intended to be a deep dive into ARIA, but more of an overall primer for those who aren't sure where to start nor how it helps them.

After submitting a couple drafts, and then some furious last-minute editing as some of the specs I referenced were tweaked a bit, I managed to squeeze out 89 slides in ~50 minutes. The slides are embedded below. There will likely be a video of the talk coming on the official site later, though I think they started the camera late.

Quick links to two items I referenced in Q&A after my talk:

If you attended, thanks! Whether or not you did, make sure to grab the links throughout, as well as those on the reference slides (mostly at the end).

Update: August 6, 2014

Good news everybody! The video for my talk was just posted at the HTML5 Developer Conference channel on YouTube. For your convenience, I have also embedded it below.

Friday, May 2, 2014

On Hiding URLs in the Browser

This image is stolen directly from Allen Pike's post because I don't have time yet to make a proper one. It shows the same page URL as seen in the address bars of Firefox 29 and Chrome Canary 36.0.1951.

Two days ago news broke that Chrome was going to modify the address bar in the browser to hide a page's URL. Web developers reacted pretty swiftly saying it's a bad idea. The first one I saw was Allen Pike's Burying the URL, and then a thread on Hacker News.

My first reaction is that this is a terrible idea, for all the points listed.

My second reaction is different.

I'm a web developer. I rely on seeing the URL every day not only as part of my job, but also to consume content and understand its value. I've been championing human-readable URLs for 20+ years and have railed against platforms that don't use them. Non-human-readable URLs are still valuable to me, though probably not the typical user.

Whether I want to hack a URL to remove the tracking nonsense appended by buff.ly-style shorteners, posit the date of a blog post when the author hides publish dates, or make sure I am looking at the development version of a site I am building instead of the staging or production URL (which is hilarious to get wrong while trying to debug an issue), I see a lot of value in URLs.

I realize that many (most?) other users aren't using URLs the way I do. Sure, I have trained my parents to look for certain things in a URL, but for the sites they frequent, which are often news sites with long strings of nonsense in the URL, there isn't much value for them.

So besides web developers, is this really hurting the typical web user? Is our general outcry based on anything other than a variation on "looks great in my browser?"

I posted a tweet this morning that is getting some retweet action, but I'm not sure that people understand that I am framing the issue from my perspective, not the average user's perspective. If anything, it betrays how little I want to be family tech support. The tweet:

At the same time, Patrick Lauke was on his own tirade about how, as developers, we aren't really typical users. A couple points from the stream:

URLs have been getting masked, obfuscated, or hidden for some time now. My mobile phone hides a URL as soon as I scroll down the page. My browser doesn't show the HTTP protocol unless it's HTTPS. Opera Coast has gone a step further as it appifies the web experience.

Opera Coast has hidden URLs all along (as I note in my review), and while I don't know how that's mattered to typical users, it was enough to make me stop using it. While it is possible to get to the URL within Opera Coast, it's a bit more of a hassle than I like.

In the case of Chrome, it will be one click (which is one more click than I would like) to show the full URL as this video (embedded below) shared by Mark Harwood shows (sorry, only MP4 for now):

If the hidden URLs feature does make it into a coming release of Chrome, we as web developers can simply disable it with a Chrome flag: chrome://flags/#origin-chip-in-omnibox.

At this point, my resistance to the change is that I am a web developer who consumes URLs and that I don't want to have to re-train my parents how to send a link (without using some nonsense "share" button). I'd rather not see URLs go away, but I'm a professional, I can sort it out.

Unrelated

App Links you say? Don't get me started on App Links.

Update: May 4, 2014: Related

Jake Archibald has shared his opinion in his post Improving the URL bar. As is often the case in technical discussions, the comments have a lot of good back and forth.

Jeremy Keith has his own response post, the mispronounced-yet-wittily-named URLy warning.

Remy Sharp offers an alternative to the proposed URL hiding in his post On Chrome hiding URLs to protect users from phishing. In short, he tries to tackle the stated reason behind Chrome's desire to hide URLs without hiding URLs.

Not really related, but interesting nonetheless is this post, The Secret Messages Inside Chinese URLs. The post is really talking about domain names on their own, but in the context of the page-level URL discussion I think it's novel.

Update: May 6, 2014

Opera 21 has come out, and among its release notes is this nugget on URLs in the address bar:

We now provide an option to make Opera persistently show a page’s complete URL in the address field.

[…]

For more technical users who need to quickly see the entire URL at a glance, go to “Settings | Advanced: Show always full URL in address field” to view all of that “important” information.

In addition, thanks to a re-start of the conversation on Twitter, Jake Archibald has found a study that, while not exactly addressing the entire topic, does address it in part: Does Domain Highlighting Help People Identify Phishing Sites? The non-paywalled PDF file was found by Manu Sporny. Some notes:

Our research asks a basic question: how well does domain highlighting work? To answer this, we showed 22 participants 16 web pages typical of those targeted for phishing attacks, where participants had to determine the page’s legitimacy. […] We conclude that domain highlighting, while providing some benefit, cannot be relied upon as the sole method to prevent phishing attacks.

Now to change gears again, for those who claim that browsers have always shown the full URL, I present the following evidence that it's not quite true: Lynx 2.8.1 (one of the browsers I used daily back in the old days, not as a novel testing tool).

A screen shot of Lynx 2.8.1 viewing a page at CNN with no visible URL, as well as no obvious way to display it.

Update: May 13, 2014

A couple more interesting reads have popped up. One is a more in-depth and sweeping discussion from Jeremy Keith that builds on his original post and equates the removal of relatively small features with the long-term removal of control from users: Seams

The other one I missed during my first update to this post. Nicholas Zakas argues that URLs are already dead.

Tuesday, April 8, 2014

Burying Windows XP with IE11 Enterprise Mode

Chart showing that IE8 is the browser common to Windows XP and Windows 7.
Screen shot from Microsoft's presentation on IE11 Enterprise Mode, showing what browsers are available on what versions of Windows. Note that the Venn-ish diagram has no IE11 intersection for Windows 8.

As of today, Windows XP has effectively reached its end of life. What I mean by that is that Microsoft will no longer be releasing security patches for Windows XP. Those of you waiting to deploy those XP exploits can run at the platform unopposed.

While this may be a nuisance for the home user (and the family who acts as his/her tech support), this has larger implications in the business world. For example, if you work in the healthcare world you may very well be in violation of HIPAA / HITECH laws if you're still running Windows XP tomorrow.

What's really annoying about this is that so many web-based applications were built to support the dominant browser(s) at the time — Internet Explorer 6 through 8. What that means is users on Internet Explorer 11 are being locked out of these online tools, making the transition away from Windows XP (which cannot have a version of IE greater than 8) a tough proposition for organizations.

Simply put, poor web development practices have created an environment where upgrading to the latest version of IE is directly at odds with keeping your productivity up (if it requires you to stay on an old version of IE). Complicate that by now making that old version of IE a vector for security breaches and compliance penalties/lawsuits.

But fear not! As long as you have the hardware and licenses to run Windows 7 or Windows 8.1 (notice, not Windows 8), you can still use those Internet Explorer 8 web sites without being locked out (you're SOL if you need IE6).

With a week before Windows XP turns into a zombie, Microsoft released Enterprise Mode for Internet Explorer 11. After all, you only needed a week to get all that hardware in place and configured, right?

Enterprise Mode doesn't just emulate IE8, it masquerades as it. Some of the benefits of Enterprise Mode:

  • Enterprise Mode sends the IE8 user agent string to defeat misguided browser sniffers;
  • it mimics the responses IE8 sends to ActiveX controls, ideally allowing them to keep working;
  • it supports features removed from later versions of IE (CSS Expressions, woo hoo!);
  • pre-caching and pre-rendering are disabled to keep from confusing older applications;
  • IE8's Compatibility View is supported, so odds are many applications designed for IE7 will work.

Some web developers have panicked that now they'll have to support another browser or browser mode, but so far the evidence doesn't bear that out.

Enterprise Mode will be controlled by a central source, most likely corporate IT departments, and will only be enabled for sites that have been manually identified. Intranets and custom-built, un-maintained web-based applications are an easy fit here. If an IT department deems it appropriate, it can also allow end users to enable Enterprise Mode on a site-by-site basis.

Microsoft has been testing this in many industries and countries (though not China, the biggest culprit for old, and illegal, versions of Windows). Hopefully this will help speed users to upgrade to IE11, even if it doesn't provide motivation for organizations to upgrade their legacy IE8 applications.

In addition to the links above, you can get more information from the video of Microsoft's Enterprise Mode presentation, or you can just view the presentation slides alone.

In short, this is a great band-aid for organizations that already have Windows 7 or 8.1, but it won't help push IE8 out of the way (despite the best efforts of some). As web developers, we can expect to support IE8 for a while still.

Related

With the demise of Windows XP (even though we know it's not suddenly gone today), Internet Explorer 6 is also at its end of life (because no supported platform can run it). We know that it won't go away completely, but it's still being celebrated at sites like IE6death.com.

Update: April 11, 2014

I mentioned HIPAA above and linked to a post that argues the presence of Windows XP is an automatic HIPAA violation. A more balanced, and well-cited, post is over at the Algonquin Studios blog: So You’re Stuck with Windows XP but Still Need to be HIPAA Compliant

Wednesday, March 12, 2014

Web Turns 25, Seems Popular

Logo for The Web at 25

The world wide web has officially lasted 25 consecutive years, which means it's catching up to its parent, the Internet, which itself is bearing down on 45. That's an important distinction. The Internet is not the web, it is the foundation on which the web was born.

In honor of the web's quarter century bringing us all manner of useful and useless information via the lowly hyperlink, the World Wide Web Consortium (the standards body behind HTML and CSS, among other standards) and the World Wide Web Foundation have teamed up to create the site webat25.org.

The site includes a link to Tim Berners-Lee's 1989 proposal for the web, news on upcoming events, and plenty of factoids. In addition, there is a Twitter account (@Web25) that has been collecting peoples' memories of the early days of the web with the hashtag #web25. There is even a Storify collecting many of the tweets (which I have embedded below).

Some other sites talking about the web's anniversary:

For good measure, I've included Tim Berners-Lee's video talking a bit about where the web will continue to go:

If you want to pretend that you are enjoying the early days of the web again, head on over to the evolt.org browser archive, which I started building in 1994 (two years after my first foray onto the web), to download the earliest releases of Netscape Navigator or browsers you've never heard of. You can also wander over to the W3C Web History Community Group, where some folks have started to gather early documents.

You can also head over to CERN's World Wide Web project site, dating back to 1993 and the first time HTML documentation was made generally available.

Some other historical bits I have covered on my blog:

And now that embedded Storify I threatened earlier:

Bonus

Somebody posed the following question to Tim Berners-Lee in his Reddit AMA:

What was one of the things you never thought the internet would be used for, but has actually become one of the main reasons people use the internet?

Tim Berners-Lee's answer:

Kittens.

It's taken 25 years, but the reign of cats on the web is complete.

Monday, March 3, 2014

On Screen Reader Detection

Background

The latest WebAIM screen reader survey results came out last week, and I had been looking forward to the results of the questions related to screen reader detection. I can say I was a bit surprised by both. To make it easy, I'll reprint the questions and answers here.

Screen Reader Detection

Pie chart from the answers.

How comfortable would you be with allowing web sites to detect whether you are using a screen reader? (See the question on the WebAIM site.)

The vast majority (78.4%) of screen reader users are very or somewhat comfortable with allowing screen reader detection. 55.4% of those with disabilities indicated they were very comfortable with screen reader detection compared to 31.4% of respondents without disabilities.

Screen Reader Detection for Better Accessibility

Pie chart from the answers.

How comfortable would you be with allowing web sites to detect whether you are using a screen reader if doing so resulted in a more accessible experience? (See the question on the WebAIM site.)

86.5% of respondents were very or somewhat comfortable with allowing screen reader detection if it resulted in better accessibility. Historically, there has generally been resistance to web technologies that would detect assistive technologies - primarily due to privacy concerns and fear of discrimination. These responses clearly indicate that the vast majority of users are comfortable with revealing their usage of assistive technologies, especially if it results in a more accessible experience.

My Opinion

I think the wrong question is being asked on the survey.

Detecting a screen reader is akin to detecting a browser. If you've been doing this long enough, you know that on the whole browser detection is a bad idea. It is often wrong and doesn't necessarily equate to what features really exist, which is why feature detection evolved as a best practice. You can read my rant from 2011 where web devs were making the same mistake trying to detect mobile devices.
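For anyone who missed that era, the difference in a nutshell is asking what the browser can do instead of asking what it is called. A generic sketch:

// Browser detection: brittle, and wrong the moment a user agent string changes.
var isOldIE = /MSIE [67]\./.test(navigator.userAgent);

// Feature detection: test for the capability you actually need.
var hasPlaceholder = 'placeholder' in document.createElement('input');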

Detecting the features of a screen reader is different, however. Here you may be able to actually get somewhere. But this is where different risks come in. I'll focus on three that come to mind immediately.

Double Effort

The first is what happens once you have detected a user with a screen reader. Do you detect for other accessibility tools or ignore those? Do you serve different content? Different mark-up? Do users get shunted to different URLs?

Evidence suggests this doesn't ever go well. Even today, the UK Home Office Cyber Streetwise site is a perfect example — the user is provided a link that cannot be activated sans mouse, which in turn points to a text-only version of the site. It is not truly accessible and assumes only visual disabilities.

Any organization charged with maintaining two sites will ultimately fail at doing so as resources are prioritized to targeting the primary site. Eventually you get an atrophied site in the best case, and a complete failure in the worst case.

It opens the door to separate-but-equal thinking. Patrick Lauke captured this idea nicely on Twitter, which I re-tweeted with this longdesc (because I am that guy):

Selection Bias

A second risk with this detection approach is that selection bias will taint your perspective (I've written about this before). Just as web devs would build features that blocked, say IE6, and then turn around to point out that IE6 usage had dropped on their sites, we can expect to see the same thing happen here.

Poorly-written detection scripts will set the expectation that site owners are getting a view of who is using their site, but will end up showing the opposite. Not only that, low numbers can be used to justify not supporting those users, especially if those numbers come in below the IE6 or IE8 or whatever-is-the-current-most-hated-IE numbers that you've been arguing are too low to support. Roger Johansson sums it up nicely:

Privacy

We already know that assorted web beacons can cross-reference your social media profiles to your gender to your geographic location to your age to your shoe size. There is already plenty of personally-identifiable information about you available to every site; is it right to also allow those sites to know you have a disability?

This is the kind of information that in the United States you might think is covered by HIPAA, only to find that as a general web surfer you are handing it over to anyone who asks. Certainly no registry can be trusted with managing that when even the UK's NHS uploads not-really-anonymized patient data to the cloud (Google servers outside the UK in this case).

Consider also what happens when the site has a different URL for every page targeted specifically at disabled users. Now when those users share a URL to a page, they are in effect telling the world they have a disability, even if which disability isn't clear.

There is a privacy risk here that I don't think those who took the survey were in a position to consider, and I don't think those who asked the question were able to contextualize it appropriately.

Other Responses

Marco Zehe jumps on this pretty quickly with his post Why screen reader detection on the web is a bad thing. In addition to raising points why he thinks this is bad, he points out where the survey takers might not understand the scope of the question:

Funny enough, the question about plain text alternatives was answered with “seldom or never” by almost 30% of respondents, so the desire to use such sites in general is much lower than the two screen reader detection questions might suggest. So I again submit that only the lack of proper context made so many people answer those questions differently than the one about plain text alternatives.

Léonie Watson also responded quickly in her post Thoughts on screen reader detection with her own reasons that I am breaking down into a bullet list here (on her post, these are the headings for copy with more detail):

  • I don’t want to share personal information with websites I visit
  • I don’t want to be relegated to a ghetto
  • I don’t want design decisions to be based on the wrong thing
  • I don’t want old mistakes to be repeated
  • I don’t want things to be hard work
  • I do want much more conversation about screen reader detection

Karl Groves points out some disability points the general public often forgets in his post “Should we detect screen readers?” is the wrong question:

  • There are more people who are low-vision than who are blind
  • There are more people who are hard of hearing than who are visually impaired
  • There are more people who are motor impaired than who are hard of hearing
  • There are more people who are cognitively impaired than all of the above

Dennis Lembree covers reasons against over at WebAxe in the post Detecting Screen Readers – No:

  • Text-only websites didn’t work before and you know devs will do this if a mechanism is provided.
  • Screen reader detection is eerily similar to the browser-sniffing technique which has proven to be a poor practice.
  • Maintaining separate channels of code is a nightmare; developers overloaded already with supporting multiple browsers, devices, etc (via RWD). And if done, it will many times become outdated if not entirely forgotten about.
  • Why screen reader detection? If you follow that logic, then detection should be provided for screen magnifiers, braille output devices, onscreen keyboards, voice-recognition, etc. That’s just crazy.

Dylan Barrell is (so far as I have found) the sole voice saying maybe this isn't so bad, in his post Assistive Technology Detection: It can be done right. He argues for some benefits and then proposes a couple possible approaches to deal with the concerns he is hearing:

  1. Allow the web site to request the information, and the user to allow/disallow this on a per-website/domain basis. I.e. the web site requests and the user decides. […]
  2. A second approach is to put the control in the hands of a registry. This registry would store the domain names of the organizations who have signed a contract that explicitly binds them into a code of conduct regarding the use of the data. […]

Update: March 5, 2014

Marco Zehe, Mozilla accessibility QA engineer and evangelist, has opened a bug with Mozilla asking for a privacy review of the overall idea of screen reader detection: Bug 979298 - Screen reader detection heuristics: Privacy review

Update: March 6, 2014

Along the lines of separate-but-equal text-only sites being anything but equal, Safeway appears to have bought into that concept and is eliminating its text-only site in favor of making the overall site more accessible.

Update: March 19, 2014

In the post Am I Vision Impaired? Who Wants to Know? the author points out that by installing apps on your phone, you are already giving the app makers access to whether or not you use AT. There is an API for this information, and its use is not indicated to end users the way the phone tells/asks you about access to your camera or GPS.

Sunday, January 19, 2014

Comparing Opera Mini and Chrome Compression

Depending on how much time you spend keeping up with web browsers, you've probably heard the cry of Opera did it first more than once (though the low-hanging fruit, browser tabs, wasn't technically Opera first). When Google announced that Chrome would offer a data compression mode, you may have figured you'd hear it again owing to Opera Mini.

In 2004, Opera developed Mini as a browser backed by proxies to help reduce data use and speed up the overall experience. In 2006 Opera Mini went worldwide. Sadly, StatCounter doesn't break Opera Mini out from regular Opera Mobile, so it's hard to get a sense of Mini's market share. Opera's own numbers, however, report 241 million Mini users worldwide in November of 2013, with an annual increase of 21%.

Chrome for mobile devices has been climbing in use, partly because Android devices have started to move away from the default Android browser (though this doesn't affect all the Android 2.x devices and many of the 4.x devices that will be out there for a while). By adding support for data compression, Chrome is that much more appealing to users who have bandwidth caps, poor connections, or any other factor that limits how well they can see fat pages. Interestingly, some of the data compression comes from converting all the images to WebP (ol' Gil has finally found a way to make that format work). Chrome also automatically puts you into Safe Browsing mode as part of its compression process.

So I fired up both browsers, chose a list of web pages/sites that I haven't surfed using either of them, dropped into 3G and started my compressed surfing. These are the results:

Screenshot of Google Chrome bandwidth savings screen.
Chrome requested 18.70Mb of data and compressed it to 4.83Mb, for a compression rate of 74%.
Screenshot of Opera Mini bandwidth savings screen.
Mini requested 12.9Mb of data and compressed it to 3.7Mb, for a compression rate of 72%.

This test was by no means rigorous or scientific. While Chrome compressed just a bit better overall, I felt like the experience was slower than on Mini. Chrome was also served much more data, perhaps owing to browser detection scripts offering more "features" to Chrome, or Mini's rendering engine just ignoring some of the elements it didn't know.

For those who have decided that Google is the great new evil, you may want to consider that Google proxies are between you and the web for every request when using Chrome's compression. For Mini users the same is true of Opera's servers, but far fewer people seem to be concerned about demonizing Opera Software. How much stock you put into Google's Safe Browsing technology behaving as some sort of censor is up to you and your own paranoia. I don't much care either way, but some folks might. As someone who's used Opera Mini for years when I travel outside the U.S., I'm very comfortable with it and doubt I'll switch — it's easier for me to just fire up Mini than it is to navigate Chrome's menus to enable compression.

By the way, the pages I used for my test:

  • http://www.todaysiphone.com/2014/01/apples-iwatch-much-imagined-latest-rumors-anything-go/
  • http://www.fluevog.com/code/?w[]=gender:men&perpage=-1
  • http://www.orlandosentinel.com/news/local/trayvon-martin/os-metrowest-shooting-stand-your-ground-20140117,0,885944,full.story
  • http://www.barrelny.com/blog/text-align-justify-and-rwd/
  • http://scatterfeed.wordpress.com/2014/01/18/natures-squeegee-the-nictitating-membrane/
  • http://gallery.bridgesmathart.org/exhibitions/2014-joint-mathematics-meetings/blbodner
  • http://www.novayagazeta.ru/photos/61844.html
  • http://pluto.jhuapl.edu/gallery/sciencePhotos/image.php?page=2&gallery_id=2&image_id=63
  • http://blogs.channel4.com/factcheck/factcheck-immigrants-pay/16332

Tuesday, January 14, 2014

W3C EME is not DRM (nor other fear-mongering TLAs)

Photo of hippie, text: Thinks EME and DRM are unethical. Uses Chrome.
Plenty has been written about the W3C and DRM. Sadly, most of it has been written in the form of attacks against the W3C, with very few laying out the facts.

Note: I am a participant in the W3C HTML Working Group (as an invited expert). Encrypted Media Extensions (EME) are part of the scope of the HTML Working Group. You can decide if my opinion is tainted, but I owe nothing to the W3C to warrant arguing either way. I also don't speak for the W3C.

In Doctorow's latest post on Boing Boing, Requirements for DRM in HTML5 are a secret, he cherry-picks an email from the W3C's Restricted Media Community Group where someone wants to dive into DRM requirements but is rebuffed simply because the W3C isn't making DRM, just the APIs to access content protected by DRM (via the EME spec):

[…] what we are trying to do with EME is provide a clean API to integrate these solutions with the HTML Media Element.

And that's the crux of what the W3C is doing with DRM — developing a standard API so browsers can access content that will be locked down with or without their participation anyway.

The more W3C-savvy among you may recognize that W3C community groups don't publish specifications, they provide a way for the general public to weigh in on topics and generate wider discussion. As stated on the Restricted Media Community Group page, [T]his group will not publish specifications. In fact, if you are reading this and care about it, you should join. You may note that the people attacking the W3C haven't.

If you still aren't sure what the Encrypted Media Extensions (EME) spec has to do with DRM, then just read the abstract:

This specification does not define a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

This is why Tim Berners-Lee declared it as in scope for the W3C. DRM exists and has existed for a long time. DRM requires plug-ins or third-party applications right now. By creating an API that all DRM systems use, playback in the browser will be possible (via Content Decryption Modules), thus helping to support an open web (just use your browser) instead of continued silos (Hulu app, Netflix app, Silverlight plug-in, etc).

Tim Berners-Lee provided further context in response to community outcry. Those who are bashing the W3C for DRM should read it, or perhaps just these salient points:

[I]f content protection of some kind has to be used for videos, it is better for it to be discussed in the open at W3C, better for everyone to use an interoperable open standard as much as possible, and better for it to be framed in a browser which can be open source, and available on a general purpose computer rather than a special purpose box. Those are key arguments for the decision that this topic is in scope.

This clearly doesn't jibe with the false headline Doctorow used in October, W3C green-lights adding DRM to the Web's standards, says it's OK for your browser to say "I can't let you do that, Dave".

It also doesn't suggest the kind of future that Doctorow outlines in his personal post, We are Huxleying ourselves into the full Orwell, where he says I’m not kidding about any of this. I can’t sleep anymore. I think it may be game over. Of all the things to lose sleep over, this really shouldn't rank. I'm not kidding.

Cory Doctorow is fear-mongering. At this point I believe he is mis-representing the facts to further his agenda of stopping all forms of DRM, as there is more than enough evidence to suggest the opposite of what he claims (though he never links it, so perhaps he's terrible at Google?). This may be because he genuinely doesn't understand what EME is intended to address, or it may be to drive ad revenue on Boing Boing by hitting a volatile topic, but I'd like to think it's the former.

People far smarter than I, and closer to the issues, have written about this. If you find my arguments lacking then you should read these before deciding the W3C is evil.

  1. DRM in HTML5 is a victory for the open Web, not a defeat, at Ars Technica, May 10, 2013.
  2. Dear EFF: please don't pick the wrong fight, by Chris Adams, October 4, 2013.
  3. The Bridge of Khazad-DRM, by Brendan Eich (Mozilla CTO), October 22, 2013.
  4. (Austening ourselves to the full Brontë) Please Bring Me More Of That Yummy DRM Discussion, by Robin Berjon, January 10, 2014.

If you want to comment, I do not moderate but I don't allow anonymous posts (strictly spam issues). If you want to post without linking to a social media account, contact me on Twitter and I'll temporarily remove the restriction.

My rant that got me started...

Update: January 15, 2014

Well, here's a nugget that suggests this conversation is unlikely to be calm:

Update: January 21, 2014

HTML5 Rocks published an article on the 16th titled EME WTF? An introduction to Encrypted Media Extensions. For those interested in seeing some code examples, take a look.

Update: February 14, 2014

Some of the members of the W3C TAG are hammering out a document about EME. They outline the goals here:

Over the web's 25 years there have been several technologies and architectures which have had the effect of restricting access for some people to portions of the web. This document explores how these work and the effect they had on the web, with the ultimate goal of aiming to inform the debate about the inclusion of Encrypted Media Extensions (EME) in HTML.

In it they cite authentication and the object element as current examples of restricted content on the web.