
Wednesday, April 22, 2015

On the Mis-Named Mobilegeddon

If you are a web pro then it is likely that you heard that Google's search results were going to change based on how mobile-friendly a site is (you probably heard a couple months ago even). This change took effect yesterday.

As with almost all things in the tech world that affect clients, the press hit yesterday as well, and today clients are looking for more information. Conveniently, our clients are golden as we went all-responsive years ago.

If you already built sites to be responsive, ideally mobile-first, then you needn't worry. Your clients have probably already noticed that the text "mobile-friendly" appears in front of the results for their sites in Google and have been comforted as a result.

If you have not built sites to be responsive, or have had no mobile strategy whatsoever, then you may be among those calling it, or seeing it referred to as, mobilegeddon. A terrible name that clearly comes from FUD (Fear, Uncertainty and Doubt).

If you are someone who relies on a firm to build and/or manage your site, then you should also beware the SEO snake oil salesman who may knock on your door and build on that very FUD to sell you things you don't need.

From Google Webmaster Central

For the latter two cases, I have pulled the first three points from Google's notes on the mobile-friendly (a much better term) update. I recommend reading the whole thing, of course.

1. Will desktop and/or tablet ranking also be affected by this change?

No, this update has no effect on searches from tablets or desktops. It affects searches from mobile devices across all languages and locations.

2. Is it a page-level or site-level mobile ranking boost?

It’s a page-level change. For instance, if ten of your site’s pages are mobile-friendly, but the rest of your pages aren’t, only the ten mobile-friendly pages can be positively impacted.

3. How do I know if Google thinks a page on my site is mobile-friendly?

Individual pages can be tested for “mobile-friendliness” using the Mobile-Friendly Test.

From Aaron Gustafson

Aaron Gustafson put together a simple list of four things you as a web developer can do to mitigate the effects of Google's changes, though the simplicity belies the depth of effort that may be needed for some sites. I've collected the list, but his post has the details for how to approach each step:

  1. Embrace mobile-first CSS
  2. Focus on key tasks
  3. Get smarter about images
  4. Embrace the continuum
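
For the first item, a minimal sketch of the mobile-first approach: styles outside any media query serve small screens, and wider layouts are layered on with min-width queries. The class names and the breakpoint are arbitrary examples, not anything from Aaron's post.

  /* Base styles serve the small screen: a single column, no floats. */
  .content { width: auto; }
  .sidebar { width: auto; }

  /* Wider viewports opt in to the two-column layout. */
  @media (min-width: 40em) {
    .content { width: 70%; float: left; }
    .sidebar { width: 28%; float: right; }
  }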

What Is Your Mobile Traffic?

I've been asked how to find out how much traffic to a site is from mobile users. In Google Analytics this is pretty easy:

  1. Choose Audience from the left menu.
  2. Choose Mobile once Audience has expanded.

Bear in mind that this just tells you where you are today. If that number drops then it may be a sign that your mobile strategy isn't working. At the same time, if that number is already low then it may not drop any further owing to unintentional selection bias in how your pages are coded.

Oh, By the Way

Google isn't the only search engine. When I mentioned that on this blog before, Google had 66.4% of the U.S. search market. As of January 2015, that's down to 64.4%. Bing is up from 15.9% to 19.7%.

Google Sites led the U.S. explicit core search market in January with 64.4 percent market share, followed by Microsoft Sites with 19.7 percent and Yahoo Sites with 13.0 percent (up 1.0 percentage point). Ask Network accounted for 1.8 percent of explicit core searches, followed by AOL, Inc. with 1.1 percent.

While I Have Your Attention

Two days after the initial announcement of this change, word also came that Google is working on a method to rank pages not by inbound links, but by trustworthiness, in essence by facts.

When this finally hits, pay attention to those who refer to the change as Truthigeddon. Be wary of them.

Friday, March 14, 2014

I Don't Care What Google Did, Just Keep Underlining Links

Screen shots of Google results page with two kinds of color-blindness simulated.
Screen shots of Google search results showing protanopia (middle) and deuteranopia (right) forms of color-blindness. Click/tap/select the image for a full-size view.

I figured I'd lead with my argument right in the title. Even if you read no further, you know where I stand. I'm just going to fill up the rest of this space explaining why anyway.

Background

The Verge posted an article (Google removes underlined links, says goodbye to 1996) telling us Google is removing underlines on hyperlinks in search results, and also suggesting that underlines are oh-so-18-years-ago.

It's that sentiment (echoed in the article with the phrase "'90s-style underlined links are being removed from Google search results") that makes me worry The Verge is being snide about a usability feature it doesn't understand. The original heads-up from a Googler wasn't quite so focused on the underlines.

Why Google's Almost Plan Works

Google's search results are almost completely hyperlinks. Google retains a classic indicator of the hyperlink and keeps them all blue (a color contrast ratio of 11.2:1 to the white background and 1.5:1 with the body text) so that users don't have to learn a color scheme unique to Google. In this context, when users know the page is full of links and the colors are consistent, coupled with Google's position as a top site on the web, users aren't likely to get confused about what to click/tap/follow.

Similarly, The Verge has no underlines on its hyperlinks, whether in the navigation or in the content, until the link gets focus or the mouse hovers. This likely isn't an issue for most users as the in-content links are orangey-red within otherwise black text (a 3.7:1 color contrast ratio to the body text and 3.5:1 contrast ratio to the background). The non-inline links are pretty much all navigation anyway, and removing underlines from navigation links is a de facto standard. In this case, the links can be anywhere in the page content — they don't benefit from consistent positioning on the page as Google's links do.

How It Won't Work for You

My concern is that the average web developer may see Google dropping underlines as an excuse to do it on their own projects, without the context. For example, an article or blog post may be littered with links throughout the content. This doesn't correspond to the same type of content or organization that you see on the Google search results page. That same article or blog post may also not have a color scheme that makes it appropriate to remove the underlines.

Google misses the mark in that the blue hyperlinks don't have sufficient contrast with the rest of the text on the page. The layout Google uses, and has used for years, mitigates this as users will quickly (re)discover how links are organized on the page regardless of color or underline.

I mention The Verge's color contrast ratio above because its orangey-red links will fail Web Content Accessibility Guidelines 2.0 (WCAG) level AA compliance. I am not trying to pick on The Verge here — I can find many sites that will fail that check, including some of my own. But it is worth understanding that removing underlines, to meet even basic accessibility compliance, will require you to step up your game on understanding color contrast.

Screen shots showing links on The Verge with different forms of color-blindness.
Screen shots of hyperlinks on The Verge showing deuteranopia (top) and protanopia (bottom) forms of color-blindness. Click/tap/select the image for a full-size view.

What You'll Need to Do

To make it easy, I'll link to the WCAG notes with a quick description of what you have to do.

Guideline 1.4.1 states that you cannot rely on color alone to convey information (such as when text is a hyperlink).

If you do rely on color, contrast is imperative. Use only colors that would provide 3:1 contrast with black words and 4.5:1 contrast with a white background. I've included links to contrast checkers below.
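
For reference, WCAG defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. Pure white (luminance 1.0) against pure black (luminance 0.0) works out to 1.05 / 0.05, or 21:1, the maximum possible; the 3:1 and 4.5:1 thresholds above are measured on that same scale.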

You can read more about how to meet WCAG item 1.4.1, including sample scenarios and yet more links, in the Understanding SC 1.4.1 document.

My Recommendation

Unless you plan to run the necessary color contrast tests, just keep the underlines on your hyperlinks.

Related

Update: 4:50pm

I think this point is worth considering:

Update: March 17, 2014

On the WebAIM mailing list, Elizabeth J. Pyatt points out that Google's link underlines don't work for keyboard users. The underlines appear when you hover over a link, but if you tab through the links, no underlines appear. I'm a twit for missing this, but Google is committing a grave accessibility mistake by not including a :focus selector in its CSS.
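
If a site does remove underlines by default, the fix for this particular mistake is tiny. A sketch with generic selectors (this is not Google's actual CSS):

  a { text-decoration: none; }

  /* Restore the underline for mouse users and keyboard users alike. */
  a:hover,
  a:focus { text-decoration: underline; }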

Again, please don't follow Google's lead.

Wednesday, June 12, 2013

Google Needs to Provide Android App Interstitial Alternative

Yesterday Matt Cutts from Google tweeted that Google search results for users on smartphones may be adjusted based on the kinds of errors a web site produces (of course I was excited):

Matt links to a page that outlines two examples of errors that might trigger this downgrade of a site's position in the Google search results and, right in the first paragraph, links to Google's own common mistakes page:

As part of our efforts to improve the mobile web, we published our recommendations and the most common configuration mistakes.

I think it's fair to assume that anything listed on the "Common mistakes in smartphone sites" page can negatively impact your site ranking. In particular this section on app download interstitials caught my eye:

Many webmasters promote their site's apps to their web visitors. There are many implementations to do this, some of which may cause indexing issues of smartphone-optimized content and others that may be too disruptive to the visitor's usage of the site.

Based on these various considerations, we recommend using a simple banner to promote your app inline with the page's content. This banner can be implemented using:

  • The native browser and operating system support such as Smart App Banners for Safari on iOS6.
  • An HTML image, similar to a typical small advert, that links to the correct app store for download.

I think it's good that Google links to the Apple article. I think it's unfortunate that Google does not link to Microsoft's own solution. If you read my blog regularly, or just follow me on Twitter, you may know that I covered both Apple's and Microsoft's app banner solution in January in the post "App Store Meta Tags."
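
For reference, both vendor solutions boil down to meta tags in the page head. This is a rough sketch of the syntax covered in that earlier post; the app ID and package family name below are placeholders, not real apps.

  <!-- Apple: Smart App Banner for Safari on iOS 6 -->
  <meta name="apple-itunes-app" content="app-id=123456789">

  <!-- Microsoft: connect a site to its Windows Store app -->
  <meta name="msApplication-ID" content="App">
  <meta name="msApplication-PackageFamilyName" content="ExamplePublisher.ExampleApp_abcd1234efgh">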

You might also note that I stated that Google Play offers no such feature. Google, the force behind Android and the one now (or soon) penalizing sites in its search engine for app interstitials, provides no corresponding alternate solution of its own.

A great thing that Google could do for its Android users, for its search engine results, and for app developers, is to support a custom meta tag that allows web sites to promote their own Android apps in the Play store. Developers can start to replace awful Android app interstitials on web sites, users can get a cleaner experience, and site owners who can't conceive of other ways to promote their apps on their home pages can move toward something that is easier to maintain and doesn't penalize them.
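
Purely as a hypothetical sketch of what I'm asking for (again, Google offers no such tag), something mirroring Apple's syntax could be as simple as:

  <!-- hypothetical markup; not a real Google feature -->
  <meta name="google-play-app" content="app-id=com.example.app">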

I think it's nice that Google is paying attention to web devs by adjusting search results, but my ranty tweets are probably falling on deaf ears. The web would be indebted to someone who can get Google's and Android's ear on this.

Thursday, April 4, 2013

Chrome: Blink and You Missed the News

The new Blink logo.
It's old news by this Thursday morning, but in case you had not heard, Google is forking WebKit to make its own rendering engine, Blink. Opera will be using the Blink fork of WebKit as its rendering engine.

A combination of people who are far smarter, far better connected, and in time zones that let them write about this sooner, along with all the Twitter chatter, has already hashed out the major details. As such, I will link to them below. I would be a terrible blogger if I didn't offer my opinion, however.

I will format this the way I did when I provided my in-depth analysis of Opera's move to WebKit (away from Presto) less than two months ago.

So what does this really mean?

For Developers

Any developer who is complaining that this means there is another browser/engine against which they will need to test has been doing it wrong.

Web developers should always test against different browsers, regardless of their engine. In particular, WebKit has so many nuanced implementations that not independently testing against each browser that uses WebKit betrays either a lack of understanding of how WebKit is implemented or laziness.

If you aren't sure what is different between each WebKit implementation (Chrome, Safari, Android browser, Opera, etc.), I encourage you to read my post "WebKit Will and Won't Be the New IE," where I provide a high-level overview of these variances.

For Users

At this point it doesn't mean a whole lot.

Google will argue this is better for users. Apple will argue that Google took its ball and left. Opera won't be arguing. None of that impacts users because we have mostly done a good job of promoting standards-based development. I again refer you to "WebKit Will and Won't Be the New IE" for how poor testing can impact users, but that's not a function of the engines.

Because Apple only allows WebKit on iOS devices, and even then restricts those browsers to a different JavaScript engine and thus a lesser experience, Chrome and Opera for iOS may still stay on WebKit. Over time, as it becomes harder to incorporate features from Blink back into the WebKit core, there may be feature divergence, which may affect users.

That's just speculation on my part.

For Standards

For a specification to become a W3C recommendation, there must be two 100% complete and fully interoperable implementations, which basically means two browsers need to support it. When Opera announced the shuttering of Presto, that left Trident (Internet Explorer), Gecko (Mozilla), and WebKit (Safari and Chrome) as the remaining engines (of measurable size). Essentially, two out of the three of them had to agree to implement a feature.

With Blink, provided the W3C recognizes it as a stand-alone engine, there is now one more engine back in the mix, essentially returning the count to where it was in February before Presto's wind-down (to be fair to Presto, it's expected to exist in the wild until 2020, but with no new feature development).

I am hoping that this is a good thing for standards.

Blink won't be using vendor prefixes (even though it will have inherited some), so I consider that a step in the right direction. While I think this matters to developers, I think it matters even more to standards.

Technical Aside

From Peter-Paul Koch:

Chrome 28 will be the first stable release to use Blink; earlier versions will use WebKit. Opera and Yandex will start using Blink whenever they start using Chromium 28.

Related

First some bits from The Twitters:

And now to the related links:

There's this one from 2010 by Haavard Moen that I thought worth highlighting: "Dear Google: Please fork WebKit."

Update, 5:35pm

A video Q&A from Google Developers about Blink (time markers available on the Chromium blog).

Monday, January 7, 2013

Google Maps: Misbehaving with UA Sniffing

Here's the TL;DR: Google Maps sniffs a browser's user agent string. If it finds Internet Explorer on Windows Phone, then it kicks it over to the m.google.com mobile home page.

So let's be clear. It's 2013 and one of the biggest companies on the internet is using a sniffer to redirect users on a browser and platform that it sees as competition.

Google's general claim is that the mobile version of Google Maps is optimized for WebKit browsers (such as Google Chrome) and that Google therefore doesn't support non-WebKit browsers. Yet Google Maps works fine on Firefox mobile (which supports panning, but not pinch-to-zoom) and Opera Mobile (which sometimes supports panning, but not pinch-to-zoom), neither of which uses the WebKit engine. It even renders in Opera Mini, although I can't get it to do anything. I can't test Internet Explorer on Windows Phone because I don't have one.

I can, however, test Google's browser sniffer by changing the user agent string in my browser to report itself as Windows Phone and watch my request for maps.google.com get redirected to m.google.com, the Google mobile home page. This tells me that Google isn't performing feature detection (such as touch events or multi-touch support), but is instead damning the browser by name alone.
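
For contrast, a minimal sketch of what feature detection looks like; the checks below are generic examples of testing for touch support rather than asking the browser its name, not anything from Google's code:

  // Detect touch support instead of sniffing the user agent string.
  var hasTouch = ('ontouchstart' in window) ||
                 (navigator.msMaxTouchPoints > 0);

  if (hasTouch) {
    // Wire up touch-based panning and zooming.
  } else {
    // Fall back to mouse and keyboard controls.
  }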

This is the lesson Google is teaching young web developers who don't understand how flawed this approach is (contrary to its own instructions on best practices from less than a month ago). Google Maps happily lets me have a sub-par experience in Opera Mobile or Firefox mobile. It even lets me have a broken experience in Opera Mini. But Internet Explorer on Windows Phone? Google Maps just boots those users.

Reports I have read (and watched) on assorted articles online suggest that Google Maps works reasonably well on IE on Windows Phone (supporting panning and pinch-to-zoom). As such, I don't buy Google's argument that it wants to prevent users from having a poor experience—there is already evidence that a large number of users (more than use Windows Phone) are having a poor experience.

From Google:

The mobile web version of Google Maps is optimized for WebKit browsers such as Chrome and Safari. However, since Internet Explorer is not a WebKit browser, Windows Phone devices are not able to access Google Maps for the mobile web.

…Because we actively block them, should be how that quote ended.

So why is Google really doing this? Is it because it's fun to pick on Microsoft? Is it because Google thought it could get away with it? Is it to make the Windows Phone experience less appealing than Android's? Is it because Google doesn't like Microsoft's touch events specification (and how well it's been received) at the W3C? Is it because of recent court cases between Google and Microsoft?

In this case I don't much care. I care instead about the terrible example Google is setting for web developers.

Background

My Related Posts

Tuesday, October 30, 2012

Confusion in Recent Google Updates

Google pushed out some updates recently which have had SEO experts and spammers, as well as the average web developer or content author, a bit confused. It seems that some sites have been losing traffic and attributing the change to the wrong update. It also seems that some of this has percolated up to my clients in the form of fear-mongering and misinformation, so I'll try to provide a quick overview of what has happened.

Exact Match Domains (EMD)

For years, registering a keyword-stuffed domain name for your product or service was considered the coup de grace of SEO. Frankly, on Google, this was true. For instance, if my company, Algonquin Studios, wanted to rank highly for the search phrases web design buffalo or buffalo web design, then I might register the domains WebDesignBuffalo.com and BuffaloWebDesign.com. I could even register nearby cities, like RochesterWebDesign.com, TorontoWebDesign.com, ClevelandWebDesign.com, and so on, with the intent to drive traffic to my Buffalo-based business.

Google has finally taken steps to prevent that decidedly spammy, user-unfriendly practice. With the EMD update, Google will look at the domain name and compare it to the rest of the site. If the site is a spammy, keyword-stuffing, redirection mess, then it will probably be penalized. If the domain name matches my company name, product, or service and (for this example) the business is located in the area specified by the domain, then it will probably not experience any change.

In all, Google expects this will affect 0.6% of English-US queries.

Panda/Penguin

While spammers panicked about this change, some non-spammy sites noticed a change at about the same time. This may have been due to the Panda and Penguin updates that rolled out around the same time and have been rolling out all along.

Considering the Panda update was affecting 2.4% of English search queries, that's already four times the impact of the EMD update. And considering that Google pushes out updates all the time, tracing any change in your Google result position back to one single update is going to be tough.

A couple tweets from Matt Cutts, head of the web spam team at Google, help cut to the source instead of relying on SEO-middle-men to misrepresent the feedback:

This one details the number of algorithm changes that regularly happen:

The trick is trying to recognize what on your site might have been read as spam and adjust it to be user-friendly, not to try to tweak your site to beat an ever-changing algorithm.

Disavowing Links

This one ranks as confusion for a web developer like me.

The only recent Google feature that I think takes potential fun away from blogs (or any site that allows commenting) is the tool to disavow links. This tool allows a site owner to essentially tell Google not to count specific links pointing at it when figuring PageRank.
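
For the curious, the tool works off a plain text file uploaded through Google Webmaster Tools. A minimal sketch of the documented format, with made-up domains:

  # Links from these pages and domains should not count toward my site.
  http://spammy-example.com/some/comment/page.html
  domain:link-farm-example.net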

One reason I don't like it is that it allows sites that have engaged in black-hat SEO tactics and have ultimately been penalized by Google to undo the now-negative effects of paid links, link exchanges and other link schemes that violate Google's terms. While this is good for sites that have been taken to the cleaners by SEO scammers, I still don't like how easily they could be excused.

Another reason I don't like it is that all those liars, cheaters, scammers, spammers, thieves and crooks who have spam-posted to my blog can go and disassociate those now-negative links to their sites. Sadly, associating their sites with filth of the lowest order by careful keyword linking (as I have done at the start of this paragraph) is the only ammo I have with which to take pot-shots at their spam juggernauts.

This new tool means you might not see spammers harassing you to remove their own spammy comments from your blogs. Which is unfortunate, because ignoring them seems only fair.

Just this morning Matt Cutts tweeted a link to a Q&A to answer some questions about the tool:

The post includes some answers intended to address concerns like mine.

Meta Keywords, Redux

As I have said again and again, the use of meta keywords is pointless in all search engines, but especially in Google. This doesn't stop SEO snake-oil salesmen from misrepresenting a recent Google update to their prospects.

Last month Google announced its news keywords meta tag, which does not follow the same syntax that traditional (and ignored) keyword meta tags follow. An example of the new syntax:

<meta name="news_keywords" content="World Cup, Brazil 2014, Spain vs Netherlands">

From the announcement, you can see this is clearly targeted at news outlets and publishers that are picked up by Google News (your blog about cats or your drunk driving lawyer web site won't benefit):

The goal is simple: empower news writers to express their stories freely while helping Google News to properly understand and classify that content so that it’s discoverable by our wide audience of users.

For further proof, the documentation for this feature is in the Google News publishers help section.

In short, unless your site is a valid news site, don't get talked into using this feature and fire the SEO team that tries to sell you on it.

Related

Monday, October 22, 2012

SEO Isn't Just Google

This past weekend I had the pleasure of participating in Buffalo's first WordCamp for WordPress users. Before my presentation I made it a point to sit in on the other sessions that were in the same track as mine.

When discussing SEO, all the sessions I saw mentioned only Google. The Google logo appeared throughout, Google's PageRank was discussed, Google search result screen captures were used, and so on.

The presenters for an SEO-specific session even went so far as to embed a video of Matt Cutts (from Google) in their presentation and declare that Matt Cutts stated that WordPress is the best platform for SEO.

For context, Matt Cutts appeared at a WordCamp in May 2009 to discuss his search engine (Google) for an audience using a particular platform (WordPress). Matt even said, "WordPress automatically solves a ton of SEO issues. Instead of doing it yourself, you selected WordPress" (at about 3:15 in the video). He's pitching his product to a particular audience to validate their technical decision (he's just saying they don't need to manually code these tweaks).

If while watching that video you heard Matt Cutts declare that WordPress is the best platform for SEO, then you are engaging in selection bias.

This same selection bias is also happening when developers work so hard to target Google and not any other search engines. If you convince yourself that Google is the only search engine because you don't see other search engines in your logs, then perhaps you are the reason you don't see those other search engines.

To provide context, this table shows the ratio of searches performed by different search engines in August 2012 in the United States. These are from comScore's August 2012 U.S. Search Engine Rankings report.

Google Sites 66.4%
Microsoft Sites 15.9%
Yahoo! Sites 12.8%
Ask Network 3.2%
AOL, Inc. 1.7%

It's easy to dismiss 16% when you don't know how many searches that translates to.

More than 17 billion searches were performed in August 2012. Google ranked at the top (as expected) with 11.3 billion, followed by Microsoft sites (Bing) at 2.7 billion. The breakdown of individual searches per engine follows:

Google Sites 11,317,000,000
Microsoft Sites 2,710,000,000
Yahoo! Sites 2,177,000,000
Ask Network 550,000,000
AOL, Inc. 292,000,000

To put this another way, for every four (ok, just over) searches using Google, there is another search done in Bing. For every five searches using Google, there is another one done using Yahoo.

If your logs don't reflect those ratios in search engines feeding your site, then you need to consider if you are focusing too hard on Google to the detriment of other search engines.

Now let's take this out of the United States.

Considering Bing's partnership with the Chinese search engine Baidu, contrasted with Google's battles with the Chinese government, it might be a matter of time before Bing tops Google for Asian searches. Given the size of the Asian market (over half a billion users), if you do any business there it might warrant paying attention to both Baidu and Bing.

Related

Update: May 15, 2013

Bing is now up to 17%, having taken almost all of that extra point from Google.

Saturday, September 3, 2011

Patent Wars Sorta-Infographic

I'm giving in to the cool, hip trend of infographics that has been popping up like pinkeye across blogging and tech sites lately. These infographics are typically nothing more than data points (sometimes just narrative) strewn about with mathematically suspect charts or somewhat-related design elements. But they seem to draw traffic, even when there isn't any data to graph (Browsers as Wrestlers "Infographic"). So I am using my lack of shame to power through this long weekend with three posts of infographics from other sites.

In today's installment I am posting an infographic that has some (imprecise) charts and a couple of process maps outlining the current state of patents, called Patent Wars: A New Age of Competition. You can find the original image at the Business Insurance Quotes site, where it has no accompanying explanation or background.

Image of patent wars, no accompanying text available.

Related

Friday, August 26, 2011

We Really Still Have to Debunk Bad SEO?

Image of bottle of SEO snake oil.
I've been doing this web thing from the start (sort of — I did not have a NeXT machine and a guy named Tim in my living room) and I've watched how people have clamored to have their web sites discovered on the web. As the web grew and search engines emerged, people started trying new ways to get listed in these new automated directories, and so began the scourge of the Search Engine Optimization (SEO) peddler.

The web magazine .Net posted what to me is a surprising article this week (surprising in that I thought we all knew this stuff): The top 10 SEO myths. I am going to recap them here, although you should go to the article itself for more detail and the full list of reader comments. Remember, these are myths, which means they are not true.

  1. Satisfaction, guaranteed;
  2. A high Google PageRank = high ranking;
  3. Endorsed by Google;
  4. Meta tag keywords matter;
  5. Cheat your way to the top;
  6. Keywords? Cram 'em in;
  7. Spending money on Google AdWords boosts your rankings;
  8. Land here;
  9. Set it and forget it;
  10. Rankings aren't the only fruit.

The problem here is that for those of us who know better, this is a list that could easily be ten years old (with a couple obvious exceptions, like the reference to AdWords). For those who don't know better or who haven't had the experience, this might be new stuff. For our clients, this is almost always new stuff and SEO snake oil salesmen capitalize on that lack of knowledge to sell false promises and packs of lies. One of my colleagues recently had to pull one of our clients back from the brink and his ongoing frustration is evident in his own retelling:

I have a client who recently ended an SEO engagement with another firm because they wouldn’t explain how they executed their strategies. Their response to his inquiry was to ask for $6,000 / month, up from $2,000 / month for the same work in two new keywords.

This kind of thing happens all the time. I recently ran into another SEO "guru" selling his wares by promising to keep a site's meta tags up-to-date through a monthly payment plan. When I explained that Google doesn't use meta tags in ranking, his response was that I was wrong. When I pointed him to a two-year-old official Google video where a Google representative explains that meta tags are not used, his response was to state that he believed Google still uses them because he sees results from his work. My client was smart enough to end that engagement, but not all are.

Because I cannot protect my clients in person all the time, I have tried to write materials to educate them. For our content management system, QuantumCMS, I have posted tips for our clients, sometimes as a reaction to an SEO salesman sniffing around and sometimes to try to head that off. A couple examples:

Along with these client-facing tips I sometimes get frustrated enough to write posts like this, trying to remind people that SEO is not some magical rocket surgery and that those who claim it is should be ignored. I've picked a couple you may read if you are so inclined:

And because I still have to cite this meta tags video far far too often, I figured I'd just re-embed it here:

Related

My ire doesn't stop at SEO self-proclaimed-gurus. I also think social media self-proclaimed-gurus are just the latest incarnation of that evil. Some examples:

Tuesday, August 2, 2011

Are Patents Killing HTML5 Video?

WebM logo
You may recall from my post in February, WebM, H.264 Debate Still Going, that the H.264 video codec is considered patent-encumbered (which resulted in its dismissal from the HTML5 specification) and that Google has argued its own WebM / VP8 codec is made up of patents it owns, releasing it as royalty-free.

We all had to know it wouldn't be that simple. MPEG-LA, the licensing entity for multimedia codecs such as H.264, put out a call for patents related to VP8, the underlying technology in WebM, back on February 11, leaving it open for a month. Ostensibly MPEG-LA cast its wide net in the hopes that it could find existing patents that it can then use (perhaps by forming a patent pool or preparing for a lawsuit) to lay claim to rights over technology in VP8.

In April, Google started its own call for patent holders and formed the WebM Community Cross-License (CCL) initiative. Its description from its site:

The WebM Community Cross-License (CCL) initiative enables the web community to further support the WebM Project. Google, Matroska and the Xiph.Org Foundation make the various components of WebM openly available on royalty-free terms. By joining the CCL, member organizations likewise agree to license patents they may have that are essential to WebM technologies to other members of the CCL.

With no official announcement, MPEG-LA revealed in an interview (WebM Patent Fight Ahead for Google?) that twelve parties had come forward with patents they felt were covered in VP8. If those claims hold up then MPEG-LA's next step is to create a patent pool, which allows even more patents and patent holders to be added to the mix. It also means Google will be faced with either paying for licensing from that patent pool or defending itself in a lawsuit. Given Google's US $105 million outlay to acquire the VP8 codec (via its acquisition of On2) already, you can bet some number crunching will take place to evaluate the value of either approach.

MPEG-LA most likely doesn't care whether or not VP8 wins over H.264. All MPEG-LA is interested in is getting its licensing fees from both of them. While MPEG-LA doesn't necessarily fit the model of a standard patent troll (it isn't located in Texas), if you read my post from yesterday, A Patent Trolling Primer, you can see some parallels.

These are posts I have written both about patent abuse and the H.264 vs. WebM debate. Instead of a recap of each here, you can get more history in these posts:

Back in January, Moving Picture Experts Group (MPEG, a working group of ISO/IEC and not related to MPEG-LA) issued a statement that it plans to move forward with a royalty-free video encoding standard. Sadly, I don't have any news on its progress. If you have any, please comment below and let me know.

Other Media Types

As proponents of Ogg Vorbis will tell you on the We Want Ogg site, although they hold it up as a royalty-free and unpatented audio format, they cannot seem to get either Apple or Microsoft to support it in their browsers, even though the code to embed it natively in the browser is available (a better solution than a plug-in). In this case, owning patents related to audio technologies as a browser maker is motivation not to support an open format.

Google's WebP image format, which uses some of the still-image compression tricks from VP8, isn't being supported in Mozilla (Mozilla rejects WebP image format, Google adds it to Picasa). While patent issues around VP8 may certainly creep into WebP, in this case the argument against supporting the image format comes down to its lack of clear benefits over other image formats. If VP8 compression becomes patent-encumbered, then you can bet this new format will die on the vine.

Related

Update: March 7, 2013

Today on Google's WebM blog:

Today Google Inc. and MPEG LA, LLC announced agreements that will result in MPEG LA ending its efforts to form a VP8 patent pool.

There is no indication of how much money changed hands.

Thursday, March 3, 2011

Recent(ish) News on Google, Bing, SEO/SEM

Google Logo
I have written many times here about SEO/SEM and how so much of it is sold to organizations by scam artists (though I recoil at the thought of calling them "artists"). Too often it includes demonstrably false claims, like how meta keywords and descriptions will help your site and that you should invest in the SEO vendor to do just that.

I also try hard not to spend too much time addressing the ever-changing landscape of the search engines, let alone focusing on just one of them. However, sometimes it's worth wrapping up some of the more interesting developments because they can genuinely affect my clients who aren't trying to game the search engines.

Content Farms and Site Scrapers

If you've spent any time searching through Google you may notice that sometimes you get multiple results on your search phrase that look the same in the results, but when visiting the site you find they are just ad-laden monstrosities with no value. Sometimes one of these spam sites would appear higher in the Google search results than the site from which the content was stolen.

Google has now taken steps to not only push those sites back down to the bowels where they belong, but also to penalize those sites. These changes started in late January and went through some more revisions at the end of last month.

I think it's fair to expect Google to keep tweaking these rules. Given all the sites that offer RSS feeds of their content (along with other syndication methods), it's likely that many sites integrate content from external sites into their own. The trick here will be for Google to distinguish a site that has original content and also syndicates third-party content from a site that has nothing but content taken from elsewhere. If you do syndicate content, then you should be sure to watch your site stats and your ranking in the search results to see if you are affected at all.
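
If you syndicate your content to other sites, one widely documented mitigation is to ask the republisher to include a cross-domain canonical link pointing back at your original article, so Google knows which copy is the source. A sketch with a placeholder URL:

  <!-- In the head of the republished copy. -->
  <link rel="canonical" href="http://www.example.com/original-article">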

Additional reading:

Page Titles

Perhaps you have spent a great deal of time carefully crafting your page titles (specifically the text that appears in the title element and displays in your browser title bar). Perhaps you have noticed that in Google the title you entered is not what appears on the search results page. This isn't a bug, doesn't mean your site was indexed improperly, and doesn't necessarily mean your page title had some other effect on your page rank. This is done intentionally by Google.

It does imply, however, that your titles may be unwieldy. Google does this when titles are too short, when they are used repeatedly throughout a site, or when they are stuffed with keywords. If you find that your title is being cut off (implying it's too long), then you may want to limit your title to 66 characters, or at least put the most important information in those first 66 characters.
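
In practice that just means keeping the title element short and front-loading the part that matters, for example (a made-up page):

  <title>Blue Widgets on Sale This Week | Example Widget Co.</title>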

Additional reading:

Social Media

It wasn't that long ago that Google and Bing said that links in social media (think Facebook and Twitter) will affect a site's position in search results (PageRank for Google). Some people may even be tempted to run out and post links to every social media outlet they can find, hoping that the more inbound links, the better for their site. Thankfully it's not that simple.

Both Google and Bing look at the social standing of a user when calculating the value of an inbound link. This can include number of followers (fans/friends on Facebook), number followed, what other content is posted, how much a user gets retweeted or mentioned and a few other factors. In short, those Twitter accounts that come and go in a matter of hours that tweet a thousand links into the ether aren't doing any good. A good social media strategy that is garnering success, however, should also give a boost to the sites it links.

What is not clear, however, is how URL shorteners (and which ones) affect the weight of those links.

Additional reading:

Random Bits

These are some random articles I collected for posts that never happened. I still think there's good stuff in them, and they warrant a few minutes to read.

Google: Bing Is Cheating, Copying Our Search Results and Bing: Why Google's Wrong In Its Accusations should be read together. The accusation from Google that Bing is stealing its search results is fascinating on its own, but reading Bing's response demonstrates a host of things Bing also does differently. For me it was an entertaining battle, but that's about it.

HuffPo's Achilles Heel discusses how Huffington Post relies on questionable SEO techniques, which I equate to spamming, and wonders how long the site will be viable if AOL isn't willing to keep up the SEO game as the rules change. It could be a great purchase for AOL, or a dead site full of brief article stubs.

Is SEO Dead? 1997 Prediction, Meet 2009 Reality is a two-year-old article dealing with a twelve-year-old argument. And still relevant.

When A Stranger Calls: The Effect Of Agency Pitches On In-House SEO Programs should be particularly interesting to people who are charged with some form of SEO within an organization. Too often the unsolicited call or email comes in making grandiose promises and citing questionable data and results. This article provides a good position from which to push back and make sure you and your employer aren't taken to the cleaners.

A 3-Step SEO Copywriting Confession almost sounds like an admission of wrongdoing, but instead talks about how to structure your content for SEO without completely destroying it.

Additional reading (that I wrote):

Monday, February 21, 2011

WebM, H.264 Debate Still Going

Terrible illustration of Chrome dropping H.264.
On February 2, Microsoft released a plug-in for Chrome on Windows 7 to allow users to play H.264 video directly in Chrome. In addition, Microsoft has said that it will support WebM (VP8) when a user has the codec installed. And so began the fragmentation of the HTML video model, back to relying on plug-ins to support what was otherwise intended to be supported natively within browsers.

Microsoft went on to ask three broad questions in a separate post (HTML5 and Web Video: Questions for the Industry from the Community), taking Google to task for what Microsoft considers inconsistent application of its own patent concerns and openness (emphasis Microsoft's):

  1. Who bears the liability and risk for consumers, businesses, and developers until the legal system resolves the intellectual property issues;
  2. When and how does Google make room for the Open Web Standards community to engage genuinely;
  3. What is the plan for restoring consistency across devices, Web services, and the PC.

The same day Microsoft was announcing its plug-in approach to addressing the WebM and H.264 battle, the post On WebM again: freedom, quality, patents came out, addressing what its author felt were the five most common issues raised with WebM (which I have paraphrased):

  1. Quality: the argument here is that it's a function of the encoder and the WebM can match H.264;
  2. Patent Risk: comparing the 164 unexpired U.S. patents used in a single encoder, he finds that 126 of them are there for the encoder's H.264 support, the remaining (used by WebM) are in a library released by Google.
  3. Not open enough: there is little argument here, given that it's currently in Google's hands to manage and develop.
  4. H.264 is not so encumbered: but only for non-commercial use for freely-distributed web video.
  5. Google provides no protection from infringing patents: nor does MPEG-LA.

Changing the Nature of the Battle

On February 11, the post MPEG LA puts Google's WebM video format VP8 under patent scrutiny outlines how MPEG-LA, the licensing entity for multimedia codecs such as H.264, has put out a call for patents related to VP8, the underlying technology in WebM. That deadline for submissions is March 18, or less than a month away as of this writing. From there, MPEG-LA will create a patent pool of contributing patent holders for any items that are deemed essential to the codec. This patent pool can then be used to negotiate licensing. In short, VP8/WebM could soon be more patent encumbered than it has been. This puts Google on the defensive as it will have to show that none of the patents in use are valid and/or infringed.

The same author of that last post posted on the 14th at The Guardian, Royalty-free MPEG video codec ups the ante for Google's WebM/VP8. In case that title isn't clear enough, a new royalty-free video standard may be in the pipes. MPEG, a standards body separate from MPEG-LA (the licensing body) has called for proposals toward a royalty-free MPEG video coding standard. One of the goals is to make this new standard comparable to the baseline used in H.264.

If this pans out, it puts another barrier in front of the WebM offering from Google, namely that for WebM to be adopted it will have to best any new royalty-free MPEG codec. The three items that can bring WebM (VP8) down:

  1. If the MPEG-LA call for patents on VP8 nets some patents deemed essential, MPEG-LA will form a patent pool and push for licensing agreements, which Google will have to fight at each step.
  2. If MPEG can genuinely develop a royalty-free video coding standard, it can beat WebM either from the patent perspective or by forcing WebM to be technically superior.
  3. Assuming WebM can get past the first two items, it's still back where it started — in a battle for adoption and endorsement against the already entrenched H.264 standard.

Real-World Needs

Considering Google is the company that delivers so much of the video viewed on the web via YouTube, it makes sense that Google presumes it can take the lead on video codec standards. Netflix, however, has its entire business model built around video, which is now moving inexorably to the web. Netflix commented back in December (HTML5 and Video Streaming) that it needs some things sorted out before it can even rely on HTML5 and video to deliver its content (the first 6 of which it has resolved through its own proprietary technology):

  1. The acceptable A/V container formats (e.g. Fragmented MP4, WebM, etc.);
  2. The acceptable audio and video codecs (e.g. H.264, VP8, AAC, etc.);
  3. The streaming protocol (e.g. HTTP, RTP, etc.);
  4. A way for the streaming protocol to adapt to available bandwidth;
  5. A way of conveying information about available streams and other parameters to the streaming player module;
  6. A way of supporting protected content (e.g. with DRM systems);
  7. A way of exposing all this functionality into HTML5.

It's clear that the web video debate extends far beyond the academics of HTML5. As long as issues related to patents and licensing are unresolved, or perceived as unresolved, we will see more proprietary solutions gain more ground, further balkanizing the future of video on the web.

If you think this doesn't affect the end user, all that time and effort put into creating proprietary solutions ultimately costs you as the consumer in the form of increased fees for your content. Granted, it will take some time for browsers to catch up with the selected codec, and yet more time for users to catch up with the supporting browsers, but the longer this debate continues, the longer before we can even start that long road of user adoption.

Possibly the best outcome we can hope for is that this battle results in a royalty-free MPEG standard, nullifying many of the arguments against both H.264 and WebM.

Related

Thursday, January 13, 2011

H.264 Getting Dropped from Chrome

Terrible illustration of Chrome dropping H.264.
If you pay any attention to the plodding chaos that is the development of HTML5, then you've probably seen the discussions around the video element and how best to encode videos. Over a year and a half ago, Ian Hickson gutted the video and audio portions of the HTML5 specification to remove all references to codecs, disconnecting the two competing standards, H.264 and Ogg Theora, from the next HTML standard. He did this in an email announcement to the WHATWG list, explaining the issues with licensing and browser support for both options.

At the time Safari refused to implement support for Ogg Theora, Opera and Mozilla refused to support H.264, Internet Explorer was silent, and only Google Chrome implemented both (though Google said it could not provide the H.264 codec license to Chromium third-party distributors).

A year and half later and Google has dropped support for H.264 from Chrome as of two days ago. While Google has hung its argument on the hook of license restrictions, it's probable that Google is really just pushing its own WebM format.

The licensing argument is simple — the compression parts of H.264 are patented by MPEG-LA. While MPEG-LA has opened up the H.264 license for the web (after originally saying it wouldn't collect royalties until 2016), it's conceivable that move was intended to get the web hooked on it. And then it's fair to assume the patent trolling might begin (history indicates the odds are good).

This announcement and the logic behind it has started off a mini-firestorm among developers on the leading edge of HTML5.

Ars Technica wrote up a pretty scathing review of Google's move in the article Google's dropping H.264 from Chrome a step backward for openness, suggesting Google's real issue is about control over its own WebM format. The article goes into more detail about Google's acquisition of the company responsible for developing what is now WebM and compares and contrasts the licenses of these and other standards.

Haavard Moen, who works for Opera, takes some time to disassemble the argument made by Ars Technica in his post Is the removal of H.264 from Chrome a step backward for openness? He breaks it up into 11 points, corrects or contextualizes them, and then suggests that the bulk of the points aren't even relevant to the discussion at hand.

The chart below (and its comments) were unabashedly stolen and marked up from a graphic by Bruce Lawson (of Opera Software fame). He uses it to outline which browsers support which codec. I have added a column to list Ogg Theora.

Browser              | Ogg Theora | Native WebM Support | H.264 Support | Member of MPEG-LA H.264 Licensing Pool
Opera                | Yes        | Yes                 | No            | No
Firefox              | Yes        | Yes                 | No            | No
Chrome               | Yes        | Yes                 | No            | No
Internet Explorer 9  | No         | No                  | Yes           | Yes
Safari               | No         | No                  | Yes           | Yes

The two browsers that support only H.264 video are Internet Explorer 9 and Apple Safari, the vendors of which have a financial stake in the codec:
www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx

The column in the chart asking about the MPEG-LA licensing pool is intended to show that the only browsers still supporting H.264 are those with a financial stake. Bruce updated his post with a link from a site visitor that claims that Microsoft gets far less from its financial commitment to H.264 than it pays in.

An argument that keeps popping up is that Google should drop support for Flash, given that it is not an open standard. I have dismissed this immediately on the argument that Flash has been here for a very long time, and it's not practical to drop support for a technology that is already driving millions of sites, at least not without the myopic world view that Apple lugs around.

That argument, however, made it into the post from John Gruber's blog, Daring Fireball, titled Simple Questions for Google Regarding Chrome’s Dropping of H.264. Remy Sharp was quick to respond on his own blog with My take on Google dropping H.264.

While Flash is only a part of that debate, it offers further insight into the arguments we're hearing from both sides. Some of the arguments will lean on a perceived double standard, as we see with the Flash example; some will lean on the license debate, as we see with the constant references to MPEG-LA versus WebM and its background; some will lean on the quality of the video, which I intentionally left out of this post; and some will lean on who wants to be the next big monopoly for the burgeoning growth of video on the web.

For the average developer, it might be best to wait until the dust settles. You'll still be making decisions based on browser support in the end anyway.
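
In the meantime, the video element already lets you hedge by listing multiple sources and letting each browser pick the codec it supports. A minimal sketch with placeholder file names:

  <video controls>
    <source src="clip.webm" type="video/webm">
    <source src="clip.mp4" type="video/mp4">
    <!-- Fallback for browsers with no HTML5 video support. -->
    <p><a href="clip.mp4">Download the video</a>.</p>
  </video>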

Related

UPDATE: More Links from around the Innertubes

These links came rolling out this past weekend (January 14-16) and offer more arguments for and against H.264 and Google's decision.

Update (January 21, 2011)

Friday, December 17, 2010

You Get What You Pay For

We're just shutting down delicious, not selling your children to gypsies. Get the f-ck over it.

First off, let me apologize for ending the title of this post with a preposition. I am playing off an idiom, so I think I have some leeway. Besides, "You get that for which you pay" just doesn't roll off the tongue.

In the last week I have watched two free web services I use announce (in some fashion) that they are going away. This has caused a good deal of frustration and anger on behalf of users. And it's all just a repeat of things I have seen on the web for 15 years now.

I have watched the Brightkite blog, Facebook page and Brightkite/Twitter accounts get hammered with angry and abusive comments from users (Brightkite Yields to Foursquare, Gowalla, Etc.).

I have watched on Twitter as people have derided Yahoo's decision to shut down del.icio.us, the place where they have shared and stored bookmarks for years (Leaked Slide Shows Yahoo Is Killing Delicious & Other Web Apps at Mashable).

I felt vindicated when Google decided to pull the plug on Google Wave, partly owing to the fact that nobody could quite figure out how to wield something that was a floor wax and a dessert topping all in one (Google Wave is Dead at ReadWriteWeb).

I have watched as some of the URL shorteners on which we have come to rely for services like Twitter have announced that they are going away, or have just disappeared (List of URL Shorteners Grows Shorter).

I, and perhaps the entire web, breathed a sigh of relief when Geocities announced it was going to take a dirt nap — and finally did (Wait - GeoCities Still Exists?).

I remember when both Hotmail and Yahoo decided it was time to start charging for access to some of the more enhanced features of the free email they offered users (Say Goodbye to Free Email).

I saw people panic when they might lose access to all sorts of free video, photos, and even text content from CNN, Salon, and others (End of the Free Content Ride?).

We Get It; You've Been There, What's Your Point?

These services all have a couple key things in common:

  1. Users have put a lot of time, energy, and apparently emotion into these services.
  2. They are free.

The second point, in my opinion, should mitigate the first point. If you as a user are not paying to use a service, then is it a wise decision to build your social life or your business around it? Do you as a user not realize that these organizations owe you nothing?

When Brightkite announced the shuttering of its core service with only a week's heads-up, it was at least kind enough to allow users to grab their data via RSS feeds. Yahoo hasn't even formalized the future of del.icio.us, but fans have already found a way to grab the data. But in both of these cases, if you as a user aren't backing up your data, keeping an archive, or storing it elsewhere, whose fault is it really that you might lose it all?

Is it wise to build a social media marketing campaign on Facebook, a platform notorious for changing the rules (features, privacy controls, layout, etc.) on a whim? Is relying on a free URL shortener service a good idea as the only method to present links to your highly developed web marketing campaigns? Should you really run your entire business on the features offered by Gmail, Google Calendar, Google Docs, etc? If you have to alert staff/friends/partners to something important in a timely fashion, can you really trust Twitter to do what you need?

The culture of the web (nee Internet) has always been one of an open and sharing environment, where people and organizations post information that they understand will be shared/borrowed/stolen/derided. Somehow users of the web have come to expect that everything is, or should be, free. Look at the proliferation of sites to steal movies and music as an example on one end of the spectrum. On the other end is the reliance on Wikipedia by every school kid across the country instead of a purchased encyclopedia.

Let's all take some time to evaluate our plans and what we are doing. When that vendor who builds Facebook campaigns comes back to tell you that what he/she built last year won't work this year due to a Facebook change, there is your cost. When you have to take time from your real work to download all your bookmarks just so you can try to find a way to share them again or even get them into your browser, there is your cost. When you build a business on the back of a Twitter API and have to retool your entire platform due to an arbitrary change in how you call the service, there is your cost. When your Google Doc is sitting in "the cloud" and you're sitting in a meeting without wifi just before you have to present it, there is your cost.

That cost, however, ignores something that can't be measured in dollars on your end. Sharing your personal information, your activities, and your habits is the daily price you pay for using many of these services.

You may be under the impression that I have something against these free services. The use of this very blog should tell you otherwise. Instead I have something against users who have an expectation of free, top-notch service from organizations who are really only around as far as their cash flow can sustain them.

I keep my bookmarks on my local machine and just share the files between computers. I have been archiving my Brightkite photos since I started using the service, and archiving the posts to Twitter and Facebook, all the while backing up my Twitter stream. I use locally-installed software (MS Word, OpenOffice) writing to generic formats (RTF, etc.) and keep the files I need where I can access them (file vault on my site). I pay for a personal email service in addition to maintaining a free one. Other than Twitter, with its character limits, I avoid URL shorteners (and have no interest in rolling my own). I signed up for Diaspora in the hopes that I can funnel all my social media chaos to the one place I can take it with me. I keep a landline in my house so when the power goes out I can still make a phone call to 911.

I don't tweet my disgust when Facebook changes its layout. I don't post angry comments on Brightkite's wall when they kill a service. I don't try to organize people to take their time to rebuild Google Wave when I cannot. I don't punch my co-worker when he buys me a sandwich and the deli failed to exclude the mayo.

Let's all take some personal responsibility and stop relying solely on something simply because it's free. Your favorite free thing is different or gone (or will be). Suck it up and move on.

Update: January 10, 2011

Alex Williams at ReadWriteWeb echoes the general theme of expecting free stuff in the post "Dimdim: The Risk of Using A Free Service."

Update: January 12, 2011

Free sometimes means "full of malware and viruses," even when you are just installing free themes for your blog: Why You Should Never Search For Free WordPress Themes in Google or Anywhere Else

Update: January 2, 2014

Jeffrey Zeldman explains the process in a narrative: The Black Hole of The Valley