When Bad News Follows You: SEO Redux?

Published in News, Rants and Ephemera on Monday, August 27th, 2007

The NY Times has had some SEO voodoo applied to its CMS, and now old news stories are "coming back to haunt" those who were reported upon.

I came across the article via Nicholas Carr's post Should the Net forget?

This is an interesting consequence that seems to be getting pinned on SEO, rather than being looked at from the angle of accountable reporting, no?

Nicholas states that:

With search engine optimization - or SEO, as it's commonly known - news organizations and other companies are actively manipulating the Web's memory. They're programming the Web to "remember" stuff that might otherwise have become obscure by becoming harder to find.

The result is that:

People are coming forward at the rate of roughly one a day to complain that they are being embarrassed, are worried about losing or not getting jobs, or may be losing customers because of the sudden prominence of old news articles that contain errors or were never followed up.

In Summary

So, in the past, when the print editions simply disappeared or, more recently, when content was hidden behind paywalls and poor SEO, newspapers didn't have to worry about the consequences of articles that contained errors or were never followed up. Now, people may suffer from those mistakes and that lack of integrity.

What do you think the answer should be? Nicholas Carr asks Should the Net forget? I'm not so sure, and I don't think that the answer is that simple.

There's a learning curve to moving print onto the web, and this case covers one facet of what needs to be considered, but it would be great if some form of integrity from those doing the reporting kept these kinds of things from happening.

Comments and Feedback

So in a nutshell: SEO is becoming a scapegoat for sloppy journalism. If such articles really are provably inaccurate and someone is losing business as a consequence, then surely that is sufficient grounds for a lawsuit? I realise we live in a society where litigation is becoming frivolous, but this seems like a scenario where litigation is the best solution.

I think that Slate does a pretty good job of updating articles that contain incorrect information.

For example, check out this article on Lent, where an asterisk indicates the correction (www.slate.com/id/2137092/), and the correction announcement page (www.slate.com/id/2137339/). The correction within the article is much more useful and important than the corrections page.

Newspapers have a lot of catching up to do when it comes to understanding the Internet. A correction page alone is not enough, especially when they have the power to go back and fix their wrongs.

On the other hand, when something is just outdated (a restaurant review, for example), it is the job of the consumer to recognize the date/age of the article. And for another sticky issue, problems also arise when other sites pick up a story and don't run the corrected version.

Abi, that's a good example and exactly what I was thinking of. If these newspapers are using a decent CMS, how hard would it be to locate the article in error, update it, note where the error occurred, and potentially create a corrections page with more information? Something like the sketch below would do it.
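To be clear, everything in this minimal sketch (the names, the fields, the in-memory storage) is hypothetical; a real CMS would persist articles and corrections in its database. It just illustrates the idea: fix the error in the article body itself, then attach a dated correction note that renders with the article.

```python
# A minimal sketch of inline corrections in a CMS, Slate-style.
# All names and fields here are hypothetical, for illustration only.

from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class Correction:
    corrected_on: date
    note: str  # e.g. "An earlier version of this article misstated..."


@dataclass
class Article:
    slug: str
    title: str
    body: str
    corrections: List[Correction] = field(default_factory=list)

    def add_correction(self, note: str) -> None:
        """Record a dated correction note alongside the article."""
        self.corrections.append(Correction(date.today(), note))

    def render(self) -> str:
        """Render the article with its correction notes appended inline."""
        parts = [self.title, "", self.body]
        for c in self.corrections:
            parts.append("")
            parts.append(f"* Correction, {c.corrected_on:%B %d, %Y}: {c.note}")
        return "\n".join(parts)


# Usage: fix the error in the body itself, then note that the fix was made.
article = Article(
    slug="lent-explainer",
    title="What Is Lent?",
    body="Lent begins on Ash Wednesday and runs for roughly forty days.",
)
article.add_correction("An earlier version of this article misstated when Lent begins.")
print(article.render())

# The same correction notes could also feed a site-wide corrections page,
# which covers the "corrections page with more information" part.
```

That's a handful of lines of model code; the hard part is editorial will, not technology.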

What I'm basically reading is that the journalism profession would be hurt by this because people may not be able to trust what they read if there were things omitted. So...what would the difference be? The NY Times is, depending on your outlook on things, a rather biased publication. I've never read it and believed everything.

But the options they're investigating seem to say: we want to cover our backs, but we're not that concerned that the truth actually be known. If I published something false and potentially harmful to someone, isn't it my duty to correct it at my expense and exonerate the person in question, rather than to cover my own back first? The fact that they are hesitant to do what Slate is doing in the first place causes me to distrust journalism even more.

This shouldn't have anything to do with actual or perceived bias in any news outlet. And making ad hominem statements doesn't get us to the heart of the issue: it is relatively easy for online newspapers to make corrections directly to an article and note that those corrections were made.

For Slate's response to the NY Times article, check out Don't blame the NY Times for your bad reputation.

Abi,

True, the issue at hand doesn't have to do with bias, but the issue of trusting a news outlet definitely has a lot to do with it. If someone perceives a slant in the way the news is being reported, or in the type of news being reported, then the trustworthiness of the source will be questioned by the reader. And many people feel very strongly about that in regard to the NY Times and other papers (the LA Times, the Washington Post, etc.).

And after all that, I heartily agree with your assessment about the heart of the issue: it is very easy to make corrections and they ought to be done.

George Franklin Fri, 7th of September, 2007

I can understand the NY Times and other large publications having to be responsible and held accountable, but the point missing here is the smaller blogs and publications that are not so high and mighty yet still have searchable content.

Litigation can only go so far. Who is to stop the millions of people adding searchable content to the internet each day? We can't have millions of lawsuits with everyone trying to sue each other. That would be ridiculous.

The real question is how to regulate content on the web. Who does it? What is acceptable? And how are people to be held accountable?

It's an international situation, so it isn't something one country can settle on its own; it's a global problem. There is so much false information available online already, how can that be fixed? What laws can be applied? What laws can be created to protect those being abused and hurt by this content?

This is an interesting place to start, but the story goes far beyond the well-known, "old school" newspapers.
