Suppose a webmaster excludes a duplicated page on his site using robots.txt or a meta robots exclusion, but then a user links to it anyway.  This is one of the problems with excluding duplicate content on a site.  Exclusion is, in fact, the method I typically use to eliminate the duplicate content created by breadcrumb navigation (see this blog entry for more information on that); both approaches are sketched below.
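For reference, here is a minimal sketch of the two exclusion methods mentioned above; the path is just a placeholder, not from any real site.  To block a crawler from a duplicate URL via robots.txt:

    User-agent: *
    Disallow: /products/duplicate-page.html

Or, to exclude a single page with a meta robots tag in its <head> (noindex asks engines to keep the page out of the index, while follow asks them to still crawl its links):

    <meta name="robots" content="noindex, follow">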

Redirecting duplicate content is possible in cases where the duplication is more of an error than anything else (URL canonicalization and the like); in this case, though, the page is essential to navigation (and thus cannot be 301-redirected), yet it should not be indexed because it is, in fact, a duplicate.  Ideally, people would only link to the version that is not excluded.  But how is a user supposed to know which version that is, and why should he care anyway?
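As an aside, here is a minimal sketch of what that 301 redirect looks like in the canonicalization case, assuming Apache with mod_rewrite and using example.com as a placeholder domain:

    # Send the non-www hostname to the canonical www version with a 301
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

That option is ruled out here precisely because the duplicate page still has to exist for visitors, which leaves exclusion as the only lever.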

Supposing someone does link to it, though, does that link still count as a vote for your site as a whole?  It would make sense to me, but I'm not sure.  Anyone care to comment?
