Tag Archive | "google"


Who Should We Believe? Jill, or Google?

Posted on 22 January 2011 by John Britsios

A recent thread on Jill Whalen’s forum sparked some controversial comments when she posted an anecdote regarding Google’s indexing of site search pages on her site. There’s nothing earth-shaking there, of course.

What’s interesting came from the comments and her responses to them. The first that caught my attention was:

There’s actually no use for the keyword tag for words that are already appearing on the page. The idea is to use it for keywords that don’t already appear on the page, but which might be relevant anyway. After all, if they’re already on the page, what good is it to use them again? (bolding mine)

That was my first WTF moment. I responded with:

Jill, as we both know, the purpose of the keywords meta tag implementation is to specify keywords that a search engine may use to improve the quality of search results. It provides a list of words or phrases about the contents of the Web page and provides some additional text for crawler-based search engines.

That said, the keywords placed there must be found within the content of the document. If you want to target semantically relevant keywords not found in the content of the page, the appropriate solution would be the implementation of “Common Tags”.
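For readers unfamiliar with the tag under discussion: a keywords meta tag sits in a page’s head section and takes this form (the keyword values below are purely illustrative):

```html
<head>
  <title>Example page</title>
  <!-- Hypothetical keyword values, for illustration only -->
  <meta name="keywords" content="seo, meta tags, search engine indexing" />
</head>
```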

She responded with:

Disagree. The meta keywords tag was originally created to provide a place for words that were not contained on the page. After all, if they’re already on the page, the search engine already know it’s relevant for those words.

So I asked her:

So if I understood you correctly, does the keywords meta tag serve the same, or partially the same, purpose as the “Common Tags”?

She responded with:

Since common tags are just something someone made up and not a real tag, I don’t really know.

At that point, I felt a headache coming on, and responded with this:

Well here is some info about CTags by Vanessa Fox.

She had said above that using relevant keywords not found in the content of the page is legitimate, so I felt I had to be more explicit:

The purpose of implementing keywords in a keywords meta tag is preliminary indexing; the tag was specifically conceived to exhaustively and completely catalogue HTML documents, not to determine semantically related words or to boost the overall semantic relevancy of a document.

Her response:

Since Yahoo isn’t doing search anymore, that tag has probably died as well.

I answered with:

I am afraid that I will have to disagree.

Then, in response to another poster’s comment, she added this:

Anyone who was paying attention has known that Google has never used the Meta keyword tag to know what a page is all about in terms of where it might show up in the search results.

As far as I know, they’ve never used it so it’s not something they declared that they’re suddenly not using.

That poster responded with:

So you’re saying that Google never read the meta keywords tag for its purposes? I know Google declared they suddenly stopped using it.

To which, she answered:

I think you’d be hard pressed to find this declaration from them anywhere.

He then offered this link:

Google does not use the keywords meta tag in web ranking.

The pertinent excerpt from that link (Google Webmaster Central Blog) is this:

“Because the keywords meta tag was so often abused, many years ago Google began disregarding the keywords meta tag.” (bolding mine)

So Google clearly DID at one time use the keywords meta tag. And Jill Whalen says she’s been in SEO since before Google was born. Hmmmmm…

That brought on another of those WTF moments. Her earlier comment, claiming that Common Tags aren’t “real” tags, was adequately answered by another poster:

…as W3C also has maintained that Common Tags continue to play a part in tagging and folksonomies for Resource Description Framework. SPARQL and its derivatives, for instance, still recognize C-Tags, and there haven’t been any discussions of discontinuing the practice.

At about that point, having been called out on a handful of inaccuracies, Jill closed the thread to further comments. However, there was still some discussion in the comments of Ben Pfeiffer’s article on SEO Round Table.

And many of the comments there show that some people are still confused about the true past and present nature of the keyword meta tag. Is it any wonder, when such misinformation is published?

So the question that arises is: if an SEO of her experience and supposed knowledge can state as fact opinions that are in such opposition to what Google and the W3C state… who do we believe?

  • Did Google NEVER use the meta keyword tag?
  • Are Common Tags not “real” tags?
  • Were meta keywords intended to be only words that DON’T appear on the page?

I know who I don’t believe. Do you?

Comments (13)


Google’s Supplemental Index Exists

Posted on 18 November 2009 by John Britsios

One way that you can measure your website’s SEO health is by figuring out whether your most important web pages (such as those that contain your biggest-selling services or products) have been placed in Google’s supplemental index.

Many people think that when Google ceased to label the supplemental results pages, that signaled the end of their supplemental index. False. Google made it clear in their article “Supplemental goes mainstream”, published at their Webmaster Central Blog, that:

“The distinction between the main and the supplemental index is therefore continuing to narrow. Given all the progress that we’ve been able to make so far, and thinking ahead to future improvements, we’ve decided to stop labeling these URLs as Supplemental Results. Of course, you will continue to benefit from Google’s supplemental index being deeper and fresher.”

The pages which are the first results for any SERPs are those in the main index. The only time you’ll find pages from the Google supplemental index is if there are very few or zero results for your chosen search term in the main index.

Furthermore, Google has a tendency to transfer old cached pages over to their supplemental index. These might be pages which aren’t even on your server any longer.

Bot Herding for PageRank Flow

To appear in Google’s main index, your web pages must have a certain (indeterminate) amount of PageRank or “juice”, in addition to and apart from other relevant factors. Google uses PageRank values to set crawling priorities and to determine whether a document belongs in the main or the supplemental index.

Matt Cutts, who heads up the Google Webspam Team, has this to say:

“PageRank is the primary factor determining whether a URL is in the main web index vs. the supplemental results.”

Once you understand the common causes behind supplemental pages, you will be able to determine which pages might be placed in the supplemental index. Then you can improve your website’s internal linking by adding links to those pages from fully indexed, more prominent pages, including your home page.

Effective Link Building

Andy Beal says something very similar to Matt Cutts:

“If you got 60,000 pages, and you only got ’this much’ PageRank, and you divide it [...he mumbles], some of them are going to be in the supplemental index. Given ‘this many people’ who link to you, we’re willing to include ‘this many’ pages in the main index.”
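The intuition behind that quote can be caricatured with a toy calculation (the numbers and the even split are purely illustrative; Google’s actual allocation is not public):

```python
# Toy model: a fixed "PageRank budget" spread across all of a site's pages.
# The even split is a deliberate oversimplification for illustration only.
def juice_per_page(total_juice: float, page_count: int) -> float:
    """Split a hypothetical PageRank budget evenly across pages."""
    return total_juice / page_count

print(juice_per_page(100.0, 1_000))   # → 0.1
print(juice_per_page(100.0, 60_000))  # roughly 0.00167
```

The more pages a fixed amount of inbound authority must cover, the less each page gets, and the more likely the marginal pages fall below the main-index threshold.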

An SEO professional or a Link Builder will tend to advise you that the most highly effective way of getting your pages out of the supplemental results is by creating unique, high quality content and then doing promotional work to acquire inbound links.

That’s right, but why go through all that trouble without first seeing just how far you’re able to get with the PageRank that your site possesses now?

There is a handful of internal link-based strategies that you can use to fight supplemental results. One highly effective and widely used strategy has been dubbed “Bot Herding.”

This is merely a methodology for improving your website’s navigation system by controlling the flow of PageRank, so as to enhance the prominence of your most valuable and important pages. You can achieve this through linking to them from pages within your domain, etc.
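A minimal sketch of the idea in markup (the paths here are hypothetical; in this era the common tactic was rel="nofollow" on links to low-value utility pages, though a noindex on the target page itself is the more robust option):

```html
<!-- Hypothetical site navigation: steer internal PageRank
     toward the pages you most want in the main index. -->
<ul>
  <!-- Plain crawlable links pass link equity to important pages -->
  <li><a href="/services/seo-audit">SEO Audit</a></li>
  <li><a href="/services/link-building">Link Building</a></li>
  <!-- Utility pages don't need to rank; keep them out of the flow -->
  <li><a href="/login" rel="nofollow">Login</a></li>
</ul>
```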

What commonly causes supplemental pages?

The main cause of supplemental results is, very simply, a lack of PageRank. Nevertheless, this is not the sole cause. There are several others:

  • Pages with no or very low PageRank;
  • Suspicious pages, including non-unique or irrelevant page content, heading tags, meta tags, links to bad neighborhoods, and so on;
  • Pages with canonicalization problems (duplicated content, too much content similarity);
  • Lengthy URLs that are not rewritten, especially ones with extensive parameters starting with a question mark (?) and separated by ampersands (&);
  • Pages with very little or zero original content;
  • Poor website navigation;
  • Keyword stuffing (using many irrelevant keywords);
  • Orphaned web pages which no one links to, including your own;
  • Error pages, if a site does not use If-Modified-Since, Last-Modified and/or Expires headers.
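On that last point, a server that honors conditional requests lets crawlers revalidate pages cheaply. A hypothetical exchange (dates and path are illustrative) looks like this:

```http
HTTP/1.1 200 OK
Last-Modified: Wed, 18 Nov 2009 08:00:00 GMT
Expires: Fri, 18 Dec 2009 08:00:00 GMT

(later, the crawler revalidates)

GET /products.html HTTP/1.1
If-Modified-Since: Wed, 18 Nov 2009 08:00:00 GMT

HTTP/1.1 304 Not Modified
```

The 304 response tells the bot the page is unchanged, so it does not need to re-download and re-evaluate it.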

If you have pages that have been placed in the supplemental index, you know what to do. Remove them from there, save yourself money and time, and increase your website’s rankings!

Comments (0)


Search Engines Not Displaying Your Description Meta Tag?

Posted on 23 October 2009 by John Britsios

What exactly is it that you see when one of the pages of your website appears in search results?

Most of the time, the text that you’ll find there is taken from your description meta tag. Depending on the search terms used to find your site, the results may show an excerpt of content from the page which contains the search terms – which is useful in determining if this page is relevant to the search.

Another thing which might show up in the search engines is an excerpt from the DMOZ/Open Directory Project’s description of your site, assuming that your site is listed by DMOZ. If you’d rather the search engines focus on the contents of your description meta tag rather than on this third-party description of your site, you can specify this by simply adding the following meta tag:

<meta name="robots" content="noodp" />

This meta tag is supported not only by Google, but also by Yahoo!, Bing, and other search engines. While we’re on the subject of Yahoo!, if your site is listed in the Yahoo directory, search results for your site will usually be taken from the Yahoo directory description of your site instead of your meta tags. This is also an easy problem to fix – just include the following meta tag:

<meta name="robots" content="noydir" />

If your description meta tag information isn’t coming up in the results for particular search engines, you can set meta tags for each engine’s bots, like so:

<meta name="googlebot" content="noodp" /> for Google

<meta name="slurp" content="noydir" /> for Yahoo!

<meta name="msnbot" content="noodp" /> for Bing

You can also create just one robots meta tag which specifies all of the attributes you’d like to include. Just separate these attributes by commas, as in this example:

<meta name="robots" content="noodp,noydir" />

As with anything relating to how your site is indexed, it can take a while for the text displayed when your site shows up in search engine results to change. Give the search engine bots a little time to come back and re-index your site and be patient – they will change sooner or later.

Comments (0)