Search engine optimization

From Wikipedia, the free encyclopedia

Search engine optimization (SEO) is the process of improving the visibility of a website or a web page in a search engine's "natural," or unpaid ("organic" or "algorithmic"), search results. In general, the earlier (or higher ranked on the search results page) and the more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users. SEO may target different kinds of search, including image search, local search, video search, academic search,[1] news search and industry-specific vertical search engines.

As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience. Optimizing a website may involve editing its content and HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

The acronym "SEOs" can refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site and site content, SEO tactics may be incorporated into website development and design. The term "search engine friendly" may be used to describe website designs, menus, content management systems, images, videos, shopping carts, and other elements that have been optimized for the purpose of search engine exposure.

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[2] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
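
The loop described here (fetch a page, index its words, queue its links for later crawling) is easy to sketch. The following is a minimal single-machine illustration in Python, not any engine's actual implementation; the regular-expression parsing and the page limit are simplifying assumptions.

    # Minimal sketch of the crawl-then-index loop described above.
    import re
    import urllib.request
    from collections import deque

    def crawl(seed_url, max_pages=10):
        scheduler = deque([seed_url])  # links queued for crawling at a later date
        index = {}                     # word -> list of (url, position) postings
        seen = set()
        while scheduler and len(seen) < max_pages:
            url = scheduler.popleft()
            if url in seen:
                continue
            seen.add(url)
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            # The "indexer": record each word and where on the page it occurs.
            for pos, word in enumerate(re.findall(r"[a-z0-9]+", html.lower())):
                index.setdefault(word, []).append((url, pos))
            # Extract links from the page and place them into the scheduler.
            scheduler.extend(re.findall(r'href="(http[^"]+)"', html))
        return index

A real indexer also stores per-word weights (for example, whether a term appears in the page title) rather than bare positions, and distributes the work across many machines.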

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997.[3]

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[5][unreliable source?] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[6]

By relying so much on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, allowing those results to be poor or manipulated would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[7] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
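
As a sketch of the random-surfer model (the cited paper states the formula in a slightly different, unnormalized form), each page p receives the score

    PR(p) = \frac{1 - d}{N} + d \sum_{q \to p} \frac{PR(q)}{L(q)}

where N is the total number of pages, the sum runs over the pages q that link to p, L(q) is the number of outbound links on q, and the damping factor d (typically around 0.85) is the probability that the surfer follows a link rather than jumping to a random page.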

Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[8] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[9]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals.[10] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. SEO service providers, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs.[11][12] SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.[13]

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged-in users.[14] In 2008, Bruce Clay said that "ranking is dead" because of personalized search. It would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[15]

In 2007, Google announced a campaign against paid links that transfer PageRank.[16] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[17] As a result of this change, using nofollow causes PageRank to evaporate rather than be redistributed. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting; other suggested solutions include the use of iframes, Flash, and JavaScript.[18]

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[19]

Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators had spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[20]

In February 2011, Google announced the "Panda" update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system that punishes sites whose content is not unique.[21]

Relationship with search engines

Yahoo and Google offices

By 1997, search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as AltaVista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[22]

Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEO service providers. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web,[23] was created to discuss and minimize the damaging effects of aggressive web content providers.

Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[24] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[25] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[26]

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. Major search engines provide information and guidelines to help with site optimization.[27][28] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website, and it also provides data on Google traffic to the website.[29] Bing Toolbox provides a way for webmasters to submit a sitemap and web feeds, allowing users to determine the crawl rate and how many pages have been indexed by their search engine.

Methods

Suppose each circle is a website, and an arrow is a link from one website to another, such that a user can click on a link within, say, website F to go to website B, but not vice versa. Search engines begin by assuming that each website has an equal chance of being chosen by a user. Next, crawlers examine which websites link to which other websites and guess that websites with more incoming links contain valuable information that users want.
Search engines use complex mathematical algorithms to guess which websites a user seeks, based in part on examination of how websites link to each other. Since website B is the recipient of numerous inbound links, B ranks highly and will come up early in a web search. Further, since B is popular, and has an outbound link to C, C ranks highly too.
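
This link analysis can be made concrete with a toy power-iteration computation. The graph below is a hypothetical one chosen to match the description above (B receives many inbound links and links out to C); it is an illustration, not the figure's exact graph.

    # Toy PageRank by power iteration over a hypothetical link graph.
    links = {                    # page -> pages it links to (assumed graph)
        "A": ["B"], "D": ["B"], "E": ["B"], "F": ["B"],
        "B": ["C"], "C": ["B"],
    }
    pages = sorted(links)
    d, n = 0.85, len(pages)
    rank = {p: 1.0 / n for p in pages}  # each site starts with an equal chance
    for _ in range(50):                 # iterate until the scores settle
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    print(sorted(rank, key=rank.get, reverse=True))  # B first, then C

Running this ranks B highest (roughly 0.47 of the total score) and C second (roughly 0.43), exactly the pattern the caption describes: many inbound links lift B, and B's single outbound link lifts C.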

Getting indexed

The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click.[30] Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.[31] Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review.[32] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[33]
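
A Sitemap of the kind submitted through Google Webmaster Tools is a plain XML file following the Sitemaps protocol. A minimal sketch, with a placeholder URL and date, looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/not-linked-anywhere.html</loc>
        <lastmod>2012-06-29</lastmod>
      </url>
    </urlset>

Listing a page in a Sitemap does not guarantee that it will be indexed; it only tells the crawler that the page exists.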

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[34]

Preventing crawling

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[35]
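
For illustration, a robots.txt excluding the kinds of pages mentioned above might read as follows (the paths are placeholders):

    User-agent: *
    Disallow: /cart/
    Disallow: /search/

To exclude an individual page from the index itself, the robots meta tag goes in that page's HTML head:

    <meta name="robots" content="noindex">

The distinction is that robots.txt controls crawling, while the meta tag controls indexing of a page the crawler has already fetched.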

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to the most important pages may improve its visibility.[36] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[36] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the "canonical" link element[37] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
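
The on-page elements just mentioned might appear together in a page's head as follows. The titles and URL are placeholder assumptions, and note that the canonical annotation is a link element rather than a true meta tag:

    <head>
      <title>Example Widgets | Acme Store</title>
      <meta name="description" content="Hand-made example widgets, shipped worldwide.">
      <link rel="canonical" href="http://www.example.com/widgets">
    </head>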

White hat versus black hat

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, which include spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO.[38] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[39]

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[27][28][40] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to divert the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[41] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[42] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.[43]

As a marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, depending on the site operator's goals.[44] A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[45]

SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[46] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010 Google made over 500 algorithm changes, almost 1.5 per day.[47] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[48] SEOmoz.org has suggested that "search marketers, in a twist of irony, receive a very small share of their traffic from search engines." Instead, their main sources of traffic are links from other websites.[49]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[50] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[51] As of 2006, Google had an 85–90% market share in Germany.[52] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[52] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[53] Google holds a similarly dominant market share in a number of other countries.

As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable markets where this is the case are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[52]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[54][55]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. Kinderstart's website was removed from Google's index prior to the lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[56][57]

Notes

  1. ^ Beel, Jöran; Gipp, Bela; Wilde, Erik (2010). "Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar and Co" (PDF). Journal of Scholarly Publishing. pp. 176–190. Retrieved 2010-04-18.
  2. ^ Brian Pinkerton. "Finding What People Want: Experiences with the WebCrawler" (PDF). The Second International WWW Conference Chicago, USA, October 17–20, 1994. Retrieved 2007-05-07.
  3. ^ Danny Sullivan (June 14, 2004). "Who Invented the Term "Search Engine Optimization"?". Search Engine Watch. Retrieved 2007-05-14. See Google groups thread.
  5. ^ Cory Doctorow (August 26, 2001). "Metacrap: Putting the torch to seven straw-men of the meta-utopia". e-LearningGuru. Archived from the original on 2007-04-09. Retrieved 2007-05-08.
  6. ^ Pringle, G.; Allison, L.; Dowe, D. (April 1998). "What is a tall poppy among web pages?". Proc. 7th Int. World Wide Web Conference. Retrieved 2007-05-08.
  7. ^ Brin, Sergey; Page, Larry (1998). "The Anatomy of a Large-Scale Hypertextual Web Search Engine". Proceedings of the seventh international conference on World Wide Web. pp. 107–117. Retrieved 2007-05-08.
  8. ^ Thompson, Bill (December 19, 2003). "Is Google good for you?". BBC News. Retrieved 2007-05-16.
  9. ^ Zoltan Gyongyi and Hector Garcia-Molina (2005). "Link Spam Alliances" (PDF). Proceedings of the 31st VLDB Conference, Trondheim, Norway. Retrieved 2007-05-09.
  10. ^ Hansell, Saul (June 3, 2007). "Google Keeps Tweaking Its Search Engine". New York Times. Retrieved 2007-06-06.
  11. ^ Danny Sullivan (September 29, 2005). "Rundown On Search Ranking Factors". Search Engine Watch. Retrieved 2007-05-08.
  12. ^ "Search Engine Ranking Factors V2". SEOmoz.org. April 2, 2007. Retrieved 2007-05-14.
  13. ^ Christine Churchill (November 23, 2005). "Understanding Search Engine Patents". Search Engine Watch. Retrieved 2007-05-08.
  14. ^ "Google Personalized Search Leaves Google Labs - Search Engine Watch (SEW)". searchenginewatch.com. Retrieved 2009-09-05.
  15. ^ "Will Personal Search Turn SEO On Its Ear?". www.webpronews.com. Retrieved 2009-09-05. {{cite web}}: Text "WebProNews" ignored (help)
  16. ^ "8 Things We Learned About Google PageRank". www.searchenginejournal.com. Retrieved 2009-08-17.
  17. ^ "PageRank sculpting". Matt Cutts. Retrieved 2010-01-12.
  18. ^ "Google Loses "Backwards Compatibility" On Paid Link Blocking & PageRank Sculpting". searchengineland.com. Retrieved 2009-08-17.
  19. ^ "Personalized Search for everyone". Google. Retrieved 2009-12-14.
  20. ^ "Relevance Meets Real Time Web". Google Blog.
  21. ^ "Google Search Quality Updates". Google Blog.
  22. ^ Laurie J. Flynn (November 11, 1996). "Desperately Seeking Surfers". New York Times. Retrieved 2007-05-09.
  23. ^ "AIRWeb". Adversarial Information Retrieval on the Web, annual conference. Retrieved 2007-05-09.
  24. ^ David Kesmodel (September 22, 2005). "Sites Get Dropped by Search Engines After Trying to 'Optimize' Rankings". Wall Street Journal. Retrieved 2008-07-30.
  25. ^ Adam L. Penenberg (September 8, 2005). "Legal Showdown in Search Fracas". Wired Magazine. Retrieved 2007-05-09.
  26. ^ Matt Cutts (February 2, 2006). "Confirming a penalty". mattcutts.com/blog. Retrieved 2007-05-09.
  27. ^ a b "Google's Guidelines on Site Design". google.com. Retrieved 2007-04-18.
  28. ^ a b "Guidelines for Successful Indexing". bing.com. Retrieved 2011-09-07.
  29. ^ "Sitemaps". google.com. Retrieved 2012-05-04.
  30. ^ "Submitting To Search Crawlers: Google, Yahoo, Ask & Microsoft's Live Search". Search Engine Watch. 2007-03-12. Retrieved 2007-05-15.
  31. ^ "Search Submit". searchmarketing.yahoo.com. Retrieved 2007-05-09.[dead link]
  32. ^ "Submitting To Directories: Yahoo & The Open Directory". Search Engine Watch. 2007-03-12. Retrieved 2007-05-15.
  33. ^ "What is a Sitemap file and why should I have one?". google.com. Retrieved 2007-03-19.
  34. ^ Cho, J.; Garcia-Molina, H. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web, Brisbane, Australia. Retrieved 2007-05-09.
  35. ^ "Newspapers Amok! New York Times Spamming Google? LA Times Hijacking Cars.com?". Search Engine Land. May 8, 2007. Retrieved 2007-05-09.
  36. ^ a b "The Most Important SEO Strategy - ClickZ". www.clickz.com. Retrieved 2010-04-18.
  37. ^ "Bing - Partnering to help solve duplicate content issues - Webmaster Blog - Bing Community". www.bing.com. Retrieved 2009-10-30.
  38. ^ Andrew Goodman. "Search Engine Showdown: Black hats vs. White hats at SES". SearchEngineWatch. Retrieved 2007-05-09.
  39. ^ Jill Whalen (November 16, 2004). "Black Hat/White Hat Search Engine Optimization". searchengineguide.com. Retrieved 2007-05-09.
  40. ^ "What's an SEO? Does Google recommend working with companies that offer to make my site Google-friendly?". google.com. Retrieved 2007-04-18.
  41. ^ Andy Hagans (November 8, 2005). "High Accessibility Is Effective Search Engine Optimization". A List Apart. Retrieved 2007-05-09.
  42. ^ Matt Cutts (February 4, 2006). "Ramping up on international webspam". mattcutts.com/blog. Retrieved 2007-05-09.
  43. ^ Matt Cutts (February 7, 2006). "Recent reinclusions". mattcutts.com/blog. Retrieved 2007-05-09.
  44. ^ "What SEO Isn't". blog.v7n.com. June 24, 2006. Retrieved 2007-05-16.
  45. ^ Melissa Burdon (March 13, 2007). "The Battle Between Search Engine Optimization and Conversion: Who Wins?". Grok.com. Retrieved 2007-05-09.
  46. ^ Andy Greenberg (April 30, 2007). "Condemned To Google Hell". Forbes. Archived from the original on 2007-05-02. Retrieved 2007-05-09.
  47. ^ Matt McGee (September 21, 2011). "Schmidt's testimony reveals how Google tests algorithm changes".
  48. ^ Jakob Nielsen (January 9, 2006). "Search Engines as Leeches on the Web". useit.com. Retrieved 2007-05-14.
  49. ^ "A survey of 25 blogs in the search space comparing external metrics to visitor tracking data". seomoz.org. Retrieved 2007-05-31.
  50. ^ Graham, Jefferson (2003-08-26). "The search engine that could". USA Today. Retrieved 2007-05-15.
  51. ^ Greg Jarboe (2007-02-22). "Stats Show Google Dominates the International Search Landscape". Search Engine Watch. Retrieved 2007-05-15.
  52. ^ a b c Mike Grehan (April 3, 2006). "Search Engine Optimizing for Europe". Click. Retrieved 2007-05-14.
  53. ^ Jack Schofield (2008-06-10). "Google UK closes in on 90% market share". London: Guardian. Retrieved 2008-06-10.
  54. ^ "Search King, Inc. v. Google Technology, Inc., CIV-02-1457-M" (PDF). docstoc.com. May 27, 2003. Retrieved 2008-05-23.
  55. ^ Stefanie Olsen (May 30, 2003). "Judge dismisses suit against Google". CNET. Retrieved 2007-05-10.
  56. ^ "Technology & Marketing Law Blog: KinderStart v. Google Dismissed—With Sanctions Against KinderStart's Counsel". blog.ericgoldman.org. Retrieved 2008-06-23.
  57. ^ "Technology & Marketing Law Blog: Google Sued Over Rankings—KinderStart.com v. Google". blog.ericgoldman.org. Retrieved 2008-06-23.