The Basics of Google Search


Webmasters (those in charge of maintaining a website) and website owners spend a lot of time, effort, and in some cases money trying to influence Google’s SERP (Search Engine Results Page) listings in the hope of earning a higher position. Understanding the overall process is the first step in determining where SEO (Search Engine Optimization) techniques can be used to improve your website’s SERP positioning.

An Overview of the Search Process

The speed at which Google performs is made possible by “parallel processing”. In non-geek language, all that means is that multiple computer calculations can be done at the same time, which allows Google and other search engines to scan thousands of web pages simultaneously rather than one at a time. The search process begins when the visitor enters his or her “query” into the engine. The techie’s definition of a query is a request for information from a database. Google’s web servers receive the query and shuttle it off to the Google Index, the database where information about potentially relevant web pages is maintained. Sorting through the index is the job of the Google document servers, which retrieve the stored pages that best match the query and produce the mini-description of each site’s content we all end up seeing in the SERP. Once the document servers have made their selections, the SERP results are returned to the visitor, all in a matter of seconds. Note that there are three distinct parts to this process: 1) the web crawler that actually searches and selects webpages for inclusion in the database, 2) the indexer that stores the webpages, and 3) the query processor that makes the final match between the search query and the search results.
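
To make the three-part process a little more concrete, here is a minimal sketch in Python. Everything in it (the sample pages, the build_index and process_query functions, and the snippet format) is made up for illustration; it is nothing like Google’s real infrastructure, but it shows how a stored index can be matched against a query and turned into a list of results with short descriptions.

pages = {
    "example.com/home": "welcome to our home page about search engines",
    "example.com/blog": "a blog post about how search engines crawl and index pages",
}

def build_index(pages):
    # The "indexer": map each word to the set of pages that contain it.
    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def process_query(query, index, pages):
    # The "query processor": keep only pages that contain every query word,
    # then build a mini-description from the first few words of each page.
    matches = set(pages)
    for word in query.lower().split():
        matches &= index.get(word, set())
    return [(url, " ".join(pages[url].split()[:6]) + " ...") for url in sorted(matches)]

index = build_index(pages)
print(process_query("search engines", index, pages))

Running it prints the toy pages that contain both query words, each paired with the first few words of its text, which here stands in for the SERP mini-description.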

Googlebot: the “Crawler”

Think of Googlebot as a software robot. Some liken it to a “spider” that crawls the World Wide Web in search of stuff to add to the Google Index or database. Googlebot finds pages by crawling the web in search of links or by “add URL (Uniform Resource Locator)” forms submitted directly to Google. The URL is simply an Internet address. The “add URL” process poses some difficulties for Google since some unscrupulous techies have devised their own software robots that continually bombard Google with “add URL” requests. It is the crawling process that helps Google “police” the web and return relevant content. Googlebot sorts through the links it finds on every webpage it visits — and that’s literally millions of pages — and stores them for later “deep” crawling. These subsequent crawls keep the Google Index fresh and up to date.
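
The crawling idea itself is simple enough to sketch. The toy link graph and the crawl function below are purely illustrative assumptions (a real crawler would fetch each page over the network and parse its HTML for links), but they show the basic loop: pull a URL off a queue, record it, and add any newly discovered links to the queue for a later, deeper visit.

from collections import deque

# A toy web: each URL maps to the list of links found on that page.
toy_web = {
    "site-a.com": ["site-b.com", "site-c.com"],
    "site-b.com": ["site-a.com", "site-d.com"],
    "site-c.com": [],
    "site-d.com": ["site-b.com"],
}

def crawl(seed_urls, link_graph):
    # Breadth-first crawl: visit each page once, queue newly found links.
    frontier = deque(seed_urls)   # pages waiting for a "deep" crawl
    visited = set()
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in link_graph.get(url, []):
            if link not in visited:
                frontier.append(link)
    return visited

print(crawl(["site-a.com"], toy_web))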

The Index

Googlebot sends the full text of the webpages it finds to the Index, but to improve subsequent search performance the Index does not store common words like the, and, or, and so on, and it also ignores certain punctuation marks. The resulting Index is sorted alphabetically, just like the index at the back of a published book.
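
Here is a minimal sketch of that idea in Python. The stop-word list and the index_page function are invented for the example (Google does not publish its actual list), but they show how common words and punctuation can be dropped before a page’s words are stored, and how the finished index reads back in alphabetical order.

import string

STOP_WORDS = {"the", "and", "or", "a", "of"}   # illustrative only, not Google's real list

def index_page(url, text, index):
    # Add a page's words to an inverted index, skipping stop words and punctuation.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    for word in cleaned.split():
        if word in STOP_WORDS:
            continue
        index.setdefault(word, set()).add(url)

index = {}
index_page("example.com/seo", "The basics of search, and how the index works.", index)

# Like the index at the back of a book, entries come out alphabetically.
for word in sorted(index):
    print(word, "->", index[word])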

The Search Query Processor

Here’s where the real work is done. The process is complex, it relies on computer algorithms, and its actual workings are not revealed by Google or the other search engine providers. But enough is known to allow SEO professionals and others to improve a site’s positioning in Google SERPs. Google ranks pages using several criteria, but the two generally seen as most easily influenced are PageRank and keyword density. PageRank, or PR, is a patented Google technology whereby Google ranks a webpage according to both the quality and the quantity of the links pointing to it. Google also ranks pages according to how well their text matches the keywords from the search query, again in terms of both quantity (keyword density) and accuracy. So if you want to improve your site’s SERP position, PageRank and keyword density are two places to start. Good luck!
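
Neither idea is magic, and both can be sketched in a few lines of Python. The pagerank function below is a simplified version of the published PageRank formula (the damping factor, the toy link graph, and the iteration count are illustrative assumptions, not Google’s production values), and keyword_density is simply the share of a page’s words that match a given keyword.

def pagerank(link_graph, damping=0.85, iterations=20):
    # Simplified PageRank: each page's score flows to the pages it links to.
    pages = list(link_graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, links in link_graph.items():
            if not links:
                continue
            share = rank[page] / len(links)
            for target in links:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

def keyword_density(text, keyword):
    # Fraction of the page's words that are the given keyword.
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

links = {
    "site-a.com": ["site-b.com"],
    "site-b.com": ["site-a.com", "site-c.com"],
    "site-c.com": ["site-a.com"],
}
print(pagerank(links))
print(keyword_density("search engines rank pages by links and by keywords", "links"))

In the toy graph, site-a.com ends up with the highest score because every other page links to it, which is exactly the intuition behind PageRank: links act as votes, and votes from highly ranked pages count for more.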

 
