
GoogleBot is actually Chrome: The Jig is Up!

Why a Search Giant decided to build the Fastest Browser ever…

Background

Over the years, Google has built one of the most impressive technology companies in the world, with products ranging from an Advertising Empire to one of the top Mobile Platforms (Android). They’ve raised a pile of money and acquired some of the most talented people in the world, all under one banner; and all of this was accomplished on the back of a single flagship offering: Google’s Web Search.

For those of you who have been under a rock for a decade or so, Google Search is probably the most popular search application anywhere in the world, owning roughly 65% market share in the US alone.

This freely available product is guided by a deceptively simple mission statement:

“To organize the world’s information and make it universally accessible and useful.” [i]

It’s not unreasonable to state that all of Google’s achievements were in some way dependent on Google’s Web Search. The proficiencies gained from large-scale web crawling and content categorization have made the engineering teams at Google experts and innovators in a wide variety of fields, from Computer Hardware (motherboards with batteries, mean time to failure on consumer hard drives) to Database Systems (Bigtable, MapReduce), and even File Systems (GFS).

They have also advanced the fields of Optical Character Recognition and Natural Language Processing through the Google Books and Google Translate projects, both of which were probably aided by the natural language data derived from web-scale crawling.

Google Ads would be impossible without Google Web Search; and without the revenue brought in from Google AdWords during the early stages, Google may not have been able to invest so heavily in acquisition and innovation.

Since search drives Google’s success, why would Google even bother to get involved in the browser wars? The browser market is fiercely competitive and not particularly lucrative. Building a browser also comes with the burden of supporting users, maintaining standards compliance, and mitigating privacy or regulatory concerns.

Yes, user data from the opt-in program is highly relevant and highly valuable to what they do… but this could have been achieved through plugins, add-ons, and integration agreements, as they have done with the Google Toolbar and the Firefox search licensing arrangement, without the extra burdens of managing and supporting the Chromium Project.

And if that weren’t enough to make you curious, Google didn’t just take WebKit’s Rendering Engine and call it Chrome… they created a new JavaScript engine known as V8. This new JavaScript engine is perhaps the fastest available, and Google chose to add engineering complexity by making it standalone and embeddable, incidentally making projects like NodeJS possible. The engineering energy that went into creating the V8 Engine is no small matter, as it was written entirely in C++ and designed to compile JavaScript to machine code for speed. The V8 Engine firmly demonstrates the level of talent Google has at its disposal.

Additional changes to the WebKit core were developed by Google, creating a unique browser where each “tab” runs in its own process, providing unparalleled fault tolerance in the event of unsafe script execution: a tab can crash without taking down the entire browsing session. This seemingly minor change would have required some major rewrites to the WebKit core, and again demonstrates a high level of familiarity not only with CPU architecture, but also with the underlying code of the WebKit project.

So honestly, why all this effort for a free product? Why fork WebKit and create Chromium instead of simply contributing directly to the WebKit project?

What if Chrome were in fact a repackaging of their search crawler, affectionately known as Googlebot, for the consumer environment? How much more valuable would user behavioral data be if it could be mapped to their Search Stack on a 1:1 basis?

Feasibility of a Native Browser Spider

The first question we should probably ask ourselves is “Can a browser even be used as a spider?” Fundamentally, a browser is just software that provides an implementation of the W3C DOM Specification via a rendering engine, plus a scripting engine to execute any additional scripting resources.

If the source code is available, it’s absolutely possible to edit the Rendering Engine to perform feature extraction tasks such as returning the Page TITLE element, or returning all ANCHOR elements on the page.

WebKit and The Mozilla Foundation both provide open source rendering and scripting engines, and the Chromium Project represents Google’s own foray into the market. The lesser known KHTML is yet another open source Rendering Engine.

WebKit is developed in C++, which is one of Google’s primary development languages and is in fact the same language the V8 Engine was written in. Interestingly, it also provides native Python bindings through Qt and GTK, making it even more attractive for Google. The Chromium Project even offers a remote administration and testing tool in Python, making it possible to interact with the browser from Python scripts across servers.

The Mozilla Foundation offers the open source Gecko as a rendering engine and SpiderMonkey (or, in Java, Rhino) as a scripting engine, which seem to have GTK bindings via MozEmbed. I haven’t found much documentation about a MozEmbed Qt binding; there seems to be one, but it may not be widely used.

Though WebKit was only open sourced in 2005, KHTML and Gecko have been around for quite a long while; KHTML was the initial basis for WebKit/Safari prior to Apple’s open sourcing of the project.

If we wanted to remove specific engineering expertise from the equation, the Selenium Project also provides a web browser automation toolkit that Google could leverage, though I believe the engineering overhead would make this an unwieldy solution. Direct integration with the browser would provide the best performance… especially when your browser is engineered to have a dedicated process per tab and a lightning fast JavaScript engine. Mozilla’s project offers a command line interface which can be used remotely, though it seems to be far less usable than a direct implementation.
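
As a rough illustration of what browser-driven feature extraction looks like in practice, here is a minimal sketch using Selenium’s Python bindings to load a page in Chrome and pull out the TITLE and ANCHOR elements discussed above. The URL and the headless ChromeDriver setup are placeholders; this is not Google’s implementation, just the kind of tooling anyone could assemble from the same open source parts.

```python
# Minimal sketch: drive a real browser with Selenium (Python bindings) and
# extract simple ranking features (the TITLE and all ANCHOR elements).
# Assumes ChromeDriver is installed; the URL is a placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/")

    # The page TITLE, exactly as the browser rendered it
    print("TITLE:", driver.title)

    # Every ANCHOR element, with its resolved href and visible anchor text
    for anchor in driver.find_elements(By.TAG_NAME, "a"):
        print(anchor.get_attribute("href"), "-", anchor.text)
finally:
    driver.quit()
```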

Based on the age of the open source rendering engines and Google’s unique engineering expertise, it’s certainly possible for Google to use a native, DOM-compliant browser in their crawling or indexing activities. In fact, thanks to Chromium Remote, Selenium, and other related regression testing projects, it’s possible for a skilled SEO professional to attempt to implement native browser-based SEO tools.

Whether Google leverages a simple browser plugin, a custom implementation of a browser, or a special remote testing suite to access the browser remotely, having access to a native and compliant DOM implementation is of great benefit to anyone seeking to extract ranking features from web pages.

Advantages of a Native Browser as a Spider

Having established that it’s entirely possible for Google to implement a native browser as a spider or indexing utility, we now have to ask ourselves, “What are the benefits of doing so?” It’s certainly not an easy thing to do, after all, so the payoff should be significant.

A native browser implementation enables JavaScript and CSS Support, provides full DOM Access, and even opens up the possibility to index/access dynamic content including AJAX. Deep integrations for Flash, PDF, and other non-HTML content are available via plugins.

Page and dependency load times are trackable, while DOM access provides the ability to manipulate form elements or extract data from elements within a page. Using the getComputedStyle method, it’s possible to use simple JavaScript to detect hidden elements, even if they are hidden after the page loads via a JavaScript transformation.
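
To make that concrete, here is a hedged sketch of how hidden-element detection could work from a Selenium-driven browser. The CSS properties checked (display, visibility, opacity) are the obvious candidates, not a claim about which signals Google actually inspects, and the URL is a placeholder.

```python
# Sketch: load a page in a real browser, then ask for the *computed* style of
# each anchor so elements hidden by CSS or later JavaScript are caught.
from selenium import webdriver
from selenium.webdriver.common.by import By

HIDDEN_CHECK = """
    var style = window.getComputedStyle(arguments[0]);
    return (style.display === 'none' ||
            style.visibility === 'hidden' ||
            parseFloat(style.opacity) === 0);
"""

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/")
    for anchor in driver.find_elements(By.TAG_NAME, "a"):
        # The browser evaluates the final, post-script style for this element
        if driver.execute_script(HIDDEN_CHECK, anchor):
            print("Possibly hidden link:", anchor.get_attribute("href"))
finally:
    driver.quit()
```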

There are synthetic implementations of the DOM, such as jsdom, BeautifulSoup, and HTMLUnit, which offer some of the benefits of a native DOM implementation, such as extracting elements by name or id, or filling and clicking elements, but they lack the ability to detect DOM transformations made by scripting languages or to extract computed styles. These missing features are critical for attempting to detect hidden links or text with a suitable degree of accuracy. These synthetic DOM implementations also do not afford any access to plugins or AJAX content.
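
For contrast, a purely static parse illustrates the limitation: a parser like BeautifulSoup will happily pull out the same title and anchors from raw HTML, but a link that a script later hides looks identical to a visible one, because nothing is ever executed and no styles are computed. The HTML below is a made-up snippet for illustration.

```python
# Sketch: a static HTML parse has no notion of computed style or script effects.
from bs4 import BeautifulSoup

html = """
<html><head><title>Example</title>
<script>
  // In a real browser this hides the second link once the DOM is ready.
  // A static parser never executes it.
  window.addEventListener('DOMContentLoaded', function () {
    document.getElementById('sneaky').style.display = 'none';
  });
</script></head>
<body>
  <a href="/visible">A normal link</a>
  <a id="sneaky" href="/hidden">A link hidden at runtime</a>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
print("TITLE:", soup.title.string)
for anchor in soup.find_all("a"):
    # Both links look the same here; the runtime hiding is invisible to us.
    print(anchor.get("href"), "-", anchor.get_text(strip=True))
```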

Perhaps the most important benefit of using a native browser for crawling and indexing documents is that it closely emulates the user experience across a host of parameters. Page load time, final markup and content, and even the positions of the elements within the DOM will mirror the experience of a user almost 1:1; barring any non-standard implementations of the DOM in the rendering engine.

Making the Case: Googlebot is a Browser

So far we’ve seen that it’s not only viable, but of substantial benefit to use a web browser for crawling; but what evidence do we have that Google has ever considered this? For this we can turn to the patent filings by Google and their competitors in the search sphere.

In the patent “Document segmentation based on visual gaps” [ii] we see Google discussing Computer Vision methods for determining the structure of a page and separating content, some of which rely on an understanding of the underlying DOM of the page.

“[0038] In situations in which document 500 is a web page, document 500 may be generated using a markup language, such as HTML. The particular HTML elements and style used to layout different web pages varies greatly. Although HTML is based on a hierarchical document object model (DOM), the hierarchy of the DOM is not necessarily indicative of the visual layout or visual segmentation of the document.” [emphasis added]

“[0039] Segmentation component 230 may generate a visual model of the candidate document (act 403). The visual model may be particularly based on visual gaps or separators, such as white space, in the document. In the context of HTML, for instance, different HTML elements may be assigned various weights (numerical values) that attempt to quantify the magnitude of the visual gap introduced into the rendered document. In one implementation, larger weights may indicate larger visual gaps. The weights may be determined in a number of ways. The weights may, for instance, be determined by subjective analysis of a number of HTML documents for HTML elements that tend to visually separate the documents. Based on this subjective analysis weights may be initially assigned and then modified (“tweaked”) until documents are acceptably segmented. Other techniques for generating appropriate weights may also be used, such as based on examination of the behavior or source code of Web browser software or using a labeled corpus of hand-segmented web pages to automatically set weights through a machine learning process.” [emphasis added]

This patent describes a process of visual segmentation which may rely on the behavior of a browser, and which showcases an understanding of the DOM and its shortcomings. While not a smoking gun, it clearly shows that Google is familiar with the DOM and has considered the value that a browser can bring to their analysis of a document.

Google’s Instant Preview product may well be a byproduct of this research into visual gaps and page structure, though we’ll return to that a little later.

Browsers don’t just render the DOM hierarchy of HTML; they also apply transformations via CSS and JavaScript, and for Google to extract the most meaningful features from a web page it would be necessary to have access to these transformations.

In the patent “Ranking documents based on user behavior and/or feature data” [iii]  we see the following claims:

“The document information may also, or alternatively, include feature data associated with features of documents (“source documents”), links in the source documents, and possibly documents pointed to by these links (“target documents”). Examples of features associated with a link might include the font size of the anchor text associated with the link; the position of the link (measured, for example, in a HTML list, in running text, above or below the first screenful viewed on an 800.times.600 browser display, side (top, bottom, left, right) of document, in a footer, in a sidebar, etc.); if the link is in a list, the position of the link in the list; font color and/or attributes of the link (e.g., italics, gray, same color as background, etc.); number of words in anchor text associated with the link; actual words in the anchor text associated with the link; commerciality of the anchor text associated with the link; type of the link (e.g., image link); if the link is associated with an image (i.e., image link), the aspect ratio of the image; the context of a few words before and/or after the link; a topical cluster with which the anchor text of the link is associated; whether the link leads somewhere on the same host or domain; if the link leads to somewhere on the same domain, whether the link URL is shorter than the referring URL; and/or whether the link URL embeds another URL (e.g., for server-side redirection). This list is not exhaustive and may include more, less, or different features associated with a link.” [emphasis added]

Many of these features, such as the font color, styling, and size, would require direct access to the Cascading Style Sheets and any JavaScript transformations related to the element in question. Most modern browsers support a getComputedStyle method within their DOM implementation, allowing a client to request the final style of an element based on current script transformations.


Another interesting patent, “Searching through content which is accessible through web-based forms,” [iv] related to Google’s ability to index content behind HTML FORMs with GET methods, [v] gets us closer to uncovering the technology behind Google’s crawlers. This patent describes a process where a crawler can actually manipulate a GET form for the purposes of crawling, or even redirect a search query through the form for the end user.

“Many web sites often use JavaScript to modify the method invocation string before form submission. This is done to prevent each crawling of their web forms. These web forms cannot be automatically invoked easily. In various embodiments, to get around this impediment, a JavaScript emulation engine is used. In one implementation, a simple browser client is invoked, which in turn invokes a JavaScript engine. As part of the description of any web form, JavaScript fragments on a web page are also recorded. Before invoking a web form, the script on the emulation engine is executed to get the modified invocation string if any. The parameters (and their mapped words or internal values) are then concatenated to the invocation string along with the values for any hidden inputs.”

Here we see Google claiming an implementation of a simple browser client, with a JavaScript engine, further confirming that their crawlers can, and may, take advantage of both a native DOM via a browser and a JavaScript engine.
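
As a hedged sketch of what “invoking” a GET form from a browser-based crawler might look like, here is a browser session filling and submitting a hypothetical search form from Python. The page URL and the field name q are illustrative, not taken from the patent; the point is that any onsubmit JavaScript runs exactly as it would for a user.

```python
# Sketch: let the browser fill and submit a GET form, so any onsubmit
# JavaScript that rewrites the invocation string runs normally.
# The page URL and the field name "q" are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/search-page")
    query_box = driver.find_element(By.NAME, "q")
    query_box.send_keys("blue widgets")
    query_box.submit()  # submits the enclosing form, scripts and all

    # The resulting URL, with its concatenated parameters, is a crawlable target.
    print("Discovered URL:", driver.current_url)
finally:
    driver.quit()
```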

Patents don’t necessarily offer proof of implementation, though, as they serve mainly to document and protect intellectual property. It’s entirely possible that these patents are merely proofs of concept designed to protect future innovations within Google. To find evidence of implementation, we’ll have to dig deeper…

In addition to the previously mentioned Webmaster Central article “Crawling through HTML Forms” from 2008, we find evidence of the implementation of both patents thanks to the efforts of SEO professionals sharing their test results with the SEO community via SEOmoz’s popular YouMoz blog.

In “New Reality: Google Follows Links in JavaScript” [vi] we see evidence that not only is Google executing and crawling links within JavaScript, it’s actually able to do so within jQuery AJAX requests. This example is interesting, as it dates to roughly the same period as the Webmaster Central announcement and highlights some very advanced JavaScript handling… even today it’s commonly believed by SEO professionals that Google has no access to AJAX content, though it appears they have had access to at least ANCHOR elements generated by AJAX since 2008/2009. This post also seems to confirm that the techniques of the patent “Ranking documents based on user behavior and/or feature data” are being deployed, as shorter URLs under the parent domain were extracted and crawled first, in accordance with the following:

“whether the link leads somewhere on the same host or domain; if the link leads to somewhere on the same domain, whether the link URL is shorter than the referring URL;”
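
A toy illustration of that prioritization signal, assuming (as the test described above suggests) that same-domain links with URLs shorter than the referring page are queued first. The scoring below is invented purely for illustration; it is obviously not Google’s actual scheduler.

```python
# Sketch: order discovered links so that same-domain URLs shorter than the
# referring URL are crawled first, per the feature quoted from the patent.
# The scoring is a made-up illustration, not Google's scheduler.
from urllib.parse import urlparse

def crawl_priority(referring_url: str, link_url: str) -> int:
    """Lower numbers are crawled earlier."""
    same_host = urlparse(referring_url).netloc == urlparse(link_url).netloc
    shorter = len(link_url) < len(referring_url)
    if same_host and shorter:
        return 0   # e.g. a parent or category page on the same site
    if same_host:
        return 1
    return 2       # external links last, in this toy model

referrer = "https://example.com/widgets/blue/large/index.html"
links = [
    "https://example.com/widgets/",
    "https://other-site.example.org/partners",
    "https://example.com/widgets/blue/large/spec-sheet-2011-edition.html",
]

for url in sorted(links, key=lambda u: crawl_priority(referrer, u)):
    print(url)
```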

A more recent article in Forbes, “Google Isn’t Just Reading Your Links, It’s Now Running Your Code,” [vii] discusses some advances made under the “Caffeine” infrastructure (previewed in 2009 and completed in 2010), further confirming that Google doesn’t just parse JavaScript, but executes it in an attempt to understand the underlying behavior. The article briefly discusses the difficulties of understanding JavaScript without execution, and makes the case that execution is the only way to really understand what a snippet is attempting to do, and when it may start or stop its operations.

These assessments lead us to conclude that Google is absolutely parsing, and even periodically executing, JavaScript… but does that necessarily require a browser? V8 is a standalone, embeddable JavaScript execution engine, after all.

This is the moment where we return to Google’s “Instant Preview” product, which may in fact be a practical extension of the visual gaps patent, re-packaged as an enhancement to Google Web Search. Google’s own FAQ for Instant Previews indicates that visual previews can be generated “on-the-fly” by user request, or during a crawl by Googlebot. The Instant Preview will include keyword highlighting relevant to the user’s query. The FAQ goes on to explain how you can tell the difference between an “on-the-fly” preview and a crawled preview:

“Q: What’s the difference between on-the-fly and crawled previews?
A:
Crawled previews are generated with content fetched with Googlebot. When we generate previews on-the-fly based on user requests, we use the “Google Web Preview” user-agent, currently:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/[WEBKIT_VERSION] (KHTML, like Gecko; Google Web Preview) Chrome/[CHROME_VERSION] Safari/[WEBKIT_VERSION]
As on-the-fly rendering is only done based on a user request (when a user activates previews), it’s possible that it will include embedded content which may be blocked from Googlebot using a robots.txt file.”

Notice the Chrome/WebKit user-agent, flagged as Mozilla compatible. The standard GoogleBot user-agent also identifies itself as Mozilla compatible, without explicitly stating whether or not it is in fact Chrome. While we still do not have definitive evidence that the Chromium project is the underlying source for GoogleBot, we have certainly verified that GoogleBot is a browser.
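
For anyone who wants to watch for these user-agents in their own server logs, here is a hedged sketch that tallies requests by whether the user-agent string mentions Googlebot, the Google Web Preview string quoted above, or a generic Chrome token. The log file name and the assumption that the user-agent is the last quoted field (as in the common “combined” log format) are both placeholders; adapt them to your own setup.

```python
# Sketch: tally which Google-related user-agents hit a site, reading an access
# log in the common "combined" format. The filename and field position are
# assumptions; adjust them for your own server.
import re
from collections import Counter

UA_PATTERN = re.compile(r'"([^"]*)"$')  # last quoted field = user-agent

def classify(user_agent: str) -> str:
    if "Google Web Preview" in user_agent:
        return "Google Web Preview (on-the-fly render)"
    if "Googlebot" in user_agent:
        return "Googlebot (crawl)"
    if "Chrome/" in user_agent:
        return "Chrome-like browser"
    return "other"

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_PATTERN.search(line.strip())
        if match:
            counts[classify(match.group(1))] += 1

for label, total in counts.most_common():
    print(f"{total:8d}  {label}")
```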

Evolution of a Web Crawler

BackRub Era, pre-2000

A Python based crawler, designed primarily to extract links for citation analysis. This is the birth of PageRank.

Early Google, 2000 to 2003

Early on, Google is primarily streamlining what was BackRub. The team is probably converting the crawler to C++, working on a distributed crawling system, and overcoming the massive architecture challenges related to web-scale crawling and indexing.

As time goes on, Google is probably looking for ways to improve their feature extraction, cope with the issues inherent in the initial PageRank algorithm, and expand their indexing beyond citation analysis. Crawlers are probably still sophisticated parsing scripts with an artificial DOM implementation, based largely around pattern matching.

2003 sees the first named Google update: Boston. This heralds a habit of naming and announcing major updates and engaging the search community at large.

Google 2003 to 2006

In April of 2003 the Cassandra update arrives, offering changes to the algorithm to detect hidden links and text. At this stage, Google would definitely benefit from having a native browser, and Mozilla 1.0 is already available under an open source license.

In 2004 we see the “Austin” update, which again implements functionality that hints at a native browser behind the crawler. Austin attempts to put an end to hidden text. 2004 is also the year of the Google IPO.

2005 sees Google hiring away a key Mozilla developer, leading to suspicions of a Google browser, and it also brings the Big Daddy update. Big Daddy is billed as an infrastructure update, and its rollout carries over into 2006. WebKit is officially open sourced in June 2005, and Big Daddy begins in December; if Google were shifting away from Gecko/Mozilla in favor of WebKit, this would be the time to do so.

Google 2006 to 2009

2006 through 2008 represent feature enhancement updates; Google spends some time revamping how we interact with search. They’ve got a lot of data, and now they’re sharing it to make the user experience richer and more useful. We see universal search expanding the types of information returned, and Suggest reveals the depth of Google’s natural language processing capabilities.

2006 brings one of the first sightings of Googlebot/2.1 with its Mozilla/5.0-compatible user-agent.

Webmasters also begin noticing new behavior, such as GoogleBot requesting CSS resources. [viii] It is in April 2008 that Google announces their ability to crawl through HTML forms and parse some JavaScript.

During this seemingly slow period for major algorithm or infrastructure updates, the Chromium and V8 projects are introduced, launching on September 2nd, 2008. Chrome is touted as one of the fastest browsers ever, thanks in part to the V8 JavaScript engine.

Interestingly, 2009 signals what will soon be a major shift in Google’s infrastructure, with real-time indexing and a preview of Caffeine. The innovations and advances in the Chromium project are likely partially responsible for this great leap forward, as crawling and feature extraction can essentially happen as one step through the use of Chrome as a crawler. With a real-time, flexible MapReduce queue it would certainly be possible to index and rank documents in near real time.

Google 2009 to Present

2010 saw the full rollout of Caffeine, a major infrastructure update for Google enabling near real-time indexing. Google Instant and Instant Preview were also launched. Instant Preview offers the most compelling evidence of WebKit/Chrome in the search stack and will probably lead to some very interesting algorithm updates in the future as Google gathers data and refines their Computer Vision techniques.

Content quality and social signals have become increasingly important to Google, as showcased by the Panda update and the DecorMyEyes scandal. I expect social to become even more important, especially as Google+ enables Google to construct behavioral profiles and establish author identity with more certainty. We’ll certainly see social media’s equivalent of link spam in due time.

Conclusions

It seems the evidence leads us to only one logical conclusion: Googlebot is a browser. Which browser is anyone’s guess, though we have some clues that we can use to draw some solid inferences. The user-agent makes Mozilla’s Gecko engine a possibility, and the standalone nature of V8 doesn’t entirely rule it out.

Another nod in favor of Gecko is the age and stability of the project; Gecko’s source has been open since the late 1990s, while WebKit has only been open source since 2005. Google has also hired many talented Mozilla engineers over the years. The recently leaked Search Tester document also demonstrates that Google requires its quality testers to lean on Firefox for their query testing.

However, based on the proposed timeline, it’s not strictly necessary for the underlying code base to have been available to Google prior to 2005. This is especially true as the V8 Engine and Chromium were announced simultaneously and precede a major Google update; Google could have easily shifted gears and implemented Chrome during this time.

This becomes even more plausible when you consider the major changes made to Chrome; each tab running in its own process creates a high level of fault tolerance and redundancy. If a GoogleBot instance were a single server running Chrome via Chromium Remote, with say 3 to 5 tabs, it would be possible for Google to provide parallel crawling and indexing with minimal risk. If a tab crashes, the other tabs are isolated and protected; this behavior is not available in even the most recent editions of Firefox.

In short, Chrome was engineered for parallel, fault-isolated crawling; lightning fast browsing is just a pleasant side effect for users.
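
Purely as a thought experiment on that architecture, here is a hedged sketch of what a small, fault-isolated crawl pool might look like if you built one yourself from off-the-shelf parts: a handful of independent browser sessions standing in for the “3 to 5 tabs,” with any crash contained to its own worker. None of this is a claim about Google’s internal design.

```python
# Sketch: a small pool of independent browser sessions fetching pages in
# parallel. A failure in one session never takes down the others, loosely
# mirroring Chrome's per-tab isolation. Illustrative only.
from concurrent.futures import ThreadPoolExecutor, as_completed
from selenium import webdriver

URLS = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/products",
]

def crawl_one(url: str) -> str:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)  # one isolated session per task
    try:
        driver.get(url)
        return f"{url} -> {driver.title}"
    finally:
        driver.quit()

# Three workers, loosely standing in for the "3 to 5 tabs" above.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(crawl_one, url): url for url in URLS}
    for future in as_completed(futures):
        try:
            print(future.result())
        except Exception as exc:  # one failed fetch never stops the rest
            print(f"{futures[future]} failed: {exc}")
```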

Some Closing Thoughts

The importance of Google Web Search to Google’s entire empire cannot be overstated. Whether it provides revenue opportunities, data to mine, or simply presents technological hurdles leading to the next generation of innovative solutions, Google is fueled by Web Search.

Another important takeaway is that Google is highly dependent on open source software to fuel their innovations, and isn’t afraid to borrow what it needs to create something that will change the world. It’s not all sunshine and roses though; Google has offended many important players, including the late Steve Jobs and Sun. They’ve run afoul of federal regulators, and are a total privacy quagmire. Despite their reliance on open source software and their love of data, Google is still very much a walled garden, and there’s lots of room for the next generation to do what they do, only better.

Big Data, and the ability to accurately extract it, classify it, and store it, have enabled Google to become a leader in so many fields, and for the first time in history the same tools and techniques are at the fingertips of the average developer.

Projects like Hadoop bring MapReduce into the Open Source arena, while the NoSQL Movement and the growth of the Cloud are making it possible for anyone to build a mini-Googleplex. Projects like NLTK bring Natural Language Processing out of the University and into the hands of mad scientists like Michael King, and the explosion of APIs is making more and more data available without the added complexity of crawling.

So what are we waiting for? If you’re doing SEO for enterprises, Googlebot plays a massive role in your crawling strategies. The next iteration to consider is how Googlebot’s desktop and mobile crawlers might shape a mobile-only indexing environment. Let’s move forward and build the future.

Thanks and Acknowledgements:

Thanks to Bill Slawski of SEO by the Sea for pointing me at some very relevant patents, and being so willing to share. He’s quite a nice fellow.

Special thanks goes out to Mike King for encouraging me to keep pecking away at this crazy idea, and for always being available to hear my zany ideas. It’s a privilege to call him a colleague, but it’s truly an honor to call him a friend.

References:


[i] “Google Enterprise.” Google. Web. 28 Oct. 2011. <https://www.google.com/enterprise/whygoogle.html>.

[ii] “Document Segmentation Based on Visual Gaps.” Google. Web. 28 Oct. 2011. <https://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=1&f=G&l=50&d=PG01&p=1&S1=20060149775&OS=20060149775&RS=20060149775>.

[iii] “Ranking documents based on user behavior and/or feature data.” Google. Web. 28 Oct. 2011. <https://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,716,225.PN.&OS=pn/7,716,225&RS=PN/7,716,225>.

[iv] “Searching through content which is accessible through web-based forms.” Google. Web. 28 Oct. 2011. <https://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=8&f=G&l=50&d=PTXT&p=1&p=1&S1=%28%28browser+AND+crawler%29+AND+google%29&OS=browser+AND+crawler+AND+google&RS=%28%28browser+AND+crawler%29+AND+google%29>.

[v] “Crawling through HTML Forms.” Official Google Webmaster Central Blog. Web. 28 Oct. 2011. <https://googlewebmastercentral.blogspot.com/2008/04/crawling-through-html-forms.html>.

[vi] “New Reality: Google Follows Links in JavaScript. – YOUmoz | SEOmoz.” SEO Software. Simplified. | SEOmoz. Web. 29 Oct. 2011. <https://www.seomoz.org/ugc/new-reality-google-follows-links-in-javascript-4930>.

[vii]“Google Isn’t Just Reading Your Links, It’s Now Running Your Code – Forbes.” Information for the World’s Business Leaders – Forbes.com. Web. 29 Oct. 2011. <https://www.forbes.com/sites/velocity/2010/06/25/google-isnt-just-reading-your-links-its-now-running-your-code/>.

[viii]“GoogleBot Requested a CSS File! – Updated – EKstreme.com.” Webmaster, SEO, Tools and Tutorials. Web. 29 Oct. 2011. <https://ekstreme.com/thingsofsorts/seosem/googlebot-requested-a-css-file>.

Comments

  • Pieter van Diggele

    “Bill Slawski of SEO By the SEO”

    Second SEO should be Sea i guess 🙂

    Nice article, quite technical for me though. Does the knowledge that Google Bot is a browser have any consequences for SEO tactics?

    • josh-u-a

      Haha thanks Pieter, I corrected it.

      The understanding that Google is deploying a browser during crawls absolutely changes the game for SEO; it means they have access to JavaScript and CSS based transformations and dynamic AJAX Content.

      Many Siloing Tactics used for shaping the flow of Link Popularity within a site deployed JavaScript-based links to try and keep them from being crawled, and this is pretty much useless if they’re deploying a browser crawler.

      It can also affect progressive enhancement techniques like sIFR and swfObject-based replacements, since a browser would execute the scripts that handle the dynamic replacements.

      • mike

        Interesting point on progressive enhancement techniques – do you think that search engines will likely take the easier routes to understanding content (e.g. the HTML text associated with swfObject fallbacks & HTML text behind image replacement)?

        • josh-u-a

          Hello Mike,

          Google already acknowledges they have the ability to access textual content within Flash, and individuals have come forward since this paper launched with evidence that Google is also executing Flash Movies and extracting links from within… so the answer is a definite yes on that.

          For some examples of Link Extraction within Flash by Google, check out Mike King(iPullRank)’s recent post on SEOMoz: https://www.seomoz.org/blog/just-how-smart-are-search-robots


  •

    Thanks, Josh, for your work on this. It’s a good read and I hope you get value out of having done the work.

    The traditional Engineering approach to problem solving starts with a specific issue, seeks to understand it in light of the larger general issues, and then attempts to solve the specific problem as a specific case of the general problem (using a model to define a general solution, and constraining the model to create the specific problem).

    If Google works this way, they identify a need to, say, qualify js links (specific problem). They study that js rendered links problem as part of the larger pages-are-rendered-differently-than-coded issue, and create a model (approximation) for the bigger rendering issue. Leadership branches the work and sets priorities, while a team refines the broader general model (and seeks a general solution to the overall rendering problem). Meanwhile, the team working on the specific solution (js links) models that and solves it. All learnings are shared across general and specific projects, and the models all improve.

    We watch this as SEOs: we see the initial work, the initial solutions, and, as you have outlined here, we see how the overall team (probably) attacks the general solution as opportunity dictates and learning supports.

    So even if they are not yet “rendering” pages as a general rule, they are almost certainly developing that ability, and rendering specific aspects as is needed to solve specific problems (js links, DOM manipulations, various AJAX issues, and even user-centric issues like page speed and page segmentation content quality issues).

    Key for SEO is to attack very specific aspects of interest to SEO, and closely examine bot vs. rendering behavior expressed by Google (what are crawlers/spiders doing on site, what is preview showing, what is GA showing, what is search showing, etc). This may reveal clues of both Google’s progress on the general solution (rendering all pages at crawl time and using that info to better index) and Google’s dependence on the underlying assumptions built into the specific models (that’s where the SEO tactics come in). It is generally safe to assume that some of those specific model assumptions will be adopted into each iteration of Google’s general solution (as it gets incrementally implemented), making tested SEO tactics better values for the long term.

    Thanks for the post, and thanks for putting it on this blog (instead of into an SEO community or aggregator site). I look forward to reading more in the future.

  •

    You’re welcome, Josh.

    Definitely a fascinating and timely post, especially with an announcement from Google after you posted on them experimenting with different ways to access content through javascript:

    https://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html

    Google’s acquisition of the patents from Wowd even hints at the use of a browser add-on to collect data about which pages people are visiting in real time or near real time to identify hot topics and pages. Wowd had no crawlers and depended upon a browser add-on to do their indexing for them.

    Thanks.

    •

      I wonder if Googlebot will ever be a bot that can fully read and crawl flash content?


  • Craig Addyman

    Thanks for this post – It really gets you thinking!

    The role of an SEO would change quite dramatically and I would have thought getting your site technically correct would take a back seat.


  •

    I found this post from your recent SEOmoz post. I’ve definitely had my suspicions that they were more capable than they were letting on, but I didn’t know they were this advanced. 10 years from now search robots might be as good as quality readers if they keep this up.


  •

    Thanks josh-u-a for a great post. It was too technical for me to go into all the details, but while reading it I understood your point, and I will definitely take a step back and look at my website the way Googlebot possibly does (through a browser window).
    Also, I would like to mention that your post is backed up by Google itself, as the folks there say that user experience is what matters most to them. So the best way to track it is to be a browser.
    It seems to me that analytics data is involved in this game as well. What do you think about that?


  •

    I’ve been tracking spiders on my site since the beginning of 2010 and I’ve seen an evolution: more separate bots from Google. At first I only saw some Googlebot, but now I see a bot for images, ads, feeds, etc.

    Any idea how you could actually track the Chromebot? There’s no tagging involved, so… that makes it very difficult.


  • creativerty

    Great article and an eye opener to what Google is up to. I still use Firefox for Firebug, though I have noticed that Chrome has got a lot quicker.


  •

    This is old but still provides some very good speculation on how Goog operates today. Wow
