How Market Brew Changed SEO

Preface

Over the past few years, we’ve received a lot of questions about how Market Brew was able to flourish in a space where most SEO software platforms took a hit to their accuracy and stability.

The short answer is Market Brew’s unique approach. Back in 2006, when the idea of a transparent search engine model was being formulated (and patented) by Market Brew’s founders, most of the SEO tools space was getting its start by scraping or crawling Google search results on a daily basis and indexing that data so it could be retrieved by URL or keyword lookup.

Most of these approaches were heavily dependent on Google’s data. That data was fairly transparent back then, since the underlying mechanism was pretty easy to understand. But as the data and its relationships grew more complex, these approaches failed.

Market Brew, on the other hand, started from the bottom up. The hypothesis was that, eventually, Google’s search engine would act as a very complicated black box, and using that as an input to any SEO tool would be fatal.

Today, Google couldn’t be more of a black box, and Market Brew couldn’t be sitting any prettier.

The approach, argued below, was the first to unlock the black box, and it will inevitably be seen as the only approach that lets SEOs truly classify how Google’s search engine changes on a day-to-day and month-to-month basis.

The Early Beginnings

The founders actually started out doing SEO like everyone else.

Back in 2006, they found early success by automating a lot of their internal SEO optimizations. After they scaled their local successes nationwide, these large-scale optimizations started to be noticed by Google’s search engineers.

Over the course of the next year, they played a high-stakes game of “cat and mouse”, and each successive loophole that they exploited in Google’s algorithms was closed up. By late 2007, they had patented a “navigable website analysis engine”.

After many more years of research and analyzing hundreds of thousands of sites, Market Brew’s standard model was born. This, in turn, led to many other discoveries and inventions like Market Brew’s self-calibrating search engine model.


2008 – The founders were filling their closets with computers.

The first patent, filed in 2007, was the first of many that outlined the founders’ vision for the future of SEO: eventually, search engines would get so complex that the ability to model this black box would be very important and useful.

The founders were banking on Google eventually removing or obfuscating much of the data it was then sharing with the SEO world. Their own experience on the cutting edge of SEO led them to this conclusion.

Holding Their Ground

They were aliens in the SEO world. Who the heck needed this? In reality, all you needed at that time was a good website auditor. Google wasn’t that hard to figure out.

By 2009, hundreds of SEO tools had sprung up, mostly based on scraping Google data and regurgitating it in ever more novel ways. The main players at this time were all going vertical: all-in-one solutions that tied together different APIs, mostly from Google.


2010 – Filling closets was so old school. Now the founders were filling entire rooms full of computers. All of this would soon disappear in a transition to the cloud.

The founders even considered a business model in which they would sell data streams to many of these all-in-one platforms. In the end, most of the platforms simply didn’t need that level of data yet. A good stream of backlinks, rank tracking, content analysis, and a website audit was all you needed.

Things changed very slowly at first. I recall setting up next to Blekko at Pubcon one year; they had just announced a feature where users could “explore their search engine” for a small fee.

Finally, a well-backed Google challenger had made the same realization that the founders of Market Brew had: SEOs would need to have outside help from a search engine of some sort.

Google Firewalls Its Black Box

Sure enough, Google began a slow march towards data obfuscation.

First, their link: backlink operator stopped working. This was a treasure trove of data that let website owners understand how backlinks factored into their final search rankings.

Then “(Not Provided)” started showing up in everyone’s analytics package. For years, website owners had depended on knowing which keywords were driving traffic to their sites. With that information, they could essentially cross-correlate search results to build a clear picture of how many of Google’s algorithms worked.

After killing off crucial keyword data, Google cut off or obfuscated the Pay-Per-Click API to its Keyword Planner, leaving thousands of agencies without the ability to know which keywords to target.

And by 2014, as we all know, Matt Cutts had departed as Google’s official liaison to the SEO community.

It was official: the data and explanations had dried up. Throughout this time, many of the all-in-one platforms began to get burned. The intensity of the downturn, of course, was determined by how many data streams a vendor relied on Google for.


2011 – after shifting to the cloud.

Because Market Brew’s approach didn’t rely on scraping or otherwise using any of Google’s data, they avoided a downturn from this anti-transparency march, something that couldn’t be said for the rest of the field.

Conventional SEO Tools Start To Fail

In late 2012, Google started quietly introducing Artificial Intelligence into its core search algorithms, and the seemingly direct relationships between the inputs and outputs of its black box began to get really fuzzy.

The majority of SEO tools at this time modeled Google as a semi-transparent box that, with enough data, could be explained in a straightforward manner to its users. Unfortunately, what remaining data they had been relying on had dried up. And now search engine mechanisms were as confusing as ever, even to Google’s own engineers.

Two major things happened to Google’s search results that represented existential threats to the way SEO tools were being used.

  1. The Caffeine Update: this made the information shown in its search results (things like the META Title, Description, and matched snippets in the HTML itself) asynchronous with the scoring updates. For instance, changes that you had just made to your web page would show up in the results, but your rankings wouldn’t change until a few months later. It was now impossible to attribute ranking changes to specific optimizations simply by relying on what was being shown in the search results.
  2. Dynamic Algorithmic Weightings: up to this point, Google had manually curated the weightings of its various algorithmic rules. Because this was a manual process, there were two advantages for anyone modeling it: first, these weightings didn’t change very often; and second, they weren’t very different from one search result to the next. After the introduction of A.I., the weightings changed as often as daily, and each search result had its own set of weightings (see the sketch after this list).

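To make the second item concrete, here is a toy sketch of what per-result weightings mean as arithmetic. The signal names, queries, and numbers are all hypothetical, not Google’s or Market Brew’s actual factors; the point is simply that a page’s score is a weighted sum of algorithmic signals, and that the weight vector can now differ per query and per day.

    # Toy illustration of dynamic algorithmic weightings (hypothetical values).
    features = {"content_relevance": 0.8, "backlink_strength": 0.6, "page_speed": 0.9}

    # Before: one hand-curated weight vector applied to every query.
    static_weights = {"content_relevance": 0.5, "backlink_strength": 0.4, "page_speed": 0.1}

    # After: each query (and each day) can carry its own weight vector.
    per_query_weights = {
        "running shoes":       {"content_relevance": 0.3, "backlink_strength": 0.6, "page_speed": 0.1},
        "shoe repair near me": {"content_relevance": 0.6, "backlink_strength": 0.2, "page_speed": 0.2},
    }

    def score(features, weights):
        """A page's score: the weighted sum of its algorithmic signals."""
        return sum(features[k] * weights[k] for k in features)

    print("static:", score(features, static_weights))
    for query, w in per_query_weights.items():
        print(query, "->", score(features, w))

Under the old regime, one static weight vector explained every result; under the new one, anyone reading the box only from its outputs faces a different set of unknowns for every query.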
Google search results became incredibly hard to decipher at this point. The major tools that SEOs were using could not deliver reliable results, and all of a sudden approaching a search engine by reading its final output seemed silly.

The Unique Approach Gains Momentum

By 2013, the founders’ unique approach to SEO began to shine. With the lack of quality data from Google, most SEO ranking tools became lagging indicators with very little insight into why those ranking shifts happened.

By contrast, the search engine model approach was ready for primetime. In early 2013, the founders sat down in their Palo Alto, CA office and brainstormed how they could take advantage of Google’s faucet of data being shut off.

They had successfully built and demonstrated a search engine model with families of algorithms that they would fine-tune (manually) whenever a major algorithmic shift would occur. But now, Google was threatening to change the mixture of these algorithms much more rapidly and without public fanfare (and documentation).

Large brands began to adopt the technology at a rapid pace. In late 2013, after months of R&D, they identified one of the final missing pieces in the approach, one that would end up revolutionizing the way teams did SEO.

Market Brew realized the search engine model approach still had the same critical flaw as the conventional SEO tools: how do you deal with Google constantly changing its algorithms?

Because of their bottom-up approach, they had a major advantage over everyone else. They could fine-tune the model at a much more granular level. More control over the inputs meant that their models could easily be trained to find the right mixture of algorithms for each environment.

There was only one problem: a brute-force approach, simulating every possible permutation of algorithmic weightings, would take on the order of thousands of years, even on today’s fastest computers, to find a stable output that behaved like the real search results.

Fortunately, after thousands of hours and hundreds of thousands of dollars of R&D, they finally had a breakthrough: they figured out a way to machine-learn the behavior and characteristics of any target search result in a matter of minutes. To do this, they borrowed an Artificial Intelligence technique from the evolutionary computation family called Particle Swarm Optimization.
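For readers who want the flavor of the technique, here is a minimal, self-contained Particle Swarm Optimization sketch in Python. The objective function is a stand-in: in a real calibration it would run the search engine model with a candidate mixture of algorithm weightings and measure how far the model’s rankings diverge from the target search results. The dimensions, hyperparameters, and toy target below are illustrative assumptions, not Market Brew’s actual implementation.

    import numpy as np

    def model_error(weights):
        """Stand-in objective: how far the model's behavior under these
        weightings diverges from the real search results. Here it is just
        a quadratic bowl around a made-up target mixture."""
        target = np.array([0.5, 1.2, 0.3, 2.0, 0.8])
        return np.sum((weights - target) ** 2)

    def particle_swarm(objective, dim=5, particles=30, iters=200,
                       inertia=0.7, c_personal=1.5, c_social=1.5):
        """Minimal PSO: each particle remembers its personal best position,
        the swarm shares a global best, and velocities blend inertia,
        personal memory, and social attraction."""
        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 3.0, (particles, dim))
        vel = np.zeros((particles, dim))
        pbest = pos.copy()
        pbest_err = np.array([objective(p) for p in pos])
        gbest = pbest[pbest_err.argmin()].copy()
        for _ in range(iters):
            r1 = rng.random((particles, dim))
            r2 = rng.random((particles, dim))
            vel = (inertia * vel
                   + c_personal * r1 * (pbest - pos)
                   + c_social * r2 * (gbest - pos))
            pos = pos + vel
            err = np.array([objective(p) for p in pos])
            improved = err < pbest_err
            pbest[improved] = pos[improved]
            pbest_err[improved] = err[improved]
            gbest = pbest[pbest_err.argmin()].copy()
        return gbest, pbest_err.min()

    best_weights, best_err = particle_swarm(model_error)
    print(best_weights, best_err)

The appeal over brute force is that the swarm samples the weighting space intelligently, converging on a stable mixture in minutes instead of enumerating every permutation.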

By the end of 2013, Market Brew was voted #1 out of 60 Silicon Valley Big Data Startups by judges from Oracle, Draper Fisher Jurvetson and Xignite.

Unlocking The Black Box

The main impediment to the search engine model approach was finding the correct combination of inputs into the model. After discovering what optimization techniques like Particle Swarm Optimization could do for this process, they were able to create a simple mechanism that let Market Brew users take the generic search engine and transform it into a Google-like model of their choice in a matter of minutes.

Today, Market Brew clients do this many times for all kinds of models (mobile vs. desktop, local vs. national, etc.) and even use this process to track the changes in behavior and characteristics of Google when it is undergoing an algorithmic “shift”.

Most algorithmic changes today aren’t new algorithms; rather, they are refinements to the weightings of the already existing ones. From the Market Brew side of things, this translates into users re-calibrating their models whenever those models diverge from the real thing. Not only do they get a newly calibrated model, but they also get an easy way to do a before/after comparison of the model settings.

Because Market Brew stores every data point in the model from the moment users start using the system, the dynamic inputs to the model can be compared across newly calibrated versions. This means that Market Brew is, so to speak, machine-learning the machine learner. Google puts more emphasis on backlinks? Your search engine model re-calibration will indicate that.
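As a sketch of what that before/after comparison might surface, consider two calibrated weight vectors side by side. The factor names and numbers below are invented for illustration; a real model carries far more inputs.

    # Hypothetical weightings from two calibrations of the same model.
    before = {"backlinks": 0.22, "content": 0.35, "anchor_text": 0.18, "page_speed": 0.25}
    after  = {"backlinks": 0.31, "content": 0.30, "anchor_text": 0.16, "page_speed": 0.23}

    for factor in before:
        delta = after[factor] - before[factor]
        direction = "more" if delta > 0 else "less"
        print(f"{factor:12s} {before[factor]:.2f} -> {after[factor]:.2f} "
              f"({direction} emphasis, {delta:+.2f})")

In this made-up run, the backlinks weighting jumps from 0.22 to 0.31: exactly the kind of shift a re-calibration would flag.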


Conceptual diagram of Particle Swarm Optimization in action with a single global minimum. Image credit: Jonathan Becker.

By 2016, the word was out that Market Brew had won its founders’ decade-long bet. In early 2015, Search Engine Journal had given me the opportunity to describe this vision, and industry outlets like TechCrunch and Search Engine Land began interviewing me as an expert in artificial intelligence for SEO.

Thanks, Grandfather

It’s inevitable that engineers and data scientists ask: given a black box, how would it be possible to build a search engine model with thousands of inputs? How in the world would you know which algorithmic families to model?

Market Brew got really lucky. They’ve rarely used Google’s “I’m feeling lucky” button, but this is one time they used it.

If you remember, they started out in 2007. Back then, Google was pretty simple and semi-transparent. Most of the new algorithms being introduced to its search engine were well documented.

Their model paralleled this complexity. At first, they just had a few major core algorithms in the model. This worked really well. They ran nightly QA regression tests across all of the data to make sure the models were stable representations of the real thing.

Every so often, as was usual back in the day, Google would add a new algorithm. The regression tests would sound an alarm: something was not right. The models, no matter what mixture of algorithms they used, just couldn’t correlate with the real-world search results.

So they would start trying new algorithms in the model. One by one, they eliminated the possibilities until they found an algorithm that brought the generic model back into stable correlation with real-world search results.
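A regression test of this kind can be sketched as a rank-correlation check between the model’s ordering of URLs and the real search results. The URLs, the alarm threshold, and the use of Spearman’s rho below are hypothetical stand-ins for whatever the nightly QA actually ran.

    from scipy.stats import spearmanr

    def correlation_alarm(model_serp, real_serp, threshold=0.9):
        """Compare the model's ranking of shared URLs against the real
        results; sound the alarm when correlation drops below threshold."""
        shared = [url for url in real_serp if url in model_serp]
        model_ranks = [model_serp.index(url) for url in shared]
        real_ranks = [real_serp.index(url) for url in shared]
        rho, _ = spearmanr(model_ranks, real_ranks)
        return rho, rho < threshold

    # Illustrative five-result SERPs.
    real  = ["a.com", "b.com", "c.com", "d.com", "e.com"]
    model = ["a.com", "c.com", "b.com", "e.com", "d.com"]
    rho, alarm = correlation_alarm(model, real)
    print(f"Spearman rho = {rho:.2f}; alarm = {alarm}")  # rho = 0.80 here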

They continued this iterative tradition for almost five years, until Google stopped adding major algorithms to its core engine. By 2012, most of the Google updates were changes to the mixture of algorithms rather than additions of new ones.

Because they could incrementally fill one gap in the model at a time, they were able to identify, with great confidence, all the major pillars of Google’s modern search engine.

About The Author


Scott Stouffer is a Co-Founder and the CTO of MarketBrew.com. Market Brew is an enterprise-grade technology that enables SEO teams to build models of search engines which, like a Google simulator of sorts, allow them to make changes to their websites and predict how their Google rankings will react 60 days from now. Mr. Stouffer is a graduate of Carnegie Mellon University and holds an M.S. in both Computer and Electrical Engineering. He has driven the technology behind Market Brew. For more information about Market Brew, visit www.MarketBrew.com.