If you’ve ever heard me speak on the topic of organic search, then you’re probably aware that I’m big on taking the time to learn everything there is to know about your space before making a single strategy-related decision.
Unfortunately, I have found over the years that most companies don’t invest the kind of time necessary to properly research their space. Instead, the most common approach has been to grab an obvious keyword list and start cranking out links. And to some degree that approach has worked in the past. But the organic landscape has changed pretty dramatically over the last year, so I think it’s a great time for me to renew my “research first” campaign.
But rather than simply tell you it’s important, I thought it might be a worthy exercise to actually walk through a hypothetical scenario using off-the-shelf tools/services (no custom programming of any kind) combined with the type of approach we like to take at BlueGlass, so that you can get a solid understanding of not just the “why” but the “how”.
Let’s suppose that I’m an entrepreneur who has developed a new technology that is going to revolutionize the online dating space. I’m all pumped up because IAC recently spent $50,000,000 to acquire OkCupid.
I also know that a huge chunk of the money was paid because “online dating” is an extremely popular search query, and that OkCupid ranks really well for it. So I make the decision that organic search definitely needs to be a part of my plan.
But what exactly do I need to do?
The place to start is with a process I like to call SERP profiling. Our goal in the profiling process is to collect a great deal of data about who is ranking where, and what are the possible factors driving those rankings.
Once we have all the data, we’ll be able to sift through it and find all kinds of hidden nuggets of knowledge that will help us:
- Build a roadmap based on the path of least resistance.
- Set realistic expectations regarding time and resource investment.
- Identify areas that might become problematic before they actually do.
- Establish common sense guidelines for your marketing team to follow.
- Identify opportunities that the competition doesn’t see.
OK, let’s get started…
Step 1. Collect SERP Samples
The obvious first step is to find out who is ranking well for online dating. In the past, this has been a pretty straightforward process: drop your keywords into a position reporter of some kind and hit the start button.
Unfortunately, due to Google’s obsession with localization/personalization, it’s no longer quite that easy. To get a solid understanding of what’s really going on and who is really doing the best, we need to sample SERP data from multiple locations. The approach we like to take is to look first at what comes closest to representing a default set of results, and then at a sample that represents a large chunk (in terms of potential volume) of SERPs that will have a significant degree of customization.
The way to accomplish this is to use Google’s location tool to look at results from different locations. The ones I like to look at are:
Country Level
Obviously, there aren’t many people who will intentionally set their geolocation to the country level before conducting a query, but there is still a lot of value in looking at this view because it represents what is typically served in smaller markets across the country.
Large City Metros
Once we’ve collected the country level SERP, we’ll want to spend some time collecting data from locations that will contain a much higher level of customization.
The specific group I like to start with is the top 10 U.S. cities by population. (You can find a complete list here.)
- New York
- Los Angeles
- San Antonio
- San Diego
- San Jose
(An important thing to remember when collecting your results is that you should clear your cookies before each query. If you don’t, your search behavior during the collection process will potentially skew what you see).
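To keep those passes organized, it helps to record each sample as simple (location, rank, URL) rows before anything goes into a spreadsheet. Here’s a minimal Python sketch of that bookkeeping; the sample locations and URLs are just stand-ins for whatever your own collection passes return:

```python
import csv

def serp_rows(samples):
    """Flatten {location: [urls in rank order]} into (location, rank, url) rows."""
    rows = []
    for location, urls in samples.items():
        for rank, url in enumerate(urls, start=1):
            rows.append((location, rank, url))
    return rows

# Stand-in data: each list would come from a manual SERP collection pass,
# with cookies cleared between queries.
samples = {
    "New York":    ["http://www.match.com/", "http://www.okcupid.com/"],
    "Los Angeles": ["http://www.okcupid.com/", "http://www.pof.com/"],
}

rows = serp_rows(samples)
with open("serp_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["location", "rank", "url"])
    writer.writerows(rows)
```

One flat CSV like this is easy to pull straight into Excel for the next step.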
Step 2. Dump all of the URLs into Excel
The key is that we want to do this for every city we sampled. That way, when we’re done, we’ll be able to see who shows up the most, and with what types of URLs (homepages, interior pages, or a combination of both).
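That “who shows up the most, and with what types of URLs” tally is easy to script once the URLs are all in one place. A rough Python sketch, where the URLs and path structures are made up for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

def root_domain(url):
    """Normalize a URL down to its root domain (drops a leading 'www.')."""
    host = urlparse(url).netloc.lower()
    return host.removeprefix("www.")

def is_homepage(url):
    """Treat a bare path with no query string as a homepage listing."""
    parsed = urlparse(url)
    return parsed.path in ("", "/") and not parsed.query

# Hypothetical URLs standing in for the combined sample set.
urls = [
    "http://www.okcupid.com/",
    "http://www.okcupid.com/online-dating/new-york-ny",
    "http://www.match.com/",
    "http://mingle2.com/online-dating/los-angeles",
]

domain_counts = Counter(root_domain(u) for u in urls)
homepages = [u for u in urls if is_homepage(u)]
```

Sorting `domain_counts` tells you who dominates across metros, and the homepage/interior split falls out of `is_homepage`.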
At this point, it’s also good to throw in some domain and page level metrics (from Open Site Explorer) so our final spreadsheet will look something like this:
From this point, we can start sorting through the initial data in various ways and start making some notes of interesting things that jump out at us. The items we spot at this level are going to help us prioritize what we spend our time looking for when we move on to the next step.
Here are a couple of quick examples of the types of things that might be on that list:
Total number of root domains
This can vary quite a bit. In this particular case, out of 11 sample locations, there are only 20 unique domains showing up. (For this exercise, I’ve omitted Wikipedia.) Out of those 20 domains, 8 are truly local (they only show up in a single metro and only have content for that area).
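The “only shows up in a single metro” check can be scripted as well. A quick sketch, with invented local domain names standing in for real results:

```python
def local_domains(samples):
    """Return domains that appear in exactly one metro's results.

    samples: {city: [domains ranked in that city]}
    """
    seen_in = {}
    for city, domains in samples.items():
        for domain in set(domains):
            seen_in.setdefault(domain, set()).add(city)
    return {d for d, cities in seen_in.items() if len(cities) == 1}

# Hypothetical data: okcupid.com ranks everywhere, the others in one metro each.
samples = {
    "New York": ["okcupid.com", "nydatingcoach.com"],
    "Chicago":  ["okcupid.com", "chicagosingles.com"],
}

locals_only = local_domains(samples)
```

Domains that survive this filter are worth a manual look to confirm they really are local-only plays rather than sampling noise.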
OkCupid has great double listings
Very interesting. Notice how they use city/state instead of a traditional directory structure of state/city?
Where is Mingle2.com’s Homepage?
Most of the other sites with comparable page/domain authority have regional URLs showing up alongside the homepage, but not Mingle2.com. Why is that?
There are quite a few more interesting bits, but that should be enough to get us going. Now let’s move on to the next step.
Step 3. Pull backlink data for each URL
Now that we have all of the URLs that ranked across all of our samples, we need to pull backlink data for each and every one of them. At this point, our focus should be primarily on anchor text distribution, so we’re going to pull a standard anchor text report for each URL using Majestic SEO’s fresh database.
Why Majestic Fresh?
We’ve spent a lot of time comparing OSE, Majestic Historic, and Majestic Fresh. They all have their strengths for different kinds of research, but for this type of anchor text analysis, Majestic Fresh is the best: it doesn’t miss much in terms of big, important links compared to the other two, and its freshness lets you calculate more accurate percentage numbers.
The Majestic metrics we want to pull in are anchor text, external backlink count, and the number of linking root domains. Those are combined with the OSE page and domain authority data, and then we’ll add additional columns for backlink percentage (rounded to the nearest 1%) and links per domain.
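Those two derived columns are simple arithmetic once the Majestic numbers are exported: backlink percentage is each anchor’s external backlinks divided by the URL’s total, and links per domain is backlinks divided by linking root domains. A small sketch, with made-up anchor text rows:

```python
def anchor_stats(rows):
    """Add derived columns to anchor text data.

    rows: list of (anchor_text, external_backlinks, linking_root_domains)
    Returns rows extended with (backlink_pct, links_per_domain).
    """
    total = sum(backlinks for _, backlinks, _ in rows)
    out = []
    for anchor, backlinks, domains in rows:
        pct = round(100 * backlinks / total)          # nearest 1%
        per_domain = round(backlinks / domains, 1)    # avg links per domain
        out.append((anchor, backlinks, domains, pct, per_domain))
    return out

# Hypothetical anchor text report for a single URL.
report = [
    ("online dating", 600, 200),
    ("okcupid",       300, 150),
    ("free dating",   100, 50),
]

stats = anchor_stats(report)
```

A high links-per-domain number on a single anchor, for instance, is often a hint worth investigating, such as sitewide links carrying that phrase.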
When everything is said and done, your spreadsheet should look something like this:
This gives us a nice, easy-to-manage data set with a ton of data points that will provide a great deal of insight into what is actually going on.
In part 2 of this post, we’ll dive deeper into the data and look at some things that might impact our decisions about how to proceed. In the meantime, get busy building some spreadsheets…