I discovered long ago that domains have power even if you never intend to build a website on them. Certain keyword domains can generate a nice sum of traffic over their lifespan. In the infancy of the web, an entire industry cropped up to take advantage of this new phenomenon (i.e. domaineers and their armies of parked pages). Granted, the domaineer is usually hoping to unload the domain at a later date for an amount much greater than he originally paid, but in the meantime he’ll gladly collect those nickels, dimes and sometimes dollars on the pay-per-click traffic funneling through these great domain names. The Internet’s answer to Coinstar.
From personal experience, I’d say type-in traffic amounts to around 1% to 5% of the traffic numbers quoted in Google’s Keyword Search Tool, though it largely depends on the vertical you’re operating in. So you’ve locked down this great domain, and you’re planning to point it at your main site via a redirect at your registrar or a 301 redirect. Not so fast, sparky. Domains carry inherent PageRank, and Google doesn’t like you throwing that hard-earned PageRank around at whatever you’d like. You might get away with it with one or two domains (though I wouldn’t recommend chancing it), but ratchet that number up to ten, twenty or beyond and you could be courting trouble in the form of a Google penalty. An Unnatural Links Penalty, to be exact.
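To make that concrete, the tempting shortcut is a blanket 301 in the keyword domain’s .htaccess, something along these lines (example.com is just a stand-in for your main site), which forwards visitors and PageRank alike:

# quick-and-dirty blanket 301 to the main site - passes PageRank right along with the visitor
Redirect 301 / http://www.example.com/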
That’s why, if you are going to employ this method of harvesting traffic, you have to do it smart. I may have taken a sledgehammer to this nail, but I wanted to be sure I had this sucker taken care of. Originally, I had the exact-match domain set up to point to a related page on the interior of my main site through a redirect in my domain registrar’s management console. It turns out that the domain was showing up in the search results and, yep, passing PageRank. A big no-no.
To combat this, I changed the nameservers to point to a bargain-basement web host (iPower in this instance), set up a new folder on the host and pointed the domain at it, and finally put a .htaccess and an index.php in that root directory. For the mystified Windows folks out there, we are working on a Linux box. My goal here was to redirect the traffic without passing along that page juice. I also needed the domain purged from Google’s index. So our keys here were noindex and nofollow.
Here is the .htaccess file:
# turn on mod_rewrite and set env var MY_SET_HEADER to 1 for every request
RewriteEngine On
RewriteRule .* - [E=MY_SET_HEADER:1]
# if MY_SET_HEADER is present then set the header (needs mod_headers)
Header set X-Robots-Tag "noindex, nofollow" env=MY_SET_HEADER
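Once that’s uploaded, a quick curl -I http://yourdomain.com/ (yourdomain.com being whichever domain you just pointed at the host) should show an X-Robots-Tag: noindex, nofollow line in the response headers; if it doesn’t, odds are the host doesn’t have mod_headers or mod_rewrite enabled.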
This .htaccess seems to be the critical piece for purging the domain from Google’s search index. I had also tried a robots.txt file alongside the index.php, and that combo did nothing to get it out of the index, which makes sense in hindsight: if robots.txt blocks crawling, Googlebot never gets far enough to see the noindex signal. Since the .htaccess handles the deindexing and stops Googlebot in its tracks, the index.php file handles the redirect.
Here is the index.php file:
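The version below is a stripped-down sketch of the idea rather than the exact file, with http://www.example.com/landing-page standing in as a placeholder for the real destination URL:

<!DOCTYPE html>
<html>
<head>
<!-- reinforce the X-Robots-Tag header already being set in .htaccess -->
<meta name="robots" content="noindex, nofollow">
</head>
<body>
<!-- the redirect lives in JavaScript so the destination isn't sitting in the HTML as a plain, crawlable link -->
<script type="text/javascript">
  window.location.replace("http://www.example.com/landing-page"); // placeholder URL
</script>
</body>
</html>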
I added the meta tag to further reinforce the noindex, nofollow (remember that sledgehammer I was talking about), but our robots.txt experiment above shows it has little overall effect on its own. As for the redirect URL, I housed it inside a JavaScript redirect. Conventional wisdom has always said that Googlebot won’t follow a link embedded within JavaScript. That may or may not be the case, but I think it’s a safer bet than just housing it within a meta refresh tag.
Just for good measure, we had Google take a look at our approach via a website reconsideration request, and this solution (along with a number of other changes) scored us a manual spam action revoked. It’s a bit more work than doing the registrar redirect, but keeping your site in Google’s good graces while you still harvest type-in traffic is worth the additional effort.