Textpattern CMS support forum
#1 2008-04-30 13:58:38
- flatazor
- New Member
- Registered: 2007-05-02
- Posts: 5
Preventing Google from indexing pages returned from a search
Hello,
I’m having trouble getting the right pages of my site indexed in Google. I’m using a sitemap, which is happily ignored, but that’s slightly off-topic!
More worryingly, about half of my indexed URLs are irrelevant search-results pages that Google has crawled. Does anyone know how to prevent Google from indexing pages generated by the search function of a Textpattern site?
I’ve looked at using a robots.txt file, but that only restricts specific files or directories.
I’m also using Google’s Webmaster Tools, where you can request removal of unwanted URLs, but these must return a 404 error if they are not already blocked by a robots.txt file.
Is anyone having the same kind of trouble?
Any suggestions welcome!
Re: Preventing Google from indexing pages returned from a search
You can try something like:
<txp:if_search>
    <meta name="robots" content="noindex,follow" />
<txp:else />
    <meta name="robots" content="index,follow" />
</txp:if_search>
in the <head> of your page template.
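For illustration, on a search-results page the conditional above outputs only:
<meta name="robots" content="noindex,follow" />
so Google drops (or never adds) the results page from its index while still following the links on it.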
Last edited by colak (2008-04-30 15:27:06)
Yiannis
——————————
NeMe | hblack.art | EMAP | A Sea change | Toolkit of Care
I do my best editing after I click on the submit button.
Re: Preventing Google from indexing pages returned from a search
flatazor wrote:
I’ve looked at using a robots.txt file, but that only restricts specific files or directories.
Er, nope: it can be any URL, a real directory or a mod_rewrite path, it doesn’t matter. For example, create a section called search, redirect all searches there (see the sketch after the robots.txt rules below), and then:
User-agent: *
Disallow: /search/
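A minimal sketch of sending every search to that section (the section name search and the label/button text are assumptions, though <txp:search_input> does accept a section attribute controlling where results are displayed):
<!-- site search box; results will be served under /search/ -->
<txp:search_input section="search" label="Search" button="Search" />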
Or you could just block the search without the section.
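A minimal sketch of that, assuming Textpattern’s default ?q= search parameter and a crawler that supports wildcards in robots.txt (Googlebot does):
User-agent: Googlebot
Disallow: /*?q=
This blocks any URL containing ?q=, i.e. every search-results page, regardless of section.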