# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
# advertising-related bots:
User-agent: Mediapartners-Google*
Disallow: /

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
User-agent: DOC
User-agent: Zao
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
User-agent: Zealbot
User-agent: MSIECrawler
User-agent: SiteSnagger
User-agent: WebStripper
User-agent: WebCopier
User-agent: Fetch
User-agent: Offline Explorer
User-agent: Teleport
User-agent: TeleportPro
User-agent: WebZIP
User-agent: linko
User-agent: HTTrack
User-agent: Microsoft.URL.Control
User-agent: Xenu
User-agent: larbin
User-agent: libwww
User-agent: ZyBORG
User-agent: Download Ninja
Disallow: /

# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
#
User-agent: wget
Disallow: /

# The 'grub' distributed client has been *very* poorly behaved.
#
User-agent: grub-client
Disallow: /

# Doesn't follow robots.txt anyway, but...
#
User-agent: k2spider
Disallow: /

# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /

# Friendly, low-speed bots are welcome viewing article pages, but not everything
User-agent: *
Disallow: /wp-admin

Sitemap: http://msm.dev/sitemap.xml
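The effect of these rules can be verified with Python's standard-library urllib.robotparser. The sketch below is a minimal illustration, not part of the report: it uses only a shortened subset of the file above, and the crawler name "FriendlyBot" and the example URLs under http://msm.dev are assumptions made for demonstration.

```python
# Minimal sketch: check which paths a crawler may fetch under a subset of
# the robots.txt shown above. "FriendlyBot" and the URLs are hypothetical.
from urllib.robotparser import RobotFileParser

ROBOTS_SUBSET = """\
User-agent: HTTrack
Disallow: /

User-agent: *
Disallow: /wp-admin
"""

parser = RobotFileParser()
parser.parse(ROBOTS_SUBSET.splitlines())

# A blanket-banned site copier is refused everywhere.
print(parser.can_fetch("HTTrack", "http://msm.dev/"))               # False

# Any other bot may read regular pages...
print(parser.can_fetch("FriendlyBot", "http://msm.dev/blog/post"))  # True

# ...but not the admin area, which the catch-all rule disallows.
print(parser.can_fetch("FriendlyBot", "http://msm.dev/wp-admin/"))  # False
```

Note that only the first matching User-agent group applies to a given crawler, which is why HTTrack is blocked entirely while unlisted bots fall through to the final catch-all rule.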
The following keywords were detected. Check that these keywords are optimized for your page.
(Nice to have)