Google’s John Mueller On Blocking Robots.txt Files From Being Indexed

Google’s John Mueller recently offered some advice on how to block robots.txt and sitemap files from being indexed in search results.

This advice was prompted by a tweet from Google’s Gary Illyes, who pointed out that robots.txt can technically be indexed like any other URL. While it provides special directives for crawling, there’s nothing to stop it from being indexed itself.

Here’s the full tweet from Illyes:

“Triggered by an internal question: robots.txt from indexing point of view is just a url whose content can be indexed. It can become canonical or it can be deduped, just like any other URL.
It only has special meaning for crawling, but there its index status doesn’t matter at all.”

Responding to his fellow Googler, Mueller said the X-Robots-Tag HTTP header can be used to block indexing of robots.txt or sitemap files. That wasn’t all he had to say on the matter, however, as this was arguably the key takeaway:

“Also, if your robots.txt or sitemap file is ranking for normal queries (not site:), that’s usually a sign that your site is really bad off and should be improved instead.”

So if you’re running into the problem of your robots.txt file ranking in search results, blocking it with the X-Robots-Tag HTTP header is a good short-term solution. But if that’s happening, there are likely much larger issues to address in the long term, as Mueller suggests.
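For readers who want to apply Mueller’s suggestion, here is a minimal sketch of how the X-Robots-Tag header could be added when serving robots.txt and a sitemap. This assumes an nginx server; the exact file names and locations are examples, not from the original article:

```nginx
# Sketch: send "X-Robots-Tag: noindex" with robots.txt and the sitemap
# so Google can still crawl them but won't index them as pages.
location = /robots.txt {
    add_header X-Robots-Tag "noindex";
}

location = /sitemap.xml {
    add_header X-Robots-Tag "noindex";
}
```

Note that the header must be sent as an HTTP response header; a meta tag won’t work here, since robots.txt and XML sitemaps aren’t HTML documents.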
