urllib.robotparser — Parser for robots.txt
Source code: Lib/urllib/robotparser.py
This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
class urllib.robotparser.RobotFileParser(url='')

This class provides methods to read, parse and answer questions about the robots.txt file at url.

can_fetch(useragent, url)

Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
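Rules can also be supplied directly as lines of text with the companion parse() method, which makes can_fetch() easy to try without a network fetch. A minimal sketch; the robots.txt content and example.com URLs below are made up for illustration:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# A hypothetical robots.txt, supplied as a list of lines instead of
# being fetched with read().
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/data.html"))  # False
```

Note that until a robots.txt file has been read or parsed, can_fetch() assumes nothing is allowed and returns False for every URL.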
mtime()

Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.
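Together with the companion modified() method (which records the current time as the fetch time), mtime() lets a long-running spider decide when to re-fetch. A minimal sketch, assuming a made-up URL and a one-hour refresh interval:

```python
import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")

def needs_refresh(parser, max_age=3600):
    """True if robots.txt is stale and should be fetched (again)."""
    # mtime() returns 0 until a fetch time has been recorded.
    return time.time() - parser.mtime() > max_age

print(needs_refresh(rp))  # True: never fetched

rp.modified()             # record "now" as the fetch time,
                          # as would happen after rp.read()
print(needs_refresh(rp))  # False: just recorded
```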
crawl_delay(useragent)

Returns the value of the Crawl-delay parameter from robots.txt for the useragent in question. If there is no such parameter, or it doesn't apply to the useragent specified, or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.6.
request_rate(useragent)

Returns the contents of the Request-rate parameter from robots.txt as a named tuple RequestRate(requests, seconds). If there is no such parameter, or it doesn't apply to the useragent specified, or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.6.
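Both throttling parameters can be exercised offline by feeding parse() a hypothetical robots.txt; the directive values below are made up for illustration:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# A hypothetical robots.txt with throttling directives.
rp.parse([
    "User-agent: *",
    "Crawl-delay: 10",
    "Request-rate: 3/20",
])

print(rp.crawl_delay("*"))          # 10
rate = rp.request_rate("*")
print(rate.requests, rate.seconds)  # 3 20

# An agent with no entry of its own falls back to the '*' entry.
print(rp.crawl_delay("examplebot"))  # 10
```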
site_maps()

Returns the contents of the Sitemap parameter from robots.txt in the form of a list(). If there is no such parameter, or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.8.
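Sitemap lines apply to all user agents and are collected in the order they appear. A small sketch, again using made-up robots.txt content and example.com URLs:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# A hypothetical robots.txt advertising two sitemaps.
rp.parse([
    "Sitemap: https://example.com/sitemap-pages.xml",
    "Sitemap: https://example.com/sitemap-news.xml",
    "User-agent: *",
    "Disallow:",
])

print(rp.site_maps())
# ['https://example.com/sitemap-pages.xml',
#  'https://example.com/sitemap-news.xml']
```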
The following example demonstrates basic use of the RobotFileParser class:
>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rrate = rp.request_rate("*")
>>> rrate.requests
3
>>> rrate.seconds
20
>>> rp.crawl_delay("*")
6
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True