To extract data from web pages, Scrapy uses a mechanism called selectors, based on XPath and CSS expressions. Here are some examples of XPath expressions:
- The `<title>` element inside the `<head>` element: `/html/head/title`
- The text of the `<title>` element: `/html/head/title/text()`
- All `<td>` elements: `//td`
- All `<div>` elements whose `class` attribute is `slice`: `//div[@class="slice"]`
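As a quick, self-contained illustration (the HTML snippet below is made up for this example), these expressions can be tried directly on a Scrapy `Selector`:

```python
from scrapy.selector import Selector

# A made-up page containing the elements used in the examples above
html = ('<html><head><title>Demo</title></head>'
        '<body><div class="slice"><table><tr><td>cell</td></tr></table></div></body></html>')
sel = Selector(text=html)

sel.xpath('/html/head/title').extract_first()         # '<title>Demo</title>'
sel.xpath('/html/head/title/text()').extract_first()  # 'Demo'
sel.xpath('//td/text()').extract()                    # ['cell']
sel.xpath('//div[@class="slice"]')                    # list with one matching Selector
```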
Selectors provide the following basic methods:
| Method | Description |
| --- | --- |
| `extract()` | Returns the selected data as unicode strings |
| `extract_first()` | Returns the first matched result as a unicode string |
| `re()` | Returns a list of unicode strings, extracted by applying the regular expression given as an argument |
| `xpath()` | Returns a list of selectors, each representing a node selected by the XPath expression given as an argument |
| `css()` | Returns a list of selectors, each representing a node selected by the CSS expression given as an argument |
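The following sketch shows these methods side by side; the HTML snippet and values are made up for illustration:

```python
from scrapy.selector import Selector

sel = Selector(text='<ul><li><a href="/movie/1">Movie One</a></li>'
                    '<li><a href="/movie/2">Movie Two</a></li></ul>')

sel.xpath('//li/a/text()').extract()        # ['Movie One', 'Movie Two']
sel.css('li a::text').extract_first()       # 'Movie One'
sel.xpath('//a/@href').re(r'/movie/(\d+)')  # ['1', '2']
```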
To experiment with selectors and see results quickly, we can use the Scrapy shell:

```
scrapy shell "http://www.163.com"
```
Note: on Windows, the URL must be wrapped in double quotes.
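Inside the shell, the downloaded page is bound to `response`, and helpers such as `fetch()` and `view()` are available. A minimal sketch (the XPath here assumes the page has a `<title>`):

```python
# `response` holds the downloaded page; selectors work on it directly
response.xpath('//title/text()').extract_first()

# view(response) opens the downloaded page in a browser;
# fetch(url) loads another page into the same shell session
```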
To extract data from an ordinary HTML site, inspect the page source to work out the XPath. After inspecting, you can see that the data sits inside a `<ul>` tag, so we select the elements within its `<li>` tags.
The following lines of code show how to extract the different kinds of data:
```python
# Select the <li> elements
response.xpath('//ul/li')

# Extract the text inside the <li> elements
response.xpath('//ul/li/text()').extract()

# Extract the link text
response.xpath('//ul/li/a/text()').extract()

# Extract the site's links
response.xpath('//ul/li/a/@href').extract()
```
Putting this together, here is a spider that scrapes movie titles and ratings from Douban's Top 250 page:

```python
import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['douban.com']
    start_urls = [
        'https://movie.douban.com/top250/'
    ]

    def parse(self, response):
        movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
        movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
        yield {
            'movie_name': movie_name,
            'movie_core': movie_core
        }
```
Running the code above, we can see the following in the console:
```
2018-01-24 15:17:14 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: spiderdemo1)
2018-01-24 15:17:14 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.18.0, Twisted 17.9.0, Python 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017), cryptography 2.1.4, Platform Windows-10-10.0.10240-SP0
2018-01-24 15:17:14 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'spiderdemo1', 'NEWSPIDER_MODULE': 'spiderdemo1.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['spiderdemo1.spiders']}
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-01-24 15:17:14 [scrapy.core.engine] INFO: Spider opened
2018-01-24 15:17:14 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-24 15:17:14 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-24 15:17:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/robots.txt> (referer: None)
2018-01-24 15:17:15 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://movie.douban.com/top250> from <GET https://movie.douban.com/top250/>
2018-01-24 15:17:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
2018-01-24 15:17:15 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
{'movie_name': ['肖申克的救赎', '霸王别姬', '这个杀手不太冷', '阿甘正传', '美丽人生', '千与千寻', '泰坦尼克号', '辛德勒的名单', '盗梦空间', '机器人总动员', '海上钢琴师', '三傻大闹宝莱坞', '忠犬八公的故事', '放牛班的春天', '大话西游之大圣娶亲', '教父', '龙猫', '楚门的世界', '乱世佳人', '熔炉', '触不可及', '天堂电影院', '当幸福来敲门', '无间道', '星际穿越'],
 'movie_core': ['9.6', '9.5', '9.4', '9.4', '9.5', '9.2', '9.2', '9.4', '9.3', '9.3', '9.2', '9.1', '9.2', '9.2', '9.2', '9.2', '9.1', '9.1', '9.2', '9.2', '9.1', '9.1', '8.9', '9.0', '9.1']}
2018-01-24 15:17:15 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-24 15:17:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 651,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 13900,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/301': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 1, 24, 7, 17, 15, 247183),
 'item_scraped_count': 1,
 'log_count/DEBUG': 5,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2018, 1, 24, 7, 17, 14, 784782)}
2018-01-24 15:17:15 [scrapy.core.engine] INFO: Spider closed (finished)
```
To save the results to a local file, zip the two lists together and write each pair on its own line:

```python
with open("movie.txt", 'wb') as f:
    for n, c in zip(movie_name, movie_core):
        line = n + ":" + c + "\n"  # avoid shadowing the built-in `str`
        f.write(line.encode())
```
Scrapy has four main built-in export formats: JSON, JSON lines, CSV, and XML.
We will export the results as JSON, the most commonly used format, with the following command:
```
scrapy crawl douban -o douban.json -t json
```
`-o` is followed by the output file name, and `-t` by the export format.
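The other built-in formats work the same way; assuming the same spider, these variants (with hypothetical file names) would produce CSV, XML, and JSON-lines output:

```
scrapy crawl douban -o douban.csv -t csv
scrapy crawl douban -o douban.xml -t xml
scrapy crawl douban -o douban.jl -t jsonlines
```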
A Scrapy process extracts data from web pages by using spiders, and Scrapy uses the Item class to produce output objects that hold the scraped data.
An Item object is a custom Python dict; you can read and write its fields with standard dict syntax (see the short example after the class definition below).
```python
import scrapy

class InfoItem(scrapy.Item):
    # define the fields for your item here like:
    movie_name = scrapy.Field()
    movie_core = scrapy.Field()
```
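As a small illustration of the dict-style access mentioned above (the values here are made up):

```python
item = InfoItem(movie_name='肖申克的救赎', movie_core='9.6')
print(item['movie_name'])   # 肖申克的救赎
item['movie_core'] = '9.7'  # fields can be reassigned like dict keys
print(dict(item))           # {'movie_name': '肖申克的救赎', 'movie_core': '9.7'}
```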
The spider's `parse()` method then fills in and yields one `InfoItem` per movie:

```python
def parse(self, response):
    movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
    movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
    for n, c in zip(movie_name, movie_core):
        movie = InfoItem()
        movie['movie_name'] = n
        movie['movie_core'] = c
        yield movie
```
After an Item has been collected in a Spider, it is passed to the Item Pipeline for processing.
Each item pipeline component is a Python class implementing a simple method: it receives an item, performs some action on it, and also decides whether the item continues through the pipeline or is dropped and not processed further.
The main uses of an item pipeline are cleaning scraped data, validating it, checking for duplicates, and storing items in a database.
Each item pipeline component is a standalone Python class that must implement the process_item(self, item, spider) method.
Scrapy calls this method for every item pipeline component; it must return a dict of data or an Item object, or raise a DropItem exception. Dropped items are not processed by any subsequent pipeline components.
For example, here is a pipeline that appends every item to a JSON file:

```python
import json

class MoviePipeline(object):
    def process_item(self, item, spider):
        # Append the item as one JSON object per line
        with open('douban.json', 'a', encoding='utf-8') as f:
            json.dump(dict(item), f, ensure_ascii=False)
            f.write('\n')
        return item
```
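To illustrate the DropItem behavior described above, here is a hypothetical validation pipeline that discards items with a missing rating; later pipeline components never see the dropped items:

```python
from scrapy.exceptions import DropItem

class ValidatePipeline(object):
    def process_item(self, item, spider):
        # Drop the item if it has no rating; otherwise pass it on unchanged
        if not item.get('movie_core'):
            raise DropItem('missing movie_core in %s' % item)
        return item
```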
Note: after writing a pipeline, you must enable it in the project settings before it takes effect. The number assigned to each pipeline (here 300) determines the order in which pipelines run, from lowest to highest; values are conventionally kept in the 0-1000 range.
```python
ITEM_PIPELINES = {
    'spiderdemo1.pipelines.MoviePipeline': 300,
}
```
6.4 Writing Items to MongoDB
The MongoDB address and database name are specified in the Scrapy settings, and the MongoDB collection is named after the item class:
```python
from pymongo import MongoClient

from middle.settings import HOST
from middle.settings import PORT
from middle.settings import DB_NAME
from middle.settings import SHEET_NAME

class MiddlePipeline(object):
    def __init__(self):
        client = MongoClient(host=HOST, port=PORT)
        my_db = client[DB_NAME]
        self.sheet = my_db[SHEET_NAME]

    def process_item(self, item, spider):
        # insert_one() stores the item as a document in the collection
        self.sheet.insert_one(dict(item))
        return item
```
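For completeness, the constants imported above would live in the project's settings module; a minimal sketch with hypothetical values:

```python
# middle/settings.py (values are examples only)
HOST = '127.0.0.1'
PORT = 27017
DB_NAME = 'douban'
SHEET_NAME = 'movies'

ITEM_PIPELINES = {
    'middle.pipelines.MiddlePipeline': 300,
}
```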