Jobbole (伯乐在线)
The project directory generated by Scrapy
File descriptions:
- scrapy.cfg: the project's configuration entry, mainly supplying base configuration to the Scrapy command-line tool. (The configuration that actually drives the spider lives in settings.py.)
- items.py: defines the data storage templates used to structure the scraped data, analogous to Django's Model.
- pipelines.py: defines data-processing behavior, e.g. persisting the structured data.
- settings.py: the configuration file, e.g. crawl depth, concurrency, download delay.
- spiders/: the spiders directory; create files here and write the crawling rules (the generated layout is sketched below).
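For orientation, the layout typically looks roughly like this right after project creation (a sketch based on Scrapy 1.5's default template; jobbole.py is added afterwards, e.g. via scrapy genspider jobbole blog.jobbole.com):

EnterpriseSpider/
    scrapy.cfg
    EnterpriseSpider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            jobbole.py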
jobbole.py
# -*- coding: utf-8 -*-
import scrapy


class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    # Allowed domains
    allowed_domains = ['blog.jobbole.com']
    # Starting URLs: a list that can hold every URL to crawl
    start_urls = ['http://blog.jobbole.com/']

    # Business logic
    def parse(self, response):
        pass
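parse is called once per downloaded response; the stub above does nothing yet. A minimal sketch of what could go there (the CSS selector is a hypothetical guess at the Jobbole markup, not verified against the live page):

def parse(self, response):
    # Yield one item per post title on the index page (selector is hypothetical)
    for title in response.css('a.archive-title::text').extract():
        yield {'title': title}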
Setting up breakpoint debugging
Create a main.py file in the same directory as scrapy.cfg, with the following content:
# -*- coding: utf-8 -*-
import sys
import os

from scrapy.cmdline import execute

# Add the project root (the directory containing this file) to sys.path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
# Equivalent to running "scrapy crawl jobbole" on the command line
execute(['scrapy', 'crawl', 'jobbole'])
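An alternative sketch uses Scrapy's CrawlerProcess API instead of the command-line entry point (this assumes main.py sits next to scrapy.cfg so the project settings can be located):

# -*- coding: utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Load settings.py from the project and run the spider in-process
process = CrawlerProcess(get_project_settings())
process.crawl('jobbole')   # spider name, as defined in JobboleSpider.name
process.start()            # blocks until crawling is finished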
Starting the spider
Start the spider with the following command:
scrapy crawl jobbole
(scrapyenv) E:\Python\Envs\EnterpriseSpider>scrapy crawl jobbole
2018-11-05 09:03:32 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: EnterpriseSpider)
2018-11-05 09:03:32 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0
2018-11-05 09:03:32 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'EnterpriseSpider', 'NEWSPIDER_MODULE': 'EnterpriseSpider.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['EnterpriseSpider.spiders']}
2018-11-05 09:03:32 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
Unhandled error in Deferred:
2018-11-05 09:03:32 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\crawler.py", line 171, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\crawler.py", line 175, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\defer.py", line 1613, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\defer.py", line 1529, in _cancellableInlineCallbacks
    _inlineCallbacks(None, g, status)
--- <exception caught here> ---
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\crawler.py", line 80, in crawl
    self.engine = self._create_engine()
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\crawler.py", line 105, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\core\engine.py", line 69, in __init__
    self.downloader = downloader_cls(crawler)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\core\downloader\__init__.py", line 88, in __init__
    self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
    mod = import_module(module)
  File "e:\python\envs\scrapyenv\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "e:\python\envs\scrapyenv\lib\site-packages\scrapy\downloadermiddlewares\retry.py", line 20, in <module>
    from twisted.web.client import ResponseFailed
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\web\client.py", line 41, in <module>
    from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\endpoints.py", line 41, in <module>
    from twisted.internet.stdio import StandardIO, PipeAddress
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\stdio.py", line 30, in <module>
    from twisted.internet import _win32stdio
  File "e:\python\envs\scrapyenv\lib\site-packages\twisted\internet\_win32stdio.py", line 9, in <module>
    import win32api
builtins.ModuleNotFoundError: No module named 'win32api'

2018-11-05 09:03:32 [twisted] CRITICAL:
Traceback (most recent call last):
  ...
ModuleNotFoundError: No module named 'win32api'

(scrapyenv) E:\Python\Envs\EnterpriseSpider>
The run fails with ModuleNotFoundError: No module named 'win32api'. The win32api module is missing.
Install the missing module, using the Douban PyPI mirror in China to speed up the download:
pip install -i https://pypi.douban.com/simple pypiwin32
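To make the mirror the default for all future installs, pip can also store it in its configuration (optional; the one-off -i flag above is enough for this fix, and pip config requires pip 10 or newer):

pip config set global.index-url https://pypi.douban.com/simple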
Start the spider again; this time it runs successfully.
(scrapyenv) E:\Python\Envs\EnterpriseSpider>scrapy crawl jobbole
2018-11-05 09:13:15 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: EnterpriseSpider)
2018-11-05 09:13:15 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0
2018-11-05 09:13:15 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'EnterpriseSpider', 'NEWSPIDER_MODULE': 'EnterpriseSpider.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['EnterpriseSpider.spiders']}
2018-11-05 09:13:15 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-11-05 09:13:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-11-05 09:13:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-11-05 09:13:16 [scrapy.middleware] INFO: Enabled item pipelines: []
2018-11-05 09:13:16 [scrapy.core.engine] INFO: Spider opened
2018-11-05 09:13:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-05 09:13:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-11-05 09:13:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://blog.jobbole.com/robots.txt> (referer: None)
2018-11-05 09:13:18 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://blog.jobbole.com/> (referer: None)
2018-11-05 09:13:18 [scrapy.core.engine] INFO: Closing spider (finished)
2018-11-05 09:13:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 440,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 21847,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 11, 5, 1, 13, 18, 465449),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 11, 5, 1, 13, 16, 312886)}
2018-11-05 09:13:18 [scrapy.core.engine] INFO: Spider closed (finished)

(scrapyenv) E:\Python\Envs\EnterpriseSpider>
Debugging the spider in PyCharm
1. Modify the settings.py file
Set ROBOTSTXT_OBEY to False (the exact change is shown in the snippet after these steps); otherwise the spider first reads each site's robots.txt, filters out disallowed requests, and stops almost immediately.
2. Set a breakpoint in the jobbole.py file
3. Launch main.py in Debug mode
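The settings change from step 1 looks like this in settings.py:

# settings.py
# Do not obey robots.txt while debugging, so requests are not filtered out
ROBOTSTXT_OBEY = False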
When the breakpoint is hit, inspect response: the URL passed in is in fact the one from start_urls.
The first line of response shows that its type is HtmlResponse and its status is 200, a successful response.
_DEFAULT_ENCODING is the fallback character set, ascii;
encoding is the character set Scrapy resolved for this page, UTF-8;
body holds the HTML source of the page being crawled, and this source is what we actually operate on.
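The same attributes can be inspected without the debugger by logging them inside parse (a minimal sketch; the attribute names match what the debugger shows):

def parse(self, response):
    # Everything the debugger displayed is available on the response object
    print(response.url)         # the URL from start_urls
    print(type(response))       # HtmlResponse
    print(response.status)      # 200
    print(response.encoding)    # 'utf-8'
    print(response.body[:200])  # first bytes of the raw HTML source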