Python: A Walkthrough of Scraping Baidu COVID-19 Data with the Scrapy Framework


Posted in Python on November 11, 2021

Preface

With some time on my hands, I wrote a crawler to fetch Baidu's COVID-19 data. To be clear, this is for research purposes only. Also, the page will likely keep getting anti-scraping treatment, so the corresponding XPath expressions may need adjusting over time.

GitHub repository address: code repository

This article is mainly built on the Scrapy framework.

Environment Setup

Just a couple of quick recommendations.

Plugin Recommendation

First, I recommend the Google Chrome extension XPath Helper, which lets you check whether an XPath expression is correct.
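
If you want to sanity-check an expression outside the browser as well, lxml can evaluate XPath locally; a minimal sketch (the HTML fragment below is a stand-in, not the real page structure):

from lxml import html

# Evaluate an XPath against a stand-in fragment; point it at a saved copy
# of the real page to test the spider's actual expressions.
doc = html.fromstring("<div id='nationTable'><span>展开全部</span></div>")
print(doc.xpath("//*[@id='nationTable']/span/text()"))  # ['展开全部']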

Crawl Target

The page to crawl: Baidu's live-updating COVID-19 map (实时更新:新型冠状病毒肺炎疫情地图).

The crawl targets are the nationwide figures plus the figures for each province.

Project Creation

Create the project with the scrapy command:

scrapy startproject yqsj
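
This generates the standard Scrapy scaffold:

yqsj/
    scrapy.cfg
    yqsj/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py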

WebDriver Setup

I won't go through this again here; see the setup steps in my earlier article: Python: A Walkthrough of Scraping CSDN's Site-Wide Hot-List Titles and Hot Words with Scrapy.

Project Code

Time to write some code. First, a look at the quirk in Baidu's provincial data.

The page shows all provinces only after the "展开全部" (expand all) span is clicked, so when extracting the page source we need to open the page in a simulated browser and click that button first. With that in mind, let's go step by step.

Item Definitions

Define two classes, YqsjProvinceItem and YqsjChinaItem, for the per-province data and the nationwide data respectively.

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
 
import scrapy
 
 
class YqsjProvinceItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    location = scrapy.Field()
    new = scrapy.Field()
    exist = scrapy.Field()
    total = scrapy.Field()
    cure = scrapy.Field()
    dead = scrapy.Field()
 
 
class YqsjChinaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # currently confirmed cases
    exist_diagnosis = scrapy.Field()
    # asymptomatic cases
    asymptomatic = scrapy.Field()
    # current suspected cases
    exist_suspecte = scrapy.Field()
    # current severe cases
    exist_severe = scrapy.Field()
    # cumulative confirmed cases
    cumulative_diagnosis = scrapy.Field()
    # cases imported from abroad
    overseas_input = scrapy.Field()
    # cumulative cured
    cumulative_cure = scrapy.Field()
    # cumulative deaths
    cumulative_dead = scrapy.Field()
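
For reference, a Scrapy item behaves like a dict once instantiated, which is exactly how the spider below fills it; a quick illustration (the value is made up):

from yqsj.items import YqsjProvinceItem

item = YqsjProvinceItem()
item['location'] = '湖北'  # hypothetical value, for illustration only
print(dict(item))          # {'location': '湖北'}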

Middleware Definition

After the page opens, we need to click "expand all" once.

Full code:

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
from scrapy import signals
 
# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver import ActionChains
import time
 
 
class YqsjSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
 
        # Should return None or raise an exception.
        return None
 
    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
 
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i
 
    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
 
        # Should return either None or an iterable of Request or item objects.
        pass
 
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
 
        # Must return only requests (not items).
        for r in start_requests:
            yield r
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
 
 
class YqsjDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
 
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        # return None
        try:
            spider.browser.get(request.url)
            spider.browser.maximize_window()
            time.sleep(2)
            # Click the "expand all" span so every province row gets rendered
            spider.browser.find_element_by_xpath("//*[@id='nationTable']/div/span").click()
            # ActionChains(spider.browser).click(searchButtonElement)
            time.sleep(5)
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            # The window is closed after every request; that's fine here,
            # since the spider only ever fetches its single start URL.
            spider.browser.close()
 
    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
 
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response
 
    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
 
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
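
One caveat about the click above: find_element_by_xpath belongs to the Selenium 3 API and was removed in Selenium 4. If you run a newer Selenium, the same click needs the By-based form; a minimal sketch:

from selenium.webdriver.common.by import By

# Selenium 4 replacement for the removed find_element_by_* helpers
spider.browser.find_element(By.XPATH, "//*[@id='nationTable']/div/span").click()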

Spider Definition

The spider collects the nationwide figures and the per-province figures. Full code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/7 22:05
# @Author  : 至尊宝
# @Site    : 
# @File    : baidu_yq.py
 
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
 
from yqsj.items import YqsjChinaItem, YqsjProvinceItem
 
 
class YqsjSpider(scrapy.Spider):
    name = 'yqsj'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://voice.baidu.com/act/newpneumonia/newpneumonia#tab0']
    # Note: the class name below is build-versioned and is likely to change with page updates
    china_xpath = "//div[contains(@class, 'VirusSummarySix_1-1-317_2ZJJBJ')]/text()"
    province_xpath = "//*[@id='nationTable']/table/tbody/tr[{}]/td/text()"
    province_xpath_1 = "//*[@id='nationTable']/table/tbody/tr[{}]/td/div/span/text()"
 
    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # run Chrome in headless mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        # Adjust executable_path to wherever your chromedriver lives
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)
 
    def parse(self, response, **kwargs):
        country_info = response.xpath(self.china_xpath)
        yq_china = YqsjChinaItem()
        yq_china['exist_diagnosis'] = country_info[0].get()
        yq_china['asymptomatic'] = country_info[1].get()
        yq_china['exist_suspecte'] = country_info[2].get()
        yq_china['exist_severe'] = country_info[3].get()
        yq_china['cumulative_diagnosis'] = country_info[4].get()
        yq_china['overseas_input'] = country_info[5].get()
        yq_china['cumulative_cure'] = country_info[6].get()
        yq_china['cumulative_dead'] = country_info[7].get()
        yield yq_china
        
        # Iterate over the 34 province-level rows (tr[1] .. tr[34])
        for x in range(1, 35):
            path = self.province_xpath.format(x)
            path1 = self.province_xpath_1.format(x)
            province_info = response.xpath(path)
            province_name = response.xpath(path1)
            yq_province = YqsjProvinceItem()
            yq_province['location'] = province_name.get()
            yq_province['new'] = province_info[0].get()
            yq_province['exist'] = province_info[1].get()
            yq_province['total'] = province_info[2].get()
            yq_province['cure'] = province_info[3].get()
            yq_province['dead'] = province_info[4].get()
            yield yq_province
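
With the middleware and pipeline registered in the settings (shown below), the spider is started from the project root in the usual way:

scrapy crawl yqsj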

Pipeline: Writing Results to a Text File

The pipeline writes the results out in a fixed text format. Full code:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
 
 
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
 
from yqsj.items import YqsjChinaItem, YqsjProvinceItem
 
 
class YqsjPipeline:
    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
 
    def process_item(self, item, spider):
        if isinstance(item, YqsjChinaItem):
            self.file.write(
                "国内疫情\n现有确诊\t{}\n无症状\t{}\n现有疑似\t{}\n现有重症\t{}\n累计确诊\t{}\n境外输入\t{}\n累计治愈\t{}\n累计死亡\t{}\n".format(
                    item['exist_diagnosis'],
                    item['asymptomatic'],
                    item['exist_suspecte'],
                    item['exist_severe'],
                    item['cumulative_diagnosis'],
                    item['overseas_input'],
                    item['cumulative_cure'],
                    item['cumulative_dead']))
        if isinstance(item, YqsjProvinceItem):
            self.file.write(
                "省份:{}\t新增:{}\t现有:{}\t累计:{}\t治愈:{}\t死亡:{}\n".format(
                    item['location'],
                    item['new'],
                    item['exist'],
                    item['total'],
                    item['cure'],
                    item['dead']))
        return item
 
    def close_spider(self, spider):
        self.file.close()
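
A small design note: opening result.txt in __init__ works, but Scrapy's open_spider hook is the conventional counterpart to close_spider and ties the file's lifetime to the crawl; an equivalent sketch:

class YqsjPipeline:
    def open_spider(self, spider):
        # open the output file when the crawl starts
        self.file = open('result.txt', 'w', encoding='utf-8')

    # process_item and close_spider stay exactly as above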

Settings Changes

Use the following as a reference and adjust as needed:

# Scrapy settings for yqsj project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
BOT_NAME = 'yqsj'
 
SPIDER_MODULES = ['yqsj.spiders']
NEWSPIDER_MODULE = 'yqsj.spiders'
 
 
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'yqsj (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'
 
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
 
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
 
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
 
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
 
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
 
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}
 
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   'yqsj.middlewares.YqsjSpiderMiddleware': 543,
}
 
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'yqsj.middlewares.YqsjDownloaderMiddleware': 543,
}
 
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
 
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'yqsj.pipelines.YqsjPipeline': 300,
}
 
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
 
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
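
The substantive deviations from the generated defaults are ROBOTSTXT_OBEY = False, COOKIES_ENABLED = False, the browser-like USER_AGENT and request headers, and the registration of YqsjSpiderMiddleware, YqsjDownloaderMiddleware, and YqsjPipeline; everything else is the stock scaffold left commented out.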

Verifying the Results

Run the crawl and inspect the output: result.txt should contain the nationwide summary block followed by one line per province.

Summary

Emmm, this was just something written for fun in some idle time; there isn't much to summarize.

A thought to share:

Cultivating the mind is itself a form of cultivation. In good times, build strength; in adversity, build character. Neither can be spared. — Sword of Coming (《剑来》)

If this article was useful to you, don't be stingy with a like. Thanks.

