Python: A Detailed Walkthrough of Scraping CSDN Hot-Rank Title Keywords with the Scrapy Framework


Posted in Python on November 11, 2021

Preface

This follows up on my previous post: Python: A Detailed Walkthrough of Scraping and Counting Keyword Frequencies in CSDN Hot-Rank Titles.

This time I reimplemented the same crawler on top of Scrapy. The underlying way the page source is obtained is identical; Scrapy simply gives the project a more systematic structure. Below I will also point out the issues that need attention.

Here is the GitHub repository address: github project repository link.

Environment Setup

Installing scrapy

pip install scrapy -i https://pypi.douban.com/simple

Installing selenium

pip install selenium -i https://pypi.douban.com/simple

Installing jieba

pip install jieba -i https://pypi.douban.com/simple

IDE: PyCharm

Download the matching version of the Google Chrome driver: google chrome driver download page

Check your browser version and download the corresponding driver version. A quick way to verify the driver actually works is sketched below.
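Before wiring anything into Scrapy, it is worth confirming that selenium can drive the chromedriver you downloaded. A minimal sanity check (the executable path below is the one used later in the spider; adjust it to your own install):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')  # no browser window needed
driver = webdriver.Chrome(chrome_options=options,
                          executable_path="E:\\chromedriver_win32\\chromedriver.exe")
# print the browser version reported by the driver; it should match your Chrome
print(driver.capabilities.get('browserVersion') or driver.capabilities.get('version'))
driver.quit()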


Implementation

Let's get started.

Creating the project

Use the scrapy command to create our project.

scrapy startproject csdn_hot_words

The project structure matches the official default layout, roughly as sketched below.
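For reference, the generated layout (plus the tools package and the main.py entry script added further down; their exact placement is up to you, but the imports used later assume tools sits inside the csdn_hot_words package) looks roughly like this:

csdn_hot_words/
├── scrapy.cfg
├── main.py                      # entry script, added later in this article
└── csdn_hot_words/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    ├── tools/
    │   └── analyse_sentence.py  # jieba helper, added later
    └── spiders/
        ├── __init__.py
        └── csdn.py              # the spider, added later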


Defining the Item

Following the same logic as before, the item's main field is a dictionary that maps each title keyword to its occurrence count. The code is as follows:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
 
import scrapy
 
 
class CsdnHotWordsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    words = scrapy.Field()

Keyword extraction tool

A small helper uses jieba word segmentation to extract keywords from each title.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    : 
# @File    : analyse_sentence.py
 
import jieba.analyse
 
 
def get_key_word(sentence):
    result_dic = {}
    # extract_tags returns (word, weight) pairs when withWeight=True;
    # keep the title's top 3 keywords, counting each once
    words_lis = jieba.analyse.extract_tags(
        sentence, topK=3, withWeight=True, allowPOS=())
    for word, weight in words_lis:
        if word in result_dic:
            result_dic[word] += 1
        else:
            result_dic[word] = 1
    return result_dic
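A quick way to exercise the helper on a single title (the exact keywords depend on jieba's TF-IDF dictionary, so your output may differ):

from csdn_hot_words.tools.analyse_sentence import get_key_word

# topK=3, so at most three keywords come back, each counted once per title
print(get_key_word('Python 详解通过Scrapy框架实现爬取CSDN全站热榜标题热词流程'))
# e.g. {'Scrapy': 1, 'CSDN': 1, '热榜': 1}  (illustrative output only)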

Building the spider

The spider is initialized with a browser instance here, which is used to render the dynamically loaded page content.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    : 
# @File    : csdn.py
 
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
 
from csdn_hot_words.items import CsdnHotWordsItem
from csdn_hot_words.tools.analyse_sentence import get_key_word
 
 
class CsdnSpider(scrapy.Spider):
    name = 'csdn'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://blog.csdn.net/rank/list']
 
    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # run Chrome in headless mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)
 
    def parse(self, response, **kwargs):
        titles = response.xpath("//div[@class='hosetitem-title']/a/text()")
        for x in titles:
            item = CsdnHotWordsItem()
            item['words'] = get_key_word(x.get())
            yield item

Code notes

1. Chrome runs in headless mode, so no browser window needs to open; everything executes in the background.

2. The path to the chromedriver executable has to be supplied.

3. In the parse method you can reuse the XPath from my previous article: grab each title, run it through the keyword extractor, and build an item object. A small self-contained sketch of that XPath is shown below.
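As a self-contained illustration of that XPath, here is a small sketch; the HTML fragment is only an approximation of the hot-list markup, not copied verbatim from CSDN:

from scrapy import Selector

# Illustrative markup: each hot-list entry keeps its title inside a
# div with class 'hosetitem-title', which the spider's XPath targets.
html = '''
<div class="hosetitem-title"><a href="#">Python 详解爬取CSDN热榜</a></div>
<div class="hosetitem-title"><a href="#">Java 并发编程实战笔记</a></div>
'''
for t in Selector(text=html).xpath("//div[@class='hosetitem-title']/a/text()"):
    print(t.get())
# Python 详解爬取CSDN热榜
# Java 并发编程实战笔记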

Building the middleware

A snippet of JS is added and executed to scroll the page so the lazily loaded entries get rendered. Because process_request returns an HtmlResponse built from the Selenium-rendered page, Scrapy skips its own downloader and hands that page straight to the spider. The complete middleware code:

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
from scrapy import signals
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
import time
 
from selenium.webdriver.chrome.options import Options
 
# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
 
 
class CsdnHotWordsSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
 
        # Should return None or raise an exception.
        return None
 
    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
 
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i
 
    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
 
        # Should return either None or an iterable of Request or item objects.
        pass
 
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
 
        # Must return only requests (not items).
        for r in start_requests:
            yield r
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
 
 
class CsdnHotWordsDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_request(self, request, spider):
        js = '''
            let height = 0
            let interval = setInterval(() => {
                window.scrollTo({
                    top: height,
                    behavior: "smooth"
                });
                height += 500
            }, 500);
            setTimeout(() => {
                clearInterval(interval)
            }, 20000);
        '''
        try:
            spider.browser.get(request.url)
            spider.browser.execute_script(js)
            time.sleep(20)
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            # the (single) start URL has been rendered at this point; drop this
            # close() call if the spider ever needs to fetch more than one page
            spider.browser.close()
 
    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
 
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response
 
    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
 
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

Writing a custom pipeline

The pipeline aggregates the word frequencies and writes the final counts to a file. The code is as follows:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
 
 
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
 
 
class CsdnHotWordsPipeline:
 
    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
        self.all_words = []
 
    def process_item(self, item, spider):
        self.all_words.append(item)
        return item
 
    def close_spider(self, spider):
        key_word_dic = {}
        for y in self.all_words:
            print(y)
            for k, v in y['words'].items():
                if k.lower() in key_word_dic:
                    key_word_dic[k.lower()] += v
                else:
                    key_word_dic[k.lower()] = v
        word_count_sort = sorted(key_word_dic.items(),
                                 key=lambda x: x[1], reverse=True)
        for word in word_count_sort:
            self.file.write('{},{}\n'.format(word[0], word[1]))
        self.file.close()
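To make the aggregation in close_spider easier to follow, here is the same merge-and-sort logic run standalone on two made-up items (the words and counts are invented purely for illustration):

# two hypothetical items, in the shape the spider yields them
items = [{'words': {'Java': 1, '2021': 1}},
         {'words': {'java': 1, 'python': 1}}]

key_word_dic = {}
for y in items:
    for k, v in y['words'].items():
        # lower-casing merges 'Java' and 'java' into a single counter
        key_word_dic[k.lower()] = key_word_dic.get(k.lower(), 0) + v

print(sorted(key_word_dic.items(), key=lambda x: x[1], reverse=True))
# [('java', 2), ('2021', 1), ('python', 1)]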

settings configuration

A few adjustments are needed in the settings: ROBOTSTXT_OBEY is turned off, a download delay and a browser-like User-Agent are set, and the spider middleware, downloader middleware and item pipeline are enabled. The full file:

# Scrapy settings for csdn_hot_words project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
BOT_NAME = 'csdn_hot_words'
 
SPIDER_MODULES = ['csdn_hot_words.spiders']
NEWSPIDER_MODULE = 'csdn_hot_words.spiders'
 
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'csdn_hot_words (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'
 
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
 
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32
 
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 30
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16
 
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
 
# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False
 
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}
 
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   'csdn_hot_words.middlewares.CsdnHotWordsSpiderMiddleware': 543,
}
 
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'csdn_hot_words.middlewares.CsdnHotWordsDownloaderMiddleware': 543,
}
 
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }
 
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'csdn_hot_words.pipelines.CsdnHotWordsPipeline': 300,
}
 
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False
 
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Running the main program

You could run the crawl with the scrapy command directly, but to make the logs easier to watch I added a small main program.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 22:41
# @Author  : 至尊宝
# @Site    : 
# @File    : main.py
from scrapy import cmdline
 
cmdline.execute('scrapy crawl csdn'.split())
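If you would rather not shell out through cmdline, the standard Scrapy API offers an equivalent entry point; a sketch (run it from the project root so settings.py is picked up):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load the project's settings.py and run the 'csdn' spider
process = CrawlerProcess(get_project_settings())
process.crawl('csdn')
process.start()  # blocks until the crawl finishes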

Results

Running the spider prints the usual Scrapy log and, on completion, produces the result.txt file with the sorted word counts.

Summary

Look at that: java is still the GOAT. I have no idea why the keyword 2021 also ranks so high, so I figured I would add 2021 to my own title as well.

Posting the GitHub repository address once more: github project repository link.

To be clear, the example in this article is for research and exploration only; it is not intended for malicious attacks.

A quote to share:

Ordinary people who never put in painstaking, grinding effort to actually finish something have no standing to talk about talent at all.

From Fenghuo Xi Zhuhou, Jian Lai (《剑来》)

If this article was useful to you, please don't hold back your likes. Thank you.

