A Detailed Guide to Building an IP Proxy Pool with Scrapy


Posted in Python on September 29, 2020

1. Why Build a Crawler Proxy Pool

Among the many anti-scraping measures websites use, one is throttling by IP access frequency: within a given time window, once an IP's request count reaches a certain threshold, that IP is blacklisted and blocked for a period of time.

There are two ways to deal with this:

1. Lower the crawler's request rate so the IP never hits the limit. The drawback is obvious: crawling becomes much slower.

2. Build an IP proxy pool and rotate through different IPs while crawling.

2. The Approach

1. Crawl proxy IPs from proxy-listing sites (such as 西刺代理, 快代理, 云代理, and 无忧代理);

2. Verify that each proxy works: send a request to a chosen URL through the proxy and check the response (a minimal sketch of such a check follows this list);

3. Save the working proxies to a database.
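Step 2 is not covered by the Scrapy project shown below, so here is a minimal sketch of such a check using the requests library (the test URL, timeout, and example proxy address are placeholders, not part of the original project):

# check_proxy.py -- minimal availability check for a single proxy (sketch)
import requests


def is_proxy_alive(proxy_url, test_url='https://httpbin.org/ip', timeout=5):
  # Route one request through the proxy and report whether it succeeded
  proxies = {'http': proxy_url, 'https': proxy_url}
  try:
    response = requests.get(test_url, proxies=proxies, timeout=timeout)
    return response.status_code == 200
  except requests.RequestException:
    return False


if __name__ == '__main__':
  print(is_proxy_alive('http://127.0.0.1:8050'))  # placeholder proxy address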

In the earlier article 《Python爬虫代理池搭建》 we already built a simple IP proxy pool with Python's requests module, but it crawled slowly. Sites such as 西刺代理, 快代理, and 云代理 list their proxies across thousands of pages, so that approach does not really scale.

This article uses 快代理 (kuaidaili.com) as an example and shows how to crawl proxy IPs with Scrapy-Redis.

3. Building the Proxy Pool

The directory structure of the Scrapy project is as follows:

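Based on the imports used in the modules shown below, the layout is roughly the following (the top-level folder name and scrapy.cfg are the standard Scrapy defaults and are assumed here):

proxy_pool/
├── scrapy.cfg
└── proxy_pool/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    ├── utils.py
    └── spiders/
        ├── __init__.py
        └── kuai_proxy.py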

items.py

# -*- coding: utf-8 -*-
import re
import scrapy
from proxy_pool.settings import PROXY_URL_FORMATTER

schema_pattern = re.compile(r'^https?$', re.I)
ip_pattern = re.compile(r'^([0-9]{1,3}\.){3}[0-9]{1,3}$', re.I)
port_pattern = re.compile(r'^[0-9]{2,5}$', re.I)


class ProxyPoolItem(scrapy.Item):
  # define the fields for your item here like:
  # name = scrapy.Field()
  '''
    {
      "schema": "http",             # proxy scheme
      "ip": "127.0.0.1",            # proxy IP address
      "port": "8050",               # proxy port
      "original": "西刺代理",         # site the proxy was crawled from
      "used_total": 11,             # total number of times the proxy has been used
      "success_times": 5,           # number of successful requests through the proxy
      "continuous_failed": 3,       # consecutive failures when requesting through the proxy
      "created_time": "2018-05-02"  # date the proxy was crawled
    }
  '''
  schema = scrapy.Field()
  ip = scrapy.Field()
  port = scrapy.Field()
  original = scrapy.Field()
  used_total = scrapy.Field()
  success_times = scrapy.Field()
  continuous_failed = scrapy.Field()
  created_time = scrapy.Field()

  # Check that the proxy's schema, IP, and port are well-formed
  def _check_format(self):
    if self['schema'] is not None and self['ip'] is not None and self['port'] is not None:
      if schema_pattern.match(self['schema']) and ip_pattern.match(self['ip']) and port_pattern.match(
          self['port']):
        return True
    return False

  # Build the full proxy URL for this item
  def _get_url(self):
    return PROXY_URL_FORMATTER % {'schema': self['schema'], 'ip': self['ip'], 'port': self['port']}
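A quick way to see what these two helpers do (a throwaway snippet, not part of the project; run it from the project root so that the proxy_pool package is importable):

from proxy_pool.items import ProxyPoolItem

item = ProxyPoolItem()
item['schema'] = 'http'
item['ip'] = '127.0.0.1'
item['port'] = '8050'
print(item._check_format())  # True, since all three fields are well-formed
print(item._get_url())       # http://127.0.0.1:8050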

kuai_proxy.py

# -*- coding: utf-8 -*-
import re
import time
import scrapy
from proxy_pool.utils import strip, logger
from proxy_pool.items import ProxyPoolItem


class KuaiProxySpider(scrapy.Spider):
  name = 'kuai_proxy'
  allowed_domains = ['kuaidaili.com']
  start_urls = ['https://www.kuaidaili.com/free/inha/1/', 'https://www.kuaidaili.com/free/intr/1/']

  def parse(self, response):
    logger.info('Crawling: < ' + response.request.url + ' >')
    tr_list = response.css("div#list>table>tbody tr")
    for tr in tr_list:
      ip = tr.css("td[data-title='IP']::text").get()
      port = tr.css("td[data-title='PORT']::text").get()
      schema = tr.css("td[data-title='类型']::text").get()
      if schema is not None and schema.lower() in ("http", "https"):
        item = ProxyPoolItem()
        item['schema'] = strip(schema).lower()
        item['ip'] = strip(ip)
        item['port'] = strip(port)
        item['original'] = '快代理'
        item['created_time'] = time.strftime('%Y-%m-%d', time.localtime(time.time()))
        if item._check_format():
          yield item
    next_page = response.xpath("//a[@class='active']/../following-sibling::li/a/@href").get()
    if next_page is not None:
      next_url = 'https://www.kuaidaili.com' + next_page
      yield scrapy.Request(next_url)
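The spider is started with the usual scrapy crawl kuai_proxy command. Alternatively, a small launcher script does the same thing programmatically (a sketch; the file name is arbitrary and it is assumed to sit in the project root next to scrapy.cfg):

# run_spider.py -- launch the spider from a script instead of the scrapy CLI (sketch)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from proxy_pool.spiders.kuai_proxy import KuaiProxySpider

if __name__ == '__main__':
  process = CrawlerProcess(get_project_settings())  # loads settings.py via scrapy.cfg
  process.crawl(KuaiProxySpider)
  process.start()  # blocks until the crawl finishes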

middlewares.py

# -*- coding: utf-8 -*-

import random
from proxy_pool.utils import logger


# Downloader middleware that assigns a random proxy to each request
class RandomProxyMiddleware(object):

  # Pick a random proxy from the PROXIES list in settings
  def process_request(self, request, spider):
    proxy = random.choice(spider.settings['PROXIES'])
    request.meta['proxy'] = proxy
    return None


# Downloader middleware that assigns a random User-Agent to each request
class RandomUserAgentMiddleware(object):
  def process_request(self, request, spider):
    # Pick a random User-Agent from the USER_AGENT_LIST in settings
    user_agent = random.choice(spider.settings['USER_AGENT_LIST'])
    request.headers['User-Agent'] = user_agent
    return None

  def process_response(self, request, response, spider):
    # Log the User-Agent to verify that the header was actually set
    logger.info("headers ::> User-Agent = " + str(request.headers['User-Agent'], encoding="utf8"))
    return response

pipelines.py

# -*- coding: utf-8 -*-

import json
import redis
from proxy_pool.settings import REDIS_HOST, REDIS_PORT, REDIS_PARAMS, PROXIES_UNCHECKED_LIST, PROXIES_UNCHECKED_SET

server = redis.StrictRedis(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PARAMS['password'])

class ProxyPoolPipeline(object):

  # Push usable proxies onto the unchecked-proxies list in Redis
  def process_item(self, item, spider):
    if not self._is_existed(item):
      server.rpush(PROXIES_UNCHECKED_LIST, json.dumps(dict(item), ensure_ascii=False))
    return item

  # Check whether the proxy has already been collected (deduplicated via a Redis set)
  def _is_existed(self, item):
    added = server.sadd(PROXIES_UNCHECKED_SET, item._get_url())
    return added == 0
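Each accepted proxy ends up as a JSON string in the proxies:unchecked:list key. A quick way to peek at what has been stored (a throwaway snippet that reuses the Redis settings from settings.py below):

# inspect_unchecked.py -- print the first few unchecked proxies from Redis (sketch)
import json

import redis

server = redis.StrictRedis(host='172.16.250.238', port=6379, password='123456')

for raw in server.lrange('proxies:unchecked:list', 0, 9):
  print(json.loads(raw))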

settings.py

# -*- coding: utf-8 -*-
BOT_NAME = 'proxy_pool'

SPIDER_MODULES = ['proxy_pool.spiders']
NEWSPIDER_MODULE = 'proxy_pool.spiders'

# Redis key for the list of proxies that have not been checked yet
PROXIES_UNCHECKED_LIST = 'proxies:unchecked:list'

# Redis set of unchecked HTTP/HTTPS proxies already collected (used for deduplication)
PROXIES_UNCHECKED_SET = 'proxies:unchecked:set'

# Format string for building proxy URLs
PROXY_URL_FORMATTER = '%(schema)s://%(ip)s:%(port)s'

# Default request headers
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
  'Accept-Encoding': 'gzip, deflate, br',
  'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7',
  'Connection': 'keep-alive'
}

# Requesting too frequently triggers 503 responses, so wait 5 seconds between requests
DOWNLOAD_DELAY = 5

USER_AGENT_LIST = [
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
  "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
  "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
  'proxy_pool.middlewares.RandomUserAgentMiddleware': 543,
  #  'proxy_pool.middlewares.RandomProxyMiddleware': 544,
}

ITEM_PIPELINES = {
  'proxy_pool.pipelines.ProxyPoolPipeline': 300,
}

PROXIES = [
  "https://171.13.92.212:9797",
  "https://164.163.234.210:8080",
  "https://143.202.73.219:8080",
  "https://103.75.166.15:8080"
]

######################################################
############## Scrapy-Redis related settings ################
######################################################

# Redis host and port
REDIS_HOST = '172.16.250.238'
REDIS_PORT = 6379
REDIS_PARAMS = {'password': '123456'}

# Use Redis to store the scheduler's request queue
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Make all spider instances share the Redis-based duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Persist the request queue in Redis so the crawl can be paused and resumed
SCHEDULER_PERSIST = True

# Request scheduling strategy; priority queue by default
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

utils.py

# -*- coding: utf-8 -*-
import logging

# Configure the log output format
logging.basicConfig(level=logging.INFO,
          format='[%(asctime)-15s] [%(levelname)8s] [%(name)10s ] - %(message)s (%(filename)s:%(lineno)s)',
          datefmt='%Y-%m-%d %T'
          )
logger = logging.getLogger(__name__)

# Strip leading and trailing whitespace
def strip(data):
  if data is not None:
    return data.strip()
  return data

This concludes this walkthrough of building an IP proxy pool with Scrapy.
