Python Big Data: A Detailed Guide to Scraping Data from Web Pages


Posted in Python on November 16, 2019

This article walks through a working example of scraping data from web pages with Python. It is shared here for your reference; the details are as follows:


myspider.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.spiders import Spider
from lxml import etree
from jredu.items import JreduItem

class JreduSpider(Spider):
  name = 'tt' # the spider's name; required, and must be unique
  allowed_domains = ['sohu.com']
  start_urls = [
    'http://www.sohu.com'
  ]
  def parse(self, response):
    content = response.body.decode('utf-8')
    dom = etree.HTML(content)
    for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
      lis = ul.xpath("./li")
      for index, li in enumerate(lis):
        item = JreduItem() # instantiate the item
        if index == 0:
          # the first <li> carries its headline inside a <strong> tag
          strong = li.xpath("./a/strong/text()")
          item['title'] = strong[0]
          item['href'] = li.xpath("./a/@href")[0]
        else:
          la = li.xpath("./a[last()]/text()")
          item['title'] = la[0]
          item['href'] = li.xpath("./a[last()]/@href")[0] # @href selects the attribute
        yield item
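
As an aside, Scrapy's response object already provides XPath selectors of its own, so decoding the body and re-parsing it with lxml is optional. Below is a minimal sketch of the same parse logic using response.xpath, with the same class, item, and XPath expressions as above; extract_first() returns None instead of raising an IndexError when a node is missing:

  def parse(self, response):
    # equivalent logic using Scrapy's built-in selectors instead of lxml
    for ul in response.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
      for index, li in enumerate(ul.xpath("./li")):
        item = JreduItem()
        if index == 0:
          item['title'] = li.xpath("./a/strong/text()").extract_first()
          item['href'] = li.xpath("./a/@href").extract_first()
        else:
          item['title'] = li.xpath("./a[last()]/text()").extract_first()
          item['href'] = li.xpath("./a[last()]/@href").extract_first()
        yield item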

items.py:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class JreduItem(scrapy.Item): # comparable to an entity class in Java
  # define the fields for your item here like:
  # name = scrapy.Field()
  title = scrapy.Field() # a Field object for the headline text
  href = scrapy.Field()  # and one for the link URL
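
A scrapy.Item behaves much like a dictionary, which is why the spider assigns to item['title'] and the pipeline below can call dict(item). A quick illustrative snippet (the values here are made up):

from jredu.items import JreduItem
item = JreduItem()
item['title'] = u'Example headline' # assignment works like a dict, but only for declared fields
item['href'] = 'http://www.sohu.com/example'
print(dict(item)) # {'title': u'Example headline', 'href': 'http://www.sohu.com/example'}

One design note: assigning a key that was never declared as a Field (say, item['date']) raises a KeyError, which catches typos early.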

middlewares.py:

# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class JreduSpiderMiddleware(object):
  # Not all methods need to be defined. If a method is not defined,
  # scrapy acts as if the spider middleware does not modify the
  # passed objects.
  @classmethod
  def from_crawler(cls, crawler):
    # This method is used by Scrapy to create your spiders.
    s = cls()
    crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
    return s
  def process_spider_input(self, response, spider):
    # Called for each response that goes through the spider
    # middleware and into the spider.
    # Should return None or raise an exception.
    return None
  def process_spider_output(self, response, result, spider):
    # Called with the results returned from the Spider, after
    # it has processed the response.
    # Must return an iterable of Request, dict or Item objects.
    for i in result:
      yield i
  def process_spider_exception(self, response, exception, spider):
    # Called when a spider or process_spider_input() method
    # (from other spider middleware) raises an exception.
    # Should return either None or an iterable of Response, dict
    # or Item objects.
    pass
  def process_start_requests(self, start_requests, spider):
    # Called with the start requests of the spider, and works
    # similarly to the process_spider_output() method, except
    # that it doesn't have a response associated.
    # Must return only requests (not items).
    for r in start_requests:
      yield r
  def spider_opened(self, spider):
    spider.logger.info('Spider opened: %s' % spider.name)
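
This middleware is the unmodified template that scrapy startproject generates; it passes everything through untouched, and it only takes effect if enabled in settings.py. To switch it on, you would uncomment the corresponding block there (it appears commented out in the settings below):

SPIDER_MIDDLEWARES = {
  'jredu.middlewares.JreduSpiderMiddleware': 543,
}

This project leaves it disabled, which is fine because the middleware does not modify anything.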

pipelines.py:

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import codecs
import json
class JreduPipeline(object):
  def __init__(self):
    # open the output file once, when the pipeline is created
    self.file = codecs.open("data.txt", encoding="utf-8", mode="w")
  def process_item(self, item, spider):
    # write each item as one JSON object per line;
    # ensure_ascii=False keeps Chinese titles readable instead of \uXXXX escapes
    line = json.dumps(dict(item), ensure_ascii=False) + "\n"
    self.file.write(line)
    return item
  def close_spider(self, spider):
    self.file.close() # flush and close the file when the spider finishes
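
Writing JSON lines by hand like this is mainly instructive; Scrapy's built-in feed exports can produce equivalent output with no custom pipeline at all. For example, the following command (the output file name is arbitrary) writes one JSON object per line:

scrapy crawl tt -o data.jl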

settings.py:

# -*- coding: utf-8 -*-
# Scrapy settings for jredu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   http://doc.scrapy.org/en/latest/topics/settings.html
#   http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#   http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'jredu'
SPIDER_MODULES = ['jredu.spiders']
NEWSPIDER_MODULE = 'jredu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jredu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'jredu.middlewares.JreduSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'jredu.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
  'jredu.pipelines.JreduPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Finally, we need an entry-point script to run the crawler:

main.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
# execution entry point for the spider
from scrapy import cmdline
cmdline.execute("scrapy crawl tt".split())
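
With everything in place, running python main.py from the project root (or scrapy crawl tt directly) starts the crawl. Assuming the Sohu front page still matches the XPath expressions above, data.txt should fill with lines like the following (the values here are illustrative, not real output):

{"title": "Some headline", "href": "http://www.sohu.com/a/123456789_000000"}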


I hope this article is helpful to readers working on Python programming.
