Python Big Data: How to Scrape Data from Web Pages, Explained in Detail


Posted in Python on November 16, 2019

This article walks through a worked example of scraping data from a web page with Python and Scrapy. It is shared here for reference; the details are as follows:


myspider.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.spiders import Spider
from lxml import etree
from jredu.items import JreduItem
class JreduSpider(Spider):
    name = 'tt'  # spider name; required and must be unique within the project
    allowed_domains = ['sohu.com']
    start_urls = [
        'http://www.sohu.com'
    ]

    def parse(self, response):
        content = response.body.decode('utf-8')
        dom = etree.HTML(content)
        for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
            lis = ul.xpath("./li")
            for li in lis:
                item = JreduItem()  # one item per headline
                if ul.index(li) == 0:
                    # the first <li> wraps its headline in <strong>
                    strong = li.xpath("./a/strong/text()")
                    item['title'] = strong[0]
                    item['href'] = li.xpath("./a/@href")[0]
                else:
                    la = li.xpath("./a[last()]/text()")
                    item['title'] = la[0]
                    # note the @: the original "./a[last()]/href" selected child
                    # elements named href, matched nothing, and raised IndexError
                    item['href'] = li.xpath("./a[last()]/@href")[0]
                yield item
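A note on the parsing approach: the spider decodes response.body and re-parses it with lxml, but Scrapy responses already ship with XPath selectors, so the same extraction works without the extra dependency. A minimal sketch of an equivalent parse method using the built-in selectors (same XPath expressions as above, not re-verified against the current sohu.com markup):

def parse(self, response):
    # response.xpath() works on the response directly; no manual
    # decoding and no lxml import are needed
    for ul in response.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
        for index, li in enumerate(ul.xpath("./li")):
            item = JreduItem()
            if index == 0:
                item['title'] = li.xpath("./a/strong/text()").extract_first()
                item['href'] = li.xpath("./a/@href").extract_first()
            else:
                item['title'] = li.xpath("./a[last()]/text()").extract_first()
                item['href'] = li.xpath("./a[last()]/@href").extract_first()
            yield item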

items.py:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class JreduItem(scrapy.Item):  # analogous to an entity class in Java
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()  # declare a Field for each value to be scraped
    href = scrapy.Field()
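Items behave like dictionaries with a fixed set of keys: assigning to a field that was not declared raises a KeyError, which catches typos early. A quick illustration (the values are made up):

item = JreduItem()
item['title'] = 'example headline'            # declared field: accepted
item['href'] = 'http://www.sohu.com/example'  # hypothetical URL
# item['titel'] = '...'  # would raise KeyError: field not declared
print(dict(item))  # items convert to plain dicts, which the pipeline below relies on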

middlewares.py:

# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class JreduSpiderMiddleware(object):
  # Not all methods need to be defined. If a method is not defined,
  # scrapy acts as if the spider middleware does not modify the
  # passed objects.
  @classmethod
  def from_crawler(cls, crawler):
    # This method is used by Scrapy to create your spiders.
    s = cls()
    crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
    return s
  def process_spider_input(self, response, spider):
    # Called for each response that goes through the spider
    # middleware and into the spider.
    # Should return None or raise an exception.
    return None
  def process_spider_output(self, response, result, spider):
    # Called with the results returned from the Spider, after
    # it has processed the response.
    # Must return an iterable of Request, dict or Item objects.
    for i in result:
      yield i
  def process_spider_exception(self, response, exception, spider):
    # Called when a spider or process_spider_input() method
    # (from other spider middleware) raises an exception.
    # Should return either None or an iterable of Response, dict
    # or Item objects.
    pass
  def process_start_requests(self, start_requests, spider):
    # Called with the start requests of the spider, and works
    # similarly to the process_spider_output() method, except
    # that it doesn't have a response associated.
    # Must return only requests (not items).
    for r in start_requests:
      yield r
  def spider_opened(self, spider):
    spider.logger.info('Spider opened: %s' % spider.name)
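This file is the unmodified template that scrapy startproject generates, and it stays inactive until registered in settings.py. If the spider_opened log line is wanted, enabling it looks like this (543 is the priority the template itself suggests):

SPIDER_MIDDLEWARES = {
    'jredu.middlewares.JreduSpiderMiddleware': 543,
}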

pipelines.py:

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import codecs
import json
class JreduPipeline(object):
    def __init__(self):
        # codecs.open encodes the unicode lines to UTF-8 on write
        self.file = codecs.open("data.txt", encoding="utf-8", mode="wb")

    def process_item(self, item, spider):
        # serialize each item as one JSON object per line
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        # the original never closed the file; flush and release it
        # when the crawl ends so no buffered lines are lost
        self.file.close()
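Two remarks on this pipeline: json.dumps escapes non-ASCII characters by default, so Chinese headlines appear as \uXXXX sequences in data.txt; passing ensure_ascii=False would keep them readable. And for simple JSON-lines output like this, Scrapy can also export items straight from the command line, with no custom pipeline at all (the output file name is arbitrary):

scrapy crawl tt -o data.jl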

settings.py:

# -*- coding: utf-8 -*-
# Scrapy settings for jredu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   http://doc.scrapy.org/en/latest/topics/settings.html
#   http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#   http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'jredu'
SPIDER_MODULES = ['jredu.spiders']
NEWSPIDER_MODULE = 'jredu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jredu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'jredu.middlewares.JreduSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'jredu.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
  'jredu.pipelines.JreduPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
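If a setting should apply to this spider only (say, a polite download delay just for sohu.com), Scrapy also supports per-spider overrides through the custom_settings class attribute; a sketch (the one-second delay is an arbitrary example):

class JreduSpider(Spider):
    name = 'tt'
    # per-spider overrides take precedence over settings.py
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
    }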

Finally, the project needs an entry point to launch the crawl:

main.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
# entry point for running the spider from within an IDE
from scrapy import cmdline
cmdline.execute("scrapy crawl tt".split())
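Note that cmdline.execute calls sys.exit internally, so nothing placed after it will run. An alternative entry point that stays inside the Python process is Scrapy's CrawlerProcess (assuming the script is run from the project root so the project settings are found):

#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('tt')  # the spider name registered in myspider.py
process.start()      # blocks until the crawl finishes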

More Python-related material is collected in this site's topic guides: "Python Socket Programming Tips", "Python Regular Expression Usage", "Python Data Structures and Algorithms Tutorial", "Python Function Usage Tips", "Python String Manipulation Tips", "Classic Python Tutorials for Beginners and Beyond", and "Python File and Directory Handling Tips".

We hope this article is helpful to readers working on Python programming.
