Python Big Data: A Detailed Guide to Scraping Data from Web Pages


Posted in Python on November 16, 2019

This article walks through a worked example of scraping data from a web page with Python and Scrapy, shared here for your reference. The details are as follows:

myspider.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.spiders import Spider
from lxml import etree
from jredu.items import JreduItem

class JreduSpider(Spider):
  name = 'tt'  # the spider's name; required, and must be unique
  allowed_domains = ['sohu.com']
  start_urls = [
    'http://www.sohu.com'
  ]

  def parse(self, response):
    content = response.body.decode('utf-8')
    dom = etree.HTML(content)
    for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
      lis = ul.xpath("./li")
      for li in lis:
        item = JreduItem()  # create the item object
        if ul.index(li) == 0:
          # the first <li> wraps its headline in <a><strong>
          strong = li.xpath("./a/strong/text()")
          item['title'] = strong[0]
          item['href'] = li.xpath("./a/@href")[0]
        else:
          # the other <li> entries keep the text directly in the last <a>
          la = li.xpath("./a[last()]/text()")
          item['title'] = la[0]
          item['href'] = li.xpath("./a[last()]/@href")[0]  # an attribute needs the @ prefix
        yield item
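
As a side note, the same extraction can be written with Scrapy's built-in selectors instead of lxml, which avoids decoding the response body by hand. The sketch below is an illustration of that approach only, assuming the sohu.com markup targeted above (the live page layout may have changed since the article was written):

# Hedged sketch: parse() rewritten with response.xpath (Scrapy's own selectors).
def parse(self, response):
  for ul in response.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
    for index, li in enumerate(ul.xpath("./li")):
      item = JreduItem()
      if index == 0:
        # first entry: headline sits inside <a><strong>
        item['title'] = li.xpath("./a/strong/text()").extract_first()
        item['href'] = li.xpath("./a/@href").extract_first()
      else:
        item['title'] = li.xpath("./a[last()]/text()").extract_first()
        item['href'] = li.xpath("./a[last()]/@href").extract_first()
      yield item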

items.py:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class JreduItem(scrapy.Item):  # analogous to an entity class in Java
  # define the fields for your item here like:
  # name = scrapy.Field()
  title = scrapy.Field()  # Field for the headline text
  href = scrapy.Field()   # Field for the link URL
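
Since scrapy.Item behaves like a dictionary, the spider above can assign and read fields with subscript syntax. A quick illustration with made-up values:

# Illustration only; the title and href values here are hypothetical.
item = JreduItem()
item['title'] = u'Some headline'
item['href'] = 'http://www.sohu.com/some-article'
print(dict(item))  # {'title': u'Some headline', 'href': 'http://www.sohu.com/some-article'}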

middlewares.py:

# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class JreduSpiderMiddleware(object):
  # Not all methods need to be defined. If a method is not defined,
  # scrapy acts as if the spider middleware does not modify the
  # passed objects.
  @classmethod
  def from_crawler(cls, crawler):
    # This method is used by Scrapy to create your spiders.
    s = cls()
    crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
    return s
  def process_spider_input(self, response, spider):
    # Called for each response that goes through the spider
    # middleware and into the spider.
    # Should return None or raise an exception.
    return None
  def process_spider_output(self, response, result, spider):
    # Called with the results returned from the Spider, after
    # it has processed the response.
    # Must return an iterable of Request, dict or Item objects.
    for i in result:
      yield i
  def process_spider_exception(self, response, exception, spider):
    # Called when a spider or process_spider_input() method
    # (from other spider middleware) raises an exception.
    # Should return either None or an iterable of Response, dict
    # or Item objects.
    pass
  def process_start_requests(self, start_requests, spider):
    # Called with the start requests of the spider, and works
    # similarly to the process_spider_output() method, except
    # that it doesn't have a response associated.
    # Must return only requests (not items).
    for r in start_requests:
      yield r
  def spider_opened(self, spider):
    spider.logger.info('Spider opened: %s' % spider.name)
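
Note that this file is just the template that scrapy startproject generates; the middleware does nothing until it is enabled in settings.py. If you wanted it active, you would uncomment the corresponding block in settings.py along these lines:

# In settings.py -- activates the JreduSpiderMiddleware defined above.
SPIDER_MIDDLEWARES = {
  'jredu.middlewares.JreduSpiderMiddleware': 543,
}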

pipelines.py:

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import codecs
import json

class JreduPipeline(object):
  def __init__(self):
    # open the output file once, when the pipeline is created
    self.fill = codecs.open("data.txt", encoding="utf-8", mode="w")

  def process_item(self, item, spider):
    # serialize each item as one JSON object per line
    line = json.dumps(dict(item), ensure_ascii=False) + "\n"
    self.fill.write(line)
    return item

  def close_spider(self, spider):
    # flush and close the output file when the spider finishes
    self.fill.close()
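
As an aside, Scrapy can produce the same JSON-lines output without a custom pipeline through its built-in feed exports. The -o flag and the .jl extension are standard Scrapy, though this variant is not part of the original project:

# Run the spider and let Scrapy's feed exporter write JSON lines to data.jl.
from scrapy import cmdline
cmdline.execute("scrapy crawl tt -o data.jl".split())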

settings.py:

# -*- coding: utf-8 -*-
# Scrapy settings for jredu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   http://doc.scrapy.org/en/latest/topics/settings.html
#   http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#   http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'jredu'
SPIDER_MODULES = ['jredu.spiders']
NEWSPIDER_MODULE = 'jredu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jredu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'jredu.middlewares.JreduSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'jredu.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
  'jredu.pipelines.JreduPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
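
One practical note on the settings above: with ROBOTSTXT_OBEY = True, Scrapy skips any URL that sohu.com's robots.txt disallows, so if the spider seems to yield nothing, that setting is the first thing to check. For polite local experiments you might also identify yourself and throttle requests; the values below are illustrative assumptions, not part of the original project:

# Illustrative, assumed values -- uncomment and adapt in settings.py.
USER_AGENT = 'jredu (+http://www.yourdomain.com)'
DOWNLOAD_DELAY = 3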

Finally, the project needs an entry point to launch the crawler:

main.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
# entry point for running the spider
from scrapy import cmdline
cmdline.execute("scrapy crawl tt".split())
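
If you would rather stay inside Python than shell out to the scrapy command, the same crawl can be started through Scrapy's public API. A minimal sketch:

# Run the spider in-process; 'tt' is the spider name defined in myspider.py.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('tt')
process.start()  # blocks until the crawl finishes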


We hope this article is helpful to readers working on Python programming.
