Python Big Data: How to Scrape Data from Web Pages (Detailed Explanation)


Posted in Python on November 16, 2019

This article demonstrates, with a working Scrapy example, how to scrape data from a web page in Python. It is shared here for your reference; the details are as follows:


myspider.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.spiders import Spider
from lxml import etree
from jredu.items import JreduItem

class JreduSpider(Spider):
  name = 'tt'  # spider name: required and must be unique
  allowed_domains = ['sohu.com']
  start_urls = [
    'http://www.sohu.com'
  ]

  def parse(self, response):
    content = response.body.decode('utf-8')
    dom = etree.HTML(content)
    for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
      lis = ul.xpath("./li")
      for li in lis:
        item = JreduItem()  # one item per headline
        if ul.index(li) == 0:
          # the first <li> wraps its headline text in <strong>
          strong = li.xpath("./a/strong/text()")
          item['title'] = strong[0]
          item['href'] = li.xpath("./a/@href")[0]
        else:
          # the remaining <li> elements keep the headline in the last <a>
          la = li.xpath("./a[last()]/text()")
          item['title'] = la[0]
          item['href'] = li.xpath("./a[last()]/@href")[0]  # note the @: we want the href attribute
        yield item
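Before running the spider, it can help to check the XPath expressions against the live page. Below is a minimal sketch that does this with requests and lxml outside of Scrapy; requests is an extra dependency not used in the original project, and the selectors only return results as long as Sohu's markup still matches the structure assumed above.

#!/usr/bin/python
# -*- coding:utf-8 -*-
# Quick selector check outside of Scrapy (sketch; assumes the page layout described above).
import requests
from lxml import etree

html = requests.get("http://www.sohu.com").content.decode("utf-8")
dom = etree.HTML(html)
for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
  for li in ul.xpath("./li"):
    # prints the text and href of the last <a> in each <li>
    print(li.xpath("./a[last()]/text()"), li.xpath("./a[last()]/@href"))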

items.py:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class JreduItem(scrapy.Item):  # comparable to an entity class in Java
  # define the fields for your item here like:
  # name = scrapy.Field()
  title = scrapy.Field()  # each Field() declares one attribute of the item
  href = scrapy.Field()
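A scrapy.Item behaves much like a dictionary, which is what the pipeline below relies on when it calls dict(item). A quick illustration (a sketch, assuming the jredu project is on the import path):

from jredu.items import JreduItem

item = JreduItem()
item['title'] = u'some headline'                    # fields are assigned like dict keys
item['href'] = 'http://www.sohu.com/some-article'
print(dict(item))                                   # {'title': ..., 'href': ...}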

middlewares.py (the default template generated by scrapy startproject; it stays inactive unless SPIDER_MIDDLEWARES is uncommented in settings.py):

# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class JreduSpiderMiddleware(object):
  # Not all methods need to be defined. If a method is not defined,
  # scrapy acts as if the spider middleware does not modify the
  # passed objects.
  @classmethod
  def from_crawler(cls, crawler):
    # This method is used by Scrapy to create your spiders.
    s = cls()
    crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
    return s
  def process_spider_input(self, response, spider):
    # Called for each response that goes through the spider
    # middleware and into the spider.
    # Should return None or raise an exception.
    return None
  def process_spider_output(self, response, result, spider):
    # Called with the results returned from the Spider, after
    # it has processed the response.
    # Must return an iterable of Request, dict or Item objects.
    for i in result:
      yield i
  def process_spider_exception(self, response, exception, spider):
    # Called when a spider or process_spider_input() method
    # (from other spider middleware) raises an exception.
    # Should return either None or an iterable of Response, dict
    # or Item objects.
    pass
  def process_start_requests(self, start_requests, spider):
    # Called with the start requests of the spider, and works
    # similarly to the process_spider_output() method, except
    # that it doesn't have a response associated.
    # Must return only requests (not items).
    for r in start_requests:
      yield r
  def spider_opened(self, spider):
    spider.logger.info('Spider opened: %s' % spider.name)

pipelines.py:

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import codecs
import json

class JreduPipeline(object):
  def __init__(self):
    # write one JSON object per line to data.txt
    self.fill = codecs.open("data.txt", encoding="utf-8", mode="w")

  def process_item(self, item, spider):
    line = json.dumps(dict(item), ensure_ascii=False) + "\n"  # keep Chinese titles readable
    self.fill.write(line)
    return item

  def close_spider(self, spider):
    self.fill.close()
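Once the crawl finishes, each line of data.txt holds one JSON object, so the results can be read back like this (a small sketch, assuming data.txt sits in the directory the crawl was run from):

import codecs
import json

with codecs.open("data.txt", encoding="utf-8") as f:
  for line in f:
    record = json.loads(line)      # one headline per line
    print(record['title'], record['href'])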

settings.py:

# -*- coding: utf-8 -*-
# Scrapy settings for jredu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   http://doc.scrapy.org/en/latest/topics/settings.html
#   http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#   http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'jredu'
SPIDER_MODULES = ['jredu.spiders']
NEWSPIDER_MODULE = 'jredu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jredu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'jredu.middlewares.JreduSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'jredu.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
  'jredu.pipelines.JreduPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
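If the site throttles or blocks the default Scrapy user agent, the commented-out settings above can be enabled. A sketch of a gentler configuration (the values below are illustrative assumptions, not part of the original project):

# Gentler crawl settings (illustrative values, not from the original project)
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
DOWNLOAD_DELAY = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 8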

Finally, an entry-point script is needed so the spider can be run directly:

main.py:

#!/usr/bin/python
# -*- coding:utf-8 -*-
# entry point for running the spider
from scrapy import cmdline
cmdline.execute("scrapy crawl tt".split())
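The same crawl can also be started programmatically with Scrapy's CrawlerProcess instead of shelling out through cmdline. A minimal sketch, assuming it is run from the project root so the project settings can be found:

#!/usr/bin/python
# -*- coding:utf-8 -*-
# Alternative entry point using CrawlerProcess (run from the project root)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('tt')   # same spider name as in myspider.py
process.start()       # blocks until the crawl finishes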


I hope this article is helpful to readers working on Python programming.
