Python Scrapy Tutorial in Detail: Storing NBA Player Data in a MySQL Database


Posted in Python on January 24, 2021

Get the URL to crawl

[Screenshots: locating the player list JSON endpoint in the browser developer tools]
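Before writing any Scrapy code, it helps to confirm that the endpoint really returns JSON in the expected shape. A minimal sketch (assuming the third-party requests package is installed and the endpoint is still live; the keys match those used in the spider below):

import json
import requests

# Fetch the player list endpoint directly and peek at the structure
resp = requests.get('https://china.nba.com/static/data/league/playerlist.json')
players = json.loads(resp.text)['payload']['players']
print(len(players), "players in the list")
p = players[0]['playerProfile']
print(p['firstNameEn'], p['lastNameEn'])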

Preliminary setup

[Screenshot: Scrapy project setup]
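The screenshot presumably covers creating the project. Using the project and spider names that appear in the code below, the standard Scrapy commands would be:

scrapy startproject nbaProject
cd nbaProject
scrapy genspider nbaSpider nba.com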

Open the project in PyCharm and start writing the spider files

The fields file: items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class NbaprojectItem(scrapy.Item):
  # define the fields for your item here like:
  # name = scrapy.Field()
  # Every field follows the same fixed pattern --> scrapy.Field()
  # English name
  engName = scrapy.Field()
  # Chinese name
  chName = scrapy.Field()
  # Height
  height = scrapy.Field()
  # Weight
  weight = scrapy.Field()
  # Country name in English
  contryEn = scrapy.Field()
  # Country name in Chinese
  contryCh = scrapy.Field()
  # Years in the NBA
  experience = scrapy.Field()
  # Jersey number
  jerseyNo = scrapy.Field()
  # Draft year
  draftYear = scrapy.Field()
  # Team name in English
  engTeam = scrapy.Field()
  # Team name in Chinese
  chTeam = scrapy.Field()
  # Position
  position = scrapy.Field()
  # Conference (Eastern/Western)
  displayConference = scrapy.Field()
  # Division
  division = scrapy.Field()
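Worth noting: a scrapy.Item subclass behaves like a dict restricted to its declared fields, which is why the spider below can fill it with item['field'] = value assignments (assigning to an undeclared field raises KeyError). A quick illustration, with a hypothetical value:

item = NbaprojectItem()
item['engName'] = 'LeBronJames'  # hypothetical value
print(item['engName'])           # -> LeBronJames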

The spider file

import scrapy
import json
from nbaProject.items import NbaprojectItem

class NbaspiderSpider(scrapy.Spider):
  name = 'nbaSpider'
  allowed_domains = ['nba.com']
  # Initial URL(s) to crawl; more than one can be listed
  # start_urls = ['http://nba.com/']
  start_urls = ['https://china.nba.com/static/data/league/playerlist.json']
  # Handle the response for the URL above
  def parse(self, response):
    # The site returns JSON, so first parse it with the json module
    data = json.loads(response.text)['payload']['players']
    # Counter for progress output
    count = 1
    for i in data:

      # Create an item object to hold the data; this is the NbaprojectItem imported above
      item = NbaprojectItem()
      item['engName'] = str(i['playerProfile']['firstNameEn'] + i['playerProfile']['lastNameEn'])
      item['chName'] = str(i['playerProfile']['firstName'] + i['playerProfile']['lastName'])
      item['contryEn'] = str(i['playerProfile']['countryEn'])
      item['contryCh'] = str(i['playerProfile']['country'])
      item['height'] = str(i['playerProfile']['height'])
      item['weight'] = str(i['playerProfile']['weight'])
      item['experience'] = str(i['playerProfile']['experience'])
      item['jerseyNo'] = str(i['playerProfile']['jerseyNo'])
      item['draftYear'] = str(i['playerProfile']['draftYear'])
      item['engTeam'] = str(i['teamProfile']['code'])
      item['chTeam'] = str(i['teamProfile']['displayAbbr'])
      item['position'] = str(i['playerProfile']['position'])
      item['displayConference'] = str(i['teamProfile']['displayConference'])
      item['division'] = str(i['teamProfile']['division'])
      # Print crawl progress
      print("Yielded item", count)
      count += 1
      # Hand the item back to the engine -> the pipeline
      yield item
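One small note: on Scrapy 2.2 and later the response object has a json() helper, so the first line of parse() could equally be written as follows; json.loads(response.text) as used above works on any version.

data = response.json()['payload']['players']  # requires Scrapy >= 2.2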

The settings file -> enabling the pipeline

[Screenshots: edits to settings.py]

# Scrapy settings for nbaProject project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   https://docs.scrapy.org/en/latest/topics/settings.html
#   https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#   https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# ---------- unchanged section ---------
BOT_NAME = 'nbaProject'

SPIDER_MODULES = ['nbaProject.spiders']
NEWSPIDER_MODULE = 'nbaProject.spiders'
# ---------- unchanged section ---------

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'nbaProject (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ---------- modified section (look up what ROBOTSTXT_OBEY does if curious) ---------
# ROBOTSTXT_OBEY = True   (commented out so the crawler does not obey robots.txt)
# ---------- modified section ---------

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'nbaProject.middlewares.NbaprojectSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'nbaProject.middlewares.NbaprojectDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Enable the item pipeline
# ---------- modified section ---------
ITEM_PIPELINES = {
  'nbaProject.pipelines.NbaprojectPipeline': 300,
}
# ---------- modified section ---------
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The pipeline file -> writing the fields into MySQL

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter

import pymysql
class NbaprojectPipeline:
  # Initializer: runs once when the pipeline is created
  def __init__(self):
    # Connect to the database -- replace the connection details with your own
    self.connect = pymysql.connect(host='your-host', user='your-user', passwd='your-password',
                    db='your-database', port=3306)  # use your MySQL port (3306 is the default)
    # Get a cursor
    self.cursor = self.connect.cursor()
    # Create a table to hold the item fields
    createTableSql = """
              create table if not exists `nbaPlayer`(
              playerId INT UNSIGNED AUTO_INCREMENT,
              engName varchar(80),
              chName varchar(20),
              height varchar(20),
              weight varchar(20),
              contryEn varchar(50),
              contryCh varchar(20),
              experience int,
              jerseyNo int,
              draftYear int,
              engTeam varchar(50),
              chTeam varchar(50),
              position varchar(50),
              displayConference varchar(50),
              division varchar(50),
              primary key(playerId)
              )charset=utf8;
              """
    # Execute the SQL statement
    self.cursor.execute(createTableSql)
    self.connect.commit()
    print("Table created (or it already exists)")

  # Every item yielded by the spider is processed here
  def process_item(self, item, spider):
    # Print the item for visibility
    print(item)
    # SQL insert statement
    insert_sql = """
    insert into nbaPlayer(
      playerId, engName, chName, height,
      weight, contryEn, contryCh, experience,
      jerseyNo, draftYear, engTeam, chTeam,
      position, displayConference, division
    ) VALUES (null,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
    """
    # Execute the insert (the item values replace the %s placeholders)
    self.cursor.execute(insert_sql, (item['engName'], item['chName'], item['height'], item['weight'],
                     item['contryEn'], item['contryCh'], item['experience'], item['jerseyNo'],
                     item['draftYear'], item['engTeam'], item['chTeam'], item['position'],
                     item['displayConference'], item['division']))
    # Commit -- without this nothing is saved to the database
    self.connect.commit()
    print("Row committed successfully!")
    return item

  # Close the cursor and connection when the spider finishes
  def close_spider(self, spider):
    self.cursor.close()
    self.connect.close()

Start the spider

[Screenshot: launching the spider]
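From the project root the spider can be started on the command line with `scrapy crawl nbaSpider`, or launched from inside PyCharm via a small run script (a sketch; the file name run.py is an assumption, not from the original):

# run.py -- launch the spider programmatically so it can be run from PyCharm
from scrapy import cmdline

cmdline.execute('scrapy crawl nbaSpider'.split())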

Data scrolling across the screen

[Screenshot: spider output scrolling in the console]

Check the data in the database

[Screenshot: the nbaPlayer table in MySQL]
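If no GUI client is handy, the table can also be checked with a few lines of pymysql (a sketch, assuming the same connection details as the pipeline):

import pymysql

# Reuse the pipeline's connection details
connect = pymysql.connect(host='your-host', user='your-user', passwd='your-password',
                          db='your-database', port=3306)
cursor = connect.cursor()
cursor.execute("select chName, engTeam, jerseyNo from nbaPlayer limit 5")
for row in cursor.fetchall():
  print(row)
connect.close()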

And just like that, the player data has been scraped!

That concludes this tutorial on crawling NBA player data into MySQL with Scrapy. For more on Scrapy and MySQL, please search 三水点靠木's earlier articles, and thank you for your continued support!
