Source code of a crawler program written in Python


Posted in Python on February 28, 2016

Writing a crawler is complex, tedious and repetitive work; the issues to consider include crawl efficiency, handling of connection and link failures, and data quality (which depends heavily on how consistently the target site is coded). Below is a crawler program I wrote and tidied up: a single server can run 1 to 8 instances crawling at the same time, and the collected data is then written to the database.
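The listing defines its command-line interface with optparse (see the bottom of the source): -s takes a start and end comm ID and -l switches error logging on or off. To run several instances on one server as described above, one straightforward approach is to split the comm ID range and start one process per slice. The following is only a sketch of that idea and is not part of the original program; the file name TySpider.py on the command line and the sample range are assumptions.

# launch_spiders.py -- sketch only: split a comm ID range across N TySpider.py processes
import subprocess

def launch(start, end, instances=4):
    step = (end - start) // instances
    procs = []
    for i in range(instances):
        s = start + i * step
        e = end if i == instances - 1 else s + step
        # pass each slice to the spider via its own -s/-l options
        procs.append(subprocess.Popen(
            ["python", "TySpider.py", "-s", str(s), str(e), "-l", "on"]))
    for p in procs:
        p.wait()

if __name__ == "__main__":
    launch(10000, 10800)   # sample range only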

#!/usr/local/bin/python
# -*- coding: utf-8 -*-
import sys, time, os, string
import mechanize
import urlparse
from BeautifulSoup import BeautifulSoup
import re
import MySQLdb
import logging
import cgi
from optparse import OptionParser
#----------------------------------------------------------------------------#
# Name:      TySpider.py
# Purpose:   WebSite Spider Module
# Author:    刘天斯
# Email:     liutiansi@gamil.com
# Created:   2010/02/16
# Copyright: (c) 2010
#----------------------------------------------------------------------------#


"""
|--------------------------------------------------------------------------
| 定义 loging class;
|--------------------------------------------------------------------------
|
| 功能:记录系统相关日志信息。
| 
|
"""
class Pubclilog():
  def __init__(self):
    self.logfile = 'website_log.txt'

  def iniLog(self):
    logger = logging.getLogger()
    filehandler = logging.FileHandler(self.logfile)
    streamhandler = logging.StreamHandler()
    fmt = logging.Formatter('%(asctime)s, %(funcName)s, %(message)s')
    logger.setLevel(logging.DEBUG) 
    logger.addHandler(filehandler) 
    logger.addHandler(streamhandler)
    return [logger,filehandler]


"""
|--------------------------------------------------------------------------
| 定义 tySpider class;
|--------------------------------------------------------------------------
|
| 功能:抓取分类、标题等信息
| 
|
"""
class BaseTySpider:

  # Initialize instance members
  def __init__(self,X,log_switch):

    # Database connection
    self.conn = MySQLdb.connect(db='dbname',host='192.168.0.10', user='dbuser',passwd='SDFlkj934y5jsdgfjh435',charset='utf8')

    # Community category and title listing page
    self.CLASS_URL = 'http://test.abc.com/aa/CommTopicsPage?'

    # Post/reply page
    self.Content_URL = 'http://test.bac.com/aa/CommMsgsPage?'

    # Starting comm value
    self.X=X

    # Modulo of the current comm id, used to spread rows evenly across the sharded tables
    self.mod=self.X%5

    # Downloaded HTML of the Community page
    self.body=""

    # BeautifulSoup object for self.body
    self.soup=None

    # Downloaded HTML of the post/reply page
    self.Contentbody=""

    # BeautifulSoup object for self.Contentbody
    self.Contentsoup=None

    # Log switch
    self.log_switch=log_switch
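
    # Note (added; not in the original release): a rough sketch of the table
    # layout implied by the INSERT statements below. Column names and types
    # are assumptions -- only the column order comes from the code. The
    # tables baname0-4, Title0-4 and Content0-4 are the five shards chosen
    # by self.mod above.
    #
    #   class    (cid, name)
    #   subclass (subcid, cid, name)
    #   banameN  (comm_id, subcid, name)
    #   TitleN   (tid, comm_id, title, reply_num, view_num, starred, sticky,
    #             author, last_reply_time)
    #   ContentN (tid, comm_id, is_post, post_time, ip, content)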


  #====================== Method: fetch names and categories ==========================
  def _SpiderClass(self,nextpage=None):
    if nextpage==None:
      FIXED_QUERY = 'cmm='+str(self.X)
    else:
      FIXED_QUERY = nextpage[1:]

    try:
      rd = mechanize.Browser()
      rd.addheaders = [("User-agent", "Tianya/2010 (compatible; MSIE 6.0;Windows NT 5.1)")]
      rd.open(self.CLASS_URL + FIXED_QUERY)
      self.body=rd.response().read()
      #rd=mechanize.Request(self.CLASS_URL + FIXED_QUERY)
      #response = mechanize.urlopen(rd)
      #self.body=response.read()

    except Exception,e:
      if self.log_switch=="on":
        logapp=Pubclilog()
        logger,hdlr = logapp.iniLog()
        logger.info(self.CLASS_URL + FIXED_QUERY+str(e))
        hdlr.flush()
        logger.removeHandler(hdlr)
      # Return regardless of the log switch: the page body could not be fetched
      return
    self.soup = BeautifulSoup(self.body)
    NextPageObj= self.soup("a", {'class' : re.compile("fs-paging-item fs-paging-next")})
    self.cursor = self.conn.cursor()
    if nextpage==None:
      try:
        Ttag=str(self.soup.table)
        #print Ttag

        """
        ------------------ HTML structure being parsed -----------------
        <table cellspacing="0" cellpadding="0">
          <tr>
            <td>
              <h1 title="Dunhill">Dunhill</h1>
            </td>
            <td valign="middle">
              <div class="fs-comm-cat">
                <span class="fs-icons fs-icon-cat"> </span> <a href="TopByCategoryPage?cid=211&ref=commnav-cat">中国</a> » <a href="TopByCategoryPage?cid=211&subcid=273&ref=commnav-cat">人民</a>
              </div>
            </td>
          </tr>
        </table>
        """

        soupTable=BeautifulSoup(Ttag)

        # Locate the first h1 tag
        tableh1 = soupTable("h1")
        #print self.X
        #print "Name:"+tableh1[0].string.strip().encode('utf-8')

        # Handle entries that have no category
        try:
          # Locate the <a> links whose href matches ^TopByCategory; tablea[0] is the first matching link, tablea[1] the second, and so on
          tablea = soupTable("a", {'href' : re.compile("^TopByCategory")})
          if tablea[0].string.strip()=="":
            pass
          #print "BigCLass:"+tablea[0].string.strip().encode('utf-8')
          #print "SubClass:"+tablea[1].string.strip().encode('utf-8')
        except Exception,e:
          if self.log_switch=="on":
            logapp=Pubclilog()
            logger,hdlr = logapp.iniLog()
            logger.info("[noClassInfo]"+str(self.X)+str(e))
            hdlr.flush()
            logger.removeHandler(hdlr)
          self.cursor.execute("insert into baname"+str(self.mod)+" values('%d','%d','%s')" %(self.X,-1,tableh1[0].string.strip().encode('utf-8')))
          self.conn.commit() 
          self._SpiderTitle()
          if NextPageObj:
            NextPageURL=NextPageObj[0]['href']
            self._SpiderClass(NextPageURL)
            return
          else:
            return


        # Get the href of the second matching link
        classlink=tablea[1]['href']
        par_dict=cgi.parse_qs(urlparse.urlparse(classlink).query)
        #print "CID:"+par_dict["cid"][0]
        #print "SubCID:"+par_dict["subcid"][0]
        #print "---------------------------------------"
        
        # Insert into the database
        self.cursor.execute("insert into class values('%d','%s')" %(int(par_dict["cid"][0]),tablea[0].string.strip().encode('utf-8')))
        self.cursor.execute("insert into subclass values('%d','%d','%s')" %(int(par_dict["subcid"][0]),int(par_dict["cid"][0]),tablea[1].string.strip().encode('utf-8'))) 
        self.cursor.execute("insert into baname"+str(self.mod)+" values('%d','%d','%s')" %(self.X,int(par_dict["subcid"][0]),tableh1[0].string.strip().encode('utf-8')))
        self.conn.commit() 
        
        self._SpiderTitle()
        if NextPageObj:
          NextPageURL=NextPageObj[0]['href']
          self._SpiderClass(NextPageURL)

        self.body=None
        self.soup=None
        Ttag=None
        soupTable=None
        tablea=None
        tableh1=None
        classlink=None
        par_dict=None
      except Exception,e:
        if self.log_switch=="on":
          logapp=Pubclilog()
          logger,hdlr = logapp.iniLog()
          logger.info("[ClassInfo]"+str(self.X)+str(e))
          hdlr.flush()
          logger.removeHandler(hdlr)
    else:
      self._SpiderTitle()
      if NextPageObj:
        NextPageURL=NextPageObj[0]['href']
        self._SpiderClass(NextPageURL)




  #==================== Method: fetch titles =========================
  def _SpiderTitle(self):
    # Find the title table element (table)
    soupTitleTable=self.soup("table", {'class' : "fs-topic-list"})
    
    # Find the title row elements (tr)
    TitleTr = soupTitleTable[0]("tr", {'onmouseover' : re.compile("^this\.className='fs-row-hover'")})
    """
    ----------- HTML structure being parsed --------------
    <tr class="fs-alt-row" onmouseover="this.className='fs-row-hover'" onmouseout="this.className='fs-alt-row'">
      <td valign="middle" class="fs-hot-topic-dots-ctn">
        <div class="fs-hot-topic-dots" style="background-position:0 -0px" title="点击量:12"></div>
      </td>
      <td valign="middle" class="fs-topic-name">
        <a href="CommMsgsPage?cmm=16081&tid=2718969307756232842&ref=regulartopics" id="a53" title="【新人报到】欢迎美国人民加入" target="_blank">【新人报到】欢迎美国人民加入</a>
        <span class="fs-meta">
        <span class="fs-icons fs-icon-mini-reply"> </span>0
         /
         <span class="fs-icons fs-icon-pageview"> </span>12</span>
      </td>
      <td valign="middle">
        <a class="fs-tiny-user-avatar umhook " href="ProfilePage?uid=8765915421039908242" title="中国人"><img src="http://img1.sohu.com.cn/aa/images/138/0/P/1/s.jpg" /></a>
      </td>
      <td valign="middle" style="padding-left:4px">
        <a href="Profile?uid=8765915421039908242" id="b53" title="中国人" class="umhook">中国人</a>
      </td>
      <td valign="middle" class="fs-topic-last-mdfy fs-meta">2-14</td>
    </tr>
    """
    for CurrTr in TitleTr:
      try:
        # Initialize the sticky and starred (highlighted) status
        Title_starred='N'
        Title_sticky='N'

        # BeautifulSoup object for the current row
        soupCurrTr=BeautifulSoup(str(CurrTr))

        # BeautifulSoup does not parse this HTML reliably, so the post status is inferred
        # from the number of marker spans; this is somewhat error-prone, e.g. a
        # starred-only post is also treated as sticky.
        TitleStatus=soupCurrTr("span", {'title' : ""})
        TitlePhotoViewer=soupCurrTr("a", {'href' : re.compile("^PhotoViewer")})
        if TitlePhotoViewer.__len__()==1:
          TitlePhotoViewerBool=0
        else:
          TitlePhotoViewerBool=1
        if TitleStatus.__len__()==3-TitlePhotoViewerBool:
          Title_starred='Y'
          Title_sticky='Y'
        elif TitleStatus.__len__()==2-TitlePhotoViewerBool:
          Title_sticky='Y'

        # Post title
        Title=soupCurrTr.a.next.strip()

        # Post ID
        par_dict=cgi.parse_qs(urlparse.urlparse(soupCurrTr.a['href']).query)

        # Reply count and view count
        TitleNum=soupCurrTr("td", {'class' : "fs-topic-name"})
        TitleArray=string.split(str(TitleNum[0]),'\n')
        Title_ReplyNum=string.split(TitleArray[len(TitleArray)-4],'>')[2]
        Title_ViewNum=string.split(TitleArray[len(TitleArray)-2],'>')[2][:-6]

        # Post author
        TitleAuthorObj=soupCurrTr("td", {'style' : "padding-left:4px"})
        Title_Author=TitleAuthorObj[0].next.next.next.string.strip().encode('utf-8')


        # Last reply time
        TitleTime=soupCurrTr("td", {'class' : re.compile("^fs-topic-last-mdfy fs-meta")})

        """
        print "X:"+str(self.X)
        print "Title_starred:"+Title_starred
        print "Title_sticky:"+Title_sticky
        print "Title:"+Title

        # URL of the post content page
        print "Title_link:"+soupCurrTr.a['href']
        print "CID:"+par_dict["tid"][0]
        print "Title_ReplyNum:"+Title_ReplyNum
        print "Title_ViewNum:"+Title_ViewNum
        print "Title_Author:"+Title_Author
        print "TitleTime:"+TitleTime[0].string.strip().encode('utf-8')
        """

        # Insert into the database
        self.cursor.execute("insert into Title"+str(self.mod)+" values('%s','%d','%s','%d','%d','%s','%s','%s','%s')" %(par_dict["tid"][0], \
                                     self.X,Title,int(Title_ReplyNum),int(Title_ViewNum),Title_starred,Title_sticky, \
                                     Title_Author.decode('utf-8'),TitleTime[0].string.strip().encode('utf-8')))
        self.conn.commit()
        self._SpiderContent(par_dict["tid"][0])
      except Exception,e:
        if self.log_switch=="on":
          logapp=Pubclilog()
          logger,hdlr = logapp.iniLog()
          logger.info("[Title]"+str(self.X)+'-'+par_dict["tid"][0]+'-'+str(e))
          hdlr.flush()
          logger.removeHandler(hdlr)


  #====================== Method: fetch posts and replies =======================
  def _SpiderContent(self,ID,nextpage=None):
    if nextpage==None:
      FIXED_QUERY = 'cmm='+str(self.X)+'&tid='+ID+'&ref=regulartopics'
    else:
      FIXED_QUERY = nextpage[9:]

    rd = mechanize.Browser()
    rd.addheaders = [("User-agent", "Tianya/2010 (compatible; MSIE 6.0;Windows NT 5.1)")]
    rd.open(self.Content_URL + FIXED_QUERY)
    self.Contentbody=rd.response().read()

    #rd=mechanize.Request(self.Content_URL + FIXED_QUERY)
    #response = mechanize.urlopen(rd)
    #self.Contentbody=response.read()

    self.Contentsoup = BeautifulSoup(self.Contentbody)
    NextPageObj= self.Contentsoup("a", {'class' : re.compile("fs-paging-item fs-paging-next")})
    try:
      Tdiv=self.Contentsoup("div", {'class' : "fs-user-action"})
      i=0
      for Currdiv in Tdiv:
        if i==0:
          Ctype='Y'
        else:
          Ctype='N'
        # Post time
        soupCurrdiv=BeautifulSoup(str(Currdiv))
        PosttimeObj=soupCurrdiv("span", {'class' : "fs-meta"})
        Posttime=PosttimeObj[0].next[1:]
        Posttime=Posttime[0:-3]
        # IP address
        IPObj=soupCurrdiv("a", {'href' : re.compile("CommMsgAddress")})
        if IPObj:
          IP=IPObj[0].next.strip()
        else:
          IP=''


        # Post/reply content
        ContentObj=soupCurrdiv("div", {'class' :"fs-user-action-body"})
        Content=ContentObj[0].renderContents().strip()
        """
        print "ID:"+str(self.X)
        print "ID:"+ID
        print "Ctype:"+Ctype
        print "POSTTIME:"+Posttime
        print "IP:"+IP
        print "Content:"+Content
        """

        self.cursor.execute("insert into Content"+str(self.mod)+" values('%s','%d','%s','%s','%s','%s')" %(ID,self.X,Ctype,Posttime,IP,Content.decode('utf-8')))
        self.conn.commit() 
        i+=1

    except Exception,e:
      if self.log_switch=="on":
        logapp=Pubclilog()
        logger,hdlr = logapp.iniLog()
        logger.info("[Content]"+str(self.X)+'-'+ID+'-'+str(e))
        hdlr.flush()
        logger.removeHandler(hdlr)

    # If there is a "next page" link, continue crawling
    if NextPageObj:
      NextPageURL=NextPageObj[0]['href']
      self._SpiderContent(ID,NextPageURL)


  def __del__(self):
    try:
      self.cursor.close()
      self.conn.close()
    except Exception,e:
      pass

# Iterate over the comm range
def initapp(StartValue,EndValue,log_switch):
  for x in range(StartValue,EndValue): 
    app=BaseTySpider(x,log_switch)
    app._SpiderClass()
    app=None

if __name__ == "__main__":
  
  # Define command-line options
  MSG_USAGE = "TySpider.py [ -s StartNumber EndNumber ] -l [on|off] [-v][-h]"
  parser = OptionParser(MSG_USAGE)
  parser.add_option("-s", "--set", nargs=2, action="store",
      dest="comm_value",
      type="int",
      default=False,
      help="Range of comm ID values to crawl.")
  parser.add_option("-l", "--log", action="store",
      dest="log_switch",
      type="string",
      default="on",
      help="Error log switch (on/off).")
  parser.add_option("-v", "--version", action="store_true", dest="verbose",
      help="Show version information.")
  opts, args = parser.parse_args()

  if opts.comm_value:
    if opts.comm_value[0]>opts.comm_value[1]:
      print "The end value is smaller than the start value?"
      exit()
    if opts.log_switch=="on":
      log_switch="on"
    else:
      log_switch="off"
    initapp(opts.comm_value[0],opts.comm_value[1],log_switch)
    exit()

  if opts.verbose:
    print "WebSite Spider V1.0 beta."
    exit()
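
For reference, typical invocations based on the optparse options above (the comm ID range is illustrative only; the script needs Python 2 with mechanize, BeautifulSoup 3 and MySQLdb installed):

  python TySpider.py -s 10000 10800 -l on     # crawl comm IDs 10000..10799 with error logging enabled
  python TySpider.py -s 10000 10800 -l off    # same range, error logging disabled
  python TySpider.py -v                       # print the version string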