A Python Script to Batch-Inflate cnblogs (博客园) Page Views, Explained


Posted in Python on August 30, 2019

I was bored this morning... woke up at 7 and suddenly felt like writing a view-count booster, so let's get to it.

This is for testing only; I don't recommend actually inflating view counts.

The idea is simple: step one, collect proxy IPs; step two, simulate visits through them.

Collecting HTTP proxy IPs

There are plenty of paid proxy services and free proxy IP sites online.

For example:

[Screenshot: a free proxy listing site showing IP addresses and ports in a table]

Whichever site you pick, all we need to do is scrape the IPs and port numbers from it and collect them in one place.

Each site has to be scraped according to its own structure. On the site above, for example, the IP and port sit in td tags:

[Screenshot: the page source, with the IP and port inside td tags marked by data-title attributes]

bs4 (BeautifulSoup) handles the scraping here. The script:

## Collect proxy IPs from the free proxy site, page by page
def Get_proxy_ip():
  print("========== Batch proxy scraping to boost cnblogs views  By 卿 ==========")
  print("     Blog: https://www.cnblogs.com/-qing-/")
  print("     Started!")
  global proxy_list
  proxy_list = []
  headers = {
      "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
      "Accept-Encoding":"gzip, deflate, sdch, br",
      "Accept-Language":"zh-CN,zh;q=0.8",
      "Cache-Control":"max-age=0",
      "Connection":"keep-alive",
      "Cookie":"channelid=0; sid=1561681200472193; _ga=GA1.2.762166746.1561681203; _gid=GA1.2.971407760.1561681203; _gat=1; Hm_lvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203; Hm_lpvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203",
      "Host":"www.kuaidaili.com",
      "Upgrade-Insecure-Requests":"1",
      "User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0",
      "Referrer Policy":"no-referrer-when-downgrade",
      }
  for i in range(1, 100):
    url = "https://www.kuaidaili.com/free/inha/" + str(i)  # one listing page per iteration
    html = requests.get(url=url, headers=headers).content
    soup = BeautifulSoup(html, 'html.parser')
    ip_addr = ''
    port = ''
    for td in soup.find_all('td'):
      title = td.get('data-title') or ''  # some td cells have no data-title attribute
      if "IP" in title:
        ip_addr = td.get_text()           # the IP address
      if "PORT" in title:
        port = td.get_text()              # the port
      if ip_addr != '' and port != '':
        proxy_list.append(ip_addr + ":" + port)
        ip_addr = ''
        port = ''
    iv_main()        # visit the article through every proxy scraped from this page
    time.sleep(2)
    proxy_list = []  # clear the list before the next page

This collects the scraped IPs and ports into proxy_list.
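To see the extraction pattern in isolation, here is a minimal sketch run against a hard-coded HTML fragment; the markup is an assumption modelled on the data-title layout described above, not copied from the live page.

from bs4 import BeautifulSoup

# Hypothetical fragment mimicking the proxy table's markup (data-title on each td).
html = """
<table>
  <tr><td data-title="IP">1.2.3.4</td><td data-title="PORT">8080</td></tr>
  <tr><td data-title="IP">5.6.7.8</td><td data-title="PORT">3128</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
proxies = []
for row in soup.find_all("tr"):
    ip = row.find("td", attrs={"data-title": "IP"})
    port = row.find("td", attrs={"data-title": "PORT"})
    if ip and port:
        proxies.append(ip.get_text() + ":" + port.get_text())
print(proxies)  # ['1.2.3.4:8080', '5.6.7.8:3128']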

Simulating visits to the cnblogs article

This part is simple: iterate over the proxy list built above and use the requests module to fetch the article through each proxy.
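For reference, requests takes its proxy settings as a dict keyed by lowercase URL scheme, with each proxy written as a full URL; since the cnblogs article is served over HTTPS, the 'https' entry is the one that actually gets used. A minimal sketch (the address below is a placeholder):

import requests

proxy_ip = "1.2.3.4:8080"  # placeholder address for illustration
proxies = {
    "http": "http://" + proxy_ip,   # used when the target URL is plain HTTP
    "https": "http://" + proxy_ip,  # used for HTTPS targets such as the cnblogs article
}
r = requests.get("https://www.cnblogs.com/", proxies=proxies, timeout=10)
print(r.status_code)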

def iv_main():
  requests.packages.urllib3.disable_warnings()
  url = 'https://www.cnblogs.com/-qing-/p/11080845.html'  # the article whose view count we bump
  for proxy_ip in proxy_list:
    headers2 = {
      'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
      'accept-encoding':'gzip, deflate, sdch, br',
      'accept-language':'zh-CN,zh;q=0.8',
      'cache-control':'max-age=0',
      'cookie':'__gads=ID=8c6fd85d91262bb1:T=1561554219:S=ALNI_MZwz0CMKQJK-L19DrX5DPDtYvp63Q; _gat=1; _ga=GA1.2.359634670.1561535095; _gid=GA1.2.1087331661.1561535095',
      'if-modified-since':'Fri, 28 Jun 2019 02:10:23 GMT',
      'referer':'https://www.cnblogs.com/',
      'upgrade-insecure-requests':'1',
      'user-agent':random.choice(user_agent_list),  # random User-Agent for every request
      }
    # requests expects lowercase scheme keys; the article is HTTPS, so the 'https' entry is the one used
    proxies = {'http': 'http://' + proxy_ip, 'https': 'http://' + proxy_ip}
    try:
      r = requests.get(url, headers=headers2, proxies=proxies, verify=False, timeout=10)  # verify=False: skip SSL certificate checks
      print("[*] " + proxy_ip + " visit succeeded!")
    except requests.RequestException:
      print("[-] " + proxy_ip + " visit failed!")

It's best to send a random User-Agent header with each request:

user_agent_list = [
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
           "Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3",
  "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
  "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
  "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
  "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
  "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
  "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
  "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
  "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0",
  "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
]

Putting it all together

As a small optimization, you can run the work on several threads (though with Python's GIL this doesn't buy much).

The final assembled code:

# -*- coding:utf-8 -*-
# By 卿
# Blog: https://www.cnblogs.com/-qing-/

import requests
from bs4 import BeautifulSoup
import time
import random
import threading

print("========== Batch proxy scraping to boost cnblogs views  By 卿 ==========")
print("     Blog: https://www.cnblogs.com/-qing-/")
print("     Started!")

user_agent_list = [
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
           "Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3",
  "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
  "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
  "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
  "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
  "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
  "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
  "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
  "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
  "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
]

## Visit the target article through every proxy currently in proxy_list
def iv_main():
  requests.packages.urllib3.disable_warnings()
  url = 'https://www.cnblogs.com/-qing-/p/11080845.html'  # the article whose view count we bump
  for proxy_ip in proxy_list:
    headers2 = {
      'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
      'accept-encoding':'gzip, deflate, sdch, br',
      'accept-language':'zh-CN,zh;q=0.8',
      'cache-control':'max-age=0',
      'cookie':'__gads=ID=8c6fd85d91262bb1:T=1561554219:S=ALNI_MZwz0CMKQJK-L19DrX5DPDtYvp63Q; _gat=1; _ga=GA1.2.359634670.1561535095; _gid=GA1.2.1087331661.1561535095',
      'if-modified-since':'Fri, 28 Jun 2019 02:10:23 GMT',
      'referer':'https://www.cnblogs.com/',
      'upgrade-insecure-requests':'1',
      'user-agent':random.choice(user_agent_list),  # random User-Agent for every request
      }
    # requests expects lowercase scheme keys; the article is HTTPS, so the 'https' entry is the one used
    proxies = {'http': 'http://' + proxy_ip, 'https': 'http://' + proxy_ip}
    try:
      r = requests.get(url, headers=headers2, proxies=proxies, verify=False, timeout=10)  # verify=False: skip SSL certificate checks
      print("[*] " + proxy_ip + " visit succeeded!")
    except requests.RequestException:
      print("[-] " + proxy_ip + " visit failed!")

## Collect proxy IPs from the free proxy site, page by page
def Get_proxy_ip():
  global proxy_list
  proxy_list = []
  headers = {
      "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
      "Accept-Encoding":"gzip, deflate, sdch, br",
      "Accept-Language":"zh-CN,zh;q=0.8",
      "Cache-Control":"max-age=0",
      "Connection":"keep-alive",
      "Cookie":"channelid=0; sid=1561681200472193; _ga=GA1.2.762166746.1561681203; _gid=GA1.2.971407760.1561681203; _gat=1; Hm_lvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203; Hm_lpvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203",
      "Host":"www.kuaidaili.com",
      "Upgrade-Insecure-Requests":"1",
      "User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0",
      "Referrer Policy":"no-referrer-when-downgrade",
      }
  for i in range(1, 100):
    url = "https://www.kuaidaili.com/free/inha/" + str(i)  # one listing page per iteration
    html = requests.get(url=url, headers=headers).content
    soup = BeautifulSoup(html, 'html.parser')
    ip_addr = ''
    port = ''
    for td in soup.find_all('td'):
      title = td.get('data-title') or ''  # some td cells have no data-title attribute
      if "IP" in title:
        ip_addr = td.get_text()           # the IP address
      if "PORT" in title:
        port = td.get_text()              # the port
      if ip_addr != '' and port != '':
        proxy_list.append(ip_addr + ":" + port)
        ip_addr = ''
        port = ''
    iv_main()        # visit the article through every proxy scraped from this page
    time.sleep(2)
    proxy_list = []  # clear the list before the next page

# Run several workers in parallel (they all share the global proxy_list)
th = []
th_num = 10
for x in range(th_num):
    t = threading.Thread(target=Get_proxy_ip)
    th.append(t)
for x in range(th_num):
    th[x].start()
for x in range(th_num):
    th[x].join()
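If you would rather use the standard-library thread pool than hand-rolled threads, a minimal alternative sketch could look like the following; it reuses the Get_proxy_ip() defined above and, like the original, lets the workers share the global proxy_list. Since the work is mostly waiting on the network, the threads can still overlap the I/O despite the GIL.

from concurrent.futures import ThreadPoolExecutor

if __name__ == "__main__":
    # Ten workers, each running the same scrape-then-visit routine as above.
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [pool.submit(Get_proxy_ip) for _ in range(10)]
        for f in futures:
            f.result()  # surface any exception raised inside a worker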

Result

[Screenshot: console output from the script, listing which proxies succeeded or failed]

That's all for this article; I hope it helps with your learning.
