A review of the httpx request library and the parsel parsing library for Python web scraping


Posted in Python on May 10, 2021

Two of the newest and hottest tools in the Python web-scraping world are httpx and parsel. httpx bills itself as the next-generation HTTP request library: it supports everything the requests library can do and can also send asynchronous requests, which makes writing async crawlers much easier. parsel was originally bundled with the well-known Scrapy crawling framework and was later split out into a standalone module; it supports XPath selectors, CSS selectors, regular expressions and other extraction methods, and is said to parse more efficiently than BeautifulSoup.
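To get a feel for parsel before the benchmark, here is a minimal sketch of its three extraction styles; the HTML snippet and variable names are made up purely for illustration:

 # a quick taste of parsel: CSS selectors, XPath selectors and regular expressions
 from parsel import Selector

 html = '<div class="title"><a href="/item/1">House, 650万</a></div>'   # toy example
 sel = Selector(text=html)

 sel.css('div.title a::text').get()                 # CSS selector   -> 'House, 650万'
 sel.xpath('//div[@class="title"]/a/@href').get()   # XPath selector -> '/item/1'
 sel.css('div.title a::text').re_first(r'\d+')      # regex          -> '650'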

Today we will put these two libraries to the test by scraping the second-hand homes currently listed for sale on Lianjia (链家). To save time, we only crawl listings in Shanghai's Pudong New Area priced between 5 and 8 million yuan.

The requests + BeautifulSoup combination

First up is the requests + BeautifulSoup combination, which is also what most people use when they first learn Python web scraping. The crawler's entry URL is https://sh.lianjia.com/ershoufang/pudong/a3p5/. We first send a request to get the maximum page number, then loop over the pages, sending a request for each one and parsing out the information we want (complex name, floor, orientation, total price, unit price, and so on), and finally export everything to a CSV file. If you are reading this article you presumably already know the basics of Python scraping, so we will not explain every line of code.

The full project code is shown below:

# homelink_requests.py
# Author: 大江狗
 from fake_useragent import UserAgent
 import requests
 from bs4 import BeautifulSoup
 import csv
 import re
 import time


 class HomeLinkSpider(object):
     def __init__(self):
         self.ua = UserAgent()
         self.headers = {"User-Agent": self.ua.random}
         self.data = list()
         self.path = "浦东_三房_500_800万.csv"
         self.url = "https://sh.lianjia.com/ershoufang/pudong/a3p5/"

     def get_max_page(self):
         response = requests.get(self.url, headers=self.headers)
         if response.status_code == 200:
             soup = BeautifulSoup(response.text, 'html.parser')
             a = soup.select('div[class="page-box house-lst-page-box"]')
             # use eval to turn the page-data string into a dict
             max_page = eval(a[0].attrs["page-data"])["totalPage"] 
             return max_page
         else:
             print("请求失败 status:{}".format(response.status_code))
             return None

     def parse_page(self):
         max_page = self.get_max_page()
         for i in range(1, max_page + 1):
             url = 'https://sh.lianjia.com/ershoufang/pudong/pg{}a3p5/'.format(i)
             response = requests.get(url, headers=self.headers)
             soup = BeautifulSoup(response.text, 'html.parser')
             ul = soup.find_all("ul", class_="sellListContent")
             li_list = ul[0].select("li")
             for li in li_list:
                 detail = dict()
                 detail['title'] = li.select('div[class="title"]')[0].get_text()

                 #  2室1厅 | 74.14平米 | 南 | 精装 | 高楼层(共6层) | 1999年建 | 板楼
                 house_info = li.select('div[class="houseInfo"]')[0].get_text()
                 house_info_list = house_info.split(" | ")

                 detail['bedroom'] = house_info_list[0]
                 detail['area'] = house_info_list[1]
                 detail['direction'] = house_info_list[2]

                 floor_pattern = re.compile(r'\d{1,2}')
                 # search anywhere in the string
                 match1 = re.search(floor_pattern, house_info_list[4])  
                 if match1:
                     detail['floor'] = match1.group()
                 else:
                     detail['floor'] = "未知"

                 # match the build year
                 year_pattern = re.compile(r'\d{4}')
                 match2 = re.search(year_pattern, house_info_list[5])
                 if match2:
                     detail['year'] = match2.group()
                 else:
                     detail['year'] = "未知"

                 # e.g. 文兰小区 - 塘桥: extract the complex name and the area it is in
                 position_info = li.select('div[class="positionInfo"]')[0].get_text().split(' - ')
                 detail['house'] = position_info[0]
                 detail['location'] = position_info[1]

                 # e.g. 650万: extract 650
                 price_pattern = re.compile(r'\d+')
                 total_price = li.select('div[class="totalPrice"]')[0].get_text()
                 detail['total_price'] = re.search(price_pattern, total_price).group()

                 # e.g. 单价64182元/平米: extract 64182
                 unit_price = li.select('div[class="unitPrice"]')[0].get_text()
                 detail['unit_price'] = re.search(price_pattern, unit_price).group()
                 self.data.append(detail)

     def write_csv_file(self):
         head = ["标题", "小区", "房厅", "面积", "朝向", "楼层", "年份",
         "位置", "总价(万)", "单价(元/平方米)"]
         keys = ["title", "house", "bedroom", "area", "direction",
         "floor", "year", "location",
                 "total_price", "unit_price"]

         try:
             with open(self.path, 'w', newline='', encoding='utf_8_sig') as csv_file:
                 writer = csv.writer(csv_file, dialect='excel')
                 if head is not None:
                     writer.writerow(head)
                 for item in self.data:
                     row_data = []
                     for k in keys:
                         row_data.append(item[k])
                         # print(row_data)
                     writer.writerow(row_data)
                 print("Write a CSV file to path %s Successful." % self.path)
         except Exception as e:
             print("Fail to write CSV to path: %s, Case: %s" % (self.path, e))

 if __name__ == '__main__':
     start = time.time()
     home_link_spider = HomeLinkSpider()
     home_link_spider.parse_page()
     home_link_spider.write_csv_file()
     end = time.time()
     print("耗时:{}秒".format(end-start))

Note: we use fake_useragent, requests and BeautifulSoup here, and all of them must be installed with pip before the code will run.
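If they are missing, a single pip command (these are the PyPI package names) takes care of it:

 pip install fake-useragent requests beautifulsoup4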

Now let's look at the result: the crawl took about 18.5 seconds and collected 580 listings in total.


The requests + parsel combination

This time we again use requests to fetch the target pages, but parse them with the parsel library (install it with pip first). parsel is used much like BeautifulSoup: you create an instance first and then extract DOM elements and data with various selectors, though the syntax differs slightly. BeautifulSoup has its own query syntax, whereas parsel supports standard CSS and XPath selectors and returns text or attribute values through its get and getall methods, which makes it more convenient to use.

# BeautifulSoup usage
 from bs4 import BeautifulSoup

 soup = BeautifulSoup(response.text, 'html.parser')
 ul = soup.find_all("ul", class_="sellListContent")[0]

 # parsel usage: the Selector class
 from parsel import Selector
 selector = Selector(response.text)
 ul = selector.css('ul.sellListContent')[0]

 # parsel examples: getting a text value or an attribute value
 selector.css('div.title span::text').get()
 selector.css('ul li a::attr(href)').get()
 for li in selector.css('ul > li'):
     print(li.xpath('.//@href').get())

Note: older versions of parsel used the extract() and extract_first() methods to get text or attribute values; in newer versions these have been replaced by get() and getall().
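The mapping is one-to-one, and in current parsel releases the old names still work as aliases, so both styles below return the same results:

 selector.css('div.title span::text').extract_first()   # old name, same as .get()
 selector.css('div.title span::text').get()             # new name
 selector.css('ul li a::attr(href)').extract()          # old name, same as .getall()
 selector.css('ul li a::attr(href)').getall()           # new name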

The full code is shown below:

# homelink_parsel.py
 # Author: 大江狗
 from fake_useragent import UserAgent
 import requests
 import csv
 import re
 import time
 from parsel import Selector

 class HomeLinkSpider(object):
     def __init__(self):
         self.ua = UserAgent()
         self.headers = {"User-Agent": self.ua.random}
         self.data = list()
         self.path = "浦东_三房_500_800万.csv"
         self.url = "https://sh.lianjia.com/ershoufang/pudong/a3p5/"

     def get_max_page(self):
         response = requests.get(self.url, headers=self.headers)
         if response.status_code == 200:
             # create a Selector instance
             selector = Selector(response.text)
             # use a CSS selector to get the div that holds the page data
             a = selector.css('div[class="page-box house-lst-page-box"]')
             # use eval to turn the page-data JSON string into a dict
             max_page = eval(a[0].xpath('//@page-data').get())["totalPage"]
             print("最大页码数:{}".format(max_page))
             return max_page
         else:
             print("请求失败 status:{}".format(response.status_code))
             return None

     def parse_page(self):
         max_page = self.get_max_page()
         for i in range(1, max_page + 1):
             url = 'https://sh.lianjia.com/ershoufang/pudong/pg{}a3p5/'.format(i)
             response = requests.get(url, headers=self.headers)
             selector = Selector(response.text)
             ul = selector.css('ul.sellListContent')[0]
             li_list = ul.css('li')
             for li in li_list:
                 detail = dict()
                 detail['title'] = li.css('div.title a::text').get()

                 #  2室1厅 | 74.14平米 | 南 | 精装 | 高楼层(共6层) | 1999年建 | 板楼
                 house_info = li.css('div.houseInfo::text').get()
                 house_info_list = house_info.split(" | ")

                 detail['bedroom'] = house_info_list[0]
                 detail['area'] = house_info_list[1]
                 detail['direction'] = house_info_list[2]

                 floor_pattern = re.compile(r'\d{1,2}')
                 match1 = re.search(floor_pattern, house_info_list[4])  # search anywhere in the string
                 if match1:
                     detail['floor'] = match1.group()
                 else:
                     detail['floor'] = "未知"

                 # match the build year
                 year_pattern = re.compile(r'\d{4}')
                 match2 = re.search(year_pattern, house_info_list[5])
                 if match2:
                     detail['year'] = match2.group()
                 else:
                     detail['year'] = "未知"

                 # e.g. 文兰小区 - 塘桥: extract the complex name and the area it is in
                 position_info = li.css('div.positionInfo a::text').getall()
                 detail['house'] = position_info[0]
                 detail['location'] = position_info[1]

                 # e.g. 650万: extract 650
                 price_pattern = re.compile(r'\d+')
                 total_price = li.css('div.totalPrice span::text').get()
                 detail['total_price'] = re.search(price_pattern, total_price).group()

                 # e.g. 单价64182元/平米: extract 64182
                 unit_price = li.css('div.unitPrice span::text').get()
                 detail['unit_price'] = re.search(price_pattern, unit_price).group()
                 self.data.append(detail)

     def write_csv_file(self):

         head = ["标题", "小区", "房厅", "面积", "朝向", "楼层", 
                 "年份", "位置", "总价(万)", "单价(元/平方米)"]
         keys = ["title", "house", "bedroom", "area", 
                 "direction", "floor", "year", "location",
                 "total_price", "unit_price"]

         try:
             with open(self.path, 'w', newline='', encoding='utf_8_sig') as csv_file:
                 writer = csv.writer(csv_file, dialect='excel')
                 if head is not None:
                     writer.writerow(head)
                 for item in self.data:
                     row_data = []
                     for k in keys:
                         row_data.append(item[k])
                         # print(row_data)
                     writer.writerow(row_data)
                 print("Write a CSV file to path %s Successful." % self.path)
         except Exception as e:
             print("Fail to write CSV to path: %s, Case: %s" % (self.path, e))


 if __name__ == '__main__':
     start = time.time()
     home_link_spider = HomeLinkSpider()
     home_link_spider.parse_page()
     home_link_spider.write_csv_file()
     end = time.time()
     print("耗时:{}秒".format(end-start))

Now let's look at the result: scraping the same 580 listings took about 16.5 seconds, 2 seconds less than before. So parsel does parse more efficiently than BeautifulSoup; with a small crawl the difference is minor, but with a larger workload it would likely be more noticeable.


The httpx (sync) + parsel combination

Now we go a step further and replace requests with httpx. Sending synchronous requests with httpx works essentially the same way as with requests, so we only need to change two lines of the previous example, swapping requests for httpx; everything else stays identical.
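As an aside, httpx also offers a requests-style Client that reuses connections across requests, which can help when fetching many pages from the same site. A minimal sketch (not used in the code below; the User-Agent string is just an example):

 import httpx

 headers = {"User-Agent": "Mozilla/5.0"}          # any UA string works; fake_useragent can supply one
 with httpx.Client(headers=headers) as client:    # the client keeps connections alive between requests
     resp = client.get("https://sh.lianjia.com/ershoufang/pudong/a3p5/")
     print(resp.status_code, len(resp.text))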

from fake_useragent import UserAgent
 import csv
 import re
 import time
 from parsel import Selector
 import httpx


 class HomeLinkSpider(object):
     def __init__(self):
         self.ua = UserAgent()
         self.headers = {"User-Agent": self.ua.random}
         self.data = list()
         self.path = "浦东_三房_500_800万.csv"
         self.url = "https://sh.lianjia.com/ershoufang/pudong/a3p5/"

     def get_max_page(self):

         # changed here: httpx instead of requests
         response = httpx.get(self.url, headers=self.headers)
         if response.status_code == 200:
             # create a Selector instance
             selector = Selector(response.text)
             # use a CSS selector to get the div that holds the page data
             a = selector.css('div[class="page-box house-lst-page-box"]')
             # use eval to turn the page-data JSON string into a dict
             max_page = eval(a[0].xpath('//@page-data').get())["totalPage"]
             print("最大页码数:{}".format(max_page))
             return max_page
         else:
             print("请求失败 status:{}".format(response.status_code))
             return None

     def parse_page(self):
         max_page = self.get_max_page()
         for i in range(1, max_page + 1):
             url = 'https://sh.lianjia.com/ershoufang/pudong/pg{}a3p5/'.format(i)

             # changed here: httpx instead of requests
             response = httpx.get(url, headers=self.headers)
             selector = Selector(response.text)
             ul = selector.css('ul.sellListContent')[0]
             li_list = ul.css('li')
             for li in li_list:
                 detail = dict()
                 detail['title'] = li.css('div.title a::text').get()

                 #  2室1厅 | 74.14平米 | 南 | 精装 | 高楼层(共6层) | 1999年建 | 板楼
                 house_info = li.css('div.houseInfo::text').get()
                 house_info_list = house_info.split(" | ")

                 detail['bedroom'] = house_info_list[0]
                 detail['area'] = house_info_list[1]
                 detail['direction'] = house_info_list[2]


                 floor_pattern = re.compile(r'\d{1,2}')
                 match1 = re.search(floor_pattern, house_info_list[4])  # search anywhere in the string
                 if match1:
                     detail['floor'] = match1.group()
                 else:
                     detail['floor'] = "未知"

                 # match the build year
                 year_pattern = re.compile(r'\d{4}')
                 match2 = re.search(year_pattern, house_info_list[5])
                 if match2:
                     detail['year'] = match2.group()
                 else:
                     detail['year'] = "未知"

                 # e.g. 文兰小区 - 塘桥: extract the complex name and the area it is in
                 position_info = li.css('div.positionInfo a::text').getall()
                 detail['house'] = position_info[0]
                 detail['location'] = position_info[1]

                 # e.g. 650万: extract 650
                 price_pattern = re.compile(r'\d+')
                 total_price = li.css('div.totalPrice span::text').get()
                 detail['total_price'] = re.search(price_pattern, total_price).group()

                 # e.g. 单价64182元/平米: extract 64182
                 unit_price = li.css('div.unitPrice span::text').get()
                 detail['unit_price'] = re.search(price_pattern, unit_price).group()
                 self.data.append(detail)

     def write_csv_file(self):

         head = ["标题", "小区", "房厅", "面积", "朝向", "楼层", 
                 "年份", "位置", "总价(万)", "单价(元/平方米)"]
         keys = ["title", "house", "bedroom", "area", "direction", 
                 "floor", "year", "location",
                 "total_price", "unit_price"]

         try:
             with open(self.path, 'w', newline='', encoding='utf_8_sig') as csv_file:
                 writer = csv.writer(csv_file, dialect='excel')
                 if head is not None:
                     writer.writerow(head)
                 for item in self.data:
                     row_data = []
                     for k in keys:
                         row_data.append(item[k])
                         # print(row_data)
                     writer.writerow(row_data)
                 print("Write a CSV file to path %s Successful." % self.path)
         except Exception as e:
             print("Fail to write CSV to path: %s, Case: %s" % (self.path, e))

 if __name__ == '__main__':
     start = time.time()
     home_link_spider = HomeLinkSpider()
     home_link_spider.parse_page()
     home_link_spider.write_csv_file()
     end = time.time()
     print("耗时:{}秒".format(end-start))

The whole crawl took 16.1 seconds, so when sending synchronous requests httpx performs essentially the same as requests.


Note: installing httpx with pip on Windows may fail with an error asking for Visual Studio C++; download and install that and the problem goes away.

Next comes the big one: we write an asynchronous crawler with httpx and asyncio and see how long it really takes to scrape those 580 listings from Lianjia.

The httpx (async) + parsel combination

What makes httpx powerful is that it can send asynchronous requests. The async crawler works like this: first send a synchronous request to get the maximum page number, then turn the fetching and parsing of each individual page into an asyncio coroutine task (defined with async), and finally run all the tasks on the event loop.

Most of the code is the same as in the synchronous crawler; there are two main changes:

# Async: a coroutine that parses a single page; takes the page URL as an argument
     async def parse_single_page(self, url):

         # use httpx to send an async request for one page
         async with httpx.AsyncClient() as client:
             response = await client.get(url, headers=self.headers)
             selector = Selector(response.text)
             # everything else is the same as before

     def parse_page(self):
         max_page = self.get_max_page()
         loop = asyncio.get_event_loop()

         # Before Python 3.7, create individual coroutine tasks with asyncio.ensure_future or loop.create_task
         # From Python 3.7 on, asyncio.create_task can be used to create individual coroutine tasks
         tasks = []
         for i in range(1, max_page + 1):
             url = 'https://sh.lianjia.com/ershoufang/pudong/pg{}a3p5/'.format(i)
             tasks.append(self.parse_single_page(url))

         # asyncio.gather(*tasks) can also be used to schedule the coroutines (see the sketch after this snippet)
         loop.run_until_complete(asyncio.wait(tasks))
         loop.close()
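On Python 3.7 and later the same scheduling can also be written without managing the event loop by hand, using asyncio.run together with asyncio.gather. A minimal self-contained sketch, in which fetch_page merely stands in for the parse_single_page coroutine above:

 import asyncio

 async def fetch_page(i):
     await asyncio.sleep(0)                    # stand-in for the real network request and parsing
     return i

 async def main(max_page):
     tasks = [fetch_page(i) for i in range(1, max_page + 1)]
     return await asyncio.gather(*tasks)       # run all page coroutines concurrently

 print(asyncio.run(main(5)))                   # [1, 2, 3, 4, 5]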

The full project code is shown below:

from fake_useragent import UserAgent
 import csv
 import re
 import time
 from parsel import Selector
 import httpx
 import asyncio


 class HomeLinkSpider(object):
     def __init__(self):
         self.ua = UserAgent()
         self.headers = {"User-Agent": self.ua.random}
         self.data = list()
         self.path = "浦东_三房_500_800万.csv"
         self.url = "https://sh.lianjia.com/ershoufang/pudong/a3p5/"

     def get_max_page(self):
         response = httpx.get(self.url, headers=self.headers)
         if response.status_code == 200:
             # create a Selector instance
             selector = Selector(response.text)
             # use a CSS selector to get the div that holds the page data
             a = selector.css('div[class="page-box house-lst-page-box"]')
             # use eval to turn the page-data JSON string into a dict
             max_page = eval(a[0].xpath('//@page-data').get())["totalPage"]
             print("最大页码数:{}".format(max_page))
             return max_page
         else:
             print("请求失败 status:{}".format(response.status_code))
             return None

     # Async: a coroutine that parses a single page; takes the page URL as an argument
     async def parse_single_page(self, url):
         async with httpx.AsyncClient() as client:
             response = await client.get(url, headers=self.headers)
             selector = Selector(response.text)
             ul = selector.css('ul.sellListContent')[0]
             li_list = ul.css('li')
             for li in li_list:
                 detail = dict()
                 detail['title'] = li.css('div.title a::text').get()

                 #  2室1厅 | 74.14平米 | 南 | 精装 | 高楼层(共6层) | 1999年建 | 板楼
                 house_info = li.css('div.houseInfo::text').get()
                 house_info_list = house_info.split(" | ")

                 detail['bedroom'] = house_info_list[0]
                 detail['area'] = house_info_list[1]
                 detail['direction'] = house_info_list[2]


                 floor_pattern = re.compile(r'\d{1,2}')
                 match1 = re.search(floor_pattern, house_info_list[4])  # search anywhere in the string
                 if match1:
                     detail['floor'] = match1.group()
                 else:
                     detail['floor'] = "未知"

                 # match the build year
                 year_pattern = re.compile(r'\d{4}')
                 match2 = re.search(year_pattern, house_info_list[5])
                 if match2:
                     detail['year'] = match2.group()
                 else:
                     detail['year'] = "未知"

                 # e.g. 文兰小区 - 塘桥: extract the complex name and the area it is in
                 position_info = li.css('div.positionInfo a::text').getall()
                 detail['house'] = position_info[0]
                 detail['location'] = position_info[1]

                 # e.g. 650万: extract 650
                 price_pattern = re.compile(r'\d+')
                 total_price = li.css('div.totalPrice span::text').get()
                 detail['total_price'] = re.search(price_pattern, total_price).group()

                 # e.g. 单价64182元/平米: extract 64182
                 unit_price = li.css('div.unitPrice span::text').get()
                 detail['unit_price'] = re.search(price_pattern, unit_price).group()

                 self.data.append(detail)

     def parse_page(self):
         max_page = self.get_max_page()
         loop = asyncio.get_event_loop()

         # Before Python 3.7, create individual coroutine tasks with asyncio.ensure_future or loop.create_task
         # From Python 3.7 on, asyncio.create_task can be used to create individual coroutine tasks
         tasks = []
         for i in range(1, max_page + 1):
             url = 'https://sh.lianjia.com/ershoufang/pudong/pg{}a3p5/'.format(i)
             tasks.append(self.parse_single_page(url))

         # asyncio.gather(*tasks) can also be used to schedule the coroutines
         loop.run_until_complete(asyncio.wait(tasks))
         loop.close()


     def write_csv_file(self):
         head = ["标题", "小区", "房厅", "面积", "朝向", "楼层",
                 "年份", "位置", "总价(万)", "单价(元/平方米)"]
         keys = ["title", "house", "bedroom", "area", "direction",
                 "floor", "year", "location",
                 "total_price", "unit_price"]

         try:
             with open(self.path, 'w', newline='', encoding='utf_8_sig') as csv_file:
                 writer = csv.writer(csv_file, dialect='excel')
                 if head is not None:
                     writer.writerow(head)
                 for item in self.data:
                     row_data = []
                     for k in keys:
                         row_data.append(item[k])
                     writer.writerow(row_data)
                 print("Write a CSV file to path %s Successful." % self.path)
         except Exception as e:
             print("Fail to write CSV to path: %s, Case: %s" % (self.path, e))
 
 if __name__ == '__main__':
     start = time.time()
     home_link_spider = HomeLinkSpider()
     home_link_spider.parse_page()
     home_link_spider.write_csv_file()
     end = time.time()
     print("耗时:{}秒".format(end-start))

Now for the moment of truth. The async crawler written with httpx scraped the same 580 listings from Lianjia in just 2.5 seconds!


Comparison and summary

Scraping the same content takes a different amount of time with each tool combination. The httpx (async) + parsel combination is without doubt the biggest winner, and requests plus BeautifulSoup can indeed retire with honors.

  • requests + BeautifulSoup: 18.5 seconds
  • requests + parsel: 16.5 seconds
  • httpx (sync) + parsel: 16.1 seconds
  • httpx (async) + parsel: 2.5 seconds

Do you have other favorite libraries for Python web scraping?

That concludes this review of the httpx request library and the parsel parsing library for Python web scraping. For more on httpx and parsel in Python, see the other related articles on 三水点靠木!
