Implementing a Web Vulnerability Scanner in Python


Posted in Python on October 25, 2016

This is a small web vulnerability scanner I built for my graduation project last year. It targets simple SQL injection, blind SQL injection, and XSS vulnerabilities. I studied the source code of two small tools on GitHub by a well-known foreign developer (said to be one of the authors of sqlmap) and wrote my own version following their approach. Usage instructions and the source code follow.

I. Usage:

1. Runtime environment:

Linux command line + Python 2.7

2. Creating the program:

vim scanner //create a file named scanner

chmod a+x scanner //make the file executable

3. Running the program:

python scanner //run the script

If no target URL is supplied, the program prints its help message, listing the parameters it accepts.

The parameters are:

--h output help information
--url URL to scan
--data parameters of a POST request
--cookie HTTP Cookie header value
--user-agent HTTP User-Agent header value
--random-agent use a randomly chosen browser User-Agent
--referer HTTP Referer header value (the page linking to the target URL)
--proxy HTTP proxy address
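Internally, the cookie, user-agent and referer options simply become HTTP request headers on every request the scanner sends (this is what init_options does in the source code at the end). A minimal sketch of that mapping, written in Python 3 syntax for illustration (the tool itself is Python 2) and using made-up values:

```python
import urllib.request

def build_headers(cookie=None, ua=None, referer=None):
    # keep only the headers the user actually supplied on the command line
    candidates = (("Cookie", cookie), ("User-Agent", ua), ("Referer", referer))
    return {name: value for name, value in candidates if value}

# hypothetical values, matching the DVWA example below
headers = build_headers(cookie="security=low; PHPSESSID=abc123", ua="Mozilla/5.0")
# the resulting dict is attached to every outgoing request
req = urllib.request.Request("http://127.0.0.1/dvwa/", headers=headers)
print(sorted(req.header_items()))
```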

For example, to scan "http://127.0.0.1/dvwa/vulnerabilities/sqli/?id=&Submit=Submit":

python scanner --url="http://127.0.0.1/dvwa/vulnerabilities/sqli/?id=&Submit=Submit" --cookie="security=low;PHPSESSID=menntb9b2isj7qha739ihg9of1"

The scan results show:

An XSS vulnerability exists, matching the signature "\">.xss.<\"" from the pattern set, i.e. the payload is reflected outside of any tag.
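The ">.xss.<" signature reported here is one entry of the XSS_PATTERNS table in the source: after the injected marker is found reflected in the response, each context regex is tried against it to decide where the reflection landed. A minimal sketch with a made-up response body (Python 3 syntax, unlike the Python 2 tool):

```python
import re

# "outside of tags" context regex from XSS_PATTERNS
PATTERN = r">[^<]*%(chars)s[^<]*(<|\Z)"
# hypothetical reflected sample: random prefix + special chars + random suffix
payload = "'abcde\"><;fghij"
# hypothetical response body reflecting the payload unescaped between tags
html = "<div>hello %s world</div>" % payload

match = re.search(PATTERN % {"chars": re.escape(payload)}, html, re.I)
print(bool(match))  # True: the payload is reflected outside of any tag
```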

A SQL injection vulnerability exists; the target server's database is identified as MySQL.
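Error-based detection works by appending stray SQL metacharacters to the parameter and searching the response for known error signatures; the entry of the DBMS_ERRORS table that matches names the backend database. A condensed sketch (Python 3 syntax, abbreviated signature table, hypothetical error page):

```python
import re

# abbreviated version of the scanner's DBMS_ERRORS signature table
DBMS_ERRORS = {
    "MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*"),
    "Oracle": (r"ORA-[0-9][0-9][0-9][0-9]", r"Oracle error"),
}

def fingerprint(html):
    # return the first DBMS whose error signature appears in the response
    for dbms, regexes in DBMS_ERRORS.items():
        for regex in regexes:
            if re.search(regex, html, re.I):
                return dbms
    return None

# hypothetical error page returned by a vulnerable target
page = "You have an error in your SQL syntax; check the manual for MySQL"
print(fingerprint(page))  # MySQL
```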

A blind SQL injection vulnerability exists.
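Blind detection does not rely on error messages at all: the scanner fetches the page unmodified, then with an always-true condition (AND 1=1) and an always-false one (AND 2=1), and compares the bodies with difflib. If the TRUE page resembles the original but the FALSE page does not, the parameter likely reaches a SQL query. A sketch of that comparison with fabricated response bodies (Python 3 syntax):

```python
import difflib

# fabricated response bodies standing in for real HTTP responses
original   = "Welcome admin. First name: admin, Surname: admin"
page_true  = "Welcome admin. First name: admin, Surname: admin"  # ... AND 1=1
page_false = "Welcome admin."                                    # ... AND 2=1

def ratio(a, b):
    # cheap similarity estimate, as used by the scanner
    return difflib.SequenceMatcher(None, a, b).quick_ratio()

# same thresholds as the scanner: TRUE must match, FALSE must differ
vulnerable = ratio(original, page_true) > 0.95 and ratio(original, page_false) < 0.95
print(vulnerable)  # True
```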

II. Source code:

The code has been verified to run; personally I recommend testing it against DVWA.

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
import optparse, random, re, string, urllib, urllib2, difflib, itertools, httplib

NAME = "Scanner for RXSS and SQLI"
AUTHOR = "Lishuze"

# boilerplate used to build boolean-based blind SQLi payloads
PREFIXES = (" ", ") ", "' ", "') ", "\"")
SUFFIXES = ("", "-- -", "#")
BOOLEAN_TESTS = ("AND %d=%d", "OR NOT (%d=%d)")

# characters injected to provoke DBMS errors / probe output encoding
TAMPER_SQL_CHAR_POOL = ('(', ')', '\'', '"')
TAMPER_XSS_CHAR_POOL = ('\'', '"', '>', '<', ';')

GET, POST = "GET", "POST"
COOKIE, UA, REFERER = "Cookie", "User-Agent", "Referer"
TEXT, HTTPCODE, TITLE, HTML = xrange(4)

_headers = {}

USER_AGENTS = (
    "Mozilla/5.0 (X11; Linux i686; rv:38.0) Gecko/20100101 Firefox/38.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_7_0; en-US) AppleWebKit/534.21 (KHTML, like Gecko) Chrome/11.0.678.0 Safari/534.21",
)

# (context regex, human-readable description, regex stripping irrelevant content)
XSS_PATTERNS = (
    (r"<!--[^>]*%(chars)s|%(chars)s[^<]*-->", "\"<!--.'.xss.'.-->\", inside the comment", None),
    (r"(?s)<script[^>]*>[^<]*?'[^<']*%(chars)s|%(chars)s[^<']*'[^<]*</script>", "\"<script>.'.xss.'.</script>\", enclosed by <script> tags, inside single-quotes", None),
    (r'(?s)<script[^>]*>[^<]*?"[^<"]*%(chars)s|%(chars)s[^<"]*"[^<]*</script>', "'<script>.\".xss.\".</script>', enclosed by <script> tags, inside double-quotes", None),
    (r"(?s)<script[^>]*>[^<]*?%(chars)s|%(chars)s[^<]*</script>", "\"<script>.xss.</script>\", enclosed by <script> tags", None),
    (r">[^<]*%(chars)s[^<]*(<|\Z)", "\">.xss.<\", outside of tags", r"(?s)<script.+?</script>|<!--.*?-->"),
    (r"<[^>]*'[^>']*%(chars)s[^>']*'[^>]*>", "\"<.'.xss.'.>\", inside the tag, inside single-quotes", r"(?s)<script.+?</script>|<!--.*?-->"),
    (r'<[^>]*"[^>"]*%(chars)s[^>"]*"[^>]*>', "'<.\".xss.\".>', inside the tag, inside double-quotes", r"(?s)<script.+?</script>|<!--.*?-->"),
    (r"<[^>]*%(chars)s[^>]*>", "\"<.xss.>\", inside the tag, outside of quotes", r"(?s)<script.+?</script>|<!--.*?-->"),
)

# error message signatures used to fingerprint the backend DBMS
DBMS_ERRORS = {
    "MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*", r"valid MySQL result", r"MySqlClient\."),
    "Microsoft SQL Server": (r"Driver.* SQL[\-\_\ ]*Server", r"OLE DB.* SQL Server", r"(\W|\A)SQL Server.*Driver", r"Warning.*mssql_.*", r"(\W|\A)SQL Server.*[0-9a-fA-F]{8}", r"(?s)Exception.*\WSystem\.Data\.SqlClient\.", r"(?s)Exception.*\WRoadhouse\.Cms\."),
    "Microsoft Access": (r"Microsoft Access Driver", r"JET Database Engine", r"Access Database Engine"),
    "Oracle": (r"ORA-[0-9][0-9][0-9][0-9]", r"Oracle error", r"Oracle.*Driver", r"Warning.*\Woci_.*", r"Warning.*\Wora_.*"),
}

def _retrieve_content_xss(url, data=None):
    # percent-encode spaces in the query string, then fetch the page body
    surl = ""
    for i in xrange(len(url)):
        if i > url.find('?'):
            surl += url[i].replace(' ', "%20")
        else:
            surl += url[i]
    try:
        req = urllib2.Request(surl, data, _headers)
        retval = urllib2.urlopen(req, timeout=30).read()
    except Exception, ex:
        retval = getattr(ex, "message", "")
    return retval or ""

def _retrieve_content_sql(url, data=None):
    # fetch the page and keep its HTTP code, <title>, raw HTML and visible text
    retval = {HTTPCODE: httplib.OK}
    surl = ""
    for i in xrange(len(url)):
        if i > url.find('?'):
            surl += url[i].replace(' ', "%20")
        else:
            surl += url[i]
    try:
        req = urllib2.Request(surl, data, _headers)
        retval[HTML] = urllib2.urlopen(req, timeout=30).read()
    except Exception, ex:
        retval[HTTPCODE] = getattr(ex, "code", None)
        retval[HTML] = getattr(ex, "message", "")
    match = re.search(r"<title>(?P<result>[^<]+)</title>", retval[HTML], re.I)
    retval[TITLE] = match.group("result") if match else None
    retval[TEXT] = re.sub(r"(?si)<script.+?</script>|<!--.+?-->|<style.+?</style>|<[^>]+>|\s+", " ", retval[HTML])
    return retval

def scan_page_xss(url, data=None):
    print "Start scanning RXSS:\n"
    retval, usable = False, False
    url = re.sub(r"=(&|\Z)", r"=1\g<1>", url) if url else url
    data = re.sub(r"=(&|\Z)", r"=1\g<1>", data) if data else data
    try:
        for phase in (GET, POST):
            current = url if phase is GET else (data or "")
            for match in re.finditer(r"((\A|[?&])(?P<parameter>[\w]+)=)(?P<value>[^&]+)", current):
                found, usable = False, True
                print "Scanning %s parameter '%s'" % (phase, match.group("parameter"))
                # unique random marker so reflections can be located reliably
                prefix = "".join(random.sample(string.ascii_lowercase, 5))
                suffix = "".join(random.sample(string.ascii_lowercase, 5))
                if not found:
                    tampered = current.replace(match.group(0), "%s%s" % (match.group(0), urllib.quote("%s%s%s%s" % ("'", prefix, "".join(random.sample(TAMPER_XSS_CHAR_POOL, len(TAMPER_XSS_CHAR_POOL))), suffix))))
                    content = _retrieve_content_xss(tampered, data) if phase is GET else _retrieve_content_xss(url, tampered)
                    for sample in re.finditer("%s([^ ]+?)%s" % (prefix, suffix), content, re.I):
                        for regex, info, content_removal_regex in XSS_PATTERNS:
                            context = re.search(regex % {"chars": re.escape(sample.group(0))}, re.sub(content_removal_regex or "", "", content), re.I)
                            if context and not found and sample.group(1).strip():
                                print "!!!%s parameter '%s' appears to be XSS vulnerable (%s)" % (phase, match.group("parameter"), info)
                                found = retval = True
        if not usable:
            print " (x) no usable GET/POST parameters found"
    except KeyboardInterrupt:
        print "\r (x) Ctrl-C pressed"
    return retval

def scan_page_sql(url, data=None):
    print "Start scanning SQLI:\n"
    retval, usable = False, False
    url = re.sub(r"=(&|\Z)", r"=1\g<1>", url) if url else url
    data = re.sub(r"=(&|\Z)", r"=1\g<1>", data) if data else data
    try:
        for phase in (GET, POST):
            current = url if phase is GET else (data or "")
            for match in re.finditer(r"((\A|[?&])(?P<parameter>\w+)=)(?P<value>[^&]+)", current):
                vulnerable, usable = False, True
                original = None
                print "Scanning %s parameter '%s'" % (phase, match.group("parameter"))
                # error-based check: inject stray SQL characters, grep for DBMS errors
                tampered = current.replace(match.group(0), "%s%s" % (match.group(0), urllib.quote("".join(random.sample(TAMPER_SQL_CHAR_POOL, len(TAMPER_SQL_CHAR_POOL))))))
                content = _retrieve_content_sql(tampered, data) if phase is GET else _retrieve_content_sql(url, tampered)
                for (dbms, regex) in ((dbms, regex) for dbms in DBMS_ERRORS for regex in DBMS_ERRORS[dbms]):
                    if not vulnerable and re.search(regex, content[HTML], re.I):
                        print "!!!%s parameter '%s' could be error SQLi vulnerable (%s)" % (phase, match.group("parameter"), dbms)
                        retval = vulnerable = True
                # boolean-based blind check: compare responses for TRUE/FALSE conditions
                vulnerable = False
                original = original or (_retrieve_content_sql(current, data) if phase is GET else _retrieve_content_sql(url, current))
                for prefix, boolean, suffix in itertools.product(PREFIXES, BOOLEAN_TESTS, SUFFIXES):
                    if not vulnerable:
                        template = "%s%s%s" % (prefix, boolean, suffix)
                        payloads = dict((_, current.replace(match.group(0), "%s%s" % (match.group(0), urllib.quote(template % (1 if _ else 2, 1), safe='%')))) for _ in (True, False))
                        contents = dict((_, _retrieve_content_sql(payloads[_], data) if phase is GET else _retrieve_content_sql(url, payloads[_])) for _ in (False, True))
                        if all(_[HTTPCODE] for _ in (original, contents[True], contents[False])) and (any(original[_] == contents[True][_] != contents[False][_] for _ in (HTTPCODE, TITLE))):
                            vulnerable = True
                        else:
                            ratios = dict((_, difflib.SequenceMatcher(None, original[TEXT], contents[_][TEXT]).quick_ratio()) for _ in (True, False))
                            vulnerable = all(ratios.values()) and ratios[True] > 0.95 and ratios[False] < 0.95
                        if vulnerable:
                            print "!!!%s parameter '%s' could be Blind SQLi vulnerable" % (phase, match.group("parameter"))
                            retval = True
        if not usable:
            print " (x) no usable GET/POST parameters found"
    except KeyboardInterrupt:
        print "\r (x) Ctrl-C pressed"
    return retval

def init_options(proxy=None, cookie=None, ua=None, referer=None):
    global _headers
    _headers = dict(filter(lambda _: _[1], ((COOKIE, cookie), (UA, ua or NAME), (REFERER, referer))))
    urllib2.install_opener(urllib2.build_opener(urllib2.ProxyHandler({'http': proxy})) if proxy else None)

if __name__ == "__main__":
    print "----------------------------------------------------------------------------------"
    print "%s\nBy:%s" % (NAME, AUTHOR)
    print "----------------------------------------------------------------------------------"
    parser = optparse.OptionParser()
    parser.add_option("--url", dest="url", help="Target URL")
    parser.add_option("--data", dest="data", help="POST data")
    parser.add_option("--cookie", dest="cookie", help="HTTP Cookie header value")
    parser.add_option("--user-agent", dest="ua", help="HTTP User-Agent header value")
    parser.add_option("--random-agent", dest="randomAgent", action="store_true", help="Use randomly selected HTTP User-Agent header value")
    parser.add_option("--referer", dest="referer", help="HTTP Referer header value")
    parser.add_option("--proxy", dest="proxy", help="HTTP proxy address")
    options, _ = parser.parse_args()
    if options.url:
        init_options(options.proxy, options.cookie, options.ua if not options.randomAgent else random.choice(USER_AGENTS), options.referer)
        result_xss = scan_page_xss(options.url if options.url.startswith("http") else "http://%s" % options.url, options.data)
        print "\nScan results: %s vulnerabilities found" % ("possible" if result_xss else "no")
        print "----------------------------------------------------------------------------------"
        result_sql = scan_page_sql(options.url if options.url.startswith("http") else "http://%s" % options.url, options.data)
        print "\nScan results: %s vulnerabilities found" % ("possible" if result_sql else "no")
        print "----------------------------------------------------------------------------------"
    else:
        parser.print_help()

That concludes this introduction to implementing a web vulnerability scanner in Python. I hope it helps; if you have any questions, please leave me a message and I will reply promptly. Many thanks as well for your support of the site.
