
I'm trying to build a sub-domain brute forcer for use with my clients - I work in security/pen testing. Currently, I am able to get Resolv to look up around 70 hosts in 10 seconds, give or take, and I wanted to know if there is a way to get it to do more. I have seen alternative scripts out there, mainly Python based, that can achieve far greater speeds than this. I don't know how to increase the number of requests Resolv makes in parallel, or whether I should split the list up. Please note I have put Google's DNS servers in the sample code, but I will be using internal ones for live usage.

My rough code for debugging this issue is:

require 'resolv'

def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"
  subs = []
  domains = File.open("domains.txt", "r") # list of domain names, one per line
  Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
  File.open("tiny.txt", "r").each_line do |subdomain|
    subdomain.chomp!
    domains.each do |d|
      puts "Checking #{subdomain}.#{d}"
      ip = Resolv.new.getaddress "#{subdomain}.#{d}" rescue ""
      if ip != nil
        subs << subdomain + "." + d << ip
      end
    end
  end
  test = subs.each_slice(4).to_a
  test.each do |z|
    if !z[1].nil? and !z[3].nil?
      puts z[0] + "\t" + z[1] + "\t\t" + z[2] + "\t" + z[3]
    end
  end
  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end

subdomains

domains.txt is my list of client domain names, for example google.com, bbc.co.uk, apple.com, and tiny.txt is a list of potential subdomain names, for example ftp, www, dev, files, upload. Resolv will then look up files.bbc.co.uk, for example, and let me know if it exists.

hatlord
    `Resolv` is thread-aware, not multi-threaded, i.e. it allows *you* to make requests in parallel. – Stefan Apr 20 '16 at 12:43
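A minimal, untested sketch of what that comment suggests: share one resolver and run each getaddress call in its own thread. The batch size of 20 is an arbitrary starting point, and the use of Resolv::DNS (which accepts the :nameserver option directly) rather than Resolv is an assumption of this sketch, not the asker's code:

require 'resolv'

resolver = Resolv::DNS.new(:nameserver => ['8.8.8.8', '8.8.4.4'])

domains    = File.readlines("domains.txt").map(&:chomp)
subdomains = File.readlines("tiny.txt").map(&:chomp)
names      = subdomains.product(domains).map { |sub, dom| "#{sub}.#{dom}" }

# One shared resolver; each lookup blocks only its own thread.
# 20 threads per batch is an arbitrary number to experiment with.
results = names.each_slice(20).flat_map do |batch|
  threads = batch.map do |name|
    Thread.new { [name, (resolver.getaddress(name).to_s rescue nil)] }
  end
  threads.map(&:value)
end

results.each { |name, ip| puts "#{name}\t#{ip}" if ip }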

2 Answers


One thing is that you are creating a Resolv instance configured with the Google nameservers but never using it; the getaddress call creates a brand new Resolv instance, so that one is probably using the default nameservers rather than the Google ones. You could change the code to something like this:

resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
# ...
ip = resolv.getaddress "#{subdomain}.#{d}" rescue ""

In addition, I suggest using the File.readlines method to simplify your code:

domains = File.readlines("domains.txt").map(&:chomp)
subdomains = File.readlines("tiny.txt").map(&:chomp)

Also, you're rescuing a failed lookup into an empty string, but on the next line you test for not nil, so every result passes the check, and I don't think that's what you want.
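For example, with the current rescue a name that fails to resolve still gets recorded:

ip = Resolv.new.getaddress("no-such-host.example.invalid") rescue ""
ip != nil   # => true - "" is not nil, so the failed lookup is still added to subs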

I've refactored your code, but not tested it. Here is what I came up with; it may be clearer:

def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"
  domains = File.readlines("domains.txt").map(&:chomp)
  subdomains = File.readlines("tiny.txt").map(&:chomp)

  resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])

  valid_subdomains = subdomains.each_with_object([]) do |subdomain, valid_subdomains|
    domains.each do |domain|
      combined_name = "#{subdomain}.#{domain}"
      puts "Checking #{combined_name}"
      ip = resolv.getaddress(combined_name) rescue nil
      valid_subdomains << combined_name << ip if ip
    end
  end

  valid_subdomains.each_slice(4).each do |z|
    if z[1] && z[3]
      puts "#{z[0]}\t#{z[1]}\t\t#{z[2]}\t#{z[3]}"
    end
  end

  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end

Also, you might want to check out the dnsruby gem (https://github.com/alexdalitz/dnsruby). It might do what you want better than Resolv.
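For a taste of its API, a single synchronous lookup might look roughly like this (an untested sketch; the domain name is just an example):

require 'dnsruby'

resolver = Dnsruby::Resolver.new(:nameserver => %w(8.8.8.8 8.8.4.4))

begin
  # Ask for the A record and print the first address found, if any.
  response = resolver.query("www.bbc.co.uk", 'A')
  a_record = response.answer.detect { |rr| rr.type == 'A' }
  puts a_record ? a_record.rdata.to_s : "(no A record)"
rescue Dnsruby::NXDomain
  puts "name does not exist"
end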

Keith Bennett
  • Thanks for the response! This all works other than "ip = resolv.getaddress(combined_name) rescue nil" - if I leave that code in it returns no responses, but if I change it back to Resolv.getaddress.... it works. What it doesn't do, however, is give me the speed I am looking for. I may need to rethink how I go about this, and I'm a little too new to Ruby to be looking at threading etc. – hatlord Apr 20 '16 at 16:19

[Note: I've rewritten the code so that it fetches the IP addresses in chunks. Please see https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed. You can modify the RESOLVE_CHUNK_SIZE constant to balance performance with resource load.]

I've rewritten this code using the dnsruby gem (written mainly by Alex Dalitz in the UK, and contributed to by myself and others). This version uses asynchronous message processing so that all requests are being processed pretty much simultaneously. I've posted a gist at https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed but will also post the code here.

Note that since you are new to Ruby, there are lots of things in the code that might be instructive to you, such as method organization, use of Enumerable methods (e.g. the amazing 'partition' method), the Struct class, rescuing a specific Exception class, %w, and Benchmark.
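For instance, partition splits a collection into two arrays in a single pass, and Struct gives you a small value class with named fields; both are used in the script below. A toy illustration (the names and addresses here are made up):

IpEntry = Struct.new(:name, :ip)

entries = [
  IpEntry.new("www.example.com", "192.0.2.1"),  # made-up address
  IpEntry.new("xyz.example.com", nil)           # lookup failed
]

found, not_found = entries.partition { |entry| entry.ip }
# found     => [#<struct IpEntry name="www.example.com", ip="192.0.2.1">]
# not_found => [#<struct IpEntry name="xyz.example.com", ip=nil>]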

NOTE: LOOKS LIKE STACK OVERFLOW ENFORCES A MAXIMUM MESSAGE SIZE, SO THIS CODE IS TRUNCATED. GO TO THE GIST IN THE LINK ABOVE FOR THE COMPLETE CODE.

#!/usr/bin/env ruby

# Takes a list of subdomain prefixes (e.g.  %w(ftp  xyz)) and a list of domains (e.g. %w(nytimes.com  afp.com)),
# creates the subdomains by combining them, and fetches their IP addresses (or nil if not found).

require 'dnsruby'
require 'awesome_print'

RESOLVER = Dnsruby::Resolver.new(:nameserver => %w(8.8.8.8  8.8.4.4))

# Experiment with this to get fast throughput but not overload the dnsruby async mechanism:
RESOLVE_CHUNK_SIZE = 50


IpEntry = Struct.new(:name, :ip) do
  def to_s
    "#{name}: #{ip ? ip : '(nil)'}"
  end
end


def assemble_subdomains(subdomain_prefixes, domains)
  domains.each_with_object([]) do |domain, subdomains|
    subdomain_prefixes.each do |prefix|
      subdomains << "#{prefix}.#{domain}"
    end
  end
end


def create_query_message(name)
  Dnsruby::Message.new(name, 'A')
end


def parse_response_for_address(response)
  begin
    a_answer = response.answer.detect { |a| a.type == 'A' }
    a_answer ? a_answer.rdata.to_s : nil
  rescue Dnsruby::NXDomain
    return nil
  end
end


def get_ip_entries(names)

  queue = Queue.new

  names.each do |name|
    query_message = create_query_message(name)
    RESOLVER.send_async(query_message, queue, name)
  end


  # Note: although map is used here, the record in the output array will not necessarily correspond
  # to the record in the input array, since the order of the messages returned is not guaranteed.
  # This is indicated by the lack of block variable specified (normally w/map you would use the element).
  # That should not matter to us though.
  names.map do
    _id, result, error = queue.pop
    name = _id
    case error
    when Dnsruby::NXDomain
      IpEntry.new(name, nil)
    when NilClass
      ip = parse_response_for_address(result)
      IpEntry.new(name, ip)
    else
      raise error
    end
  end
end


def main
  # domains = File.readlines("domains.txt").map(&:chomp)
  domains = %w(nytimes.com  afp.com  cnn.com  bbc.com)

  # subdomain_prefixes = File.readlines("subdomain_prefixes.txt").map(&:chomp)
  subdomain_prefixes = %w(www  xyz)

  subdomains = assemble_subdomains(subdomain_prefixes, domains)

  start_time = Time.now
  ip_entries = subdomains.each_slice(RESOLVE_CHUNK_SIZE).each_with_object([]) do |names_chunk, results|
    results.concat get_ip_entries(names_chunk)
  end
  duration = Time.now - start_time

  found, not_found = ip_entries.partition { |entry| entry.ip }

  puts "\nFound:\n\n";  puts found.map(&:to_s);  puts "\n\n"
  puts "Not Found:\n\n"; puts not_found.map(&:to_s); puts "\n\n"

  stats = {
      duration:        duration,
      domain_count:    ip_entries.size,
      found_count:     found.size,
      not_found_count: not_found.size,
  }

  ap stats
end


main
Keith Bennett
  • Wow - thanks for the response. I really do have a lot to learn but structs are in one of my upcoming modules on pragmatic studio at least. When running the app it is fast, but if I give it a large list of sub-domains it crashes after pegging my CPU at 100% for a while, I assume this is while it is building the domain list as I see no relevant DNS queries in Wireshark. I get a variety of different errors, on the Mac this was the latest "dnsruby/config.rb:306:in `initialize': Too many open files @ rb_sysopen - /etc/resolv.conf (Errno::EMFILE)". Again, many thanks for your response! – hatlord Apr 21 '16 at 07:57
  • Yes, that script assumed a manageable number of names. About how many subdomain prefixes and domain names does it take to crash the system? And can you put some print statements in there to see where the 100% CPU is happening (e.g. is it in the Dnsruby code, or in the building of the subdomain array)? – Keith Bennett Apr 21 '16 at 15:13
  • The subdomain list I used to crash it had 5000 entries - so quite a lot ;) I'll try and figure out where it's crashing. The subs list is https://github.com/darkoperator/dnsrecon/blob/master/subdomains-top1mil-5000.txt - Thanks again! – hatlord Apr 21 '16 at 15:48
  • 5,000 entries isn't so much by itself, but the question is how many domains, because the number of subdomains will be the product of those 2 numbers. If you're expecting to have that many subdomain prefixes, I'd recommend modifying the code to process only one domain at a time. – Keith Bennett Apr 21 '16 at 15:52
  • The test was a single domain and 5000 sub-domains - I didn't want to throw everything at it until I had tested it. – hatlord Apr 21 '16 at 15:57
  • I've modified the program to fetch IP addresses in chunks; that should fix your memory and CPU problem. The default chunk size is 50, but you can change that to optimize for your situation. You'll need to gem install awesome_print. – Keith Bennett Apr 22 '16 at 06:49