
I've created a program that pulls websites from Google and then strips them down to their base URL, e.g. http://google.com/search/owie/weikw => http://google.com. It then saves these URLs to a file.
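The stripping step is essentially this (a rough sketch using Ruby's standard URI library; the helper name base_url is just illustrative):

require 'uri'

# Reduce a full URL to its scheme and host, e.g.
# http://google.com/search/owie/weikw => http://google.com
def base_url(url)
  uri = URI.parse(url)
  "#{uri.scheme}://#{uri.host}"
end

base_url('http://google.com/search/owie/weikw') # => "http://google.com"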

After that, it runs .each_line on the file and executes a whois command for each line. If the command doesn't respond within a certain amount of time, I want to skip that line of the file and move on to the next one. Is there a way I can do this?

13aal

1 Answer


Use the Timeout Module

If your scraper or whois command doesn't support timeouts natively, you can use Timeout::timeout to set an upper bound in seconds. For example:

require 'timeout'

MAX_SECONDS = 10 

begin
  Timeout::timeout(MAX_SECONDS) do
    # run your whois
  end
rescue Timeout::Error
  # handle the exception
end

By default, this will raise a Timeout::Error exception if the block exceeds the time limit, but you can have the method raise other exceptions if you prefer. How you handle the exceptions is then up to you.
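Since you're looping over a file, you can combine this with next to skip slow lookups and keep going. Here's a minimal sketch, assuming the URLs live in a file called urls.txt and whois is shelled out via backticks (adjust both to your setup):

require 'timeout'

MAX_SECONDS = 10

File.open('urls.txt') do |file|
  file.each_line do |line|
    domain = line.strip
    next if domain.empty?

    begin
      Timeout::timeout(MAX_SECONDS) do
        # Shell out to the whois CLI and print its output
        puts `whois #{domain}`
      end
    rescue Timeout::Error
      # Lookup took too long; skip this line and move on
      warn "whois timed out for #{domain}, skipping"
    end
  end
end

Note that Timeout only interrupts the Ruby thread, not the child process, so a hung whois may linger in the background; if that matters, wrapping the command with the shell's timeout(1) utility is a stricter alternative.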

Todd A. Jacobs
    That's awesome! I was searching Google for the past hour to find something and couldn't, thanks! – 13aal Mar 06 '16 at 23:36