I love Selenium and I love ScraperWiki, but somehow I cannot get them to work together. I've tried to open a website with Selenium on ScraperWiki in two ways, both taken from tutorials:
import selenium
sel = selenium.selenium("localhost",4444,"*firefox", "http://www.google.com")
sel.open("http://google.com")
This does not work. It gives me the following error:
error: [Errno 111] Connection refused
And neither does this:
from selenium import webdriver
browser = webdriver.Firefox()
Which gives another error:
/usr/lib/python2.7/subprocess.py:672 -- __init__((self=<subprocess.Popen object at 0x1d14410>, args=[None, '-silent'], bufsize=0, executable=None, stdin=None, stdout=-1, stderr=-1, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0))
AttributeError: 'NoneType' object has no attribute 'rfind'
Does anyone see a logical reason for this?
The ScraperWiki docs indicate that Selenium is "Only useful in ScraperWiki if you have a Selenium server to point it to." I don't know exactly what they mean by this, but I reckon it might be the cause of the problem. Any help would be greatly appreciated.
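If I understand that sentence correctly, the Selenium client on ScraperWiki cannot launch a local browser, so both snippets would need a Selenium server running somewhere the script can reach. Here is a sketch of what I think I'd need, first checking whether anything is actually listening (the host, port, and use of webdriver.Remote are my assumptions, not something the docs spell out):

```python
import socket

def selenium_server_reachable(host, port, timeout=2.0):
    """Return True if something is listening on host:port (e.g. a Selenium server)."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.error, OSError):
        return False

# localhost:4444 is the default Selenium server address; on ScraperWiki nothing
# seems to listen there, which would explain the "Connection refused" error.
if selenium_server_reachable("localhost", 4444):
    # webdriver.Remote talks to an already-running Selenium server instead of
    # trying to launch a browser on the local machine.
    from selenium import webdriver
    browser = webdriver.Remote(
        command_executor="http://localhost:4444/wd/hub",
        desired_capabilities={"browserName": "firefox"},  # assumed capabilities
    )
    browser.get("http://www.google.com")
    browser.quit()
else:
    print("No Selenium server on localhost:4444 - connection would be refused")
```

So presumably I would have to run a Selenium server on a machine I control and replace "localhost" with its address, but I haven't been able to confirm this.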