Right now I'm using Crawler4j and I'm pretty happy with it, but it cannot crawl AJAX-based websites. I once used Selenium for a different project, and that worked fine combined with PhantomJS. So is there a way to plug Selenium into Crawler4j?
If not: is there another good Java library for handling AJAX-based websites?
(By web spider I mean that I give the program one URL and it automatically starts extracting the content from the site.)