I have looked everywhere but cannot find a solution for my particular case.
I have a website that is essentially just a directory listing of a bunch of files (directory listing is enabled on the server). The website's URL scheme is 'ftp://'. All I want to do is extract the HTML so that I can get the names and URLs of the files in the directory. I have tried the following code (sorry, I can't post the actual FTP URL):
import java.io.BufferedInputStream;
import java.net.URL;
import java.net.URLConnection;

String ftpURL = "ftp://blah.com"; // placeholder for the real FTP URL
URL url = new URL(ftpURL);
URLConnection urlc = url.openConnection();

// open the stream and wrap it in a BufferedInputStream
BufferedInputStream bis = new BufferedInputStream(urlc.getInputStream()); // ERROR HERE

// read the listing byte by byte into a string
int inputByte;
StringBuilder outputHtml = new StringBuilder();
while ((inputByte = bis.read()) != -1) {
    outputHtml.append((char) inputByte);
}
bis.close();
When I run this code I get the following error on the line marked // ERROR HERE (the urlc.getInputStream() call):
java.io.IOException: Unable to connect to server: Unable to retrieve file: 550
EDIT: If extracting the HTML from the FTP site isn't a possibility, how would I go about getting a list of the names and URLs of each file in the directory specified in the FTP URL? Also, I should note that I can access the FTP site publicly and can view all sub-files and directories without any authentication required.
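If a plain URLConnection can't do this, would a dedicated FTP client library such as Apache Commons Net be the right approach? Here is a rough, untested sketch of what I have in mind (the host name is a placeholder, and I'm assuming anonymous login works since no authentication is required):

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class ListFtpDirectory {
    public static void main(String[] args) throws Exception {
        String host = "blah.com"; // placeholder; I can't post the real host

        FTPClient ftp = new FTPClient();
        ftp.connect(host);
        ftp.login("anonymous", ""); // assuming anonymous access is allowed
        ftp.enterLocalPassiveMode();

        // List the entries in the root directory and build a URL for each one
        for (FTPFile file : ftp.listFiles("/")) {
            String fileUrl = "ftp://" + host + "/" + file.getName();
            System.out.println(file.getName() + " -> " + fileUrl);
        }

        ftp.logout();
        ftp.disconnect();
    }
}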
Any ideas? Thank you!