
I hope you don't frown on me too much, but this should be answerable by someone fairly easily. I want to read a file on a website into a string, so I can extract information from it.

I just want a simple way to get the HTML source read into a string. After looking around for hours I see all these libraries and curl and stuff. All I need is the raw HTML data. I don't even need a definite answer. Just something that will help me refine my search.

Just to be clear, I want the raw code in a string I can manipulate; I don't need any parsing, etc.

Kyle
  • Your searching isn't working because you're looking for the wrong thing. If you want to fetch web pages without parsing them, then what you're looking for is an HTTP client library. Don't look for "html" -- since you don't want to parse it, "html" is irrelevant to your search. – Laurence Gonsalves Dec 06 '10 at 21:11

4 Answers


You need an HTTP client library; libcurl is one of many. You would then issue a GET request to a URL and read the response back however your chosen library provides it.

Here is an example to get you started. It is plain C, so I am sure you can work it out.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");

    /* Perform the request; by default libcurl writes the body to stdout. */
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  return 0;
}

But you tagged this C++, so if you want a C++ wrapper for libcurl, use curlpp:

#include <iostream>

#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>

using namespace curlpp::options;

int main(int, char **)
{
  try
  {
    // That's all that is needed to do cleanup of used resources
    curlpp::Cleanup myCleanup;

    // Our request to be sent.
    curlpp::Easy myRequest;

    // Set the URL.
    myRequest.setOpt<Url>("http://example.com");

    // Send request and get a result.
    // By default the result goes to standard output.
    myRequest.perform();
  }

  catch(curlpp::RuntimeError & e)
  {
    std::cout << e.what() << std::endl;
  }

  catch(curlpp::LogicError & e)
  {
    std::cout << e.what() << std::endl;
  }

  return 0;
}
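
Both examples above print the page to standard output, but the question asks for the HTML in a string. Here is a sketch of how to capture it instead, assuming curlpp's WriteStream option (which redirects the response body into any std::ostream):

#include <iostream>
#include <sstream>
#include <string>

#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>

int main()
{
  try
  {
    curlpp::Cleanup myCleanup;
    curlpp::Easy myRequest;
    myRequest.setOpt<curlpp::options::Url>("http://example.com");

    // Redirect the response body into a string stream instead of stdout.
    std::ostringstream os;
    myRequest.setOpt(curlpp::options::WriteStream(&os));

    myRequest.perform();

    // The raw HTML, ready to be manipulated as a plain std::string.
    std::string html = os.str();
    std::cout << "fetched " << html.size() << " bytes" << std::endl;
  }
  catch(curlpp::RuntimeError & e)
  {
    std::cerr << e.what() << std::endl;
  }
  catch(curlpp::LogicError & e)
  {
    std::cerr << e.what() << std::endl;
  }
  return 0;
}

Once perform() returns, os.str() holds the raw page source.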

HTTP is built on top of TCP. If you know socket programming, you can write a simple networking application that opens a socket to the desired server and issues an HTTP GET command. Whatever the server responds with, you'll have to remove the HTTP headers that precede the actual document you want.
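
To make that concrete, here is a minimal POSIX-sockets sketch, assuming a hypothetical host and plain HTTP/1.0 (no error handling, redirects, or TLS):

#include <iostream>
#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
  // Resolve the host name (port 80 = plain HTTP).
  addrinfo hints{}, *res;
  hints.ai_family = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo("example.com", "80", &hints, &res) != 0)
    return 1;

  // Open a TCP connection to the server.
  int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
    return 1;
  freeaddrinfo(res);

  // HTTP/1.0 makes the server close the connection after the response,
  // so we can simply read until end-of-stream.
  std::string request = "GET / HTTP/1.0\r\n"
                        "Host: example.com\r\n"
                        "\r\n";
  send(fd, request.data(), request.size(), 0);

  // Read the whole response into a string.
  std::string response;
  char buf[4096];
  ssize_t n;
  while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
    response.append(buf, n);
  close(fd);

  // Strip the headers: the body starts after the first blank line.
  std::string::size_type pos = response.find("\r\n\r\n");
  std::string html = (pos == std::string::npos)
                         ? response : response.substr(pos + 4);
  std::cout << html;
  return 0;
}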

If that sounds complicated, then just stick with libcurl.

chrisaycock
  • I agree with @chrisaycock, your best option is probably just to find a good libcurl example and go with that. – wajiw Dec 06 '10 at 21:08

If it is a hack, then just grab the source from "view source" and save it as a text file. Then you can open it with a normal file I/O stream.
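
For example (a sketch, assuming the page source was saved to a hypothetical page.html first):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
  // "page.html" is the manually saved page source.
  std::ifstream file("page.html");
  if (!file)
    return 1;

  // Stream the whole file into a string buffer.
  std::ostringstream buffer;
  buffer << file.rdbuf();
  std::string html = buffer.str();

  std::cout << html;
  return 0;
}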

  • all those pesky libraries are a hint that it is a common and non-trivial exercise to do it right... :)
Randy

If all you want to do is grab the entire HTML code without any kind of parsing or external libraries, my suggestion would be copying the code into a string with an I/O stream.

It is the simplest way that I have in mind, but be aware that it isn't the most efficient way to do it.
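
As a sketch, assuming the source was already saved to a local file (standard streams cannot open URLs), the string can be built straight from stream iterators:

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
  // Hypothetical local copy of the page, saved beforehand.
  std::ifstream file("page.html");
  if (!file)
    return 1;

  // Extra parentheses avoid the "most vexing parse".
  std::string html((std::istreambuf_iterator<char>(file)),
                   std::istreambuf_iterator<char>());

  std::cout << html;
  return 0;
}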

Renato Rodrigues