
As a follow-up to this question: how can I locate the XHR request that retrieves the data from the back-end API on CNBC News, so that I can scrape this CNBC search query?

The end goal is to have a document with: headline, date, full article, and URL.

I have found this: https://api.sail-personalize.com/v1/personalize/initialize?pageviews=1&isMobile=0&query=coronavirus&qsearchterm=coronavirus

The response tells me I don't have access. Is there a way to access the information anyway?


1 Answer


Actually, my previous answer to you already addressed your question regarding the XHR request.

But here we go with a screenshot:

[Screenshot: the XHR request to api.queryly.com in the browser's developer-tools Network tab]

import requests

# Query parameters copied from the XHR request the CNBC search page
# sends to the Queryly search API.
params = {
    "queryly_key": "31a35d40a9a64ab3",
    "query": "coronavirus",
    "endindex": "0",
    "batchsize": "100",
    "callback": "",
    "showfaceted": "true",
    "timezoneoffset": "-120",
    "facetedfields": "formats",
    "facetedkey": "formats|",
    "facetedvalue": "!Press Release|",
    "needtoptickers": "1",
    "additionalindexes": "4cd6f71fbf22424d,937d600b0d0d4e23,3bfbe40caee7443e,626fdfcd96444f28"
}

# Fields to pull from each search result.
goal = ["cn:title", "_pubDate", "cn:liveURL", "description"]


def main(url):
    with requests.Session() as req:
        # Page through the results in batches of 100.
        for page, item in enumerate(range(0, 1100, 100)):
            print(f"Extracting Page# {page + 1}")
            params["endindex"] = item
            r = req.get(url, params=params).json()
            for result in r['results']:
                print([result[x] for x in goal])


main("https://api.queryly.com/cnbc/json.aspx")
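The loop above steps `endindex` through the result set in batches of 100. The paging arithmetic can be isolated into a small helper; this is just a sketch to illustrate the offsets, with hypothetical totals:

```python
def page_offsets(total, batchsize=100):
    """Yield (page_number, endindex) pairs for paging through `total` results."""
    for page, offset in enumerate(range(0, total, batchsize), start=1):
        yield page, offset


# For 300 results in batches of 100:
# page 1 -> endindex 0, page 2 -> endindex 100, page 3 -> endindex 200
```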

Pandas DataFrame version:

import requests
import pandas as pd

# Same query parameters as above.
params = {
    "queryly_key": "31a35d40a9a64ab3",
    "query": "coronavirus",
    "endindex": "0",
    "batchsize": "100",
    "callback": "",
    "showfaceted": "true",
    "timezoneoffset": "-120",
    "facetedfields": "formats",
    "facetedkey": "formats|",
    "facetedvalue": "!Press Release|",
    "needtoptickers": "1",
    "additionalindexes": "4cd6f71fbf22424d,937d600b0d0d4e23,3bfbe40caee7443e,626fdfcd96444f28"
}

goal = ["cn:title", "_pubDate", "cn:liveURL", "description"]


def main(url):
    with requests.Session() as req:
        allin = []
        for page, item in enumerate(range(0, 1100, 100)):
            print(f"Extracting Page# {page + 1}")
            params["endindex"] = item
            r = req.get(url, params=params).json()
            for result in r['results']:
                allin.append([result[x] for x in goal])
        # Collect everything into a DataFrame and write it to CSV.
        new = pd.DataFrame(
            allin, columns=["Title", "Date", "Url", "Description"])
        new.to_csv("data.csv", index=False)


main("https://api.queryly.com/cnbc/json.aspx")
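Neither snippet retrieves the full article text mentioned in the question. As a rough sketch, each `cn:liveURL` could be fetched and its paragraph text joined. This assumes the article body lives mostly inside `<p>` tags, which may not hold for every CNBC page, and uses only the standard-library HTML parser:

```python
from html.parser import HTMLParser

import requests


class ParagraphExtractor(HTMLParser):
    """Collect the text of every <p> element -- a rough proxy for article body."""

    def __init__(self):
        super().__init__()
        self._depth = 0
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._depth += 1
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p" and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        # Only keep text that appears inside an open <p>.
        if self._depth:
            self.paragraphs[-1] += data


def article_text(url):
    """Download an article page and join the text of its <p> elements."""
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
    parser = ParagraphExtractor()
    parser.feed(html)
    return "\n\n".join(p.strip() for p in parser.paragraphs if p.strip())
```

You could call `article_text` on each URL collected by the scraper above, ideally with a delay between requests to stay polite to the server.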

Output: a data.csv file with Title, Date, Url, and Description columns.