
Is there any limitation on fetching Twitter data with R? I am trying to get 2,000 tweets, but the Twitter API returns only 261.

Warning messages:
1: In if (nchar(searchString) > 1000) { :
  the condition has length > 1 and only the first element will be used
2: In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
  2000 tweets were requested but the API can only return 261
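The first warning indicates that a character vector of search terms was passed as searchString: nchar() then returns one value per term, so the if() condition has length greater than one. A hedged reconstruction of the kind of call that produces both messages (the search terms here are placeholders, and setup_twitter_oauth() is assumed to have been run already):

library(twitteR)

# A vector of terms reproduces warning 1: only the first element of the
# length-check condition is used.
tweets <- searchTwitter(c("term1", "term2"), n = 2000, retryOnRateLimit = 120)

# Collapsing the terms into a single query string avoids warning 1; warning 2
# can still appear whenever fewer than n matching tweets are available.
tweets <- searchTwitter("term1 OR term2", n = 2000, retryOnRateLimit = 120)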

Rana Usman
Rajib Kumar De

2 Answers


To avoid the Search API limitations, use the streamR package:

library(streamR)

filterStream opens a connection to Twitter’s Streaming API that will return public statuses that match one or more filter predicates. Tweets can be filtered by keywords, users, language, and location. The output can be saved as an object in memory or written to a text file.

filterStream(file.name = NULL, track = NULL, follow = NULL, locations = NULL, language = NULL, timeout = 0, tweets = NULL, oauth = NULL, verbose = TRUE)

NOTE: This function collects tweets in real time and is not subject to the Search API's result limits.
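A minimal sketch of the streamR workflow, assuming the ROAuth package is used to create the token; the credentials, keyword, and file name below are placeholders:

library(ROAuth)
library(streamR)

# One-time OAuth handshake; replace the keys with your own app credentials.
my_oauth <- OAuthFactory$new(
  consumerKey    = "YOUR_CONSUMER_KEY",
  consumerSecret = "YOUR_CONSUMER_SECRET",
  requestURL     = "https://api.twitter.com/oauth/request_token",
  accessURL      = "https://api.twitter.com/oauth/access_token",
  authURL        = "https://api.twitter.com/oauth/authorize"
)
my_oauth$handshake()

# Collect tweets matching a keyword for 10 minutes and write them to disk.
filterStream(
  file.name = "tweets.json",
  track     = "rstats",
  timeout   = 600,       # seconds to keep the streaming connection open
  oauth     = my_oauth
)

# Parse the collected JSON into a data frame.
tweets_df <- parseTweets("tweets.json")

Because the stream only delivers tweets posted while the connection is open, the number collected depends on the timeout and on how active the tracked keyword is.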

Suanbit

You will definitely not get as many tweets as exist. Twitter limits how far back you can go (and therefore how many tweets are available) with a minimum since_id parameter applied to the GET search/tweets call of the Twitter API. In Tweepy, the API.search function interfaces with this endpoint. Twitter's GET search/tweets documentation has a lot of good info:

There are limits to the number of Tweets which can be accessed through the API. If the limit of Tweets has occurred since the since_id, the since_id will be forced to the oldest ID available.

In practical terms, Tweepy's API.search should not take long to get all the available tweets. Note that not all tweets are available per the Twitter API, but I've never had a search take more than 10 minutes.
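The above refers to Tweepy (Python); the question uses the twitteR package, where the same mechanism is exposed through the sinceID and maxID arguments of searchTwitter(). A hedged R sketch, assuming setup_twitter_oauth() has already been run with your own credentials:

library(twitteR)

# retryOnRateLimit tells twitteR to wait and retry when the rate limit is hit.
tweets <- searchTwitter("rstats", n = 2000, retryOnRateLimit = 120)

# The Search API only reaches back roughly a week, so receiving fewer results
# than requested (261 in the question) usually just means no older matching
# tweets are available; sinceID and maxID page within that window.
length(tweets)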

shivlal kumavat