I have a dataset of tweets that I obtained from the Academic Twitter API by querying a list of terms:
library(academictwitteR)

search1 <- "(rada'a OR radda OR radaa OR \"al bayda\" OR bayda OR AQAP)"

radaa.all <- get_all_tweets(
  query = search1,
  start_tweets = "2013-12-02T00:00:00Z",
  end_tweets = "2013-12-22T00:00:00Z",
  bearer_token = bearer_token,
  n = 100000,
  data_path = "/Users/violetross/Desktop/Data Science/Evan's Project/radaa",
  bind_tweets = FALSE  # store the raw JSON at data_path instead of binding
)
I extracted every link shared within those tweets and expanded each one from its t.co shortened form to the full URL. I would now like to use each of these URLs in an API call to find every tweet containing it.
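For context, the extraction step looked roughly like this (a sketch rather than my exact code; it assumes the default bind_tweets() binding, where entities is a list column carrying the expanded_url field from the v2 payload, and uses httr to chase any remaining redirects):

library(academictwitteR)
library(dplyr)
library(httr)

# Bind the JSON files stored at data_path into a single data frame.
tweets <- bind_tweets(data_path = "/Users/violetross/Desktop/Data Science/Evan's Project/radaa")

# Pull the expanded_url field the v2 payload already carries
# (assumes entities$urls is a list of per-tweet data frames).
urls <- distinct(bind_rows(tweets$entities$urls), expanded_url)

# Resolve any links that are still shortened: httr follows redirects
# by default, and the final location ends up in resp$url.
resolve <- function(u) {
  tryCatch(HEAD(u, timeout(10))$url, error = function(e) NA_character_)
}
urls$fullURL <- vapply(urls$expanded_url, resolve, character(1))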
Essentially, I am starting with a list of full URLs and want, for each URL, a dataset of all tweets that share it.
I have tried using the get_all_tweets() function from the academictwitteR package in R, but for most URLs it returns an empty dataset, even though each URL was extracted from at least one tweet, so a minimum of one match must exist.
urlTweets <- vector("list", nrow(urls))  # pre-allocate the results list
for (i in seq_len(nrow(urls))) {
  urlTweets[[i]] <- get_all_tweets(
    query = NULL,
    start_tweets = "2013-12-02T00:00:00Z",
    end_tweets = "2013-12-22T00:00:00Z",
    bearer_token = bearer_token,
    n = 10000,
    data_path = NULL,
    bind_tweets = TRUE,
    url = urls$fullURL[i]  # passed through to the query builder
  )
}
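In case the url argument is not building the operator I expect, the equivalent hand-built query would look roughly like this (the quoted url: operator is taken from the v2 full-archive search docs; how academictwitteR's query builder quotes the URL is an assumption I have not verified):

# Sketch: search for one URL by constructing the url: operator directly.
q <- sprintf('url:"%s"', urls$fullURL[1])
oneUrlTweets <- get_all_tweets(
  query = q,
  start_tweets = "2013-12-02T00:00:00Z",
  end_tweets = "2013-12-22T00:00:00Z",
  bearer_token = bearer_token,
  n = 10000
)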
Searching the API directly, outside of academictwitteR, also returned empty results for most links.
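(By "directly" I mean a raw request to the v2 full-archive search endpoint, roughly as sketched below; the endpoint and parameter names are from the API docs, with the same quoted url: operator assumption as above.)

library(httr)

# Minimal sketch of a direct full-archive search request for one URL.
resp <- GET(
  "https://api.twitter.com/2/tweets/search/all",
  add_headers(Authorization = paste("Bearer", bearer_token)),
  query = list(
    query = sprintf('url:"%s"', urls$fullURL[1]),
    start_time = "2013-12-02T00:00:00Z",
    end_time = "2013-12-22T00:00:00Z",
    max_results = 100
  )
)
content(resp)$meta$result_count  # comes back 0 for most URLs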
Is there a better way to search for all tweets that contain a given URL? Any ideas on why my searches return nothing for most URLs?