I have a text file called mylinks.txt with thousands of hyperlinks in the format "URL = http://examplelink.com".
What I want to do is go through all of these links and check whether the page each one points to contains certain keywords, like "2018" or "2017". If it does, I want to save the link in the file "yes.txt"; if it doesn't, it goes into the file "no.txt".
So at the end, I would have two files: one with the links that send me to pages containing the keywords I'm searching for, and another with the links that don't.
I was thinking about doing this with curl, but I don't even know if it's possible, and I don't know how to "filter" the links by keywords either.
What I have so far is:
curl -K mylinks.txt >> output.txt
But this only creates one huge file with the HTML of every page it fetches. I've searched and read through various curl tutorials and haven't found anything that "selectively" searches pages and saves the links (not the content) of the pages matching the criteria.
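Roughly, what I imagine is something like the untested sketch below: strip the "URL = " prefix from each line of mylinks.txt, fetch the address with curl, grep the downloaded page for the keywords, and append the address to yes.txt or no.txt. The keyword list and the exact prefix handling are just my assumptions; I have no idea if this is the right way to go about it.

sed 's/^URL *= *//' mylinks.txt | while read -r url; do
    # -s keeps curl quiet, -L follows redirects; grep -q just tests for a match
    if curl -sL "$url" | grep -qE '2018|2017'; then
        echo "$url" >> yes.txt    # page mentions one of the keywords
    else
        echo "$url" >> no.txt     # page doesn't mention any of them
    fi
done

Is something along these lines workable, or is there a better way to do this with curl?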