Is it possible to use wget's mirror mode to collect all the links from an entire website and save them to a txt file?
If so, how is it done? If not, are there other ways to do this?
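What I had in mind with the mirror approach is roughly the sketch below. It is only a guess, not something I have working: links.txt is an arbitrary output name, and the grep/sed step is just my attempt at pulling the href values out of the saved pages.

# download the site, then extract href values from the saved HTML (pattern is my guess)
wget --mirror --no-parent example.com
grep -rhoE 'href="[^"]+"' --include='*.html' example.com/ | sed 's/href="//; s/"$//' | sort -u > links.txt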
EDIT:
I tried to run this:
wget -r --spider example.com
And got this result:
Spider mode enabled. Check if remote file exists.
--2015-10-03 21:11:54-- http://example.com/
Resolving example.com... 93.184.216.34, 2606:2800:220:1:248:1893:25c8:1946
Connecting to example.com|93.184.216.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1270 (1.2K) [text/html]
Remote file exists and could contain links to other resources -- retrieving.
--2015-10-03 21:11:54-- http://example.com/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 1270 (1.2K) [text/html]
Saving to: 'example.com/index.html'
100%[=====================================================================================================>] 1,270 --.-K/s in 0s
2015-10-03 21:11:54 (93.2 MB/s) - 'example.com/index.html' saved [1270/1270]
Removing example.com/index.html.
Found no broken links.
FINISHED --2015-10-03 21:11:54--
Total wall clock time: 0.3s
Downloaded: 1 files, 1.2K in 0s (93.2 MB/s)
(Yes, I also tried this with other websites that have more internal links.)
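To be explicit about the result I am after, it would be something like the sketch below: re-run the spider and parse its log for the lines that start with "--<timestamp>--  <URL>", like the ones in the output above. The grep/awk step is only my guess at turning those lines into a plain list of URLs, and links.txt is just an arbitrary file name.

# spider the site; wget writes its log to stderr, so redirect it and keep only the URL lines
wget -r --spider example.com 2>&1 | grep '^--' | awk '{ print $3 }' | sort -u > links.txt

The 2>&1 is there because wget prints its progress log to stderr rather than stdout, so without it the pipe would see nothing.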