
I know how to use wget to download from FTP, but I couldn't use wget to download from the following link:
http://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE46130&format=file

If you copy and paste it into a browser, the download starts. But I want to download it directly to our server, so I don't have to move it from my desktop to the server. How do I do that?

Thanks!

olala

3 Answers


This is what I did:

wget -O file.tar "http://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE46130&format=file"
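The quotes around the URL matter: an unquoted `&` tells the shell to run everything before it as a background job, so wget never receives the `format=file` part of the query string. A minimal sketch of why quoting keeps the URL intact (the `count_args` helper and the example.com URL are just for illustration):

```shell
# Throwaway helper that reports how many arguments it receives.
count_args() { echo "$# argument(s)"; }

# Quoted: the whole URL, including the query string after '&',
# reaches the command as a single argument.
count_args "http://example.com/download/?acc=GSE46130&format=file"
# → 1 argument(s)
```

Unquoted, the shell would instead background `count_args` with a URL truncated at the `&` and try to interpret `format=file` as a separate command.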
FedeCz

Use the -O option with wget to specify where to save the downloaded file. Quote the URL so the shell doesn't treat the & as a background operator. For example:

wget -O /path/to/file "http://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE46130&format=file"
mti2935
# -r  : recursive download
# -nH : disable generation of host-prefixed directories
# -nd : save all files to the current directory
# -np : never ascend to the parent directory when retrieving recursively
# -R  : don't download files matching these patterns (quoted, so the
#       shell doesn't expand the globs before wget sees them)
wget -r -nH -nd -np -R "index.html*,999999-99999-1990.gz*" http://www1.ncdc.noaa.gov/pub/data/noaa/1990/
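Quoting the -R patterns matters: if a file matching one of the globs already exists in the working directory, the shell expands an unquoted pattern before wget ever sees it. A small demonstration using echo as a stand-in (the scratch directory is just for illustration):

```shell
# Work in a scratch directory so the glob has something to match.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch index.html

# Unquoted: the shell expands index.html* against the local directory,
# so the command receives the filename instead of the pattern.
echo -R index.html*
# → -R index.html

# Quoted: the pattern reaches the command untouched.
echo -R "index.html*"
# → -R index.html*
```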
Cristian
  • This code could use a little introduction to make it an answer. Like "The `-nd` flag will let you save the file without a prompt for the filename. Here's a script that will even handle multiple files and directories." With no intro I was wondering "Is this really an answer? The URL doesn't match and there's no problem with .gz* files in the question". But it is a good answer IMO! – Noumenon Jun 08 '16 at 06:00