We can transfer files between two PCs over the network without resorting to USB sticks or other “archaic” methods.
First, cd into the directory we want to share, then start a simple Python HTTP server:
Python2: python -m SimpleHTTPServer <port_number>
Python3: python3 -m http.server <port_number>
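A minimal sketch of the serving side; the directory path and port number below are made up for illustration:

```shell
# Create a directory to share (hypothetical path) and serve it on port 8000.
mkdir -p /tmp/demo_share
echo "hello from the server" > /tmp/demo_share/note.txt
cd /tmp/demo_share
python3 -m http.server 8000 &    # serve the current directory in the background
SERVER_PID=$!
sleep 1                          # give the server a moment to start
FETCHED=$(curl -s http://127.0.0.1:8000/note.txt)
echo "$FETCHED"
kill "$SERVER_PID"
```

Any machine that can reach the server can now browse the shared directory at http://<server_ip>:8000/.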
On the other machine we then use wget to recursively retrieve all the files from the remote directory.
wget -r -np -R "<condition>" <remote_url>
-r
Recursively retrieve all files and directories from the remote machine.
-np
No Parent flag. Do not ascend to the parent directory when retrieving recursively.
-R
Reject all files that match the given pattern.
Example: wget -r -np -R "index.html*" http://10.10.10.120:8080/
Why reject index.html?
When wget retrieves recursively, it automatically downloads an index.html file (the directory listing) for each directory it visits on the remote machine. With this flag we keep our download tidy and clean, without any unnecessary files.
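Putting the pieces together, here is a self-contained round trip on a single machine; the directory names and port are hypothetical:

```shell
# Serve a small directory tree on port 8090...
mkdir -p /tmp/share/src
echo "data" > /tmp/share/src/file.txt
cd /tmp/share
python3 -m http.server 8090 &
SERVER_PID=$!
sleep 1                          # give the server a moment to start

# ...and mirror it from "the other machine" (here: localhost).
cd /tmp
wget -q -r -np -R "index.html*" http://127.0.0.1:8090/
kill "$SERVER_PID"

# wget mirrors into a directory named after host:port, so the file
# lands at /tmp/127.0.0.1:8090/src/file.txt, while the rejected
# index.html listings are deleted after the crawl finishes.
```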
Top comments (1)
How would you download the website if it requires authentication with a username, password and an authenticity token? I tried the script below but I get stuck on the sign-in page:
#!/usr/bin/env bash
username=username
password=password
# Scrape the hidden "lt" authenticity token from the sign-in page
code=$(wget -qO- "https://urlname/sign_in?service=https://urlname.io" | grep 'name="lt"' | cut -d"_" -f2)
hidden_code=_$code
wget --save-cookies cookies.txt \
    --keep-session-cookies \
    --post-data "username=$username&password=$password&lt=$hidden_code&_eventId=submit" \
    --auth-no-challenge \
    --delete-after \
    urlname/sign_in?service=https://ur...
wget --load-cookies cookies.txt \
    urlname.io