
Daniel Florido

Keeping the tedious fun with Curl.

I like to keep the day job fun and productive by semi-automating the everyday tedium of web development.
The main reason I use curl is to fetch a page and check whether it contains a desired string.
It's nice to be able to stay on the command line for this, as it's quicker than the manual process of

  • open chrome,
  • right click,
  • view source code,
  • command f to find the string
  • bang head against monitor

Running the curl function below lets me open the returned HTML in my favourite code editor, Vim.

This way I can easily search the returned source code by simply typing '/string_name'.

Function to add to your .bash_profile:

# Get and open webpage HTML.
# Provide the URL as a parameter.
# Keep it simple - it doesn't work if you make it too complicated.
function get_html() {
    cd ~/Downloads || return 1
    rm -f curled_html.html
    curl "$1" -H "Cache-Control: no-cache, no-store, must-revalidate" -H "Pragma: no-cache" -H "Expires: 0" > curled_html.html
    vim curled_html.html
}

Now run

get_html https://example.com
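If you'd rather not clobber a fixed file in ~/Downloads, a variant (my own sketch, not part of the original function) can write each download to a unique temp file via mktemp:

```shell
# Variant of get_html that writes to a unique temp file instead of a
# hard-coded path, so repeated runs never overwrite each other.
get_html_tmp() {
    local out
    out=$(mktemp "${TMPDIR:-/tmp}/curled_html.XXXXXX") || return 1
    curl -s "$1" \
        -H "Cache-Control: no-cache, no-store, must-revalidate" \
        -H "Pragma: no-cache" -H "Expires: 0" > "$out"
    vim "$out"
}
```

The trade-off is that the files accumulate in your temp directory instead of being cleaned up on the next run, but nothing pre-existing ever gets deleted.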

Top comments (5)

Peter Benjamin (they/them) • Edited

This feature is already built into vim (edit: via netrw plugin; if you start vim without built-in plugins via vim --clean or vim -u NONE then this will not work):

$ vim http://example.com/
:edit https://example.com/
  • Vim will download the contents of the URL into a scratch buffer
  • If you want to save it locally, you can :write /path/on/disk
  • From there, you can use all of Vim's searching capabilities, like :grep, /, ?, etc.
  • Or use external programs on the scratch buffer, like :%! jq . (useful if you're interacting with a JSON API).

Demo

$ vim https://jsonplaceholder.typicode.com/comments/

# in vim...

# filter JSON content by email
:%! jq -rc '.[].email'

# sort & deduplicate emails
:%! sort -ui

# ... call any other external programs or scripts for data processing

Daniel Florido

That is really cool. I use Alfred instead of Spotlight on my Mac too, so now the process has become even faster:
>vim https://example.com

I do get a warning about wget, so I just need to hit enter before the HTML loads in Vim. But all good - every day things get a little bit better.

Gil Fewster

This is a really great way to do it. Fantastic tip.

Gil Fewster

Nice tip, but I'd be wary of scripts that automatically download arbitrary data to specific, hard-coded paths and filenames... and even more wary of scripts that delete files in the same manner.

A safer and more flexible approach might be to just wrap the curl command; then you can pipe the output into whatever is most convenient for your needs -- vim, grep/egrep, writing to disk, etc.

function get_html() {
    curl "$1" \
        -H "Cache-Control: no-cache, no-store, must-revalidate" \
        -H "Pragma: no-cache" -H "Expires: 0" \
        2> /dev/null
}
# open in vim without writing to disk
# (the trailing "-" tells vim to read from stdin)
get_html https://www.google.com | vim -

# pipe to egrep to find all anchor tags with a class attribute
# and print the matching tags with their line numbers
get_html https://www.apple.com | egrep "<a[^>]* class=" -n -i

# as above, but just return the number of lines
# that the matching pattern was found in
get_html https://www.apple.com | egrep "<a[^>]* class=" -i | wc -l
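A small aside (my own note, not from the comment above): grep can count matching lines itself with -c, so the wc -l stage is optional. A quick offline check, with sample HTML standing in for the curl output:

```shell
# grep -c counts matching lines directly, replacing "| wc -l".
# Sample HTML stands in for the curl output here.
printf '%s\n' \
    '<a href="/x" class="nav">Home</a>' \
    '<a href="/y">Plain</a>' \
    '<a href="/z" CLASS="btn">Go</a>' |
    grep -Eic "<a[^>]* class="
# prints 2
```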
Daniel Florido

This looks fun. Love your work, and your safety-first approach.