I want to read a file of URLs, curl each URL, and keep only the first line of the response, which contains the HTTP status code. I am running under Windows 10 inside Cmder.
#!/bin/bash
input="urls.csv"
truncate -s 0 dest.csv
while IFS= read -r var
do
result= `curl -I ${var%$'\r'} | grep HTTP $result`
echo "$var $result" >> dest.csv
done < "$input"
However, the output file is empty. Thank you!
Assuming urls.csv is just a simple list of URLs and you're working on a Linux system (or any system that has /dev/null), the following command will send a HEAD request to each URL and print it next to its HTTP response code.
sed 's/^/url = /; s/\r\?$/\n-o \/dev\/null/' urls.csv |
curl -s -K- -w '%{http_code} %{url_effective}\n' -I >outfile
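To make the pipeline easier to follow: for a hypothetical input line such as https://example.com, the sed command (which also strips any trailing carriage return) produces a small curl config block like the one below, which curl then reads from standard input via -K-:
url = https://example.com
-o /dev/null
The -o /dev/null part discards the headers themselves, so only the -w output (status code and effective URL) ends up in outfile.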
See the curl man page for further information.
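Alternatively, if you want to keep your original loop, a minimal sketch of a fix is shown below (assuming urls.csv has one URL per line): remove the space after result=, drop the stray $result inside the backticks, and add -s so curl's progress meter doesn't clutter the output.
#!/bin/bash
input="urls.csv"
: > dest.csv                                # empty the output file
while IFS= read -r var
do
    url=${var%$'\r'}                        # strip the Windows carriage return
    result=$(curl -sI "$url" | head -n 1)   # first response line, e.g. "HTTP/1.1 200 OK"
    echo "$var $result" >> dest.csv
done < "$input"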