Just use cURL
85 points by runxiyu
I see the satire tag, but this one triggers my PTSD:
So does cURL in a shell script with || and &&
I can't even count the number of times I've had to draw attention to a missing -f (or --fail if you're kind to your future self) in just about every invocation of curl I've ever seen in a script. The alternative I've seen is -w %{http_code} (aka --write-out), but while that allows the caller to be more prescriptive about what, exactly, constitutes a failure, I find it's usually more hassle than -f, and unquestionably so in most automation scripts.
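To make the difference concrete, a minimal sketch of both approaches (the URL is a placeholder):

# with --fail, curl exits non-zero on HTTP errors (>= 400), so || and set -e can catch it
curl --fail --silent --show-error https://example.com/api/health || echo "request failed" >&2

# with --write-out, you get the status code back and decide for yourself what counts as failure
code=$(curl --silent --output /dev/null --write-out '%{http_code}' https://example.com/api/health)
if [ "$code" -ge 400 ]; then
  echo "request failed with HTTP $code" >&2
fi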
The main thing that makes me rewrite shell scripts in a proper programming language is error handling. (I usually feel pain from error handling well before a script is complicated enough to do anything interesting with data: it kicks in long before a script gets to 100 lines.) Robust error handling with curl in shell scripts is very hard, so I don’t do anything more complicated than set -e; curl -f.
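Concretely, the pattern I mean looks something like this (slightly stricter than plain set -e; the URL is made up):

#!/usr/bin/env bash
set -euo pipefail   # exit on errors, on unset variables, and on failures inside pipelines

# --fail turns HTTP errors (>= 400) into a non-zero exit status, which set -e then catches
curl --fail --silent --show-error --output response.json https://example.com/api/data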
Second this. Lately I've taken to writing scripts in Go, experimenting with LLMs for this task. It seems ridiculous since it isn't a scripting language, but these models will write a Go script faster than I can write a Bash script. Quality isn't a great concern for these kinds of scripts: if it were, people wouldn't be writing them in shell in the first place.
If my script grows to the point where I want a real language with error handling, I'm already there. Plus I get the same kind of portability.
I have no idea what you guys are talking about. Is exit on error not a good paradigm for most scripts?
There's cleanup on error, for one. There's also other cases where you're performing multiple commands and want to keep going if one of them fails.
So, I'm not trying to convince you to write more things in bash, but both of those things are solvable in bash.
You can trap the exit of a script and call a clean up function:
cleanup() {
  # cleaning up things goes here
  :   # placeholder command; a bash function body can't consist of comments alone
}
trap cleanup EXIT
That trap won't fire on a kill -9 it seems, but it fires reliably if your script breaks for some reason.
You can also trap an error and log it nicely (hat tip to Chris Siebenmann for that one)
trap 'echo "Exit status $? at line $LINENO from: $BASH_COMMAND"' ERR
These two traps have made my bash scripts so much simpler and safer.
The other thing that has made them more robust is using arrays to store the parameters of commands I'm executing within a script, rather than building up strings. A string works most of the time, but an array is just easier to use and more robust.
declare -a ARGS
ARGS=(-o)
ARGS+=(-f foo -b bar)
someprogram "${ARGS[@]}"   # quote the expansion so each element stays its own argument
As for your second point, if you have one line where you don't want to exit on error, you can just turn that off:
set +e
command_that_might_fail_and_that_is_fine
set -e
at-exit cleanup:
declare -a TRAP_EXIT=()
trap 'for A in "${TRAP_EXIT[@]}"; do eval "$A"; done' EXIT
# queue a command to run at exit; ${*@Q} shell-quotes each argument so eval can replay it safely
at-exit () { TRAP_EXIT+=( "${*@Q}" ); }
some_dir=$(mktemp -d)
at-exit rm -r "$some_dir"
Single line suppression:
bad_command || :
Nice! Building up your cleanup function as the script progresses isn't something I'd thought of before, that's a neat feature.
Yeah exit-on-error is often ok, which is why I find it’s tolerable to write trivial scripts in the shell to avoid the overhead of a real language. @dlisboa wrote a good summary of the kind of things that make a script sufficiently nontrivial to make me want to rewrite it. Robust error handling is especially important for scripts that run unattended.
Depends on the script, the error, and what you want to display to the user.
At work I maintain a script that helps provision the local dev environment for various services. It has to do a bunch of checks up front to see if certain standard languages/tools are available (correct Python version, Docker installed and the Docker daemon running, etc.). Bailing on the first one to fail wouldn't be a great user experience, since there might be others missing, so the script is written in Python and collects the failed checks to display them all and let you fix everything before trying to run it again.
I could do that in bash, I suppose, but why bother when it's so easy to do in Python?
That's the kind of thing that is arguably easier with a shell script. Spawning processes that run other programs is exactly what the shell is for, and it does it with almost no syntax.
--fail-with-body is a newer flag that can help with this, but the body still goes to stdout, which is probably getting captured somewhere, so you'll need some extra handling.
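A rough sketch of that extra handling, capturing the body into a variable (the URL is a placeholder):

# --fail-with-body exits non-zero on HTTP errors but still prints the error body to stdout
if ! body=$(curl --fail-with-body --silent --show-error https://example.com/api/item); then
  echo "request failed, server said: $body" >&2
  exit 1
fi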
-f (or --fail if you're kind to your future self)
This! Use long options in shell scripts, so that you (or your future self) see more clearly what they all do. Easier to understand, easier to debug.
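For example, the same invocation spelled both ways (placeholder URL):

# terse, fine when typed interactively
curl -fsSL -o out.json https://example.com/api/data

# identical behavior, much easier to read back in a script
curl --fail --silent --show-error --location --output out.json https://example.com/api/data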
100% - I've been very very slowly converting my old scripts to use the long options and sometimes even explaining stuff in comments so I don't need to run the command with -h / --help or even "man command" lol
I had a run-in with something doing that. It was checking the response code and verifying it was 200, except it was downloading a large binary and the connection was being reset halfway through the download. Since it was only looking at the HTTP response code and not the exit code of curl, it was happily installing the truncated binary. No, it was not verifying any hashes either.
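A sketch of a download that catches both failure modes (the URL and file names are made up):

set -euo pipefail
# --fail catches HTTP errors; a reset connection makes curl itself exit non-zero, which set -e catches
curl --fail --silent --show-error --output app.bin https://example.com/releases/app.bin
# and verify the artifact before installing it (the checksum file is hypothetical)
sha256sum --check app.bin.sha256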
curl --fail --fail-early --silent --show-error, along with needing to re-apply --fail if ever using -:/--next, I think. I'm probably missing some things too.
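Roughly what that looks like when fetching two URLs in one invocation (URLs are placeholders):

# per-URL options reset after --next, so --fail and friends get repeated for the second URL
curl --fail-early \
  --fail --silent --show-error --output a.json https://example.com/a \
  --next \
  --fail --silent --show-error --output b.json https://example.com/b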
For general polite scripting outside of testing an API, it's also desirable to have support for caching, which means abstracting over --etag-save/--etag-compare and --time-cond/--remote-time, which means practically writing a full wrapper for the curl tool. Even once doing this, you don't get any form of connection reuse unless you request everything all in one go (along with using -: to be able to re-do each etag/time flag for each URL), which is relatively unlikely to be practical.
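For a single URL, the ETag dance looks roughly like this (paths and URL are placeholders):

etag_file=cache/feed.etag
mkdir -p cache && touch "$etag_file"
# only transfer the body if the server's ETag differs from the one saved last time
curl --fail --silent --show-error \
  --etag-compare "$etag_file" --etag-save "$etag_file" \
  --remote-time --output cache/feed.json \
  https://example.com/feed.json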
A curl daemon/client combo would be so nice for shell scripting
Using -X POST is often wrong, as it "changes the actual method string in the HTTP request [... and] does not change behavior accordingly" (Stenberg, 2015).
Although it is correct for the article's mention of "Send POST requests"... it's just that typically people don't send POST requests out of the blue with no data.
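For the usual case, letting curl infer the method from the data flags sidesteps the problem (URL and payload are made up):

# --data already implies POST; just override the Content-Type instead of forcing the method with -X
curl --fail --silent --show-error \
  --header 'Content-Type: application/json' \
  --data '{"name":"demo"}' \
  https://example.com/api/items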
I found this unreasonably funny. Especially:
Total feelings of superiority: immeasurable.
Man, the tone on this puts me off so much. It reminds me a lot of a former coworker, and how he'd approach technical discussion.
Yes, cURL can do more than you probably think, yes, it's pretty much everywhere, yes, you should take the time to learn it.
Yet there's a reason people reach for tools like Postman, Insomnia, etc. Is it overused? Probably. But this article does little to actually encourage anyone to engage with it. IMO, the only person who would actually get anything out of this article is someone who already agrees with its premise, and can get past the bitter tone.
I agree with most of this, except for the tone. I can't really share this with colleagues who wonder why I use curl, even if it ostensibly has all the reasons why.
The biggest thing with Postman is that it's just totally inaccessible by keyboard navigation. Elements that are navigable don't necessarily highlight, and many other elements aren't accessible at all, as noted in e.g. this open issue nearing three years old. I don't think it's unreasonable to expect any dev tool to be completely accessible through the keyboard.
Even worse, Postman currently has an open issue nearing five years old for its inaccessibility to blind developers. The UI and UX of using curl isn't great if you're not already someone who thrives in a terminal, but the UI and UX of Postman is not any better IMO.
Another big caveat to this is "except on Windows". "cURL is already installed on your machine", except on Windows, where curl in PowerShell is an alias for Invoke-WebRequest and you'd need to explicitly install curl.exe. "Want to share it with your team? It's text," except on Windows, where you need to do some finicky character escaping in JSON request bodies for "special" characters like ". This may have changed in the past few years.
Real curl.exe is included on Windows now. "curl" is still the broken alias in PowerShell, though, so you have to specify curl.exe.
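In practice that just means naming the real executable (placeholder URL):

# bypass the PowerShell alias by calling the binary explicitly
curl.exe --fail --silent --show-error https://example.com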
Use curlie. It's curl with a more ergonomic interface.
You could also use Hurl, an open source CLI built on libcurl, to write and run API/HTTP requests in a plain text file. Compared to curl, you can easily chain requests, pass data from one request to another, and test responses (JSONPath, XPath, SSL certificates, IP addresses, cookies, redirects; see https://hurl.dev/docs/samples.html, etc.), and there's some sugar syntax for GraphQL bodies, multipart, etc. Under the hood it's still curl, and commands can be exported from Hurl to curl with --curl.
(I'm one of the maintainers)
Or just use HTTPie, if you're not specifically wedded to curl options and just want the nicer interface.
HTTPie is one of the insta-install programs I get when I end up with a new system. When it's not there it feels like my machine is broken.
If giving up authentication flows, logging, and collection variables parameterized by environment in a handy UI to be a POSIX purist earns me the company of someone who spends $7.48 a year on a soapbox from which to call people “dipshit,” I think I’ll pass.
One of the reasons why I just use curl is that I find those things easier to achieve with a shell script than with a finicky, intricate UI. I take it it's not everyone's cup of tea, but I would switch to a GUI in the blink of an eye if I were more productive with it.
I suspect the author is ranting because they hate inheriting Postman collections. If I’m right, this is as much about collaboration as it is about personal productivity.
In addition to some of the shortcomings I and others have mentioned here, shell scripts often make implicit assumptions about the environment they’re running in. The less vanilla the tools and flags in the script are, the less likely it will run in someone else’s environment. If you ever need to collaborate with a Windows user, the chances evaporate. This is mostly fine if the script is only ever expected to run in a stable environment like a VM image or Docker container. But as a means of testing or documenting an API, it’s not very portable. When elaborate scripts with lots of these implicit assumptions are committed to a repo, it’s a bit of a middle finger, much like the article.
I get that Postman is annoying. For documenting APIs and rudimentary testing, an OpenAPI schema and a Swagger UI is an easier, better way to expose collections of requests in most cases. For others, Bruno is a decent alternative. Scripts can be fine too, but they need to be written in a language with better error handling and data structures than whatever version of Bash shipped with the stock company laptop.
The scripts you came across were probably poorly written, full of unnecessary bashisms, or needlessly relying on non-standard tools. I do encounter these problems in scripts I come across.
But truth be told, writing HTTP test scripts with curl and jq is the quickest and easiest way I know to automate testing of an HTTP service.
I do agree it is limiting for people using Windows, although these days Microsoft provides WSL, which pretty much bridges that gap.
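A sketch of what I mean (the endpoint and field names are made up):

# fail on transport/HTTP errors, then assert on the JSON body with jq
status=$(curl --fail --silent --show-error https://example.com/api/health | jq -r '.status')
if [ "$status" != "ok" ]; then
  echo "health check failed: got status '$status'" >&2
  exit 1
fi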
I'd like to second this. I only use the terminal insofar as I think it's simple, easy, and fast.
If I have a set of environment variables to enable, I can just put them in a my_cool_variables.sh and run source my_cool_variables.sh. That will work for every shell tool, in every context, everywhere, probably for the rest of my life.
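i.e. something along these lines (the variable names are just examples):

# my_cool_variables.sh -- source it, don't execute it
export API_BASE_URL="https://staging.example.com"
export API_TOKEN="replace-me"

# then, in any shell session or script:
source my_cool_variables.sh
curl --fail --silent --show-error --header "Authorization: Bearer $API_TOKEN" "$API_BASE_URL/health"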
Speaking for myself, the last two times I needed to organize collections of parameterized requests, the parameter values varied across three to six environments. It was clearer for the people I needed to work with to manage these variables in a tabular UI than in a shell script.
No love for wget?
The only time I voluntarily use wget is in /bin/busybox setups where /bin/wget is a symlink and there's no curl installed. However, that perspective is based on the fact that I actually download things very infrequently compared to how often I reach for the literal HTTP Swiss Army knife that is curl. If the TLS fingerprinting project ever got folded into mainline curl, I shudder to think of how much I could get done from my terminal.
I prefer my yaak.app collection to dealing with all these complex and inconsistent cURL behaviors (--data-binary, @file handling that escapes the content, no native JSON support like xh has).
At Meilisearch we have to debug customer issues every day, and some customers have multiple instances; it's so much easier to click buttons and select machines or specific requests from drop-down lists than to find the right one in my limited and unorganized shell history.
We didn't invent GPUs only for playing Doom or the latest AAA games. They were made for rendering in general, and we should use them to display interfaces. I don't code in my terminal; I love it, but sometimes a good and cosy human interface is better than a terminal.
EDIT: Maybe this article is satire in the end...
Just in case people didn't know, Windows ships with curl as well. Because of a bad PowerShell alias, you might have to call it as curl.exe.
Another, unrelated, fun fact: Windows also comes with tar.exe (bsdtar), ssh.exe, and the other OpenSSH client tools.