The Linux curl command is one of the most useful web-related terminal tools available. It transfers data between your computer and a server using protocols such as HTTP, HTTPS, and FTP. Curl has versatile options; it can be used to download files, interact with APIs, or fetch web pages. In this guide, you will learn how to make effective use of curl commands with examples.

Curl stands for “Client URL,” a command-line tool for transferring data using URLs. It can be used for many tasks, such as downloading web pages, uploading data, or interacting with REST APIs.
First, verify if curl is installed by running:
curl --version
If it’s not installed, you can install it using the following commands:
Debian/Ubuntu:
sudo apt update
sudo apt install curl
Red Hat/CentOS/Alma Linux/Rocky Linux:
sudo yum install curl
To fetch the content of a webpage:
curl https://www.example.com
The command will display the HTML content of the page in the terminal. You can save this content to a file using the -o or -O options.
Below is a table showing various common curl commands, their descriptions, use cases, and examples.
| Command | Use Case | Example |
|---|---|---|
| curl https://example.com | Fetch the contents of a webpage | curl https://www.example.com prints the page’s HTML content directly in the terminal. |
| curl -o file.html URL | Download a webpage and save with custom name | curl -o myfile.html https://www.example.com saves the webpage’s content to myfile.html. |
| curl -O URL | Download a file and save it with the original name | curl -O https://www.example.com/file.zip will download file.zip and save it with its original name. |
| curl -I URL | Fetch only HTTP headers | curl -I https://www.example.com shows only the HTTP headers like content type and server. |
| curl -d "data" -X POST URL | Send form data via POST | curl -d "name=John&age=30" -X POST https://example.com/form sends name=John and age=30 as form data using the POST method. |
| curl -u user:pass URL | Access a site with authentication | curl -u user:password https://example.com authenticates using the provided credentials to access the site. |
| curl -L URL | Follow redirects | curl -L https://www.example.com will follow any redirects to another URL. |
| curl -T file.txt ftp://URL | Upload a file to an FTP server | curl -T file.txt ftp://ftp.example.com --user user:pass uploads file.txt to the specified FTP server. |
| curl --limit-rate 100K URL | Limit download speed | curl --limit-rate 100K -O https://example.com/file.zip limits the download speed to 100 KB per second. |
| curl -C - -O URL | Resume interrupted download | curl -C - -O https://www.example.com/file.zip resumes a download that was previously interrupted. |
| curl -H "Header: value" URL | Set custom headers | curl -H "Content-Type: application/json" https://example.com sets a custom header for sending a JSON request. |
Let’s go over some examples to further illustrate how to use the Linux curl commands listed in the table.
Example 1: Download a webpage and save it as myfile.html:
curl -o myfile.html https://www.example.com

Example 2: Download a file with its original name:
curl -O https://www.example.com/file.zip

This downloads file.zip and saves it with its original name in the current directory.
To fetch only the HTTP headers of a webpage (like status codes, content type, etc.), use:
curl -I https://www.example.com

This command will return the headers instead of all the HTML content.
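If you only need the numeric status code (for example, in a monitoring script), curl’s -w (write-out) option can print it while -o /dev/null discards the body. The URL below is just a placeholder:
curl -s -o /dev/null -w "%{http_code}\n" https://www.example.com

This prints a bare status code such as 200 or 404, which is easier to test against in a script than parsing the full headers.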
Example 3: Sending form data using POST:
curl -d "name=John&age=30" -X POST https://www.example.com/form

Example 4: Sending JSON data to an API:
curl -H "Content-Type: application/json" -d '{"name": "John", "age":30}' -X POST https://www.example.com/api

If your download was interrupted, you can resume it using the -C option:
curl -C - -O https://www.example.com/file.zip

The command resumes downloading file.zip from the point it was interrupted.
You have the option to transfer files to an FTP server. For example, to upload myfile.txt:
curl -T myfile.txt ftp://ftp.example.com --user username:password

The command uploads myfile.txt to the specified FTP server with the given username and password.
To route your request through a proxy, use the -x option. For example:
curl -x http://proxy.example.com:8080 https://www.example.com

The command instructs Curl to route the request through the specified proxy server.
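If the proxy itself requires credentials, you can supply them with the -U (or --proxy-user) option. The proxy address and credentials below are placeholders:
curl -x http://proxy.example.com:8080 -U proxyuser:proxypass https://www.example.com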
To limit the download speed to 100 KB per second, use:
curl --limit-rate 100K -O https://www.example.com/file.zip

Such an approach is practical when you need to conserve bandwidth.
By default, the Linux curl command does not follow HTTP redirects. You can use the -L option to make it follow them:
curl -L https://www.example.com

This ensures curl follows any redirects that the server might issue.
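To see where those redirects actually lead, you can combine -L with -I, which prints the headers of every response in the redirect chain (the URL is again a placeholder):
curl -s -I -L https://www.example.com

Each Location header shows the next URL curl was sent to, ending with the headers of the final destination.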
One of the most common use cases for curl is running it as part of an automated process via cron jobs. For example, you might need to regularly trigger a script on your server by sending a request to a specific URL at scheduled intervals.
The --silent (or -s) option is useful when you want curl to run quietly without printing progress information to the terminal. It’s especially beneficial when using curl in a cron job because it prevents unnecessary output from filling up your system’s cron logs.
Consider you have a script (cron.php) on your server that needs to be triggered every day at midnight. Here’s how you can set up a cron job that uses curl to request that script silently:
crontab -e
0 0 * * * curl --silent http://www.example.com/cron.php > /dev/null
If curl wasn’t run in silent mode, it could print progress information or errors to the system logs, cluttering them and making it difficult to identify important messages. Using --silent along with > /dev/null ensures the cron job runs cleanly without unnecessary output.
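If you still want genuine failures to show up in cron’s mail or logs, one common pattern is to pair --silent with --show-error and --fail, so curl stays quiet on success but reports network and HTTP errors on stderr. The URL is the same placeholder as above:
0 0 * * * curl --silent --show-error --fail http://www.example.com/cron.php > /dev/null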
Curl is a powerful tool that simplifies many tasks: downloading files, fetching web pages, making HTTP requests, uploading data to a server, and more. It is flexible enough to serve system administrators, developers, and casual Linux users alike. With the commands covered here, you can already handle many web-related tasks directly from your terminal, which streamlines your workflow.
Curl, like any tool, has its rough edges. The vast number of options and the syntax can be overwhelming at first. Some flags can be misused with unwanted results (forgetting -L, for example, means redirects are silently not followed), and every API is different, with headers or payloads that take time to get right. Downloading large files without --limit-rate can also consume your bandwidth and starve the rest of your system of network resources.
Error handling brings its own difficulties. For example, with failed API responses, curl doesn’t always make it obvious that an error occurred. To debug more effectively, you can add flags such as -f (fail on HTTP errors with a non-zero exit code instead of printing the error page) or -v (print verbose output).
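A minimal sketch of that kind of error handling in a shell script, assuming the same placeholder URL, might look like this:
# --fail makes curl exit non-zero on HTTP errors; -S keeps error messages visible despite --silent
if ! curl --fail --silent --show-error -o response.json https://www.example.com/api; then
    echo "curl request failed; re-run with -v for verbose details" >&2
fi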
These hurdles are minor and quickly overcome. Curl is a handy tool for getting familiar with web APIs, and it significantly increases your productivity once you start using it in your day-to-day work.
