curl Command

curl is a command-line tool used to transfer data to or from network servers using any supported protocol, such as HTTP, FTP, IMAP, POP3, SCP, SFTP, SMTP, TFTP, TELNET, LDAP, or FILE. It is designed to work without user interaction, making it well-suited for use in shell scripts. It provides features such as proxy support, user authentication, FTP uploads, HTTP posting, SSL connections, cookies, transfer resume, Metalink support, and more.

Syntax

curl [options] [URL...]
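
For example, assuming https://example.com is reachable (the URL and output file name here are placeholders, not part of the syntax), the following fetches a page and writes it to a local file with -o:

curl -o page.html https://example.com/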

Parameters

  • -#, --progress-bar: Display the progress as a simple bar instead of the standard, more informative meter.
  • -:, --next: Instruct curl to use separate operations for the following URL and related options. This allows you to send multiple URL requests, each with its specific options, such as different usernames or custom requests. Added in version 7.36.0.
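    A minimal sketch (the URLs are placeholders): the first URL is saved with -o, and after --next a separate operation posts data to the second URL:
    curl -o index.html https://example.com/ --next -d "q=curl" https://example.com/search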
  • -0, --http1.0: For HTTP, tell curl to use the HTTP 1.0 version instead of the internally preferred HTTP 1.1.
  • --http1.1: For HTTP, tell curl to use the HTTP version 1.1, which is the internal default version. Added in version 7.33.0.
  • --http2: For HTTP, tell curl to issue requests using HTTP/2. This requires building libcurl with support for it. Added in version 7.33.0.
  • --no-npn: Disable the NPN TLS extension. If libcurl was built with an SSL library that supports NPN, NPN is enabled by default. libcurl with HTTP/2 support uses NPN to negotiate HTTP/2 support during the HTTPS session. Added in version 7.36.0.
  • --no-alpn: Disable the ALPN TLS extension. If libcurl was built with an SSL library that supports ALPN, ALPN is enabled by default. libcurl with HTTP/2 support uses ALPN to negotiate HTTP/2 support during the HTTPS session. Added in version 7.36.0.
  • -1, --tlsv1: For SSL, force curl to use a TLS 1.x version when negotiating with the remote TLS server. You can use the options --tlsv1.0, --tlsv1.1, and --tlsv1.2 to control the TLS version more precisely.
  • -2, --sslv2: For SSL, force curl to use SSL version 2 when negotiating with the remote SSL server.
  • -3, --sslv3: For SSL, force curl to use SSL version 3 when negotiating with the remote SSL server.
  • -4, --ipv4: If curl can resolve an address to multiple IP versions (if it supports IPv6), this option tells curl to only resolve the name to an IPv4 address.
  • -6, --ipv6: If curl can resolve an address to multiple IP versions (if it supports IPv6), this option tells curl to only resolve the name to an IPv6 address.
  • -a, --append: For FTP/SFTP, when used in an FTP upload, this instructs curl to append to the target file instead of overwriting it. If the file doesn't exist, it will be created. Note that this option is ignored by some SSH servers, including OpenSSH.
  • -A, --user-agent <agent string>: HTTP, specifies the user agent string to send to the HTTP server. Some poorly implemented CGI scripts fail if this field is not set to "Mozilla/4.0". If the string contains spaces, enclose it in single quotes. Alternatively, this can also be set using the -H, --header option. If this option is used multiple times, the last one will be used.
  • --anyauth: HTTP, tells curl to figure out the authentication method on its own and use the most secure method supported by the remote site. This is done by first making a request and inspecting the response headers, so it may result in extra network round-trips. It is used as an alternative to setting a specific authentication method. You can use --basic, --digest, --ntlm, and --negotiate to achieve the same. Note that it is not recommended to use --anyauth when uploading data from stdin, as it may require sending the data twice, and the client must be able to rewind for it to work during stdin uploads.
  • -b, --cookie <name=data>: HTTP, sends data as a cookie to the HTTP server. It should be data previously received from the server in a "Set-Cookie" line. The data should be in the format NAME1=VALUE1; NAME2=VALUE2. If no = sign is used in the line, it is treated as a filename from which to read previously stored cookie lines, which are used in this session if they match. Using this option also activates the cookie parser, which makes curl record incoming cookies. That can be convenient when combined with the -L, --location option. The file to read cookies from should be in either plain HTTP header format or the Netscape/Mozilla cookie file format. Note that the file specified with -b, --cookie is used only as input and no cookies will be stored in it. To store cookies, use the -c, --cookie-jar option, or use -D, --dump-header to save the HTTP headers to a file. If this option is used multiple times, the last one will be used.
  • -B, --use-ascii: FTP/LDAP, enables ASCII transfer. For FTP, this can also be achieved by using a URL that ends with type=A. On Win32 systems, this option makes the data sent to stdout be in text mode.
  • --basic: HTTP, tells curl to use HTTP Basic authentication. This is the default value and this option is usually meaningless unless you use it to override a previously set option that sets a different authentication method, such as --ntlm, --digest, or --negotiate.
  • -c, --cookie-jar <file name>: HTTP, specifies the file where all cookies should be written to after the operation is completed. curl writes all cookies it knows about to the file, including cookies read from a specified file and cookies received from remote server. If no cookies are known, no file will be written. The file is written using the Netscape cookie file format. If the file name is set to a single dash -, the cookies will be written to stdout. This command-line option activates the curl cookie engine to generate and use cookies. Another way to activate it is to use the -b, --cookie option. If the cookie jar cannot be created or written to, the whole curl operation won't fail, but a warning will be displayed when using -v. This is the only visible feedback about this possibly fatal situation. If this option is used multiple times, the last specified file name will be used.
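    As a sketch (the host name and file names are placeholders), the first call stores any received cookies in cookies.txt and the second call sends them back:
    curl -c cookies.txt https://example.com/login
    curl -b cookies.txt https://example.com/account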
  • -C, --continue-at <offset>: Continues/resumes a previous file transfer starting from the given offset. The given offset is the exact number of bytes that will be skipped, counted from the beginning of the source file, before it is transferred to the target. When used with uploads, curl will not use the FTP server command SIZE. Using -C - tells curl to automatically find out where/how to resume the transfer, using the given output/input file to figure that out. If this option is used multiple times, the last one will be used.
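    For example (the URL is a placeholder), the following resumes an interrupted download, letting curl determine the offset from the partially downloaded local file:
    curl -C - -O https://example.com/big-file.iso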
  • --ciphers <list of ciphers>: SSL, specifies the ciphers to use in the connection. The cipher list must specify valid ciphers. For details on the OpenSSL cipher list, refer to http://www.openssl.org/docs/apps/ciphers.html. The NSS cipher implementation is different from OpenSSL and GnuTLS; the complete list of NSS ciphers is in the NSSCipherSuite entry at the following URL: http://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html#Directives. If this option is used multiple times, the last setting will be applied.
  • --compressed: Requests a compressed response using one of the algorithms curl supports and saves the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
  • --connect-timeout <seconds>: Specifies the maximum time (in seconds) allowed to establish a connection to the server. This only limits the connection phase, and once curl has connected, this option is no longer useful. Since 7.32.0, this option accepts decimal values, but the accuracy of the actual timeout will decrease as the decimal precision of the specified timeout increases. See also the -m, --max-time option. If this option is used multiple times, the last one will be used.
  • --create-dirs: When used in conjunction with the -o option, curl creates the necessary local directory hierarchy as needed. This option only creates the directories mentioned in the -o path, nothing else. If the -o file name uses no directory, or if the directories it mentions already exist, no directories will be created. To create a remote directory when using FTP or SFTP, try --ftp-create-dirs.
  • --crlf: For FTP uploads, converts LF to CRLF and is applicable for MVS (OS/390).
  • --crlfile <file>: For HTTPS/FTPS, provides a file in PEM format that contains a certificate revocation list. This list can specify peer certificates to be considered revoked. If this option is used multiple times, the last one will be used. Added in 7.19.7.
  • -d, --data <data>: With HTTP, sends the specified data in a POST request to an HTTP server, similar to what a browser does when a user fills in an HTML form and presses the submit button. This causes curl to pass the data to the server using the content type application/x-www-form-urlencoded. Compared to -F, --form, -d, --data is the same as --data-ascii. To post pure binary data, use --data-binary instead. To URL-encode the values of form fields, use --data-urlencode. If any of these options are used multiple times on the same command line, the specified data segments will be merged together with an ampersand (&) separator. For example, using -d name=daniel -d skill=lousy generates a post body of name=daniel&skill=lousy. If the data starts with @, the rest should be the name of a file from which to read the data, or - to read the data from stdin; multiple files can also be specified. Posting data from a file named foobar would thus be done with --data @foobar. When --data reads from such a file, carriage returns and line feeds are stripped.
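    Reusing the name=daniel/skill=lousy fields from the description above (the URL is a placeholder), both of the following send the same application/x-www-form-urlencoded POST body:
    curl -d name=daniel -d skill=lousy https://example.com/form
    curl -d "name=daniel&skill=lousy" https://example.com/form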
  • -D, --dump-header <file>: Writes the protocol headers to the specified file. This option is useful when you want to store the headers sent to you by an HTTP site. By using the -b, --cookie option, you can read the cookies from the headers in a second curl call. However, the -c, --cookie-jar option is a better way to store cookies. When used with FTP, the FTP server response line is considered the headers and will be saved there. If this option is used multiple times, the last one will be used.
  • --data-ascii <data>: Please refer to -d, --data.
  • --data-binary <data>: For HTTP, this posts the data exactly as specified, without any additional processing. If the data starts with @, the rest should be a file name. The data is posted in a similar manner to --data-ascii, but line breaks and carriage returns are preserved and no conversion is performed. If this option is used multiple times, the ones following the first will append data as described under -d, --data.
  • --data-urlencode <data>: For HTTP, this posts data similar to the other --data options, but with URL encoding applied. Added in 7.18.0. To be CGI-compliant, the <data> portion should begin with a name followed by a separator and a content specification. The <data> portion can be passed to curl using one of the following syntaxes (see the example after this list):
    • content: This will URL encode the content and pass it to curl. Be careful not to include any = or @ symbols in the content, as it will match the syntax of one of the following cases.
    • =content: This will URL-encode the content and pass it to curl, excluding the preceding = symbol from the data.
    • name=content: This will URL-encode the content portion and continue passing it to curl. Note that the name portion should already be URL-encoded.
    • @filename: This will make curl load data from the given file (including any line breaks), URL encode the data, and pass it in a POST request.
    • name@filename: This will make curl load data from the given file (including any line breaks), URL-encode the data, and pass it in a POST request. An equal sign is appended to the name portion, resulting in name=urlencoded-file-content. Note that the name should already be URL-encoded.
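    For instance (the URL is a placeholder), the following URL-encodes the value of the comment field before posting it:
    curl --data-urlencode "comment=hello world & friends" https://example.com/post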
  • --delegation LEVEL: Set the LEVEL to inform the server what to allow when handling user credentials, typically used with GSS/kerberos.
    • none: Disallow any delegation.
    • policy: Delegate only when the OK-AS-DELEGATE flag is set in the Kerberos service ticket. This is a matter of domain policy.
    • always: Unconditionally allow the server to delegate.
  • --digest: For HTTP, enable HTTP digest authentication. This is an authentication scheme that prevents passwords from being sent in plain text over the wire. Use this option in conjunction with the regular -u, --user option to set the username and password. If this option is used multiple times, only the first one will be used.
  • --disable-eprt: For FTP, instruct curl to disable the EPRT and LPRT commands when performing active FTP transfers. Normally, curl would first attempt to use EPRT and then LPRT before falling back to PORT. With this option, it will immediately use PORT. EPRT and LPRT are extensions to the original FTP protocol and may not work on all servers, but they provide more functionality in a better way than the traditional PORT command. --eprt can be used to explicitly enable EPRT, and --no-eprt is an alias for --disable-eprt. Disabling EPRT only changes the active behavior; if you want to switch to passive mode, you need to not use -P, --ftp-port, or force passive mode with --ftp-pasv.
  • --dns-interface <interface>: Instruct curl to send outgoing DNS requests through <interface>. This option is the counterpart to --interface (which does not affect DNS). The provided string must be the interface name (not the address). This option requires that libcurl was built with a resolver backend that supports this operation, and the c-ares backend is the only one that does. Added in 7.33.0.
  • --dns-ipv4-addr <ip-address>: Instruct curl to bind to <ip-address> when issuing IPv4 DNS requests so that the DNS requests originate from this address. The parameter should be a single IPv4 address. This option requires that libcurl was built with a resolver backend that supports this operation, and the c-ares backend is the only one that does. Added in 7.33.0.
  • --dns-ipv6-addr <ip-address>: Instruct curl to bind to <ip-address> when issuing IPv6 DNS requests so that the DNS requests originate from this address. The parameter should be a single IPv6 address. This option requires that libcurl was built with a resolver backend that supports this operation, and the c-ares backend is the only one that does. Added in 7.33.0.
  • -E, --cert <certificate[:password]>: SSL - Specifies the client certificate file curl should use when retrieving files over HTTPS, FTPS, or other SSL-based protocols. The certificate must be in PKCS#12 format if using Secure Transport, or PEM format if using any other engine. If the optional password is not specified, curl will prompt for it in the terminal. Note that this option assumes a "certificate" file that contains both the private key and the client certificate concatenated. See --cert and --key to specify them separately.
  • --engine <name>: Selects the OpenSSL encryption engine to be used for cryptographic operations. Use --engine list to print the list of supported engines at build time. Note that not all engines may be available at runtime.
  • --environment: RISC OS ONLY - Sets a series of environment variables using the names supported by the -w option, making it easier to extract useful information after running curl.
  • --egd-file <file>: SSL - Specifies the path name of the Entropy Gathering Daemon (EGD) socket used to seed the random engine for SSL connections. See also the --random-file option.
  • --cert-type <type>: SSL - Specifies the type of certificate provided to curl. PEM, DER, and ENG are recognized types. If not specified, PEM is assumed. If this option is used multiple times, the last one will be used.
  • --cacert <CA certificate>: SSL - Specifies the certificate file for curl to use in verifying the peer. The file may contain multiple CA certificates and must be in PEM format. By default, curl is built to use the default file, so this option is typically used to change the default file.
  • --capath <CA certificate directory>: SSL - Specifies the certificate directory for curl to use in verifying the peer. Multiple paths can be specified using : as a separator, for example, path1:path2:path3. The certificates must be in PEM format. If curl is built with OpenSSL, the directory must be processed using the c_rehash program provided by OpenSSL. Using --capath can make curl establish SSL connections more efficiently than using --cacert if the --cacert file contains many CA certificates supported by OpenSSL. If this option is set, the default capath value will be ignored. If used multiple times, the last value will be used.
  • -f, --fail: HTTP - Silently fails on server errors, meaning there is no output at all. This is mainly done to allow scripts or similar to better handle failed attempts. Normally, when an HTTP server fails to deliver a document, it returns an HTML document that usually describes the error. This flag prevents curl from outputting that and instead returns error 22. This method is not fail-safe as unsuccessful response codes can sometimes occur, especially when involving authentication, such as response codes 401 and 407.
  • -F, --form <name=content>: HTTP - Allows curl to simulate filling in a form as if a user had pressed the submit button. curl posts the data using the multipart/form-data content type according to RFC 2388, which enables uploading of binary files, among other things. To force the content part to be treated as a file, prefix the file name with @. To instead get the content part from a file alone, prefix the file name with <. The difference between @ and < is that @ makes a file get attached in the POST as a file upload, while < makes a text field get its content from a file.
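    As a sketch (the URL and file name are placeholders), the following posts a text field and attaches a file as an upload:
    curl -F name=Daniel -F "photo=@portrait.jpg" https://example.com/upload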
  • --ftp-account [data]: FTP - Sends the specified account data using the ACCT command when the FTP server requests it after providing a username and password. Added in 7.13.0. If used multiple times, the last one will be used.
  • --ftp-alternative-to-user <command>: FTP - Sends this command if authentication with the USER and PASS commands fails. This is useful when connecting to Tumbleweed's Secure Transport server over FTPS using a client certificate, where the command SITE AUTH instructs the server to retrieve the username from the certificate. Added in 7.15.5.
  • --ftp-create-dirs: FTP/SFTP - By default, when curl operates on an FTP or SFTP URL whose path doesn't exist on the server, it fails. With this option, curl will instead attempt to create missing directories.
  • --ftp-ssl-ccc: FTP, using CCC (Clear Command Channel), closes the SSL/TLS layer after authentication, so the rest of the control channel communication is unencrypted. This allows NAT routers to track FTP transactions. The default mode is passive; for other modes, see --ftp-ssl-ccc-mode. Added in version 7.16.1.
  • --ftp-ssl-ccc-mode [active/passive]: FTP, sets the CCC mode using CCC (Clear Command Channel). In passive mode, it does not initiate the close but waits for the server to do it, and no response to the close is expected from the server. In active mode, it initiates the close and waits for a response from the server. Added in version 7.16.2.
  • --ftp-ssl-control: FTP, requires SSL/TLS for the FTP login, then clears it for the transfer. This allows secure authentication but non-encrypted data transfers for efficiency. If the server does not support SSL/TLS, the transfer fails. Added in version 7.16.0; the option can still be used but will be removed in a future version.
  • --form-string <name=string>: HTTP, similar to --form, but the value string for the named parameter is used literally. Leading @ and < characters in the value, as well as the ;type= string, have no special meaning. If there is any chance that the string value may accidentally trigger the @ or < features of --form, use this option instead of --form.
  • -g, --globoff: This option disables URL globbing. When it is set, you can specify URLs that contain the letters {}[] without having curl interpret them. Note that these letters are not normal legal URL content, but they should be encoded according to the URI standard.
  • -G, --get: When using this option, all data specified with -d, --data, --data-binary, or --data-urlencode will be used in an HTTP GET request instead of the POST request that would otherwise be used. The data will be appended to the URL with a ? separator. If used in conjunction with -I, the POST data will instead be appended to the URL with a HEAD request. If this option is used multiple times, only the first occurrence will be used, since undoing a GET has no meaning; instead, explicitly enforce whatever alternative method you prefer.
  • -H, --header <header>: HTTP, includes additional headers in the request when sending an HTTP request to the server. You can specify any number of additional headers. Note that if you add a custom header with the same name as an internal header that curl would use, the externally set header will be used instead of the internal one. This allows you to do more complex things than curl would normally do. You should not replace internally set headers without fully understanding what you are doing. To remove an internal header, provide a replacement with no content after the colon, for example -H "Host:". To send a custom header with no value, the header must end with a semicolon, for example -H "X-custom-header;" to send X-custom-header:. curl ensures that each header you add or replace is sent with the correct line ending, so you should not add newline or carriage-return characters as part of the header content. Also see the -A, --user-agent and -e, --referer options.
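    For example (the URL and header name are placeholders), the following adds a custom header and removes the internally generated Accept header:
    curl -H "X-Custom-Header: value" -H "Accept:" https://example.com/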
  • --hostpubmd5 <md5>: SCP/SFTP, passes a string containing a 32-digit hexadecimal number. The string should be the 128-bit MD5 checksum of the remote host's public key. curl will refuse to connect with the host unless the MD5 sums match. Added in version 7.17.1.
  • --ignore-content-length: HTTP, ignores the Content-Length header. This is useful for servers running Apache 1.x that report incorrect content length for files larger than 2GB.
  • -i, --include: HTTP, includes the HTTP headers in the output. The HTTP headers include server name, document date, HTTP version, and other information.
  • -I, --head: HTTP/FTP/FILE, retrieves only the HTTP header. For HTTP servers, it retrieves the response header, which is used to get the characteristics of the document. When used with FTP or a file, curl only displays the file size and last modification time.
  • --interface <name>: Performs the operation using the specified interface. You can input the interface name, IP address, or hostname. For example: curl --interface eth0:1 http://www.netscape.com/. If this option is used multiple times, the last occurrence will be used.
  • -j, --junk-session-cookies: HTTP, when curl is told to read cookies from a given file, this option makes it discard all "session cookies". This has essentially the same effect as starting a new cookie session. Typical browsers discard session cookies when they are closed.
  • -J, --remote-header-name: HTTP, used together with -O, --remote-name, this option tells curl to use the file name specified by the server in the Content-Disposition header instead of extracting it from the URL. %-encoded sequences are not decoded in the provided file name, so this option may give you unexpected file names.
  • -k, --insecure: This option explicitly allows curl to perform insecure SSL connections and transfers. All SSL connections attempt to ensure security by using the default installed CA certificate bundle, which causes all connections deemed insecure to fail unless -k, --insecure is used.
  • -K, --config <config file>: Specifies a configuration file from which curl reads its arguments. The configuration file is a text file in which command-line arguments can be written; they are then used as if they were written on the actual command line. Options and their arguments must be specified on the same configuration file line, separated by spaces, colons, or equal signs. Long option names can optionally be given without the initial double dash; if so, the colon or equal sign characters can be used as separators. If an option is specified with one or two dashes, there must be no colon or equal sign character between the option and its argument. If an argument needs to contain spaces, it must be enclosed in quotes. Within double quotes, the following escape sequences are available: \\, \", \t, \n, \r, \v. A backslash preceding any other letter is ignored. If the first column of a configuration line is the # character, the rest of that line is treated as a comment. Only one option may be written per physical line in the configuration file. Specify the filename as - to make curl read the configuration from stdin.
  • --keepalive-time <seconds>: This option sets the time that a connection needs to remain idle before sending a keepalive probe, as well as the time between subsequent keepalive probes. It is currently very effective on operating systems that provide the TCP_KEEPIDLE and TCP_KEEPINTVL socket options, such as Linux, recent AIX, HP-UX, etc. If --no-keepalive is used, this option has no effect. (Added in 7.18.0) If this option is used multiple times, the last one will be used. If not specified, this option defaults to 60 seconds.
  • --key <key>: SSL/SSH. Specifies the name of the private key file, allowing you to provide the private key in this separate file. If this option is used multiple times, the last one will be used.
  • --key-type <type>: SSL. Specifies the type of the private key file provided with --key. It supports DER, PEM, and ENG. If not specified, it is assumed to be PEM. If this option is used multiple times, the last one will be used.
  • --krb <level>: FTP. Enables Kerberos authentication and use. A level must be provided, and it should be one of clear, safe, confidential, or private. If you use a level that is not one of these, private will be used instead. This option requires a library built with kerberos4 support, which is not very common. Use -V, --version to see if your curl supports it. If this option is used multiple times, the last one will be used.
  • -l, --list-only: FTP. When listing an FTP directory, this switch forces a name-only view. This is especially useful if the user wants a machine to parse the contents of the FTP directory, since the normal directory view does not use a standardized appearance or format. When used like this, the option causes an NLST command to be sent to the server instead of LIST. Note: Some FTP servers only list files in response to NLST; they do not include subdirectories and symbolic links. When retrieving a specific email from POP3, this switch enforces the LIST command instead of RETR. This is particularly useful if the user wants to see whether a specific message ID exists on the server and what size it is. Note: When combined with -X, --request <command>, this option can be used to send the UIDL command instead, allowing the user to issue requests using an email's unique identifier rather than its message ID. Added in 7.21.5.
  • -L, --location: HTTP/HTTPS. If the server reports that the requested page has moved to a different location (indicated by a 3xx HTTP status code), this option tells curl to follow the redirection and retrieve the resource from the new location. By default, curl only displays the information about the redirection and does not actually follow it. Use -i, --include to include the HTTP response headers in the output when using this option. Use --max-redirs <num> to set the maximum number of redirections to follow (the default is 50); if --max-redirs is set to 0, curl will not follow any redirections. If --post301, --post302, or --post303 is used, curl keeps using POST when the server responds with a 301, 302, or 303 redirection. You can override this behavior with --get or --head. See also --location-trusted.
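    A short sketch (the URL is a placeholder): follow redirects, but give up after five of them:
    curl -L --max-redirs 5 https://example.com/old-page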
  • --libcurl <file>: If you append this option to any regular curl command line, you will get libcurl-using C source code written to the file that performs the equivalent of what your command line does. If this option is used multiple times, the last given file name will be used. Added in 7.16.1.
  • --limit-rate <speed>: Specifies the maximum transfer rate you want curl to use, for both downloads and uploads. This feature is useful if you have a limited pipe and want your transfer not to consume your entire bandwidth, making it slower than it otherwise would be. The given speed is in bytes per second unless a suffix is appended: kilobytes with k or K, megabytes with m or M, and gigabytes with g or G. For example: 200K, 3m, and 1G. The given rate is averaged over the whole transfer, meaning curl might use higher transfer speeds in short bursts, but over time it won't exceed the given rate. If you also use the -Y, --speed-limit option, that option will take precedence and may slightly cripple the rate limiting to help keep the speed-limit logic working. If this option is used multiple times, the last one will be used.
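    For example (the URL is a placeholder), the following caps the download at roughly 200 kilobytes per second:
    curl --limit-rate 200K -O https://example.com/large.zip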
  • --local-port <num>[-num]: Sets the preferred number or range of local port numbers to use for connections. Note that port numbers are essentially a scarce resource and can sometimes be busy. Therefore, setting this range too narrow may result in unnecessary connection setup failures. Added in 7.15.2.
  • --location-trusted: For HTTP/HTTPS, similar to -L, --location, but allows sending the name+password to all hosts that the site may redirect to. This may or may not cause a security vulnerability if the site redirects you to a site where you send authentication information in plain text in the case of HTTP Basic authentication.
  • -m, --max-time <seconds>: Specifies the maximum time in seconds that the entire operation is allowed to take. This is useful to prevent batch jobs from hanging for hours due to slow network or links. Since 7.32.0, this option accepts decimal values, but the actual timeout accuracy will decrease with the increasing decimal precision of the specified timeout. See also the --connect-timeout option. If this option is used multiple times, the last option will be used.
  • --login-options <options>: Specifies the login options to use during server authentication. You can use login options to specify protocol-specific options that may be used during authentication. Currently, only IMAP, POP3, and SMTP support login options. For more information on login options, refer to RFC 2384, RFC 5092, and IETF draft draft-earhart-url-smtp-00.txt (added in 7.34.0). If this option is used multiple times, the last option will be used.
  • --mail-auth <address>: For SMTP, specifies a single address to be used as the authentication address (identity) for submitted mails that are relayed to another server. Added in 7.25.0.
  • --mail-from <address>: For SMTP, specifies a single address from which the given mail is sent. Added in 7.20.0.
  • --max-filesize <bytes>: Specifies the maximum size of the file to download in bytes. If the requested file is larger than this value, the transfer will not start, and curl will return exit code 63. Note that the file size is not always known before downloading. For such files, this option does not work even if the file transfer ultimately exceeds the given limit. This applies to both FTP and HTTP transfers.
  • --mail-rcpt <address>: (SMTP) Specifies a single address, username, or mailing list name. During the execution of email transfers, the recipient should be specified as a valid email address to which the mail should be sent. When performing address verification (VRFY command), the recipient should be specified as a username or username and domain (as per RFC5321 section 3.5). When performing mailing list expansion (EXPN command), the recipient should be specified using the mailing list name, such as Friends or London Office. Added in 7.34.0.
  • --max-redirs <num>: Sets the maximum number of allowed redirects. If used with -L, --location, this option can be used to prevent curl from endlessly following redirects. By default, the limit is set to 50 redirects. Set this option to -1 to make it unlimited.
  • --metalink: This option instructs curl to parse and handle the given URI as a Metalink file (version 3 and version 4, RFC 5854). In case of errors such as unavailable files or servers, it will perform failover using the mirrors listed within. It also verifies the hash value of the file after downloading. The Metalink file itself is downloaded and processed in memory, rather than being stored in the local file system.
  • -n, --netrc: Makes curl scan the .netrc (_netrc on Windows) file in the user's home directory for login names and passwords. This is typically used for FTP on UNIX systems. If used with HTTP, curl will enable user authentication. Refer to netrc(4) or ftp(1) for details on the file format. curl does not complain if the file doesn't have the correct permissions. The HOME environment variable is used to locate the home directory. Here's a quick and simple example .netrc entry that allows curl to FTP to the machine host.domain.com with the username myself and password secret: machine host.domain.com login myself password secret.
  • -N, --no-buffer: Disables buffering of the output stream. In normal working conditions, curl uses a standard buffered output stream, which outputs data in chunks rather than immediately when it arrives. This option disables that buffering. Note that this is a negatively named option, so you can use --buffer to enforce buffering.
  • --netrc-file: Similar to --netrc, but this option allows you to provide the path (absolute or relative) to the netrc file that curl should use. Only one netrc file can be specified per invocation, and if multiple --netrc-file options are provided, only the last one will be used. This option overrides any use of --netrc, as they are mutually exclusive. It also abides by --netrc-optional if specified.
  • --netrc-optional: Similar to --netrc, but makes the usage of .netrc optional instead of mandatory as with the --netrc option.
  • --negotiate: Enables Negotiate (SPNEGO) authentication for HTTP. If you want to enable Negotiate (SPNEGO) for proxy authentication, use --proxy-negotiate. This option requires a library built with GSS-API or SSPI support. Use -V, --version to check whether curl supports GSS-API/SSPI and SPNEGO. When using this option, a dummy -u, --user option must also be provided to activate the authentication code correctly; sending -u : is enough, as the username and password in the -u option are not actually used. If this option is used multiple times, only the first one will be used.
  • --no-keepalive: Disables the use of keepalive messages on the TCP connection. By default, curl enables them. Note that this is a negatively named option, so you can use --keepalive to enforce keepalive.
  • --no-sessionid: SSL, disables curl's use of SSL session ID caching. By default, all transfers use the cache. Note that although attempting to reuse SSL session IDs does no harm, there seem to be broken SSL implementations in the wild that may require you to disable it in order to succeed (added in 7.16.0). Note that this is a negatively named option, so you can use --sessionid to enforce session ID caching.
  • --noproxy <no-proxy-list>: Specifies a comma-separated list of hosts that should not be proxied (if a proxy is specified). The only wildcard allowed is the * character, which matches all hosts and effectively disables the proxy. Each name in this list is matched as either a domain that contains the hostname or as the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. Added in 7.19.4.
  • --ntlm: Enables NTLM authentication for HTTP. NTLM authentication is a proprietary method designed by Microsoft and used by IIS web servers. The protocol has been reverse-engineered and implemented in curl thanks to the efforts of clever people. This kind of behavior should not be endorsed; everyone using NTLM is encouraged to switch to a public and documented authentication method such as Digest. If you want to enable NTLM for proxy authentication, use --proxy-ntlm. This option requires a library built with SSL support. Use -V, --version to check whether curl supports NTLM. If this option is used multiple times, only the first one will be used.
  • -o, --output <file>: Writes output to the specified file instead of stdout. If you are using {} or [] to fetch multiple documents, you can use # followed by a number in the <file> specifier; that variable will be replaced with the current string for the URL being fetched. You can use this option as many times as the number of URLs you have.
  • -O, --remote-name: Writes the output to a local file with the same name as the remote file obtained, using only the file portion of the remote file. The path is stripped off, and the remote file name extracted from the given URL is used for saving. The file will be saved in the current working directory. If you want to save the file in a different directory, make sure to change the current working directory before invoking curl with the -O, --remote-name flag. No URL decoding is performed on the file name. If there are %20 or other URL-encoded parts in the name, they will end up as part of the file name. You can use this option multiple times, one for each URL you have.
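    For instance (the URL and file names are placeholders), the first command saves the file as report.pdf in the current directory, while the second chooses its own local name:
    curl -O https://example.com/files/report.pdf
    curl -o local-copy.pdf https://example.com/files/report.pdf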
  • --oauth2-bearer: For IMAP, POP3, SMTP, specifies the bearer token for OAUTH 2.0 server authentication. The bearer token is used together with the username, which can be specified as part of the --url or -u, --user options. The bearer token and username are formatted according to RFC 6750. If this option is used multiple times, the last option will be used.
  • --proxy-header <header>: For HTTP, includes additional headers in the request sent to the proxy. You can specify any number of additional headers. This is the equivalent of -H, --header but for proxy communication only, such as CONNECT requests, when you want headers sent to the proxy to be separate from those sent to the actual remote host. curl ensures that each header you add or replace is sent with the correct line ending, so you should not add newline or carriage-return characters as part of the header content. Headers specified with this option are not included in requests that curl knows will not be sent to a proxy. This option can be used multiple times to add or replace multiple headers. Added in 7.37.0.
  • -p, --proxytunnel: When using an HTTP proxy (-x, --proxy), this option causes non-HTTP protocols to attempt tunneling through the proxy instead of using it only to perform HTTP-like operations. The tunneling method is achieved by an HTTP proxy connect request and requires the proxy to allow direct connections to the remote port number that curl wants to tunnel to.
  • -P, --ftp-port <address>: For FTP, reverses the default initiator/listener roles when connecting to an FTP server. This switch makes curl use active mode: curl tells the server to connect back to the address and port specified by the client, whereas passive mode asks the server to set up an IP address and port for the client to connect to.
  • --pass <phrase>: For SSL/SSH, the passphrase for the private key. If this option is used multiple times, the last option will be used.
  • --post301: For HTTP, tells curl to follow RFC 2616/10.3.2 and not convert POST requests to GET requests when performing a 301 redirect. Non-RFC behavior is prevalent in web browsers, so curl converts by default to maintain consistency. However, servers may require POST to be maintained after such a redirect. This option is only meaningful when using -L, --location (added in 7.17.1).
  • --post302: For HTTP, tells curl to follow RFC 2616/10.3.3 and not convert POST requests to GET requests when performing a 302 redirect. Non-RFC behavior is prevalent in web browsers, so curl converts by default to maintain consistency. However, servers may require POST to be maintained after such a redirect. This option is only meaningful when using -L, --location (added in 7.19.1).
  • --post303: For HTTP, tells curl to follow RFC 2616/10.3.4 and not convert POST requests to GET requests when performing a 303 redirect. Non-RFC behavior is prevalent in web browsers, so curl converts by default to maintain consistency. However, servers may require POST to be maintained after such a redirect. This option is only meaningful when using -L, --location (added in 7.26.0).
  • --proto <protocols>: This option instructs curl to perform the initial retrieval using the specified protocols. The protocols are evaluated from left to right, separated by commas. Each protocol can be a protocol name or all, and may have zero or more modifiers as prefixes.
  • --proto-redir <protocols>: This option tells curl to use the listed protocols after a redirect. For the representation of protocols, refer to --proto (added in 7.20.2).
  • --proxy-anyauth: This option tells curl to choose the appropriate authentication method when communicating with the specified proxy. This may result in additional request/response round trips (added in 7.13.2).
  • --proxy-basic: This option tells curl to use HTTP basic authentication when communicating with the specified proxy. Enabling HTTP basic for the remote host with --basic is the default authentication method used by curl for proxies.
  • --proxy-digest: This option tells curl to use HTTP digest authentication when communicating with the specified proxy. Enable HTTP digest for the remote host with --digest.
  • --proxy-negotiate: This option tells curl to use HTTP Negotiate (SPNEGO) authentication when communicating with the specified proxy. Enable Negotiate (SPNEGO) with the remote host using --negotiate (added in 7.17.1).
  • --proxy-ntlm: This option tells curl to use HTTP NTLM authentication when communicating with the specified proxy. Enable NTLM for the remote host with --ntlm.
  • --proxy1.0 <proxyhost[:port]>: Use the specified HTTP 1.0 proxy. If no port number is specified, it is assumed to be 1080. The only difference between this option and the HTTP proxy option -x, --proxy is that attempts to use CONNECT through the proxy will specify the HTTP 1.0 protocol instead of the default HTTP 1.1.
  • --pubkey <key>: SSH, the filename of the public key file, allows you to provide the public key in this separate file. If this option is used multiple times, the last option will be used.
  • -q: If used as the first argument on the command line, this option prevents curl from reading and using the curlrc config file. For more information on the default configuration file search paths, see -K, --config.
  • -Q, --quote <command>: FTP/SFTP, sends an arbitrary command to the remote FTP or SFTP server. Quote commands are sent before the transfer occurs (specifically, after the initial PWD command in FTP transfers). To have the command executed after a successful transfer, prefix it with a hyphen -. To have the command sent after curl changes the working directory, just before the transfer command, prefix it with a plus sign + (this only applies to FTP). You can specify multiple commands. If the server returns a failure for any of the commands, the entire operation will be aborted. You must send syntactically correct FTP commands, as defined in RFC 959, to FTP servers, or one of the SFTP-specific quote commands curl supports to SFTP servers. To make curl continue even if the command fails, prefix the command with an asterisk *. File names may be quoted shell-style to embed spaces or special characters. This option can be used multiple times.
  • -r, --range <range>: HTTP/FTP/SFTP/FILE, retrieves a byte range (i.e., a partial document) from an HTTP/1.1, FTP, or SFTP server or a local file. The range can be specified in multiple ways.
  • -R, --remote-time: When used, this option makes curl try to figure out the timestamp of the remote file. If the timestamp is available, it sets the local file to have the same timestamp.
  • --random-file <file>: SSL, specifies the pathname of a file containing random data used to seed the random engine for SSL connections. See also the --egd-file option.
  • --raw: HTTP, when used, disables all internal HTTP decoding of content or transfer encodings and instead passes them through unchanged (added in 7.16.2).
  • --remote-name-all: This option changes the default operation for all given URLs to behave like -O, --remote-name does for each URL. Therefore, if you want to disable this option for a specific URL after using --remote-name-all, you must use -o- or --no-remote-name. Added in 7.19.0.
  • --resolve <host:port:address>: Provides a custom address for a specific host and port pair. Using this, you can make curl requests use the specified address and prevent the normally resolved address from being used; think of it as a /etc/hosts alternative provided on the command line. The port number should be the one used for the specific protocol the host will be used for, which means you need several entries if you want to provide addresses for the same host on different ports. You can use this option multiple times to add as many host names to resolve as you want. Added in 7.21.3.
  • --retry <num>: If curl encounters a temporary error while attempting a transfer, it will retry the operation this number of times before giving up. Setting the number to 0 will disable retrying (which is the default behavior). Temporary errors include timeouts, ftp4xx response codes, or http5xx response codes. When curl is about to retry a transfer, it will first wait for one second, and then for each subsequent retry, it will double the waiting time until it reaches 10 minutes. This will be the delay between the remaining retries. You can disable this exponential backoff algorithm by using --retry-delay. Also, refer to --retry-max-time to limit the total time allowed for retries. If this option is used multiple times, the last one will be used.
  • --retry-delay <seconds>: If a transfer fails due to a temporary error, curl will sleep for this amount of time before each retry. This option modifies the default backoff-time algorithm between retries and only makes sense when used with --retry. Setting this delay to zero makes curl use the default backoff time. If this option is used multiple times, the last one will be used.
  • --retry-max-time <seconds>: The retry timer is reset before the first transfer attempt, and retries will continue as long as the timer has not reached this specified limit (see --retry). Note that if the timer has not reached the limit, a request will be issued, and it may take longer than the given time period to complete. To limit the maximum time for an individual request, use -m, --max-time. Set this option to zero to disable retry timeouts. If this option is used multiple times, the last one will be used.
  • -s, --silent: Silent or quiet mode. No progress meter or error messages will be displayed. It makes curl mute, but curl still outputs the data you ask for, potentially even to the terminal/stdout unless you redirect it.
  • --sasl-ir: Enable initial response in SASL authentication. Added in version 7.31.0.
  • -S, --show-error: When used with -s, if curl fails, it will display an error message.
  • --ssl: For FTP, POP3, IMAP, SMTP, attempts to establish a connection using SSL/TLS. If the server doesn't support SSL/TLS, it falls back to a non-secure connection. See also --ftp-ssl-control and --ssl-reqd for different required encryption levels. This option was previously called --ftp-ssl (added in 7.11.0), and the old option name can still be used but will be deprecated in future versions. Added in 7.20.0.
  • --ssl-reqd: For FTP, POP3, IMAP, SMTP, the connection requires SSL/TLS, and if the server doesn't support it, the connection is terminated. This option was previously called --ftp-ssl-reqd (added in 7.15.5), and the old option name can still be used but will be deprecated in future versions. Added in 7.20.0.
  • --ssl-allow-beast: For SSL, this option tells curl not to work around a security flaw in the SSL3 and TLS1.0 protocols known as BEAST. If this option is not used, the SSL layer may use workarounds known to cause interoperability problems with some older SSL implementations. Warning: this option loosens SSL security, and by using this flag you ask for exactly that. Added in 7.25.0.
  • --socks4 <host[:port]>: Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed to be port 1080. This option overrides any previously used -x, --proxy options, as they are mutually exclusive. Starting from 7.21.7, this option is redundant as you can specify a socks4 proxy with the socks4:// protocol prefix using -x, --proxy. If this option is used multiple times, the last one will be used.
  • --socks4a <host[:port]>: Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed to be port 1080. This option overrides any previously used -x, --proxy options, as they are mutually exclusive. Starting from 7.21.7, this option is redundant as you can specify a socks4a proxy with the socks4a:// protocol prefix using -x, --proxy. If this option is used multiple times, the last one will be used.
  • --socks5-hostname <host[:port]>: Use the specified SOCKS5 proxy and let the proxy resolve hostnames. If no port is specified, assume port 1080. This option overrides any previously used -x, --proxy options, as they are mutually exclusive. Starting from version 7.21.7, this option is redundant because you can specify a SOCKS5 hostname proxy with the socks5h:// protocol prefix using -x, --proxy. If this option is used multiple times, the last option will be used. (This option was previously incorrectly documented and used as --socks without a trailing number.)
  • --socks5 <host[:port]>: Use the specified SOCKS5 proxy and resolve hostnames locally. If no port is specified, assume port 1080. This option overrides any previously used -x, --proxy options, as they are mutually exclusive. Starting from version 7.21.7, this option is redundant because you can specify a SOCKS5 proxy with the socks5:// protocol prefix using -x, --proxy. If this option is used multiple times, the last option will be used. (This option was previously incorrectly documented and used as --socks without a trailing number.) This option (along with --socks4) does not apply to IPV6, FTPS, or LDAP.
  • --socks5-gssapi-service <servicename>: The default service name for a SOCKS server is rcmd/server-fqdn. This option allows you to change it. For example, --socks5 proxy-name --socks5-gssapi-service sockd would use sockd/proxy-name, and --socks5 proxy-name --socks5-gssapi-service sockd/real-name would use sockd/real-name, for cases where the proxy name does not match the principal name. Added in version 7.19.4.
  • --socks5-gssapi-nec: As part of GSS-API negotiation, negotiate the protection mode. RFC1961 states in section 4.3/4.4 that it should be protected, but the NEC reference implementation does not. The option --socks5-gssapi-nec allows unprotected exchanges for protection mode negotiation. Added in version 7.19.4.
  • --stderr <file>: Redirect all writes to stderr to the specified file. If the file name is a plain "-", it is changed to write to standard output. If this option is used multiple times, the last option will be used.
  • -t, --telnet-option <OPT=val>: Pass options to the telnet protocol. Supported options include: TTYPE=<term> to set the terminal type, XDISPLOC=<X display> to set the X display location, NEW_ENV=<var, val> to set environment variables.
  • -T, --upload-file <file>: This transfers the specified local file to the remote URL. If the URL does not have a file part, curl appends the local file name. Note that you must use a trailing "/" on the last directory to truly prove to curl that there is no file name. Otherwise, curl will think that your last directory name is the remote file name to use, which can likely cause the upload operation to fail. If used with an HTTP(S) server, the PUT command will be used. Use the file name "-" to use stdin instead of the given file, or you can specify the file name "." (a single dot) instead of "-" to use stdin in non-blocking mode, so you can read server output while uploading stdin.
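    As a sketch (the server and file names are placeholders), the trailing slash tells curl to keep the local file name on the server:
    curl -T local.txt ftp://ftp.example.com/incoming/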
  • --tcp-nodelay: Enable the TCP_NODELAY option. For more information about this option, refer to the curl_easy_setopt manual page. Added in version 7.11.2.
  • --tftp-blksize <value>: TFTP, set the TFTP BLKSIZE option (must be greater than 512). This is the block size that curl will attempt to use when transferring data to or from a TFTP server. By default, 512 bytes will be used. If this option is used multiple times, the last option will be used. Added in version 7.20.0.
  • --tlsauthtype <authtype>: Set the TLS authentication type. Currently, the only supported value is SRP, for TLS-SRP (RFC 5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, this option defaults to SRP. Added in version 7.21.4.
  • --tlspassword <password>: Set a password to be used for the TLS authentication method specified by --tlsauthtype. Requires setting --tlsuser as well. Added in version 7.21.4.
  • --tlsuser <user>: Set a username to be used together with the TLS authentication method specified by --tlsauthtype. Requires setting --tlspassword as well. Added in version 7.21.4.
  • --tlsv1.0: SSL - Forces curl to use TLS version 1.0 when negotiating with the remote TLS server. Added in version 7.34.0.
  • --tlsv1.1: SSL - Forces curl to use TLS version 1.1 when negotiating with the remote TLS server. Added in version 7.34.0.
  • --tlsv1.2: SSL - Forces curl to use TLS version 1.2 when negotiating with the remote TLS server. Added in version 7.34.0.
  • --tr-encoding: HTTP - Requests compression transfer encoding for the response using one of the algorithms supported by curl and decompresses the data upon receiving. Added in version 7.21.6.
  • --trace <file>: Enables a full trace dump of all incoming and outgoing data (including descriptive information) to the specified output file. Use - as the file name to send the output to stdout. This option overrides previously used -v, --verbose, or --trace-ascii options. If used multiple times, the last option will be used.
  • --trace-ascii <file>: Enables a full trace dump of all incoming and outgoing data (including descriptive information) to the specified output file. Use - as the file name to send the output to stdout. Similar to --trace, but it omits the hexadecimal part and only shows the ASCII dump, resulting in smaller output that may be easier to read for untrained humans. This option overrides previously used -v, --verbose, or --trace options. If used multiple times, the last option will be used.
  • --trace-time: Prepends a timestamp to each trace or verbose line displayed by curl. Added in version 7.14.0.
  • -u, --user <user:password>: Specifies the username and password for server authentication. Overrides the -n, --netrc and --netrc-optional options. If only the username is specified, curl will prompt for the password. The username and password are split at the first colon, making it impossible to use a colon in the username with this option; the password can still contain a colon. When using Kerberos V5 with a Windows-based server, include the Windows domain in the username so that the server ticket can be obtained successfully; otherwise the initial authentication handshake may fail. When using NTLM, simply specify the username without the domain.
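    For example (the host, username, and password are placeholders), the first command prompts for the password, while the second supplies it inline:
    curl -u alice https://example.com/protected/
    curl -u alice:secret https://example.com/protected/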
  • -U, --proxy-user <user:password>: Specifies the username and password for proxy authentication. If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication, you can tell curl to pick up the username and password from your environment by specifying a single colon with this option: -U :. If used multiple times, the last option will be used.
  • --url <URL>: Specifies the URL to retrieve. This option is convenient when you want to specify the URL in a configuration file. This option can be used any number of times. To control where this URL is written, use the -o, --output or -O, --remote-name options.
  • -v, --verbose: Outputs detailed information, primarily used for debugging. Lines starting with > indicate headers sent by curl, lines starting with < indicate headers received by curl that are normally hidden, and lines starting with * provide additional information from curl. Note that if you only want to include the HTTP headers in the output, you may want the -i, --include option instead. If you feel this option still doesn't provide enough detail, consider using --trace or --trace-ascii. This option overrides previously used --trace-ascii or --trace options. Use -s, --silent to make curl quiet.
  • -w, --write-out <format>: Defines the content to be displayed on standard output upon successful completion of the operation. The format is a string that can contain plain text and any number of variables. The string can be specified as string to read from a specific file, @filename to read from a file, or @- to read from stdin. Variables in the output format, specified as %{variable_name}, will be replaced by appropriate values or text by curl. To output a literal %, write it as %%. You can use \n for a newline, \r for a carriage return, and \t for a tab in the output. Possible variable values include content_type, filename_effective, ftp_entry_path, http_code, http_connect, local_ip, local_port, num_connects, num_redirects, redirect_url, remote_ip, remote_port, size_download, size_header, size_request, size_upload, speed_download, speed_upload, ssl_verify_result, time_appconnect, time_connect, time_namelookup, time_pretransfer, time_redirect, time_starttransfer, time_total, and url_effective.
  • -x, --proxy <[protocol://][user:password@]proxyhost[:port]>: Specifies the use of a proxy with the provided details. The proxy string can have an optional protocol:// prefix to specify the proxy protocol. Use socks4://, socks4a://, socks5://, or socks5h:// to request a specific version of the SOCKS protocol. If no protocol is specified, http:// and all other protocols are assumed to be an http proxy. If the port number is not specified in the proxy string, it defaults to 1080. This option overrides any existing environment variable settings for proxies. If a proxy is set via environment variables, it can be overridden by setting the proxy using this option. All operations performed through an HTTP proxy will be transparently converted to HTTP, which means certain protocol-specific operations may not be available. However, this is not the case if tunneling is done through the proxy (e.g., with the -p or --proxytunnel option). Usernames and passwords provided in the proxy string are URL-decoded by curl, allowing special characters like @ to be passed using %40 or colons using %3a. The proxy host can be specified in the same way as proxy environment variables, including the protocol prefix (http://) and embedded user + password. If this option is used multiple times, the last option will be used.
  • -X, --request <command>: Specifies the custom request method to use when communicating with an HTTP server. The specified request will be used instead of the default GET method. Common additional HTTP requests include PUT and DELETE, while technologies like WebDAV provide functionality such as PROPFIND, COPY, and MOVE. In general, you do not need this option as various GET, HEAD, POST, and PUT requests can be invoked using dedicated command-line options. This option only changes the actual word used in the HTTP request and does not alter the behavior of curl. For example, if you want to make a proper HEAD request, using -X HEAD is not enough; you need to use the -I or --head option.
  • --xattr: When saving output to a file, this option tells curl to store certain file metadata in extended file attributes. Currently, the URL is stored in the xdg.origin.url attribute, and for HTTP the content type is stored in the mime_type attribute. A warning is issued if the file system does not support extended attributes.
  • -y, --speed-time <time>: If the download speed falls below the speed limit (in bytes per second) for the given time period, the download is aborted. If speed-time is used, the default speed limit is 1 unless set with -Y. This option controls transfers and therefore does not affect slow connects and the like; if that is your concern, try the --connect-timeout option. If this option is used multiple times, the last option will be used.
  • -Y, --speed-limit <speed>: If the download speed falls below the given speed in bytes per second for a certain number of seconds, the download will be aborted. The speed time is set by -y and defaults to 30 if not set. If this option is used multiple times, the last option will be used.
  • -z, --time-cond <date expression>|<file>: (HTTP/FTP) Requests a file that has been modified later than the given time and date, or one that has been modified before that time. <date expression> can be all sorts of date strings; if it does not match any internal format, it is treated as a filename and curl tries to use the modification time (mtime) of <file> instead. See the curl_getdate man page for details on date expressions. Start the date expression with a dash (-) to request a document that is older than the given date/time; the default is a document that is newer than the specified date/time (see the example after this list). If this option is used multiple times, the last option will be used.
  • -h, --help: Output help information.
  • -M, --manual: Manual mode, displays detailed help text.
  • -V, --version: Output version information.
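
Some of the options above are easier to see in action. The commands below are minimal sketches; the hosts, file names, and proxy addresses in them are placeholders rather than real endpoints.

Print the status code and total transfer time with -w, --write-out while discarding the response body:

curl -s -o /dev/null -w "code=%{http_code} total=%{time_total}s\n" https://example.com

Route a request through a hypothetical local SOCKS5 proxy with -x, letting the proxy resolve hostnames:

curl -x socks5h://127.0.0.1:1080 https://example.com

Download file.zip with -z only if the remote copy is newer than the local file:

curl -z file.zip -o file.zip https://example.com/file.zip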

Environment Variables

Using environment variables to set proxies has the same effect as using the --proxy option.

  • http_proxy [protocol://]<host>[:port]: Sets the proxy server for HTTP.
  • HTTPS_PROXY [protocol://]<host>[:port]: Sets the proxy server for HTTPS.
  • [url-protocol]_PROXY [protocol://]<host>[:port]: Sets the proxy server to be used for [url-protocol], where the protocol is one supported by curl and specified in the URL. This includes protocols like FTP, FTPS, POP3, IMAP, SMTP, LDAP, etc.
  • ALL_PROXY [protocol://]<host>[:port]: Sets the proxy server to be used if no protocol-specific proxy is set.
  • NO_PROXY <comma-separated list of hosts>: A list of hostnames that should not go through any proxy. If set to an asterisk *, it matches all hosts.
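
For example, in a POSIX shell the variables can be exported before invoking curl; the proxy address below is a placeholder:

export http_proxy=http://127.0.0.1:8080
export NO_PROXY=localhost,127.0.0.1
curl http://example.com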

Proxy Protocol Prefixes

  • socks4://: Equivalent to --socks4.
  • socks4a://: Equivalent to --socks4a.
  • socks5://: Equivalent to --socks5.
  • socks5h://: Equivalent to --socks5-hostname.
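
As an illustration, the following two commands are equivalent ways to send a request through a hypothetical local SOCKS5 proxy while letting the proxy resolve hostnames:

curl --socks5-hostname 127.0.0.1:1080 https://example.com
curl -x socks5h://127.0.0.1:1080 https://example.com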

Exit Codes

  • 1: Unsupported protocol. This version of curl does not support this protocol.
  • 2: Failed to initialize.
  • 3: URL malformed. The syntax was not correct.
  • 4: A feature or option that was needed to perform the desired request was not enabled or was explicitly disabled at build time. To make curl able to do this, you probably need another build of libcurl.
  • 5: Unable to resolve proxy. The given proxy host cannot be resolved.
  • 6: Unable to resolve host, the given remote host was not resolved.
  • 7: Unable to connect to host.
  • 8: Weird FTP server reply. The server sent data curl could not parse.
  • 9: FTP access denied. The server denied login or denied access to the particular resource or directory you wanted to reach. Most often this means you tried to change to a directory that does not exist on the server.
  • 11: Weird FTP PASS reply. curl could not parse the reply sent to the PASS request.
  • 13: Weird FTP PASV reply. curl could not parse the reply sent to the PASV request.
  • 14: Weird FTP 227 format. curl could not parse the 227-line the server sent.
  • 15: FTP can't get host. Could not resolve the host IP obtained in the 227-line.
  • 17: FTP could not set binary. Could not change the transfer method to binary.
  • 18: Partial file. Only a part of the file was transferred.
  • 19: FTP could not download/access the given file. The RETR (or similar) command failed.
  • 21: FTP quote error. A quote command returned an error from the server.
  • 22: HTTP page not retrieved. The requested URL was not found or returned another error with an HTTP error code of 400 or above. This return code only appears when -f, --fail is used.
  • 23: Write error. curl could not write data to a local filesystem or similar.
  • 25: FTP could not store the file. The server denied the STOR operation used for FTP uploading.
  • 26: Read error. Various reading problems.
  • 27: Out of memory. A memory allocation request failed.
  • 28: Operation timeout. The specified timeout period was reached according to the conditions.
  • 30: FTP PORT failed. The PORT command failed. Not all FTP servers support the PORT command; try doing the transfer using PASV instead.
  • 31: FTP could not use REST. The REST command failed; this command is used for resumed FTP transfers.
  • 33: HTTP range error. The range request did not work.
  • 34: HTTP POST error. Internal post-request generation error.
  • 35: SSL connect error. The SSL handshake failed.
  • 36: FTP bad download resume. Could not continue an earlier aborted download.
  • 37: FILE could not read the file. Failed to open the file; probably a permissions issue.
  • 38: LDAP cannot bind. The LDAP bind operation failed.
  • 39: LDAP search failed.
  • 41: Function not found. The required LDAP function could not be found.
  • 42: Aborted by callback. An application told curl to abort the operation.
  • 43: Internal error. A function was called with a bad parameter.
  • 45: Interface error. The specified outgoing interface could not be used.
  • 47: Too many redirects. curl hit the maximum number of redirects while following them.
  • 48: An unknown option was specified to libcurl. This means you passed an odd option to curl that was passed on to libcurl and rejected; check the manual carefully.
  • 49: The telnet option is malformed.
  • 51: The peer's SSL certificate or SSH MD5 fingerprint is incorrect.
  • 52: The server doesn't reply with anything, which is considered an error.
  • 53: SSL encryption engine not found.
  • 54: Unable to set SSL encryption engine as default.
  • 55: Failed to send network data.
  • 56: Failed to receive network data.
  • 58: There is a problem with the local certificate.
  • 59: The specified SSL cipher cannot be used.
  • 60: Peer certificates cannot be authenticated with known CA certificates.
  • 61: Unrecognized transfer encoding.
  • 62: The LDAP URL is invalid.
  • 63: Maximum file size exceeded.
  • 64: The requested FTP SSL level failed.
  • 65: Sending the data required a rewind that failed.
  • 66: Unable to initialize SSL engine.
  • 67: Username, password or similar was not accepted and curl could not log in.
  • 68: File not found on TFTP server.
  • 69: Permissions issue on TFTP server.
  • 70: There is insufficient disk space on the TFTP server.
  • 71: TFTP operation is illegal.
  • 72: Unknown TFTP transfer ID.
  • 73: File already exists (TFTP).
  • 74: No such user (TFTP).
  • 75: Character conversion failed.
  • 76: Character conversion functions are required.
  • 77: There was a problem reading the SSL CA certificate (path), possibly an access permission issue.
  • 78: The resource referenced in the URL does not exist.
  • 79: An unspecified error occurred during the SSH session.
  • 80: Unable to close SSL connection.
  • 82: Could not load the CRL file; it is missing or has the wrong format. Added in version 7.19.0.
  • 83: Issuer check failed. Added in version 7.19.0.
  • 84: FTP PRET command failed.
  • 85: Mismatch of RTSP CSeq numbers.
  • 86: RTSP session identifier mismatch.
  • 87: Unable to parse FTP file list.
  • 88: FTP chunk callback reported an error.
  • 89: No connection available; the session will be queued.
  • XX: More error codes may appear here in future releases; the existing ones will never change.
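
In shell scripts these codes can be read from $? to branch on specific failures. The sketch below uses a placeholder URL and distinguishes a DNS failure (6) from an HTTP error surfaced by -f (22):

curl -fsS -o /dev/null https://example.com
rc=$?
case $rc in
  0)  echo "transfer succeeded" ;;
  6)  echo "could not resolve host" ;;
  22) echo "HTTP error (status 400 or above)" ;;
  *)  echo "curl exited with code $rc" ;;
esac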

Example

Initiate an HTTP request to a website.

curl www.baidu.com

Make curl display a simple progress bar (instead of the default meter showing transfer rate, amount of data transferred, and remaining time) while downloading a file.

curl -# -O ftp://ftp.example.com/file.zip

Specify the network interface for the request.

curl --interface ppp0 192.168.113.131

Download a file to the local machine and name it 1.zip.

curl -o 1.zip ftp://ftp.example.com/file.zip

Make curl follow redirects.

curl -L http://www.google.com

Limit the maximum transfer rate so that it stays close to the given value; the value is in bytes per second by default, and suffixes such as K, M, and G are accepted.

curl --limit-rate 1000K -O ftp://speedtest.tele2.net/1MB.zip

curl also provides options for downloading files from FTP servers requiring user authentication.

curl -u username:password -O ftp://test.rebex.net/readme.txt

Display detailed information such as the connected IP address, request headers, and response headers.

curl -v www.baidu.com

Set request headers and, since the response body is large, save it to baidu.html.

curl -v \
-H "Accept-Language: zh-cn" \
-H "Host: www.baidu.com" \
-H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36" \
-o baidu.html https://www.baidu.com

Daily Question

https://github.com/WindrunnerMax/EveryDay

References

https://www.computerhope.com/unix/curl.htm
https://www.commandlinux.com/man-page/man1/curl.1.html
https://www.geeksforgeeks.org/curl-command-in-linux-with-examples/