How to determine the HTTP status without downloading the complete page?












26














I want to know the HTTP status of websites using Ubuntu.
I have used the curl and wget commands for that purpose, but the problem is that these commands download the complete website page and then search for the header and display it on the screen.
For example:



$ curl -I trafficinviter.com
HTTP/1.1 200 OK
Date: Mon, 02 Jan 2017 14:13:14 GMT
Server: Apache
X-Pingback: http://trafficinviter.com/xmlrpc.php
Link: <http://trafficinviter.com/>; rel=shortlink
Set-Cookie: wpfront-notification-bar-landingpage=1
Content-Type: text/html; charset=UTF-8


The same thing happens with the wget command, where the complete page gets downloaded, unnecessarily consuming my bandwidth.



What I am looking for is how to get the HTTP status code without actually downloading the page, so that I can save bandwidth. I have tried using curl, but I am not sure whether I am downloading the complete page or just the header to get the status code.










command-line wget curl

asked Jan 2 '17 at 14:50 by Jaffer Wilson · edited Dec 11 at 10:16 by muru

  • "tried using curl but not sure is I am downloading complete page or just a header" — curl -v (--verbose) option is a handy way to debug what curl is actually sending & receiving.
    – Beni Cherniavsky-Paskin
    Jan 2 '17 at 19:05










  • I'm afraid I'm downvoting because you already have the solution right there in the question.
    – Lightness Races in Orbit
    Jan 3 '17 at 11:55












  • @LightnessRacesinOrbit I did not know whether my question already contained its answer. I came here for help resolving my confusion. If you still find my question wrong, I accept your downvote. Thank you.
    – Jaffer Wilson
    Jan 3 '17 at 11:59










  • manpages.ubuntu.com/manpages/trusty/en/man1/curl.1.html
    – Lightness Races in Orbit
    Jan 3 '17 at 12:04










  • "these commands download the complete website page" - no, they don't
    – OrangeDog
    Jan 3 '17 at 14:06
















2 Answers
49














curl -I fetches only the HTTP headers; it does not download the whole page. From man curl:



-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature
the command HEAD which this uses to get nothing but the header
of a document. When used on an FTP or FILE file, curl displays
the file size and last modification time only.


Another option is to install lynx and use lynx -head -dump.



The HEAD request is specified by the HTTP 1.1 protocol (RFC 2616):



9.4 HEAD

The HEAD method is identical to GET except that the server MUST NOT
return a message-body in the response. The metainformation contained
in the HTTP headers in response to a HEAD request SHOULD be identical
to the information sent in response to a GET request. This method can
be used for obtaining metainformation about the entity implied by the
request without transferring the entity-body itself. This method is
often used for testing hypertext links for validity, accessibility,
and recent modification.
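
If only the numeric status code is wanted, curl's -w/--write-out option can print it on its own while still sending a HEAD request. This is a minimal sketch added for illustration, not part of the original answer; example.com, the -L redirect-following flag, and the 200 shown as output are assumptions:

# -I sends a HEAD request, -s silences the progress meter, -o /dev/null
# discards the headers, -L follows redirects, and -w '%{http_code}' prints
# only the final status code.
$ curl -s -o /dev/null -I -L -w '%{http_code}\n' https://example.com
200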





answered Jan 2 '17 at 14:54 by AlexP · edited Jan 2 '17 at 22:06



















  • is it possible (within the bounds of the standard.. obviously it's possible) for a HEAD request to return a different status code than a GET?
    – KutuluMike
    Jan 2 '17 at 21:42






  • @KutuluMike: Edited the answer to provide the requested information. In the words of the RFC, it SHOULD provide the same metainformation.
    – AlexP
    Jan 2 '17 at 22:06










  • @duskwuff: Then a HEAD request SHOULD return the same 405.
    – AlexP
    Jan 3 '17 at 20:20










  • @AlexP My mistake. Never mind!
    – duskwuff
    Jan 3 '17 at 20:22



















18














With wget, you need to use the --spider option to send a HEAD request like curl:



$ wget -S --spider https://google.com
Spider mode enabled. Check if remote file exists.
--2017-01-03 00:08:38-- https://google.com/
Resolving google.com (google.com)... 216.58.197.174
Connecting to google.com (google.com)|216.58.197.174|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: https://www.google.co.jp/?gfe_rd=cr&ei=...
Content-Length: 262
Date: Mon, 02 Jan 2017 15:08:38 GMT
Alt-Svc: quic=":443"; ma=2592000; v="35,34"
Location: https://www.google.co.jp/?gfe_rd=cr&ei=... [following]
Spider mode enabled. Check if remote file exists.
--2017-01-03 00:08:38-- https://www.google.co.jp/?gfe_rd=cr&ei=...
Resolving www.google.co.jp (www.google.co.jp)... 210.139.253.109, 210.139.253.93, 210.139.253.123, ...
Connecting to www.google.co.jp (www.google.co.jp)|210.139.253.109|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Mon, 02 Jan 2017 15:08:38 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=Shift_JIS
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=...; expires=Tue, 04-Jul-2017 15:08:38 GMT; path=/; domain=.google.co.jp; HttpOnly
Alt-Svc: quic=":443"; ma=2592000; v="35,34"
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding
Length: unspecified [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
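
If only the status code is needed from that output, the HTTP status lines can be filtered out of wget's log, which is written to stderr (hence the 2>&1). A rough sketch based on the log format shown above; example.com and the 200 output are illustrative assumptions:

# Keep the last "HTTP/..." status line (the one after any redirects) and
# print its second field, which is the status code.
$ wget -S --spider https://example.com 2>&1 | grep 'HTTP/' | tail -n 1 | awk '{print $2}'
200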





answered Jan 2 '17 at 15:09 by muru · edited Mar 20 '17 at 10:18 by Community























  • Don't you think, my friend, that wget will fetch the complete page and then display the header?
    – Jaffer Wilson
    Jan 3 '17 at 5:12










  • @JafferWilson read the last few lines of the output.
    – muru
    Jan 3 '17 at 5:21
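
Following up on that exchange: wget's exit status also reflects the check without any need to parse the log, since --spider does not retrieve the page body. A small sketch; the URL is illustrative, and relying on exit code 8 assumes the documented wget behaviour of returning 8 when the server issues an error response:

# Exit status 0 means the remote file exists; 8 means the server returned
# an error response such as 404.
$ wget -q --spider https://example.com && echo "OK" || echo "server returned an error"
OK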










