I tested this with a simple bash script against a domain I have running behind an ELB:
S='a'
URL='http://example.com/?foo='
while true; do
  echo "$URL$S" | wc -c
  curl -I "$URL$S"
  S=$S$S
done
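As a side note, `echo | wc -c` counts the trailing newline, so the lengths printed below are one byte over the actual URL length. A minimal sketch of the same doubling logic with an exact count, using `${#var}` instead of `wc` (example.com stands in for the real test domain, and the loop is capped rather than infinite):

```shell
URL='http://example.com/?foo='
S='a'
for i in 1 2 3; do
  FULL="$URL$S"
  # ${#FULL} is the exact character count, with no trailing-newline issue
  echo "payload ${#S} bytes, full URL ${#FULL} bytes"
  # double the query payload each pass, as in the test loop above
  S=$S$S
done
```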
This worked fine for a while:
2081
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Tue, 05 Feb 2013 15:01:44 GMT
Server: Apache/2.2.22 (Ubuntu)
Vary: Accept-Encoding
Connection: keep-alive
4129
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Tue, 05 Feb 2013 15:01:46 GMT
Server: Apache/2.2.22 (Ubuntu)
Vary: Accept-Encoding
Connection: keep-alive
But it failed once the request crossed the 8 KB mark (Apache's default LimitRequestLine is 8190 bytes):
8225
HTTP/1.1 414 Request-URI Too Large
Content-length: 337
Content-Type: text/html; charset=iso-8859-1
Date: Tue, 05 Feb 2013 15:01:47 GMT
Server: Apache/2.2.22 (Ubuntu)
Vary: Accept-Encoding
Connection: keep-alive
16417
HTTP/1.1 414 Request-URI Too Large
Content-length: 337
Content-Type: text/html; charset=iso-8859-1
Date: Tue, 05 Feb 2013 15:01:47 GMT
Server: Apache/2.2.22 (Ubuntu)
Vary: Accept-Encoding
Connection: keep-alive
Apache logged the failing requests to a different file, because the GET string arrives before the Host: header and so Apache never determined which vhost to use. Nonetheless, it was still Apache responding, and not the ELB, even with over 128 KB in a single GET string; the full 128 KB request was logged in the default Apache log file. After 256 KB, curl itself failed to process the request.
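Since the 414 is coming from Apache rather than the ELB, the cutoff can presumably be raised on the Apache side via the LimitRequestLine directive. A sketch (the 16380 value is purely illustrative, and the config path is the Ubuntu default):

```apache
# In the server config (e.g. /etc/apache2/apache2.conf on Ubuntu).
# LimitRequestLine bounds the whole request line, including the
# method, URL, query string and protocol; the default is 8190 bytes.
LimitRequestLine 16380
```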
It doesn't look like there's any URL length limit in Amazon ELBs.