
We have a Cloudfront distribution with custom origin that has been working just fine for quite a long time, serving static assets for one of our sites. Just this morning, we noticed that our logo was displaying as a broken link.

Upon further investigation, Cloudfront is returning a strange error message that I've never seen before for the URL in question:

ERROR
The request could not be satisfied.
Generated by cloudfront (CloudFront)

Several other Cloudfront URLs from this distribution return the same error, but then others (again, from the same distribution) are working just fine. I don't see a pattern to what works and what doesn't.

Some other data points:

  • The origin URLs work just fine. There's been no recent interruption in service, to my knowledge.
  • I've invalidated the logo URL specifically, to no effect.
  • I've invalidated the root URL of the distribution, to no effect.

Any idea what's going on here? I've never seen Cloudfront do this before.

UPDATE:

Here's the verbatim HTTP response from Cloudfront:

$ http GET https://d2yu7foswg1yra.cloudfront.net/static/img/crossway_logo.png
HTTP/1.1 502 Bad Gateway
Age: 213
Connection: keep-alive
Content-Length: 472
Content-Type: text/html
Date: Wed, 18 Dec 2013 17:57:46 GMT
Server: CloudFront
Via: 1.1 f319e8962c0268d31d3828d4b9d41f98.cloudfront.net (CloudFront)
X-Amz-Cf-Id: H_HGBG3sTOqEomHzHubi8ruLbGXe2MRyVhGBn4apM0y_LjQa_9W2Jg==
X-Cache: Error from cloudfront

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">

<BR clear="all">
<HR noshade size="1px">
<ADDRESS>
Generated by cloudfront (CloudFront)
</ADDRESS>
</BODY></HTML>
Eddie Fletcher
David Eyk
  • Interesting.... I just created my first distribution (no custom CNAME) and am getting the same thing. Started with everything basic, but no luck yet. – Shawn Dube Dec 18 '13 at 18:35
  • Yes, I created a new distribution to test with, and same thing. :\ – David Eyk Dec 18 '13 at 19:10
  • I had a similar issue, although I got a 504 gateway time out for some static files from a CloudFront distribution. I realised that I had enabled `pglcmd` which was blocking IP ranges through iptables. I still don't know why CloudFront was checking for these files, which have expiration headers set for one year. – paradroid Mar 27 '14 at 13:21

18 Answers


I had this error today with Amazon CloudFront. It was because the CNAME I used (e.g. cdn.example.com) was not added to the distribution settings under "Alternate Domain Names (CNAMEs)". I only had cdn.example.com forwarded to the CloudFront domain in my site/hosting control panel, but you need to add it in the Amazon CloudFront panel too.
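One way to double-check this, assuming you have output from `aws cloudfront get-distribution-config` handy: the JSON below mimics the relevant shape of that output (the values are made up), and the snippet checks whether the hostname your DNS record actually serves is listed under Aliases.

```shell
# Trimmed, hypothetical shape of `aws cloudfront get-distribution-config` output.
# Your DNS can point at dxxxx.cloudfront.net all day long -- the hostname must
# ALSO appear under Aliases.Items, or CloudFront refuses the request.
cat > /tmp/distro.json <<'EOF'
{"DistributionConfig": {"Aliases": {"Quantity": 1, "Items": ["cdn.example.com"]}}}
EOF
python3 - <<'EOF' | tee /tmp/alias_check.txt
import json
cfg = json.load(open("/tmp/distro.json"))
aliases = cfg["DistributionConfig"]["Aliases"]["Items"]
host = "cdn.example.com"  # the name your CNAME record actually serves
print(f"{host}: registered" if host in aliases else f"{host}: MISSING from distribution")
EOF
```

Run the same check against the real CLI output for your distribution to confirm the alias is registered.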

adrianTNT
  • In my case it was a typo in the CNAME. Duh! – maksimov Dec 25 '14 at 03:07
  • After adding the alternate CNAME, it takes a while for the CloudFront Distribution Status (under the 'General' tab) to go from 'InProgress' to 'Deployed'. During this time you'll still get a similar CloudFront error message. Took about half an hour in my case. – idoimaging Jan 19 '17 at 20:48
  • To change the CNAME: go to the `CloudFront` section => select your distribution => `General` => `Edit` => add your CNAME under `Alternate Domain Names (CNAMEs)`. – seuling Jul 12 '18 at 06:16

I had a similar issue recently, which turned out to be due to the ssl_ciphers I was using.

From http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html,

"CloudFront forwards HTTPS requests to the origin server using the SSLv3 or TLSv1 protocols and the AES128-SHA1 or RC4-MD5 ciphers. If your origin server does not support either the AES128-SHA1 or RC4-MD5 ciphers, CloudFront cannot establish an SSL connection to your origin. "

I had to change my nginx config to add AES128-SHA (replacing the deprecated RC4:HIGH) to ssl_ciphers to fix the 502 error. I hope this helps. I have pasted the line from my ssl.conf:

ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:AES128-SHA:!ADH:!AECDH:!MD5;
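If you want to sanity-check such a cipher string before reloading nginx, you can expand it locally with openssl and confirm the cipher CloudFront needs actually survives the exclusions (exact output varies by OpenSSL version):

```shell
# The same cipher string as in ssl.conf above:
CIPHERS='ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:AES128-SHA:!ADH:!AECDH:!MD5'
# Expand it into the concrete cipher list and check that AES128-SHA is present:
openssl ciphers "$CIPHERS" | tr ':' '\n' | grep AES128-SHA | tee /tmp/ciphers.txt
```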
dminer
  • This appears to be the correct solution to my particular problem, though it seems there may be multiple ways to get that error message, reflected by the other answers here. – David Eyk Jun 25 '14 at 23:29
  • You may wish to use `AES128-SHA` instead of `RC4:HIGH`. Using `RC4:HIGH` downgraded my [Qualys SSL Labs](https://www.ssllabs.com/ssltest/analyze.html) test score from an A to a C. – David Eyk Jun 25 '14 at 23:39
  • This fixed my issue as well. However, I had to use `AES128-SHA1`, with the "1"; without the 1 it didn't work for me. Also, AWS recommends using `TLSv1` rather than `SSLv3`, which is less secure. – bmorenate Dec 17 '16 at 00:13

Found my answer and adding it here in case it helps David (and others).

Turns out my origin server (say www.example.com) had a 301 redirect setup on it to change HTTP to HTTPS:

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/images/Foo_01.jpg

However, my Origin Protocol Policy was set to HTTP only. This caused CloudFront to not find my file and throw a 502 error. Additionally, I think it cached the 502 error for 5 min or so as it didn't work immediately after removing that 301 redirect.

Hope that helps!
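You can spot this from the outside with `curl -I` against the origin. The sketch below stands up a throwaway local server that 301-redirects the way the misconfigured origin did (host, path, and port are made up), then inspects it the same way you would inspect the real origin:

```shell
# Throwaway stand-in for an origin that redirects HTTP to HTTPS:
python3 - <<'EOF' >/dev/null 2>&1 &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirect(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(301)
        self.send_header("Location", "https://www.example.com/images/Foo_01.jpg")
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

HTTPServer(("127.0.0.1", 18301), Redirect).serve_forever()
EOF
SRV=$!; sleep 1
# This is what CloudFront receives when Origin Protocol Policy is "HTTP Only":
curl -sI http://127.0.0.1:18301/images/Foo_01.jpg | tee /tmp/origin_headers.txt \
  | grep -Ei '^(HTTP|Location)'
kill $SRV 2>/dev/null
```

If your real origin answers a plain-HTTP `curl -I` with a 301 to HTTPS, an "HTTP Only" origin policy will turn that into a 502 at the CloudFront edge.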

Leo Correa
Shawn Dube
  • Hm! Not quite my situation, but I bet that's really close. – David Eyk Dec 18 '13 at 22:14
  • You sure? I ran your URL over HTTP and got: HTTP/1.1 301 Moved Permanently Server: nginx/0.7.65 Date: Thu, 19 Dec 2013 05:00:15 GMT Content-Type: text/html Content-Length: 185 Connection: keep-alive Location: https://my.crossway.org/static/img/crossway_logo.png – Shawn Dube Dec 19 '13 at 05:01
  • But I'm not using `HTTP Only` in my Origin Policy, I'm using `Match Viewer`. The strange thing is, it worked just fine until recently. – David Eyk Dec 19 '13 at 17:02
  • I think even if you use 'match viewer' it can cause an issue, even if it shouldn't. I hit the same thing where I had a 301 on the homepage and it broke that page for the 5 minute cache length. – Peter Dec 19 '14 at 23:24

In our case, everything LOOKED ok, but it took most of the day to figure this out:

TLDR: Check your certificate paths to make sure the root certificate is correct. In the case of COMODO certificates, it should say "USERTrust" and be issued by "AddTrust External CA Root". NOT "COMODO" issued by "COMODO RSA Certification Authority".

From the CloudFront docs: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html

If the origin server returns an invalid certificate or a self-signed certificate, or if the origin server returns the certificate chain in the wrong order, CloudFront drops the TCP connection, returns HTTP error code 502, and sets the X-Cache header to Error from cloudfront.

We had the right ciphers enabled as per: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#RequestCustomEncryption

Our certificate was valid according to Google, Firefox and ssl-checker: https://www.sslshopper.com/ssl-checker.html

[Screenshot: SSL Checker result without all required certificates]

However the last certificate in the ssl checker chain was "COMODO RSA Domain Validation Secure Server CA", issued by "COMODO RSA Certification Authority"

It seems that CloudFront does not hold the certificate for "COMODO RSA Certification Authority" and as such thinks the certificate provided by the origin server is self signed.

This was working for a long time before apparently suddenly stopping. What happened was I had just updated our certificates for the year, but during the import, something was changed in the certificate path for all the previous certificates. They all started referencing "COMODO RSA Certification Authority" whereas before the chain was longer and the root was "AddTrust External CA Root".

[Screenshot: bad certificate path]

Because of this, switching back to the older cert did not fix the cloudfront issue.

I had to delete the extra certificate named "COMODO RSA Certification Authority", the one that did not reference AddTrust. After doing this, all my website certificates' paths updated to point back to AddTrust/USERTrust again. Note: you can also open the bad root certificate from the path, click "Details" -> "Edit Properties", and disable it that way; this updated the path immediately. You may also need to delete multiple copies of the certificate, found under "Personal" and "Trusted Root Certification Authorities".

[Screenshot: good certificate path]

Finally, I had to re-select the certificate in IIS to get it to serve the new certificate chain.

After all this, ssl-checker started displaying a third certificate in the chain, which pointed back to "AddTrust External CA Root"

[Screenshot: SSL Checker with all certificates]

Finally, CloudFront accepted the origin server's certificate and the provided chain as being trusted. Our CDN started working correctly again!

To prevent this happening in the future, we will need to export our newly generated certificates from a machine with the correct certificate chain, i.e. distrust or delete the certificate "COMODO RSA Certification Authority" issued by "COMODO RSA Certification Authority" (expiring in 2038). This only seems to affect Windows machines, where this certificate is installed by default.
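One way to catch this class of problem from the command line is to dump the subject and issuer of every certificate in the bundle your server is configured to send. The demo below generates a throwaway self-signed cert just so the commands run standalone; point `-certfile` at your real chain file instead:

```shell
# Throwaway cert purely for demonstration (any real chain file works here):
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=demo.example.com" 2>/dev/null
# List subject and issuer for each certificate, in the order served. In a
# healthy chain, every issuer matches the subject of the next certificate,
# ending at a root CloudFront trusts (for COMODO: AddTrust External CA Root).
openssl crl2pkcs7 -nocrl -certfile /tmp/demo.crt \
  | openssl pkcs7 -print_certs -noout | tee /tmp/chain_dump.txt
```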

Eddie Fletcher

One more possible solution: I have a staging server that serves the site and the CloudFront assets over HTTP. I had my origin set to "Match Viewer" instead of "HTTP Only". I also use the HTTPS Everywhere extension, which redirected all the http://*.cloudfront.net URLs to the https://* version. Since the staging server isn't available over SSL and CloudFront was matching the viewer, it couldn't fetch the assets over HTTPS and cached a bunch of 502s instead.
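You can reproduce what CloudFront ran into by pointing an HTTPS request at a port that only speaks plain HTTP. This sketch fakes the non-SSL staging server with Python's built-in web server (the port is arbitrary):

```shell
# A stand-in for a staging server that only speaks plain HTTP:
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!; sleep 1
# "Match Viewer" makes CloudFront attempt HTTPS here; the handshake fails,
# and CloudFront caches a 502 for the object:
if curl -sk https://127.0.0.1:18080/ >/dev/null 2>&1; then
  echo "unexpected: TLS worked"
else
  echo "TLS handshake failed -> CloudFront would serve a 502"
fi | tee /tmp/matchviewer.txt
kill $SRV 2>/dev/null
```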

Devin

I just went through troubleshooting this issue, and in my case it was indeed related to redirects, but not to incorrect settings in my CloudFront Origin or Behavior. This will happen if your origin server still redirects to origin URLs rather than to your CloudFront URLs. It seems very common if you forget to change configs. For example, say you have www.yoursite.com as a CNAME to your CloudFront distribution, with an origin of www.yoursiteorigin.com. Obviously people will come to www.yoursite.com, but if your code tries to redirect to any page on www.yoursiteorigin.com, you WILL get this error.

For me, my origin was still doing the http->https redirects to my origin URLs and not my Cloudfront URLs.

Peter

In my case, it was because we had an invalid SSL cert. The problem was on our staging box, where we use our prod cert as well. It had worked for the past couple of years with this configuration, but all of a sudden we started getting this error. Strange.

If others are getting this error, check that the SSL certificate is valid. You can enable logging to S3 via the AWS CloudFront Distribution interface to aid debugging.

Also, you can refer to amazon's docs on the matter here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html

Peter P.

I ran into this problem, which resolved itself after I stopped using a proxy. Maybe CloudFront is blacklisting some IPs.

lid

Fixed this issue by concatenating my certificates to generate a valid certificate chain (using GoDaddy Standard SSL + Nginx).

http://nginx.org/en/docs/http/configuring_https_servers.html#chains

To generate the chain:

cat 123456789.crt gd_bundle-g2-g1.crt > my.domain.com.chained.crt

Then:

ssl_certificate /etc/nginx/ssl/my.domain.com.chained.crt;
ssl_certificate_key /etc/nginx/ssl/my.domain.com.key;

Hope it helps!

Pedro

The problem, in my case, was that I was using Cloudflare and Amazon's CloudFront in tandem, and CloudFront did not like the settings that I had provided to Cloudflare.

More specifically, in the Crypto settings on Cloudflare, I had set the "Minimum TLS Settings" to 1.2, without enabling the TLS 1.2 communication setting for the distribution in Cloudfront. This was enough to make Cloudfront declare a 502 Bad Gateway error when it tried to connect to the Cloudflare-protected server.

To fix this, I had to disable SSLv3 support in the Origin Settings for that Cloudfront distribution, and enable TLS 1.2 as a supported protocol for that origin server.

To debug this problem, I used command-line versions of curl, to see what Cloudfront was actually returning when you asked for an image from its CDN, and I also used the command-line version of openssl, to determine exactly which protocols Cloudflare was offering (it wasn't offering TLS 1.0).

tl;dr: make sure everything accepts and asks for TLS 1.2, or whatever the latest and greatest TLS everyone is using by the time you read this.

johnwbyrd

For my particular case, it was due to the fact that the origin ALB behind my CloudFront behavior had a default ACM certificate that was issued for a different domain name.

To fix this I had to:

  1. Go to the ALB.
  2. Under the Listeners tab, select your Listener and then click Edit.
  3. Under Default SSL Certificate, choose the correct origin certificate.
Paco G

Beware the Origin Protocol Policy:

For HTTPS viewer requests that CloudFront forwards to this origin, one of the domain names in the SSL certificate on your origin server must match the domain name that you specify for Origin Domain Name. Otherwise, CloudFront responds to the viewer requests with an HTTP status code 502 (Bad Gateway) instead of returning the requested object.

In most cases, you probably want CloudFront to use "HTTP Only", since it fetches objects from a server probably hosted with Amazon too. No need for additional HTTPS complexity at this step.

Note that this is different to the Viewer Protocol Policy. You can read more about the differences between the two here.

Paul Razvan Berg

In my case, I use nginx as a reverse proxy for an API Gateway URL, and I got the same error.

I resolved the issue when I added the following two lines to the Nginx config:

proxy_set_header Host "XXXXXX.execute-api.REGION.amazonaws.com";
proxy_ssl_server_name on;

Source is here: Setting up proxy_pass on nginx to make API calls to API Gateway
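For context, a hedged sketch of the server block those two lines typically live in; the API ID, region, stage path, and server_name are all placeholders:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;   # placeholder public hostname

    location / {
        # "/prod/" is a placeholder API Gateway stage
        proxy_pass https://XXXXXX.execute-api.REGION.amazonaws.com/prod/;
        # API Gateway routes requests by Host header...
        proxy_set_header Host "XXXXXX.execute-api.REGION.amazonaws.com";
        # ...and requires SNI on the upstream TLS connection
        proxy_ssl_server_name on;
    }
}
```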

Adil

In our case, we had dropped support for SSLv3, TLS 1.0, and TLS 1.1 for PCI-DSS compliance on our origin servers. However, you have to manually add support for TLS 1.1+ in your CloudFront origin config. The AWS console displays the client-to-CloudFront SSL settings, but does not show the CloudFront-to-origin settings until you drill down. To fix, in the AWS console under CloudFront:

  1. Click DISTRIBUTIONS.
  2. Select your distro.
  3. Click ORIGINS tab.
  4. Select your origin server.
  5. Click EDIT.
  6. Select all protocols that your origin supports under "Origin SSL Protocols"
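A way to verify the result from the outside is to probe the origin with `openssl s_client`, which fails to handshake when you request a protocol the server doesn't offer. The sketch below pins a throwaway local server to TLSv1.2 so it runs standalone; against a real origin you'd use `your-origin-host:443` instead:

```shell
# Throwaway cert and a local TLS server pinned to TLSv1.2:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/probe.key \
  -out /tmp/probe.crt -days 1 -subj "/CN=localhost" 2>/dev/null
openssl s_server -accept 14433 -cert /tmp/probe.crt -key /tmp/probe.key \
  -tls1_2 -quiet >/dev/null 2>&1 &
SRV=$!; sleep 1
# The handshake succeeds only when the requested protocol is enabled;
# try -tls1, -tls1_1, -tls1_2 in turn against your origin:
echo | openssl s_client -connect 127.0.0.1:14433 -tls1_2 2>/dev/null \
  > /tmp/tls_probe.txt
grep "Protocol" /tmp/tls_probe.txt
kill $SRV 2>/dev/null
```

If a probe for a protocol CloudFront is configured to use fails while others succeed, the "Origin SSL Protocols" list is the first place to look.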
leiavoia

Make sure you have correctly configured your SSL/TLS/cipher settings. CloudFront will drop HTTPS connections if your origin server does not handshake with the proper TLS ciphers.

I recommend the following settings:

# Apache -- two alternatives; use one SSLCipherSuite and one SSLProtocol line.
# Stricter (TLS 1.2+ only):
SSLCipherSuite 'ECDHE+AES:@STRENGTH:+AES256'
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
# More compatible with older clients:
SSLCipherSuite 'ECDHE+AES:DHE+AES:@STRENGTH:+AES256:kRSA+3DES'
SSLProtocol all -SSLv3
SSLHonorCipherOrder on

# Nginx -- the same two alternatives (note that nginx's ssl_protocols takes
# an explicit protocol list, not Apache's "all -PROTO" syntax).
# Stricter (TLS 1.2+ only):
ssl_ciphers 'ECDHE+AES:@STRENGTH:+AES256';
ssl_protocols TLSv1.2 TLSv1.3;
# More compatible with older clients:
ssl_ciphers 'ECDHE+AES:DHE+AES:@STRENGTH:+AES256:kRSA+3DES';
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

The @STRENGTH directive will sort the ciphers in order of strength.

SSLHonorCipherOrder on (apache) and ssl_prefer_server_ciphers on (Nginx) will ensure that the order is respected.

You can view the complete list of ciphers your version of openssl supports by running this command in the shell:

$ openssl ciphers -v 'ALL'

You can also list the available ciphers for the above suggested directives in strength order in a similar fashion:

$ openssl ciphers -v 'ECDHE+AES:@STRENGTH:+AES256'

If you are having such issues, I strongly suggest you increase the verbosity of your web server logs, in particular, pertaining to SSL.

In Apache, you may do this with the following directive in your conf file:

LogLevel debug

Here is a list of possible Apache LogLevel directives:

emerg (emergencies - system is unusable)
alert (action must be taken immediately)
crit (critical conditions)
error (error conditions)
warn (warning conditions)
notice (normal but significant condition)
info (informational)
debug (debug-level messages)
trace1 (trace messages)
trace2 (trace messages)
trace3 (trace messages)
trace4 (trace messages)
trace5 (trace messages)
trace6 (trace messages)
trace7 (trace messages, dumping large amounts of data)
trace8 (trace messages, dumping large amounts of data)

Generally speaking, debug will be sufficient for you to identify the negotiation issue (it might be a cipher issue, it might be a cert issue)

A typical reaction from CloudFront not being able to successfully negotiate SSL with your origin server would look something like this:

[ssl:info] [pid 25091] [client xxx.xxx.xxx.xxx:15078] AH01964: Connection to child 1 established (server example.com:443)
[ssl:debug] [pid 25091] ssl_engine_kernel.c(2372): [client xxx.xxx.xxx.xx:15078] AH02043: SSL virtual host for servername example.com found
[ssl:debug] [pid 25091] ssl_engine_io.c(1368): (104)Connection reset by peer: [client xxx.xxx.xxx.xxx:15078] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]
[ssl:info] [pid 25091] [client xxx.xxx.xxx.xxx.15078] AH01998: Connection closed to child 1 with abortive shutdown (server example.com:443)

In this case, Apache interprets CloudFront's dropping of the connection as "Hint: Stop button pressed in browser?!". Yeah, kinda comical.

On a side note, CloudFront can be a tricky beast. If you force HTTPS or redirect HTTP to HTTPS in your distribution, you must ensure that the communication between CloudFront and your origin server is running over a valid SSL connection with a valid SSL certificate. This may mean obtaining a certificate over Amazon ACM by linking it over an Elastic Load Balancer (ELB).

... or you may also achieve the desired results by having the ACM certificate linked to your distribution (make sure you have both the apex domain and subdomains listed, ex: example.com and *.example.com). This will propagate the ACM certificate to your origin/target in the distribution; however, the Apache/Nginx server on your origin will have to have a valid and working SSL certificate, even if it is generated by something like Letsencrypt/Certbot (or some other valid non-self-signed certificate). Make sure you have the full chain configured in your Apache *.conf setup.

You may read more about issues pertaining to 502 Bad Gateway or 502 Could Not Satisfy The Request from AWS, as well as about requiring HTTPS for communication between CloudFront and your custom origin (this may also help). This information about ciphers was extremely helpful, as was the list of supported ciphers on AWS.

Some of these commands may also be useful for debugging your SSL situation:

openssl s_client -connect test.example.com:443 -tls1_1

(you may experiment using -tls1_2 , -tls1_3 or other protocols)

You may also want to try out using the http command line tool (you might have to install it), which would give you an output like such:

$ http https://example.com --head

HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Content-Length: 251
Content-Type: text/html; charset=iso-8859-1
Date: Mon, 20 Dec 2021 19:20:15 GMT
Location: https://example.com
Server: Apache
Strict-Transport-Security: max-age=63072000; includeSubdomains; preload
Via: 1.1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxx.cloudfront.net (CloudFront)
X-Amz-Cf-Id: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
X-Amz-Cf-Pop: GIG51-C2
X-Cache: Miss from cloudfront
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN

Good luck debugging, and remember: keep digging deeper, don't give up, and increase the verbosity of your logs!

Maximo Migliari

In my case, it was because I didn't have nginx configured to listen on port 80 and forward to the port of the Node app.

Santy87
  • As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Jan 19 '23 at 15:46

For me, I had to change the Origin request policy from None to AllViewer.


GkAm1

If everything else looks correct, another reason to get a 502 can be an SSL handshake error caused by a mismatch between the viewer domain name (and its certificate) set on your CloudFront distribution and the domain name (and its certificate) of your origin.

For example, if you have a custom domain name set for your CloudFront distribution my-app.domain.com and the following cache behaviors:

  • /api/* with an origin pointing to the AWS API Gateway [ID].execute-api.us-west-2.amazonaws.com
  • /* (Default) pointing to a frontend app hosted on Vercel my-app.vercel.app

then in both cases the SSL handshake will fail with a 502.

What happens is that the viewer makes a request to your custom domain my-app.domain.com, the client sets the Host header to my-app.domain.com, and in both cases the origin server then tries to find a certificate for that domain to establish the connection. Since no such certificate is present on either origin server, the connection fails in both cases.

One way to fix this is by attaching the Managed-AllViewerExceptHostHeader request policy to both the /api/* and the /* routes. This instructs CloudFront to remove the original Host header value set to my-app.domain.com and replace it with either the [ID].execute-api.us-west-2.amazonaws.com or my-app.vercel.app respectively. This in turn allows the origin to find the correct certificate and establish the connection.

One last peculiarity is that the request policy requires you to have a cache policy set as well. For the Vercel app in particular it is a good idea to use the Managed-CachingDisabled cache policy because the Vercel platform has its own caching layer and it automatically invalidates the cache after every deployment.

simo