
I was googling how to set up the SSL/TLS configuration in my nginx file when I noticed that if I type https://example.com I get the green "connection is secure" message. I was expecting it not to work, as it didn't before I set up Route 53 and CloudFront. I should mention that my whole website is served through CloudFront, not from an S3 bucket.

Here is what my /etc/nginx/conf.d configuration looks like. I was getting ready to add the 443 piece, but it seems it's not needed. Why is this no longer needed; is it because the client connects to the .cloudfront.net domain instead? If I don't need to change anything else, then the only thing missing is to figure out how to always prefer HTTPS so users don't see the "connection is not secure" message.

    server {
        listen 80;                            # origin listens on plain HTTP only
        server_name example.com;

        location / {
            proxy_pass http://localhost:3000; # forward requests to the local app
        }
    }

Dong

1 Answer


In this scenario, you can leave your EC2 instance configured to listen on port 80. In fact, adding TLS to the nginx configuration would only add overhead when serving content, since it would encrypt only the hop where the CloudFront edge locations fetch content from your origin; the viewer-facing connection is already encrypted by CloudFront. However, unless you lock the EC2 instance down so that it is only accessible from, or only serves content to, CloudFront, you could potentially leave yourself open to a man-in-the-middle attack on the content being fed to your CloudFront CDN.

I found this Reddit thread with a few suggestions for locking down your EC2 instance(s) so they are only accessible from CloudFront: https://www.reddit.com/r/aws/comments/82nolm/restricting_access_to_my_origin_webserver_so_only/
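
One approach along those lines is to have CloudFront add a secret custom header to origin requests (configured under Origin custom headers on the distribution) and have nginx reject anything that arrives without it. A minimal sketch, assuming a hypothetical header name X-Origin-Verify and a placeholder secret value that you would generate and rotate yourself:

    server {
        listen 80;
        server_name example.com;

        location / {
            # X-Origin-Verify and its value are placeholders; configure the same
            # header/value pair as an Origin custom header on the CloudFront origin
            if ($http_x_origin_verify != "replace-with-a-long-random-secret") {
                return 403;
            }
            proxy_pass http://localhost:3000;
        }
    }
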

You mentioned that https://example.com is already working for you. I have to assume that means you have added your custom domain, example.com, as an Alternate domain name on your CloudFront distribution and configured a certificate issued by ACM for it. You have then added a CNAME pointing example.com to your distribution's unique CloudFront endpoint, such as e8anx24185c3y.cloudfront.net.
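
If you want to confirm that record from the command line, something like the following works when example.com is a plain CNAME (a Route 53 alias record at the zone apex resolves straight to CloudFront's IP addresses instead of showing a CNAME target; the hostname below is just the example above):

    dig +short CNAME example.com
    # e8anx24185c3y.cloudfront.net.
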

CloudFront caches your content at edge locations. When an end-user requests content from https://example.com, it is returned from the edge location physically closest to them; CloudFront doesn't talk to your server at all during that request unless the closest edge location does not yet have the requested content cached. Even when CloudFront doesn't have the content, the end-user is not redirected to your EC2 instance. Instead, CloudFront fetches the response from your EC2 instance to the edge location and serves that content back to the end-user over the TLS connection established between the viewer and the CloudFront edge location.
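
You can observe this behaviour in the response headers CloudFront adds. For a cold object the first request typically shows a miss (CloudFront went to your origin) and a repeat request shows a hit (served from the edge cache); substitute your real domain for example.com:

    curl -sI https://example.com/ | grep -iE 'x-cache|via'
    # X-Cache: Miss from cloudfront   (first request, fetched from the EC2 origin)
    # X-Cache: Hit from cloudfront    (repeat request, served from the edge cache)
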

To answer your last question, you can force a connection to http://example.com to be upgraded to https://example.com in your CloudFront distribution configuration. This setting lives on a Behavior: browse to your CloudFront distribution, click the Behaviors tab, select your default behavior (or whichever behavior serves the content from your EC2 instance), and click Edit. For the Viewer Protocol Policy, select Redirect HTTP to HTTPS, then click Yes, Edit at the bottom to save your changes.
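
Once that change has deployed, you can verify the upgrade without a browser; with the Redirect HTTP to HTTPS policy you should see something like the following (again substituting your real domain):

    curl -sI http://example.com/
    # HTTP/1.1 301 Moved Permanently
    # Location: https://example.com/
    # Server: CloudFront
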

Aaron St. Clair