Contrary to the accepted answer, you can, and often you should, put CloudFront in front of your application, not just in front of static files.
To clarify, this is the setup I'm talking about:
example.com -> CDN -> origin (ALB, API Gateway, S3 ...)
Why would you do that?
1. Performance reasons
1.1 TLS termination
You can use CloudFront to terminate TLS, meaning that your users make a secure connection to CloudFront, and CloudFront then communicates with the origin over plain HTTP.
How does this help? Instead of your Australian users doing the TCP and TLS handshakes with your server in Germany, they'll do them with CloudFront's closest PoP, probably in Melbourne. You can see how this lowers latency.
You might be wondering whether it is okay to transfer data over plain HTTP from CloudFront to the origin.
AWS will still encrypt your data:
All data flowing across AWS Regions over the AWS global network is automatically encrypted at the physical layer before it leaves AWS secured facilities.
They just do it at OSI layer 1, the physical layer, rather than above layer 4, where TLS operates.
Is this enough for you? If you are not dealing with regulators or highly sensitive data, probably yes. If AWS's security measures fail and somebody manages to intercept traffic inside an AWS data centre, you are probably not the attacker's target.
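To make this concrete, here is a minimal sketch in AWS CDK (Python). The hostname is a placeholder for your real origin (an ALB DNS name, for example); the point is simply HTTPS from the viewer to the edge and plain HTTP from the edge to the origin:

```python
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins

default_behavior = cloudfront.BehaviorOptions(
    origin=origins.HttpOrigin(
        "app-alb.example.internal",  # placeholder origin hostname
        # CloudFront -> origin over plain HTTP
        protocol_policy=cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    ),
    # viewer -> CloudFront over HTTPS
    viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
)
# Pass this object as default_behavior when creating the cloudfront.Distribution.
```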
1.2 Connection reuse from PoP to Origin
Quote from the CloudFront FAQ:
For files not cached at the edge locations and the regional edge caches, Amazon CloudFront keeps persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible.
In other words, when a user requests a file that is not cached, CloudFront opens a connection to the origin (if one is not already open) and keeps that connection alive. Any subsequent uncached request, from any user, then experiences lower latency because CloudFront reuses that same TCP connection to the origin.
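The origin keep-alive window is also configurable if you want those connections reused more aggressively. A small sketch, again AWS CDK in Python; the hostname and values are placeholders, and the default keep-alive for custom origins is 5 seconds:

```python
from aws_cdk import Duration
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins

app_origin = origins.HttpOrigin(
    "app-alb.example.internal",              # placeholder origin hostname
    protocol_policy=cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    keepalive_timeout=Duration.seconds(60),  # keep idle PoP -> origin connections open longer
    read_timeout=Duration.seconds(30),       # how long to wait for the origin response
)
```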
1.3 Use of AWS Backbone
Traffic from the PoP to the origin travels over the optimized AWS global network instead of the public internet. AWS Docs
2. Security reasons
CloudFront (or another CDN provider such as Cloudflare) can help you absorb some of the requests from a DDoS attack and protect your origin, especially if you turn on AWS WAF and AWS Shield Advanced.
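Shield Standard is on by default and Shield Advanced is an account-level subscription, so on the distribution itself the main wiring is attaching a WAF web ACL. A rough sketch with placeholder hostname and ACL ARN; a web ACL used with CloudFront has to be a WAFv2 ACL created in us-east-1 with the CLOUDFRONT scope:

```python
from aws_cdk import Stack
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins
from constructs import Construct

class ProtectedEdgeStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        cloudfront.Distribution(
            self, "ProtectedDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.HttpOrigin("app-alb.example.internal"),
            ),
            # ARN of an existing WAFv2 web ACL (placeholder value)
            web_acl_id="arn:aws:wafv2:us-east-1:123456789012:global/webacl/app-acl/EXAMPLE-ID",
        )
```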
SUMMARY
You can benefit from CloudFront even for routes/files that are not cacheable.
My go-to cache setup is the following: I cache the /img/*, /css/* and /js/* routes, and all other requests I just proxy to the origin over HTTP.
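If it helps, this is roughly what that setup looks like in AWS CDK (Python). The origin hostname is a placeholder and the managed cache policies are just one reasonable choice; the point is that the static path patterns get cached while everything else is passed through uncached, with the full viewer request forwarded:

```python
from aws_cdk import Stack
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins
from constructs import Construct

class AppEdgeStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Placeholder origin; TLS is terminated at the edge.
        app_origin = origins.HttpOrigin(
            "app-alb.example.internal",
            protocol_policy=cloudfront.OriginProtocolPolicy.HTTP_ONLY,
        )

        # Behaviour shared by the cacheable static routes.
        cached = cloudfront.BehaviorOptions(
            origin=app_origin,
            viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            cache_policy=cloudfront.CachePolicy.CACHING_OPTIMIZED,
        )

        cloudfront.Distribution(
            self, "AppDistribution",
            # Everything that doesn't match a path pattern below is proxied
            # uncached, with the full viewer request forwarded to the origin.
            default_behavior=cloudfront.BehaviorOptions(
                origin=app_origin,
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
                allowed_methods=cloudfront.AllowedMethods.ALLOW_ALL,
                cache_policy=cloudfront.CachePolicy.CACHING_DISABLED,
                origin_request_policy=cloudfront.OriginRequestPolicy.ALL_VIEWER,
            ),
            additional_behaviors={
                "/img/*": cached,
                "/css/*": cached,
                "/js/*": cached,
            },
        )
```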
You are probably aware of the performance gains on cacheable content, so I'm sharing results from non-cacheable requests (the ones that go all the way to Laravel) with and without CloudFront in front of the app:
[Screenshot: response-time comparison for non-cacheable requests, with and without CloudFront in front]
Please note the SSL/TLS latency decrease.
Response time is just the global average. The performance increase is higher from locations that are further away. For example, we lowered the latency from Sydney from 1.3 seconds to 0.6 seconds for the same request, just by having CloudFront in front of the app!
To answer your question, both ASSET_URL and APP_URL are the same; we point the domain to the CloudFront distribution and play a bit with the caching policies.