So, there's a Nuxt 3 website (plus a Laravel API) that used to run on Vercel without any issues, and Google PageSpeed scored 85-90 points.
Now the task is to move the entire infrastructure to Docker. The result is about 10 containers on a weak VPS (2-core CPU, 2 GB of RAM), but the issue doesn't seem to be related to that.
After moving to Docker, two problems appeared:
- PageSpeed dropped to 35 points, suddenly showing around 10 seconds for First Contentful Paint, which dragged the other metrics down with it. It's as if the Node.js server running in a container behind an nginx reverse proxy became dramatically slower than it was on Vercel.
- When the SEO specialists crawl the site with 5 threads (their standard configuration), many pages (up to 100) come back with status 500 or 404, even though the same pages return 200 OK outside of the crawl. I assume the crawler overwhelms the server and it simply falls over (see the sketch of the load pattern below).
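If anyone wants to reproduce the crawl pattern, here is a rough sketch of the kind of load involved, assuming roughly 5 concurrent workers walking a list of URLs (the domain and paths are placeholders; the real crawler follows the sitemap). Runs on Node 18+ (global `fetch`):

```js
// repro-crawl.mjs — rough sketch of the crawler's load pattern (Node 18+, global fetch).
// The domain and paths below are placeholders; the real crawl list comes from the sitemap.
const BASE = 'https://example.com';
const PATHS = ['/', '/about', '/blog'];
const WORKERS = 5;               // the SEO tool reportedly crawls with 5 threads
const REQUESTS_PER_WORKER = 100;

const counts = {};

async function worker(id) {
  for (let i = 0; i < REQUESTS_PER_WORKER; i++) {
    const path = PATHS[(id + i) % PATHS.length];
    try {
      const res = await fetch(BASE + path);
      counts[res.status] = (counts[res.status] ?? 0) + 1;
    } catch {
      counts.network_error = (counts.network_error ?? 0) + 1;
    }
  }
}

await Promise.all(Array.from({ length: WORKERS }, (_, id) => worker(id)));
console.log('status code counts:', counts);
```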
I checked the load on the containers, and all of them look fine, but the container with Nuxt (the Node.js Nitro server) reports CPU usage (cores) in the billions and CPU utilization in the hundreds of percent. On a dual-core machine the first metric shouldn't exceed 2.00, and the second shouldn't go beyond 200%.
Does anyone have any ideas? The SEO specialists claim they always crawl like this, and this is the first site where they've run into such issues.
(P.S. This didn't seem to happen with Nuxt 2.)
Earlier, I tried running the Nuxt 3 site under pm2, and Google PageSpeed also showed poor results, but as soon as I moved the same site to Vercel it scored 85-90 again. I suspect something is wrong with how my Node.js server is set up. Nuxt 2 projects run perfectly under pm2 (and in Docker) without any of these issues, and we never got complaints like this from the SEO specialists before.
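For context, the pm2 setup for the Nuxt 3 build was roughly the standard one for a Nitro node-server output (the app name and port here are just illustrative, not the real values):

```js
// ecosystem.config.cjs — roughly the standard pm2 config for a Nuxt 3 Nitro build
// (app name and port are illustrative placeholders)
module.exports = {
  apps: [
    {
      name: 'nuxt-site',                     // illustrative name
      script: './.output/server/index.mjs',  // Nitro's node-server entry point produced by `nuxt build`
      exec_mode: 'cluster',
      instances: 2,                          // matches the 2-core VPS
      env: {
        PORT: 3000,                          // the port nginx proxies to
        NODE_ENV: 'production',
      },
    },
  ],
};
```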