I have websites that load in under 2 seconds, but they fail the PSI tests.
Is this because PSI uses a slower connection to the website? I have read that it is a fast 3G connection. Is that used as well as the field data?
Ryan - Google PageSpeed is a very robust tool. It is also stricter than other tools such as GTmetrix or Pingdom.
There are several factors that impact speed. Expect a variance of 5 to 7 points depending on the location of Google's servers relative to your server. If you are getting a larger variation, that could be your CDN rather than your server.
Double-check the results by running Google Lighthouse. You can find it under Chrome DevTools.
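If it helps, here is a minimal sketch of running Lighthouse from Node rather than DevTools, so you can re-run the same audit repeatedly. It assumes the `lighthouse` and `chrome-launcher` npm packages and a local Chrome install; the URL is just a placeholder.

```typescript
// Sketch: run Lighthouse from Node to cross-check a PSI score locally.
// Assumes the "lighthouse" and "chrome-launcher" npm packages are installed.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  // Performance category only; Lighthouse applies its default mobile throttling.
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'],
    output: 'html',
  });

  if (result) {
    const score = (result.lhr.categories.performance.score ?? 0) * 100;
    console.log(`Performance score for ${url}: ${score}`);
  }

  await chrome.kill();
}

audit('https://example.com'); // replace with your own URL
```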
Late answer but hopefully this will help people understand the difference.
PageSpeed Insights (PSI) simulates a mid-tier mobile phone on a slow 4G connection. You will always score lower on PSI mobile tests because the other tools do not use throttling.
The desktop tab of PSI should be similar, but it again uses different scoring metrics that the other tools do not appear to have updated to (at the time of writing).
PageSpeed Insights (PSI) is powered by Lighthouse.
As part of this, it uses simulated network throttling to model network latency and slower connection speeds (comparable to fast 3G / slow 4G).
It also simulates a slower CPU.
It does both of these to simulate a mid-tier mobile phone on a 4G connection. Mobiles have lower processing power and may be used "on the go" without Wi-Fi.
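For reference, this is roughly what that simulated profile looks like if you drive Lighthouse yourself. The numbers below are the documented "slow 4G" defaults at the time of writing, so treat them as an assumption and check the constants shipped with your Lighthouse version.

```typescript
// Sketch: the kind of simulated throttling PSI applies on the mobile run.
// The values mirror Lighthouse's documented "slow 4G" defaults (assumption:
// verify against the constants in your installed Lighthouse version).
import lighthouse from 'lighthouse';

const mobileLikeConfig = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'mobile',
    throttlingMethod: 'simulate',
    throttling: {
      rttMs: 150,                 // simulated round-trip time
      throughputKbps: 1.6 * 1024, // ~1.6 Mbps down, roughly slow 4G / fast 3G
      cpuSlowdownMultiplier: 4,   // emulate a mid-tier phone CPU
    },
  },
};

// Passed as the third argument; the second argument holds runtime flags.
// const result = await lighthouse('https://example.com', { port }, mobileLikeConfig);
```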
GTmetrix, WebPageTest.org, Pingdom, etc. all check the desktop version of the site.
This is the main reason you will see vastly different scores: they do not apply any form of throttling to the CPU or network speed.
You should find that you get similar scores if you compare the desktop tab of the PSI report to them, as that is unthrottled.
Another difference (although I am not 100% sure) is that I think those sites are still using Lighthouse version 5 scoring at their core. Lighthouse changed to version 6 scoring earlier this year to better reflect the items that matter to the end user. This is why I said "similar" scores in the previous paragraph.
No, field data is real-world data, also known as RUM (Real User Metrics). It is collected from real visitors to your site.
It has no effect on your PSI score, as that is calculated each time from "lab data".
Field data is there for diagnostics (RUM is far more reliable and helps identify issues that automated testing may miss, such as an overloaded server, problems at certain screen sizes, etc.).
Are you sure? It may show 2 seconds on automated tests (for desktop), but in the real world how can you know that?
One way to check is to actually monitor this information on your site. This answer I gave has all the relevant metrics you may want to gather and monitor for site performance.
If you combine that information with screen size and device details, you have everything you need to identify issues in near real time.
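As a rough sketch of what that could look like, the browser's standard PerformanceObserver and sendBeacon APIs are enough to collect a couple of the key metrics along with screen size and device information. The `/rum` endpoint below is hypothetical; send the data wherever you aggregate it.

```typescript
// Sketch: minimal RUM collection with standard browser APIs.
// The "/rum" endpoint is hypothetical - point it at your own collector.
type RumSample = {
  metric: string;
  value: number;
  screen: string;
  userAgent: string;
};

function send(metric: string, value: number): void {
  const sample: RumSample = {
    metric,
    value,
    screen: `${window.innerWidth}x${window.innerHeight}`, // spot layout-specific issues
    userAgent: navigator.userAgent,                        // rough device information
  };
  navigator.sendBeacon('/rum', JSON.stringify(sample));
}

// Largest Contentful Paint from real visitors.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) send('LCP', last.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Cumulative Layout Shift, summed as shifts happen.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as any; // layout-shift entries expose value/hadRecentInput
    if (!shift.hadRecentInput) cls += shift.value;
  }
  send('CLS', cls);
}).observe({ type: 'layout-shift', buffered: true });
```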