How Lighthouse calculates your overall Performance score
In general, only metrics contribute to your Lighthouse Performance score, not the results of Opportunities or Diagnostics. That said, improving the opportunities and diagnostics will likely improve the metric values, so there is an indirect relationship.
Below, we've outlined why the score can fluctuate, how it's composed, and how Lighthouse scores each individual metric.
Why your score fluctuates
A lot of the variability in your overall Performance score and metric values is not due to Lighthouse. When your Performance score fluctuates, it's usually because of changes in underlying conditions. Common problems include:
- A/B tests or changes in ads being served
- Internet traffic routing changes
- Testing on different devices, such as a high-performance desktop and a low-performance laptop
- Browser extensions that inject JavaScript and add/modify network requests
- Antivirus software
Lighthouse's documentation on Variability covers this in more depth.
Furthermore, even though Lighthouse can provide you with a single overall Performance score, it might be more useful to think of your site performance as a distribution of scores, rather than a single number. See the introduction of User-Centric Performance Metrics to understand why.
How the Performance score is weighted
The Performance score is a weighted average of the metric scores. Naturally, more heavily weighted metrics have a bigger effect on your overall Performance score. The metric scores are not visible in the report, but are calculated under the hood.
Lighthouse 10
| Audit | Weight |
|---|---|
| First Contentful Paint | 10% |
| Speed Index | 10% |
| Largest Contentful Paint | 25% |
| Total Blocking Time | 30% |
| Cumulative Layout Shift | 25% |
Lighthouse 8
| Audit | Weight |
|---|---|
| First Contentful Paint | 10% |
| Speed Index | 10% |
| Largest Contentful Paint | 25% |
| Time to Interactive | 10% |
| Total Blocking Time | 30% |
| Cumulative Layout Shift | 15% |
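To make the weighting concrete, here is a minimal sketch of that weighted average using the Lighthouse 10 weights above. The per-metric scores are hypothetical values chosen purely for illustration, not output from a real run.

```ts
// Hypothetical per-metric scores (0–100) for a single run.
const metricScores: Record<string, number> = {
  firstContentfulPaint: 80,
  speedIndex: 85,
  largestContentfulPaint: 75,
  totalBlockingTime: 90,
  cumulativeLayoutShift: 95,
};

// Lighthouse 10 weights from the table above.
const weights: Record<string, number> = {
  firstContentfulPaint: 0.10,
  speedIndex: 0.10,
  largestContentfulPaint: 0.25,
  totalBlockingTime: 0.30,
  cumulativeLayoutShift: 0.25,
};

// The overall Performance score is the weighted average of the metric scores.
const performanceScore = Object.keys(weights).reduce(
  (sum, metric) => sum + weights[metric] * metricScores[metric],
  0,
);

console.log(Math.round(performanceScore)); // 86 for these example values
```

Because Total Blocking Time and Largest Contentful Paint carry the largest weights, improving those metrics moves the overall score the most.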
How metric scores are determined
Once Lighthouse has gathered the performance metrics (mostly reported in milliseconds), it converts each raw metric value into a metric score from 0 to 100 by looking at where the metric value falls on its Lighthouse scoring distribution. The scoring distribution is a log-normal distribution derived from the performance metrics of real website performance data on HTTP Archive.
For example, Largest Contentful Paint (LCP) measures when a user perceives that the largest content of a page is visible. The metric value for LCP represents the time duration between the user initiating the page load and the page rendering its primary content. Based on real website data, top-performing sites render LCP in about 1,220ms, so that metric value is mapped to a score of 99.
Going a bit deeper, the Lighthouse scoring curve model uses HTTP Archive data to determine two control points that then set the shape of a log-normal curve. The 25th percentile of HTTP Archive data becomes a score of 50 (the median control point), and the 8th percentile becomes a score of 90 (the good/green control point). While exploring the scoring curve plot, note that between 0.50 and 0.92, there's a near-linear relationship between metric value and score. Around a score of 0.96 is the "point of diminishing returns": above it, the curve pulls away, requiring increasingly more metric improvement to improve an already high score.
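As an illustration of that shape, here is a rough sketch of a two-control-point log-normal scoring function: one value maps to a score of 0.5 and a "good" value maps to 0.9, and the score is the complementary CDF of the resulting log-normal distribution. The control-point values (4,000 ms and 2,500 ms) and the erf approximation are assumptions for illustration, not taken from the Lighthouse source.

```ts
// Abramowitz–Stegun approximation of the error function (max error ~1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const absX = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * absX);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t +
      0.254829592) * t;
  return sign * (1 - poly * Math.exp(-absX * absX));
}

// Score a raw metric value (in ms) on a log-normal curve defined by two
// control points: `median` maps to a score of 0.5 and `good` maps to 0.9.
function logNormalScore(median: number, good: number, value: number): number {
  if (value <= 0) return 1;
  const mu = Math.log(median);
  // Choose sigma so that P(X <= good) = 0.1 (Phi^-1(0.1) ≈ -1.2816).
  const sigma = (mu - Math.log(good)) / 1.2816;
  const z = (Math.log(value) - mu) / (sigma * Math.SQRT2);
  return 0.5 * (1 - erf(z)); // complementary CDF of the log-normal distribution
}

// Illustrative control points only: a 4,000 ms LCP scores 0.5, a 2,500 ms LCP scores 0.9.
console.log(logNormalScore(4000, 2500, 2000)); // ≈ 0.97 under these assumptions
```

The diminishing-returns behavior falls out of the curve itself: the closer a score already is to 1.0, the larger the metric improvement needed to gain each additional point.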
How desktop versus mobile is handled
As mentioned above, the score curves are determined from real performance data. Prior to Lighthouse v6, all score curves were based on mobile performance data, but a desktop Lighthouse run would use them anyway. In practice, this led to artificially inflated desktop scores. Lighthouse v6 fixed this bug by using specific desktop scoring. While you can certainly expect overall changes in your perf score from 5 to 6, any scores for desktop will be significantly different.
How scores are color-coded
The metric scores and the perf score are colored according to these ranges:
- 0 to 49 (red): Poor
- 50 to 89 (orange): Needs Improvement
- 90 to 100 (green): Good
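If you want to reproduce that banding in your own tooling, a minimal sketch of the thresholds, using the ranges above, might look like this:

```ts
type Rating = 'Poor' | 'Needs Improvement' | 'Good';

// Map a 0–100 score to the report's color band using the ranges above.
function rating(score: number): Rating {
  if (score >= 90) return 'Good';              // green
  if (score >= 50) return 'Needs Improvement'; // orange
  return 'Poor';                               // red
}

console.log(rating(92)); // "Good"
```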
To provide a good user experience, sites should strive to have a good score (90-100). A "perfect" score of 100 is extremely challenging to achieve and not expected. For example, taking a score from 99 to 100 needs about the same amount of metric improvement that would take a score from 90 to 94.
What can developers do to improve their performance score?
First, use the Lighthouse scoring calculator to help understand what thresholds you should be aiming for to achieve a certain Lighthouse Performance score.
In the Lighthouse report, the Opportunities section has detailed suggestions and documentation on how to implement them. Additionally, the Diagnostics section lists additional guidance that developers can explore to further improve their performance.