
Core Web Vitals Changed in 2025: What Mid-Market Sites Need to Do

INP replaced FID in 2024 and Google tightened scoring in 2025. Here is exactly what mid-market sites need to fix, with a 30-day remediation playbook.

rj-murray, Contributor · April 25, 2026 · 11 min read


tl;dr

INP replaced FID as a Core Web Vital in March 2024, and through 2025 Google tightened how the field-data threshold is enforced. The pass bar is now the 75th percentile of real Chrome sessions on every URL group: LCP under 2.5s, INP under 200ms, CLS under 0.1. Most mid-market sites that comfortably passed FID now fail INP because the metric measures every interaction in the session, not just the first one.

What the 2025 Core Web Vitals update changed

Three things shifted between the 2023 baseline and where we are in 2025.

The first is INP. Interaction to Next Paint replaced First Input Delay as the responsiveness metric on March 12, 2024. FID measured the delay before the browser started running an input handler. INP measures the full latency of every meaningful interaction across the page lifetime and reports the worst one (or roughly the 98th percentile on heavily interactive pages). It is a much harder bar.

The second is the field-data emphasis. Google's page experience guidance now leans almost entirely on CrUX field data at the 75th percentile across mobile and desktop. Lab scores from Lighthouse are still useful for debugging, but they do not pass or fail you. Real Chrome users do.

The third is the threshold enforcement on URL groups. Sites with sparse traffic on a given URL pattern now get scored at the origin level, which means one slow templated page can drag the whole domain. We have seen this hit pSEO surfaces hardest.

The practical effect: a site that passed CWV in 2023 on the strength of a fast first load and tight CLS can fail in 2025 because INP exposes every long task during scroll, every blocking re-render after hydration, and every overweight third-party script that fires on click. We covered the underlying rebuild economics in why mid-market companies keep getting stuck on WordPress.

The three metrics, with the 2025 numbers

The pass bar is the 75th percentile of real-user field data across the trailing 28 days, segmented by device class. Source: web.dev/vitals.

Largest Contentful Paint (LCP): under 2.5 seconds is "good." 2.5 to 4.0 is "needs improvement." Above 4.0 is "poor." LCP is the render time of the largest text block or image in the viewport. On marketing sites this is almost always the hero image or hero headline.

Interaction to Next Paint (INP): under 200ms is "good." 200 to 500ms is "needs improvement." Above 500ms is "poor." INP measures the time from user input (click, tap, keypress) to the next paint that reflects the response. It captures input delay, processing time, and presentation delay together. The worst interaction in the session is the one that gets scored.

Cumulative Layout Shift (CLS): under 0.1 is "good." 0.1 to 0.25 is "needs improvement." Above 0.25 is "poor." CLS measures unexpected movement of visible content as the page loads. It is the metric most often tanked by ad slots, web fonts loading without font-display: swap, and images without explicit dimensions.

The 75th percentile rule matters. If more than 25 percent of sessions on the URL group land above the "good" threshold for any single metric, the 75th percentile falls outside "good" and the URL group fails. Mid-market sites with significant mobile traffic from older Android devices feel this first.
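To make the math concrete, here is a small TypeScript sketch, our own illustration rather than anything from Google's tooling, that computes the 75th percentile of INP samples the way the scoring rule effectively does:

```ts
// Illustrative only: score a URL group's INP under the 75th-percentile rule.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
}

// Ten sessions; three of them (30%) interact slower than 200ms.
const inpSamples = [80, 90, 95, 110, 120, 150, 180, 210, 240, 300];
const p75 = percentile(inpSamples, 75); // 210ms
console.log(`p75 INP: ${p75}ms -> ${p75 <= 200 ? "good" : "fails"}`); // fails
```

Because more than a quarter of sessions are slow, the 75th percentile lands on a slow session and the whole group fails, even though the median user is fine.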

The five most common mid-market CWV failures (with the fixes)

We have audited and rebuilt enough mid-market sites in the last 18 months to see the same five failures repeat. In rough frequency order:

1. Hero image is the LCP element and it is unoptimized.

Symptom: LCP between 3.5 and 6.5 seconds on mobile field data. Cause: 800KB JPG served at full resolution to every viewport, no priority flag on the Next.js Image component, no preload hint. Fix: serve the hero as AVIF with WebP fallback, set explicit width and height, mark it priority so Next.js emits the preload, and cap the responsive sizes prop at the actual layout width. This single change typically pulls LCP from 4s to 1.6s on mobile.
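A minimal sketch of that fix in Next.js; the file names, dimensions, and sizes breakpoint are illustrative, and it assumes next.config.js opts into modern formats via images.formats:

```tsx
// app/page.tsx -- illustrative. Assumes next.config.js contains:
//   images: { formats: ["image/avif", "image/webp"] }
import Image from "next/image";

export default function Home() {
  return (
    <Image
      src="/hero.jpg"   // the optimizer negotiates AVIF/WebP per Accept header
      alt="Hero"
      width={1920}      // explicit dimensions reserve the layout box (helps CLS too)
      height={1080}
      priority          // disables lazy loading and emits the preload hint
      sizes="(max-width: 768px) 100vw, 1200px" // caps srcset at the real layout width
    />
  );
}
```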

2. Third-party tags fire on hydration and inflate INP.

Symptom: INP between 350 and 800ms on the homepage. Cause: marketing tag manager, chat widget, session replay, and consent banner all bind click handlers to the document and run synchronous work in the main thread. Fix: defer everything non-essential to requestIdleCallback, gate the chat widget behind a static button until the user clicks, and move the consent banner out of the critical path. We covered the same pattern in the 48-hour before/after demo.
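A sketch of the gating pattern; the module paths and widget are hypothetical stand-ins for whatever chat and analytics vendors you run:

```tsx
"use client";
import dynamic from "next/dynamic";
import { useEffect, useState } from "react";

// Chat widget code stays out of the initial bundle until the user asks for it.
const ChatWidget = dynamic(() => import("./chat-widget"), { ssr: false }); // hypothetical

export function ChatFacade() {
  const [open, setOpen] = useState(false);

  useEffect(() => {
    // Defer non-essential scripts to idle time; fall back where the API is missing.
    const idle =
      "requestIdleCallback" in window
        ? window.requestIdleCallback.bind(window)
        : (cb: () => void) => window.setTimeout(cb, 2000);
    idle(() => { void import("./analytics"); }); // hypothetical module
  }, []);

  // A static button stands in for the widget until the first click.
  return open ? <ChatWidget /> : <button onClick={() => setOpen(true)}>Chat with us</button>;
}
```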

3. Web fonts load without size-adjust and shift the layout.

Symptom: CLS between 0.15 and 0.32. Cause: custom Google Font swaps in 200ms after first paint and re-flows every text block. Fix: use Next.js next/font with the display: 'swap' and adjustFontFallback: true defaults, or supply a size-adjust value tuned to your fallback. CLS drops to under 0.05 on the next deploy.
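The whole fix is a few lines with next/font; the font choice here is illustrative:

```tsx
// app/fonts.ts -- illustrative font; swap in your own.
import { Inter } from "next/font/google";

export const inter = Inter({
  subsets: ["latin"],
  display: "swap",          // paint with the fallback immediately, no invisible text
  adjustFontFallback: true, // default: injects size-adjust metrics for the fallback
});

// app/layout.tsx then applies it once:
//   <html lang="en" className={inter.className}>
```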

4. Hydration of an over-large React tree blocks INP.

Symptom: INP under 200ms on the first interaction, then climbs above 400ms on subsequent clicks once a heavy component mounts. Cause: the entire page is a client component, hydration runs the whole tree, and any click on a hydrated handler queues behind the long task. Fix: convert everything that does not need state to a server component. In Next.js 16, default to server components and only opt into "use client" at the leaf. We documented the migration mechanics in the WordPress to Next.js migration path.
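A sketch of the split on a hypothetical page; the point is that only the leaf opts into the client bundle:

```tsx
// app/pricing/page.tsx -- stays a server component; the static tree never hydrates.
import { BuyButton } from "./buy-button";

export default function PricingPage() {
  return (
    <main>
      <h1>Pricing</h1>
      <p>Static copy renders on the server and ships no handler JS.</p>
      <BuyButton /> {/* the only leaf that opts into "use client" */}
    </main>
  );
}

// app/pricing/buy-button.tsx (separate file) starts with "use client"
// and holds the useState + onClick logic -- nothing else hydrates.
```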

5. Sparse-traffic templated pages drag the origin score.

Symptom: the homepage passes, the lead service pages pass, but origin-level CrUX shows "needs improvement" because templated geo or product pages get rolled up. Cause: pSEO pages with long server response times, large unsplit JS bundles, or per-page third-party calls. Fix: route-segment the bundles so each templated page only ships its own JS, render the templated copy on the server with ISR, and keep third-party calls on the homepage where the traffic is. We unpacked the templated-page failure modes in pSEO in 2026, what changed and geo pages that don't get penalized.
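A sketch of the templated route; the data helpers are hypothetical placeholders for your CMS or database:

```tsx
// app/locations/[slug]/page.tsx -- each geo page ships only this segment's JS.
import { getLocation, getAllSlugs } from "@/lib/locations"; // hypothetical helpers

export const revalidate = 86400; // ISR: render statically, refresh daily

export async function generateStaticParams() {
  const slugs = await getAllSlugs();
  return slugs.map((slug) => ({ slug }));
}

export default async function LocationPage({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { slug } = await params; // params is a Promise in recent Next.js
  const location = await getLocation(slug);
  return (
    <main>
      <h1>{location.title}</h1>
      <p>{location.copy}</p>
    </main>
  );
}
```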

How Burris and Sons cleared all three thresholds

Burris and Sons Heating has run HVAC in Chicago since 1917. Their pre-rebuild site was 12 static pages on a shared host. Field-data CWV: LCP 4.8s on mobile, INP 520ms, CLS 0.18. The Search Console Core Web Vitals report flagged the origin as "poor" across mobile.

We rebuilt to a 30-page Next.js 16 site in 21 days with eight neighborhood geo pages. The performance work was not the headline of the project, but it shipped as a baseline.

LCP fix: hero family photography served as AVIF at three breakpoints (640, 1024, 1920) with priority on the homepage hero. Heritage photos kept, just re-encoded. Post-rebuild LCP: 1.4s on mobile field data.

INP fix: zero client-side tag manager. PostHog loads via next/script with strategy="afterInteractive". The phone-call CTA is a plain anchor tag, not a React handler. The "request a quote" form is a server action. Post-rebuild INP: 88ms on mobile field data.
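The loading pattern, sketched and collapsed into one component for illustration; the script URL and phone number are placeholders:

```tsx
// Excerpt sketch -- analytics waits for hydration, the CTA is plain HTML.
import Script from "next/script";

export function AnalyticsAndCta() {
  return (
    <>
      {/* afterInteractive: injected after hydration, off the critical path */}
      <Script src="https://example.com/posthog-snippet.js" strategy="afterInteractive" />
      {/* plain anchor: no React handler, nothing for INP to measure */}
      <a href="tel:+13125550100">Call Burris and Sons</a>
    </>
  );
}
```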

CLS fix: next/font with the system-font fallback metrics tuned, every image with explicit width and height, no late-loading widgets in the viewport. Post-rebuild CLS: 0.02.

Search Console flipped the origin to "good" within 28 days of launch, which is the rolling window CrUX uses. Lighthouse mobile: 98. We covered the broader rebuild deltas in real Lighthouse scores before and after, six mid-market rebuilds.

The lab-vs-field distinction (CrUX vs. Lighthouse)

This is where most mid-market teams misread their own data and ship the wrong fix.

Lab data is what PageSpeed Insights shows in the top section of its report. It is a single Lighthouse run from a Google data center against a throttled emulated mobile profile. It is deterministic and reproducible. It is also synthetic. It does not know what your real users are doing.

Field data is what PageSpeed Insights shows below the lab section, labelled "Discover what your real users are experiencing." That section pulls from the Chrome User Experience Report (CrUX), which aggregates real Chrome telemetry across the trailing 28 days. This is the data Google uses for ranking signals.

The two can disagree. Lighthouse can return INP "fast" because the synthetic interaction is benign. CrUX can return INP "poor" because real users click a hydrated component that fires a heavy handler. The reverse also happens: a site can ship slow lab numbers because the testing infrastructure is in a different region than the user base.

The rule we use: Lighthouse is the debugger, CrUX is the scoreboard. We optimize against CrUX field data through Search Console and pull Lighthouse only to reproduce a specific regression locally.

If a site has too little traffic to populate CrUX (under roughly 200 daily Chrome users on the URL group), the origin-level rollup is what gets scored. That is the pSEO-page failure mode from the previous section.

The 30-day remediation playbook

This is the sequence we run on a mid-market site that fails CWV but is not a full rebuild candidate. Compress or extend depending on the depth of the platform debt.

Days 1 to 3: instrument and baseline.

Pull 28 days of CrUX field data from Search Console for every URL group. Pull lab data from PageSpeed Insights for the homepage, the top three converting pages, and one templated page. Set up the web-vitals JS library to send LCP, INP, and CLS to PostHog so you can see real-user data without waiting for the CrUX 28-day window.
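A minimal sketch of the instrumentation, assuming PostHog is already initialized elsewhere; the event shape is ours:

```ts
// vitals.ts -- report field metrics from real users without waiting on CrUX.
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";
import posthog from "posthog-js";

function report(metric: Metric) {
  posthog.capture("web_vital", {
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, for de-duplication
  });
}

onLCP(report);
onINP(report);
onCLS(report);
```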

Days 4 to 10: kill the LCP problem.

Identify the LCP element on every important page (Lighthouse tells you). Optimize image format and dimensions. Add priority and preload hints. Eliminate render-blocking CSS and JS in the head. Move web fonts to next/font or equivalent with font-display: swap. Re-measure.
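For an LCP image that next/image does not manage (a CSS background, say), the preload hint can be emitted by hand; the URLs here are illustrative:

```tsx
// Rendered into the page head; React maps imageSrcSet/imageSizes to the
// imagesrcset/imagesizes attributes on the link element.
<link
  rel="preload"
  as="image"
  href="/hero-640.avif"
  imageSrcSet="/hero-640.avif 640w, /hero-1920.avif 1920w"
  imageSizes="100vw"
  fetchPriority="high"
/>
```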

Days 11 to 18: kill the INP problem.

Audit every third-party script for firing strategy. Move tag manager, chat, replay, and consent banner to deferred load. Convert hydrated client components to server components where possible. Break up long tasks above 50ms with scheduler.yield() or chunked setTimeout. Use the Chrome DevTools Performance panel with the "Interactions" track to find the worst offenders.
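The task-splitting pattern, sketched with a hypothetical unit of work; scheduler.yield() ships in current Chrome, so feature-detect and fall back:

```ts
// Break one long task into chunks that yield so pending input can run between them.
type Row = { id: number };
function renderRow(row: Row): void {
  // hypothetical unit of work, kept well under 50ms
}

async function processRows(rows: Row[]) {
  for (const row of rows) {
    renderRow(row);
    const scheduler = (window as any).scheduler;
    if (scheduler?.yield) {
      await scheduler.yield();                    // pending input runs first
    } else {
      await new Promise((r) => setTimeout(r, 0)); // chunked-setTimeout fallback
    }
  }
}
```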

Days 19 to 24: kill the CLS problem.

Add explicit width and height to every image and iframe. Reserve space for ads and embeds with CSS aspect-ratio boxes. Tune font fallback metrics. Audit any JS that injects content above existing content (popovers, banners, hero swappers).
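The reservation pattern for an embed, as a sketch:

```tsx
// Reserve the box before the embed loads so injection cannot shift the layout.
export function VideoEmbed({ src }: { src: string }) {
  return (
    <div style={{ aspectRatio: "16 / 9", width: "100%" }}>
      <iframe
        src={src}
        title="Embedded video"
        width="100%"
        height="100%"
        style={{ border: 0 }}
        loading="lazy"
      />
    </div>
  );
}
```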

Days 25 to 30: validate in field data.

Re-pull CrUX. Most field improvements take 14 to 28 days to show up because of the rolling window. Continue measuring real-user metrics through PostHog so you do not have to wait. If field data still shows "needs improvement" or "poor" after 30 days, the platform itself is the bottleneck and the conversation moves from remediation to rebuild.

We track this kind of work alongside the rest of the SEO surface in the mid-market SEO reporting framework.

When CWV is not your real problem

We have seen mid-market teams spend a quarter chasing Core Web Vitals when the actual revenue blocker was elsewhere. The pattern is consistent.

If the homepage passes CWV but conversion is still flat, the problem is positioning, offer, or proof, not performance. Ship copy and case studies, not a performance sprint.

If organic traffic is flat but CWV is good, the problem is rankings or visibility, not page experience. Audit indexation, internal linking, and AEO surface. We have a separate playbook for AEO and ranking on ChatGPT, Perplexity, Claude, and Gemini, and the llms.txt file primer covers the related bot-routing surface.

If paid search is the only thing keeping the funnel alive, fixing CWV will not fix the underlying economics. We covered why in why CMOs should kill paid search budget and laid out the fix in the 90-day organic growth plan.

A marketing site that does not pass Lighthouse 95 on every route is a rebuild, not a redesign. That is our position. CWV is one of the inputs that pushes the math toward rebuild. It is not the only one, and it is not the one to start with if the upstream problem is the offer itself.

Closing

The 2025 update did not invent a new framework. It tightened the one that has been in place since 2020 and made the responsiveness metric materially harder to pass. Most mid-market sites that were "fine" on CWV in 2023 are now "needs improvement" or "poor" on INP, and the gap shows up in field data 28 days after deploy.

The remediation playbook above clears the bar for a site whose platform is sound. For sites where the platform itself is the bottleneck, the same work is two weeks of triage followed by a quarter of patching that ends in a rebuild anyway. We ship rebuilds in 21 days because we have seen the slow path enough times to know it does not end somewhere different.

RJ

Frequently asked

What changed about Core Web Vitals in 2025?
INP (Interaction to Next Paint) fully replaced FID as the responsiveness metric in March 2024, and through 2025 Google tightened how the threshold is enforced in field data. The pass bar is the 75th percentile of real-user sessions across mobile and desktop on Chrome, with INP under 200ms, LCP under 2.5s, and CLS under 0.1.
Is INP harder to pass than FID was?
Yes, materially. FID only measured the delay before the first input handler ran. INP measures the full latency of every interaction across the session, picking the slowest meaningful one. A site that passed FID at the 99th percentile will routinely fail INP because long tasks during scroll, clicks on hydrated React components, and re-renders all count now.
Do Core Web Vitals still affect rankings in 2025?
They are part of the page experience signal, which is one of many ranking factors. The bigger lever is conversion. Sites that pass CWV convert better, and Google's own page experience documentation continues to call out the metrics by name. We treat the 95+ Lighthouse score as a baseline for any site we ship, not because of the ranking lift in isolation but because the underlying work fixes the things that hurt revenue.
Can a WordPress site pass Core Web Vitals in 2025?
It can, but the cost of getting there usually exceeds the cost of a rebuild. Plugin sprawl, render-blocking JS from page builders, and shared hosting CPU caps are the three things that fail INP under real Chrome traffic. If the site is on WordPress with 14 plugins and a CWV score under 50, we rebuild on Next.js rather than patching.
What is the difference between lab and field Core Web Vitals?
Lab data comes from Lighthouse running a single emulated load. Field data comes from real Chrome users and is what Google actually scores you on, surfaced through the Chrome User Experience Report (CrUX) and Search Console's Core Web Vitals report. Lab data is useful for debugging. Field data is what determines pass or fail.
