
AI-cited pages were nearly 3 times more likely to have JSON-LD than non-cited pages.
That’s a giant gap, and the kind of stat that gets shared in LinkedIn carousels and conference slides as proof that schema is an AI visibility lever.
But we weren’t satisfied with the data because it could easily have been correlation, not causation.
Schema markup tends to live on better-maintained, more technically sophisticated sites, and those same sites publish stronger content, build more authority, earn more links, and do all the other things that get pages cited.
Schema could be doing real work, but it could also just be riding the wave of every other signal.
So we couldn’t actually answer the question SEOs really care about: if I add schema to my page, will I get cited more by AI?
To find out, we ran a second study designed to isolate the effect of adding schema.
Here’s what we found.
We tracked 1,885 web pages that added JSON-LD schema between August 2025 and March 2026, matched them against 4,000 control pages, and measured citation changes across Google AI Overviews, AI Mode, and ChatGPT.
Adding schema produced no meaningful uplift in citations on any platform.
| AI source | Effect on citations | Verdict |
|---|---|---|
| Google AIO | −4.6% | Small but statistically significant decline relative to matched controls (both groups were declining together, but treated pages fell slightly faster) |
| Google AI Mode | +2.4% | Statistically indistinguishable from zero |
| ChatGPT | +2.2% | Statistically indistinguishable from zero |
These percentages come from our most reliable analysis (a matched difference-in-differences [DiD] test).
In this test, treated pages performed slightly better than control pages on average for both AI Mode and ChatGPT, but the differences are small enough that they could easily be random noise across thousands of URLs.
AI Overviews showed a 4.6% decline, which is small but statistically significant relative to matched control pages.
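For intuition, the DiD estimate itself is simple arithmetic: the treated group’s relative change minus the control group’s relative change over the same window. A minimal sketch with illustrative numbers (not figures from the study):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the treated group's relative change
    minus the control group's relative change over the same window."""
    treated_change = (treated_post - treated_pre) / treated_pre
    control_change = (control_post - control_pre) / control_pre
    return treated_change - control_change

# Illustrative numbers: both groups decline, but treated falls faster.
# Controls drop 20%, treated pages drop 24.6% -> DiD of about -4.6%.
effect = did_estimate(treated_pre=100, treated_post=75.4,
                      control_pre=100, control_post=80)
print(round(effect, 3))  # -0.046
```

The point of subtracting the control change is that any platform-wide shift (like the broad decline both groups were already on) cancels out, leaving only the gap attributable to treatment plus noise.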
But that isn’t quite the full story; we’ll get into that in the next section.
So, overall, we can’t tell whether the schema did a tiny bit of good or nothing at all.
AI Overview citations on treated pages fell by 4.6% relative to control pages, and the result is “statistically significant” (the odds of seeing a gap this big by pure chance are about 1 in 2,500).
But before anyone reads this as “adding schema hurts your AI Overview citations”, there are two things you need to bear in mind.
- The absolute size is small. We’re talking about an average loss of around 12 daily citations per page, in a sample where most pages were getting hundreds.
- Both treated and matched control pages were already on a steep downward trajectory before schema was added: the kind of decline you’d expect from AI Overviews pulling back from these specific types of content for reasons unrelated to schema (e.g. a Google update changing what gets surfaced, the content getting stale, or Google not having recrawled the page recently).


Sidenote.
How to read this chart: both lines are anchored to 1.0 at week −1 (the week before schema was added), so they always start at the same point by design. Before treatment, both groups decline together. After treatment, treated pages sit slightly below the matched controls (that’s the −4.6% gap).
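The anchoring itself is just a rescaling: divide every weekly value by the value at week −1. A small sketch with hypothetical weekly citation counts:

```python
def anchor_to_baseline(weekly_citations, baseline_index):
    """Rescale a weekly series so the value at baseline_index becomes 1.0.
    This is why both lines on the chart start at the same point by design."""
    baseline = weekly_citations[baseline_index]
    return [week / baseline for week in weekly_citations]

# Hypothetical series; index 3 is week -1 (the week before schema was added).
treated = anchor_to_baseline([120, 110, 104, 100, 93, 88], baseline_index=3)
control = anchor_to_baseline([240, 222, 209, 200, 190, 182], baseline_index=3)
print(treated[3], control[3])  # both are 1.0 at week -1 by construction
```

After the anchor point, the treated series here sits slightly below the control series, which is the shape of the −4.6% gap described above.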
That said, if adding schema had no effect on citations either way, we’d expect treated pages and matched controls to decline together at the same rate (which is broadly what we see for AI Mode and ChatGPT).
The fact that treated pages declined slightly more suggests schema had a small negative effect, but it could also just be coincidence.
We can’t tell which one it is from this data alone.
Using Brand Radar, Xibeijia pulled several million URLs cited in AI Overviews.
She then retrieved the HTML history from our crawler database, labeled whether each URL contained JSON-LD, and noted the date that schema presence transitioned from “False” to “True”.
This left her with 1,885 pages that introduced JSON-LD between August 2025 and March 2026.
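In practice, checking a stored HTML snapshot for JSON-LD comes down to looking for `<script type="application/ld+json">` tags. A minimal sketch of that presence check and the False-to-True transition, using made-up snapshots (the real pipeline ran against our crawler database):

```python
import re

def has_json_ld(html: str) -> bool:
    """True if the page contains at least one JSON-LD script block."""
    return bool(re.search(
        r'<script[^>]*type\s*=\s*["\']application/ld\+json["\']',
        html, re.IGNORECASE))

def schema_added_date(snapshots):
    """Given (date, html) snapshots in chronological order, return the first
    date where JSON-LD presence flips from False to True, else None."""
    seen_without = False
    for date, html in snapshots:
        if has_json_ld(html):
            if seen_without:
                return date
        else:
            seen_without = True
    return None

# Hypothetical crawl history for one URL:
snapshots = [
    ("2025-08-01", "<html><body>No schema yet</body></html>"),
    ("2025-09-15", '<html><script type="application/ld+json">{}</script></html>'),
]
print(schema_added_date(snapshots))  # 2025-09-15
```

Requiring a prior “False” snapshot before accepting a “True” one is what keeps pages that always had schema out of the treated group.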

For each page Xibeijia knew two key dates:
- The last day our crawler checked the page and found no JSON-LD
- The first day our crawler detected JSON-LD on the page
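Those two dates bracket the moment schema actually appeared, so the gap between them is the uncertainty around the true treatment date. A tiny sketch with illustrative dates:

```python
from datetime import date

def treatment_window(last_without: date, first_with: date) -> int:
    """Days between the last crawl without JSON-LD and the first crawl
    with it: the uncertainty around when schema was actually added."""
    return (first_with - last_without).days

# Hypothetical crawl dates for one page:
print(treatment_window(date(2025, 9, 1), date(2025, 9, 15)))  # 14
```

The tighter this window, the more precisely the pre- and post-treatment citation periods can be aligned.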




Where schema might still matter: pages not yet cited by AI
There’s one important thing you need to know about this data: we studied pages that were already being cited heavily by AI. Every page in the dataset had 100+ AI Overview citations in February 2025, before any schema was added. These pages were already inside the consideration set, being crawled and surfaced by LLMs. If a page is already getting picked up, our data suggests that adding schema isn’t going to push it higher.

But for pages that aren’t being seen by AI systems at all, schema markup might still play a role in helping them get crawled, parsed, or indexed in the first place. Our study can’t speak to that directly, but a recent experiment from searchVIU answers a related question. They tested whether five major AI systems (ChatGPT, Claude, Perplexity, Gemini, and Google AI Mode) actually used schema markup when fetching a page in real-time. Spoiler: none of them did. During direct retrieval, every system extracted only visible HTML content. JSON-LD, hidden Microdata, and hidden RDFa were all ignored.

A few other points to flag, and some questions worth testing next:

- Pages that add JSON-LD often change other things at the same time (e.g. links, content, technical fixes). We can’t fully separate schema from these kinds of co-occurrences.
- We pooled all schema types together. Article, FAQ, Product, HowTo, Organization. It’s possible some types help more than others. This may be worth digging into.
- We measured 30 days post-treatment. If JSON-LD has a slow-burn effect, a 60- or 90-day window might reveal more growth.
- We studied JSON-LD—the most widely used schema format. Other formats exist (Microdata and RDFa), but we haven’t yet tested them.
- We only looked at schema in the page’s HTML, not schema injected via JavaScript. AI crawlers appear to treat the two differently. ¹
- The small AI Overview decline is real but unexplained. Treated pages dropped about 4.6% more than matched controls, and we don’t know why. A follow-up study could look at whether specific schema types or specific content types account for the gap.
- Pick 5–10 test pages where you plan to add JSON-LD. Ideally pages already getting some AI citations, so you have a baseline (pages with zero citations make it harder to tell whether schema did nothing, or whether the page just wasn’t going to get cited either way). You can check this in the Cited Pages report.

- Pick 5–10 control pages with similar citation levels that you’re not adding schema to. This is what separates “schema did something” from “AI Overviews shifted for everyone that month.”
- Record baseline citations for both groups across AI Overview, AI Mode, and ChatGPT in Brand Radar. Just apply URL filters to isolate those citation numbers.

- Add schema to your test pages and note the date. Don’t change anything else on those pages during the test window.
- Compare both groups after 30 days (or longer if you can). The question is: “did treated pages go up more than control pages did?”
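The comparison in that last step is the same difference-in-differences logic as our study, just at small scale. A sketch assuming you’ve pulled per-page citation counts before and after the test window (all numbers here are made up):

```python
def avg_change(pages):
    """Average relative citation change across a group of (before, after) pairs."""
    return sum((after - before) / before for before, after in pages) / len(pages)

# Hypothetical 30-day citation counts as (baseline, after) per page:
test_pages    = [(40, 46), (25, 27), (60, 63)]   # schema added
control_pages = [(38, 39), (30, 30), (55, 56)]   # no schema

uplift = avg_change(test_pages) - avg_change(control_pages)
print(f"Treated vs control uplift: {uplift:+.1%}")
```

With only 5–10 pages per group, treat the result as directional rather than conclusive; a handful of pages can’t reach the statistical power of thousands of URLs.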








