Ask ChatGPT’s default and premium models the same question, and they’ll cite almost entirely different sources, according to a Writesonic analysis.
GPT-5.4 Thinking, ChatGPT’s premium model, sent 56% of its citations to brand websites. GPT-5.3 Instant, the default for all logged-in ChatGPT users, sent 8%.
Across all prompts, the two models shared only 7% of their cited sources. The reason comes down to how each model searches the web before answering.
Same Question, Different Search Strategy
When the models were asked about CRM software, GPT-5.3 sent one broad query and cited techradar.com and designrevision.com. GPT-5.4 sent separate queries restricted to hubspot.com, salesforce.com, and attio.com for pricing, then checked g2.com and capterra.com for reviews.
GPT-5.4 averaged 8.5 sub-queries, many of them restricted to specific domains, and used site: operators in 156 of its 423 total queries. No other ChatGPT model tested used site: operators at all.
OpenAI’s documentation says ChatGPT search rewrites prompts, but doesn’t explain how models decide which domains to target or when to use site: operators.
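The fan-out pattern observed in the Writesonic data can be approximated with a simple query builder. This is only an illustration of the behavior described above, not OpenAI’s actual query-rewriting logic; the intent keywords ("pricing", "reviews") are assumptions drawn from the CRM example.

```python
# Sketch of the observed GPT-5.4 fan-out: one sub-query per target domain,
# restricted with a site: operator. Illustrative only.
def build_subqueries(topic, brand_domains, review_domains):
    queries = []
    for domain in brand_domains:
        # First-party pages: pricing and product details
        queries.append(f"site:{domain} {topic} pricing")
    for domain in review_domains:
        # Third-party review coverage
        queries.append(f"site:{domain} {topic} reviews")
    return queries

subqueries = build_subqueries(
    "CRM software",
    brand_domains=["hubspot.com", "salesforce.com", "attio.com"],
    review_domains=["g2.com", "capterra.com"],
)
print(subqueries[0])  # site:hubspot.com CRM software pricing
print(len(subqueries))  # 5
```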
Where The Citations Land
GPT-5.3 leaned heavily on third-party content. Blog posts and articles made up 32% of its citations, with Forbes (15 citations), TechRadar (10), and Tom’s Guide (10) as the top domains.
GPT-5.4 went the other way. Brand homepages accounted for 22% of citations, pricing pages 19%, and product pages 10%.
GPT-5.3 cited four pricing pages across all 49 conversations that triggered web search. GPT-5.4 cited 138. For brands that gate pricing behind a “contact sales” page, this could mean GPT-5.4 has less to work with when answering comparison queries.
On head-to-head comparison prompts like “HubSpot vs Salesforce vs Pipedrive,” GPT-5.3 never cited a brand website. GPT-5.4 cited brands 83% to 100% of the time on those same prompts.
How This Connects To Search Rankings
Writesonic used SerpAPI to check whether cited domains also appeared in Google and Bing results for the same query.
For GPT-5.3, 47% of cited domains also appeared in Google results. The overlap suggests that Google rankings are at least partially predictive for the default model.
For GPT-5.4, 75% of cited domains did not appear in Google or Bing results for the same user prompt. That suggests GPT-5.4 may rely less on traditional search rankings and more on targeted domain queries, though that hasn’t been independently verified.
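The overlap metric itself is easy to reproduce for your own brand. A minimal sketch, assuming you already have the cited domains and the SERP domains for a prompt as lists; the sample values below are hypothetical, not Writesonic’s data:

```python
# Share of ChatGPT-cited domains that also appear in traditional
# search results for the same prompt.
def serp_overlap(cited_domains, serp_domains):
    cited, serp = set(cited_domains), set(serp_domains)
    if not cited:
        return 0.0
    return len(cited & serp) / len(cited)

# Hypothetical example data for one prompt
cited = ["hubspot.com", "salesforce.com", "g2.com", "capterra.com"]
google_serp = ["g2.com", "forbes.com", "techradar.com"]
print(f"{serp_overlap(cited, google_serp):.0%}")  # 25%
```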
Why This Matters
Brand visibility in ChatGPT may depend on which model a user is running.
For the default model, third-party coverage on review sites and media outlets appears to drive citations. For the premium model, first-party content, particularly pricing and product pages, appears to matter more.
Looking Ahead
As ChatGPT continues rolling out new models, the patterns identified here may change.
Most cited URLs in the test sample included utm_source=chatgpt.com, giving brands a way to measure referral traffic directly in analytics.
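Because the tag lives in the query string, attribution is a simple parameter check. A sketch using only the Python standard library; the sample URLs are made up for illustration:

```python
from urllib.parse import urlparse, parse_qs

# Flag landing-page URLs whose utm_source marks the visit as a
# ChatGPT referral.
def is_chatgpt_referral(url):
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

urls = [
    "https://example.com/pricing?utm_source=chatgpt.com",
    "https://example.com/pricing?utm_source=newsletter",
    "https://example.com/pricing",
]
print([is_chatgpt_referral(u) for u in urls])  # [True, False, False]
```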









