The frenzy so as to add AI to customer support, which we’ve got been witnessing recently in nearly each sector, can typically come at a excessive value for safety. On December 22, 2025, the crew of moral hackers at Pen Take a look at Companions (PTP) went public with a sequence of flaws they discovered within the new AI chatbot for Eurostar.
To your data, Eurostar is the well-known high-speed rail operator that connects the UK to mainland Europe by means of the Channel Tunnel, carrying hundreds of thousands of travellers between main hubs like London, Paris, and Amsterdam.
How The Flaws Have been Found
What began as a researcher planning a easy practice journey from London was the invention of “weak guardrails” that left the system open to manipulation. To your data, guardrails are the digital “security brakes” that cease an AI from going off-topic or leaking secrets and techniques.
In line with PTP researchers, Eurostar’s bot had a significant design flaw; it solely checked the final message in a chat for security. By merely modifying earlier messages within the dialog on their very own display, the researchers discovered they may trick the AI into ignoring its personal guidelines.
The technical facet of the “hack” was surprisingly easy. As soon as the security checks have been bypassed, the researchers used immediate injection to make the bot reveal its inner directions and the kind of AI mannequin it was utilizing.
Additional probing revealed two different vital points. First, the chatbot was weak to HTML injection and may very well be compelled to show malicious code or faux hyperlinks immediately within the consumer’s chat window. Secondly, dialog and message IDs weren’t verified.
This implies the system didn’t correctly test if a chat session really belonged to the consumer, probably permitting an attacker to “replay” or inject malicious content material into another person’s dialog.
Fixing the Flaws
This analysis, which was shared with Hackread.com, reveals that discovering the vulnerabilities was really simpler than getting them fastened. The crew first alerted Eurostar on June 11, 2025, however there was no response. Lastly, after a month of chasing, they tracked down Eurostar’s Head of Safety on LinkedIn on July 7.
Researchers later discovered that Eurostar had apparently outsourced their safety reporting course of proper when the bugs have been reported, main them to assert that they had “no file” of the warnings.
At one level, the rail operator even accused PTP’s safety crew of “blackmail” only for attempting to flag the problems. The accusation got here regardless of the corporate having a publicly accessible vulnerability disclosure program accessible right here.
“We had disclosed a vulnerability in good religion,” the researchers famous, expressing their shock on the hostile response.
Whereas the issues have now been patched, the crew warned that this ought to be a wake-up name for large manufacturers. Simply because a instrument is AI-powered doesn’t imply the outdated guidelines of net safety don’t apply, and if the backend isn’t stable, the flowery AI options are little greater than “theatre.”











