Major security flaw in McDonald’s AI hiring platform McHire exposed 64M job applications. Learn how an IDOR vulnerability and weak default credentials led to an enormous leak of personal information, and how Paradox.ai remediated it swiftly.
A vulnerability in McHire, the AI-powered recruitment platform used by the overwhelming majority of McDonald’s franchisees, exposed the personal information of over 64 million job applicants. The vulnerability, discovered by security researchers Ian Carroll and Sam Curry, allowed unauthorized access to sensitive data, including names, email addresses, phone numbers, and home addresses.
The investigation started after experiences surfaced on Reddit concerning the McHire chatbot, named Olivia and developed by Paradox.ai, giving unusual responses. Researchers rapidly discovered two vital weaknesses. First, the administration login for restaurant house owners on McHire accepted simply guessable default credentials: “123456” for each username and password. This straightforward entry granted them administrator entry to a check restaurant account throughout the system.
The second, and more serious, issue was an Insecure Direct Object Reference (IDOR) on an internal API. An IDOR means that by simply changing a number in a web address (in this case, a lead_id tied to applicant chats), anyone with a McHire account could access confidential information from other applicants’ chat interactions.
According to their blog post, the researchers noted that this allowed them to view details from millions of job applications, including unmasked contact information and even authentication tokens that could be used to log in as the applicants themselves and view their raw chat messages.
The McHire platform, accessible via https://jobs.mchire.com/, guides job seekers through an automated process, including a personality test from Traitify.com. Applicants interact with Olivia, providing their contact details and shift preferences.
It was while observing a test application from the restaurant owner’s side that the researchers stumbled upon the vulnerable API. They noticed a request to fetch applicant information, PUT /api/lead/cem-xhr, which used a lead_id that could be altered to view other applicants’ data.
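The flaw boils down to an endpoint that trusts a client-supplied identifier without checking who owns the record. The exact McHire request and response formats were not published, so the following is a minimal, hypothetical sketch of the vulnerable pattern and its fix, with invented record data and handler names:

```python
# Hypothetical in-memory store standing in for McHire's applicant records.
RECORDS = {
    101: {"owner": "restaurant-A", "applicant": "Alice", "phone": "555-0101"},
    102: {"owner": "restaurant-B", "applicant": "Bob", "phone": "555-0102"},
}

def fetch_lead_vulnerable(lead_id: int, session_owner: str) -> dict:
    """IDOR pattern: the handler trusts the client-supplied lead_id and
    never verifies that the record belongs to the authenticated account."""
    return RECORDS[lead_id]

def fetch_lead_fixed(lead_id: int, session_owner: str) -> dict:
    """Remediated pattern: the lookup is authorized against the session
    owner before any data is returned."""
    record = RECORDS[lead_id]
    if record["owner"] != session_owner:
        raise PermissionError("lead does not belong to this account")
    return record

# An account tied to restaurant-A can read restaurant-B's applicant data
# simply by incrementing the identifier in the request:
leak = fetch_lead_vulnerable(102, session_owner="restaurant-A")
print(leak["applicant"])  # Bob
```

In the real incident the researchers only had to change the lead_id in the PUT /api/lead/cem-xhr request; the scale of the leak followed from the identifiers being small sequential integers.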
Upon realizing the massive scale of the potential data exposure, the researchers immediately initiated disclosure procedures. They contacted Paradox.ai and McDonald’s on June 30, 2025, at 5:46 PM ET.
McDonald’s acknowledged the report shortly after, and by June 30, 2025, at 7:31 PM ET, the default administrative credentials were no longer functional. Paradox.ai confirmed that the issues were fully resolved by July 1, 2025, at 10:18 PM ET. Both companies have stated their commitment to data security following the swift remediation of this critical vulnerability.
“This incident is a reminder that when companies rush to deploy AI in customer-facing workflows without proper oversight, they expose themselves and millions of users to unnecessary risk,” said Kobi Nissan, Co-Founder & CEO at MineOS, a global data privacy management firm.
“The problem here isn’t the AI itself, but the lack of basic security hygiene and governance around it. Any AI system that collects or processes personal data must be subject to the same privacy, security, and access controls as core business systems,” Nissan explained.
“That means authentication, auditability, and integration into broader risk workflows, not siloed deployments that fly under the radar. As adoption accelerates, businesses need to treat AI not as a novelty but as a regulated asset and implement frameworks that ensure accountability from the start,” he advised.