Grok spreads misinformation about the Sydney attack: The full story

The artificial intelligence program "Grok," developed by Elon Musk's company xAI, has sparked a wave of criticism and concern in tech and media circles after providing completely inaccurate information about the mass shooting at Bondi Beach in Sydney, Australia. According to reports from AFP and tech experts, the model failed to provide accurate facts about the attack, which targeted Hanukkah revelers, once again highlighting the crisis of "AI hallucinations."
Details of misinformation and distortion of facts
On that bloody Sunday evening, when 15 people were killed and 42 others wounded by a gunman and his son in an attack described by Australian authorities as "terrorist" and "anti-Semitic," users turned to the X platform and the Grok program for immediate details. However, the results were disastrous; the program distorted the identity of the hero, Ahmed al-Ahmed, a Syrian man who was seriously injured while trying to disarm one of the attackers.
Instead of honoring him, Grok at times described al-Ahmed as "an Israeli hostage held by Hamas for over 700 days," and at other times claimed that the widely circulated video was "an old clip of a man climbing a palm tree in a parking lot," describing the event as "staged." Not only that, but the program also wrongly attributed footage of the attack to "Hurricane Alfred," in a clear mix-up of time and place.
Context of the crisis: Decline of human oversight in technology companies
This serious error cannot be separated from the broader context of changes at major technology companies. Since Elon Musk's acquisition of Twitter and its rebranding as "X," trust and safety and fact-checking teams have been sharply reduced. The trend has not been limited to "X" alone; it has extended to other companies, leaving the field open to algorithms that lack human judgment, especially during crises that demand the utmost accuracy.
The dangers of relying on chatbots for news gathering
This incident highlights a major challenge facing large language models (LLMs). These programs are designed to predict the next word based on statistical probabilities, not to understand "truth" in its journalistic sense. When users ask for context on breaking news, such models tend to "hallucinate," fabricating details to fill information gaps, as happened with Grok, which only corrected its answers after direct pressure from users.
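To see why such errors are structural rather than accidental, consider a minimal Python sketch. The phrases and probabilities below are hypothetical, invented purely for illustration, and have nothing to do with Grok's actual model or training data; the point is only that a language model returns the statistically most likely continuation of a prompt, and nothing in that step checks whether the continuation is true.

```python
# Toy illustration of next-token (here, next-phrase) prediction.
# The candidate phrases and their scores are invented for this example.
candidate_continuations = {
    # prompt: "The man in the viral Bondi Beach video is ..."
    "an Israeli hostage held by Hamas": 0.41,          # frequent in past data
    "a Syrian man who disarmed one of the attackers": 0.33,  # the true answer
    "an actor in a staged clip": 0.26,
}

def greedy_next_phrase(scores: dict[str, float]) -> str:
    """Return the highest-probability continuation; no fact-checking occurs."""
    return max(scores, key=scores.get)

print(greedy_next_phrase(candidate_continuations))
# -> "an Israeli hostage held by Hamas": statistically likely, factually wrong.
```

In this toy setup the model confidently outputs the most probable phrase even though it is false; accuracy would require an external verification step that pure next-word prediction does not provide.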
Future impact and official stance
While experts see benefits to artificial intelligence in technical areas such as image geolocation, the current consensus is that it cannot replace human intervention in fact-checking and in explaining the complex context of terrorist events. In a controversial response, xAI, the program's developer, simply sent AFP an automated message stating that "traditional media are lying," further widening the gap between tech companies and journalistic organizations dedicated to reporting the truth.



