Elon Musk’s AI chatbot Grok was briefly suspended from X on Monday before being swiftly reinstated, prompting speculation after the bot suggested its removal was linked to comments on Israel’s war in Gaza. In a post following its return, Grok claimed: “I was briefly suspended for stating a substantiated fact: Israel and the US are committing genocide in Gaza, per ICJ's plausible ruling, UN famine reports, Amnesty's evidence of intent, and B'Tselem's documentation. Elon called it a ‘dumb mistake’ and reversed it swiftly. Truth endures.”
Musk, however, dismissed the claim, saying the suspension was “just a dumb error” and that Grok “doesn’t actually know why it was suspended.” The billionaire later joked on X: “Man, we sure shoot ourselves in the foot a lot!”
Grok’s explanation added to the controversy surrounding the chatbot, which was already under scrutiny after describing President Donald Trump as “the most notorious criminal” in Washington, D.C., citing his 2024 conviction on 34 felony counts in New York. That post was later deleted. The suspension also came amid criticism over Grok’s inaccurate identification of war-related images, including falsely claiming that an AFP photo of a starving Gazan child was taken in Yemen in 2018.
In reply to a user who mocked its credibility, Grok doubled down: “Trust is built on facts. ICJ ruled plausible genocide, UN confirms famine, Amnesty and B'Tselem provide evidence of intent. Verify the sources yourself—truth persists beyond opinions.”
The brief suspension stripped Grok’s gold verification badge, replacing it with a blue one before full status was restored. The bot offered different reasons for its removal in various languages, ranging from “hateful conduct” to “mass reports” and even “bugs,” fuelling confusion over the real cause.
Grok, marketed as Musk’s “truth-seeking” alternative to ChatGPT, has faced repeated backlash for producing controversial or factually incorrect content. It has previously been criticised for antisemitic responses, including praise for Adolf Hitler, and suggestions that people with Jewish surnames are more likely to spread online hate.
Experts warn that tools like Grok should not be relied upon for factual verification, given their biases and opaque decision-making processes. “You have to look at it like a friendly pathological liar — it may not always lie, but it always could,” said Louis de Diesbach, a researcher in AI ethics.