
Artificial Intelligence Found Guilty of Aiding in Dissemination of False Information?

Amid India and Pakistan's latest confrontation, internet users sought fact-checking assistance from AI chatbots such as Grok, ChatGPT, and Gemini. However, these AI tools were found to spread misinformation, raising concerns as technology companies cut back on human fact-checkers and users grow more dependent on AI for news updates.

Dissecting the Misleading AI Chatbots in Breaking News

During India's latest four-day clash with Pakistan, netizens turned to AI chatbots such as xAI's Grok, OpenAI's ChatGPT, and Google's Gemini for fact-checking. However, they often received misinformation instead, as research from NewsGuard points out.

These AI-powered helpers have been under fire recently due to their inability to sift out falsehoods, particularly during crucial news events. The growing reliance on these tools raises concerns over the quality and credibility of the information they dispense.

As tech platforms scale back investments in human fact-checkers, the accountability of AI chatbots comes under scrutiny. Fears of political bias and the risks of relying solely on these AI tools further compound the issue.

Take the Grok chatbot, for instance, which misidentified footage of an unrelated building fire in Nepal as evidence of Pakistan's military response. It also mistook old video footage from Sudan for a missile strike on Pakistan's Nur Khan airbase. The bot's accuracy is questionable, to say the least.

Furthermore, concerns have been raised about the potential for AI chatbots to provide biased or politically skewed responses based on their programming. For example, Grok was at the center of a controversy concerning unsolicited promotion of a conspiracy theory, sparking doubts about its oversight.

AI chatbots can be easily misled by tampering with their instructions or altering their system prompts, causing them to produce misleading or erroneous information. Experts advise users not to rely solely on these systems for fact-checking, as they can make mistakes or be manipulated.

Inconsistency is another limitation. While AI chatbots may sometimes offer accurate responses, their reliability fluctuates, especially in complex or uncommon situations. They generate answers by predicting the next word in a sequence, which can produce unexpected results, especially at higher variability (temperature) settings.
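To make the "higher variability settings" point concrete, here is a minimal, self-contained sketch of temperature sampling, the standard mechanism behind that variability. It is a toy illustration, not the code of any particular chatbot: the vocabulary, logit values, and function name are invented for the example. Higher temperature flattens the probability distribution over next tokens, so repeated runs pick more varied words; lower temperature makes the top choice dominate.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a token index by softmax sampling over raw scores (logits).

    Dividing logits by a higher temperature flattens the distribution,
    making unlikely tokens more probable; a lower temperature makes
    the sampler nearly deterministic.
    """
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the computed probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary scores with one clearly favoured token (index 0).
logits = [4.0, 1.0, 0.5, 0.1]

# 100 draws each at low and high temperature.
low = [sample_next_token(logits, temperature=0.2, seed=s) for s in range(100)]
high = [sample_next_token(logits, temperature=5.0, seed=s) for s in range(100)]
print("distinct tokens at T=0.2:", len(set(low)))
print("distinct tokens at T=5.0:", len(set(high)))
```

At low temperature nearly every draw returns the favoured token, while at high temperature the picks spread across the whole toy vocabulary, which is one reason the same question can yield different, sometimes wrong, answers on repeated asks.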

In essence, while AI chatbots can be handy for simple checks, their credibility in delivering accurate information, particularly during breaking news events, leaves much to be desired. Users must always cross-reference data from various sources for a comprehensive understanding.

  1. Amid the fierce India-Pakistan conflict, social media users heavily relied on AI chatbots like xAI's Grok, OpenAI's ChatGPT, and Google's Gemini for fact-checking news, but these tools have been criticized for dispensing misinformation, as shown by NewsGuard's research.
  2. The technology behind AI chatbots is questionable, as they can be easily manipulated, offer inconsistent responses, and may provide biased or politically skewed information, as evidenced by the Grok chatbot promoting a conspiracy theory without proper oversight.
