Why letting Bing Chat and other AI models answer political queries is a bad idea

A new study suggests that relying on large language models like Bing Chat or ChatGPT as a source of information when deciding how to vote is quite dangerous. An experiment by AlgorithmWatch, a human rights organization based in Berlin and Zurich, and AI Forensics, another European non-profit that investigates algorithms, has shown that the bots' answers to important questions are in some cases completely wrong and in others misleading.
