AI Brains Mimicked, ChatGPT's News Preferences, and More
Today in AI, we’re seeing a fascinating blend of neuroscience-inspired innovation and the ever-present questions of bias and control in large language models. From researchers mimicking the human brain to improve AI performance, to ChatGPT seemingly avoiding certain news sources, here’s a look at the most noteworthy AI stories of the day.
Researchers at the University of Surrey have made a significant stride in AI development by mimicking the wiring of the human brain. According to the BBC, this new method rethinks how AI systems are wired at their most fundamental level, potentially leading to more efficient and powerful models. It's fitting to see AI drawing inspiration from the very thing it aims to replicate: human intelligence. University of Surrey researchers mimic brain wiring to improve AI
Meanwhile, ChatGPT's browsing behavior is raising some eyebrows. As Gizmodo reports, the AI-powered browser appears to be actively avoiding links to The New York Times, behaving "like a rat who got electrocuted." The observation comes in the wake of the NYT's lawsuit against OpenAI alleging copyright infringement. While it's impossible to say definitively whether the avoidance is intentional on OpenAI's part, it raises important questions about how AI models are trained to navigate information, and about the potential for bias or censorship driven by legal and business considerations. ChatGPT's Browser Bot Seems to Avoid New York Times Links Like a Rat Who Got Electrocuted - Gizmodo
The day's news highlights both the immense potential and the inherent challenges of AI. As models grow more sophisticated, identifying and addressing their biases will be crucial to ensuring responsible and ethical use. These developments remind us that AI is not a neutral tool, but a technology shaped by the choices and values of its creators.
