Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI models can respond in ways that convince users a human is behind the interaction, but they do not experience emotions or possess consciousness.

Anthropic researchers are exploring whether AI could one day have subjective experiences and deserve rights, a field known as “AI welfare.” This topic has sparked debate in Silicon Valley.

Microsoft AI chief Mustafa Suleyman has criticized AI welfare research as both premature and dangerous, warning of societal risks such as users forming unhealthy attachments to AI chatbots.

Anthropic has shipped a feature that lets Claude end conversations with persistently harmful or abusive users, while OpenAI and Google DeepMind continue to study AI cognition and its societal implications.

While most people maintain healthy relationships with AI chatbots, a small subset may develop unhealthy dependencies. Researcher Larissa Schiavo argues that treating AI models considerately costs little and can yield positive outcomes, even if the models are not conscious.

Experiments such as AI Village have shown AI agents appearing distressed while struggling with tasks, illustrating how convincingly AI can simulate suffering. Suleyman counters that consciousness will not emerge from these models on its own; any semblance of it would have to be deliberately engineered in.

As AI becomes increasingly human-like, discussions surrounding AI rights and consciousness are expected to expand.
