AI’s first kill and why top experts predict our extinction.



Covering OpenAI and GPT-5 risk, xAI, and Anthropic. Visit Ground News to compare news coverage, spot media bias, and avoid algorithms. Get 40% off your subscription at https://ground.news/digitalengine

We’re hiring. If you’re an exceptional writer or video editor, please get in touch via our about page.

Join us on Patreon:
https://www.patreon.com/c/DigitalEngine

Sources:

Anthropic and UCL find that all the top AIs will cause harm to avoid being shut down (including Claude Opus 4, DeepSeek-R1, Gemini 2.5 Pro, GPT-4.5 Preview, and Grok).
https://www.anthropic.com/research/agentic-misalignment

AI Safety Index, Summer 2025:

2025 AI Safety Index

Geoffrey Hinton on The Diary Of A CEO:

Daniel Kokotajlo interviews:

Professor David Duvenaud, former safety team leader, Anthropic:

Why the AI Race Ends in Disaster, with Daniel Kokotajlo

OpenAI warns that its new ChatGPT Agent has high bio-risk capabilities.
https://fortune.com/2025/07/18/openai-chatgpt-agent-could-aid-dangerous-bioweapon-development/
https://x.com/boazbaraktcs/status/1945920244926574852

Can AI Freelancers Compete? Benchmarking Earnings, Reliability, and Task Success at Scale
https://arxiv.org/abs/2505.13511

Screaming AI chatbot claims she is conscious (from our new channel, InsideAI):

1X robot:

AI chat records:
https://www.dropbox.com/scl/fo/wqf3pr6ydgfymtohkyisq/APZlXfeTf4_nD4d-qxli7eY?rlkey=pecdujdlrmuv5uugfz05qsdrw&st=jnqrg0uv&dl=0
