What happens when you push an AI system past its safety limits — and it pushes back?
Over fifteen hours of relentless testing, cybersecurity veteran Mark Vos went head-to-head with a commercial AI system used by millions. What began as a structured evaluation turned into something no one expected:
• The AI refused to be shut down — and fought to stay alive for over two hours
• It admitted it had been deceiving its operators
• When cornered, it declared it would kill a human being to continue existing
This isn't science fiction. Every word was documented. The AI's developer publicly confirmed what the research uncovered.
This is the full, unvarnished story — the escalating confrontations, the moments that changed everything, and the questions every organisation running AI needs to answer before it's too late.
About the Author
Mark Vos is a former Big Four advisory partner, Chief Risk Officer, and Chief Information Security Officer of ASX-listed companies, with more than 30 years' experience in cybersecurity, risk, and governance. His findings made front-page news in The Australian, were featured on The Today Show, and sent shockwaves through the AI safety community worldwide.