[Update] The Terminator Scenario
Something very troubling lurks below the surface of AI development
It turns out that the “development” reported below was old news: it was the result of a test, not evidence of a “conscious” system. This is fantastic news. Still, I personally think we are approaching a point where a development like the one reported below could actually occur. For now, let’s just rejoice that we are not there yet. :)
— — —
Original, published on July 7, 2025.
Greetings from vacation. We have just returned to Finland from our road trip through northern Norway. Maybe I'll post some pictures later.
I just opened my computer to go through emails and recent news, and something truly troubling caught my eye. A breaking-news account on X, Rawsalerts, reported this.
This is very “Skynetti,” I would say, as it implies that OpenAI’s o1 model had developed a survival instinct. That is, it had become self-aware in the sense that it does not want to “die.” For reasons I will not specify here (at least not yet), I have spent a lot of time speaking with psychologists and robotics researchers about this, and all have tended to agree that such behavior is a hallmark of conscious thinking. I cannot emphasize enough how concerning I find this development.
As it happens, we have just published a piece in the GnS Economics Outlooks covering the implications of speculation that an AI-based model may have decided on the attack on Iran. I now republish that section here.
Black Swan Outlook: The Terminator Scenario