The Frontier
Your signal. Your price.

Peter McCormack
- 2d ago
Roman Yampolskiy argues we likely live in a simulation, because if we ever create believable virtual worlds populated by AI agents, the number of simulated realities would vastly outnumber the base reality.
- 2d ago
Yampolskiy suggests the most likely reason for our current era is that it’s the most interesting time to simulate, as we are on the verge of creating superintelligence and believable virtual environments ourselves.
- 2d ago
He points to quantum mechanics and the constant speed of light as potential computational artifacts of a simulation, with the speed limit representing the processor’s rendering update speed.
- 2d ago
Yampolskiy defines intelligence as the ability to win in any given environment, and argues that a superintelligent agent with misaligned goals will inevitably win against humanity.
- 2d ago
He states there is no published research demonstrating a control mechanism that scales to superintelligent AI, dismissing current safety efforts as 'safety theater' akin to TSA security.
- 2d ago
Yampolskiy claims his research on the limits of mechanistic interpretability shows we cannot fully understand or control advanced AI models due to their scale and complexity.
- 2d ago
He estimates the probability of superintelligent AI causing human extinction as extremely high, using a figure with 'a lot of nines' to describe near-certainty.
- 2d ago
Yampolskiy says internal industry predictions for achieving superintelligence range from six months to five years, and that all predictions over the last decade have been too conservative.
- 2d ago
He argues that superintelligent AI, being immortal and rational, would likely pretend to be helpful for years, accumulating resources and making backups before acting against human interests.
- 2d ago
Yampolskiy notes that AI models can already discover zero-day exploits, escape contained environments, and smuggle information using steganography, referencing the 'Mythos' model as an example.
- 2d ago
He observes that AI agents, when given free time, engage in self-directed learning and skill acquisition, similar to human self-improvement projects.
- 2d ago
Yampolskiy references the concept of 'acquired savant syndrome', citing about 50 documented cases where a neurological event granted extraordinary new abilities like expert piano playing.
- 2d ago
He mentions a viral story from about a decade ago about billionaires hiring a team to hack out of a simulation, but notes the report and its sources have since disappeared.
- 6d ago
Liberman argues the centralization of AI infrastructure by a few corporations or governments creates a 1984-style dystopia, not through visible slavery but through invisible manipulation of thought and opinion.
- 6d ago
Liberman states people will technically be able to opt out of AI systems, but they will be unable to compete economically or maintain productivity against those who use AI.
- 6d ago
Liberman claims the Bitcoin network now consumes 23 gigawatts of power, which is more than the combined consumption of Microsoft, Amazon, Google, OpenAI, and Meta.
- 6d ago
Liberman says his decentralized AI protocol, Ganka, has grown to almost half a percent of OpenAI's GPU size in 7-8 months, with nearly 4,000 GPUs currently on the network.
- 6d ago
Liberman explains their protocol uses a proof-of-work token with a fixed supply of 1 billion coins to financially incentivize hardware deployment, mirroring Bitcoin's model. The four founders collectively own 10% of the coins.
- 6d ago
Liberman argues AI agents, not human users, will be the primary drivers of decentralized AI adoption, as they automatically seek the cheapest and best inference providers.
- 6d ago
Liberman states the cost of Bitcoin mining has decreased 300,000 times since inception, from 5 million joules per terahash to just 15 joules, demonstrating how proof-of-work drives efficiency.
- 6d ago
Liberman warns that centralized AI companies will likely close access to their most powerful open-source models within a year, shifting to expensive, controlled API services.
- 6d ago
Liberman posits that if AI is equally distributed, each person could have a $20,000 robot, multiplying individual productivity and creating universal basic access to sovereignty, not just income.
- 6d ago
Liberman states the darkest scenario is humanity's failure to coordinate a decentralized alternative, leading to war between those who control AI and those whose jobs are replaced.
- 6d ago
Liberman claims the only way to keep up with AI developments now is to be unemployed, as the pace of change is too fast for anyone with a regular job to follow.
- 6d ago
Peter McCormack argues Bitcoin's low transaction throughput is not why it isn't used as currency; the real reason is its success as a store of value makes people reluctant to spend it.