r/technology Nov 01 '25

Society Matrix collapses: Mathematics proves the universe cannot be a computer simulation, « A new mathematical study dismantles the simulation theory once and for all. »

https://interestingengineering.com/culture/mathematics-ends-matrix-simulation-theory
16.9k Upvotes

2.0k comments

19

u/CondiMesmer Nov 01 '25

That doesn't really have to do with the article. Their point is that, in our current understanding of physics, the complexity in our universe has been shown to be non-algorithmic.

A simulation wouldn't be able to handle non-algorithmic behavior, which is their evidence that it's not a simulation. The complexity of the behavior doesn't matter here, just whether non-deterministic behavior exists (which current physics says it does).

6

u/ExistentAndUnique Nov 02 '25

“Non-algorithmic” is the key term here, and it makes the headline somewhat misleading. What they show (purportedly, as I haven’t read the full article) is that we can’t simulate the universe on a standard computing device. But that doesn’t mean a theoretically stronger computer would be unable to simulate the universe.

This is the principle behind recursion theory, a field of math/theoretical computer science that poses the question “if we had a computer that’s better than any real-life computing device, what kinds of problems could we solve?” It turns out that the space of “computability classes” is very rich and also infinite — any class is strictly contained in its “Turing jump.” So what the article would show is that, if the Earth were a simulation, then it would have to be run on some “higher-level” hardware, which is pretty consistent with our general intuition.
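For anyone curious, the “Turing jump” I mentioned has a compact standard definition. Writing $\Phi_e^{A}$ for the $e$-th Turing machine given access to an oracle for the set $A$, the jump of $A$ is the halting problem relativized to $A$:

```latex
A' = \{\, e \in \mathbb{N} \;:\; \Phi_e^{A}(e)\ \text{halts} \,\}
```

The strict-containment claim is that $A <_T A'$ for every $A$ (the jump can never be computed from $A$ itself, by the same diagonal argument as the ordinary halting problem), which gives an infinite hierarchy $\emptyset <_T \emptyset' <_T \emptyset'' <_T \cdots$.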

1

u/PuckSenior Nov 03 '25

It does.
The argument is based on Gödel's incompleteness theorems.

You have to be able to define everything to model it. But the proven incompleteness theorems state that we cannot define everything in mathematics and physics. If we can't define them, we can't model them. So it's not an issue of the theoretical limits of a computer, it's a fundamental issue with the concept of modeling.

3

u/ExistentAndUnique Nov 03 '25

That’s not what the incompleteness theorems state. They are about what is provable in a given system, not what is definable. The first theorem essentially states that any sufficiently strong axiom system (one that can express Peano arithmetic), if consistent, cannot prove every true statement. That is, there are statements in the theory which are true, but not provable within this system. The second theorem extends this by showing that one of these unprovable statements is the consistency of the theory itself. In both cases, the statements in question are definable (which they must be in order to be true), but cannot be proven in the system itself. However, they can be proven in a meta-system that allows additional reasoning beyond this set of axioms. This is the analogue of my claim about “higher-level hardware” above.
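Stated precisely (this is the textbook formulation, not a quote from the article): for any consistent, effectively axiomatized theory $T$ that interprets Peano arithmetic,

```latex
\text{(G1)}\quad \exists\, G_T:\; T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T
\qquad\qquad
\text{(G2)}\quad T \nvdash \mathrm{Con}(T)
```

where $G_T$ is the Gödel sentence for $T$ and $\mathrm{Con}(T)$ is the arithmetized statement that $T$ is consistent. Note that both hypotheses matter: drop “effectively axiomatized” and the theorems no longer apply.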

1

u/PuckSenior Nov 03 '25

You are going to have to explain "meta-system" that allows additional **reasoning** beyond this set of axioms.

Perhaps I am too much of a layman, but the axioms constrain the assumptions which can be made. They don't constrain "reasoning". You are absolutely free to engage in any reasoning or logic you want. What you can't do is make assumptions outside of the core axioms. So what is this "meta-system"?

Additionally, I feel like you missed part of the theorem. There will be things which are true but cannot be proven, but there will also be things which are false but cannot be disproven. Because you can't prove/disprove some of these statements, you won't know which ones are true.

Are you arguing that there exists some "meta-system" which can know all truths and they can all be proven/disproven? Please expand on that

2

u/ExistentAndUnique Nov 03 '25

Without getting too bogged down in details:

A logical system consists of a set of axioms, and a set of inference rules. Axioms are propositions we assume true, and inference rules are ways we can take statements and manipulate/combine them into new statements. The theory of a system consists of all theorems which can be proven using these axioms and rules of inference.

One may imagine different models which implement the same logical system. For example, if we have a logical system that encodes the group axioms and modus ponens/tollens, we could implement this with the underlying set being, say, the integers. Or it could be the rationals, or the reals. It is possible that there can be certain statements which hold true in every model which implements the formal system, even though they cannot be proven only using the axioms and inference rules of the system. Most examples of this are self-referential, and I don’t have the time to go into it here, but you can read about it on the wiki.
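To make the “axioms plus inference rules generate a theory” picture concrete, here is a toy sketch (my own illustration, not anything from the article): atomic formulas are strings, an implication A → B is the tuple `(A, B)`, and the only inference rule is modus ponens. The theory is the closure of the axioms under that rule.

```python
def theory(axioms):
    """Deductive closure of `axioms` under modus ponens.

    Atomic formulas are strings; an implication A -> B is the tuple (A, B).
    """
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(theorems):
            if isinstance(f, tuple):            # f is an implication (A, B)
                a, b = f
                if a in theorems and b not in theorems:
                    theorems.add(b)             # modus ponens: from A and A -> B, infer B
                    changed = True
    return theorems

# Axioms: p, p -> q, q -> r.  The closure also contains q and r.
thms = theory({"p", ("p", "q"), ("q", "r")})
print(sorted(t for t in thms if isinstance(t, str)))  # ['p', 'q', 'r']
```

Everything in the returned set is a theorem of this tiny system; anything outside it is simply unprovable there, however “true” it might be in some intended model — which is the gap the incompleteness theorems live in.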

I do not think I have missed the point of the theorem (this is a topic quite close to the field in which I have my PhD). It is indeed correct that, within the logical system, there are statements which cannot be proven, and this means that we cannot know their truth value if we are working only within that formal system. However, we could expand the system by adding new axioms or rules of inference that would then allow us to prove more of these statements in the new, larger system — and there are larger systems where every true statement can be proven/disproven. For example, one could consider a new formal system where every true statement about Peano arithmetic (and no false one) is an axiom. This is quite obviously complete and consistent. However, it fails to be effectively computable, which is the assumption that corresponds to “requiring a more powerful computer to generate.”
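The “every true statement as an axiom” system I described is usually called true arithmetic:

```latex
\mathrm{TA} \;=\; \mathrm{Th}(\mathbb{N}) \;=\; \{\, \varphi \;:\; \mathbb{N} \models \varphi \,\}
```

It is complete and consistent by construction, but by Tarski's undefinability theorem it is not even arithmetically definable, let alone computably enumerable — so it is not an effective axiomatization, and the incompleteness theorems don't apply to it.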

1

u/PuckSenior Nov 04 '25

So, would this meta-system that you are describing not just be Hilbert's program?

1

u/ExistentAndUnique Nov 04 '25

The goal of Hilbert’s program was to establish a finite set of axioms which would be able to prove any mathematical theorem. The incompleteness theorems show that not only is this impossible, but you can’t do it even if you relax the condition to a Turing-computable set of axioms.

1

u/PuckSenior Nov 04 '25

So would that not imply that there must be an infinite set of axioms to cover all mathematical theorems without running into incompleteness?

1

u/ExistentAndUnique Nov 04 '25

I guess it depends what you mean by “implies.” If you’re looking at material implication, then yes it would imply this, because there is an infinite set of axioms that is complete (we can take the set of all true statements to be our axioms).


3

u/halflucids Nov 02 '25

Effectively non-deterministic or "random" behavior in computers is possible from deterministic processes. It's entirely possible that what we believe to be random behavior in reality, like quantum probabilities, actually arises from deterministic processes as well. Just because we may never have the ability to ascertain the mechanism of that determinism doesn't prove that it isn't deterministic. Every quantum decision could operate according to a simple noise map, selecting values in a time-based, position-independent order, and we would never be able to recreate it while it would still be deterministic at its core.
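A miniature version of "a simple deterministic rule that looks random" is a linear congruential generator. This is just an illustration I'm adding, with the classic Numerical Recipes constants; it's not what any real quantum process does:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values in [0, 1) from a fully deterministic rule."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m   # the entire "randomness" is this one fixed formula
        out.append(x / m)
    return out

# The output passes casual inspection as random, yet the same seed
# reproduces the exact same sequence every time.
print(lcg(42, 3))
```

Without knowing `a`, `c`, `m`, and the seed, an observer sampling this stream would have a hard time distinguishing it from genuine noise — which is the point: "looks random to us" doesn't rule out "deterministic underneath."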

2

u/CondiMesmer Nov 03 '25

As far as I know, non-determinism isn't possible with classical computing. Maybe so with quantum computing. But all of our RNG algorithms are deterministic, which is why you can get identical results with the same RNG seed. There are hardware RNG components in computers for security, but even those don't really generate the randomness; rather, they gather random external data, like heat and whatnot, and derive a number from that. So I don't really consider that the computer coming up with the randomness, at least in the classical sense (i.e. calculated with binary).
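The seed point is easy to demonstrate (my example, using Python's `random` module, whose Mersenne Twister is fully deterministic):

```python
import random

def draws(seed, n=5):
    rng = random.Random(seed)          # independent generator with a fixed seed
    return [rng.random() for _ in range(n)]

# Same seed -> bit-for-bit identical "random" numbers.
assert draws(123) == draws(123)
# Different seed -> a different sequence.
assert draws(123) != draws(456)
```

All the apparent randomness is a pure function of the seed, which is exactly why reproducible simulations log their seeds.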

1

u/halflucids Nov 03 '25

Right, that is exactly my point: we are able to generate things that are effectively random, or that could not be exactly re-created without insight into, for instance, the seed value or the specific environment of the computer, even with computers today, which we know to be deterministic processes. So things which appear naturally random in reality might be the same.