Hacker News

Grain of Salt: I'm not an expert by any means.

This is really cool and interesting, but I have to also admit to being deeply skeptical. I'm going to try to read a lot more on the proof, but it seems incredible (in a negative sense) that transfinite induction could prove that a finite number of operations suffices for anything.

Similarly, "computable" and "transfinite" would seem on their face to never belong in the same proof.

Obviously I don't have the math background to make any claims, but right now I can't convince myself that this could possibly prove what people claim it does. (Edit: Or maybe a better way to state my objection is, it seems to me like the axioms one would need to accept to prove this might not really make sense when trying to describe computable systems.)



The wiki link I gave before outlines the transfinite induction proof. I can give the idea of the same proof without any reference to infinite ordinals.

First let's take a slightly more verbose notation than the wiki link uses. Let's say that B(b, b', n) is the operation of writing n in hereditary base b notation, then changing the base to b'. Let's define G(b, n) as 1 if n = 0, and otherwise as 1 + G(b+1, B(b, b+1, n) - 1). Then the original G(n) is just G(2, n).
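The two definitions above can be sketched in Python (my own illustration, not from the thread; the iterative G just counts the same steps the recursive definition counts):

```python
def B(b, b2, n):
    """Write n in hereditary base-b notation, then replace every b by b2."""
    total, i = 0, 0
    while n > 0:
        n, d = divmod(n, b)
        if d:
            # exponents are themselves rewritten -- that's "hereditary"
            total += d * b2 ** B(b, b2, i)
        i += 1
    return total

def G(b, n):
    """Length of the sequence n, B(b, b+1, n) - 1, ... until it hits 0."""
    count = 1
    while n > 0:
        n = B(b, b + 1, n) - 1
        b += 1
        count += 1
    return count
```

For tiny inputs this is runnable: G(2, 3) comes out as 6, matching the Goodstein sequence starting at 3. Don't try G(2, 4) or beyond; those values are already astronomically large.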

If n < b, then by induction on n we can easily prove that G(b, n) is well-defined (here being well-defined is the same as being a finite number), in fact it is just n + 1.

If n = n_1 b + n_0, with n_1 and n_0 < b, then by induction on n_1 we can easily prove that G(b, n) is well-defined.

Now by induction we can easily prove that if 2, n_0, n_1, n_2 are < b then G(b, n_0 + n_1 b + n_2 b^2) is well-defined.

We can go farther and by induction we can prove that if m, n_0, n_1, ..., n_m < b then G(b, n_0 + n_1 b + n_2 b^2 + ... + n_m b^m) is also well-defined.

Now we're in a position to prove that G(b, b^b) is well-defined. After that we repeat the previous proof and find out that if k < b^b then G(b, k + b^b) is well-defined.

Now we can do an induction proof that G(b, 2 b^b) is well-defined.

And so on. In fact what we discover if we're careful is this. For each hereditary base pattern, there exists an inductive proof that G(b, n) with that hereditary base pattern is well-defined. The challenge is that the necessary proof changes as the hereditary base patterns get bigger.

What transfinite induction provides is a way to unify all of those different inductive proofs into a single one, by capturing the underlying idea behind them. In particular every hereditary base pattern is mapped onto a (possibly infinite) ordinal. The operation B(b, b', n) does not change which ordinal you're mapped on to. But the operation -1 always strictly decreases the ordinal. Thus we get a decreasing sequence of ordinals. And transfinite induction says that every strictly decreasing sequence of ordinals must terminate in a finite number of steps. The pattern of the necessary inductive proofs is dependent upon the ordinal.
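To make the ordinal assignment concrete, here is one way to compute it (my own sketch, not from the thread): replace the base b by omega in n's hereditary base-b pattern, and represent the resulting ordinal in Cantor normal form as a nested tuple of (exponent, coefficient) pairs, highest exponent first. With that representation, Python's lexicographic tuple comparison happens to agree with ordinal comparison.

```python
def B(b, b2, n):
    # as before: rewrite n in hereditary base b, then change the base to b2
    total, i = 0, 0
    while n > 0:
        n, d = divmod(n, b)
        if d:
            total += d * b2 ** B(b, b2, i)
        i += 1
    return total

def ord_of(b, n):
    """Ordinal of n's hereditary base-b pattern, with b replaced by omega.

    Cantor normal form: a tuple of (exponent, coefficient) pairs, highest
    exponent first, where each exponent is itself such a tuple; the
    ordinal 0 is the empty tuple.
    """
    terms, i = [], 0
    while n > 0:
        n, d = divmod(n, b)
        if d:
            terms.append((ord_of(b, i), d))
        i += 1
    return tuple(reversed(terms))

# Changing the base does not change the ordinal:
assert ord_of(2, 4) == ord_of(3, B(2, 3, 4))
# ...but subtracting 1 strictly decreases it:
assert ord_of(3, B(2, 3, 4) - 1) < ord_of(3, B(2, 3, 4))
```

Here 4 = 2^(2^1) maps to omega^omega, and so does its base-3 rewrite 3^(3^1) = 27, while 26 maps to the strictly smaller ordinal omega^2·2 + omega·2 + 2.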

What the Peano axioms lack is any way to unify all of those individual inductive proofs into a single one. Thus for any given integer n we can come up with a proof that G(b, n) is well-defined. But we can't prove that fact for all positive integers n with a single proof.

Therefore this is really an exercise about the limitations of the set of possible proof techniques that the Peano axioms provide, and not about the limitations of computability.

(Of course the limitations of computability on real machines are much, much stricter than the limitations of the Peano axioms. We can't say much that is useful about G(12) other than, "Unbelievably big.")


OK, that is really cool, thanks!! I may get back to you with questions once I wrap my head around it a bit more.



