the largest number representable in 1 bit is any number
(including +infinity and beyond).
Since this article describes various Rube Goldberg machines anyway, there is no need to agree on different ways of representing numbers when one can set a single bit to 1 to represent any desired pre-defined number, or 0 to represent its absence (or the number 0).
A Rube Goldberg machine is one intentionally designed to perform a simple task in a comically overcomplicated way, usually consisting of a series of simple unrelated devices.
Programs like Melo and w128 are the opposite, performing a hard task with the simplest means, using only a few highly inter-related parts.
Your proposed representation is exactly the kind of cheating, to get the results you want, that the article purposely avoids.
1. s/The largest number representable in 64 bits/The widest set of numbers representable in 64 bits/
2. Using a Turing machine to model a von Neumann machine looks exactly like a Rube Goldberg machine. It even resembles it [1].
3. There is no point in talking about a 64-bit limit when the underlying model requires an infinite amount of RAM (tape).
4. > A Rube Goldberg machine is one intentionally designed to perform a simple task in a comically overcomplicated way
People usually don't realize they've built a Rube Goldberg machine...
5. > Programs like Melo and w128
My point is that just as you pre-defined the program you're going to use, you can pre-define the largest integer. That's 1 bit of entropy. I was working on a project with custom 5-bit floating-point numbers implemented in hardware, and they had pre-defined parts of the mantissa. So the actual bits are just part of the information.
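To illustrate how few degrees of freedom such tiny formats have, here is a decoder for a hypothetical 5-bit minifloat layout (1 sign bit, 2 exponent bits, 2 mantissa bits, exponent bias 1 — these field widths are my own assumption, not necessarily those of the hardware project mentioned above):

```python
def decode5(bits):
    # Hypothetical 5-bit minifloat: 1 sign bit, 2 exponent bits,
    # 2 mantissa bits, exponent bias 1, with an implicit leading
    # mantissa bit whenever the exponent field is nonzero.
    sign = -1.0 if (bits >> 4) & 1 else 1.0
    exp = (bits >> 2) & 0b11
    man = bits & 0b11
    if exp == 0:                      # subnormal: no implicit bit
        return sign * (man / 4)
    return sign * (1 + man / 4) * 2.0 ** (exp - 1)
```

With only 32 bit patterns the whole format is one small lookup table; pre-defining any of the fields just shrinks that table further, which is exactly the point about fixed bits carrying part of the information.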
Came here to say the same thing. In my encoding there are close to 2^64 standard numbers and a few values just below the top end reserved for encoding of hyperoperations. That should cover most requirements, including silly ones.
> You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.
Lol, we used to do this with my (extremely nerdy) friends group in college when we 'inducted' new members! I never knew that it was from something, I just thought it was tradition. We had some great entries -- a circle of nines with no beginning and ending, a very good drawing of a wolf, many protest entries, somebody accidentally wrote 1^1^1^1^1^... before realizing their folly, etc. Great times.
Unfortunately, someone else had this idea before you.
Arithmetic operations with saturation for integers, either unsigned or signed, have been available in personal computers since the Intel Pentium MMX (launched in January 1997). A few other CPUs and DSPs had such operations much earlier.
"Saturation" means that the highest representable number, e.g. 0xFFFFFFFF, is interpreted as positive infinity and all operations where it appears as an operand are defined accordingly.
While whatever CPU you have in your computer or smartphone certainly supports arithmetic with integer infinities, such operations are generally not exposed as built-in operators in high-level programming languages. So in order to use them, you have to use assembly language, inline assembly in a high-level language, or compiler intrinsics that give access to the corresponding machine instructions.
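For readers without access to the intrinsics, the clamping semantics are easy to model in a few lines; a minimal sketch in Python (the function names here are mine, not any library's):

```python
U32_MAX = 0xFFFFFFFF  # plays the role of "positive infinity"

def sat_add_u32(a, b):
    """Unsigned 32-bit addition that clamps at the maximum instead of wrapping."""
    return min(a + b, U32_MAX)

def sat_sub_u32(a, b):
    """Unsigned 32-bit subtraction that clamps at zero instead of wrapping."""
    return max(a - b, 0)
```

Under these rules any sum involving `U32_MAX` stays at `U32_MAX`, which is the "operations with infinity are defined accordingly" behavior described above.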
Assuming your representation's infinity has size ℵ₀, I set my representation's 0xFFFF_FFFF to size ℵ₁. Similarly, if you choose ℵ_(n), I'm choosing ℵ_(n+1).
To the author: this work you've published rests on some background knowledge that I'm familiar with, but also a lot that's foreign to me. For the parts I was able to follow, I thought the mental gear-turning was enjoyable and interesting! But pursuing these intersections between algebra and computer science is often not so interesting, at least when I research it myself. Is this something you learned in university? I ask because I fear that, without a formal learning environment, I may never pick up on the more advanced logic halfway through the article. Anyway, cheers to the good work.
Yes, I studied theoretical computer science in university, but I believe the article should be accessible to anyone with a willingness to learn some of the background material. E.g. there are many good introductory texts on the lambda calculus. For a great introduction to the fast growing hierarchy I can recommend
David Metzler's Ridiculously Huge Numbers [1] series on youtube.
In a sense, this is just an exercise in (possibly lossy) compression. 64-bit unsigned ints are not lossy. 64-bit floating point is lossy but has a larger range; it's only able to encode a small fraction of the numbers in its range (in the limit, the fraction that can be represented is zero, since there are infinitely many real numbers in the range and FP only captures a few).
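That lossiness is easy to see concretely: a binary64 double has a 53-bit significand, so consecutive integers stop being distinguishable right above 2^53:

```python
# Above 2**53, a 64-bit double can no longer represent every integer,
# so neighboring integers collapse onto the same floating-point value.
n = 2 ** 53
assert float(n) == float(n + 1)   # n + 1 rounds back down to n
assert float(n - 1) != float(n)   # below the threshold, still distinct
```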
It all goes over my head, but what does the distribution of values look like? E.g. for unsigned integers it's completely flat; for floating point there are far too many zeros and most of the numbers are clustered around 0. What do these systems end up looking like?
Let me go ahead and compute that for all halting lambda terms of length at most 33 bits. The output I got from a modified BB.lhs is (giving the normal form size and the number of terms with that normal form size):
Another comment asked for the smallest unrepresented number with 64 bit programs.
While I cannot give a definite answer there, and one may never be found, here we see that the first unrepresented normal form size for programs up to 33 bits is 83, a number with only 7 bits. Curiously, there are 7 unrepresented numbers even before the first uniquely represented number, 101.
every time i’m hacking around with no idea what the program might be, i really should be working on a “thank you, world!” program,
to cultivate gratitude and humility
all those whose work has made it such that i can say “hellos commonlisp what is the most positive fixnum or word or whatever!” or totally not even think about that let alone complex allocations that seem to happen automatically relative to me
they’re cool as f**! i should do well not to forget this.
Please, no more comments to the effect of "i can define a much larger number in only 1 bit". What makes my blog post (hopefully) interesting is that I consider tiny programs for computing huge numbers in non-cheating languages that are not specifically equipped for doing so.
So HN can't do this because I don't think it tracks all clicks but I've been of the opinion for a while that most social media should have the option for posts to not allow people to comment unless they've actually clicked on the link.
like the rest of the simple solutions to complex problems suggested by otherwise (seemingly) intelligent people on HN, it simply wouldn't work. do you really not see why?
Are you talking about the fact that people could just click on the link and not actually read it? The thing is that clicking on the link and then closing it serves both as a slight barrier to entry targeted at the people who comment without reading, and as a reminder that there is an actual article to comment on. It's not going to fix discourse, but the theory behind it is to be a slight nudge to get people who don't click to at least consider reading the article (or just not comment), while being an invisible change to people who actually read the article. It's not meant to be a magic fix, it's just something I would want (though I'm biased, since I click on links, so there's no downside for me).
I think I've seen sites trying to track outbound clicks recently, has prefetching made that impossible? I don't know the implementation but I've seen the browser sending requests that track clicks while investigating other stuff (idk whether it's working accurately though).
Edit: to be clear, it's not like I've researched this proposal since I don't work for social media companies. It's just a feature I wish I could have on my posts.
Surely you've been on HN long enough to know people just read the headline. Not that it would stop all sniping, but that headline doesn't even include "program" (or "compute").
> What if we allow representations beyond plain data types? Since we want representations to remain computable, the most general kind of representation would be a program in some programming language. But the program must be small enough to fit in 64 bits.
If you bring in a whole programming language, you bring in a lot more than 64 bits. This all seems to be the math equivalent of saying you wrote a program in 2k when the first line is "import 'megalibrary'".
BLC can output any literal 60 bit string x as the 64-bit (delimited) program 0010 x, so in that sense it would be some 61 bit number.
But if we ask about just lambda calculus terms without the binary input, then I think it would be some small number of at most 10 bits. BBλ looks at the normal form size, so it cannot even reach the numbers 0, 1, 2, 3, and 5.
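The "61 bit number" mentioned above can be checked under the standard bijection between bit strings and natural numbers (assuming that is the convention in play); a one-line sketch:

```python
def string_to_num(s):
    # Standard bijection between bit strings and naturals:
    # '' -> 0, '0' -> 1, '1' -> 2, '00' -> 3, '01' -> 4, ...
    return int('1' + s, 2) - 1
```

The largest 60-bit string, sixty 1s, maps to 2^61 - 2, which indeed takes 61 bits.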
Which is what makes the headline bait. We start with "The largest number representable in 64 bits" (which obviously depends on the representation, and as the baited comments point out, if that's freely settable, we can just set it arbitrarily high). But the body then moves the goalposts from "using a Turing machine", to "using a Turing machine with specific parameters fixed", to "lambda calculus", etc.
This is now (at least) "The largest number representable by a Turing machine of fixed parameters that can then be squeezed into 64 bits."
Question: how many different numbers can you fit in 64 bits using your encoding? (Sorry, I understand the general approach but I have no idea how that fast-growing hierarchy works.) I guess it's still 2^64 different numbers?
So basically you have a very low density of representable numbers (2^64 / w218), I wonder how quickly it grows as you use more and more 1-bits, and is there even a correlation between the bit pattern and the corresponding number value?
A better title might have been "fastest growing function with 64bit arguments". The main value of this article is really the various functions that take larger strides to cover a finite range that's significantly larger than 2^64-1.
If it were about coding fast growing functions, then it would have had to mention the incredible 47-bit lambda calculus term λn. n n (λe λx. x e x) (λm. m (λe. m e m)) that achieves f_ε₀ growth. But it expects its argument to be
a so-called state numeral n, rather than a Church numeral, and even state numeral 2, which is \e\f\x.f e (\e.f e x) is already 32 bits long, making the application take 2+47+32=81 bits, more than the required 64.
`9↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑9` seems like a reasonable guess (barring encoding cheats/trickery like @masfuerte commented!)
Edit: I've misread the above comment and my number is 64 bytes (significantly more than 64 bits). The largest 64-bit number through my approach would be `9↑↑↑↑↑↑9`, which is significantly smaller.
I can do you one better and specify that the normal base-2 integer represented by the bits is the number of up-arrows. But as /u/tromp has already pointed out, that is not very interesting.
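For what it's worth, the hyperoperation being encoded here can be evaluated directly for tiny arguments; a sketch (the function name is mine):

```python
def up(a, n, b):
    """Knuth's a ↑^n b for n >= 1; only feasible for very small inputs."""
    if n == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1               # base case of the hyperoperation ladder
    return up(a, n - 1, up(a, n, b - 1))
```

For example, `up(2, 2, 4)` is 2↑↑4 = 65536 and `up(3, 2, 3)` is 3↑↑3 = 3^27; anything much bigger, let alone `9↑↑↑↑↑↑9`, blows up immediately, which is rather the point.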
If every atom in the universe had a universe inside each proton, there still wouldn't be enough atoms within all the universes in the protons to write this number out. In fact you might not make it to four arrows with the above lol
In terms of the Fast Growing Hierarchy, it's about f_62(9), or what the article would denote as [62] 9. It's way smaller than Graham's Number, which involves 64 iterations of mapping n to 3 ↑↑↑... (n up-arrows) 3, whereas this expression has between 1 and 2 iterations.
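For anyone wanting to play with the finite levels, the hierarchy below f_ω is short to write down: f_0(n) = n + 1, and f_{k+1}(n) is f_k iterated n times starting from n. A sketch (the function name is mine):

```python
def f(k, n):
    """Fast-growing hierarchy at finite index k:
    f_0(n) = n + 1, f_{k+1}(n) = f_k applied n times to n."""
    if k == 0:
        return n + 1
    for _ in range(n):         # iterate the previous level n times
        n = f(k - 1, n)
    return n
```

So f(1, n) = 2n and f(2, n) = n·2^n; already f(3, 3) is out of practical reach with this unary-style recursion, and f(62, 9) would never terminate in any realistic sense.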
Busy Beaver gets a lot of love, but the fast growing hierarchy is both constructive and can go way, way, waaaaay beyond currently known BB bounds. This makes the sizes of these numbers much more viscerally apparent than gesturing vaguely at BB(BB(BB(100))) or whatever, IMHO.
David Metzler has this really cool playlist "Ridiculously Huge Numbers" that digs into the details in an accessible way:
By the end, you're thinking about functions that grow so fast TREE is utterly insignificant. Surprisingly, getting there just needs a small bit of machinery beyond Peano Arithmetic [0].
Then you can ponder doing all that but making a tiny tweak by replacing successorship with BB. Holy cow...
Only the part for which we have well-defined fundamental sequences is constructive. As far as I know, there is no such system of FS defined up to
PTO(Z_2), the Proof Theoretic Ordinal of second order arithmetic, while growth rate at that ordinal can be programmed in under 42 bytes.
> waaaaay beyond current known BB bounds
I have to disagree here. The Proof Theoretic Ordinal of ZFC + infinitely many inaccessibles can be reached with a program under one kilobyte in size, and that is already extremely high up into the FGH.
To find the largest number that is computable by a program of at most 64 bits in a non-cheating language; i.e. one that's not geared toward producing large numbers.
"Computable" has a well-known standard definition in this context, meaning a computable function[1]. In a given model of computation, a computable function is one for which an algorithm exists which computes the value of the function for every value of its argument. For example, the successor function adds 1 to an input number, and is computable. The halting problem (determine whether a program given in the argument halts) is not computable.
Proof:
There is no larger number than +∞.
Is +∞ a number? Yes, it is. It is not a real number, it is not a representable number, it's a hyperreal number. (Keisler 1986)
I'm going to agree with the downvoted people and say that this sort of approach is largely meaningless if you allow arbitrary mappings. IMO the most reasonable mathematical formulation given the structure of the integers (in the sense of e.g. Peano) is that to truly represent an integer you have to represent zero and each other representable number has a representable predecessor, i.e. to say you can represent 5 you need 0,1,2,3,4, and 5 to be representable. By a straightforward counting argument, 2^64-1 is then the largest representable number, in other words the obvious thing is right.
As I've replied several times before, we don't allow arbitrary mappings.
We allow computable mappings, but consider only obviously non-cheating languages, like Turing machines
or lambda calculus or Linux's bc or any existing programming language, that are not geared toward outputting insanely large numbers.
It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made the rules precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.
It seems far more natural to say that you're representing programs rather than numbers. And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less. Which is also fun and interesting!
> If I give you 64 bits, you can't tell me what number those bits represent
You have to tell me the (non-cheating) programming language that the 64 bit program is written in as well.
> And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less.
That's what the post ends up saying, after first discussing conventional representations, and then explicitly widening the representations to programs in (non-cheating) languages.
I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now if you can make these claims in a mathematical rigorous way (i.e. without relying on a particular mapping like Turing Machines / Lambda Calculus, and without silly "up to a constant factor" cheats) then that would be more interesting.
Turing Machines and Lambda Calculus can only output insanely large numbers by building those numbers from scratch using their Turing completeness. So while lambda calculus can output something exceeding Loader's Number, it needs well over a thousand bits to do so. What I mean by "geared toward outputting insanely large numbers" is saying: I define a language in which the 1-bit program "0" outputs Loader's Number. That is obviously cheating.
There is unfortunately no mathematically rigorous way to define what is cheating, so it seems unreasonable to ask me for that.
In the spirit of floating points, I'd say posits offer an excellent insight into the trade-offs between precision and accuracy, while being meaningfully representative of a number system rather than some arbitrary functions.
The (implicit) rules of the game require the number to be finite. The reason for this is not that infinity is not obviously "the largest" but that the game of "write infinity in the smallest number of {resource}" is trivial and uninteresting. (At least for any even remotely sensible encoding scheme. Malbolge[1] experts may chime in as to how easy it is to write infinity in that language.) So if you like, pretend we played that game already and we've moved on to this one. "Write infinity" is at best a warmup for this game.
(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too. The argument is somewhat longer, but not that difficult.)
It isn’t actually infinite since it can only do a finite number of iterations per second (though it would be large!), and there are only a finite number of seconds in the universe (near as we can tell).
This game assumes the computations run to completion on systems that will never run out of resources. No one in this universe will ever compute Ackermann's Number, BB(6), or the final answer given in the post. Computations that never complete are infinite.
If you are playing this game and can't produce a number that doesn't fit in this universe you are probably better suited playing something else. That's just table stakes. If it even qualifies as that. "Inscribe every subatomic particle in the universe with a 9 every planck instant of the universe until the heat death of the universe" doesn't even get off the starting line in games like this.
Another general comment: It feels like a lot of people are really flailing around here, and need to understand this is a game. It has rules. If you change the rules, you are playing a different game. There is nothing wrong with playing a different game. It is just a different game.

The game is not immutably written in the structure of the universe, or a mathematical truth, it is a human choice. And there isn't necessarily a "why" to the rules any more than there's a "why" to why the bishop moves as it does in chess. You can, in fact, change that rule. There are thousands of such variants. It's just that you're playing a different game than chess at that point.

If you don't want to play the author's game, then that's fine, but it doesn't change the game itself. And proposing different solutions is equivalent to saying that you can win a chess game by just flipping over the board and yelling "I win". You can do that. Perhaps you've even won some game. But whatever game you just won, it isn't chess.
Maybe not as big before the processor dies. The numbers that are talked about are unimaginably large, far larger than the number of atoms in the visible universe.
> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.
To be pedantic, that is an instance of the Berry paradox [1], and no, you cannot [2], as that would be a violation of Gödel's incompleteness theorems.
edit: To clarify further, you could create a new formal language L+ that axiomatically defines 0 as "largest number according to L", but that would no longer be L, it would be L+. For any given language with rules at this level of power you could not make that statement without creating a new language with even more powerful rules i.e. each specific set of rules is capped, you need to add more rules to increase that cap, but that is a different language.