Your link does say: "Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code."
Which helps a little, but it still raises the question: why ten decimal digits? Why not nine or eleven?
Are they implying that six characters of six bits was the critical issue? If so, why not seven characters? Or five? Etc.
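(As an aside, the "35 bits would have been the minimum" arithmetic in the quote does check out; a quick Python sketch, purely illustrative:)

    # Ten decimal digits means magnitudes up to 9,999,999,999, plus a sign bit.
    limit = 9_999_999_999
    print(2**33 - 1 >= limit)   # False -> 33 magnitude bits are not enough
    print(2**34 - 1 >= limit)   # True  -> 34 magnitude bits suffice
    # 34 magnitude bits + 1 sign bit = 35 bits minimum, so a 36-bit word
    # holds ten signed decimal digits with one bit to spare.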
If you're keen to go down the Wikipedia hole, https://en.wikipedia.org/wiki/Six-bit_character_code and then https://en.wikipedia.org/wiki/BCD_(character_encoding) explain that IBM created a 6-bit card punch encoding for alphanumeric data in 1928, that this code was adopted by other manufacturers, and that IBM's early electronic computers' word sizes were based on that code. (Hazarding a guess, but perhaps to take advantage of existing manufacturing processes for card-handling hardware, or for compatibility with customers' existing card-handling equipment, teletypes, etc.)
So backward compatibility is likely the most historically accurate answer. Fewer bits wouldn't have been compatible, and more bits might not have been usable!
I'm guessing it was the smallest practical size to encode alphanumeric data, and making it bigger than it needed to be would have added mechanical complexity and expense.
https://en.wikipedia.org/wiki/Six-bit_character_code: "Six bits can only encode 64 distinct characters, so these codes generally include only the upper-case letters, the numerals, some punctuation characters, and sometimes control characters."
IIRC, six characters was also the maximum length of global symbols in C on early Unix systems, possibly just because that's what everyone was used to on earlier systems.
But note that I asked why six characters, not why six bits per character -- though your note is perhaps suggestive: maybe the six-character limit is like the six-bit character after all, something established (possibly for mechanical reasons) back in 1928?
Right, good questions. Pure conjecture on my part: maybe it's just that 36 is the smallest integral multiple of 6 that also had enough bits to represent integers of the desired width?
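That conjecture is easy to check; a throwaway Python loop (nothing authoritative, just the arithmetic) over word sizes that are multiples of 6:

    # For each word size that is a multiple of 6, can a signed value
    # cover ten full decimal digits (+/- 9,999,999,999)?
    TEN_DIGITS = 9_999_999_999
    for bits in range(6, 49, 6):
        max_magnitude = 2**(bits - 1) - 1   # assuming one bit for the sign
        verdict = "covers" if max_magnitude >= TEN_DIGITS else "falls short of"
        print(f"{bits:2d}-bit word: +/- {max_magnitude:>15,} {verdict} 10 digits")
    # 30 bits tops out at +/- 536,870,911 (only 8 full digits are safe),
    # while 36 bits reaches +/- 34,359,738,367 -- the first multiple of 6
    # that covers the full ten-digit range.

So "smallest multiple of 6 that can hold ten signed decimal digits" at least holds arithmetically, whatever the historical motivation was.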
One reason: Because 10 was enough to accurately calculate the differences in atomic masses, which was essential for atomic weapons design. (Source: my mother worked on 36-bit machines back in the 1950s. This was her explanation of the reason for the word size.)
Six characters is (in)famously the maximum length of linker symbols on IBM systems, at least for FORTRAN. Perhaps that had something to do with it. And of course, there comes a time when you have to pick a number, so why not 6x6, which is also good for 10^10 integers?