Yesterday, during the monthly Florida Bibliophile Society meeting, a book-collecting former math teacher with a profound interest in Lewis Carroll (of Alice in Wonderland fame) made some interesting comments on an area of mathematics I find quite intriguing. He spoke in particular on the Collatz Conjecture. A conjecture, in mathematical terms, is a supposition that appears to be always true but has not been rigorously proven. There are, of course, several ways to prove a statement, which would change the conjecture into a theorem (which is rigorously proven and is followed by three of my favorite Latin words--quod erat demonstrandum, which means "which was to be demonstrated").
Logic gives us several ways of proving a statement. One is a proof by contradiction. In this kind of proof, we assume that the theorem is not true and then demonstrate that this assumption leads to a contradiction with what is known to be true. For example, if we assumed a theorem was false and then showed that this implied 1 = 0, we would have proven the theorem true by contradiction. This is a particularly elegant form of proof, and I quite enjoy it as well. Other proofs seek to demonstrate that a statement reduces, eventually, to already proven statements, so that the theorem rests on the shoulders of prior results (standing on the shoulders of giants, as it were). The immediate implications of a given theorem are considered corollaries, and these are added to the pool of proven statements as well. Alternatively, theorems may be proven exhaustively, such that within a given range one can demonstrate that the theorem is true because one has checked all of the possibilities. I dislike this manner of proof because I tend to get rather bored working through algorithms by hand, and my attention has a tendency to wander, leading to mistakes in arithmetic. Since this is apparently a common affliction, computers are generally used for this form of proof beyond the simplest of cases, because their attention does not wander (unless they use a Microsoft operating system and have a blue screen error, but that is another subject entirely).
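To make the contradiction technique concrete, here is the classic textbook instance (my own illustration, not one the speaker gave), typeset as a small LaTeX sketch: the proof that the square root of two is irrational.

```latex
% The classic proof by contradiction: sqrt(2) is irrational.
% Assume the opposite and derive an impossibility.
\documentclass{article}
\usepackage{amsmath, amsthm}
\begin{document}
\begin{proof}[Claim: $\sqrt{2}$ is irrational]
Suppose instead that $\sqrt{2} = p/q$ for integers $p, q$ with no common
factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even;
write $p = 2k$. Then $4k^2 = 2q^2$, so $q^2 = 2k^2$, and $q$ is even too.
Now $p$ and $q$ share the factor $2$, contradicting our assumption.
Therefore no such fraction exists, and $\sqrt{2}$ is irrational.
\end{proof}
\end{document}
```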
I particularly enjoy conjectures because they are often composed of rather simple statements and can be verified to very large numbers, yet they resist proof by contradiction and the traditional methods of proof, and they make such grand claims that they cannot be proven exhaustively. One such example is the conjecture that pi and e (Euler's number) are normal numbers, meaning that every digit appears in their decimal expansions with equal frequency. (Both numbers are known never to settle into a repeating pattern of digits; that much follows from their being irrational, and indeed both have been proven transcendental.) Normality, however, has not been proven (or disproven, for that matter), despite the fact that computers have calculated pi and e to extreme lengths and have found, in the case of pi at least, roughly equal numbers of each digit with no discernible pattern.
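The digit-counting experiment itself is easy to reproduce. A minimal sketch in Python, assuming the third-party mpmath library for arbitrary-precision arithmetic (the library choice and the 10,000-digit cutoff are mine, purely for illustration):

```python
from collections import Counter
from mpmath import mp

mp.dps = 10_000          # work with 10,000 decimal digits of precision
digits = str(mp.pi)[2:]  # "3.14159..." -> drop the leading "3."

counts = Counter(digits)
total = sum(counts.values())
for d in "0123456789":
    # Each digit should appear roughly 10% of the time if pi is normal.
    print(d, counts[d], f"{counts[d] / total:.2%}")
```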
Collatz's conjecture is likewise appealing to me because it sets out two rather simple rules, one for odd numbers and another for even numbers, and the conjecture has been verified for every starting value up to very large numbers; but because there are infinitely many positive integers, it cannot be proven true exhaustively. Furthermore, the conjecture has resisted proof by contradiction and the traditional deductive proof.
The Collatz Conjecture has two rules:
If a number is odd, multiply it by three and add one (3x + 1);
If a number is even, divide it in half (x/2).
The conjecture states that, under repeated application of these two rules, any positive integer will eventually reduce to one.
Some examples (a short sketch that reproduces them follows the list):
1 → 4 → 2 → 1
2 → 1
3 → 10 → 5 → 16 → 8 → 4 → 2 → 1
4 → 2 → 1
5 → 16 → 8 → 4 → 2 → 1
6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1
7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
8 → 4 → 2 → 1
9 → 28 → 14 → 7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
10 → 5 → 16 → 8 → 4 → 2 → 1
And so on...
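A minimal sketch of the rule in Python (my own illustration, not anything presented at the meeting) reproduces the sequences above:

```python
def collatz_sequence(n: int) -> list[int]:
    """Apply the Collatz rules to n, recording each value, until 1 is reached.

    The rule is applied at least once, so a start of 1 yields 1, 4, 2, 1,
    matching the first example above.
    """
    if n < 1:
        raise ValueError("the conjecture concerns positive integers only")
    sequence = [n]
    while True:
        n = 3 * n + 1 if n % 2 else n // 2  # odd: 3x + 1; even: x / 2
        sequence.append(n)
        if n == 1:
            return sequence

# Reproduce the ten examples above:
for start in range(1, 11):
    print(" → ".join(str(x) for x in collatz_sequence(start)))
```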
It becomes apparent rather quickly that there is some kind of pattern in the way the rule works, and that once a sequence reaches a number already present in an existing sequence, it follows the same trail (for example, 11 would be 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1, rejoining the tail of the sequence for 7). However, the proof of this conjecture has evaded the attempts of mathematicians of far greater ability than I (though, to be honest, it is not a difficult task to find mathematicians of greater ability than I).
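That merging is also what makes large-scale verification feasible: once a trajectory reaches a value already known to fall to one, there is nothing left to check. A minimal caching sketch, again in Python and again my own illustration:

```python
known = {1}  # starting values already shown to reach 1

def reaches_one(n: int) -> bool:
    """Follow the Collatz rules from n, stopping as soon as a cached value
    is reached. If the conjecture were false for some n, this loop would
    simply never terminate, which is precisely why it is no proof."""
    trail = []
    while n not in known:
        trail.append(n)
        n = 3 * n + 1 if n % 2 else n // 2
    known.update(trail)  # every value along the trail also reaches 1
    return True

# Check every starting value in a finite range:
assert all(reaches_one(n) for n in range(1, 100_000))
```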
I consider examples such as this simple conjecture to be evidence that the world in which we live, even when we look only at areas as narrow as mathematics, which appear at first capable of exhaustive rational proof, nonetheless possesses aspects which we hold to be true even though we cannot demonstrate them to be true by logic. The world we live in, that is, requires something we call faith. If this is true in the world of number theory, it is certainly even more true in those aspects of life that are not as quantifiable. This may not make some mathematicians very happy, but it should nonetheless provide food for thought for those of us who are able to handle a bit more uncertainty and are able to reflect upon intelligence greater than our own.
Monday, April 16, 2007