A short time afterwards, I received a request from some of the students in the class. They were doing a class project to compile a booklet of responses on similar topics, and they asked me to answer a few questions about how mathematicians use error in their work. My answers are below. Enjoy:
In relation to your field, how do you define error?
- In mathematics, we define or use error in many ways. Perhaps the two most important are: (1) as a measure of inaccuracy in estimation and approximation, and (2) as a flaw in a claim of truth, like a proof. For (1), mathematics is the study of the logical structure of complicated things. Often these complicated things are systems defined by equations involving numbers. When we use such a system to model something physical, we must accept that our model may not be completely accurate, because we cannot properly account for every influential effect. Think of how a model of a pendulum may take air resistance into account when predicting the pendulum's position at some future time, but perhaps not the fact that the humidity of the air affects the constant we use for air resistance. Instead of trying to account for everything, we make an approximation and hope that we are fairly accurate in the end, accepting the errors that will accrue but hoping that they are small. Also, when doing mathematics on a computer, we see another kind of error: computers cannot be as precise as we are when doing arithmetic. For example, there is really no such thing as zero on a computer. When defining arithmetic on a computer and assigning numbers to variables, we must choose a level of precision (the number of bits devoted to each number). This works well for ordinary calculations, but in high-precision work, multiplying an extremely small number by a very, very large one may give an inaccurate result, since the small number is only accurate to a finite number of digits and the multiplication can bring those inaccuracies up to the range of what we consider normal numbers. The field of mathematics that studies errors in calculation like these is called numerical analysis.
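To make the computer-arithmetic point concrete, here is a minimal sketch in Python (the effect is the same in any language that uses standard double-precision floating point): a number too small relative to another simply vanishes when the two are combined, and even a familiar decimal like 0.1 has no exact binary representation.

```python
# Double-precision floats carry roughly 16 significant decimal digits.
# A value that is small relative to another is lost when they are added:
big = 1.0e16
tiny = 1.0
print(big + tiny == big)        # True: the contribution of `tiny` is lost

# So subtracting back gives 0.0, not the 1.0 we "know" is there:
print((big + tiny) - big)       # 0.0

# And 0.1 cannot be represented exactly in binary, so accumulated
# rounding makes this familiar identity fail:
print(0.1 + 0.2 == 0.3)         # False
print(0.1 + 0.2)                # 0.30000000000000004
```

Numerical analysis is, in part, the study of how such rounding errors propagate through long calculations and how to arrange computations so they stay small.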
For (2), any new mathematical structure, concept, or theorem is an abstract idea that must be proven consistent with all other mathematical ideas. Often a new idea is claimed to be proven, but under scrutiny by other mathematicians it is shown not to be proven completely: there is an error in the proof. Either the claim is wrong, or the claim is not fully justified as proven. At that point the idea is NOT a fact, and it is dangerous to use it to help prove other possible facts. All mathematical ideas claimed to be proven are scrutinized extremely carefully by the mathematical community, either in research-paper review or by independent verification. It is a strength of the field that nothing is really proven until it is fully verified.
How do you deal with and interpret error in your field of work?
- Mostly, the answer above works here also. For (1), we deal with errors in accuracy by trying desperately to manage and/or minimize them. Typically, on a computer, decreasing error means increasing computational time and effort. Hence there is often a trade-off between how accurate you want your answer to be and how long you want the computer (or you) to spend computing it. For (2), when a new idea seems to be proven, a mathematician will immediately go to colleagues and collaborators to have them assess the value, correctness, and completeness of the proof. Errors are often found, and the arguments (the statements of the proof) are changed to address the criticisms. Once a research paper with some new result (proof) is submitted, there is a formal review process in which independent mathematicians with knowledge of the particular field assess the correctness of the proof. Papers are deemed unacceptable for publication when they are not correct, and must be reworked or abandoned, depending on the nature of the errors. Sometimes, when a paper is published with an error, the error must be fixed either with an addendum to the original paper or with withdrawal of the paper from the journal. There are no instances where errors are tolerated in mathematical proof.
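The accuracy-versus-effort trade-off can be illustrated with a small Python sketch (the trapezoid rule here is just one standard example of a numerical method, not something from the text above): approximating an integral with more subintervals costs more arithmetic but shrinks the error.

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids.

    More trapezoids (larger n) means more function evaluations --
    more computational effort -- but a smaller approximation error.
    """
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The integral of sin(x) over [0, pi] is exactly 2.
exact = 2.0
for n in (10, 100, 1000):
    approx = trapezoid(math.sin, 0.0, math.pi, n)
    print(f"n = {n:5d}   error = {abs(exact - approx):.2e}")
```

Running this shows the error dropping by roughly a factor of 100 each time the work increases tenfold, which is exactly the kind of cost/accuracy bargain the answer describes.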
- In my work personally, no, although some work has gone unpublished due to errors unseen in the original drafts. However, so many beautiful, amazing mathematical ideas have come from initial errors. The famous Fermat's Last Theorem, proved only recently but stated over 350 years ago, was a simply stated idea that Pierre de Fermat claimed to have a simple proof of. Alas, he never wrote his proof down, and the community spent centuries trying to find it. The idea is now a fact (a theorem), but the recent proof is not simple at all. Two things come out of this: (a) so much beautiful mathematics was developed in the search for a proof, and (b) it is now basically universally believed in the mathematics community that if Fermat indeed had an idea for a simple proof, it had an error. We will never know, but.... And around 1960, Stephen Smale claimed to prove that chaos (unpredictability in deterministic mathematical models) does not exist in mathematics. His proof was in error, and this was shown by another mathematician who produced a counterexample (a single example showing that a supposed fact is incorrect). Smale set out to prove that he was indeed correct, and in doing so developed a new branch of mathematics called hyperbolic dynamics, centered around his famous "Smale horseshoe". Alas, he only really proved that he was originally mistaken, but the error is considered a beautiful one because of what came out of it!