[00:00:00] ANDREW SZERI:
Good afternoon. My name is Andrew Szeri. I’m Dean of the Graduate Division.
We’re pleased, along with the Graduate Council, to present Ian Hacking, this year’s speaker in the Howison Lectures in Philosophy series. As a condition of this bequest, I’m obligated to tell you how the endowment supporting the lectures came to UC Berkeley. George Holmes Howison was born in 1834.
Following his attendance at college in Marietta, Ohio, shortly before the Civil War, he moved to St. Louis and became a member of the city’s Philosophical Society’s Kant Club, where he had an opportunity to meet with influential thinkers of his time: Ralph Waldo Emerson, Amos Bronson Alcott, and William James. In 1884, after a six-year professorship at the Massachusetts Institute of Technology, Howison accepted an offer from the University of California to establish the Department of Philosophy here. This included his appointment as Mills Professor of Intellectual and Moral Philosophy and Civil Polity.
In his position, he occupied the university’s first endowed chair. With a strong, outgoing personality, he made philosophy a factor not only in the university, but in the surrounding community as well. Howison took great interest in his students and was greatly involved in their lives.
He taught at the university until 1909. Six years later, he died at the age of eighty-one. Howison gave all of his property to the university,
and thus created several foundations, including one for a fellowship in philosophy. In 1919, friends and former students of Professor George Holmes Howison established the Howison Lectures in Philosophy at the University of California in his honor. Now I welcome Professor John Campbell, a member of the Howison Committee, to the podium to introduce our speaker, Ian Hacking.
[00:02:14] JOHN CAMPBELL:
Ian Hacking did his undergraduate work at the University of British Columbia and Trinity College, Cambridge. He did his doctoral work in Cambridge with Casimir Lewy. And then he moved around quite a lot.
Princeton, Virginia, Peterhouse, Cambridge, Makerere University in Kampala, Uganda, Peterhouse, Cambridge again, Stanford, Bielefeld in Germany. He came to rest, as it were, at the University of Toronto, where he held their highest honor, a university professorship, for many years. And then in 2000, he became the first native English speaker to hold a professorship at the Collège de France.
Now he has a truly spectacular collection of honors and awards. It’s customary in this kind of introduction to explain something about the honors and awards that the speaker has accumulated, but really it would take far more time than we have available to work through this list. So I’m not going to do that.
I want instead to say something about the main thing, something that many of you will already be familiar with: his many strikingly original books and articles. I think it’s always a dangerous thing to try to sum up the main theme of someone’s work. But I think if there is a main theme running through his work, it’s maybe this: that science has to be understood in its historical context.
Now, to get a sense of what the force of that is, what the bite of that is, suppose… well, I have here a simple camera. Suppose I walk around taking shots. Perhaps I don’t even look at the viewfinder.
I take shots more or less at random. And then later, um, I print out all the photos, and I select one arbitrarily, and I say, ‘Why is that photo the way it is? Why did that one come out like that?’
Then if I’m going to explain why that particular photograph came out the way it did, all that matters is how the camera was set and what was in front of it at the time. That’s all I need to explain why that photo came out the way it did. Which photographs were earlier in the sequence and which photographs were later just doesn’t matter.
And you might think science is like that. If you want to understand why contemporary science is the way it is, all you need to do is look at the evidence, at the data, the problems that scientists are currently grappling with, um, and the theories that they have cooked up to explain that data, that evidence. Um, of course, what theorists did in previous centuries might be helpful.
It might suggest ideas to contemporary scientists. But a dream might suggest a theory to a contemporary scientist. Reading a horoscope might suggest something to a contemporary scientist.
That doesn’t mean that the dreams or the, um, horoscopes or the past history have any constitutive connection with why science is the way it is today. That’s a very natural way to think about scientific theorizing. Hacking’s point, as I understand it, is that this picture is profoundly wrong.
It’s not just wrong, it’s profoundly wrong, because it actually condemns us to live with puzzles and problems that we have no way of understanding. The ordinary language philosopher John Austin once said, “Trailing clouds of etymology do we come.” I mean, many of us are interested in the etymology of the words we use.
If you look up a word in a dictionary because you’re puzzled about its exact significance, it’s very natural to think it’s helpful to get some sense of the meanings that the word had in the past. And when we’re interested in this, when we look up the etymology of a word, it’s not that we’re interested in the history of the word’s meanings for its own sake; it’s that we think those past meanings that the word had are not truly dead.
They live on; they affect the use of the word that we make today. And I think you could think of etymology as a kind of junior cousin of Hacking’s massive project. In The Emergence of Probability, Hacking’s suggestion, as I read him, was that when we think about the concept of probability that we use today, that concept was laid down in the seventeenth century, when ideas that we have now forgotten were very much alive.
The fundamental tensions that there are in the concept of probability that we use today were laid down then in response to theoretical structures that we have now altogether forgotten. But they still animate the disputes that we have today. And that leaves us in, um, a very unsatisfactory position because on the one hand, we have these puzzles and tensions driven by the concept that we use today, and on the other hand, they seem somehow inevitable because we can’t find any contingent reason that is driving them.
It seems like there is something inescapable about these puzzles and tensions. We don’t know where they come from. So although Ian is, of course, a proficient and fascinating historian of science, I think he is not fundamentally a historian, in the sense that historical understanding of the past for its own sake is not his objective.
His objective is a better understanding of the problems that confront us today. The idea is to make explicit those ghosts from the past that are animating our current disputes. When we do that, we see that our current disputes are not actually being driven by puzzling, um, inevitabilities.
What’s driving them is past historical contingencies. Now in itself, that will have the liberating effect of allowing us to see ways forward. But I don’t know, some philosophers have had the hope that, um, their work would make philosophical problems evaporate, that once you grasped their point, the puzzles would go away.
I’m afraid, for me at any rate, the effect of reading Ian’s work is never to make the problems go away. What you get is a much livelier sense than you had before of how deep and complicated the questions are. But even though it doesn’t make things easy, the approach is nonetheless liberating.
It gives us the only way forward there is. I have to say that choices are invidious, but my own personal favorite among Ian’s books is Representing and Intervening. And as I read it, the central insight here is that very often in the sciences there are traditions of experiment that are, how should I say, semi-autonomous, semi-independent of the theories that surround them.
Theories may come and go, but a single tradition of experiment, a single style of experiment, can just keep going. And Ian was, I think, the first person to notice this. It’s kind of obvious once it’s pointed out,
(coughs)
but I think he was the first person to notice that there are these semi-autonomous traditions of experimentation in the sciences. And in a bold stroke, he tied the questions about the reality of the entities studied in science to these semi-autonomous experimental traditions. All our theories of the electron may be false, but nonetheless, since you can spray electrons, since you can do experiments involving them, they are real.
The reality of the electron is tied to the experimental tradition and not to the theory. He’s developed his theme of historical understanding in books on transient mental illnesses such as multiple personality disorder or mad travelers. And the idea here is that these are mental illnesses that seem to be local to a particular context.
They show up, they may be prevalent in a particular context, with a significant level of synergy between therapists and patients, and then they just disappear; you don’t get them anymore. You can’t understand what the illness is unless you understand the psychiatric context in which it’s being diagnosed and treated.
Given the range of his interests, when we invited Ian Hacking to give the Howison Lectures, of course we had no idea on what specific topic he would speak. Now, the posters for his lecture have been all around town for a couple of weeks now. His title, as I guess we all know, is Proof, Truth, Hands, and Mind.
Now, I must have been in half a dozen speculative discussions as to what exactly the specific topic is. And I guess, like all of you, I’m eager to find out. So will you please welcome Professor Hacking?
(applause)
[00:13:08] IAN HACKING:
Thank you very much for the excessively generous introduction, including a quotation from Austin, whom I immensely admire, which I had never heard before: “Trailing clouds of etymology do we come,” which the more literate among us may recognize as Wordsworth. Did you know it’s Wordsworth?
Yes. Okay. So it’s the Intimations of Immortality ode: “trailing clouds of glory do we come.”
Anyway, uh, thank you very, very much for asking me to give a Howison Lecture. I’m humbled by the list of ninety years of predecessors on this forum, but I do have to begin by explaining my title. I’m talking about mathematics.
I have a very folksy eye doctor who is a bit of a fan. He said to me he had read a book of mine, and he gave me a marvelous compliment. He said, “You write just like my third-grade teacher told me to write.
Never use a ten-cent word when a three-cent word will do.” Well, of course, proof, truth, hands, and mind are pretty expensive words, but they’re no more than five letters long, much better than eleven-letter words like mathematics. But why my title?
First, because proof has been an essential part of Western mathematics ever since Plato. And Plato thought that mathematics was the sure guide to truth. I want also to think of how we do mathematics in a material way that Plato would hardly have acknowledged.
We think with our hands and our whole bodies. We communicate with each other not only by talking and writing, but by gesticulating. If I am thinking mathematically, I may draw a diagram to take you through a series of thoughts.
And in this way, I pass my thoughts to you. I was astonished a short time ago when an established American poet, Kelly Cherry, wrote a poem, a sonnet, about me proving Gödel’s theorem in a logic class some 50 years ago when I was a novice prof, and she was a grad student. The poem describes me as covering all four blackboards in a room in Virginia with chalk, gesticulating the while.
“I still remember,” she writes, “how you started on one blackboard and worked your way around the room, four walls whited out in the blizzard of chalk.” The concluding couplet of the sonnet runs, “Like gazing into someone’s mind and seeing his thoughts, no two alike, come into being.” Not a very good poem, in a book sent out of the blue, but it surprised me by expressing one of the connections between mathematics and the body that interests me, and hence my mysterious title, Proof, Truth, Hands, and Mind.
But of course, there’s much more to the connection between hands and mathematics than gesticulating. George Lakoff here at Berkeley has done much to make us aware of how so many mathematical concepts are, as he puts it, embodied. Mathematics is a specialist interest.
Yet it’s the only branch of human knowledge that has consistently obsessed many of the great dead men in the Western philosophical canon. Not all, by any means, but Plato, Descartes, Leibniz, Kant, Husserl, and Wittgenstein form a daunting array, and that omits the angry skeptics about the significance of mathematical knowledge, such as Berkeley and Mill, and the logicians, such as Aristotle and Russell. If I cast my eye down this list of Howison lecturers, I would add at least Michael Dummett, Hilary Putnam, Saul Kripke, and Quine to that list.
And in the opinion of some, those are the most important Howison Lectures that you see on that list. Others would point to many more distinguished speakers who couldn’t care less about mathematics. And yet, a certain ideology of mathematics has infected philosophy.
The terms of the bequest of the Howison Lecture, which you can read here, penned in 1919, use an idiom that few of us or none of us would use today. It tells us that Professor Howison thought of the world as a community of free persons, finite and infinite, sustained by a vision of the perfect. Perfect with a capital P. And where did this vision originate?
In Plato’s conception of mathematics. Why has mathematics mattered so much to so many famous philosophers? Aside from the naysayers such as Mill, it’s first of all because they have experienced mathematics and found it passing strange.
The mathematics they have encountered has felt different from other experiences of learning, discovery, or simply finding out. Now, many people do not respond to mathematics with such experience or feelings. They really have no idea what is moving those philosophers, and they’re in good company.
Take David Hume, one of my heroes who can do no wrong. I doubt that he had a mathematical moment in his entire life. Experience in mathematics in no way implies the possession of philosophical gifts or vice versa.
But those philosophers who have experienced mathematics have built it into their conceptions of pretty well everything. So what is philosophy of mathematics?
(clears throat)
There are three types of philosophical issues about mathematics. I call them ephemeral, scholastic, and perennial. The ephemeral ones are the pressing ones arising out of recently discovered but disquieting mathematical facts we have not yet figured out how to live with.
Ephemeral doesn’t mean unimportant. It means present, but perhaps not long-lasting. Most generations during which mathematical research has been intense produce their own ephemeral philosophical difficulties.
Mathematicians worry about them just as much or more than philosophers do. What I call the scholastic issues, again, not with a negative sense, no more than ephemeral, are usually generated within philosophy itself and are almost unknown outside it. The perennial questions are for everyone, including, and maybe especially including, beginners.
There are a few topics that are easily grasped if you have any live experience of mathematics at all. They’ve always been there, and they’re not going to go away. I have no intention of making them go away.
I shall touch on the ephemeral, but more briefly than I would like. The issues really matter to how we should think about mathematics now in 2010. But they’re not my primary topic.
I shall sketch a few, because it seems to me that far too many philosophers of mathematics, intent on scholastic issues, tend to ignore pressing matters arising from current math. In contrast, I shall bypass what I call scholastic matters, and so this talk will not much resemble most current literature in the field of academic philosophy. Instead, I shall strive for a second adolescence and address perennial questions which, in my opinion, are the reason why there is philosophy of mathematics at all.
Second adolescence. I’ve written far too many books, but published almost nothing about mathematics. Yet it was my first love.
My PhD thesis came in two parts. One was called Proof, the first word of this lecture. The other, of thirty-odd pages, had nothing to do with it but proved some results in logic.
Happily, the examiners liked the latter, and they passed me. But I have been gnawing at the topic of the other thesis, proof, all my life, and I finally venture, fifty years later, to continue writing that thesis. And the talks I am giving this week, including this one, are different aspects of that project.
My focus in the philosophy colloquium tomorrow is on ideas I have taken from Wittgenstein. I have been quietly obsessed by Wittgenstein’s Remarks on the Foundations of Mathematics ever since I bought the book on the sixth of April, 1959. And my dissertation was a first passionate, confused expression of the effect of that book on an as-yet-unformed mind.
If Professor Chihara is present tonight, he can witness that I was impassioned with the remarks when we were both very young and knew each other very briefly. I do, however, take up new topics which it would have been hard to think about fifty years ago. My talk for the anthropology department on Friday is called The Mathematical Animal.
It develops themes of the unreasonable application of mathematics, of embodiment of hands and brain, cognitive science, cognitive history, and the ecological roots of mathematics. I don’t promise a solution to anything, but I will ask the audience to join me on what may be a voyage of exploration. Kant asked, “How is pure mathematics possible?”
On Friday, I will ask, “How is it possible to have an organism on this planet that does mathematics?” It’s an ecological question. Proof.
I want to say something about the first word in my title. Proof has been a hallmark of Western mathematics descended from Greece. There was plenty of what we can recognize as mathematical thought in ancient China, Egypt, Babylonia, possibly in Mesoamerica.
We might venture this: as soon as a people had invented writing, they were ready to invent mathematics, and maybe the Inca did it without inventing writing. But writing enabled us to tap cognitive skills in ways very difficult without it.
But the existence of proofs is due to a curious historical accident. A handful of people in the ancient Eastern Mediterranean, Greeks, as we call them, discovered the very possibility of deductive proof. They happened to live in an argumentative society, a few of whose members wanted a better tool for settling arguments than rhetoric.
I want to remind you of Kant’s story of this. We’re heirs to this critical anomaly in human thought. In the traditional story relayed by Aristotle, a legendary Thales discovered proof, and all the greats repeat what Aristotle passed on.
It’s not terribly good history in our present opinion, but it’s an important parable. A short time after publishing the first edition of the Critique of Pure Reason in 1781, Kant caught the wave of the future, something not often noticed, and became something of a historicist about human reason. In the new introduction to the second edition of the first Critique, 1787: pure reason has a history.
I love his rendering of the story of the light bulb going on over the head of Thales or some other. He writes, “In the earliest times to which the history of human reason extends, mathematics, among that wonderful people, the Greeks, had already entered upon the sure path of science. The transformation must have been due to a revolution brought about by the happy thought of a single man, the experiment which he devised marking out the path upon which the science must enter, and by following which secure progress throughout all time and in endless expansion is infallibly secured.” You didn’t know Kant went in for purple prose, perhaps. And he speaks about
(cough)
how the history of this intellectual revolution has been preserved, in a way which at least shows that the memory of the revolution brought about by the first glimpse of this new path must have seemed to mathematicians of such outstanding importance as to cause it to survive the tide of oblivion. “A new light flashed upon the mind of the first man, be he Thales or some other, who demonstrated the properties of the isosceles triangle.” Now, of course, we would not put it this way.
It requires a community; we would not have a single man and all that. But there’s a remarkable parable here being told by Kant. In the opinion of one of the most interesting historians of Greek mathematics, Reviel Netz, something tumultuous took place involving a small number of individuals who corresponded and traveled in the Eastern Mediterranean.
Using the metaphor recently favored in paleontology, Netz suggests that, to quote, “the early history of Greek mathematics was catastrophic”: fantastic changes, “a relatively large number of interesting results would have been discovered practically simultaneously.” Now, maybe this is no more than a just-so story, but it is a great parable.
Like so many parables, it cuts two ways. In a brilliant critical notice of Netz’s book, Bruno Latour, the sociologist of knowledge who has most influenced my own thought, has recently written that Plato kidnapped mathematical proof, making it a cornerstone of his epistemology and metaphysics. And he’s quite rude about Plato.
“To the great surprise of those who believe in the Greek miracle, the striking feature of Greek mathematics, according to Netz,” writes Latour, “is that it is completely peripheral to the culture, even to the highly literate one.” Medicine, law, rhetoric, political science, ethics, history, yes; mathematics, no.
With one exception, the Plato-Aristotelian tradition. But what did this tradition, itself very small at the time, take from mathematics? Only one crucial feature: that there might exist one way to convince which is apodictic and not rhetorical or sophistic.
Philosophy extracted from mathematicians not a full-fledged practice, but only a way to radically differentiate itself through the right manner of achieving persuasion, which of course Latour thinks is all a con trick. And Latour speaks for myriads who say that they hate or fear mathematics. “If you had to suffer through geometrical demonstrations at school, which is certainly the case for me,” he writes, and so on.
Now, I’m in the minority. In fact, I am lucky enough to have gone to a mediocre high school long ago, a rather unreformed place where I had to do slightly watered-down Euclid. And in my case, it was at the age of thirteen or fourteen.
Nerd I was, so I loved Euclid. It was my escape from a harsher world around me. I learned about proofs, and I loved them.
So I am a gullible victim of Plato’s abduction of mathematics, and a victim of Kant. But do we need proofs? The little sketch I’ve given is of highly contingent events, flukes of happenstance.
We have, by good fortune, an experimental illustration of that. China, in ancient times, developed brilliant mathematical ideas, but it chiefly worked on a system not of proofs, but of approximations. Proof seldom reared its head and was seldom esteemed in its own right.
Geoffrey Lloyd, the comparative historian of Chinese and European intellectual history, suggests that the hierarchical structure of a powerful education system, with the emperor’s civil service as the ultimate court of appeal, had no need of proofs to settle anything; whereas in a society like Athens, democratic (very oligarchic, the democracy very limited), where you had to argue things out, proof was, at least to an anti-democrat like Plato, a wonderful replacement for rhetoric. We could have got on just fine without proof. We could have been Chinese, using a sophisticated method of approximate solutions, and got ourselves to the differential calculus that way directly.
If we had had fast computers long ago, who would have needed proof? I don’t support that line of thought about what we could have done, because I think proof is so intimately embedded in the history of the Western sciences as a model, often for the worse rather than for the better. But we could have done without the concept and practice of deductive proofs, and that’s where we are.
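[Editor’s illustration, not part of the lecture: none of this is ancient Chinese mathematics, of course, but a minimal modern sketch of the approximation-first spirit the speaker describes. A central-difference rule closes in on a derivative numerically, with no deductive proof anywhere in sight; the function and step sizes are illustrative choices, not anything from the lecture.]

```python
import math

def derivative(f, x, h):
    """Central-difference approximation to f'(x); the error shrinks like h**2."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Successively smaller steps home in on the true derivative of sin at 1,
# which is cos(1), roughly 0.5403 -- approximation doing the work of calculus.
approx = [derivative(math.sin, 1.0, 10.0 ** -k) for k in (1, 2, 3)]
```

Each refinement lands closer to the exact value, which is the sense in which a sufficiently sophisticated method of approximations could have reached the differential calculus "directly."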
That’s what we have, and we still have it. Now, I was once keen on proofs, but then let them rest. But there are advantages to being Rip Van Winkle.
Rip slept for only twenty years, which happened to span the American Revolution. I’m a Rip who has stayed away from mathematics and its philosophizing for no fewer than fifty years. Hence, I am struck by some radical changes that may escape the notice of those who have come upon the scene more recently or have been able to watch it evolve continuously.
There’s, of course, a lot more mathematics. There are not just new questions but new classes of problems, new methodologies, new techniques pretty much unthinkable until quite recently. I think it’s not just a matter of degree. But it doesn’t matter.
Rip Van Winkle has difficulty catching up on the new math, but he easily notices changes in the way in which mathematicians of various sorts talk about their field. He attends to the new gossip. I shall single out four changes as done deeds: certainty, kinds of evidence, experimental mathematics, and application.
These lead on to what I have called ephemeral issues, important topics that arise from mathematics as she is practiced right now. Certainty. Mathematics used to be admired as the only certainty.
Any of you who have read Bertrand Russell’s biographies or autobiographies will know that he vowed as a young man to make mathematics completely certain. Mathematicians have finally allowed themselves to be relaxed about what would once have been a scandal. Published proofs are full of gaps, even though the result proved is usually more or less right.
One mathematician, Richard Borcherds of the math department here, who won a Fields Medal in 1998, casually asked me at our first meeting, after we had discussed his own work, whether anyone had looked at the error rate of the proofs in Principia Mathematica, especially volume three. The very concept of an error rate in published proofs was, to my knowledge, nonexistent or unmentionable fifty years ago. Now it’s commonplace everywhere.
Once it seemed a terrible insult to the queen of the sciences that proofs are full of holes. Experts now expect that and accept it, so long as one sees pretty clearly how to clean up the odds and ends. Sometimes it matters, and the proofs are deeply fallacious.
In important cases, one worries. But in general, certainty has become a matter of degree. These thoughts have become common parlance.
During the last fifty years, the old vision of absolute mathematical certainty, which Kant took for granted and which drove Russell to spend the best years of his life toiling with Whitehead on Principia, has become history. Second, kinds of evidence. It’s now publicly acknowledged that all sorts of evidence may be used in support of mathematical propositions.
Proofs are important. Perhaps they still define the discipline, but the working mathematician uses all sorts of non-demonstrative reasoning. A nice example of sheer hands-on experimentation is in packing problems.
What’s the best way to fit a bunch of tetrahedra into a container? This actually has a lot of important applications. People do it with real little plastic objects. Pieces from Dungeons and Dragons are the cheapest tetrahedra in town.
And they also do it by computer simulations. More generally, there is a widespread awareness that demonstrative proof is not as important as it seemed to be. George Polya, in 1945, had already shown that all sorts of non-deductive good reasons arise in mathematical thought.
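[Editor’s illustration: a hedged sketch of what such a hands-off experiment can look like. This disk-packing toy is my own stand-in for the much harder tetrahedron problem; it drops shapes at random and keeps the non-overlapping ones, yielding an empirical packing fraction rather than a theorem.]

```python
import math
import random

def random_sequential_packing(n_attempts=20000, radius=0.05, seed=0):
    """Drop disks of the given radius at random into the unit square,
    keeping each disk only if it does not overlap one already placed
    (so-called random sequential adsorption).  Returns the fraction of
    the square's area covered by the accepted disks."""
    rng = random.Random(seed)
    centers = []
    min_dist_sq = (2 * radius) ** 2  # squared minimum center-to-center distance
    for _ in range(n_attempts):
        x = rng.uniform(radius, 1 - radius)
        y = rng.uniform(radius, 1 - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_dist_sq for cx, cy in centers):
            centers.append((x, y))
    return len(centers) * math.pi * radius ** 2

density = random_sequential_packing()
```

No proof emerges; the number is evidence of the non-deductive sort Polya catalogued, and the real tetrahedron studies work in the same exploratory spirit with far more sophisticated moves.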
Polya’s one-time student, Imre Lakatos, wrote his brilliant Proofs and Refutations to show that all mathematical proofs are fallible. The procedure of proof is what he called quasi-empirical. Only at the end of a Hegelian synthesis of proof and refutation might one produce something that amounted to analyticity.
But it’s not Lakatos or Polya that made mathematicians come out of the closet about fallibility. There’s a much richer internal story that demands serious sociological study. One thing with which it’s connected is the most radical single change to have occurred in mathematics in these fifty years, namely fast computation.
This has become an invaluable tool for mathematical exploration. In the early days, computers were merely prosthetics, a sort of better pencil. What matters today is how computers help us to explore.
So, experimental mathematics. There’s been a lot of discussion by both mathematicians and philosophers about proofs checkable only by computer and about computer-generated proofs. The prime example to which philosophers usually turn is the four color theorem, proved in 1977.
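[Editor’s illustration: nothing like the actual 1977 computer proof, but a toy showing the flavor of finite checking a machine can do. It is a brute-force search for a proper 4-coloring of a small planar map encoded as a graph; the wheel-graph example is my own.]

```python
def four_color(adjacency):
    """Backtracking search for a proper 4-coloring of a graph given as
    {vertex: set of neighbors}.  Returns {vertex: color in 0..3},
    or None if no 4-coloring exists."""
    vertices = list(adjacency)
    colors = {}

    def extend(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(4):
            # A color is usable only if no already-colored neighbor has it.
            if all(colors.get(u) != c for u in adjacency[v]):
                colors[v] = c
                if extend(i + 1):
                    return True
                del colors[v]  # backtrack
        return False

    return colors if extend(0) else None

# A small planar map as a graph: a hub (vertex 0) touching a 5-cycle rim.
# This wheel genuinely needs all four colors.
wheel = {0: {1, 2, 3, 4, 5},
         1: {0, 2, 5}, 2: {0, 1, 3}, 3: {0, 2, 4},
         4: {0, 3, 5}, 5: {0, 1, 4}}
coloring = four_color(wheel)
```

The real proof reduced the theorem to nearly two thousand configurations and checked them mechanically; the philosophical worry was precisely that no human could survey that checking.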
Far more important, in my opinion, is the advent of large-scale experimental mathematics, with its own journal, Experimental Mathematics, founded by David Epstein in 1992. David just happens to have been one of my undergraduate chums. To avoid misunderstanding, it was emphasized from the start that this was a journal of pure mathematics.
The point was not to deny the claim that I shall make about pure and applied mathematics in a few moments, but to make plain that the journal was not interested, for example, in the booming industry of simulating experiments in the material world, an activity that depends, of course, on the constant use of mathematics, both in modeling the micro and macro universes around us and in designing the programs in which the models are embedded. Thus far, most philosophers seem to have discounted experimental mathematics as nothing new. I think too few attend to the way in which it is such an extraordinary tool for mathematical exploration.
Perhaps an awoken Rip Van Winkle can arouse his colleagues from dogmatic dozing. How about this for a sentence destined to rouse hackles? Experimental mathematics provides the best argument for Platonism in mathematics.
At any rate, as David Epstein wrote me recently, these are things that no one dreamt of when we were students. Sometimes what is found out is quickly replaced by a deductive proof. Some of those deductive proofs are old-fashioned proofs that make sense.
Others are themselves only machine-checkable. Here there is a real division of attitude. Some mathematicians regard computers merely as tools of discovery or as search machines for counterexamples.
After discovery comes justification. Others think this attitude is obsolete. I don’t take sides.
Mathematicians differ. Timothy Gowers, a Cambridge mathematician who won a Fields Medal in 1998, the same year as Richard Borcherds, is the author of a wonderful book, Mathematics: A Very Short Introduction, in which he writes, “My own view, which is a minority one, is that over the next hundred years or so, computers will eventually be supplanting us entirely.” But he continues, “Most mathematicians are far more pessimistic, or should that be optimistic, about how good computers will ever be at mathematics.”
Incidentally, and perhaps it is relevant, I have encountered very few working mathematicians who take Wittgenstein seriously. Gowers writes at the end of his little book, “Anybody who has read both this book and the Philosophical Investigations will see how much the later Wittgenstein has influenced my philosophical outlook, and in particular, my view on the abstract method.” But now with computers, we enter a strange region.
We tend to think of computers as never making mistakes, but computers are imperfect, and their error rate is far greater than their manufacturers acknowledge. This has become a little branch of mathematics in its own right. As an exaggeration, Roger Penrose argues that all computation is a quantum-sensitive operation.
That is, it’s probabilistic only. It only takes one cosmic ray to turn a zero into a one and make a different computation. So here is one more ephemeral problem, which of course might go on troubling us for a very long time.
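The single-flipped-bit point can be illustrated with a toy of the editor's own making, not the speaker's: flipping one bit in the 64-bit pattern of a floating-point number can change the computation entirely.

```python
import math
import struct

# Toy illustration: one flipped bit in the IEEE-754 pattern of a float
# can change a computation entirely. Here 1.0 becomes +infinity.
def flip_bit(x: float, bit: int) -> float:
    (pattern,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', pattern ^ (1 << bit)))
    return y

print(flip_bit(1.0, 62))               # flipping one exponent bit: inf
print(math.isinf(flip_bit(1.0, 62)))   # True
```

Bit 62 is the top bit of the exponent field; flipping it turns the bit pattern of 1.0 into that of positive infinity, about as different a computation as one could ask for.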
Applications. Another change. There was in 1960 something of a Cold War division of opinion. Soviet dogma held that pure mathematics was bourgeois idealism, and that only applied mathematics was substantial.
But in the West, where what we call the philosophy of mathematics was practiced, the concept of pure mathematics was dominant, not only among philosophers, but among many schools of mathematics. It was a curious situation, for never before had mathematics had such an effect on the world. In 1960, everyone was aware of the possibility of blowing everything up and was busy digging futile bomb shelters or walking from Aldermaston with the Campaign for Nuclear Disarmament.
Perhaps that was the very reason that mathematicians, ostrich-like, wanted to maintain their purity. I believe that the sharp distinction between pure and applied mathematics was an aberration that has passed. A lovely end-of-career account of the reunion of pure and applied is to be found in a recent survey article called Geometry and Physics by Michael Atiyah, who won his Fields Medal long ago and an Abel Prize recently.
I mention these prizes just to make clear, to people who are not mathematicians, that I’m quoting people who have a great reputation in their own field. The Fields Medals are given every four years to no more than four people who have made outstanding contributions before the age of forty.
The doors between pure and applied have been unlocked. Mathematicians have had to learn how to think like physicists. Not only metaphorically unlocked, but literally, in Atiyah’s case.
It seems that when he was at MIT, he discovered that a door separating mathematics and physics was permanently locked in the building, or the extended building, in which he worked. Why? Well, the physicists had just installed new carpeting and didn’t want the grubby mathematicians messing it up with muddy snow on their boots.
Peter Galison tells of a contretemps that can ensue when the mathematicians and the physicists get different answers. In his example, the physicists were right. Their sense of how the physical world works trumped the mathematicians’ conception of space.
In retrospect, one just says that somebody made a programming error, but that is really not to get at the heart of it. I’ll give just one example of how the boundaries between pure and applied sort of collapse. It’s a real-life example, quite easy to understand.
It’s rigidity. Nothing, it seems, could be more practical than rigidity. As soon as human beings began to build shelters and moved out of caves, they wanted structures that would not fall down.
Some predecessor concept of rigidity must have emerged very early in human consciousness, not as a cognitive universal, but as an ecological one. Take only a very late civilization about which we actually know something. The two classic structures of the North American prairie, where building materials are scarce, are the wigwam and the teepee.
The former, the wigwam, is a fairly permanent structure in which hide is fixed around a dome frame made of branches, and the latter, as you know, is built around a movable frame of poles that becomes rigid when erected. Now, there’s a wonderful mathematics of rigidity. Some say it goes back to Euclid, but in eighteen fourteen, Cauchy, the French mathematician, proved the foundational theorem that any convex polyhedron with rigid plates but with hinges at its edges is, despite the hinged edges, rigid.
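There is a crude arithmetic cousin of Cauchy's plate-and-hinge theorem that can be shown in a few lines. This is the editor's illustration, not the lecture's, and it concerns bar-and-joint frameworks, a related but different notion of rigidity: Maxwell's counting rule says a framework in three dimensions with v joints needs at least 3v - 6 bars to have any chance of being rigid.

```python
# Maxwell's counting rule for bar-and-joint frameworks in 3D: a
# framework with v joints needs at least 3v - 6 bars to be rigid.
def maxwell_deficit(vertices: int, edges: int) -> int:
    """Bars missing relative to the 3v - 6 threshold (positive => floppy)."""
    return (3 * vertices - 6) - edges

# A cube's edge skeleton: 8 vertices, 12 edges -> 6 bars short, floppy.
print(maxwell_deficit(8, 12))   # 6

# An octahedron's skeleton: 6 vertices, 12 edges -> exactly at threshold.
print(maxwell_deficit(6, 12))   # 0
```

This is why a cube of bars collapses while a triangulated frame stands, and it is the same engineering arithmetic that underlies the geodesic domes discussed below.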
Here we move from a serious practical problem, what stays up, to pure mathematics. An entire discipline which becomes an aspect of topology develops. But then real life strikes back, and the mathematicians learn from the engineers.
One of the loveliest rigid structures is the geodesic dome, often called the Bucky dome after Buckminster Fuller, who worked out some of its principles and patented it in 1954. It’s a network of great circles that in turn form the triangles. Much the same effect, by the way, seems to have been achieved by wigwams built by Apache Indians.
Fuller generalized the concept of rigidity to what he called tensegrity, tensional integrity, based on a balance between tension and compression, which has a lot of engineering consequences, both in the large and the small. That is totally applied science, but tensegrity has generated its own rich and aesthetically pleasing field of mathematics. Late in the 19th century, a French engineer, I emphasize engineer, discovered flexible non-convex polyhedra.
But it was long thought that any polyhedron homeomorphic to a sphere has to be rigid. In 1978, Robert Connelly at Cornell found a flexible eighteen-faced, eleven-vertex counterexample.
Does this research have practical consequences? Not obviously. One application of Connelly’s discovery is that one can make a flexible polyhedron for exhibition.
According to Wikipedia, there’s one in Washington at the National Museum of American History. I don’t know why it’s not in the Smithsonian, but there it is. Well, that’s an application which we should not forget in thinking about applications.
It’s what I call a secondary application, namely to enchant schoolchildren with the wonders of mathematics. Connelly’s polyhedra are in familiar three-dimensional space. Flexible polyhedra are known in four space.
Are there any flexible polyhedra in more than four dimensions? A question investigated at present for purely aesthetic reasons. The answers are not known.
So it’s pure for the nonce. I want also to say a word about the application of mathematics to mathematics. Descartes wrote the first canonical textbook of analytic geometry, the Géométrie.
We often forget it was published as an appendix to the Discours, that is, the discourse on properly conducting one’s reason and seeking the truth in the sciences. I like to think of his geometry as a model of how to conduct one’s reason and seek truth. The Géométrie made plain, what had been glimpsed before, for all the world to see: that algebra, born of arithmetic, could be applied to geometry.
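Descartes's move can be shown in miniature; the example is the editor's, not the lecture's. A spatial question (where does a line meet a circle?) is answered by pure algebra: substituting y = x + 1 into x² + y² = 25 gives the quadratic x² + x - 12 = 0.

```python
import math

# Descartes's move in miniature: a geometric question (where does the
# line y = x + 1 meet the circle x^2 + y^2 = 25?) becomes algebra.
# Substituting y = x + 1 gives 2x^2 + 2x - 24 = 0, i.e. x^2 + x - 12 = 0.
a, b, c = 1, 1, -12
disc = b * b - 4 * a * c           # discriminant: 49
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])
points = [(x, x + 1) for x in roots]
print(points)                      # [(-4.0, -3.0), (3.0, 4.0)]

# A purely arithmetical check that both points lie on the circle:
assert all(abs(x * x + y * y - 25) < 1e-9 for x, y in points)
```

Nothing spatial is drawn, yet the geometric fact drops out of arithmetic: that is the applicability Hacking calls unreasonable.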
It’s the first indubitable example of the unreasonable effectiveness of mathematics developed for one purpose applied to mathematics developed for another purpose. And I’m dead serious in making a statement parallel to the physicist Eugene Wigner’s statement in his 1960 article, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.”
Wigner’s title is often read as The Unreasonable Applicability of Mathematics to Physics. Descartes gave a stunning example of the unreasonable applicability of mathematics to mathematics. Now, why is it unreasonable?
Because it’s almost too good to be true. Geometry is spatial. Algebra is a child of arithmetic.
Arithmetic is for counting, a process that, as Kant emphasized, takes place in time once you get past the point of just seeing how many, that is, past six or so. Kant used his thought of the distinctness of space and time manifested by the difference between geometry and arithmetic as the road to an entire critical philosophy that has haunted us ever since. It would be a plausible conjecture for those who take a modular view of the brain, that the modules that enable us to navigate our spatial environment and invite geometry are distinct from those that gave us the number sense, to use the title of a book by Stanislas Dehaene.
Somehow the two coalesce. How come? Well, that’s not an ephemeral question, but as a matter of fact, it’s not one which philosophers have addressed.
The history of mathematics is one of diversification and unification. We start with diversity. Some peoples, the ancient Greeks, fixed on geometry as primary, while others, in India and working in Sanskrit, fixed on numbers as primary and bequeathed that obsession to Islam and Arabic.
But it all keeps turning out to be the same stuff. And if you read science journalism, you’ll know, for instance, that one of the reasons that Andrew Wiles’ result on Fermat’s Last Theorem is deemed to be so exceptional in creating new proof ideas was that it brought together branches of mathematics which seemed to have nothing whatsoever to do with each other. But there’s something special about arithmetic and algebra on the one hand and geometry on the other.
Perhaps it’s only a contingent fact about the historical development of mathematics as we know it. But there’s a strange play of arithmetic and geometry throughout our experience. Descartes is a focal point in that dance.
Category theory can be seen as starting from the premise that the two are the same and seeing what happens. So understood, it would be a fulfillment of a Cartesian insight that geometry and algebra are one. I think that the question about algebra and geometry is one that deserves much thought and which is not ephemeral, but it does not bear on my lead question.
Why is there a philosophy of mathematics at all? First, Russell’s answer. Why has mathematics exercised such a powerful effect?
When he had finished and done with Principia, Bertrand Russell sat down to write a potboiler that has charmed young people ever since, The Problems of Philosophy in 1912. It charms me still. In the course of covering the waterfront, he wrote, “Every philosophy which is not purely skeptical must find an answer to Kant’s question, how is mathematics possible?”
Of course, Russell exaggerated. Some great philosophies that are not purely skeptical have had no interest in mathematics whatsoever. What’s so strange about mathematics, that it should attract so many philosophers?
Russell mentioned one source of wonder: the apparent power of anticipating facts about things of which we have no experience is very surprising. That’s a much better way of putting things than Kant’s glorious exclamation of awe, the one that put together our philosophical argot.
In the Prolegomena to Any Future Metaphysics That Will Be Able to Present Itself as a Science, that’s seventeen eighty-three, between the first edition of the Critique and what I call the historicist edition of the Critique, we get the question for the first time in Kant: how is pure mathematics possible? And it’s a sort of shout. Here is a great and proved field of knowledge, which is already of admirable compass and for the future promises unbounded extension, which carries with it thoroughly apodictic certainty, that is, absolute necessity, hence rests on no grounds of experience, and is a pure product of reason, and moreover is thoroughly synthetic.
How is it possible for human reason to bring into being such knowledge wholly a priori? There we have the philosopher’s words, a priori, synthetic, and qualified by adjectives that turn a noun into a sort of shout, apodictic certainty, absolute necessity. Yet that doesn’t explain why so many philosophers should be obsessed by mathematics.
The words are more names for aspects of an obsession than explanations of that obsession. What is it about mathematics that’s driven so many philosophers to make it central to their philosophy? I have long said: the experience of making or following a proof which somehow carries necessity with it.
Mark Steiner asked exactly the same question in a book published in nineteen ninety-eight, and he answered, “It’s the application of mathematics.” And both words, proof and application, point to an essential aspect of an answer. The application of mathematics covers all too many things.
Steiner distinguishes some of them. Russell’s neat phrase captures the one that’s important here. The apparent power of anticipating things of which we have no experience.
That is, the power of applying our mathematics in order to know what’s true in the material world around us. But you can’t do that unless you know that the relevant mathematical proposition is true, and that comes from being proved. Application without proof is tentative and would often not work.
Conversely, proof without some sense of potential application is a mere game. Proof and application when combined point to a reason why there has been a philosophy of mathematics ever since Plato. In fact, Plato is, I think, the first example of genuinely unreasonable effectiveness of mathematics.
In the 13th and last book of the Elements, Euclid constructed the regular convex polyhedra and proved there are only five of them. Some scholars have argued this conclusion was the whole point of the Elements. We call these five polyhedra the Platonic solids.
In the Timaeus, Plato proposed that the elements of which the universe is built are shaped as regular solids. There was plenty of pre-Socratic speculative physics, but Plato took mathematics devised for supposedly aesthetic reasons and applied it to speculative physics. It may be the first known case of a primary and theoretical application of mathematics.
At any rate, neither proof nor application suffices on its own to keep the ball rolling, but together they are a force which must be reckoned with. Yet something else is even closer to the experience of mathematical proof. It’s this sensation that mathematics is just out there.
And where is there? Now, that’s a problem, but I take the sensation very seriously indeed. That’s why I talk about the experience of mathematical discovery.
It seems no more than a vulgar way of describing Platonism. Indeed, it may appear to be the most crass form of Platonism imaginable. It’s the very crassness that brings it close to the roots of the philosophy of mathematics.
Philosophizing about mathematics is haunted by Platonism, both naive forms and sophisticated forms. It’s supposed to be a kind of ontology, but one is tempted to recycle Jacques Derrida’s brilliant pun and call it hauntology. Until December the 5th, you can see the hauntology exhibit at the Berkeley Art Gallery just up the street, and you can read phrases like, “Such an uneasy mixture of the ancient and the modern,” which are applied to a particular form of British jazz, or British music-making, sorry.
But one could apply the same thing to Platonism itself: an uneasy mixture of the ancient and the modern. I’m afraid to say the art gallery is cashing in on Halloween, and this coming Friday, the twenty-ninth, they are having a sort of hauntology party.
But I think hauntology is really a very serious idea. I’m not a Derrida admirer, but I think that pun is really enormously thoughtful. No ghost more effectively haunts all Western philosophy than Plato’s, and sometimes I wish for an exorcist.
Platonism, in its various guises and disguises, will never be exorcised. I find it best to say that in its strongest forms it is a pleasant fable, while in its more modest forms it’s banal. One logician, Bill Tait, exemplifies the view that when properly stated, it is trivial.
I do tend to agree with an idea I attribute to Nietzsche, that European languages demand an existential presupposition for the terms in the subject position. European grammars generate the feeling that if there are truths about numbers, then numbers exist. This Nietzschean idea is intended to undermine interest in the existence of numbers.
Their existence is a trifling byproduct of our grammar. One can also say what the mathematician Timothy Gowers, whom I’ve referred to before, got from Wittgenstein: that it’s what we do that matters, in much the same way in which a piece in chess, the king’s rook, is not an abstract object; it is rather what it can do which constitutes its reality. There are many scholastic positions here.
Quine had what he called the indispensability argument, which is supposed to settle the question, numbers and functions do exist because we quantify over them in the natural sciences. Quine’s maxim, to be is to be the value of a variable. Numbers are values of variables in contemporary physics.
That’s our best available theory of how things are. Hence, numbers exist, which means that in some sense Platonism is true, QED. And there is widespread opposition to this.
Some of you will know Hartry Field and doctrines of fictionalism. Structuralism, advocated by Charles Chihara, among others, holds that what exists are not mathematical objects, but mathematical structures, and they exist.
Nominalists think that’s pretty cold comfort, and there’s some truth in their reaction. There really is nothing more to numbers than their relations. So anything you say about numbers is about structures.
And Plato need not have been very perturbed by structuralism. The numbers are characterized by certain structures, and there are structures instantiated by numbers. It’s still kind of Platonism.
Now, I apologize for this gross superficiality, you experts. I want only to emphasize that these skirmishes and alarms are generated within philosophy itself. They’re scholastic, not in any pejorative sense, but only in the sense that they are of interest to those who participate in the activity of academic analytic philosophy.
Many mathematicians are content to call themselves Platonists. Is a single one of them moved by the fact that all the sciences quantify over numbers? I doubt it.
They’re moved by the fact that they discover amazing facts about numbers by means of proof or rumors of proof. At least some mathematical objects are experienced as out there, or structural relations between them are experienced as out there. What’s this out there?
It’s convenient to use a debate in which a self-declared Platonist mathematician faced off against a nominalist neurobiologist. The protagonists, two French colleagues of mine, Alain Connes and Jean-Pierre Changeux, published a book in French in 1989, slightly expanded in English in 1995. Alain Connes, who received the Fields Medal in 1982, cannot doubt there is a mathematical reality out there independent of human thought.
Changeux is convinced that mathematical structures are by-products of the innate endowments of the human brain. I suspect that these two attitudes are compatible, but fortunately, Connes and Changeux do not think so. Fortunate because some lines are drawn.
I don’t say they’re drawn clearly, but clear enough that we can look at them. Connes was enormously impressed with the fact about mathematical research. “Here we come,” he writes, “upon a characteristic peculiar to mathematics that is very difficult to explain.
Often it is possible,
(clears throat)
although only after considerable effort, to compile a list of mathematical objects defined by very simple conditions. Intuitively, one believes that the list is complete and searches for a general proof of its exhaustiveness. New objects are frequently discovered in just this way as a result of trying to show the list was exhausted.
Take the example of finite groups. The notion of a finite group is elementary, almost on the same level as that of an integer. A finite group is the group of symmetries of a finite object.
Mathematicians have struggled to classify the finite simple groups. That is to say, the finite groups that, like the prime numbers to some extent, can’t be decomposed into smaller groups. Fifteen years ago, this is 1995, the last finite simple group, the monster, was discovered by purely mathematical reasoning.
It’s a finite group with a considerable number of elements. Then he writes down the number of elements. It takes fifty-four digits to write down the number of elements.
It’s meaningless. It’s now been shown, he continues, as a result of heroic efforts, that the list of twenty-six sporadic groups is indeed complete. Now, Connes writes as if the monster had been just sitting out there quietly grinning, waiting for us to discover it.
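Connes's example speaks of finite groups as groups of symmetries of finite objects. In miniature, and as the editor's illustration rather than anything in the lecture, one can generate the entire symmetry group of a square by machine: label the vertices 0 to 3, take one rotation and one reflection, and close under composition.

```python
from itertools import product

# A finite group as the symmetries of a finite object: the square.
# Vertices are labeled 0..3; a symmetry is a permutation of labels.
def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

rotate = (1, 2, 3, 0)      # quarter turn
reflect = (0, 3, 2, 1)     # flip across the diagonal through vertex 0

# Generate the group: start from the generators, close under composition.
group = {(0, 1, 2, 3), rotate, reflect}
while True:
    new = {compose(p, q) for p, q in product(group, repeat=2)} - group
    if not new:
        break
    group |= new

print(len(group))  # 8: the dihedral group of the square
```

The monster arises from exactly this elementary notion, closure of symmetries under composition, only with a 54-digit number of elements instead of eight.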
Now, there’s a side tale here bearing on stuff about evidence and certainty and so forth. The person who demonstrated the existence of the monster actually called it the friendly giant. Many mathematicians still had doubts about the extent to which this fact had been proven.
The fact, that is, the existence of the monster and the completeness of the classification, was actually not established till long after Connes had written those paragraphs I quoted; it was established by Aschbacher and Smith at Caltech. It takes two volumes to prove the result. The reviewer, very knowledgeable, Ronald Solomon, in the Bulletin of the American Mathematical Society, asserts that no single person will ever read the proof.
But a team has checked it. That is, the second volume in particular was divided into five parts, and five different people checked each part of it. Yet Connes, like many others, had no doubts about that fact, namely the existence of the monster and the completeness of the classification of finite simple groups, some fifteen years earlier.
A nice example of the way in which certainty in mathematics has just changed. Mathematicians soon made sense of the monster. That began with the prolifically creative mathematician John Conway, with something he called the monstrous moonshine conjecture.
It seemed so preposterous it was called moonshine, because the monster turned out to be identical to an object derived from a completely different branch of mathematics. It had to be a coincidence. There couldn’t be any connection between these two completely disparate fields; moonshine.
Except it was instead one of the cases of the underlying unity within the diversity of mathematics. But here I want only to emphasize Connes’s heartfelt experience that this, at first sight, absurd object was just there waiting for us. And it’s quite a common reaction.
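The numerical coincidence that started monstrous moonshine can be stated in a few lines. This is the editor's gloss, not the lecture's; the figures are the standard ones from McKay's observation: the early coefficients of the modular j-function decompose into sums of dimensions of the monster's smallest irreducible representations.

```python
# McKay's numerical "coincidence" behind monstrous moonshine: early
# coefficients of the modular j-function decompose into dimensions of
# the monster's smallest irreducible representations.
j_coefficients = [196884, 21493760]    # q^1 and q^2 terms of j - 744
monster_dims = [1, 196883, 21296876]   # smallest irreducible reps

assert j_coefficients[0] == monster_dims[0] + monster_dims[1]
assert j_coefficients[1] == sum(monster_dims)
print("the coincidence checks out")
```

Two completely disparate fields, modular functions and finite group theory, produce the same integers: that is the "moonshine" Borcherds eventually proved was no coincidence at all.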
Richard Borcherds, whom I mentioned before, got his Fields Medal for proving the moonshine conjecture. In conversation, he said, and he’s allowed me to quote this: “When you think about the monster, you have to wonder who made it. It’s almost like that intelligent design stuff.
The monster had such a complex and yet organized structure that it is as if it had been engineered by someone.” Now, he was honestly expressing his persistent astonishment about the sheer existence of the monster, whose properties, whose equivalence to something else, he had proved. He was not advancing an opinion. He was expressing heartfelt incredulity at a fact that, in any ordinary sense, he understood at least as well as anyone else in the world.
He was not so much surprised at the fact as by the existence of such a fact. He doesn’t, of course, imagine the monster had a designer. He said only that the object is so complex and so delicate in all its parts that it is as if it had been designed.
And Connes was giving vent to the same sentiment. But one may feel that he was also using a certain sort of mathematician’s rhetoric to scare us into submission. It didn’t work on his fellow debater, Jean-Pierre Changeux, who said the monster was just a bore.
So we’re presented with this meaningless sequence of fifty-four digits. We can see that it actually factors into some very interesting primes up to seventy-one, which are called the supersingular primes. But all the same, one is reminded of Wittgenstein’s remark about glitter.
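The fifty-four digits and the primes up to seventy-one just mentioned can be reproduced from the monster's standard prime factorization. The snippet is the editor's illustration; the factorization itself is the published one.

```python
from math import prod

# The order of the monster group, reconstructed from its standard
# prime factorization. The primes dividing it, 2 through 71, are
# exactly the fifteen supersingular primes.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                 17: 1, 19: 1, 23: 1, 29: 1, 31: 1,
                 41: 1, 47: 1, 59: 1, 71: 1}

order = prod(p ** e for p, e in factorization.items())
print(order)
print(len(str(order)))  # 54 digits, as Connes observes
```

Writing the number out is Connes's rhetorical flourish; computing it from fifteen primes, as here, is arguably where the real interest lies.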
We’re supposed to be impressed when Connes writes down the number, but somehow it’s overdone. Instead, we’re offered what Wittgenstein called mysteriousness. In the same context, he asks, “Is it already alchemy that mathematical propositions are regarded as statements about mathematical objects and mathematics as the exploration of these objects?”
That is, Wittgenstein is saying, “Look, the whole thing is a confidence trick.” It’s that alchemy that Connes takes for granted, and which he presents as awesomely mysterious. Mystery, glitter, and alchemy, Wittgenstein’s words, are not all the same, but all three of these words can be used to refract different aspects of examples like the monster.
Wittgenstein himself said, “All I can do is to show an easy escape from this obscurity and this glitter of concepts.” I don’t believe there is an easy escape from this obscurity and glitter of concepts of the sort illustrated by Connes. Maybe there is no escape.
It just is astonishing that a few elementary axioms for groups should generate this extraordinary object which turns out to be related to a lot of other things, and some people claim is even significant for quantum field theory. Connes is not an indiscriminate Platonist. He does not think that all mathematical objects are like the monster, grinning and waiting to be found.
His idea has more content than Quine’s semantic doctrine that if we quantify over any class, then entities of that kind exist. He’s a highly discriminating ontologist. He agrees that most of the tools devised by mathematicians are inventions, not discoveries.
His label for such tools is projective. But they are used to investigate what he calls, strange word, it’s really French, archaic or primordial mathematical reality. This is his way of expressing a realism about mathematics that is both modest and specific.
“We construct mathematical tools in abundance,” he writes, “but what’s remarkable is that, using them, we can identify uniquely various objects that are not,” in his opinion, “constructed.” They constitute archaic, primitive, original mathematical reality. And at least the integers in his judgment are a familiar part of that reality.
I don’t share Connes’s convictions, but I find them more instructive than blanket scholastic Platonism, which says without discrimination that abstract objects exist or that anything over which we quantify exists. The Nietzschean use of grammar to undermine Platonism is a powerful tool against the blanket, but not against Connes. But there’s the neurological retort, the neurobiological retort.
Jean-Pierre Changeux holds that mathematical truth is constrained by the neuronal structure of the brain. In answer to Connes’s example, he retorts that here we have merely, I would say, a glittering, exotic example of the same kind as the finite list of regular polyhedra which so impressed Plato. And he writes, “That does not prove, despite Descartes, that we are concerned with properties that are immutable and eternal.”
Here he speaks as the neurobiologist who doesn’t really experience mathematical proofs. We can easily imagine a lost dialogue in which Socrates used the Platonic solids to make the same point, almost word for word, that Connes was making. He was merely choosing an example that was, in 1989, still recent and indeed unfinished business.
My own opinion is much more obscure than those of either of the two controversialists. I think that what Connes says is right, but what he means is wrong. He means a fabulous domain of numbers, archaic and primordial, whose structures have nothing to do with the brain.
I think that what Changeux says is wrong, but what he means is right. He means that the structures Connes so admires are byproducts of our genetic envelope. Both men engage in a debate that could be called archaic, in a technical sense of the word, only if Greeks before Thales had discovered proofs.
In fact, this debate is merely ancient and Athenian. It began at the time of Plato. It has the stage props of today, the newly discovered monster, and recent findings in cognitive science pitted against it.
It’s nevertheless the perennial debate. It is one of the two underlying reasons why we have a philosophy of mathematics at all. The other is the sense that somehow by proof we discover that which is necessarily true.
Thank you very much.
(applause)
[01:14:37] JOHN CAMPBELL:
I see. Okay. So I understand we have about 15 minutes for questions, and, um, the way we’re going to do it is…
Well, actually, first we should pause a moment because obviously some people have to leave. So, um, if you wish to leave, stand not upon the order of your going, but go now, and then we’ll have, we will have questions from the hardcore for about 15 minutes.
[01:14:59] SPEAKER 4:
Please come up to the microphone to, to ask your brief questions. Thank you.
[01:15:28] AUDIENCE MEMBER:
Thank you, Professor Hacking, uh, for your talk. I didn’t get to hear much on hands, which is what I wanted to hear because I work on chiral symmetry in particle physics, handedness. But my question to you is on one of your ephemeral issues.
So the question is really this: the standard model in physics, which is about the elementary forces of nature and primal matter, is governed neither by matter nor by forces, but by the abstract principle of gauge symmetry. And so the question is: how do you classify principles? So I mean, if you go by Latour’s view of mathematics, which is about quantification and certainty and proof, et cetera, and if I said body A has a velocity of twenty miles an hour, let’s say, that would not be construed as a law of physics, whereas the relation between the force on a body, its mass, and its acceleration is a law of physics, because it is a relation.
And so that goes back to what Hermann Weyl had to say, that objectivity in some sense is the notion of symmetry, what stays invariant even while undergoing a transformation. And so this is the other view of mathematics, and I’m sure Plato would have no hassle. He would not raise this issue of the unreasonable effectiveness of mathematics.
Dirac actually also has a paper with the same title. It’s called The Uncanny Ability: Why Mathematics Works for Natural Science. Because, I mean, they didn’t have this notion of an incongruence between mind and the world.
And so I would like you to address this notion of relations. I mean, it’s not banal, it’s not a fable, it’s certainly not trivial. Thank you.
[01:17:10] IAN HACKING:
Yeah, I’m not totally sure that I understand, and I think you’re pointing to a number of different things. One, I mean, you mentioned symmetry. I was intending to say some more things about symmetry, but I didn’t.
There’ll be a little bit in the anthropology talk on Friday. The symmetry is very curious. There’s a recent book, an enormous book, written by Giora Hon and somebody else, about the history of symmetry in mathematics, and he claims that it only comes up really in the work of Legendre, which is the end of the time of the Napoleonic Wars and so forth.
And that’s very surprising, because we find a fascination with symmetry very, very ancient. We find artifacts which are clearly made as they are because they are symmetric. But according to Hon, the notion of symmetry being important to doing physics is very recent.
And yet at the same time, if you just think of the simplest bits of Euclidean geometry, that stuff attributed to Thales: you have two parallel lines and a line like this, and you can immediately see that this angle is the same as this angle. It’s surely a symmetry consideration. Symmetry underlies far more thinking about the world than we say.
So that’s just one remark. More generally, uh, on the question of the applicability of mathematics to physics: well, of course, the particular propositions about the motions of a particular body are just empirical observations.
The laws which govern them, uh, we think we have some quite deep understanding of why they, why they obtain. Some people think that basically the universal thing we go for is symmetry. That has been a goal, despite what Hon says, uh, has been a goal of physicists, uh, at least since Leibniz.
I mean, Leibniz really wants to produce symmetry arguments. That’s what the principle of, of sufficient reason and insufficient reason are all about. Uh, that, that i-in speaking in a very general terms, and of course, this has been an extremely powerful motivation, uh, in, in all attempts to produce some kind of general theory of everything.
In string theory and so forth, one uses symmetry arguments all the time. Now, as to the relations, here is a kind of “how is mathematics possible?” question.
What is the relationship between the human perception of symmetries and the creation of mathematical organization? I really think we know very, very little about that. So that is what I think of as some kind of, to be, to be studied under some mix of cognitive science, and I want to say, uh, ecology.
For one thing, human beings are almost symmetric. But there are, um, there are cases of non-parity. That is, I comb my hair that way. Uh, and there are other kinds of very trivial asymmetries, but basically the way human beings look at each other is to see a symmetric figure, so that we aren’t really put out by the fact that everything is reversed in a mirror.
We very quickly learn to read mirrors, and, uh, some of the higher apes learn to read mirrors very quickly. As if the transposal of direction doesn’t affect the fundamental symmetry of what we’re looking at.
There’s something very deep in human beings and in human per– systems of perception about symmetry that must have something to do with the creation of mathematical reasoning. Not so clearly in the case of numbers, but certainly in the case of geometrical reasoning, which is, is also symmetric. I think I’ve not answered all of your problems, but I hope I’ve addressed some points.
[01:22:05] AUDIENCE MEMBER:
I wonder if you would expatiate a little bit on the remark I couldn’t help writing down, which was that proof is often a model for the worse and not for the better.
[01:22:12] IAN HACKING:
Well, in what context? I’m just having trouble recognizing that in my own words. Uh, I’m actually not sure what I said, I confess. Well, at what juncture did I say this? Early? Late?
[01:22:28] AUDIENCE MEMBER:
Approximately seventy-five percent through, I think.
[01:22:34] IAN HACKING:
Um, a proof is a model often for the worse. Um, I honestly don’t know what I meant. If I s– I will have to go through this and weed this out ’cause I don’t understand the sentence.
That’s why I’ve been having trouble picking it out. I don’t know what I meant. I apologize.
I mean, you know, one of the things that the sentence suggests to me is the way in which the Aristotelian enthusiasm for demonstrative science, uh, undoubtedly, in my opinion, did an awful lot of harm to science until the discovery of the laboratory in the seventeenth century. But I don’t think that’s what I could have said. So I don’t know.
I, I, uh, I just stand embarrassed.
(laughter)
[01:23:32] AUDIENCE MEMBER:
Um, yes, thank you. Really enjoyed that. Um, I’m wondering if you could say a bit more about, uh, what one might call, uh, materialist historiography of mathematics.
I’m thinking of Dirk Struik, for example, um, in the Marxist tradition, and, uh, an interesting footnote in J.D. Bernal’s Science in History, where he says that proof really comes out of a forensic environment, that is to say, the court, an environment in which there is, um, democratic decision-making rather than arbitrary, uh, authoritarian judgment.
[01:24:33] IAN HACKING:
Uh, yes. That I think is the thesis I take from Geoffrey Lloyd and, uh, Reviel Netz: that this very, very specialist activity of proofs, participated in by a very small number of people in the eastern Mediterranean, was somehow picked up by this very powerful and anti-democratic, uh, school of Plato and his followers. I say anti-democratic because the democratic way was to win by the power of rhetoric, according to Plato.
And here was something in which you could establish truth which could be seen, it was thought, by anybody. Now, of course, there’s a much more general notion of proof, and it is, uh, quite properly called forensic. It’s what happens in our courts, both in the, uh, common law adversarial tradition and, in a very different way, in the Roman law tradition, in both of which we’re trying to winnow out a conflict of facts.
And there, of course, we do talk about, you know, proved beyond reasonable doubt, etc. So there’s a more general notion of proof. Proof, the word proof is closely connected with something else, namely the idea of a test, as in the proof of the pudding is in the eating, uh, which again, uh, almost harks back to the torture tradition in court, in forensics, when, uh, you, uh, you were simply put to the test, and if your body managed to make it through, then you were innocent, and if it didn’t make it through, you were guilty, and so you were killed.
And so there’s an enormously rich story about detailed notions of proof. And of course, I have completely, uh, occluded most of the interesting things about proof, because I was fixating on the idea of mathematical proof, which does have a connection with settling matters in a democratic society such as Athens, rather than in a hierarchical society such as that which was current in the imperial court in China. So in a sense, what I’ve been saying today completely fits that materialist account, and at the same time drives a particular line through it, which is the one which interests mathematicians.
And more generally, there’s a wonderful book by Theodore Porter at UCLA called Trust in Numbers, which I think is, uh, a sort of functionalist explanation of why numbers are so important in our society. They answer the necessity for people to stop fighting each other.
That is, in a democratic society, we don’t have anybody to appeal to, and so we start having systems of measurement which we then tacitly agree on as being the last resort. You’ve got to produce the right numbers. And this is the larger historical conception, which I enormously respect, whereas I was going down this very narrow line concerned with the idea of proof in mathematics itself to this very day. Is that okay? Yes, no?
[01:28:15] AUDIENCE MEMBER:
Thank you. It’s commonplace to think of proof as being about securing truth or persuading others of truth. But some mathematicians will talk about proofs being of different kinds.
Not all proofs are created equal. Some give a particular kind of insight, some are more explanatory than others. I wonder whether you think this is true, and whether you think it sheds any light on any of the perennial philosophical problems about mathematics.
[01:28:45] IAN HACKING:
Thank you very much. Uh, one: many, many mathematicians are inclined to say that what’s important is not a proof.
What really gets you kudos is not a proof, but a proof idea. That what was important, just to take the trite case of Fermat’s Last Theorem, was not showing this rather boring fact about integers, but rather showing it in a particular way. Well, people had suspected for quite a while, it’s all part of the Langlands program, that you would bring in something here, and here, and here, and you’d be able to put them together into such a proof. So one thing about the Wiles result is simply that it was very hard to do and took a long time.
The other is that it, it provided a way of solving a whole, of attacking a whole lot of other problems as well. So it’s the proof idea rather than the proof. And this may be connected with the not being very interested in filling in all the lacunae in a proof.
That is, what you’re really interested in is how the proof goes, and you get a good enough sense of how you would fill in all the holes. So the change in the attitude to certainty may go along with a change in a professed attitude, it’s always been there, but a professed attitude to what’s an important mathematical achievement, where it’s not the proof, but the proof idea. But still, the proof idea is connected obviously with the idea of generating particular proofs.
And I don’t think here one has got a ch- It’s not a chicken-or-egg thing. I mean, it’s not that first there’s the proof idea and then there’s the proof, or first there’s the proof and then there’s the proof idea.
It’s rather the interlocking of both of these. And certainly, it is the case that many, uh, demonstrations, to use that word, are of no interest whatsoever. I mean, most p-
There is this practice which we have in our universities, which, uh, got introduced around nineteen twenty or something, of teaching symbolic logic, where people, in some courses in symbolic logic, learn how to make correct derivations from a set of axioms, which may or may not be, um, pedagogically a good thing, may teach people something. But what one ends up with is usually of no interest whatsoever and is known before. Those are the sorts of things which are really unequal or inferior proofs, in my opinion.
[01:31:31] AUDIENCE MEMBER:
Mr., uh, Professor Hacking, I have just as a continuation of the, uh, idea on symbolic logic and the gaps in logic versus mathematics. Um, the… Everything from the paradoxes that arise, and I immediately think of Carl Hempel’s work versus, you know, finding holes in proofs and holes in logic.
Um, is there a disconnect in these two areas of mathematics and philosophy and logic, or could you speak to the, the… You spoke to the demonstrations of, of, of, of mathematics in, in proofs. But is there a parallel for that in symbolic logic or, uh, in, in, in pairing these areas?
Um, sorry if I’m just a little bit vague there, but, uh, just a couple of ideas on that.
[01:32:20] IAN HACKING:
Yeah. I have– I’ve said, um, nothing about uh logic at all. Of course, uh, you know, there were a group of problems of which for philosophers Russell’s paradox is the best known.
But other questions, uh, basically about Lebesgue integrals and so on, deeply perplexed people at the end of the nineteenth and the beginning of the twentieth century. And as everybody knows, there were several responses to what came to be called a foundations crisis, one of which was to use logic, one of which was intuitionism, and then there was Hilbert’s program and all the rest. Uh, it seems to me, I am inclined to say, that the questions arising from and associated with the so-called foundations crisis are good examples of what I would call ephemeral.
That is, they were enormously important at the time, but really don’t matter very much anymore. Uh, I remember Bernays, this very distinguished mathematical logician and mathematician, a founder of one of the great set theories, von Neumann-Gödel-Bernays, saying in nineteen thirty-four that there isn’t any foundations crisis anymore, and I believe that was just simply a fact. And without the foundations crisis, logic becomes merely another branch of mathematics.
Now, there are particular questions which are more in the domain of what we call foundations of mathematics. There’s a wonderful website called FOM, Foundations of Mathematics, which is full of interesting problems, uh, which are more or less… Well, some of which are, are really quite internal to that discipline, others of which, uh, address really fascinating mathematical questions.
Thus, uh, the question that people like Solomon Feferman at Stanford devoted so much of their work to: the question of how strongly non-constructive does a proof of any particular interesting mathematical proposition have to be? And the general consensus among many mathematicians now is that we don’t need anything like the whole of set theory, uh, to prove anything of any interest. I mean, a very–
That’s the strongest possible statement, but basically it’s the kind of reverse mathematics, where you just try to go down to simpler and simpler, uh, bits of non-constructive reasoning until often you don’t need any. So there’s a whole range of important questions, and those are ones which I just kind of put aside. Uh, again, I think there are very important questions about what exactly the relation is between the things which are going to be discussed tomorrow on the FOM site and mathematics in general.
My own prejudiced belief is that very few of them are of deep importance. Although one of the striking things is that fifty years ago, mathematicians, as opposed to logicians, didn’t think that Gödel’s theorem was very interesting. Now, an awful lot of mathematicians think that it’s enormously interesting to consider the consequences of Gödel’s theorem.
So what your question, uh, which you said wasn’t completely clear, has brought out in me is a much less clear, uh, simple illustration of the way in which it goes off in all directions. And I hope that the way in which I’ve answered suggests some of the ways in which I would answer in detail.
[01:36:11] JOHN CAMPBELL:
Um, we have time for just one last question, which I’m afraid I’m going to ask you myself.
(laughter)
Um, uh, well, you- Philosophers have, um, often contrasted the certainty of mathematical knowledge with the fallibility of knowledge we get through the senses, and you challenged that. You talked about error rates in proofs and so on. Um, and it struck me that when philosophers have been questioning the, uh, certainty of the senses, they’ve often been concerned with systemic ways in which the senses can get things wrong.
Um, I mean, they show us colors in the world, and colors aren’t really there. Um, they show us lots of stuff that isn’t at all to do with what’s going on.
There are reliable ways in which perceptual illusions are produced, and so on. And the mere existence of error rates doesn’t show that there are systemic ways in which mathematical reasoning gets things wrong. And I guess I’m just curious as to whether you would think that there may be, I don’t know, might there be whole tranches of mathematics that are really just an illusion produced by systemic errors in our styles of mathematical reasoning?
Or, um, is there nothing systemic about the kind of errors in mathematical reasoning we might make?
[01:37:28] IAN HACKING:
I know of nothing systemic.
[01:37:31] JOHN CAMPBELL:
Well, on that note. Thank you very much.
(applause and cheering)