[00:00:00] PROFESSOR STANSBURY:
I would like to welcome you all to the Foerster Lecture. This lecture is given approximately annually under the auspices of the Graduate Council. The lectureship was endowed in nineteen twenty-eight to honor the memory of Agnes and Constantine Foerster, and you have that information on the announcement that you were handed when you came in the door.
The Foerster lecturer this year is Professor Gerald Edelman. He is Vincent Astor Professor at The Rockefeller University in New York and Director of the Neurosciences Institute. Professor Edelman trained in medicine at the University of Pennsylvania and received his MD in nineteen fifty-four.
He then served in the Army Medical Corps for two years with an assignment at the American Hospital in Paris. He then returned to the United States as a physician at the hospital of the Rockefeller Institute, which ultimately evolved into The Rockefeller University. At the Rockefeller Institute Hospital, he began graduate study, and he received his PhD from Rockefeller in nineteen sixty.
His graduate research was on the chemical structure of antibodies, and he continued this work for roughly a decade, the result of which was a Nobel Prize in nineteen seventy-two. For all of you who are graduate students, you should take note. The work that you do may bear fruit in the future, even though it doesn’t seem like it today.
Uh, he has received many awards and honors, too numerous to mention here. Suffice it to say he has a number of honorary degrees. Six, I think, is the latest number.
He’s a member of the National Academy of Sciences, the American Academy of Arts and Sciences, the American Philosophical Society, and many other professional organizations as diverse as the American Chemical Society, the American Society of Immunology, the Genetics Society of America, and so on. This, I think, reflects the diversity of his interests.
His work in subsequent years has wandered in several directions. It is noteworthy that he has not been constrained by disciplinary boundaries. Rather, he finds interesting and important problems to study and uses whatever tools from whatever discipline are needed to seek solutions to those problems.
In addition to the work on antibody structure, he has made significant contributions to our understanding of cell-cell interactions during embryonic development and the regulation of cell division. His latest direction is the study of the nervous system and higher brain functions.
In short, he is looking at the relationship of the mind and the brain. He published in nineteen eighty-seven a seminal work on this problem entitled Neural Darwinism: The Theory of Neuronal Group Selection. His lecture today focuses on the mind-brain problem, I hope.
It is entitled Physical Theory, Brain Theory, Theories of Everything. Let us welcome Professor Edelman.
(applause)
[00:03:38] PROFESSOR GERALD EDELMAN:
Professor Stansbury, I thank you for your remarks. You can see that the first stage is a coarse-grained intelligence test, which is to see whether you can talk and tie square knots at the same time.
[00:03:51] PROFESSOR STANSBURY:
You have passed that test.
[00:03:52] PROFESSOR GERALD EDELMAN:
Thank you very much. Well, first I’d like to say to Professor Stahl and the committee how grateful I am for this opportunity to be here, and, of course, also to be in Berkeley. When I dropped out of college (I don’t remember the year; it doesn’t matter), I flew off to Miami Beach.
My mother gave me an alternative, back to college or work, so instead I came to Berkeley. And, uh, I spent a delightful time at the G.N. Lewis Building trying to understand quantitative analysis, at which point I decided that she wasn’t going to relent, and I better go back to school. The result you see before you.
I regret this, but there’s nothing to be said. When I first received the invitation of the committee, I reached in an almost reflex fashion to say no, because the subject, the immortality of the soul, of course intimidated me. But then, on considered reflection, I thought, given my then-present views on the subject, that I would only have one chance in all of eternity to-
(laughter)
And, uh,
(laughter)
so you find me here, and I must again say how delighted I am to be in this neighborhood. I deeply regretted to hear of the catastrophe recently, but I’m extraordinarily pleased to see that people have reacted in a noble and generous way.
(clears throat)
In sixteen nineteen, on November the tenth, René Descartes shut himself up in a stove-heated room and had a vision. That night, he had three dreams. And the third dream consisted of, if I might interpret, a message to attempt to illuminate by reason all of the subject matter of the sciences, and indeed by extension, as it turned out, all subject matter.
And eighteen years later, as you know, he came forth with his Discours de la méthode, in which he proposed a method by which one might do this, and the method seems banal when you reduce it to my kinds of phrasing. First, do not consider anything that leaves you with residual doubt. Second, break big problems into small ones.
Third, proceed from the simple to the difficult. And fourth, just as in calculus, check your work. Now, some two decades, or perhaps two generations, later, Leibniz came along with his proposal of a characteristica universalis,
(cough)
that there would be a method
(clears throat)
for logically computing all matters, not only in science but in politics. And indeed, as Davis and Hersh have pointed out in one of their delightful books, Cartesianism in some sense implied the primacy of the mathematization of the world: the idea that there might be a possibility of a singular description which was complete and which would exhaust, in a sense, that which had to be described. Well, obviously physics is at the base of this subject, although mathematics is its mode.
And physics is, as you know, one of the most pretentious of all sciences: it reaches toward everything in the universe, from the time of Leucippus and Democritus all the way on through Newton and Galileo and others.
Finally, to the dream of Laplace: that if you knew the positions and characteristics of the things in the world, you would know all about bodies great and small, and indeed, that an intelligence that could see all that would comprehend not only the present but also the past and the future with the same sense of mastery. Well, we know now, of course, that the Laplacian dream has no hope. We know also that modern physics has come remarkably close to the notion that if you had a complete and consistent theory of the ultimate particles of this world, then, if you were a severe physical reductionist, all else would have to fall under its sway.
You would have, in effect, a theory of everything. And I suppose, being on the West Coast, you all know that this is a place where that word has achieved a certain poignancy. The successes of modern physics are very great.
And indeed it was Einstein’s dream to unify physics. That unification seems to be coming along quite swimmingly, and there may very well be, in the transition beyond the standard theory and quantum chromodynamics, a theory which will universally connect all four major forces of nature: the strong force, the weak force, the force of electromagnetism, and gravity. Well, so far gravity’s a tough problem, and physics does have to connect the topology and geometry of the world to all these other forces in a completely satisfactory way.
But physics, while it seems to be progressing very nicely in that way,
(coughs)
is haunted by a
(coughs)
Cartesian demon. Descartes himself saw a demon in his dream and vision. That Cartesian demon haunts physics because one of the extraordinary things about physics is that, ineluctably, the observer was drawn in.
Now, he is psychologically a rather transparent observer. He has no Freudian dreams. He has simple connectivities with clocks and measuring rods, or he collapses a wave function, tempting some physicists toward rather mystical theories of how consciousness intervenes upon quantum mechanics.
But be that as it may, that is a very severe problem, and it’s one that physics itself has not been able to avoid. Indeed, now it welcomes it. And so the question is: Can you construct a theory of everything and be naive about the observer?
My proper subject today is that you cannot. And indeed, if you are presumptuous enough to say you have a theory of everything, you must have a theory not only of the grandeur and pretentiousness of physics, but also one which will explain in a satisfactory way, if only in tension, how an idea of the observer would fit. We must confront ourselves, in other words.
Okay. Well, there it is. Now what I would like to do in this lecture, therefore, is to discuss some of these subjects, and I first have to start by making an apology.
There’s no way that a scientific specialist in the late part of the twentieth century can avoid technique. This is very boring, and I found no way to escape it. You may think me a fool with technique at the end of this lecture, but without it, I would be utterly shamed.
And so what we have to do is enter into a pact with each other about how this lecture is going to go. I’m going to show you lots of little details. I’d like you to ignore that and just sort of try to get a, an impressionist view of what it is I’m trying to say.
What I shall be trying to say is the following. I would like to tell you a little bit about the brain. I would like to tell you a little bit more about perception.
I would like to point out to you that no simple-minded notion of computing the world as a piece of tape for the brain as a computer will account for our problem of the observer. And then what I would like to do is propose more or less humbly, uh, a way of getting around this difficulty and show you at the end a film of a new kind of automaton, which we think in New York is sort of amusing. It’s not a computer, but it does require computers heuristically to work.
So that’s my program. And what I’d like to do is start by going back to the seventeenth century and point out these two gentlemen. If you could take the lights down and, uh, put on the first slide.
Could we shut those somehow? And I… I see.
Have we lost the lamp? Here we are. Well, here we are in the seventeenth century now with some slides.
And over here is our friend René Descartes, a most remarkable man. But I want for the beginning to talk about this chap, Galileo, who might safely be said to have invented mathematical physics and indeed, in a sense, all of modern science. And as Erwin Schrödinger pointed out, one of the things he did was to remove the mind from nature; meaning, of course, as a kind of metaphor, that he took away the Aristotelian involvement in explanation, replaced cause by law, kept his own sensibilities out, as you may see if you read his Saggiatore (The Assayer), in which he anticipates John Locke on secondary qualities by a considerable amount of time, and then proceeded to test the mathematical model by experiment.
As I mentioned before, Descartes was terribly concerned not only with matters scientific but also with matters philosophical. And of course, I think it’s obvious he’s one of the greatest philosophers of all time. He still haunts our science, and we must deal with the problem that he came up with.
And that problem, which I didn’t advert to immediately, is the problem of his dualism, which perhaps doesn’t follow so strongly from his method, but which he insisted upon: that there were in fact res extensa, the kinds of things that Galileo could deal with, but also res cogitans, a kind of thinking substance not subject to that kind of analysis, but only to the particular kind of introspection which he practiced in that extraordinary hyperbolic doubt he indulged himself with. Well, I think anybody who has a scientific theory of everything has got to do something to expunge res cogitans, and in some sense, you might think of this as our task.
Well, here’s a creature linked by evolution, I assure you, to those two distinguished gentlemen of the previous slide. But he has more the response that I had when the projector didn’t work. This is, this is the startle response of the nine-banded armadillo.
This is what happens if you pop a flashbulb at an armadillo. And I put it here to point out that you might think that this is merely a matter of motion since he just leaps in the air. But I assure you, it has the abstraction of algebra itself, and it poses the deepest of problems to us who are interested in the nervous system, the problem of movement rather than motion, and I shall come back to that.
Well, to deal with as sweeping a subject as this: if you look into the literature, you will see that most working neurobiologists don’t want to be bothered with any theory at all, especially philosophy. But that bothers my friend John Searle here, Professor Searle, as much as it bothers me, because we both agree that it simply won’t do to consider the brain as a computer and the world as a piece of tape. Here on this slide is a little cartoon of a general mathematical expression put forth by Alan Turing to describe all possible digital computers: a Turing machine.
A machine which is a finite-state automaton, but which has an infinite tape, which it can read in this little square here; and it can do very simple things: either read, or erase and write, or move to the right or to the left and change state, as shown here. It has a program with conditions and actions, and as a function of the particular symbol read and the condition on the program, it will then proceed to do another action and carry out a so-called effective procedure. Now, Turing showed that if you made a universal Turing machine, it would describe all possible such machines.
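Since the argument leans on this picture of a finite-state controller over a tape, here is a minimal sketch of the scheme in Python. The transcript contains no code; the state names and the bit-inverting example program are our own illustration, not anything Turing or the lecturer specifies.

```python
# Minimal Turing machine: a finite-state controller over an unbounded tape.
# The example program inverts a string of binary digits and then halts.

def run_turing_machine(program, tape, state="start", head=0, max_steps=10_000):
    """program maps (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        tape[head] = write                # write (or erase-and-write) the cell
        head += {"R": 1, "L": -1}[move]   # move the head one square
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Condition/action table: read a bit, write its complement, move right.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

The point of the cartoon is how little machinery an "effective procedure" needs: a read, a conditional write, a move, and a state change.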
And the assumption, tacit or explicit, that the brain is such a thing and that the world is a piece of tape is something I wish to cause doubt about today. So let me try. But before I try, let’s leave the Turing machine up, and let me establish a pact with you and give you a feeling for how I believe science goes in the absence of a completed theory of the kind I wanted to talk about today.
The story is of a young man who suspected that his girl was carrying on with someone else. So he came home on this very hot New York summer evening, and he looked under the bed and in the closets for his rival, and they shouted with each other, had a terrific fracas, and he didn’t find him. But at the end of this, he found himself at the rear window of this dreary flat with the window open, and he looked out of the corner of his eye, and he saw on the fire escape below a man wiping his brow and loosening his tie.
So he flew into an enormous rage and picked up a huge five hundred pound refrigerator, smashed it through the window, aimed it carefully, and dropped it on this man’s head, at which point he dropped dead. The scene switches to heaven. Three souls are being admitted, and this is as close as I come to our proper subject.
Um,
(laughter)
Saint Peter, Saint Peter
(laughter)
says, “You fulfilled all the bureaucratic requirements, but for the record, you have to say how you died.” The first man said, “Well, I thought there was a little hanky-panky. I came home early.
I couldn’t find this fellow, but I finally saw him. I must have had an adrenaline fit. I dropped this refrigerator I couldn’t ordinarily lift on his head, and I must have had a heart attack.”
And the second fellow said, “Well, I can’t afford an air conditioner, so I came home from work a little early. I stepped out on the fire escape, loosened my tie, wiped my brow, and this refrigerator fell on my head.”
(laughter)
And the third fellow said, “I don’t know. I was just sitting in this refrigerator, minding my own business.”
(laughter)
I hope by this example to show you the difficulties of reductionism. But I’m going to plunge ahead into it, and again, by our pact, forget the technical details. This first, rather abstract slide from one of our papers just turned out to be in the box, and I thought I’d use its simple, cartoon-like character to show you something about what neurons are like.
They have a cell body, marked here by the letter J. They have something called an axon, which is a long extension, and they attach at a synapse to another neuron by something that neuron puts out called a dendrite or a dendritic spine. And at that so-called synapse, where all the action is, chemicals usually are released when an electrical signal goes down this fiber and activates certain very complex mechanisms, releasing a transmitter, stimulating under certain circumstances the next neuron, and so on.
Now, that might convince you that this is just something you could buy at Heathkit or RadioShack; you know, you just put a lot of them together. And I do want to tell you how many there are, just so we can be struck with awe. If nothing else I say imposes on you, maybe the numbers will.
If we take a brain like the human brain and peel off the cortex, we get something somewhat larger than a table napkin, with about the same thickness. It will have one followed by ten zeros’ worth of neurons. Not just cells, there are more cells than that, but the neurons, the ones I just showed you: ten billion neurons, one million billion connections or synapses.
If you counted one per second, you would finish just counting them thirty-two million years later. That’ll give you the feeling for our problem. It’s not an easy problem.
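The arithmetic behind that thirty-two-million-year figure is easy to check; here is a quick sketch, using just the lecture's round numbers:

```python
# The lecture's round numbers: ~10 billion cortical neurons and
# ~one million billion (1e15) synapses, counted at one per second.
neurons = 10**10                     # ten billion
synapses = 10**15                    # one million billion
seconds_per_year = 60 * 60 * 24 * 365

years_to_count = synapses / seconds_per_year
print(f"about {years_to_count / 1e6:.0f} million years")  # about 32 million years
```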
So as good scientists, we go back to how the thing is generated. And this is just the kind of cartoon I’ve made for you of neuroembryology, the field that studies how do you make one of these things. Well, we don’t really know very well how we do it, but let’s try to say a little bit about the details.
It starts off with a chick egg, the kind of thing Aristotle described in his classic studies. He, by the way, had loads of graduate students, it turns out. So he could do this.
If you’re a realist, you need a lot of graduate students. Now, the idea is that if you looked at that chick, you would see something called a neural plate. And very rapidly, it would fold up into a tube, never mind how for now, which would fill with fluid and begin to crease in key directions; and at the same time, these neurons I pointed out to you would start moving, either right through this whole structure, through various tracts, or in sheets cutting across, say, this cortex, sheets in which they go along guide cells and move from here to here or from here to here, et cetera.
Finally, they end up in this intricate architecture, shown here by the so-called Golgi stain. Now, a Golgi stain stains less than one percent of all the neurons. If it stained everything, you’d just see a sheet of black.
That’ll give you a feeling for the connection of the qualitative with these numbers. So this is just a sample of this intricate kind of circuitry. Why am I showing this to you?
Because in fact, if you are a Turing-machine maniac, or a person who thinks that information is what your brain takes from the world and that you categorize the things of this world that way, you have a crisis. And here are some examples of the crisis. Given what we know, it’s absolutely out of the question to talk about the precise, pre-specified, point-to-point wiring of the kind a computer engineer talks about.
Not only that, uniquely specific connections cannot exist. A sort of lemma on that point. If you try to say it must be that X is uniquely connected to Y, you don’t have to go through many brains to find a counterexample.
Worse than that, as you’ll see later in this lecture, if you look at the way those neurons arborize like trees into each other to make the connections, they overlap in an immense way in three-dimensional space with an extraordinary kind of sprouting set of arbors, in such a way that if I’m sitting as one cell which is connected to some of these arbors, I would hardly know which cell is connecting to me. So now I’m getting into something more than simple counting. I’m getting into combinatorics.
Let me give you some examples. The first is by one of your professors who’s done some pioneering work here, Corey Goodman, and Pearson, his colleague, who looked at the common locust and filled these neurons with dye.
This is the so-called descending contralateral movement detector, which has something to do with flight. And here are four outbred locusts; look at these neurons with the same name. These creatures would have more or less the same behavior.
Well, you might object that this is an outbred population, so let’s move to a simpler animal called Daphnia magna, the water flea, which is a parthenogenetic female. It breeds strictly by cloning, if you will. Genetically identical. Here is an ommatidial neuron, a visual neuron: left and right from individual one, left and right from individual two.
Notice no two are alike. This is enough, by the way, to drive a computer engineer sort of wacky. He would have a lot of problems with that.
Third, looking within one individual, here is a figure from the great Spanish anatomist Ramón y Cajal: a repeating structure of cerebellar wiring in the same rabbit. And even among the repeats in the same individual, there’s an enormous variation. So there’s extraordinary structural variance that goes beyond any notion of signal and noise in an engineering sense. And that’s the first problem anybody would have saying the brain is a computer.
I’m not even getting into the kind of sophistication Dr. Searle gets into on BBC. I’m talking literally about just what you see. Now let’s go to function.
If you look at these connections and you probe around with an electrode for the electricity, you find out that at any particular time, the majority are not expressed, and it’s rather difficult to find any easy pattern in them, except for the easy kinds of situations. If you look in the brain that I showed you, that human brain, you’d find, as in medical textbooks, it is organized evolutionarily into maps. If I touch a finger here, there’ll be a part of the cortex which will map, as it were, my skin.
Ever since the great Johannes Müller, it has been known that there is a kind of specific energy of the nerves: it’s which one you tap that says it’s touch, as it were, or which one you shine light on that says it’s sight. This is organized in maps, but contrary to the classical medical-school text, the maps are enormously dynamic, as you’ll see from the work I’m going to show you a little later. That poses a terrific problem for any notion that this computer is sort of sitting still, even in adult life.
But this is the one that really gets close to the philosophers and really points out what I think is the central problem of modern neurobiology. It is that there is extensive generalization in object recognition by animals who do not have language. They see a few examples, and they generalize enormously, and this is a Cartesian problem.
The unitary appearance to a perceiver of a scene such as the one I’m seeing now (a little intimidating, but with color, direction, movement, and what have you) is taken care of by twenty-five independent maps of my visual brain, with no superordinate map saying that it all should hold together. To me, it looks all just fine, at least right up to this point. I won’t touch too much on that, if you don’t mind, but I’ll go on to show you something about this perception.
I want to get back to this particular issue here, because I think perception serves as the function that poses this point best. I want to say four things. First, perception is not necessarily veridical: what you see is not necessarily what scientists tell you is there, and we have to yield to scientists, after all, especially if they have grants.
I mean, you can’t… Second, there is an enormous context dependence of perception. Third, there is this extraordinary generalizing power in the system that we don’t yet understand.
And finally, there is no external judge saying what’s what. It’s an adaptive system. So let’s take this Wundt-Hering illusion here.
You’ll, I think, all agree that these two lines are in fact parallel. I don’t know if the screen is keystoned or whatever, but from here they are. And yet incorrigibly, I think you’ll agree, these lines seem bowed in and these bowed out.
This is the Wundt-Hering illusion. Another one, perhaps more striking in its incorrigibility, is the Kanizsa triangle. Kanizsa is a psychologist living in Trieste, still alive, who has invented these wonderful figures that give illusory contours.
And I don’t know if it shows here, because I’m rather poor in vision, but this should show a slightly different apparent illuminance, and it should show a very definite overlying triangle. If you did physics on it, you’d find out very rapidly that there’s nothing there. But in fact, there is something there, incorrigibly there.
This has to be answered. It depends, of course, on the context of what’s around it. If I take away these Pac-Men, it won’t work.
(laughter)
This is one example of a kind of paradox within a paradox, because they should be gobbling it up, but in fact, they create it.
(laughter)
Now, here’s a picture by a remarkably talented artist whose humility goes beyond his artistic talent, that’s clear, and I think most of you will see a face. If I put this figure in the context of one of Wallace Stevens’s (our greatest poet’s) curiously quixotic titles, Frogs Eat Butterflies. Snakes Eat Frogs. Hogs Eat Snakes. Men Eat Hogs., and you let that sit a while… I’ve cheated; I’ve given you conceptual things of butterflies being consumed by frogs and whatever. But you won’t ever look at that picture quite the same way again.
It would take much longer if I showed you a perceptual contextual difference, but believe me, it’s there. So, so far I’ve said: veridical, not necessarily. Context dependent, yes.
And now we come to the deep one. These are the experiments of Professor Cerella at Harvard, who showed pigeons oak leaves in an operant-conditioning mode. This is Quercus alba, the white oak.
The reward was a piece of corn, or whatever you feed pigeons. After about four such trials, this pigeon could discriminate all oak leaves from all other kinds of leaves. Now, you mightn’t find that very exciting, but Professor Herrnstein then took a thousand Kodachromes, randomly, of trees and also of pseudo-trees, and he conditioned in the same way, and the pigeon after conditioning could then generalize to that as a tree, to that as a tree, to those as trees, and reject all these.
This is getting a little more nervous-making. And when you read about this, you don’t believe it. So I’ve gone up to Harvard.
Well, that doesn’t mean anything, does it?
(laughter)
So I went up to several other laboratories and found out, by golly, it seems to be true. Now, the first objection you’d make is: well, this is an ethological problem. Pigeons evolve in trees.
You could see how evolution could pick for tree-ness, whatever it is. So Professor Herrnstein then hired a scuba diver. This is the nice thing about being a professor, I guess.
He hired a scuba diver to take pictures of fish, and he showed the pigeons fish in every possible context, and they did the same thing. It took a little longer, but they did it. Pigeons don’t live with fish.
(laughter)
Pigeons don’t eat fish, and pigeons don’t evolve with fish. This poses what our problem is. Some people have supposed that the problem is embodied here in the so-called polymorphous set, a set named by Professor Ryle, a philosopher, after some notions of Wittgenstein. And I won’t take the time to let you puzzle out what the difference between
yes and no is. I’m going to tell you, because it took Cambridge students about eleven hours to figure it out. That doesn’t say anything in itself, but–
The fact is, if I said that at least two of dark, round, or symmetric decides “yes,” I think you will see what a polymorphous set is: one in which there are neither necessary nor sufficient conditions to define membership.
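The "at least two of dark, round, or symmetric" rule can be written down directly. A small sketch; the boolean encoding of the three features is our own, while the feature names are the lecture's:

```python
# A polymorphous set: membership is "at least m of n" features, so no
# single feature is necessary, and no single feature is sufficient.
def is_member(dark, round_, symmetric, m=2):
    """True when at least m of the three features hold."""
    return sum([dark, round_, symmetric]) >= m

# Dark and round, but not symmetric: two features suffice.
print(is_member(dark=True, round_=True, symmetric=False))   # True
# Dark alone is not enough: no single feature is sufficient.
print(is_member(dark=True, round_=False, symmetric=False))  # False
# Round and symmetric also suffice: dark is not necessary.
print(is_member(dark=False, round_=True, symmetric=True))   # True
```

Notice that a learner shown only members and non-members would have to discover the counting rule itself; no single diagnostic feature gives it away.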
And some people suspect that’s what pigeons are doing. So we have this terrible problem of generalization without any kind of message, but worse than that comes the judge. There’s no judge.
If you look here and define these as triangles and these as ellipses, how do you define this? As a large object or as an ellipse? Well, it depends on where the food is, right?
So the problem is that there isn’t any judge that sets category in nature. I think I’ve belabored this thing into the ground, and I’m going to just summarize. I believe that there is sufficient evidence to say that, from the engineering or even mathematical point of view, the world is an unlabeled place.
It does not have a singular map, and if it did, Darwin would be wrong. Brain order shows biologically significant variance, which is at the heart of biology, but it’s enough to really confute any engineering description of an ordinary kind. Perception is adaptive and context-sensitive.
I showed you this for this jumping armadillo, take my word for it. And perceptual categorization and generalization are a fundamental business. Well, how are we going to solve this problem?
How, if there isn’t a computer description of the brain, are we going to deal with this? I believe, and that’s one of the secondary messages of this talk, that the answer lies in the greatest biological theorist who ever lived, Charles Darwin, who invented population thinking. Unlike in physics, where variance is error, ineluctable but to be eliminated as much as possible, in population thinking variance is of the very essence.
It is real, as Professor Ernst Mayr has put it. It is the basis upon which natural selection occurs. Prior variance in a population in which every individual differs from the others is your stake, if you will, on the future.
If anything happens and selection occurs, if you have enough variance with enough adaptation, you will survive and procreate. One way of looking at this, to make it easier for those of you who haven’t thought of it, is that competitive situations can yield structure without any other message. For example, the jungles of Panama and Puerto Rico are very similar, but Panama being on the mainland and Puerto Rico being an island, Panama has about ten times as many bird species.
If you looked in the trees and counted vertically and sort of cataloged the species, you’d see a high degree of order and layering, very reminiscent, as a matter of fact, of the layers of the cerebral cortex of the brain, with specialist birds going from here to here and here to here, et cetera, with overlapping
(cough)
intra and interspecies
(cough)
competition. If you looked in Puerto Rico, you would see no such ordered structure. The goodies are rampant.
There’s no terribly strong competition, and the structuring is much less. Well, I want to give you a quick example, and now here we get to the apologies for technique, from immunology, to cement this idea. That’s the field, as Professor Sensabaugh said, that I used to work in. In immunology, it was once thought that your body recognized foreignness, foreign substances, sort of the way a cookie cutter makes a cookie out of dough, instructionally, or just the way the world might be a piece of tape.
And Linus Pauling proposed this elegant theory. It turns out it was wrong. The right way is selective.
Your body makes an enormous number of cells called lymphocytes. I mean, ten followed by eleven zeros of them. And the numbers here are supposed to indicate different antibodies on their surface.
And when a foreign molecule comes in and binds, as indicated by these black dots, some of these divide and make more and change the population. So the next time around, you recognize. Well, notice that’s very different.
You already have the variant order before you begin. You have a huge amplifier for any event that’s satisfactory, whatever that means. Well, this is the molecule I worked on, and over here at the corners of the molecule, right at the edges of this Y, and this molecule will be sitting on a cell the size of this building or larger, um, is a place where every single cell has a different one, sort of like a different lock for a different key beforehand.
Sort of zany, isn’t it? But it works absolutely magnificently. Now, that’s the idea of selection, and I want to pursue that now for the nervous system and then show you this automaton.
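The selective logic just described, a prior repertoire of variants amplified by whatever happens to bind, can be sketched in a few lines of Python. This is a toy illustration of our own, not anything from the lecture’s slides; the repertoire size, the integer “shapes,” and the matching tolerance are all invented for the sketch.

```python
import random

# Toy of clonal selection: a pre-existing, varied population of
# lymphocytes, each carrying one of many antibody "shapes" fixed
# BEFORE any antigen is ever seen. A matching antigen makes that
# clone divide, shifting the population toward good binders.

def make_repertoire(n, n_shapes=100):
    """Variation comes first: each cell gets a random antibody shape."""
    return [random.randrange(n_shapes) for _ in range(n)]

def expose(repertoire, antigen, tolerance=2):
    """Selection, not instruction: cells whose shape roughly matches
    the antigen divide; the rest are left unchanged."""
    expanded = []
    for shape in repertoire:
        expanded.append(shape)
        if abs(shape - antigen) <= tolerance:
            expanded.append(shape)  # division amplifies the matching clone
    return expanded

random.seed(0)
pool = make_repertoire(5000)
for _ in range(5):                  # repeated exposure to the same antigen
    pool = expose(pool, antigen=42)

fraction = sum(abs(s - 42) <= 2 for s in pool) / len(pool)
print(round(fraction, 2))           # matching clones now make up most of the pool
```

The point of the sketch is exactly the one made above: the population already contained the answer as prior variance, and the antigen merely amplified it.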
The idea is that during the development of the nervous system which I showed you, cells are selected to make certain particular kinds of patterns, of course under the influence of genetic constraints, but also as a result of local movements which can’t be controlled by genes, introducing an enormous amount of variation in the kinds of nets that are formed in each individual, even in identical twins. And then somewhat later, but not necessarily non-overlapping, after the structures are formed, if signals come from the world onto this prior, very various network, certain connections are going to be strengthened over the competition, indicated by this line. Well, that’ll do a lot, but a nervous system is not as boring as an immune system.
Uh, it can do remarkably new things, map into space and time. And so the third premise of this theory is that there are specific kinds of relationships between maps consisting of these variants, and these relationships are called reentrant. Now, that’s the hard part of the theory.
I’m not going to talk about that now. You’ll see it in this automaton. So to summarize, the idea is that you form groups of neurons which are like the lymphocytes with their antibodies, except this time they’re little Heathkits, every one of which is different, all over your brain before you see the world.
And then when you do receive signals, certain preexisting ones are differentially amplified by strengthening their connections, but not their pattern, and finally, they interact with each other in maps to give something related to categories. I’m going to talk about that last. The predictions of this theory get technical, and I’m going to race through them for those of you who are scientifically interested, just to give you a hint that it isn’t completely up in the air.
If this theory is correct, it can’t be that nerve cells recognize each other like jigsaw pieces. You can’t have addresses of the kind that you can see if you visit Mont Saint-Michel, where they chiseled a number into each stone on the land and into its companion, and put them together on the island.
Instead, there must be some kind of dynamic process which will give roughly a pattern, but would also generate diversity. The second idea is that it should be groups of these neurons which are selected, not individual ones, but clusters of them. The third idea is that this theory should be able to account for the variation in maps, the ones that I showed you that vary from individual to individual.
And finally, you should see some relationship between this kind of re-entry between these maps, the connections of several maps, and how you can do categorization. Well, here comes some technique. Unique.
Forget it. Take a look at this. This is a picture of a so-called cell adhesion molecule, the neural cell adhesion molecule, first discovered in our laboratory a little over a dozen years ago, and one of the molecules that glues your body together.
I show it here just to indicate that when you worked out the structure of this, a great surprise came up. It turns out that the molecule has a structure very much related to antibodies. And as a result of Professor Goodman’s work and others’, the prediction we made, that in fact the whole immune system arises from this cell adhesion system, seems fairly secure.
This kind of molecule binds a cell here to a cell here by binding to itself, as shown here. In other words, apposing cells will stick out their molecules, glue like that, but you mustn’t think that it’s all addressing. While they are specific for each other, and there are perhaps a couple of dozen different kinds, it by no means accounts for why one cell is next to another.
More technique. What does account for it is that these CAMs, or cell adhesion molecules, turn off and on depending on what structures they build. So they just keep making more and more and changing the environment until the signal changes, and it’s a very dynamic system which creates diversity.
Of course it creates pattern, but in the course of making it, it must make diversity. Now, the idea that cells are selected in terms of interacting groups of neurons, as illustrated here, has recently received some extraordinary support from an experiment in Germany, in which a cat’s visual system was presented with a bar moving in this direction, and visual neurons were recorded from with an individual electrode that could see the frequency of firing. At the same time, the so-called local field potentials of whole clusters of neurons could be measured.
And what you’re seeing in this slide is the correlation of the individual electrical firing of neurons with these local field potentials, which is really absolutely remarkable. The minima here at about forty hertz correlate exactly. So these neurons that are responsible for responding to this bar are in fact doing it as a coherent oscillatory group.
Third, in San Francisco, Professor Michael Merzenich has done a most remarkable kind of experiment. He’s mapped the sensory cortex of the brain by tapping on a monkey’s finger or palm and recording from that part of the somatosensory cortex, area 3B, which yields the map illustrated here. He could map very carefully, when he tapped finger one, two, three, this kind of map in an individual monkey, white for the palm and dark for the hairy surface.
Each monkey had a different map, but here came the experiment. He then cut the median nerve, which supplies the thumb, this finger, and the middle of that finger, and all of the maps switched right away. All the borders changed, sort of like colonial Africa in the nineteenth century.
There was an enormous increase in map territory taken over by the dorsal surface, and if he kept the nerve tied, the map adjustments for the fields of the remaining nerves would form a unique and quite functional map after about six months. If he trained a monkey to tap with one finger a hundred thousand times, the map would move in from that finger and take away neurons, we believe as groups, from its companion map.
[00:40:46] AUDIENCE MEMBER:
Who is that? Michael Merzenich?
[00:40:49] PROFESSOR GERALD EDELMAN:
Michael Merzenich, Professor Merzenich at the University of California Medical School. Um, now here I just want to deviate, and then I’m going to show you this film. Um, this thing is so complicated that even if I had the right theory of the brain, if I don’t use a computer as a heuristic, uh, instrument, I can’t figure out what’s going on.
Try it. I say to you, “Think of what I’ve told you. Close your eyes and think of a hundred thousand neurons firing off for a hundred milliseconds and tell me what they’re doing.”
When I tell you there are ten to the tenth of them, you really get scared. So one of the really exciting things happening in neuroscience is that computers are being used to enable visual imagination. And here’s an example of a model of Merzenich’s experiments in which a four-fingered hand and a palm with a dorsal hairy surface and a smooth surface are mapped with pretty reasonable geometry, but not as good as the real anatomy.
Close. And here’s a picture in the computer of fifteen hundred neurons. Fifteen hundred neurons.
Nothing. A match head of your cortex would have possibly close to half a billion connections. Here it is with about a hundred and twenty thousand connections, shown here, where they’re connected at random, deliberately, in this anatomy, and that’s what explains why green is prevalent. It’s the mean connection: we drew a green line from every neuron to every other neuron with the mean value. Weak connection strengths are blue and the strongest are yellow.
And then we tapped on the hand using a selective theory, and this is what we got immediately. Nothing changes in the wiring, but these things organize themselves into competing groups which strengthen their own connections and weaken those of their neighbors, except to go and snatch one. And the whole thing is as dynamic as can be, and you can actually look at the map by expanding your computer scale of the hand.
And here is the palm and then the dorsum, dark green. Wherever it’s dark, it’s the dorsal part. And here’s what happens if you tap on this finger.
This thing just proceeds to rob groups from all its neighbors. If you let it go for a while, it’ll sag back. If you cut the nerve, as shown here, not quite a median nerve, you get some dark spots.
This back portion leaps up and takes over all of the area, and there are subtle changes in the borders of all the other maps, even those not served by that nerve. This is not the kind of thing computer engineers deal with. I’m not sure we’ve dealt with it adequately, but it certainly isn’t what they deal with.
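The competitive dynamics being described can be caricatured in code. The sketch below is a drastic simplification of our own, not the simulation shown in the film, with invented numbers throughout: a strip of cortical units, one connection weight per finger, where tapping a finger strengthens its input on the units that respond, and normalization makes those gains come at the expense of the rival fingers’ inputs on the same units.

```python
import random

# Toy of competitive map formation: 60 cortical units, each with one
# connection weight per finger. A unit's "territory" is the finger it
# answers most strongly. Tapping one finger strengthens its input on
# responding units; renormalizing the unit's weights weakens the rivals.

random.seed(1)
FINGERS = 4
units = [[random.random() for _ in range(FINGERS)] for _ in range(60)]

def territory(finger):
    """How many units answer most strongly to this finger."""
    return sum(1 for w in units if w[finger] == max(w))

def tap(finger, times, rate=0.2):
    for _ in range(times):
        for w in units:
            if w[finger] >= 0.8 * max(w):   # this unit responds to the tap
                w[finger] += rate           # strengthen the active input...
                total = sum(w)
                for f in range(FINGERS):    # ...and renormalize, so rivals
                    w[f] /= total           # on this unit are weakened

before = territory(0)
tap(0, times=50)     # heavy stimulation of one finger, as in the experiment
after = territory(0)
print(before, after)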
Now, this brings me to my final point. What I’ve shown you so far is some evidence for this kind of theory which says… Could I have some lights for a minute?
Which says, instead of a tape and instead of a prefigured algorithm, a brain is more like—
(laughter)
That’ll help.
(laughter)
It sure attracted my attention.
(laughter)
Um, instead of a tape and a prefigured algorithm, maybe what’s going on is more like an evolutionary jungle. Maybe you have an incredible kind of neuroecology going on inside there. You come in under a genetic constraint, of course, having been selected for certain very definite survival traits of the nervous system.
But in every individual case, there’s a many-one kind of mapping to an unpredictable and unlabeled world that will yield category. That’s the problem I now want to attack, and that’s my final statement. Um, and here comes the hardest subject.
Um, this subject has to do with, well, how can you put all this together to give categorization? Can you build a categorization machine? Can you build a nervous system and have the darn thing behave without being told in a way that would collect according to adaptive criteria what’s out there?
Well, the key idea is hard. Maybe we could knock those down. If you leave that on, it’ll comfort me immensely.
If you could knock the top lights down, we’ll try to go through this. Suppose I have two maps made of these groups of neurons, like the ones I showed you from Merzenich, one of which detects features of objects, like corners and orientations and things of that sort.
The other correlates features, like what happens when I caress this box and everything gets sort of lumped together, as it were. Assume that these two maps are disjunctively sampling the world, independently of each other, but that in the anatomy they are mapped densely to each other by reentrant connections, connections that go back and forth, back and forth, not by feedback, but by a parallel, recursive kind of set of structures which could light up anywhere. Now imagine a signal comes in here and lights that one up, and it so happens that it falls in a region where groups will light up here, and this connection is strengthened. So I map the maps in a reentrant, dynamic fashion.
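The reentrant coupling of two independently sampling maps can be reduced to a toy as well. Everything below is a hypothetical illustration, far simpler than anything in Darwin III: each map independently picks its own responding group for an object, and the reentrant link between whichever groups happen to fire together is strengthened.

```python
# Toy of re-entry between two maps: map A samples an object by visual
# feature, map B by tactile correlation. The two samplings are disjoint,
# but co-activation strengthens the reentrant link joining them.

N = 8  # groups per map
# reentrant connection strengths from group i of map A to group j of map B
links = [[0.1] * N for _ in range(N)]

def present(obj):
    """Each map independently picks its own responding group."""
    a = obj % N               # the visual feature map's responding group
    b = (obj * 3) % N         # the tactile map's responding group
    links[a][b] += 0.3        # strengthen the co-active reentrant link
    return a, b

for _ in range(20):
    a, b = present(5)         # the same object is seen and felt repeatedly

strongest = max(max(row) for row in links)
print(links[a][b] == strongest)   # the co-active link now dominates
```

Neither map instructs the other; the correlation is carried entirely by which links were strengthened, which is the sense in which the maps get “mapped to each other.”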
Will this allow us to categorize? Well, I’m going to show you a machine, um, which we have been working on, it’s the earliest model, which has some of these properties. It is an automaton which deals with an unlabeled world, and it’s a sessile creature.
It’s called Darwin III. It sits there. It’s like an educated barnacle.
It, uh, it has a four-jointed arm and a head with an eye, but not much else. It can’t move much. Objects at scale with it can be moved at random into its visual field and its tactile grasp.
And it’s constructed, according to this theory, with reentrant selective maps. Now, before we show the movie, which is my final gesture here. Well, I hope not my ultimate one, although we can examine the structure of this lecture in some other place.
Um, um, before I do that, I should perhaps tell you how to look at this movie, or you’re going to misunderstand me. The movie is not really designed to be, you know, the slickest presentation of this. It was designed actually for us because it involves doing this: simulating such a creature in a very large supercomputer with a program that follows a Turing machine.
So what am I saying? Well, the simulation is like I knew your genetics and your evolution, and I knew it now, and yours, and I built it in there. I have a program which describes it, but I do not have a program which says what’s going to happen or what’s going to happen inside the nervous system.
Neither the creature nor myself can know that. A way of doing that is after you’ve built the whole program of the muscles, the nerves, the maps, everything, drive the intrinsic activity of the neurons with a random number generator and drive the outside world with another. And then there’s no sort of notice.
At no time do my hands leave my arms. There are no tricks. So that’s very important to remember.
After the thing has been simulated in a Turing machine by a Turing machine procedure, from there on in, things are just mapping at random. Now, there’s more to it than that. The creature has three senses: sense of visual contour, black and white, no gray, no color.
It has kinesthesia, or joint sense. Whenever its joints move, its neurons pop up and down. And it has light touch at the end of its last joint, but no other touch sense.
The final thing is a philosopher’s dream. We have not been able to make these things work unless we inserted value. Value.
So I’m going to have to tell you what value is. This is not easy. Uh, it’s an overcharged term which we use.
When we build a creature, for example, we literally build it so that that particular version likes light rather than darkness. That’s all we build in, not category, value. We can build one arbitrarily that likes caves, dark things.
We can build one that hates light touch, et cetera. That low-level value is absolutely essential to the functioning of this creature. So now what I’m going to do, or now we should shut off that light.
(light switch clicks)
And if you could, um, I’ll just hope that I did this right. No, wait a sec. You can leave it on.
Good. Now, if you’ll run that movie, here is Darwin. When I show this movie, I feel like that Crimes and Misdemeanors movie I saw at, uh, the local cinema the other night.
This is based on this theory, and the people who really did it are the ones under the first name. When I say I, I mean we. And when I mean them, I mean them.
Okay. Um, Darwin III looks at a world. The center of its vision is eleven percent of the world, its peripheral vision is eighty percent.
It has an oculomotor system that moves its eye, a four-jointed arm at the end of which is a tactile system for light touch. That means neurons simulated to respond to pressure, et cetera, and a central nervous system. And we’re going to go down system by system, letting the creature be born into a world we can’t predict and it can’t predict.
Here’s the oculomotor system. Quote, “It has a retina.” Quote, “It has a colliculus.”
Those neural structures connect to an eye motor system that moves the eye back and forth, antagonist muscles. And this value-dependent modification refers to strengthening of that synapse when in fact value is positive, meaning light is falling on the retina. And now here you see this colliculus strength, the strength from this to this.
Blue is the strongest, yellow is weaker, red is weaker than that, and black is weakest. These are populations of synaptic strengths as this computer starts to move this object at random into a randomly operating nervous system. And here you see there’s no correlation between the two.
But if you look carefully, every time value goes positive, when this moves in and light falls on that, you begin to see changes, like that red there, and the populations begin to shift their strengths by selection, by competition with each other, like those birds. After four thousand such trials, this creature will track. It’s certainly not sophisticated, but it will track.
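In the same hedged spirit, here is a one-dimensional caricature of value-dependent selection in the oculomotor example. Nothing here is the Darwin III code; the value signal is simply “the light got closer to the center of gaze,” and only the synapses that happened to be active when value went positive are strengthened. No movement is ever instructed.

```python
import random

# Toy of value-dependent selection: motor units move a 1-D "eye" left
# or right. Whenever a movement happens to bring the light closer to
# the center of gaze (value goes positive), the synapse that was just
# active is strengthened; everything else is left to compete.

random.seed(3)
MOVES = (-1, +1)
# synaptic strength of each move, indexed by which side the light is on
strength = {(side, m): 1.0 for side in ('left', 'right') for m in MOVES}

def trial(eye, light):
    side = 'left' if light < eye else 'right'
    weights = [strength[(side, m)] for m in MOVES]
    move = random.choices(MOVES, weights)[0]       # selection among variants
    value = abs(light - eye) - abs(light - (eye + move))
    if value > 0:                                  # light moved toward gaze:
        strength[(side, move)] += 0.05             # strengthen what just fired
    return eye + move

eye = 0
for _ in range(4000):                              # the four thousand trials
    light = random.randint(-10, 10)
    if light != eye:
        eye = trial(eye, light)

# the corrective movement has been differentially amplified on each side
print(strength[('left', -1)] > strength[('left', +1)],
      strength[('right', +1)] > strength[('right', -1)])
```

The non-corrective synapses are never strengthened, so over the trials the population of strengths comes to favor tracking, without tracking ever having been specified.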
Here it is. Now the object, moving at random, is tracked, of course with some lag and jitter by this creature, which has now had its population selected, as you see here, inside its brain. And of course, you can look inside the machine, see the world, its nervous system, and the actual firings.
It’s a little confusing. That is not the name of the exercise. The name of the exercise is to see what happens, if we get a number of these things happening with re-entry and whether it’ll categorize.
So the next thing you’re going to see is the arm. This is like Woody Allen trying to do a serious movie, you know. Here is Darwin III. Value in this case is that the terminal joint is in the visual fovea, the center of its vision.
Any neuron that happens to be firing there has a chance, over its competition, of being strengthened. And there’s a much more complicated neural system to flail this arm around, with a cerebellum and a motor cortex feeding down to a spine which is moving it. The main point is, this creature, which has seen the world and can track it, is now just learning to grope.
And you’ll see this arm going. What you’re looking at here is the changes in the synaptic strength, not any connection pattern, just the strengths. And you notice it’s a real mess.
Uh, but the arm finally, after it gropes a while, selects against bad movements and begins to settle down, as you’ll see, into that kind of movement. Every time this last joint touches that solid object, these neurons light up because that’s– there, there it is settling down. Now, we built a reflex in at this point that says if you touch anything, just straighten out, no matter what your brain says, listen to your spinal cord, sort of like a praying mantis, um, grabbing prey.
And we did that to simplify the next step, which you’re going to see. So here it is, this touch exploration. This is a heuristic for you.
It reaches over, touches, gets this reflex from the spinal cord, and straightens out, and then it begins to grope. It grabs for things, but it has no instructions. It just prefers pressure to no pressure, up to a point.
It has muscle jitter, and so if that’s the case, it’ll prefer edges over flats. It’ll go to edges of things, and it does that, and while it’s doing that, and you can watch it, I’ll sort of jabber on and try to explain why we do all of this nonsense. Uh, here it is touching.
It’s a fixated object. It’s reaching along. This is rather more uniform than most of the time, but it’ll give you the idea.
The main point is that all of the kinesthetic, or joint, responses are going from this thing to one part of the brain, while the visual responses are going to another part, and the two parts are mapped to each other. So here it is. This is the shoulder joint mapped to this kinesthetic map, to a higher-order map, the visual system to this map, and the two maps have all of this re-entry.
Now, at this point, we built in a higher-order evolutionary value of the kind that anthropologists, or I guess people who study primatology, say apes may have, namely the fear of snakes or something like this. We built in: if you decide, as a creature, that this is a striped, bumpy object, not a background, reject it.
And rejection you’ll see in a moment. So here you’re now looking sophisticatedly at the whole insides, as it were, synopsized here. A smooth object fixated visually being tracked by this straightened arm with touch, sending signals to this map of kinesthesia from the shoulder, and also sending signals to this visual system.
Occasionally, you’ll see the reentrant firing, but most of it is subthreshold. Down here, we built in an oscillator in its spinal cord that says, “If you really do come to a category, flail,” the way I would slap my forehead while talking if a mosquito bit me. So watch this.
Here it is looking at a striped, smooth object: much more visual activity, but plenty in this kinesthesia when it goes around the corners of a bumpy object. And now watch carefully this next one, a striped, bumpy object. And remember it; we’re going to do it again.
Here it is feeling around this object, sending stuff to here, sending visual stuff to here, firing off populations and selecting synaptic strengths with re-entry that you can’t quite see, but watch this now. There. It explodes, it oscillates, and it flails that object away.
Now let’s do this again. Let’s just do this again. We’ve done this about a hundred times, give or take, and no two times does it do it the same way.
Here it is, uh, doing it again. This is a sophisticated one. It’s seen the world.
You can see it’s fixating, jittering on this object. It reaches up, touches, gets a reflex, starts to grope, and now you can see by looking in its brain, at its selective connection strengths, that it has seen the world. It takes a considerably different path until it builds up enough reentrant connectivity in this kind of polymorphous set that connects the two maps, this visual map and this touch map. When it does that again, I think it’s when it gets to these kind of radiator vanes here, it exercises re-entry and again flips.
Could you put the lights up? This is a very primitive form of categorization. And if it does not satisfy you... Could you put the lights on and shut off the film?
Thank you.
(cough)
Um, let me say that current versions can do considerably more than that. But remember, this is a solipsistic creature. We’re not telling it what to do.
We can make versions of it. And indeed, we can do something William James thought of very deeply in his Principles of Psychology. We can attempt to– Do we have that slide thing on?
Um, now I’m going backwards. Here we are. We can do something William James warned against.
We can avoid it, we hope: the psychologist’s fallacy, which is assuming that your point of view is the same as that of the mental object which you are studying. Now, one reason you get into that problem is that you can’t look at someone’s neurons, behavior, and the world all at once, in successive conjuncts.
With these automatons, which are not Turing machines, you can. And here’s an example of machine psychology, done by Olaf Sporns in our institute, where a new creature, born into the world, is flailing away with its arm, and he’s tracking the patterns. This is what happens to its arm movements after it’s had some selection of those repertoires. Well, this is hardly Jascha Heifetz, you know?
I mean, but it’s
(laughter)
uh, it’s okay. And in fact, it sort of makes you think about the problem of individuality, as well as the problem of value, because here’s another piece of machine psychology, where he tests such a creature, without looking at its nervous system, at various stages, with objects that are bumpy and striped and orange, any one of the infinite subsets of the infinite set, and he gets a pattern that looks like this. These are rejected eight out of eight trials.
These are rejected seven out of eight trials. These are never rejected. If he does different exemplars, he gets more or less the same thing for the same genetic constraint, but individually, every animal has its own little pattern.
When we cut the value circuits, which are definite synaptic circuits, the machine never converges. And so we believe, to pay homage to that very great philosopher and psychologist, William James, that we are going to need more and more help in trying to understand this relationship between physics, psychophysics, and the mind, the product of the brain’s activities.
We’re going to need more and more of this modern hoopla, but not as our model, as our tool to help our deficient imagination. And I believe that that’s going to be a very exciting time indeed, and I hope you realize that what I’ve shown you is like the zeroth stage of something which has hardly begun to move. Um, I’m– I’ll end by saying that, of course, different people, like different automata, have different styles.
I used to be a violinist, and I was very anxious when I set up this microphone that the middle C would play, believe me. The story is told of Fritz Kreisler, who never practiced, and Rachmaninoff, who always practiced. Genius comes in two flavors, I guess: those who work and those who don’t.
And they were recording the Grieg Third Sonata in Berlin, and Kreisler went out on the town, and Rachmaninoff was very angry with him because he didn’t practice. The next day he rolled in bleary-eyed and said, “Let’s go,” and he and Rachmaninoff had quite a fight. He won.
They recorded it. It’s beautiful. They were playing it six months later in Carnegie Hall, and in the middle of the second movement, Kreisler forgot.
That’s what happens, you see? So he, being Kreisler, just made up the cadences. He just sort of got into it, you know.
But after about two minutes, he got a little anxious, and he leaned over and he said, “Sergei, where are we?” And Rachmaninoff said, “Carnegie Hall.” So I want you to know where I think we are.
We are just at the beginning of exploring these very difficult questions. I believe that science is imagination in the service of the verifiable truth, and so I think we are going to see an extraordinary period of the efflorescence of neuroscience in this coming century, perhaps even in the next ten years, theoretically speaking. It will be one which will not ignore philosophy, which will not ignore individuality, but which will incorporate into biology some new principles not yet fully understood, and which in the end will bring physics and biology together in a very interesting way. It will take this naive scientific observer of Einstein and Heisenberg and perhaps paint him in a little richer set of colors, and will perhaps keep us from the temptation of thinking that it’s consciousness that collapses the wave function, whatever. We will in fact get a much deeper insight into how we know and what our position in the world is.
And when that time comes, and I think it’s very near to us, in a very exciting period, I believe what’s going to happen is this: this grand edifice of physics, even if it achieves a theory of everything, will have complementary to it, and possibly even in tension with it, a theory of the brain. And that theory of the brain, by marrying physics and brain science, will bring together an extraordinary set of ideas to yield an extraordinary set of children. And I have no doubt, since it’s been demonstrated in the past, that Berkeley is going to be one of the places where they’re spawned. Thanks very much.
(applause and cheering)
[01:02:13] PROFESSOR STANSBURY:
Okay, I’m sure Professor Edelman will take some questions and then we’ll take that forward.
[01:02:20] PROFESSOR GERALD EDELMAN:
Yes, sir.
[01:02:21] AUDIENCE MEMBER:
Um, you’ve used Darwin a lot, but why? After all, if you take every other organ of the body, you have a function for it: heart, lungs, kidneys. What is the function of the brain? I mean, forget about all the details of the neurons and all the other—
[01:02:38] PROFESSOR GERALD EDELMAN:
Sure. Uh, may I repeat your question? Correct me if I do it wrong.
Like Kreisler, I may have been out on the town last night. So, your question is: I’ve used Darwin a lot. Every organ of the body has a function.
The heart beats De Motu Cordis, the kidney makes urine, etc. What is the function of the brain? Well, if you’ll excuse me for a slight flippancy, it’s a bit like the lady who goes to the psychiatrist and he says, “You say you have trouble making up your mind.”
She says, “Yes and no.” The problem is that, of course, the function of the brain depends on which brain and which organism. I can say, for example, that a kidney consists of something which filters the blood and then reabsorbs electrolytes, in a certain way, across a vast variety of species, from sharks all the way on up. It’s rather more difficult to categorize brain function, because it’s dealing with this extraordinary motile behavior in ecological niches, and there’s every indication that different brains do different things.
But I can, with that caveat, say one of the most central features of a brain is to control your glands. It is a remarkable supergland, homeostatic, which controls things like breathing, coughing, the release of sexual hormones, et cetera, et cetera. The second thing a brain does, which I hope you got from Darwin III, that’s terribly important, and not understood, is it, it’s responsible for movement.
And movement is not motion. I mean, it– you can’t belong to the half g t squared school if you study neural movement because movement comes in these extraordinary synergies which are not simply described kinematically by physics. They require a remarkable kind of selection, as I’ve hoped I’ve shown you, although we don’t fully understand it.
That was emphasized by the Soviet neurophysiologist Bernstein, in a brilliant book on the coordination of movement which is available in English. The third thing a brain does, of course, I hope Professor Searle will agree, is think. Now, some brains do and some brains don’t.
(laughter)
And opinions differ on this subject. I once went to a brain conference with Leo Szilard, who was a close friend of mine, and he, uh, had opinions on everything, including the brain. And some poor fellow got up and he said, “I think that I’ve discovered the secret of memory.
When you think, you secrete a protein, you make an antibody-like molecule, and that’s the memory.” And Leo got up and said, “Maybe that’s how your brain works.”
(laughter)
So the problem with the latter, higher-order functions of the brain becomes very, very complex, and I’m afraid I’m going to have to waffle a little. But the first two functions are for sure: to get out of there, to move, and to regulate, enormously in parallel, a set of diverse bodily functions. As evolution went on, of course, with visual evolution, for example, the subtlety of categorization certainly became part of the brain, and eventually, of course, learning and then thinking.
Yes?
[01:05:37] AUDIENCE MEMBER:
What do you think a theory of the observer might look like?
[01:05:42] PROFESSOR GERALD EDELMAN:
Well, if you look at it from the standpoint of physics, it looks pretty much the way a physicist would look at it: something a little white and, how do I say, abstract.
A bit like what Einstein did when he read Mach. But if you look at it from the standpoint of biology, I think what it’s going to look like is an extraordinary population structure. It’s going to look like categories come out of variance the way species come out of variance, and that only when you get social transmission and communication do things settle down into a common mode.
And my guess is that when we finally understand something about the relationship of linguistics to brain structure, we're going to find many-one relations of a very interesting kind, and maybe even a theory of that which is now lacking. Because I believe that when I say red and you say red, the likelihood that we're firing exactly the same set of neurons, even though if you put us in a brain scanner we'd light up in roughly the same area, is pretty slight. So one of the things I think the observer is going to look like is an envelope of selection possibilities.
If that's mapped onto an understanding of how transmission occurs, I think we can then close the loop back to physics. Physics quite necessarily avoids all of this. But I think you will all agree, or those of you who've looked at the problem of quantum measurement will, that if you try to ignore it completely, you get into trouble.
[01:07:08] AUDIENCE MEMBER:
What do you think, uh, the thinking is?
[01:07:12] PROFESSOR GERALD EDELMAN:
What do I think thinking is? I have to think about it.
(laughter)
Yeah.
[01:07:17] AUDIENCE MEMBER:
Can you relate the pre-linguistic categorization of objects to, um-
[01:07:23] PROFESSOR GERALD EDELMAN:
Yeah.
[01:07:24] AUDIENCE MEMBER:
Quine’s theories of language?
[01:07:25] PROFESSOR GERALD EDELMAN:
Yes, yes. And maybe with some help from John Searle? Can I relate them? Well, for me to talk about Quine in front of Searle is a presumption even larger than the issue of the immortality of the soul.
(laughter)
I have to warn you. Let me see. Uh,
(laughter)
my personal belief is that concepts are not, if you wish, formal objects which have to be subject to truth values. That already gets me in trouble with a certain school. I believe that a conceptual system in the brain is the brain's way of mapping its own transactions, much prior to any symbolic transmission of any kind.
When we do get speech, what we certainly must do is have a new form of memory. Memory is a systems property, in my belief, and therefore whatever centers deal with concepts that have already evolved, prefrontal and others, must link up to a new memory system: Broca's and Wernicke's areas. But that's not telling you what things are inside of, on top of, etc.
These are conceptual issues, having to do with concepts rather than with jumping when a flashbulb goes off. If I relate that to what Mr. Quine said, I'm going to get in larger trouble, but if I gather what he's saying: first of all, I believe Quine is a philosophical behaviorist. He doesn't go beneath the skin, unlike my friend John, who gets inside your skin and then rattles it around.
Um, Quine is a man, I think, who just abjures all of that. He takes the position of philosophical behaviorism. He talks about meaning holism and the fact that definition in a theory cannot be precisely specified.
It's the holist attack on so-called analyticity. I don't know how these ideas connect up to Quine's, so I'll have to pass you on to John.
Yeah.
[01:09:10] AUDIENCE MEMBER:
Would you consider Darwin III to be using parallel distributed processing?
[01:09:17] PROFESSOR GERALD EDELMAN:
Yes, would I consider Darwin III to be using parallel distributed processing? For those of you who don't know, parallel distributed processing is a buzzword in so-called neural nets. The problem is that at the early stage of any science, the metaphor is used rather richly, actually, sometimes in the absence of theory.
Parallel distributed processing applies to a kind of modeling which is very removed from what I've shown you, very removed from the nervous system, and in fact involves a kind of matrix called a correlation or convolution matrix, but never mind that. It looks like neurons because you change the values of things, but it has no anatomy, and unlike what I've described today, it fixes the input and the output, and it is no different in its assumptions from artificial intelligence. We have taken the position that this kind of creature, Darwin III, isn't like that at all.
We make no presupposition except how we fix its genetics, and from there on in, it converges. However, it is true that it has parallel systems, that they are distributed, and, if you loosen up the term, that it is sort of processing. But the analogy falls down very sharply when you get into the details and the assumptions.
I'll say it very clearly, and I'm grateful for your question: we do not assume any fixed input/output beforehand. We can't guess or outguess this creature any better than its next buddy can.
If you have a whole population of them, you find they diverge very rapidly within an envelope. The output function is a little more like E.M. Forster's novel, where the lady says, "How do I know what I think until I see what I say?" That's not parallel distributed processing.
In parallel distributed processing, you clamp the output, decide on the input, and let the network relax to a measured interaction between the two. Now, some people speak and work in this very burgeoning field of unsupervised learning, but I believe they've bypassed the fundamental philosophical and scientific problem of categorization, which stands in front of us as the biggest, greatest challenge, I believe, to anybody working whether in linguistics or in neuroscience. Yes, sir?
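The clamp-and-relax scheme Edelman is contrasting with Darwin III can be sketched very loosely in a few lines. This is a hypothetical Hopfield-style toy, not anything from the lecture: the weights, the clamped units, and the update rule are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small symmetric weight matrix with zero diagonal: the kind of
# "correlation matrix" a relaxation (Hopfield-style) network uses.
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

state = rng.choice([-1.0, 1.0], size=n)
clamped = [0, 1]          # units held fixed: the "decided" input
state[clamped] = 1.0

# Asynchronous relaxation: the free units settle into a local energy
# minimum while the clamped units never change. This is the fixed
# input/output assumption Edelman objects to.
for _ in range(50):
    for i in range(n):
        if i in clamped:
            continue
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

energy = -0.5 * state @ W @ state
```

The point of the sketch is the contrast: here the modeler decides in advance which units carry input and output, whereas in the selectionist picture no input/output mapping is fixed beforehand.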
[01:11:35] AUDIENCE MEMBER:
In the neural Darwinist model, the neuronal groups are your units of selection. The function of natural selection is being played by the proclivity for mutual stimulation, then. Is that right?
[01:11:47] PROFESSOR GERALD EDELMAN:
Uh, no. You’ve replaced sex and procreation. They really differ.
Uh, let’s see. I’m not gonna get into this in Berkeley. I– Let’s see what I can say about this.
Your question was that in neural Darwinism it's groups which are the unit of selection, and it's the amplification of the connections. If I could just fix up a little what you said in the second part: well, in Darwinian evolution-
[01:12:08] AUDIENCE MEMBER:
No, no, no. I, I’m asking you to clarify what cer- what plays the role of, of, of natural selection.
[01:12:13] PROFESSOR GERALD EDELMAN:
Okay, what determines it? That's what I was about to get to, because I think the way you said it is not quite on.
I’m not sure. Let’s try it. In natural selection, you get differential reproduction.
Okay? You have variation in a population, and natural selection is differential reproduction. It is the differential survival over generations, even with the slightest difference in fitness, of the progeny of a particular set of genes that happen to match, like the lady in the supermarket, or V4 to Area 4.
In our model, procreation and replication are not at stake. Differential amplification occurs at the synapses. So what's differential here is the synaptic strength rather than the replication of units.
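The distinction can be sketched in a toy model. This is a hypothetical illustration, not Edelman's actual model: the population of "groups," their tuning values, the signal, and the amplification rule are all invented for the sketch. The one faithful feature is that nothing ever replicates; only synaptic strengths change.

```python
import random

random.seed(1)

# A fixed population of pre-wired "neuronal groups." Selection here
# amplifies synaptic strengths; it never copies units, unlike natural
# selection, where differential reproduction does the work.
groups = [{"tuning": random.uniform(0.0, 1.0), "strength": 1.0}
          for _ in range(20)]

def respond(group, signal):
    # Response is stronger the closer the group's tuning is to the signal.
    return group["strength"] * (1.0 - abs(group["tuning"] - signal))

signal = 0.7
for _ in range(100):
    for g in groups:
        # Differential amplification: groups whose activity correlates
        # with the signal get their synaptic strength increased.
        g["strength"] *= 1.0 + 0.05 * (1.0 - abs(g["tuning"] - signal))

best = max(groups, key=lambda g: respond(g, signal))
```

After amplification, the best-matched groups dominate the response, yet the population itself is unchanged in size and membership, which is the sense in which synaptic strength, not replication, is what is differential.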
[01:12:56] AUDIENCE MEMBER:
And it's the coincidence of different groups stimulating each other through reentry.
[01:13:00] PROFESSOR GERALD EDELMAN:
Precisely. Excellent. I should have said that, in fact. It is the coincidence and the correlation across maps of particular groups that bear this many-one relationship to a given signal. That kind of thing. Thank you. Yes, sir?
[01:13:14] AUDIENCE MEMBER:
Why is it that your Darwin is not a Turing machine?
[01:13:18] PROFESSOR GERALD EDELMAN:
Oh, it isn’t. It’s not a Turing machine because it is sort of a physical system, so–
[01:13:21] AUDIENCE MEMBER:
If it’s not a Turing machine, is it still a computational device of some sort?
[01:13:24] PROFESSOR GERALD EDELMAN:
It’s, uh… could you say the last thing? I didn’t hear.
[01:13:26] AUDIENCE MEMBER:
If it, if you still insist it’s not a Turing machine, is it still a computational device of some sort?
[01:13:31] PROFESSOR GERALD EDELMAN:
Well, the word computation is used sort of like the word love in this day and age. I won't get into that, but your question was: why is it not a Turing machine, and if I insist it's not, why isn't it just another computational device?
Let me just brush aside the last one and say: if you want to call it a computational device, that's fine with me, provided you relax the stringency that says all computational devices have to be Turing machines. The issue therefore becomes why I do not consider it a Turing machine. For a Turing machine, I need to have an effective procedure.
In adaptive systems based on population sorting of the kind we just talked about, there is none. For example, I challenge you to tell me that evolution is a Turing machine. After they waste all our money on the Human Genome Project, thus making it even harder to get grants when you’re
(laughter)
a decent biologist trying to make a living.
(laughter)
Um, what happens if I give you a newborn baby's genome and ask you to tell me, a hundred years from now, what its nephews are going to look like? The answer to that is like the two old Jewish men who were drinking tea, and one says, "You know, Max, life is like a glass of tea." And the other one says, "Why?"
He says, “How do I know? Am I a philosopher?”
(laughter)
So I believe I've set the challenge for you, just the way my friend John does: you show me how evolution can be a Turing machine. I have one domain in which stochastic variation is occurring to an extraordinary degree, constrained by the laws of physics but far from equilibrium.
Another domain with these genes rattling around in a population, and I don’t know what’s going to map onto what. So exactly what do I specify for my computer?
[01:15:14] AUDIENCE MEMBER:
Yes. This should be the last question.
[01:15:16] PROFESSOR GERALD EDELMAN:
This should be the last question and the last answer.
(laughter)
Yes, ma’am.
[01:15:20] AUDIENCE MEMBER:
How do we relate mind and brain? Do you, do you, do you identify the dance of the neurons with the mind?
[01:15:28] PROFESSOR GERALD EDELMAN:
No. Uh, the question was: how do we relate mind and brain, and do I identify the dance of the neurons with the mind? No, ma'am. That is a real hard last question, all right?
(laughter)
Um, the easy answer is no, I do not. I do not believe an identity theory will account for the situation. That is not, however, to say that I believe that there’s some seventh astral plane, that there’s some dualism, that there’s…
What I do mean is that there's a very complex set of levels going through the social to communication and language, so extraordinary that that is what our challenge is going to be. When we see that set of levels going all the way from automata like ourselves, which are no longer limited in the same way because of our higher-order consciousness, communicating with each other through symbols, then we will know that relationship. But it cannot come down to the dance of the neurons, because of the many-one relations that I've shown you, which occupy even simple-minded creatures like us.
But there’s an implication. I- yes.
Well, would you tell me what it is? It has an implication, huh?
[01:16:36] AUDIENCE MEMBER:
No, there's an im- an implicate order that goes beyond what we're talking about.
[01:16:44] PROFESSOR GERALD EDELMAN:
Well, it sounds like David Bohm, but I'm not sure whether there's an implicate order. What I am sure of is that it won't be a simple method, and it won't be something you can write down in an algorithm.
[01:16:56] PROFESSOR STANSBURY:
Yeah. Thank you.
(applause)