Thinking about the brain
Think for a few moments about a very special machine, your brain - an organ of just 1.2 kg, containing one hundred billion nerve cells, none of which alone has any idea who or what you are. In fact, the very idea that a cell can have an idea seems silly. A single cell, after all, is far too simple an entity. However, conscious awareness of one’s self comes from just that: nerve cells communicating with one another by a hundred trillion interconnections. When you think about it, this is a deeply puzzling fact of life. It may not be entirely unreasonable, therefore, to suppose that such a machine must be endowed with miraculous properties. But while the world is full of mystery, science has no place for miracles, and the 21st century’s most challenging scientific problem is nothing short of explaining how the brain works in purely material terms.
Thinking about your brain is itself something of a conundrum because you can only think about your brain with your brain. You’ll appreciate the curious circularity of this riddle if you consider the consequence of concluding, as you might, that your brain is the most exquisitely complex and extraordinary machine in the known universe. Clearly, this is, and may be nothing more than, the opinion of your brain about itself: the brain’s way of thinking about the brain. So it seems we are caught in the logical paradox of a self-referencing, and in this case also a self-obsessed, system. Perhaps the only reliable conclusion from this thought experiment is that the brain is about as conceited as it is possible to be!
Notwithstanding the brain’s well-developed personal vanity, we must grant that it provides you with some very distinctive abilities. It operates in the background of your every action, sensation, and thought. It allows you to reflect vividly on the past, to make informed judgments about the present, and to plan rational courses of action into the future. It endows you with the seemingly effortless ability to form pictures in your mind, to perceive music in noise, to dream, to dance, to fall in love, cry, and laugh. Perhaps most remarkable of all, however, is the brain’s ability to generate conscious awareness, which convinces you that you are free to choose what you will do next.
We have no idea how consciousness arises from a physical machine, and in trying to understand how the brain does that we may well be up against the most awkward of scientific challenges. That is not to say that the problem cannot in principle be solved, just that the brain is a finite machine and presumably has a finite capacity for understanding. But what are the limits of its intellectual capacity and, at that limit, might we still be asking unanswerable questions about the brain? Neuroscientists accept that they are faced with an awesome challenge. The accelerating pace of discovery in neuroscience, however, suggests that we are still a long way from any theoretical upper limit on our capacity for understanding, if such a limit exists. So rather than despairing of the limitations of the human intellect, we should be optimistic in our striving for a complete physical understanding of the brain and of its most puzzling of properties – consciousness and the sensation of free will.
Although we have barely started this text, we have already made a fundamental conceptual error in the way we have referred to ‘the brain’. The brain is not an independent agent, residing in splendid and lofty superiority in our skulls. Rather it is part of an extended system reaching out to permeate, influence, and be influenced by, every corner and extremity of your body. Through the spinal cord, your brain extends the length of the backbone, periodically sprouting nerves that convey information to and from every part of you. Practically nothing is out of its reach. Every breath you take, every beat of your heart, your every emotion, every movement, including involuntary ones such as the bristling of the hairs on the back of your neck and the movement of food through your guts – all of these are controlled directly or indirectly by the action of the nervous system, of which the brain is the ultimate part.
From this perspective the brain is not simply a centre for issuing instructions; it is itself bombarded by a constant barrage of information flowing in from our bodies and the outside world. Specialized cells called sensory receptor neurons feed information via sensory nerves into the nervous system, providing the brain with real-time data on both the internal state of the body and the outside world. Furthermore, information flowing into and out of the brain is carried not only by nerve cells. About 20 per cent of the volume of the brain is occupied by blood vessels, which supply the oxygen and glucose for the brain’s exceptionally high energy demand. The blood supply provides an alternative communication channel between the brain and the body, and between the body and the brain. Endocrine glands throughout the body release hormones into the blood stream. These hormones inform the brain about the state of bodily functions, whilst the brain deposits hormonal instructions into its blood supply for distribution globally to the rest of the body.
So when we say the brain does x or y, the word ‘brain’ is a shorthand for all of the interdependent interactive processes of a complex dynamical system consisting of the brain, the body, and the outside world. The human brain is a highly evolved and stupendously complex ‘machine’ that is often compared to the most complex of man-made machines, digital computers. But brains and computers differ fundamentally. The brain is an evolved biological entity made from materials such as small organic molecules, proteins, lipids, and carbohydrates, a few trace elements, and quite a lot of salty water. A modern computer is built with electronic components and switches made from silicon, metal, and plastic. Does it matter what a machine is made of? For computers the answer is no – computer operations are ‘medium independent’. That is to say, any computation can in principle be performed in any medium, using components made from any suitable material. Thus cogs and levers, hydraulics, or for that matter optical devices could replace the electronics of a modern computer, without affecting (except in terms of speed and convenience) the machine’s ability to compute.
It seems extraordinarily unlikely either that the brain is simply performing computational algorithms or that thinking could equally well be achieved with cogs and levers as with nerve cells. So perhaps we cannot expect computers to perform like brains unless we find a way to build them in a biological medium.
From marks to meaning
To gain an insight into questions about the brain that must be answered, and to set the stage for later chapters, I will now briefly examine the activity of the brain in the context of a familiar act of everyday life. Let us consider the behaviour in which you are currently engaged – namely, reading these words. What exactly is your brain doing right now? What kind of behaviour is reading and what must the brain do in order to achieve it?
Obviously, the brain must first learn how to read and equally obviously reading is a means of learning and engages our imagination. Reading also demands concentration and attention. Therefore as you read these words your brain must direct your attention away from the many potential distractions that are constantly in the background, all around you. You need not worry however because, without bothering your conscious awareness, your brain is keeping a watchful ‘eye’ on external events. It can at any moment redirect your attention away from this page and towards something more important. Your attention can also be distracted by events internal to the brain, the various thoughts that constantly pass through it and compete for your consciousness.
Reading, when reduced to the rather prosaic level of motor actions, depends on the brain’s ability to orchestrate a series of eye movements. Now, as you read these words, your brain is commanding your eyes to make small but very rapid (about 500° per second) left-to-right movements called saccades (right-to-left or up-and-down for some other written languages). You are not consciously aware of it, but these rapid movements are frequently interrupted by brief periods when the eyes are fixed in position.
Watch someone reading and you will see exactly what I mean. You’ll notice that the eyes do not sweep smoothly along the line of text, rather they dart from one fixation to another. It is only during the fixations, when the eyes dwell for about a fifth of a second, that the brain is able to examine the text in detail. Reading is not possible during the darting saccadic movements because the eyes are moving too quickly across the page. You are not aware of the blur and confusion during a saccade because fortunately there is a brain mechanism that suppresses vision and protects you from visual overload.
Reading is only possible between saccades, not only because the eyes are then stationary but also because gaze is centred on the retina’s fovea. The fovea is the only part of the retina specialized for high acuity vision (see Chapter 5), but it scrutinizes a very small area of our visual world. As a literal rule of thumb, foveal vision is restricted approximately to the area of your vision covered by your thumbnail held at arm’s length. It is a small window of clear vision within which you are able to decipher just 7 or 8 letters of normal print size at a time. The task for the brain is to generate a precise series of motor commands to the eye muscles which ensure that at the end of each saccade your high acuity vision is fixed on that part of the text you need to see most clearly next. As your eyes approach the end of a line, the brain generates a carriage return. Of course, the return saccade must be to the left, of the correct magnitude and associated with a slight downward shift in gaze in order to bring the first word on the next line onto the fovea.
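The figures just given are enough for a back-of-envelope estimate of reading speed. Here is a minimal sketch in Python, assuming a saccade of about 30 milliseconds and an average word of six characters including its trailing space (both figures are my assumptions, not taken from the text):

```python
# Rough estimate of reading speed from the eye-movement parameters
# described above. All numbers are approximate assumptions.

FIXATION_S = 0.2             # each fixation lasts about a fifth of a second
SACCADE_S = 0.03             # a small reading saccade: ~30 ms (assumed)
LETTERS_PER_FIXATION = 7.5   # 7 or 8 letters decipherable per fixation
CHARS_PER_WORD = 6           # average word plus trailing space (assumed)

def estimated_wpm():
    cycle = FIXATION_S + SACCADE_S          # one fixate-then-jump cycle
    fixations_per_min = 60.0 / cycle
    letters_per_min = fixations_per_min * LETTERS_PER_FIXATION
    return letters_per_min / CHARS_PER_WORD

print(round(estimated_wpm()))  # roughly 300 words per minute
```

Under these assumptions the arithmetic lands at roughly 300 words per minute, close to typical adult reading rates – which suggests it is fixation time, not the speed of the saccades themselves, that limits how fast we can read.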
I have considered only the simple case of the brain directing eye movements alone as if nothing else affects gaze direction. But of course, the relative positions of the eye and page are affected continuously by head, body, and book motion. Thus the brain must continually monitor and anticipate factors affecting the future position of your eyes relative to the text. The fact that you can effortlessly read on a moving train while eating a sandwich is evidence that your brain can solve this problem quite easily.
Importantly, it is done automatically and on an unconscious level, without you having to think through every step. If you had to consciously think about the mechanical process of reading, you would be illiterate! Our lack of conscious awareness of underlying brain processes can also be illustrated by reflecting on the subjective experience that the comprehension of written material represents. While reading we are not conscious of the fragmented nature of comprehension imposed by the underlying move-stop-move-stop activity of the eyes I’ve just described, or by the fact that only 7 or 8 letters can be deciphered at each stop. On the contrary, our strong subjective impression is that comprehension of the text flows uninterrupted and moreover that we can read several words or even whole sentences ‘at a glance’. That this is not the case can be illustrated by reading a sentence containing a word that has more than one meaning and pronunciation. For example, the word tear has two very different meanings and pronunciations in English – tear the noun of crying and tear the verb of ripping apart. Clearly, such word ambiguity complicates the brain’s task of providing you with uninterrupted comprehension. If, for instance, the word tear occurred at the beginning of a sentence, its meaning might remain ambiguous until the subject of the sentence appears later. Because you cannot read the whole sentence at a glance, your brain may be left with no option but to choose one of the alternative meanings (or sounds, if you are reading aloud) of a word and hope for the best.
While we cannot read whole sentences at a glance, the brain does recognize each word as a whole. What is quite surprising however is that the order of the letters is not particularly important (good news for poor spellers). That is why you will be able to read the following passage without consciously having to decode it.
We will now consider how and in what form textual information at the gaze point enters the brain. Light-sensitive cells called photoreceptors capture light focused as two slightly different images on the left and right retinae. The photoreceptors undertake a fundamental and remarkable transformation of energy, a transformation that must occur for all of our senses. This process is known as sensory transduction and always involves converting the energy in the sensory stimulus, in this case, light energy, into an electrical signal. This is because the nervous system cannot use light or sound or touch or smell directly as a currency of information transmission. In the brain electricity is the critically important currency of information flow.
The brain interprets or decodes electrical signals according to their address and destination. We see electrical signals coming from the eyes, hear electrical signals coming from the ears, and feel electrical signals coming from touch-sensitive cells in the skin. You can demonstrate the importance of signal origin by pressing very gently with your little finger into the corner of your closed right eye, next to your nose. The touch pressure will locally distort the retina and produce an electrical signal that will be transmitted to the brain.
Your brain will ‘see’ a small spot of light in the visual field caused by touch. Notice that the light appears to be coming from the peripheral visual field somewhere off to the right; a moment’s thought should tell you why this is so. The photoreceptor cells of the retina are not connected directly to the brain. They communicate with a network of retinal neurons through a mechanism that couples the fluctuating electrical signal in the photoreceptor to the release of a variety of chemicals known as neurotransmitters. In their turn, neurotransmitters convey signals from one neuron to another by generating or suppressing electrical signals in neighbouring neurons that are specifically sensitive to particular neurotransmitters. This transformation of an electrical into a chemical signal occurs mostly at specialized sites called chemical synapses. Electrical signals can also pass directly between neurons at sites known as electrical synapses. Thus, through a combination of direct electrical transmission between neurons and the release of chemical messengers, information about the visual image captured by the eyes is processed in the retina before being conveyed by the optic nerve to the brain.
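The eye-press demonstration illustrates what neuroscientists call the ‘labelled line’ principle: impulses are interpreted by where they come from, not by what they contain. A toy sketch in Python (the mapping and names are illustrative, not anatomical):

```python
# 'Labelled line' principle: identical electrical impulses are
# interpreted by their source pathway, not by their content.
PERCEPT_BY_SOURCE = {
    "optic nerve": "light",
    "auditory nerve": "sound",
    "skin afferent": "touch",
}

def perceive(source, impulse="spike train"):
    # The impulse itself carries no modality; the pathway does.
    return PERCEPT_BY_SOURCE.get(source, "unknown sensation")

# Pressing the eye triggers the optic nerve, so the brain reports light,
# even though the stimulus was mechanical touch.
print(perceive("optic nerve"))  # prints "light"
```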
There are about one million output neurons in the retina, known as retinal ganglion cells, and each one extends a long, slender fibre or axon in the optic nerve. Axons are specialized for the high-speed (up to 120m/second), long-distance, and faithful transmission of brief electrical impulses. Impulses travelling along the axons of the retinal ganglion cells in the optic nerve reach the first neurons in the brain about 35 thousandths of a second after the capture of photons by the photoreceptors. In the brain, the axons of the retinal ganglion cells terminate and form synapses with a variety of other neurons which in turn interconnect with many others, a process which results finally in the conscious awareness of a vivid picture in your mind of what your eyes are looking at.
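These figures invite a rough latency budget. Assuming an optic nerve of about 5 centimetres and a conduction speed of 20 metres per second (both assumed for illustration; the text gives only the 120 m/second upper bound and the 35-millisecond total), the axon itself accounts for only a few milliseconds:

```python
# Rough latency budget for the retina-to-brain pathway described above.
# Distances and speeds are illustrative assumptions, not measured values.

TOTAL_LATENCY_S = 0.035     # ~35 ms from photon capture to first brain neurons
OPTIC_NERVE_M = 0.05        # optic nerve length, roughly 5 cm (assumed)
CONDUCTION_M_PER_S = 20.0   # assumed impulse speed; axons can reach 120 m/s

conduction_s = OPTIC_NERVE_M / CONDUCTION_M_PER_S   # time on the axon
other_s = TOTAL_LATENCY_S - conduction_s            # transduction + retinal processing

print(f"conduction: {conduction_s * 1000:.1f} ms")            # 2.5 ms
print(f"retinal processing etc.: {other_s * 1000:.1f} ms")    # 32.5 ms
```

Under these assumptions most of the 35 milliseconds is spent in phototransduction and retinal processing, not in transmission along the optic nerve.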
Somehow this astonishing electrochemical process that involves no conscious effort whatsoever produces meaningful pictures in your mind – close your eyes and the picture goes away, open them and it appears to you apparently instantaneously and effortlessly. Truly amazing!
Reading does not come naturally; it is a difficult skill that must be acquired painfully. Once learnt however it is rarely, if ever, forgotten – thankfully. So we do not have to worry about forgetting how to read because the skill is robustly established in our long-term memory banks. Although the enabling skill of reading is retained in permanent memory, an entirely different type of memory is required during the active process of reading itself. While reading, we must retain a short-term ‘working memory’ for what has just been read. Some of the information acquired while reading may be committed to long-term memory but much is remembered for just long enough to enable you to understand the text. Memories must somehow be represented physically in the brain. Brain chemistry and structure are altered by experience, and the stability of these physicochemical changes presumably corresponds to the retention duration of memory. So what exactly is a memory? What kind of physical trace is left in the brain after we have learnt some new skill or fact? What is forgetting and why are some memories quickly forgotten and others never? These are questions to which I shall return later.
Finally, we must consider one of the most elusive of problems. While accepting that everything that the brain does depends on lawful processes occurring within and between the brain’s cells, how can we explain how ‘meanings’ arise in our minds while reading words? How do marks on paper become images in the mind, how do they make you think? How can any of this be explained completely by the responses of individual brain cells and interactions between them? Consider for example what happens when I recognize the word banana. I instantly call up an image of a yellow, curved object about 20 cm in length, 4 cm in girth, that is edible and incidentally delicious. We might propose that there is a single neuron in my brain that responds when I read ‘banana’ and triggers all of the remembered associated thoughts. Maybe this is the same neuron that responds when I see a real banana.
According to this logic, a different neuron fires when I look at an apple, and another recognizes my grandmother. While it is true that neurons can respond rather specifically to particular stimuli, most neuroscientists believe that there can be no one-to-one correspondence between the response of an individual neuron and a perception. Surely a separate neuron cannot detect and represent every object and percept? After all, in order to know that that object is a banana, information about shape, size, texture, and colour must somehow be bound together with stored knowledge about fruit, my appetite, and so on. These processes are associated with different networks of neurons in different parts of the brain and there is no known way they could all converge on a single neuron which when activated could trigger ‘aha, a banana for lunch’. Another way to think about the relationship between the activities of neurons and a perception is to consider how assemblies of nerve cells in different parts of the brain cooperate with one another in parallel. Having said that, we are far from understanding how objects, meanings, and perceptions are encoded in the brain by the activities of neurons. This is one of the most intriguing of problems in neuroscience. While the notion that there is a separate nerve cell in the brain for each object, meaning, and perception (parodied by the term ‘Grandmother cell’) has been roundly rejected, there is a lasting appeal in this simple idea. Indeed, provocative research published at the time of writing offers a fresh perspective on the way objects are represented in the brain. It suggests that the idea there may be a neuron in your brain that only recognizes your grandmother deserves some serious reconsideration.
Starting with a historical perspective on the development of modern brain science, I go on to describe the electrical and chemical signalling mechanisms that underlie all mental functions, how nervous systems evolved, how the brain gives rise to sensations and perceptions, how memories are formed, and what can be done when the brain is damaged. The potential for interfacing the brain with computers is discussed, as is the contribution of neuroscience to developments in robotics and artificial nervous systems.
From humors to cells: components of mind
The widespread occurrence of the ‘surgical’ technique of trepanation, the removal of parts of skull to expose the brain, in early civilizations suggests that ancient cultures recognized the brain as a critical organ. This is not to suggest that a link between the brain and the mind has its roots in prehistory. In fact the long history of neuroscience prior to the scientific period suggests that it is not at all self-evident that mental functions must necessarily be attributed to the brain. The Egyptians for instance clearly did not hold the brain in particularly high esteem since in the process of mummification it was scooped out and discarded (a practice that stopped around the end of the 2nd century ad). To the ancient Egyptians, it was the heart that was credited with intelligence and thought - probably for this reason it was carefully preserved when mummifying the deceased.
Although Hippocrates (460-370 bc) is usually credited with being the first in the West to argue that the brain is the most important organ for sensation, thought, emotion, and cognition, he was not the first Greek to consider the question of physical embodiment of mind. Prior to the Hippocratic revolution, Pythagoras (582-500 bc) believed that matter and mind are connected somehow and that the mind is attuned to the laws of mathematics. It was probably of little interest to Pythagoras whether mind and matter were connected in the brain or, as the Egyptians and the Greeks prior to 500 bc believed, in the heart.
Alcmaeon of Croton (b. 535 bc), himself a follower of Pythagoras, is among the first to have realized that the brain is the likely centre of the intellect. He is also the first known to have conducted human dissections and in doing so he noticed that the eye is connected to the brain by what we now know is the optic nerve. It was on the basis of his direct observations that Alcmaeon astutely speculated, a century before Hippocrates came to a similar conclusion, that the brain was the centre of mental activity. Hippocrates went further than this however and elaborated a theory of four humors that together were responsible for the temperament. Thus, according to Hippocrates, the four determinants of temperament were black bile (melancholy), yellow bile (irascibility), phlegm (equanimity and sluggishness), and blood (sanguine: passion and cheerfulness). To us the humoral theory seems implausible, puzzling, and arbitrary. It seems to have been inspired, not by the evidence of observation, but by the need to conform with the equally unlikely postulates of contemporary Greek natural law, namely that there are four elements: earth, air, water, and fire.
The influence of Hippocrates was to be profound and remarkably long lasting. Some 400 years after Hippocrates died, Claudius Galenus of Pergamum (ad 131-201), better known as Galen, became the most influential physician of his time, in part by building his own theory on the humoral conjectures of Hippocrates. Galen was unusually well informed on the internal anatomy of the human body, an intimate understanding of which he gained while he was physician at a school for gladiators. However, although we can be grateful to him for perpetuating the idea that the brain is the seat of the mind, he continued the Hippocratic tradition of disregarding the importance of the solid tissues of the brain for mental functions. Instead Galen associated the presence in the brain of three fluid filled cavities, or ventricles, with the tripartite division of mental faculties - the rational soul - into imagination, reason, and memory. According to Galen, the brain’s primary function is to distribute vital fluid from the ventricles through the nerves to the muscles and organs, thereby somehow controlling bodily activity. Precisely how the brain’s ventricles were supposed to regulate the three cognitive functions is not explained, unsurprisingly.
Galen’s positive contribution to medical knowledge is undeniable, but many of his ideas were seriously flawed. This would not have mattered too much were it not for the fact that, after he died, Galen’s authority dominated and therefore hampered medical science and practice for some 1,400 years. A particularly interesting example of his influence can be seen in the early anatomical drawings of Leonardo da Vinci (1452-1519). In one drawing of the head, the brain is depicted crudely consisting of three simple cavities labeled O, M, and N. Leonardo interpreted the anatomical division in functional terms in a way that would have been immediately recognizable to Galen in the 2nd century. Later Leonardo was to make some of the most important observations on the brain and its ventricles. He can be credited with the first recorded use of solidifying wax injection to make castings to study the internal cavities of the brain and other organs, including the heart. Using this method, Leonardo accurately determined the shape and extent of the brain’s cavities, but he clearly continued to place a Galenical interpretation on the fluid-filled structures. For instance the lateral ventricles carry the word imprensiva (perceptual) in Leonardo’s hand, the third ventricle is labeled sensus communis, and the fourth ventricle, memoria. Leonardo’s use of wax injections represented a scientific advance of enormous potential and importance. Unfortunately, the dominance of Galen’s conjectures about the functions of the ventricles diverted his attention from the solid tissue of the brain, the true seat of the mind.
Ideas about brain function and mechanisms continued to be strongly influenced by theories involving the flow and distillation of vital fluids, spirits, and humors well into the 17th century. Indeed the influence of Hippocrates and Galen can be seen in the hydraulic model of the brain formulated by the most famous 17th-century French philosopher, René Descartes (1596-1650). Descartes however reformulated the humor-based description of the brain’s functioning and expressed it in contemporary terms by comparing the brain to the working of intricate machines of his time, such as clocks and moving statues, the movements of which were controlled by hydraulic systems. Importantly he departed from the classical tradition of locating cognitive processes exclusively in the brain’s fluid-filled ventricles, but he nonetheless still referred to the flow of spirits through nerves and made no attempt to assign functions to specific brain structures, with the notable exception of the pineal gland. The pineal, because it was a unitary and central structure, was supposed to be the link with the singular soul but was also given executive control, directing the flow of animal spirits through the brain.
In one important respect Descartes was breaking new ground. By comparing the workings of the brain with that of complex hydraulic machines, he was regarding the most technologically advanced artifacts of his day as templates for understanding the brain. This is a tradition that persists today; when we refer to computers and computational operations as models of how the brain acquires, processes, and stores information, for example. So while Descartes was hopelessly wrong in detail, he was adopting a modern style of reasoning.
Perhaps it is not surprising that theories involving the solid tissues of the brain were difficult to conceive – after all, the brain’s solid substance has no visible moving parts. By the 17th century, however, the grip of humoral theory was weakening, in part due to the works of a new generation of anatomists who were describing the internal structure of the brain with increasing accuracy. Notably, the Englishman Thomas Willis (1621-1695), who coined the term ‘neurology’, argued that solid cerebral tissue has important functions. He still held that fluid-flow was the key to understanding brain function, but his focus was on the solid cerebral tissues and he showed that nervous function depends on the flow of blood to them. Today’s functional brain imaging technique (fMRI) shows that small local increases in blood flow are associated with the activation of nerve cells. That there is in effect a local ‘blushing’ of the brain when the neurons in that region are active is an observation that Willis might well have expected and enjoyed.
Among the more obvious problems of vital fluid and hydraulic models of nervous system function, and no doubt known to Willis, is that nerves are not hollow conduits. And even if they were, the speed of fluid movement through them could hardly be sufficiently swift for the rapidity with which sensations and motor commands seem to be conveyed by nerves. These and other inconsistencies with fluid models of the nervous system must have worried physicians of the stature of Willis and given them pause for thought. But Willis remained a fluid theorist, and the beginning of the end for the fruitless elaboration of such theories did not come until the discoveries attributed to Luigi Galvani (1737-1798). In the late 18th century he discovered the importance of electricity to the operation of the nervous system. As electrical mechanisms were to provide the necessary speed, attention inevitably turned from fluid to electrical models. Ironically, the last gasp of the legacy of Hippocrates and Galen is to be found in the interpretation Galvani himself placed on his own experiments with ‘animal electricity’. Having demonstrated that he could control the contractions of a frog muscle by applying electrical currents to the muscle’s motor nerve, Galvani claimed to have discovered that animal nerves and muscles contain an ‘electric fluid’. A decisive leap of understanding, however, was achieved when Galvani and his contemporary Alessandro Volta (1745-1827) together linked electricity to the functions of the nervous system.
What neither Galvani nor Volta could know, however, is that the externally applied electrical stimuli were activating biological processes causing high-speed electrical impulses to travel along nerves to muscles, resulting in their contraction. It was not until the middle of the 19th century that the ability of nerves and muscles to generate rapidly propagating electrical impulses was confirmed by the German physiologist Du Bois-Reymond (1818-1896). This was a major impetus to the study of the physical workings of the brain and set the stage for the modern scientific era, which was launched in a most spectacular way at the dawn of the 20th century by the recognition of the cellular nature of the brain’s tiny functional units – the neurons.
The true cellular nature of the brain and of its mental functions was first recognized by the father of modern neuroscience, the Spanish neuroanatomist Santiago Ramon y Cajal (1852-1934). Although his proposition that the brain is a cellular machine may today seem commonplace, in fact it was revolutionary. In the later 19th century, and indeed in the early years of the 20th century, most neuroanatomists believed that the brain was not composed of cells at all – in spite of a universal recognition that all other organs and tissues in our bodies were. What was it about the brain that made it so difficult to see its cellular composition under the microscope?
Part of the answer is that brain cells are quite unlike any other cells. The very term ‘cell’ implies uniformity; simple structures defined by clear boundaries.
In contrast neurons are hugely diverse in morphology. They have exceedingly fine and profusely branched processes ramifying from the cell’s body and intermingling among the branches of other neurons. The complexity and diversity of their physical appearance easily exceeds that of all other cell types found in any other part of the body. All of this contributed to a rather confusing picture which anatomists found difficult to reconcile with a simple cellular model of brain structure. When viewed through a microscope the brain appeared to consist of a hopelessly tangled morass (a reticulum), without the distinct cell-defining boundaries that are so evident in other tissues. It was therefore not surprising perhaps that cell theory, the idea that tissues are composed of cells, was thought not to apply to the brain and a radical alternative model was proposed. This came to be called the ‘reticular theory’ of brain anatomy - a surprisingly resilient interpretation that persisted well into the 20th century. The reticular theory was wrong, but that was not the only problem with it. Scientific theories are allowed to be wrong so long as they are helpful, but the reticular theory, which held that the brain contains no discrete components, was actually obstructive to scientific progress. Progress was hindered by the concept of a machine without discrete functional components because without them it is impossible to formulate a plausible mechanism to explain how the brain might work. Scientists were sure the brain machine must have components and, given the complexity of what the brain does, lots of them. But what were they, what did they look like, and what did they do? It was clear that to understand the brain science had to identify the functional components of the brain’s microscopic structure.
Towards the end of the 19th century, the Italian anatomist Camillo Golgi (1843-1926) developed a way of highlighting the morphology of very few neurons in any particular region of the brain. It was a staining method that fitted the bill because it allowed individual neurons to be viewed unobstructed by the tangled mass of branched processes of neighboring cells. It incorporated the chemistry of photographic processing and it revealed individual neurons as dark, silver-impregnated silhouettes. Paradoxically, the crucial feature of Golgi’s method was that it hardly ever worked! Only about one in a thousand neurons was ever revealed, and these were scattered more or less randomly throughout the brain tissue. It was precisely because of this uncertain aspect of the method that neurons could for the first time be seen in their entirety, disentangled from their neighbors. Immediately it was apparent that there are discrete cells in the brain, but they are astonishing cells - unlike any others. They differed markedly from one another, in particular with respect to the complex patterns of their numerous branched processes. Golgi’s method was the key to a new set of scientifically testable ideas about how the brain works. The reticular theory was about to be replaced by a far more powerful one called the neuron doctrine, the idea that the brain is composed of discrete cellular components.
The neuron doctrine is rightly attributed to Ramon y Cajal who, with the help of Golgi’s new staining method, made two profound propositions. The first quite simply is that the neuron is a cell. You might think that this must have been self-evident to anyone who bothered to view a brain treated with Golgi’s method. After all, cells in the brain would be clearly visible and thus by the evidence of one’s own eyes the reticular theory must be wrong. Somewhat astonishingly, however, in spite of the images provided by his own technique, even Camillo Golgi remained a convinced reticularist. The second of Cajal’s propositions was brilliantly insightful: neurons are structurally polarized with respect to function. For the first time, the workings of the brain were explicitly associated with the functions of physical structures at a microscopic level. Cajal concluded that a neuron’s function must be concerned with the movement and processing of information in the brain. He could only guess about the form in which information might be encoded or how it might move from place to place. In a stroke of genius, however, he postulated that it would be sensible for the components of function to impose directionality on information flow (or streaming as he called it). So he proposed that information flows in one direction, from an input region to an output region. The neuron’s cell body and its shorter processes, known as dendrites, perform input functions. Information then travels along the longest extension from the cell body, known as the axon, to the output region - the terminals of the axon and its branches that contact the input dendrites and cell body of another neuron.
Cajal was fascinated by the differences between the brains of markedly different organisms: humans, worms, snails, insects, and so on. He thought comparisons of their brains might be instructive precisely because vast differences exist between the behavior and intellectual capabilities of different creatures. There is unquestionably an enormous gulf between human and insect intelligence, so it would be reasonable to suppose that a comparison of their brains would expose how structural components reflect intelligence. Surely, the human brain should contain ‘high performance’ components and the insect brain markedly less sophisticated ones. But the difference between insect and human neurons does not at all betray the gulf between insect and human intelligence. Insect neurons are as complex and display as much diversity as neurons in the human cortex. Cajal himself expressed considerable surprise at this: ‘the quality of the psychic machine does not increase with the zoological hierarchy. It is as if we are attempting to equate the qualities of a great wall clock with those of a miniature watch.’ Brains of the most advanced insects (honey bees) have about one million neurons, snails about 20,000, and primitive worms (nematodes) about 300. Contrast these numbers with the hundred billion or so that are required for human levels of intelligence. But the individual neurons of simple organisms operate with the very same electrical and chemical signaling machinery found in today’s most advanced brains. Like it or not, the astonishing conclusion from comparative studies is that the evolution of our brains, capable of such extraordinary feats, did not require the evolution of ‘super neurons’. The basic cellular components of mental functions are pretty much the same in all animals, the humble and the human.
In 1906 Cajal shared the Nobel Prize for Physiology and Medicine with Golgi, ‘in recognition of their work on the structure of the nervous system’. This was the first time that this Nobel Prize had been shared between two laureates. The award was controversial because the two disagreed on a crucially important matter - Golgi remained convinced that Cajal was wrong to reject the reticular theory. It was of course Golgi who was wrong, and fundamentally so. Other questions over Golgi’s interpretations raised serious doubts in the minds of some of the scientists advising the Nobel Council as to the appropriateness of his nomination for the prize. But whatever the merits of the case for a shared prize, 1906 marked the beginning of the modern era in the neurosciences and it was the first of a series of Nobel Prizes to be awarded to neuroscientists over subsequent decades.
Cajal could not have anticipated the extraordinary advances in brain science that were about to be made. His recognition of the neurons as polarized units of information transmission was a defining moment in neuroscience. But at the start of the 20th century many questions about precisely how and in what form neurons signal information in the brain remained unanswered. By the middle of the 20th century, neuroscience had become the fastest growing discipline in the history of scientific endeavor and by the end of that century a more or less complete understanding, in exquisite molecular detail, of how neurons generate electrical and chemical signals would be achieved.
In this very short history of man’s discovery of the workings of the brain I cannot avoid reference to the discredited pseudo-science of phrenology, a theory developed by the idiosyncratic Viennese physician Franz Joseph Gall (1758-1828). Gall believed that the brain is the organ of the mind, but he went much further and postulated that distinct faculties of the mind, innate attributes of personality and intellectual ability, are located at different sites in the brain. Gall reasoned that different individuals would possess these innate faculties to differing degrees, and that the degree of their development would be reflected in the size of the surface region of the brain that housed that particular faculty. These ideas have a very modern ring to them, but Gall also thought that the skull would take the shape of the brain’s relief and therefore that the bumps on the surface of the skull could be ‘read’ as an index of various psychological aptitudes.
The practice of phrenology grew and flourished in Europe and then in America from about 1820, becoming a popular fad in the latter part of the 19th century before effectively dying out early in the 20th century (though in fact the British Phrenological Society was not disbanded until 1967). Its demise in the early 20th century coincided with the rapid accumulation of real evidence for the principle that many discrete mental functions are highly and specifically localized to particular parts of the brain. Much of the evidence came as a consequence of the First World War in the form of the many unfortunate victims of gun-shot and shrapnel lesions to specific regions of the brain that produced reproducible disorders. More recently, functional brain imaging techniques such as fMRI have shown beyond doubt that different cognitive functions are indeed localized to specific parts of the brain. So while the exaggerated claim of phrenologists to be able to read the mind from the bumps on the head was refuted, their premise was vindicated.
Imaging the future of brain research
The first high-definition imaging system, called Computed Axial Tomography (CAT scanning), was developed in the 1970s. It is an X-ray-based technology that was used, and still is, as a medical diagnostic tool, for example to resolve the position of tumours in the brain. In the past 30 years more powerful imaging technologies have been developed that have the potential to associate different cognitive functions with different structures in the brain. These techniques include most notably Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI). When PET is used to link function to structure, increases in local blood flow and glucose consumption associated with increased neuronal activity are measured. A radioactive isotope, of glucose for instance, is injected into the blood stream, and the high-energy photons that fly off in exactly opposite directions from the site of an emitting isotope are picked up by an array of detectors that encircle the head. Detectors facing one another on opposite sides of the head will simultaneously detect the two photons generated from the same place within the brain. By integrating simultaneous photon detections across the array, the source of the isotope can be calculated. In this way a computer builds an image of the structures that contain the isotope. In other applications of PET, the radioactive label is attached to molecules that bind to particular receptors, thus revealing the distribution of receptors for particular neurotransmitter systems in the brain, for example.
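The coincidence-detection principle behind PET can be sketched in a few lines of code. This is an illustrative toy, not a real scanner’s reconstruction algorithm: the grid size, the source position, and the simple back-projection scheme are all assumptions made for the demonstration. Each simultaneous detection of two opposite photons defines a line through the head; where many such lines intersect is where the emitting isotope must lie.

```python
import math

# Toy illustration of PET coincidence detection: a source emits photon
# pairs in opposite directions; each simultaneous detection by two
# opposing detectors defines a "line of response". Back-projecting many
# such lines onto a grid makes the source stand out as the cell crossed
# by every line.

GRID = 41                      # 41 x 41 image grid (illustrative)
source = (25, 12)              # true (unknown-to-the-scanner) source position
image = [[0] * GRID for _ in range(GRID)]

# Simulate detections: for many emission angles, trace the line of
# response through the grid and increment every cell it crosses.
for k in range(180):
    theta = math.pi * k / 180          # emission direction, one per degree
    dx, dy = math.cos(theta), math.sin(theta)
    for t in range(-GRID, GRID):       # walk along the line in both directions
        x = int(round(source[0] + t * dx))
        y = int(round(source[1] + t * dy))
        if 0 <= x < GRID and 0 <= y < GRID:
            image[y][x] += 1

# The reconstructed "hot spot" is the brightest cell: only the source
# lies on every one of the 180 lines of response.
peak = max((image[y][x], (x, y)) for y in range(GRID) for x in range(GRID))
print(peak[1])  # -> (25, 12), the source position
```

Real scanners refine this idea with filtered back-projection and corrections for scatter and attenuation, but the core logic, intersecting lines defined by simultaneous photon pairs, is the same.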
A more sensitive technique, and importantly one that does not involve radioactive tracers, is Magnetic Resonance Imaging or MRI. Briefly, the technique involves the pulsing of a strong external magnetic field, which evokes transient magnetic responses within the brain. The evoked magnetic signals are used to compute 2D and 3D images of the brain’s structure. This technique can be used for purely structural studies, as it was in the experiments on London taxi drivers that showed they have a larger than expected hippocampus. But in its most interesting experimental application it provides images of the brain in action. When used to reveal active regions of the brain involved in particular functions, the technique is known as functional MRI, or fMRI for short. To understand how fMRI works, and to appreciate its limitations, it is important to realize that it does not image the electrical activity of neurons directly. It monitors the indirect consequences of their activity. When a region of the brain is actively working, the neurons in that region will require more glucose and oxygen. This is a consequence of two interesting facts. First, it seems neurons store only enough energy for the briefest of bouts of activity; if neurons are active for long enough, they need refueling to produce the energy storage molecule ATP required to recharge their batteries. An active brain region therefore has a significantly higher metabolic demand for oxygen and glucose than a quiescent region. A simple solution would be to pump more blood into the active brain, much in the same way that a muscle is supplied with more blood when exercised vigorously. But here the second fact intervenes: unlike a muscle, which becomes engorged with blood and swells when exercised, the brain is confined by the skull and cannot be allowed to swell significantly.
The solution to this tricky problem is to maintain a constant overall blood-volume in the brain and to arrange for blood to be diverted preferentially to active regions. Blood is diverted by the ability of blood vessels in the brain to dilate in response to signals coming from nearby active neurons. Dilation reduces resistance to blood flow, thereby increasing the supply of blood to the region of elevated neuronal activity. We are not really sure how the blood vessels ‘know’ that nearby neurons are hyperactive. It is likely however that the signal for blood vessel dilation is the gas nitric oxide (NO), because NO causes the relaxation of muscle cells in the walls of blood vessels. It is thought that NO-producing neurons sense increased activity of nearby neurons and respond by producing NO in the same region – thus coupling increased neural activity to increased blood flow in that region. In detecting regions of increased blood flow, fMRI recognizes the different magnetic signatures of oxygenated and deoxygenated hemoglobin. When neurons in a brain region are sufficiently active for long enough, blood in their vicinity becomes oxygen depleted. This is followed by an increased flow of oxygenated blood to that region; quite literally there is a local blushing of the brain. The fMRI technique is responsive to the blushing and indirectly assigns increased neural activity to that region at a spatial resolution of just a few cubic millimeters. It is in this way that we now have a far more fine-grained functional map of the brain than was previously possible. Bold claims are now being made about complex cognitive functions: where in the brain we recognize faces and words, where executive functions are carried out, where false memories are located, and so on.
Signaling in the brain: getting connected
The problem of connection, sending information effectively around the nervous system, arises because signals must be communicated without distortion over the length of the body, which may be a very large distance indeed in the case of the blue whale, for example. Coupled to this is the fact that, in an unforgiving world, animals must react quickly, whether to be effective predators or to avoid being eaten. So the basic requirements for signaling coded information in the nervous system are that signals must be routed correctly and sent reliably over long distances as rapidly as possible.
In order to achieve this, neurons encode and convey information electrically. Brief electrical pulses (lasting a few thousandths of a second), known as action potentials or nerve impulses, travel along biological cables (axons) that extend from the cell bodies of neurons and connect their outputs to the inputs of other neurons.
Compared to the speed of electrical information traffic along the wires in a computer (close to the speed of light), conduction velocities of impulses in the brain are slow, about 120 meters per second in the fastest conducting axons. When they reach the terminals of axons, impulses trigger the release of chemical signals that are able to initiate or suppress electrical signals in other neurons. In this way neurons transmit information from one to another by an alternating chain of electrical and chemical signals. The chemical signals are released at specialized sites called synapses, at which the chemical signals (neurotransmitters) pass across a very narrow gap separating two neurons. Released neurotransmitter molecules work by binding to and thereby activating specialized receptor molecules located on the surface of the receiving neuron on the other side of the synapse. An activated receptor causes a brief electrical response, called a synaptic potential, in the receiving neuron. These potentials may be either inhibitory or excitatory depending on whether the voltage in the receiving neuron becomes more negative (inhibitory or hyperpolarizing) or less negative (excitatory or depolarizing).
Inhibitory potentials make the receiving neuron less likely to fire a nerve impulse. Excitatory potentials increase that probability. A ‘decision’ to produce nerve impulses is therefore made through the summation of all of the inhibitory and excitatory potentials impinging on a neuron. Once a critical threshold voltage is reached by this summation, nerve impulses will be generated. The more the excitation, the higher will be the frequency of the impulse train. An important way that information is coded in the brain is by the impulse frequency (number of impulses or action potentials per second) and by the pattern of impulses. Nerve impulses travel rapidly along the axon, feeding information to many other neurons where the process of neurotransmitter release and chemical communication is repeated.
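The summation-to-threshold ‘decision’ described above can be sketched as a toy integrate-and-fire neuron. All of the numbers here (threshold, leak rate, input sizes) are illustrative assumptions, not physiological measurements; the point of the sketch is simply that stronger net excitation yields a higher impulse frequency, while net inhibition keeps the cell silent.

```python
# A minimal integrate-and-fire sketch of the neuron's 'decision':
# synaptic inputs are summed against a leak that pulls the voltage back
# toward rest; when the sum crosses threshold, an impulse fires and the
# voltage resets. Parameters are illustrative, not measured values.

REST = -70.0       # resting potential (mV)
THRESHOLD = -55.0  # firing threshold (mV)
LEAK = 0.1         # fraction of the deviation from rest lost per time step

def count_impulses(drive_mv, steps=1000):
    """Count impulses fired over 'steps' time steps given a constant
    net synaptic drive (positive = excitatory, negative = inhibitory)."""
    v = REST
    impulses = 0
    for _ in range(steps):
        v += drive_mv                 # summed synaptic input this step
        v -= LEAK * (v - REST)        # leak back toward the resting potential
        if v >= THRESHOLD:            # threshold crossed:
            impulses += 1             #   fire a nerve impulse...
            v = REST                  #   ...and reset to rest
    return impulses

# More excitation -> higher impulse frequency; net inhibition -> silence.
print(count_impulses(2.0), count_impulses(4.0), count_impulses(-2.0))
```

With these toy numbers, a drive of 2 mV per step produces a steady impulse train, doubling the drive roughly triples the firing frequency, and a net-inhibitory drive produces no impulses at all, which is the frequency coding described in the text.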
Neurons may receive chemical signals from hundreds of other neurons through a thousand or more synapses on their surfaces, each having some influence on the ‘decision’ to fire a nerve impulse and on the firing rate. The complexity of the resulting signaling network in the brain is almost unimaginable: one hundred billion neurons, each with a thousand synapses, producing a machine with one hundred trillion interconnections! If you started to count them at one per second, you would still be counting more than 3 million years from now!
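The back-of-envelope arithmetic behind that counting claim is easy to check, assuming the round figures of one hundred billion neurons and a thousand synapses each:

```python
# Checking the counting claim: 100 billion neurons x 1,000 synapses
# each gives 10^14 interconnections; at one per second, how long?
connections = 100_000_000_000 * 1_000            # 10**14 interconnections
seconds_per_year = 365.25 * 24 * 60 * 60         # about 3.16e7 seconds
years_to_count = connections / seconds_per_year  # counting at one per second
print(round(years_to_count / 1_000_000, 1))      # -> 3.2 (million years)
```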
Physics and the problem of electrical signaling
When a neuron is inactive or at rest there exists a stable negative voltage across the membrane of about −70 mV, known as the resting potential. When excited by another neuron, or in the case of a sensory receptor cell by a sensory stimulus, the neuron may generate a train of action potentials. Nerve impulses attain a positive voltage of about +50 mV before returning to the resting potential. So the total voltage excursion of a nerve impulse is about 120 mV, or 0.12 volts.
We need now to understand something about how these electrical impulses are generated and propagated along axons in the wet, salty, and gelatinous medium that is the brain: a very unsuitable environment for an electrical signaling system. The problem is made even more difficult by the dreadful electrical properties of axons. Axons are very poor conductors of electricity, so bad in fact that over relatively short distances, far less than a typical axon’s length, most of the original signal will leak away into the salty surroundings. This inescapable problem is a consequence of the way the laws of physics apply to the flow of electricity in electrical cables immersed in salty water.

3. Neuron-to-neuron communication. An electrical action potential or nerve impulse travels at speeds up to 120 meters per second along the axon of the presynaptic neuron. When it reaches the synapse the impulse causes neurotransmitter molecules to be released. Receptor molecules react to the neurotransmitter molecules causing the postsynaptic neuron to be either excited (illustrated) or inhibited. An inhibitory synaptic potential would dip below the resting potential, making the postsynaptic neuron less likely to fire an action potential.
These laws were first formulated by the British scientist Lord Kelvin (1824–1907) who figured out how to send telegraphic information across the Atlantic Ocean through a submarine cable. Lord Kelvin defined a parameter called the ‘length constant’, which allows us to compare how good different types of cable are at transmitting electrical signals over a distance. A length constant is the distance over which about two-thirds of the electrical signal’s amplitude will be lost and its value can vary enormously. For example, the length constant of a submarine cable is a few tens of miles. This means it is not possible simply to lay a cable across the Atlantic and expect an electrical signal injected at one end to appear at the other end undiminished, several thousands of kilometers away.
For a submarine cable, the length constant is a small fraction of the distance over which information must be sent and the same is true for biological cables, axons. So in a similarly salty environment both submarine cables and axons must detect a failing electrical signal and boost it back to its original strength before sending it on its way again. In submarine cables booster amplifiers placed at regular intervals achieve this, and axons solve the problem in a rather similar way. But how, using the unlikely ingredients of a few proteins, fats, some smaller organic molecules, and plenty of salty water, can nerve cells make a battery-powered amplifier?
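Kelvin’s length constant describes a simple exponential decay: after a distance x, a passively conducted signal retains a fraction exp(−x/L) of its amplitude, where L is the length constant. A quick sketch (the millimetre-scale length constant used here is an illustrative assumption) shows why boosting is unavoidable:

```python
import math

# Passive signal decay along a cable: amplitude falls exponentially with
# distance, V(x) = V0 * exp(-x / L), where L is Kelvin's length constant.
# The numbers below are illustrative, not measured values.

def remaining_fraction(distance, length_constant):
    """Fraction of the original signal amplitude left after 'distance'
    (both arguments in the same units, e.g. millimetres)."""
    return math.exp(-distance / length_constant)

# After one length constant, about 37% of the signal remains -- i.e.
# roughly two-thirds has been lost, matching the definition in the text.
print(round(remaining_fraction(1.0, 1.0), 2))   # -> 0.37

# Without boosting: with a length constant of 1 mm, a signal sent down a
# 1-metre (1000 mm) axon would all but vanish.
print(remaining_fraction(1000, 1))              # exp(-1000): effectively zero
```

Boosting the signal back to full strength at regular intervals, as submarine amplifiers do and as the action potential mechanism effectively does, is the only way to deliver it intact.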