The last decade of the twentieth century and the first years of the new century have seen enormous advances in our scientific knowledge about the human brain, based in part on some major technological advances. New techniques first developed experimentally in the 1980s to examine brain dynamics, such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET), have become more widely available. Thus we are no longer limited to making “still photos” of the brain with conventional MRI or CT scans; now we can make “motion pictures” of changes taking place in the brain from moment to moment. It can be demonstrated, with a high level of confidence, that in the presence of a specific kind of stimulus, a specific area of the brain receives more blood, or more oxygen, or displays more electrical activity, or produces more metabolites—all of which at least implies that this part of the brain reacts selectively to this kind of stimulus.
Given all this new technology, we would reasonably expect to know a great deal more about the brain than we did in 1990, when the “International Decade of the Brain” began. In the popular press, and in some publications for general readers, one often encounters claims to the effect that in a short time the brain will be “mapped” as thoroughly as the human genome is now, so that we will (putatively) know exactly which mental and emotional functions are performed by which clusters of neurons in the central nervous system. The errors committed by the phrenologists of the nineteenth century, who imagined that they could measure a person’s character by examining the lumps and depressions in her skull, may make us smile today, but the dream lives on: that we can read minds by mapping brains. If we know exactly what is happening exactly where in the brain, perhaps we can learn to “switch off” aggression in violent criminals and racism in bigots, “switch on” intelligence and compassion, rid alcoholics and drug addicts of their cravings and compulsions, and so forth.
It perhaps goes without saying that this kind of enthusiasm is premature, indeed misplaced. Reality is far more complicated. Indeed, the new technologies have done much to unsettle our confidence in what we thought we knew about the brain, and relatively little to enable the mapmakers to go on about their business. More than a decade ago, Bolwig (1994) was already warning that the progress in “neuroimaging” was all very interesting, but to date all the masses of new data had not produced any significant developments in the treatment of brain damage or mental illness. Twelve years later, despite yet more technical progress, Bolwig’s pessimism still seems justified. Even when we know what structural or dynamic changes in the brain are associated with particular diseases or injuries, we remain unable to bring any kind of therapeutic intervention to bear on a given part of the brain, apart from perhaps surgical excision, which is drastic, costly, dangerous, and uncertain in its effects. No drugs, no psychotherapies can selectively reach a particular brain region and effect changes there.
More to the point, however, we cannot really be certain that even if we did know how to target specific clusters of neurons for pharmaceutical or psychotherapeutic intervention, we would actually produce the intended effects. What the neuroimaging results reveal is that nearly every brain function involves a complex network of brain regions and neuron clusters, sometimes widely separated from each other. Moreover, it is becoming increasingly clear that most neurons and groups of neurons participate in a number of different brain functions, which do not always seem to be related to each other in any obvious way. Thus the relationship between structure and function turns out to be far more complex than was imagined in the latter half of the nineteenth century, when the work of such pioneers in brain research as Paul Broca and Carl Wernicke demonstrated that lesions to particular parts of the brain tended to have predictable effects on the patient’s speech. In the mid-twentieth century, it was demonstrated that various parts of the body are rather precisely mapped to the posterior parts of the frontal lobes for movement, and the anterior parts of the parietal lobes for sensation (the so-called “sensomotorium”). Later, with the development of computer technology and artificial intelligence, such cognitive processes as memory and perception were analyzed into specific functions performed by specialized processors, each of which received a certain input, performed a specific operation upon it, and transmitted a certain output. All this led to a picture of the brain as an elaborate biological computer, the “modular mind” of Fodor (1983), a system of neural processors shuttling raw sensory data around to make them into coherent pictures, much as a computer takes millions of binary bits to make texts and pictures. In a word, it was assumed that cognitive and emotional functions could be mapped to the brain in the same way as movement and sensation.
All along the long road that has led from Broca and Wernicke to the modular brain of contemporary cognitivism there have been dissenting voices—Jackson, Freud, Head, Goldstein, Luria, Brown, to name only a few. The irony is that much of the newly available data, rather than crowning the modular approach with its ultimate achievement, i.e. the definitive map of the brain, seem instead to be eroding our confidence in the received wisdom, which, as Brown (1988) points out, has actually changed very little since the end of the nineteenth century. By the same token, brain science has not succeeded (despite the confident assertions of some of its practitioners) in proving that “mind” and “brain” are but two words for the same thing. To be sure, any theory of the human mind that does not take account of the brain seems manifestly incomplete, but the converse is equally true. What a lobotomized rat can or cannot do in a maze may tell us much about the effects of brain damage on a human being, but from there to the highest expressions of philosophy and art is a quantum leap or two, which existing brain science cannot explain. The cognitivist, modularist paradigm has exhausted its possibilities, and in such situations a scientific revolution is imminent.
In our opinion, a Copernican revolution has already begun in the neurosciences, even if many are or profess to be unaware of that fact (Pachalska 2003; Bradford 2006). In this case, the role of Copernicus has been played by Jason W. Brown, Clinical Professor of Neurology at the New York University Medical Center, whose microgenetic theory serves as the equivalent of the heliocentric universe. The theory has been expounded in a series of books, beginning with Aphasia, Apraxia, and Agnosia (Brown 1972) and concluding (for the moment) with Process and the Authentic Life: Toward a Psychology of Value (Brown 2005). As indicated by the titles of these “endpoint” works, Brown’s emphasis and interests have tended to move from clinical to philosophical, but it would be a mistake to regard this as a change of direction; rather, the general direction was laid down in the first book and followed through to the last. The clinical antecedents of this approach can be found in the work of great “dissenters” of nineteenth- and twentieth-century neurology and neuropsychology—Jackson, Head, Goldstein, Luria and others, who for one reason or another could not accept the anatomical-functional approach to the mind/brain that led in a straight line from Wernicke to cognitivism (Brown 1988). Brown’s microgenetic theory owes much to evolutionary theory (Gould 1982), and even more to Whitehead and process thought. The final product, however, is an original synthesis of ideas, backed up by extensive clinical observation.
The English term “microgenesis” was originally coined to render Heinz Werner’s term Aktualgenese, referring to the process by which a mental state is formed in the present moment (Werner 1956, 1957; Werner & Kaplan 1956; Werner & Kaplan 1963). There are two main assumptions:
(1) a mental state is momentary and transitory, appearing on the surface and immediately giving way to the next state; this is associated with the epochal theory of time, which in psychology can be traced back to William James (1890);
(2) a mental state becomes manifest as a result of a process that proceeds from depth to surface (more literally than metaphorically), which means that in every mental state there are older and more primitive reactions from which the manifest percepts and behaviors emerge.
The sequence of phases through which a mental state arises in microgenesis is determined by the patterns established during the evolution of the species (phylogeny) and the development of the individual (ontogeny), with a general movement “upward” in the direction of elaboration, specification, and articulation, as in the structure and growth of a tree (trunk, branches, leaves) or a brain (brainstem, subcortical nuclei, neocortex). The mind is thus the end product of phylo-, onto-, and microgenesis, with the brain as the stage upon which these events take place on their respective scales of time: eons, years, and milliseconds.
From the perspective of process thinking, then, it becomes clear that the mind in microgenetic theory arises from the becoming of the brain, which explains on the one hand why the mind and the brain are inextricably bound up with each other, and on the other hand why the bond is not after all one of identity. A sentence does not appear until a string of sounds or letters is produced according to the rules of a given language, but these strings are not in and of themselves sentences. Nor are sentences merely strings of phonemes or graphemes. In the same way, the human mind cannot be conceived without a brain, but neither can it simply be reduced to a brain. If we could hold Einstein’s formalin-preserved brain in our hands, would we be holding his mind in our hands? Most would say, certainly not; and yet if this is true of a dead brain, why should it not be true of a living brain as well? And yet contemporary neuropsychological theory would have us believe that what is mental is cerebral and nothing more than cerebral. In our opinion, the only escape from this kind of materialistic neuropsychology is either ghost-in-the-machine dualism or microgenetic theory, which in the tradition of Whitehead looks at process and becoming rather than entities and relations.
The concept of the brain/mind that emerges from microgenetic theory is thus directly opposed to the currently fashionable information-processing models of the brain, which more or less explicitly attempt to apply the laws of artificial intelligence to natural brains. We are asked to believe in a brain that consists of a fixed arrangement of neuronal elements performing specific functions according to a predetermined plan, as in a computer, contrary to the evidence of our own eyes, which tell even the layman that a brain looks nothing like a computer. This is why microgenetic theory cannot be bent and twisted to fit into the cognitivist paradigm without destroying either the theory or the paradigm. Hence, too, the comparison to the Copernican revolution: if we accept the theory, we have committed ourselves to rejecting the old paradigm, and thus going back to the start and thinking things through all over again. Either microgenetic theory is correct, and the brain is a place where transitional phases occur in the process of forming a self, a world, feelings and ideas, or cognitivism is correct, and the brain is a sort of biological computer, where such notions as “mind,” “soul,” or “psyche” are quaint artifacts of old superstitions (like “sunrise” and “sunset” after Copernicus). No compromise is possible. Microgenetic theory can explain why brains grow and change, and why they possess such a remarkable capacity to repair themselves when they are damaged (Brown 2002; Brown and Pachalska 2003; Kaczmarek 2003; Papathanasiou 2003). For cognitivism, the constantly changing nature of brain function and structure is a major embarrassment; for microgenetic theory, it is the point of departure.
The human brain eludes definitive description, to a large extent because brains differ dramatically over time and between individuals. This obvious fact, which can be seen with the naked eye, is glossed over by standard theories, which present models of “the brain,” as though brains were as uniform as the “Amilo” notebook computers manufactured by Fujitsu-Siemens. Real brains, however, differ as much as people differ: that is, there are general laws of structure that can be violated only with grave consequences, but within a certain framework most of us are distinguishable in one way or another from everyone around us. Even identical twins, with age, tend to become less and less alike, as the experience of life leaves its marks on the body, face, and personality. Why should the brain be different? Identical twins have the same genes for the brain, and indeed they show some marked similarities in cognitive and emotional functioning, in temperament and mental habits, even in esthetic preferences. It is not the similarities that should amaze us, however, as much as the differences. The brains of twins are not nearly as alike as two computers of the same make manufactured by the same company in the same factory on the same day, and yet they are not as different as the brains of two unrelated individuals. Once again, we are clearly looking at a process that unfolds in accordance with certain patterns and regularities constraining the direction of development, but not in accordance with a prefigured plan. Brains are simply not things that are constructed according to blueprints and then brought on line to fulfill their preordained function. They become minds.
2. The Main Premises of Microgenetic Theory
In microgenetic theory, the brain is at once the product of evolution and the template within which the mind evolves. The laws and regularities of evolution never cease to be in effect, including natural selection and the complex dialectic of change and continuity. Thanks to the genome and the continuity of genetic material passed from generation to generation, a given species retains a set of basic features, within which a certain range of variants is allowed, and beyond which one can no longer speak of the same species. A new species does not appear at the end point of development of one species, which would thus somehow metamorphose into a new form, but rather results from a divergence of paths, a forking off. The new species always contains a substratum of genes that represent its continuity with its evolutionary past, even when these genes (such as those that produce the “gills” in a human embryo) are either never expressed or are “switched off” by other genes (so that the gills quickly disappear). This is a fundamental characteristic of evolutionary change: that the past is not so much discarded as buried.
The course of evolution is thus guided by a dialectic between continuity and change (Brown 2005), between the continuity of genes and the adaptation forced by a dynamically changing environment. When the environment changes and the genes do not, the species is threatened with extinction; when a novel mutation is too radical, the result is monstrosity, which cannot become the basis for a viable new species capable of surviving and reproducing itself. In an analogous manner, each individual human brain begins with a DNA blueprint, and yet it takes on its individual character in constant interaction with the dynamically changing environment. From birth to death we have the same brain within our skulls, and yet from one moment to the next it is not precisely the same. This explains why ontogeny travels the road built by phylogeny, but the journey is never quite the same. A human being is human from conception; the fertilized egg is not an amoeba, which metamorphoses into a slug, which metamorphoses into a frog or a fish, which metamorphoses into a rat, which metamorphoses into a monkey, which is finally born as a human infant. And yet the development of an embryo into an infant passes through stages that very closely resemble the “lower orders” of the process of evolution. At any given moment, then, we find ourselves at the culmination of a process which has lasted for six million years, and sixty years, and three hundred milliseconds, yet is always the same process.
Phylogeny has given us a brain, which becomes itself in ontogeny according to phases that represent those moments in evolutionary time when “quantum leaps” occurred in the differentiation and specialization of the central nervous system. The most important transitions are represented by the three main levels of the central nervous system above the spinal cord:
(1) the brainstem (including the midbrain), which structurally and functionally mediates between the central and peripheral nervous systems, and the cerebellum, a kind of “proto-brain”;
(2) the limbic system and the basal ganglia (sometimes collectively referred to as the “subcortical nuclei”), which lie above the brainstem and below the cortex;
(3) the cerebral cortex (the outer layer of gray matter, whose wrinkled surface is what most of us associate with the word “brain”).
The higher we go in this system, the more the human brain differs from those of other animals. The human brainstem and midbrain differ relatively little from those of reptiles and fish. The cerebellum and the subcortical nuclei are very similar in all mammals. It is the cortex that distinguishes the human brain, especially the frontal lobes, which are considerably larger than those of even our nearest cousins, the pygmy chimpanzees. Each of these three main layers possesses somewhat different mechanisms for communication with the periphery, i.e. the sense organs and the effectors (muscles and glands) by which the central nervous system interacts with the external world. The stimulus-response arc can be closed at any of these three levels, producing a behavior (when this happens below the brainstem, we speak of a “reflex”). Thus the brain is neither a monad nor a set of interconnected processors, but rather the stage on which evolutionary processes occur, including the microgenesis of particular mental states, when the course of phylogeny and ontogeny is re-created in the formation of a mental state.
Plato, in the Republic, divided the soul into three parts (the appetitive part, the spirited part, and the rational part); Freud, into Id, Ego, and Superego. In microgenetic theory, we begin by observing that the brain has been deposited in three main layers, and by examining the character of those layers, we can begin to understand how and why human behavior and feeling are layered:
(1) At the level of brainstem, the organism perceives objects as Gestalts, instantly classed in a finite number of very broad categories according to survival value. Reactions to perceived stimuli are immediate, stereotyped, and irreversible, while movements are largely axial and whole-body. There is language at this level, but it is the “language” of crying, laughing, moaning, growling, snorting, etc., without words or grammar, the direct expression of affect unmediated by any specific language rules. Time is simply the minimal neuronal epoch that transpires from stimulus to response.
(2) In the limbic system, the seat of emotion (affect and mood), the “pleasure principle” (Freud 1920) is dominant: stimuli are evaluated on a scale from “ugly” to “beautiful” or some variant thereof. Responses to stimuli are more variegated and subtle, yet the processes involved here are physiological, i.e. biochemical and biophysical. The perception of a “beautiful” object prompts changes in the chemical environment of the entire body, which explains why emotions as such are always felt “in the body”—that is, the peripheral nervous system is affected by the chemical changes initiated by the subcortical nuclei. Limbic perception and limbic action are inseparable from mood and affect, as in dreams or hallucinations (Brown 2000). The limbic system possesses its own vocabulary and grammar (primarily that of cursing), but in this respect “limbic language” belongs to a particular human language and is rule-bound to a certain extent. Emotions also play an important role in the supra-segmental aspects of speech, i.e. tone of voice, gesture, facial expression, prosody, etc., those dimensions of the speech act that can usually be read by a person who does not speak our language, or even by a pet dog. The limbic system is richly connected to the memory system, which explains why emotionally laden material is more easily remembered (or forgotten only at the cost of enormous mental effort), and why memories evoke emotions. The perceived object is remembered into consciousness, emerging from within, which, again, is a feature of dreams and hallucinations (Brown 2003). Time is the felt “now,” with a tendency to the cyclic recurrence of moments, i.e. sequences of event and affect.
(3) The guiding principle of the cortex is division, classification, analysis, articulation in the broadest sense of the term. The cortex gathers detailed sensory data and makes judgments that can be used to constrain the percepts and behaviors generated by the lower levels. In the “limbic” perception of dreams, identities (including one’s own) are fluid and indefinite; the cortex analyzes features and fixes identity, distinguishing one object from another. There appears what Kurt Goldstein (1995) called “the abstract attitude,” which depends on an ability to reason inductively or deductively, analyze cause-and-effect and other relations, and use metaphors, analogies, and other figures of speech, while retaining a critical attitude. Language is fully rule-bound, and this generally applies to the cognitive processes taking place at this level.
The existence of the three levels described here can be demonstrated clinically by observing, for example, the verbal behavior of persons awakening from general anesthesia, or a prolonged coma. When the central nervous system first begins to stir, there is inarticulate verbalization: groaning, sighing, sometimes shouting or even howling. This is often followed by a period of uncontrolled cursing, which gradually gives way to simple utterances and the recovery of what neurologists call “logical contact.”
It should be stressed once again, before we proceed, that the interactions of these three layers are evolutionary, which means that the higher evolves from the lower but does not displace it. Even the most abstract acts of mental reasoning originate, not in the cortex, but in the brainstem, and then pass through the limbic system before the rule-bound cortex sculpts them into thoughts. The lower layers deposited along the way may be more or less thoroughly hidden, but they are always present. Thus the lower levels of processing serve a two-fold purpose: on the one hand, they serve as preliminary phases in the formation of cognitions and behaviors, feeding information forward to the cortex, but on the other they are self-contained systems operating on different principles, each independently capable of taking in sensory information and putting the body into motion. In a reflex, including a conditioned reflex, the stimulus-response arc closes at or below the level of the brainstem, so far below the threshold of consciousness that a healthy nervous system cannot fail to complete the cycle. At the other end of the spectrum, the pursuit of a lifetime aspiration requires that completion of the stimulus-response cycle be delayed, usually broken down into a series of overlapping mental processes with subordinate goals. This does not mean, however, that such elaborate lifetime projects are the product of the cortex acting alone. In principle there is no such possibility, which once again recalls Plato’s concession in the Republic that the appetitive and spirited parts of the soul are necessary to the economy of the soul: the rational part must govern, but it cannot act alone.
One of the key differences between the successive layers is the time frame. The brainstem perceives and reacts in a fraction of a second; the limbic system, in seconds or minutes; the cortex, anywhere from several seconds to decades. Time is experienced quite differently at the respective levels (Brown 1996): from the instantaneous stimulus-response of the brainstem, through the “dream time” of the limbic system, to clock time in the cortex. The problem is complicated, however, by the fact that the nervous system as a whole “chunks” time into epochs of 300-400 milliseconds (the time it takes an impulse to travel from any given point in the nervous system to any other point). Microgenesis (i.e. one complete pass through all the phases) lasts just this long, but is followed immediately by the next, as drops of water rise through a fountain and fall away, replaced so quickly by subsequent drops traveling the same (or nearly the same) route, that the fountain itself takes on the appearance of a stable, solid object. This explains why the duration of the present (an ancient philosophical problem placed by William James at the heart of psychology, though the issue is mostly bypassed in contemporary psychology) continues to be one of Brown’s primary preoccupations (Brown 1996, 2000, 2005).
For this system to work, then, there must be a mechanism that can suspend the reactions of the lower levels, in order for the upper levels to have time to work. This explains (among other things) why the frontal lobes are so richly connected with the limbic system, and that in turn is the physiological basis for the “rule of the rational part over the spirited and appetitive parts,” which for Plato was the essence of justice. It also explains why we can have affective reactions to our own cognitions.
3. The Microgenesis of the Speech Act
As we have already suggested, speech and language are not the products of the cortex alone, as most neurolinguistic theories assume. After all, a speech act, in the Austin-Searle sense of the term, is a behavior, and behaviors are the products of microgenesis, which means that they pass through all the layers of the central nervous system. Parrots, which have very little cortex, can learn to speak in complete sentences, though it is highly unlikely that the parrot who says “Polly wants a cracker!” is actually producing a sentence that could be parsed, or even that it “knows” what it is saying, beyond the conditioned reflex that the utterance of this string of sounds produces a desired effect. On the other hand, dogs and other higher mammals, especially the higher apes (gorillas and chimpanzees), can understand as many as several hundred words and simple sentences, though their expressive abilities are far more limited. Even more interesting: recent research has demonstrated that many words and expressions evoke emotional reactions in the human hearer some time before the meaning of the word or expression has been comprehended. These same words, especially interjections and curses, can often be uttered flawlessly by patients with profound aphasia, who otherwise cannot say more than a word or two. Clearly, parts of the brain other than the well-known cortical “speech centers” in the perisylvian area of the left hemisphere can also use and understand words.
These observations point to the nature of the mutual interaction of the respective levels of the central nervous system. The lower levels form the evolutionary background, from which the complex behavior of the upper levels evolves; at the same time, the mechanisms of inhibition and learning can shape the behavior of the lower levels, by offering ever more behavioral options and the time to choose among them. In certain cases, however, the more primitive behavior emerges of its own accord, i.e. comes to the surface without the control of the cortex. This happens, for example, when we act impulsively, under the influence of strong emotions, or alcohol; it also happens when brain damage or disease affects particular regions of the brain, or the whole brain. In the microgenetic approach, then, the symptoms of brain dysfunction are not understood simply as deficiencies, that is, “zeros” on the psychometric tests favored by cognitivism for diagnostic and research purposes. Rather, the symptom is a sample of an early phase in the microgenesis of a percept or a behavior, which as a result of damage surfaces prematurely, revealing that which is ordinarily concealed (Brown & Pachalska 2003; Luria 1966).
The disturbances in the naming of objects that occur in the course of aphasia are a case in point (Brown 1994; Brown & Perecman 1986). The standard test used to measure these disturbances is the Boston Naming Test (Goodglass & Kaplan 1983), consisting of sixty pictures of objects, which the patient is shown and asked to name. A naming score is obtained by awarding one point for each correctly named object, and zero for every mistake. The number that results is speciously objective, but remarkably uninformative: we learn little apart from the fact that the patient has some difficulty in naming objects shown to him in pictures. The problem can be somewhat alleviated by switching to a 0-1-2 system, where “2” is normal performance, and “1” is a response that falls short of normal but is not completely erroneous. Without a sense of what the patient actually says when looking at a picture, however, we can really have no idea of what has happened to his brain, or what is going on in his mind when his brain is unable to make the connection between a visual image and a verbal production.
From clinical observations of patients with brain damage of various kinds, errors in naming fall into discernible patterns. If the patient is shown a picture of an elephant, for example, the following errors may be noted:
(1) With a cortical lesion in the lower part of the left temporal lobe, the patient is likely to produce a nonsense word with no obvious connection to the target word “elephant,” though usually in compliance with the phonetic rules of the patient’s native language, e.g. “fellermaster.” When such errors occur frequently, the patient’s speech becomes a kind of private jargon, which also sometimes occurs in schizophrenia (“schizophasia”).
(2) When the lesion is found along the temporal-parietal border in the left hemisphere, the patient produces what is called a “semantic paraphasia,” i.e. a word that belongs to the same general category as the target word, e.g. “rhinoceros.” Alternatively, the patient may know what the object is, but be unable to recall its name, as in the “tip of the tongue” errors we all sometimes make. Often there is a resort to periphrasis: “Oh, you know, the big animal with a trunk, eats peanuts.” When these kinds of errors occur so often that they interfere with the fluency of speech, the patient is said to have “anomic aphasia” or “amnestic aphasia,” to use Luria’s term (1977). If we prompt the patient with “e,” she may be able to produce “elephant.”
(3) When the lesion occurs in the upper part of the left temporal lobe, the patient may know the correct name for the object, but be unable to pronounce it correctly: “elephant” becomes “ephelant” or the like. These kinds of errors are characteristic of what is called conduction aphasia.
In the standard psychometric approach, “fellermaster,” “rhinoceros,” “the big animal with a trunk,” and “ephelant” would all be counted as errors, i.e. zeros or ones. From the microgenetic perspective, however, it is not the failure in performance that matters, but rather the nature of the error and what it reveals about normal processing.
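The contrast between flat psychometric scoring and a qualitative reading of the error can be made concrete in a short sketch. The following Python fragment is purely illustrative: the category labels, the word lists, and the use of an anagram test as a crude stand-in for phonemic errors are our assumptions for the sake of the example, not part of the Boston Naming Test or any clinical instrument.

```python
# Illustrative sketch: the same responses scored two ways.
# All data and category names are hypothetical examples, not clinical criteria.

TARGET = "elephant"
SEMANTIC_NEIGHBORS = {"rhinoceros", "hippopotamus", "mammoth"}  # same broad category

def psychometric_score(response: str) -> int:
    """Standard binary scoring: 1 for the exact target word, 0 for anything else."""
    return 1 if response == TARGET else 0

def microgenetic_category(response: str) -> str:
    """Rough qualitative sort into the three error patterns described above."""
    if response == TARGET:
        return "correct"
    if response in SEMANTIC_NEIGHBORS:
        # right category, wrong word (temporal-parietal border lesions)
        return "semantic paraphasia"
    if sorted(response) == sorted(TARGET):
        # right word, scrambled sounds (a crude anagram-based proxy)
        return "phonemic paraphasia"
    # no evident relation to the target (lower left temporal lesions)
    return "neologistic jargon"

for r in ["elephant", "rhinoceros", "ephelant", "fellermaster"]:
    print(r, psychometric_score(r), microgenetic_category(r))
```

The point of the sketch is that the binary score collapses three qualitatively distinct responses into the same "0," whereas even a trivial classification preserves the information that, on the microgenetic view, carries the diagnostic meaning.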
At first glance the foregoing analysis may appear to be a reversion to localizationism, since the nature of the symptom is correlated with the location of the lesion. The modular approach, however, falsely attributes these correlations to the destruction of a particular processor. From the microgenetic perspective, the lesion blocks or slows down a segment of processing, which disorganizes the final product in a manner that depends more on “when” than on “where” the lesion occurs (Brown & Pachalska 2003). The sequence described above is “bottom-up,” both in terms of anatomical location and in terms of the microgenesis of the production of a name: (1) from whole to part; (2) from context to object; (3) from background to figure.
This can be observed not only in the process of specification of the lexemes, but also in their phonological realization, as the target word is narrowed down from a broad concept to ever more specific semantic, lexical and phonological features. This is fully analogous to the microgenesis of a visual percept: the “target” object does not emerge from bits of specific sensory data made into a whole in second-pass processing, but from visual wholes, Gestalts, that are broken down and analyzed into specific features. It is not the nose, eyes, mouth, and chin that make us see the face, but rather seeing the face enables us to see the nose, eyes, mouth, and chin. Analogously, the word is neither understood nor uttered by gathering phonemes from a store and assembling them into words that are associated with mental states and concepts; rather, the concept constrains the word, which constrains the string of phonemes needed to realize it. There is no separate processor that handles only the lexicon, and another that handles grammar, and another that deals only with phonology. It may be useful for analytical purposes to divide language into these aspects, but it is a serious mistake to make these aspects of the linguistic utterance correspond directly to modules. Further subdividing linguistic processes into ever more discrete functions increases the number of necessary processors without producing any useful results, either theoretically or clinically. In reality, what we see in the clinic seldom if ever resembles the kind of “pure” lexical, semantic, or phonological disturbances that a modular brain would lead us to expect. Rather, the effect of a lesion resembles the disturbances caused by putting a stick into a fountain: the extent and nature of the effect depend on how large the stick is, how far it is inserted, and how high or low the point at which it interrupts some or all of the flow.
4. Morphogenesis and the Formation of Symptoms
The best approach to understanding what symptoms can tell us about normal mental functions is to look at the ontogeny of the brain, or more specifically, at morphogenesis, the process by which the brain takes shape. For cognitivism this is an uncomfortable point: it is easier to assume a ready-made brain, built off-line according to a blueprint and then brought on-line as a whole. In morphogenesis a series of transformations, simultaneously constrained by DNA, by the need to adapt to the changing demands of the environment, and by various kinds of learning and habituation, makes a cluster of undifferentiated matrix cells into a brain. In the process, however, the brain is not first built and then put into use; rather, function and structure constrain each other as the brain becomes a mind.
In traditional neurology the problems of development and morphology are kept separate from the problems of function and behavior, on the assumption that developmental processes first form the anatomical structures of the brain, and then these structures begin to process information according to the functions they have been designed to fulfill. For the cognitivists, then, the stages in the ontogeny of the brain are as relevant to its cognitive function as the phases in the manufacture of a computer are relevant to how it works. This approach cannot explain morphogenesis, plasticity, or cognition. The basic problem here is that the real relation of structure to function in morphogenesis is obscured by the obvious but misleading assumption that function is something active, dynamic, an event and not an object, while structure is fixed, static, localized, an object and not an event. This gives rise to the constant temptation to assign mental functions to speciously logical compartments, putting functions into the boxes created by anatomical structures. From the microgenetic perspective, however, structure and function are understood as inseparable expressions of one and the same process, just as brain and mind constitute moments in the actualization of a single process. By looking at morphogenesis in this way, the relation between behavior and growth can be seen in a different and more illuminating light. Growth is the dynamic of a population of cells, while cognitive processes are the realized properties of these populations of cells. As the brain matures, the emphasis shifts from structure to function, but there is no moment when structure ceases to change, or when function ceases to shape structure. Even in advanced old age, the brain is still changing.
Thus the growth of the neural network in the brain is not a phase preceding the achievement of the final, intended form, but the dynamic of morphology itself. Morphology in turn is a cross-section of growth, while behavior is its extension into the fourth dimension. Language and other cognitive processes become possible in ontogeny when the neural network is sufficiently organized to enable the requisite functions to be performed; later, the demands of usage sculpt the network into the form needed to solve problems. If we can describe the processes by which the cells at the end of the spinal cord in the embryo divide and differentiate into the brainstem, the cerebellum, the subcortical nuclei, and finally the cortex, we have by the same token described the structure of human behavior. This is the essence of microgenetic theory.
These general principles are well illustrated by a brief consideration of how identity and personality develop in ontogeny, and how this process shapes the microgenesis of the mental states involved. The first structure of the central nervous system that can be discerned in the embryo is the spinal cord. Approximately four weeks after conception, the end of the spinal cord that lies nearest the head has developed a structure that somewhat resembles an old-fashioned hairpin (see Fig. 1A), with three bulges, known as the forebrain, the midbrain, and the hindbrain. At this point, the embryonic human brain differs very little from the brain of a fish or an amphibian at the corresponding stage, with one essential difference: the brains of fish and amphibians do not develop structurally much further than this.
Fig. 1. The morphogenesis of the human brain: A) the human brain at about four weeks after conception; B) the human brain at about eight weeks after conception; C) the adult human brain in cross section (medial surface of the right hemisphere).
Approximately four weeks later, the brain has developed considerably, assuming the characteristic forms of a mammalian brain, common to rats, dogs, apes, and humans. There are now five distinct segments above the spinal cord (cf. Fig. 1B):
(1) the medulla (the lower part of the brainstem that merges into the spinal cord);
(2) the hindbrain, which will become the cerebellum and the pons (the part of the brainstem that “bridges” the medulla oblongata and the midbrain);
(3) the midbrain, considered by most anatomists to be part of the brainstem;
(4) the diencephalon, which develops into the thalamus and other subcortical nuclei;
(5) the forebrain, which develops into the limbic system, the basal ganglia, and the cortex.
The adult human brain little resembles the microscopic embryonic structures shown in Figures 1A and 1B, but when viewed in cross section (Fig. 1C), the morphogenetic continuity can be seen.
In Figure 2, this brief and somewhat simplified account of the morphogenesis of the brain is applied to the ontogenetic and microgenetic development of the self, identity, and personality.
Fig. 2. The microgenesis of identity and personality in relation to morphogenesis.
Identity and personality thus constitute transition phases in the microgenesis of the self (Brown 2005). The self of the brainstem and midbrain is a biological creature, governed by instincts, drives, and conditioned reflexes, and oriented towards survival. There is no reflection, no thinking, not even affect as we ordinarily understand it. It is hard to speak of identity at this stage, but there is a generalized will to survive that might imply the existence of something that Whitehead would perhaps call a “prehension.” As this self emerges into the realm of the limbic system, affect and the pleasure principle appear, along with the dream self—feeling and experiencing, though distinctly passive in relation to the outside world. Identity emerges into a kind of consciousness, but remains fluid. The likes and dislikes that inform action and perception at this level become the patterns of personality, which is not determined or measured by any one action or omission, but by what the individual habitually prefers or wants to do in various situations. Personality is not a product of cognition, then, but rather is the force by which mood and affect shape cognition. The prompts of personality, shaped by preferences, can sometimes be overcome, at least in those particular situations when the cortex, especially the frontal lobes, finds it necessary or expedient to choose a course of action that runs contrary to one’s feelings (cf. Freud’s “Ego”).
The microgenetic model briefly described here is evolutionary in more respects than simply anatomical. The processing of information, like growth and evolution, is unidirectional, obligatory, and cyclical. The cycles of birth and death, sleeping and waking, are reflected in every moment of life in the growth and perishing of successive mental states. A mental state is not a reversible function, something that can be erased and replaced, like a typing error on the computer screen: the brain has no Backspace key. Thus brain damage does not remove a particular workstation from a mental assembly line, or break the conveyor belt that shuttles semi-finished products from one processor to another. Rather, it changes the way in which mental states move from depth to surface. In both evolutionary and growth processes, then, the mental process proceeds:
(1) from potential to actuality;
(2) from past to present;
(3) from unity to diversity;
(4) from simplicity to complexity.
In microgenesis, then, the mental process—a cognition, a behavior, an idea, an intuition, a feeling—moves in the same direction through the same phases. The millions of years of evolution, all the years of a person’s life leading up to this moment, are compressed into and realized within a span of time measured in milliseconds. Cognition is thus an iterative process, like the heartbeat, which repeats itself a billion times in the course of a single life, though no two heartbeats are exactly the same (cf. Brown & Pachalska 2003).
For many years Jason Brown has been virtually a vox clamantis in deserto, a voice crying in the wilderness (Pachalska 2003; Bradford 2006). To be sure, Karl Pribram (1991), in his book Brain and Perception, following Lashley, referred to earlier work in neuroembryology, which indicated that the patterns of development in embryogenesis remain as force lines determining the direction and nature of information processing in mature perception. It is no longer heresy to suggest that adult cognitive processes are sculpted by ontogeny, which influences learning processes and the formation of mental representations. More and more neuropsychologists are becoming aware that the part-to-whole model of perception is basically faulty (Rosenthal 1988, 2004). And finally, the iron grip of cognitivism on contemporary psychology may at last be weakening. Still, there is a great deal of work remaining to be done. Revolutions are necessary, often exciting, but they can only be accomplished at a cost.
Works Cited and Further Readings
Bradford, D. 2006. “Review of Jason W. Brown, Process and the Authentic Life: Toward a Psychology of Value,” Acta Neuropsychologica, 4, 1/2, 90-102.
Brown, J. W. 1972. Aphasia, Apraxia and Agnosia. Clinical and Theoretical Aspects (Springfield IL, Charles C. Thomas).
Brown, J. W. 1988. The Life of the Mind. Selected Papers (Hillsdale NJ, Lawrence Erlbaum Associates).
Brown, J. W. 1994. “Morphogenesis and mental process,” Development and Psychopathology, 6, 551-64.
Brown, J. W. 1996. Time, Will and Mental Process (New York, Plenum).
Brown, J. W. 2000. “Implications of microgenesis for a science and philosophy of mind,” in Mind in Time: The Dynamics of Thought, Reality and Consciousness, edited by A. Combs, M. Germine, and B. Goertzel (New Jersey, Hampton Press).
Brown, J. W. 2002. The Self-Embodying Mind. Process, Brain Dynamics and the Conscious Present (Barrytown NY, Barrytown / Station Hill).
Brown, J. W. 2003. “What is an object?” Acta Neuropsychologica, 1, 3, 239-59.
Brown, J. W. 2005. Process and the Authentic Life: Toward a Psychology of Value (Frankfurt, Ontos).
Brown, J. W., and Pachalska, M. 2003. “The nature of the symptom and its relevance for neuropsychology,” Acta Neuropsychologica, 1, 1, 1-11.
Brown, J. W., and Perecman, E. 1986. “Neurological basis of language processing,” in Intervention Strategies in Adult Aphasia, edited by R. Chapey (Baltimore, Williams and Wilkins).
Fodor, J. 1983. The Modularity of Mind: An Essay on Faculty Psychology (Cambridge MA, MIT Press).
Freud, S. 1920. Jenseits des Lustprinzips [Beyond the Pleasure Principle]. Beihefte der Internationalen Zeitschrift für ärztliche Psychoanalyse No. 2 (Leipzig, Internationaler Psychoanalytischer Verlag).
Goldstein, K. 1995. The Organism: A Holistic Approach to Biology Derived from Pathological Data in Man, with a foreword by Oliver Sacks (New York, Zone Books).
Goodglass, H., and Kaplan, E. 1983. Boston Naming Test (Philadelphia, Lea and Febiger).
James, W. 1890. The Principles of Psychology (New York, Henry Holt and Co).
Kaczmarek, B. L. J. 2003. “The life of the brain,” Acta Neuropsychologica, 1(1), 12-21.
Luria, A. R. 1966. Human Brain and Psychological Processes (New York, Harper and Row).
Luria, A. R. 1977. Neuropsychological Studies in Aphasia (Amsterdam, Swets & Zeitlinger).
Pachalska, M. 2003. “The microgenetic revolution: Reflections on a recent essay by Jason Brown,” Journal of Neuropsychoanalysis, 4(1), 109-117.
Papathanasiou, I. 2003. “Nervous system mechanisms of recovery and plasticity following injury,” Acta Neuropsychologica, 1(3), 345-54.
Pribram, K. 1991. Brain and Perception (Hillsdale NJ, Erlbaum).
Rosenthal, V. 1988. “Does it rattle when you shake it? Modularity of mind and the epistemology of cognitive research,” in Perspectives on Cognitive Neuropsychology, edited by G. Denes, P. Bisiacchi, and C. Semenza (London, Lawrence Erlbaum), 31-58.
Rosenthal, V. 2004. “Microgenesis, immediate experience and visual processes in reading,” in Seeing, Thinking and Knowing: Meaning and Self-organisation in Visual Cognition and Thought, edited by A. Carsetti (Amsterdam, Kluwer), 221-43.
Werner, H. 1956. “Microgenesis and aphasia,” Journal of Abnormal and Social Psychology, 52, 347-53.
Werner, H. 1957. Comparative Psychology of Mental Development (New York, International Universities Press).
Werner, H., and Kaplan, B. 1956. “The developmental approach to cognition: Its relevance to the psychological interpretation of anthropological and ethnolinguistic data,” American Anthropologist, 58, 866-80.
Werner, H., and Kaplan, B. 1963. Symbol Formation: An Organismic-Developmental Approach to Language and the Expression of Thought (New York, Wiley).
Bruce Duncan MacQueen
Institute of Psychology at the University of Gdansk, Poland
Department of Comparative Literature, University of Silesia, Katowice, Poland
How to Cite this Article
Pachalska, Maria, and Bruce Duncan MacQueen, “Process Neuropsychology, Microgenetic Theory and Brain Science”, last modified 2008, The Whitehead Encyclopedia, Brian G. Henning and Joseph Petek (eds.), originally edited by Michel Weber and Will Desmond, URL = <http://encyclopedia.whiteheadresearch.org/entries/thematic/psychology/process-neuropsychology-microgenetic-theory-and-brain-science/>.