"Implicate Order" and the Good Life
Chapter 3: From the Physical World to the Human World: The Unfoldment of the Implicate Flux
Having introduced our protagonist, the concept of implicate order, we are now ready to apply it in the expectation that it will help us see old problems in a new and clearer light. The present chapter sets out to do this by examining the two issues not treated in Bohm's writings (see Section 2.9): the evolutionary connection between the physical and the human worlds, and the relation of implicate order to the literature on human perception, decision making, experience and interpretation.
We shall cast the evolutionary relationship between the physical and human worlds in terms of the coming into being of an explicate order that is being unfolded from the implicate order. Reaching back to the birth of the universe, we will propose that explicate phenomena unfold (evolve, develop) from the implicate order and come to populate the expanding universe, from atoms and stars through macromolecules, cells and organisms, on to the constituents of the human world, such as percepts, concepts, ideologies and social institutions.
The evolutionary account will build on the concept of implicate order in both its quality as enfoldedness (the point that the whole is enfolded in the parts) and its dynamic aspect (the notion of the flowing river channelled by vortices). In the literature on the various phases of evolution to be reviewed, some writers have emphasized the whole-in-part aspect, and others, the dynamic aspect. Our account will play out these two aspects accordingly, not to prove that they are the "right" concepts to use, or that biological evolution or human development "really" happened this way, but to give a reasonably consistent account of a handful of evolutionary episodes that sees the stable world of explicate phenomena as deriving from and being sustained by the dynamic implicate order.
Since our emphasis in the later chapters will be on the human world, the account of evolution is similarly skewed toward the human end. This will help us substantiate the intuition underlying our research opportunity, namely, that these concepts may lead to an enriched understanding of the human world.
A major conclusion to emerge is that human experience can usefully be seen as an explicate order of categories and concepts, more or less distinct from each other, in close parallel to the way Newtonian-Cartesian mechanics imagines the world's objects to be inherently distinct from each other. We thus end up amplifying the similar point made by Bohm (that the mind gets filled with explicate content), but our argument is going in a different direction: in the next chapter we shall suggest a rather different role for the implicate order in human experience than what Bohm has in mind.
3.2 Evolution as the Unfoldment of Explicate Forms from the Implicate Flux
As discussed at length, the implicate order is intrinsically dynamic in character. This was expressed by means of the image of the flowing stream upon the surface of which more or less stable vortices are formed. For the present purposes, more general terms for the flowing stream and the vortices in the analogy are needed, terms of such generality that they can be given the status of ontological concepts.
Bohm introduced the terms "holomovement" (1980b, p. 150) and later "holoflux" to emphasize the holistic character of the dynamic flow that is the implicate order. In order to avoid burdening Bohm's terms with the ideas to be presented in later chapters, the more generic term "flux" will be used to refer to the dynamic aspect of the implicate order. To accompany the notion of flux, "form" is proposed. Forms channel and "give form" to the flux, just as vortices channel flowing water into stable patterns. Forms are of the explicate order, and the flux is of the implicate order. "River" and "vortices" are terms pertaining to an image or analogy, as discussed in Chapter 2, whereas "flux" and "form" will be considered ontological concepts, on a par with "implicate" and "explicate order." Consequently, we shall occasionally refer to the "implicate flux" (= implicate order) and, perhaps foolhardily, the "implicate flux ontology."
In the view proposed here, "form" is not to be understood as the opposite of "substance," since the latter concept belongs to an essentialistic ontology that is antithetical to the dynamic flux ontology proposed here. To be sure, in special, limited areas of interest, the distinction between form and substance can be quite useful, as when we say that a sculptor imposes a form on a substance like clay or bronze. However, from the flux point of view, this "substance" is itself a form. The "substance" known as bronze is simply a particular organization of the energy (flux) that makes up the atoms in the copper and tin of which the bronze is an alloy. In other words, "form," in the usage established here, is not the "shape" or "contour" or "outline" of a substantial object; it is a stable manifestation of a dynamic flux.
The concept of form proposed here is also not the Aristotelean "Form," which is the conceptual essence of an object: "By Form I mean the essence of each thing" (Aristotle, Metaphysics 1032b 1). In Aristotle Form is contrasted with "Matter," the accidental substance that gives physical realization to the Form (Russell, 1946, p. 177). An object's Form propels it toward a more perfect state. In overcoming the inertia of Matter, the Form expresses a fundamental striving for perfection, termed entelechy by Aristotle. The use of "form" proposed in this chapter implies neither the Aristotelean contrast with matter, nor the idea of discrete essences inherent in each object, nor a teleology.
The word "form" will be used to denote practically any isolable phenomenon. A form is not a special kind of object; all identifiable phenomena may be understood as forms. Naming them so serves an attention-directing purpose: the term points to the idea that phenomena are not static or permanent essences; they emerge from and are sustained by the flux.
The emergence of forms from the flux may be called "unfoldment," a term that was originally used in the glycerine analogy to denote the gradual manifestation of the explicate (unfolded) ink drops from their stretched-out or implicate state. The flux is capable of giving rise to forms that direct the flux into distinct and explicate patterns. As forms unfold, they begin to channel and organize the flux, which in turn nourishes and feeds the unfolding forms. Before unfoldment, there is only flux. During unfoldment, forms appear and distinction and difference are created in the flux. As we shall see, the term "unfoldment" subsumes both "evolution" and "development." Its use serves to remind the reader of the underlying conceptual framework, the implicate flux ontology. However, both "development" and "evolution" will be used occasionally in their conventional senses, whenever called for by the context.
Consider cosmological evolution. If painted with very broad strokes, the evolution of matter and life in the universe may be expressed in terms of the unfoldment of forms from the flux. We recall from Chapter 2 that in the modern view, matter is equivalent to energy and can indeed quite usefully be seen as a particularly stable and observable form of energy. If one takes seriously the immense energy of the quantum vacuum, matter is a tiny ripple on an ocean of energy. So vast is this ocean of energy as compared to matter that Bohm considers even the Big Bang a mere convergence of ripples on the surface of such an ocean. In this interpretation, the Big Bang, far from creating this energy, merely serves to gather some of it and spew it forth in a concerted manner, thus "creating" the universe, much like hundreds of tiny ripples converging on the water's surface may create a huge wave.
In the first instants after the Big Bang, only energy at tremendously high temperatures was present. Physicists have advanced various estimates of the temperatures and energy densities of the very early universe (e.g., Weinberg, 1978, p. 135). What is common to all estimates is the point that in the beginning of cosmic evolution, energy preceded matter. There was energy before there was anything else.
Chaisson (1987) summarizes current research in cosmology by dividing the evolution of the early universe into six epochs. Each of these lasts exponentially longer than the previous one, and during each the energy in the universe cools down in like manner. In the first epoch, lasting less than 10⁻²⁰ second, only superhot radiation is present. The second epoch is the hadron epoch, during which the strongly interacting and heavier elementary particles are "created by a straightforward materialization of matter from the energy of the primal bang" (Chaisson, 1987, p. 13). The subsequent lepton epoch sees the birth by a similar mechanism of the lighter particles, like electrons, and during the fourth epoch, extending from a few minutes to about a million years after the Big Bang, the lighter atoms, such as hydrogen and helium, are synthesized. Matter aggregates into galaxies in the fifth epoch, and stars are formed in the sixth epoch. Inside the stars, the heavier elements are born from the fusion of lighter atoms.
Thus, with the cooling of the universe, matter forms from the background sea of energy (the flux) and undergoes a series of changes, resulting in the emergence of the elements and much larger material structures, such as stars and galaxies. With materialization, these structures do not reify irreversibly or lose their kinship with the energy from which they were born, since they are liable to transform back into energy again at any instant. All material particles have been observed to do just that; none lasts forever. We may say that matter consists of forms called particles that channel the flux for a while and, during this time, appear as solid "objects." They may aggregate and create the impression of tangibility and permanence, but ultimately they are absorbed into the flux again, as a vortex finally dissolves and merges with the river.
At the end of the sixth epoch described by Chaisson, matter has converged in lumps dotting the universe: galaxies, stars, cosmic dust and, in our solar system, planets. It has been suggested that the more interesting aspects of the further evolution of the universe take place on or around these lumps, which are seen as pockets of high complexity and order. The classical belief, deriving from early thermodynamics, that the universe, through the inexorable increase of entropy, would disintegrate and run down was countered by Schrödinger (1945) and Brillouin (1949), who pointed out that the entropy principle (as expressed in the second law of thermodynamics) holds only for closed systems (of which, it was later realized, there are none).
Open systems would be able to build up internal order and complexity by utilizing energy fed to them by the environment and releasing it in a low-quality, high-entropic form. Thus, a highly organized pocket of negentropy, or information, could arise locally, at the cost of increasing the entropy in the surroundings. The earth with its biosphere is such an open system, as it receives energy from the sun in the form of light and other radiation and dissipates it to the cosmos again, mostly in the form of heat energy rising from the surface of the earth.
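The bookkeeping behind this argument can be made explicit. In the notation standard in far-from-equilibrium thermodynamics (a formula added here for illustration, not part of the original discussion), the entropy change of an open system splits into an exchange term and a production term:

```latex
\frac{dS}{dt} = \frac{d_{e}S}{dt} + \frac{d_{i}S}{dt}, \qquad \frac{d_{i}S}{dt} \geq 0
```

The production term d_iS, entropy generated inside the system, is non-negative, as the second law requires; but the exchange term d_eS, entropy carried in and out by flows of energy and matter, may be negative. When the system exports more entropy than it produces, dS < 0 and internal order (negentropy) accumulates, while the entropy of system plus surroundings still increases.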
This evolutionary principle sets the scene for an evolutionary phase beyond the six epochs, as systems much more complex than material particles and aggregates of particles now arise (Chaisson, 1988). By maintaining a throughput of energy, open, negentropic systems utilize energy more efficiently than material particles, which simply bind it. The throughput of energy requires the establishment and maintenance of a boundary that sets off the inside of a system from its outside. We may see such an open system as a complex form that channels the flux (the energy flowing through it) in such a way that a boundary is created and maintained.
3.3 Flux and Implicate Order in Biological Systems
The work of Prigogine and his co-workers (Nicolis & Prigogine, 1977; Prigogine & Stengers, 1984) in the field of far-from-equilibrium thermodynamics adds to our understanding of the evolution of complex systems, pre-biotic as well as biological ones. These researchers have constructed a theory of evolutionary phenomena that identifies the mechanisms by which the build-up of negentropy occurs. When a simple physical-chemical system is displaced from thermodynamic equilibrium (that is, seriously disturbed) by large fluctuations in the energy available in the environment, it may jump to a higher level of organization, provided the energy supply is sufficient. To maintain itself in the more highly organized state, the system uses and gives off, or dissipates, more energy to the environment. This state of the system is therefore called a "dissipative structure."
A dissipative structure is of course not a static object or an arrangement of indestructible, explicate elements, but a temporarily stable form that organizes the flow of energy passing through it. A dissipative structure is a channel for energy, whether this energy enters the system in the form of electromagnetic radiation (sunlight) or the particles of matter we call nutrients, which are, of course, like all matter, simply stable bundles of energy.
Prigogine's work supports the idea that the complex phenomena appearing on the surface of the earth in the late stages of cosmic evolution, whether they are geo-chemico-physical systems or biological organisms proper, may be seen as forms that utilize a throughput of energy or flux and thus maintain (and, in some cases, reproduce) themselves. As expressed by another evolutionarily minded student of thermodynamics: "Organisms are informed dissipative structures, maintaining organization by processing energy" (Wicken, 1985, p. 378).
In general terms, the evolution of life on earth is the story of the progressive complexification of the forms and systems that channel the energy available on earth, whether this energy derives from the earth (such as geothermal energy) or from the sun or other extraterrestrial sources (Bertalanffy, 1952; Ehrlich et al., 1977). Unicellular organisms, plants, animals, ecosystems and the entire biosphere (Polanyi, 1968; Lovelock, 1979) may be considered patterns of energy transformed, utilized and dissipated in biological processes like photosynthesis and metabolism. The common ontological distinction between organisms seen as collections of matter, on the one hand, and the energy contained in the food they ingest and exploited by the organism in work, on the other, may now be replaced by a view of matter (and, hence, organisms) as wholly energetic. In this view organisms are stable patterns of energy flows rather than material creatures feeding on material objects.
Consider now some evidence on the organization and development of biological systems that draws on a concept of order similar to the implicate order. For contrast, let us first note that the conventional view of biological evolution and development, the Neo-Darwinian paradigm, relies on a very explicate view of biological order. Forged in the 1930s, the Neo-Darwinian paradigm combines Mendel's focus on genes as the carriers of inheritance with Darwin's view of evolution as taking place through a struggle between organisms in nature (Huxley, 1942; Dobzhansky et al., 1977).
The modern scientific foundation for this paradigm is molecular biology (Crick, 1966; Laszlo, 1986). Molecular biologists hold that information about an organism resides in its DNA, the large molecules found in the cell nucleus. The genes postulated by Mendel were later identified as strands of DNA. Various genes have been found to correspond to particular traits in the organism, such as pigmentation, the presence of certain macromolecules, various diseases and so on. Such genes have been found to be responsible for the development of these traits, which may be changed through manipulation of the relevant gene (as in genetic engineering and gene therapy).
There is a widespread, tacit assumption among biologists that an organism is essentially a collection of such gene-governed traits or features (Dawkins, 1982; Lumsden & Wilson, 1981). Evolution is seen as the reshuffling of genes in reproduction together with random mutation: the random emergence of a new gene is supposed to singlehandedly produce an entirely new trait that may confer reproductive advantage on the organism and hence change the species or create a new one. Ontogenetic development is believed to be similarly gene-governed: genes turn on and off to initiate the formation of particular traits at various stages in the life of the organism.
As is evident, this view of the organism is a wholly explicate one: genes are found in distinct locales on the DNA (although they are frequently understood to interact, as mechanically as the parts of a machine); one distinct gene corresponds to one distinct trait; there is a causal relationship from gene to trait; and an organism is the sum of its explicate parts.
Instead of seeing organisms as aggregates of genetically controlled traits, Goodwin (1982) considers them to be self-organizing wholes. Organisms develop and evolve, not so much through the random mutations of genes, as through series of ordered transformations that are rationally intelligible and susceptible to mathematical modelling. Consider the analysis of the cleavage process in embryogenesis offered by Goodwin and Trainor (1980).
In the cleavage process the fertilized egg (the zygote) divides first into two cells, then four, then eight, and so on. The zygotes of different organisms go through different sequences of cleavage, with different combinations of constrictions along latitudinal and longitudinal lines (if the zygote is seen as a globe). The problem is to determine why different cells cleave along different lines. This is a classical unsolved problem in the gene-based theory, because cleavage is a global process involving the reordering of the entire cell, and it is notoriously difficult to assign responsibility for global processes to local agents, such as genes.
Goodwin and Trainor take a different approach. They see the cell as a whole described by a morphogenetic field. This field they define by harmonic functions (sine and cosine waves), such that the surface of the cell before the first division is described as a standing harmonic wave oscillating with a wavelength of one circumference. This wave may be visualized as a string laid along a longitude from the North pole to the South pole and back up again. The length of this string is the wavelength of the wave and there is one nodal point (the point of no motion), the South pole. We may visualize this wave causing one side of the cell to bulge out while the other is sucked in, and vice versa. The frequency (also called the wave number) of this wave is an important parameter in the organism's morphogenetic field. In the case of the undifferentiated cell, the wave number is the fundamental frequency, or first harmonic, and its value is 1.
To understand what happens during the cleavage process, where a line around the cell is selected for the constriction, imagine that the wavelength of the wave defining the cell's field is halved, such that the new shorter wavelength corresponds to the distance from the North to the South pole. This length of the "string" now has both an in-bulge (on the Northern hemisphere, say) and an out-bulge (on the Southern hemisphere). The nodal points of the new shorter wave will extend around the equator of the cell and form a nodal line of no wave motion. According to the model, nodal lines are where constriction and cleavage take place.
The halving of the wavelength corresponds to the wave number having jumped to 2, double its original, producing the second harmonic. This transition from the single-cell to the two-cell stage is thus described by a change in the parameter of the morphogenetic field: the wave number jumps from 1 to 2. The next cleavage may occur when the wave number jumps to 4, creating three latitudinal lines of constriction, or one latitudinal and one longitudinal line. Different further sequences of higher harmonics define different courses of cell cleavage.
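The relation between wave number and nodal count can be checked with a toy computation (a sketch added here for illustration; it is a one-dimensional caricature, not Goodwin and Trainor's actual field model). For a standing wave cos(kθ) on a circle, the number of nodal points is 2k, so doubling the wave number doubles the number of nodes:

```python
import math

def nodal_points(k, samples=3600):
    """Locate the nodes (zero crossings) of a standing wave cos(k*theta)
    on a circle by scanning for sign changes between adjacent samples.
    k must be a whole number for the wave to close on itself."""
    nodes = []
    for i in range(samples):
        t0 = 2 * math.pi * i / samples
        t1 = 2 * math.pi * (i + 1) / samples
        if math.cos(k * t0) * math.cos(k * t1) < 0:
            nodes.append((t0 + t1) / 2)  # midpoint approximates the node
    return nodes

# Doubling the wave number doubles the nodal count:
for k in (1, 2, 4):
    print(k, len(nodal_points(k)))
```

Run as-is, the counts come out 2, 4 and 8 for k = 1, 2 and 4, mirroring the doubling of nodal structure at each jump of the field parameter.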
Because the harmonic function must cover the entire circumference of the cell in order to describe the cell as a whole, the wave number parameter can take only whole numbers as values; otherwise the ends of the waves would not meet, and the wave would not be a standing wave. Thus, by the logic of standing waves and harmonic functions, the field parameters can take as values only whole numbers.
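The whole-number constraint is a standard periodic-boundary argument and can be written compactly (an illustrative formulation added here, not drawn from Goodwin and Trainor's text). A wave defined on the cell's circumference must return to its starting value after one full turn:

```latex
u(\theta) = A\cos(k\theta), \qquad u(\theta + 2\pi) = u(\theta) \;\Longrightarrow\; k = 1, 2, 3, \ldots
```

The wave closes on itself only when an integral number of wavelengths fits around the circumference; fractional values of k would leave the ends of the wave unmatched.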
The relationship between the young and the mature organism, as well as between different species, can be understood by reference to morphogenetic fields, Goodwin suggests. The parameters describing these fields can only take discrete values (such as whole numbers), and when a parameter changes from one value to another, this corresponds to a qualitative or discontinuous jump in the morphology of the organism. This occurs in embryogenesis (the cleavage process) and in evolution, when a qualitatively new aspect of an organism or a new species emerges.
As an example of the action of morphogenetic fields in evolution, Goodwin and Trainor (1983) describe the emergence of the pentadactyl limb in terms of changes in the values of the parameters that define the morphogenetic field of the limb. When the parameters change, the equations yield solutions that correspond to cartilage being formed in five lines (fingers) instead of four. The Neo-Darwinian, gene-based explanation would explain this through the mutational emergence of new genetic material that controls for a fifth finger. In view of the enormous complexity of a limb this seems a rather unreasonable explanation. Goodwin posits instead a hierarchy of fields responsible for smaller and larger portions of the organism or tissue in question.
The role of genes, according to Goodwin (1985), is not to control or direct development of traits, but to select the values of the field parameters. Different genetic make-up thus selects different sequences of cleavage, and, by extension, this is what accounts for the difference in embryogenetic development. The morphogenetic field equations determine what kinds of biological order are possible; the genes merely select particular solutions for the equations. Thus, genes only trigger development (and evolution), they do not determine it. The field is primary and genes are secondary; they rank with environmental contingencies as selectors of particular developmental paths.
This quite radical research program pursued by Goodwin and his collaborators aims to understand biological forms as expressions of field equations that describe organisms as ordered wholes. It is no coincidence that the harmonic functions used in the model of the cleavage process are also the functions used in Fourier analysis, the mathematics upon which holography is built. Such functions have global properties that are well suited to the description of certain "holographic" properties of organisms. About this Goodwin says:
Coming now to the question of regeneration, there is a very important property of harmonic functions... If a harmonic function is defined over any part of a domain, such as the bit of the sphere [of the cell], then the function can be uniquely reconstructed over the whole of the sphere. Thus the part contains the whole in a specific mathematical sense, and one has here an analogue to the familiar holograph. We may then use this to deduce the regenerative properties of organisms, defined as harmonic fields: from a part, the whole can be generated. (1980, p. 402)
In other words, the description of organisms in terms of harmonic functions implies they possess the enfolded quality of implicate order, in which information about the whole is present in each part. In a salamander, for example, this quality appears as the ability to regenerate a whole limb from a part of it.
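The mathematical fact Goodwin appeals to can be stated precisely (a standard theorem, added here in symbols; the original gives only the verbal version). A function f is harmonic on a domain Ω when it satisfies Laplace's equation:

```latex
\nabla^{2} f = 0 \quad \text{on } \Omega
```

Harmonic functions are real-analytic, and by the identity theorem two harmonic functions that agree on any open patch of a connected domain agree on the whole domain. In this precise sense the values of the field on a part determine the field on the whole, which is the "holographic" property cited in the quotation.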
Although by Goodwin's own admission still a research program and a philosophical position, rather than a full-blown theory, this field approach (which in its main thrust is shared by a diverse British school of "new-paradigm" biology; see Pollard, 1984; Ho & Saunders, 1984; Goodwin et al., 1983) clearly represents an alternative to the elemental approach of traditional molecular biology with its highly localized and explicate view of the organism as a collection of traits and features. The use of harmonic equations with "holographic" properties suggests that implicate-order principles are at work in the organism, maintaining its internal order amid the throughput of energy feeding the organism as a dissipative structure. It thus appears that although cosmological and biological evolution can be seen as the unfoldment of explicate forms channelling the flux in the background sea of energy, these forms retain an affinity with the implicate order, an affinity that expresses itself in the global order of the organism.
3.4 Perception and the Construction of Objective Reality
The evolutionary emergence of biological forms depends on the formation of membranes or boundaries that distinguish between an inside and an outside. The difference or "gap" between inside and outside is mediated chiefly by the organism's nervous system, which enables the organism to orient itself in the world.
At some point during biological evolution an inner world arises that greatly enhances the organism's facility to act on and in the world. There is widespread disagreement among anthropologists as to in what primates or at what stages of Homo sapiens, and under what cultural regimes, such an inner world arises. Rather than enter this discussion of historical origins we may take the evolutionary emergence of the inner world for granted and explore the manner in which it is ordered and maintained in the contemporary human being.
A contemporary researcher on perception sums up the classical and modern views on perception thus: "It has usually been thought that perception occurs passively from inputs from the senses. It is now, however, fairly generally accepted that stored knowledge and assumptions actively affect even the simplest perceptions" (Gregory, 1987a, p. 601). In other words, the modern scientific view is that the energy received by the sensory surfaces and relayed to the brain interacts with already structured neural (chemical, electrical, etc.) activity occurring inside the organism. The sensory inputs are subject to the active, shaping influence of the patterns of activity already present in the organism. The organism is no mere receptacle of energy relayed from the senses, but contributes in the very act of perception to the generation of the form of what is perceived (the percept).
Everything perceived leaves a trace in memory. These traces mingle with and become part of the ever-present, internally maintained neural activity (Skarda & Freeman, 1987, p. 164), which meets the energy coming in from the senses. This internal neural activity is patterned, to various degrees, by forms deriving either from earlier experience (that is, memories) or from genetic inheritance.
Broadly speaking, it is in the meeting of more or less highly patterned neural activity deriving from external and internal sources that conscious experience is created. Exactly what the relationship between neural activity and consciousness is has been the subject of debate among scholars for at least a hundred years (some recent positions include Popper & Eccles, 1977; Bunge, 1980; Dennett, 1978; Churchland, 1986). The modern view seems to be that neural and mental activity are related in a non-trivial way and that an understanding of neural activity is important for an understanding of mental activity. Churchland (1988, p. 22) states that "...to put it succinctly, mental processes are brain processes." Although this statement may be a somewhat extreme view, there seems, nevertheless, to be a general agreement that neural processes lead to mental processes and that mental processes presuppose neural processes. Let us treat these two aspects of what goes on in a human nervous system as one and call it "neural-mental activity."
Like other processes and behaviors, neural-mental activity is the flux as channelled by particular kinds of forms, forms occurring in the nervous system. Neural-mental forms may be described in terms of a number of characteristics. First, the forms or patterns of neural-mental activity need not be localized in physical space or organized in a way that is isomorphic to the shape of that which is perceived or imagined by the mind. More likely these forms are to be understood as configurations of neural events, the order of which may be depicted as stable states in a phase space (Hopfield, 1984), as attractors in a chaotic regime (Skarda & Freeman, 1987; Babloyantz & Destexhe, 1987) or as Fourier transformations of interference patterns, as will be discussed in detail below.
Second, the degree of unfoldment of these forms varies. Very distinct and well-differentiated neural-mental forms, such as instantaneous motor reflexes or the unerring recall of one's own birth date, may be called highly unfolded or manifest, whereas more indistinct, tentative or flexible forms, such as the coordination of muscles in a person recovering from paralysis or the guesswork of an unprepared test-taker, are less so.
Third, neural-mental forms may be more or less known to the person holding them, which is to say that "unfolded" does not mean "conscious." Highly unfolded forms, such as a well-established habit, may be known in detail to the subject (I eat too much sugar and I know it) or they may be unknown to him (some bad habit I am not aware of, but which other people see). Similarly, less unfolded forms may be known to the person (I have only the rudiments of the kind of motor skills and musical discrimination required of a piano player and I know it) or they may be unknown (the forms channelling the activity of my vocal cords may not be as well-developed as I thought, that is, nobody ever told me that I don't sing well). In other words, how unfolded a form is has nothing to do with whether we are aware of the form or not.
Summing up, there is flux outside the organism, and there is flux inside. Some of the outside flux enters the organism through the senses and interacts with the patterned flux inside. Both the inside and the outside flux derive from the same source: the ocean of energy, the implicate domain of flux, from which the energy is channelled through the Big Bang and aggregates in such repositories as the sun, the earth's primary source of energy today. This states in other words a point that Bohm made without going into the evolutionary story (see Section 2.8): the implicate ocean of energy is ultimately the source of both the material and the mental worlds; there is no absolute distinction between them.
Having suggested the general relevance of the form/flux distinction to the study of perception and neural-mental phenomena, we may proceed to consider in more detail how the forms that appear in our experience are generated. That is, how do we come to experience the world as consisting of objects of particular sizes and shapes? What is the nature of the process by which the flux from the world outside our skin meets the flux inside and creates the stable forms that populate our awareness of the world?
In the dominant philosophical tradition, the question of how humans perceive the world and come to know about it has been addressed in terms of direct apprehension or representation. Classical speculative epistemologies, such as those of John Locke (1975) and David Hume (1977; Price, 1981), as well as more modern experimental theories of perception held that proper knowledge consists of direct mental representations of real objects in an external world.
This empiricist view was supported by the work of Hubel and Wiesel (1959), who won a Nobel Prize for their now widely accepted theory that different cells in the visual cortex are sensitive to and hence pick out different features in the environment, such as bars, lines and edges. The perception of a whole image is thought to involve the assembly of such quasi-Euclidian elements. This theory assumes that the world is organized in correspondence with Euclidian geometry, an assemblage of points, lines, angles and other discrete, externally related features or elements: in other words, an explicate order.
The brain is thought to process information about this world in an explicate manner. Individual cells or cell assemblies pick up the Euclidian features and then aggregate them in higher brain centers. This is commonly (and facetiously) referred to as the grandmother-cell theory of memory storage because it implies that one particular cell is responsible for the recognition of one's grandmother.
Against this mainstream view of perception the neuropsychologist Karl Pribram introduces the idea of implicate order (1971, 1975, 1986; Pribram, Nuwer & Baron, 1974). He argues that the information we receive about the external world arrives at our senses as an implicate order, not as a simple, explicate order of Euclidian elements or features. Recall from Chapter 2 the example of the sunlit room, in which information about every exposed surface in the room is distributed in the form of light waves to all other parts of the room. The order of the interference patterns of light is implicate, and "information" about the room reaches the eye as an implicate order of electromagnetic energy, not as distinct and well-formed, explicate objects.
Likewise, the voices from a roomful of cocktail party guests are distributed as sound waves throughout the room in such a way that voices from the whole room are present in (or enfolded in, as Bohm would say) every part of the room (with variable audibility, of course). The voices and other sounds reach the ears of a given guest as an enormously complex, implicate interference pattern of sound waves.
The question addressed by Pribram is this: How do we transform the implicate information that reaches our senses into the explicate objects and boundaries that we experience? When we look around in a room we see stable and well-bounded shapes, such as windows, chairs and books, all of which constitute an explicate order. How does this experience come about when the light reflected from the walls arrives at our retinas in an implicate interference pattern that contains no separable objects or neat distinctions? At the cocktail party we are able to pick out several distinct and recognizable voices. How do we do this when the sounds of the entire room are distributed, enfolded and overlapping? How can it be that the content of our experience, our world of experience with its usually quite distinct entities and events, seems to be organized so differently from the flux received by the senses?
To answer these questions Pribram draws on work that shows that cells in the visual cortex are sensitive, not so much to explicate geometric features as to stimuli that are wave-like (or, in our terms, implicate) (Campbell & Robson, 1968; Pollen & Taylor, 1974; DeValois, Albrecht & Thorell, 1978). Bar patterns (gratings) are most frequently used to demonstrate this effect. Alternating vertical bars of black and white are equivalent to a square wave if the light intensity is plotted as one scans the bars horizontally (black is a trough, white is a crest). Such a square wave has a "spatial frequency," as it is called, the magnitude of which is determined by the number of black bars per unit distance as measured by degrees of visual field. Narrow bars and spaces correspond to a high spatial frequency, wide bars to a low frequency. Brain cells in the visual cortex are responsive to different frequencies of such spatial "waves." That is to say, a given cell fires when a bar pattern of a particular frequency is presented in the visual field and not when a bar pattern of another frequency is presented (the selective firing of neurons being the standard means whereby inferences are made about the workings of the brain).
Patterns more complex than a grating can be similarly analyzed into their component frequencies by the mathematical technique of Fourier analysis. The spatial-frequency hypothesis, which has been adopted by Pribram and numerous other researchers (Weisstein & Harris, 1980; De Valois & De Valois, 1980; Graham, 1980; Shapley & Lennie, 1985), says that the visual system performs a Fourier analysis on the complex patterns of light received by the eyes.
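The grating example can be sketched numerically. The following is an illustrative sketch (the sampling grid and bar count are invented for the example, not taken from the experiments cited): a one-dimensional bar pattern is Fourier-analyzed, and apart from the mean term the magnitude spectrum peaks at the grating's spatial frequency.

```python
import numpy as np

# A hypothetical 1-D "grating": alternating black (0) and white (1) bars,
# sampled at 256 points, with 8 full bar cycles across the visual field.
n, cycles = 256, 8
grating = (np.floor(2 * cycles * np.arange(n) / n) % 2).astype(float)

# Fourier analysis of the bar pattern: apart from the mean (DC) term,
# the magnitude spectrum peaks at the grating's spatial frequency,
# with smaller contributions from its higher harmonics.
spectrum = np.abs(np.fft.rfft(grating))
dominant = int(np.argmax(spectrum[1:])) + 1  # skip the DC term
print(dominant)  # → 8 (cycles per field)
```

A narrower bar pattern (more cycles across the same field) would shift the peak to a correspondingly higher spatial frequency.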
What this means in our terms is this. An implicate order of electromagnetic energy (light) is reflected from a given image in the subject's surroundings and impinges on the retina. We perceive this image by decomposing or analyzing the (explicate) image into its (implicate) spatial frequencies. Each frequency is picked up and processed by specific neurons or neuronal complexes that "resonate" to that particular frequency. This is how visual images are thought to be perceived and stored in memory. Retrieval from memory occurs through a synthesis of the component frequencies. This synthesis transforms or unfolds the implicate spatial frequencies into the distinct, explicate images that appear in our conscious experience.
Like vision, the sense of hearing is clearly based on the transformation of an implicate flux of energy into an explicate order of experienced phenomena. What reaches the ear at a cocktail party is an interference pattern of sound waves coming from all over the room, not voices neatly separated into distinct categories. The cochlea of the inner ear performs a Fourier analysis of this interference pattern, whereby the constituent frequencies are separated and processed by different neurons or sets of neurons. Voice recognition results from the synthesis of these component frequencies into well-known acoustic gestalts characteristic of individual speakers (Moore, 1982).
Thus, the implicate order of impinging sound waves is picked up and processed in the brain as frequencies and then appears in our consciousness as an explicate order of distinct and localized voices and sounds heard in a room: "Ah, isn't that my wife I hear over there!" (The ability to distinguish sounds and unfold an explicate acoustic order is, of course, an acquired skill, as is all experience. How we learn to make such distinctions will be discussed in the next section.)
On the basis of evidence from the sense of touch (Bekesy, 1959), Pribram (in Goleman, 1979) hypothesizes that all the senses operate similarly to construct distinctions and categories from the flux of energy that reaches the sense receptors, whether this energy is electromagnetic (as in sight), kinetic-mechanical (hearing, touch), chemical (smell), thermal (temperature sense), hydrodynamical (sense of balance), or of any other kind.
To repeat: in perception the senses receive an implicate flux of energy, perform a Fourier analysis of this pattern of energy, and distribute each component frequency to resonating brain structures. To recall from memory is to access the stored component frequencies and perform a Fourier synthesis on them so as to construct an explicate pattern or object similar to the original, only this time the pattern or object appears in conscious experience: "I see a car."
Although (according to this view) our brains operate in the implicate frequency mode, we do not experience the world as consisting of frequencies. We experience mostly objects and events with fairly well-defined boundaries, the common-sense explicate order of tables and cars and people, particular smells and tactile sensation, the distinct clicks of typewriter keys, and so on. This seems to be a major function of the nervous system: to transform implicate order into explicate order, that is, to take the implicate order of interference patterns around us and unfold an inner explicate world, a world of consciousness and experience that enables us to orient ourselves in the outer world and manipulate it.
The transformation of implicate energy into explicate experienced objects also involves a projection of the percepts away from the sensory surfaces. We receive light on our retinas, but we experience the things from which the light is reflected as being away from ourselves, out there. The air vibrates in the cochlea, but we hear the sound as coming from somewhere away from ourselves. When we mix a sticky batter with a spoon the pressure from the spoon is in the hand, but we experience that the batter is at the end of the spoon, down in the bowl.
This sensory projection is fundamentally associated with the aforementioned distinction between the inside and the outside of an organism. The faculty of projection involves the establishment of a difference between a projecting subject and projected objects (cf. Kaplan, 1983) and makes possible an inner world, an arena where the outer world is experienced and played out through perception, memory, imagination, and so on.
Furthermore, to project something is to place it somewhere specific away from us: "I saw the ball over there, ten feet away, a little to the right, further down, etc." Projection involves fixing the projected objects in space (and time) and thus establishes the localizability that is characteristic of the explicate order. Projection implies the construction of a world filled with objects and object-like categories--an objective reality, in other words--which occupies definite and distinct locations in real or conceptual space. This world is the world of experience with its explicate constituents.
It follows that the senses and the nervous system are not so much the means whereby we connect with an external reality, as common sense has it, but the means whereby we distance ourselves from the rest of the world, and thus create a distinction between subject and objects. This distancing involves projection and objectification of an outer world, which then appears in our experience as such.
3.5 Decision Making: A Holographic Theory
Consider now another strand of research that suggests a key role for implicate order in the workings of the nervous system, specifically the human brain. Decision theory is the study of how people behave in decision situations and why they make particular decisions. In this section we shall contrast the "explicate" approach of standard decision theory (e.g., Edwards & Tversky, 1967) with a novel and more "implicate" approach proposed by Rashi Glazer (1987, 1988). The following discussion is based largely on his account.
A decision situation involves an individual who is to choose between a number of objects in an object set and who does so on the basis of certain preferences. These preferences specify an ideal object. The standard approach to decision making assumes that each object in the choice set has a number of attributes, which can be assigned certain values, or scores. In the decision process, the scores for each attribute of an object are weighted and combined arithmetically to produce an aggregate score for the object. Each object is then compared with the ideal object, and the object whose score most closely resembles that of the ideal object is chosen (or, in a limited search situation, the first object whose score exceeds some predefined threshold).
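As a minimal sketch of the standard additive model (the attribute names, weights and scores below are invented for illustration, not drawn from any cited study):

```python
# Illustrative attribute weights; the standard model combines weighted
# attribute scores arithmetically into one aggregate score per object.
weights = {"price": 0.5, "quality": 0.3, "style": 0.2}

def aggregate(obj):
    return sum(weights[attr] * score for attr, score in obj.items())

ideal = {"price": 5, "quality": 5, "style": 5}
objects = {
    "A": {"price": 4, "quality": 3, "style": 5},
    "B": {"price": 3, "quality": 5, "style": 2},
}
# Choose the object whose aggregate score comes closest to the ideal's.
target = aggregate(ideal)
choice = min(objects, key=lambda name: abs(aggregate(objects[name]) - target))
print(choice)  # → A
```

Note that the comparison operates on a single summary number per object; any pattern among the attribute scores is discarded, which is exactly what the independence axiom discussed below licenses.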
Underlying the standard approach is the so-called axiom of preferential independence (Krantz et al., 1971). This says that an individual's preferences are neutral to transformations of the choice objects. That is, if a person prefers a Chevrolet to a Toyota, this preference will not be altered if a radio is added to the car, this addition being a "transformation" of the choice object, the car. In other words, we can add or subtract attributes from objects without changing a person's judgments about which object is the better. The attributes are independent of each other; they can be shuffled at random without affecting the individual's preferences. The independence axiom is thus an assumption that people are insensitive to the pattern or structure of object attributes. (This assumption is widely recognized as unrealistic, but is required to make the mathematics of decision theory work. More about this below.)
Underlying the independence axiom is, according to Glazer, a further assumption about the independence of the world's attributes or features, which we may call ontological independence. The standard approach assumes atomistically that features come in all possible combinations; there is no pattern or structure or redundancy in things.
This assumption obviously imposes a tremendous burden on human information processing, because when confronted with a decision situation an individual cannot rule out any combination of features beforehand, but has to evaluate all possible combinations. This is the view of the decision situation underlying one of the two dominant schools in the standard approach to decision making, the rational school (cf. Glazer, 1987, pp. 4-10). To be fully rational is to search through all combinations and find the optimal solution.
We may pause here to note that the axiom of preferential independence paints a very explicate picture of the human mind, not unlike the feature-detection theory of perception discussed in the previous section. People are supposed to be sensitive to discrete attributes of explicate objects. The assumption of ontological independence extends this explicate picture from the human perceptual apparatus to the world as a whole, with its notion of the world being an essentially random collection of distinct features with no structure or redundancy.
In reaction to this impossible demand on human decision making, Simon (1957) advanced the bounded-rationality model, which assumes that to cut down on the amount of processing required, individuals use numerous rules-of-thumb for finding shortcuts through searches, or so-called heuristics. Empirically, humans engage only in limited searches that turn up good-enough solutions; they "satisfice" rather than optimize.
However, as Glazer points out, the bounded-rationality school retains the axiom of preferential independence as well as the deeper and less recognized assumption about ontological independence. The difference is that the bounded-rationality school assumes that people impose some structure on the world by using the heuristics; they "cheat," so to speak, because they cannot live up to the expectations of the rational model that has them engage in exhaustive searches for every decision. This notion has given rise to a now common distinction between the rational model being seen as normative ("this is how people should behave if they were rational...") and the bounded-rationality model being seen as descriptive ("...but this is how people actually behave").
Glazer argues that it is time to resolve this dichotomy and find a third point of view that is both compatible with human information processing abilities and describes people as making good or even optimal decisions. To do this, he first disposes of the assumption of ontological independence. The world's features do not come in any arbitrary combination. There is structure and redundancy in the real world; there are definite constraints on which attributes go together. For example, if told that a certain rodent has a bushy tail, any child will know that it lives in trees (and is a squirrel) and not in the gutter. The features of the animal world are so organized that there are no rodents with bushy tails that live in the gutter; such rodents have slender tails (and are called rats). In other words, the redundancy of rodent attributes enables us to infer from "bushy-tailed" to "tree-dwelling," but rules out the combination "bushy-tailed" and "gutter-resident." Instead of ontological independence we must assume ontological interdependence.
Glazer suggests that the human cognitive apparatus is attuned to this ontological interdependence and exploits it in decision making (as the child does in determining where the bushy-tailed rodent lives). In other words, people use the redundancy of the world to make more efficient decisions. If the world's features do not come in all conceivable combinations there is obviously no need for the human mind to plow through them all (as the rational model would have us do).
If there is no need for searches through all combinations there is also no need to assume that people get around these searches by using heuristics (as the bounded-rationality model presupposes). With fewer combinations to choose from and definite patterns among the attributes in these combinations, optimal decision making may require less than total information and would hence be compatible with the well-known limitations on the information-processing capacities of human beings.
Now, what kind of theory assumes ontological interdependence and human sensitivity to structure and pattern? Theories of pattern recognition, Glazer suggests (Duda & Hart, 1973; Young & Calvert, 1974; cf. Margolis, 1987). Inspired (pers. com.) by Pribram's work on a holographic hypothesis of brain functioning (Pribram, 1971), Glazer opts for a particular kind of pattern-recognition theory, namely, a holographic one, and he constructs what he calls a holographic theory of decision making. This theory is based on Fourier analysis (as was the spatial-frequency school in visual perception discussed in the previous section), which uses the concept of frequencies to detect structure and order in data.
According to the holographic theory of decision making, the set of attributes of a given object is subjected to Fourier analysis, whereby the attributes are transformed from the "feature domain" of distinct attributes into the "frequency domain" of mathematical waves. (In human decision making, this is presumed to happen in the brain. More about concrete brain mechanisms below.) Fourier analysis produces a description of the object in terms of a set of waves with characteristic frequencies, amplitudes and phases. The set of these waves, which are commonly referred to as component frequencies, as in the previous section, is called the Fourier transform. The lower frequencies (long waves) in the transform are associated with the coarser aspects of the object, the higher frequencies (short waves) with the finer aspects.
Recall the example of the bar grating as interpreted by the spatial-frequency school. When this grating is Fourier-analyzed, the lower frequencies will describe in "broad strokes" the alternating pattern of bars, whereas the higher frequencies will describe exactly the boundary where a black bar turns into the white space between bars. The lower frequencies sweep the entire object and help us distinguish, say, a squirrel from a rat, whereas the higher frequencies relate to finer details in the animals and help us distinguish between kinds of squirrels, or between particular individual squirrels.
The more component frequencies are used in the description of an object, the more complete and detailed is the description. In other words, the depth of processing is proportional to the number of frequencies involved. Informally, to know something cursorily is to use only the lowest frequencies; to know something in depth is to use the higher frequencies, too. If an individual does not have to know all the details of a given decision situation (that is, all the frequencies required for a complete description of the choice objects), but can still make a decision that is as good as if he had used all the frequencies, then his decision making is efficient.
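The "broad strokes versus fine detail" point can be illustrated with the bar grating from the perception discussion. In this sketch (a numerical illustration, not a model of any actual neural mechanism), reconstructing the grating from only its lowest component frequencies recovers the alternating pattern but smooths the bar edges, while the full frequency set restores the pattern exactly:

```python
import numpy as np

# The bar grating again: 8 black/white cycles across 256 samples.
n, cycles = 256, 8
bars = (np.floor(2 * cycles * np.arange(n) / n) % 2).astype(float)

def reconstruct(signal, keep):
    f = np.fft.rfft(signal)
    f[keep:] = 0  # discard the higher component frequencies
    return np.fft.irfft(f, n)

coarse = reconstruct(bars, keep=10)        # broad strokes only
full = reconstruct(bars, keep=n // 2 + 1)  # every component frequency kept
print(np.max(np.abs(full - bars)) < 1e-9)  # True: exact reconstruction
print(np.max(np.abs(coarse - bars)))       # larger: bar edges are smoothed
```

Using more frequencies literally deepens the description, in line with the depth-of-processing idea in the text.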
The Fourier approach allows us to see decision making as efficient because it is based on pattern recognition, that is, the recognition of broad-stroke patterns and redundancy in the data. If there is interdependence between features in the world, only a few low frequencies will suffice to pick up the redundancy and distinguish properly between objects, that is, make the best decision. Thus, the Fourier approach reduces the amount of information processing required of the individual (which was the virtue of the bounded-rationality model), while accounting for the human ability to make good or even optimal decisions (the virtue of the rational model). Herein lies Glazer's resolution of the dichotomy between the normative, but unrealistic rational model and the descriptive, but suboptimal bounded-rationality model.
Now, in the actual decision process, according to Glazer's theory, the Fourier transform of each object (that is, its set of characteristic frequencies) is compared in turn with the Fourier transform of the ideal object. In other words, the comparison or matching takes place in the frequency domain, rather than in the feature domain as assumed by the standard models of decision making. In the Fourier approach, the object chosen is the one whose Fourier transform most closely resembles the Fourier transform of the ideal object (or the first one that exceeds a given threshold value, just as in the standard approach).
While this comparison can be performed by a purely mathematical procedure (as done in Glazer, 1987, 1988), the assumption is that the brain can do this, too. Before we consider possible brain mechanisms for this procedure, it is instructive to consider another physical realization of the Fourier transformations required. A hologram is essentially a Fourier transform of an object (Caulfield, 1979). Optical systems that employ holograms to perform comparisons like the one just described are called optical computers (Abu-Mostafa & Psaltis, 1987). An optical computer determines how closely two objects match by simply overlaying their holograms and shining laser light through them. The brightness of the light beam that emerges on the other side of the two holograms is proportional to the goodness of the match.
Because the incoming laser light hits the entire hologram at once, this procedure operates on all the available information in parallel, rather than sequentially, as in the extensive calculations required by the standard decision making models. The comparison is holistic, in that the whole gestalt or pattern of information that is the hologram (Fourier transform) is compared to that of the other hologram in the instant it takes for light to pass through. Recognition is immediate; the result simply falls out and registers as a more or less bright spot on the wall behind the two holograms. To choose one object among many, make holograms of them all (Fourier-transform them into the frequency domain), compare each of them to the hologram of the ideal object, and pick the one that produces the brightest spot.
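This matching step can be sketched computationally, under the assumption that the "brightness" of two overlaid holograms behaves like the normalized correlation of the objects' Fourier transforms (the attribute patterns below are invented for illustration, not taken from Glazer's data or any optical-computing system):

```python
import numpy as np

def hologram(obj):
    # Stand-in for a hologram: the Fourier transform of the object's
    # attribute pattern.
    return np.fft.rfft(np.asarray(obj, dtype=float))

def brightness(a, b):
    # Model the overlaid holograms' brightness as the normalized
    # magnitude of the inner product of the two transforms.
    fa, fb = hologram(a), hologram(b)
    return abs(np.vdot(fa, fb)) / (np.linalg.norm(fa) * np.linalg.norm(fb))

ideal = [1, 0, 1, 0, 1, 0, 1, 0]
candidates = {
    "A": [1, 0, 1, 0, 1, 0.2, 1, 0],  # nearly the ideal pattern
    "B": [1, 1, 0, 0, 1, 1, 0, 0],    # a different pattern
}
# Compare each candidate's "hologram" to the ideal's; pick the brightest.
choice = max(candidates, key=lambda name: brightness(candidates[name], ideal))
print(choice)  # → A
```

Each comparison is a single holistic operation on the whole transform, rather than an attribute-by-attribute calculation, which is the point of the optical analogy.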
The optical matching of holograms is reminiscent of certain aspects of the phenomenology of human decision making. The lightning speed (literally) of hologram comparisons compares favorably with the speed with which people can evaluate complex objects, for example, decide which painting on a wall is the most interesting. Such decisions often occur in a half-second, which would be much too short a period for the lengthy calculations required by the standard models. The common experience that a decision "feels right" suggests that there may not be any arithmetic computation involved, but simply a fast pattern-recognition mechanism: we often "recognize" the right course of action more than "compute" it. The fact that we do not know how such decisions are formed suggests that the processes are pre-symbolic and subconscious.
Let us sum up the discussion so far: If the world's features are indeed interdependent and redundant, the Fourier approach will be a way to capture that redundancy. Individuals are attuned to this interdependence and they use it to reduce the amount of information processing they have to do, which increases their efficiency (in the squirrel example, efficiency comes from knowing that the world is so structured that bushy-tailed rodents live in trees, not in the gutter). In the Fourier approach, efficiency involves using fewer frequencies, the lower ones.
To test the hypothesis that human information processing is indeed based on pattern recognition, specifically the Fourier approach, one must devise a decision-making experiment where individuals make decisions in ways that can be predicted by the mathematical Fourier approach. This would involve Fourier-transforming the attributes of the objects in the choice set, comparing them in the frequency domain by an efficient method (one that uses a reduced set of frequencies), and predicting the choices that the individuals in the experiment make.
Glazer designed and performed such an experiment (1988). Subjects were asked to imagine themselves to be managers about to purchase a product for their firm. Each product (or choice object) was described by five attributes, such as ease of use, ease of trial and relative advantage. The subjects were told not to weight the attributes (for the sake of simplicity). Each attribute could take three values: High, Medium or Low. Twenty-five product profiles were listed, such as HHHLH, HMHMH, and so on, and the subjects were required to rank them in decreasing order of preference (given that the profile of the ideal object is HHHHH). The ranking procedure allowed no ties to be entered, that is, the subjects had to make up their minds about the ranking of every single case.
What rank orderings do people produce? The standard model assumes that the attributes are independent of each other and that the score for each object is simply the sum of the attributes' values. If H=3, M=2, and L=1, the standard model would predict a tie between HHHLH and HMHMH, each of which scores 13, and between HLHLH and HMMMM (score 11), and so on. That is, people should be preferentially indifferent to products with such profiles. Given that the experiment permitted no ties, the standard model would expect people to break the ties randomly, some ranking HHHLH over HMHMH and some the other way round.
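The tie arithmetic of the standard model can be checked directly (additive scoring over the profile letters, with H=3, M=2, L=1 as in the text):

```python
# Additive feature-domain scoring: each letter contributes its value,
# and the object's score is the unweighted sum.
values = {"H": 3, "M": 2, "L": 1}

def score(profile):
    return sum(values[letter] for letter in profile)

# The standard model predicts exact ties between these profile pairs.
print(score("HHHLH"), score("HMHMH"))  # → 13 13
print(score("HLHLH"), score("HMMMM"))  # → 11 11
```

Since the sums are identical, the standard model has no basis for ranking one member of a pair above the other; any consistent preference between them must come from something the additive score throws away.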
Glazer's preliminary results indicate that people are not indifferent to the profiles in these "ties." People tend to agree on how the ties are to be broken, preferring HMHMH over HHHLH, and HLHLH over HMMMM. Running the mathematical Fourier transformations with a reduced set of frequencies on the 25 objects, Glazer found that the Fourier method predicted the overall ranking, as well as the particular break of each tie, with some accuracy (precise results are not available in Glazer's working paper), and better than the standard models.
The reason profiles that tie in the standard approach do not tie in the Fourier approach is, as discussed, that the Fourier approach is sensitive to redundancy and overall structure in the object. Objects with a highly regular or symmetric distribution of attributes will be favored over those that are irregular or asymmetric. Hence the ranking of HMHMH over HHHLH; in the first object each attribute score is related to its adjacent attribute scores in a harmonic wave pattern, up-down-up-down-up, whereas the distribution of attribute scores in the other object is more irregular. Similarly for the tie-break of HLHLH over HMMMM; the first is a recurrent, harmonic pattern, the other is not.
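The underlying point, that a feature-domain tie need not be a frequency-domain tie, can be verified directly. The following sketch simply Fourier-transforms the two profiles that tie at a score of 13; it illustrates the idea and is not a reconstruction of Glazer's actual reduced-frequency procedure:

```python
import numpy as np

H, M, L = 3, 2, 1
hmhmh = np.array([H, M, H, M, H], dtype=float)
hhhlh = np.array([H, H, H, L, H], dtype=float)

# In the feature domain the two profiles tie...
print(hmhmh.sum() == hhhlh.sum())  # → True (both score 13)

# ...but their magnitude spectra differ: the harmonic HMHMH concentrates
# its deviation from the flat ideal largely in one component frequency,
# while HHHLH spreads its deviation evenly across the frequencies.
print(np.round(np.abs(np.fft.rfft(hmhmh)), 2))
print(np.round(np.abs(np.fft.rfft(hhhlh)), 2))
```

Because the spectra differ, a matching rule that operates on the transforms has information with which to break the tie, which is what Glazer's subjects appear to have done.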
In this experiment, the redundancy between attributes is, of course, contrived and without practical relevance, since it depends on the arbitrary concatenation of attributes in attribute space. This, however, does not detract from the general point Glazer is making here, namely, that people rely on redundancy and structure in the information presented to break ties and make decisions. Probably unknowingly, the experimental subjects have picked up the relationships between the attribute scores, that is, the redundancy in the data, to decide how to rank the objects and break the ties, preferring harmonic patterns to anharmonic ones.
Moreover, since the Fourier transformations performed in this experiment used only a subset of the total set of frequencies needed to describe the objects completely (nine out of 25), the results indicate that Fourier pattern recognition is indeed efficient. Thus, they suggest a mechanism whereby efficient and optimal decision making is actually possible, a method that recognizes that people use the redundancy present in the environment in such a way that less than total information is required to obtain optimal results.
What is of particular interest to us in the holographic theory of decision making is of course the idea, tentatively supported by Glazer's preliminary results, that the brain processes that underlie decision making may not take place in the feature domain of distinct and explicate attributes, but rather in the holistic frequency domain, or implicate order, of the Fourier approach, much as the spatial-frequency school pointed to a central role for the implicate order in visual perception.
Glazer does not speculate about how the Fourier transforms are produced, processed and stored in the brain. However, a number of researchers in the neurosciences have developed the holographic metaphor into a theory of brain memory whereby information in the form of Fourier transforms would be distributed across some volume of the brain, somewhat in the manner of light waves sweeping across a holographic film (Westlake, 1968; Baron, 1970; Pribram, 1971; Pietsch, 1981; Willshaw, 1981; Eich, 1982).
Pribram's work is representative of this research (see also Pribram, Nuwer & Baron, 1974; Pribram, Sharafat & Beekman, 1984). As indicated above, he finds that the brain computes in the implicate Fourier domain by storing and correlating frequencies rather than explicate geometric features. To use the computer analogy, Pribram suggests that the brain works on Fourier transforms as the optical computer works on holograms; the brain does not work on explicate Euclidian features, in the manner of the digital and sequential von Neumann computer. But in the absence of holographic films with high-resolution emulsions, exactly how does the brain represent (store and process) Fourier transforms?
Pribram (1971) suggests that Fourier transforms may be stored and processed in the network of small nerve fibres (dendrites) that conduct potentials from the synapses into the nerve cell bodies. These so-called dendritic networks have been overlooked in traditional neurophysiology, which tends to focus on the transmission of fast and discrete (read: explicate) on-off potentials along the large nerve fibres, the axons. Pribram suggested that the dendritic networks, which envelop cell bodies and axons throughout the brain, could be the locus of Fourier transform storage and processing. The transmission of potentials in the dendritic networks is slow and distributed, and a surge of neural (electrical, chemical, or other) activity may be seen as a wavefront spreading across the network. Such sets of waves may constitute the physical realization of the mathematical frequencies of which Fourier transforms consist.
These waves in the dendritic network may interfere with other waves arriving from elsewhere, such as other senses or brain centers, much as light waves interfere in our example of the sunlit room. This interference or blending of waves would be equivalent to a comparison of Fourier transforms and thus constitute information processing. The waves passing through the dendrites could generate standing waves of neural activity in the dendritic networks, which would constitute storage.
The storage would not be narrowly localized, as is presupposed in the grandmother-cell theory, but would be distributed across large areas of the brain. Distributed storage implies that the information contained in any part is present in the whole, and vice versa. This would help account for other problems unexplained in the conventional explicate theory, such as the brain's tremendous resistance to damage (areas next to the damaged one contain the same information and can reconstitute it), its storage redundancy (hydrocephalics with large amounts of brain tissue missing may still function normally), the transfer of skills from one motor region to another (we can write with our elbows although the part of the brain responsible for the elbow has never used the arm muscles that way), content-addressable memory, and numerous other well-known but insufficiently explained capacities of the brain (Pribram, in Goleman, 1979; Pribram, Nuwer & Baron, 1974).
In sum, the global processing and storage required of the brain to perform these functions could well be accomplished by waves of neural activity carrying Fourier transforms of component frequencies in an implicate order. The objects entertained in decision situations could be represented by such Fourier transforms and compared in the manner suggested, with the results emerging as fast as the interference of wavefronts in the dendritic networks allows. The transforms of the objects in the object set would be formed during perception, while the transform of the ideal object (recall that the ideal object is assumed to be given) would be present in memory, maybe as a set of standing waves in the dendritic network, and activated in the appropriate decision situation.
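The comparison-by-interference idea can be given a rough numerical sketch. Assuming, purely for illustration, that "objects" are represented as simple one-dimensional signals, the resemblance of a candidate object to the ideal object can be read off from the overlap of their Fourier transforms; the signals, the similarity measure and all names below are illustrative assumptions, not part of Pribram's or Glazer's actual proposals.

```python
import numpy as np

def fourier_similarity(a, b):
    """Normalized overlap of the Fourier transforms of two signals.

    Returns 1.0 for identical signals and values near 0 for signals
    with little spectral overlap (a crude stand-in for the blending
    of wavefronts described in the text).
    """
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    overlap = np.abs(np.vdot(fa, fb))              # conjugate dot product
    return overlap / (np.linalg.norm(fa) * np.linalg.norm(fb))

t = np.linspace(0, 1, 256, endpoint=False)
ideal = np.sin(2 * np.pi * 5 * t)                  # the "ideal object"
close = np.sin(2 * np.pi * 5 * t + 0.2)            # a similar candidate
far = np.sign(np.sin(2 * np.pi * 13 * t))          # a dissimilar candidate

# The candidate whose transform overlaps most with the ideal "wins".
scores = {"close": fourier_similarity(ideal, close),
          "far": fourier_similarity(ideal, far)}
best = max(scores, key=scores.get)
```

The point of the sketch is only that a comparison of whole transforms requires no item-by-item inspection of features: the overlap emerges in a single global operation, much as interfering wavefronts blend everywhere at once.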
Starting from the assumption that the ideal object is given, Glazer and the other decision theorists answer the question of what a good decision is by suggesting how an individual may find the object that most closely resembles the ideal object. This, however, raises a further question, which takes us outside the bounds of decision theory: What is an ideal object? Why should we accord the status of "ideal object" to a particular object? To this question of values we will return in the next chapter.
Let us conclude this section by noting that although decision making is conventionally considered a topic of study in consumer research and management science, it is hardly an isolated or rare mental activity. Any action can be seen as being preceded by a decision-making process in which we decide what to do. There seems to be no clear dividing line between the big decisions typically studied by decision theorists, such as buying a car or voting for a candidate, and the little decisions that permeate everyday life: deciding what to wear today, how to get to the office, how to pay the bus driver, what seat to sit in, where to grab the backrest to sit down, and so on, all the way down to the most routine and unconscious "decisions."
Rational decision theory can dismiss such perceptual, motor and other physiological "decision" situations by reference to their not being rationally or consciously accessible. However, if decision making is a pattern-recognition process that occurs largely beyond our conscious awareness, as Glazer's theory suggests, we cannot easily separate the little decisions from the big decisions. If the holographic theory of decision making is correct and if there are indeed brain mechanisms that can realize such a process, this indicates a much more central role for the implicate order in mental functioning than the limited field of decision theory would suggest.
With Glazer's theory about the possible implicate basis of decision and action under our belt, we may continue the account of the emergence of forms from the flux. Glazer's contribution addresses the interface between pre-symbolic perception and fully symbolic cognition and thus serves as a convenient mid-station between those two levels of human mental functioning. We will now explore the manner in which the world of human experience is expanded and consolidated through symbolism and language.
3.6 Symbolism and Interpretation: Abstracting Categories from the Flux
Picking up the notion from Section 3.4 of there being a neural-mental flux within the nervous system, we may see this as a potential capable of being shaped and differentiated, or unfolded. We may say that human beings are born with a potential for having their neural, mental, physiological and other kinds of activity organized by forms. This potential has a history involving countless previous generations of organisms all the way back through the evolution of life and is, of course, shaped by these previous forms through inheritance. An organism's potential is realized through the interplay between inherited forms of activity (nature) and the energy from the environment, including foods, heat and the energy picked up by the senses (nurture).
The realization or unfoldment of the potential of the human organism may be said to constitute human development, or human unfoldment. An account of the unfoldment of the human potential in terms congenial to the idea of a neural-mental flux patterned by forms may be found in the work of Piaget, to which we will now turn.
Piaget (1952, 1954) and his associates have identified a sequence of processes whereby a human infant unfolds its potential for acting on and thinking about the world in certain regular and stable ways. Piaget shows how spontaneous motor action in early childhood, that is, the neural flux involving motor activity, helps to construct the various perceptual constancies that form the backbone of intelligence. A classical example is object identity, which the child learns through its spontaneous activity, such as the more or less unspecific handling and fingering of objects. For example, when a young child brushes an object aside or drops it from view, the child is testing the object's permanency of existence. Seeing the object reappear after it was out of sight helps the child learn to attribute permanence and objectivity to the object.
Through the appreciation of such invariants as object constancy, conservation of liquids, commutativity of objects and transitivity of relationships, stable, explicate forms are unfolded from the undifferentiated neural/mental flux of the infant's world. Distinctions between self and other, figure and ground, edible and non-edible, and hot and cold, are among the first pieces of the explicate order that come to constitute a human being's world of experience.
The order of explicate forms generated through exploratory motor activity by the young infant Piaget calls a proto-logic, "logic itself [stemming] from a sort of spontaneous organization of activity", as one commentator puts it (Gardner, 1976, p. 54). This is a forerunner of a truly conceptual logic, which can only be formed by means of language (Inhelder & Piaget, 1964). The learning of language may be considered the creation of linguistic forms from a pre-linguistic flux. Like other motor activity, the infant's early exercise of his vocal cords appears to be a fairly unstructured activity, which is technically known as holobabble (presumably because the babble is a prelinguistic, undifferentiated "whole").
"It is believed that babies produce all the speech sounds of any language in their babbling, and that speech sounds or phonemes of the language of their environment are gradually selected" (Gregory, 1987b, p. 68). The holobabble is thus a kind of reservoir or potential from which are selected the phonemes used in the language being learnt by the child. Whenever the child's holobabble approaches recognizable words for relevant objects or situations, the encouragement offered by adult speakers induces the child to channel his vocal activity into the conventional linguistic forms (Skolnick, 1986, p. 280). Holobabble turns into holophrases (single-word statements evoking broad meanings. Dore, 1985), which turns into multi-word sentences as a system of linguistic forms and categories begins to channel the child's mental activity in particular ways. Thus, the infant's potential for speaking any language is being unfolded into one particular language with its manifest order of forms--vocabulary, phonetics, grammar, syntax, and so on.
Words are symbols; they stand for something other than themselves. The command of language implies that phenomena that are not present in immediate sensory experience can be manipulated anyway: not concretely, "by hand," but symbolically, in the mind. By simple perception an infant can recognize an apple without being able to name it. With naming arises the possibility of thought and cognition: the child can think about the apple even when it is not in view (Langer, 1942). The apple is present in the mind although it is not present physically. The capacity for symbolic manipulation of objects not present to the senses is a prerequisite for intelligent decision making, because making a decision involves choosing between courses of action whose consequences do not exist yet but must be considered in the mind, in symbolic form.
Language and symbolism may thus be considered a further stage in the unfoldment of the neural/mental flux, superseding the pre-linguistic logic of perception that enabled the child to recognize and physically manipulate objects, and establishing a truly conceptual logic of abstract or symbolic objects. The use of a conceptual logic mediated by language is a convenient criterion distinguishing cognition from perception, evolutionarily as well as developmentally. Organisms at early stages of evolution or development possess the faculty of perception, but only in later stages does cognition arise.
Although there is no general agreement as to what is to be understood by a concept, it seems reasonable to suggest that for a percept to become a concept it must be interpreted, that is, it must be situated in a cultural context of meaning. When percepts are interpreted, they become meaningful; they make sense by being placed in a context of other meanings. Interpreting a percept typically, but not necessarily, involves assigning a name to it. For example, driving out of a wood in the dark we may suddenly feel a massive, towering presence before and over us. At first we do not know what it is; it holds no meaning for us, it is merely a percept. In the next instant we realize what it is: "A bridge!" With this realization, the percept (the towering presence) is interpreted and assigned a name and thus becomes a concept (or, more correctly, an instance of a concept, the concept being "the class of bridges").
Interpretation objectifies a percept by assigning a name, symbol or meaning to it, thereby making it amenable to mental deliberation. Simply stated, perception plus interpretation yields cognition. In the human world, interpretation thus emerges as a vehicle of unfoldment, enabling the human organism to establish an explicate order of concepts and linguistic distinctions in an inner, mental world.
Linguistic forms play a major role in human experience, although there is considerable disagreement among scholars as to the nature and extent of this influence (Wittgenstein, 1963; Whorf, 1956; Piattelli-Palmarini, 1980; Fodor & Katz, 1964). However, one may assume that a sizeable part of our experience of the world is channelled by linguistic forms. Still, much neural-mental material is non-linguistic and plays a large role in our lives, such as images, intuitions, moods, bodily sensations, automatic responses, feelings of various kinds and so on. These non-linguistic forms extend from the most creative kinds of mental functioning (such as the imaginal and non-verbal, intuitive capacity of creative artists and scientists, or the improvisational skills of a veteran saxophone player), through the tacit assumptions and premises underlying our everyday thinking, to the most basic forms of physiological activity, such as motor reflexes, immune responses and other autonomous regulatory functions of the body.
In modern Western philosophy there has been a strong tendency to emphasize highly unfolded and linguistically expressible forms of mental activity and to downplay the less unfolded aspects of the neural-mental flux. For example, Descartes chose for the foundation of his philosophy clear and distinct ideas (Ree, 1974). In its exclusive concern with things explicate, the French Cartesian tradition of rationalism was equalled by the English empiricist tradition (John Locke, David Hume), in that both stress distinctness, precision, clarity and certainty, whether of reasoning or of sensation.
This trend culminated with the logical positivists of the early twentieth century who combined the rationalist and empiricist traditions by dismissing from serious discourse statements that are not either analytical (belonging to a formal system) or empirical (verifiable) (Ayer, 1971). The importance placed on linguistically expressible ideas is brought home by the famous last proposition in Wittgenstein's treatise on the proper nature of philosophy: "What we cannot speak about we must pass over in silence" (1961, p. 74).
A broadly alternative approach to the understanding of the human mind is taken by what may be called the interpretive school in late 19th and 20th century philosophy. It includes such thinkers as Dilthey (1976), Weber (1949), Husserl (1970), Heidegger (1962), Schutz (1967), Gadamer (1976) and Taylor (1971). This tradition represents a reaction against the natural-scientific orientation of positivism and focuses on the domain of human experience. For this domain, the interpretive writers reject the classical idea of the world as directly given in pre-formed (explicate) categories for the observer simply to pick up and display in his mind. The human world is a world of sense and meanings, and everything experienced is subjected to an active ordering process in which meanings arise. This process is called interpretation. "Interpretation begins from the postulate that the web of meaning constitutes human existence to such an extent that it cannot ever be meaningfully reduced to constitutively prior speech acts... or any predefined elements" (Rabinow & Sullivan, 1979b, p. 5).
Rather than being founded on elements, meaning, according to Husserl, arises from the flux of consciousness. In one passage he discusses this flux in terms compatible with our flux/form ontology: "But in a certain sense is there not... something abiding about the flux, even though no part of the flux can change into a not flux? What is abiding, above all, is the formal structure of the flux, the form of the flux...." (Husserl, 1964, p. 152, quoted in Rogers, 1983, p. 28). This form is the nexus of the subject's lived presence, it generates a "now" filled with objects abstracted or "constituted" from the flux of consciousness. "Every lived experience... is subject to the original law of the flow.... Every concrete lived experience is a unity of becoming and is constituted as an object in internal consciousness in the form of temporality" (Husserl, 1973, p. 254, quoted in Rogers, 1983, p. 26).
In simpler terms, from the flux of experience are constructed the subjectively meaningful objects that constitute the human world. In a preface to their discussion of the related work of Dilthey, Burrell and Morgan summarize the hermeneutic viewpoint thus: "Human beings in the course of life externalize the internal processes of their minds through the creation of cultural artifacts which attain an objective character. Institutions, works of art, literature, languages, religions and the like are examples of this process of objectification" (1979, p. 236) (italics added).
A formulation that is typical for the interpretive school is that of Alfred Schutz (1967). Inspired by Bergson and Husserl to adopt the notion of a "stream of consciousness," he argues that consciousness is fundamentally a flow of unformed experiences that possess no prior meaning. Only reflexively, when the subject turns his attention to the flow of experience, does this flow congeal into meanings. Borrowing from Husserl (1973), Schutz calls this process "typification" (Schutz, 1964, part I), because it consists in the application of Weberian "ideal types" to the stream of consciousness. (This corresponds to the process of objectification mentioned in the previous paragraph.)
These "types" or interpretive constructs are used not merely by the social scientist in his attempts to understand the actions of others, as Weber had it, but by any actor in the everyday world attempting to carve from the flow of experience recurrent and meaningful forms (Burrell & Morgan, 1979, p. 245). The double emphasis maintained by the interpretive writers (Weber, Gadamer, Schutz) on social science methodology and on the everyday processes of human understanding may be seen as evidence of one single concern, namely, to understand the human world as being brought forth or unfolded through interpretation, the abstraction of meanings and concepts from the human flux.
The uniquely human world that is constructed in this process of interpretation has been termed the life-world by several of the interpretive philosophers. "Life-world" was used by Husserl to denote the world that to the conscious subject is simply there, "the world in which we are always already living" (1973, p. 41) and which, as such, is taken for granted. "The Lebenswelt is constituted by the entire constellation of sensory, affective, and cognitive events observed as subjectively 'there' by the person at a given time and place" (Massarik, 1982, p. 251). Schutz (1973; see also Luckmann, 1970) refined this concept and pointed out that actors at various times are engaged in different life-worlds, or "finite provinces of meaning" that constitute the life-world for actors playing specific roles or acting in the contexts of different institutions.
What is of relevance in the present context is the point that the world surrounding the individual human being is based on the subject's experience (understanding, interpretation). Illuminating is Carl Rogers' notion of the person as situated within a phenomenal field, which is the sum of the person's individual experience and constitutes reality for the person. As quoted and commented on by Hopkins (1986), he says that...
..."behavior is basically the goal-directed attempt of the organism to satisfy its needs as experienced, in the field as perceived" (Rogers, 1951, p. 482). Rogers stresses the point that such behavior is a reaction to the perception of reality, that is, to the subjective interpretation afforded by the individual standpoint (p. 637) (italics in original).
This quote highlights our interest in the interpretive tradition with its emphasis on the human world as a world perceived or interpreted, as opposed to the simple naturalism of the positivist position in the natural sciences. What is relevant in a human-social discourse is not the world as naturally there, but as humanly constructed and interpreted; not the world as it supposedly is, independent of human interpreters, but the world as experienced and meaningfully constructed by human beings. As Rogers states, we act not on the world as it "is," but as we perceive or experience it to be.
In the evolutionary view, as discussed in the previous sections of this chapter, the human world that is unfolded from the flux is this life-world of stable meanings and interpretations. The life-world is the inner world of experience referred to above, that additional stage upon which cosmic unfoldment takes place and creates an explicate order of things, this time not physical forms like atoms and planets, but forms of the mind, concepts, symbols, ideas, the constituents of the interpretive life-world.
Our principal concern here is to point to the experiential or interpretive basis of the human world and to see it in terms of stable forms abstracted from a flux of experience. The approach taken to the human and social world in this dissertation thus draws on the interpretive tradition for the insight that the world has to be interpreted to become a human world (see Gendlin, 1973). The crucial role of interpretation in human experience is implied when the term "the world of experience" is later used as a synonym for the human-social world.
3.7 The Social Context of Human Experience
The way in which a person interprets or experiences the world is of course highly influenced by the culture in which this person is embedded. Human beings are social animals par excellence, as Aristotle noted, and the conventions and practices of society influence the individual human being to such an extent that it is difficult to speak of a person as truly human if he is isolated from any such context. "The community is... constitutive of the individual, in the sense that the self-interpretations which define him are drawn from the interchange which the community carries on" (Taylor, 1985, p. 8).
Although a crude hierarchical view of the evolution of life and society may consider the social level of human being to have emerged after the individual level of human being, just as the biological level evolved after the physical level, this would be as incorrect as to assume that first individual biological organisms evolved and then an ecosystemic context arose. Human being, a fortiori, presupposes social being. "To live as a person is to live in a social framework" (Husserl, 1965, p. 150, quoted in Rogers, 1983, p. 49).
Many writers in and out of the interpretive tradition have contributed to an understanding of the social dimensions of individual experience. They go back to the young Marx (Avineri, 1968; Bottomore & Rubel, 1963) and among the modern writers of an interpretive bent may be mentioned Gergen (1985) and Shotter (1984) in social psychology, Silverman (1970), Argyris (1982) and Pondy et al. (1983) in organization research, and Goffman (1959), Garfinkel (1967) and Giddens (1984) in sociology, among many others.
In the scientific study of the human world there is, nevertheless, a conventional dividing line between the individual mind and the social-cultural context, between psychology and sociology. This distinction is reflected in the argument of this chapter, which continues the story of the unfoldment of the explicate order by moving from a consideration of individual perception and cognition to their broader social context. An influential account of the emergence of the structures and forms that guide human experience and social interaction will be reviewed briefly (Berger and Luckmann, 1966).
Berger and Luckmann's discussion of the construction of social reality starts from the premise that "all human activity is subject to habitualization. Any action that is repeated frequently becomes cast into a pattern" (p. 53). By being repeated, the action becomes meaningful to the actor, as he says to himself, "There I go again," as he performs according to the habit being established. "...Habitualization makes it unnecessary for each new situation to be defined anew, step by step" (p. 54). An actor comes to understand his own activity when it thus congeals into stable patterns, or habits, recognizable as particular actions. Expressed in our terms, from the unstructured flux of human activity stable forms unfold.
Berger and Luckmann refer to the proverbial desert island, where a man, A, wakes up every morning and goes to work on his habitualized task, building a canoe from matchsticks. Someone else, B, may observe A repeating his habitualized action and he might say to himself, "There he goes again, working on his canoe." Typification occurs when an actor is apprehended as the type of person who performs a particular habit, and the expectation of standard actions defines a role. Thus, B thinks, "Oh, he is a canoe-builder, works on the beach." When two actors come to typify each other, institutionalization is said to occur. The institution created in the example is a sort of ritual in which A and B interact in standard ways. The typifications involved are reciprocal: "There I go again" becomes "There we go again."
Through institutionalization a social order emerges. Social order, which is the backbone of society, consists in a more or less coherent complex of interwoven institutions--dining, shopping, attending lectures, wedding, elections, and so on. Membership of a society or any social group may be defined by the extent to which a person's activities participate in its institutions. The historical persistence of a particular social order may be understood in terms of the support given to it by people acting according to the institutions defining it. This support may be supplied by people unconsciously or deliberately, voluntarily or through any formal or informal system of control ensuring the maintenance of the social order.
Berger and Luckmann's account of the genesis of the social world lends itself to expression in terms of the relationship between flux and form. In the human-social world, flux, which was termed "energy" in the world of physical and chemical systems and "behavior" in the world of organisms, may be called "human activity." Not particular actions, such as building a canoe from matchsticks, but human activity considered in isolation from the forms it takes is the human-social equivalent of the ontological flux. The flow of human activity is unfolded and channelled into particular forms, referred to as habits, roles and institutions by Berger and Luckmann. Through institutionalized interactions between actors, shared interpretations of actions and events are constructed by and among people and an explicate order of manifest social forms emerges.
In other words, flux is human activity and forms are the ways in which this activity expresses itself. According to the conceptualization proposed here, flux and form necessarily go together in the human world. One never observes human activity in isolation from the forms in which it expresses itself. Similarly, forms in the human world always require some human activity for their expression and maintenance. Whenever we speak about human activity in anything but the most abstract terms, we have particular kinds of human activity in mind--such as walking, listening or dancing--and these are forms, forms of human activity. Any human-social form requires for its existence or recognizability some human activity that is channelled by it: if no one dances a particular dance anymore, it ceases to exist or to be recognized (except by the folklorists for whom the dance is still an active form qua historical-analytical concept).
To illuminate this point further we recall the river image. We imagine that the surface of the river is so smooth that only where the flow is led into vortices do we become aware of the flow at all, because we see the water swirling round inside the vortex. Similarly, vortices do not appear or persist without there being a flow (of water) through them. So, while the flow feeds the vortices and thus renders them real and "existent," the flow depends on vortices to manifest itself. Without flow, there can be no vortices; but without vortices, we see no flow.
So it is with human activity. Although a person running down the street could be thought of as an example of "pure" human activity, the proposal here is that the ontological distinction between flux and form be employed to understand this situation: What we experience are forms that guide flux or activity. The forms pertaining to the jogger are a set of characteristic expressions of human activity: an idiosyncratic sequence of body movements (his running style), a particular pattern of respiratory activity (his type of breathing), conformity to norms of jogging behavior (his manners), and so on.
It is the flux or energy channelled by the person that makes possible the manifestation of the running activity, but whatever we experience or talk about or think about are forms, kinds of activity, particular channels. These forms are what we see and experience; they are the constituents of our world of experience. In other words, in the present terminology, "human activity" is not an observable; it is not some specific thing that appears in our experience or thinking. (When occasionally the term is used in the countable form--"an activity" or "activities"--particular kinds, or forms, of human activity or flux are being referred to.)
Examples of forms structuring human activity are legion. A norm is a form channelling human activity in the sense that it specifies what activity is acceptable and what is not. For example, norms about civilized eating define what kinds of thing one may, must and must not do around a dinner table, that is, what kinds of activity one may engage in and what kind one may not. To conform to a norm is to let one's activity be guided by the prescriptions of the norm, which is thus similar to the vortex that attracts and channels the flow of water approaching it. To approach a dinner table is in a sense to be "sucked into" the forming vortex of the expectations directed towards behavior in that situation. Common terms for similar social forms are habit, role, job description, institution, convention, rule and law.
Another group of terms refers to human activity that is conventionally seen as mental, psychological, cognitive or affective. These include concept, word, idea, distinction, category, assumption, belief, value, prejudice, theory, ideology, religion, and many more. To take an example, if people hold the assumption that "sports is about winning", that assumption is a form in the sense that it shapes their behavior on the basketball court.
The connection between psychological and social forms is evident from the starting point in human activity taken by both Piaget (the child's motor activity) and Berger and Luckmann (the islander's boat building). From this activity forms arise, Piaget stressing their cognitive aspects (forms as concepts and logical relations) and Berger and Luckmann their social aspects (forms as roles and institutions). This shared starting point underscores the fact that the individual/cognitive and the social are two sides of the same coin, and one cannot be properly considered in isolation from the other. One cannot think of psychological forms as residing in the mind and social forms as being "out there" in the social world. Learning to interpret the world is a social exercise, just as language is a social phenomenon, and all experience and mentation are thus social.
Kuhn's (1962, 1977) work in the sociology of science is a particularly eloquent demonstration of the intimate connection between psychological and social forms. He stresses the rootedness of scientific discourse, including the deep personal beliefs and assumptions about the world held by individual scientists, in a community of investigators complying with implicitly as well as explicitly sanctioned norms for proper scientific inquiry.
To sum up, the term "experience" does not denote the individualistic antithesis of "social structure" or "political system" or some similar macrosociological concept. The frequent use of the term in later stages of our argument does not negate or downplay the role of the social world in shaping human activity, but serves to point to the rootedness of the human world in the interpretive activities carried on by individuals, bearing in mind that these activities are shaped by the embedding culture.
3.8 Summary and Conclusions
The chapter opened with the distinction between the flow of a river and the vortices that channel the flow. This image was generalized into the ontological distinction between flux and form, corresponding to implicate and explicate order, respectively. Forms are the stable constituents of explicate order, and the flux that feeds the forms expresses the dynamic aspect of the implicate order.
To connect the worlds of physics and human being, we set out to apply the notions of implicate flux and explicate forms in an account that identified key stations in the evolution of matter, life and human experience. Seeing this evolution as the unfoldment of explicate forms from the implicate flux, we saw that matter (forms) emerged from the immense energy (flux) of the early universe. Prigogine's theory of the self-organizing emergence of dissipative structures that exploit their throughput of energy to develop into more complex systems indicates how prebiotic systems may evolve. Goodwin accounted for the development of biological systems in terms of morphogenetic field equations that specify a global (or implicate-like) order for organismic processes.
Our exploration of the world of human experience started with perception. Inspired by the spatial-frequency school in visual perception, Pribram suggested that in perception we transform the implicate order of light, sound, and other stimuli into an explicate order, in such a way that an inner world is constructed that enables us to negotiate an outer world of objects, the objective world. Glazer's holographic theory of decision making suggested that making a decision is a holistic and Fourier-based process of perceiving the "right" alternative. With Pribram we explored a possible physical realization of this process, namely, an implicate order of neural activity in the brain.
The world of human experience arises fully only with interpretation, which is what distinguishes us from the pre-human animals. In the process of interpretation, meanings are assigned to percepts, which then become concepts, or symbols standing for things in the world. Interpretation is the process that abstracts from the implicate flux of neural-mental activity the stable and more or less distinct (hence explicate) categories and concepts that shape human experience and populate consciousness.
Lastly, the social dimension was considered. Here, the forms that shape human activity go by the name of roles, norms, institutions and the like. The social dimension of human being is intrinsic to, and very directly constitutive of, experience; it is not an extra layer one may consider later, after the "individual level" has been accounted for. Simply put, whether conventionally "located" in the mind or in society, distinctions and categories and all the other human forms constitute the explicate order of human experience.
Having come this far, we have occasion to offer a few clarifications of the concept of form. It is important to notice that a given form may be described in two different but equivalent ways. For example, if a speed-limit regulation is considered a form regulating human activity, it may be thought of as establishing a distinction (the speed limit) that separates legal from illegal driving behavior. But it may also be thought of as pointing to two categories, legal driving and illegal driving.
However one expresses this fact, speed limits are forms that regulate and channel the activity of drivers. Thus, two kinds of forms may be distinguished: those that point to the difference between two things and those that point to the things thus distinguished. A "distinction" is a form of the first kind, and a "category" is a form of the second kind. Distinctions call attention to difference, and categories call attention to identity. Each, of course, presupposes the other and may be expressed in terms of the other.
Depending on one's level of analysis, "a form" may also be referred to as "n forms," or a set of forms. If the forms are organized within a boundary and possess some complexity, they may be called "a system," or a system of forms, as already discussed. Whatever their traditional labels ("rules," "institutions," "systems"), forms should not be thought of as agents that perform actions like "shaping behavior" or "guiding human activity," although such are the linguistic shortcuts used to designate the effects of forms.
As all systems are forms (or complexes of forms with a boundary around them), it follows that the agency (capacity to act) that is commonly attributed to systems such as people and organizations ultimately derives from the flux as channelled through the forms that constitute the system. The same holds for other systems that are conventionally described as "doing" something.
For example, the expression "the nerve cell fires" does not presuppose the existence of a distinct ontological entity residing inside the cell that "does" the firing. The firing consists in a complex pattern of chemical and electrical processes that do not originate in any specific agent. Similarly, when we say that a person goes to see a movie, we do not mean to imply that a homunculus or some other internal agent issues orders and effects the decision to go. What happens is that a complex of neural, cognitive, affective and social processes converges to produce the effect that the person departs for a movie theater. This holds also for supra-individual systems: when we say that a country goes to war with another, we do not believe that there is a national essence that does the warring; this, too, is a complex pattern of political, economic and military processes.
Concluding this chapter, we may say that an evolutionary argument has been presented so as to elucidate a point made by David Bohm in his discussion of the fragmentation of society. He suggested that the divisions inherent in our language (that is, those between words) often become projected onto the world, which then comes to appear similarly divided and fragmented. The treatment given in this chapter shows in more detail how an explicate order of distinctions and categories is created in human experience. These categories may indeed crystallize in such a way that the world is believed to consist of fragmented pieces, as Bohm said, but this is no more than a special or extreme case of the generally explicate nature of human experience.
In section 2.8 we noticed that Bohm introduces the implicate order into the human world by arguing that in certain cases, the implicate order is perceived "directly," as in listening to music (cf. Bohm, 1980b, p. 200). Our discussion of the crucial role of explicate forms in human experience suggests that such "direct" experience is not possible, if by "direct" is meant "unencumbered by the explicate order." According to the position developed in this chapter, there can be no such thing as experience of the implicate order unmediated by forms (categories, distinctions, images, etc.), as these are the very vehicles that carry experience and make it possible. Pribram's work suggests that the implicate order is not experientially accessible to us in any "direct" way, because before the implicate order that impinges on the senses is "experienced" by us it is transformed (in biologically given or culturally learnt ways) by the nervous system into the explicate order of distinctions and concepts that shape the contents of consciousness. We shall elaborate this point in Chapter 6 where we return to the critique of Bohm's use of implicate order in the human world.
The next chapter proposes an alternative role for the implicate order in human experience.
In his writings and public presentations, Bohm seems at pains to avoid such general labels for his philosophy, presumably because such labels tend to generate stereotyping and antagonisms.
In terms of ancient Greek ontology, the spirit of the present dissertation is contrary to Aristotle's ontology of permanent essences, as well as to the essentialistic thinking of his master, Plato, and of his sources of inspiration, Parmenides and the Eleatics (Copleston, 1962). All of these take reality to be fundamentally constituted of something permanent and fixed, whether ideational or material. The present work owes more to the type of dynamic or process thinking evidenced by Heraclitus and the Ionian School (Kirk, Raven & Schofield, 1983).
The current estimate of the lifetime of the longest-lived particle, the proton, is about 10³⁰ years (Jones, 1985). This is, to be sure, a long period, but it is still finite, and the point about the impermanency of particles holds.
Some of the ways in which a form or a system channels the flux are commonly referred to as the system's "behavior." Although in everyday language only animals and humans exhibit behavior, common usage in the sciences also has moons, electrons, chemical solutions and equations "behaving." Thus, we need to stretch the scientific usage only a little to speak of a houseplant as "behaving" when it reacts to light or withers if deprived of water, or to speak of any other system as behaving. The behavior of a given phenomenon is an expression of the flux that the phenomenon channels. Different behaviors amount to different ways of channelling the flux, and since ways of channelling the flux are referred to as "forms," behaviors are "forms." To appreciate this point one must suspend the common distinction between an agent (a system) and its behavior. As pointed out originally by Whorf (1956; see also Bohm, 1980b, chap. 2), this distinction may derive from the categories of the Indo-European languages, where a noun corresponds to an agent and a verb to its behavior. These categories have then been projected onto reality and we have come to see the distinction between agent and behavior as natural and obvious. In the context of the implicate flux ontology, however, this distinction has little ontological relevance. The alternative convention to be adopted here is that all identifiable things or phenomena are called forms, whether commonly understood as agents or objects (particles, molecules, cells, organisms) or as behaviors (moons circling a planet, atoms emitting photons, gases reacting with liquids, ribosomes assembling peptide chains, lymphocytes generating antibodies, muscle tissue contracting, animals running). All these phenomena are equally forms channelling the flux. In the concluding section of this chapter we will return to this point, elaborating and extending it to the case of human-social behavior.
More correctly, the incorrect matching during reproduction of the base pairs that constitute the genes.
A field is a region of space where a force exerts an influence, such as the gravitational force in a field of gravity, a magnetic force in a magnetic field, or, as in this case, an (unspecified) morphogenetic force that acts on the cell and causes it to cleave.
The mathematical description requires no physical "bulging" of the cell membrane, only that some physical realization exists.
This was exactly the problem faced by Niels Bohr in quantum mechanics, in the case of the orbits of electrons "circling" the atomic nucleus. He solved it by similarly postulating the discrete nature of electron energies, a postulate corresponding to the discrete nature of the wave numbers describing the morphogenetic field.
The possible link between the harmonic functions of biological forms and the all-pervasive implicate order evokes the romantic view of nature, as expressed, for example, by Samuel Taylor Coleridge in "The Eolian Harp":
...And what if all of animated nature
Pribram's work in neuropsychology is highly compatible with that of David Bohm in quantum physics and is partly inspired by it (see Pribram, 1987).
This is a simplification. Actually, to be describable in terms of frequencies (or, more correctly, wave numbers) a bar grating, and any other pattern, must be frequency-analyzed into its component harmonics. Each of these has a particular frequency, phase and amplitude coefficient. It is the sum total of these harmonics that describes the grating uniquely.
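The frequency analysis described here can be illustrated with a short numerical sketch; the grating, its period, and the sampling grid below are arbitrary illustrative choices, not drawn from the experimental literature:

```python
import numpy as np

# A one-dimensional "bar grating": a square wave of 4 cycles across the field.
n = 1024
x = np.arange(n) / n
grating = np.where((x * 4) % 1.0 < 0.5, 1.0, -1.0)

# Frequency-analyze the grating into its component harmonics.
coeffs = np.fft.rfft(grating) / n       # complex: amplitude and phase of each harmonic
freqs = np.fft.rfftfreq(n, d=1.0 / n)   # in cycles per field width

# A square-wave grating contains only odd multiples of its fundamental
# frequency (4, 12, 20, ...), each with its own amplitude coefficient.
strong = freqs[np.abs(coeffs) > 0.1]
print(strong)  # [ 4. 12. 20.]

# The sum total of the harmonics describes the grating uniquely:
reconstructed = np.fft.irfft(coeffs * n)
print(np.allclose(reconstructed, grating))  # True
```

The inverse transform at the end makes the uniqueness claim concrete: nothing is lost in passing from the grating to its harmonics and back.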
A simplification is involved here. As has recently been pointed out, cells in the visual cortex may be sensitive to both geometric features (explicate order) and to spatial frequencies (implicate order), as well as to orientation, all at the same time (MacKay, 1981; Daugman, 1985). This may call for an explanation considerably more complex than the one suggested here, the basics of which, however, would remain correct.
The epistemological stance implied by this view of the nervous system may be called "constructional realism" (cf. Pribram, 1986). It is realist because it assumes there are indeed things in the world (conceptualized in the present work as "forms channelling the flux"). These forms do not depend for their existence on any observing mind. The form that is the moon existed before there were people, and it will continue to exist even if all observers cease to exist. The epistemological stance is constructional in the sense that the nervous system is held to construct from the energy received by the senses a world of experience populated by explicate phenomena (symbols, categories, distinctions). This construction is no simple or direct matter and it is influenced by numerous cultural, linguistic, motivational, hormonal, and other factors.
Pribram calls attention to the fact that a lens in an optical instrument is also called an "objective." Lenses perform a Fourier transform on the implicate order of incident light and create focussed and explicate forms and objects from it. In other words, an objective, such as the lens in the eye, serves to "object"-ify the implicate order of light.
Strictly speaking, this example is not about decision making, but problem solving, in that there is a "true" answer. Decision making is guided by preferences or utility functions and not beliefs about what is true.
More precisely, a hologram is a recording of the interference pattern of the Fourier transform of one object and the Fourier transform of another object. This second object is, in the simplest case (as discussed in Section 2.5 on holography), simply a reference beam.
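The recording and reconstruction of such a hologram can be sketched in an idealized one-dimensional numerical model; the object pattern, the array size, and the carrier frequency of the reference beam are hypothetical choices for illustration:

```python
import numpy as np

# Idealized 1-D model: the hologram plane holds the Fourier transform of the
# object (the role a lens plays in the optical case).
n = 256
obj = np.zeros(n)
obj[100:110] = 1.0                 # a simple amplitude "object"
obj_wave = np.fft.fft(obj)         # Fourier transform of the object

# The second "object" is, in the simplest case, a plane-wave reference beam:
# a single off-axis spatial frequency at the hologram plane.
k_ref = 40
ref_wave = np.exp(2j * np.pi * k_ref * np.arange(n) / n)

# The hologram records the *intensity* of the interference of the two waves:
hologram = np.abs(obj_wave + ref_wave) ** 2

# Illuminating the hologram with the reference beam again and transforming
# back reconstructs the object, spatially separated from the other
# diffraction terms because the reference beam is off-axis.
recon = np.fft.ifft(hologram * ref_wave)
print(np.allclose(recon[100:110], 1.0))  # True: the object reappears
```

The off-axis carrier frequency is what keeps the reconstructed object from overlapping the other diffraction terms, mirroring the standard off-axis holographic geometry.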
Adding a speculative point to Glazer's theory, we may suggest that the intensity of feeling or clarity of mind associated with those decisions that we "just know" to be right may actually correspond to some surge of neural flux or coherent energy in the brain produced by the matching process, not unlike the bright spot of light produced in a close matching of two holograms. Recall that light is just a particular range of frequencies of electromagnetic radiation, and that such radiation was the source of all other kinds of energy that emerged later in the evolution of the universe (cf. Section 4.2). "Enlightenment," which involves the wisdom of right action (good decision making), may indeed derive from such high-energy coherence.
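The "bright spot" produced by a close match can be mimicked numerically with a matched filter, the standard optical-computing idealization of holographic pattern matching (our gloss, not Glazer's own formulation); the template and its placement are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.normal(size=32)     # pattern of the "right" alternative
scene = np.zeros(256)
scene[90:122] = template           # the same pattern, embedded at position 90

# Matched filtering: multiply the scene's Fourier transform by the conjugate
# transform of the template and transform back.  The result is the
# cross-correlation of the two patterns: a bright "spot" where they match.
S = np.fft.fft(scene)
T = np.fft.fft(template, n=256)
correlation = np.real(np.fft.ifft(S * np.conj(T)))

print(int(np.argmax(correlation)))  # 90: the location of the close match
```

The sharp correlation peak at the matching position is the computational analogue of the bright spot of light produced when two holograms closely match.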
Compare this suggested preference of the human mind for harmonic patterns with Goodwin's harmonic analysis of embryogenesis, as discussed in Section 4.3.
This presumed preference for harmonic patterns, which in Glazer's preliminary interpretation of the experimental results seems to explain the data reasonably well, is a product of the Fourier approach, which frequency-analyzes or decomposes data into harmonic functions. If a different type of frequency analysis were used, say, one based on sigmoid functions, the experimental subjects would appear to prefer profiles with a sigmoid distribution of the attribute scores--but then, the sigmoid-based analysis might not predict the rankings as well as the harmonic Fourier approach.
In its present form, Glazer's theory assumes that in the decision situation the brain starts from the explicate attributes of the choice objects and only arrives at an implicate order (the frequency domain) later, after a Fourier analysis has been performed on them. However, our discussion of Pribram's work showed that the energy that arrives at our sensory surfaces is an implicate order of flux (interference patterns of light, sound, and other waves), which only later, in our consciousness, becomes the explicate world that we experience. Glazer is aware of this view (pers. com.) but must assume the distinguishability of features to be able to perform experiments. He suggests (1988) that the situations in which the holographic theory of decision making is likely to work best are very complex ones where it is difficult to tease out discrete variables. This idea seems close to a view of the world as essentially implicate, where explicate features and variables are abstractions isolated from a deeper flux, as in the implicate flux ontology. If we assume that the information arrives at the senses as an implicate order of energy, then what the brain seems to do, rather than performing a Fourier analysis first and then doing the comparisons, is simply to do its comparisons right away on the implicate order of information coming from the senses. Features, attributes and variables then arise as constructions "later," unfolded from the implicate brain processes into consciousness and experience. To thus unfold the implicate Fourier-processes into explicate phenomena requires learning and, in its later stages, language, as we will discuss in the next sections, drawing on Piaget and the phenomenological tradition in the human sciences.
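The suggestion that comparisons could proceed directly on the implicate (frequency-domain) order, without first unfolding explicate features, has a simple formal warrant in Parseval's theorem: inner-product similarity is identical in either domain. A minimal numerical sketch (the two patterns are arbitrary stand-ins for a stimulus and a stored alternative):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=512)   # one pattern (e.g. an incoming stimulus)
b = rng.normal(size=512)   # another pattern (e.g. a stored alternative)

# Similarity measured on the explicate, space-domain signals...
space_sim = float(np.dot(a, b))

# ...equals the similarity measured on their frequency-domain (implicate)
# representations, by Parseval's theorem -- so a comparison can be carried
# out entirely in the frequency domain.
A, B = np.fft.fft(a), np.fft.fft(b)
freq_sim = float(np.real(np.vdot(A, B))) / len(a)

print(np.isclose(space_sim, freq_sim))  # True
```

On this view, nothing hinges on which domain the comparison is performed in; the explicate features need only be unfolded when experience demands them.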
It is important to note that Pribram's work, as well as that of the other researchers on holographic brain functioning, is still tentative. The mechanism summarized here (slow waves in dendritic networks) is still largely a mathematical hypothesis in need of empirical substantiation. Recently, Pribram (in preparation) has begun to blend his holographic work with the very promising research in neural networks performed by psychologists (Rumelhart, McClelland & the PDP Research Group, 1986), neurophysiologists (Grossberg, 1987), physicists (Hopfield, 1984) and computer scientists (Sejnowski & Rosenberg, 1987). This work may point to a different locus for Fourier transform storage and processing than the dendritic networks--maybe the much coarser networks of neurons studied by neural network researchers mentioned (perhaps in the manner of Farley, 1965). Whatever the locus, the possibility of there being brain mechanisms for the processing and storage of Fourier transforms is the important point here.
The affinity of Bohm's early ideas on perception and thought with Piaget's was noted in Section 3.4.
Whether cognition emerges in the higher primates, in language-learning chimps, or only in the human child (and if so, at what age?) is of minor concern here. Only the fact that we can meaningfully establish a difference between perception and cognition is of interest.
This example is due to Karl Pribram.
The meaning of "experience" referred to here is that implied by continental European philosophy, which refers to active perception as well as outlook on the world, not the empiricist "experience" of Hume or Locke, which primarily refers to mere sense impressions (see Gendlin, 1962). Another relevant distinction between two kinds of experience is evident in the two German terms traditionally covered by the English "experience": "Erlebnis" and "Erfahrung." Massarik (1982, p. 252) explains: "Erlebnis, partaking of the same root as Lebenswelt [leben, to live], addresses the notion of ongoing, direct, and lively experience in the sense of 'what is happening to me now.' Erfahrung, however, is a different kind of experience: It relates to that which we have experienced at some time in the past, something on which we may reflect and from which we may perhaps learn." The sense implied by "experience" as used in this dissertation is the current or immediate perception of the world, as in "that was quite an experience" (this corresponds to Erlebnis), rather than the accumulated store of many experiences, as in "he has experience in driving" (this corresponds to Erfahrung).
To refer to human activity as a kind of flux is, of course, to introduce a distinction into the domain of flux, a distinction between human flux and other kinds of flux, such as biological and physical kinds. Thus differentiating the flux takes us out of the domain of flux and into the explicate order. To be consistent, one would have to refrain from speaking about the flux at all. This is the classical mystical position, as taken, for example, by Wilber (1982b). Here, we take the more secular approach of trying to talk about these things anyway, risking the contradictions. Speaking of human activity as a special kind of flux is justified by the fact that we are particularly interested in only a subset of all the possible forms shaping the flux in the entire universe, namely, the symbols, concepts, roles and institutions that shape and constitute human experience.
Henceforth, whenever the term "a social form" is used no qualitative difference from "a psychological form" is implied. What is then referred to is merely one aspect of the joint psychological-social world.
A short but densely packed call for a "constructional biology," of which Goodwin and Pribram are co-authors (Danielli et al., 1982), points out the remarkable compatibility of Goodwin's and Pribram's approaches. In this very promising programmatic statement it is suggested that both lines of work proceed from the hypothesis that the ground state or origin of biological activity and development is a distributed, homogeneous potentiality (the undifferentiated field and the frequency domain, respectively; or, in Bohm's terms, the implicate order), from which unfolds a localized foreground of heterogeneous actuality (the mature organism and the space-time domain, respectively; or the explicate order). A recent paper by Goodwin (1989) develops these ideas by including an explicit process or flux perspective, along the lines of the present inquiry.