Computer Science: Form without Content

Whitehead’s mathematical, scientific and philosophical work evidently preceded the first wave of automated computing that appeared after World War II. Insofar as automated computation rests upon matters associated with the foundations of mathematics (Gödel, Church, Turing, logicism, the theory of computability, and formalized systems), Whitehead was certainly involved in cognate matters, but he had no exposure to the early paradigms that governed the development of the field beginning in the 1940s. Accordingly, any engagement of Whiteheadian scholarship and computer science must be a matter of abstract reconstruction. Nonetheless, we shall see that process metaphysics does engage modern computing compellingly in three distinct and significant ways: paradigmatically, formally, and—perhaps most importantly—comparatively.

1. The Context for Computing

The enterprise of automated digital computing consists of two separate and reciprocal domains—computer engineering and computer science. Computer science studies the representation of data in certain discrete mathematical spaces and the transformation of such information according to a limited repertory of formally defined actions (see below). Computer engineering concerns the design and construction of physical systems that implement, with virtually absolute faithfulness, the abstract representations and transformations of computer science. In short, then, computer science is about computer programs and computer engineering is about devices that realize the execution of these programs. In this sense, one might want to say that computer science is the prior of the two activities, but in practice this priority has not always been well established: the machines have often preceded the programs.

To prepare for the discussion of the relation between process metaphysics and computer science, we need to establish certain fundamentals from both sides of computing. From computer science we need only understand one key abstract construction, which is presented in the following subsection. From computer engineering we need only understand a few fundamental features of the physical construction of any digital system, and this is covered directly afterwards.

1.1. The Turing Machine

The abstract construction called the Turing machine lies at the foundations of any cogent discussion of the theory of computing because it explicitly defines a simply specified architecture that is universal in the sense that any computation that can be programmed on any digital computer can be programmed on a Turing machine.[1] The specification of the machine, which we review presently, is given in such a way that its components map directly onto a mathematical system. Thus while the construction suggests the feel of an actual computing device, we remain quite properly in the domain of computer science, not computer engineering.

The Turing machine has only three components: a processor, a tape interface, and a tape, infinitely long in both directions and segmented into blocks. The processor is capable of only finitely many states, and one may think of these as the totality of the internal storage of the machine, with the further simplification that the only thing stored at any given location is either the number 0 or the number 1. The tape interface has only two functions: it may read from or write to the current tape block, and it may shift the tape one block forward or backward. The tape is an abstract storage medium that extends infinitely in both directions, but with the proviso that only finitely many blocks may be nonblank. Again we can assume that the data stored are binary (bits) or binary groupings (bytes, words, etc.).

The specification of a Turing machine proceeds entirely in terms of elementary mathematical objects—so elementary, in fact, that we need appeal only to sets and relations without any mention of arithmetic. The defining data consist of the following:

  • A finite set S of possible machine states. Among these states is a distinguished element s0, the initial state.
  • A subset F of S consisting of the designated final states, whose role we shall make clear below.
  • A set M of symbols that the system reads and writes.
  • Among these symbols is a distinguished element m0 called the blank symbol.
  • A table T of instructions. This is again a finite set, and it consists of ordered quintuples (s, m, s′, m′, t), where each component has the following interpretation:
    • s — the current machine state
    • m — the symbol stored on the tape at its current block (position)
    • s′ — the next machine state
    • m′ — the overwrite symbol
    • t — the tape transport instruction. This may specify one of three actions: move right one block, move left one block, or remain at the current block.

The table of instructions T is essentially a program, and whether we think of T as part of the architecture of the machine or as something loaded to perform some specific task is irrelevant. For reasons that emerge shortly, we do have to make the important assumption, however, that every possible pair (s, m) consisting of a machine state and a tape symbol occurs at least once as the first two entries of some quintuple in T.

The operation of a Turing machine is now easily described in three phases:

  • Initialization. The machine begins in state s0 with finitely many nonblank symbols having been written on the tape by some external agency.
  • Iteration. Suppose that the machine has reached state s and has read symbol m from the current tape block. It then matches the pair (s, m) to the first two entries in some line of the table T and retrieves some corresponding quintuple (s, m, s′, m′, t).[2] The machine now passes into state s′, overwrites the symbol m on the current tape block with the new symbol m′, and executes the tape transport instruction t.
  • Termination. The machine halts when it reaches a state s that lies in the set of final states F.

Although the construction above seems to define a single-purpose machine running a single program, Turing famously showed that one could construct (abstractly, of course) a universal Turing machine capable of running the program of any other Turing machine, and here lies the kernel of the idea of a stored program, without which computers would be far less ubiquitous, valuable, and interesting.
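To make the specification concrete, the following is a minimal simulator sketch written in standard Prolog, the language we shall meet in Section 2.2. The three-rule table, the state names, and the representation of the tape as two lists are our own illustrative choices, not part of Turing’s construction; the point is only that the abstract quintuples translate directly into executable clauses.

% rule(S, M, S1, M1, T): in state S reading symbol M, pass to state S1,
% overwrite the block with M1, and move the head as T (left, right, or stay).
rule(s0, 0, s0, 0, right).          % scan rightward over zeros
rule(s0, 1, halt, 1, stay).         % stop at the first 1
rule(s0, blank, halt, blank, stay). % or stop at the end of the written tape

final(halt).                        % the set F of final states

% run(State, Left, Current, Right, Final): Left holds the blocks to the left of
% the head (nearest first), Current the scanned symbol, Right the blocks to the right.
run(S, L, M, R, config(S, L, M, R)) :- final(S), !.
run(S, L, M, R, Final) :-
    rule(S, M, S1, M1, T),
    move(T, L, M1, R, L1, M2, R1),
    run(S1, L1, M2, R1, Final).

% move(T, Left, Written, Right, Left1, Scanned1, Right1): shift the head one block,
% producing blanks whenever it runs off the finitely inscribed portion of the tape.
move(stay,  L, M, R, L, M, R).
move(right, L, M, [], [M|L], blank, []).
move(right, L, M, [X|R], [M|L], X, R).
move(left,  [], M, R, [], blank, [M|R]).
move(left,  [X|L], M, R, L, X, [M|R]).

% ?- run(s0, [], 0, [0,1,0], Final).
% Final = config(halt, [0,0], 1, [0]).

One incidental virtue of this rendering is that a nondeterministic table (see note 2) costs nothing extra: alternative quintuples for the same pair (s, m) are simply explored on backtracking.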

1.2. The Computer as a Physical System

The general business of physics is to associate a mathematical model and, in particular, a mathematical space with a physical system in such a way that mathematical transformations in the associated space can be used to predict or, in some cases, even control the behavior of the system in question.[3] Implicit in this idea is that there must be points of correspondence between the physical world and the mathematical model, and these are of course provided by certain measurements. (We shall have more to say about this below in connection with Whitehead.) A physical theory then may be evaluated in terms of the accuracy with which an actual system meets its corresponding mathematical model at these points of contact. Put another way, the physics of a system is a matter of projecting it by a set of measurements into some mathematical space and then checking how well the behavior of the modeled system tracks the behavior of the actual system. We shall illustrate this now by example to prepare for a major subsequent point.

Consider the kinematical analysis of an object orbiting the earth; this is, of course, an instance of the fundamental problem of celestial mechanics: the prediction of trajectories. After the selection of an appropriate coordinate system (itself no small feat), our modeled trajectory will take place in a mathematical space of six dimensions in total: three to represent position and three to represent velocity. The mathematics we use to model the forces (accelerations) felt by our satellite might be given simply by Newton’s law of universal gravitation as it applies to two point masses (or homogeneous spherical distributions). The point is that we now have a system of second-order differential equations that can be treated entirely by abstract methods to give results that can then be compared to physical measurements (observations) that will tend either to corroborate or to invalidate the model. In this idealized case, no one will be surprised to learn that the results are a good enough approximation to validate this particular model in part, but they are far from perfect. For example, this two-point representation will predict an exactly elliptical orbit, and, in the short term, observations will confirm something pretty nearly, but not quite, elliptical, with the defect becoming more and more pronounced over time. If we want to do better, we must recognize and incorporate into our model the following features of the actual world: the earth and satellite do not exist alone in the solar system; the earth is not a spherically symmetric homogeneous mass; solar pressure exerts a slight but inexorable force on all bodies concerned. All of these will tighten up our model and its application considerably, but the new elements fit essentially into the same overall Newtonian scheme. If we still aren’t satisfied, we might even introduce elements of relativity theory, at which point the mathematical space of our model actually changes to something non-Euclidean.
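For concreteness, the two-point model just described reduces, in coordinates centered on the earth, to the standard vector equation

\ddot{\mathbf{r}} \;=\; -\,\frac{G\,(M + m)}{\lVert \mathbf{r} \rVert^{3}}\,\mathbf{r},

where r is the position of the satellite relative to the earth’s center, G the gravitational constant, and M and m the two masses. Written out by components, this is the system of second-order differential equations mentioned above; position and velocity together supply the six dimensions of the model’s state, and its bound solutions are precisely the ellipses whose imperfect fit to observation drives the refinements just enumerated.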

And yet having done all that we know how, would we now expect a precise match between model and reality? The answer is no, for two reasons that trace back to the same source: measurement. In the first place, insofar as the physical measurements used to parameterize these kinds of astrodynamic models are imprecise, we expect such errors of measurement to be propagated through the model. In the second place, even if we could somehow achieve a precise model independent of any instrumentation, the measurements used to confirm the model’s efficacy would be likewise susceptible to imprecision. In practice, as long as the resulting discrepancies are not of a systematic nature (e.g., measurement biases), we must be satisfied.

We stress the essential nature of physics as proceeding first from real events in real space to abstract constructions in mathematical spaces, and the attendant imprecision, because, as we shall argue, computer engineering exhibits a strangely inverted relationship between mathematics and physics that is extraordinary by virtue both of its direction and precision. Ignoring only the matter of the infinite length of the Turing tape medium, the data and operational specifications of a Turing machine serve as a complete set of functional requirements for an actual physical device. While admittedly it would not be useful to build a device from such an austere specification, in light of its universality, we can learn something from assuming that such a device were indeed built. Two essentially intertwined considerations are paramount.

First, the possible states of the Turing machine under construction must correspond unambiguously with the states of our physical device, and under this correspondence the evolution of states of the abstract machine must precisely match the evolution of states of the actual machine. For example, one form of computer memory in common use today is based on capacitor-transistor pairs (see the Encarta reference below for an elementary description). The transistor controls the state of the capacitor, which is set to either a charged or neutral state. Associating a charged state with the number 1 and an uncharged state with the number 0, an array of such pairs of sufficient length may faithfully represent the state of a Turing machine. The transition from one state to another is then effected by digital logic embedded in the hardware, with the system clock stepping the device from state to state.[4]

Second, and implicit in our first consideration, the only physical states of interest in the actual machine are those for which the correspondence with the associated abstract state space is exact; or, to put this another way, in understanding our computing device as a computer, our interest lies only in a discrete model of the system in a mathematical space similar to that in which the abstract machine is specified. Thus, returning to the example of the previous paragraph, if one were to look at any one of the myriad memory capacitors continuously, one would expect that its charge, if any, would decay over time, and indeed it is necessary to refresh these elements periodically. But in the description of the device as a computer, this decay plays no part: a sufficiently charged capacitor is read as a 1, and any deficiency or excess with respect to some “perfect” state is ignored. In this sense, the computer reads its internal memory as we read letters of the alphabet, and to much the same effect: within wide limits, we report the letter A, for instance, simply as an A, regardless of the particular characteristics of any of the multitude of printed fonts or the countless idiosyncratic variations of the human hand. Moreover, this condensation of the actual experienced instance of an image to the Gestalt for a certain upper-case letter is critical to our capacity to read. The point is that the appropriate physical description of a computer qua computer is restricted to a doubly discrete representation: first in terms of time, insofar as we are interested in its evolution of states only at discrete intervals defined by the clock cycles, and second in terms of the discreteness of the physical parameters (measurements) that define these states.
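A toy Prolog fragment may make the point vivid; the predicate and its cutoff of 0.5 are purely our own illustration and bear no relation to actual memory circuitry, but they show how a continuously varying quantity is collapsed, at read time, into one of two discrete symbols.

% bit_of(+Charge, -Bit): read a normalized charge level as a discrete bit.
% The 0.5 cutoff is an arbitrary illustrative threshold.
bit_of(Charge, 1) :- Charge >= 0.5.
bit_of(Charge, 0) :- Charge < 0.5.

% ?- bit_of(0.93, B).   % a slightly decayed but still "charged" cell
% B = 1.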

The upshot of this physical characterization of digital computer systems is, with respect to what follows, then twofold: (1) For a computer to function correctly, its representation by an abstract model cannot be an approximation; it must be an isomorphism, a correspondence that preserves all the contextually relevant information. Moreover, a computer represents physics done not only exactly, but in reverse, too: one constructs a physical system that corresponds with a mathematical space that has been specified in advance. Thus one might well say that the computer represents the given mathematical space, rather than the other way around. (2) In consequence of the discreteness of its associated abstract state space, the only physical states of interest for a computer qua computer are discrete in two dimensions of measurement: time and descriptive content (the attribute measured).

2. Process Metaphysics and Computing

2.1. The Paradigmatic Connection

The most evident and striking affinities between process thought and modern computing lie in their treatment of time as discrete and in a related, subtle consequence regarding causality.

Two decades before the first electronic implementations of digital architectures, Whitehead committed his own metaphysical system to an epochal theory of time. By this he meant in particular that while there was a genetic structure to concrescence, and one that in fact admitted four phases, these phases were atemporal. He argued—and we think unassailably—that the very idea of subjective unity requires that only the actual entity as a unit admits duration, for this is precisely its quantum of experience. Thus time is consumed by actual entities in discrete granules of whose subdivision one cannot meaningfully speak. Our point near the end of Section 1 was that much the same holds for digital computers insofar as they faithfully represent their abstract counterparts in the domain of discrete mathematical spaces. Time is epochal for a digital system, and in a sense that goes beyond the mere fact that it is driven from state to state by a clock that runs in some fixed rhythm. A computer operating (in isolation) would achieve the same evolution of states independently of some potential arrhythmia in its system clock. Indeed, one might like to say that there is no way a computer could tell from its internal state that there was some irregularity in its clock.[5]

We hold that there is, moreover, a second element to this affinity, one that goes back to the nature of a Turing machine. Except for its initial state, the current state of a Turing machine is determined (allowing for a finite multiplicity of choices for nondeterministic machines) by its previous states, here interpreted to include all three components, not just the processor. One can accordingly make the corresponding statement for the physical system that represents the abstract machine. The point is that the current state of such a physical system, again viewed discretely as a computer, depends only on its previous (discrete) states. The internals of how such a state has been achieved are of no consequence, and in this sense the totality of what we might know of the full physical system (as, for instance, an exercise in the physics of semiconductors) is irrelevant. What matter are the previously achieved states of the device as a Turing machine.

The connection with Whiteheadian metaphysics can now be made: An actual entity is a concrescence of prehensions which in turn are tripartite in nature, consisting of subject, subjective form, and—most important to the point under discussion—the datum to be prehended (PR 23). Insofar as this datum is an actual entity, Whitehead speaks of physical prehensions, and such physical prehensions constitute “[t]he primary stage in the concrescence […] in which the antecedent universe enters into the constitution of the entity in question, so as to constitute the basis of its nascent individuality” (PR 152). Thus “past” internal processes play no role in a new becoming; only objectified actual entities, the discrete outcomes of past processes, can serve that function. This is, of course, completely analogous to what we have just said of a computer system vis-à-vis its trajectory of discrete states.[6]

2.2. The Formal Connection

The paradigmatic connections of the previous subsection depend only on the granularity of time and the impenetrability of process, which in the latter case is to say that only an actual entity, not a concrescence in progress, may serve as a physical datum for a subsequent prehension. Yet this second consideration need not prevent us—and indeed did not prevent Whitehead—from developing intellectual perspectives on the process of concrescence that lead to a structural description that includes a four-phase characterization of concrescence (the conformal phase, followed by three supplemental phases) and associated species of feelings (e.g., pure simple physical feelings, conceptual feelings, simple and complex comparative feelings).[7] The assertion that these forms of concrescence may indeed be modeled by computer programs is, to say the least, astonishing, especially in light of the priority given to experience in process metaphysics. Not surprisingly, we can make only the most cursory case for this in a brief article such as this.[8]

In general, a computer program is a sequence of data assertions and transformative statements, often supplemented by other data or parameters supplied during execution. For instance, a program to compute the area of a circle will consist of a declaration of π, a declaration of, and an instruction to read, the radius r during program execution, and the formula that uses these two values to calculate the requisite area. In the computer language Prolog, the data assertions and transformative statements are simply facts and rules expressed in a natural but alternative formalization of predicate logic. Thus we have the following three-clause example of a Prolog program:

mother_of('Catherine', 'Mary').
mother_of('Anne', 'Catherine').
grandmother_of(X, Z) if
    mother_of(X, Y) and
    mother_of(Y, Z).

The program expresses two facts (Catherine is the mother of Mary, and Anne is the mother of Catherine) and one rule that defines the relation that X is the grandmother of Z. The execution of a Prolog program requires just one execution parameter, in this case a query expressing a proposition that the program may evaluate according to its facts and rules. Thus in our example, an apt query might be

grandmother_of('Anne', 'Mary'),

to which the program would respond true.
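The connectives if and and above are an English-like rendering of the two standard Prolog operators; in the Edinburgh syntax accepted by contemporary Prolog systems, where :- is read as “if” and the comma as “and,” the same program and query take the following directly executable form (a minimal sketch, here following SWI-Prolog conventions).

mother_of('Catherine', 'Mary').
mother_of('Anne', 'Catherine').
grandmother_of(X, Z) :-
    mother_of(X, Y),
    mother_of(Y, Z).

% At the interactive prompt:
% ?- grandmother_of('Anne', 'Mary').
% true.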

The connection between Prolog programs and process metaphysics proceeds, broadly speaking, in three steps. First, the three-part nature of a prehension maps directly onto the three components of a Prolog predicate:

Subjective_form(Subject, Datum)

Thus one has a natural representation that the subject Subject prehends the datum Datum with subjective form Subjective_form.[9] Second, the structure of concrescence as characterized by the species of prehensions and corresponding phases of concrescence may accordingly be captured by a list of facts and rules in a Prolog program, with the additional virtue that such a listing is by its nature atemporal. The phases of concrescence, in particular, take the form of a rule that shows how a subsequent feeling is derived from certain antecedents:

Subjective_form_1(Subject, Datum_1) if
    Subjective_form_2(Subject, Datum_2) and
    Subjective_form_3(Subject, Datum_3) and
    …
    Subjective_form_n(Subject, Datum_n).

Third, in the actual execution of the corresponding Prolog program, one achieves a representation of concrescence which, while necessarily temporal, nonetheless yields deeper insights into Whitehead’s overabundant complexities (Henry 1993, Chapter 1). Thus in this formal connection we are able to represent something of the interior structure of process metaphysics.
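To suggest what such an execution looks like, here is a deliberately toy rendering in directly executable Prolog. Because a capitalized functor such as Subjective_form_1 is schematic rather than legal Prolog, we fold the subjective form into a first argument; every particular name below (the forms physical, conceptual, and comparative, the subject e1, and the two data) is a hypothetical placeholder of our own rather than one of Whitehead’s categories, and the single rule stands in for the much richer structures developed in Henry 1993.

% feels(Form, Subject, Datum): Subject prehends Datum with subjective form Form.
feels(physical, e1, past_occasion_a).
feels(conceptual, e1, eternal_object_b).

% A later-phase feeling derived from a physical and a conceptual antecedent.
feels(comparative, Subject, contrast(D1, D2)) :-
    feels(physical, Subject, D1),
    feels(conceptual, Subject, D2).

% ?- feels(comparative, e1, Datum).
% Datum = contrast(past_occasion_a, eternal_object_b).

The execution of the query is, of course, temporal, but the listing itself is not, which is precisely the virtue claimed above.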

2.3. The Comparative Connection

There is evidently a progression from the crude to the fine structure of process theory in the two preceding subsections, but now we must put any further refinements aside. Instead we make a brief categorical argument against the strong Artificial Intelligence thesis, to the effect that the Whiteheadian worldview will not accommodate machine consciousness in a digital system as such.

We saw in Section 1 that the physical representation of a computer qua computer is, unlike any plausible description of the motions of the heavens, in fact a full isomorphism between the actual and the abstract. But as in all physics, the vehicle of correspondence for this representation is measurement, and it is exactly at this point that a foundational element of process theory intervenes.

Whitehead claimed that perception occurs in two distinct modes—that of causal efficacy and that of presentational immediacy. The first might be characterized as vague, non-quantitative, yet causally efficacious experiences of actual physical events; the second as a matter of sharp patterns abstracted from the first, and shorn of all subjectivity. The key point for us is that “all exact measurements concern perceptions in the mode of presentational immediacy” (PR 326), so that any set of measurements is at best a filter that transmits an abstraction of the past that cannot include the creative, experiential aspects that might be conveyed in the mode of causal efficacy.

The inference to be drawn from the two preceding paragraphs is simple: a computer system, as an isomorphic replica of a given mathematical system via a system of measurements, is nothing more than a set of measurements and must therefore be noncreative and nonconscious.[10]

3. Relevant Scholarship and Speculative Assessment

As noted in the introduction, Whitehead, via Principia Mathematica, was deeply versed in the foundations of mathematics from which the theory of computability grew. But in light of his historical placement, scholars have paid him little attention with regard to computing and computer science.

Much of our Section 2 is abstracted from Henry’s book (1993) and the article by Henry and Valenza (1997). Tanenbaum’s text is a clear and insightful introduction to computer systems design, while Sedgewick’s is more targeted at computer science in the abstract. The work by David Ray Griffin is an excellent application of Whiteheadian thought to mainstream problems in philosophy.

We make two points related directly to the issues raised in Sections 2.2 and 2.3, respectively.

With regard to the representation of forms of concrescence by programming constructs, one might suggest other structural points of emphasis that could profitably be brought out by moving from the predicate logic of Prolog to an object-oriented language. Such languages, while less facile in the expression of predicates, are correspondingly more naturally expressive of hierarchical relations via the notions of subclassing and inheritance. Hence the relationships among the species of objects and their mutual interactions in a process ontology might well be exhibited from the most general to the most specific.

We think, moreover, that the argument we have made above against machine consciousness is capable of generalization, and, more importantly, even if one ultimately rejects its cogency, it nonetheless illuminates key elements of process thought. In other words, by abstracting the components of the argument, one might frame the decisive features of any experiential metaphysical system.

Notes

[1] The key point is that the Turing machine is powerful enough to realize any general recursive function, and, according to the Church-Turing Thesis, this suffices for all computable functions, to our best understanding of computability: see Turing 1936.

[2] Our requirement that every pair (s, m) occur at least once in T ensures that the machine always has at least one path forward. Note, however, that since we do not insist that such pairs are represented exactly once, there may be several paths forward. In this case, the machine is called nondeterministic. See Sedgewick 1988 for an excellent account of nondeterminism in computer science.

[3] The use of the word space in this context is standard and intended to suggest physical space metaphorically, although technically, within mathematics, a space is usually just a set with some elements of formal structure.

[4] The clock in this sense is not a timekeeper, although it does act at regular intervals, but rather a device that systematically applies voltages to the electronics, thereby initiating the state transitions. Cf. note 6 below.

[5] This is only true, of course, within certain limits. For instance, we mentioned above that the capacitors used in some types of memory need to be refreshed, and in many systems this refreshment rate is tied to the operation of the system clock. Such systems would certainly fail at drastically reduced clock speeds. Digital engineers could probably give other examples, but, nonetheless, we think our main point remains intact.

[6] In this context it is well worth noting that for Whitehead there is a specific mention of metaphysical rhythm that carries a sense far beyond bare temporal periodicity. In speaking of the rhythm of the creative process (PR 151), he describes a two-phase cycle that moves from the private experience of concrescence to the public objectification that then becomes data for the next cycle. Thus one might well make some further analogy between private experience and the “intermediate” analog physical states of a computer system that are completely and deliberately ignored by the discrete model.

[7] Indeed it is the conformal phase alone that admits the data of prior actual occasions, and it is thus more specifically to this initial phase of concrescence that the just mentioned notion of impenetrability applies.

[8] See Henry 1993 for full details.

[9] We think of all three of these relata as variables and capitalize them accordingly.

[10] See Henry and Valenza 1997 for a more expansive presentation of this argument.

Works Cited and Further Readings

Griffin, David Ray. 1998. Unsnarling the World Knot: Consciousness, Freedom, and the Mind-Body Problem (Berkeley and Los Angeles, University of California Press).

Henry, Granville C. 1993. Forms of Concrescence: Alfred North Whitehead’s Philosophy and Computer Programming Structures (Lewisburg PA, Bucknell University Press).

Henry, Granville C. and Robert J. Valenza. 1997. “The preprojective and the postprojective: A new perspective on causal efficacy and presentational immediacy,” Process Studies, 26, 1-2, 33-56.

Microsoft Encarta Online Encyclopedia, 2005. “Computer Memory.”

Sedgewick, Robert. 1988. Algorithms, 2nd Edition (New York, Addison-Wesley).

Tanenbaum, Andrew S. 2005. Structured Computer Organization, 5th Edition (Englewood Cliffs NJ, Prentice-Hall).

Turing, Alan M. 1936. “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd Series, 42, 230-65.


Author Information

Granville C. Henry
Professor Emeritus, Departments of Mathematics and Philosophy
Claremont McKenna College, Claremont, California 91711
mishka1@msn.com

Robert J. Valenza
Dengler-Dykema Professor of Mathematics and the Humanities
Department of Mathematics
Claremont McKenna College, Claremont, California 91711
Robert_valenza@mckenna.edu

How to Cite this Article

Henry, Granville C., and Robert J. Valenza, “Computer Science: Form without Content”, last modified 2008, The Whitehead Encyclopedia, Brian G. Henning and Joseph Petek (eds.), originally edited by Michel Weber and Will Desmond, URL = <http://encyclopedia.whiteheadresearch.org/entries/thematic/sciences/computer-science-form-without-content/>.