Vector mathematics reveals in its history a splendid tension that may be described both as philosophical and cognitive. In this article we shall review the prevailing technical characterization of the concept of a vector and briefly trace its history, especially in the nineteenth century, where a mathematical fault line long in place opens into a fissure. Whitehead’s use of the term, first in mathematics and later in philosophy, then becomes a source of some insight into his worldview.

### 1. The Historical Context for Linear Algebra

#### 1.1. Vectors and Vector Spaces

The province of *vector mathematics* of course subsumes any mathematical processes or formalisms concerning vectors, including, for instance, vector analysis, the extension of the fundamental limit constructs of calculus to functions whose domain (space of admissible inputs) and codomain (space of possible outputs) consist of vectors. This raises the question of what exactly counts as a vector, the precise definition of which belongs to the more particular domain of *linear algebra*, a field of study that we shall characterize presently. Indeed, the point of this section is to sketch formal characterizations of the terms *vector* and *vector space*, as they are currently in use, to serve as a kind of standard against which various historical developments may be compared.

One’s first encounter with vectors may well be in high school physics, where one learns that a vector is something that has both magnitude (length) and direction. Accordingly, the first graphical depiction may then be an arrow representing, for instance, a force: the arrow has a length proportional to the size of the force represented, and a direction indicating its direction of application. This magnitude-direction characterization adheres so nicely to matters of physics (not just to force, but to velocity, acceleration, linear and angular momentum) that it is genuinely illuminating, and yet with regard to any formal development of vector mathematics, it is hopelessly inadequate.

The undergraduate majoring in either mathematics or one of the more quantitative sciences will—or should, at least—give quite a different response than that given by our adolescent physicist to the question of what a vector is, a response that makes no direct reference whatsoever to magnitude and direction. As unenlightening as it may seem, he or she might well say that a vector is an element of a vector space, and before we undertake the obvious ensuing question, let us at least notice that this response does have content insofar as it indicates that the notion of a vector requires some explicit systematic context that is lacking for our more naïve high school student.

So what indeed is a vector space? To make matters more palatable for the uninitiated, we shall first work through the definition of the special case of a *real vector space* and then indicate how the general construction goes.

##### 1.1.1. Some Axioms and Examples

A *real vector space* consists of a set *V*, whose elements are called *vectors*, together with two operations of strikingly distinct character. The first operation is called *addition*, and while it is denoted by the familiar plus sign (+), neither the term nor the symbol is restricted to its ordinary meaning. To say that addition is an operation on *V* is to say that for any two vectors *v* and *w*, their sum *v* + *w* must also lie in *V* (we call this closure). But this is not enough. If *V* is to be a vector space, this first operation must satisfy certain natural rules of arithmetic which we need not list here. The important thing is that they are ordinary arithmetic laws (for example, the associative and commutative laws) such as we learn in elementary algebra, and they make reference only to this one operation.[1]

The second operation is called *scalar multiplication*, and it is an external operation in that one of the two operands (the one on the left) in general does not come from the set *V* but, in the case of a real vector space, from the set of real numbers. To be more precise, for every real number *a* and vector *v* there is defined the product *av* of the vector *v* by the scalar *a*, which then must also be an element of *V*. (In the context of this form of multiplication, real numbers are called *scalars*.) Note carefully that this is *not* the ordinary multiplication of numbers, but something else entirely. Again if *V* is to be a vector space, certain ordinary-looking rules of arithmetic must hold (for example, the distributive law of multiplication over addition), and, as above, it is not important to list these rules, but only to note that in this latter case they make reference to both vector space operations, addition and scalar multiplication.

A pair of contrasting examples should help to clarify this. Consider first the set traditionally denoted **R**^{2} (read *R-two*) that consists of all ordered pairs (*x*_{1}, *x*_{2}) of real numbers.[2] We can define the addition of two such pairs *component-wise*, which is to say that we add the corresponding components. For instance, (2, 7) + (6, 4) = (2+6, 7+4) = (8, 11). This operation has many familiar algebraic properties, including the associative and commutative laws, and all of the other laws needed to qualify it as addition in a vector space.

But we are not done until we can say how to multiply an element of **R**^{2} by an ordinary real number in an appropriate way. The answer is again to work component-wise: to multiply the real number *a* times the pair (*x*_{1}, *x*_{2}) we multiply each of the components by *a*. For instance, 4 × (2, 11) = (4×2, 4×11) = (8, 44), or—and this second example is most important for what follows—π × (2, 11) = (2π, 11π). Here again we will find some very familiar arithmetic properties, including the distributive law for multiplication over addition, and all of the other laws needed to qualify **R**^{2} as a real vector space with respect to these two operations.
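The two component-wise operations just described can be sketched in a few lines of code. This is a minimal illustration, not from the article; the function names `add` and `scale` are our own choices.

```python
# R^2 as a real vector space: ordered pairs with component-wise
# addition and external scalar multiplication.

def add(v, w):
    """Vector addition in R^2: add corresponding components."""
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    """Scalar multiplication: multiply each component by the real number a."""
    return (a * v[0], a * v[1])

# The worked examples from the text:
print(add((2, 7), (6, 4)))   # (8, 11)
print(scale(4, (2, 11)))     # (8, 44)
```

Note that `scale` is an external operation in the sense of the definition: its first operand is an ordinary number, not an element of the pair-set.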

Now for the contrasting example: Just as the symbol **R** generally denotes the set of real numbers, the symbol **Z** generally denotes the set of integers (the whole numbers, their negatives, and zero). We can then follow the pattern above to form the set **Z**^{2} (read *Z-two*) that consists of ordered pairs (*m*_{1}, *m*_{2}) of integers. Of course **Z**^{2} is just a subset of **R**^{2}, and this helps because we can use the definition of component-wise addition given above just as well in this case, and we seem to be on our way to a second vector space. But the road ends here: the definition of scalar multiplication will no longer do. The problem is one of closure; for example, (2, 11) is a bona fide element of **Z**^{2}, and π is a bona fide element of the real numbers, but the scalar product π × (2, 11) = (2π, 11π) no longer consists of a pair of integers. Thus **Z**^{2} is not a real vector space under these two very natural operations.
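The closure failure is easy to exhibit directly. The following sketch (our own illustration, not from the article) uses the irrational scalar π, as in the text; the membership test `in_Z2` is a hypothetical helper.

```python
import math

def scale(a, v):
    """Scalar multiplication on pairs, component-wise."""
    return (a * v[0], a * v[1])

def in_Z2(v):
    """Membership test for Z^2: both components must be integers."""
    return all(float(x).is_integer() for x in v)

v = (2, 11)            # a bona fide element of Z^2
w = scale(math.pi, v)  # the scalar product pi * (2, 11)

print(in_Z2(v))        # True
print(in_Z2(w))        # False: (2*pi, 11*pi) falls outside Z^2
```

Addition of integer pairs stays inside **Z**^{2}; it is only the external operation by real scalars that escapes the set.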

To conclude this meager introduction to vector spaces, let us honor a promise made above: if what we have just sketched out is only a *real* vector space, what is a vector space in general, with no such qualifier? The real numbers are far more complicated than meets the eye. They have not only an arithmetic structure, but an elusive and subtle topological structure that was only finally well understood in the nineteenth century.[3] With regard to an effective set of axioms for a vector space, it turns out that only the most fundamental arithmetic properties are needed and the rest can be discarded. The point is that the scalars required by the definition of a vector space can come from any system on which the arithmetic operations of addition, subtraction, multiplication and division (by everything but zero) are defined and have their usual properties. Such sets are called *fields*, and two examples distinct from the real numbers are the sets of rational numbers (fractions) and complex numbers. Beyond those two, there are hordes of interesting and important examples of fields, some of which have very little to do with numbers, and a whole class of which are built out of finite sets. The upshot is that a vector space is always defined in tandem with an associated *field of scalars*, and that field may be at quite a cognitive distance from the real numbers.
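The rational numbers make the point concretely: because sums, differences, products, and nonzero quotients of fractions are again fractions, pairs of rationals form a vector space over the field **Q**. A minimal sketch, using Python's exact-rational type (our choice of illustration, not from the article):

```python
from fractions import Fraction

# Scalars drawn from the field Q of rational numbers. Fraction is
# closed under +, -, *, and / (by nonzero), which is exactly what
# the field axioms demand; hence Q^2 is a vector space over Q.

def scale(a, v):
    """Scalar multiplication on pairs, component-wise."""
    return (a * v[0], a * v[1])

a = Fraction(3, 4)
v = (Fraction(2), Fraction(11))
w = scale(a, v)

print(w)                                         # (Fraction(3, 2), Fraction(33, 4))
print(all(isinstance(x, Fraction) for x in w))   # True: closure holds over Q
```

Contrast this with the **Z**^{2} example above: the integers fail to form a field (no closure under division), and the corresponding pair-set fails to be a vector space over **R**.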

##### 1.1.2. Some Reflections

While almost no one who has not studied linear algebra will feel any deep rapport with the sketchy definition given above, it is in essence the currently accepted definition, and with nothing more than this in hand, we make three important observations.

First, the entire definition is given in symbolic algebraic terms, with no direct appeal to geometry. Whatever the ultimate connection of a vector space with the concept of space, it does not appear explicitly. To give two paramount indications of this, the ideas of length and angle not only are not mentioned, but indeed they are not defined for a vector space in the abstract.[4] This is all the more remarkable given that these two geometric notions are at the heart of the naïve definition given in introductory physics.

Second, the definition of a vector space fundamentally requires two kinds of objects: vectors and scalars, and the latter act externally on the former.

Third, while technically there is no geometry encoded in any of the defining symbols or axioms, one suspects that something of geometry is implicated in the system as a whole. Not only does the use of the word *space* suggest this, but also the use of the word *scalar* (as might be associated with *scaling factor* or the *scale* of a map).[5] Indeed, linear algebra has been called most astutely “the generalization to *n*-variables of the elementary theory of proportions that is learned in childhood” (Mostow and Sampson, Preface), and here again in this informal appeal to the notion of proportionality we find suggestions of space, shape, and similarity. These suggestions are not ultimately misleading: we shall see below that what these austerely algebraic axioms capture *in toto* is perhaps the most fundamental spatial notion of all.

#### 1.2. The Amphibious Concept

Algebra and geometry differ fundamentally not simply insofar as the domain of one is primarily number and the other space. Historically the terms and transformations of geometric discourse have been more difficult to disentangle from the antecedent spatial concepts from which they are abstracted than have been the terms and representations of algebraic discourse from their respective origins in the fundamental activities of counting and reckoning. The symbols and rules of algebra would seem to detach from their abstract semantic base more easily than those of geometry, and in that sense we might assert, with some irony, that of the two, algebra is the more formal, while geometry is more about forms.[6] As an introspective illustration of this point, one might consider the difference between calculating a quotient by long division—implicitly an exercise in the rules of algebra—in contrast to the composition of the simplest geometric proof—for which one will find it almost impossible to proceed without semantics. We might in fact partially characterize algebra as the study of formal schemes that address the transformation of symbolic expressions whose underlying symbols remain, by design, opaque, while characterizing geometry as the study of those properties of spatially extended objects that remain invariant under certain transformations (such as rotation, reflection, and translation). In this light, the greater apparent semantic commitment in geometry is perhaps less of a mystery.

One’s fascination with the emergence of the vector concept in the nineteenth century lies in its amphibious nature vis-à-vis algebra and geometry. Crowe has presented a detailed historical study of this (1967, especially Chapters Two, Three and Eight). The algebraic movement, Crowe claims, involves the geometric representation of the complex numbers (which indeed constitute a real vector space in the sense defined above) and their later extension to the quaternion algebra of Hamilton. As noted above, the complex numbers are an example of a field, in the technical sense suggested previously, and therefore are primarily algebraic in character. The quaternions are in fact an extension of the complex numbers generated by two new elements which, like the complex number *i*, have square equal to –1. On general principles, one knows that they cannot constitute a field—the complex numbers exhibit a kind of maximality called *algebraic closure*—but they come close: the only familiar property of arithmetic lost is the commutative law for multiplication. The point is that again we have an object too algebraic in conception to serve as a basis for linear algebra.

The contrary movement, Crowe shows, was carried forward primarily by Grassmann, who attempted to make algebraic (formalistic) certain fundamentally geometric constructs, thus freeing the notion of product, for instance, from arithmetic. Not surprisingly, his system was better adapted to the geometric elements of the vector concept needed in the application of vector methods to calculus, but despite its genius, it, too, proved insufficient as a foundation for linear algebra.

#### 1.3. Explaining the Resolution

How does the tension between algebra and geometry resolve itself in the currently accepted definition of a vector space? We make three broad points.

First, although all of the terms in the modern definition seem algebraic, the explicit separation between scalar and vector is in fact a geometric accommodation. Consider the characterization of linear algebra as an extended theory of proportions: it is basic to the notion of similarity in plane geometry that the ratio that relates similar objects is numerical, while the objects themselves are geometrical. Two distinct species of objects are thus implicated.

Second, while the abstract definition of a vector space over a field of scalars will not in general accommodate the notions of length and angle, it does, emphatically and spectacularly, accommodate the more primitive notion of dimension. Given the algebraic character of the axioms, it is astounding that one can derive from them the following two statements: (a) every vector space admits a kind of coordinate system, technically called a *basis*, and (b) while such a basis is generally not unique, any two bases for the same vector space have the same cardinality.[7] When this cardinality is finite, we speak of *finite-dimensional vector spaces*, and the number of elements in any basis is called the *dimension* of the space. All of this is to say that dimensionality—the number of degrees of freedom, the number of independent directions in space—is captured by these axioms. This indeed speaks to the geometric content latent in this formulation.
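The non-uniqueness of bases, together with the invariance of their cardinality, can be illustrated in the plane. The sketch below (our own, not from the article) uses the standard fact that two vectors form a basis for **R**^{2} exactly when the 2×2 determinant of their components is nonzero; the variable names are hypothetical.

```python
# Two different bases for R^2. A pair (u, v) is a basis precisely
# when the determinant u0*v1 - u1*v0 is nonzero (linear independence).

def det2(u, v):
    """Determinant of the 2x2 matrix with rows u and v."""
    return u[0] * v[1] - u[1] * v[0]

standard = [(1, 0), (0, 1)]
skewed   = [(1, 1), (1, -1)]   # a second, equally valid basis

print(det2(*standard) != 0)                # True: a basis
print(det2(*skewed) != 0)                  # True: also a basis
print(len(standard) == len(skewed) == 2)   # True: same cardinality, so dimension 2
```

Statement (b) of the text guarantees that no third basis of a different size can exist; the common size 2 is the dimension.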

Third—and this is essentially a technical variation of the preceding point—two key associated algebraic constructs, linear transformations and the determinant function, are likewise powerful enough to encompass the elements of Grassmann’s approach needed for vector calculus. These include the so-called inner product and cross product that will be familiar to many from basic physics.[8]
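For readers who recall these products from physics, the coordinate formulas can be sketched as follows (a minimal illustration, not from the article; the inner product here is the ordinary dot product on **R**^{3}):

```python
# The two familiar vector products of basic physics, in coordinates.

def dot(u, v):
    """Inner (dot) product in R^3."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product in R^3."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(dot(i, j))     # 0: perpendicular unit vectors
print(cross(i, j))   # (0, 0, 1): i x j = k
```

The determinant function mentioned in the text lurks inside the cross-product formula: each component is a 2×2 determinant of coordinates.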

### 2. Vectors in Whitehead

#### 2.1. Mathematical Uses

Whitehead’s approach to the vector concept in mathematics is certainly best gleaned from his *Universal Algebra* (*UA*). This is at best a difficult and knotty work that has had little impact on the development of twentieth-century mathematics, although its early reception certainly enhanced Whitehead’s stature in both the British and international academic community. There is broad agreement that the work is much indebted to Hermann Grassmann, both in its evident respect for the technical approach taken by Grassmann in both versions of his *Ausdehnungslehre* and in its explicit attempt to give algebraic expression to certain geometric intuitions.[9]

To broaden the geometric connection thematically, let us first note that a mathematician trained in the latter half of the twentieth century would find himself or herself somewhat at sea with *UA* insofar as it is not grounded even in naïve set theory, but rather in vague semantics. Thus Whitehead’s key chapter on manifolds (Chapter II) makes the following definitions:

Consider any number of things possessing any common property. That property may be possessed by different things in different modes: let each separate mode in which the property is possessed be called an element. The aggregate of all such elements is called the manifold of that property (*UA* 13).

Since he precedes this language with his own acknowledgement of debt to Grassmann, one might argue that any discomfort with Whitehead’s mathematical style is merely a matter of idiom. Yet consider a fundamental definition given in a work published just a quarter century later by a far more influential mathematician, the eminent number theorist Erich Hecke:

Definition of a group. A system *S* of arbitrary elements *A*, *B*, *C*, … is called a *group* if the following conditions are satisfied: I. There is a rule (a law of composition) given, by virtue of which from an element *A* and an element *B* we can always uniquely derive another element of *S*, say *C*. We write this relationship symbolically as *AB* = *C*…[10]

Our point here clearly is not to demean Whitehead, but to indicate that the comparative difficulty in decoding some of his foundational statements lies not in excessive formalism, but rather in the lack of it. As the example from Hecke shows, given the direction that mathematics did take over the next fifty years, it is not surprising that the style of *UA* would soon go stale. In this sense, *PM*, despite its ultimate failure as a foundation for mathematics, may be seen, in its emphasis on formalism, as far more characteristic of twentieth-century methods than *UA*.[11]

Against this background of stylistic dissonances, what can one say briefly about vectors in *UA*? Two things: First, in the later parts of the work (Book VII), where Whitehead in fact uses the term vector, he certainly means it in the geometric sense of direction and magnitude and is at some pains to free the concept from any attachment to coordinate systems insofar as he asserts that two parallel vectors of the same length are to be considered identical, regardless of the point of application. In doing so, he reflects a very modern discomfort which arises in connection with the application of coordinate systems, no matter how useful they might be: they seem to break the symmetry (homogeneity and isotropy) of space in conflating the coordinate system with space itself. In this sense the dominance of algebra over geometry in the current definition of a vector space may sometimes tend to obscure the very deep issue of covariance. Indeed this seems to be very much Whitehead’s point in Chapter IV, Book VII.[12]

In the earlier parts of the work (Book III, in particular), where Whitehead is developing the idea of a *positional manifold*, despite the foundational squishiness, one does see some arguments that look very much like present-day linear algebra (pertaining to what we now call the span and linear independence of a family of vectors), but not quite. In his insistence that the “character” of a sum of elements depend only on the ratios of certain associated coefficients (or “intensities,” in his terminology), Whitehead seems to be collapsing the standard (affine) spaces of linear algebra into real or complex projective spaces, and in that respect he is again primarily doing geometry.

#### 2.2. Philosophical Uses

The first occurrence of the term vector in *Process and Reality* is more explicitly related to its etymologically antecedent sense of *conveyor* than to its mathematical sense (*PR* 55). In commenting on Locke’s characterization of mind as “being […] furnished with a great number of simple ideas conveyed in by the senses, *as they are found in exterior things*,” Whitehead states:

Here the last phrase, ‘as they are found in exterior things,’ asserted what later I shall call the *vector* character of the primary feelings. The universals involved attain that [vector] status by reason of the fact that ‘they are found in exterior things.’

Later he writes more explicitly: “Feelings are vectors; for they feel what is *there* and transform it into what is *here*” (*PR* 87). This of course comports better with the direct geometric interpretation of a vector as bearing both length and direction, and indeed in several instances Whitehead appeals to this more technical sense.

We have also a slightly more abstract sense, less explicit in Whitehead’s own use of language, in which this geometric notion of vector applies. To make this point, we turn again to the representation of forces in high-school physics, this time to consider the construction of *force diagrams*. We might use such diagrams to resolve, either graphically or trigonometrically, a question such as this: What is the resultant of two forces of unit magnitude acting at right angles at the same point? We draw first a directed line segment—an arrow—of unit length, say, from left to right, then a second arrow, again of unit length, originating at the right endpoint of the first and at right angles to it. These, then, are the legs of a right triangle, the hypotenuse of which represents our answer: a force of magnitude √2 acting at an angle of 45 degrees. The point here is that in the vector representation of these forces, the magnitudes do not directly represent length, as might be associated with any sort of conveyance from point *A* to point *B*, but have a more general interpretation that only corresponds to length in the mathematical modeling of the physical forces. In the same way, prehensions have in Whitehead not only the sense of direction associated with conveyance from object to subject, but of magnitude in an analogous non-geometric sense insofar as the subjective form of a prehension admits degrees of intensity.
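The force-diagram computation just described reduces to a few lines of arithmetic. A minimal sketch (our own illustration, not from the article):

```python
import math

# Two unit forces acting at right angles at the same point; their
# resultant is the component-wise sum of the two vectors.
f1 = (1.0, 0.0)
f2 = (0.0, 1.0)
resultant = (f1[0] + f2[0], f1[1] + f2[1])

magnitude = math.hypot(*resultant)                         # length of the hypotenuse
angle_deg = math.degrees(math.atan2(resultant[1], resultant[0]))

print(round(magnitude, 6))    # 1.414214, i.e. sqrt(2)
print(round(angle_deg, 6))    # 45.0
```

The magnitude here stands for the size of a force, not a literal length; it acquires its geometric reading only inside the mathematical model, which is exactly the point of the paragraph above.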

Finally, and perhaps most fundamentally, in the algebraic domain we see one other affinity between process theory and the vector concept. To the extent that modern linear algebra inevitably leads to the representation of vectors and linear processes as, respectively, *n*-tuples of numbers and the rectangular arrays known as matrices, it implicates the same ontological move, albeit on a far more modest scale, that is at the heart of process metaphysics: the creation of a new object from the structured amalgamation of prior objects. Indeed, Whitehead writes:

‘Concrescence’ is the name for the process in which the universe of many things acquires an individual unity in a determinate relegation of each item of the ‘many’ to its subordination in the constitution of the novel ‘one’ (*PR* 211).

And so, too, as an object of algebra, goes the vector.

### 3. Relevant Scholarship

#### 3.1. In Mathematics

As noted above, there is some literature on the evolution of the vector concept from Grassmann to its later development by Whitehead in *UA*, but, mathematically speaking, there is little going forward from there. Crowe’s *History of Vector Analysis* mentions Whitehead just twice (1967, 104, 244), to the effect that *UA* contains a restatement of Grassmann’s ideas in language somewhat at variance with Grassmann’s own. Hence Crowe finds it of limited value to his own research goals. Nonetheless, some later works cited in the Dawson article on *UA* in this volume suggest a new interest in Grassmann and Whitehead quite beyond the restricted matter of assessing their relation to the actual evolution of vector mathematics in the late nineteenth and early twentieth century.

#### 3.2. In Philosophy

In addition to Henry’s *Forms of Concrescence* (1993), the tension between the algebraic and geometric formulations of mathematical concepts flowers delightfully in the best-known work of one of Whitehead’s own students. In *Philosophy in a New Key* (1942)—a book in fact dedicated to Alfred North Whitehead—Susanne K. Langer introduces a new dichotomy into the old notion of symbolism. In recognizing that the usual *discursive symbols* of language are insufficient to the description of human cognition, she extends the concept of symbolism to include a previously unrecognized, ignored or excluded species that she calls *presentational symbols*. Langer characterizes the difference between the discursive and the presentational, in part, with these words:

Visual forms—lines, colors, proportions, etc.—are just as capable of *articulation*, i.e. of complex combination, as words. But the laws that govern this sort of articulation are altogether different from the laws of syntax that govern language. The most radical difference is that *visual forms are not discursive*. They do not present their constituents successively, but simultaneously, so the relations determining a visual structure are grasped in one act of vision. Their complexity, consequently, is not limited, as the complexity of discourse is limited, by what the mind can retain from the beginning of an apperceptive act to the end of it (1942, 93).

We thus have a cognitive trade-off in these species of symbolism: one has a grammar that detaches the symbols from their content to facilitate recombination and a kind of projective precision, while the other binds meaning so tightly with symbol that detachment and grammar must fail. This, we might recognize, is very close to our previous comparative characterizations of algebra and geometry, and we think that Langer is well worth studying in this connection as the appropriate generalization of this distinction beyond mathematics.

### 4. A Speculative Assessment

We have commented elsewhere (Henry and Valenza 1993a, 1993b) on Whitehead’s deficiencies as a historical force in mathematics, claiming that both his style and content missed the broad sweep of twentieth-century trends. In light of the issues raised in this article, one might well feel that despite the profound failure of *Principia Mathematica* and the eternally daunting limitative theorems of Gödel, the current foundations of mathematics, such as they are, remain firmly discursive in the particular sense that Langer introduced.[13] Generally, the framework (rather than the foundations) of mathematics given by the innovation of category theory may be seen as a nod in the direction of presentational symbolism, but, all in all, we must acknowledge that mathematics in its current practice is almost entirely *speakable*. Think of some of the beautifully suggestive notations from elementary calculus: we still learn to say, for instance, “the integral of *f* with respect to *x* over the interval from *a* to *b*” when presented with the appropriate symbols, although integration is intrinsically a visual process. Our point here is to affirm the difference between speaking and seeing. Speaking depends on hearing, and hearing is predominantly a matter of a sequence of events in time. Sight, in contrast, is more of an exercise in simultaneity. Mathematics today is much more like speaking.

Certainly it is possible to *do *mathematics in a more presentational way, even if its current systematization is strongly discursive. One might even argue, as one may plausibly suppose Plato was doing in his metaphor of the Divided Line in the *Republic*, that the discursive is ultimately *not* the better way to proceed, that we should be working toward the presentational. Conceivably this is how mathematics will someday go. With this in mind—and here we can only speak with a modicum of speculative audacity—it is possible that Whitehead’s general conception of mathematics, and in particular his conception of vector mathematics, may one day be considered not only relevant, but visionary.

### Notes

[1] To put it succinctly, *V* is an *additive group* with respect to addition of vectors. See Valenza 1993 for an elementary group-theoretic approach to the vector-space axioms.

[2] By *ordered pair* we just mean to indicate, for instance, that the pair (2, 7) differs from the pair (7, 2).

[3] This involves, for instance, the existence of certain limits as required by the theoretical foundations of calculus.

[4] For these ideas, one needs a so-called *inner product space*, and for this construction the scalars must come from the real or complex numbers.

[5] While this point about the term *scalar* does make sense in terms of its current usage, one needs to be a little careful about its historical origins: see Crowe 1967, 31.

[6] See Henry 1993, 19 for a beautiful discussion of these matters using somewhat different language.

[7] Two sets are said to be of the same cardinality if there exists a one-to-one correspondence between them. For finite sets this amounts to saying that they have the same number of elements.

[8] See Spivak 1965, for an especially elegant treatment.

[9] See Dawson’s article in this volume for an extensive discussion of *UA*, its reception, and its particular relationship to Grassmann.

[10] Hecke 1923, 17 (my translation).

[11] See Henry and Valenza 1993a and 1993b for a more technical treatment of these issues.

[12] See the biographical entry on Einstein in this volume for the development of this problem and its relationship to Whitehead’s philosophical position.

[13] When considered in the context of this section, Whitehead’s participation in the entirely discursive exercise of *PM* is striking for its irony.

### Works Cited and Further Readings

Crowe, Michael John. 1967. *A History of Vector Analysis. The Evolution of the Idea of a Vectorial System*. (Notre Dame, University of Notre Dame Press).

Hecke, Erich. 1948 [1923]. *Vorlesungen über die Theorie der algebraischen Zahlen* (New York, Chelsea Publishing Company; originally Leipzig, Akademische Verlagsgesellschaft).

Henry, Granville C. 1993. *Forms of Concrescence: Alfred North Whitehead’s Philosophy and Computer Programming Structures*. (Lewisburg PA, Bucknell University Press.)

Henry, Granville C. and Robert J. Valenza. 1993a. “The dichotomy of idempotency in Whitehead’s mathematics,” *Philosophia Mathematica* 3, 1, 157-72.

Henry, Granville C. and Valenza, Robert J. 1993b. “Whitehead’s Early Philosophy of Mathematics,” *Process Studies*, 22, 1, 21-36.

Langer, Susanne K. 1942. *Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art* (Cambridge MA, Harvard University Press).

Mostow, George D. and Joseph H. Sampson. 1969. *Linear Algebra* (New York, McGraw-Hill).

Spivak, Michael. 1965. *Calculus on Manifolds*. (New York, W. A. Benjamin, Inc.)

### Author Information

Robert Valenza

Dengler-Dykema Professor of Mathematics and the Humanities

Department of Mathematics

Claremont McKenna College, Claremont, California 91711

Robert_valenza@mckenna.edu

### How to Cite this Article

Valenza, Robert, “Vector Mathematics: Symbol versus Form”, last modified 2008, *The Whitehead Encyclopedia*, Brian G. Henning and Joseph Petek (eds.), originally edited by Michel Weber and Will Desmond, URL = <http://encyclopedia.whiteheadresearch.org/entries/thematic/mathematics-and-logic/vector-mathematics/>.