Computers are useless; they can only give you answers.
Metaphor plays a key role in the discourse of science as a tool for constructing new concepts and terminology. The utility of theory-constitutive metaphors depends upon how accurately the concepts they generate actually do "carve the world at its natural joints", in Boyd's terms. More radically constructivist writers have challenged the assumption that such natural joints exist to be found (Kuhn 1993), but most writers are realists who believe at least that the natural world exists and has structure independently of our metaphors. However, in the discourse of computer science metaphor plays an even more central role. Here metaphors are used not so much to carve up a pre-existing natural world as to found artificial worlds whose characteristics can then be explored. The metaphors create both the world and the joints along which it can be carved. Computer scientists thus live in metaphor the way fish live in water, and like fish rarely take note of their medium. Their metaphors tend to become transparent, and so terms that start their careers with clear metaphorical roots, such as "structures", "objects", or "stacks", very quickly gather formalized technical meanings that appear to be detached from their origins in the source domain. This phenomenon exists in other fields but is particularly acute in computer systems, because the objects involved are so hidden from view that the only terms we have for referring to them are metaphorical. This makes the metaphor dead on arrival--since there is no truly literal way to refer to computational objects, the metaphorical terms soon take on a literal quality.
But under our view of metaphor, technical terminology does not really ever become completely detached from its metaphorical roots. In this section we'll take a look at some of the metaphors underlying computation and the diverse set of metaphorical models that underlie programming languages. A theme of the discussion will be the idea that anthropomorphic metaphors are often present in computation, in varying degrees of explicitness. This derives from the fact that programs are often concerned with action and actors, and that our tools for understanding this domain are grounded in our understanding of human action. This idea is taken up in more detail in the next chapter.
Computation itself is a structuring metaphor for certain kinds of activity in both people and machines. Human behavior may be seen as a matter of "computing the answer to the problem of getting along in the world", although there are certainly other ways to look at human activity. Similarly, a computer (which from a more literal view might be seen as nothing more than a complex electrical circuit) may be seen variously as solving problems, monitoring and controlling external devices, servicing its users, simulating a collection of virtual machines, and so forth. The use of the term "computation" to describe the activity of certain complex electronic devices is itself metaphorical, a historical artifact of the computer's origins. As an organizing metaphor, it privileges certain aspects of the domain over others. In particular, like the mentalistic view of the mind, it privileges the formal operations that take place inside the computer while marginalizing the interactions of the computer with the outside world.
Historically, computation grew up around the formal notion of a mechanical realization of a mathematical function. A computer was seen as a device for accepting an input string, generating an output, and then halting. This model was perfectly adequate for the tasks that early computers were asked to perform (such as cryptography) but was stretched further by later applications that could not be so readily cast as "problem solving". In particular, the problem-solving model lacked any notion of an ongoing relationship with the external world. Cyberneticists such as Gregory Bateson were thus impelled to attack the computational model for ignoring feedback relationships (Bateson 1972), and more recently a rebellious faction of the artificial intelligence field has grown dissatisfied with the problem-solving model of controlling autonomous agents and has proposed alternative models that emphasize interaction with the world (Agre 1995).
Of course, despite the limitations of the formal model of computation, computing devices have since been employed in a vast array of applications that involve this kind of ongoing, time-embedded control. A great many more computers are employed as embedded control devices in mechanisms such as cars, airplanes, or microwave ovens than as purely symbolic problem-solvers. So the limitations of the metaphor have not, in this sense, proved to be a limitation on practice. However, they have certainly had their effect on computational theory, which on the whole has had little relevance to the design of embedded control systems.
Explicit metaphors are often used to teach beginners how to program. One common example, by now a cliché, is to describe the interpretation of a program as similar to following the steps of a recipe. These instructional metaphors allow a student to understand the abstruse operations of the computer in terms borrowed from more familiar domains. This is almost a necessity for learning about the operations of a system that cannot be directly perceived. Since the purpose of these metaphoric models is to describe something that is hidden in terms of something visible, the source domains are often taken from the concrete physical world, such as boxes containing pieces of paper as a metaphor for variables containing values.
Mayer (1989) showed that giving novices metaphorical models of computer language interpreters resulted in improved learning compared to a more literal technical presentation. Mayer used a variety of metaphorical models in his experiments. One such model included mappings such as ticket windows for input and output ports, scoreboards for memory, and a to-do list with an arrow marker for the interpreter's program and program counter. This model was presented both as a diagram and as a textual description. Students who were presented with the model did better on tests, particularly on problems requiring "transfer"--that is, problems that involved concepts not presented directly in the original instructional materials. Further studies showed that presenting the model before the main body of material resulted in students who scored higher on tests than those who had the model presented to them afterwards. This supports the idea that familiarity with the model aids in the assimilation of the technical content by giving it a meaningful context.
Sometimes tangible metaphors can result in invalid inferences that bring over irrelevant characteristics of the source domain. In one case, students who were told a variable was like a box inferred that, like a physical box, it could hold more than one object (Boulay 1989). In a similar vein, students shown the sequence of assignment statements LET A=2; LET B=A interpreted them to mean (again using the container metaphor) that a single concrete object, 2, is first placed into A, then taken out of A and put into B, leaving A empty. In this case the students were overapplying the object and containment metaphors, concluding that 2 had the property of only being capable of being in one place at one time and thus having to leave A before it could be in B. These sorts of overattribution errors indicate that learning to apply a metaphor is not always a simple matter. In addition to the metaphoric mapping itself, one must also know the limits of its application.
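The misconception can be seen by running the equivalent assignments in a modern language; the following sketch (in Python rather than the BASIC of the example) shows that assignment copies a value rather than moving it out of its source:

```python
# The BASIC sequence LET A=2; LET B=A, transliterated to Python.
a = 2
b = a  # copies the value; nothing is "taken out" of a

assert a == 2  # a is not left "empty" -- the value 2 is in both places
assert b == 2
```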
Of course an alternative to using concrete metaphors for computation is to change the computer system itself so that its operations actually are concrete (that is, tangible). This possibility is discussed in Chapter 4. Systems with tangible interfaces still might generate problems of invalid inference from the physical domain, but provide an opportunity for users to debug their metaphoric mapping through interaction with the objects of the target domain.
In this section we examine some of the common models of programming and the metaphor systems that underlie them. These include the imperative, functional, procedural, object-oriented, and constraint models.
The analysis presented here is in terms of the broad metaphors used to explain and understand programs, and will gloss over many of the more formal properties of programming languages. For instance, Steele and Sussman (1976) present the fascinating result that many imperative constructs, such as goto, can be easily simulated using only functional constructs, given a language that supports recursive higher-order procedures. Despite this theoretical result, and many other similar ones that demonstrate that different languages have equivalent formal powers, the basic concepts used to make sense of imperative and functional programs remain quite distinct. The discussion here sets the stage for the next chapter, which explores one particular aspect of metaphorical grounding in more detail (in particular, see section 3.3.2). This is the role of anthropomorphic or animate metaphors in the description of computational activity. This metaphor system is pervasive in computation for historical and practical reasons. In particular, we will look at agent-based models of programming, which are explicitly organized around anthropomorphic metaphors.
One of the fundamental metaphor systems used to describe computer processes is the imperative model. This model underlies most discourse about the hardware levels of computer systems, and is the source of such terms as instruction and command. The imperative metaphor underlies most naive models of computing such as the transaction level model (Mayer 1979) and the descriptions of computation found in beginner's texts. It also forms the conceptual basis underlying popular early computer languages such as BASIC and FORTRAN. But it may also be found in its earliest forms in Turing's and von Neumann's description of the first theoretical computing machines, and so is really at the very root of the modern idea of computation itself. The imperative model captures the notion of the computer as a device capable of sequentially executing simple instructions according to a stored program. The basic elements of this metaphoric model are:
In the next chapter we will look at the anthropomorphic roots of the imperative metaphor. Here we should just notice the emphasis on a single implicit agent, step-by-step activity, and the mechanical nature of each step. In the imperative metaphor, the interpreter is visualized as a sort of person, albeit a rather stupid or robotic person, reading instructions and following them in a manner that could readily be duplicated by machinery. Each primitive action is simple enough to be executed without any need of further interpretation; no intelligence or knowledge is required of the instruction follower. What sort of language is suitable for specifying the program for such a computer? This model is called the imperative model because the elements of such languages are commands: instructions to the implicit agent inside the computer. If such a statement were translated into English, it would be in the imperative mood. An imperative sentence (e.g., "Give me the pipe!") has an implied subject, namely the target of the command, which does not appear explicitly as a word but is implied by the structure of the sentence. Similarly, in an imperative language the subject of the instruction does not appear explicitly but is implied--the computer itself, or the instruction follower within it, is the implied subject who will execute the command.
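The imperative model can be caricatured in a few lines of code. The following sketch (the instruction set here is invented for illustration) shows an unintelligent instruction follower stepping through a stored program one command at a time:

```python
# A toy instruction follower: a single implicit agent steps through
# stored commands sequentially, needing no intelligence of its own.
def run(program):
    memory = {}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "store":    # store <var> <value>
            memory[args[0]] = args[1]
        elif op == "add":    # add <dest> <var> <amount>
            memory[args[0]] = memory[args[1]] + args[2]
        pc += 1              # advance to the next instruction
    return memory

result = run([("store", "x", 1), ("add", "x", "x", 10)])
assert result["x"] == 11
```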
If the imperative model emphasizes control of sequential operations, then the functional model emphasizes values, expressions, and computation in the mathematical sense. In functional languages (e.g., Haskell (Hudak, Jones et al. 1991)), the fundamental unit is not an imperative command, but an expression that specifies a value. While most languages support functional expressions to some extent, pure functional languages enforce the functional style by having no support for state and no imperative constructs like assignment and sequencing. Most functional languages support higher-order functions, or functions that can accept other functions as arguments and return them as values.
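A brief illustration of higher-order functions, sketched in Python rather than a pure functional language: here `twice` (a name invented for this example) accepts a function as an argument and returns a new function as its value:

```python
# A higher-order function: it takes a function and returns a function.
def twice(f):
    return lambda x: f(f(x))

add3 = lambda x: x + 3
add6 = twice(add3)   # a function built out of another function

assert add6(1) == 7  # add3(add3(1)) == 7
```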
The functional model uses the concept of a mathematical function as its founding metaphor. Like the imperative model, the functional model was present at the very start of the history of computing. Whereas the imperative model emphasizes action, the functional model emphasizes the results of action, expressed as a functional relation between input and output items. The Turing machine computes a function using imperative operations. In some sense, the joining of these two different ways of thinking in the Turing machine was the founding act of computer science, and the two models continue to be interwoven in various ways as the field grows. Functional languages are favored as a theoretical tool by computer scientists, because functional programs are much easier to analyze than those that incorporate state and state change. They also permit the use of a variety of powerful expressive techniques, such as lazy evaluation, which are problematic in the presence of state change. Conversely, functional languages do poorly at integrating imperative constructions and state, which in turn introduces issues of time, control, and serialization. There have been quite a few efforts to graft imperative capabilities onto purely functional languages, but as one paper on the subject put it, "fitting action into the functional paradigm feels like fitting a square block into a round hole" (Jones and Wadler 1993). One of the most successful end-user programming techniques ever invented, the spreadsheet, uses what is essentially a functional model of programming. Each cell in a spreadsheet contains a functional expression that specifies the value for the cell, based on values in other cells. There are no imperative instructions, at least in the basic, original spreadsheet model. In a sense each spreadsheet cell pulls in the outside values it needs to compute its own value, as opposed to imperative systems where a central agent pushes values into cells. 
In some sense each cell may be thought of as an agent that monitors the cells it depends upon and updates itself when it needs to. As Nardi (1993) points out, the control constructs of imperative languages are one of the most difficult things for users to grasp. Spreadsheets eliminate this barrier to end-user programming by dispensing with the need for control constructs, replacing them with functional constructs.
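The spreadsheet's pull-based evaluation can be sketched in a few lines: each cell is a functional expression over other cells, and a cell recomputes its value on demand by pulling in the values it needs (the cell names and formulas here are invented for illustration):

```python
# A miniature spreadsheet: each cell is a function of other cells.
cells = {
    "price": lambda get: 3.0,                      # input cell
    "qty":   lambda get: 4.0,                      # input cell
    "total": lambda get: get("price") * get("qty"),  # functional expression
}

def get(name):
    # Pull-based evaluation: a cell's value is recomputed on demand.
    return cells[name](get)

assert get("total") == 12.0
cells["qty"] = lambda get: 5.0   # change an input cell...
assert get("total") == 15.0      # ...and dependent cells reflect the change
```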
Functional programming lends itself to metaphors of connection and flow. Functions can be pictured as physical devices, akin to logic gates, with a number of input and output ports, continuously computing the appropriate values for their outputs given their inputs. Functional composition then is simply connecting up the input ports of one device to the output ports of other devices. The network acts to continually maintain a relationship between inputs and outputs.
Flow metaphors are straightforward to represent graphically, and there have been quite a number of visual programming environments that make use of them, including Hookup (Sloane, Levitt et al. 1986), VennLISP (Lakin 1986), Fabrik (Ingalls, Wallace et al. 1988) and Tinkertoy (Edel 1986). Graphic dataflow languages like these are especially well-suited to building programs that operate real-time devices or process streams of data. In this context, a program essentially operates as a filter-like device, accepting a stream of data, processing it a single element at a time, and producing a corresponding output stream. Hookup, for instance, was designed to work in real time with streams of MIDI data to and from electronic musical instruments, while LabView was designed to handle laboratory instrumentation tasks.
Figure 2.2 (after Hookup). A network for computing centigrade temperature from Fahrenheit. Data flows left to right.
Figure 2.2 shows an example of a graphic representation, modeled after the appearance of Hookup, of a functional program to convert temperature from Fahrenheit to centigrade units using the formula:
C = (F - 32) * 5/9
The flow of data is left to right. Functions are represented by graphical symbols akin to those used to represent gates in digital logic. Input and output values are indicated in boxes (input boxes are on the left and supply values, output boxes receive values). In Hookup, the values of outputs are effectively being computed continuously, and so the value in centigrade will update instantaneously whenever any of the inputs change. Note that this presents a somewhat different environment than a standard functional language, in which the application of a function to arguments must be done explicitly. In a Hookup-like language, function application is performed automatically whenever an argument changes. Graphic data flow languages thus take a step towards viewing functions as continuously maintained relations rather than procedures.
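The behavior just described can be sketched as a push-style network in which setting an input propagates new values downstream, so the centigrade output updates whenever the Fahrenheit input changes (the `Wire` abstraction here is invented for illustration, not taken from Hookup):

```python
# A push-style dataflow sketch of figure 2.2.
class Wire:
    def __init__(self):
        self.value = None
        self.listeners = []   # update functions to run when the value changes

    def set(self, value):
        self.value = value
        for fn in self.listeners:
            fn()

def connect(fn, inputs, output):
    # Wire up a function device: recompute the output whenever any input changes.
    def update():
        if all(w.value is not None for w in inputs):
            output.set(fn(*[w.value for w in inputs]))
    for w in inputs:
        w.listeners.append(update)

fahrenheit, centigrade = Wire(), Wire()
connect(lambda f: (f - 32) * 5 / 9, [fahrenheit], centigrade)

fahrenheit.set(212)   # changing the input updates the output immediately
assert centigrade.value == 100.0
fahrenheit.set(32)
assert centigrade.value == 0.0
```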
Whereas regular functional languages do not deal well with input and output, the dataflow variant is able to model these in the form of continuous streams of changing values. In this model, input devices appear as sources for the flow of values through the networks, but sources that change their values over time. Output devices correspondingly are sinks which accept a stream of changing values. The mouse icon, highlighted at the center of figure 2.3, is an example of an input device, with three separate output flows for X, Y, and BUTTON.
Hookup also extended the flow metaphor to deal with state. Its dataflow metaphor was based loosely on digital logic, with functions represented as gates. In addition to stateless devices such as gates, Hookup included devices with state that functioned in a manner analogous to registers in digital logic. A register had a special clocking input that would cause the current input value to be stored and presented on the output wire. This way of handling state at least made sense within the dominant metaphor of the system. However, the presence of state also introduces a requirement for sequential control, which was not readily supported by the metaphor. Hookup included clocks and clockable sequence objects that provided some ability to produce sequences of values, but using these to control sequences of events was awkward.
Figure 2.3 A Hookup network to control a bouncing animated figure. It incorporates input from the mouse, output with sound and animation, and internal state registers.
The Playground environment (Fenton and Beck 1989) was another interactive system and language that was organized around a functional model but ran into trouble when trying to deal with tasks that were more naturally expressed using imperative constructs. Playground, like LiveWorld, was designed to be a platform for modeling worlds in which graphic objects were expected to perform actions. The basic unit of computation was an "agent" that functioned more-or-less as a functional spreadsheet cell. Slots inside objects were agents, and each agent was in charge of computing its own value in a functional style (specified by means of an expression language with English-like syntax). This included slots specifying basic graphic information such as x and y position and size. In essence the processing of the system involved a parallel recomputation of all cells, with the new value specified as a functional expression.
As might be expected, it was difficult to express actions using this model. For instance, you could not easily say "go forward 10 steps"--instead, you had to specify separate functional expressions for computing the next values of the x and y coordinates. It was possible to have a rule for, say, the x cell that continually set its value to x + 10, which would produce a constant motion in the x direction, and this could even be made conditional, but essentially this model forced the program to be organized around low-level concepts like position rather than behavior. Eventually this spreadsheet-based programming model had to be augmented with additional imperative constructs.

By dispensing with state and control issues, the functional metaphor presents a very simple model of computation that can be made readily accessible to novices through a variety of interface metaphors. The lack of state and control drastically simplifies the language and makes the system as a whole more transparent. But Playground shows that there are fundamental problems with the use of the functional model as the sole or primary basis for programming animate systems. The downside of eliminating control is that programs that need to take action or exert sequential control are difficult or impossible to construct. For animate programming, where action is foremost, functional programming seems unnatural in the extreme.
The procedural model of programming, which underlies modern languages like Lisp and Pascal, combines elements of the imperative and functional metaphors within the more powerful overarching framework of procedural abstraction. The procedural model thus is not founded directly on a single metaphor, although it lends itself to new and powerful forms of description based on anthropomorphic and social metaphors. Under the procedural model, a program is constructed out of smaller programs or procedures. A procedure can both carry out instructions (like an imperative program) and return values (like a function). The procedural model introduces the notion that one procedure can call another to perform some task or compute some value. The metaphoric and animate suggestiveness of the term call indicates the beginnings of an anthropomorphic, multiagent view of computation. A procedure, encapsulating as it does both the imperative and functional aspects of the computer, is in essence a miniature image of the computer as a whole.
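The dual character of procedures can be shown in a small sketch: a procedure can both carry out actions (its imperative side) and return a value (its functional side), and one procedure calls another for a subtask (the names here are invented for illustration):

```python
# A procedure both acts and returns a value; procedures call one another.
log = []

def greet(name):
    log.append(f"hello, {name}")   # carries out an action...
    return len(name)               # ...and also returns a value

def greet_all(names):
    # One procedure calling another to perform a subtask.
    return sum(greet(n) for n in names)

total = greet_all(["ada", "alan"])
assert total == 7
assert log == ["hello, ada", "hello, alan"]
```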
Of course, "procedures" in the loose sense of the word can be created in any language. What procedural languages do is to reify the notion of procedure, so that procedures become objects for the user, which can be manipulated and combined into more complex arrangements. Some beginner's languages (e.g., Logo) have a distinct notion of procedure, while others (BASIC, at least in its original form) do not. The availability of named procedures can have an effect on the developing epistemology of the student:
In programming cultures like those of LISP, Pascal, and LOGO, in which procedures and hierarchical structures have been given concrete identity, programmers find powerful metaphors in tree searches and in recursive processes. There is a tendency to anthropomorphize, to look at control mechanisms among procedures and within the flow of procedures in terms of actors or demons, or other creatures resident in the computer capable of giving advice, passing data, receiving data, activating procedures, changing procedures, etc. (Solomon 1986, p. 98).
Papert also emphasizes the importance of procedures as a thinking tool. They are a computational realization of what he calls the principle of epistemological modularity, that is, the idea that knowledge and procedures must be broken up into chunks that can be called up from the outside without the caller having to know about the inside:
Everyone works with procedures in everyday life ... but in everyday life, procedures are lived and used, they are not necessarily reflected on. In the LOGO environment, a procedure becomes a thing that is named, manipulated, and recognized as the children come to acquire the idea of procedure (Papert 1980, p. 154).
In procedural languages, control is still essentially imperative, in that there is a single locus of control serially executing commands, but instead of a program being an undifferentiated mass of instructions, it is organized into a multiplicity of procedures. The locus of control passes like a baton from procedure to procedure, with the result that one can see the operation of a program in either single-actor or multiple-actor terms. The procedural model lends itself to an animistic metaphor of "little people" who live in the computer and can execute procedures and communicate among themselves (see the next chapter). The properties of the procedural model that lend themselves to anthropomorphization include the modularization of programs into small, task-based parts and the existence of a simple yet powerful model for inter-procedure communication through calling and return conventions. The procedural model, then, is the first step towards an agent-based model of computation.
Procedures, like the computer itself, can be approached through any or all of the metaphor systems mentioned thus far: imperative, functional, and anthropomorphic. Since a procedure is as flexible and conceptually rich as the computer itself, it essentially permits recursive application of the metaphorical tools of understanding. But the procedural world introduces new powers and complications. Because there are multiple procedures, they need to have ways to communicate and new metaphors to support communication. In languages that support procedures as first-class objects, the metaphor is complicated by the fact that procedures can create and act on other procedures, as well as communicate with them. Issues of boundaries and modularity also arise in a world with multiple actors. Some of these issues are treated by object-oriented models of programming.
Although procedures are anthropomorphized, they are in some sense more passive than the metaphor suggests. They will only act when called from the outside. This, too, derives from the formal Turing model of the computer as a whole, which treats it as a device for computing the answer to a single input problem and then halting, with no interaction with the world other than the original parameters of the problem and the final result. Real computers are much more likely to be running a continual, steady-state, non-terminating process, constantly interacting with external devices. The formal model does not adequately capture this aspect of computation, and the procedural model too tends to marginalize it. Real procedural programming systems often, but not always, make up for this by extending the model to include ways to interface with the outside world, for instance by being able to specify a procedure that will execute whenever a particular kind of external event occurs. This is a useful feature, but still leaves control, interaction, and autonomy as marginal concepts relative to the basic procedural model. One purpose of agent-based models of programming is to bring these issues to the center.
Object-oriented programming (OOP) is an interesting example of a programming methodology explicitly organized around a powerful metaphor. In OOP, computational objects are depicted metaphorically in terms of physical and social objects. Like physical objects, they can have properties and state, and like social objects, they can communicate and respond to communications.
Historically, OOP arose out of languages designed for simulation, particularly Simula (Dahl, Myhrhaug et al. 1970), and for novice programming in graphic environments such as Smalltalk (Goldberg and Kay 1976). In object-oriented simulations, the computational objects are not only treated as real-world objects, but they also represent real-world objects. A standard example is the spaceship, which is modeled by a computational object that has properties like position, orientation, and mass; and can perform actions like rotate and accelerate. The object-oriented metaphor explicitly acknowledges the representational relationship between computational structures and real-world objects, and encourages the development of such representational relationships. But because computational objects have properties that are quite different from spaceships and other real-world objects, the elements of the mapping must be carefully selected so that, on one hand, the computational elements are both powerful and parsimonious, and on the other, a sufficiently rich subset of real-world properties and behaviors are encompassed. In most OOP languages, objects are organized into classes or types that make up an inheritance hierarchy.
OOP may be viewed as a paradigm for modularizing or reorganizing programs. Rather than existing in an undifferentiated sea of code, parts of programs in OOP are associated with particular objects. In some sense they are contained in the objects, part of them. Whereas the procedural model offered communication from procedure to procedure, through the metaphor of calling, in OOP, communication between procedures (methods) is mediated by the objects.
A variety of metaphors thus are used to represent the communication and containment aspects of OOP. The earliest OOP languages used the metaphor of sending messages to objects to represent program invocation. Objects contain methods (or behaviors or scripts in some variants) for handling particular kinds of messages; these methods are procedures themselves and carry out their tasks by sending further messages to other objects. OOP languages use constructs like send, ask, or <== to denote a message send operation. Objects also contain slots that hold state. In general the slots of an object are only accessible to the methods of that object--or in other words, the only way to access the internal state of an object is by sending it messages. Objects can present a tightly controlled interface that hides their internal state from the outside world; the interface of an object thus acts somewhat like the membrane of a cell.
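A minimal sketch of the message-sending metaphor, using the spaceship example from earlier in this section (the particular slots and methods are invented): the object's slots are reachable only through its methods, which together act as its membrane-like interface:

```python
# Slots are internal state; methods are the only way in or out.
class Spaceship:
    def __init__(self):
        self._x, self._y = 0.0, 0.0   # slots, hidden behind the interface

    def accelerate(self, dx, dy):     # a method handling one kind of message
        self._x += dx
        self._y += dy

    def position(self):               # even reading state goes via a message
        return (self._x, self._y)

ship = Spaceship()
ship.accelerate(3.0, 4.0)             # "sending a message" to the object
assert ship.position() == (3.0, 4.0)
```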
These simple elements have given rise to a wide variety of extensions to the basic metaphor, and a correspondingly vast literature on object-oriented methodologies and, more recently, object-oriented "design patterns" (Gamma, Helm et al. 1994). The diversity of such schemes indicates that while the basic mapping between computational and real-world objects may be intuitive and straightforward, the ramifications of that mapping are not. In any real OOP system, there are always hard design choices to make, reflecting the fact that there will always be more than one way to carve the world up into objects.
The history of object-oriented programming shows how a technology that evolved around a particular metaphor can be subject to forces that tend to stretch or violate that metaphor. The original simple idea behind OOP--objects receive messages and decide for themselves how to respond to them--is complicated by many issues that come up when trying to realize the idea. One complication is the related set of issues surrounding object types, hierarchies, inheritance, and delegation. Multiple inheritance, while a useful technique, involves quite complicated issues and does not have a single natural formulation. The prototype-based, slot-level inheritance schemes of Framer and LiveWorld (see chapter 4) are attempts to deal with some of these problems in a more intuitive way.
It is known that message-passing and procedure calling are formally equivalent (Steele 1976). Some OOP languages (like CLOS (Steele 1990) and Dylan (Apple Computer 1992)) try to exploit this by getting rid of the message-passing metaphor and using regular procedure calling to invoke object methods. This has the advantage that method selection can be specialized on more than one object. This technique, while powerful, is somewhat at variance with the object-oriented metaphor as previously understood. Because a method can be specialized on any argument, the method can no longer be seen as associated with or contained inside a single object or class. Here we have a case of two metaphors for communication clashing and combining with each other. Proponents of the generic procedure approach point out that it is more powerful, more elegant, and (in the case of CLOS) more integrated with the underlying procedural language. Opponents decry the violation of the object metaphor and the increased complexity of dispatching on multiple arguments.
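The difference can be sketched with a hand-rolled dispatch table: a generic procedure selects its method using the classes of all its arguments, so the method no longer "lives inside" any single object (the names here are invented, and this is a toy stand-in for CLOS-style generic functions, not their actual API):

```python
# Generic-procedure sketch: dispatch on the types of *all* arguments.
methods = {}

def defmethod(name, types, fn):
    methods[(name, types)] = fn

def send(name, *args):
    # Select the method by the classes of every argument, not just the first.
    fn = methods[(name, tuple(type(a) for a in args))]
    return fn(*args)

class Ship: pass
class Asteroid: pass

defmethod("collide", (Ship, Asteroid), lambda a, b: "ship hits asteroid")
defmethod("collide", (Asteroid, Asteroid), lambda a, b: "rocks bounce")

assert send("collide", Ship(), Asteroid()) == "ship hits asteroid"
assert send("collide", Asteroid(), Asteroid()) == "rocks bounce"
```

Neither `collide` method here belongs to the Ship class or the Asteroid class alone, which is exactly the point of contention described above.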
The Actor model of computation (Hewitt 1976) was another important early influence in the development of OOP, and deserves mention here. The name is obviously anthropomorphic, and a variety of anthropomorphic metaphors influenced its development, including the little-person metaphor (Smith and Hewitt 1975) and the scientific community metaphor (Kornfeld and Hewitt 1981). The Actor model was explicitly designed to support concurrency and distributed systems.
The object-oriented programming model is a natural outgrowth of the procedural model, and shares a good many of its features. From a broad historical perspective, it can be seen as a further step in the reification and anthropomorphization of parts of programs, driven by the need to manage ever more complex programs and distributed systems. Rather than programs and data existing in an undifferentiated mass, the various components are organized, managed, and encapsulated. The emphasis thus shifts to communication between the now-separated parts of the system.
The constraint model is something of a departure from the other programming models considered so far. Despite their differing metaphoric bases, to program in any of the previous models is to provide the computer with a fully deterministic procedure for carrying out a computation. Even functional languages generally have an imperative interpretation, so that the programmer is aware of the sequence of events that will occur when the program is executed. Constraint languages, in contrast, implement a form of declarative programming in which the programmer specifies only the relations between objects, leaving the procedural details of how to enforce those relations to the constraint-solving system. As a result, constraint languages require significantly more intelligence in their interpreter, whose operation is correspondingly harder to understand.
However, from the metaphorical standpoint constraint systems may be seen as a natural extension of the flow metaphor found in functional languages. In a functional language, flow is unidirectional, but in a constraint system, data can flow in either direction along a link. To illustrate this, let's return to the temperature conversion example introduced earlier. In a constraint language, the statement:
C = (F - 32) * 5/9
is not only an instruction about how to compute C given F, but a general declaration of a relationship between the two quantities, so that either may be computed from the other. The constraint system has the responsibility for figuring out how to perform this calculation, and thus must have some algebraic knowledge or the equivalent. This knowledge takes the form of a variety of constraint-solving techniques. The simplest technique, local propagation of known values, is readily expressed through the flow metaphor. In the functional version of the flow metaphor, values flow from inputs along wires, through devices, and eventually produce output values. In local propagation, values flow in a similar way, but the wires are bi-directional and inputs and outputs are not distinguished. There are still computational devices, but instead of having distinguished input and output ports, any port can serve as an input or an output (in other words, instead of implementing functions the devices implement relations). Such a device will produce an output on any port whenever it receives sufficient inputs on its other ports.
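The bidirectional flow for the temperature constraint can be sketched as follows (hypothetical Python, with the propagation step hand-coded; a real constraint system would derive both directions from the single declared relation):

```python
# Local propagation for the constraint C = (F - 32) * 5/9: whichever
# variable is known, the other is computed from it. Values "flow" in
# either direction along the same link.
def propagate(c=None, f=None):
    if f is not None:
        return {"c": (f - 32) * 5 / 9, "f": f}
    if c is not None:
        return {"c": c, "f": c * 9 / 5 + 32}
    raise ValueError("need at least one known value")

propagate(f=212)   # flow from Fahrenheit to centigrade
propagate(c=100)   # flow in the other direction
```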
Figure 2.4: A constraint network.
Figure 2.4 shows a constraint-based variant of the dataflow network from Figure 2.2. In contrast with the earlier figure, here data can flow in either direction along a wire, so the value in the centigrade box might have been specified by the user or computed by the network. The arrows show one possible flow pattern, the one that results when the user specifies the value of centigrade. The distinction between input and output boxes no longer holds, but a new distinction must be made between constants and variables--otherwise the network might choose to change the value of the constant 32 rather than the value in the Fahrenheit box! Constants are indicated by a padlock beside them, telling the solver that it is not to alter those values. This circuit-like metaphor for constraints was introduced by Sketchpad (Sutherland 1963), the first system to represent constraints graphically. The technique was used as an expository device in (Steele 1980) and implemented as an extension to ThingLab (Borning 1986).

Constraint programming may be viewed as one particularly simple and powerful way of combining declarative and procedural knowledge. A constraint combines a relationship to enforce, expressed in declarative terms (e.g., an adder constraint that enforces the relationship a = b + c), with a set of procedural methods for enforcing it. In the case of the adder, there would be three methods corresponding to the three variables that can serve as outputs, each of which computes a value from the remaining two variables.
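The adder constraint might be sketched like this (hypothetical Python): one declarative relation, a = b + c, packaged with three procedural methods, one per variable that can serve as an output, fired by local propagation as soon as exactly one variable is unknown.

```python
class Adder:
    """Constraint a = b + c: a declarative relation plus one
    procedural method per variable that can serve as output."""
    methods = {
        "a": lambda v: v["b"] + v["c"],
        "b": lambda v: v["a"] - v["c"],
        "c": lambda v: v["a"] - v["b"],
    }

    def enforce(self, values):
        # Local propagation: fire when exactly one variable is unknown.
        unknown = [name for name in "abc" if name not in values]
        if len(unknown) == 1:
            out = unknown[0]
            values[out] = self.methods[out](values)
        return values

Adder().enforce({"b": 3, "c": 4})   # computes a
Adder().enforce({"a": 10, "b": 3})  # computes c
```

The point of the packaging is that the declarative relation and the imperative methods travel together: any port of the adder can serve as its output.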
Other languages, most notably logic programming languages like Prolog, have been fashioned around the idea that programs should take the form of declarative statements about relationships. One slogan put forth by advocates of logic programming is "algorithm = logic + control" (Kowalski 1979), where "logic" refers to an essentially declarative language and "control" refers to some additional mechanisms for controlling the deductive processes of the language interpreter. The problem with Prolog is that it shortchanges both logic and control by attempting to use the same language to specify both. Thus the language does not have the full expressive power of logic, because that would make the imperative interpretation of a logic program intractable. And while Prolog adds some control constructs, such as the cut, to its declarative language, in general the ability to control the deductions of the interpreter is limited.
Constraint languages have the potential to overcome this limitation, since they separate out the declarative and imperative parts of a program. Declarations of intent and procedures can each be made in the idiom appropriate to the task, then linked together in a single constraint. ThingLab (Borning 1979) was organized in this fashion. However, rather than develop this idea as a programming paradigm, ThingLab, its descendants (e.g., (Freeman-Benson, Maloney et al. 1990)), and constraint-based systems in general evolved in a different direction.
A typical contemporary constraint system (e.g., Juno-2 (Heydon and Nelson 1994)) is designed as a declarative language together with a black-box constraint solver that can solve constraint systems in that language. The user of such a system can specify constraints using the given declarative language, but cannot specify procedures for satisfying them. In other words, the imperative side of the equation is given short shrift. The reason this path was taken is probably a desire for constraint solvers that are both fast and theoretically tractable. A system that permits constraints to be built with arbitrary user-defined procedures would be quite difficult to control.
Constraint Imperative Programming (CIP) (Lopez, Freeman-Benson et al. 1993) is an attempt to integrate declarative constraints with an imperative programming language. The idea behind CIP is to combine an object-oriented procedural language with an automatic constraint-solving capability that can enforce relations between slots of objects. While this is a promising line of research, CIP languages are still limited in their expressive power. The constraint solver is monolithic--you can't create new imperative methods to solve declarative constraints--and in some sense subordinate, relegated to the role of automatically maintaining relations among variables in an otherwise procedural language. The same is true of other efforts to combine constraints with more general models of programming (Hentenryck 1989) (Siskind and McAllester 1993).
These efforts, while certainly valuable, do not seem to me to explore the full range of possibilities of constraints as an organizing metaphor for programming. All of them take an essentially fixed constraint solver and make it an adjunct to an otherwise ordinary procedural language. A programming system that was fully organized around a constraint metaphor would have to have a much more powerful concept of constraints, one that could encompass computation in general, as does the object-oriented model. Constraints would have to be as powerful as procedures (which they could be, if they had a fully expressive procedural component) but also capable of being invoked without an explicit call. The agent-based systems described in chapter 5 are attempts to realize this alternative version of the constraint model.
The only area of computer science that makes a regular practice of engaging in explicit discourse about its own metaphors is the field of human interface design. No doubt this is due to the nature of the interface task: to make hidden, inner, abstruse worlds of computational objects and actions accessible to users who may not have any direct knowledge of their properties.
From our point of view, just about everything about computation is metaphorical anyway, and the distinction between a highly metaphorical interface (such as the Macintosh desktop) and a command-line interface (such as that of UNIX) is a matter of degree only. It is not hard to see that the UNIX concepts of files and directories are just as grounded in metaphors as the documents and folders of the Macintosh--the former have simply been around longer and thus have achieved a greater degree of transparency.
Interface metaphors are generally easier to design than metaphors for programming languages, for a couple of reasons. First, interface metaphors are usually designed for more specific tasks than languages. They generally have only a small number of object types, relations, and actions to represent, and so each element can be given a representation carefully tailored to its purpose. A programming language, by contrast, has an extensible set of objects, operations, and relations, and an interface that presents its elements to the user must necessarily operate on a more general level. For example, a non-programmable interactive graphics program might have special icons for each of the objects it allows the user to draw (e.g., rectangles and lines) and each of the operations it can perform (e.g., erase, move, resize). On the other hand, a general-purpose graphic programming environment will have an ever-expanding set of objects and operations, so its interface, unless extended by the user, can only represent objects and operations in general, and will therefore be limited in the metaphors it can employ.
One way to deal with this problem is to find an intermediate level between application-specific but non-programmable tools, and general-purpose environments that are programmable but difficult to present metaphorically. This approach is taken by Agentsheets (Repenning 1993), an environment-building tool designed in layers. The substrate level is a general-purpose programming environment featuring a grid-based spatial metaphor that can contain active objects, programmed in a general-purpose language (an extension of Lisp called AgenTalk). Adapting Agentsheets to a new task domain involves having a metaphor designer build a set of building blocks that work together using a common metaphor, such as flow. The end-user can then construct simulations using the block-set and the metaphor, but is insulated from the power and complexity of the underlying programming language.
Another reason that interface metaphors are easier to design than metaphors for programming is that direct-manipulation interfaces in general don't need to represent action. Under the direct-manipulation paradigm, all actions are initiated by the user. While the results of an action, or the action itself, might have a graphic representation (for instance, opening a folder on a desktop can be represented by a zooming rectangle), the action is soon over and does not require a static representation. Nor is there a need to represent actors, since the user is the only initiator of action in the system. The interface world is beginning to recognize the limits of the direct-manipulation paradigm and to embrace representations of action in the form of anthropomorphic interface agents (see section 3.3.4).
Programming, however, is all about action and thus programming environments have a need to represent actions and actors. It's interesting that functional programming models, in which action is de-emphasized, have been the most amenable to presentation through graphic metaphors. To represent action, programming metaphors may also need to turn to anthropomorphism.
 For a longer discussion of pushing/pulling metaphors in multiagent systems see (Travers 1988).
There are some tricky techniques that allow functional languages to express imperative constructs (Henderson 1980) (Jones and Wadler 1993), for instance turning actions into representations of actions that can then be manipulated through functions. These techniques are theoretically interesting but do not really affect the arguments here.
 As far as I know this idea was first formulated by Alan Borning (Borning 1979).