February 11, 2004

book: computers and cognition

It seems we've all been getting phenomenological these days, so now is no time to stop. I just finished reading one of the better things to come out of the 1980s (my little brother and Metallica being two other notable exports) -- Winograd and Flores' monograph "Understanding Computers and Cognition". The book is the retelling of an intellectual journey, philosophically examining the failure of Artificial Intelligence to achieve its lofty goals and directing the insights gained from this exploration toward a new approach to the design of computer systems. Or, more simply: how Heidegger and friends led an AI researcher to the study of human-computer interaction.

The authors begin by challenging what they call the "rationalistic" tradition (what today might be referred to as positivism?) stretching throughout most of Western thought. This tradition's problem-solving approach consists of identifying relevant objects and their properties, and then finding general rules that act upon these. The rules can then be applied logically to the situation of interest to determine desired conclusions. Under this tradition, the question of achieving true artificial intelligence on computers, while daunting, holds the glimmer of possibility.

Winograd and Flores instead argue for a phenomenological account of being. The authors pull from a variety of sources to make their claims, but rest primarily on Heidegger's Being and Time and the work of biologist Humberto Maturana. One of the important implications is the notion of a horizon, background, or pre-understanding, making it impossible to completely escape our own prejudices or interpretations. Much of our existence is ready-to-hand, operating beneath the level of recognition and subject-object distinction, and this cannot, in its entirety, be brought into conscious apprehension (i.e., made present-at-hand). AI programs at the time, however, were largely representational.
The program's "background" is merely the encoding of the programmer's apprehension and assumptions about the program's domain. While this approach can certainly produce useful programs, they exhibit the decontextualized, desituated nature commonly attributed to computer interaction and are a far cry from human intelligence.

The authors delve further into the issue of language, arguing that "...the essence of language as a human activity lies not in its ability to reflect the world, but in its characteristic of creating commitment. When we say a person understands something, we imply that he or she has entered into the commitment implied by that understanding." Thus, the authors argue that computers, by their very nature, are incapable of commitment and are therefore prevented from entering into language on the same terms as humans.

The authors' conclusion? Move from AI to HCI. There is an error in assuming that success will follow the path of artificial intelligence. The key to design lies in understanding the readiness-to-hand of the tools being built, and in anticipating the breakdowns that will occur in their use. A system that provides a limited imitation of human facilities will intrude with apparently irregular and incomprehensible breakdowns. On the other hand, we can create tools that are designed to make maximal use of human perception and understanding without projecting human capacities onto the computer. Other thoughts and notes are in the extended entry.

The design section at the end of the book discusses the Coordinator system, which explicitly represents different speech acts as a way of attempting better coordination of organizational communication, in particular supporting the formation and evaluation of commitments. I'm not familiar with the literature on this system, but colleagues of mine have referred to it as a known failure of early CSCW (computer-supported cooperative work).
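To make the flavor of this approach concrete, here is a minimal sketch of what explicitly encoding speech acts as a commitment state machine might look like. The state names loosely follow Winograd and Flores's "conversation for action" diagram, but every identifier here is my own invention for illustration, not the Coordinator's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a conversation-for-action as an explicit state
# machine over named speech acts. All names are illustrative inventions.
LEGAL_MOVES = {
    "requested": {"promise": "promised", "decline": "closed",
                  "counter": "countered"},
    "countered": {"accept": "promised", "decline": "closed"},
    "promised":  {"report-completion": "reported"},
    "reported":  {"declare-satisfied": "closed",
                  "declare-unsatisfied": "promised"},
}

@dataclass
class Commitment:
    requester: str
    performer: str
    content: str
    state: str = "requested"
    history: list = field(default_factory=list)

    def act(self, speech_act: str) -> str:
        """Advance the conversation by one explicitly named speech act."""
        moves = LEGAL_MOVES.get(self.state, {})
        if speech_act not in moves:
            raise ValueError(
                f"'{speech_act}' is not a legal move in state '{self.state}'")
        self.history.append(speech_act)
        self.state = moves[speech_act]
        return self.state
```

Note how the formalism forces every conversational move to be named and legal before it can happen -- exactly the kind of explicitness whose social cost is discussed next.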
The explicit encoding of otherwise "ready-to-hand" communication seems potentially dangerous and limiting of social nuance. For example, if a commitment is encoded formally, how much room for ambiguity (or delaying, or weaseling, or whatever) is left without making it present-at-hand? It is similar to one of the projects discussed in my friend Scott's thesis, in which, by trying to leverage a theory of human behavior (in this case Goffman's notion of different fronts or faces), he encoded formally what people practice unconsciously with high degrees of nuance, thus creating a disconnect between actual human behavior and the well-intentioned mechanisms of the interface.

How would more recent AI developments be treated through the lens of this book? Modern statistical techniques can incorporate probabilistic logic and learning from example data, but still revolve around the statistical model (e.g., specific graphical models) and training techniques (e.g., the EM algorithm) used. These are still representational (primarily in the choice of statistical model), but less strictly so. How far can we extrapolate this, loosening the representation? Do we have any of our own 'hard-coded' models (e.g., Chomskian grammar)? Where do our own representational structures lie on the spectrum between nature (genetics, evolution) and nurture (socially learned and negotiated meaning)?

The question here is at the heart of modern cognitive neuroscience: at what representational level, if any, can we understand human functioning, cognition, and experience (at varying levels of consciousness)? Physics? Chemistry? Neuronal interaction? At what level should we look for the organization (or, perhaps better stated, embodiment) of a structure-determined, autopoietic system that allows for experience, intelligence, and a background to arise? In short, where and how do science and phenomenology dovetail?
In the meantime, it is argued that the design of computer programs should steer clear of these pretensions. The lesson from above teaches us that even as we come to understand the mechanisms of thought, language, and experience, the way we naturally perceive and act in the world is not experienced or conceptualized in the terms of these mechanisms.

The big challenge left for us after reading this book: How do we determine the readiness-to-hand of the tools being built (or the desired 'invisibility' of ubiquitous computing environments)? How do we design for it, and how do we measure it, evaluate it, and value it? Furthermore, how do we look beyond just 'tools'? How do we build things that appropriately shift between ready-to-hand and present-at-hand, and that are designed to evoke emotional as well as rational responses? (e.g., a nuclear missile launch control interface should be anything BUT ready-to-hand, requiring conscious deliberation). We've had almost 20 years of HCI research since this book was published, with numerous successes in various (often constrained) domains, but these are still the core theoretical and methodological motivations pushing us forward.

--NOTES--

Heideggerian Philosophy

Ready-to-hand: the world in which we are always acting unreflectively. The ready-to-hand is taken as part of the background, taken for granted without explicit recognition or identification.

Present-at-hand: the world in which we are consciously reflective, identifying, labeling, and recognizing artifacts and ideas as such.

Breakdown: the event of the ready-to-hand becoming present-at-hand.

Thrownness: the condition of understanding in which our actions find some resonance or effectiveness in the world.

Properties of thrownness

------------------------------------------

The Biology of Cognition: Humberto Maturana

p.43 Autopoiesis.
An autopoietic system is defined as: "...a network of processes of production (transformation and destruction) of components that produces the components that: (i) through their interactions and transformations continuously regenerate the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network..." -- Maturana and Varela, Autopoiesis and Cognition (1980), p.79

A plastic, structure-determined system (i.e., one whose structure can change over time while its identity remains) that is autopoietic will by necessity evolve in such a way that its activities are properly coupled to its medium. Structural coupling is the basis not only for changes in an individual during its lifetime (learning) but also for changes carried through reproduction (evolution). In fact, all structural change can be viewed as ontogenetic (occurring in the life of an individual). A genetic mutation is a structural change to the parent which has no direct effect on its state of autopoiesis until it plays a role in the development of an offspring.

A cognitive explanation is one that deals with the relevance of action to the maintenance of autopoiesis. It operates in a phenomenal domain (domain of phenomena) that is distinct from the domain of mechanistic structure-determined behavior. For Maturana, the cognitive domain is not simply a different (mental) level for providing a mechanistic description of the functioning of an organism. It is a domain for characterizing effective action through time. It is essentially temporal and historical.

The sources of perturbation for an organism include other organisms of the same and different kinds. In the interaction between them, each organism undergoes a process of structural coupling due to the perturbations generated by the others.
This mutual process can lead to interlocked patterns of behavior that form a consensual domain.

---------------------------------------------------

Speech Acts

Five categories of illocutionary point (following Searle): assertives (committing the speaker to the truth of a proposition), directives (attempting to get the hearer to do something), commissives (committing the speaker to a future course of action), expressives (expressing a psychological state), and declarations (bringing about a state of affairs by the very act of declaring it).

------

The failures of AI

p.123: ...the essence of language as a human activity lies not in its ability to reflect the world, but in its characteristic of creating commitment. When we say a person understands something, we imply that he or she has entered into the commitment implied by that understanding. But how can a computer enter into a commitment?
p.137
jheer@acm.ørg