British cybernetics

Authored by: Joe Dewhurst

The Routledge Handbook of the Computational Mind

Print publication date:  August  2018
Online publication date:  September  2018

Print ISBN: 9781138186682
eBook ISBN: 9781315643670

10.4324/9781315643670-4

Abstract

This chapter will explore the role of embodiment in British cybernetics, specifically in the works of Grey Walter and Ross Ashby, both of whom have had a distinctive influence on later research in embodied cognition. The chapter will also consider the relationship between Alan Turing and the British cyberneticists, and contrast Turing’s work on computation with the contributions of Walter and Ashby. Contemporary ‘embodied’ approaches to cognitive science are often contrasted with computational approaches, with the latter being seen as emphasizing abstract, ‘disembodied’ theories of mind. At their most extreme, proponents of embodied cognition have rejected computational explanations entirely, as is the case with the enactivist tradition. This chapter will conclude by suggesting that the work of the British cyberneticists, which combined computational principles with embodied models, offers a potential route to resolving some of these tensions between embodied and computational approaches to the mind.


Introduction

This chapter will explore the role of embodiment in British cybernetics, specifically in the works of Grey Walter and Ross Ashby, both of whom have had a distinctive influence on later research in embodied cognition. The chapter will also consider the relationship between Alan Turing and the British cyberneticists, and contrast Turing’s work on computation with the contributions of Walter and Ashby. Contemporary ‘embodied’ approaches to cognitive science are often contrasted with computational approaches, with the latter being seen as emphasizing abstract, ‘disembodied’ theories of mind. At their most extreme, proponents of embodied cognition have rejected computational explanations entirely, as is the case with the enactivist tradition. This chapter will conclude by suggesting that the work of the British cyberneticists, which combined computational principles with embodied models, offers a potential route to resolving some of these tensions between embodied and computational approaches to the mind.

Section 1 will give a brief overview of the cybernetics movement, highlighting the relationship between American and British cybernetics. Section 2 will explore Turing’s engagement with the Ratio Club (a hub of British cybernetics), and consider themes of embodiment in Turing’s own work on computation and artificial intelligence. Section 3 will turn to Walter’s experiments in designing autonomous robots, and the similarities between these designs and later work on embodied robotics. Section 4 will discuss the centrality of homeostasis in Ashby’s work on cybernetics, and the subsequent influence that it has had on second-order cybernetics and enactivism. Finally, Section 5 will draw all of these themes together and suggest that paying attention to the role of embodiment in cybernetics might offer some insight into how to resolve contemporary disputes concerning the proper place of computation in our theories of mind and cognition.

1  The cybernetics movement in the US and the UK

The cybernetics 1 movement emerged in the post-Second World War US, building on interdisciplinary connections established during the war. The movement was centered around a series of ten interdisciplinary conferences sponsored by the Josiah Macy, Jr. Foundation, held from 1946 to 1953 and focusing primarily on the themes of circular causality and feedback mechanisms. Key figures associated with the movement in the US included Norbert Wiener (who worked in information theory and control engineering, and further developed the concept of a feedback loop, first introduced in the early twentieth century), John von Neumann (who was instrumental in developing some of the earliest electronic computers), and Warren McCulloch (who along with his collaborator Walter Pitts developed the idea of treating neurons as basic logic gates, and inspired later computational theories of mind). 2 McCulloch’s contributions are described elsewhere in this volume (Abrahams, this volume; see also Abrahams, 2016), and the history of cybernetics in the US is relatively well documented (see e.g. Heims, 1993; Edwards, 1996; Dupuy, 2000/2009). Less frequently discussed are the contributions of the British cyberneticists (Pickering, 2010, being a notable exception) which will form the focus of this chapter.

The cybernetics community in the UK was centered around the Ratio Club, an informal dining/discussion group that met intermittently from 1949 to 1958 (see Husbands and Holland, 2008). The club initially met monthly at the National Hospital for Nervous Diseases in Queen Square, London, with the aim of bringing together “those who had Wiener’s idea before Wiener’s book appeared” (Bates, 1949) to discuss questions and topics relating to cybernetics. 3 After a year the club began to meet less frequently (although still regularly), and in different locations, with several meetings scheduled to take place outside London (Husbands and Holland, 2008, pp. 113–122). After 1953, the frequency of meetings declined further, and by the end of 1955 the club was essentially disbanded, with only one final reunion meeting held in 1958 (ibid., pp. 125–129). Topics for discussion at the club ranged from those familiar to contemporary cognitive science, such as ‘pattern recognition’ and ‘memory’, to those more specific to the cybernetic milieu, such as ‘adaptive behavior’, ‘servo control of muscular movements’, and even ‘telepathy’ (ibid., p. 116). The club’s membership included not only psychologists, psychiatrists, and neurophysiologists, but also physicists, mathematicians, and engineers, reflecting the diverse interests and backgrounds of the cybernetics movement.

Several of the key British cyberneticists also attended one or more of the Macy conferences, and there was significant overlap between the two groups (US and UK), including a 1949 visit by Warren McCulloch to give the opening talk at the inaugural meeting of the Ratio Club (Husbands and Holland, 2012). In this chapter, I will focus primarily on three figures associated with the British cybernetics movement, chosen to highlight the role that embodiment played in British cybernetics. The first of these, Alan Turing, is not normally considered to be a ‘cyberneticist’, but was a member of the Ratio Club and was highly relevant to the development of cybernetics (and subsequently, artificial intelligence and cognitive science). The other two, Grey Walter and Ross Ashby, are probably the most famous of the British cyberneticists, and have been influential in the development of what has come to be known as ‘embodied cognition’ (see e.g. Shapiro, 2014; Miłkowski, this volume). By embodied cognition, I have in mind the strong claim that the specific details of a cognitive system’s body or environment are essential to our understanding of cognition, and play a constitutive role in cognitive processing, rather than the weak claim that a computational theory of mind must be physically implemented in some form or other (typically assumed to be the brain and/or central nervous system, although some theories are happy to remain neutral about the precise details of the implementation). This chapter will explore the contributions of Turing, Walter, and Ashby to embodied cognition, and consider how the UK cybernetics movement might offer a model for contemporary computational theories of mind that take seriously the role of embodiment in explanations of cognition.

Alan Turing (1912–1954) was born in the UK (in Maida Vale, London), where he lived and worked almost all of his life. He completed his undergraduate degree in mathematics at Cambridge, before being elected a fellow at King’s College Cambridge on the strength of his undergraduate dissertation. He subsequently completed a PhD at Princeton with Alonzo Church, returned to the UK and then spent the Second World War as a cryptanalyst, designing mechanical and computational systems with which to break German ciphers. After the war, he worked on various projects designing early stored-program computers, first at the National Physical Laboratory in London, and then in the mathematics department at the University of Manchester. Towards the end of his life he also became interested in chemical morphogenesis, on which he published an influential paper in 1952.

William Grey Walter (1910–1976) was born in the US to an English father and an Italian-American mother, but moved to the UK in 1915 and remained there for the rest of his life. He was educated in Cambridge as a physiologist, and subsequently made some important early contributions to the development of electroencephalography (EEG). From 1939 until his retirement in 1975 he was based at the Burden Neurological Institute just outside Bristol, where he continued to conduct EEG research, alongside which he developed a personal interest in cybernetics and the general study of brain and behavior. He was a founding member of the Ratio Club, attended the final Macy conference in 1953, and helped organize the first Namur conference (‘The First International Congress on Cybernetics’) in 1956.

William Ross Ashby (1903–1972) was born in the UK and lived there for most of his life, although from 1961 to 1970 he was based at Heinz von Foerster’s Biological Computer Laboratory in Illinois. 4 Ashby’s first degree was in zoology, and he also had experience in clinical psychiatry. His contributions to cybernetics included two major books, Design for a Brain (1952, republished as a revised edition in 1960) and An Introduction to Cybernetics (1956), alongside many other journal articles and research papers. He was also a founding member of the Ratio Club, and an invited speaker at the 1952 Macy conference.

As Pickering (2010, p. 5) notes, it is probably significant that many of the British cyberneticists, unlike most of their American counterparts, 5 had a background in neurobiology and psychiatry, making the question of physical implementation especially salient. Walter’s undergraduate degree was in physiology, and he went on to conduct early studies with the then-emerging technique of electroencephalography (EEG), now a staple of neuroscientific research. Ashby initially trained as a zoologist, but went on to work as a clinical psychiatrist and research pathologist. Some of Walter’s and Ashby’s contributions to cybernetics were made in their spare time, alongside their other responsibilities, lending them what Pickering describes as an “almost hobbyist character” (2010, p. 10). Turing, in contrast, was a mathematician and engineer whose contributions to cybernetics and cognitive science came during his working life.

In the next section, I will describe how Turing’s work relates to cybernetics and embodied cognition, before moving on in Sections 3 and 4 to look at the respective contributions of Walter and Ashby. In each case I will focus on presenting their contributions to cybernetics in terms of the physical models they designed to illustrate these concepts: Walter’s early experiments in embodied robotics and Ashby’s illustration of ultrastability with his homeostat. Finally, in Section 5 I will suggest that the British cyberneticists offer a model for a potential reconciliation between contemporary computational and anti-computational theories of mind.

2  Turing and the Ratio Club

The focus of this section will be on drawing out the (sometimes overlooked) themes of embodiment that can be found in Turing’s work. I will first describe his relationship with the British cyberneticists, before considering how both of his most famous ideas, the eponymous ‘Turing machine’ and ‘Turing test’, relate to themes of embodiment. 6 The behavior of the Turing machine, I will suggest, can be understood in terms of cybernetic feedback loops, whilst the role of embodiment in human communication reveals some of the limitations of the original Turing test.

It was agreed unanimously after the first meeting of the Ratio Club that Turing should be invited to join, and he was apparently glad to accept the invitation (Husbands and Holland, 2008, p. 115). He gave two talks at the club, one on ‘Educating a Digital Computer’ and the other on ‘The Chemical Origin of Biological Form’ (Boden, 2006, p. 222; Husbands and Holland, 2008, p. 101). He was also in regular communication with at least some members of the club, 7 and would have been familiar with the general principles of cybernetics, which were at the time fairly widespread. It is therefore interesting to consider his views on computation and embodiment in relation to those of the other British cyberneticists.

The Turing machine, originally described by Turing in a 1936 paper, is usually characterized as a mechanism consisting of a one-dimensional tape of unbounded length and an automaton that moves along the tape, reading and writing symbols according to a set of simple instructions (see e.g. Barker-Plummer, 2016, sec. 1). Described in this way, it makes sense to think of the tape itself as something like the memory of the system, and therefore as a component of the machine as a whole. This characterization contributes to an internalistic reading of Turing’s account of computation, where every aspect of the computational process is carried out within the system.

However, as Wells (1998) points out, this characterization leaves out an interesting distinction made by Turing in his original description of the machine, which was introduced by analogy with a human ‘computer’ performing mathematical calculations with a pencil and paper. Here the automaton corresponds to the human computer, whilst the tape corresponds to the paper that they are writing upon, forming what Wells describes as the “environment” of the Turing machine (ibid., 272). Viewed in this way, Turing’s account of computation is no longer so obviously internalistic. Whilst the tape could obviously be placed inside the machine (as is typically the case with the memory components of modern electronic computers), it could also remain outside, providing something like an external or distributed memory for the system (cf. Clark and Chalmers, 1998).

Wells argues that this distinction helps to overcome various issues that he identifies with the ‘classical’ (internalistic) view of computation (ibid., 276–279), and more recently Villalobos and Dewhurst (2017a, sec. 4) have adopted Wells’ analysis in order to demonstrate a potential compatibility between computational and enactivist accounts of cognition. Villalobos and Dewhurst argue that the Turing machine, understood in this way, exhibits what is known in the autopoietic/enactivist tradition as ‘functional closure’, i.e. a form of functional organization where the system’s output loops back through its environment (in this case, the tape) in order to form a part of its input (ibid., sec. 5). This notion of functional closure is closely related to the cybernetic notion of a feedback loop; essentially, the claim here is that a Turing machine operates according to a feedback loop that is generated when the symbols written on the tape are in turn read by the machine, meaning that its output at one time-step has become its input at a later time-step. The implementation of a Turing machine (a paradigmatic computational system) can thus be understood with a concept drawn from enactivism (a paradigmatically anti-computational tradition), and which ultimately originated in cybernetics.
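To make this point concrete, here is a minimal sketch (in Python; the toy program and all names below are illustrative assumptions of mine, not anything found in Turing or Wells) in which the tape is modelled as an environment external to the automaton. The machine’s only contact with the world is through reading and writing symbols, so what it writes at one time-step can loop back as its input at a later time-step:

```python
from collections import defaultdict

class Environment:
    """The tape: external to the automaton, like the human computer's paper."""
    def __init__(self, contents):
        self.cells = defaultdict(lambda: '_')  # '_' marks a blank cell
        for i, symbol in enumerate(contents):
            self.cells[i] = symbol

    def read(self, pos):
        return self.cells[pos]

    def write(self, pos, symbol):
        self.cells[pos] = symbol

class Automaton:
    """The 'head': holds only an internal state and a finite rule table."""
    def __init__(self, rules, start_state):
        self.rules = rules  # (state, symbol) -> (new_state, symbol, move)
        self.state = start_state

    def step(self, env, pos):
        symbol = env.read(pos)                        # input from environment
        self.state, out, move = self.rules[(self.state, symbol)]
        env.write(pos, out)                           # output into environment
        return pos + (1 if move == 'R' else -1)

# Toy program (unary increment): scan right over 1s, write a 1 on the
# first blank. What the head wrote earlier is what it reads later, so
# the tape closes a feedback loop through the machine's 'environment'.
rules = {
    ('scan', '1'): ('scan', '1', 'R'),
    ('scan', '_'): ('halt', '1', 'R'),
}

env, head, pos = Environment('111'), Automaton(rules, 'scan'), 0
while head.state != 'halt':
    pos = head.step(env, pos)
print(''.join(env.cells[i] for i in range(4)))  # -> '1111'
```

Nothing about the computation changes if the dictionary of cells lives outside the machine rather than inside it, which is just the point that Wells makes and that Villalobos and Dewhurst exploit.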

In “Computing Machinery and Intelligence”, Turing (1950) describes a simple test (now known as the Turing test) for determining whether a machine could pass for a human in a text-based conversation. This test is intended to replace the question ‘could a machine think?’, which Turing dismisses as “too meaningless to deserve discussion” (ibid., p. 442). It involves a human interrogator having to distinguish between two subjects, one human and the other artificial, with whom they can only communicate through the medium of written messages. The test is ‘disembodied’ in the sense that no physical details of the machine’s implementation are considered relevant to the question of whether or not it is intelligent. It does not need to directly interact with the interrogator, and as such there is no need to program into it an awareness of body language, or tone of voice, or any other aspects of face-to-face communication. It also does not require the ability to regulate its own body language or tone of voice, or any other of the myriad complex abilities necessary for synchronous face-to-face communication. This is not to criticize the test as such, but rather to indicate its limitations: it might well serve as an adequate minimal test of intelligence, but passing the Turing test in its original form would not guarantee the ability to pass as a human in more general day-to-day interactions.

In responding to what he calls “Lady Lovelace’s objection” (Turing, 1950, p. 450), the idea that computers are deterministic and thus incapable of originality, Turing considers the role that learning might play in intelligence, and especially in the generation of novel or surprising behaviors. Later on in the paper he dedicates a whole section to the consideration of building a machine that could learn, and suggests that it might be easier to create a machine simulating the mind of a child, which he considers to be much simpler than an adult’s brain. He goes on to propose that by subjecting this ‘artificial child’ to “an appropriate course of education one would obtain the adult brain” (ibid., p. 456; cf. Sterrett, 2012). It is fair to say that Turing probably underestimated the complexity of a child’s brain, but his suggestions in this section do provide an interesting precursor to contemporary approaches to artificial intelligence which focus on developing systems that are able to learn (see Colombo, this volume, for further discussion of learning algorithms). These systems are now commonly designed using a connectionist or neural network approach, which takes inspiration from the structural organization of the biological brain (see Stinson, this volume). In a 1948 report (unpublished until 1968), Turing even provides the outlines of what we might now consider an example of this approach to learning and machine intelligence (Boden, 2006, p. 180; cf. Copeland and Proudfoot, 1996).

By discussing the role of learning in intelligence, and considering an analogy with how a (human) child might learn, Turing pre-empted topics that have now re-emerged in contemporary cognitive science, and that have been of especial interest to those coming from an embodied cognition perspective (see e.g. Flusberg et al., 2010). However, despite acknowledging the role that learning might play in the creation of an intelligent machine, Turing remains fairly dismissive about the idea of embodiment. He does not think that it would be important to give the machine “artificial flesh”, or even legs or eyes (1950, p. 434). His conception of intelligence is still very much based around the idea of abstract computational processes, perhaps due to his earlier work on the Turing machine. A universal Turing machine is capable of running any (Turing-computable) program, and thus presumably he was convinced that such a machine, if programmed correctly (whether by its designer or by experience) would be able to produce any answer required by the test. This conception of intelligence is in stark contrast with the work of both Walter and Ashby, for whom (as we shall see) embodiment seems to have played a crucial role in intelligence.

3  Walter and embodied robotics

Whilst Turing’s work on machine intelligence was (for the most part) purely theoretical, Walter and Ashby were both more interested in trying to put their theories into practice. In this section, I will consider Walter’s early experiments in what we might now call ‘embodied robotics’, and how these relate to contemporary work in robotics and artificial intelligence. In foreshadowing this contemporary work, Walter’s robots provide a connection between the cybernetic notion of embodiment and what has now come to be known as ‘embodied cognition’, spanning the gulf of several intervening decades of relatively ‘disembodied’ cognitive science. The focus of this section is on Walter’s most memorable contribution to cybernetics, the creation of several artificial ‘creatures’ (or robots), whose behavior was intended to model the behavior of living organisms and the human brain.

Walter’s creations were not entirely original, forming part of an emerging tradition of robotics described by Cordeschi (2002, ch. 3) and Boden (2006, p. 221). During the 1920s and 1930s several researchers had been developing robotic systems based on essentially behaviorist principles, implementing a form of conditioned learning in simple electronic circuits (Cordeschi, 2002, pp. 82–111). For example, one such system consisted of a series of switches connecting a charged battery to several uncharged batteries. If the switch connecting the charged battery was pressed, a light would come on, representing an unconditioned stimulus. When the switch connecting the charged battery was pressed simultaneously with a switch connecting one of the uncharged batteries, the latter would itself become charged, representing a conditioned stimulus (Krueger and Hull, 1931; cf. Cordeschi, 2002, pp. 86–87). These systems were seen as implementing or demonstrating basic aspects of behaviorist theories of learning, and could be taught to navigate their way around simple mazes, in the same manner as the behaviorists’ animal subjects. Subsequently, maze-running robots (known as ‘rats’) were designed independently by Shannon (1951), Wallace (1952), and Howard (1953), each of which operated by systematically exploring a maze until they found their way through it, after which they could ‘remember’ the route that they had taken and replicate it in future attempts.
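The basic principle of these conditioning circuits can be illustrated with a toy simulation (a hedged sketch only: the class, thresholds, and charging rule below are my own illustrative assumptions rather than details of the Krueger–Hull device):

```python
class ConditioningCircuit:
    def __init__(self, n_conditioned=3, threshold=1.0, charge_rate=0.5):
        self.charges = [0.0] * n_conditioned  # charge of each 'uncharged' battery
        self.threshold = threshold
        self.charge_rate = charge_rate

    def press(self, unconditioned, conditioned=None):
        """Simulate one trial; return True if the lamp lights."""
        if unconditioned and conditioned is not None:
            # Paired presentation: charge transfers to the conditioned battery.
            self.charges[conditioned] += self.charge_rate
        if unconditioned:
            return True  # the charged battery always lights the lamp
        if conditioned is not None:
            # The conditioned switch lights the lamp only once 'trained'.
            return self.charges[conditioned] >= self.threshold
        return False

circuit = ConditioningCircuit()
print(circuit.press(False, conditioned=0))  # False: no conditioning yet
circuit.press(True, conditioned=0)          # two paired 'training' trials
circuit.press(True, conditioned=0)
print(circuit.press(False, conditioned=0))  # True: conditioned response
```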

Walter’s own robots, in contrast to these ‘rats’ and other similar creations, were not designed for any particular task (such as running a maze), and Walter considered them to be models of how the brain itself might function (Walter, 1953, p. 118; cf. Holland, 2003a, pp. 2096–2097). Walter’s ‘tortoises’, as he called them, were “small electromechanical robots” (Pickering 2010, p. 43), consisting of a chassis with one front wheel and two back wheels, two battery-powered electric motors (one to ‘drive’ the front wheel and the other to rotate or ‘steer’ it), and a light-sensitive cell attached to the vertical axis of the front wheel (Walter, 1950a; 1950b; 1951; see also Holland, 2003b). Inside the chassis was a set of basic electronic circuitry connecting the components. 8 When no light was detected, both motors would activate, causing the tortoise to move in a cycloidal path whilst scanning its environment, but as soon as a bright enough light was detected, the steering motor would deactivate, making the tortoise appear to gradually seek out the light. When the light got too bright, however, the steering motor would reactivate, and so the tortoise would tend to wander around in the area it considered to be ‘comfortably’ illuminated. A switch would also be triggered if its shell was displaced by hitting an obstacle, alternately activating the drive motor and steering motor to allow the tortoise to move away from the obstacle. Later versions of the tortoises would become less sensitive to light as their batteries ran down, allowing them to return to their ‘hutches’ (which were lit by a bright light) to recharge. Each also had an external light that would switch on when the steering motor was active, appearing ‘attractive’ to other tortoises, but which would switch off again when its own steering was locked. This caused them to engage in a playful dance, never quite making contact but also never straying too far away – or as Walter put it, “the machines cannot escape from one another, but nor can they ever consummate their ‘desire’” (Walter, 1953, pp. 128–129).
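The tortoise’s control scheme can be summarized in a few lines of Python (a schematic sketch only: the thresholds and the digital encoding are illustrative assumptions, and Walter’s actual machines were analog valve circuits rather than programs):

```python
def tortoise_step(light_level, shell_contact,
                  seek_threshold=0.3, dazzle_threshold=0.8):
    """One control cycle: return (drive_motor_on, steer_motor_on)."""
    if shell_contact:
        # Obstacle hit: the real tortoise alternated drive and steering;
        # simplified here as a return to full scanning motion.
        return True, True
    if light_level < seek_threshold:
        # No light found: drive and steer together, giving the
        # characteristic cycloidal 'scanning' gait.
        return True, True
    if light_level < dazzle_threshold:
        # Moderate light: steering locks, so the tortoise heads
        # towards the source.
        return True, False
    # Too bright: steering reactivates and the tortoise veers away,
    # circling at a 'comfortably' illuminated distance.
    return True, True
```

Even in this stripped-down form it is apparent how a pair of simple threshold rules, coupled to a changing environment, can generate the complex wandering, seeking, and retreating behavior described above.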

Furthermore, with the addition of a behaviorist learning module (‘CORA’), and by adjusting various parameters and settings, Walter aimed to use his tortoises to model brain malfunctions, or mental illnesses (Walter, 1950a, p. 45; 1951, p. 63; cf. Pickering, 2010, pp. 67–68). In one such arrangement the system could learn to associate a sound (it was equipped with a microphone) with the signals received from detecting a light (which it would usually move towards) and hitting an obstacle (which it would usually move away from). Over time it ceased to be attracted to the light, and thus was unable to seek out the “nourishment” of its recharging hutch (Walter, 1951, p. 63). For Walter, then, these robots offered the possibility of an entire theory of human neural activity and the behavior generated by it, although within his own lifetime they remained relatively simplistic and, to a contemporary eye, perhaps not especially impressive.

Although extremely simple in design, the tortoises exhibited complex and apparently purposeful behavior, thus embodying a core cybernetic principle of the emergence of (apparent) teleology from mere mechanism. Crucial here is the role of feedback, first described in the (pre-)cybernetic context by Rosenblueth, Wiener, and Bigelow (1943). 9 Rosenblueth et al. defined teleological behavior as “behaviour controlled by negative feedback” (ibid., p. 19), and claimed that this definition provided a naturalized account of all teleological (or purposeful) behavior. A system controlled by negative feedback will correct itself when diverted from its ‘goal’, and so appears to be purposefully aiming for this goal. Walter’s robots were also purposeful in this sense, as their reaction to differing levels of illumination served as a form of negative feedback (gradually approaching a distant light, before gradually retreating once it became too bright). Whether, and to what extent, such behavior is ‘genuinely’ purposeful, or even if it makes sense to ask this question, is beyond the scope of this chapter.
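A minimal worked example of behavior “controlled by negative feedback” in this sense (the gain parameter is an illustrative assumption) shows how goal-directedness falls out of a simple corrective loop:

```python
def negative_feedback(position, goal, gain=0.5, steps=10):
    """Behavior controlled by negative feedback: at every step the
    system senses its divergence from the goal and acts to reduce it."""
    trajectory = [position]
    for _ in range(steps):
        error = goal - position     # sensed divergence from the 'goal'
        position += gain * error    # corrective action opposes the error
        trajectory.append(position)
    return trajectory

print(negative_feedback(0.0, 1.0))  # values converge towards 1.0
```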

Walter’s robotic tortoises foreshadowed the modern development of embodied robotics, emphasizing the emergence of complex behaviors from extremely simple systems. Examples of this approach include Valentino Braitenberg’s (1984) book of Walter-style robotic thought experiments, 10 Rodney Brooks’ ‘subsumption architecture’ (1986, see also his 1999), Randall Beer’s evolutionary robotics (see e.g. Beer and Gallagher, 1992), Barbara Webb’s insect-inspired robots (see e.g. Webb, 1995), and Michael Arbib’s work on social robotics (see e.g. Arbib and Fellous, 2004). Brooks’ robots in particular bear a resemblance to Walter’s tortoises, consisting of deceptively simple computational architectures that interact with their environments to produce relatively complex behaviors. One of the robots built in Brooks’ lab, affectionately named ‘Herb’ (after Herbert Simon), was able to navigate around the laboratory, identifying and collecting up drinks cans and disposing of them appropriately (see Connell, 1989). Walter’s approach to robotics was ahead of its time, and in conceiving of his creations as simple models of the human nervous system he foresaw a future tradition of using artificial creations and computer simulations as a means to learn about ‘natural’ human cognition.

4  Ashby and homeostasis

In contrast with Walter’s somewhat sensationalist and media-savvy presentation of his tortoises, Ashby’s homeostat was both “less sexy [and] less media-friendly” (Boden, 2006, p. 228). Nonetheless, it provides an interesting model of how the biological concept of homeostasis might be applied to explanations of cognition. His work influenced the development of the enactivist theory of cognition, via Humberto Maturana’s autopoietic theory, which extended his analysis of homeostasis to develop the concept of ‘autopoiesis’. In this section I will focus primarily on his work on homeostasis, the application of this work in the creation of his ‘homeostat’, and the influence of his ideas on later developments in embodied cognition.

The concept of homeostasis was originally developed by Walter Cannon (see e.g. his 1929 paper), who used it to describe the mechanisms responsible for maintaining certain vital parameters in living organisms. Norbert Wiener’s co-author Arturo Rosenblueth worked at Cannon’s laboratory (Dupuy, 2000/2009, p. 45), and the general idea of homeostasis was highly influential on the development of cybernetics. Ashby expanded the concept of homeostasis to give a general definition of adaptive systems, including not only biological organisms but any system (whether artificial or natural) that displays homeostatic behavior (1952, ch. 5). His account introduces several additional concepts, such as ‘adaptation’ (the behavior a system performs to retain stability), ‘viability’ (the limits within which a system can remain stable), and ‘essential variables’ (the key variables that must be kept within those limits, beyond which the system will collapse into a new configuration). For example, an essential variable of the human body is its core temperature, which must be maintained between (approximately) 36.5 °C and 37.5 °C, beyond which the body ceases to be viable and begins to collapse. The account is intended to explain how an adaptive system is able to keep the values of its essential variables within the limits of viability, and how, if it is pushed beyond these limits, it will collapse into a new configuration that may or may not be adaptive. If the core temperature of a human body falls below 36.5 °C, it will either move to seek out a warmer environment, or begin generating and maintaining heat via various mechanisms (including shivering and blood flow constriction), or else become hypothermic and eventually die.

The mechanisms that allow a system to remain stable are essentially versions of the homeostatic mechanisms described by Cannon, but Ashby extended the account to include another set of mechanisms, those of an ‘ultrastable’ system, which is a system with the ability to adapt its own external environment and/or internal structure to preserve its overall homeostasis (1952, ch. 7). A system of this kind appears to seek out a new viable format, avoiding the threat of a greater and potentially more catastrophic collapse (a cold human seeking out a warmer environment is an example of this). An ultrastable system, Ashby argued, could be considered genuinely intelligent, as it would deploy complex behaviors allowing it to retain structural integrity (1952, ch. 9). Ashby thus developed a general theory of intelligent behavior (i.e. cognition), based around the central idea of biological homeostasis. 11
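Ashby’s distinction between ordinary stability and ultrastability can be sketched in simulation (the parameters and the random ‘step mechanism’ below are illustrative assumptions in the spirit of his account, not a reconstruction of it): first-order feedback nudges an essential variable back towards equilibrium, and if the variable nonetheless escapes its viable limits, the system randomly reconfigures its own parameters and tries again.

```python
import random

def ultrastable_run(disturbance=0.25, limits=(-1.0, 1.0), steps=100):
    """A single essential variable v under constant disturbance.
    First-order feedback (gain * v) tries to hold v steady; if v leaves
    its viable limits, the 'step mechanism' randomly rewires the gain."""
    gain = random.uniform(-1.0, 1.0)  # initial, possibly maladaptive, wiring
    v = 0.0                           # the essential variable
    reconfigurations = 0
    for _ in range(steps):
        v = v + disturbance + gain * v
        if not (limits[0] <= v <= limits[1]):
            gain = random.uniform(-1.0, 1.0)   # try a new configuration
            v = max(min(v, limits[1]), limits[0])
            reconfigurations += 1
    return gain, v, reconfigurations

# Typically ends, after a few reconfigurations, with a gain that keeps
# v within its viable limits.
print(ultrastable_run())
```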

In order to illustrate these principles Ashby designed and constructed a model system, the ‘homeostat’, which exhibited a form of ultrastability (Figure 3.1). The homeostat consisted of a set of four boxes, each containing an induction coil and a pivoting magnetic needle, the latter attached to a wire hanging down into a trough of water. Electrodes placed at the end of each trough generated a potential gradient in the water, which, via the hanging wire, affected the movement of the needle. The system was set up such that the movement of the needle also modified the current generated by the coil, which in turn influenced the gradient in the water, resulting in a feedback loop between water, needle, and coil (Ashby, 1948, pp. 380 ff.; adapted from Boden, 2006, p. 230).

Figure 3.1   The homeostat, with hanging wires visible at the top of each box. Reproduced with permission of the Estate of W. Ross Ashby (www.rossashby.info)

Each box by itself was not especially interesting, displaying a simple form of feedback, but when connected together the four boxes were able to maintain a stable (homeostatic) equilibrium, with each needle eventually turning to point towards the center of the device. This was achieved by the current generated by each box exerting an influence on each magnet, which in turn adjusted the current, and so on (Ashby, 1952, p. 96). Furthermore, when the value of any one of the four currents (the system’s ‘essential variables’) exceeded a certain level, a switch would be flipped that “changed the quantities defining the relevant feedback link” (Boden, 2006, p. 231), creating a new configuration and preventing the system from entering an uncontrollable feedback loop. By this ‘second-order’ mechanism the system was able to achieve ultrastability, as it could avoid exceeding its viable limits by adopting a new form of organization.
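The behavior just described can be illustrated with a toy numerical sketch of four coupled units (the linear coupling, viable limits, and uniselector-style random rewiring are all modelling assumptions of mine; the real homeostat was an electromechanical device):

```python
import random

N, LIMIT = 4, 1.0
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
x = [random.uniform(-0.5, 0.5) for _ in range(N)]  # needle deflections

for step in range(2000):
    # Each needle relaxes towards a weighted sum of all four outputs,
    # standing in for the coil-water-needle feedback described above.
    x = [0.9 * xi + 0.1 * sum(w * xj for w, xj in zip(row, x))
         for xi, row in zip(x, weights)]
    for i in range(N):
        if abs(x[i]) > LIMIT:  # essential variable out of bounds
            # Uniselector flips: randomly rewire this unit's inputs.
            weights[i] = [random.uniform(-1, 1) for _ in range(N)]
            x[i] = max(min(x[i], LIMIT), -LIMIT)

print([round(xi, 3) for xi in x])  # typically settles near zero deflection
```

Unstable random wirings drive the deflections past their limits and trigger rewiring, until the network happens upon a configuration in which all four units settle, which is precisely the random-search character of the homeostat’s ‘adaptation’ discussed below.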

To Ashby this system presented a simplified model of the basic homeostatic principles and dynamics that he thought were necessary (and perhaps sufficient) for both life and cognition (which for him were continuous with one another). He presented the homeostat at the ninth Macy conference, where it proved to be somewhat controversial. His claim that the system exhibited a form of intelligence was met with incredulity by many of the attendees (Boden, 2006, p. 232), and several of them pointed out that its supposedly ‘adaptive’ behavior relied essentially on a random search (Boden, 2006, p. 235; Dupuy, 2000/2009, p. 150). Ashby was in fact willing to accept the latter charge, replying “I don’t know of any other way that [a] mechanism can do it [i.e. exhibit intelligent behavior]” (ibid.). In a sense he was just taking the cybernetic vision of ‘mind as machine’ to its logical conclusion, fully automating (and thus ‘demystifying’) apparently purposeful behavior. Although his reduction of intelligent behavior to random search was extreme, it has in common with other cybernetic approaches the idea that apparently complex behaviors can be explained in terms of simple (and automatable) procedures. Turing’s theory of computation also shares this approach, as it explains how to carry out complex mathematical operations in terms of a few basic procedures.

Although its immediate influence was somewhat limited, Ashby’s work on homeostasis has gone on to inspire many subsequent theorists in cognitive science. Humberto Maturana’s autopoietic theory of cognition draws on Ashby’s work (Maturana, 1970; Maturana and Varela, 1980), 12 and this theory subsequently provided the foundation for the contemporary enactivist tradition (Varela, Thompson, and Rosch 1991/2017; see Ward, Silverman, and Villalobos, 2017 for an overview; see also Hutto and Myin, this volume). Enactivism emerged out of what became known as ‘second-order’ cybernetics (see Froese, 2010; 2011), which aimed to recursively apply cybernetic insights to the analysis of the discipline itself, by acknowledging the role of the observer in scientific experimentation. The movement included Maturana, along with others, such as Heinz von Foerster, who continued applying cybernetic principles after the original movement collapsed.

5  (Dis-)Embodiment in cybernetics and cognitive science

The original cybernetics movement, and British cybernetics in particular, emphasized the role of the body and environment in explanations of cognition. Walter’s tortoises were early experiments in embodied robotics, simple systems that exhibited complex behaviors due to their interactions with their environments. Ashby’s homeostat applied biological principles in an attempt to model cognitive processes, which he considered to be essentially continuous with biological ones. Even Turing’s work on computational intelligence contained hints of embodiment, both in the way his Turing machine can be interpreted as interacting with an environment, and in the suggestive comments he makes about the relationship between learning and intelligence. Each of these approaches has since been rediscovered by researchers working in the broad tradition of embodied cognition, namely in embodied robotics, enactivism, and connectionism. The British cybernetics tradition also exemplifies how to combine embodied and computational approaches to the study of mind and cognition, and thus how we might be able to reconcile these approaches in the modern day.

Walter and Ashby were not the only British cyberneticists of note, although they were probably the most influential and well-known during the era of the original cybernetics (1946–1953). Other British cyberneticists 13 included Gregory Bateson (1904–1980), an anthropologist who developed the application of systems theory to social groups, and who attended many of the Macy conferences; R.D. Laing (1927–1989), a psychiatrist best known for his association with the anti-psychiatry movement; Stafford Beer (1926–2002), who founded the field of ‘management cybernetics’ and was involved in an attempt to cyberneticize the Chilean economy (see Medina, 2011); and Gordon Pask (1928–1996), an early adopter of cybernetics who contributed to many different areas of research over the years, including educational psychology, systems theory, and human–computer interaction. 14 The last two continued their work after cybernetics had ceased to be popular, and thus provide a continuity of sorts between the classical era and the modern rediscovery of cybernetic principles.

Towards the end of the 1950s the cybernetics movement began to splinter, divided roughly between those who focused more on developing abstract computational models of how cognitive tasks might be solved, and those who focused more on physical implementations of cognitive systems, taking their lead from specific details of the biological brain. Boden (2006, p. 234) calls the latter group the ‘cyberneticists’, although both groups had their origins in the cybernetics movement. These two groups split not so much due to any deep-seated ideological or theoretical disagreements, but rather as a result of the inherent instability of the cybernetics movement, which cut across (too) many disciplinary boundaries and thus struggled to find a permanent home within any one institution or department.

The computationalist tradition that subsequently became dominant, and remained so for the next few decades, played down the importance of body and world, focusing primarily on abstract models of cognition (Aizawa, this volume; cf. Boden, 2006, p. 236). Insofar as this approach was able to describe general computational principles that might govern cognition it was relatively successful, but since the late 1980s embodied approaches have been making a resurgence, most significantly in embodied robotics and the various versions of enactivism. Both draw on the cybernetic principles and ideals found in the works of Walter and Ashby, emphasizing in particular the importance of the interaction between a system and its environment, and the emergence of cognition out of basic biological processes such as homeostasis.

As Miłkowski (this volume) argues, embodied explanations of cognition need not be at odds with computational ones, as computational principles can be applied to the design of embodied systems. Here it is important to distinguish between the trivial sense in which every computational system must be implemented in some physical medium, and the more interesting sense in which ‘embodied’ approaches think that specific details of physical implementation might play a role in our explanations of cognition. The classical view of computation accepts embodiment in the trivial sense, but denies that the specific details of physical implementation are essential to our explanations of cognition. Nonetheless, there is scope here for a hybrid approach, where computational theories of mind are implemented in ways that take advantage of the specific details of their embodiment, without necessarily conceding that these details are essential for cognition. It might be that certain kinds of implementation allow for aspects of a task to be offloaded onto the body or environment, enabling an efficient solution to a problem, even if there exists a distinct, more computationally intensive solution that could in principle remain implementation-neutral. Given economic and pragmatic constraints, we should prefer the former solution, but this is not to say that the specific details of embodiment are strictly necessary for cognition.

The cybernetics movement in general, and the British cybernetics tradition in particular, offers a useful model for how to integrate computational and embodied approaches. On the one hand, Turing’s classical formulation of computation allows for the environment to play a role, whilst on the other hand Walter and Ashby’s embodied models of living systems can be implemented computationally, as contemporary work in embodied robotics, dynamical systems research, and connectionism demonstrates. Furthermore, even avowedly anti-computationalist theories such as enactivism can be traced back to the same cybernetic roots as computationalism itself, suggesting that there need not be any fundamental incompatibility between these now apparently divergent approaches (see Villalobos and Dewhurst, 2017b). The enactivist rejection of computationalism relies on the premise that computation requires (semantic) representation, and whilst this has historically been a popular position, it has recently been challenged by a variety of ‘mechanistic’ accounts of computation (see e.g. Piccinini, 2015). Such accounts may offer a route to reconciliation between enactivism and computationalism, perhaps based around a revival of cybernetic principles.

Conclusion

In this chapter I have presented themes of embodiment in British cybernetics, focusing on the work of two central British cyberneticists, Grey Walter and Ross Ashby. Walter’s tortoises represent an early exploration of ideas now described as ‘embodied robotics’, and Ashby’s work on homeostasis inspired the enactivist tradition in embodied cognition. I have also considered themes of embodiment in Turing’s work, as despite not typically being considered part of the cybernetic movement, Turing was a contemporary of the British cyberneticists, and attended and presented at their main venue for discussion, the Ratio Club. I suggested that the Turing machine could be considered to interact with an environment of sorts, and that Turing’s work on machine learning predicted certain ideas that have now become relatively commonplace. The milieu of British cybernetics therefore presents a model for future work on embodiment and computation, offering the possibility of reconciling two sometimes oppositional traditions.

Acknowledgments

I am grateful to Mark Sprevak, Matteo Colombo, and Owen Holland for providing extensive feedback on earlier drafts of this chapter, and to Mario Villalobos for originally introducing me to cybernetics and autopoietic theory.

Notes

1. The term ‘cybernetics’ was coined by Norbert Wiener in his 1948 book, and formed part of the title of the later Macy conferences (from the seventh conference onwards).

2. See Wiener (1948), von Neumann (1945; 1952/2000), and McCulloch and Pitts (1943); Rav (2002) provides an overview of the contributions of all three, and describes some of their influences on later scientific developments; Heims (1980) presents a comparative biography of Wiener and von Neumann.

3. In a journal entry dated September 20, 1949, Ashby refers to the formation of a “cybernetics group” (Ashby, 1949), and the club seems to have been comfortable using this terminology to refer to themselves.

4. Von Foerster was present at several of the Macy conferences, and edited the proceedings of the latter five conferences. He went on to become highly influential in ‘second-order’ cybernetics.

5. McCulloch is a notable exception, but neither Wiener nor von Neumann had any background in neurophysiology. Wiener had initially wanted to train as a biologist, but was unable to do so due to his poor eyesight.

6. Turing’s later work on ‘morphogenesis’, which I do not have space to discuss here, also raises themes of embodiment, and marks an early exploration of what we would now call ‘artificial life’ (Turing, 1952; Proudfoot and Copeland, this volume).

7. In 1949 Turing wrote a letter to Ashby, suggesting that rather than building a physical model of the nervous system, he could instead simulate one on the ACE machine (an early stored-program computer) that Turing was then working on at the National Physical Laboratory (Turing, 1949).

8. Like much of the electronic equipment used by the cyberneticists, this circuitry was analog, meaning that unlike most contemporary (digital) computers it was sensitive to continuous rather than discrete signals.

9. The term ‘cybernetics’ was not actually coined by Wiener until 1948, but this earlier paper is typically considered part of the cybernetic canon.

10. Braitenberg never mentions Walter by name, but was clearly influenced by his work.

11. Walter’s tortoise, ‘seeking’ to maintain its position at a certain distance from a source of illumination, could also be seen as embodying a kind of homeostatic mechanism.

12. For further discussion of the relationship between Ashby’s work and Maturana’s autopoietic theory, see Froese and Stewart (2010).

13. Although he died before the movement could really get started, Kenneth Craik (1914–1945) is often associated with the British cybernetics movement. Craik was a psychologist and philosopher whose work pre-empted many of the themes in what would become cybernetics. The Ratio Club was almost named in his honor (Boden, 2006, p. 222), and Walter said of the American cyberneticists that “These people are thinking on very much the same lines as Kenneth Craik did, but with much less sparkle and humour” (Holland, 2003a, p. 2094).

14. Each is discussed in more detail by Pickering (2010).

References

Abrahams, T. (2016) Rebel Genius: Warren S. McCulloch’s Transdisciplinary Life in Science. Cambridge, MA: MIT Press.
Arbib, M. and Fellous, J.-M. (2004) ‘Emotions: From Brain to Robot’, Trends in Cognitive Sciences, 8 (12), pp. 554–561.
Ashby, W.R. (1948) ‘Design for a Brain’, Electronic Engineering, 20, pp. 379–383.
Ashby, W.R. (1949) Ashby’s Journal, 1928–1972, p. 2624. The W. Ross Ashby Digital Archive. Available at: www.rossashby.info/journal.
Ashby, W.R. (1952/1960) Design for a Brain. New York, NY: John Wiley & Sons.
Ashby, W.R. (1956) An Introduction to Cybernetics. New York, NY: J. Wiley.
Barker-Plummer, D. (2016) ‘Turing Machines’, The Stanford Encyclopedia of Philosophy, December 21, Zalta, E.N. (ed.), Available at: https://plato.stanford.edu/archives/win2016/entries/turing-machine/ (Accessed: 14 February 2018 ).
Bates, J. (1949) Letter to Grey Walter, July 27. Unpublished Papers and Records for the Ratio Club. J.A.V. Bates Archive, The Wellcome Library for the History and Understanding of Medicine, London.
Beer, R. and Gallagher, J. (1992), ‘Evolving Dynamical Neural Networks for Adaptive Behavior’, Adaptive Behavior, 1 (1), pp. 91–122.
Boden, M. (2006) Mind as Machine. Oxford: Oxford University Press.
Braitenberg, V. (1984) Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.
Brooks, R. (1986) ‘A Robust Layered Control System for a Mobile Robot’, IEEE Journal on Robotics and Automation, 2 (1), pp. 14–23.
Brooks, R. (1999) Cambrian Intelligence. Cambridge, MA: MIT Press.
Cannon, W. (1929) ‘Organization for Physiological Homeostasis’, Physiological Reviews, 9 (3), pp. 399–431.
Clark, A. and Chalmers, D. (1998) ‘The Extended Mind’, Analysis, 58 (1), pp. 7–19.
Connell, J. (1989) A Colony Architecture for an Artificial Creature. MIT Artificial Intelligence Laboratory Technical Report 1151.
Copeland, J. and Proudfoot, D. (1996) ‘On Alan Turing’s Anticipation of Connectionism’, Synthese, 108, pp. 361–377.
Cordeschi, R. (2002) The Discovery of the Artificial. Dordrecht, Boston, MA, and London: Kluwer Academic Publishers.
Dupuy, J.-P. (2000/2009) On the Origins of Cognitive Science. Translated by M.B. DeBevoise. Cambridge, MA: MIT Press.
Edwards, P. (1996) The Closed World. Cambridge, MA: MIT Press.
Flusberg, S. et al. (2010) ‘A Connectionist Approach to Embodied Conceptual Metaphor’, Frontiers in Psychology, 1, p. 197.
Froese, T. (2010) ‘From Cybernetics to Second-Order Cybernetics’, Constructivist Foundations, 5 (2), pp. 75–85.
Froese, T. (2011) ‘From Second-Order Cybernetics to Enactive Cognitive Science’, Systems Research and Behavioural Science, 28 (6), pp. 631–645.
Froese, T. and Stewart, J. (2010) ‘Life after Ashby: Ultrastability and the Autopoietic Foundations of Biological Autonomy’, Cybernetics and Human Knowing, 17 (4), pp. 7–50.
Heims, S.J. (1980) John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death. Cambridge, MA: MIT Press.
Heims, S.J. (1993) The Cybernetics Group 1946–1953: Constructing a Social Science for Postwar America. Cambridge, MA: MIT Press.
Holland, O. (2003a) ‘Exploration and High Adventure: The Legacy of Grey Walter’, Philosophical Transactions of the Royal Society A, 361, pp. 2085–2121.
Holland, O. (2003b) ‘The First Biologically Inspired Robots’, Robotica, 21, pp. 351–363.
Howard, I.P. (1953) ‘A Note on the Design of an Electro-mechanical Maze Runner’, Durham Research Review, 4, pp. 54–61.
Husbands, P. and Holland, O. (2008) ‘The Ratio Club: A Hub of British Cybernetics’, in Husbands, P. , Holland, O. , and Wheeler, M. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 91–148.
Husbands, P. and Holland, O. (2012) ‘Warren McCulloch and the British Cyberneticians’, Interdisciplinary Science Reviews, 37 (3), pp. 237–253.
Krueger, R.G. and Hull, C.L. (1931) ‘An Electro-chemical Parallel to the Conditioned Reflex’, Journal of General Psychology, 5, pp. 262–269.
Maturana, H. (1970) Biology of Cognition. Urbana, IL: University of Illinois Press.
Maturana, H. and Varela, F. (1980) Autopoiesis and Cognition: The Realisation of the Living. Dordrecht: D. Reidel Publishing Company.
McCulloch, W. and Pitts, W. (1943) ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, Bulletin of Mathematical Biophysics, 5, pp. 115–133.
Medina, E. (2011) Cybernetic Revolutionaries. Cambridge, MA: MIT Press.
Piccinini, G. (2015) Physical Computation: A Mechanistic Account. Oxford: Oxford University Press.
Pickering, A. (2010) The Cybernetic Brain. Chicago, IL: University of Chicago Press.
Rav, Y. (2002) ‘Perspectives on the History of the Cybernetics Movement’, Cybernetics and Systems, 33 (8), pp. 779–804.
Rosenblueth, A. , Wiener, N. , and Bigelow, J. (1943) ‘Behavior, Purpose and Teleology’, Philosophy of Science, 10 (1), pp. 18–24.
Shannon, C. (1951) ‘Presentation of a Maze Solving Machine’, in von Foerster, H. (ed.) Cybernetics: Transactions of the Eighth Conference, pp. 173–180.
Shapiro, L. (2014) The Routledge Handbook of Embodied Cognition. London: Routledge.
Sterrett, S.G. (2012) ‘Bringing up Turing’s “Child-Machine”’, in Cooper, S. , Dawar, A. , and Löwe, B. (eds.) How the World Computes. Dordrecht: Springer, pp. 703–713.
Turing, A. (1936) ‘On Computable Numbers, with an Application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, 2 (42), pp. 230–265.
Turing, A. (1948) ‘Intelligent Machinery’, in Copeland, B.J. (ed.) The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma. Oxford: Clarendon Press, pp. 395–432.
Turing, A. (1949) Letter to W. Ross Ashby, November 19. The W. Ross Ashby Digital Archive. Available at: www.rossashby.info/letters/turing.html.
Turing, A. (1950) ‘Computing Machinery and Intelligence’, Mind, 59, pp. 433–460.
Turing, A. (1952) ‘The Chemical Basis of Morphogenesis’, Philosophical Transactions of the Royal Society B, 237 (641), pp. 37–72.
Varela, F. , Thompson, E. , and Rosch, E. (1991/2017) The Embodied Mind. Cambridge, MA: MIT Press.
Villalobos, M. and Dewhurst, J. (2017a) ‘Enactive Autonomy in Computational Systems’, Synthese. Available at: https://doi.org/10.1007/s11229-017-1386-z.
Villalobos, M. and Dewhurst, J. (2017b) ‘Why Post-cognitivism Does Not (Necessarily) Entail Anti-computationalism’, Adaptive Behavior, 25 (3), pp. 117–128.
von Neumann, J. (1945) ‘First Draft Report on the EDVAC’, report prepared for the U.S. Army Ordnance Department under contract W-670-ORD-4926, in Stern, N. (1981) From ENIAC to UNIVAC. Bedford, MA: Digital Press, pp. 177–246.
von Neumann, J. (1952/2000) The Computer and the Brain. New Haven, CT and London: Yale University Press.
Wallace, R.A. (1952) ‘The Maze-Solving Computer’, Proceedings of the ACM, pp. 119–125.
Walter, W.G. (1950a) ‘An Imitation of Life’, Scientific American, 182, pp. 42–45.
Walter, W.G. (1950b) ‘An Electromechanical Animal’, Dialectica, 4 (3), pp. 206–213.
Walter, W.G. (1951) ‘A Machine that Learns’, Scientific American, 185, pp. 60–63.
Walter, W.G. (1953) The Living Brain. London: Duckworth.
Ward, D. , Silverman, D. , and Villalobos, M. (2017) ‘Introduction: The Varieties of Enactivism’, Topoi, 36 (3), pp. 365–375.
Webb, B. (1995) ‘Using Robots to Model Animals: A Cricket Test’, Robotics and Autonomous Systems, 16 (2–4), pp. 117–134.
Wells, A.J. (1998) ‘Turing’s Analysis of Computation and Theories of Cognitive Architecture’, Cognitive Science, 22 (3), pp. 269–294.
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.