Explaining the Computational Mind

I’m writing a book on computational explanation in cognitive science under the tentative title Explaining the Computational Mind. The book is on contract with MIT Press.

A good summary of the main argument of the book is contained in the syllabus of my lecture at Warsaw University (English programme in the Institute of Philosophy).

If you are interested in reading drafts of the chapters, simply ask me for the ID and password for the files and presentations below. Here is my contact form.

Chapter 1: Computation in cognitive science: four case studies and a funeral
I start by conducting four case studies drawn from (i) traditional computational simulation, (ii) connectionist modeling, (iii) computational neuroscience, and (iv) radical embodied robotics. Although the theoretical stances involved vary considerably, I contend that they share enough to assert that it is too early for a funeral of computation in cognitive science, even if some proponents of radical embodied and enactive approaches have pronounced it dead. The goal of the chapter is to show how computationalism differs from cognitivism, and to provide clear examples that will guide the rational reconstruction undertaken in subsequent chapters. I also argue that the notions of information processing and computation, as used in cognitive science, are equivalent. Whenever information processing is a key component in explanation, the explanation is computational.

Chapter 2. Computational processes
In this chapter, I analyze what it is for a computation to be realized by a physical process. I offer several criteria for deciding when a particular process is best described as computational, ranging from general organizational principles to detailed considerations about the models of computation used in a given computational description. By “computation model” I mean a formalization of the notion of computation, standard or non-standard, such as a universal Turing machine, a finite-state automaton, a membrane analog computer, a quantum digital computer, or a perceptron. I endorse a broadly mechanistic account that encompasses both traditional and non-standard models (I defend transparent computationalism, as defined by Ron Chrisley).
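To illustrate what a “computation model” in the above sense looks like once stripped to its abstract structure, here is a minimal sketch of my own (not drawn from the book): a deterministic finite-state automaton, one of the formalizations listed above, reduced to a set of states, an input alphabet, a transition function, a start state, and a set of accepting states.

```python
# Illustrative sketch: a deterministic finite-state automaton (DFA) as a
# bare formalization of computation. Nothing here is physical; the model
# is just states plus a transition function over input symbols.

def run_dfa(transitions, start, accepting, inputs):
    """Return True if the DFA accepts the input sequence."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# A toy DFA accepting binary strings containing an even number of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(transitions, "even", {"even"}, "1101"))  # three 1s -> False
print(run_dfa(transitions, "even", {"even"}, "1100"))  # two 1s   -> True
```

The point of the sketch is that the model by itself settles nothing about physical realization: the mechanistic criteria discussed in this chapter are what decide whether a given physical process counts as implementing such a structure.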
Although my construal resembles, in some respects, the views of Gualtiero Piccinini, it takes interlevel relations into consideration to a greater extent and provides criteria for delineating computational systems. In particular, I argue that the lowest level of the computational mechanism must be realized (or bottomed out, in the sense of Machamer, Darden & Craver) by non-computational component parts. Robust computational mechanisms are also realized by relatively isolated systems, and a significant change of causal density between the system and its environment is required to delineate it effectively. This also answers some antirealist objections to computation voiced by John Searle and Hilary Putnam, who argued that there is no way to distinguish computational processes from non-computational ones because there is no effective way to individuate them. The criteria I propose therefore include the requirements that computational systems be organized so as to be relatively isolated and cohesive, distinct from their environment as reflected in the frequency of interaction among their parts, and that their lowest level of organization correspond properly, at a fine grain, to the abstract structure of the dynamics of a given computational model.

Chapter 3. Computational explanation
This chapter supplies additional epistemic criteria that are useful in deciding whether a process is best explained as computational, such as explanatory and predictive value (which must be greater than that of a merely physical explanation), simplicity and parsimony, as well as the constancy and coherence of computational ascriptions. As regards the scope of computational explanation, I argue that all processes that crucially involve information processing should be explained computationally, and that includes mental processes.
I distinguish between a full explanation of a cognitive system and a mere computational simulation of a cognitive task; in this context, Marr’s levels of explanation will be discussed but not wholly endorsed, as they do not apply to all possible models of computation. Computer simulation may be considered a highly idealized computational explanation of the cognitive task in question, in which the lower-level details are completely left out. It is therefore not explanatory of any of the properties of the phenomenon that depend on the lower-level details of the mechanism that realizes it. To explain such properties, the computational account must reflect, to some extent, the internal organization of the lower-level mechanism. In most cases, for the account to be practically useful in science, it must nonetheless abstract from some details and thus remain an idealization.
Despite espousing a mechanistic account of computational explanation (along the lines of Carl Craver and Bill Bechtel), I will argue that it is not an alternative to the covering-law conception of explanation (where the algorithm is identified with the law) but a species of it, one that provides additional criteria guaranteeing a reliable connection between the explanans and the explanandum, thereby escaping the standard objections to the traditional CL model. I claim that mechanistic explanations support counterfactuals and have predictive power in principle, but may turn out to be merely explanatory, rather than predictive in practice, given sufficiently complex boundary conditions.

Chapter 4. Computation and representation
My aim in this chapter is to show the role of representation in computational explanations in cognitive science. This is a highly debated and controversial topic, and for very clear reasons: for many, the computational theory of mind was vindicated as a theory that makes a place for intentionality, or representation, in a physical world. Jerry Fodor vigorously defended the representational character of computation, aptly summarized in his slogan “no computation without representation”. Yet this is exactly what I denied in chapter 2. There are computations that have nothing to do with representation as it is commonly understood, namely as something that has reference and content. A Turing machine that just halts and does nothing else is a fine example of a non-representational computation; a logic gate is another.
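The logic-gate example can be made concrete in a few lines. The following is an illustrative sketch of my own, not taken from the book: an AND gate computes a perfectly well-defined function over its inputs, yet nothing internal to that computation determines whether its bits refer to anything or carry any content.

```python
# Illustrative sketch: an AND gate as a minimal, non-representational
# computation. The gate maps input states to an output state by a fixed
# rule; whether those states "stand for" anything is settled outside
# the computation itself.

def and_gate(a: bool, b: bool) -> bool:
    return a and b

# The same mapping written out explicitly as the gate's truth table.
truth_table = {(a, b): and_gate(a, b)
               for a in (False, True) for b in (False, True)}

print(truth_table[(True, True)])   # True
print(truth_table[(True, False)])  # False
```

The gate satisfies any reasonable criterion of computing (it implements a well-defined transition function), while the question of reference and content simply does not arise for it.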
Representation has a role to play in cognitive science, and computational models make use of representation. Cognitive systems use information to get by in the world; they process information, and the information they manipulate becomes representation under special conditions. The prevalent view of explanatory practices in cognitive science is that they say how input information gets converted into output information.
So my task is first to show the motivations behind the claim that there is no computation without representation, and then to dispel the confusion surrounding the notion of the symbol, which has sometimes been used interchangeably with that of representation. I will also point out some fatal flaws in the classical views of representation. Next, I will sketch an alternative conception of representation that will be useful for analyzing representational explanations. As in the previous chapter, I will rely on the mechanistic framework of explanation, so my account will be an abstract specification of representational mechanisms. After this rather conceptual discussion, I will turn to an analysis of the four cases introduced earlier. Limitations of space do not permit a full discussion of all the corollaries of my view, and I do not wish my model of representational mechanisms to be a competitor to full-featured theories of representation. It is designed to capture the key organizational requirements for representation, not to explain everything in detail.
The upshot of my discussion will be that there is something to the Fodorian slogan, provided you read it backwards: no representation without computation, but surely a lot of computation without representation.

Chapter 5. Limits of Computational Explanation

The purpose of this chapter is to discuss several limits of computational explanation. For example, it is impossible to computationally explain the physical makeup of a cognitive system, or to completely account for its actual performance of cognitive tasks. The performance of a computational mechanism depends, on the one hand, on the actual algorithm being realized, specified in a fine-grained fashion, and, on the other, on the physical properties of the mechanism, which cannot be explained purely computationally. Some properties of the environment, or cognitive niche, may also make some algorithms more feasible than others, though these properties likewise escape computational explanation.

To answer the question of what the limits of computational explanation are, one needs to know what this kind of explanation involves. In my mechanistic account, as I will show below, only one level of the mechanism, the so-called isolated level, is explained in computational terms. The rest of the mechanism is not computational, and indeed cannot be merely computational according to the methodological norms of this kind of explanation. In this light, numerous objections raised against computational accounts of cognition will turn out to be correct, yet not as serious as presupposed, since a purely mechanistic account of cognition is not undermined at all. It simply leads naturally to a certain explanatory pluralism. This is especially true of the representational mechanisms introduced in Chapter 4.

It will also be instructive to review the four case studies in light of the objections that have been raised against them, especially the objections to their programmatic assumptions. The classical models of Newell & Simon, as well as the early connectionist models of Rumelhart and McClelland, have been extensively analyzed and criticized in the literature. One could also raise doubts about the Neural Engineering Framework or the biorobotic models proposed by Barbara Webb.

After reviewing some objections that can be easily accommodated within the mechanistic framework, I will also briefly review some of the specific charges against cognitive science and AI. Some of them, as I will point out, are mere red herrings in the debate over the explanatory practices of cognitive science. At the same time, drawing on examples from radical embodied cognitive science, I show that some purportedly non-computational mechanisms involve at least some computational processes. But I do not treat them as exclusively computational; they are multidimensional, which any descriptively correct theory of cognitive science needs to acknowledge.



Chapter 1

Figure 1: The structure of the Rumelhart & McClelland model.

Figure 2: Rats are able to return directly to the starting point (A) after exploring the environment in search of food (B). The return path is marked with a dotted line.

Figure 3: Female crickets localize males with their specially structured ears. The sound is out of phase on the side of the cricket closer to the male, and in phase on the other side.

Chapter 4

Figure 1: Watt Governor.
