What Kind of Explanation, If Any,
Is a Connectionist Net?

Christopher D. Green
Department of Psychology
York University

christo@yorku.ca
John Vervaeke
Department of Philosophy
University of Toronto

jvervaek@chass.utoronto.ca

(1996). In C. W. Tolman, F. Cherry, R. van Hezewijk, & I. Lubek (Eds.), Problems of theoretical psychology (pp. 201-208). North York, ON: Captus.


Summary

Connectionist models of cognition are all the rage these days. They are said to provide better explanations than traditional symbolic computational models in a wide array of cognitive areas, from perception to memory to language to reasoning to motor action. But what does it actually mean to say that they "explain" cognition at all? In what sense do the dozens of nodes and hundreds of connections in a typical connectionist network explain anything? It is the purpose of this paper to explore this question in light of traditional accounts of what it is to be an explanation.

We start with an impossibly brief review of some historically important theories of explanation. We then discuss several currently-popular approaches to the question of how connectionist models explain cognition. Third, we describe a theory of causation by philosopher Stephen Yablo that solves some of the problems on which we think many accounts of connectionist explanation founder. Finally, we apply Yablo's theory to these accounts, and show how several important issues surrounding them seem to disappear into thin air in its presence.


1. Traditional Views of Explanation

From the time of Aristotle until the beginning of the 19th century, it was widely believed that giving an explanation of an event meant giving an account of its causes. The 19th century positivists, however, rejected talk of causes and explanations as being so much "metaphysical" nonsense. Much of this attitude was passed on to the Logical Positivists of the Vienna Circle. Rudolf Carnap rejected the explanation of events in favor of the explication of terms and sentences, initially by a process of explicit definition and, later, via the process he dubbed "reduction." Neither of these projects was destined to succeed and, partly as a consequence, explanation began to creep back into the language of philosophy of science (Feigl, 1945; Braithwaite, 1946; Hospers, 1946; Miller, 1946, 1947), finally exploding onto the scene with the publication of Hempel and Oppenheim's (1948) "Studies in the logic of explanation." It was in this paper that the deductive-nomological (D-N) approach to explanation was first fully articulated. In Hempel's view, an observation is explained when its description can be made the conclusion of a deductive argument that has at least one "lawlike" universal generalization and one observation statement among its premises. By the 1970s, however, various anomalies with respect to the D-N tradition and its variants had accumulated to the point where many philosophers were beginning to look to other approaches to explanation, and the traditional view that explanations must include references to causes was an obvious alternative.

In addition to the inference-based D-N approach and the resurgent causal approach to explanation, there was a third tradition developing as well; one which considered pragmatic, rather than logical or causal, factors to be the crucial elements of explanation. This approach began with Braithwaite's (1953) comment that "any proper answer to a 'Why?' question may be said to be an explanation of a sort" (p. 316), but it was Bromberger (1962, 1966) who first deeply explored the pragmatics of explanation. For pragmatists, theories do not themselves explain anything. Only speakers can explain, and they do so by using theories. Thus, rather than describing the features of explanatory arguments, as had Hempel, Bromberger concentrated on the kind of relation that must hold between two speakers in order for one to be said to explain something to the other. In more recent years, Bas van Fraassen (1980) has led the charge toward a more pragmatic understanding of explanation.

2. Explanation in Cognitive Science

Advocates of traditional symbolic computationalism argue that mental causation is explained by instantiating each basic element of thought in an individual physical entity that can causally interact with other such entities. Thus, each interaction between tokens of thought bears two interpretations; one at the physico-causal level (e.g., the physical event, "token A 'bumped' token B"), and one at the symbolic level (e.g., the thought, "If A then B"). That is to say, the physical tokens that instantiate the thoughts represent the semantic contents, or meanings, of the thoughts as well (e.g., McCulloch & Pitts, 1943; Turing, 1950; Newell & Simon, 1976). This hypothesized parallelism between physical and symbolic events, problematic as it might be, is still the best explanation we have of how mental events can be both causal and intentional at the same time.

Contemporary connectionists, on the other hand, reject this framework outright for a number of reasons. A primary one is what might be called the "fragility" of symbolic systems. In the face of partial information, or when the processing system is even slightly incomplete, symbolic systems tend to break down entirely. Connectionist networks, by contrast, do not typically suffer catastrophic crashes when faced with slightly incomplete information, or slightly less than optimal organization in the system itself. Instead, they exhibit what is widely known as "graceful degradation." That is, they give somewhat worse, but often recognizable outputs. As the quality of information or internal structure degrades, so does the quality of the output, but for a wide range of circumstances, they are able to function quite adequately. What is more, they are able to operate this way precisely because they contain distributed representations.
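The point can be made concrete with a toy sketch of our own (not a model from the connectionist literature): a small random network with a distributed hidden layer keeps producing recognizable output as more of its hidden units are "lesioned," rather than failing outright the way a brittle symbolic lookup would.

```python
import numpy as np

# A toy illustration (our own, not any published model) of graceful degradation:
# a two-layer net whose hidden layer carries a distributed representation.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 32, 8
W1 = rng.normal(0, 0.5, (n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hidden))  # hidden -> output weights

def forward(x, dead_units=()):
    """Run the net, optionally 'lesioning' some hidden units."""
    h = np.tanh(W1 @ x)
    h[list(dead_units)] = 0.0                # damage part of the distributed code
    return np.tanh(W2 @ h)

x = rng.normal(0, 1, n_in)
intact = forward(x)
for n_dead in (0, 4, 8, 16):
    damaged = forward(x, dead_units=range(n_dead))
    # The output typically drifts away from the intact response by degrees as
    # damage increases, rather than collapsing all at once.
    print(n_dead, round(float(np.linalg.norm(intact - damaged)), 3))
```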

Although this solves the technical problem of how to get computer programs to work better with whatever information and internal organization they have got, it completely ignores the central philosophical problems of intentionality and mental causation. In fact, many connectionist researchers have been tempted to simply reject these age-old concerns out of hand as being pseudo-problems, born of an incorrect "metaphysical" (in the pejorative sense) view of what the mind is and what properties it actually has; if questions of intentionality and mental causation are so vexatious, the thinking goes, perhaps the mind is not, after all, intentional and perhaps the content of its ideas, even if there be any, has no real role in the causal chain of things.

Such a counterintuitive suggestion requires some explication, and connectionists have offered some appealing metaphors. One popular analogy is that connectionist nets bear the same relation to symbolic models as accounts of quantum activity do to Newtonian explanations of "middle sized" physical phenomena; viz., the aggregate microlevel activity can be approximately captured by the macrolevel descriptions, but one must drop to a level of description below this if one wants the "real" story (McClelland, Rumelhart, and Hinton, 1986; Smolensky, 1988).

Fodor & Pylyshyn (1988) argue that the move toward connectionist cognitive theory is completely misguided from the outset, for whatever solutions it might provide to technical problems, it does so only at the cost of losing the explanations of the productivity, systematicity, and compositionality implicit in symbolic computationalism. Some connectionists (e.g., Smolensky, 1991; Van Gelder, 1990) have responded that connectionist systems can be made to exhibit these properties outwardly, even if true symbol-processing is not going on internally, but these responses tend to miss the mark because Fodor and Pylyshyn never denied that they could. On the contrary, the thrust of their criticism is not that connectionist models are not powerful enough to imitate cognitive phenomena, but rather that they are too powerful--viz., they are not inherently constrained to exhibit these properties. Thus, the argument continues, if human cognition were really connectionist at root, one would expect to see failures of productivity, systematicity, and compositionality that one just never sees. On the other hand, if whatever connectionist aspects of the brain there might be are rigged up in such a way as to be constrained to merely implement a symbol processor, then the interesting level of analysis for cognitive science is symbolic, not connectionist.

Some more radical connectionists have argued for the outright elimination of the traditional psychological vocabulary. Talk of beliefs, desires, and the like, according to eliminativists, is simply the misleading conceptual residue of bad "folk" theories of psychology. Thus, rather than being renovated, they argue, it should simply be dropped altogether, as have the terms of other false scientific theories of the past, such as phlogiston and caloric.

Ramsey, Stich, and Garon (1991) have gone so far as to argue that connectionism is not only compatible with eliminativism, but actually entails it. Specifically, they argue that if all our "beliefs" are stored in a superposed form (as in a strongly distributed connectionist network), then no sense can be made of the claim that any one belief, or small set of beliefs is responsible for any particular action. Since the activity of the whole net goes into producing its output, all of our "beliefs" must be equally responsible. Thus, "belief" turns out to be a notion of no explanatory value. We would be better off to explain behavior in terms of the internal structure of the network itself.

Andy Clark (1993) attempts to evade this conclusion by defending the claim that one can be a connectionist without having to reject the vocabulary of "folk psychology" wholesale. Clark's response to Ramsey et al. is, in essence, that beliefs are not, in any straightforward way, the causes of our actions. They still play a role, however, in explanations of our behavior. By way of analogy, he points out that the claim, "the match lit because it was struck," does not describe the causal microstructure of combustion, but it does give us a good counterfactual-supporting explanation (i.e., if the match hadn't been struck, ceteris paribus, it wouldn't have lit).

Interestingly, the notion of counterfactual support as a criterion of a generalization's suitability as a scientific law comes directly from the Hempelian approach to explanation. Some generalizations, such as "all the coins in my pocket are dimes," are not counterfactual-supporting. That is, they do not support inferences like, "if this coin (say a penny, currently not in my pocket) were put into my pocket, then it would be a dime." Other generalizations, such as "all ravens are birds," are counterfactual-supporting; they license inferences like, "if this (say) refrigerator were a raven, then it would be a bird." Significantly, although being counterfactual-supporting is the mark of a statement's being lawlike, it does not seem to explain why some statements are lawlike and others not.

Smolensky (1995a) takes a much tougher line against eliminativism than Clark. He argues that Ramsey et al. (1991) are misled into arguing that there are no beliefs explicitly represented in connectionist networks because they only look at numerical representations of the nets. Smolensky agrees that when one looks at the array of numbers representing the activation levels and connection strengths of the perhaps hundreds of nodes in a net, little leaps out as a sign of a specific representation of anything. But the numerical representation of a net is only one way to look at it. By looking, by contrast, at a geometrical representation of a net--viz., one in which activations and weights are represented by vectors in space--it is a quite simple matter to designate portions of space that represent specific beliefs that can be said to be "held" by the net. Thus, the argument runs, the first step in the Ramsey et al. argument--that individual beliefs are represented nowhere in the net--is plain false. If beliefs are explicitly represented in the network, then there is no reason to concede that all of them take part in the generation of each behavior. And if this inference fails, then the concept of "belief" can once again play an important role in the explanation of behavior.
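The geometrical reading can be illustrated with a minimal sketch of our own (the prototype vector, the cosine measure, and the threshold are illustrative assumptions, not Smolensky's proposal): the net's current activation is a point in a vector space, and a "belief" is identified with a region of that space.

```python
import numpy as np

def holds_belief(activation, prototype, threshold=0.9):
    """Say the net 'holds' a belief if its activation vector lies in the belief's
    region of activation space, here the cone of states close to a prototype."""
    cos = activation @ prototype / (np.linalg.norm(activation) * np.linalg.norm(prototype))
    return cos >= threshold

rng = np.random.default_rng(1)
dogs_have_fur = rng.normal(size=50)                 # prototype direction for one belief
state = dogs_have_fur + 0.1 * rng.normal(size=50)   # current hidden-layer activation

print(holds_belief(state, dogs_have_fur))                 # True: the state falls in the region
print(holds_belief(rng.normal(size=50), dogs_have_fur))   # almost surely False for an unrelated state
```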

Smolensky (1995b) has extended this line of reasoning into a full-blown account of explanation in what he calls "integrated connectionist/symbolic" (ICS) architecture. In brief, Smolensky's argument is that if one is interested in the causal bases of behavior, one must look to the connectionist mechanisms believed to underlie it. Unfortunately, although the connectionist level of description may be useful for questions of cognitive cause, it doesn't give rise to a satisfactory explanation of behavior. For this, one must turn to a symbolic level of description. Explicitly included among such descriptions are Chomsky-style accounts of language competence that Smolensky himself once (1988) rejected (see, e.g., Prince & Smolensky, in press).

This amounts to a compromise between eliminativism, on the one hand, and the implementationalism of Fodor and Pylyshyn on the other. Instead of either the connectionist or the symbolic level holding exclusive importance in the scientific account of cognition, Smolensky has given a crucial role to each. Splitting the question of causation from that of explanation, Smolensky has assigned the former to the connectionist level and the latter to the symbolic level. The obvious question, then, is what sort of explanation the symbolic level gives; what makes it explanatory. Since Smolensky explicitly denies it causal power, given the models of explanation we have thus far surveyed, it must be either inferential (a la D-N) or pragmatic. Smolensky (personal communication, 1994) has affirmed that he understands explanations to be arguments in which the observations to be explained are the conclusions (a la Hempel).

To summarize the situation, then, we have three different accounts of the connectionist explanation of cognition on offer. The first--eliminativism--says that explanation arises from a description of the causal factors leading to behavior.

The second is Clark's claim that explanations are a matter of making counterfactual-supporting claims rather than of giving a complete description of what Clark calls the "causal microstructure." This would indicate that he has adopted some variant of the D-N account of explanation. The microstructural account, however, is just one of many counterfactual-supporting accounts that could be given, and he consistently hedges on the question of whether such alternative accounts are to be regarded as causal or not. Although he seems to believe that the microstructural account is uncontroversially causal, about the symbolic account he says only that it is not causal "in any straightforward way."

Third is Smolensky's account, which explicitly splits cause from explanation. The problem with this is that if the explanation does not give an account of the cause, then what is its status? What gives it its alleged explanatory power? Smolensky says that, given the choice between explanation being inferential, causal, or pragmatic, he comes closest to supporting Hempel's inferential account. There are significant, probably lethal, problems with the Hempelian approach, however, and Smolensky is unable to resolve these.

In addition, there is Fodor and Pylyshyn's claim that connectionist networks are only of interest to cognitive science inasmuch as they implement symbol processors. This is because it is only by physically processing physical tokens that represent the contents of mental states that mental causation can be explained. We believe that we can go some way toward resolving the difficulties that lie in the debate among these four positions by examining the concept of cause in more detail, and offering a more sophisticated account of its nature.

3. Toward a More Sophisticated Understanding of Cause

Causation, so goes the refrain any first year psychology student can recite, is not equivalent to correlation. But Hume's famous theory of cause, in essence, reduced it to precisely this. Correlation is a symmetrical relation, however, and causation is not. Thus Hume added the restriction that a certain temporal ordering must exist if the observed correlation is to be a suitable candidate for cause. But even this does not really get at the heart of the matter, for there are many temporally well-behaved correlations that are not causes (e.g., I never push the button for the elevator in my apartment building until after I have closed my apartment door). Cause is a modal notion, one that cannot be established solely by the observation of empirical facts, or other extensional entities. To put things into the current modal idiom, for A to be said to cause B, the Humean correlation between them must occur not only in this, the actual world, but in other possible worlds as well. Exactly which other worlds is a matter of some debate.

Philosopher Stephen Yablo (1987, 1992a, 1992b) has refined this crude characterization, breaking it down into four conditions that must hold for some event x to be properly said to have caused some other event y.

(1) Causes must be adequate for their effects; if x did not occur, then if it had, y would have occurred as well.
(2) Causes must be just enough for their effects as well; for any x+ that has all the properties of x plus some others besides, x+ is more than what is needed for y to occur.

Effects bear parallel obligations to their causes as well.

(3) Effects must be contingent on their causes; if x had not occurred, then y would not have occurred either.
(4) Effects must require their causes; for any x- that does not have all of the properties of x, if x- had occurred then y would not have occurred (see Yablo, 1992a, pp. 413-419; 1992b, pp. 274-277).

When all four of these conditions are satisfied, Yablo says that x and y are proportional to each other, and thus suitable candidates for cause and effect.

To help clarify all of this, consider the following example. Imagine that a particular bolt, important but not utterly crucial to a certain bridge's structural integrity, snaps suddenly. The bridge is sent into a series of oscillations that quickly result in the bridge's collapse. If the bolt had snapped more slowly, the bridge would not have been shaken so, and would not have collapsed. What was the cause of the bridge's collapse? Certainly not the snapping of the bolt, per se, for the bridge would not have collapsed if the details of the snapping had been different. Slow snapping would count as an x- in the requirement condition (4). The cause was the bolt's snapping suddenly. Only this satisfies the adequacy condition (1). From the effect's perspective, the bridge's collapse was contingent on the bolt snapping suddenly, in accordance with condition (3). Now, imagine that the bolt was made of steel. Why should not the steel bolt's snapping suddenly be considered the cause of the bridge's collapse? Because that would be considered an x+ in the enoughness condition (2). Presumably the bridge would have collapsed if, say, the bolt had been made of zinc, but snapped suddenly all the same.
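Yablo's conditions are modal, quantifying over nearby possible worlds rather than over any list we could actually enumerate, but a crude extensional sketch of our own (with a handful of toy "worlds" and an invented collapse rule) shows how the four conditions jointly single out the bolt's snapping suddenly, rather than its snapping per se or the steel bolt's snapping suddenly.

```python
from itertools import product

# A crude extensional stand-in (our own toy construction) for Yablo's four
# conditions, applied to the bolt-and-bridge example. The worlds, candidate
# events, and collapse rule are invented for illustration only.
worlds = [{"snap": s, "material": m}
          for s, m in product(["sudden", "slow", "none"], ["steel", "zinc"])]

def collapses(w):                       # the effect y
    return w["snap"] == "sudden"        # only a sudden snap shakes the bridge down

candidates = {                          # candidate causes x, from weaker to stronger
    "bolt snapped":                lambda w: w["snap"] != "none",
    "bolt snapped suddenly":       lambda w: w["snap"] == "sudden",
    "steel bolt snapped suddenly": lambda w: w["snap"] == "sudden" and w["material"] == "steel",
}

def stronger(a, b):
    """a strictly entails b over the toy worlds (a has all of b's properties, plus more)."""
    return all(b(w) for w in worlds if a(w)) and any(b(w) and not a(w) for w in worlds)

def check(name, x, y=collapses):
    adequate   = all(y(w) for w in worlds if x(w))                 # condition (1)
    contingent = all(not y(w) for w in worlds if not x(w))         # condition (3)
    # (2) nothing more than x was needed: every strictly stronger x+ is superfluous
    enough = all(any(x(w) and not xp(w) and y(w) for w in worlds)
                 for xp in candidates.values() if stronger(xp, x))
    # (4) nothing less than x would have done: every strictly weaker x- fails to bring y
    required = all(all(not y(w) for w in worlds if xm(w) and not x(w))
                   for xm in candidates.values() if stronger(x, xm))
    print(name, dict(adequate=adequate, enough=enough,
                     contingent=contingent, required=required))

for name, x in candidates.items():
    check(name, x)
# Only "bolt snapped suddenly" passes all four tests: the bare snapping is not
# adequate (a slow snap leaves the bridge standing), and the steel-specific
# candidate fails because a zinc bolt snapping suddenly would have brought the
# bridge down just as surely.
```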

There is a disturbing implication to all of this, however. Since there is only one bolt-snapping to be had here, one which was, as a matter of fact, sudden, how is it that the bolt's snapping per se cannot be properly said to have been the cause, but the bolt's snapping suddenly can? There is no "real world" difference between the snapping of the bolt and the sudden snapping of the bolt. Both are rolled into a single "real world" event. This is where the modal nature of cause is best revealed. It is only by examining alternative possible worlds that we can decide whether Yablo's four proportionality conditions are satisfied. There are no empirical differences between the bolt's snapping per se and the bolt's snapping suddenly.

To carry this over to a psychological example, consider a situation in which I decide to pour some milk out of a pitcher. There is the mental event of my decision and there is, presumably, the physical event in my brain that corresponds to my taking that decision. Which is the (better) cause of the milk's being poured? Assuming that the currently popular "token-identity" view of mind-brain relations is approximately right, there are indefinitely many other physical states on which my mental state--viz., my decision to pour the milk--could have supervened. Thus, the mental event seems the better candidate for the cause. It is more proportional to the effect. It is adequate and it is enough. Each of the various brain states that might have subserved the mental state, though perhaps adequate, does not meet the enoughness condition. They all include properties which are causally irrelevant to the fact of my decision to pour the milk (viz., because they might have been replaced by another subserving brain state entirely without significantly affecting the supervening mental state).

Now imagine that neuroscientists are secretly monitoring my brain with a machine that is calibrated to respond to the particular physical state which did, in fact, correspond to my decision to pour the milk. Is the cause of the machine's response the mental event or the physical event? It would seem that the physical event is more proportional to this effect, and thus a better candidate for the cause. This is because the mental state does not satisfy Yablo's adequacy condition; there are many other physical states on which the mental state could have supervened that would not have made the machine respond.

The moral of this story is, given different effects of the same precipitating event, different aspects of the precipitator will move to the fore as more plausible candidates for the title of "cause."

4. Yablo's Theory of Cause and Connectionism

Let us return to the argument that connectionism entails eliminativism put forth by Ramsey et al. (1991). It will be recalled that their argument was that since the activity of the entire network goes into the cause of any given behavior, and since the representations of all the supposed beliefs of the system are distributed in superposed form across the entire network as well, it makes no sense to pick out any one belief or desire as the cause of any particular behavior. Clark (1993) responded that this might be true for the actual causes of behavior, but not for the explanation of behavior. It should be obvious by now, however, that Yablo's analysis of cause would show that the beliefs and desires that Clark wants to dub only explanatory are also, because of their greater proportionality to the effects being explained, better candidate causes than the descriptions of the network on which they supervene.

Smolensky (1995a), on the other hand, argued that Ramsey et al. were wrong in claiming that the beliefs and desires themselves were not individually represented in the activity of the network. Their claim was based on examination of the wrong form of representation of the network's activity; viz., numerical instead of geometrical. But then Smolensky goes on to argue that the symbolic level of analysis (i.e., the level of beliefs and desires), though explanatory, is not causal. This honor he reserves for the connectionist level of analysis. Again, Yablo's analysis shows in exactly what sense the symbolic level can be regarded as causal. If the effects one is looking for the causes of are semantic or behavioral (in the full intentional sense), then the symbolic level is more proportional than the connectionist level. If the effects one is looking for the causes of are computational or purely motor (as distinct from fully behavioral), then the connectionist level may be more proportional than the symbolic. That is, both of Smolensky's levels are both causal and explanatory, but of different things; specifically, of different aspects of, perhaps, single events.

This sheds some light on Smolensky's ongoing debate with Fodor. What Fodor wants explained is mental causation: how beliefs and desires come to cause behavior. The entities of the symbolic level are most proportional to these effects. He rejects Smolensky's connectionist level as being of no inherent psychological interest, but must maintain a very narrow definition of "psychology" to do so. As many of his critics have pointed out (e.g., Chalmers, 1990), there are many areas of psychology that do not seem to require symbol processing for adequate explanation (e.g., sensation, perception). If Fodor wants to reject these as being "physiological" or some such, rather than full-bloodedly "psychological," then so be it, but at this point the debate descends into one of little more than semantics (in the pejorative sense).

What seems to be going on in all these debates is that an unanalyzed notion of cause (and its relation to explanation) has covered up the true nature of the dispute. The debate appears to be between different, and competing, theories of the same psychological phenomena. Instead, each theory is explaining a different set of effects and therefore (following Yablo's argument) making use of different levels of causation in its explanations of these effects. So why all the vociferous debate? There are two related answers to this question. The first is that most connectionists have not recognized the modal nature of causes and effects. True to their scientific training, they think that cause is a purely empirical phenomenon. Thus, they have implicitly concluded that because, for each effect to be explained, there is empirically only one real world event, there must only be one effect to be explained. However, because of the modal relations between cause and effect this conclusion does not follow. Any one event may encompass innumerably many distinct effects at several different levels of analysis.

The second related reason for all the debate results from a lack of consensus about what are the psychological facts to be explained. As Green (1996) has argued elsewhere, there is no consensus in psychology about the facts that psychology is supposed to explain; that is, there is no established criterion of the cognitive. To say that psychology is supposed to explain human behavior simply repeats the problem, for now we need a criterion of behavior that distinguishes it from all the other effects produced by a human being (some effects, such as their heating the air, are obviously not to be explained by psychology). Such a criterion must not rely on an implicit notion of the cognitive and must pick out the correct scale and modal scope of the effects to be explained. There is currently no consensus in psychology about such a criterion. As the effect to be explained varies from theorist to theorist, what is to count as the correct cause, and therefore explanation, also varies. In short, until psychology decides on the effects it is to explain, and wakes up to the modal properties of cause and explanation, it will be mired in a debate that cannot be resolved, but should simply be dissolved. Another way of putting this is that what appears to be a debate about the structure of the mind turns out to be a hidden debate--hidden, it would seem, even from the participants themselves--about the philosophy of science; specifically, about the natures of cause and explanation.


References

Braithwaite, R. B. (1946). Teleological explanations: The presidential address. Proceedings of the Aristotelian Society, 47, i-xx.

Braithwaite, R. B. (1953). Scientific explanation. Cambridge, Eng.: Cambridge University Press.

Bromberger, S. (1962). An approach to explanation. In R. S. Butler (Ed.), Analytical philosophy--Second series (pp. 72-105). Oxford: Blackwell.

Bromberger, S. (1966). Why-questions. In R. G. Colodny (Ed.), Mind and cosmos (pp. 86-111). Pittsburgh, PA: University of Pittsburgh.

Chalmers, D. J. (1990). Why Fodor and Pylyshyn were wrong: The simplest refutation. Proceedings of the twelfth annual conference of the Cognitive Science Society (pp. 340-347).

Clark, A. (1993). Associative engines: Connectionism, concepts, and representational change. Cambridge, MA: MIT Press.

Feigl, H. (1945). Some remarks on the meaning of scientific explanation. Psychological Review, 52, 250-259.

Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.

Green, C. D. (1996). Fodor, functions, physics, and fantasyland: Is AI a Mickey Mouse discipline? Journal of Experimental and Theoretical Artificial Intelligence, 8, 95-106.

Hempel, C. G. & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135-175.

Hospers, J. (1946). On explanation. Journal of Philosophy, 43, 337-346.

McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed processing. In Rumelhart, D. E. & McClelland, J. L. (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (vol. 1, pp. 110-146). Cambridge, MA: MIT Press.

McCulloch, W. S. & Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.

Miller, D. L. (1946). The meaning of explanation. Psychological Review, 53, 241-246.

Miller, D. L. (1947). Explanation vs. description. Philosophical Review, 56, 306-312.

Newell, A. & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery, 19, 113-126.

Prince, A. & Smolensky, P. (in press). Optimality theory: Constraint satisfaction in generative grammar. Cambridge, MA: MIT Press.

Ramsey, W., Stich, S. P., & Garon, J. (1991). Connectionism, eliminativism, and the future of folk psychology. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 199-228). Hillsdale, NJ: Lawrence Erlbaum.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-73.

Smolensky, P. (1991). Connectionism, constituency, and the language of thought. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 201-227). Oxford: Blackwell.

Smolensky, P. (1995a). On the projectable predicates of connectionist psychology: A case for belief. In C. MacDonald & G. MacDonald (Eds.), Connectionism: Debates on psychological explanation (pp. 357-394). Oxford: Basil Blackwell.

Smolensky, P. (1995b). Constituent structure and explanation in an integrated connectionist/symbolic architecture. In C. MacDonald & G. MacDonald (Eds.), Connectionism: Debates on psychological explanation (pp. 223-290). Oxford: Basil Blackwell.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Van Fraassen, B. C. (1980). The scientific image. Oxford: Oxford University Press.

Van Gelder, T. (1990). Compositionality: A connectionist variation on a classical theme. Cognitive Science, 14, 355-384.

Yablo, S. (1987). Identity, essence, and indiscernibility. Journal of Philosophy, 84, 293-314.

Yablo, S. (1992a). Cause and essence. Synthese, 93, 403-449.

Yablo, S. (1992b). Mental causation. Philosophical Review, 101, 245-280.