Do evolutionary considerations provide good reason to think that the mind is a collection of special-purpose modules?

1

The architecture of the mind is a contentious issue, and in this essay I shall explore arguments in favour of the idea that the mind is almost wholly modular. In section 2 I give a rough definition of a module in the sense meant and lay some groundwork. In section 3, drawing on Carruthers, I elaborate my position and offer positive reasons for it. In section 4.1 I respond to two arguments from the literature against Carruthers' position and my own, and in 4.2 to some more general objections. In section 5 I draw my conclusions in favour of a near-completely modular mind, while allowing that there may be some very recent, non-modular mental architecture.

2

Carruthers (2004) defines a module, roughly, as a mental 'isolable, functional sub-unit' (section 1.1). I want to be a little more demanding and say that modules operate on domain-specific inputs and outputs, as in Fodor (1985), and have access to (mostly) proprietary databases. So: a mental 'unit' that is isolable, performs a specific function, operates on domain-specific inputs and outputs, and has access to a (mostly) proprietary database of information. It is worth noting here that such mental structures need not have direct analogues in brain structures, although I shall discuss worries relating to this below.

For now, I shall be taking as a given that the brain is computationally realised, as found in Haugeland (2000). Roughly speaking, this position suggests that the brain should be thought of as a device that manipulates tokens according to formal rules, whereby the positions of tokens in any given system may themselves be taken as tokens for the purposes of creating 'virtual machines' that simulate various rule-based systems. Such a position is up for debate, but there is a large literature in its support (see Haugeland (2000) for a defence of this view, and Searle (1980) for a critique). So, the modules in question can be thought of as being 'simulated' by formal systems manipulating tokens, with multiple layers of simulation being used to create the specified structure. Here we should take tokens to mean values of one sort or another: in mechanical computers the most basic tokens are 1 and 0, and similarly we can think of mental tokens as values that can be operated upon but are devoid of intrinsic meaning. Accepting that this assumption may leave me open to criticism on grounds beyond the scope of this discussion, I shall proceed with it in place. Going forward, then, I have defined a module as a simulated mental functional sub-unit that runs computational algorithms, draws on a proprietary database, and operates on domain-specific inputs and outputs. I shall also be assuming, for the purposes of the discussion that follows, that the mind is a proper target of natural selection, although that too may be a matter of debate. I further assume, following Fodor (1985), that we have reason to think that the mind is at least somewhat structured. Next, I shall consider evolutionary reasons to expect the mind to be composed entirely, or largely, of such modules.
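To fix ideas, here is a minimal sketch in Python of a module in the sense just defined. Every detail (the predation domain, the token names, the database contents) is my own illustrative invention rather than anything drawn from Carruthers or Fodor; the point is only to make 'isolable, single-function, domain-specific, proprietary database' concrete.

    # A toy 'module': isolable, single-function, domain-specific in its
    # inputs and outputs, with a (mostly) proprietary database.
    # All names and contents are invented for illustration.

    INPUT_DOMAIN = {"looming-shadow", "slithering-shape", "small-dot"}
    PROPRIETARY_DB = {"looming-shadow", "slithering-shape"}  # known predator cues

    def predator_module(visual_token: str) -> str:
        """Map a domain-specific visual token to a domain-specific action."""
        if visual_token not in INPUT_DOMAIN:
            return "NO-OP"   # outside the module's domain: simply ignored
        if visual_token in PROPRIETARY_DB:
            return "FLEE"    # matched against the module-private database
        return "ORIENT"      # in-domain but unfamiliar: investigate

    print(predator_module("looming-shadow"))  # FLEE
    print(predator_module("birdsong"))        # NO-OP

The important features are that the function is isolable (nothing else reads its database), and that tokens from outside its input domain receive no processing at all.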

3

The massively modular hypothesis (MMH) I shall be defending is found in Carruthers (2004), with some additions of my own. This is the name given to the hypothesis that we should expect the mind to be totally or largely composed of modules, as outlined above. The examination shall be restricted to evolutionary grounds: since I have taken computational tractability to be a given in my assumptions, it cannot be counted as an independent reason in favour of MMH.

3.1

Evolution, according to Carruthers, is a process of 'bolting-on' new 'special-purpose items to [an exigent] repertoire' (section 2.1). The line of thinking is roughly this: when a new mutation comes along, let's assume it either adds to or subtracts from the survival fitness of a given species. If it subtracts from survival fitness, the addition is likely to die out quickly. If it adds to survival fitness, the adaptation will be propagated and quickly outcompete its rivals. So it goes for biological systems, and I argue, following Carruthers, that such adaptive additions would likewise appear in the mind, so that over a significant period we should come to expect massive modularity. I am, however, going to depart slightly from Carruthers and offer a slightly different justification.

Here's a plausible story about how the first modules might come about. Consider the earliest mind. It was probably made up of nothing but raw sensory inputs, produced by Fodorian modules (Fodor, 1985). It makes sense that at some point a mutation led to the addition of some new neurological architecture, which permitted the simulation of mental modules beyond those considered by Fodor. This new architecture will eventually spread through a small sub-section of the population, assuming it doesn't harm survival fitness. Now, let's assume that the new architecture can indeed enhance an individual's likelihood of surviving, and that there are two different versions of the simulation it can be instructed to run. One is modularised, in the sense outlined above, and confers a significant advantage in avoiding predation. The second takes as input all sensory data, attempts to process it in a domain-general way, and confers an advantage in attracting a mate. Let's assume that each system confers equal advantage. Even under such conditions, I wish to argue that the modular architecture will win out, because of the difference in energy demand. A modular simulation, which restricts its inputs, will by definition have fewer tokens in its formal system: the general system must find some way to simulate all the possible outputs of the sensory organs as tokens, including all of those that the modular system takes as input, plus all the others, whereas the module requires only those tokens needed to represent its restricted inputs. At this early stage of evolution, there is a direct relation between the representations produced by the neurological system and the primitive mind. The modular neurology will therefore require less adenosine triphosphate (ATP) to power the neurochemical reactions that generate the representations of the tokens in its formal system: fewer tokens mean fewer neurochemical reactions to simulate them. Using less energy confers an additional advantage on the modular architecture over a competing general one, and so we should expect it to outcompete the general system.
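The energy comparison can be put numerically. Below is a toy calculation, with every figure invented for the sake of illustration, on the simplifying assumption that ATP cost scales linearly with the number of tokens a simulation must represent per activation.

    # Toy comparison of the running costs of the two rival simulations.
    # All figures are invented; the only assumption doing work is that
    # energy cost is proportional to the number of tokens represented.

    MODULAR_TOKENS = 20        # restricted, predation-relevant inputs only
    GENERAL_TOKENS = 10_000    # every possible sensory output, tokenised
    ATP_PER_TOKEN = 1          # assumed cost per token per activation

    modular_cost = MODULAR_TOKENS * ATP_PER_TOKEN
    general_cost = GENERAL_TOKENS * ATP_PER_TOKEN
    print(f"general system costs {general_cost / modular_cost:.0f}x more per activation")

With equal survival advantage on each side, the cheaper simulation is the one selection should favour.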

3.2 

Now, it may be contested that this argument only works when there are two competing mental systems, and so only demonstrates plausibility with respect to two coexisting systems, one modular and one general. However, by invoking an intuition about evolution and an observation about the nature of general processing systems, I think I can show that domain-general processing systems would never get off the ground at all. The intuition, which I shall briefly argue is plausible, is this: any adaptation that creates too large an energy (ATP) demand on the organism it evolves in will kill that organism, unless it confers a commensurate advantage in acquiring energy from the environment. This seems plausible to me, since increased energy consumption demands increased intake, which presumably must be conferred by the same advantage (or by luck, which could not persist down multiple generations). There seems no sensible way to avoid such an outcome, although obviously what counts as 'too large' is up for grabs. I want to suggest that in any neurological system sufficiently complex to develop a mental simulation, any mental structure that is domain-general will always represent 'too large' an increase in demand. I shall begin by considering the earliest minds and suggest that, up to a point in history (more on this later), increased energy demands will always rule out domain-general mental structure.

Return to the picture I painted above, of newly evolved mental architecture and a simulation emerging from randomness that takes a non-modular but truth-tracking form conferring an advantage in finding a mate. How such a non-modular structure could be computationally realised is mysterious (see Fodor, 2000 for a discussion; I briefly return to it below), but I think it is reasonable to assert something about such a realisation. The number of tokens needed in this simulation would be huge, since there would need to be a token for each antecedent memory, a token for any processed value yet to be turned into output, and tokens for each possible output state. Even if the number of tokens were relatively low, say 100, and even if there were only 10 possible states in play (which seems implausibly low, given the domain-general nature of the system), the energy demands would be incredible. That would create 100^10 (roughly 10^20) combinations to assess, let's say, per second of activation. That is the number of states the neurological structure must simulate per second (bear in mind that neuronal activity is usually measured in milli- or microseconds). Let's assume that simulating each state requires just 1 ATP overall, which is very generous. That's 10^20 ATP per second, which, at the standard yield of roughly 32 ATP per glucose molecule, calculates to about 3.1x10^18 molecules, or approximately a milligram of pure glucose, per second of activation. Given the vagaries of digestion, that could equate to hundreds of calories of extra intake per day, and far more on realistic token counts! What's worse, because the system is domain-general, it will be very frequently activated, since it must operate with all the various inputs available to the minded organism, and it will constantly have to check whether its criteria for activation are met, which will itself require assessing the 10^20 relevant states.
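For transparency, here is the back-of-envelope arithmetic as a short Python sketch. The 100-token, 10-state figures and the 1-ATP-per-state assumption are the deliberately generous ones from the text; the roughly 32 ATP per glucose molecule and roughly 4 kcal per gram of glucose are standard textbook values.

    # Back-of-envelope version of the calculation in the text.
    AVOGADRO = 6.022e23
    GLUCOSE_MOLAR_MASS = 180.0                 # grams per mole

    states_per_second = 100 ** 10              # = 1e20 states to simulate
    atp_per_second = states_per_second * 1     # generous: 1 ATP per state
    glucose_per_second = atp_per_second / 32   # ~32 ATP per glucose molecule
    grams_per_second = glucose_per_second / AVOGADRO * GLUCOSE_MOLAR_MASS
    kcal_per_day = grams_per_second * 86_400 * 4   # if constantly active

    print(f"{glucose_per_second:.2e} glucose molecules per second")  # ~3.13e+18
    print(f"{grams_per_second * 1000:.2f} mg glucose per second")    # ~0.93
    print(f"~{kcal_per_day:.0f} extra kcal per day")                 # ~323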
This all spells a huge energy demand for the a-modular system, and as above, it seems that this would result in any such system dying out of a population very quickly, unless the commensurate advantage conferred outweighs the increased energy cost. Since energy scarcity is one of the most fundamental evolutionary challenges faced by larger, complex organisms, it seems extraordinarily unlikely that conditions of energy surplus could exist long enough to permit the selection of a domain-general system. Even on an implausibly generous account, the domain-general system looks to be far too demanding on evolving organisms.

Further down the path of evolutionary history, things become more complex. It is not implausible that more energy-efficient forms of domain-general representation become possible. I accept that it is a sensible response to suggest that a-modular, domain-general systems may emerge as more complex mental apparatus evolves, and I shall address this more extensively in section 4.2.2.

4

In section 4.1 I shall consider arguments from the literature against Carruthers' position and my own: in 4.1.1, Cowie and Woodward's argument against the analogy from biology, and in 4.1.2, their directionality objection. In 4.2 I shall consider some more general worries: computational realisation in 4.2.1, and energy conservation in 4.2.2.

4.1.1

In their paper, Cowie and Woodward (2004) suggest that the analogy with biology is flawed. I shall summarise their argument (pp. 314-316) as follows:

  1. Structure and function are intimately related. Hypotheses about structure inform hypotheses about function in a reflective equilibrium;

  2. As mental structures leave no fossil record, there is no evolutionary record of the structure of the mind;

  3. Therefore, any inferences about the structure of the mind based on evolutionary concerns are intrinsically limited.

I wish to accept this argument, and to suggest that evolutionary modularity arguments of this form are analogous to social contract theory: a plausible story, conforming to the available evidence and intuitions, that gives us some idea of what to expect about the structure of the mind. I do not wish to defend the stronger thesis that modularity is the only sensible cognitive architectural scheme, only that if we take the mind as computationally realised (and I think we should), then there are good evolutionary considerations suggesting the mind is modular. That being the case, I am quite happy to accept that the argument is radically under-determined by the evolutionary evidence. However, the point deserves more consideration, since other arguments (e.g. Carruthers, 2004) have pushed for that stronger thesis. In response, I would say that any argument about the structure of the mind (as opposed to the brain) is, of course, going to be under-determined by the physical evidence. What we can do is debate which of the stories of mental evolution we are telling (modular, a-modular, or others) is most compelling, and why. This can then inform our empirical enquiries to refine each story, and hopefully one will eventually win out. Until then, this argument counts as much against any argument that speculates about the history of mental architecture at all, and not just against MMH.

4.1.2

A related argument to the one above replaces (3) with:

  3. Therefore, any inferences about the structure of the mind are prone to the objection of inferring in the wrong direction: we ought to start with structure, and deduce the function (paraphrased from ibid.).

Indeed, this has some weight to it, even if there doesn't seem to be a principled reason for starting with structure. In biology, it is most common to start an analysis of some unknown organ by assessing its structure. However, and this is an important disanalogy with biology worth noting, we are analysing the mind, and since we have no clear picture of how the mind is physically realised, we are constructing a compelling narrative that fits the evidence to inform further inquiry, rather than conducting a definitive analysis of a physical organ. The reason for this disanalogy seems obvious: biology has a fossil record and genetic information to consult; psychology lacks these tools. But, crucially, that counts against any psychological hypothesis that tries to account for evolutionary concerns, and arguably against all arguments in psychology that aren't centred on neurology. We have failed to find the neurological structure for 'the subconscious,' probably because there are many, but that doesn't prevent us from formulating plausible hypotheses that reference 'the subconscious,' because it conforms neatly to other theories about our biology, including our evolution; and so it is with MMH. MMH plausibly conforms to our best evidence regarding our evolution, and if computational realisation is true, then there are strong evolutionary reasons to think we have a massively modular mind.

4.2

My argument against a domain-general mental architecture turns on two major ideas: the computational realisation of the mind, and the sheer energy demand involved in assigning neurological tokens in domain-general systems. I shall briefly consider the rejection of each, and argue for their acceptance, before moving on to concluding remarks.

4.2.1 

It is possible that, contrary to everything in this paper, the mind is a non-computationally realised thing with non-modular, domain-general systems. I accept that I cannot rule this out. However, I wish to point to the oddness of any such system. What would a non-computationally realised mind look like? How do we explain who or what is 'doing' the understanding (the 'homunculus' or 'regress' problem), and how does it do what it does with a finite neurological system, if not with reference to multiple levels of simulation of formal systems, as in a computer? Computers, after all, share many of our combinatorial capacities. These considerations are not at all meant to be decisive in my favour, merely to highlight the oddness of a non-computational mind.

4.2.2 

The argument I sketch works only in a state where energy is scarce and energy conservation is vital. I take it that all forms of life complex enough to have the neurological capacity to create mental architecture over and above the Fodorian have always been in a position where energy is scarce. For the sake of argument, however, let's assume that there is, in fact, an apex predator with a complex mind for which energy conservation is no longer a serious survival problem. Could such a being evolve a domain-general system? I think so, assuming its neural architecture is complex enough to represent domain-general tokens in a sufficiently innovative way, and that those who are fittest, and reproduce most, are never in a position where energy scarcity is an issue.

I wish to digress briefly before bringing this section to a close. One objection to MMH I have so far failed to consider is that the brain was not fixed in the Pleistocene (Downes, 2013). I agree. The end of the Pleistocene brought the dawn of agrarianism, and the beginning of the end of energy scarcity for humans, at least in the sense required for my argument. Adequate provision for future energy needs could begin, permitting an environment of energy surplus in which, at least some of the time, humans could consume more energy than previously possible. My suggestion is that I cannot, without irrationality, reject the possibility of a domain-general system developing at the end of the Pleistocene, for human beings at least, and so I accept that it has, in fact, happened. This is a more moderate modularity than Carruthers', but one I ultimately wish to accept, since it provides an elegant explanation of the apparent domain-generality of human thought. It leaves us with a mystery about how such a domain-general system could be computationally realised, but that is a discussion for future research.

5

So, my settled position after considering all the objections: if we accept that the mind is computationally realised (and I think we should), then it is implausible from an evolutionary point of view that a domain-general mental architecture could have evolved before the end of the Pleistocene; all the mental architecture we possess that evolved prior to agrarianism (and the end of energy scarcity for humans) is modular. In the relatively short time since then, humans may also have evolved a power-hungry, domain-general reasoning system that is not modular and accounts for the re-combinatorial capacities applied to the modular outputs that come from elsewhere in the brain, though I have no idea how such a thing might be realised. This position cannot be presented as decisive, and there are other problems that domain-general systems (such as the relevance problem) and modular systems encounter that I haven't dealt with here. Nonetheless, I feel that this position accounts for our intuitions about the flexibility of our most conscious, most recently evolved mental architecture, whilst still explaining why so much is beyond our conscious reach.

Bibliography

Carruthers, P. (2004). 'The mind as a system of modules shaped by natural selection'. Available at: http://faculty.philosophy.umd.edu/pcarruthers/Shaped-modules.htm (retrieved 11 November 2017).

Cowie, F. and Woodward, J. (2004). 'The Mind is Not (Just) a System of Modules Shaped (Just) by Natural Selection', in Hitchcock, C. (ed.), Contemporary Debates in Philosophy of Science, pp. 312-334. Oxford: Blackwell Publishing.

Downes, S. (2013). 'The Basic Components of the Human Mind Were Not Solidified During the Pleistocene Epoch', in Downes, S. and Machery, E. (eds.), Arguing About Human Nature, pp. 93-101. London: Routledge.

Fodor, J. (2000). The Mind Doesn't Work That Way. Cambridge, MA: MIT Press.

Fodor, J. (1985). 'Précis of The Modularity of Mind', Behavioral and Brain Sciences 8(1), pp. 1-42.

Haugeland, J. (2000). 'Semantic Engines', in Cummins, R. and Cummins, D. (eds.), Minds, Brains and Computers. Oxford: Blackwell Publishing.

Searle, J. (1980). 'Minds, Brains, and Programs', Behavioral and Brain Sciences 3(3), pp. 417-457.
