Terence Horgan and John Tienson claim that folk psychological laws differ in kind from basic physical laws in at least two ways: first, physical laws do not possess the kind of ceteris paribus qualifications that folk psychological laws possess, which means the two types of laws have different logical forms; and second, applied physical laws are best thought of as being about an idealized world, whereas folk psychological laws are about the actual world. I argue that Horgan and Tienson have not made a persuasive case for either view.
Abstract
The purpose of this paper is to examine critically Jerry Fodor's views on the limits of computational neural network approaches to understanding intelligence. Fodor distinguishes between two different approaches to computationally modelling intelligence, and while he raises problems with both, he is more concerned with the approach taken by those who make use of neural network models of intelligence or cognition. Fodor's claims regarding neural networks are found wanting, and the implications of these shortcomings for the computational modelling of cognition are discussed.

Keywords: cognition; computation; connectionism; Jerry Fodor; neural networks; semantics; syntax

Acknowledgements
I gratefully acknowledge financial support from the Social Sciences and Humanities Research Council of Canada during the research and writing of this article. I am also indebted to some very helpful comments from an anonymous referee.

Notes
1. Fodor (2000, pp. 13–22) provides more details about what he considers a CTM. It is directly connected with what he calls a rationalist psychology and its syntactic implementation. It is not entirely clear that a CTM requires commitment to a rationalist psychology; such a commitment may be Fodor's own proprietary interpretation of computational theories of mind. This article will not engage the issue of whether a rationalist psychology (as Fodor understands it) is required for a CTM. Instead, the focus will be on the different syntactic approaches to CTM.
2. Have a look at Elman (1992), Elman et al. (1996) and Rodriguez, Wiles and Elman (1999) for a better understanding of what this type of network is capable of doing.
3. See Guarini (2006) for a brief discussion of MCC's treatment of cases with multiple motives and multiple consequences.
4. Churchland (1989, pp. 153–196) often writes of a set of synaptic weights as embodying a theory. For reasons mentioned in the text, this is not totally implausible. However, theorising has traditionally involved the assertion or thinking of sentences. At best, the MCC (as part of a larger system) could deliver a kind of animal or pre-reflective capacity to classify. What is impressive about humans is that we can reflect on our initial classifications and come to revise them. Much of this reflection appears to be linguistically mediated.
When such linguistically mediated reflection becomes sufficiently systematic, we call it theorising. If Brandom (1994, chapter 2; 2000, chapters 2 and 3) is right about logical expressivism, then logical operators are crucial for such reflection. For example, part of what the conditional operator allows us to do is make our inferential practices explicit so that we can reflect on them. Nothing in this article should be read as an attempt to dispense with or supplant logical reconstructions of reasoning. However, humans do appear to be able to classify situations at a very young age, well before we have the ability to engage in sophisticated logical reflection. For this reason and others, models that do not implement formal logics are worth exploring, even if they are not the whole story of cognition.
5. The weights being considered are those that connect the input units to the hidden units, and those that connect the hidden units to the output units.
6. Namely, whether killing is an acceptable way of achieving freedom from an imposed burden.
7. It does not follow that the network's behaviour must remain a mystery (i.e. not subject to scientific understanding). There are techniques for analysing what is going on at the level of the hidden units. We can understand the training process as a search for a set of partitions of the hidden unit activation vector state space such that the appropriate output is produced. Hierarchical cluster analysis can be used to reveal how cases cluster together. Dot products of the hidden unit vectors of complete cases can be used to determine how close or far apart the cases are from one another in state space. This gives us some insight into what the network is treating as similar or dissimilar. Of course, the techniques mentioned here do not in any way exhaust the possibilities; a schematic sketch of the two techniques just mentioned appears after these notes.
8. A similar point could be made about inferential role semantics. If the inputs to a network are premises and the outputs are conclusions, the inferential role semanticist could look at patterns of inference to fix content, and changing the number of hidden units would only matter to the extent that it affects the patterns of inference, which could be assessed without looking at the hidden units. There are other complexities to be attended to, both with this approach and with the causal covariance approach, but there is no room to explore them in this article.
9. Over and above the two senses of theory identified in the text above, there are two others. There is the notion of a scientific theory, and there is the notion of a theory understood simply as a set of commitments, where these commitments are not as systematically related to one another as they would be in a scientific theory. Clearly, an individual scientific theory (e.g. Newton's theory of motion and gravity) is not a total theory in the sense discussed earlier.
10. Strictly speaking, a subset could contain only one commitment, but that is not what is intended here. Clearly, given that a limited theory is used to describe a kind of limited holism, what is intended is that the subset in question contains many commitments but falls far short of the totality of our commitments.
11. By 'neural network' or 'connectionist' models, I am referring to models that are not simply implementing what Fodor refers to as 'classical' or 'local' processing.
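Note 7 mentions hierarchical cluster analysis and dot products of hidden-unit activation vectors as ways of analysing what a trained network is doing. The following is a minimal sketch of those two techniques under invented assumptions: the layer sizes, weights, and case encodings are random placeholders, not the actual trained MCC model from the paper.

```python
# A minimal sketch of the two analyses mentioned in note 7. Everything
# here is hypothetical: the layer sizes, weights, and encoded cases are
# random stand-ins, not the trained MCC network.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical feedforward net: 8 input units -> 5 hidden units (cf. note 5,
# which concerns the input-to-hidden and hidden-to-output weights).
W = rng.normal(size=(8, 5))          # stand-in for trained input->hidden weights
b = rng.normal(size=5)               # stand-in for hidden-unit biases

def hidden_activations(cases):
    """Map encoded input cases to hidden-unit activation vectors (logistic units)."""
    return 1.0 / (1.0 + np.exp(-(cases @ W + b)))

cases = rng.uniform(size=(6, 8))     # six hypothetical encoded cases
H = hidden_activations(cases)        # rows = points in hidden-unit state space

# 1. Hierarchical cluster analysis: which cases cluster together in state space?
tree = linkage(H, method="average")
print("cluster per case:", fcluster(tree, t=2, criterion="maxclust"))

# 2. Dot products between hidden-unit vectors: larger values indicate cases
# the network treats as more similar (closer together in state space).
print("pairwise dot products:\n", np.round(H @ H.T, 2))
```

In a real analysis the activations would of course come from the trained network, and the resulting clusters would be inspected against the network's classifications to see what it treats as similar or dissimilar.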
One form of analogical argument proceeds by comparing a disputed case (the target) with an agreed-upon case (the source) in an attempt to resolve the dispute. There is a variation on the preceding form of argument that has not yet been identified in the theoretical literature. This variation involves multiple sources, and it requires that the sources be combined or blended for the argument to work. Arguments supporting the Triple Contract are shown to possess this structure.
In this article, the author explores Dancy's suggestion and describes a neural network model of classification that investigates the possibility of case-based moral reasoning (including learning) without recourse to moral principles. The resulting simulations show that nontrivial case classification might be possible, but that reclassification is more problematic.
Waveform relaxation has the potential to overcome the problem of the excessive computer run times required to simulate large circuits with existing simulators. One of the attractive features of waveform relaxation is its suitability for parallel implementation. The amount of data that must be exchanged between parallel processors after each iteration influences the overall performance of the simulation. A method of integration based on Chebyshev series represents solutions in a very compact form, which makes it very attractive for parallel implementations. This paper presents some results of numerical experiments with spectral integration applied in the relaxation framework to a number of MOS circuits.
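As an illustration of the style of computation the abstract describes, here is a minimal sketch of Gauss-Jacobi waveform relaxation on a toy coupled linear system. The coupling matrix, time window, and backward-Euler subsystem solver are all invented for illustration; the paper itself concerns MOS circuits and uses Chebyshev-series (spectral) integration rather than the time stepping used below.

```python
# A minimal sketch of Gauss-Jacobi waveform relaxation on a toy coupled
# linear system x' = A x (a stand-in for two coupled circuit nodes).
# All values are invented for illustration, not taken from the paper.
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.4, -1.5]])          # hypothetical node-coupling matrix
x0 = np.array([1.0, -1.0])           # hypothetical initial node voltages
T, n_steps = 2.0, 200
dt = T / n_steps

def solve_subsystem(i, other_waveform):
    """Solve node i's scalar ODE over the whole window by backward Euler,
    treating the other node's waveform (from the previous sweep) as a
    known source term -- the essence of waveform relaxation."""
    j = 1 - i
    x = np.empty(n_steps + 1)
    x[0] = x0[i]
    for k in range(n_steps):
        # backward Euler: x[k+1] = (x[k] + dt*A[i,j]*other[k+1]) / (1 - dt*A[i,i])
        x[k + 1] = (x[k] + dt * A[i, j] * other_waveform[k + 1]) / (1.0 - dt * A[i, i])
    return x

# Initial guess: hold each waveform constant at its initial value.
waves = [np.full(n_steps + 1, x0[0]), np.full(n_steps + 1, x0[1])]

for sweep in range(20):              # relaxation sweeps over the whole window
    new_waves = [solve_subsystem(0, waves[1]),
                 solve_subsystem(1, waves[0])]
    change = max(np.max(np.abs(new_waves[i] - waves[i])) for i in range(2))
    waves = new_waves
    if change < 1e-9:                # waveforms have stopped changing
        break

print(f"converged after {sweep + 1} sweeps; x(T) = ({waves[0][-1]:.4f}, {waves[1][-1]:.4f})")
```

The two subsystem solves within each sweep are independent, which is the parallelism the abstract points to; between sweeps, only each node's waveform needs to be exchanged, and representing those waveforms compactly (e.g. as Chebyshev coefficients rather than pointwise samples) is what reduces the interprocessor data traffic.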