Architecture of Intelligence
We start by making a distinction between mind and cognition, and by positing that cognition is an aspect of mind. We propose as a working hypothesis a Separability Hypothesis which posits that we can factor off an architecture for cognition from a more general architecture for mind, thus avoiding a number of philosophical objections that have been raised about the "Strong AI" hypothesis. Thus the search for an architectural level which will explain all the interesting phenomena of cognition is likely to be futile. There are a number of levels which interact, unlike in the computer model, and this interaction makes ...
However, besides these knowledge states, mental phenomena also include such things as emotional states and subjective consciousness. Under what conditions can these other mental properties also be attributed to artifacts to which we attribute knowledge states? Is intelligence separable from these other mental phenomena?
It is possible that intelligence can be explained or simulated without necessarily explaining or simulating other aspects of mind. A somewhat formal way of putting this Separability Hypothesis is that the knowledge state transformation account can be factored off as a homomorphism of the mental process account. That is: if the mental process can be seen as a sequence of transformations M1 --> M2 --> ..., where Mi is the complete mental state and the transformation function (the function that is responsible for state changes) is F, then a subprocess K1 --> K2 --> ... can be identified such that each Ki is a knowledge state and a component of the corresponding Mi, the transformation function is f, and f is some kind of homomorphism of F. A study of intelligence alone can restrict itself to a characterization of K's and f, without producing accounts of M's and F. If cognition is in fact separable in this sense, we can in principle design machines that implement f and whose states are interpretable as K's. We can call such machines cognitive agents, and attribute intelligence to them. However, the states of such machines are not necessarily interpretable as complete M's, and thus they may be denied other attributes of mental states.
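The factoring claim above can be made concrete as a commuting condition: there is a projection from full mental states M to knowledge states K, and applying the full transformation F then projecting gives the same result as projecting then applying the factored transformation f. The following toy sketch illustrates this structure; the state representation and the functions F, f, and proj are hypothetical stand-ins chosen purely for illustration, not a model of actual cognition.

```python
# Toy illustration of the Separability Hypothesis. A mental state M is a
# pair (k, e): a knowledge component k and an "other" component e
# (standing in for emotion, subjective states, etc.). All functions here
# are hypothetical stand-ins.

def F(m):
    """Full mental-state transformation: updates both components."""
    k, e = m
    return (k + 1, e * 2)   # knowledge evolves, and so does the rest

def f(k):
    """Factored knowledge-state transformation (acts on K alone)."""
    return k + 1

def proj(m):
    """Projection extracting the knowledge component K from M."""
    return m[0]

# Separability as a commuting condition: proj(F(m)) == f(proj(m)).
m0 = (0, 1.0)
states = [m0]
for _ in range(3):
    states.append(F(states[-1]))
assert all(proj(F(m)) == f(proj(m)) for m in states)

# Consequently, the K-sequence can be generated without ever tracking
# the complete M's -- this is what licenses studying K's and f alone.
ks = [proj(m0)]
for _ in range(3):
    ks.append(f(ks[-1]))
assert ks == [proj(m) for m in states]
```

A machine that implements only f would be what the text calls a cognitive agent: its states are interpretable as K's, but nothing in it corresponds to the full M's.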
B. Dimension 2: Functional versus Biological
The second dimension in discussions about intelligence involves the extent to which we need to be tied to biology for understanding intelligence. Can intelligence be characterized abstractly as a functional capability which just happens to be realized more or less well by some biological organisms? If it can, then the study of biological brains, of human psychology, or of the phenomenology of human consciousness is not logically necessary for a theory of cognition and intelligence, just as enquiries into the relevant capabilities of biological organisms are not needed for the abstract study of logic and arithmetic or for the theory of flight. Of course, we may learn something from biology about how to implement intelligent systems in practice, but we may feel quite free to substitute non-biological approaches (both in the sense of architectures which are not brain-like and in the sense of being unconstrained by considerations of human psychology) for all or part of our implementation. Whether intelligence can be characterized abstractly as a functional capability surely depends upon what phenomena we want to include in defining the functional capability, as we discussed. We might have different constraints on a definition that needed to include emotion and subjective states than on one that only included knowledge states. Clearly, the enterprise of AI deeply...