The Problem with Artificial Intelligence, Machine Learning, Cognitive Modeling, etc.
The problem with all of these terms is that they imply something false. ‘Cognitive modeling’ is perhaps the least false, because it claims the least, yet a simple look at the definition of ‘cognitive’ shows where the falsity lies. Since it is the least inaccurate of the three, starting with it should make the inaccuracy of the other two easier to demonstrate.
Cognitive: “Of or relating to the mental processes of perception, memory, judgment, and reasoning.”
Of the four items listed, only memory can properly be attributed to a cognitive model, and even ‘memory’ is used there in a sense very different from the sense it has in cognition.
Looked at from a developmental view, and taking Hegel’s development of reason as a relatively common model, perception is preceded by sense certainty, which is in turn preceded by simple sensation. What differentiates sense certainty is that it is certain of what it senses, in the sense that the sensation itself is what it appears to be. It cannot express what it appears to be or even differentiate one sensation from another, nor can it direct its sensing ability in any particular way.
Perception, on the other hand, can direct its sensing ability; however, it cannot say what the thing it is directed towards is as such. It can direct its sensing towards a tree, for example, but not say what the tree is as such, i.e. that it is in fact a tree. All it can do is, in effect, point and say “there”. Yet cognitive modeling lacks even this last part of simple perception, in that the model has no innate sense of what “there” means.
What is lacking in memory, as we intend it in the meaning of the term ‘cognitive’, is precisely what, as such, is being remembered. In reasoning, what is lacking is an understanding of what is being reasoned about as such. Judgment is lacking entirely, since judgment requires understanding what and how something is in comparison with something else, both of which must be understood as what and how they are as such.
People in the various communities working on ‘AI’, ‘Machine Learning’, and ‘Cognitive Modeling’ have referred to this as ‘The Hard Problem of Consciousness’, but that is a misnomer. Consciousness is the thing we have the most direct knowledge of; it is hardly the problem. The problem is the means by which consciousness is produced, and the main missing element in the various theories is precisely the as-structure revealed in the as such. This element is what we properly term understanding, and the proper term for the as such required by understanding is ontology, the knowledge of what a thing is in its being.
We know that intelligence, learning and cognition require an ontological capacity, and that in the most developed form we know of, human consciousness, it is intimately linked with language. Yet we barely know what ‘ontological capacity’ itself means, other than through the circular definition via the as-structure, never mind how to produce it, other than in the time-honoured way that involves both genders.
So what do the terms refer to? Every form of AI, etc., functions by more or less complex pattern matching combined with the ability to refer to more or less flexible rules. Problems occur when we take the metaphors as real, such as ‘decision paths’ in neural networks, where nothing approximating a decision is ever actually attempted, never mind accomplished.
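To make the claim concrete, here is a minimal sketch, in Python, of what a ‘decision’ in such a system actually amounts to: a numeric similarity score against stored patterns, followed by a fixed rule that picks the highest score. (The data, labels and function names are invented for illustration; this is no particular system’s code.)

```python
from math import sqrt

# Stored 'patterns': feature vectors paired with labels.
# Both are invented for illustration.
PATTERNS = [
    ((0.9, 0.1), "tree"),
    ((0.2, 0.8), "lamppost"),
]

def similarity(a, b):
    # Cosine similarity: a numeric score, not a judgment.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify(features):
    # The 'decision': a fixed rule selecting the best-matching pattern.
    # No sense of what a tree is as such ever enters the computation.
    scores = [(similarity(features, p), label) for p, label in PATTERNS]
    return max(scores)[1]

print(classify((0.85, 0.2)))  # -> 'tree', by arithmetic, not by recognition
```

However elaborate the patterns and the rules become, the structure remains this one: match, score, select.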
The most obvious failures involve false positives in pattern matching, which generally result in what might be termed ‘artificial stupidity’.
As an example, because I speak English and French, Facebook gives me most ads in French, on the assumption that if I speak both, French must be my mother tongue. Simply taking into account that I was born near Manchester, England (and there’s no reason to believe my father played for Manchester United), the likelihood that my mother tongue is French drops to near zero.
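A hypothetical reconstruction of that inference (I have, of course, no access to Facebook’s actual targeting logic; the rules, names and data below are invented) shows how cheaply the false positive arises, and how a single extra feature dissolves it:

```python
# Invented sketch of the kind of rule behind the mis-targeted ads.

def naive_mother_tongue(languages):
    # The naive pattern: bilingual in English and French -> assume French.
    if "French" in languages and "English" in languages:
        return "French"
    return languages[0]

def less_naive_mother_tongue(languages, birthplace_country):
    # One extra feature: someone born in England almost certainly
    # has English as a mother tongue, whatever else they speak.
    if birthplace_country == "England":
        return "English"
    return naive_mother_tongue(languages)

user = {"languages": ["English", "French"], "birthplace": "England"}

print(naive_mother_tongue(user["languages"]))        # 'French' (false positive)
print(less_naive_mother_tongue(user["languages"],
                               user["birthplace"]))  # 'English'
```

The point is not that the fix is hard, but that the system has no notion of what a mother tongue is as such; it can only ever be patched with further patterns.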
How consciousness is produced has been largely assumed to involve some form of computational ability, but no form of computation has been successfully attributed to thinking.
The usual way of saving the computational presumption is to claim that thinking (and cognition is only a restricted form of thinking) is computational, but not in a sense similar to any other form of computation. That, however, is simple equivocation, since there is no means of deciding that thinking is computational other than presumption, a presumption to which all the evidence denies any validity.
How some beings, man in particular, are ontological is an a priori for understanding any kind of intelligence, never mind for producing it artificially, unless ‘artificial’ is taken to mean not created, but faked.