Reflexivity, AI, Machine Learning, Cognitive Modelling, and the Background of any Experience
“Self-consciousness is not the empirical self, but it makes experience possible for it, a normative transcendental difference. Without a doubt only the theoreticians of natural law have overestimated knowledge, since they did not hesitate to declare that the sequencing of ends can be deciphered by right reason. Parmenides and Aristotle regarding the one, as Kant and Luther regarding consciousness, showed themselves to be more circumspect. The pros hen allows itself to be thought but not to be known. Similarly, the ultimate transcendental condition, apperception, indeed unifies our representations but is itself not a representation.”
Reiner Schürmann, Broken Hegemonies
Reflexivity, as nearly every form of AI, machine learning, cognitive modelling, and the like has intuited, is key to experiencing as such. Through reflexivity, further, we experience experiencing itself. Whether by recursion in classical AI, mathematical objects such as tensors in machine learning, or a constantly developing context that in turn rewrites the pattern-matching algorithms in cognitive modelling, a sort of minimal reflexivity is attempted. The grammatical use of the term, likewise, refers only metaphorically or analogously to anything outside the human sphere — the properly reflexive pronouns myself, ourselves, and the like are restricted to human beings.
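The minimal, purely structural reflexivity gestured at here can be sketched in a toy program (illustrative only, not any actual AI system; all names are hypothetical): a pattern matcher whose accumulated context rewrites its own matching rules, so that its history reshapes its future behaviour.

```python
from collections import Counter

class ReflexiveMatcher:
    """Toy sketch: the matcher's own history rewrites its rules."""

    def __init__(self):
        # rules: token -> label; context: record of past matches
        self.rules = {"hello": "greeting"}
        self.context = []

    def match(self, token):
        label = self.rules.get(token, "unknown")
        self.context.append((token, label))
        self._rewrite_rules()
        return label

    def _rewrite_rules(self):
        # the context feeds back: any token seen twice becomes a rule,
        # so past matching alters future matching
        counts = Counter(tok for tok, _ in self.context)
        for tok, n in counts.items():
            if n >= 2 and tok not in self.rules:
                self.rules[tok] = "familiar"

m = ReflexiveMatcher()
m.match("hi")        # "unknown" on first sight
m.match("hi")        # still "unknown", but now recorded twice...
print(m.match("hi")) # ...so the rewritten rules return "familiar"
```

This is reflexivity only in the thinnest sense the paragraph above names: the system operates on a record of its own operations, nothing more.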
To take cognitive modelling, at once the simplest attempt at machine intelligence and often the richest in results: the difficulty is that the experienced is already abstracted to data, and the background against which we experience is abstracted to context. In attempting to recall an experience, however, we do not laboriously reconstruct a context; a single contextual hint can bring the experience to immediate memory “all at once”.
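The contrast between laborious reconstruction and recall “all at once” can be illustrated with a toy sketch (hypothetical data; no cognitive-modelling framework is assumed), in which a single cue word retrieves a whole stored episode at once:

```python
# Hypothetical episodes keyed by a contextual cue phrase.
episodes = {
    "rain on the window": {"place": "library", "mood": "quiet", "year": 2019},
    "smell of coffee":    {"place": "kitchen", "mood": "bright", "year": 2021},
}

# Index every word of each cue to its whole episode.
cue_index = {}
for cue in episodes:
    for word in cue.split():
        cue_index.setdefault(word, cue)

def recall(hint):
    """A single hint retrieves the complete episode at once."""
    cue = cue_index.get(hint)
    return episodes.get(cue)

print(recall("coffee"))
# → {'place': 'kitchen', 'mood': 'bright', 'year': 2021}
```

The design choice carries the point: because each cue word indexes the whole episode, one hint returns everything together, whereas a feature-by-feature reconstruction would have to query each field separately.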
Kant named the background “transcendental apperception”, and although the term is accurate, its formulation leaves us with little to understand immediately; its construction is extremely unfamiliar. Yet the background of all experiences cannot be something unfamiliar, since it persists (although through sometimes radical changes) throughout all experiencing. As implied by Heidegger in his use of moods such as angst and profound boredom as ontologically disclosive, the familiar term for what is abstracted to ‘context’ is mood.
It’s via the uniqueness of each mood (however many similar moods may have come over us, each is subtly different from the others) that we know that a given experience, whether in the moment or remembered, is my experience. This implies that the reflexivity of human being extends to what is experienced as mood, since it reflects the experience back as something that happened to me.
Since this background of mood, which is always there though we generally notice only its changes, must itself be reflexive in order to allow the redoubling of experiences in the experience of experiencing itself, its provenance in the self becomes questionable. We do not experience mood as originating from us, as we do emotion, for instance, but as coming over us. This coming-over-us is always a priori with respect to the experience it forms a background for; hence Kant’s term ‘ap-perception’. While mood, as Heidegger states, discloses our state of being, how we are at a given moment, and in doing so defines moments themselves, could it be that its ontological function goes beyond our own being and its state to the disclosure of Being? Could a change of mood signal a change in the dispensation of Being itself, rather than merely in the state of our own being? Finally, is the standing-under implicit in understanding a standing-under something that comes over us, namely mood, and therefore dependent on it?
In any event, it’s clear that reflexivity in human experience is not simulated in any of the variations of AI and the like, other than at the most minimal level and only on abstracted events of experience. Cognitive modelling, such as ACT-R/jACT-R, comes closest in terms of modelling both the foreground experience (albeit only as abstracted data) and the background against which the experience takes place (albeit only as abstracted context).
Intelligence has to include the experience of understanding, just as cognition has to include the experience of thinking. Currently we do not know precisely how the background of experience “comes over us” in an event that is always a change. Until we do, cognitive modelling will remain only a minimal model of cognition, and artificial intelligence will exist only insofar as we take ‘artificial’ in its common meaning of “fake”.