How do you define intelligence in the cognitive domain?
I never delved deeply into the intelligence-mind problem. Defining intelligence is a slippery problem and, in my opinion, not necessarily a very interesting one. Moreover, there is too much talk about it, which suggests we are possibly hitting a wall. When I first read Turing and approached the philosophy of mind, I never believed there was much promise in that space (the definition of intelligence), for many reasons. One reason is the significant confusion about what we can confidently claim to know versus what remains unknown or only partially understood from the neurobiological perspective. When Turing tackled the topic, he simply demonstrated that intelligence essentially boils down to performance, once performance can be concretely defined. In other words, if x produces y, and if a human would produce y in a manner that we, as humans, would deem intelligent, then x is intelligent (the trick lies in constructing a transitive argument for comparison, which is fair enough for programmers or pure logicians attempting to create some form of computing machine). Turing was very candid in setting limits to the thought experiment. I believe that intelligence pertains to a certain type of performance that requires specific properties at the causal level.
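To make that transitive argument explicit, here is a rough schematic rendering; the predicate names are my shorthand, not Turing's:

If Produces(x, y), and Produces(h, y) → DeemedIntelligent(h) for a human h, then DeemedIntelligent(x).

The comparison runs entirely through the output y: whatever verdict we would pass on a human producer of y is transferred to x. That is where the definitional work, and the controversy, sits.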
[I] If S produces x through I, where I has causal capacity such that x is not produced by chance and x achieves a solution to a given problem, then S is intelligent.
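Spelled out schematically, with placeholder predicate names standing in for the informal clauses above:

[I′] If Produces(S, x, I) ∧ CausalCapacity(I) ∧ ¬ByChance(x) ∧ Solves(x, P) for a given problem P, then Intelligent(S).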
A conditional definition could indeed be formulated, but I anticipate numerous caveats. I agree that a causal requirement exists for the producer of the outcome. The issue I see with your definition lies in its high generality and its open-ended clauses. Firstly, intelligence is a property displayed by something, yet your definition leaves the extension of this property unidentified. Secondly, the terms ‘recognize’ and ‘process’ lack a clear object of reference. If we limit intelligence to something computational in nature, for instance, this could be undesirable depending on one’s perspective. Additionally, is this ‘intelligence’ randomly instantiated? Does it require specific hardware? The definition remains silent on these points. While your approach appears to be functional, meaning it addresses the role of intelligence, it still leaves some ambiguity. Lastly, the notion of ‘causing differences’ seems overly general. If I fail at a task 100,000 times by repeatedly making the same mistake in different ways, I am still technically ‘causing a difference.’ However, this outcome is likely not what we intended.
Anyway, it was interesting to define intelligence and give you this answer, as I had never seriously thought about this issue. For me, defining intelligence is more an empirical than a theoretical problem, as Protagoras (and partially Turing) proposed: humans are the measure of all things. I do believe there are smart, and maybe intelligent, things out there (ChatGPT, in some capacity; AlphaZero is much more convincing to me). They solve problems as humans would, in some ways more effectively, provided we feed them time, information, gigantic memories, and good hardware (quite a set of requirements! A human does with much less). And that’s fine for me. General intelligence, though, can be different, but for me it is simply an empirical question: if I see a general-intelligence machine, I will salute it; if I don’t see one, I won’t. Since the talk does not equal the walk, I stay close to the walk and away from all sorts of useless speculation. And for the moment, I don’t see one. However, I also salute machinery that solves problems for me, and through me, in a sense. I like such machines, and I find them intelligent when they work. As a chess player, I have to see it this way. After all, we still play chess! For a partial but good account of a chunk of this discussion, you may find my article on the future of intelligence analysis quite compelling.