@Julian I agree, but that’s because I know your research. Most people would consider video-game AI as specific intelligence. But what you do is different, I think.
@Nicholas Yes. Cats are very good predators, but they are basically using hard-wired algorithms.
…which suggests that if we want software with general intelligence, we need tasks for which you need general intelligence in order to succeed. “Digital savannas”.
I would say that playing computer games is as close as we can get to this sort of task; in particular, a piece of software that, when faced with a novel game taken from a suitably defined set of games, efficiently learns to play it is generally intelligent.
(Of course, I say this because this is consistent with my general research direction…)
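To make the "learns novel games" criterion concrete, here is a toy Python sketch; the "games" and the learner are made up for illustration, nothing like a real benchmark:

```python
import random

# Hypothetical sketch of "general intelligence as learning novel games":
# score an agent by how quickly it learns games it has never seen.

def make_guessing_game(seed):
    """A trivial novel 'game': exactly one of 4 moves pays off."""
    winning_move = random.Random(seed).randrange(4)
    return lambda move: 1.0 if move == winning_move else 0.0

def learn_to_play(game, episodes=100):
    """A generic learner: estimate each move's value, then exploit."""
    totals, counts = [0.0] * 4, [0] * 4
    score = 0.0
    for t in range(episodes):
        if t < 20:            # explore each move in turn
            move = t % 4
        else:                 # exploit the best current estimate
            move = max(range(4), key=lambda m: totals[m] / max(counts[m], 1))
        reward = game(move)
        totals[move] += reward
        counts[move] += 1
        score += reward
    return score / episodes

# The "general" score averages over many novel games, not one fixed game.
general_score = sum(learn_to_play(make_guessing_game(s)) for s in range(30)) / 30
print(round(general_score, 2))  # → 0.85
```

The point of the sketch is only the evaluation protocol: the learner is scored on a distribution of unseen games, not on one game it was built for.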
Francois Rivest says:
Although “I can grab and eat a strawberry without thinking”, babies can’t. This suggests that a lot of our domain-specific expertise is acquired in humans. Even though a task is easy for us (like speaking), that does not mean it does not need to be learned (at least in part).
The general finding from the industrial/organisational psychology literature, based on a huge database of studies that have measured both intelligence and job performance, is that intelligence is positively correlated with performance in just about all jobs, and the relationship is stronger as the job gets more complex. See, for example, the meta-analysis by Schmidt & Hunter (1998).
That said, the expertise literature (see work by K. Anders Ericsson) has ample evidence that it is the amount and quality of focused domain-specific practice that best distinguishes domain-specific experts from novices.
Thus, intelligence is not a disadvantage in domain-specific settings. Rather, it just becomes less important as the amount of dedicated practice and training in the domain takes over, or as the complexity of the task decreases.
Also, my sense from the research is that the correlation between intelligence and social ineptness is just a stereotype with little empirical support.
Kevembuangga says:
Daniel: Do you disagree with me, or are you criticizing the reference I used?
The latter; it weakens your point, with which I don’t fully agree either.
I think the difference is more a matter of scale than a matter of nature.
General intelligence covers a much larger “search space” than domain intelligence, so there is likely an extra cost incurred in locating the proper subdomain from the start of the “search tree” when dealing with trivial problems.
There may also be an unfavorable balance of neural-resource usage for general questions versus dedicated processing.
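The extra-cost point can be shown with a deliberately silly Python toy (invented vocabulary and numbers, not a cognitive model): the same brute-force solver pays far more to answer a trivial question when it must search the general space instead of a pre-selected subdomain.

```python
# Toy illustration: the cost of finding a trivial answer depends on
# whether the search is restricted to the right subdomain first.

def solve(search_space, is_solution):
    """Brute force: return (answer, number of candidates examined)."""
    steps = 0
    for candidate in search_space:
        steps += 1
        if is_solution(candidate):
            return candidate, steps
    return None, steps

# Trivial problem: find the word "cat".
target = lambda w: w == "cat"

animal_words = ["dog", "cat", "cow"]                        # specialist's space
general_vocabulary = [f"w{i}" for i in range(10_000)] + animal_words

_, cost_specialist = solve(animal_words, target)
_, cost_generalist = solve(general_vocabulary, target)
print(cost_specialist, cost_generalist)  # → 2 10002
```

Same solver, same trivial problem; only the size of the space it must wade through differs.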
Paul: As for whether there is a link between clumsiness/social ineptitude and intelligence, it is certainly the case.
The question of whether there is a link between clumsiness/social ineptitude and intelligence is interesting. I suspect there’s some degree of truth in the extremity (e.g. links between autism and exceptional memory), something to be said for unusual interests, and a lot of confirmation bias. In college, attending a job fair, I managed to acquire too many papers and too much swag. At one point I got into a cycle of dropping something, bending over to pick it up, and dropping something else while doing so. After I got things under control, a recruiter who saw this was very eager to talk to me, insisting I must be a scientist. I’ll have to walk into stuff at my next job interview…
And although we focus on domain-specific applications, much of machine learning is classifiable as general intelligence. Genetic algorithms, max-ent, naive Bayes, k-means clustering: the algorithms are all domain-neutral. Is it just the tweaking human practitioners currently do to an algorithm that needs better automation? And of course, automatic detection of what good and bad training examples look like would be nice…
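To illustrate the domain-neutrality point: here is a from-scratch k-means sketch (toy data, hypothetical groups) applied unchanged to two unrelated "domains".

```python
# The same k-means routine, unmodified, clusters data from two
# unrelated "domains". Toy data; not a production implementation.

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# "Domain" 1: (height cm, weight kg) of two hypothetical groups
people = [(150, 50), (152, 55), (151, 53), (180, 90), (182, 88), (181, 92)]
# "Domain" 2: (latitude, longitude) of two hypothetical city clusters
cities = [(48.8, 2.3), (48.9, 2.4), (40.7, -74.0), (40.8, -73.9)]

print(sorted(kmeans(people, 2)))
print(sorted(kmeans(cities, 2)))
```

Nothing in `kmeans` knows about heights or latitudes; all the domain knowledge is in how a human chose to represent the data as points.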
Kevembuangga, that was a very interesting article; thanks for shedding some statistical light on the question. I do still have to wonder about confirmation bias in the results, in that someone was interviewing these people and deciding whether they were well adjusted based on subjective criteria. But even if the actual correlation is weaker than the data suggests, it certainly exists, at least as you move further from the mean.
The discussion at the end, though, seems to suggest this is largely a social phenomenon and not inherent to the individual. In particular, they state that individuals raised by gifted parents with gifted peers tend to adapt socially, and it’s primarily the highly gifted mixed with regular peers who show these tendencies. That seems to be a very different phenomenon from any inherent link between high general intelligence and low domain-specific intelligence. I have a much easier time believing that the intelligent have more issues with their peers (as any substantially different individual tends to) than that the intelligent are generally predisposed to social awkwardness and poor motor skills.
Mitch says:
So are you sorta saying that general intelligence is really a very specific meta-intelligence?
Sean O'Connor says:
Yeah, it’s not clear how much is preprogrammed and how much is learned from the environment. Surely a lot is preprogrammed: the raw instincts and emotions that people operate by without knowledge, the sign stimuli to start observing faces, learning to walk, etc.
Maybe, though, vast amounts of information are gathered through unsupervised learning (Similarity Alignment).
Given the 3D structure of the brain, you can easily have features built on other features up to very high levels of complexity, and then have a simple readout layer from all those disentangling features to some wanted response.
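A toy sketch of that picture, assuming nothing about real brains: two fixed random nonlinear feature layers ("features built on features"), with only a simple linear readout trained on top.

```python
import math
import random

# Illustrative only: fixed random nonlinear layers as "features built
# on features", and a perceptron-trained linear readout on top.

random.seed(0)

def layer(weights, x):
    """One feature layer: random projection plus tanh nonlinearity."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

W1, W2 = rand_matrix(8, 2), rand_matrix(8, 8)  # fixed, never trained

def features(x):
    return layer(W2, layer(W1, x))  # features built on features

# Train ONLY the readout (perceptron rule) to separate two point clouds.
data = [((0.9, 0.8), 1), ((1.0, 1.1), 1), ((-0.9, -1.0), -1), ((-1.1, -0.8), -1)]
w = [0.0] * 8
for _ in range(50):
    for x, y in data:
        f = features(x)
        if y * sum(wi * fi for wi, fi in zip(w, f)) <= 0:
            w = [wi + y * fi for wi, fi in zip(w, f)]

accuracy = sum(
    1 for x, y in data
    if (1 if sum(wi * fi for wi, fi in zip(w, features(x))) > 0 else -1) == y
) / len(data)
print(accuracy)
```

All the adaptation happens in the cheap linear readout; the feature hierarchy underneath stays fixed, which is the division of labor the comment gestures at.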
I think that humans are usually better at some things than others, the same way different algorithms have their weaknesses and strengths. Extending that thought, an algorithm should usually perform better if it is trained for a specific task rather than a multitude of them. It seems like a trade-off between good performance and variance. Moreover, as humans, we have developed some functions, like “picking a strawberry to eat”, because they are basic instinct and we were exposed to that environment. For an AI to learn something, it should likewise be exposed to that environment.
AI applications usually target a specific problem. I feel like it would be almost impossible to find an approach that solves more problems without trading off variety against accuracy, leaving certain tasks worse off. However, the prospect is more than exciting.
Sorry to be a contrarian again, but Kanazawa is a pretty dubious “reference”.
Maybe not at the Malcolm Gladwell level, but not that far off…
@Kevembuangga Do you disagree with me, or are you criticizing the reference I used?
Using a reference does not mean I vouch for it. References are merely pointers.
But isn’t the “cats catching birds or mice” algorithm domain-specific? It can catch birds or mice, but it won’t ever do anything else 🙂