Interesting post, and aligned with how I felt about things… until I read Nick Bostrom’s profound and thoughtful book *Superintelligence*. I can’t sum up his entire thesis here, and besides I’m only halfway through the book, but he makes one point quite convincingly: there are plausible scenarios in which a superintelligent AI appears too quickly for us to respond before it seizes a decisive strategic advantage.
For me, these two thoughts support the idea that a rapid takeoff is at least a possibility: (1) computers live in dilated time compared to our relatively low clock speeds, so maybe 1 AI-day is tens (hundreds? thousands?) of human years; (2) an AI with near-human-level strategic capability might decide to conceal its intelligence from us until it has a decisive strategic advantage.
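To make point (1) a bit more concrete, here is a crude back-of-envelope sketch (my own illustrative numbers, nothing rigorous): naively comparing a neuron's firing rate to a processor's clock rate gives a ratio of roughly ten million, which would make one "AI day" correspond to tens of thousands of human years. The comparison is of course far more subtle than a ratio of clock rates.

```python
# Crude, purely illustrative back-of-envelope; the assumed "clock speeds" are mine.
neuron_hz = 100            # rough peak firing rate of a neuron (assumption)
cpu_hz = 1e9               # a modest 1 GHz processor (assumption)
speedup = cpu_hz / neuron_hz               # about ten million
human_years_per_ai_day = speedup / 365.25  # one AI-day expressed in "human years"
print(f"speedup ~ {speedup:.0e}, one AI-day ~ {human_years_per_ai_day:,.0f} human years")
# ~27,000 human years under these very rough assumptions
```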
Bostrom also points out that our puny human imaginings about how a superintelligent being would behave are, at best, sketchy. In the end, however, I agree with him that we need to think about these scenarios, especially about how we define the tasks and goals of AIs, since those will determine what happens to humans after we create the conditions for a superintelligence to emerge.
“It’s probably fine” is not going to work forever.
I have read Bostrom as well as several other authors who push the same concerns.
If you read my blog post, you will see that I do not deny that artificial intelligence can be a threat. Nor do I deny that computers could outsmart humanity in the near future. In fact, I state both of these things more than once.
What I deny is that there is some mystical point in time: right before it, we have ordinary, boring computers; just after it, we have a new species.
I mean, yes, there is a point in time before which human beings are better at chess, and after which the machine is so much better that no human being stands a chance.
There will be a point in time, not far in the future, when self-driving cars will be safer than human-driven cars.
These changes can happen very quickly. Software can become twice as good in a matter of a year. And that 2x factor can be enough to leave human beings wanting.
Technology is definitely unpredictable. And it can make rapid progress.
But your self-driving car won’t suddenly acquire consciousness and refuse to drive you. If it refuses to drive, then a software patch is all that is needed.
Intelligence is not a mystical property, it is just software.
I wasn’t really thinking about the possibility of superintelligence or the threat of it. I was specifically thinking of the rapid emergence. My point was that there are plausible scenarios where superintelligence *appears* to arrive effectively instantaneously, for example because an ‘only’ human-level AI chose to conceal its bootstrapped learning phase (which might in any case be very short in human time, but maybe it’s years). Kind of like a tunneling prisoner, who appears to escape ‘one night’, when in fact the escape took months. If superintelligence emerges this easily, a patch is impossible.
an ‘only’ human-level AI chose to conceal its bootstrapped learning phase
So this AI has acquired “free will” and “consciousness” without us knowing and it is now upgrading itself to higher levels of consciousness.
The Internet is made of server farms with millions of computers. This is already superior, arguably, to human-level intelligence. So, yeah, maybe you will have a PC in 20 or 30 years that is as powerful as a human brain. And you might worry that this PC is hiding things from you and plotting against you… But why aren’t you worried, right now, about the Internet, as the supercomputer it is, doing the very same thing?
As human beings, we just follow our programming, and so will these computers. This may very well lead these computers to kill us all, but that will follow from their programming, not because they suddenly acquire their freedom.
Computer science has no concept of free will or consciousness. There is simply no such thing. We are just Turing machines reading and writing on tapes.
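To make the “tapes” picture concrete, here is a minimal Turing machine simulator (a toy sketch of my own, not anything from the post): every step is a deterministic table lookup on the current state and symbol, and nowhere is there room for the machine to “choose” anything.

```python
# Toy Turing machine simulator (illustrative sketch).
# The transition table fully determines every step: no free will anywhere.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]  # deterministic lookup
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rules: flip every bit of a binary string, then halt on the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "10110"))  # prints 01001_
```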
By a large margin, the software we produce is far more reliable than any human being. Simply put, if you are worried about an intelligence killing people, you should first worry about a human intelligence doing so.
If a nuclear missile were to fail and destroy an American city, would we say it was AI attacking us? If some Wall Street trading algorithm destroys the economy, would we say AI is attacking us? If Facebook or Google accidentally mass-mails sensitive material about lots of politicians, would we say AI is attacking us?
Countries and organizations routinely engage in cyberwars. This will continue and smarter software will certainly be part of the equation. Think about a software virus that is two orders of magnitude smarter than anything we currently have.
If we have self-driving cars, we will also have autonomous military drones. Think about a drone that can outsmart and outgun any human combatant. I think that we will have such things in 30 years and that they will be very dangerous.
My point is that there will not be a mystical moment where a peaceful drone “wakes up” and becomes human or superhuman. Drones in the future will still be just software… more sophisticated, faster, but still just software.
I agree that there’s nothing mystical about intelligence. However, I think that’s also an uninteresting tangent in the larger conversation. As qznc briefly alluded to, I think the 2010 stock market flash crash is an interesting early warning about very real problems we are going to face. As more and more decision making becomes automated in various ways, all sorts of nasty unintended things can happen very quickly. I don’t care whether it’s the machines “turning against their creators” or not. I think humanity collectively would do well to proceed very cautiously along the “automate everything” path that we’re heading down.
In other words, if self-driving cars all stop suddenly due to some bug, that would be massively annoying but surely fixable. If self-guiding weapons wipe out 90% of people in an hour there might not be an opportunity to fix the bug.
If self-guiding weapons wipe out 90% of people in an hour there might not be an opportunity to fix the bug.
Given a choice between a weapon in the hands of a random soldier, and a weapon in the “hands” of a sophisticated AI, which would you prefer?
Consider your answer carefully.
We have been trusting our lives to AIs for decades. When you board a plane, you are basically trusting the autopilot, which is nothing but an AI… Lots and lots of critical infrastructure relies on AI… and it often does because it is safer than relying on human beings…
The Russians already have an automated nuclear retaliation system in place:
https://en.wikipedia.org/wiki/Dead_Hand_(nuclear_war)
It goes back to the Cold War era… It was put in place, in part, because it is safer than relying on human beings… Indeed, it will only act if an attack has already begun, whereas a human being could irrationally “fear” that an attack is under way and order a retaliation without cause.
So while buggy software can be dangerous, it is often safer than relying on human beings. As long as it is used for good, smarter software will probably make our lives safer.
Yes, a self-driving car could hit a bug and start running down pedestrians… We should, and we do, worry about such things… but we have drunk drivers doing much the same regularly… at least your car won’t have a couple more beers… and it might even prevent you from driving unless you are sober…
I agree with your comments, and I welcome our self-driving car future (when the systems are good enough). The impression I got from your original post was that you were striking a dismissive tone with regard to recent concerns about widespread use of AI. I think that concern is legitimate, and not largely informed by mystical thinking about the nature of intelligence. Complex software can fail in complex ways, and many of the people who will be in positions to control deployments have little clue about how software works.
Your question about weapons in the hands of soldiers versus sophisticated AIs is an interesting one. Given the state of technology today, I would be terrified by mobile robots with heavy armaments engaging in urban warfare. (Slightly relevant: I have done a tiny bit of work on automatic target recognition software.) Maybe there will come a day when robot soldiers make sense. I think the main message of the AI alarmists is that we should be extremely careful about how we approach that day.
I would be terrified by mobile robots with heavy armaments engaging in urban warfare.
I’d be terrified of anything with heavy armaments engaging in urban warfare.
I don’t think that smart software leads to heavier armaments. I think the opposite is true.
Right now, many corporations have armed guards doing the rounds to find trespassers. You can replace these armed guards with autonomous drones that constantly look for trespassers. Only when the trespassers have been found, and identified as hostile, are weapons necessary.
Why do guards carry weapons? Because they are human beings who want to protect themselves. Autonomous drones are disposable and do not need to be armed. And if they are to be armed, they can use non-lethal weapons.
Extend this to policing in general. Have drones patrolling the streets. Unlike cops, there is no reason for these cheap drones to be armed. They can move around looking to prevent and deter crimes. Weapons can be brought forth only as needed… after all, the drone is expendable, not the cop.
It is very difficult for human beings to aim for non-lethal use of force against a determined opponent. It is a lot easier for disposable drones.
The same probably applies to the military. With cheap drones, they need fewer weapons, not more. You can bring the firepower only when needed. It is far more effective.
You can easily order disposable autonomous drones to “use non-lethal force” and “hold their ground”… and expect these orders to be followed. With human beings, if the tension grows too much, you have to start worrying about soldiers going rogue. We have seen it time and time again in Iraq. It is ineffective, dangerous and wasteful.
Eugene says:
A great article. I am becoming a fan.
To the point: artificial intelligence expressed in software programs is a giant leap forward. Not many people realize that a combination of fast hardware and the right software is a perfect modelling tool. It’s accessible to practically anyone on the planet. Building a biological system out of living cells to do the same task is nearly impossible.
AI is a program. To get smarter, it must learn, because it is unlikely anybody is going to create intelligence out of the box. Intelligence is an acquired skill.
Many algorithms today rely on neural nets, which do not express knowledge in a closed-formula format, so it seems hard to measure intelligence quantitatively. The best we can do is ask the program to solve problems or answer questions.
Still, none of the AI examples exhibit true intelligence. They are expert systems, with the ability to understand written and spoken language and a model of the world. Humans are capable of extrapolating and imagining things.
Matt Fulkerson says:
This article reminded me of a conversation I had with a friend and roommate who was studying AI back in 1996. I recall discussing whether “machine consciousness” was possible, and if so, whether it would be similar to human consciousness. I argued that since we don’t really know how to define or measure consciousness, it is unlikely the machines we are currently creating are going to spontaneously start exhibiting consciousness purely due to their computational power.
My own belief is that we do have free will within constraints. For me, holding such a belief is at least useful for understanding how we interact with the world around us. Also, from an evolutionary perspective, consciousness that has some say in what happens next seems more plausible.
My own belief is that we do have free will within constraints. (…) Also, from an evolutionary perspective, consciousness that has some say in what happens next seems more plausible.
People believed in consciousness and free will centuries before evolution, as we understand it, was conceived.
Matt Fulkerson says:
No doubt. Some old ideas are entirely discarded after centuries of progress. Sometimes old ideas are only partially discarded (e.g. Newton’s laws of motion).
In this case, the fields of evolution and quantum mechanics do seem to leave the door open for free will. In the days when only deterministic theories existed, many philosophers were hung up on the idea there could be no choice. (I’m not saying randomness must imply choice, but see below.)
Thinking out loud here:
Assume we have consciousness coexisting within brains that are capable of learning. The argument against free will is that this consciousness can only interact with the physical world through observation. The argument for free will is that this consciousness can also affect the state of the brain somehow.
Would there be any point in a machine where consciousness can only observe and not give any feedback? Why would this type of consciousness evolve? There would be no evolutionary benefit to passive observation alone.
So it seems to me that a more strongly interacting consciousness is more plausible from an evolutionary perspective.
In this case, the fields of evolution and quantum mechanics do seem to leave the door open for free will.
I really do not see any link between evolution and free will. I can use evolution in a purely deterministic software program. Many AI researchers do just that, all the time.
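For instance, here is a tiny evolutionary algorithm (an illustrative sketch of my own, not from the post) that “evolves” bit strings toward a target. Because the random generator is seeded, every run is bit-for-bit identical: evolution, in software, is just deterministic computation.

```python
# Deterministic "evolution": a seeded toy genetic algorithm (illustrative sketch).
import random

def evolve(target="1111111111", pop_size=20, generations=50, seed=42):
    rng = random.Random(seed)  # fixed seed => the whole run is reproducible
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    pop = ["".join(rng.choice("01") for _ in target) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # selection
        children = ["".join(c if rng.random() > 0.1 else rng.choice("01") for c in p)
                    for p in parents]                        # mutation
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # same output on every single run: nothing non-deterministic here
```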
Would there be any point in a machine where consciousness can only observe and not give any feedback? Why would this type of consciousness evolve? There would be no evolutionary benefit to passive observation alone.
Consciousness may play a useful role, or not. Yet even a useful consciousness does not imply free will.
What the evidence says is that when you decide to throw a rock, you become conscious of this decision only after the decision is taken. So consciousness is not what is driving your decisions.
It is incorrect to say that evolution prunes what is not useful. Most of our DNA is junk.
If something does not significantly harm the passing of genes, it is not going to get selected against… so to prove that consciousness must be useful on evolutionary grounds, you have to show that its absence would significantly harm the ability of an individual to pass on his or her genes.
But these arguments tend to be weak and full of hand-waving. Why do human beings have such large brains while other primates have much smaller brains (3x smaller or more)?
Matt Fulkerson says:
From reading the Wikipedia article about Benjamin Libet, it is not clear to me that there is consensus about his conclusions. Here is a key statement from the *Methods* section:
“In other words, apparently conscious decisions to act were preceded by an unconscious buildup of electrical activity within the brain – the change in EEG signals reflecting this buildup came to be called Bereitschaftspotential or readiness potential.”
Maybe I’m being dense, but I don’t get it. Surely “deciding to act” must have some physical origin (e.g. the electrical activity). It would of course be physically impossible to note when a decision is made before the decision has started to be made. So I don’t see how your conclusion about the rock decision example follows from a Libet experiment.
Anyway, proving or disproving the existence of free will is a lot like proving or disproving the existence of God. There is probably no experiment that can be done that will stand up to scrutiny. At the end of the day, one will believe what one will believe.
Matt Fulkerson says:
\begin{facetiousness}
I’m trying to command myself to get back to work, but instead I’m observing myself posting another comment. Maybe I’m starting to doubt the independence of my own free will :-).
\end{facetiousness}
I’m wondering if the argument over the existence of free will goes away if one views consciousness as an emergent phenomenon. With emergent phenomena, the whole is much more interesting than the sum of its parts. While the whole is utterly dependent on all of (or at least many of) the parts, the behavior of the parts is influenced by the whole.
Whether our egos are true commanders of the mind or simply passive observers is the wrong question to be asking. How thought arises from cooperative behavior within our brain might be a right question.
Here is a physics analogy. Bring a superconductor below the transition temperature, and electrons cease to behave independently. If the material is a metal above the transition temperature, electrons will flow with resistance (remove the voltage and they’ll stop flowing). Below the transition temperature, the electrons flow without resistance, being members of the superconducting condensate. Their behavior is now governed by the existence of the condensate, not by their individual properties within the ordinary metal.
So if consciousness is also an emergent phenomenon, it is plausible that it can both influence and be influenced. “We” are somehow a part of this emergent phenomenon. Consciousness depends on hardware to arise, and coordinated functioning of the hardware depends on consciousness. Their mutual dependence is what makes us interesting.
I’m wondering if the argument over the existence of free will goes away if one views consciousness as an emergent phenomenon. With emergent phenomena, the whole is much more interesting than the sum of its parts. While the whole is utterly dependent on all of (or at least many of) the parts, the behavior of the parts is influenced by the whole.
We do not know of a set of neurons that is responsible for “consciousness”. There is no part of the brain that you can remove and leave the person intact save for the loss of consciousness.
Whether our egos are true commanders of the mind or simply passive observers is the wrong question to be asking. How thought arises from cooperative behavior within our brain might be a right question.
Though consciousness may not be a purely passive function, it is probably not “in charge”. The CEO of a company is also not in charge. Decisions come to him and, most of the time, he can block them or let them pass. The Google CEO can approve a few big decisions each week, but no matter how smart he is, he can’t make all the decisions, or even a meaningful fraction of them. He has thousands of super smart engineers below him… collectively, they are many times smarter.
So if consciousness is also an emergent phenomenon, it is plausible that it can both influence and be influenced. “We” are somehow a part of this emergent phenomenon. Consciousness depends on hardware to arise, and coordinated functioning of the hardware depends on consciousness. Their mutual dependence is what makes us interesting.
Sure.
I think it is like a CEO. It can maybe block a few things (I am not going to eat this cookie)… but the actions of the consciousness are probably expensive…
Matt Fulkerson says:
First part: Not sure I understand your point, but that is ok. As you remove neurons, consciousness surely gradually gets damaged as the brain itself is damaged. They are mutually dependent.
Second part: I like the CEO analogy. Influence over a small subset of what is happening, without absolute control.
It appears that our neurons are specialized. Some help control how our hands move, others how we perceive faces and so forth. We have not found neurons in charge of consciousness.
Matt Fulkerson says:
Ah. So it seems that neurons are somehow better together than their specialized purposes would indicate. That is essentially what emergence is, if “better” is taken to mean different in surprising and interesting ways.
Here is a fascinating thought experiment… I can’t recall who came up with it, but it is great.
Ok. So you have one brain, and one consciousness. Why just one consciousness?
Some people who are left with half a brain are still conscious. Some of them are actually just as intelligent as most people.
So, if we were to split your brain in half, we would find two relatively intelligent people. They would both be conscious. Neither of them would be you.
We have documented cases that are quite close to this thought experiment actually.
But does it work in reverse too? If I take your brain and my brain and we fuse them together at the neuronal level… do you think we would have two consciousnesses co-existing? Probably not. We would get just one consciousness, and it would be neither you nor me.
Would the fused brain be “smarter” than you and I individually? Would it be “more conscious”?
Now, I don’t think we will be running these types of experiments per se… but we can already manufacture new neurons and drop them into the brain. They seem to find their place and grow connections. Such procedures will undoubtedly arise in the coming decades as therapies for neurodegenerative diseases.
But why stop at repair? What if you could add new neurons to healthy and young brains? Putting too many in the existing cranium could put stress on your body, but these things can be worked around… What happens then? Do we get a more conscious, smarter individual?
If that sounds crazy, think about dogs. Dogs have tiny brains. We could easily design experiments to grow dog neurons and add them to a dog’s brain… there are physiological limits, certainly, but we could see how far we can go…
Would the dog get smarter and more conscious?
Matt Fulkerson says:
Hmm… I suppose the fusion could either “work” and you end up with a mostly unified single consciousness with access to both individuals’ memories, or you could have a situation where the two fight it out and assert dominance at one time or another (multiple personalities).
I hear a lot that people don’t use most of their brains, so does that mean that further additions wouldn’t make us much smarter, except for specialized upgrades? If you get an upgrade, would you maybe forget how to do some other things if there is some limit to what your consciousness can manage? I bet some gifted individuals could handle more than others.
Now the dog example is very interesting. But I suppose adding things could detract from things like sense of smell and ability to track.
Hmm… I suppose the fusion could either “work” and you end up with a mostly unified single consciousness with access to both individuals’ memories, or you could have a situation where the two fight it out and assert dominance at one time or another (multiple personalities).
I don’t think we have any evidence for the “multiple personalities fighting” theory though it is, of course, possible. But even if it does happen, do you think that the multiple personalities would fight forever? It seems more likely they would eventually merge.
Interestingly, this means that both original consciousnesses would “die”.
I hear a lot that people don’t use most of their brains, so does that mean that further additions wouldn’t make us much smarter, except for specialized upgrades? If you get an upgrade, would you maybe forget how to do some other things if there is some limit to what your consciousness can manage? I bet some gifted individuals could handle more than others.
You would hope that the combined brain would have the expertise of the two people together, given time to adjust. So you could merge a physicist with a novelist and get a novelist-physicist who would be able to be both a great physicist and a great novelist.
Asraful says:
The point on free will gives us insight into the processing dependencies that actually bias our reactions. It is tough to predict the situation 30 years from now, and even more difficult if we extend that horizon. Fear about AI is ill-defined. The process of adaptation and growth of the human mind is among the most complex there is, and we are nowhere near modelling it that quickly in the computer world. Great read.