Battlestar Galactica: when AI goes wrong
I bought Season One of the new Battlestar Galactica series.
I’m an old man, so I watched the original. The difference between this new version and the old one is that the Cylons are now machines built by man. In other words, Battlestar Galactica tells the story of AI gone wrong. What if we built intelligent machines, and what if these machines turned against us?
In this new series, the humans are in deep trouble, just as in the original. They are trying to escape the enemy. The enemy is sneaky and elusive: the Cylons have outsmarted the humans and rarely fight in the open.
Of course, in the post-9/11 era, this is just what we expect. Some might describe the Cylons as terrorists. What is interesting is that the humans are responsible for the Cylons’ very existence to begin with. The humans must live with the result of their actions. They can hate the Cylons, but, to some extent, they can only blame themselves. Also, their own defects are what make the Cylons so powerful in the first place: greed and hedonism are exactly what the Cylons exploit.
I like this story on two levels. First, it matches exactly what the Americans should experience and what they will eventually come to realize. You can keep polluting, you can keep funding third-world militaries to secure oil reserves or other goods, but all these actions have consequences. The Americans created Al-Qaeda to a large extent by training and funding it initially. Also, the Americans are greedy, and that is their main weakness: building empires is a dangerous and expensive game. But this doesn’t really make me like the show: I’m not looking for a Michael Moore commentary on a Friday night.

Second, the AI-is-dangerous component is interesting. I’m not advocating that we stop funding AI research, mostly because I do not think we can achieve any form of non-trivial intelligence using current computer technology. However, should we ever close in on hard AI, I believe we should back out. The day my computer “knows” it is a computer will be a dangerous day.
I’m serious about this. If, 20 years from now, we start getting close to hard AI, I will go down to the streets and ask that we stop in our tracks.