21 open problems in Artificial Intelligence
Peter Turney has come up with a list of 21 (important) open problems in the field of Artificial Intelligence. I am not aware of any other such list, so this might be an important contribution. For comparison, Wikipedia has a list of open problems in Computer Science. In the field of databases, the closest thing to a list of open problems would be the Lowell report; however, it falls short of providing true open problems.
I am a bit surprised to see Learning Chess, but not Learning Go, on his list since I have the impression that Deep Blue has pretty much learned to play Chess at a very high level, whereas the same is not true of Go.
Out of Peter’s list, two of the open problems struck a chord with me:
- Self-References in software. I am no expert in AI, but it seems to me that the main mystery we are facing today, the deepest mystery of all, is this: what is consciousness? Some say that as computers grow larger, more connected and more powerful, they will acquire consciousness. Maybe.
- Approximate queries in databases. As we now have effectively infinite storage, and as data is created and discarded faster than ever, we need smarter database systems, in the sense that they can provide human beings with exactly what they need, just like a human assistant would, only faster. The key here is probably to use lossy database engines and approximate representations. I like this topic because, while I am not an AI researcher, it is close to my interests. For related work, see our recent paper on OLAP Tag Clouds (to be presented at WEBIST 2008), our work on quasi-monotonic segmentations (to appear in IJCM), and my work on better piecewise linear segmentations (SDM 2007). This last paper is interesting because it was motivated by my frustration at defining what a flat segment is in a time series, a concept human beings seem to agree upon easily (see the small sketch after this list).
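To illustrate why even "flat" is slippery, here is a minimal Python sketch (not the SDM 2007 algorithm): it greedily grows a segment while the spread of values stays under a tolerance `eps`. Both the threshold `eps` and the left-to-right greedy scan are arbitrary choices I made up for the example, which is precisely where the difficulty lies.

```python
def flat_segments(values, eps=0.5):
    """Return (start, end) index pairs of maximal runs whose
    max-min spread stays at most eps (a naive notion of 'flat')."""
    segments = []
    start = 0
    lo = hi = values[0]
    for i, v in enumerate(values[1:], start=1):
        lo, hi = min(lo, v), max(hi, v)
        if hi - lo > eps:            # spread too large: close the current segment
            segments.append((start, i - 1))
            start, lo, hi = i, v, v  # start a fresh segment at this point
    segments.append((start, len(values) - 1))
    return segments

if __name__ == "__main__":
    series = [1.0, 1.1, 0.9, 1.0, 3.0, 3.2, 3.1, 5.0]
    print(flat_segments(series, eps=0.5))
    # [(0, 3), (4, 6), (7, 7)]
```

Note that a different scan direction or a different `eps` gives different segment boundaries on the same data, even though a human eye would "see" the flat parts immediately.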