Compare the search for Artificial Intelligence to going to the moon. Or rather, since we know that the moon is within our grasp, a human expedition to Mars.
Going to Mars is an enormous endeavour
Going to Mars is a very ambitious task, and there are many obstacles in the way of getting there:
Technological challenges:
- May need to build a space-based staging post
- Advances in rocketry, spaceworthy vehicles, reliability, etc.
Human challenges:
- Heroism of potentially not having a ride home
- Long journey, small vessel, etc.
Some of these challenges will require leaps into the unknown, and overcoming many unforeseen problems in the process.
But, given a year, say, a NASA working group could come up with a budget (large) and timeline (long) that they could swear would put humans on Mars by a specified date: 2050, for argument's sake.
And, in an appendix to the budget, they could have what would amount to a (very long) list of items to buy, things to research, problems to solve, machines to build and crew selection criteria.
The point is this: there is nothing to suggest that they would be unable to deliver such a detailed document, the basic content of which would essentially be a shopping list.
AI is an enormous endeavour
Building a conscious machine is a very ambitious task, and there are many obstacles in the way of getting there:
The ability to perform every quantifiable task that humans do:
- Speech recognition, object recognition
- Knowledge retrieval, task planning
Make the machine conscious:
- What does it mean to be conscious?
Let's give a team of the very best computer scientists, philosophers, linguists, biologists, and engineers the task of outlining how to go about creating a conscious machine (anyone else who wants to pitch in is welcome: we're comparing this to the budget for a trip to Mars, after all).
Projections for when we will be able to build a machine as complex and powerful as the brain all point to the same kind of completion date (2050, for argument's sake).
But it's very difficult to say for sure how long it would take even to come up with an equivalent shopping list.
So, as Dennett says (roughly): if you have a specific question, the sciences are fantastic for providing the answer. Philosophy is required when we don't even know what question to ask.
The problem with AI is that we don't even know what the right kinds of question are: we need philosophers to help us get our bearings.
The Practice of Artificial Intelligence
Naturally, I admire every advance that's being made in creating smarter and smarter computers. Machine translation and voice recognition are both now producing results thoroughly out of reach twenty years ago.
And, just like chess, as these problems are broken down into their component parts, the "magic trick" in each case is revealed to be something quite tangible. A huge achievement, though somehow an anti-climax.
Surely, the question in the back of everyone's mind is: once the layers of the onion are peeled away, what is really there?
At the end of the day, let's assume that the machine can identify all the objects in a scene; can listen and speak; can answer questions drawing on all accessible human knowledge; can create novel solutions to problems; that is to say: can do all the things we can imagine as being elements of what makes humans score highly in tests.
What mechanism can we put in place to enable the machine to realize that it is self-aware?
There's a huge hole at the end of our shopping list, and that's a problem.