We’ve talked a little in the past about what it means to be intelligent.
Computers are really amazing machines. Their level of complexity is something we just take for granted. At the most fundamental level, computers just execute a predetermined set of instructions like:
- take this web page and display it like this
- add these two numbers together
- when the door opens, turn on the lights
- given these two addresses, and this huge set of map data, find me the shortest route to get there
Using the last example, we can even get machines to make corrections when you make a wrong turn. Your GPS system can reroute you based on another set of rules.
Through sophisticated algorithms, a machine can even take a set of these wrong turns and refine the way it presents and responds to your mistakes. It could, for example, start telling you about your next turn earlier if you are prone to missing turns. This is fairly easy to do if we add a piece of code that looks for this behavior and responds to it.
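That kind of pre-programmed adaptation can be sketched in a few lines. This is a toy illustration, not any real GPS system's code; the class, thresholds, and distances are all made up for the example:

```python
# Hypothetical sketch: a navigator that announces upcoming turns earlier
# for drivers who tend to miss them. All names and numbers are invented.

class TurnAnnouncer:
    BASE_DISTANCE_M = 200   # default warning distance, in meters
    STEP_M = 100            # extra warning distance for habitual missers

    def __init__(self):
        self.missed_turns = 0
        self.total_turns = 0

    def record_turn(self, missed: bool) -> None:
        """Log each turn and whether the driver missed it."""
        self.total_turns += 1
        if missed:
            self.missed_turns += 1

    def warning_distance(self) -> int:
        """Announce earlier when more than 1 in 4 turns is missed."""
        if self.total_turns and self.missed_turns / self.total_turns > 0.25:
            return self.BASE_DISTANCE_M + self.STEP_M
        return self.BASE_DISTANCE_M
```

Note that the rule itself ("watch the miss rate, widen the warning distance") was written by a human in advance; the machine only fills in the numbers.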
This isn’t the same as being intelligent.
Things that are “alive” can take mistakes or unexpected circumstances and not only respond to them but build entirely new models of how to deal with them. If you are sitting in the passenger seat of a car giving the driver directions, and the driver keeps missing turns, you invent a new way of giving directions on the fly. You are not pre-programmed to look for this error. In effect, you write yourself a new piece of code to deal with it.
A machine is not intelligent until you separate it from its programming. It needs to be able to learn from its mistakes. It would do this by being able to write its own code, building on its existing programming and knowing how to make additions or changes to itself.
The long development cycle of a human is a great analogy for building an intelligent machine.
When you are born, you have “basic programming”. Your heart knows how to beat, your immune system works, you can breathe, and cry, and eat, but you don’t even know how to use your hands. You progress to using your body to get you around, then you develop simple language skills. Then over the rest of your development you reprogram yourself constantly, learning from mistakes and reinforcing things you learn that are correct.
This process is the holy grail of Artificial Intelligence. You build a machine that is capable of dynamically reprogramming itself based on its “experiences” over time. It starts off like an infant and then needs to spend time learning and making mistakes.
After it “grows up” we can give it the keys to the nuclear arsenal and it can proceed to destroy us. Or maybe I’ve been reading too much sci-fi. One of those.
Good article. But I'd like to point out that a machine cannot, and will never be able to be, truly intelligent. As you say, “a machine is not intelligent until you separate it from its programming.” But in order to run, a machine always needs programming. Whether it's implemented in hardware or in software, the programming is what makes it run. Even if a machine is writing its own software, it's following a program that tells it how to write the new software. So ultimately a machine is always just following orders that were given to it by a human.

Machines will never be able to perform certain human thought characteristics, such as having a spontaneous thought (i.e. an independent thought, independent of their programming). As I'm sure you know, no one has ever been able to get a computer program to generate a truly random number. There's no way to do it, because a computer program by definition cannot be spontaneous. Therefore it cannot be truly imaginative, creative, etc. In other words, it cannot “think outside the box” unless it's programmed to think things that appear to be outside the box, but then it's not really outside the box, is it?

So a machine is doomed to always run solely as a program, with every new “thought” coming from a calculation of past data and programs. The fact that it can calculate vast amounts of data very quickly, and can mimic some human “thought” processes, does not make it intelligent. It can be made to appear intelligent, but that's not the same thing as actually being intelligent, and definitely not self-aware.

That doesn't mean artificial intelligence can't progress a long way and do some interesting things. But it does mean that machines will never become more capable than their creators, unless the human race degrades and handicaps itself by putting too much emphasis on worshiping and serving the machines; but that would be the choice of the humans, not the will of the machines.
Excellent points. I would argue that the separation from its programming is in fact possible. If a machine can reprogram itself, it should be able to copy the algorithm that lets it reprogram itself, create a separate runtime, and test whether the changes to its core work toward the outcome it desires. This doesn't fit present programming models, but in concept it makes sense.

The human brain is a programmable bioelectric machine, and it reprograms itself all the time. We know the reprogramming model exists, so we should be able to recreate the process somewhere else.
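The copy-and-test idea above can be sketched as a toy program. This is only an illustration of the concept, under the assumption that the "core" is a single parameter and "reprogramming" means tweaking it; every name here is hypothetical:

```python
# Toy sketch: a system copies its own core into a separate "runtime"
# (a sandboxed clone), tries a modified version there, and only adopts
# the change if the sandboxed copy performs better.
import copy
import random

class Learner:
    def __init__(self, step=1.0):
        self.step = step          # the "core" the system may rewrite

    def error(self, target=10.0, start=0.0, iters=5):
        """How far the system ends up from a target after a few steps."""
        x = start
        for _ in range(iters):
            x += self.step if x < target else -self.step
        return abs(target - x)

    def try_self_modification(self):
        # 1. Copy the core into a separate runtime (a sandboxed clone).
        sandbox = copy.deepcopy(self)
        # 2. Apply a candidate change to the copy, not to the live system.
        sandbox.step = self.step * random.uniform(0.5, 2.0)
        # 3. Test whether the change works toward the desired outcome.
        if sandbox.error() < self.error():
            self.step = sandbox.step   # 4. Adopt the change only if it helps.
            return True
        return False
```

The separate runtime means a bad self-modification can never make the live system worse, which is the safety property the comment is reaching for; whether this counts as "separation from its programming" is, of course, exactly the point under debate.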