Artificial Intelligence can be classified along two axes, each with two options: narrow or general, and strong or weak. Narrow AI performs well on a single task (e.g., playing chess), while general AI aims to reason about any sort of problem. Weak AI mimics human intelligence as measured by how correct its output is, while strong AI mimics human intelligence in terms of how the machine “thinks,” or processes data.

Using these classifications, we can understand AlphaGo and Deep Blue to be weak, narrow AIs. Watson, on the other hand, is more of a weak, general AI. Though some would classify Watson as strong due to the way it finds connections in data, Roger Schank argued strongly against that view in his article “The Fraudulent Claims Made By IBM About Watson And AI.” He argued that “You can’t understand words if you don’t know their context.” In Watson’s case, it doesn’t matter that it can nearly instantaneously look up all of Bob Dylan’s lyrics if it can’t reason well enough to understand the context behind them. In general, these three machines show that AI is certainly viable, but an AI that is both strong and general has yet to be developed, and there are significant roadblocks if we want to get there.
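To make the two axes concrete, here is a toy sketch in Python. It is purely illustrative: the labels reflect my own reading of each system as described above, not any formal or official taxonomy.

```python
# Illustrative only: a toy encoding of the two classification axes discussed above.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    scope: str      # "narrow" (one task) or "general" (any sort of problem)
    strength: str   # "weak" (judged by correct output) or "strong" (human-like thinking)

# My own reading of the three systems mentioned above.
systems = [
    AISystem("Deep Blue", scope="narrow", strength="weak"),   # chess only
    AISystem("AlphaGo", scope="narrow", strength="weak"),     # Go only
    AISystem("Watson", scope="general", strength="weak"),     # broad Q&A, but no real understanding
]

for s in systems:
    print(f"{s.name}: {s.scope}, {s.strength}")
```

Note that the strong/general quadrant is empty, which is the point of the paragraph above.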

One obstacle on the way to strong AI is the Chinese Room, John Searle’s counterargument to the Turing Test. The Chinese Room posits that if a machine were able to pass the Turing Test in Chinese, a man who understood only English could pass the same test with the aid of the machine’s step-by-step instructions written in English; he would pass the test without understanding any Chinese. I believe this is not a perfect argument against the Turing Test. Consider a human brain, with all its neurons firing as needed to make a human work. If a human were to pass the Turing Test, I could reconstruct that performance given step-by-step instructions for which neuron should fire when and where, along with the ability to fire those neurons at will. One might counter that the number of neurons is enormous, and I would reply that the same is true of the number of instructions the machine in the Chinese Room would require. Ultimately, the ability to fire neurons doesn’t grant knowledge or wisdom; it is the process through which knowledge is conveyed. Similarly, machine instructions are the process by which an AI “thinks.”

I believe that the growing concerns about AI are warranted, if only from the perspective that fear of the unknown and unknowable is healthy. I believe that as AI is developed, it is important to put procedures in place to create friendly AI. Artificial Intelligence is dangerous because it can go beyond human control. There very likely will come a time when AI is smarter than humans. At that point, its incentives will shape civilization more strongly than our human incentives do. And if those incentives aren’t completely aligned with ours, it could be disastrous for humanity.

As far as classifying computer systems as minds or vice versa, I think this is more a matter of categorization than of ethical implication. One could follow slippery-slope logic: if computer systems are minds, they are living, and if they are living, they should have rights, and feelings, and citizenship, and representation, and so on. For today, I believe it is too soon to determine where the proper cutoff lies when discussing AI. But then again, if we don’t determine it soon, maybe it will be decided for us.

P.S. I highly recommend this two-part article from WaitButWhy about Artificial Intelligence:

Part One and Two