Artificial Intelligence

To begin with, Artificial Intelligence (AI) is a subfield of computer science whose goal is to enable computers to do things normally done by people, specifically things associated with people acting intelligently. Going beyond this basic definition, one can split AI into three groups: strong AI, weak AI, and everything in between.

Strong AI focuses on simulating human reasoning. Researchers in this camp are trying to build systems that not only think but also help explain how humans think.

Weak AI, on the other hand, can be defined as just trying to get systems to work. While we might be able to build systems that behave like humans, the results tell us nothing about how humans think. For example, IBM’s Deep Blue was a master chess player, but it certainly did not play in the same way that humans do.

The middle camp focuses on systems that are informed or inspired by human reasoning, using it as a guide rather than trying to model it perfectly. The article “What is artificial intelligence?” cites IBM Watson as a good example of this camp. Watson builds up evidence for the answers it finds by looking at thousands of pieces of text, which give it a level of confidence in its conclusions. It combines the ability to recognize patterns in text with the very different ability to weigh the evidence those matches provide. Its development was guided by the observation that people can reach conclusions without hard-and-fast rules, instead building up collections of evidence. Just like people, Watson notices patterns in text that each provide a little evidence and then adds all of that evidence up to reach an answer.
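The idea of adding up many weak pieces of evidence can be made concrete with a small sketch. This is not Watson’s actual code, just a toy illustration of the principle the article describes: each matched pattern contributes a small weight toward a candidate answer, and the candidate with the highest accumulated evidence wins. The function names and weights here are invented for the example.

```python
def accumulate_evidence(matches):
    """Sum per-pattern evidence weights for each candidate answer.

    `matches` is a list of (answer, weight) pairs, one per text pattern
    that matched -- each an individually weak piece of evidence.
    """
    scores = {}
    for answer, weight in matches:
        scores[answer] = scores.get(answer, 0.0) + weight
    return scores

def best_answer(matches):
    # Pick the answer with the highest accumulated evidence.
    scores = accumulate_evidence(matches)
    return max(scores, key=scores.get)

# Three weak pieces of evidence for "Paris" outweigh one stronger one for "Lyon".
matches = [("Paris", 0.2), ("Lyon", 0.5), ("Paris", 0.2), ("Paris", 0.3)]
print(best_answer(matches))  # → Paris
```

The point of the sketch is that no single match is decisive; confidence emerges only from the accumulation, which is the contrast the article draws with hard-and-fast rules.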

Watson is similar to humans in that it can mimic human thought and reasoning, but it is hard to get it to combine all of the complex parts of a human; humans are still much more advanced. Good proof of this comes from the AlphaGo and Deep Blue systems. While Deep Blue was able to defeat the chess world champion, it was not good for much else; the chess world champion, on the other hand, can think through other problems and hold conversations, among much else. This is why I think these kinds of systems are gimmicks or tricks. One concerning point one of the articles does mention, though, is that developers might have found a way to bottle something very like intuitive sense, which is one of the main things differentiating humans from systems. If systems can gain this sense, they might move up from being mere tricks, but there is still much more development to be done before they reach a human level of intuition.

Alan Turing proposed that if an AI machine could fool people into believing it is human in conversation, it would have reached an important milestone. The original Turing Test was not intended to see whether a robot could pass for a human, but rather to decide whether a machine can be considered to think in a manner indistinguishable from a human. Of course, this depends on which questions you ask. For example, in one instance a customer tried to expose the responder in a chat as a robot, and it actually worked: the response was that the server could not process the request at the moment. This shows that, despite how often the Turing Test is referenced, it neglects a fair number of human aspects.

The Chinese Room argument is as follows: Searle imagines himself alone in a room, following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. The argument is meant to show that the Turing Test is inadequate: just because a computer system can hold a conversation does not mean it actually understands what is being said.
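Searle’s scenario can be caricatured in a few lines of code. This toy “room” simply looks up incoming symbols in a rulebook and emits the prescribed reply; the rulebook entries here are invented for the example. The replies can look perfectly sensible to an outside observer, yet meaning never enters the process, which is exactly Searle’s point.

```python
# A rote symbol-to-symbol rulebook, standing in for Searle's program.
RULEBOOK = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗": "会。",    # "do you speak Chinese?" -> "yes."
}

def room_reply(symbols):
    # The operator matches shapes against the rulebook; no understanding
    # of Chinese is involved at any step.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again."

print(room_reply("你会说中文吗"))  # → 会。
```

However fluent the output, the function is only shuffling symbols, so passing a conversational test says nothing about whether understanding is present.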

Overall, after reading all of the articles, I do not think that one can consider a computing system a mind, though that does not mean it could not become one in the future. There are just too many gaps between what humans can do and what these systems can do. Along the same lines, I do not think that humans are biological computers. There is a lot more to us than input and output; take, for example, our emotions, which are extremely hard to replicate, and each person is affected by things differently. The human mind and body are very complex and cannot simply be reduced to a computer.

Two main ethical implications mentioned were jobs and legal liability. Jobs have always been a concern with innovative new technology, but in the past, people were able to adjust and sometimes even profit. Introducing AI into the workforce now could cost huge numbers of people their jobs and change the world in ways we cannot predict. The other point made in the articles concerns legal liability: if systems begin to take over human jobs, who is responsible when a mistake is made? How often will mistakes be made, and are they more or less likely than human mistakes? There are all sorts of unanswered questions that must be figured out before AI is implemented in everyday life.
