… typically uses Boolean logic to process input from an individual user and employs stored rules to generate a prediction or suggestion. A prime example of this usage is the “Office Assistant” included with Microsoft’s Office 97 software package. This assistant is extremely useful for the individual who is unfamiliar with the software. If the user seems to be floundering around looking for a way to accomplish a task, the assistant attempts to interpret the user’s intent by looking at what he has been doing, and then makes an educated guess as to what he wants to do.
Then the assistant displays a help menu to guide the user through the desired course of action. AI needs many ideas that have, up until now, been studied only by philosophers. This is because a robot or truly AI system, if it is to have human-level intelligence and the ability to learn from its experience, needs a general world view in which to organize facts. Others have pointed this out when addressing the necessity of broadening the professional constituency of AI and reexamining its fundamental assumptions about human nature. One of the first successful applications of artificial intelligence in a business setting was the “Authorizer’s Assistant,” developed for American Express. The system allows the approval of most transactions without human intervention. The system encodes a number of rules that relate to the approval of purchases.
The system uses those rules, together with the unique profile that users establish by their pattern of purchases, to ensure that a purchase is appropriate. Perhaps the biggest potential return on AI is on Wall Street. Substantial attention has been given to the development of automated trading systems, integrating AI into capital management, and using AI in capital planning. However, information about such systems is generally limited, since disclosure of successful approaches could lead to the loss of competitive advantage, and of large sums of money. One activity that appears to be generating the greatest interest on Wall Street is data mining, using approaches such as neural networks. Data mining is the descendant, and to some, the heir and successor of statistics. Statistics and data mining pursue the same aim: to build compact and understandable models incorporating the relationships (“dependencies”) between the description of a situation and a result (or a judgment) concerning this description.
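The idea of combining stored approval rules with a user's purchase profile can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the rule names, thresholds, and profile fields are invented for the sketch and are not taken from the actual Authorizer's Assistant.

```python
# Hypothetical sketch of a rule-based authorizer. The rules and
# thresholds below are illustrative inventions, not the real system's.

def approve_purchase(amount, profile):
    """Apply simple stored rules against a user's purchase profile."""
    # Rule 1: small purchases are approved outright.
    if amount <= 50:
        return True
    # Rule 2: approve if the amount fits the user's typical spending.
    if amount <= profile["average_purchase"] * 3:
        return True
    # Rule 3: otherwise refer the transaction to a human authorizer.
    return False

profile = {"average_purchase": 120.0}
print(approve_purchase(40, profile))    # small purchase: True
print(approve_purchase(300, profile))   # within 3x average: True
print(approve_purchase(1000, profile))  # unusual amount: False (refer)
```

The point of the sketch is the division of labor the paper describes: fixed, human-written rules handle the routine cases, while only the exceptions fall through to a human.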
The underlying assumption is that there is indeed some kind of dependency, i.e. the result, measurement, or judgment we are trying to model is derived from some or all of the “descriptive variables” we have been able to gather. The main difference is that data mining techniques build the models automatically, while classical statistics tools need to be wielded by a trained statistician with a good idea of what to look for. Data mining is the process of looking for knowledge and anticipating patterns in data. One of the primary approaches for finding patterns in data is neural networks. Neural networks were named for their structural similarity to the network of neurons in the human brain. Although the methods used by neural nets are beyond the scope of this paper, their applications are generally accessible.
For example, a neural network approach can be used to investigate the relationship between a set of financial statement ratios and whether or not the firm goes bankrupt. Another example is the case where banks must choose whether or not to make a loan, based on a set of input characteristics. In a similar manner, patterns of information are investigated using neural networks to assist in the process of choosing stocks, as reported in U.S. News & World Report. So, we’ve explored what AI is and how it is being used today, but what about those dreams of a mechanical brain that so closely approximates the human mind that real, lifelike robots are possible?
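The bankruptcy-prediction example can be illustrated with the smallest possible neural network: a single neuron trained by gradient descent. The financial ratios and labels below are made-up toy data, not real bankruptcy records, and a production system would use a larger network and far more data.

```python
import math

# Illustrative single-neuron classifier trained on invented
# financial-ratio data; all numbers here are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Each row: [debt_ratio, current_ratio], label 1 = went bankrupt.
data = [([0.9, 0.5], 1), ([0.8, 0.7], 1),
        ([0.2, 2.0], 0), ([0.3, 1.8], 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):            # simple gradient-descent training loop
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y              # gradient of log-loss w.r.t. the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

def predict(x):
    """True if the trained neuron classifies the firm as bankrupt."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

print(predict([0.85, 0.6]))  # high debt, low liquidity -> True
print(predict([0.25, 1.9]))  # low debt, high liquidity -> False
```

The same pattern, more inputs and more neurons aside, underlies the loan-approval and stock-picking applications mentioned above: the network learns the mapping from descriptive variables to an outcome directly from examples, rather than from hand-written rules.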
There is Cog (short for Cognitive), the grand experiment in the latest approach to artificial intelligence: letting a machine discover the world on its own, the way humans do, rather than cramming its memory with some preexisting computer model that describes the world from a human perspective. Cog is an android wannabe: wannabe because it doesn’t have legs yet. According to its creators, those will come later. For now, it’s still learning to coordinate its eye, head, and hand “muscles.” On the other side of the coin is Cyc, the most ambitious version of the old-school, top-down system (World Book, 1999).
Some $40 million has been invested in organizing Cyc’s reasoning “engines” and stuffing its knowledge base with a half-million rules derived from 2 million common-sense facts. These are the things people soak up during childhood, like: Mothers are always older than their daughters. Birds have feathers. When people die, they stay dead (World Book, 1999). To show how Cyc’s common-sense method can find information that other software might miss, Cycorp has a database of captioned photos. Most database managers retrieve photos based on a precise word match in the caption. Type in “strong and daring person,” and Cyc pulls up a picture captioned “Man climbing mountain.” Cyc knows that a man is a person, and that mountain climbing demands strength and is dangerous. The next step for Cyc is to begin learning on its own by reading newspapers, books, and scientific journals. Then, in eight or nine years, Lenat figures Cyc will be smart enough for postgraduate work. It might help doctors make better diagnoses by checking medical records and presenting alternatives. Or it might help market researchers spot sales patterns missed by conventional data-mining programs.
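Cyc's caption-matching trick can be sketched in miniature: instead of matching query words literally, the program derives traits from a hand-written fact base. The fact base below is a hypothetical toy, orders of magnitude smaller than Cyc's half-million rules, but it reproduces the "strong and daring person" example in spirit.

```python
# Toy sketch of Cyc-style photo retrieval: captions match a query
# through a tiny hand-written fact base rather than literal words.
# All facts and captions here are illustrative stand-ins.

is_a = {"man": "person", "woman": "person"}
implies = {
    # mountain climbing demands strength and is dangerous
    "climbing mountain": {"strong", "daring"},
}

def satisfies(caption, query_terms):
    """True if the caption's derived traits cover every query term."""
    caption = caption.lower()
    derived = set()
    for word, kind in is_a.items():
        if word in caption:
            derived.add(kind)          # e.g. "man" -> "person"
    for activity, traits in implies.items():
        if activity in caption:
            derived |= traits          # activity -> implied traits
    return query_terms <= derived

print(satisfies("Man climbing mountain", {"strong", "daring", "person"}))
# True: no query word appears literally in the caption, yet it matches
print(satisfies("Dog sleeping on porch", {"strong", "daring", "person"}))
# False
```

A keyword matcher would reject the first caption outright; the inference step is what lets the stored common-sense facts bridge the gap between the query's vocabulary and the caption's.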
Lenat expects Cyc to be ready to take charge of its own research lab by 2020. He expects Cyc to design unique experiments and uncover new knowledge. MIT’s Brooks has similar dreams for Cog’s offspring, but the timetable is less certain, because Cog got off to a later start. It was conceived just five years ago, after a Jan 12, 1992, party that Brooks gave to celebrate the birthday of HAL, the AI system in 2001: A Space Odyssey. After brooding about the lack of anything close to HAL, Brooks decided he had to take a stab at it.
If all goes well as more behaviors are added, such as a sense of touch and then smell, Brooks knows what he wants the result to be: something like Lt. Commander Data, the super-smart android on Star Trek. How long might that take? Brooks doesn’t know. But maybe, around 2020, these two will mellow out and give us Commander Cycog. What does this have to do with business? Well, just think of the possibilities of a work force that never gets tired, requires little or no supervision, and has the knowledge of the entire human race at its fingertips, so to speak. The ramifications are staggering.
This could be the only way that extended space travel may be undertaken. With virtual impunity, these AI robots and systems could perform extremely dangerous tasks that would normally require a human. Additionally, a work force of these machines could greatly increase production while lowering the overall cost of production. With no payroll, a company also doesn’t have to provide costly benefits. With the ability to learn, these machines could be taught production changes in a fraction of the time required to train a human workforce, thereby reducing the time required to spin up or retool a new or modified production line, once again resulting in a cost saving to the company and ultimately the consumer.
Conclusion

Is artificial intelligence attainable? All the experts seem to think the answer to this question is a resounding YES. I agree with them; however, I don’t believe that the timelines they are forecasting are realistic. There are so many obstacles to overcome that I don’t believe 20 years will be enough time. Only time will tell if these individuals are on the right track. All we can do is wait and see. However, it should be an exciting time for man and machine.