What can't Artificial Intelligence (AI) do?
An econometrics problem illustrates the difference between artificial and human intelligence. Understanding tacit knowledge and the limits of AI is crucial to implementing it effectively and fairly.
One of the only lucid thought experiments ever conducted by econometricians, the "red bus-blue bus" problem illustrates a central drawback of using statistical estimation to quantify the probability that a person will make a specific choice when faced with various alternatives. As the thought experiment proceeds, imagine that you are indifferent between taking a car or a red bus to work. Given that indifference, the best estimate of your probability of choosing either option is a coin flip: a 50 percent chance that you will take the car and a 50 percent chance that you will take the red bus. Therefore, your odds of selection are one to one.
Now introduce a third transportation option in two different scenarios, and assume that the traveler remains indifferent among the alternatives. In the first scenario, a new rail route opens, so the alternatives facing the indifferent traveler are the car, the red bus, and the train. The estimated probabilities are now one-third car, one-third red bus, and one-third train, and the odds, as in the two-option scenario, are even: one to one to one.
In the second scenario, instead of a train, assume the third option is a blue bus. The choice facing the traveler is therefore to take a car, take a red bus, or take a blue bus. Is there any real difference between taking a red bus and taking a blue bus? No, it is effectively the same choice. So the probabilities should break down as 50 percent car, 25 percent red bus, and 25 percent blue bus, for odds of two to one to one.
This is because the actual choice is exactly the same as in the original two-choice scenario, i.e., taking a car versus taking a bus. In other words, the red bus and the blue bus represent the same choice: the color of the bus is irrelevant to the traveler's choice of transportation. So the probability that the indifferent traveler selects the red or the blue bus is simply half the probability that the person takes a bus at all. However, the method by which these probabilities are estimated cannot detect such irrelevant alternatives. The algorithm encodes car, red bus, and blue bus at one-to-one-to-one odds, just as in the scenario with the train.
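The failure can be sketched numerically. In a standard multinomial logit model, choice probabilities are proportional to the exponentiated utility of each alternative, so an indifferent traveler (equal utilities) gets an even split over however many alternatives the analyst lists, whether or not two of them are effectively the same. A minimal sketch in Python, with the alternative names and zero utilities as illustrative assumptions:

```python
import math

def logit_shares(utilities):
    """Multinomial logit choice probabilities: P(i) = exp(u_i) / sum_j exp(u_j)."""
    denom = sum(math.exp(u) for u in utilities.values())
    return {alt: math.exp(u) / denom for alt, u in utilities.items()}

# An indifferent traveler: every alternative has the same utility (zero here).
two_way = logit_shares({"car": 0.0, "red_bus": 0.0})
three_way = logit_shares({"car": 0.0, "red_bus": 0.0, "blue_bus": 0.0})

print(two_way)    # car and red_bus each get 0.5
print(three_way)  # each gets 1/3 -- but intuition says 1/2, 1/4, 1/4
```

The model cannot tell that red_bus and blue_bus name the same underlying choice; it simply spreads probability over whatever labels it is given.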
Artificial Intelligence: Tacit Knowledge
The philosopher and chemist Michael Polanyi defined "tacit knowledge" as a commonly understood result that a human achieves by performing a task that cannot be codified as a repeatable rule. He distinguished it from abstract knowledge, which is describable, rule-bound, and repeatable. Tacit knowledge is difficult or impossible to express formally because humans developed the skills that make it up evolutionarily, before the advent of formal methods of communication. As a result, training AI to perform tasks that require tacit knowledge is extremely difficult.
Artificial Intelligence: Algorithmic deficiencies
The (non) choice of red bus versus blue bus is a good example of how algorithmic calculation can fail. In their crude forms, models cannot distinguish subtleties of description that humans grasp with little or no difficulty. To a person, it feels intuitive that the red bus and the blue bus are identical as transportation alternatives, and equally intuitive that adding a train, unlike adding a blue bus, genuinely changes the choice set. Expressing why bus color is irrelevant as a programmable rule in an algorithmic process, on the other hand, is extremely difficult. Why is this the case?
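One way to see the difficulty: the standard econometric remedy, a nested logit, recovers the intuitive split only because the analyst, not the algorithm, supplies the rule that the two buses belong together. A minimal two-stage sketch in Python (the grouping and the zero utilities are illustrative assumptions, not from the source):

```python
import math

def logit_shares(utilities):
    """Multinomial logit choice probabilities over a set of alternatives."""
    denom = sum(math.exp(u) for u in utilities.values())
    return {alt: math.exp(u) / denom for alt, u in utilities.items()}

# Stage 1: choose a mode. The analyst hand-codes the fact that "bus" is a
# single mode, collapsing red and blue into one alternative.
mode = logit_shares({"car": 0.0, "bus": 0.0})

# Stage 2: given "bus", choose a color (irrelevant, so an even split).
color = logit_shares({"red": 0.0, "blue": 0.0})

shares = {
    "car": mode["car"],
    "red_bus": mode["bus"] * color["red"],
    "blue_bus": mode["bus"] * color["blue"],
}
print(shares)  # car 0.5, red_bus 0.25, blue_bus 0.25
```

The fix works, but only because the grouping rule was written in by hand; discovering that rule automatically is exactly what the algorithm cannot do.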
This puzzle is an example of Polanyi's paradox, named after Michael Polanyi. The paradox, in a nutshell, is "We know more than we can tell." More fully: many of the tasks we perform rest on tacit, intuitive knowledge that is difficult to codify and automate. Polanyi's paradox comes into play whenever individuals can do something but cannot describe how they do it.
In this case, "doing something" means achieving a commonly understood result by performing a task that cannot be codified as a repeatable rule. Polanyi calls this kind of human action tacit knowledge, in contrast to abstract knowledge, which is describable, rule-bound, and repeatable.
Economist David Autor uses Polanyi's paradox to explain why machines have not taken over all human careers. The paradox suggests that if automation were not limited to the abstract realm of knowledge, machines would have usurped all human tasks and human employment would have plummeted since the 1980s. Automation has not led to this result because it requires specifying exact rules telling computers what tasks to perform, and tacit knowledge resists precisely that kind of specification, since humans developed the underlying skills evolutionarily, before the advent of formal methods of communication.
Artificial Intelligence: Evolutionary skills
Unspoken, unconscious abilities are the crux of another paradox, formalized by researchers Hans Moravec, Rodney Brooks, and Marvin Minsky. Moravec's paradox states, in compact form, that
We should expect the difficulty of reverse engineering any human ability to be roughly proportional to the amount of time that ability has been evolving in animals.
The oldest human abilities are largely unconscious and therefore seem effortless to us.
As a result, we should expect skills that appear to require no effort to be difficult to reverse engineer, but skills that require effort may not necessarily be difficult to design at all.
Paradoxically, mental reasoning and abstract knowledge require very little computation, while sensorimotor skills, visualization of future outcomes, and perceptual inference require vast amounts of computational resources. As Moravec states in his book on the subject, "It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
Weaving Polanyi's and Moravec's paradoxes into a common theme: humans have developed abstract thought only over the last few thousand years, and it seems difficult for our species because its relatively recent emergence makes it new and inherently hard to master. By contrast, humans have developed their tacit, intuitive, but inexpressible abilities over the entire course of our evolutionary history. These abilities are grounded in our environment, acquired through experience, and predate explanation.
Artificial Intelligence: The future of AI is complementary
For artificial intelligence, these paradoxes explain a counterintuitive conclusion that leads to a fundamental question of resource allocation. If the simplest skills for humans are the most challenging for machines, and if those tacit skills are difficult or impossible to codify, then the simplest tasks that humans perform unconsciously will require enormous amounts of time, effort, and training resources to teach to machines.
An inverse relationship thus emerges between the ease with which a human performs a skill and our ability to describe it, and hence its replicability by machines. The central economic question, then, is whether it is worth developing AI to perform intuitive human tasks. Why invest ever-increasing amounts of resources to develop AI that performs ever-simpler tasks?
This suggests a natural slowdown in general AI development. Although Moore's Law points to a trillion-fold increase in the processing power of computers, the logic by which we communicate with them has not changed much since the 1970s. Given the opportunity cost of AI research that enables machines to perform increasingly simple human tasks, development will slow as diminishing returns set in.
Ideally, as Autor suggests, the future of AI lies in its complementarities with human abilities rather than its substitution for them. For example, until the computer revolution of the 1970s and 1980s, statisticians employed veritable armies of graduate students to manually process reams of paper data into summary statistics like means, medians, and standard deviations. With the advent of electronic calculators and, later, computers, statistics that previously required hours or days of human effort could be calculated in seconds.
With this change in computational means, machines were able to complement teams of statistical researchers by absorbing the students' low-level, repeatable arithmetic duties. This freed the statisticians and their students, as a team, to tackle more nebulous and open-ended statistical problems, the very kinds that require the creative thinking computers do not handle well. The current view of AI and its interaction with human capabilities needs serious rethinking in terms of the kinds of problems it is being developed for. After all, do we really need AI to be able to tell us that red buses are the same as blue buses?
Source: Kunal Thakur, Direct News 99