Deep Learning: An in-depth look at AI-powered Technology
Deep Learning is helping industries make great strides, but is it revolutionary or just a "useful tool" destined for extinction?
It's a busy day in 2039 and you're watching a movie while being transported by one of the countless autonomous vehicles prowling the world's roads. The car brakes and accelerates as needed, avoids crashing into things (other cars, cyclists, stray cats), obeys all traffic signals, and always stays within the lane markers.
Not long ago, such a scenario would have seemed ridiculous. It is now more and more firmly in the realm of the possible. In fact, autonomous vehicles may one day become so aware of their surroundings that accidents will be virtually non-existent. However, getting to that point requires overcoming a number of hurdles using a variety of complex processes, including deep learning. But how far can technology take us?
"Deep learning is solving a problem and it's a useful tool for us," says Xiang Ma, a machine learning expert and veteran research manager at HERE Technologies in Chicago, which works on developing sophisticated navigation systems for autonomous vehicles. “And we know it's working. But it could just be a stopgap technology. We don't know what's next."
Deep learning is a form of machine learning that is a subset of artificial intelligence.
What is Deep Learning?
A recently reinvigorated form of machine learning, itself a subset of artificial intelligence, deep learning employs powerful computers, massive data sets, "supervised" (trained) neural networks, and an algorithm called backpropagation ("backprop" for short) to recognize objects and translate speech in real time by mimicking the layers of neurons in the neocortex of the human brain.
Deep Learning: A quick explanation
Deep learning (sometimes known as deep structured learning) is a subset of machine learning, where machines use artificial neural networks to process information. Inspired by the networks of neurons in the human brain, deep learning helps computers quickly recognize and process images and speech. Computers then "learn" what these images or sounds represent and build a huge database of stored knowledge for future tasks. In essence, deep learning allows computers to do what humans do naturally: learn by immersion and example.
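The "learning by example" described above boils down to adjusting the weights of a layered network until its outputs match known answers. Below is a minimal sketch in pure Python, assuming a toy two-input problem (XOR), a single hidden layer, and squared-error loss; real deep learning systems stack many more layers and use dedicated frameworks.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: XOR, a classic problem a single layer cannot solve.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    # One hidden layer of sigmoid units feeding a sigmoid output.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(lr=0.5):
    # Backprop: propagate the output error back through each layer.
    global b2
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)           # output-layer delta
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # hidden-layer delta
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

before = loss()
for _ in range(2000):
    train_step()
after = loss()
```

The loop repeatedly nudges every weight in the direction that reduces the error, which is all "training" means at this scale.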
"The point about this approach is that it scales beautifully," Geoffrey Hinton, a pioneer of the field, told the New York Times. "Basically, you just need to keep making it bigger and faster, and it will get better. There is no turning back."
How Deep Learning works: Building the next-generation autonomous car
Ma's team at HERE creates high-definition maps that greatly enhance a vehicle's perceptual capabilities and build the navigation system for future travel. Deep learning is crucial to that process.
“Some people say we can live without HD maps, that we can just put a camera in the car,” Ma says after proudly leading a tour of his office space and showing off the company’s futuristic coffee machine (try the coffee with milk!). “But no matter how good your camera is, you will always have a case of failure. No matter how good your algorithm is, you are always missing something. So in the event your sensors are broken, the map is your last resort.”
Because it's important to start with accurate training data, Ma explains, human tagging is a crucial first step in the process. Images from Street View and Lidar (a radar-like detection system that uses laser light to collect precise 3D shapes of the world), combined with lane-marking information that is initially encoded by hand, are fed to a deep learning engine, where the model is processed repeatedly ("iteratively," goes the jargon), improved, and retrained. The model is then "deployed" (used, in more jargon) in a production pipeline to automatically detect lane markings down to the centimeter. Humans enter the equation again to verify that all measurements are correct.
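The pipeline Ma describes, manual labeling, iterative training, deployment, and human verification, can be sketched as a loop. All function names below are hypothetical stand-ins for HERE's actual tooling, which is not public; the bodies are placeholders that only illustrate the flow of data.

```python
# Stub stages standing in for HERE's real tooling (all names hypothetical).
def human_label(images):
    # Step 1: humans manually encode lane markings for each image.
    return [(img, "lane-marks") for img in images]

def train(model, labeled):
    # Step 2: fit the detector on labeled data (placeholder "training":
    # the model here is just a counter of examples seen).
    return model + len(labeled)

def detect(model, image):
    # Step 3: the deployed model detects lane markings automatically.
    return "lane-marks"

def verify(prediction):
    # Step 4: humans check that the detected measurements are correct.
    return prediction == "lane-marks"

model = 0
images = ["street_view_001.png", "lidar_scan_001.pcd"]
for _ in range(3):  # "iteratively" improved and retrained
    labeled = human_label(images)
    model = train(model, labeled)
    assert all(verify(detect(model, img)) for img, _ in labeled)
```

The point of the sketch is the shape of the loop: human effort bookends the automation, both to seed it with accurate labels and to audit its output.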
DeepMind's "deep reinforcement learning" led to the development of software called AlphaGo and its more advanced sibling AlphaGo Zero, both of which easily defeated human world champions in the ancient Chinese game of Go in 2016.
The scope and impact of Deep Learning: Revolutionary or just a useful tool?
In a 2012 New Yorker article, New York University machine learning professor and researcher Gary Marcus expressed his reluctance to hail deep learning as some sort of revolution in artificial intelligence. While it was "important work, with immediate practical applications", it nonetheless represented "only a small step towards creating truly intelligent machines".
"Realistically, deep learning is only part of the larger challenge of building intelligent machines," Marcus explained. "Such techniques lack ways to represent causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas such as 'sister' or 'same as'. They have no obvious ways of making logical inferences, and they are still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are normally used."
It was like Thor's golden hammer. Where it is applied, it is much more effective. In some cases, certain apps can turn the dial up several notches for superhuman performance.
“The power of this was really when people discovered that you could apply deep learning to a lot of problems,” says Guild AI founder Garrett Smith, who also runs machine learning organization Chicago ML. “It was like Thor's golden hammer. Where it is applied, it is much more effective. In some cases, certain apps can turn the dial up several notches for superhuman performance.”
Google's nine-year-old DeepMind, which it acquired in 2014, is a leader in that space, and its goal is nothing less than "solving intelligence" by merging machine learning and neuroscience. The UK-based company's studies of "deep reinforcement learning," a combination of deep learning based on neural networks and reinforcement learning based on trial and error, led to the development of software called AlphaGo and its more advanced sibling AlphaGo Zero, both of which easily beat human world champions in the ancient Chinese game of Go in 2016.
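Deep reinforcement learning pairs trial-and-error reward signals with a deep network as the value estimator. As an illustration of the reinforcement half alone, here is tabular Q-learning on a five-state corridor, no neural network involved; systems like AlphaGo replace the table with a deep network (and add tree search), but the reward-driven update is the same idea.

```python
import random

random.seed(1)

# A 5-state corridor: the agent starts at state 0 and earns a
# reward of 1 for reaching state 4. Actions move left or right.
N_STATES, ACTIONS = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action,
        # sometimes explore at random (trial and error).
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

After training, the table encodes the obvious policy: from every non-terminal state, moving right is valued higher than moving left.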
Science has also continued to benefit from (deep) machine learning.
Molecular dynamics, which involves simulating the trajectories of atoms and molecules to learn how a solid or biological system behaves under stress, or how drug molecules bind to their receptors, is an example. Before long, experts predict, molecular design will be fully automated, accelerating drug development.
The promise of neural networks
The biggest recent change in deep learning is the depth of neural networks, which have gone from a few layers to hundreds of them. More depth means a greater ability to recognize patterns, which improves both object recognition and natural language processing. The former has more far-reaching ramifications than the latter.
“Translation [of languages] is a big deal, and there have been amazing applications,” says Smith. “You have more flexibility in terms of networks being able to make predictions in languages that they have never seen before. But there is a limit to the general applicability of the translation. We've been doing it for a while, and it's not a huge game changer. The vision thing is what's really driving a lot of remarkable innovation. Putting a detector in a car so it can accurately judge its surroundings, that changes everything.”
Challenges in Deep Learning: How do we solve the data problem?
"You're basically representing knowledge, the ability to do complex processing," she says. “To do that, you need more neurons and more capacity. You need data that is not usually available.”
Which means pre-trained models (created by someone else and readily available online) and public datasets (ditto) won't cut it. That's where the billion-dollar giants of machine learning have a distinct advantage.
One-Shot Learning and NAS: A Powerful Combination
That's huge, says Smith.
GAN: The cat and mouse approach
“You can think of a GAN as a combination of a counterfeiter and a policeman in a game of cat and mouse, where the counterfeiter learns to pass counterfeit bills and the policeman learns to spot them. Both are dynamic; that is, the police are also training (perhaps the central bank is marking the bills that slipped away), and each side comes to learn the other's methods in a constant escalation.”
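The counterfeiter-and-policeman dynamic can be sketched with one-dimensional "bills." In this toy, assumed setup, the generator is a linear map a*z + b that turns noise into samples, the discriminator is a logistic scorer, and both are updated with hand-derived gradients; real GANs use deep networks on images, but the adversarial loop is the same.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Counterfeiter: g(z) = a*z + b tries to turn noise into samples
# that look like the real data (a Gaussian centered at 4.0).
a, b = 1.0, 0.0
# Policeman: D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

def real_sample():
    return random.gauss(4.0, 0.5)

lr = 0.05
for _ in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), a * z + b

    # Train the policeman: push D(real) up and D(fake) down
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Train the counterfeiter: push D(fake) up
    # (gradient ascent on log D(fake) with respect to a and b).
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w
    a += lr * grad * z
    b += lr * grad
```

By the end of the loop, the generator's output has drifted toward the real data's neighborhood: each side's improvement forces the other to adapt, the "constant escalation" in the quote.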
AutoML: A new form of Deep Learning?
Then there is the application of machine learning to machine learning. Called AutoML, it is based on a "learn-to-learn" instruction that prompts computers, through a learning process, to design innovative architectures (rules and methods) on their own.
“Many people are calling AutoML the new way to do deep learning, a system-wide change,” explains a recent essay at towardsdatascience.com. “Instead of designing complex deep networks, we will simply run a preset NAS algorithm. This idea of AutoML is just to abstract away all the complex parts of deep learning. All you need is data. Let AutoML do the hard part of network design! Deep learning literally becomes a plug-in tool like any other. Take some data and automatically create a decision function powered by a complex neural network.”
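Abstracting away architecture design, as the essay describes, reduces in its simplest form to searching a space of candidate architectures and keeping the best scorer. The sketch below uses random search over a made-up search space; the evaluate function is a hypothetical stand-in for actually training each candidate and measuring its validation accuracy.

```python
import random

random.seed(0)

# Hypothetical search space: depth and width of a candidate network.
SPACE = {"layers": [2, 4, 8, 16], "units": [32, 64, 128]}

def evaluate(arch):
    # Stand-in for "train the candidate, measure validation accuracy."
    # A made-up score that favors moderate depth and more units.
    return arch["units"] / 128 - abs(arch["layers"] - 8) / 16

def random_search(trials=20):
    # The simplest NAS strategy: sample architectures, keep the best.
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = {key: random.choice(opts) for key, opts in SPACE.items()}
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

best = random_search()
```

Production NAS systems replace the random sampler with smarter controllers (evolutionary or reinforcement-learning based), but the user-facing promise is the one in the quote: supply data, receive an architecture.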
The future of Deep Learning: Evolution or Extinction?
"These tasks that deep learning has been really spectacularly powerful at are exactly the kind of tasks that computer scientists have been working on for a long time because they're well defined and there's a lot of commercial interest behind them," says Kondor. “So it's true that object recognition is completely different than it was 12 years ago, but it's still just object recognition; they are not exactly high-level deep cognitive tasks. To what extent we are really talking about intelligence is a matter of speculation."
In a Medium essay published last December titled "The Deepest Problem with Deep Learning," Gary Marcus offered an updated version of his 2012 New Yorker examination of the topic. In it, he referenced an interview with University of Montreal computer science professor Yoshua Bengio, who suggested the "need to consider the difficult challenges of AI and not be satisfied with incremental advances in the short term. I'm not saying I want to forget about deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reason, learn causality, and explore the world to learn and acquire information."
Marcus "agreed with pretty much every word," and when he published the scientific paper "Deep Learning: A Critical Appraisal" in January 2018, his thoughts on why problems with deep learning are impeding the development of artificial general intelligence (AGI) caused a backlash online. In a series of tweets, Thomas G. Dietterich, distinguished professor emeritus at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, defended deep learning, noting that it "works better than anything GOFAI [Good Old-Fashioned Artificial Intelligence] ever produced."
Still, even Geoffrey Hinton, the "Godfather of Deep Learning," has changed his tune, telling Axios that he has become "deeply suspicious" of backpropagation, the algorithm that underlies deep learning.
"I don't think that's how the brain works," Hinton said. "We [humans] clearly don't need all the labeled data."
His solution: "throw it all out and start over."
However, are such drastic measures really necessary? It may be, as Facebook AI research director Yann LeCun told VentureBeat, that deep learning should ditch the popular but sometimes problematic Python coding language in favor of one that is simpler and more malleable. Alternatively, LeCun added, new hardware may be needed. In any case, one thing is clear: deep learning must evolve or risk disappearing. Though achieving the former, experts say, does not rule out the latter.
The human brain is complex. Deep learning is not.
"Current deep learning is just a data-driven tool," says HERE's Ma. "But it's definitely not self-study yet."
Not only that, but no one yet knows how many neurons a system would need to become truly self-learning. Furthermore, from a biological point of view, relatively little is known about the human brain, certainly not enough to create a system that even comes close to mimicking it. At this point, Ma says, even his three-year-old daughter has deep learning beat.
“If I show her a [single] image of an elephant and then she sees a different image of an elephant, she can immediately recognize that it is an elephant, because she already knows the shape and can picture it in her mind. But deep learning would fail at this problem, because it lacks the ability to learn from a few samples. We still rely on massive amounts of training data."
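One-shot recognition of the kind Ma's daughter performs is often approximated by comparing a new example to a single stored one in some embedding space. The sketch below assumes hand-made three-dimensional "embeddings"; in a real system a pretrained network would produce them from images.

```python
import math

# Hypothetical embeddings: in a real system a pretrained network maps
# each image to a vector; here the vectors are made up by hand.
gallery = {
    "elephant": [0.9, 0.1, 0.8],  # the single example the child saw
    "cat":      [0.1, 0.9, 0.2],
}

def distance(u, v):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def recognize(embedding):
    # One-shot classification: the nearest stored example wins.
    return min(gallery, key=lambda name: distance(gallery[name], embedding))

new_photo = [0.8, 0.2, 0.7]   # a different elephant image
label = recognize(new_photo)  # → "elephant"
```

The hard part, which the sketch assumes away, is learning an embedding where "a different elephant" really does land near the first one; that is exactly where today's data-hungry training comes back in.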
For now, Ma says, deep learning is simply a useful method that can be implemented to solve various problems. As such, he thinks, its extinction is not imminent. It is, however, likely.
“As soon as we know what's next, we'll switch to that. Because this is not the definitive solution.”
Source: iArtificial, Direct News 99