Mammoth AI report says era of deep learning could fade, but that’s unlikely
The era of deep learning began in 2006, when Geoffrey Hinton, a professor at the University of Toronto and one of the founders of this particular approach to artificial intelligence, hypothesized that dramatically improved results could be achieved by adding many more artificial neurons to a machine learning program. The "deep" in deep learning refers to the depth of a neural network: the number of layers of artificial neurons that data traverses.
Hinton's insight led to breakthroughs in the practical performance of AI programs on tests such as the ImageNet image recognition task. The fifteen years that followed have been called the deep learning revolution.
A report released last week by Stanford University, working with several institutions, argues that the dominance of the deep learning approach may fade in the years to come because it lacks answers to tough questions about how to build AI.
"The recent dominance of deep learning may be coming to an end," the authors of the report write. "To continue to progress, AI researchers will likely need to embrace hand-coded methods for general and special purposes, along with ever-faster processors and larger data."
Also: Speed-obsessed AI industry loath to consider energy cost in latest MLPerf benchmark
The report, officially known as the "One Hundred Year Study on Artificial Intelligence," is the second installment in what is expected to be a series of reports every five years on the state of the discipline. The report is written by a group of academics who form a standing committee and organize workshops, the findings of which are summarized in the study.
The report's prediction about deep learning may be premature, for the simple reason that, unlike in times past, when AI sat on the periphery of computing, the math that powers deep learning is now firmly entrenched in the world of business computing.
Hundreds of billions of dollars in market value now rest on the fundamentals of deep learning. Deep learning, unlike any AI approach before it, is now the establishment.
Decades ago, companies with ambitious computing quests went out of business for lack of money. Thinking Machines was the jewel of the pursuit of artificial intelligence in the 1980s and 1990s; it went bankrupt in 1994 after burning through $125 million in venture capital.
The idea of today's startups going bankrupt seems far less likely, stuffed as they are with unprecedented amounts of money. Cerebras Systems, Graphcore, and SambaNova Systems have collectively raised billions and have access to much more in both the debt and equity markets.
More importantly, the leader in AI chips, Nvidia, is a powerhouse with a market value of $539 billion and roughly $10 billion a year in sales of chips for deep learning training and inference. This is a company with many avenues to build more, sell more, and grow even richer through the deep learning revolution.
Why has deep learning been such a business success? Because deep learning, whether or not it leads to anything resembling intelligence, has created a paradigm for using ever-faster computing to automate much of computer programming. Hinton, along with his collaborators, has been honored for advancing computer science, regardless of what AI researchers may think of their contribution to AI.
The authors of the AI100 report argue that deep learning faces practical limits in its insatiable appetite for data and computational power. The authors write:
But now, in the 2020s, these general methods are running into limits – available computation, model size, sustainability, data availability, brittleness, and lack of semantics – that are starting to push researchers to design specialized components of their systems to try to get around them.
All of this may be true, but, again, it is a call to arms that the computer industry is happy to spend billions responding to. Deep learning tasks have become the target of the most powerful computers. AI is no longer a niche discipline; it is the heart of computer engineering.
It takes just fourteen seconds for one of the fastest computers on the planet, built by Google, to be automatically "trained" to solve ImageNet, according to benchmark results earlier this year from the MLPerf test suite. That is not a measure of thought per se; it is a measure of how quickly a computer can transform an input into an output: images in, predicted labels out.
Every computer since Alan Turing first conceived of them does one thing and one thing only: it turns a series of ones and zeros into a different series of ones and zeros. Deep learning is simply a way for the computer to infer those transformation rules automatically rather than having a person specify them.
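The distinction can be made concrete with a toy sketch. The temperature-conversion example and all names below are my own illustration, not drawn from the report: a hand-coded rule is written by a person, while a least-squares fit (standing in for gradient-descent training) recovers the same rule from input/output examples alone.

```python
import numpy as np

# Hand-coded rule: a person specifies the transformation explicitly.
def to_fahrenheit(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# Learned rule: recover the same transformation purely from examples.
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
targets = to_fahrenheit(celsius)  # the "training data"

# Solve for w and b in f(x) = w * x + b by least squares.
X = np.column_stack([celsius, np.ones_like(celsius)])
(w, b), *_ = np.linalg.lstsq(X, targets, rcond=None)

print(round(w, 3), round(b, 3))  # recovers roughly 1.8 and 32.0
```

The machine was never told the 9/5-plus-32 rule; it inferred it from five input/output pairs, which is the pattern deep learning scales up by many orders of magnitude.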
What Google and Nvidia are helping to build is quite simply the future of all computers. Every computer program can benefit from automating some of its transformations, rather than being painstakingly coded by a person.
The incredibly simple approach behind this automation, matrix multiplication, is sublime because it is a basic mathematical operation. It’s an easy target for computer manufacturers.
This means that each chip becomes a deep learning chip, in the sense that each chip is now a matrix multiplication chip.
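To see why, note that a dense neural-network layer is, at bottom, a single matrix multiplication plus a bias and a nonlinearity. The shapes and names in this sketch are illustrative, not taken from any particular framework:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # weights: 3 inputs -> 4 outputs
b = np.zeros(4)                  # bias

def dense_layer(x):
    # ReLU(W @ x + b): the matrix multiply W @ x does nearly all the work,
    # which is why matrix acceleration is what chip makers target.
    return np.maximum(0.0, W @ x + b)

x = rng.standard_normal(3)  # one input vector
y = dense_layer(x)
print(y.shape)  # (4,)
```

Stacking many such layers is all a deep network is, so speeding up `W @ x` speeds up everything.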
"Neural networks are the new applications," Raja M. Koduri, senior vice president and general manager of Intel's Accelerated Computing Systems and Graphics Group, recently told ZDNet. "What we're seeing is that every socket, it's not the CPU, GPU, IPU, everything will have matrix acceleration," Koduri said.
Also: AI ethics: benefits and risks of artificial intelligence
When you have a hammer, everything is a nail. And the computer industry has a very big hammer.
Cerebras' WSE chip, the world's largest semiconductor, is a giant machine for doing one thing over and over again: the matrix multiplications that power deep learning.
The MLPerf suite has become the benchmark by which companies buy computers, based on their deep learning computation speed. The deep learning revolution has become the deep learning industry by establishing matrix math as the new measure of computing.
The researchers who wrote the AI100 report take stock of the field's research directions. Many researchers fear that deep learning has not come close to the goal of understanding or achieving human-like intelligence, and does not appear likely to any time soon.
Critics of deep learning, such as NYU psychologist Gary Marcus, have held entire seminars exploring ways to merge deep learning with other approaches, such as symbolic reasoning, to find a way past what appears to be the limited nature of deep learning's single-track approach.
Also: AI in sixty seconds
The criticism is elegantly summed up by one of the report's study panel members, Melanie Mitchell of the Santa Fe Institute and Portland State University. In an article this year titled "Why AI Is Harder Than We Think," Mitchell wrote that deep learning faces serious limitations despite optimistic adoption of the approach. As evidence, Mitchell cites the fact that much-touted goals, such as the long-heralded age of self-driving cars, have not materialized.
As Mitchell astutely argues, the field barely knows how to talk about intelligence, let alone replicate it:
It is clear that in order to more effectively make and assess the progress of AI, we will need to develop a better vocabulary to talk about what machines can do. And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems of nature. This will force AI researchers to engage more deeply in other scientific disciplines that study intelligence.
All of this is arguably true, and yet the computer industry loves incrementalism. Sixty years of integrated circuits doubling in speed again and again have made the computer world addicted to things that can be reliably repeated. Deep learning, built on a sea of matrix multiplications, is once again a sublime target: a terribly simple task to perform faster and faster.
As long as computer companies can keep improving matrix acceleration, the deep learning industry, as the mainstream of computing, will have staying power to be reckoned with.
(If you're interested in learning more about the AI100 report, Stanford is hosting a virtual discussion today from 9 to 10 a.m. PT, which you can access via the event's web page.)