AI has caused a renaissance of tech industry R&D, says Meta’s chief AI scientist
The strength of the deep learning era of artificial intelligence has led to something of a renaissance in corporate R&D in information technology, according to Yann LeCun, chief AI scientist for Meta.
“The type of techniques that we’ve been working on have had a much bigger commercial impact, much more wide-ranging,” than was the case in prior eras of artificial intelligence, said LeCun during a small meeting of press and executives via Zoom this month.
Also: ChatGPT is ‘not particularly innovative,’ and ‘nothing revolutionary’, says Meta’s chief AI scientist
“And the result of this is it has attracted a lot of research funding and in fact, caused a renewal of industry research.”
As recently as twenty years ago, said LeCun, Microsoft Research was the only industry entity that “had any kind of stature in information technology.” But then, said LeCun, the 2010s saw “Google Research really coming to the fore, and FAIR [Facebook AI Research], which I created, and a couple of other labs starting up, and basically reviving the idea that industry could do fundamental research.”
That resurgence of corporate R&D is happening, said LeCun, “because the prospect of what may happen in the future, and what happens in the present, thanks to those technologies, is great.”
The value of applied AI, said LeCun, is leading to a dual-track system, in which corporate R&D maintains longer-range, moonshot projects on one track, while another track funnels research into practical product applications.
“It makes complete sense for a company like Meta to have, simultaneously, a large research lab that has ambitious long-term goals like building intelligent virtual assistants that have human-level intelligence, because that’s what we want, ultimately; but at the same time, the technology that has been developed is already useful.
“For example, content moderation and speech detection in multiple languages has been completely revolutionized over the last two or three years by large Transformers pre-trained in a self-supervised manner,” said LeCun, referring to the Transformer natural language processing architecture introduced by Google in 2017, which has become the basis for numerous programs such as OpenAI’s ChatGPT.
“It’s made enormous progress, unbelievable progress, and it’s due to the latest in AI research,” said LeCun.
LeCun was an invited speaker for an hour-and-a-half talk hosted by the Collective[i] Forecast, an online, interactive discussion series that is organized by Collective[i], which bills itself as “an AI platform designed to optimize B2B sales.”
LeCun was replying to a question by ZDNET about what effect the unprecedented interest in AI by industry and commerce is having on the basic science of AI.
Also: Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence
LeCun described himself as “optimistic” about the potential for applied AI to be used for good in society. Even where AI fails to achieve some goals, it produces effects that can be beneficial, he indicated.
LeCun offered the example of autonomous vehicle systems that, while failing to be truly autonomous, have had the dividend of providing road safety features that save lives.
“Every car that comes out in Europe now has to come with automatic emergency braking, AEB,” observed LeCun. “It’s not required in the US, but many cars have it.”
AEB systems, he noted, are “the same systems that also allow the car to drive itself on the highway, right?” The braking mechanism reduces collisions by 40%, he noted. “So, despite everything you hear about, you know, the Tesla that ran into a truck or whatever, those things absolutely save lives, to the point that they are required.”
LeCun also volunteered that “one of the things I find pretty promising about AI is the use of AI in science and medicine at the moment” to better people’s lives.
“There are a lot of experimental systems, a few hundred of which have gotten FDA approval, that improve reliability of diagnosis from MRI and X-rays and various other things for a number of different ailments,” said LeCun. “This is going to have a huge impact on health.”
Those breakthroughs, while positive, are small, he said, compared to “the big thing,” namely, “the way AI is used for science going forward.
Also: Meta’s AI luminary LeCun explores deep learning’s energy frontier
“We have systems that can fold proteins, we have systems now that would be able to design proteins to stick to a particular site, which means we can design drugs in a completely different way than we’ve done in the past,” said LeCun.
AI also has “enormous potential for progress in materials science,” said LeCun. “And we’re going to need this because we need to solve climate change, so, we need to be able to have high-capacity batteries that don’t cost a fortune, and don’t require you to use exotic materials that we can only find in one place.”
LeCun cited one such materials project, Open Catalyst, founded by colleagues at FAIR, which works with Carnegie Mellon University to apply AI to develop “new catalysts for use in renewable energy storage to help in addressing climate change.”
“The idea there is, if we could cover a small desert with photovoltaic panels and then store the energy that is produced by those panels, for example, in the form of hydrogen or methane,” explained LeCun. The current approaches to storing energy as hydrogen or methane, he said, are “either scalable, or efficient, but not both.”
“Could we discover, perhaps using the help of AI, a new catalyst that would make that process more efficient or scalable by not requiring some exotic new material? It may not work, but it’s worth a try.”
Also: Microsoft CEO Nadella: ‘Expect us to incorporate AI in every layer of the stack’
Despite those many promising commercial applications, LeCun suggested that the narrowness of industrial uses falls short of AI’s grander objective, the quest for animal- or human-level intelligence.
The enormous research advances underlying today’s applications, things such as Transformers, were made possible in the deep learning era by unprecedented availability of data and computing, said LeCun, while fundamental scientific advances haven’t always been as plentiful or as rich.
“What has caused the more recent wave is, first, a few conceptual advances — but, frankly, not a huge amount, and not that impressive — but, really, the amount of data that’s available and the amount of computation that made it possible to scale those systems up.”
Large language models such as GPT-3, the computer program on which ChatGPT is based, are evidence that scaling AI, meaning adding more layers of tunable parameters, directly improves the performance of programs. “It turns out they work really well when you scale them up,” he said of GPT-3 and its ilk.
The industry may find diminishing returns at some point, said LeCun, by relying on scaling alone without exploring other avenues.
Also: Machines that think like humans: Everything to know about AGI and AI Debate 3
“A lot of companies such as OpenAI, in particular, have used this as a mantra, just make things bigger, and it will just work better,” he said. “But I think we are reaching the limits of that right now.”
Despite scaling ever-larger models, said LeCun, “We don’t seem to be able to train a completely autonomous self-driving [automobile] system by just, you know, training bigger neural nets on more data; that doesn’t seem to get there.”
As impressive as they are, programs such as ChatGPT, which LeCun has called “not particularly innovative,” and “nothing revolutionary,” lack any capacity for planning, he said.
“They are completely reactive,” said LeCun. “You give them a context of a few thousand words,” meaning, the human-typed prompt, “And then from that, the system just generates the next token, completely reactively.”
“There’s no planning ahead or decomposition of a complex task into simpler ones, it’s just reactive,” said LeCun.
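To make the “completely reactive” point concrete, here is a minimal sketch, not anything from LeCun or Meta, of the autoregressive loop he is describing. It assumes the Hugging Face transformers and torch packages and uses the small public GPT-2 checkpoint purely as a stand-in for larger models: the model scores the next token from the context alone and appends it, with no planning step anywhere in the loop.

```python
# Minimal sketch of purely reactive, next-token generation (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Content moderation in multiple languages has been"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                      # generate 20 tokens, one at a time
    logits = model(input_ids).logits     # score every candidate next token
    next_id = logits[0, -1].argmax()     # greedily pick the most likely one
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop looks only at the tokens so far and emits one more; nothing in the procedure represents a goal, a plan, or a model of consequences, which is the limitation LeCun is pointing to.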
LeCun offered the example of Copilot, the programming assistant built on OpenAI’s language models, which Microsoft has integrated into its GitHub code-management platform. “There is a very dire limitation of such systems,” he said. “They are being used as, basically, a predictive keyboard on steroids.”
Also: We will see a completely new type of computer, says AI pioneer Geoff Hinton
“You start writing your program, and make some description of what it should do in the comments, and you have tools based on large language models that will complete the program,” he explained.
Such auto-complete, he said, is like the cruise control in cars that assists with driving on the highway. “Your hands need to remain on the wheel at all times,” because Copilot can generate errors in code with no awareness of the error.
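To illustrate the “predictive keyboard on steroids” workflow LeCun describes, here is a hypothetical, hand-written example, not actual Copilot output, of a comment-driven completion that looks plausible but hides the kind of subtle error that keeps hands-on-the-wheel review necessary.

```python
# Hypothetical illustration of comment-driven code completion (not real tool output).

# Return the average of a list of numbers.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # looks fine, but crashes on an empty list
```

The completion reads correctly and will often run, yet it fails on an edge case it has no awareness of, which is exactly the gap between generating likely text and reasoning about a program’s behavior.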
 “The question is, how do we get from systems that generate code that sometimes runs but sometimes doesn’t,” said LeCun. “And the answer to this is all of those systems today are not capable of planning; they are completely reactive.”
“And this is not what you need for intelligent behavior.”
Rather, said LeCun, “If you want intelligent behavior, you need a system that is capable of anticipating the effect of its own actions,” as well as having “some sort of internal world model, a mental model of how the world is going to change as a consequence of its own actions.”
LeCun outlined the need for programs with a planning ability in a think piece last summer, something he discussed with ZDNET at length in September.
The resurgence of corporate information technology R&D has not yet led to the most prized outcome of technology, productivity, said LeCun, but that may come in the next decade.
Also: AI’s true goal may no longer be intelligence
Citing the work of researcher Erik Brynjolfsson of Stanford University’s Human-Centered Artificial Intelligence group, LeCun noted that economists consider AI a “general-purpose technology,” meaning something that “will slowly disseminate in all corners of the economy and industry and basically affect all economic activity” by various effects such as creating new jobs, displacing other jobs, etc., “and lead to increased productivity because it fosters innovation.” In other words, innovation that builds on innovation is the economic equivalent of productivity.
“What Erik, in particular, has been saying is that at least until very recently, we have not observed an increase in productivity due to AI, and, historically, he says it takes about 15, 20 years to see a measurable effect on productivity of a technological revolution.
“So, according to his prediction, that’s probably going to happen over the next ten years.”
The resurgence of corporate basic R&D in information technology may have some staying power given its appeal to young scholars, indicated LeCun.
“I think one phenomenon that we’ve been observing is that young, talented people now aspire to become AI researchers because that’s the cool thing to do, whereas before, the same people would have gone to finance,” said LeCun. “It’s better for them to go to science, I think.”