5 AI Predictions For The Year 2030

At the end of 2023, we published a list of 10 predictions for AI in 2024.

Making predictions for the year ahead is difficult enough. What if we tried to predict the future not one year out, but half a decade out?

The further into the future we attempt to peer, the hazier things look and the more speculative our thinking must become. If one thing is certain in technology, it is that no one can actually predict the future—and that we are all going to be surprised by how things play out.

But putting a stake in the ground about how things will unfold is nonetheless an informative and fun thought experiment.

Below are five bold predictions about what the world of artificial intelligence will look like in the year 2030. Whether you agree or disagree with these predictions, we hope they get you thinking.

1. Nvidia’s market capitalization will be meaningfully lower than it is today. Intel’s will be meaningfully higher than it is today.

Nvidia is the hottest company in the world right now. It has been the biggest beneficiary of today’s generative AI boom, with its market cap skyrocketing from under $300 billion in late 2022 to over $2 trillion today.

But Nvidia’s position as the single dominant provider of chips for AI cannot and will not last.

What Nvidia has built is difficult, but not impossible, to replicate. A resurgent AMD is emerging as a credible alternative provider of advanced GPUs, with its cutting-edge new MI300 chip about to become widely available. The big tech companies—Amazon, Microsoft, Alphabet, Meta—are all investing heavily to develop their own AI chips in order to lessen their dependence on Nvidia. OpenAI’s Sam Altman is seeking trillions of dollars in capital to build a new chip venture that would diversify the world’s supply of AI hardware.

As demand for AI chips continues to grow in the years ahead, relentless market forces will ensure that more competitors enter, supply increases, prices drop, margins tighten and Nvidia’s market share falls.

In addition, as the market matures in the years ahead, the primary type of AI computing workload will shift from training to inference: that is, from building AI models to deploying those models in real-world settings. Nvidia’s highly specialized chips are unrivaled when it comes to training models. But inference can be done with cheaper and more commoditized chips, which may undermine Nvidia’s advantage in the market and create an opening for competitors.
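To make the training-versus-inference distinction concrete, here is a minimal sketch in PyTorch, using a toy model and random data purely for illustration; the mapping of workloads to particular chips is an assumption about typical deployments, not a claim about any specific product.

```python
# Illustrative sketch: how a training workload differs from an inference workload.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
data = torch.randn(256, 64)
targets = torch.randn(256, 1)

# Training: forward pass, backward pass, and weight updates on every step.
# Storing activations and gradients makes this compute- and memory-hungry,
# which is where Nvidia's high-end GPUs excel.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(data), targets)
    loss.backward()   # gradient computation roughly doubles the work
    optimizer.step()

# Inference: a single forward pass with no gradients tracked.
# This lighter workload is easier to serve on cheaper, more commoditized chips.
model.eval()
with torch.no_grad():
    predictions = model(data[:8])
```

The training loop must hold activations and gradients and update weights at every step, while the inference path is a forward pass only, which is why deployment workloads can often run on less specialized hardware.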

None of this is to say that Nvidia will not still be an important part of the AI ecosystem in 2030. But the current stratospheric runup in its stock price—which has made it the third most valuable company in the world as of this writing, larger than Amazon or Alphabet—will in retrospect look like irrational exuberance.

Meanwhile: what is the one thing that sets Intel apart from virtually every other chip company in the world?

It manufactures its own chips.

Nvidia, AMD, Qualcomm, Broadcom, Alphabet, Microsoft, Amazon, Tesla, Cerebras, SambaNova, Groq: none of these companies build their own chips. Instead, they design chips and then they rely on other companies—most importantly, the Taiwan Semiconductor Manufacturing Company (TSMC)—to produce those chips for them.

Intel alone owns and operates its own chip fabrication facilities.

The ability to manufacture chips has become a vital geopolitical asset. Case in point: China’s utter dependence on foreign semiconductor suppliers has enabled the U.S. to handicap China’s domestic AI industry by banning the import of AI chips to China.

U.S. policymakers are acutely aware of the vulnerabilities posed by the extreme concentration of chip manufacturing in Taiwan today, especially as China adopts an increasingly hawkish stance toward the island. Promoting advanced semiconductor manufacturing on U.S. soil has become a top policy priority for the U.S. government. U.S. lawmakers are taking decisive action to advance this goal, including committing a whopping $280 billion to the effort under the 2022 CHIPS Act.

It is no secret that Intel has fallen behind TSMC over the past decade in its ability to manufacture cutting-edge chips. Yet it remains one of the few companies in the world capable of fabricating advanced semiconductors. Under CEO Pat Gelsinger, who took the helm in 2021, Intel has reprioritized chip fabrication and undertaken an ambitious strategy to reclaim its former position as the world’s preeminent chip manufacturer. There are recent indications that the company is making progress toward that goal.

And perhaps most importantly: there is simply no other option to serve as America’s homegrown chip manufacturing leader.

U.S. Commerce Secretary Gina Raimondo, who leads the Biden administration’s efforts on AI and chips, acknowledged this directly in a recent speech: “Intel is the country’s champion chip company.”

Put simply, America needs Intel. And that bodes well for Intel’s commercial prospects.

Nvidia’s market cap today is $2.2 trillion. Intel’s, at $186 billion, is more than an order of magnitude smaller. We predict that this gap will have shrunk significantly by 2030.

2. We will interact with a wide range of AIs in our daily lives as naturally as we interact with other humans today.

Even though the entire world is buzzing about artificial intelligence right now, the number of touchpoints that the average person actually has with cutting-edge AI systems today is limited: the occasional query to ChatGPT or Google Bard/Gemini, perhaps.

By the year 2030, this will have changed in dramatic fashion.

We will use AIs as our personal assistants, our tutors, our career counselors, our therapists, our accountants, our lawyers.

They will be ubiquitous in our work lives: conducting analyses, writing code, building products, selling products, supporting customers, coordinating across teams and organizations, making strategic decisions.

And yes—by 2030, it will be commonplace for humans to have AIs as significant others.

As with any new technology, there will be an adoption curve. Some portions of the population will more readily adjust to interacting with their new AI peers; others will resist for longer. The proliferation of AIs throughout our society will unfold like the famous Ernest Hemingway line about how people go bankrupt: “Gradually, then suddenly.”

But make no mistake: this transition is inevitable. It is inevitable because AIs will be able to do so much of what humans do today, except cheaper, faster and more reliably.

3. Over one hundred thousand humanoid robots will be deployed in the real world.

Today’s AI boom has unfolded almost entirely in the digital realm.

Generative models that can converse knowledgeably on any topic, or produce high-quality videos on demand, or write complex code represent important advances in artificial intelligence. But these advances all occur in the world of software, the world of bits.

There is a whole other domain that is waiting to be transformed by today’s cutting-edge AI: the physical world, the world of atoms.

The field of robotics has been around for decades, of course. There are millions of robots in operation around the world today that automate different types of physical activity.

But today’s robots have narrowly defined capabilities and limited intelligence. They are typically purpose-built for a particular task—say, moving boxes around a warehouse, or completing a specific step in a manufacturing process, or vacuuming a floor. They possess nowhere near the fluid adaptability and generalized understanding of large language models like ChatGPT.

This is going to change in the years ahead. Generative AI is going to conquer the world of atoms—and it will make everything that has happened to date in AI seem modest by comparison.

Dating back to the dawn of digital computing, a recurring theme in technology has been to make hardware platforms as general as possible and to preserve as much flexibility as possible for the software layer.

This principle was championed by Alan Turing himself, the intellectual godfather of computers and artificial intelligence, who immortalized it in his concept of a “Turing machine”: a machine capable of executing any possible algorithm.

The early evolution of the digital computer validated Turing’s foundational insight. In the 1940s, different physical computers were built for different tasks: one to calculate the trajectories of missiles, say, and another to decipher enemy messages. But by the 1950s, general-purpose, fully programmable computers had emerged as the dominant computing architecture. Their versatility and adaptability across use cases proved a decisive advantage: they could be continuously updated and used for any new application simply by writing new software.

In more recent history, consider how many different physical devices were collapsed into a single product, the iPhone, thanks to the genius of Steve Jobs and others: phone, camera, video recorder, tape recorder, MP3 player, GPS navigator, e-book reader, gaming device, flashlight, compass.

(An analogous pattern can even be traced out in the recent trajectory of AI models, though in this example everything is software. Narrow, function-specific models—one model for language translation, another for sentiment analysis, and so on—have over the past few years given way to general-purpose “foundation models” capable of carrying out the full range of downstream tasks.)

We will see this same shift play out in robotics over the coming years: away from specialized machines with narrowly defined use cases and toward a more general-purpose, flexible, adaptable, universal hardware platform.

What will this general-purpose hardware platform look like? What form factor will it need to have in order to flexibly act in a wide range of different physical settings?

The answer is clear: it will need to look like a human.

Our entire civilization has been designed and built by humans, for humans. Our physical infrastructure, our tools, our products, the size of our buildings, the size of our rooms, the size of our doors: all are optimized for human bodies. If we want to develop a generalist robot capable of operating in factories, and in warehouses, and in hospitals, and in stores, and in schools, and in hotels, and in our homes—that robot will need to be shaped like us. No other form factor would work nearly as well.

This is why the opportunity for humanoid robots is so vast. Bringing cutting-edge AI into the real world is the next great frontier for artificial intelligence.

Large language models will automate vast swaths of cognitive work in the years ahead. In parallel, humanoid robots will automate vast swaths of physical work.

And these robots are no longer a distant science fiction dream. Though most people don’t yet realize it, humanoids are on the verge of being deployed in the real world.

Tesla is investing heavily to develop a humanoid robot, named Optimus. The company aims to begin shipping the robots to customers in 2025.

Tesla CEO Elon Musk has stated in no uncertain terms how important he expects this technology to be for the company and the world: “I am surprised that people do not realize the magnitude of the Optimus robot program. The importance of Optimus will become apparent in the coming years. Those who are insightful or looking, listening carefully, will understand that Optimus will ultimately be worth more than Tesla’s car business, worth more than [full self-driving].”

A handful of younger startups are likewise making rapid progress here.

Just last week, Bay Area-based Figure announced a $675 million funding round from investors including Nvidia, Microsoft, OpenAI and Jeff Bezos. A couple months ago, the company released an impressive video of its humanoid robot making a cup of coffee.

Another leading humanoid startup, 1X Technologies, announced a $100 million financing in January. 1X already offers one version of its humanoid robot (with wheels) for sale, and plans to release its next generation (with two legs) soon.

Over the next few years, these companies will ramp from small-scale customer pilots to mass production. By the decade’s end, expect to see hundreds of thousands (if not millions) of humanoid robots deployed in real-world settings.

4. “Agents” and “AGI” will be outdated terms that are no longer widely used.

Two of the hottest topics in AI today are agents and artificial general intelligence (AGI).

Agents are AI systems that can complete loosely defined tasks: say, planning and booking your upcoming trip. AGI refers to an artificial intelligence system that meets or exceeds human capabilities on every dimension.

When people envision the state of AI in 2030, agents and/or AGI are often front and center.

Yet we predict that these two terms won’t even be widely used by 2030. Why? Because they will cease to be relevant as independent concepts.

Let’s start with “agents”.

By 2030, agentic behavior will have become a fundamental, essential element of any advanced AI system.

What we today refer to using the umbrella term “agents” is actually just a core set of capabilities that any truly intelligent entity possesses: the ability to think long-term, plan, and take action in pursuit of open-ended goals. Becoming “agentic” is the natural and inevitable end state for today’s artificial intelligence. Cutting-edge AI systems in 2030 will not just generate output when prompted; they will get stuff done.

In other words, “agents” will no longer be one intriguing subfield within AI research, as they are today. AI will be agents, and agents will be AI. There will thus be no use for the term “agent” as a standalone concept.
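To illustrate what “agentic” behavior means in practice, here is a toy, self-contained sketch of an agent loop: decompose a goal into steps, execute each step with a tool, and collect observations. The planner and tool functions (search_flights, book_hotel) are hypothetical stand-ins invented for this example; in a real system a model would generate the plan and call real services.

```python
# Toy sketch of an agentic loop: plan -> act -> observe, repeated until the goal is done.

def search_flights(query: str) -> str:          # hypothetical tool
    return f"found flights for: {query}"

def book_hotel(query: str) -> str:              # hypothetical tool
    return f"booked hotel for: {query}"

TOOLS = {"search_flights": search_flights, "book_hotel": book_hotel}

def plan(goal: str) -> list[tuple[str, str]]:
    # In a real system, a model would produce this plan; here it is hard-coded.
    return [("search_flights", goal), ("book_hotel", goal)]

def run_agent(goal: str) -> list[str]:
    observations = []
    for tool_name, tool_input in plan(goal):    # act on each planned step
        observations.append(TOOLS[tool_name](tool_input))
    return observations

print(run_agent("weekend trip to Lisbon"))
```

The point of the sketch is structural: once planning and tool use are built into every advanced system, “agent” stops being a separate category and simply describes how AI works.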

What about the term “AGI”?

Artificial intelligence is fundamentally unlike human intelligence, a basic truth that people often fail to grasp.

AI will become mind-bogglingly more powerful in the years ahead. But we will stop conceptualizing its trajectory as heading toward some “generalized” end state, especially one whose contours are defined by human capabilities.

AI great Yann LeCun summed it up well: “There is no such thing as AGI….Even humans are specialized.”

Using human intelligence as the ultimate anchor and yardstick for the development of artificial intelligence fails to recognize the full range of powerful, profound, unexpected, societally beneficial, utterly non-human abilities that machine intelligence might be capable of.

By 2030, AI will be unfathomably more powerful than humans in ways that will transform our world. It will also continue to lag human capabilities in other ways. If an artificial intelligence can, say, understand and explain every detail of human biology down to the atomic level, who cares if it is “general” in the sense of matching human capabilities across the board?

The concept of artificial general intelligence is not particularly coherent. As AI races forward in the years ahead, the term will become increasingly unhelpful and irrelevant.

5. AI-driven job loss will be one of the most widely discussed political and social issues.

Concerns about technology-driven job loss are a familiar theme in modern society, dating back to the Industrial Revolution and the Luddites. The AI era is no exception.

But to this point, discussions about the impact of AI on job markets have been mostly theoretical and long-term-oriented, confined to academic research and think tank whitepapers.

This is going to change much more abruptly than most people appreciate. Before the decade is out, AI-driven job loss will be a concrete and pressing reality in everyday citizens’ lives.

We are already beginning to see canaries in the coalmine here. Last month, fintech giant Klarna announced that its new customer service AI system is handling the work of 700 full-time human agents. Plagiarism detection company Turnitin recently projected that it would reduce its workforce by 20% over the next 18 months thanks to advances in AI.

In the years ahead, organizations will find that they can boost profitability and productivity by using AI to complete more and more work that previously required humans. This will happen across industries and pay grades: from customer service agents to accountants, from data scientists to cashiers, from lawyers to security guards, from court reporters to pathologists, from taxi drivers to management consultants, from journalists to musicians.

This is not a distant possibility. The technology is in many cases already good enough today.

If we are honest with ourselves, a major reason why we are all so excited about AI in the first place—a major reason why AI offers such transformative economic opportunity—is that it will be able to do things more cheaply, more quickly and more accurately than humans can do them today. Once AI can deliver on this promise, there will be less need and less economic justification to employ as many humans as today in most fields. Almost by definition, in order for AI to have an impact on society and the economy, it will take people’s jobs. Of course, new jobs will also be created—but not as quickly and not as many, at least at first.

This job loss will bring with it tremendous near-term pain and dislocation. Political movements and leaders will arise in fierce opposition to this trend. Other segments of society will just as vocally champion the benefits of technology and AI. Civil unrest and protests will be inevitable; they will no doubt turn violent at times.

Citizens will clamor for their elected officials to take action, in one direction or another. Creative policy proposals like universal basic income will go from fringe theories to adopted legislation.

There will be no easy solutions or clear-cut ethical choices. Political affiliations and social identities will increasingly be determined by one’s opinions on how society should navigate the spread of AI throughout the economy.

If you think the political moment in 2024 is tumultuous: buckle up.
