Moore’s Law Slowing Down? No Problem, Just Add AI
A lot has been said about Moore’s Law over the last decade, most of it declaring the law either dead or at the very least slowing down. Leading companies such as Intel and TSMC are clearly still achieving breakthroughs in smaller process node geometries, but it is just as clear that the industry can no longer rely solely on Moore’s Law for its performance gains. Some companies are chasing more performance with heterogeneous compute architectures, some with chiplets, and still others with both. At Mobile World Congress this year, alongside its established heterogeneous compute approach, Qualcomm highlighted another tool that can be used: Artificial Intelligence (AI).
There is a commonly used aphorism that goes something like, “if you have something great and want to make it better, just add bacon.” Whether one agrees with this or not, or simply replaces “bacon” with something else, its sentiment holds true: there are certain things that, when added, just make things better. For the technology industry, and more specifically the chip industry, AI is shaping up to be just that.
This is not just about generative AI, which has only matured enough in the last couple of years to make a significant impact. In contrast, adding traditional, machine learning-based AI to enhance product capabilities has been a differentiation strategy for the past decade or so. At MWC this year, Qualcomm took this potential recipe for success and applied it to 5G with the announcement of its latest 5G offering, the Snapdragon X80.
Making AI the star
The Snapdragon X80 is a modem-RF platform consisting of four components: the baseband, the RF transceiver, the RF front end and a mmWave front end module. Supporting 3GPP Release 17, as well as anticipated Release 18 features, the X80 modem enables 6x downlink carrier aggregation, up to six receive channels for smartphones, 10 Gbps peak download speeds, 3.5 Gbps peak upload speeds, support for narrowband non-terrestrial networks and, of course, AI-enabled 5G optimization.
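For readers keeping score, those headline numbers can be summarized in a few lines of code. The sketch below is purely illustrative; the field names are our own shorthand, not identifiers from any Qualcomm SDK.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModemRfPlatform:
    """Headline specs of a modem-RF platform (field names are illustrative)."""
    components: tuple            # the four building blocks of the platform
    dl_carrier_aggregation: int  # aggregated downlink carriers
    rx_channels: int             # receive channels supported on smartphones
    peak_dl_gbps: float          # peak download speed
    peak_ul_gbps: float          # peak upload speed

SNAPDRAGON_X80 = ModemRfPlatform(
    components=("baseband", "RF transceiver", "RF front end",
                "mmWave front end module"),
    dl_carrier_aggregation=6,
    rx_channels=6,
    peak_dl_gbps=10.0,
    peak_ul_gbps=3.5,
)
```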
The baseband comes equipped with Qualcomm’s 2nd generation 5G AI Processor, which allows the platform to use AI to improve quality of service and the end-user experience by intelligently controlling both modem and RF functions. In conjunction with Qualcomm’s 3rd generation 5G AI Suite, AI improves performance metrics for data speeds, power handling and efficiency, coverage, spectrum efficiency, latency and GNSS location. The platform also uses AI processing to assist in mmWave beam management, which is essential for providing 5G mmWave range extension when the X80 is used in fixed wireless access (FWA) customer premises equipment (CPE).
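Qualcomm has not disclosed how the 5G AI Processor performs beam management, but the general idea is easy to illustrate. The hypothetical Python sketch below assumes a learned model that ranks candidate beams from recent signal measurements so that only a handful need to be measured live, rather than sweeping the entire codebook; every name and the scoring model are illustrative assumptions, not Qualcomm’s implementation.

```python
import numpy as np

# Hypothetical illustration of AI-assisted mmWave beam management:
# rather than sweeping every beam in the codebook, a learned model
# ranks candidates from recent measurements so only the most
# promising few are measured live.

N_BEAMS = 64   # size of the beam codebook (assumed)
TOP_K = 4      # beams actually measured per refinement cycle

def predict_beam_scores(recent_rsrp: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Score each beam from a window of recent per-beam RSRP readings.

    recent_rsrp: shape (window, N_BEAMS), past reference-signal power in dBm.
    weights:     shape (window,), a stand-in for a trained temporal model.
    """
    return weights @ recent_rsrp  # weighted history -> one score per beam

def select_beam(recent_rsrp: np.ndarray, weights: np.ndarray, measure) -> int:
    """Measure only the predicted top-k beams and pick the strongest."""
    scores = predict_beam_scores(recent_rsrp, weights)
    candidates = np.argsort(scores)[-TOP_K:]             # most promising beams
    measured = {int(b): measure(b) for b in candidates}  # live RSRP on k beams
    return max(measured, key=measured.get)
```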
These improvements are partly achieved by using AI to manage multi-antenna subsystems more intelligently and efficiently. AI also provides contextual inputs that optimize the radio link by identifying and factoring in the state of the RF environment and what the user is doing in terms of applications or workload. For example, if the user is doing something latency sensitive, such as a video call, the AI might increase transmit power to compensate for any channel quality impairments, prioritizing throughput and latency over the resulting increase in power consumption.
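That trade-off can be pictured as a policy mapping context to link settings. The rule-based Python stand-in below only shows the shape of such a policy; Qualcomm’s actual approach is a trained model, and the labels, threshold and settings here are invented for illustration.

```python
# Hypothetical, rule-based stand-in for a context-aware link policy.
# The real system uses a trained model; everything below is invented
# purely to show the shape of the idea.

LATENCY_SENSITIVE = {"video_call", "cloud_gaming", "xr_streaming"}

def link_policy(app: str, channel_quality_db: float) -> dict:
    """Map the user's workload and channel state to link settings."""
    if app in LATENCY_SENSITIVE and channel_quality_db < 10.0:
        # Poor channel during a latency-sensitive task: spend extra
        # transmit power to protect throughput and latency.
        return {"tx_power": "high", "priority": "latency", "mcs": "robust"}
    if app in LATENCY_SENSITIVE:
        return {"tx_power": "nominal", "priority": "latency", "mcs": "balanced"}
    # Background traffic: favor battery life over speed.
    return {"tx_power": "low", "priority": "power", "mcs": "efficient"}

# e.g. link_policy("video_call", 6.0) -> power spent to hold the call steady
```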
According to Qualcomm, compared with the previous generation, using AI to help optimize 5G performance reduces best-cell selection time by 20%, reduces link acquisition time by up to 30% and improves location accuracy by a similar margin. For mmWave applications, it also delivers up to 60% faster CPE service acquisition and 10% lower power while connected.
Using AI to make AI better
As stated, Qualcomm hopes to dramatically improve the user experience with its latest X80 modem-RF platform. While this is an objective with each new generation, it is needed now more than ever, as advanced use cases and applications, including generative AI, increasingly demand faster processing, higher throughput and lower latency while maintaining or improving power consumption on these devices.
Aside from throughput and latency, user experience is also driven by battery life. Tirias Research recently conducted a study on the latest flagship smartphones running generative AI workloads and determined that current battery technologies will need all the help they can get in the AI Era.
Applying AI to ensure the best possible combination of modulation coding scheme, transmit power and antenna array configuration for a given workload helps in multiple ways. Beyond maximizing uplink and downlink throughput and minimizing latency, which are already major drivers of improved user experience, AI also minimizes the amount of time the transmit chain (the RF transceiver and the power amplifier in the RF front end) is powered up, and ensures that when it is powered up, the transmit power is only as high as it needs to be to achieve the desired performance. Along with the applications processor and the display, the transmit chain is one of the biggest consumers of battery power in a mobile device. By minimizing the time the transmit chain is turned on, whether by reducing the retransmissions needed to overcome high error rates or by maximizing throughput and minimizing latency, the impact on battery life, and therefore the user experience, can be meaningful.
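The battery arithmetic behind this is simple: transmit energy is roughly transmit power multiplied by airtime, so a faster, cleaner link can cost less energy even at a somewhat higher power level. The numbers in the sketch below are invented for illustration, not measured figures.

```python
# Illustrative energy math: transmit energy ~= transmit power x airtime.
# A faster link finishes the same transfer sooner, so even a modestly
# higher transmit power can cost less total battery energy.
# All numbers below are invented for illustration.

payload_mbit = 80.0  # size of the transfer

# Baseline: lower power, slower link, more retransmissions.
base_power_w, base_rate_mbps, base_retx = 1.0, 100.0, 1.20
base_airtime_s = payload_mbit * base_retx / base_rate_mbps
base_energy_j = base_power_w * base_airtime_s  # 0.96 J

# Optimized: slightly higher power, much faster and cleaner link.
opt_power_w, opt_rate_mbps, opt_retx = 1.2, 250.0, 1.02
opt_airtime_s = payload_mbit * opt_retx / opt_rate_mbps
opt_energy_j = opt_power_w * opt_airtime_s  # ~0.39 J

print(f"baseline: {base_energy_j:.2f} J, optimized: {opt_energy_j:.2f} J")
```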
AI-optimized experiences don’t just benefit the end user. When any given transmission can be completed faster and at lower transmit power, mobile network operators benefit as well: the effective noise floor drops, which maximizes capacity, and that in turn helps minimize interference and ensures the best possible downlink and uplink speeds for the given RF environment.
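A back-of-the-envelope signal-to-interference calculation shows why. In the toy example below, shorter airtime and lower transmit power mean neighboring devices contribute less average interference, so per-link capacity can improve even though the desired signal is also weaker; all numbers are invented for illustration.

```python
import math

# Toy SINR/capacity illustration. When transmissions are shorter and
# lower-power, neighboring devices contribute less average interference,
# so a link's capacity can improve even if its own signal is weaker.
# All numbers are invented for illustration.

def capacity_mbps(signal_mw: float, interference_mw: float,
                  noise_mw: float, bandwidth_mhz: float) -> float:
    """Shannon capacity estimate: B * log2(1 + SINR)."""
    sinr = signal_mw / (interference_mw + noise_mw)
    return bandwidth_mhz * math.log2(1.0 + sinr)

# Before: stronger signal, but a cell full of long, loud transmissions.
print(capacity_mbps(signal_mw=2.0, interference_mw=1.0,
                    noise_mw=0.1, bandwidth_mhz=100))  # ~149 Mbps

# After: weaker signal, yet interference drops faster -> ~200 Mbps.
print(capacity_mbps(signal_mw=1.5, interference_mw=0.4,
                    noise_mw=0.1, bandwidth_mhz=100))
```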
What’s next for 5G and AI
Along with its AI-based enhancements, the X80 modem-RF platform also supports anticipated 3GPP Release 18 (Rel 18) features. As such, Qualcomm asserts that this latest 5G offering is “5G Advanced Ready”. Regardless of which features ultimately get standardized in Rel 18, OEMs designing the X80 modem-RF into their device lineups will, at the very least, get what Qualcomm believes are value-added capabilities for the next generation of not just smartphones but other device types as well, such as PCs, XR devices, automotive and FWA CPEs. According to Qualcomm, the platform is currently sampling and is expected to ramp commercially in the second half of this year.
Using AI to improve product performance is not a new idea, nor is this the first time Qualcomm or other companies have done it. What makes this launch noteworthy is that it highlights the critical role traditional AI plays, and will continue to play, in pushing existing technologies and products to a level of performance that makes workloads like generative AI not only possible but usable. It also demonstrates another weapon in a chipmaker’s arsenal for delivering the performance increases the industry needs, with or without Moore’s Law.
Qualcomm is not the only company to use AI in this manner. However, in a world where all of the attention is on generative AI, it is often easy to forget about the importance of traditional, machine learning-based AI product optimizations. Traditional AI used in this manner is a powerful tool for differentiation and every opportunity to wield it as such should be taken.