These days, it seems like artificial intelligence (AI) is taking over the world. In the last two years alone, we’ve seen the rise of ChatGPT, autonomous vehicles, Google Gemini, deepfakes, and my least favorite – soulless AI art. With so many new applications coming into focus, it can feel like AI is up, running, and ready to take your job tomorrow.
Many of us feel apprehensive about the growth of AI. To be honest, part of the reason I wanted to write this blog was to determine whether my own fears were justified. As a writer, my profession is often listed as one of the first to go once AI gets a bit smarter, and I certainly hope that’s not the case.
On the contrary, each time I hear from someone who helps develop AI, it seems like new hurdles with its deployment arise. Ultimately, my intention for this blog is to help bridge the gap between the general public’s perception of AI and what’s happening behind the scenes. Despite its seemingly rapid rise, I do believe that AI’s growth is plateauing – at least for now.
The AI Hype Cycle
Let’s start the conversation with Gartner’s AI Hype Cycle. I think we all agree that AI is here to stay, but where are we really in terms of growth? According to this research, many organizations aren’t getting the full value they anticipated from AI. This is especially true for generative AI, with businesses citing user acceptance and data governance as two major adoption hurdles. The report then makes a case for composite AI, describing it as “the next phase in AI evolution.” Composite AI is defined by Gartner as the combination of machine learning, natural language processing, and knowledge graphs to create AI that’s more adaptable and scalable.
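To make the composite AI idea a bit more concrete, here's a minimal Python sketch of the pattern Gartner describes: a lightweight NLP step, a stand-in for a machine learning classifier, and a small knowledge graph working together to answer a question. Every name and fact in it is a hypothetical example of mine, not a real product or Gartner API.

```python
# Minimal, illustrative "composite AI" pipeline: simple NLP, a stand-in for
# an ML classifier, and a knowledge graph combined to answer questions.
# All names and facts here are hypothetical examples.

# Knowledge graph: explicit facts stored as (subject, relation) -> object.
KNOWLEDGE_GRAPH = {
    ("Willow", "developed_by"): "Google",
    ("Willow", "is_a"): "quantum computing chip",
    ("CXL", "is_a"): "interconnect standard",
}

def extract_entity(question: str) -> str | None:
    """Toy NLP step: find which known entity the question mentions."""
    for subject, _relation in KNOWLEDGE_GRAPH:
        if subject.lower() in question.lower():
            return subject
    return None

def classify_intent(question: str) -> str:
    """Stand-in for an ML model that maps a question to a relation."""
    return "developed_by" if "who" in question.lower() else "is_a"

def answer(question: str) -> str:
    """Compose the pieces: NLP -> intent classification -> graph lookup."""
    entity = extract_entity(question)
    if entity is None:
        return "I don't know."
    fact = KNOWLEDGE_GRAPH.get((entity, classify_intent(question)))
    return fact or "I don't know."

print(answer("Who made Willow?"))  # -> Google
print(answer("What is CXL?"))      # -> interconnect standard
```

The toy code isn't the point; the pattern is. Grounding statistical components in explicit, structured facts is part of why Gartner pitches composite AI as more adaptable and scalable than any single technique on its own.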
This raises the questions: how can we be ready to move to the next phase of AI if we’re struggling to see value from the technology we already have? And what will it take for AI to be competent enough to be worthwhile?
Two days before I began writing this, Google announced its new quantum computing chip. The big advantage the chip appears to offer is a significant reduction in errors within qubit-based systems – a roadblock that has historically held quantum computing back. If it comes to fruition, it could help mitigate AI’s data size limitations and boost its problem-solving capabilities.
When you search “Google Willow quantum chip,” you’re met with article headlines that use words like “breakthrough” and “milestone.” But if you scroll to the “What’s next with Willow and beyond” section of Google’s blog announcement, you’ll see that the company admitted that its chip has yet to yield “a relevant real-world application.”
That’s not meant to minimize the work Google has done. Its Willow chip and other advancements do show signs of progress, but I think some headlines are a bit misleading. Sensational headlines, along with our relatively new access to early versions of generative AI, play a role in influencing our perception.
Paradoxically, AI itself can exacerbate this. ChatGPT, for instance, is largely trained on pre-existing text from the internet. When headlines use overly bold language, the model picks up those patterns, which can then carry over into the articles it produces. People then read those AI-generated articles, sometimes without realizing it, form their opinions, and overestimate the technology’s competence.
What’s Holding AI Back?
So if ChatGPT is already smart enough to write a passable essay in 10 seconds, and cars can drive themselves, what’s preventing AI from advancing even further? Essentially, AI technology isn’t reaching its potential because of its current inability to scale.
Marco Chisari, head of the Samsung Semiconductor Innovation Center (SSIC), touched on some of AI’s scalability challenges at SEMICON West 2024. During his presentation, he cited power consumption, bandwidth, and capacity as three of AI’s biggest hurdles. To break it down further:
There isn’t enough electricity to support demand for generative AI
Put simply, today’s grids are not equipped to handle AI’s power needs. With some AI data centers drawing as much electricity as a small city, reinventing grids across the world will be critical for enabling the next generation of AI. To illustrate the scale, a 2023 study by the U.S. Department of Energy stated that the U.S. power grid will need to grow by 57% to accommodate today’s electrification and energy requirements, which is no small feat.
Right now, it’s unclear when the grid will be capable of meeting AI’s exponential growth. As it stands, I see this as the biggest issue facing AI scalability.
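For a rough sense of that scale, here's a back-of-envelope calculation. The cluster size, per-accelerator power, overhead factor, and household figure below are illustrative assumptions on my part, not reported numbers from any vendor or utility.

```python
# Back-of-envelope estimate of an AI training cluster's power draw.
# All inputs are illustrative assumptions, not measured figures.

num_accelerators = 20_000      # hypothetical large training cluster
watts_per_accelerator = 700    # roughly the TDP class of a high-end AI GPU
overhead_factor = 1.3          # cooling, networking, power delivery overhead

cluster_mw = num_accelerators * watts_per_accelerator * overhead_factor / 1e6

avg_household_kw = 1.2         # approx. average continuous draw of a U.S. home
homes_equivalent = cluster_mw * 1000 / avg_household_kw

print(f"Cluster draw: ~{cluster_mw:.0f} MW")
print(f"Roughly the continuous demand of ~{homes_equivalent:,.0f} homes")
```

Even with these fairly conservative inputs, a single cluster lands around 18 MW, comparable to some fifteen thousand homes drawing power at once, which is why grid capacity keeps coming up as the gating factor.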
We need better interconnect solutions
Most of today’s interconnect solutions rely on passing data through the CPU. While this worked fine in the past, latency and bandwidth constraints are now slowing the progress of AI. To help address this, Semiconductor Engineering highlighted interconnect technologies like Compute Express Link (CXL) and PCI Express (PCIe) as part of the solution. In addition, 3D integration is enabling faster performance, reduced power consumption, and better signal integrity than more traditional system-in-package (SiP) approaches.
Although great work is being done in this area, we need to remember that adopting new approaches and changing industry standards won’t happen in a day. Much progress has been made, but we’re still in the early stages of these next-generation developments.
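To see why bandwidth and latency matter so much, here's a quick illustrative calculation of how long it takes just to move a large model's weights over different links. The model size and memory bandwidth are rough assumptions; the PCIe figure is approximately the peak of a Gen 5 x16 link.

```python
# Rough illustration of why interconnect bandwidth matters for large AI models.
# Model size and bandwidth figures are approximate and for illustration only.

model_size_gb = 800        # hypothetical weights of a very large model
pcie5_x16_gb_s = 63        # approximate peak bandwidth of a PCIe 5.0 x16 link
hbm_gb_s = 3000            # rough on-package HBM bandwidth of a modern AI GPU

def transfer_seconds(size_gb: float, bandwidth_gb_s: float) -> float:
    """Time to move size_gb of data at a sustained bandwidth_gb_s."""
    return size_gb / bandwidth_gb_s

print(f"Over PCIe 5.0 x16:  {transfer_seconds(model_size_gb, pcie5_x16_gb_s):.1f} s")
print(f"From on-package HBM: {transfer_seconds(model_size_gb, hbm_gb_s):.2f} s")
```

Every time data has to take the slower, host-mediated path, delays on this order pile up, and that's the gap technologies like CXL, newer PCIe generations, and 3D integration are trying to close.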
There aren’t enough leading-edge manufacturers
TSMC, Samsung, and Intel are the three biggest manufacturers of leading-edge AI chips. With just three companies carrying the manufacturing burden, they struggle to accommodate demand for AI chips on their own. More manufacturers are needed, but entering this arena poses significant challenges. Not only does producing leading-edge chips come with substantial upfront investment costs, but it also requires a highly specialized workforce that can be difficult to attract and train.
Final Thoughts
Although I believe that AI will be incredibly transformational, none of these hurdles has an immediate solution. Granted, CHIPS Act funding, investments in R&D, and extremely talented people are coming together to address them, but these roadblocks won’t be resolved overnight. And even when AI does get closer to our envisioned future, I think it’s still important to keep it in perspective.
Personally, AI reminds me of the internet 30 years ago. The rise of the internet brought unlimited access to information, the ability to buy things without leaving home, and new fields of study like web development, social media management, cybersecurity, and more. In many ways, the internet made our lives better, but it’s equally easy to list ways it made life worse.
Do I think AI will have as much of a disruptive impact as the internet? Eventually, but more importantly, I don’t think the rise of AI will be as dark as science fiction would have you believe. Just like the internet, AI will come with new opportunities alongside new challenges. I think it’s impossible to predict what the future of AI has in store for us, but at the end of the day, we’ll have no choice but to embrace it.