
This year’s IMAPS Device Packaging Conference (DPC) had much to say about AI. As the industry’s golden child, AI is driving many of the innovations we’re seeing in the advanced packaging space.

In my opinion, the biggest takeaway from the event was how AI is creating a need for larger package sizes and bigger high-bandwidth memory (HBM) stacks. To quickly summarize three days’ worth of content: many of today’s advanced packaging innovations are helping address the challenges arising from this shift.

While it’s true that larger packages are necessary for accommodating the tall HBM stacks that AI requires, that’s not to say that all advanced package structures are exactly the same. As someone who’s newer to this field, I was surprised to learn that there’s no single package structure that’s dominating the AI market. This is something I’ve always wondered, but for some reason, never directly asked. 

Fortunately, Jan Vardaman, president of TechSearch International, laid out the different packages being used by current industry leaders. 

  • Silicon interposers attached to build-up substrates
  • Redistribution Layer (RDL) interposers attached to build-up substrates, with or without bridges
  • 3D with µbumps like Intel’s Foveros
  • Flip chip Ball Grid Arrays (BGA) 
  • Fan-out Wafer Level Packages (FOWLP) 

Although I won’t go through these structures in detail, I will say that bridge technology came up quite a bit this year. Bridges are becoming increasingly common for larger AI modules. 

TechSearch International’s Jan Vardaman

For context, an interposer spreads or reroutes electrical connections, and a chiplet is a small integrated circuit (IC) designed to work alongside other small ICs. A bridge is a type of interposer that connects chiplets only at their edges, which can offer higher-bandwidth communication than a standard silicon interposer allows. However, standard silicon interposers remain the dominant choice due to their cost-effectiveness.

According to Gabriela Pereira, technology market analyst at Yole Group, 94% of interposer revenue in 2024 came from silicon. She expects mold interposers with silicon bridges to ramp up strongly in 2025, with momentum continuing into 2026.

Will Chiplets Finally Have Their Moment? 

Chiplets as we know them today go back as far as the year 2000. Twenty-five years later, chiplets still haven’t broken into the mainstream advanced packaging world. Even in my limited time in the industry, I’ve heard a lot about how chiplets are an emerging solution for AI scaling issues.

But are we any closer to integrating chiplets now that AI demands have increased exponentially? To be honest, it doesn’t seem like it. 

To give some background on how chiplets work: the idea is to create specialized ICs that perform specific functions, optimizing each IC only where needed. For example, chiplets provide the freedom to use the most advanced GPUs while cutting costs on other components that aren’t as integral to the device. Chiplets offer unparalleled customization, but they come with a multitude of other challenges.

Mark Kuemerle, VP of technology at Marvell, said it best. “Companies using chiplets can be counted on your fingers.” 

So if chiplets have so much potential, what specifically is preventing them from taking off? The reasons are abundant, but some of the primary ones are complex testing requirements, the need for highly specialized engineers, and cost concerns. The reliance on multiple suppliers also makes chiplets hard to standardize.

Marvell’s Mark Kuemerle

Chiplet die-to-die interconnects, for example, introduce a host of signal integrity concerns that make them challenging to test in real time. The large number of I/Os that chiplets rely on also requires elaborate interconnects, which further complicates testing. As of now, testing chiplets is slow and expensive compared to testing standard systems-on-chip (SoCs). And with all of the complexity that chiplets possess, yield becomes a major concern as well.

“You can’t have yield loss, because you’d be throwing out something extraordinarily expensive,” said Kuemerle. “Chiplets are being used to disaggregate I/O from a silicon die to drive us to the latest and greatest tech that costs even more.”

Even though chiplet modules aren’t nearly as dominant as SoCs, they’re still being leveraged here and there by leading-edge companies. For instance, AMD’s EPYC processors and Intel’s Ponte Vecchio GPU are based on chiplet modules. Will we see more chiplet modules in the future? Probably at some point, but not without a lot of problem solving first. 

Scaling and Development for Gen-AI 

Artificial intelligence, or artificial stupidity? 

Vardaman raised a question that’s on everyone’s mind – when will AI actually become intelligent enough to deliver on the value we expect from it? 

I touched on this last December in my article, It’s Time to Put AI’s Growth Into Perspective. Despite massive investments in AI, Gartner reported in late 2024 that businesses still are not seeing the results they anticipated from the technology. This is especially true for generative AI. 

But still, even with concerns surrounding generative AI, the industry is determined to make it work. Pereira noted that generative AI drove the AI market in 2024, totaling around $672B in semiconductor device revenue. 

To improve gen-AI, the industry will need to bring more memory closer to the processor and reduce its reliance on the cloud. For consumer electronics, achieving this means boosting on-device memory capabilities. According to Chirag Patel, senior director of AI product management at Qualcomm, the challenges of on-device AI are as follows:

  • Memory and storage limitations
  • Small device batteries and thermal constraints
  • Latency implications caused by the mix of gen-AI and non-gen-AI workloads

To begin to solve these, he underscored the need to reduce memory consumption through the following methods. (Note that these are simplified definitions of complex processes.)

  • Distillation: A small AI model is trained to mimic a larger model, thereby consuming less memory
  • Quantization: This reduces the number of bits used to store values where possible, improving inference speed
  • Speculative decoding: This allows a smaller model to predict tokens in bulk, with the larger model verifying them. A token is the fundamental unit of data that AI processes
  • Efficient image and video architecture: This makes better use of existing memory by streamlining operations
  • Heterogeneous computing: This distributes work across multiple types of processors to improve energy efficiency
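Of these methods, quantization is the most straightforward to sketch in code. The following is my own minimal illustration of symmetric 8-bit weight quantization, not any vendor’s implementation: each float32 weight is replaced by an int8 value plus a single shared scale factor, cutting the memory footprint per weight by 4x at the cost of a small rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    scale = float(np.abs(weights).max()) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.nbytes, w.nbytes)                # 1024 vs. 4096 bytes: 4x smaller
print(float(np.abs(w - w_hat).max()))    # worst-case error, at most half a quantization step
```

Real deployments typically refine this basic scheme, for example with per-channel scales or quantization-aware training, but the memory arithmetic is the same.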

Right now, HBM supply is tight, so it makes sense to optimize existing memory capacity as much as possible. In this industry, every effort counts. 

Final Thoughts

Every time I hear from leaders in AI, I’m always impressed by their ability to problem solve and innovate despite extensive (and often overwhelming) hurdles. Although I still feel that AI can be terrifying, we can have a conversation about AI ethics and dystopian scenarios another time. 

While AI still has a long way to go in some areas, Kimon Michaels, co-founder of PDF Solutions, highlighted that AI’s most immediate impact is on increasing efficiency and yield in the semiconductor manufacturing process. We’re seeing some of these developments already. For instance, Comet Yxlon combined AI with its 3D X-ray technology to inspect defects non-destructively at the submicron level. Kulicke & Soffa is also using AI analytics for advanced packaging process control.

Lastly, I wanted to briefly address the uncertainty surrounding the American semiconductor industry. In the middle of this year’s DPC, the U.S. president expressed that he wanted to get rid of the CHIPS Act. With many companies in attendance anticipating CHIPS Act funding, this led to several hallway conversations about what’s next. While your guess is as good as mine, I encourage you to read Francoise’s article, Will the CHIPS Act be DOGE’d?, if you haven’t already.

AI opens up a world of challenges, but with challenge comes opportunity. With so many groundbreaking innovations created by our 3D InCites community members and beyond, the sky is the limit for what can be accomplished. 

Jillian McNichol

Jillian McNichol is a technology blogger with more than seven years of experience covering a…
