Coverage of the GSA Memory Conference continues in this week’s issue of Chip Scale Review Tech Monthly. Francoise von Trapp contributed this article.

While there continue to be incremental improvements and innovations in single-chip packaging technologies, they pale in comparison to the focus on multi-chip assembly and packaging technologies. Whether they're called systems-in-package (SiP), multi-chip modules (MCM), package-on-package (PoP), or heterogeneous stacks, the reality is that cramming all the necessary functionality into today's devices requires some pretty fancy footwork in the back-end assembly processes. The latest configuration to take the stage at recent conferences is what's being called 2.5D silicon interposer technology. At the GSA Memory Conference, held April 11, 2011 in San Jose, a number of speakers from different parts of the supply chain touted the benefits of this so-called first step to 3D integration.

Defining the 2.5D Interposer
Wally Rhines, Chairman and CEO of Mentor Graphics, defines 2.5D integration as the stacking of 2D chips into a 3D system. Multiple dice are connected in a system, most often side-by-side, usually using a silicon interposer. What makes 2.5D different from full 3D stacking? While both approaches rely on TSVs to form interconnects, in 2.5D the TSVs are in the interposer rather than in active die areas.

According to Rhines, one advantage from the design tool vendor's perspective is that the design methodology can be evolutionary as long as TSVs are kept out of active areas; supporting 2.5D requires only upgrades to existing EDA tools. Moving to full 3D ICs, however, requires a revolutionary design methodology. "2.5D will provide long-lasting solutions," noted Rhines. "Things become dramatically more difficult when TSVs move into active circuit areas." He added that many future benefits can be achieved with today's 2.5D ICs, which allow existing analog, digital, and memory chips to be stacked into the third dimension without TSVs in active circuitry (Figure 1).

Figure 1: The evolution of 3D ICs. (Source: Mentor Graphics)

Is 3D IC Really More Costly?
In his talk, Cost/Architecture/Application Implications for 3D Stacking Technology, Yuan Xie of Pennsylvania State University addressed the sensitive issue of cost. First of all, he stressed the importance of weighing the benefits against the increase in production costs. Incremental improvements, such as a 10% latency improvement from wirelength reduction, don't justify the cost of true 3D IC integration for niche markets. This is one reason why, although TSV stacking has been successfully demonstrated in DRAM stacks, memory manufacturers haven't gone into production with them. There needs to be a killer application and a novel architecture to justify the cost, and Wide I/O DRAM on logic has become that killer app for 3D ICs.

That said, Xie also offered the perspective that while 3D IC stacking may appear to be a more costly method of heterogeneous integration, it really depends on the size of the chip. He suggests using a 3D bonding cost model rather than a wafer cost model that calculates the cost of each 2D die. From the 3D bonding cost model, 3D stacking offers cost advantages for large die, explained Xie. Stacking smaller die with higher yield also requires fewer metal layers to satisfy routing constraints, and going from 2D to 3D with fewer metal layers can mean a cost reduction. "3D integration is not always necessarily the expensive option," said Xie. For large chips with billions of transistors, 3D stacking offers a cost advantage: a 24x24mm 40nm die partitioned into four 24x6mm die, for example, shows an increase in per-die yield from 25% to 70%.
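Xie's talk didn't specify which yield model is behind those numbers, but a simple Poisson defect model (a common textbook assumption, not something stated in the presentation) reproduces them almost exactly. The short sketch below backs a defect density out of the 25% figure for the full-size die and then applies it to one 24x6mm partition; it's an illustration of the reasoning, not Xie's actual cost model.

import math

def die_yield(area_cm2, defect_density):
    # Poisson yield model: probability that a die of the given area has zero defects
    return math.exp(-area_cm2 * defect_density)

large_area = 2.4 * 2.4              # 24 x 24 mm die, in cm^2
d0 = -math.log(0.25) / large_area   # defect density implied by 25% yield (assumed model)

small_area = 2.4 * 0.6              # one 24 x 6 mm partition, in cm^2
print("24x24 mm die yield: {:.0%}".format(die_yield(large_area, d0)))  # ~25%
print("24x6 mm die yield:  {:.0%}".format(die_yield(small_area, d0)))  # ~70%

Under this assumed model, each of the four smaller dice yields roughly 70%, which is where the partitioning advantage comes from before any stacking costs are added back in.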

A Memory Maker’s Perspective
In her keynote, Market Trends Shaping the Memory Industry, Sharon Holt, Senior VP and General Manager of Rambus' Semiconductor Business Group, talked about the various paths a company could take to get to 3D integration, and divulged Rambus' strategy.

Demands on DRAM continue to increase due, by and large, to cloud computing and the continued growth of global mobile data traffic, which rose 59% in 2010. In 2011, video streaming will account for more than 50% of all mobile traffic. These trends will continue to push demands on memory.

Holt suggested that there are three paths forward: an evolutionary path, continuing to push the LPDDR roadmap; a revolutionary approach, jumping to wide I/O DRAM-on-logic stacking with 3D ICs; and a third approach, developing breakthrough innovations that leverage the evolutionary infrastructure.

While Holt agrees that eventually it's the revolutionary road that will need to be traveled, she says there's still lots of work to get it ready for HVM, and we don't need to go there just yet. Instead, Rambus is working on innovations that extend the performance of the existing infrastructure while avoiding the cost and risk involved with the revolutionary approach. She said the company's proprietary FlexLink™ Command/Address (C/A) improves memory parallelism for multi-core processing, enables higher data rates, and improves memory system scalability. Additionally, its micro-threaded memory core improves data throughput, reduces memory resource contention, and enables finer access granularity. Used in a PoP footprint, it's a compact 3D solution.

Conclusion
There's no doubt there's no place to go but up to accommodate next-generation technologies. And how we'll get there no longer remains to be seen: 2.5D is the "now" of 3D integration. When all the other options have been exhausted, everyone agrees, full 3D ICs will be next.
