Missing Interposer Abstractions And Standards

Interposer-based design still needs to be opened up to the masses; the benefits include greater flexibility and fewer aging effects.

The design and analysis of an SoC based on an interposer is not for the faint of heart today, but the industry is aware of the challenges and is attempting to solve them. Until that happens, however, it will be a technique that only large companies can deploy because they need to treat everything almost as if it were a single die.

The construction of large systems uses techniques such as abstraction and encapsulation to aid comprehension, to limit the impact of interdependence, and to contain the scope of analysis. Without them, a system has to be treated holistically.

For semiconductors, this starts at the very lowest levels with gates. These are an abstraction of what is happening at the transistor level. Because of the way they are constructed, feedback can be ignored, and the logic function is provided as an abstraction with some parameters. That makes analysis simple. And so long as gates are used within their fan-out constraints and their timing is taken into account, they can be considered perfect logic functions.

Similar techniques are used when placing complete chips on a printed circuit board. While analysis of the assembled board is required, each chip can be viewed as an abstraction with only the I/O exposed for the analysis. The chips do not have to know about each board they are to be used in, just the “standards” for the conditions that they will face.

But the inclusion of an interposer into an SoC has not reached that level of abstraction and encapsulation yet. It is not possible to develop a chiplet without knowing details about the interposer it will be used on, and the other chiplets that will be used alongside it. That confines the application of interposers to vertically integrated companies with teams large enough to develop all the pieces using a synchronized methodology.

The bigger problem is that analysis is uncontained and continues to grow with total system size. That will soon become a limiter. Interconnect standards are required to break the dependencies, and suitable abstractions must be found.

This has been achieved for high-bandwidth memory (HBM), where the interfaces have been fully defined. That enables the design and fabrication of memories to be separated from the design of the chip and the interposer. Once those interfaces have been verified enough times, engineers will stop doing flat analysis. But so far, interconnect standards for the general case do not exist. Without them, a viable chiplet market cannot exist, because a chiplet may have to be modified for each interposer, or the full design files would have to be handed over for system-level analysis. Many proposals are being made, so it is quickly becoming the Wild West.

Interfaces
Chip-to-chip interfaces are well-defined, as are the interfaces within a die, but die-to-die interfaces are new. “A traditional ASIC has large I/O drivers necessary to drive signals through the package, board and external interfaces, which could range from tens of millimeters to several meters,” says Tony Mastroianni, advanced packaging solutions director for Siemens EDA. “2.5D die-to-die interfaces deploy smaller I/O drivers that are only required to drive horizontal connections to adjacent die through the interposer, which may be on the order of tens to hundreds of microns. 3D die-to-die interfaces deploy even smaller I/O drivers that are only required to drive vertical connections directly to the die stacked above or below, which may be on the order of a few to hundreds of nanometers. The reduced drive strength and shorter trace lengths inherent in the 2.5D and 3D approaches enable dramatic reductions in power and increased I/O bandwidth, offering orders-of-magnitude improvements in energy efficiency (pJ/bit).”
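To put the pJ/bit metric in perspective, a back-of-envelope calculation shows how link power scales with energy per bit. The numbers below are illustrative assumptions, not figures from Siemens:

```python
def link_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Watts consumed moving bandwidth_gbps Gb/s at energy_pj_per_bit pJ/bit."""
    return bandwidth_gbps * 1e9 * energy_pj_per_bit * 1e-12

# Hypothetical comparison at the same 512 Gb/s throughput:
for name, pj_per_bit in [("board-level SerDes", 5.0), ("2.5D die-to-die", 0.5)]:
    watts = link_power_watts(512, pj_per_bit)
    print(f"{name}: {watts:.2f} W")  # 2.56 W vs. 0.26 W
```

The same throughput at a tenth of the energy per bit costs a tenth of the power, which is why the shorter, weaker drivers matter so much at high aggregate bandwidths.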

This requires new standards to be defined. “Die-to-die connections need to provide much higher bandwidth, lower-power communication between them, and there are protocols being developed for that sort of scenario,” says Marc Swinnen, director of product marketing at Ansys. “That would allow some standardization in the chiplet world, but it is still nascent. The result is that the only people who have really made this work so far are the vertically integrated guys — people who control the entire stack from top to bottom, and every chip is designed with that in mind. They are able to get it to work, but we’re not yet to the point where some mainstream vendor could just pick some chiplets off the shelf and expect them all to work together.”

Fig. 1: Interposer layer in package with HBM2 and SoC. Source: Ansys

It is not as simple as just tweaking an existing standard. “With a die-to-die connection across an interposer or even a substrate within the package, you’re talking about a few millimeters,” says Manuel Mota, product marketing manager at Synopsys. “Your insertion loss is going to be much smaller, and you can design the SerDes to take advantage of that. They are much simpler and take much less power — five to six times less power. Conversely, you are also doing away with some of the additional knobs that you have inside a chip-to-chip SerDes to deal with reflections, higher resistivity, etc. It’s a different problem, but it’s still a signal integrity issue and a crosstalk issue with the neighboring channels that are now closer.”
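A rough sketch shows the insertion-loss argument. Channel loss in dB grows roughly linearly with trace length at a given frequency; the per-millimeter loss figures below are assumptions for illustration only, since real channels depend on the metal stack, geometry, and data rate:

```python
def insertion_loss_db(length_mm: float, loss_db_per_mm: float) -> float:
    # First-order: loss in dB scales linearly with trace length.
    return length_mm * loss_db_per_mm

# Assumed loss figures, for illustration only:
board = insertion_loss_db(length_mm=250, loss_db_per_mm=0.1)  # ~25 cm PCB trace
intp = insertion_loss_db(length_mm=3, loss_db_per_mm=0.5)     # a few mm on interposer
print(f"Board channel:      {board:.1f} dB")   # 25.0 dB
print(f"Interposer channel: {intp:.1f} dB")    # 1.5 dB
```

With so little loss to equalize, the die-to-die SerDes can drop much of the equalization circuitry a board-level SerDes needs, consistent with the five-to-six-times power reduction Mota cites.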

Deliverables
When developing anything intended for a third party, there has to be an agreed-upon set of deliverables that collectively provides all of the information required for integration. It took many years to develop that for the soft IP business, and it probably will take a similar amount of time for chiplets.

“Facilitating the implementation of these devices will require general purpose chiplet providers to adopt and deliver standardized, machine-readable models delivered with their chiplet components, including system/RTL level functional, physical, electrical, power and thermal models in addition to data sheets, integration and test guidelines,” says Siemens’ Mastroianni. “It also will require physical layer ASIC IP to be integrated into custom ASIC devices that include external chiplet interfaces. This IP may come from chiplet vendors and/or traditional ASIC IP vendors.”
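No machine-readable format has been finalized, so the following is purely a hypothetical sketch of the deliverable categories Mastroianni lists. Every field name and file name is invented for illustration; the real schema is what standardization efforts are working toward:

```python
# Hypothetical chiplet deliverable manifest (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ChipletDeliverables:
    name: str
    functional_model: str  # system/RTL-level functional model
    physical_model: str    # bump map, outline, keep-outs
    electrical_model: str  # I/O timing, drive strength, termination
    power_model: str       # per-mode power, PDN requirements
    thermal_model: str     # power maps, thermal resistance
    documents: list[str] = field(default_factory=list)  # data sheets, guidelines

example = ChipletDeliverables(
    name="example_chiplet",
    functional_model="chiplet_beh.sv",
    physical_model="chiplet_bumps.xml",
    electrical_model="chiplet_io.ibis",
    power_model="chiplet_power.upf",
    thermal_model="chiplet_thermal.ctm",
    documents=["datasheet.pdf", "integration_guide.pdf", "test_guide.pdf"],
)
```

The point of such a manifest is that an integrator's tools could consume each model without seeing the chiplet's internal design files.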

The Open Compute Project (OCP) ODSA group is an industry-wide collaboration working on developing standards to drive interoperability of chiplets from independent vendors. “They have established a Chiplet Design Exchange (CDX) working group to focus on standardizing chiplet models, implementation workflows and test methodology,” adds Mastroianni. “The CDX working group is actively working on these standards, but it will take time to solidify the standards and provide design/test flows and adoption by the chiplet providers.”

There are related issues that need to be addressed. “These organizations have been struggling with how to define these standards,” says Ansys’ Swinnen. “It’s slippery because there are so many elements. It’s not just power. It’s also electrical, thermal and power integrity. They all have to be specified to a very narrow degree because the things are so intimately connected. And the speeds are much higher. So they are actually defining new communication protocols like BoW (bunch of wires), which allows for very high-speed, low-power connectivity between the chiplets.”

Part of their work is to define the protocols and physical standards for the die-to-die connections. “JEDEC has a spec for high-bandwidth memory (HBM) and is currently working on HBM3, which is planned to support 665 GB/s per package,” says John Park, product management group director for IC packaging and cross-platform solutions at Cadence. “AIB is another emerging standard, which will rely on a large number of parallel signals at slower speeds. As the technology evolves, the standards will follow suit. As things are today, the need for standards is there, but no one is waiting for the standard to be ratified before designing a 3D package. We believe there will be multiple standards for heterogeneous integration of multiple chip(lets) simply because there are so many different packaging/integration technologies available.”
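As a sanity check on the quoted figure, and assuming HBM3 keeps the 1,024-bit-wide interface of earlier HBM generations (an assumption, not something stated here), the implied per-pin rate works out as follows:

```python
bus_width_bits = 1024      # assumed, matching earlier HBM generations
package_gb_per_s = 665     # per-package figure quoted above
per_pin_gbps = package_gb_per_s * 8 / bus_width_bits
print(f"Implied per-pin rate: {per_pin_gbps:.1f} Gb/s")  # ~5.2 Gb/s
```

That is the wide-and-slow philosophy in action: modest per-pin speeds multiplied across a very wide parallel bus, which is the same trade AIB makes.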

There are also different types of connections. “One is a disaggregated processor approach, where a complex CPU, GPU, or perhaps AI processor can be decomposed into plug-and-play modules assembled and interconnected with silicon interposers or bridges,” says Mastroianni. “This approach will likely be offered by a small set of semiconductor vendors with proprietary interfaces for their processor building block, and optional, general-purpose interfaces and general-purpose chiplet offerings.”

The second approach would be to use general-purpose building block chiplets that would be assembled and interconnected with a custom ASIC or ASICs. “There has been good progress in the standardization of interfaces and protocols, including USR and XSR serial interfaces and OpenHBI, HBM and BoW parallel interfaces,” adds Mastroianni. “There are still several companies trying to promote their proprietary interfaces as standards. I suspect the list will grow, but things do seem to be converging.”

Many of these efforts are based around foundry offerings. “If you look at the Intel ecosystem and the Chips Alliance, who are now taking care of AIB, they are defining another parallel interface that is used more by DARPA and government programs within the US,” says Synopsys’ Mota. “These are standards that are mature and new versions are coming to the market. There is momentum in the industry and a good understanding of the needs of standardization. Things are moving very well in the right direction.”

Tool support
Standards and deliverables ultimately are consumed by EDA tools, which need to perform the necessary analysis. “Signal integrity and power integrity were issues, and are issues, even at the PCB level,” says Swinnen. “People do voltage drop analysis on the power planes and signal integrity, cross-coupling, and so on at the PCB level. There are tools for that. For an interposer, you have very high-speed signals running relatively long distances in parallel to each other. While the interposer may be built using older technology nodes, it is physically bigger than a standard chip. If the interposer supports several dies, and you’re running from one corner to the other, that’s a very long distance to be running. And if it’s a bus, you get a lot of coupling. Capacitive coupling increases with the length of the physical coupling.”
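A first-order model makes the length dependence concrete: coupled noise grows with the total coupling capacitance, which grows linearly with the coupled run. The per-millimeter capacitance, termination, and edge-rate values below are placeholders, not measured data:

```python
def coupled_noise_v(length_mm: float,
                    cc_fF_per_mm: float = 20.0,  # assumed mutual capacitance
                    r_term_ohm: float = 50.0,    # victim termination
                    swing_v: float = 0.8,
                    rise_ps: float = 50.0) -> float:
    cc = cc_fF_per_mm * length_mm * 1e-15  # total coupling capacitance (F)
    dv_dt = swing_v / (rise_ps * 1e-12)    # aggressor slew rate (V/s)
    # First-order estimate, valid while R*Cc is small vs. the rise time.
    return r_term_ohm * cc * dv_dt

for mm in (1, 5, 20):
    print(f"{mm:>2} mm coupled run: ~{coupled_noise_v(mm)*1000:.0f} mV")
# 1 mm: ~16 mV, 5 mm: ~80 mV, 20 mm: ~320 mV
```

A corner-to-corner bus route across a multi-die interposer sits at the long end of that scale, which is why the coupling analysis cannot be skipped.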

Power delivery is an increasingly complex issue. “When you have these very dense designs, creating the power delivery network becomes challenging,” says Kenneth Larsen, director of product marketing for Synopsys. “There is such a lot of switching going on, and you need to verify the impact of power noise on the signal lines. It is a combination where you need to do power integrity with signal integrity analysis to make sure the interference does not cause an issue. You also have cross coupling effects for thermal coupling between die. Then, when you’re talking about integrating photonics, they are very noisy. There’s a lot of noise on the systems, and that can impact performance of the entire system. Everything is swinging, and that creates noise.”

Today, system-level analysis has to be done flat. “For electromigration, or voltage drop analysis, we have examples where the analysis of the individual chips shows no IR drop problems,” says Swinnen. “But when you stack them together you do have IR drop problems. Conversely, we have an analysis of a single chip that shows an electromigration issue. But when you stack them together with the others on the interposer, the electromigration issue goes away because there’s a bunch more parallel paths that are added to the system, and so the current locally drops. Power analysis is fundamentally different when you place them on the interposer, and it’s not as if these two can be seen separately. Power analysis has to analyze the whole thing together.”
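A toy calculation illustrates the parallel-path effect Swinnen describes, using made-up numbers:

```python
# Adding parallel supply paths through the interposer lowers the current
# (and hence EM stress and IR drop) in any single path. Illustrative values.
load_current_a = 2.0
path_resistance_ohm = 0.05  # assumed resistance of one supply path

for n_paths in (1, 2, 4, 8):
    per_path_a = load_current_a / n_paths  # assumes even current sharing
    ir_drop_mv = per_path_a * path_resistance_ohm * 1e3
    print(f"{n_paths} path(s): {per_path_a:.2f} A/path, {ir_drop_mv:.1f} mV drop")
```

A die that violates electromigration limits standalone can pass once the interposer adds paths, and vice versa for IR drop, which is exactly why per-die sign-off is not sufficient.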

Some problems are similar to those experienced in PCBs, but with added complexity. “PCB problems can be magnified because of the small geometries used in silicon,” says Mota. “But some challenges are new, such as the lack of solid planes that we typically see on a PCB.”

There are also some unique issues that have not been seen in the past. “We have seen an issue that crops up, which is low frequency power oscillations,” says Swinnen. “When you have multiple chips, or multiple elements on the interposer, you can get oscillations in the voltage. These are low frequency oscillations at a few hundred Hertz. What we have seen is that the power oscillates between the different elements in the chip. This is something you would never see on a single chip, but it’s something you do see on PCBs and interposers, so that’s a completely novel sort of analysis you need to do. The power distribution network models that we use are typically high frequency models. We need a chip-power model that captures this effect so that it can be modeled and simulated to avoid problems.”
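The mismatch can be seen from the standard resonance formula f = 1/(2π√(LC)). With illustrative on-die and package values (assumptions, not from the article), the natural resonance of a conventional PDN model lands in the tens of MHz, about ten orders of magnitude in L·C away from a few-hundred-Hz mode:

```python
import math

def resonance_hz(l_henry: float, c_farad: float) -> float:
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# Assumed on-die/package values: 0.1 nH and 100 nF.
print(f"{resonance_hz(0.1e-9, 100e-9)/1e6:.0f} MHz")  # ~50 MHz

# L*C product implied by a 300 Hz oscillation:
f_target = 300.0
lc = 1.0 / (2 * math.pi * f_target) ** 2
print(f"Required L*C: {lc:.1e} H*F")  # ~2.8e-7, vs. 1e-17 above
```

Whatever drives the few-hundred-Hz behavior, it lives far outside the frequency range of today's high-frequency PDN models, which supports Swinnen's call for a chip-power model that captures it.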

The tools are a hodgepodge of capabilities cobbled together today, but they are improving. “EDA vendors will need to provide more comprehensive, integrated design flow solutions to enable the broader design community,” says Mastroianni. “This will include the integration of system-level design and verification, advanced package design and analysis, IC design and analysis, and DFT/test tools, methodology and infrastructure. It is unlikely that a single EDA vendor can provide best-in-class solutions for all of this technology, so an open, configurable approach will likely prevail. This will be a daunting challenge, and the facilitation of a broad-based 3D solution will be even more challenging.”

Conclusion
Interposers and bridges are the way to keep the notion of Moore's Law alive, but the ecosystem to enable them is still nascent. Today, every foundry has a different and incompatible solution. Around those, standards organizations and consortia are attempting to bring forward the standards, protocols, and methodologies that will enable a third-party ecosystem to develop. EDA companies are having to decide where to place their resources and which problems to attack first. But progress is being made on all fronts. The industry depends on it.
