Channel: Analog/Custom Design

Dealing with AOCVs in SRAMs

Systems on Chip, or SoCs as they’re more commonly called, have become increasingly complex, incorporating a dizzying array of functionality to keep up with evolving technology trends. Today’s SoCs are enormous multi-billion-gate designs with large memories that enable the complex, high-performance functions executed on them. It is quite common for about 40% of an SoC’s real estate to be used for Static Random Access Memory (SRAM). SRAM design is a complex and highly sensitive process, and what we want to design in the silicon often differs from what actually comes out of the manufacturing process. This is due to Advanced On-Chip Variations, or AOCVs.

AOCVs occur in the device manufacturing processes, and there are two kinds:

  1. Systematic Variations: These are caused by variations in gate oxide thickness, implant doses, and metal or dielectric thickness. They are deterministic in nature and exhibit spatial correlation – i.e., they depend on the physical location of the cells in the path being analyzed.
  2. Random Variations: These are random, as the name suggests, and therefore non-deterministic. Their effect depends on the logic depth of the path being analyzed, and they tend to statistically cancel each other out over a long enough path.
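The cancellation effect for random variations can be seen with a toy Monte Carlo experiment: if each stage of a path has an independent random delay variation, the total delay's relative spread shrinks roughly as 1/√depth. This is a minimal sketch with made-up nominal delay and sigma values, not data from any real process:

```python
import random
import statistics

def path_delay_samples(depth, n_samples=2000, nominal=1.0, sigma=0.1, seed=0):
    """Monte Carlo samples of total delay for a path of `depth` stages,
    each stage perturbed by an independent (random) variation."""
    rng = random.Random(seed)
    return [
        sum(rng.gauss(nominal, sigma) for _ in range(depth))
        for _ in range(n_samples)
    ]

# Relative sigma (stdev / mean) falls off roughly as 1/sqrt(depth),
# which is why long paths are less sensitive to random variation.
for depth in (1, 4, 16, 64):
    s = path_delay_samples(depth)
    rel_sigma = statistics.stdev(s) / statistics.mean(s)
    print(f"depth={depth:3d}  relative sigma = {rel_sigma:.3f}")
```

This is exactly why AOCV derating tables are indexed by path depth: a deep path needs a smaller random-variation derate than a single cell.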

As can be deduced, the effects of these variations are getting more pronounced as process geometries are shrinking, and so dealing with them in an effective manner is crucial to the proper functioning of an SoC. And therein lies the rub.

Traditional Solutions for AOCVs in SRAMs

AOCVs need to be modeled effectively, so their effects can be taken into account for the ultimate SRAM design to be successful. This means the design needs to be simulated to account for the random and deterministic process variations. Most companies deal with this in one of the following two ways:

  1. Running a Monte Carlo simulation on the full memory instance RC extracted netlist

This approach involves creating a simulatable instance netlist from the instance schematic and running Monte Carlo simulations on the complete netlist, multiple times. This gives the most accurate results. However, it is an incredibly CPU- and memory-intensive approach, with run times lasting several days, and its huge runtime memory footprint calls for larger LSF machines.

  2. Running Monte Carlo simulations on the critical path RC netlist

This approach reduces the netlist drastically by identifying repetitive cells in the memory and replacing them with a load model. You then create a critical path schematic for each component to be simulated and run Monte Carlo on it. While this approach is definitely much faster than the previous one, it still involves several thousand nodes and instances, and runtime is still on the order of a few days. Additionally, it takes time to create critical path schematics for the different components and to ensure the setup is correct. Creating a critical path involves manual effort and is error prone, making it a less than ideal solution.

So what is a designer to do?

Enter the approach used by our customer, Invecas. Their solution is based entirely on the Legato Memory Solution, specifically Liberate-MX runs with Spectre simulations. It relies on reusing the characterization database from Liberate-MX runs, which means no additional time is spent setting up the environment. It also reuses the partition netlist created by the Liberate-MX flow; Liberate has the built-in intelligence to identify the dynamic partition and the activity factor. This approach requires the least runtime and memory.

So how does this work?

Liberate runs a Fast-SPICE tool under the hood to identify the worst-case path that is active and toggling, and extracts only that path to work on. Then an accurate SPICE run is performed to produce the accurate .libs. Generating these accurate .libs is already part of the Liberate-MX flow and available today. Invecas then modified this flow for AOCV by taking this partition, with all the accompanying setup and nodes, and adding a couple of commands for Monte Carlo runs. The script now runs Monte Carlo on the greatly reduced partition and returns AOCV models with all the derating values in a matter of hours, instead of days or even weeks.
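To make the post-processing step concrete, here is a minimal sketch of how late/early derate factors could be derived from Monte Carlo delay samples of a partition. The function name, sample values, and 3-sigma choice are illustrative assumptions, not Liberate commands or Invecas's actual script:

```python
import statistics

def aocv_derates(mc_delays, nominal, n_sigma=3.0):
    """Derive late/early derating factors from Monte Carlo delay samples.

    The late derate scales the nominal delay up (pessimism for setup checks);
    the early derate scales it down (pessimism for hold checks).
    """
    mu = statistics.mean(mc_delays)
    sd = statistics.stdev(mc_delays)
    late = (mu + n_sigma * sd) / nominal
    early = (mu - n_sigma * sd) / nominal
    return late, early

# Hypothetical delay samples (ns) from a Monte Carlo run on a partition netlist.
samples = [0.98, 1.02, 1.00, 1.05, 0.97, 1.01, 0.99, 1.03]
late, early = aocv_derates(samples, nominal=1.0)
```

A static timing tool would then apply these factors to the nominal cell delays, with the derate typically tabulated against path depth as described earlier.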

The results of the three approaches are compared below.

| Metric | Method 1: Full Instance Sims (300 MC runs) | Method 2: Critical Path Sims | Invecas Method: Partition Netlist Sims | Invecas Improvement over Method 1 | Invecas Improvement over Method 2 |
| --- | --- | --- | --- | --- | --- |
| No. of devices | 7,440,000 | 17,000 | 560 | 13,285.71 | 30.36 |
| No. of nodes | 22,400,000 | 317,000 | 12,300 | 1,821.14 | 25.77 |
| No. of RC elements | 22,000,000 | 231,000 | 12,000 | 1,833.33 | 19.25 |
| Run time (hours) | 350 | 84 | 1.45 | 241.38 | 57.93 |
| Run memory (GB) | 50 | 10 | 1 | 50 | 10 |

The side-by-side testing clearly shows that the Invecas method using the Legato Memory Solution greatly reduces the number of devices, nodes, and RC elements the Monte Carlo run uses, from several million to a few thousand. This in turn reduces the runtime and memory requirements by orders of magnitude, solving the biggest problem designers face today.

Please visit our page to find out more about this process, or to read about the Cadence Legato Memory Solution.

