
The Art of Analog Design Part 1: Overview of Variation-Aware and Robust Design


In this series, we will focus on advanced concepts for custom IC design, in particular, variation-aware design (VAD). With the emergence of high-speed simulators such as Spectre® APS, designers can now run simulations faster than ever before, so they are able to verify their designs more completely before tapeout. However, assuring a successful design requires more than verifying proper functionality for different stimuli and checking performance across corner conditions; it also requires properly allocating design margins based on process variation. Designers can use the Cadence® Virtuoso® ADE Product Suite not only to analyze the results and verify that the design is specification compliant, reducing the risk of a respin and getting the product to market faster, but also to increase competitiveness by reducing the effect of process variation on a design. Solving this problem requires more than fast simulation; it requires adopting new tools and methodologies.

First, let’s consider the impact of over-margining to avoid the negative effects of process variation on circuit performance. For example, let’s say we are designing a successive approximation ADC and find that the linearity of the capacitor digital-to-analog converter (CAPDAC) used to generate reference values limits yield to 90%. Also assume that, for the current design, the CAPDAC is 25% of the die area and there are 1000 die/wafer. If we can increase the yield to 99% by doubling the CAPDAC area, should we do it? Working through the numbers (a quick sketch of the arithmetic follows the list below), the current design has 900 good die per wafer, while the high-yield design has only 792 good die per wafer: 800 die/wafer * 99% yield. So even though the yield went up, profit will go down. There are two points to consider:

  1. Overdesign, that is, designing with excess margin, is not free. Allowing too much design margin can hurt competitiveness.
  2. The second point is subtler. To borrow from Mark Twain, “There are three kinds of lies: lies, damn lies, and statistics.” That is, we are relying heavily on statistical analysis to make critical decisions.
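
Below is a minimal sketch of that die-count arithmetic. The 800 die/wafer figure reflects the article's assumption that doubling a block occupying 25% of the die shrinks the gross die count from 1000 to roughly 800; the numbers are illustrative only.

```python
# Die-count arithmetic for the CAPDAC over-margining example.
baseline_die, baseline_yield = 1000, 0.90     # CAPDAC at 25% of the die
high_yield_die, high_yield_yield = 800, 0.99  # CAPDAC area doubled (assumed gross die count)

good_baseline = baseline_die * baseline_yield        # 900 good die per wafer
good_high_yield = high_yield_die * high_yield_yield  # 792 good die per wafer

print(f"Baseline design:   {good_baseline:.0f} good die/wafer")
print(f"High-yield design: {good_high_yield:.0f} good die/wafer")
```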

What type of simulation was performed to generate the yield numbers? Should the results of these simulations be trusted? These are questions we also need to consider when deciding which design to take to production. In the first part of this series of articles, we will explore variation-aware design. The question to be considered is how to balance immunity to process variation against its cost in terms of product competitiveness. In the second half of these articles, we will explore reliability analysis for devices and interconnect. Again, this is an area where designers have traditionally relied on allocating design margin and overdesign to prevent issues. The question to be considered is, as the importance of designing for automotive, industrial, and infrastructure applications grows, do we have enough design margin? Automotive designs operate in harsher environments and may need to operate reliably for years after a consumer product would have been recycled. The need for these types of solutions has been anticipated, and these capabilities already exist in the design environment.

In the next article, we will look into Monte Carlo sampling methods to see how we can minimize the number of simulations required to determine the yield of the circuit.


The Art of Analog Design Part 2: Monte Carlo Sampling


Historically, one of the great challenges that analog and mixed-signal designers face has been accounting for the effect of process variation on their designs. Minimizing the effect of process variation is an important consideration because it directly impacts the cost of a design. From Pelgrom’s law (1), it is understood that device mismatch due to process variation decreases as the square root of device area increases; see note 1. For example, to reduce the standard deviation (sigma) of the offset voltage from 6mV to 3mV, the transistors need to be four times larger. Increasing transistor size also increases die cost, since die cost is proportional to die (and transistor) area. In addition to increasing cost, increasing device area may degrade performance due to the increased parasitic capacitances and resistances of larger devices, or the power dissipation may need to increase to maintain performance despite those larger parasitics. In order to optimize a product for an application, that is, for it to meet the target cost with sufficient performance, analog and mixed-signal designers need tools to help them analyze the effect of process variation on their designs. Another way to look at the issue is to remember that analog circuits haven’t scaled down as quickly as digital circuits; maintaining the same level of performance has historically required roughly the same die area from process generation to process generation. So, while the density of digital circuitry doubles every eighteen months, analog circuits don’t scale at the same rate. If an ADC requires 20% of the die area at 180nm, then after two process generations, at the 90nm node, the ADC and the digital logic occupy equivalent die areas. After two more process generations, at 45nm, the ADC requires 4x the area of the digital blocks; see note 2. This example is exaggerated; however, the basic point that process variation is an important design consideration for analog design is valid.
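A minimal sketch of that square-root relationship follows. The coefficient A_VT below is a made-up illustrative value, not a real process parameter; in practice it comes from the foundry's mismatch models.

```python
import math

# Pelgrom's law sketch: sigma(Vos) is proportional to A_vt / sqrt(W * L).
A_VT_MV_UM = 6.0  # mV*um, hypothetical mismatch coefficient for illustration

def sigma_vos_mv(w_um: float, l_um: float) -> float:
    """Offset-voltage standard deviation (mV) for a device of area W*L (um^2)."""
    return A_VT_MV_UM / math.sqrt(w_um * l_um)

print(sigma_vos_mv(1.0, 1.0))  # 6.0 mV for a 1um x 1um device
print(sigma_vos_mv(2.0, 2.0))  # 3.0 mV: quadrupling the area halves the mismatch
```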

Traditionally, the main focus of block-level design has been on parasitic closure, that is, verifying that the circuit meets specification after layout is complete and the parasitic devices from the layout have been accounted for in simulation. This focus on parasitic closure meant that there was only limited support for analyzing the effect of process variation on a design. During the design phase, sensitivity analysis allowed a designer to quantitatively analyze the effect of process parameters on performance. During verification, designers have used corner analysis or Monte Carlo analysis to verify performance across the expected device variation, environmental, and operating conditions. In the past, these analysis tools were sufficient because an experienced designer already understood their circuit architecture, its capabilities, and its limitations, so performance specifications could be achieved by overdesigning the circuit. However, ever-decreasing feature sizes have increased the effect of process variation, and market requirements mean designers have less margin to use for guard-banding their designs. The decreasing feature size also means that power supply voltages are scaling down and, in some cases, circuit architectures need to change. An example of how power supply voltage affects circuit architecture is ADC design, where there has been a move from pipeline ADCs at legacy nodes (180nm) to successive approximation ADCs (SAR ADCs) for advanced-node (45nm) designs. This change has occurred because a SAR ADC can operate at lower power supply voltages than a pipeline ADC. As a result of the changing requirements placed on designers, there is a need for better support for design analysis than ever before.

Let’s look at an example of statistical analysis often performed by analog designers. Shown below is the Signal-to-Noise and Distortion Ratio (SNDR, or SINAD) of a capacitor D/A converter (CAPDAC). A CAPDAC is used in a successive approximation ADC to generate the reference voltage levels against which the input voltage is compared in order to determine the digital output code. The SINAD of the CAPDAC determines the overall ADC accuracy.

Figure 1: Example of Monte Carlo Analysis Results for Capacitor D/A Converter Signal-to-Noise Ratio

On the left is the distribution of the capacitance variation and on the right is the CAPDAC Signal-to-Noise Ratio (SNR) distribution. From the SNR distribution, the mean and standard deviation of the CAPDAC SNR can be calculated. If the specification requires the SNR to be greater than 60dB, does this result mean that the yield will be 100%? Another question to consider is whether or not the SNR distribution is Gaussian, since the analysis of the results is affected by the type of distribution. Or we might want to quantify the process capability, Cpk. Cpk is a parameter used in statistical quality control to understand how much margin the design has. In the past, this type of detailed statistical analysis has not been available in the design environment. In order to perform statistical analysis, designers needed to export the data and perform the analysis with tools such as Microsoft Excel.

Beginning in IC6.1.7, Cadence® Virtuoso® ADE Explorer was released with features to support a designer’s need for statistical analysis. Just a note: for detailed technical information, you can explore the Cadence Online Support website or contact your Virtuoso front-end AE. Now let’s take a quick look at the enhancements to Monte Carlo analysis, starting with the methods used to generate the samples.

In Monte Carlo analysis, the values of statistical variables are perturbed based on the distributions defined in the transistor models. The method of selecting the sample points determines how quickly the results converge statistically. Let’s start with a quick review: in the CAPDAC example we ran 200 simulations and all of them passed. Does that mean that the yield is 100%? The answer is no; it means that for the sample set used for the Monte Carlo analysis, the yield is 100%. In order to know what the manufacturing yield will be, we need to define a target yield, for example, a yield greater than 3 standard deviations (99.73%), and define a level of confidence in the result, for example 95%. Then we can use a statistical tool called the Clopper-Pearson method to determine whether the Monte Carlo results have a >95% chance of having a yield of 99.73%. The Clopper-Pearson method produces a confidence interval, the minimum and maximum possible yield, given the current pass/fail counts, the number of Monte Carlo iterations, and so on. Often designers perform a fixed number of simulations, 50, 100, etc., based on experience and assume that the results predict the actual yield in production. By checking the confidence interval, we can reduce the risk of missing a yield issue. Another result of using this rigorous approach to statistical analysis is that more iterations of Monte Carlo analysis are required. As a result, designers need better sampling methods that reduce the number of samples, that is, Monte Carlo simulation iterations, required in order to trust the results.
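As a rough illustration of the Clopper-Pearson bound (a sketch using scipy, not the Virtuoso implementation), the snippet below shows why 200 passing samples cannot, on their own, demonstrate a 3-sigma (99.73%) yield at 95% confidence.

```python
from scipy import stats

def clopper_pearson(passes: int, n: int, confidence: float = 0.95):
    """Two-sided Clopper-Pearson confidence interval for the true pass probability."""
    alpha = 1.0 - confidence
    lo = 0.0 if passes == 0 else stats.beta.ppf(alpha / 2, passes, n - passes + 1)
    hi = 1.0 if passes == n else stats.beta.ppf(1 - alpha / 2, passes + 1, n - passes)
    return lo, hi

lo, hi = clopper_pearson(passes=200, n=200)
print(f"95% confidence interval on yield: [{lo:.4f}, {hi:.4f}]")
# lo is about 0.982, below the 0.9973 (3-sigma) target, so 200 all-pass samples
# are not enough to claim a 3-sigma yield at this confidence level.
```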

Random sampling is the reference method for Monte Carlo sampling since it replicates the actual physical processes that cause variation; however, random sampling is also inefficient, requiring many iterations (simulations) to converge. New sampling methods have been developed to improve the efficiency of Monte Carlo analysis by selecting sample points more uniformly. Shown in Figure 2 is a comparison of samples selected for two random variables, for example, n-channel mobility and gate oxide thickness. The plots show the samples generated by random sampling and by a newer sampling algorithm called Low-Discrepancy Sampling (LDS). Looking at the sample points, it is clear that LDS produces more uniformly spaced sample points. More uniformly spaced sample points mean that the sample space has been explored more thoroughly, and as a result the statistical results converge more quickly. This translates into fewer samples being required to correctly estimate the statistical results: yield, mean value, and standard deviation.

Figure 2: Comparison of Random Variable values using Random Sampling and LDS Sampling  
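Virtuoso's LDS algorithm itself is not public; as an illustration only, the sketch below uses a Sobol sequence from scipy.stats.qmc as a representative low-discrepancy generator and compares its uniformity (discrepancy) against plain pseudo-random sampling for two normalized variables.

```python
import numpy as np
from scipy.stats import qmc

n = 256  # number of Monte Carlo sample points (a power of 2 suits Sobol sequences)
rng = np.random.default_rng(seed=0)

# Pseudo-random samples of two unit-range variables
# (think of them as normalized mobility and oxide-thickness variations).
random_pts = rng.random((n, 2))

# Low-discrepancy (Sobol) samples of the same two variables.
sobol_pts = qmc.Sobol(d=2, scramble=True, seed=0).random(n)

# Discrepancy measures non-uniformity of a point set; lower means more uniform.
print("pseudo-random discrepancy:", qmc.discrepancy(random_pts))
print("Sobol discrepancy:        ", qmc.discrepancy(sobol_pts))
```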

The LDS sampling method replaces Latin Hypercube sampling because it is at least as efficient and it supports Monte Carlo auto-stop. Monte Carlo auto-stop is an enhancement to Monte Carlo analysis that optimizes simulation time. Statistical testing is used to determine whether the design meets some test criterion; for example, for the CAPDAC, assume that you want to know with a 90% level of confidence that the SNR yield is greater than 99.73%. The user defines these criteria at the start of the Monte Carlo analysis, and the results are checked after every iteration. The analysis stops if one of two conditions occurs. First, the analysis will stop if the minimum yield from the Clopper-Pearson method is greater than the target, that is, the SNR yield is greater than 99.73%. More importantly, the Monte Carlo analysis will also stop if Virtuoso ADE Explorer finds that the maximum yield from the Clopper-Pearson method cannot exceed 99.73%. Since failing this test means that the design has an issue that needs to be fixed, this result is also important. It also turns out that failure usually shows up quickly, after only a few iterations of the simulation. As a result, using statistical targets to automatically stop Monte Carlo analysis can significantly reduce simulation time. In practice, what does this look like? Consider the plot in Figure 3, which shows the upper bound (maximum yield), the lower bound (minimum yield), and the estimated yield of the CAPDAC as a function of the iteration number. The green line is the lower bound of the confidence interval on the estimated yield. By the 300th iteration, we know that the yield is greater than 99% with a confidence level of 90%; in other words, we can be very confident that the CAPDAC yield will be high. In addition, thanks to Monte Carlo auto-stop, we only needed to run the analysis once.

Figure 3: Yield Analysis Plot
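Here is a hedged sketch of the auto-stop decision described above; the actual criteria and bookkeeping inside Virtuoso ADE Explorer are not public, so this only illustrates the logic of stopping on a proven pass or a proven fail.

```python
from scipy import stats

def auto_stop_check(passes: int, n: int, target_yield: float = 0.9973,
                    confidence: float = 0.90) -> str:
    """Decide after n Monte Carlo iterations whether the analysis can stop early."""
    alpha = 1.0 - confidence
    # Clopper-Pearson bounds on the true yield given `passes` out of `n` iterations.
    lo = 0.0 if passes == 0 else stats.beta.ppf(alpha / 2, passes, n - passes + 1)
    hi = 1.0 if passes == n else stats.beta.ppf(1 - alpha / 2, passes + 1, n - passes)
    if lo > target_yield:
        return "stop: minimum yield already exceeds the target"
    if hi < target_yield:
        return "stop: maximum possible yield is below the target - fix the design"
    return "continue: result is still inconclusive"

# A couple of failures early in the run quickly prove a 3-sigma target unreachable:
print(auto_stop_check(passes=8, n=10))
```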

To summarize, the two improvements to Monte Carlo sampling are LDS sampling and Monte Carlo auto-stop. LDS sampling uses a new algorithm to select the sampling points for Monte Carlo analysis more effectively. Monte Carlo auto-stop uses the statistical targets (yield and confidence level) to determine when to stop the Monte Carlo analysis. As a result of these two new technologies, the amount of time required for Monte Carlo analysis can be significantly reduced.

In the next article, we will look into analyzing Monte Carlo analysis results to better understand our design and how to improve it.

Note 1: Remember that in analog design, designers rely on good matching to achieve high accuracy. Designers can start with a resistor whose absolute accuracy may vary +/-10% and take advantage of the good relative accuracy, that is, the matching between adjacent resistors, to achieve highly accurate analog designs. For example, the matching between adjacent resistors may be as good as 0.1%, allowing the design of data converters with 10-bit (about 1000 parts per million, ppm), 12-bit (about 250 ppm), or even 14-bit (about 60 ppm) accuracy.
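A quick check of the LSB sizes quoted in the note:

```python
# Least-significant-bit size as a fraction of full scale, in parts per million.
for bits in (10, 12, 14):
    lsb_ppm = 1e6 / 2**bits
    print(f"{bits}-bit: 1 LSB = {lsb_ppm:.0f} ppm")
# 10-bit: ~977 ppm, 12-bit: ~244 ppm, 14-bit: ~61 ppm of full scale
```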

Note 2: In reality, only the components in the design that are sensitive to process variation do not scale, so the area of the digital blocks will scale and the area of some of the analog blocks may scale. The solution designers typically adopt to maintain scaling is to implement new techniques, such as digitally assisted analog (DAA) design, to compensate for process variations. While adopting DAA may enable better scaling of the design, it also increases schedule risk and verification complexity.

References:

1)     M.J.M. Pelgrom, A.C.J. Duinmaijer, and A.P.G. Welbers, “Matching properties of MOS transistors,” IEEE Journal of Solid-State Circuits, vol. 24, pp. 1433-1439, October 1989.

2)     See Clopper-Pearson interval http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval

Virtuosity: Driving Along a Longer Route May Take You Home Sooner!

On my way back home every day, I need to make a decision — should I drive less, or more? That's because there are two different routes I can take home. The shorter route is usually busier at peak traffic times; the other route is long. When I reach the turning point, I am almost swayed into taking the shorter, seemingly straight path. On the days I give in to that temptation, I usually reach home late. It can be the same when using software — what may seem to be a harmless shortcut could cost you a lot of troubleshooting time. Here's how a customer recently experienced this when copying a library in Virtuoso.(read more)

Photonics Summit and Workshop 2017


Interested in learning about system-level integration of electronic/photonic devices?

The use of silicon photonics allows semiconductor designers to leverage the billions of dollars invested in existing manufacturing facilities, integrating electronic and optical components on the same die or in the same package. Breakfast Bytes blogger Paul McLellan has written quite a few excellent blogs about silicon photonics. In one blog post, he says, “The basics of silicon photonics are something that every semiconductor designer should have at least a little knowledge about.” Check out this post, it's a fantastic primer!

When integrating hybrid devices in which the electronic and photonic components are combined into a single IC, you need tools that integrate both design methodologies—electronic and photonic—as well as taking packaging and system design into account. Cadence has been involved with and has provided solutions for silicon photonic design and packaging for quite some time now, and in 2015 partnered with Lumerical Solutions and PhoeniX Software to create an integrated electronic/photonic design automation (EPDA) environment.

Learn about these solutions while networking with other expert users at the second annual Photonics Summit and Workshop on September 6 and 7 at the Cadence headquarters in San Jose.

On day one, industry experts will present on photonic IC design and packaging. Speakers include representatives from Hewlett Packard Labs, AIM Photonics, IBM, Chiral Photonics, Inc., and more.

Day two will consist of a hands-on workshop where you will learn system-level electronic and photonic design first hand. During the workshop, you will…

  • Add new elements to an existing photonics PDK
  • Assemble a PIC and its CMOS driving logic, fiber connector (from our partner Chiral Photonics), and a laser as a complete system
  • Play with Tektronix testing equipment

To see the full agenda and to register for the free event, go to www.cadence.com/go/silicon-photonics.

Virtuoso Video Diary: What Are Parametric Sets?

Over the past few IC6.1.7 and ICADV12.3 ISR releases, a lot of new and useful features have been added to Virtuoso ADE Explorer and Virtuoso ADE Assembler. An interesting one that recently caught my attention amidst this ever-increasing feature list is Parametric Sets in Design Variables. This feature could be a savior if you're working with a gigantic list of design variables or parameters with sweeps but don't want to run all the possible sweep combinations for them. Parametric sets help save time and also provide the flexibility to run a specific set of variables. To put it in simpler words: when you create a parametric set by combining two or more variables, only a selected set of sweep combinations is created by picking values from the same ordinal position for all the variables or parameters in the parametric set. This reduces the number of design points, thereby reducing the number of simulations.(read more)

Virtuosity: Saving, Loading and Sharing ADE Annotation Settings

The whole ADE annotation flow was overhauled way back in IC6.1.6 but at that time there was no way to share the annotation settings between designs, or to automatically load them. Well, in IC6.1.7 ISR13 we have added the ability to do both! (read more)

Virtuosity: What Color is Your Virtuoso Wearing Today?

Like you, Virtuoso can dress in a different color every day, too. Interested to know how? Read on to find out... (read more)

Virtuosity: Sweeping Multiple Config Views

Before IC6.1.7 ISR10, you could sweep multiple views in ADE for only one block in your design. What if you have more than one block that has multiple views that you want to sweep? Well from ISR10 onwards, you can do that. Here's how.(read more)

Virtuosity: Sweeping Multiple DSPF Views in ADE

Wouldn't it be great if you could have a view for your DSPF files and sweep them in an ADE session without having to add them as simulation files? Well, now you can! You can create a DSPF view just like any other view (schematic, layout, extracted), and it can be easily included in any ADE simulation. You can also combine this with the config sweep feature to sweep several DSPF views at once. Just note that the top-level test bench must be a config. Let's see how to do this...(read more)

The Art of Analog Design: Part 3, Monte Carlo Sampling


In Part 2, we looked at Monte Carlo sampling methods. In Part 3, we will consider what happens once Monte Carlo analysis is complete. Of course, we will need to analyze the results, so let’s look at some of the tools for visualizing what the Monte Carlo analysis is trying to show us about the circuit.

First, let’s review the results from the previous blog. The circuit being simulated is a capacitor D/A converter, or CAPDAC. The CAPDAC is used in a successive approximation ADC to generate the reference levels for comparison. The mismatch of the unit capacitors in the CAPDAC degrades the CAPDAC SINAD (Signal-to-Noise and Distortion Ratio) and is an important contributor to the overall SINAD of the ADC. This CAPDAC is used in a 10-bit ADC. Based on the error budget for the ADC, if the CAPDAC has a SINAD of 60dB or better, we will be able to meet our ADC SINAD target. The CAPDAC SINAD was simulated using Monte Carlo analysis with auto-stop, a 60dB specification for SINAD, a target yield of 3σ or greater, a confidence level of 90%, and the Low-Discrepancy Sampling (LDS) method. The simulation required 1755 samples to meet the 90% confidence level.

In the last blog post, we looked at the Monte Carlo results. The effect of process variation on the SINAD distribution is plotted in Figure 1. To help show how the CAPDAC performance compares to the specification, the pass/fail limits have been overlaid on top of the distribution: green is pass and red is fail.

Figure 1: CAPDAC SINAD distribution

The plot also has bars showing the mean value and the standard deviation markers from -3σ to +3σ, allowing us to visualize how much margin the CAPDAC has relative to the specification. For the CAPDAC, there is almost 2σ of margin between the specification and the -3σ limit of the distribution.

One observation from looking at the distribution is that it appears to have a long tail. In statistics, a distribution with a long tail has a large number of occurrences far from the central part of the distribution. Looking at the distribution, we can see that on the positive side there is only one point more than +2σ from the mean, while on the negative side there are many data points more than 3σ below the mean. Next, let’s apply another tool, the quantile-quantile plot. The purpose is to test whether our simulated distribution is a Normal (or Gaussian) distribution. A quantile-quantile plot is a technique to evaluate whether two distributions are the same by plotting their quantiles against each other, where the quantiles are points taken at regular intervals from the cumulative distribution function (CDF) of a random variable. The 0.5-quantile of a distribution is the median: half the samples in the distribution are higher in value than the median and half are lower. Since the distribution is skewed, the mean value will not be equal to the median value.

Figure 2: Quantile-quantile plot for CAPDAC SINAD

If the simulated distribution forms a straight line when plotted against the reference distribution (the Normal distribution), then the distributions match and the simulated distribution is Gaussian. As expected, the simulated distribution is not a straight line when plotted against the Normal distribution (see Figure 2); it is only Normal in the region from -1σ to +1σ. Another way to look at the effect of the long tail is to consider how the CAPDAC yield compares to the expected yield for a Normal distribution. For the CAPDAC, there is 1 failure in 1755 samples. The worst-case value of CAPDAC SINAD is 59.85dB, which is 5.2σ below the mean value. For a Normal distribution, the expected failure probability for a 5σ deviation from the mean is about 1 failure per 3.5 million attempts. The effect of the long tail, that is, the non-Normal nature of the distribution, is a significant reduction in yield compared to the yield expected from a Normal distribution. The quantile-quantile plot is thus a powerful tool for visualizing whether or not the simulated distribution is Normal.
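For readers who export their Monte Carlo samples, the same check can be sketched outside the design environment with scipy and matplotlib. The SINAD data below is fabricated (a skewed distribution with roughly the mean quoted above) purely to show the mechanics of the plot.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Fabricated, skewed SINAD samples (dB) standing in for exported Monte Carlo results.
rng = np.random.default_rng(0)
sinad_db = 61.15 - 0.1 * rng.standard_gamma(2.5, size=1755)

# Quantile-quantile plot against a Normal reference distribution.
stats.probplot(sinad_db, dist="norm", plot=plt)
plt.title("Q-Q plot of SINAD vs. Normal distribution")
plt.show()
# A long lower tail shows up as points bending away from the straight reference line.
```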

Next, let’s look at another measurement that is useful for designers. First, let’s determine the process capability index or Cpk value. The Cpk is a statistical measure of process capability which is the ability of a process to produce output within specification limits. For the CAPDAC, the Cpk is one of the outputs in the Virtuoso ADE Assembler results window (see Figure 3). The Cpk can only be output if a specification has been defined.

For a one-sided limit, Cpk is defined as the distance from the mean value to the specification limit, measured in standard deviations, divided by 3 standard deviations. For the CAPDAC, the numerator is 4.6σ, the distance from the mean value of 61.15dB to the 60dB limit (see the sigma-to-target value). The target yield was 3σ, so the denominator is 3σ, giving a Cpk of about 1.53.

The less precise way to think about Cpk, is to think of it as a measure of design margin. It tells us how much margin we have between the actual limit of the process and the user’s expectation for the process.
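
A minimal check of the Cpk arithmetic using the CAPDAC numbers quoted above:

```python
# Cpk for a one-sided (lower) specification limit: (mean - LSL) / (3 * sigma).
mean_sinad_db = 61.15    # mean SINAD from Monte Carlo analysis
lsl_db = 60.0            # lower specification limit
sigma_to_target = 4.6    # (mean - LSL) expressed in standard deviations

sigma_db = (mean_sinad_db - lsl_db) / sigma_to_target   # ~0.25 dB
cpk = (mean_sinad_db - lsl_db) / (3 * sigma_db)         # 4.6 / 3, about 1.53
print(f"sigma = {sigma_db:.3f} dB, Cpk = {cpk:.2f}")
```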

To summarize, we have looked at two tools for visualizing the results of Monte Carlo analysis and used them to identify problems. Plotting distributions allows us to understand how well centered a design is. Quantile-quantile plots allow us to look at the distribution and identify whether it has a long tail, since a long tail can translate into poor yield. And by using Cpk, we can quantify how much design margin we have. In the next blog post, we will start to look at what we can do to identify and correct issues.

Virtuosity: Power Filtering!

Finally, we have filters in the Corners Setup form, Results tab, Outputs tab, Data View, and Setup assistants in Virtuoso® ADE Explorer and Virtuoso® ADE Assembler. But they are not just for finding basic strings like vdd or 1p. They can do so much more: filtering for values within a range, finding strings containing all or any of the words you specify, filtering for prefixes or suffixes, and so on. Let's see what advanced filtering these filters are capable of.(read more)

Virtuosity: Can I Speed up My Plots?

If your Virtuoso® ADE Assembler, Virtuoso® ADE Explorer, or Virtuoso® ADE XL setup contains multiple sweeps or corner points, or if the transient simulations are time consuming, then plotting waveforms using Plot All may consume significant time and memory. Here, Quick Plot will help you out. Quick Plot plots outputs in Virtuoso Visualization and Analysis faster and with much less memory usage.(read more)

The Art of Analog Design Part 5: Mismatch Analysis II


In Part 4 of the series, we looked at applying mismatch analysis as a design tool. In Part 5, we will continue to look at mismatch analysis by applying the technology to other types of designs.

The first case we will look at is a circuit without a DC operating point. A dynamic comparator, see Figure 1, doesn’t have a quiescent operating point, making it difficult to analyze.

In this case, the offset voltage is measured using transient analysis. A positive and a negative staircase are applied at the input, and the input values at which the output switches are recorded; the average of these input levels is the offset voltage. To increase the resolution of the offset voltage measurement, the step size needs to be small; in this case, the step size of the staircase ramp is 100µV. A Verilog-A module was used as the signal source to generate the staircase, see Figure 2. For more details about measuring dynamic comparator offset voltage, please see the ADC Verification Workshop Rapid Adoption Kit on Cadence Online Support.
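The workshop kit uses a Verilog-A staircase source and transient simulation; the post-processing that turns the two recorded trip points into an offset number is simple enough to sketch, with hypothetical values, as follows.

```python
def comparator_offset(trip_rising: float, trip_falling: float) -> float:
    """Offset estimate from the input levels at which the comparator output switches.

    trip_rising:  input level that flips the output during the positive staircase
    trip_falling: input level that flips the output during the negative staircase
    """
    return 0.5 * (trip_rising + trip_falling)

# Hypothetical trip points recorded from a transient sweep (volts):
print(comparator_offset(trip_rising=1.2e-3, trip_falling=0.4e-3))  # 0.8 mV
```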

Looking at the comparator, we would expect that the mismatch of the input transistors is the primary source of offset voltage. After the Monte Carlo analysis, we can use scatter plots showing the random variables causing mismatch for three transistors: NM2, NM3, and NM4, see Figure 3a. For the devices in the differential pair, NM2 and NM3, we can see that there is correlation between the offset voltage and the input transistor variation; the correlation coefficient r is about 0.5. For the current source transistor, NM4, there is no correlation between the offset voltage and the transistor’s variation; the correlation coefficient r is about 0. So, the scatter plots are consistent with our expectations about how device variation impacts the offset voltage.

Again, we can see the utility and the limitations of the scatter plot. Qualitatively, the scatter plot allows us to visualize the relationship between the inputs (statistical variables) and the measured outputs. However, it is difficult to extract quantitative information from the results. So, while we can use scatter plots to confirm what we already know, they don’t really provide any additional information to designers.

We will use mismatch analysis to analyze the relationship between the variations and the offset voltage. The mismatch analysis results are shown in Figure 4. Again, we see that the offset voltage has a non-linear, second-order relationship with the statistical variables. We can also see that most of the variation, 99.935%, is accounted for by the mismatch results, and that ~90% of the offset voltage variation is due to the input transistor variation. Mismatch analysis considers the variation at the statistical-variable level: NM2.rn2 contributes 30%, NM3.rn2 contributes 29%, NM2.rn1 contributes 17%, and NM3.rn1 contributes 16%. While the naming convention could be more explicit, you can think of these variables as the individual contributions to variation, for example, gate oxide thickness variation and gate length variation. Another observation is that there is another source of offset voltage variation, the cascode transistors NM0 and NM1. While not significant, it is useful to know that mismatch analysis has enough resolution to identify small contributors.

Mismatch analysis provides designers a tool to analyze the effect of mismatch both qualitatively and quantitatively.

To summarize, the mismatch analysis is a useful tool to analyze the results of Monte Carlo analysis. In this case, we analyzed the effect of variation on a dynamic comparator. Traditionally it is difficult to analyze a dynamic comparator because it is not a linear circuit with a DC operating point. Perhaps more than anything else, the ability to analyze circuits that designers have not been able to analyze in the past is the true value of mismatch analysis.

The Art of Analog Design Part 4: Mismatch Analysis


In Part 3, we started to explore how to analyze the results of Monte Carlo analysis. In Part 4, we will consider the question: what is the relationship between process variation and the circuit’s performance variation? The tool for exploring this relationship is mismatch analysis in the Virtuoso® Variation Option (VVO).

Let’s start by looking at a simple example that shows the sources of offset voltage of a two-pole operational amplifier, see Figure 1.

Figure 1: Two Pole Operational Amplifier

Looking at the design, we would expect that the mismatch of the p-channel input transistors is the primary source of offset voltage. First, let’s look at the Monte Carlo simulation results for the op-amp, see Figure 2.

Figure 2: Monte Carlo Analysis Results

The results show that the offset voltage is ~7.3mV. While Monte Carlo analysis tells us how much offset voltage there is, it does not tell us anything about the source of the offset voltage or how much improvement can be achieved. So, what are the sources of the offset voltage? After Monte Carlo analysis, we can plot the relationship between the offset voltage and the threshold voltages of the input p-channel transistors, M17 and PM5, and of the n-channel transistors in the first-stage load current mirror. The scatter plots in Figure 3 show that there is no linear correlation between the threshold voltages and the offset voltage of the operational amplifier; the correlation coefficients are effectively 0.

Figure 3: Scatter Plots, Threshold Voltage versus Offset Voltage

Now let’s try using contribution analysis, see Figure 4.

Figure 4: Mismatch Analysis Results

Mismatch analysis shows the relationship between the threshold voltages and the offset voltage. The reason the scatter plot showed no correlation is that it looks for a linear correlation, whereas mismatch analysis reports that the dependency is second order (the label shows R^2). The results show that most of the variation, 99.997%, can be explained by the threshold variation of M17, PM5, NM4, and NM6. The results also show that ~70% of the offset voltage variation is due to the p-channel variation: the contribution from M17 is 34% and the contribution from PM5 is 34%. The other source of offset voltage variation is the n-channel threshold voltage, with a contribution of 30%.

Let’s use this information and see if we can improve the design. Since the p-channel transistors contribute most of the offset voltage, we will try an experiment: we will increase the p-channel transistor area by 16x, length by 4x and width by 4x, keeping the W/L ratio constant. Increasing the device area by 16x should decrease the effect of p-channel mismatch by a factor of four.

Figure 5: Monte Carlo Analysis with 16x P-Channel

The effect of scaling the p-channel transistors is to reduce the offset voltage of the op-amp from 7.2mV to 3.7mV. Doing some math, the p-channel offset contribution is ~6.4mV and the n-channel contribution is ~3.3mV. Checking the numbers, the initial offset voltage is √(6.4² + 3.3²) = 7.2mV. After device sizing, the offset voltage is √((6.4/4)² + 3.3²) = 3.7mV.
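The root-sum-square bookkeeping behind those numbers, spelled out:

```python
import math

sigma_p = 6.4e-3   # p-channel contribution to offset sigma (V)
sigma_n = 3.3e-3   # n-channel contribution to offset sigma (V)

before = math.hypot(sigma_p, sigma_n)      # sqrt(6.4^2 + 3.3^2) ~ 7.2 mV
after = math.hypot(sigma_p / 4, sigma_n)   # 16x area cuts the p-channel sigma by 4
print(f"before sizing: {before * 1e3:.1f} mV, after sizing: {after * 1e3:.1f} mV")
```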

This example shows how mismatch analysis can be used to understand the effect of process variation on circuit performance. While we understand qualitatively that the input transistors are the primary contributors to offset voltage, mismatch analysis provides us a tool for quantitative analysis of the variation. In the next blog, we will apply mismatch analysis to additional circuits.

The Art of Analog Design Part 5: Response to Frank’s Question


In the comments to blog #5, Frank Wiedmann asked about the correlation between the results of mismatch from Monte Carlo analysis and DC mismatch analysis. It is a fair question, and here is a short blog post to explore the topic. The example may not be realistic, but it is useful for exploring the effects of mismatch on a circuit.

Let’s start with a simple circuit—a resistively loaded differential amplifier with cascodes, shown in Figure 1. The process is the Cadence® 45nm GPDK. This test circuit gives us a good platform for exploring the effect of mismatch on circuit performance: the GPDK includes models for Monte Carlo analysis, and the results are easy to share.

  Figure 1: Differential Amplifier

As background, DC mismatch is an analysis that estimates the effect of mismatch on circuit performance from a single simulation. It is considerably faster than using Monte Carlo analysis. The drawback is that only the effect of mismatch on the DC operating point is considered, so we could not use it for the dynamic comparator (see Part 5). Originally, the model card needed to be modified for DC mismatch analysis because DC mismatch used different mismatch parameters than Monte Carlo analysis. Since about 2012, DC mismatch analysis reads either the stats block or the Monte Carlo process variations; the original DC mismatch parameters are still supported for backwards compatibility. For Virtuoso® ADE Explorer users, look in the Analyses tab for dcmatch. One point to keep in mind is that DC mismatch simulates the offset voltage at a specified output due to process variation; it can’t be used for derived measurements.

For Monte Carlo analysis, we used Low-Discrepancy Sampling and 1400 iterations to generate the distribution. From experience, this number of iterations should give a reasonable approximation to the expected distribution. The standard deviation of the output offset voltage is 10.557mV. The Monte Carlo analysis results are compared with the DC mismatch analysis results in Table 1.

Figure 2: Output Offset Voltage Distribution

The DC Mismatch analysis was run using the stats option, that is, the statistical information in the stats block is used for the DC mismatch analysis.  

 

 

                  Monte Carlo     DC Mismatch
Offset Voltage    10.56mV         10.42mV
Contributors      M1:rn2_18       M1:rn2_18
                  M0:rn2_18       M0:rn2_18

Table 1: Comparison of Monte Carlo and DC Mismatch Results

The comparison shows that the offset voltages are close but not quite identical. The difference in the results comes down to the approximation used in DC mismatch analysis: it assumes that the output distribution is Gaussian. This assumption allows the variation to be estimated without the many iterations Monte Carlo requires to calculate the actual distribution. This is a case where the assumption breaks down, because the tails of the distribution are not Gaussian. The output-referred offset voltage is plotted using a normal quantile-quantile plot in Figure 3. The results show that the tails of the distribution are not Gaussian; see the areas in the green boxes.

Figure 3: Quantile-quantile plot of Output Offset Voltage

 

One other item to notice is that DC mismatch and Monte Carlo mismatch analysis report the same contributors. The contributors are the random variables that result in the largest variation in the output offset voltage.

In summary, DC mismatch provides a reasonable approximation to the Monte Carlo mismatch results and can be used for predicting trends and worst-case corners. The limitation is that DC mismatch relies on the assumption that the distribution is Gaussian. As a result, for signoff, Monte Carlo analysis is the appropriate choice.


Virtuosity: Read Mode Done Right

Because of the ease with which you can set up complex sweep, corner and Monte Carlo simulations, the Virtuoso ADE tools are frequently used to perform verification and regression simulation runs. Those runs are most commonly done by accessing cellviews in read-only mode (RO), so that the “golden” simulation setups are not modified and there is no need to check out the cellviews from the design management (DM) vault. Opening an ADE XL view in read-only mode allows you to run simulations, but when you exit, the simulation results are deleted. In addition, if any small modifications are made in RO mode—correcting a typo, or adjusting a path—those changes are lost unless you save them to a new cellview, which is inconvenient and can lead to confusion. In Virtuoso® ADE Assembler and Virtuoso® ADE Explorer, working with maestro views in read-only mode is more powerful and flexible. The added functionality of RO mode also enables Virtuoso® ADE Verifier to run the maestro views it needs for verification so that the end users’ work is not disrupted.(read more)

Simplifying the Memory Design Process


In today’s SoC designs, the memory control logic and memory arrays take up a lot of real estate, and as part of the larger system they largely determine the performance of the application. Regardless of the processors and the interconnect, the memory system provides the instructions and operands, and the application cannot execute any faster than the memory system can deliver them. Due to the constant demand for larger memories and higher performance at advanced nodes, design and verification engineers are faced with multiple challenges.

One of the biggest challenges in memory design is time to market. The designs need to be completed, and ramped to yield, in a very short time frame. But in trying to complete this process, memory designers often face multiple tool and flow challenges as well.

One of the flow challenges is that different tools are used for the design, verification, and model creation steps. For example, memory cell design needs a very accurate SPICE simulator as well as extensive variation analysis. But how does a designer ensure consistency and accuracy across multiple tools? For example, the FastSPICE tool used for margin analysis needs to be consistent with the tools or scripts used to generate Liberty timing and power models. These models convey the performance of the memory to the SoC designer, so it’s very important that they accurately represent the design. Ensuring consistency across the tools and flows makes this process even harder and longer for designers.

Additionally, there is a significant number of PVT corners that designers need to cover as they move to advanced nodes. Our research shows that about 196 PVT corners are needed to accurately characterize designs at 16nm and below, while only 12 PVT corners are needed at 90nm. More PVT corners means more verification time, which puts additional pressure on time to market. Finally, while performing memory characterization, accurate timing, circuit simulation, power and leakage, and performance metrics must all be met, and until now the only way to do that has been through the use of various point tools. To solve these tool and flow challenges, we have created a memory design, verification, and characterization solution. This means that our customers can focus on delivering their memory designs on schedule with the right performance and power, rather than on tools or flows.

The new Cadence® Legato Memory Solution is the industry’s first integrated solution for memory design and verification. It provides a one-stop shop for all memory design, verification, and characterization needs, eliminating the complexity of piecing together point tools for multiple design and verification tasks. The Legato Memory Solution delivers up to 2x runtime improvement while meeting demanding design schedules. This solution sets a new standard for completing memory designs on time and with accuracy.

See also the Breakfast Bytes post Legato: Smooth Memory Design.

Virtuosity: All New XStream In - The Translation Expressway

A layout design goes through several iterations and multiple data exchanges across tools for different types of processing during the design process. At each stage, a large, hierarchical layout needs to be imported into Virtuoso. Therefore, you need a translation tool that is fast and reliable. The new IC617/12.2 XStream In translator with its latest performance upgrades ensures this and a lot more. Read on to find out…(read more)

Art of Analog Design Part 7: Mismatch Tuning


In days of future past, we looked at DC mismatch analysis and compared it to Monte Carlo analysis for analyzing the effect of device mismatch on the offset voltage of a differential amplifier. We found that DC mismatch does provide a good estimate of the effect of mismatch, with the limitation that it assumes the offset voltage has a Gaussian distribution. Since DC mismatch analysis needs only a single simulation to generate an estimate, we can use it for design exploration. For example, when looking for the worst-case corner for offset voltage, we can use DC mismatch analysis to reduce simulation time.

Suppose that we want to find the device size that meets our design specification for offset voltage. Let’s start with the same differential amplifier and assume that the offset voltage (1-sigma value) should be 1mV. How can we find the optimum gate width for this offset voltage? One option is to perform DC mismatch analysis and sweep the n-channel transistor gate width, looking for the gate width that meets the 1mV offset voltage specification.

Figure 1: Parametric Sweep of Device Size vs. Offset Voltage

In this case, we swept the number of fingers for the input pair and can see that we can significantly reduce the area without compromising the offset voltage of the amplifier.

There is an alternative to the parametric sweep approach for tuning offset voltage: we can use mismatch analysis to perform the same task. In the Mismatch Contribution window, click the Mismatch Tuner icon, see the red box in Figure 2.

Figure 2: Using Mismatch Tuner to Size Transistors for Offset Voltage

When you click the Mismatch Tuner icon, you get slider bars that you can adjust, and the results in the Contribution Analysis window are updated. What we see here is that if we reduce the gate width of the input transistors by 60%, the offset voltage is still 1mV. This result is consistent with the parametric sweep of DC mismatch analysis: we can reduce the size of the input transistors by 60% and still meet our objectives for offset voltage.

So, which method should you use? If all you are interested in is the offset voltage of a linear analog circuit, then using DC mismatch with a parametric sweep may be sufficient. However, in many other cases this option is not available. Consider the dynamic comparator: it does not have a quiescent operating point, so we can’t use DC mismatch to estimate the input stage scaling. In this case, mismatch tuning can be used. Suppose you need to achieve a 500uV offset voltage; you can either scale the devices or add additional circuitry to calibrate out the offset. After running Monte Carlo analysis, see Figure 3, the current offset voltage of the comparator is about 1mV, good but not good enough to meet the target.

Figure 3: Dynamic Comparator Offset Voltage

So, let’s try using the mismatch tuner, see Figure 4. In this case, we see that we need to increase the device size by 4x to reduce the offset voltage to an acceptable level. Based on this result, the designer needs to decide which approach, scaling the input devices or adding an offset calibration, better optimizes area and power. So, we can use mismatch tuning to gain insight into how variation impacts offset voltage. Another use case to consider: suppose you have several parameters to trade off, such as offset voltage, power supply rejection ratio, common-mode rejection ratio, and bandwidth. In this case, mismatch tuning allows you to see the interaction between device scaling and multiple parameters. So, while the two approaches overlap, the mismatch tuner is the more general solution for analyzing the effect of mismatch on circuit performance.


Figure 4: Dynamic Comparator Offset Voltage Mismatch Tuning

One thing to keep in mind when using either DC mismatch analysis or mismatch tuning is that these techniques rely on mathematical approximations to estimate the effect of mismatch, so the results should be verified with Monte Carlo analysis. In this case, the results were checked after using the mismatch tuner. Before sizing, the offset voltage was 938uV. Mismatch tuning suggested that increasing the device size by about 4x would reduce the offset voltage to 480uV. Monte Carlo analysis shows that the actual offset voltage after tuning is 406uV, see Figure 5.

Figure 5: Dynamic Comparator Offset Voltage Monte Carlo Results after mismatch tuning
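
As a quick Pelgrom-style cross-check of the numbers above (a back-of-the-envelope sketch, not part of the tool flow):

```python
import math

sigma_before_uv = 938   # comparator offset sigma before sizing (uV), from Monte Carlo
area_scale = 4          # device-area increase suggested by the mismatch tuner

predicted_uv = sigma_before_uv / math.sqrt(area_scale)   # ~469 uV
print(f"Pelgrom prediction after 4x area: {predicted_uv:.0f} uV")
# Mismatch tuner estimate: ~480 uV; Monte Carlo after sizing: ~406 uV.
```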

Over the last two blog posts, we have looked at DC mismatch analysis. In the previous blog, we compared the results from DC mismatch analysis to Monte Carlo analysis as a tool for estimating offset voltage. In this blog, we looked at using DC mismatch and mismatch tuning as design tools to improve our design. In the next blog, we will take a similar look at AC mismatch analysis.

Dealing with AOCVs in SRAMs


Systems on Chip, or SoCs as they’re more commonly called, have become increasingly complex and incorporate a dizzying array of functionality to keep up with the evolving trends of technology. Today’s SoCs are humongous multi-billion-gate designs with huge memories that enable the complex, high-performance functions executed on them. It is quite common for about 40% of an SoC’s real estate to be used for Static Random Access Memory (SRAM). SRAM design is a complex and highly sensitive process, and what we intend to design in silicon is often different from what actually comes out of the manufacturing process. This is due to Advanced On-Chip Variations, or AOCVs.

AOCVs occur in the device manufacturing processes, and there are two kinds:

  1. Systematic Variations: These are caused by variations in gate oxide thickness, implant doses, and metal or dielectric thickness. They are deterministic in nature and exhibit spatial correlation; that is, they are proportional to the cell location of the path being analyzed.
  2. Random Variations: These are random, as the name suggests, and therefore non-deterministic. They are proportional to the logic depth of the path being analyzed and tend to statistically cancel each other out given a long enough path (see the short sketch after this list).
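
A small sketch of why random variations partially cancel along a deep path: the mean path delay grows linearly with logic depth while the standard deviation of independent stage variations grows only as the square root of the depth, so the relative spread shrinks as 1/sqrt(depth). The per-stage numbers below are hypothetical.

```python
import math

stage_mean_ps = 20.0   # hypothetical mean delay per logic stage (ps)
stage_sigma_ps = 2.0   # hypothetical random sigma per stage (ps)

for depth in (1, 4, 16, 64):
    path_mean = depth * stage_mean_ps
    path_sigma = math.sqrt(depth) * stage_sigma_ps  # independent variations add in RSS
    print(f"depth {depth:3d}: relative spread = {path_sigma / path_mean:.3f}")
# The shrinking relative spread is why deeper paths warrant smaller random derates.
```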

As can be deduced, the effects of these variations become more pronounced as process geometries shrink, so dealing with them effectively is crucial to the proper functioning of an SoC. And therein lies the rub.

Traditional Solutions for AOCVs in SRAMs

AOCVs need to be modeled effectively so that their effects can be taken into account for the SRAM design to be successful. This means the design needs to be simulated to account for both the random and the deterministic process variations. Most companies deal with this in one of the following two ways:

  1. Running a Monte Carlo simulation on the full memory instance RC extracted netlist

This approach involves creating a simulatable instance netlist from the instance schematic and running Monte Carlo simulations on the complete netlist, multiple times. This gives the most accurate results. However, it is an incredibly CPU- and memory-intensive approach, with run times lasting several days. Additionally, it has huge runtime memory requirements and needs bigger LSF machines.

  2. Running Monte Carlo simulations on the critical path RC netlist

This approach involves reducing the netlist drastically by identifying repetitive cells in the memory and replacing them with a load model. You then create a critical path schematic for each component to be simulated and run Monte Carlo. While this approach is definitely much faster than the previous one, it still involves several thousand nodes and instances, and runtime is still on the order of a few days. Additionally, it requires time to create critical path schematics for the different components and to ensure the setup is correct. Creating a critical path involves manual effort and is error prone, making it a less than ideal solution.

So what is a designer to do?

Enter the approach used by our customer, Invecas. Their solution is based entirely on the Legato Memory Solution, specifically Liberate-MX runs with Spectre simulations. It relies on reusing the characterization database from the Liberate-MX runs, which means there is no additional time spent on setting up the environment. It also reuses the partition netlist created by the Liberate-MX flow; Liberate has the built-in intelligence to identify the dynamic partition and the activity factor. This approach results in the least amount of runtime and memory required.

So how does this work?

Liberate runs a FastSPICE tool under the hood to identify the worst-case path that is active and toggling, and extracts only that path to work on. Then an accurate SPICE run is performed to produce the accurate .libs. Generating these accurate .libs is already part of the Liberate-MX flow and available today. Invecas then modified this flow for AOCV by taking this partition, with all the accompanying setup and nodes, and adding a couple of commands for Monte Carlo runs. The script now runs Monte Carlo on the greatly reduced partition and returns AOCV models with all the derating values in a matter of hours, instead of days or even weeks.

The comparison of results between the three approaches can be summarized below.

                     Method 1:            Method 2:        Invecas Method:    Improvement      Improvement
                     Full Instance Sims   Critical Path    Partition          over Method 1    over Method 2
                     (300 MC runs)        Sims             Netlist Sims
No. of Devices       7,440,000            17,000           560                13,285.71x       30.36x
No. of Nodes         22,400,000           317,000          12,300             1,821.14x        25.77x
No. of RC Elements   22,000,000           231,000          12,000             1,833.33x        19.25x
Run Time (Hours)     350                  84               1.45               241.38x          57.93x
Run Memory (GB)      50                   10               1                  50x              10x

The side-by-side testing clearly shows that the Invecas method using the Legato Memory Solution greatly reduces the number of devices, nodes, and RC elements that the Monte Carlo run uses, from several million to a few thousand. This automatically reduces the runtime and memory requirements by orders of magnitude, thereby solving one of the biggest problems faced by designers today.
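
The improvement factors in the table are simply the ratio of each reference method's cost to the Invecas method's cost; for example, using the run times:

```python
# Improvement factor = reference method's run time / partition-netlist run time.
runtime_hours = {"full_instance": 350.0, "critical_path": 84.0, "partition": 1.45}

for ref in ("full_instance", "critical_path"):
    factor = runtime_hours[ref] / runtime_hours["partition"]
    print(f"speed-up vs {ref}: {factor:.2f}x")
# 241.38x vs the full-instance flow and 57.93x vs the critical-path flow.
```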

Please visit our page to find out more about this process, or to read about the Cadence Legato Memory Solution.
