
Posted in Top Stories

SmartShell – A Unique Software Interface for Design and Production

By Shu Li, Business Development Manager, Advantest America and Michael Braun, Product Manager, Advantest Europe

Before device test can take place on automated test equipment (ATE), device-specific test programs need to be developed for the target device and test system. As part of this process, a large amount of digital test content (patterns) gets translated from EDA (design/simulation) to ATE (test) format and needs to be debugged and characterized on the target tester.

In the mixed-signal (MX) and radio-frequency (RF) domain, scripts in various languages (tcl, Python, LabView, etc.) are often used for device bring-up and characterization on bench instruments, using early device samples on an evaluation board, either before ATE test program development starts or sometimes in parallel.

Because these scripts, which are often interactive, are not natively executable on the production test system, ATE users have developed proprietary solutions to bridge the gap between ‘bench type’ engineering test and the production test environment. Such bridges make it possible to leverage some of the early device learnings for volume testing, or simply to run the same test scripts in the two very different environments.

Both digital pattern validation and MX/RF script execution or conversion to ATE have potential for improvement and standardization, which will benefit both time-to-market (TTM) and time-to-quality (TTQ). This article will provide further details for both areas.

Digital (DFT) pattern bring-up and validation

Test patterns for scan, built-in self-test (BIST), functional, or other digital tests are typically created by design or DFT engineers in their design/simulation (EDA) environment and then handed over to the test department, where they are converted to the native ATE pattern format and integrated into the production test program. As part of this process, all patterns need to be validated and characterized on the tester, to make sure that they work as intended and have enough margin to guarantee a stable production test.

This pattern bring-up and validation process can be very time-consuming because initial pattern generation and bring-up/validation are typically done in two very different environments: design/DFT/simulation versus test engineering. The design or DFT engineer creates the test patterns, but it is the test engineer’s responsibility to convert them and run them against the actual silicon. If a pattern fails, the test engineer produces a log file with the failing cycles and sends it to the designer, whose task is then to identify the root cause of the failures in the simulation environment and to regenerate a corrected test pattern as needed. The corrected pattern must be translated and validated on the tester again, going back and forth between design and test.

Often, design/DFT and test engineering are isolated from each other, in two different locations, communicating by email or FTP. The test engineer notifies the DFT engineer of discovered errors, but the latter may not get around to re-simulating the test patterns immediately, so the test development process incurs delays. The majority of patterns may pass, but some tricky ones can take months of re-spins, which does not help get working products to market quickly. This traditionally manual process – offline pattern generation, conversion and download, then emailing feedback about errors – is painful and time-consuming (Figure 1).

If there were a way to execute and validate the generated patterns directly from the DFT/simulation environment without going through the full circle of pattern translation and fail cycle collection for every minor change, it would benefit all parties involved and reduce the pattern bring-up cycle time.

 

Figure 1. The debugging process involves lengthy communication between design and test, requires significant learning, and is prone to errors, leading to lengthy cycle times.

 

Scripts for mixed-signal/RF ‘bench instrument’ test on ATE

Mixed-signal and RF testing involves, besides some digital resources to set up and control the device, additional analog and RF instrumentation. In a lab environment, these resources are benchtop instruments such as oscilloscopes, spectrum analyzers, waveform generators and other tools. On the bench, each test requires specific control scripts for both the device and the various lab instruments involved. On the ATE system, fully integrated hardware instruments are used, controlled by standardized software components that are part of a generic test program.

Bench instruments often have higher precision for specific tasks, but they are not as universal as ATE resources and cannot come close to the throughput ATE can deliver. For volume data collection in characterization, significant effort is needed to collect data from many devices in a reasonable amount of time. Leveraging an ATE for tasks normally done in the lab/bench environment speeds up this data collection significantly and helps smooth the transition between design/bench and ATE. In this context, it would be very helpful to have a solution that allows moving seamlessly back and forth between the lab/bench environment and the ATE, without the need to convert bench-type scripts into ATE ‘native’ test programs. Running the exact same script(s) on the bench AND on the ATE system would improve correlation and TTM, while leveraging knowledge from both environments.

Figure 2. Time to market is a major issue when dealing with scripting for mixed-signal/RF devices. Producing a working customer sample can take 9-12 months, depending on chip size, type, etc.

Building a unified interface to bridge between design and test

What’s needed to address these challenges is an easy-to-use client/server environment that simplifies the communication between design and test to enable smart debugging. Advantest has developed a software option for its V93000 system-on-chip (SoC) test system that provides such a solution.

The newly developed SmartShell is a software environment for digital pattern validation and native script execution on ATE. The interface links the DFT/bench environment directly to the V93000 tester, without the need to convert patterns and scripts to the tester’s ‘native’ data format. This allows fast pattern bring-up and characterization, enabling DFT engineers to validate their patterns faster and enabling designs to be characterized more efficiently before they are released to production on the V93000 system. The block diagram in Figure 3 illustrates the dataflow process.

Figure 3. SmartShell data flow, from pattern/script generation to ATE and back.

With this new tool, porting different test content becomes easier and more straightforward, giving designers the freedom to incorporate various tasks into their test program without having to think about how to port them to an ATE system. The tasks that work best for the device under development are then converted when it comes to manufacturing.

Engineers in both design and test can use the tool. The DFT engineer can run a simple script instructing the tool to check a new pattern, or to loop over a number of patterns while varying conditions such as voltage or frequency, and can access the results directly from the DFT environment without having to learn the native formats and software environment of the test system. The test engineer can run scripts originally written for a totally different environment and quickly compare ATE results with results from the bench instrumentation. The command interface controls functionality and execution, and allows the results to be viewed in the engineer’s preferred format (see Figure 4).

Figure 4. The software package features an interface that is easy to use for design and test engineers alike.
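As a concrete illustration, a bring-up loop of the kind described above might look like the following Python sketch. The `SmartShellClient` class and all of its method names are hypothetical stand-ins, not the actual SmartShell command set; the client is stubbed out locally so the control flow can be shown end to end.

```python
# Illustrative sketch of a pattern bring-up loop over voltage/frequency
# corners. SmartShellClient and its methods are hypothetical stand-ins
# for a tester command interface, stubbed here so the sketch runs.

class SmartShellClient:
    """Stand-in for a command interface to the tester."""
    def set_condition(self, vdd, freq_mhz):
        self.vdd, self.freq_mhz = vdd, freq_mhz

    def run_pattern(self, pattern):
        # A real client would execute the pattern on the tester and
        # return pass/fail plus failing cycles; we simulate a pass.
        return {"pattern": pattern, "vdd": self.vdd,
                "freq_mhz": self.freq_mhz, "passed": True, "fails": []}

def shmoo(client, patterns, vdd_points, freq_points):
    """Run every pattern at every (vdd, freq) corner; collect results."""
    results = []
    for pattern in patterns:
        for vdd in vdd_points:
            for freq in freq_points:
                client.set_condition(vdd, freq)
                results.append(client.run_pattern(pattern))
    return results

client = SmartShellClient()
results = shmoo(client, ["scan_stuck_at", "mbist_all"],
                vdd_points=[0.72, 0.80, 0.88],
                freq_points=[100, 200])
print(len(results))  # 2 patterns x 3 voltages x 2 frequencies = 12 runs
```

In practice the loop body would collect failing cycles per corner and hand them straight back to the DFT environment, rather than emailing log files.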

SmartShell’s key capabilities include:

  • On-the-fly control of tester resources for digital, mixed-signal, RF and DC measurements
  • Fast internal pattern conversion, execution, and back-propagation of results
  • Ease of programming using any command-based script language
  • Support for customized script languages via a bridge to its standard set of commands
  • Auto-recording/generation of setups for early production to ensure reusability
  • Compatibility with SmarTest 7 (DFT/pattern validation only) and SmarTest 8 (scripting)

Summary

SmartShell represents a solution to bridge the gap between design and test, delivering capabilities for pattern validation and script execution that are beneficial regardless of company size or device type. Early validation can be done in a well-contained design or bench environment, without the need to ‘learn the tester.’ The highly programmable SmartShell interface for the V93000 allows experts to best utilize their individual skillsets to debug devices effectively and efficiently in a highly integrated manner. The tool significantly shortens the turnaround times for high-quality test patterns and scripts, enabling device makers to achieve both faster TTM and lower overall cost of test.


Inline Contact Resistance Solves Wafer Probing Challenges

By Dave Armstrong, Director of Business Development, Advantest America, Inc.

Testing is increasingly being conducted at the wafer level as well as at lower voltage levels, necessitating even greater test accuracy. Achieving this accuracy is often hampered by poor contact resistance (Cres). In this environment, conventional continuity tests – in which current is applied in parallel to all pins and per-pin diode voltage is measured to verify continuity between the tester and the internal die – are not adequate for identifying yield-limiting contact problems. Moreover, they have no value in determining potential issues with probe card degradation over time.

Probe cards are often subjected to offline electrical contact resistance measurements, but contact resistance grows as the probes become dirty. As wafer probing progresses, if one pin starts to make poor contact, the situation will worsen, resulting in a large quantity of potentially good parts being rejected. To keep accumulated residue from degrading probe and test quality, inline contact resistance measurement is becoming essential. Similar to the shift toward inline process control that became industry-standard in the early 2000s, measuring contact resistance inline will mitigate the problem of dirty or damaged probes in a way that continuity testing cannot.

Key industry trends

Moving to inline contact resistance is critical for a number of reasons. First, systems-on-chip (SoCs) face a daunting roadmap for wafer probing, as Figure 1 indicates.

Figure 1. The International Technology Roadmap for Semiconductors (ITRS) spotlights significant challenges for wafer probing through 2020.
*International Technology Roadmap for Semiconductors 2015
**Full Wafer Contact

The data shown in the table for 2016 – 66,000 individual probes making contact with one die – was fairly conservative. Today, the industry is exceeding those numbers, which were published in 2015, by more than 150%. Every probe must be clean and accurate, and therein lies the challenge. Sometimes, the same die is being probed three to four times – perhaps at a different temperature each time – and each probe kicks up oxide “dirt.” In addition, dirt accumulates as the probe moves across the wafer (through the lot) since the same probe is used for each die on the wafer. It’s a problem of numbers, repeatability and trends for a series of wafers and a series of die.

Further complicating this, engineers are now doing more than final test at wafer probe, e.g., at-speed testing and Known-Good-Die (KGD) testing. Getting to KGD is, of course, the ultimate goal – they want to touch down on the die and know that everything works before packaging. This is not easy to achieve. More types of testing, such as high- and low-temperature tests, must be performed – all of which will be challenged by poor contact resistance.

Moving to contact resistance measurements is also critical because of the need to accommodate probe planarity issues. In the ATE environment, the probe card needs to be held as planar as possible while a force of 200 kg or more pushes down on the probes. This pressure causes the probe assembly to bow, making true planarity difficult to maintain. The probe micrograph in Figure 2 provides an example of this problem: the contact resistance in the center of the die was much higher than at the periphery, and much lower at the southern edge. The culprit was bowing up in the center of the die, which created a balloon effect that caused the center and part of the northeast portion to be bad. While a reading of anything less than 8 ohms is generally considered acceptable, high-power probing requires less than 4 ohms, making Cres requirements much tighter in high-power and extreme-temperature environments.

Figure 2. This example contact resistance (Cres) plot clearly shows bowing in the center of the probe.

Contact resistance measurement process

The diagram in Figure 3 represents circuits typically found in a variety of devices, each of which can benefit from inline Cres measurement. While many readers will be familiar with these diagrams, here is a brief summary of each type: 3a is a traditional I/O circuit with two diodes, one connected to a power supply and one to ground; 3b is similar, with a resistor added to each diode in series; 3c consists of one large diode with a resistor in series; 3d contains a single diode going to ground only and not a power supply; 3e is the opposite, with the diode going to the power supply and not to ground; and 3f represents the class of interfaces known as SerDes, used in high-speed communication. These circuits are unknowns in many respects – they are very different from any other type of circuit and by far the most difficult to assess for contact resistance.

Figure 3. Each type of circuit illustrated in this black box diagram can benefit from inline Cres measurement.

Inline contact resistance measurements can be performed in a variety of ways. Using standard digital pin parametric measurement unit (PPMU) resources enables tracking changes to contact resistance – either over time or positionally. To measure the contact resistance of I/O pins, the engineer basically forces currents, measures voltages, and then performs calculations to determine Cres.

Conceptually, Cres is calculated from two same-polarity forced currents and the corresponding measured voltages:

Cres = [(V2 − V1) − ΔVdiode] / (I2 − I1)

The diode voltage change ΔVdiode can be calculated by looking at the diode equation:

I = Is × (e^(qV/ηkT) − 1)

which, for forced currents well above the saturation current Is, can be reordered to calculate the change in diode voltage between the two forced currents:

ΔVdiode = (ηkT/q) × ln(I2/I1)

The challenge with this equation is what value to use for the diode ideality η. This value is not a constant; it varies with technology, process, and transistor geometry. All the other values are known, e.g., q = electron charge, k = Boltzmann constant, T = die temperature, etc. Because, as noted above, different device pins have different (or no) diodes, the diode configuration becomes critical when trying to determine the value of η. Since diodes may exist to ground, to supply, or to both, either positive or negative currents/voltages may need to be used in order to obtain valid Cres measurements.
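As an illustrative numeric sketch (the function names and sample values are ours, not from the article), the diode-corrected Cres computation looks like this in Python, checked against synthetic data with a known resistance:

```python
import math

K_BOLTZMANN = 1.380649e-23    # J/K
Q_ELECTRON  = 1.602176634e-19  # C

def delta_vdiode(i1, i2, eta, temp_k=298.15):
    """Diode voltage change between two forced currents (same polarity)."""
    return eta * K_BOLTZMANN * temp_k / Q_ELECTRON * math.log(i2 / i1)

def cres(i1, v1, i2, v2, eta, temp_k=298.15):
    """Contact resistance from two (I, V) points, diode-corrected."""
    return (v2 - v1 - delta_vdiode(i1, i2, eta, temp_k)) / (i2 - i1)

# Synthetic check: build V = Cres*I + (eta*k*T/q)*ln(I/Is) with known values
eta_true, r_true, i_s = 1.8, 5.0, 1e-12
def v_pin(i):
    return r_true * i + eta_true * K_BOLTZMANN * 298.15 / Q_ELECTRON * math.log(i / i_s)

print(round(cres(0.005, v_pin(0.005), 0.020, v_pin(0.020), eta_true), 3))  # 5.0
```

Because the diode term cancels exactly when the right η is used, the recovered resistance matches the one built into the synthetic data.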

The curves for each pin type also need to be individually analyzed to solve for diode ideality η. Once determined, ideality values don’t seem to change for a given process and design. However, ideality values can vary broadly – the pins shown on the graph in Figure 4 have idealities between +60 and -2.6. Many different pins are superimposed on top of each other, all showing very different performance. The key point to note here is that, when the correct value is determined for η, Cres doesn’t change with different current levels.

Figure 4. By looking at this plot, the test engineer can determine which type of ESD protection circuit is being employed, and use portions of the plot to calculate η.

 

Determining values for η

The process for determining η involves the following steps:

  1. Force different currents into and out of all DUT pins and measure the voltage. For the purpose of this article, the currents selected were ±20mA, ±10mA, ±5mA, ±2mA, ±1mA and ±0.5mA.
  2. Select three positive or three negative measurements – (I1, V1), (I2, V2), (I3, V3) – and calculate ideality by eliminating Cres from the pairwise differences:
     η = (q/kT) × [(V2 − V1)(I3 − I2) − (V3 − V2)(I2 − I1)] / [ln(I2/I1)(I3 − I2) − ln(I3/I2)(I2 − I1)]
  3. Try different current values in the equation to check if the ideality stays relatively constant. With the right value for η the result will change very little. Also, all pins with a similar I/O buffer design will have the same η value.
  4. Perform a final check of the ideality selected by using the value in the Cres equation. The resistance value should be positive and will not change with different current levels.
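Steps 2 and 3 above can be sketched as follows. The closed-form solve assumes the two-point model V = Cres·I + (ηkT/q)·ln(I/Is) used earlier; the current values and the synthetic check are illustrative:

```python
import math

K = 1.380649e-23      # Boltzmann constant, J/K
Q = 1.602176634e-19   # electron charge, C

def ideality(p1, p2, p3, temp_k=298.15):
    """Solve for eta from three same-polarity (I, V) points by
    eliminating Cres from two pairwise difference equations."""
    (i1, v1), (i2, v2), (i3, v3) = p1, p2, p3
    num = (v2 - v1) * (i3 - i2) - (v3 - v2) * (i2 - i1)
    den = math.log(i2 / i1) * (i3 - i2) - math.log(i3 / i2) * (i2 - i1)
    return (num / den) * Q / (K * temp_k)

# Synthetic check with a known eta and Cres
eta_true, r_true, i_s, T = 1.5, 7.0, 1e-12, 298.15
v = lambda i: r_true * i + eta_true * K * T / Q * math.log(i / i_s)
pts = [(i, v(i)) for i in (0.001, 0.005, 0.020)]
print(round(ideality(*pts), 3))  # 1.5
```

Repeating the solve with other current triplets (step 3) should return nearly the same η if the model fits the pin.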

At this point, production measurements of Cres can be performed by simply making two current-force/voltage-measure operations (of the same polarity that was used when calculating η) and then performing the following calculation for Cres:

Cres = [(V2 − V1) − (ηkT/q) × ln(I2/I1)] / (I2 − I1)

Example Cres measurement results are shown in Figure 5. Measurements were taken at two different force levels (140 lbs. and 92 lbs.), and on a pin-by-pin basis, Cres rose by about 2 ohms between them. The orange plot at 69 ohms highlights a failure in the making. Cres should be lower with higher force, so this tells us that the contactor bowed.

Figure 5. This graph provides the distribution of measurement results obtained at two different overdrive levels. Pins with resistors in series with their ESD diode are clearly visible at ~ 33Ω. Those without are at ~9Ω.

Determining Cres of SerDes pins

SerDes pins are difficult to analyze. They often have pre-emphasis and equalization circuits on the inputs and outputs to match the on-die circuitry to their transmission lines, which greatly complicates Cres measurement on these pins.

Figure 6. Changes in termination resistance values can help determine Cres for SerDes pins.

As seen in Figure 6, the slope of the linear portion of the I-V curve is about 100 ohms. SerDes pins typically have 100-ohm termination resistors, so the I-V curve shows this termination resistance, not Cres. The good news is that these termination resistance values do change as Cres changes – so the engineer can measure the nominal 100 ohms using traditional Ohm’s law equations, without the diode voltage adjustment, and then watch for the measured value to increase as Cres degrades.
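A minimal sketch of this monitoring idea (function names and numbers are illustrative): compute the slope of the linear region with plain Ohm’s law and track its growth over a baseline:

```python
def termination_resistance(i1, v1, i2, v2):
    """Slope of the linear region of the SerDes I-V curve (plain
    Ohm's law, no diode-voltage adjustment) -- nominally ~100 ohms."""
    return (v2 - v1) / (i2 - i1)

def cres_shift(baseline_ohms, measured_ohms):
    """Apparent Cres growth shows up as an increase over the baseline."""
    return measured_ohms - baseline_ohms

# Illustrative numbers: a clean baseline, then a later re-measurement
r0 = termination_resistance(0.001, 0.100, 0.002, 0.200)  # ~100 ohms
r1 = termination_resistance(0.001, 0.103, 0.002, 0.206)  # ~103 ohms
print(round(cres_shift(r0, r1), 2))  # 3.0 -- about 3 ohms of added Cres
```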

The test methods described so far are all two-terminal inline test methods. It’s important to recognize that two-terminal measurements inherently include stray resistances in addition to the Cres value of interest. This is shown in Figure 7.

Figure 7. Two-terminal contact resistance stray values are compensated for over time.

While the test system is designed to compensate for all the resistances in the grey fields, it cannot compensate for the green resistors. As a consequence, the Cres values measured by these techniques will 1) be higher than normally expected, and 2) vary from pin to pin due to fixture design differences. The best way to deal with this situation is to simply save a baseline set of Cres as measured by these two-terminal techniques and then monitor the difference between the baseline and the Cres values measured over time.
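The baseline-and-delta approach can be sketched in a few lines of Python; the pin names, values, and the 2-ohm drift limit are purely illustrative:

```python
def cres_drift(baseline, measured, limit_ohms=2.0):
    """Flag pins whose two-terminal Cres has grown by more than
    limit_ohms over the saved baseline. Absolute values include
    fixture resistance, so only the delta is meaningful."""
    return {pin: measured[pin] - baseline[pin]
            for pin in baseline
            if measured[pin] - baseline[pin] > limit_ohms}

baseline = {"A1": 9.1, "A2": 33.4, "A3": 9.0}   # ohms, first-article values
measured = {"A1": 9.3, "A2": 33.5, "A3": 14.2}  # ohms, after many touchdowns
print(cres_drift(baseline, measured))  # flags A3 (delta of about 5.2 ohms)
```

Per-pin deltas sidestep the fixture-to-fixture variation that makes the absolute two-terminal values hard to interpret.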

Determining Cres of supply pins

A supply’s contact resistance can also be measured using a PPMU and a device power supply (DPS) monitor pin. When connecting the DPS in the test system to the DUT pins, it is becoming common practice to route one of the DUT supply pins to a digital PMU pin. In addition to providing an enhanced ability to monitor the on-die supply voltage, this approach allows direct measurement of Cres from the digital pin to the DPS signal itself, and thus direct calculation of the average contact resistance of the DPS interconnection. Using the monitor pin, a simple I-V curve is observed (Figure 8), which allows straightforward calculation of Cres. Complicating this for high-power designs is the very large number of supply pins in parallel: while the technique still measures the average contact resistance, it is less sensitive to changes in Cres at the per-pin level.

Figure 8. I-V curve for supply-pin Cres measurement. The sensitivity of this measurement to single-pin Cres issues drops as a large number of probes are connected in parallel.

Determining Cres of ground pins

This designed-in capability is unique to Advantest. In power-supply modules, a current is forced through a primary path, and then another path is used to sense voltage-out on the device under test (DUT) board. One of the available modules for the V93000 test system is called the UHC4. The UHC4 has a contact resistance monitor circuit built directly into the supply, giving it the unique ability to measure the voltage difference between force and sense right in the instrument (Figure 9).

Measuring a low resistance value requires a high current (i.e., measurements must be taken during a power-up condition). As a simple example: with the part in an active mode consuming, say, 100 amps, a 1-volt difference between force and sense tells the user that the parallel combination of ground-path resistances is 10 milliohms. A shift in this resistance indicates a ground connection problem. Continually monitoring the module provides a good level of sensitivity to any big issues that arise.
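The arithmetic in this example is simple enough to sketch directly; the probe count used for the per-probe estimate is an assumption for illustration only:

```python
def ground_path_resistance(delta_v, supply_current_a):
    """Effective resistance of all parallel ground probes (R = V / I)."""
    return delta_v / supply_current_a

# The example from the text: 100 A of supply current and a 1 V
# force-to-sense difference imply 10 milliohms overall.
r_total = ground_path_resistance(1.0, 100.0)
print(r_total)  # 0.01 ohms, i.e., 10 milliohms

# With N nominally identical probes sharing the current equally,
# each individual probe contributes roughly N times the combined value.
n_probes = 500  # hypothetical probe count
print(round(r_total * n_probes, 3))  # 5.0 ohms per probe under that assumption
```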

Figure 9. The V93000 UHC4 module is uniquely able to measure voltage difference between force and sense.

High-accuracy Cres measurement

Using the precision DC measurement resources available in the V93000, high-accuracy Cres measurements can be made using thermal measurement diodes with four-terminal techniques. Several resources can be used to make these measurements. The results of a Monte-Carlo analysis of the measurement accuracy with the available instruments are provided in the table at right, which clearly shows the benefits afforded by Advantest’s DC-scale AVI64 universal analog pin module over the per-pin PMU of the PS1600.

In summary

The Advantest V93000 is able to measure both inline (2-terminal) and high-accuracy (4-terminal) contact resistance. These measurements will become more critical as the industry moves to higher pin counts, higher power levels and lower voltages. Expanded testing at the wafer probe will also drive this trend, as will extreme-temperature testing, which makes everything more difficult.


Scalable Platform Key to Controlling SSD Test Costs

By Ben Rogel-Favila, Senior Director, System-Level Test, Advantest America

Manufacturers of solid-state drives (SSDs) want to keep test costs under control even as device performance, density, and variety all increase. In addition, the SSD product life cycle is accelerating. Test equipment manufacturers must strike a reasonable balance here.  They must provide more capable but less expensive systems that can cover a wider variety of devices.  One approach is to make the test system modular and scalable, so companies can buy what they need now and add on later as they need more features and more capabilities.  Another approach is to define more parameters and device characteristics in software rather than in hardware, providing more flexibility and allowing upgrades and changes to be made locally without buying an entire new system.  Additionally, the value-add of test equipment is moving beyond traditional “test electronics.” SSDs present a variety of unique test challenges, all of which an SSD test platform must address while keeping test costs in line.

How test affects time to market

Time to market (TTM) is a critical metric in terms of new product success. Manufacturers must hit the market window as early as possible – later entry means realizing less profit. This concept is well understood. So, what role does test play? In the SSD space, a reliability demonstration test (RDT) must be performed to qualify the product. If this step isn’t done correctly, it can affect TTM. To ensure that device testing doesn’t hinder TTM entry, manufacturers must recognize that their device will have some issues – the sooner they’re identified, the greater the chances of achieving optimal TTM.

Several factors can help mitigate these risks. First, it’s essential to employ a thorough set of engineering tools that can help pinpoint more quickly and accurately where any problems lie. Also required is the support of the tester provider. Test products are highly complex, comprising hardware, software and firmware all interacting with one another, so having someone competent in these environments actively supporting efforts to find these problems is key.

Next is test development – a significant undertaking that requires a robust environment able to accommodate the different test stages that a device undergoes during its lifecycle. If all the same tools can be applied at each stage, testing becomes easier and much more efficient. Finally, being able to reuse the test throughout all the various test cycles also saves time, as well as helping minimize the introduction of new test conditions from engineering through production. The bottom line is that test can substantially impact TTM, and if not done correctly, this impact can be negative.

SSD test stages and requirements

An SSD is put through a range of different tests during its lifecycle, each of which has different objectives and different needs. The SSD test lifecycle involves two distinct test stages. The first stage is focused on engineering/R&D, the second on manufacturing (see Figure 1). In stage 1, once the developer has confirmed the product design is correct, he or she must verify that the design architecture lends itself to a reliable product. Once these steps are completed and the RDT conducted, the design is ready to move into stage 2, high-volume manufacturing (HVM). Following assembly, two tests are typically employed: built-in self-test (BIST) and full-speed (or at-speed) functional test. Being able to utilize a single test solution throughout these stages helps ensure both test process consistency and quality of results.

Figure 1. SSD lifecycle test stages

Figure 2. SSD test requirements include a wide range of variables.

The test requirements for SSDs comprise a wide range of variables that span many different engineering disciplines, as shown in Figure 2:

  • Different protocols – As we have examined in prior articles, SSDs make use of several different protocols, e.g., Non-Volatile Memory Express (NVMe), PCI Express (PCIe), Serial Attached SCSI (SAS) and Serial ATA (SATA), which vary significantly in functionality.
  • Different form factors – Testing chips is essentially the same process, regardless of IC shape or size. With SSDs, form factors range from heavy, 8-inch PCIe cards to small, gumstick-sized M.2 devices, which are very fragile; the same system must be able to test each type of SSD.
  • Different test methods – As mentioned above, a robust system is needed to perform both BIST and functional testing.
  • Different speeds – This is a key requirement. System complexity and cost rise exponentially with increased device speed. With SSDs, speeds currently range from 1.5 Gigabits/second (Gbps) to 12 Gbps, with 16 Gbps (for PCIe Gen 4) and 22.5Gbps (for SAS 24) on the horizon.
  • Enterprise vs. consumer – This distinction is important because, at least intuitively, we can assume enterprise has a bigger budget for test given that the price tag for enterprise-level SSDs is about 50x that for consumer-level products.
  • Manual vs. automated – All SSD devices are being tested manually today, but as volumes increase, three things are happening: a) demand is growing at a rate that makes continually adding operators not economically feasible; b) operator error is growing in parallel with the number of operators; c) operator turnover is on the rise, creating a significant problem for manufacturers and pointing up the need to use robots on the line.
  • Different temperatures – Testing a device at ambient temperature is very different from testing it at -10°C. The automotive market is a prime example of this challenge – there has been a huge increase in electronic content, and test temperatures can range from -45°C to +125°C or more in order to ensure vehicle electronics can handle a wide range of climates.

ALL of these requirements have to be addressed – the question is, how? An all-inclusive test cell, a “Rolls Royce” approach, could be developed to do everything needed, but would be extremely costly, and customers would invariably pay for features they didn’t need. At the other end of the spectrum is an application-specific test cell, which is less costly, but also less flexible. Because this type of solution can only do one particular type of test, if an application changes, a new tester would have to be purchased.

Can one SSD test system accommodate all TTM needs, handle both engineering and manufacturing environments, address the wide range of test requirements, and grow with a customer as their test needs evolve?

An SSD test platform is the answer. Comprising a family of components that can be easily mixed and matched to create new products quickly, such a system allows customers to meet their needs exactly, with no scrimping and no waste. Tomorrow, if a faster module is needed to test PCIe Gen 4, the customer only has to purchase that module – the rest of the system (thermal, power supplies, and other constants), equating to 70-80 percent of the components, can be reused. By only paying for what they need, customers can extend the life of the platform to 10-15 years or more.

Scalable, flexible, affordable test solution

Advantest’s offering in this space is its modular MPT3000HVM test platform. Figure 3 illustrates how easily the system can be reconfigured to accommodate a new mix of products with different protocols.

Figure 3. As shown above, a shift in protocol mix for the SSD devices to be tested can be completed in minutes with the SSD test platform approach.

The base unit for the MPT3000HVM, called the primitive, is the secret to the platform’s success. The user starts with this unit, and then adds in components as needed to accommodate specific test demands, e.g., full protocol test or different types of test electronics. The personality of the primitive changes according to what it incorporates. If, for instance, an engineer is using a tester full of primitives doing full functional and needs to switch to BIST, he or she can reconfigure the primitive and continue to use it, simply by changing the modules.

Similarly, different form factors can be easily accommodated. Figure 4 shows several racks of MPT3000HVM primitives running tests on different device sizes. Just by changing the device interface board, any type of SSD, or even bare chips, can be tested in parallel. Different primitives in the same system can run at the same time with different protocols, form factors and test electronics, providing all the flexibility needed; the system then becomes an accumulation of primitives in a 19-inch rack.

Figure 4. Multiple MPT3000HVM primitives, each testing a different type of SSD, can be run in parallel within the same rack.

Unique capabilities

One crucial characteristic of the platform is its powerful, easy-to-use software, which can be easily applied to the different test stages. It employs a universal GUI, so the user always sees the same interface whether working on device verification, RDT or HVM. Whether different protocols, form factors, or types of tests are involved, it’s the same software, so engineers need to be trained only once. Because of the software’s capabilities, the same set of tools can be used across all test stages, which eliminates the problem of different people having access to different aspects of the software and thus being unable to reproduce issues. Tools in the suite include stylus main, datalog & test flow control; graph characterization; power profile; production operator interface; oven control; and calibration & diagnostics.

ATE is traditionally focused on fault detection – i.e., the chip is tested and, in general, the result is pass or fail. Testing beyond this level is unnecessary for chip test because if the chip doesn’t pass, it’s not repairable, so it’s discarded. With SSDs, we are learning that pass/fail testing is not enough, because the drives can be repaired. Customers now expect not only fault detection but also fault location – they need to know the root cause of a failure, and they need tools to help them find it. Advantest is developing these tools, which no other test provider currently offers, to enable a wide range of fault location capabilities, incorporating FPGA technology to detect deviations as they happen.

Summary

The SSD industry requires a test platform that optimally addresses shortening time to market, ever-decreasing SSD product lifecycles, and the need to keep test costs under control even as device performance, density and variety all increase. Advantest’s proven MPT3000HVM was created to resolve all these challenges, delivering a flexible, scalable platform that can handle testing a wide range of device types, speeds, form factors and other variables simply by swapping out modules to create a configuration that meets the user’s needs.

This also includes handling a variety of protocols. SATA still has high usage for low-end consumer SSD devices, but the world is moving to PCIe, both standalone and as a transport mechanism for NVMe. PCIe Gen 4 is coming soon, as is SAS 24, and the Advantest solution can handle them all, leveraging the company’s long tradition of creating platforms in a system-level offering.


Posted in Top Stories

5G Lessons Learned from Automotive Radar Test

By Roger McAleenan, Director, Millimeter-Wave Test Solutions, Advantest America

Situated between microwave and infrared waves, the millimeter-wave spectrum is the band between 30 gigahertz (GHz) and 300GHz. It is used for high-speed wireless communications and is widely considered the means to carry 5G into the future by allocating more bandwidth to deliver faster, higher-quality video and multimedia content and services. Automotive radar is the entry point into millimeter wave for testing purposes.

Automotive radar has been evolving for the past several years, with Tier One companies producing and developing designs for a variety of different applications. As automotive is considered one of the key vertical markets for 5G technology – others include mobile broadband, healthcare wearables, augmented and virtual reality (AR/VR), and smart homes – radar systems in vehicles can provide valuable insight into the other millimeter-wave applications.

The 5G standard promises new levels of speed and capacity for mobile and wireless communications with greatly improved flexibility and latency compared to 3G and 4G/LTE technologies. However, its unique chip structures will create new challenges for test and measurement. By understanding the limits of test equipment, systems and hardware, we can better address the practical aspects associated with delivering on the promise of this technology.

Test and measurement challenges
From a measurement perspective, 5G and auto radar have functional characteristics in common that need to be measured, such as signal blockage, radiation interference and beamwidth selection. Another aspect is loss of signal penetration, an area where radar has an advantage over optical techniques, which can be confounded by rain or snow. The band assigned to automotive radar, 76-81GHz, provides greater accuracy in range resolution and is sandwiched between point-to-point (P2P) bands on each side.

The challenges to be addressed in 5G test are similar to those associated with automotive radar, as well. Challenges in millimeter-wave applications include:

  • Handling multiple port devices economically
  • Providing features and testing optimized for characterization and production
  • Over-the-air environment due to packages with integrated antennas
  • High-port-count switching/multiplexing (4×4, 8×8, etc.), often in the same device
  • High levels of device features on a die – MCU + memory + radio + high-speed digital

Multiple antennas improve power efficiency, since more energy is pointed where it needs to be, and with beam steering, multiple targets can be tracked. As these capabilities improve, applications expand broadly to “surround” safety features and vehicle-to-vehicle coordination and communication. The increased complexity in devices extends to multiple combinations of transmit and receive paths. This functionality will significantly improve vehicle-to-animal/human/object recognition and avoidance, as well as enable tracking more targets simultaneously.

Transceiver design is important; transceivers can be optimized as required as low- or zero-intermediate-frequency (IF) designs. Automotive and 5G radios look nearly the same, with similar IP blocks, e.g., phase shifters, local oscillators, RF amplifiers and mixers (Figure 1). The primary distinction is the 5G radio’s modulation capability. Both may include up- and down-conversion, but for 5G, the market is looking for an increase in information bandwidth. This is actually quite difficult from a test perspective because it requires elaborate analog equipment such as high-performance oscilloscopes. This aspect is still a work in progress.

Figure 1. Transceiver design in automotive and 5G systems is highly similar.

Four main millimeter-wave issues and considerations must be addressed in auto radar. These apply to 5G as well, in that the four problems – rain attenuation, the Fresnel zone, path loss, and ground reflection – are all problematic whether you’re driving a car or the equipment is on a tower. Figure 2 shows all the areas in which radar is being used in cars, and further underscores the challenges associated with effective testing of these systems from a system-level perspective.

Figure 2. Radar zones in vehicles continue to multiply as automated content increases.
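Of the four issues, path loss is the easiest to quantify. The sketch below uses the standard free-space path loss (Friis) formula; the function name and example distances are illustrative, not drawn from the article.

```python
import math

def fspl_db(distance_m, freq_hz, c=3.0e8):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Path loss grows 20 dB per decade of distance or frequency, one reason
# millimeter-wave links rely on high-gain, steerable antennas.
loss_77ghz_100m = fspl_db(100, 77e9)   # ~110 dB at a typical radar range
loss_5ghz_100m = fspl_db(100, 5e9)     # ~86 dB for a sub-6-GHz link
```

At the same 100 m range, the 77GHz radar band loses roughly 24 dB more than a sub-6-GHz link, purely from the frequency term.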

One way to address some of the operational millimeter-wave challenges is beamforming, a technique that focuses the radar transmitter and receiver in a particular direction. Beamforming can be passive or active, although the former is limited in its effectiveness. Active RF beamforming, the increasingly preferred approach, will be game-changing: it enables tracking multiple objects, both moving and static (people, vehicles, buildings, etc.), at various speeds simultaneously. This allows auto radar to actively steer the beam toward objects and track them independently. Because the beam can be positioned in so many ways, how to test it is still an open question, although several automakers are working on solutions. For 5G, the beams would normally point either to other towers or to individual handsets and be able to track them. Base stations will have antenna arrays that can be steered to track people with 5G handsets – this will be an essential success factor in achieving the promised information bandwidth.
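The beam-steering idea can be illustrated numerically: for a uniform linear array, applying a progressive phase shift across the elements points the beam in a chosen direction. This is a textbook sketch with invented function names, not an automotive-radar implementation.

```python
import cmath
import math

def steering_weights(n_elements, spacing_wl, theta_deg):
    """Per-element phase weights that point a uniform linear array at theta.

    spacing_wl is the element spacing in wavelengths (0.5 is typical).
    """
    th = math.radians(theta_deg)
    return [cmath.exp(-1j * 2 * math.pi * n * spacing_wl * math.sin(th))
            for n in range(n_elements)]

def array_factor(weights, spacing_wl, theta_deg):
    """Magnitude of the weighted array's response toward direction theta."""
    th = math.radians(theta_deg)
    return abs(sum(w * cmath.exp(1j * 2 * math.pi * n * spacing_wl * math.sin(th))
                   for n, w in enumerate(weights)))

w = steering_weights(8, 0.5, 20)        # steer an 8-element array to 20 degrees
on_target = array_factor(w, 0.5, 20)    # coherent sum: full array gain of 8
off_target = array_factor(w, 0.5, -40)  # much weaker away from the beam
```

Re-steering the beam is purely a change of the complex weights, which is why an active array can hop between targets electronically.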

Test lessons learned
Advantest’s automated test equipment has been deployed for testing automotive radar for more than four years, testing from 18GHz to 81GHz, including wireless gigabit (WiGig) test in the 60GHz range, which may also be applicable to 5G.

At the moment, the focus remains on device test, but this is changing. Millimeter-wave applications provide an ideal opportunity to move away from component-level test and more toward higher-level models and end-to-end system-level testing. Figure 3 highlights the growing trends associated with system-level test. With that noted, here are some key lessons learned from Advantest’s work in the auto radar space, using its proven V93000 test platform.

Figure 3. Demand and opportunity for system-level testing is on the rise.

  1. Power accuracy is critical. This will be very important to understand and address because, as we move closer to built-in self-test (BIST), the device must be able to accurately measure the power it’s generating. Right now, we’re still learning how to get RF CMOS and BIST working together to deliver an accurate power measurement.
  2. Metrology is difficult. Given the various connectors and waveguides that must be navigated, there are few reliable ways to perform accurate metrology of fixtures, connectors, loadboards, and other components. There is also the issue of system degradation – every time a new part is tested, the fixturing degrades slightly due to the materials used, and over time the sockets or membranes begin to deteriorate. In addition, when something finally needs to be changed out on the test system, recalibration must be performed, and that can cause a slight change in measurement results when combined with the degradation issue.
  3. Limits need to be established. As devices grow more complex and better – and as efforts are made to extend radar range – two key factors come into play:
    • Phase noise – This key parameter on RF signals affects the performance of radio systems in various ways. It’s important to understand at what point phase noise begins to impact performance, and the associated cost-benefit tradeoff.
    • Noise figure – This measure of the degradation of the signal-to-noise ratio, caused by components in an RF signal chain, is essential to making radar more effective. The key question in this regard is: what’s the smallest signal I can see? (This relates to dynamic range.)
  4. Millimeter “anything” is expensive. Currently, there are significant costs associated with millimeter-wave technology that will likely decline over the next few years. In the meantime, some chipmakers are trying to implement millimeter-wave technology for smaller end products, such as radar distance measuring devices, but they can’t build them because they can’t figure out how to test them economically on a small scale. The solution may rely on future technology that is still being developed.
  5. Test engineering knowledge is scarce. This is perhaps the most critical factor of all – hence, saving it for last. The number of engineers working in millimeter-wave technology is relatively small, and companies wanting to enter the space can’t simply conjure up engineers versed in radar technology to help them with product development – particularly when the primary emphasis in most engineering programs is digital technology rather than analog/RF. This means that talent is expensive, which can put a real damper on what companies are able to do. We need to train competent engineers who are strongly motivated and passionate about millimeter-wave technology.
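The noise-figure point in lesson 3 above can be made concrete with Friis’ cascade formula, which shows why the front-end stage dominates an RF signal chain’s overall noise figure. The helper names below are illustrative.

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

def lin_to_db(lin):
    return 10 * math.log10(lin)

def cascaded_nf_db(stages):
    """Friis cascade formula.

    stages: (noise_figure_dB, gain_dB) pairs, front end first.
    F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    """
    total_f, gain = 0.0, 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        total_f += f if i == 0 else (f - 1) / gain
        gain *= db_to_lin(g_db)
    return lin_to_db(total_f)

# A low-noise front end dominates: an LNA (2 dB NF, 20 dB gain) ahead of a
# noisy mixer (10 dB NF, 6 dB loss) keeps the chain near the LNA's own NF.
chain = cascaded_nf_db([(2, 20), (10, -6)])   # roughly 2.2 dB overall
```

This is why the smallest detectable signal, and hence radar range, is set largely by the first stage after the antenna.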

Summary
Automotive radar technology is here now, and while it currently appears primarily in premium-brand vehicles, the goal is to bring down the unit cost so that it becomes standard equipment throughout the automotive industry. To do this, a number of challenges must be addressed, including solving the complexities associated with testing. Advantest is strongly committed to this market and to taking a leading role in finding these solutions and applying them to other millimeter-wave applications as the market continues to grow – including the fast-emerging 5G.


Posted in Top Stories

Storage Evolution Driving Growth of System-Level Test

By Colin Ritchie, Vice President, System Level Test Business Unit, Advantest,
and Scott West, Marketing Manager, System Level Test Business Unit, Advantest

Thanks in large part to the booming mobile market, global demand for storage capacity continues unabated. Gartner indicates that solid-state drive (SSD) shipments are on pace to top 370 million units by 2020, while Research & Markets forecasts that the client SSD market alone will grow at a compound annual growth rate (CAGR) of 36 percent between 2017 and 2021. With this growth comes an increased drive for performance, requiring implementation of new and updated storage protocols.

Currently, the market supports three primary storage protocols. Serial ATA (SATA) and Serial Attached SCSI (SAS) are still in use – the latter, for enterprise applications in particular – but there is a migration towards the newer PCI Express protocol. Increasingly, SSD makers are choosing the latest incarnations of PCI Express: PCIe Gen 3 and the forthcoming Gen 4. In addition, often used with PCIe is Non-Volatile Memory Express (NVMe), a storage interface/protocol developed especially for SSDs by a consortium of prominent vendors.

This shift toward newer, faster protocols creates an associated need for improved test speed and accuracy. Driven by these and other associated demands, the test industry is moving up the value chain, from component-level to module-level to system-level testing. System-level test (SLT) is not only different from classical component test, but also more difficult, creating some key challenges to be addressed.

In-house solutions no longer viable

Storage SLT is still in its infancy, much as memory test was three decades ago. Moreover, an SSD’s state machine has a virtually infinite number of combinations that must be tested effectively without an infinite amount of time available for brute-force iterations. While ICs have a set number of vectors or memory patterns to be run, SSDs feature a huge number of constantly changing states – factor in unplanned events (such as power cycling), and the result is a huge number of cases that are difficult to cover. SSD providers can’t address every eventuality, but they must be able to ship product to their customers with absolute confidence it will work.
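One common practical answer to an intractably large state space is seeded randomized stress testing, so that any failing sequence can be replayed exactly. The sketch below is a generic illustration of that idea, not a description of any Advantest product feature.

```python
import random

# Illustrative only: a seeded random walk over drive operations, including
# unplanned power cycles, as one way to sample an SSD's effectively
# infinite state space when exhaustive coverage is impossible.
OPS = ["read", "write", "trim", "flush", "power_cycle"]

def stress_sequence(seed, length, power_cycle_rate=0.05):
    rng = random.Random(seed)  # a fixed seed makes any failing run replayable
    weights = [1.0, 1.0, 1.0, 1.0, 0.0]
    weights[-1] = power_cycle_rate * sum(weights[:-1])
    return [rng.choices(OPS, weights=weights)[0] for _ in range(length)]

seq = stress_sequence(seed=42, length=1000)
```

Because the generator is deterministic for a given seed, a drive failure found in an overnight run can be reproduced on demand for debugging.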

Traditionally, storage makers have created their own test solutions in-house because of the high degree of customization required; in addition, no commercially viable solution was available that could meet their needs. However, as the pace of change and growth in the storage market continues to accelerate, these companies have come to recognize that they cannot keep developing their own internal test solutions – they will end up spending more time, money and engineering resources on developing the ability to test their products than on developing the actual products. Greater expertise is needed to test higher-performance devices while maintaining a consistent solution across increasing production volumes. Faster product cycles make this an even more pressing issue.

SSD product lifecycles have collapsed from two years to as little as six months. In the time it would take for a storage maker to develop its own custom test solution, the product for which it’s intended will have already peaked and be on its way to obsolescence. Reinventing the wheel with each new product is simply no longer viable. Most of these manufacturers have thus decided to implement commercial test solutions.

Advantest leading the way in SLT

This creates a substantial opportunity for Advantest. Storage is a market in which Advantest has purposely become a fundamental enabler of growth, as well as an arbiter of quality. The company is committed to helping customers wean themselves away from in-house-developed test solutions, with its MPT3000 platform allowing customers to focus on their core competencies. As an example, an executive with a leading SSD supplier that previously focused on traditional flash memory components recently acknowledged his company’s decision to align itself with Advantest. They needed a partner they could depend on to develop the volume of test solutions needed to meet customer demand in the face of collapsing time-to-market (TTM) windows.

The platform strategy Advantest has refined in the ATE industry applies directly to system-level test. By developing modular components for the common platform, both standard solutions and targeted custom solutions can be configured cost-effectively. When a custom need arises, the platform components provide 80 to 90 percent of the solution, allowing efficient use of Advantest’s expertise to adapt the solution to a specific configuration. Each adaptation extends the platform, so storage makers benefit, through working with Advantest, from their peers’ shared knowledge and experience.

Going back to the shortened product lifecycle: when the TTM window is only six to nine months, missing one design win can greatly impact a storage maker’s business. A three-month delay in the product cycle can translate to a 10-percent market-share hit – e.g., a loss of $100 million from a $1 billion SSD revenue stream is clearly significant. Advantest’s goal is to take test off the table for customers when competing for business – the tester should never be a gating factor in this regard. By implementing the modular MPT3000 platform, they can compete on their product differentiation.

Another key challenge storage makers face is the need for flexibility in manufacturing floor configuration. They spend significant amounts of money building their factories based on their business forecast, customer demand and production plan. However, as we’re seeing increasingly, these plans can change dramatically in a short time, necessitating flexible solutions on the factory floor that can be retooled easily, efficiently and cost-effectively to meet changing customer demand and/or when moving from one product generation to the next. The MPT3000’s FPGA-based test architecture enables quick changeover, maximizing utilization and production output for customers.

Advantest’s portfolio today comprises end-to-end test – from components through modules to systems. The MPT3000 platform is focused on system-level test for the SSD market, and is serving as both a test case, if you will, and a learning platform for further SLT efforts within the company. A previous GO SEMI article delved further into its protocol test capabilities: http://www.gosemiandbeyond.com/applying-flexible-ate-technology-to-protocol-test-and-the-ssd-market/

Other past articles that shed light on Advantest’s system-level test efforts include last issue’s piece on SLT for embedded NAND flash memories – http://www.gosemiandbeyond.com/system-level-test-essential-for-fast-growing-embedded-nand-market/ – and a prior interview with Artun Kutchuk of Advantest Group’s W2BI business, which provides wireless test automation products for the mobile and IoT space: http://www.gosemiandbeyond.com/out-of-the-lab-and-into-the-field-making-iot-device-testing-portable/.

We are interested in learning about the kinds of SLT challenges you face – whether in storage or elsewhere. Please feel free to comment below, or send an email to Scott.West@Advantest.com, to share your knowledge and expertise.


Posted in Top Stories

A Smarter SmarTest: ATE Software for the Next Generation of Electronics

By Rainer Donners, Product Manager, Advantest Corp.

The complexity of the ICs being designed into consumer and communications devices continues to increase. The 10-nanometer (nm) node is here, and some chipmakers are already beginning to turn out 7-nm devices. With smaller transistors that pack more and more functionality on a single chip, the complexity of test programs is increasing apace with that of the ICs themselves.

These ICs are used in end products such as smartphones, Internet of Things (IoT) devices, computer and gaming products, and they are ‘multi-domain,’ i.e., they contain digital, DC, analog and radio-frequency (RF) circuitry all on the same chip. Developing efficient test programs for these multi-domain ICs within a shorter time to market (TTM) is becoming a challenge.

To overcome this challenge, Advantest has introduced SmarTest 8, the latest version of its SmarTest software, developed to support the V93000 test platform. It’s important to note that both SmarTest 8 and SmarTest 7 will coexist on the V93000 Series for the next decade, enabling customers to use the version best suited to their test needs and business requirements. SmarTest 8 works with all V93000 Series test cards introduced since 2011; see Figure 1 for cards supported by SmarTest 8.

Figure 1. V93000 Cards supported with SmarTest 8

SmarTest 8 features a host of new capabilities that will enable engineers who must deal with highly complex test programs to achieve superior parallelism and throughput. The new software’s many benefits include:

  • Faster test program development
  • Efficient debug and characterization
  • Higher throughput, earlier, due to automated optimization
  • Faster time to market
  • Ease of test-block reuse
  • Efficient collaboration

SmarTest 8 unifies multiple different tools within the SmarTest Work Center (SWC) environment, delivering a state-of-the-art look and feel and an entirely new design to ensure ease of use; see Figure 2.  Let’s take a look at some of the key SmarTest 8 concepts, features and tools that will allow users to reap the new product’s benefits for their test programs.

Figure 2. SmarTest 8 comprises a suite of tools designed to simplify and optimize test program development and debug

Operating Sequence
Advanced multi-domain devices consist of multiple different functional blocks. Typical block types include an RF block for transmitting/receiving phone signals, a protocol-aware (PA) interface to condition the device, analog blocks for microphone use and playing music, and/or digital blocks for processing.

Testing one functional block, e.g. the RF block, typically requires ‘assembling a test’ out of multiple pieces, e.g., starting an analog stimulus signal, conditioning the device with digital signals, and starting one or multiple RF measurements. Figure 3 provides an example, with three RF tests displayed in the Operating Sequence View.

Figure 3. Operating Sequence View, displaying an example RF test

The Operating Sequence is designed to easily assemble the multiple ‘test pieces,’ with precise synchronization where needed. These test pieces are typically patterns, protocol transactions, or ‘actions.’ Actions can be DC stimulus changes (e.g., stepping up a ramp), DC measurements, analog stimulus and measurements, or RF stimulus and measurements – to name some examples.

In addition to easy test setup, the Operating Sequence supports intuitive debugging; the screen view in Figure 3 displays exactly what has been executed.  Interactive changes during debug are well supported – as the second blue block in the figure indicates, inserting an additional transaction into one PA block changes the length of this conditioning block. The SmarTest 8 software automatically ensures that with the next execution of this test, subsequent actions (like cw2 and measurePower in the screen view) are shifted and retain their synchronous start.

Additionally, the Operating Sequence contributes to fast throughput. Multiple measurements can run in parallel, per Figure 3, in which two RF ports are tested concurrently. In addition, the execution of an Operating Sequence is done entirely via the unique test-processor-per-pin hardware of the V93000 system. No software or workstation interaction is required.

Overall, the Operating Sequence, with its new functionality, ease-of-use and optimal throughput, helps enable shorter TTM. This unique tool and concept are unavailable in competitive offerings.
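The scheduling behavior described above, steps with synchronized starts that shift automatically when an earlier conditioning block grows, can be sketched as follows. All class and method names are invented for illustration and are not the actual SmarTest 8 API.

```python
# Hypothetical illustration of the operating-sequence idea: assembling
# patterns, transactions, and actions with synchronized starts. Class and
# method names are invented, not the actual SmarTest 8 API.
class OperatingSequence:
    def __init__(self):
        self.steps = []  # list of (start_time, name, duration)

    def add(self, name, duration, start=None):
        # By default a new step starts when the latest scheduled step ends.
        t = start if start is not None else max(
            (s + d for s, _, d in self.steps), default=0)
        self.steps.append((t, name, duration))
        return t

    def grow_step(self, name, delta):
        """Lengthen a block (e.g., an inserted transaction) and shift
        everything that started at or after its old end, so later
        measurements keep their synchronous starts."""
        for i, (s, n, d) in enumerate(self.steps):
            if n == name:
                old_end = s + d
                self.steps[i] = (s, n, d + delta)
                self.steps = [(st + delta, nm, du)
                              if st >= old_end and nm != name else (st, nm, du)
                              for st, nm, du in self.steps]
                return

seq = OperatingSequence()
seq.add("analog_stimulus", duration=5, start=0)
seq.add("pa_conditioning", duration=10, start=0)
seq.add("cw2", duration=8)                    # starts at t=10, after conditioning
seq.add("measurePower", duration=8, start=10) # synchronous with cw2
seq.grow_step("pa_conditioning", delta=3)     # an inserted PA transaction
```

After the conditioning block grows by 3 time units, cw2 and measurePower both move to t=13 and stay aligned, mirroring the automatic re-synchronization described for Figure 3.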

Modular Test Program Structure
Test programs are complex, consisting of up to several thousand tests for the different blocks of the device. The structure of SmarTest 8’s test setup data is designed to deal with this complexity easily: SmarTest 8 incorporates the concept of subflows. Subflows are already established in the ATE software as part of the testflow tool; what SmarTest 8 adds is that setup data can be structured and stored in separate, independent ‘subflow’ directories.

This capability enables multiple unique advantages:

  • Teams can work on their own subflows independently of other teams, so collaboration is easy.
  • No manual ‘merge’ effort is needed, as merging of the subflows is automatically performed by SmarTest 8.
  • A complete (proven, debugged) subflow can be reused within multiple test programs of a device family. Reuse here refers to one single source of the subflow, not the ‘copy/paste’ approach typically used today. The latter creates test program maintenance challenges that are prevented with SmarTest 8’s re-use/single-source approach.

By making development and debug faster, easier and far less complex, the modular test program structure ultimately contributes to reduced TTM and time to quality (TTQ).

Test-oriented Use Model via Instruments
Many test systems’ use model is tester- or hardware-centric – it requires the user to learn how to program the tester in order to achieve the needed tasks. SmarTest 8 moves away from this model, allowing the user to focus on the test, not the tester.

With SmarTest 8, the user ‘thinks’ in terms of using instruments for his/her test tasks. Figure 4 shows example instruments and their respective implementations of the tester hardware cards.

Figure 4. Simplified use model via SmarTest 8 Instruments

These instruments are then programmed via properties and actions; see Figure 5 for an example setup for two VCC signals.

Figure 5. Level specifications for VDDA and VDDD signals

This level specification is identical for all DC instruments in SmarTest 8. When setting up the test, the test engineer will use the level specification for all suitable hardware, which could be a DC Scale DPS128, a parametric measurement unit (PMU) of a Pin Scale 1600 or a PMU of a Wave Scale MX card.

This test orientation and hardware abstraction are applied consistently throughout SmarTest 8: in test setup descriptions, application programming interfaces (APIs), access to results, and debug tools. This makes the software intuitive and easy to learn, and the test programs easy to develop, understand and debug, further lightening the burden on the test engineer.
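The hardware-abstraction idea can be sketched as a common interface with card-specific implementations. The class names below echo the cards mentioned above, but the code is purely illustrative and not the SmarTest 8 API.

```python
# Illustrative sketch (invented names, not the SmarTest 8 API): one level
# specification programs any instrument that can supply DC, regardless of
# which hardware card implements it.
class DcInstrument:
    """Common interface: the engineer programs the instrument, not the card."""
    def apply_levels(self, spec):
        self.levels = dict(spec)  # identical call for every implementation

class DcScaleDps128(DcInstrument):
    pass                          # device power supply card

class PinScale1600Pmu(DcInstrument):
    pass                          # PMU on a digital pin card

spec = {"VDDA": 1.8, "VDDD": 1.1}  # hypothetical supply levels, in volts
cards = [DcScaleDps128(), PinScale1600Pmu()]
for card in cards:
    card.apply_levels(spec)        # the same spec is reused unchanged
```

The design choice this illustrates is that the level specification belongs to the test, so moving a test from one card to another requires no change to the setup.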

In summary, after several years of development, optimization and beta testing, SmarTest 8 is now ‘ready for prime time.’ It is installed at numerous customers, both for test program development and in high-volume manufacturing. SmarTest 8 delivers new benefits that, together with the proven V93000 platform, meet the test needs of the next generation of advanced multi-domain, multi-core ICs. As part of the V93000 platform, it will be continuously expanded to enable even more capabilities and to make test engineers more efficient.
