
The Future of Data Analytics and Semiconductor Testing

This article is adapted with permission from a recent Advantest blog post.

By Michael Chang, Vice President & GM ACS, Advantest

The world is changing more rapidly than ever. With the explosion of Artificial Intelligence (AI), Machine Learning (ML), and data analytics, semiconductor manufacturers now have the opportunity to extract valuable insights from the massive amounts of data generated throughout the silicon lifecycle. By leveraging AI algorithms and ML, semiconductor manufacturers can optimize silicon design, assembly, and testing processes. Through the analysis of these vast amounts of data, AI can quickly identify patterns, predict failures, and optimize quality.

So, what does this all mean? It means that we now have the capability to greatly improve yield rates, reduce production costs, and accelerate time-to-market. Ultimately, the goal is end-to-end use of analytics throughout manufacturing and test operations, so that data analytics and ML enhance the speed and accuracy of the testing process, reduce the risk of defects, and help the entire industry move ever closer to its goal of zero defects.

This is where Advantest is revolutionizing the test industry. We just announced Advantest’s ACS Real-Time Data Infrastructure (ACS RTDI™), a solution that offers advanced analytics, including machine learning capabilities and future-proof, real-time, automated production control. The Advantest ACS ecosystem integrates all data sources across the entire IC manufacturing supply chain, a first in the industry. In fact, ACS has been working with multiple major data analytics companies as part of an industry-wide collaboration to accelerate data analytics and AI/ML decision-making within a single, integrated platform. These partnerships will help customers take advantage of new levels of data integrity and security across different test nodes and benefit from proven infrastructure solutions that will enable them to achieve new levels of operational efficiency.

This is how Advantest is unlocking the intrinsic value of AI in semiconductor testing. 

The ACS RTDI platform integrates data sources across the entire IC manufacturing supply chain while employing low-latency edge computing and analytics in a secure True Zero Trust™ environment. This innovative infrastructure minimizes the need for human intervention, streamlining overall data utilization across multiple insertions and supporting customers’ databases. Because security remains a top concern among customers, the ACS RTDI platform has been architected to be reliable and safe, ensuring hassle-free OS revisions, while protecting data from unauthorized access or loss. This is accomplished by leveraging True Zero Trust™. Overall, the new ACS ecosystem will enable customers to boost quality, yield, and operational efficiencies, and to accelerate product development and new product introductions for years to come.

To fully support ACS’s strategy, we also offer the ACS Solution Store, which enables customers to choose from a comprehensive collection of software solutions designed for the digital age. These solutions address major challenges facing the semiconductor industry and can be tailored to individual customer needs. Customers can select from an ever-expanding, easy-to-navigate online catalog of solutions contributed by a growing roster of partners in the Advantest open solution ecosystem.

Figure 1: Semiconductor Integrated Workflow and Benefits

Learn more about how Advantest is improving the technological world by setting a new standard in the semiconductor industry on our website: https://www.advantest.com/acs/overview/.


Solving High-Energy Testing Challenges

This article is a condensed version of an article that appeared in the November 2023 issue of Electronic Specifier. Adapted with permission. Read the original article here, p. 12.

By Fabio Marino, Managing Director, CREA, an Advantest company

Although the global semiconductor market is currently experiencing a slowdown, the automotive sector remains solid, fueled by the demand for electric vehicles (EVs). What used to be a niche market is now rapidly expanding into the mainstream, and companies that supply power IC technology must increase production volume to meet growing demand.

Last year, CREA leased a new building to expand production capacity and keep up with ongoing business growth. This will allow CREA to produce test equipment for a wide variety of power semiconductors, including insulated-gate bipolar transistors (IGBTs) and silicon carbide (SiC) and gallium nitride (GaN) semiconductors. SiC’s advantages over traditional IGBTs include higher thermal conductivity, better tolerance of high voltages, higher switching speeds, and lighter weight. Wide-bandgap technology such as SiC is key to developing more efficient advanced battery systems that will enable new EVs to go farther and faster.

Addressing high-power test requirements

Parasitic inductance and capacitance, which play an important role in the measurement, can create conditions that may damage the tester. Thus, testing high-powered SiC devices requires highly refined, specialized test equipment. CREA’s low-stray-inductance and probe card interface (PCI) technology enables engineers to minimize parasitic values. This allows the performance of specialized tests needed to ensure reliability and quality, facilitating the development of efficient batteries for new EVs. 

To meet customer demands for lower cost, CREA is expanding its bare-die test capabilities. Bare-die test utilizing the PCI and thermal control technology holds the key to expanding dynamic test to the wafer level. Package test is simpler, but if a single switch malfunctions, the entire package must be discarded. Bare-die test is more cost-efficient and creates less waste—the only challenge is that a probe card is needed to perform the test. Probe cards are fragile, and the high amount of energy generated during dynamic test can break the probe card and damage the tester itself.

CREA’s PCI technology monitors each probe needle for abnormal current distributions, shutting off the tester when such an abnormality is detected to prevent damage. CREA also developed a chamber for bare-die test that moderates temperature by controlling airflow to prevent sparking that can occur while working with high voltages, ultimately reducing the threat of harm to the ATE.
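CREA’s per-needle monitoring logic is proprietary; purely as an illustration of the idea, the sketch below (function name and tolerance value are hypothetical) flags probe needles whose share of the total current deviates too far from an even split, the kind of abnormality that would trigger a shutoff:

```python
# Illustrative only -- not CREA's algorithm. Flag needles whose share of
# the total current deviates from an even split by more than `tolerance`
# (a fractional deviation; 0.25 here is an arbitrary example value).
def check_needle_currents(currents_a, tolerance=0.25):
    """Return (ok, bad_needles) for one sample of per-needle currents."""
    total = sum(currents_a)
    n = len(currents_a)
    if n == 0 or total == 0:
        return True, []
    expected_share = 1.0 / n
    bad = [i for i, c in enumerate(currents_a)
           if abs(c / total - expected_share) > tolerance * expected_share]
    return not bad, bad

# An even split passes; a needle hogging current would shut off the tester.
ok, bad = check_needle_currents([10.1, 9.9, 10.0, 10.0])   # ok == True
```

A production implementation would of course sample continuously and act within the tester’s real-time control loop; this sketch only shows the screening criterion.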

SiC technology provides many benefits over traditional IGBT technology, as noted earlier. While many major semiconductor companies are investing in R&D to support SiC technology, SiC is very different from silicon wafer technology. It requires completely different equipment, and the automated tools that factories currently have are designed for silicon wafers and will not work with SiC. Because SiC is a maturing technology, production yields are low. This creates a significant opportunity for test companies to deliver SiC-optimized test equipment. CREA continues to refine its power IC testing technology to increase yield and help customers maintain sustainable business models that can keep up with rising demand.

Conclusion

Fueled by major investments from the global semiconductor community, the power IC industry is evolving quickly. This creates significant opportunities for companies like CREA that are building solutions to address high-power specs and overcome industry challenges.

CREA’s patented LSI™ and PCI™ technology provides specialized testing solutions for power ICs found in hybrid and EV automotive engines. These solutions will accelerate the shift from 400V to 800V batteries, accommodating the testing specs needed to develop cutting-edge EV technology. Today, CREA engineers are developing techniques to run high-energy tests in parallel – increasing yield and helping to accommodate rising global demand for SiC and other advanced power semiconductors. 

 

CREA’s Power Device Testers


Improving Debug Time and Test Coverage with Parallel Validation Strategies

This article is excerpted from an article that appeared in the July/August 2023 issue of Chip Scale Review. Adapted with permission. Read the original article here, p. 28.

By Adir Zonta, Product Marketing Manager, V93000 Engineering Solutions, Advantest

Test data volumes are exploding as the number of transistors per chip increases along with the number of test vectors needed to test each transistor. A recent article [1] described how traditional methods of device validation and characterization, ATE structural and functional test, and system-level test no longer suffice due to increased device complexity, and introduced innovations in pre-silicon verification, first silicon bring-up, and post-silicon validation (PSV) that are necessary to meet today’s challenges.

This article looks further at these innovations and the systems necessary to implement them, including how to equip an engineering lab with automated parallel test stations to speed up test engineering tasks such as pattern validation. It also describes how a new standard helps bridge the gap between electronic design automation (EDA) and ATE and how Cadence and Advantest have collaborated on an initiative to put the standard into practice.

Test-pattern validation

One of the challenges that the explosion in test data imposes on test engineering is the ever-lengthening time required for test-pattern validation, which is impacting time to market. Test-pattern validation determines whether the patterns are generated correctly, that the expected responses are accurate, and that they have enough margin to account for parameter variations (for example, in voltage and frequency) in production.

Generating test patterns 

The test patterns include structural scan patterns generated by automatic test-pattern generators or functional test patterns generated manually from a test specification or automatically using random or constraint-based test-generation methods or other techniques linked with EDA tools. Test patterns from the EDA tools are generally in a standard format such as STIL (Standard Test Interface Language) or WGL (Waveform Generation Language).

Structural test patterns target specific fault models, such as “stuck at” faults or timing faults, whereas functional test patterns confirm the performance of the device under test (DUT) in its end use. Functional test vectors are particularly important in automotive and other industries where performance and safety are critical. The key aspects of generating test patterns are summarized below.

Cyclized test vectors. The patterns in STIL or WGL from EDA tools are converted to cyclized test vectors for the target ATE system by adding timing and control information to synchronize the patterns with a specific ATE system’s clock and control signals. This can require extensive development time.
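As a toy illustration of what cyclization adds (the vector format, pin actions, and timing values below are invented for this sketch, not real STIL/WGL semantics or any actual ATE event format), each per-pin vector character is mapped onto an ATE clock by attaching an absolute time and a drive or compare action:

```python
# Hypothetical mini-cyclizer: '0'/'1' mean drive at cycle start,
# 'L'/'H' mean compare (strobe) later in the cycle.
def cyclize(vectors, pins, period_ns, strobe_ns):
    """vectors: list of strings, one character per pin per cycle.
    Returns a flat list of (time_ns, pin, action, value) events."""
    events = []
    for cycle, vec in enumerate(vectors):
        t0 = cycle * period_ns
        for pin, ch in zip(pins, vec):
            if ch in "01":                   # drive at cycle start
                events.append((t0, pin, "drive", int(ch)))
            elif ch in "LH":                 # strobe later in the cycle
                events.append((t0 + strobe_ns, pin, "compare", 1 if ch == "H" else 0))
    return events

# Two 10 ns cycles on inputs A, B and outputs Q0, Q1, strobed at 7 ns.
evts = cyclize(["10HL", "01LH"], ["A", "B", "Q0", "Q1"],
               period_ns=10, strobe_ns=7)
```

Real cyclization must additionally handle waveform tables, multiple timing sets, and synchronization with the target tester’s clock domains, which is where the extensive development time goes.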

Error causes. Inevitably, the cyclized test vectors will experience errors resulting from design defects that percolate through cyclization, from the cyclization process itself, or from corner cases that the original design did not take into account. Regardless, the PSV process must identify and correct any errors.

Correcting test-pattern errors. When errors are detected during the pattern validation process, they must be corrected through manual or a combination of manual and automated methods.

Automated parallel test stations speed up the process

Speeding up test pattern generation requires a test lab with the equipment necessary to run parallel pattern validation, minimizing the time spent on pattern debugging while assuring sufficient test coverage. A solution such as the Advantest V93000 EXA Scale EX Test Station, an engineering platform for complex device bring-up that supports structural and functional test, provides this parallel test capability without requiring a lot of floor space because it is designed to fit under the company’s single-site M4171 automated handler. Complete with integrated active thermal control (ATC) over a range of -45 to +125°C, the handler brings automated device loading, unloading, and binning into the laboratory environment. As shown in Figure 1, six test cells can fit within a 5m by 5.5m laboratory space.

Figure 1: Six EX test stations with M4171 handlers in a 5m by 5.5m laboratory space can speed up test-pattern validation and other engineering tasks.

There are two primary challenges involved in creating functional test on ATE:

  1. The need to convert the functional test content into a production test vector pattern, which requires tooling and extensive development time.
  2. There is typically no native software debugging environment on the tester, making it very difficult for the test case developer to debug any issues in support of the test engineer. Excessively long, unpredictable debug cycles are inevitable.

Pre-silicon methodologies and an ATE instrument can work together to seamlessly and interactively validate the functional test content to help to meet these challenges.

PSS links EDA and ATE

Reuse of pre-silicon verification test content can ease the transition from the pre-silicon verification stage to first silicon—including bring-up, bare-metal test execution, and ATE stage. To that end, the Accellera Systems Initiative, an organization focused on the creation and adoption of EDA and intellectual property (IP) standards, has developed the Portable Test and Stimulus Standard (PSS), which specifies a single representation of stimulus and test scenarios that span simulation, emulation, and post-silicon [2].

PSS enables the once-siloed EDA and ATE disciplines to work together. While structural test dominates the ATE side, rising quality expectations are driving a need for more functional test to ensure the chip will perform properly in its end-use mode. As previously mentioned, however, converting functional test content into production test vectors requires extensive development time, and a typical ATE system lacks a native software debugging environment that could speed up the process [3].

Joint EDA-ATE PSS implementation

A joint initiative between Cadence and Advantest combines the PSS and high-speed I/O (HSIO) approaches. The companies have developed a solution that involves PSS-based test content creation, an interface to ATE software, the loading of parameterized test content, test execution on ATE hardware, and debug and analysis (Figure 2). The Cadence Perspec System Verifier automates the process of extending the PSS models used in pre-silicon validation to the ATE environment, reducing the development time for complex use-case scenarios. A container file, labeled FDAT in Figure 2, provides an efficient interface between Perspec and the Advantest SmarTest 8 software for its V93000 ATE systems.

Advantest’s Link Scale ATE instrument interacts natively with the DUT using low pin-count HSIO, such as USB and PCI Express interfaces running in full-protocol mode, without pattern cyclization. Collected test traces can be viewed in a SmarTest viewer or imported into Cadence’s Verisium Debug AI-powered debug tool for correlation with the original PSS tests. In addition, Link Scale can host embedded software debuggers such as the Lauterbach TRACE32.

Figure 2: PSS enables interfacing of EDA and ATE to optimize test validation.

Device validation best practices

Going forward, one key will be smoothing the transition from the lab environment with engineering test stations to the production floor. The single-load-board strategy, in which a multisite load board for high-volume production can be used in the lab with only a single site enabled, makes it unnecessary to develop one board for engineering activities and another for high-volume manufacturing (HVM).

The engineering environment should be as close as possible to the HVM environment. The EX Test Station achieves this goal because it uses our Xtreme Link technology, designed to provide high-speed optical data connections, embedded computing power, and card-to-card communications for high-volume production ATE. The station is also suitable for testing initial engineering batches efficiently. In addition, it helps to ensure seamless flow between the engineering and HVM environments.

Conclusion

The semiconductor industry has a long and successful history of testing increasingly complex devices, continually enhancing structural, functional, and system-level test to minimize test escapes. Advances continue as the industry contends with an exploding amount of test data necessary for silicon bring-up, PSV, and other test engineering tasks. A key innovation is a laboratory equipped with engineering workstations that can operate in parallel to speed up tasks such as pattern validation. In addition, EDA and ATE companies are cooperating to leverage standards such as PSS to bridge the pre- and post-silicon verification stages, and they are leveraging HSIO to allow ATE to apply test patterns without cyclization. Finally, engineering workstations are incorporating the load-board, compute, and communications technologies of production ATE systems, thereby speeding the transition from the lab to HVM.

References

  1. D. Armstrong, “Device validation: the ultimate test frontier,” Chip Scale Review, Nov-Dec. 2022, p. 26.
  2. “Accellera Board Approves Portable Test and Stimulus Standard 2.0,” Accellera Systems Initiative, April 14, 2021.
  3. M. Rubin, A. Zonta, “Pre and Post-Silicon Verification Have Never Been Closer! Leveraging Portable Stimulus for Automatic Test Equipment (ATE),” Cadence Design Systems Inc., May 4, 2023.

 


Deploying Cutting-Edge Adaptive Test Analytics Apps: Innovation Based on a Closed-Loop Real-Time Edge Analytics and Control Process Flow into the Test Cell

By Ken Butler, Senior Director of Business Development, Advantest, & Guy Cortez, Senior Staff Product Manager, Synopsys

Semiconductor test challenges abound in this era of AI. As a result, semiconductor test engineering is increasingly moving toward fully adaptive test, in which each device receives the “right” test content for assessing its correctness. Advantest and Synopsys have partnered to provide new cutting-edge real-time adaptive test applications at the test cell, based on a complete closed-loop analytics and control process flow. Our solution leverages a high-performance, highly secure real-time data infrastructure combined with advanced analytics derived from a comprehensive silicon lifecycle management (SLM) platform. For example, when test measurement data is combined with on-die sensor readings on a very fast and secure computing platform, the solution provides in-situ adaptive test with millisecond latencies.

Figure 1. Semiconductor test challenges for the AI era

First, let’s review the Advantest ACS portion of the solution. The Advantest ACS Real-Time Data Infrastructure (RTDI) is a platform that provides low latency and highly secure data access and system control for test operations (Figure 2). It consists of the following components:

  • ACS Container Hub™, a web registry for the management and distribution of Open Container Initiative (OCI)-compliant AI/ML and statistical workloads.
  • ACS Unified Server™, a multi-purpose, reliable and scalable platform that serves as a gateway and local mirror for containerized applications.
  • ACS Edge™, a high-performance and highly secure computing solution for the execution of complex analytical workloads for real-time applications in production test.
  • ACS Nexus™, the communications backbone for ACS RTDI which allows for streaming access to test and test cell data as well as real-time control of tester operations.


Figure 2. ACS Real-Time Data Infrastructure

Two example use cases to which adaptive test flow can be applied are:

  1. Reduce test time – Save cost and improve throughput by eliminating tests that never detect failing parts.
  2. Reduce DPPM – Improve quality control by adding additional tests for some “risky” parts.

Our first use case focuses on the second method: improving quality. Figure 3 below shows a stacked wafer map that highlights zonal regions of the wafer with an excessive number of failures, shown in purple. A typical stacked wafer map consists of 25 wafers, or one lot’s worth. The remaining good die in these regions are suspect because of the number of failures in that part of the wafer. The larger and darker the purple marker at a given x, y coordinate, the more failures occurred at that coordinate across all wafers analyzed.

An application is available to identify which packaged die from select x, y coordinates are considered risky based on a specific failure threshold set by the product engineer. Additional test(s) will later be applied to those parts labeled as risky during final test (a form of ZPAT at final test), thereby improving the overall quality of the chip but with minimal impact on total test time.

Figure 3. Wafer stack map
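A minimal sketch of such a zonal screen, assuming a simple per-wafer representation of failing positions (the lot data and threshold below are illustrative; the actual application’s algorithm may differ):

```python
# Count failures per (x, y) die position across a lot of wafers and flag
# positions whose failure count meets the engineer-chosen threshold, so
# good die at those positions can get extra tests at final test.
from collections import Counter

def risky_positions(wafer_fail_maps, threshold):
    """wafer_fail_maps: one iterable of failing (x, y) tuples per wafer.
    Returns the set of positions failing on >= `threshold` wafers."""
    counts = Counter()
    for fails in wafer_fail_maps:
        counts.update(set(fails))       # count each position once per wafer
    return {pos for pos, n in counts.items() if n >= threshold}

# Three wafers; position (5, 7) fails on all of them, the others only once.
lot = [[(5, 7), (1, 2)], [(5, 7)], [(5, 7), (9, 9)]]
flagged = risky_positions(lot, threshold=3)
```

Packaged die originating from the flagged coordinates would then be routed to the additional tests at final test.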

A second application is adaptive limit setting, i.e., the adjustment of test limits during test program execution. The method shown here, which utilizes sensor data, provides more accurate limit management than existing methods such as dynamic part average testing (DPAT), because sensors embedded in the chip provide additional key information that enables monitoring of the chip’s operational metrics, such as power and performance. This example highlights the use of sensor data characterizing process and environment information to enable more accurate limits on speed and power consumption during testing, resulting in lower DPPM and higher quality.

Figure 4 below shows a comparison of the two adaptive test limit approaches. First, the DPAT method shown is a standard univariate approach based on the die population results for a given test. Next, the sensor-aware method incorporates a bivariate correlation between the data measured from sensors and the results of a specific VDD consumption test. The second method can identify at-risk die that would be missed by conventional DPAT analysis.

Figure 4. Traditional univariate DPAT vs. sensor aware bivariate method
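The contrast between the two approaches can be sketched with deliberately simple statistics: a population mean-and-sigma limit for DPAT, and a least-squares line plus residual screen for the bivariate method (the data and k-factors below are illustrative, not production settings). The simulated outlier die sits comfortably inside the univariate population limits but stands out against the sensor correlation:

```python
import statistics

def dpat_limits(values, k=3.0):
    """Univariate DPAT-style limits: mean +/- k sigma of the population."""
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return mu - k * sigma, mu + k * sigma

def sensor_aware_outliers(sensor, vdd, k=2.0):
    """Bivariate screen: fit vdd ~ a + b*sensor by least squares, then
    flag die whose residual exceeds k sigma of the residual spread."""
    n = len(sensor)
    sx, sy = sum(sensor), sum(vdd)
    sxx = sum(x * x for x in sensor)
    sxy = sum(x * y for x, y in zip(sensor, vdd))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    resid = [y - (a + b * x) for x, y in zip(sensor, vdd)]
    sigma = statistics.pstdev(resid)
    return [i for i, r in enumerate(resid) if abs(r) > k * sigma]

# Die 10 draws far more VDD current than its sensor reading predicts, yet
# its raw value (12) is well inside the population's mean +/- 3 sigma.
sensor = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2]
vdd    = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 12]
lo, hi = dpat_limits(vdd)                      # passes every die
flagged = sensor_aware_outliers(sensor, vdd)   # flags only die 10
```

This is exactly the at-risk die that conventional univariate DPAT analysis would miss.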

Conclusion: In this article, we describe a real-time, highly secure data infrastructure plus a pair of complex, high-value analytical applications that consume both test response and on-die sensor data to produce inferences for true adaptive test decision-making with low-millisecond latencies. The analytics and associated applications are available as part of an open solutions ecosystem, which allows users to either develop their own solutions or procure and deploy them from Synopsys or other providers. The result is the democratization of machine learning-driven applications, making them available to everyone in the semiconductor test community.

Related Links:

The Advantest ACS Solution Store

Advantest ACS

Advantest Talks Semi Podcast


Scalable Platform Meets the Test Challenges of Ultra-Wideband Chipsets

This article is adapted from a paper and presentation at SEMICON China, March 2023.

By Kevin Yan and Daniel Sun, Advantest (China) Co., Ltd.

Ultra-wideband (UWB) technology, as defined by IEEE 802.15.4 and 802.15.4z standards, enables short-range, low-power RF location-based services and wireless communication. A variety of devices have reached the market to help implement UWB capability, but these devices present significant test challenges related to the high RF frequencies at which UWB operates, the ultrawide bandwidths of UWB’s multiple channels, and the technology’s complex modulation schemes. An effective test platform requires flexibility and scalability to handle the frequencies and bandwidths involved, as well as the compute power to effectively analyze the test results.

UWB markets and capabilities

The UWB market is expanding at a rapid pace. One forecast estimates that UWB unit shipments are growing at a 40% CAGR, with the market for UWB chips expected to reach $1.259 billion by 2025 [1]. UWB serves both business and consumer end markets, with the consumer segment now representing the majority. UWB is hitting the mainstream in the mobile smartphone market, with the automotive and wearables/tags segments also seeing UWB adoption.

Some specific applications that UWB can serve include indoor navigation, item tracking, secure hands-free access, credential sharing, and hands-free payments. In addition, automotive applications are likely to expand, with the Car Connectivity Consortium incorporating UWB into the Digital Key Release 3.0 specification currently under development. 

Compared with other technologies, UWB provides superior positional accuracy. Table 1 compares UWB with RFID, Wi-Fi, and Bluetooth. UWB, which employs time of flight (ToF) and angle of arrival (AoA) technology, achieves a positional accuracy of better than 30 cm, outperforming the others. Bluetooth incorporates AoA and angle of departure (AoD) to provide < 1 m of accuracy (version 5.1). Wi-Fi, which relies only on its Received Signal Strength Indicator (RSSI) functionality for distance estimates, has a limited accuracy of 15 m. RFID can only detect presence, not distance.
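The ToF arithmetic behind that accuracy advantage is simple: distance is the speed of light times the measured flight time, halved for a round trip. UWB’s wide bandwidth supports nanosecond-scale timing resolution, which translates directly into centimeter-scale ranging. A quick back-of-the-envelope check:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """One-way distance implied by a measured round-trip time of flight."""
    return C * round_trip_s / 2.0

# A 1 ns timing resolution corresponds to about 15 cm of one-way distance,
# consistent with UWB's better-than-30-cm positional accuracy.
resolution_m = tof_distance_m(1e-9)
```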

Table 1. Positional accuracy comparison of four wireless technologies

UWB basics

The United States Federal Communications Commission (FCC) and the International Telecommunication Union Radiocommunication Sector (ITU-R) define UWB as a communications technology that transmits and receives a signal whose bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency fc. The UWB physical (PHY) layer includes 16 channels in three groupings, as shown in Table 2. The sub-gigahertz band (group 0, shaded yellow) includes one channel at an fc of 499.2 MHz, the gigahertz low band (group 1, shaded blue) includes four channels with fc ranging from approximately 3.5 GHz to 4.5 GHz, and the gigahertz high band (group 2, shaded red) includes 11 channels with fc extending from approximately 6.5 GHz to 10 GHz.
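The FCC/ITU-R definition quoted above reduces to a one-line predicate, shown here for illustration:

```python
def is_uwb(bandwidth_hz, fc_hz):
    """FCC/ITU-R criterion: the signal's bandwidth exceeds the lesser of
    500 MHz or 20% of its center frequency fc."""
    return bandwidth_hz > min(500e6, 0.20 * fc_hz)

# The sub-gigahertz channel (fc = 499.2 MHz) qualifies via the 20%-of-fc
# branch; a 1 MHz Bluetooth channel at 2.4 GHz does not come close.
assert is_uwb(499.2e6, 499.2e6)
assert not is_uwb(1e6, 2.4e9)
```

For the gigahertz bands, 20% of fc exceeds 500 MHz, so the 500 MHz branch of the criterion governs there.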

Table 2. UWB PHY group and channel allocation                

Most UWB products currently on the market focus on the high band, particularly channels 5 and 9. The UWB PHY layer also comes in low-rate-pulse (LRP) and high-rate-pulse (HRP) repetition-frequency configurations. HRP high-band configurations are becoming the preferred configurations, finding success in industrial applications for location and ranging and for device-to-device communications. Table 3 outlines the IEEE 802.15.4z HRP UWB high-band specifications, including a range up to approximately 100 m and data rates from 110 kbps to 27.4 Mbps. In addition, HRP UWB uses two types of modulation: burst position modulation (BPM) and binary phase-shift keying (BPSK).

Table 3. IEEE 802.15.4z HRP UWB high-band specifications

Figure 1 compares the UWB and Bluetooth spectrums. Narrow-band Bluetooth (left) has a 1 MHz bandwidth at 2.4 GHz. In contrast, UWB (right) has a 500 MHz or greater bandwidth at center frequencies (fc) extending up to approximately 10 GHz, as shown in Table 2. Each band’s upper and lower bounds (fH and fL, respectively) exhibit power levels 10 dB below the maximum power level at fc.


Figure 1. Bluetooth has a 1-MHz bandwidth at 2.4 GHz, while high-band HRP UWB has a > 500-MHz bandwidth at center frequencies fc from approximately 6.5 GHz to 9.5 GHz.

Test requirements

A typical UWB transceiver chipset includes an analog front end containing a receiver (Rx), a transmitter (Tx), and a digital backend that interfaces to an off-chip host processor. It also includes a Tx/Rx switch that connects either the receiver or transmitter to an antenna port. Some versions come with two RF antenna ports to serve phase-difference AoA applications, and some will add even more ports to improve positional accuracy. AoA capability can help pinpoint the specific location of an object as well as its distance, as shown in Figure 2.


Figure 2. AoA capability can help pinpoint a tag’s angular location as well as distance.

The receiver includes an RF front end that employs a low-noise amplifier that amplifies the received signal before down-converting it to the baseband. The chipset’s transmitter applies digitally encoded transmit data to an analog pulse generator. The chipset also includes a phase-locked loop (PLL) that provides local oscillator signals for receive and transmit mixers.

Typical UWB production tests involve transmit measurements and pulse-related measurements as specified in the 802.15.4z standard as well as direct receiver measurements, ToF measurements, and AoA measurements. 

Transmit measurements ensure the devices meet all emissions rules established by the FCC or other relevant governmental authorities. The tests involve power spectral density (PSD) measurements in accordance with a transmit-spectrum mask (Figure 3) as well as center-frequency tolerance measurements.


Figure 3. A power spectrum mask defines limits for PSD measurements.
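The mask check itself amounts to comparing every measured PSD point, expressed in dB relative to the peak, against the limit at its frequency offset from fc. The sketch below illustrates the mechanics only; the mask breakpoints here are loosely shaped like Figure 3 and are not the 802.15.4z mask values:

```python
# Hypothetical transmit-mask check (illustrative mask, not the standard's).
def psd_mask_pass(offsets_mhz, psd_dbr, mask):
    """mask: list of (max_abs_offset_mhz, limit_dbr), sorted ascending.
    psd_dbr: PSD relative to peak at each offset. True if compliant."""
    def limit(off):
        for max_off, lim in mask:
            if abs(off) <= max_off:
                return lim
        return mask[-1][1]               # beyond the table: last limit applies
    return all(p <= limit(o) for o, p in zip(offsets_mhz, psd_dbr))

# Example mask: 0 dBr inside +/-250 MHz of fc, -10 dBr out to 325 MHz,
# -18 dBr beyond.
mask = [(250, 0.0), (325, -10.0), (1000, -18.0)]
compliant = psd_mask_pass([0, 260, 400], [-1.0, -12.0, -20.0], mask)
violating = psd_mask_pass([0, 260, 400], [-1.0, -5.0, -20.0], mask)
```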

Pulse-related measurements ensure the interoperability of UWB devices and are performed using time-domain analysis (Figure 4). Specific tests include baseband impulse response, including measurement of pulse main-lobe width, pulse side-lobe power, and normalized mean square error (NMSE). Additional tests look for chip clock error and chip frequency offset. 


Figure 4. This compliant pulse example uses time-domain analysis.
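The NMSE figure of merit mentioned above compares the measured baseband pulse against a reference pulse. A common form of the computation is shown below (the standard’s exact normalization may differ):

```python
# Normalized mean square error between a measured pulse and the reference:
# the energy of the error, normalized by the energy of the reference.
def nmse(measured, reference):
    num = sum((m - r) ** 2 for m, r in zip(measured, reference))
    den = sum(r ** 2 for r in reference)
    return num / den

ref  = [0.0, 0.50, 1.00, 0.50, 0.00]
meas = [0.0, 0.52, 0.98, 0.49, 0.01]
err = nmse(meas, ref)    # small value: pulse closely matches the reference
```

A compliant device must keep this error below the limit specified for the pulse shape.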

Although not specified in the 802.15.4z standard, direct receiver measurements must be performed to ensure quality parts. A typical receiver test measures the minimum power level at which the device can operate with minimum error. A typical way to perform this test is to send a minimum power stimulus to the device under test and measure the device’s packet error rate (PER).
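A minimal sketch of that sensitivity test, assuming the tester records a pass/fail flag per transmitted packet (the packet count and PER limit below are illustrative, not values from the standard):

```python
# Send N packets at the minimum specified power, count packets the DUT
# fails to receive correctly, and compare the PER to a limit.
def per_test(received_ok_flags, per_limit=0.01):
    n = len(received_ok_flags)
    errors = sum(1 for ok in received_ok_flags if not ok)
    per = errors / n
    return per, per <= per_limit

flags = [True] * 995 + [False] * 5       # 5 errors in 1000 packets
per, passed = per_test(flags)            # per = 0.005, passes a 1% limit
```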

Finally, ToF and AoA measurements characterize the positioning performance. In high-volume production test, such measurements are often performed using phase shifts between two Rx antenna inputs.
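The geometry behind phase-difference AoA: two Rx antennas spaced d apart see an incoming plane wave with a phase difference Δφ = 2πd·sin(θ)/λ, so the arrival angle θ is recovered by inverting that relation. A sketch, with half-wavelength antenna spacing assumed purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def aoa_deg(dphi_rad, spacing_m, freq_hz):
    """Arrival angle in degrees from the measured inter-antenna phase
    difference, inverting dphi = 2*pi*d*sin(theta)/lambda."""
    lam = C / freq_hz
    s = dphi_rad * lam / (2 * math.pi * spacing_m)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

fc = 6.5e9                     # roughly the channel-5 center frequency
d = (C / fc) / 2               # assumed half-wavelength antenna spacing
broadside = aoa_deg(0.0, d, fc)        # equal phase at both ports: 0 deg
theta = aoa_deg(math.pi / 2, d, fc)    # pi/2 phase difference: 30 deg
```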

UWB devices present three specific test challenges that traditional ATE systems cannot address. The first relates to the high RF frequencies involved, ranging up to more than 10 GHz—exceeding the typical less-than 6 GHz capability of many traditional ATE RF instruments. Second, UWB devices require wideband measurements extending to 1.35 GHz—well beyond the 200 MHz limits of traditional instruments. Third, UWB devices must be tested using frequency-domain PSD measurements and time-domain impulse-response measurements, a combination that requires test software with complex algorithms and an efficient architecture to handle the huge amounts of data processing required.

UWB test platform

The flexible Advantest V93000 platform can be configured with appropriate hardware and software to support the thorough test of UWB devices. The platform’s Wave Scale RF instruments cover the frequency range from 10 MHz to 70 GHz. The V93000 platform can also support the necessary wide bandwidths. For example, the Wave Scale RF18 card supports 5.85 GHz to 18 GHz frequency stimulus and measurements with a 200 MHz bandwidth. Adding the optional Wave Scale Wideband card to the Wave Scale RF18 extends the bandwidth up to 2 GHz. The combination also has built-in event triggering—useful for testing asynchronous UWB chips’ Tx packets. The platform can accommodate 128 RF ports to enable efficient multisite parallel testing.

V93000 SmarTest 8 software contains a UWB demodulation library and can analyze a UWB signal in the time and frequency domains and measure such items as the transmit PSD mask, transmit center-frequency tolerance, baseband impulse response, chip clock rate, and chip carrier alignment. To enable rapid test-data analysis, SmarTest 8 supports hidden uploads of captured waveforms and multi-threaded background processing of previously captured data while simultaneously capturing the next measurement. In addition, standard existing SmarTest 8 features can make receiver and ToF-related measurements. Finally, AoA tests can be performed by applying signals of different phase to different receiver ports, as shown in Figure 5.


Figure 5. AoA tests require relative phase measurements between antenna receiver ports in response to applied stimulus from V93000 instruments.
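The underlying math of such an AoA measurement is the textbook two-element phase-interferometry relation Δφ = 2πd·sin(θ)/λ. The sketch below is a minimal illustration of inverting that relation; it is not the V93000's internal algorithm, and the spacing and wavelength values are assumptions for the example.

```python
import numpy as np

def estimate_aoa(phase_a, phase_b, spacing, wavelength):
    """Estimate angle of arrival (radians) from the measured phase
    difference between two antenna/receiver ports a distance
    `spacing` apart, assuming a plane-wave arrival.
    """
    # Wrap the raw phase difference into [-pi, pi).
    dphi = np.angle(np.exp(1j * (phase_b - phase_a)))
    # Invert dphi = 2*pi*spacing*sin(theta)/wavelength.
    arg = dphi * wavelength / (2 * np.pi * spacing)
    return np.arcsin(np.clip(arg, -1.0, 1.0))
```

With half-wavelength element spacing, the wrapped phase difference maps one-to-one onto angles in ±90°, which is why that spacing is a common design choice.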

Conclusion

As the UWB market rapidly expands, test platforms are adapting to accommodate the high RF frequencies, wide bandwidths, and complex modulation schemes involved. The Advantest V93000 platform’s hardware and software include the standard and UWB-specific features necessary to test UWB devices.

Acknowledgments

We would like to acknowledge and give our warmest thanks to Frank Goh, who supports the UWB V93000 solution and provided professional guidance for this paper. Frank Goh is a principal consultant at the Center of Expertise Asia, Advantest Singapore.


True Zero Trust Combats IC Manufacturing Security Challenges

By Michael Chang, ACS VP and General Manager

The semiconductor manufacturing industry is facing a host of unprecedented technology and security challenges. A common catchphrase these days is that “data is the new oil.” Data is everywhere, in everything we do, and there is both good and bad associated with this trend. Data everywhere creates new security issues that need to be addressed to protect the integrity of your information and your devices. Advantest has done this through a new infrastructure setup that enables a True Zero Trust environment on the fab test floor – in turn, allowing us to truly embrace AI without having to fear security repercussions.

Addressing core challenges

Some of the key technology challenges for chipmakers include chip-level scale integration, which requires integrating new types of setup tools and data for making measurements; system-level quality challenges; and achieving chip-scale sensors. Another area of focus is manufacturing 2.5D and 3D chiplets.

A paper published in 2021 by three Google engineers identified an issue with cores failing early due to fleeting computational errors not detected during manufacturing test, which they call “silent” corrupt execution errors. The paper goes on to propose that researchers, hardware designers, and vendors collaborate to develop new measurements and procedures to avoid this problem. The interim solution is to isolate and turn off failing cores, but the authors hint that, because of chip-level integration and 2.5D/3D packaging, new approaches are needed to measure and screen out these failing cores automatically.

The other side of this coin is security concerns. Access to systems is limited, and arbitrary software cannot be installed on machines. We use firewalls, anti-virus and anti-spyware software, encryption, password management and other technologies to protect our computers, but they’re not infallible. Experts agree that cyberattacks are inevitable, so there needs to be a way to use data itself to protect all the data on our systems. Advantest is doing this through our ACS offerings, which enable real-time data security, as shown in Figure 1: ACS Nexus™ for data access, ACS Edge™ for edge computing, and the ACS Unified Server for True Zero Trust™ Security.

Figure 1. Advantest’s open solution ecosystem. Data is needed from all sources to mitigate new challenges.

As Figure 1 illustrates, through our Real-Time Data Infrastructure, we can integrate data sources from across the chip manufacturing supply chain, leveraging that data to continually improve our insights and solutions. We can implement test throughout the product lifecycle, taking real-time action during production. Nothing has to be done away from the test floor; all analyses and actions occur during actual test, maintaining fully secure zero trust protection of the data.

Security is more than protection

One way to illustrate the approach that we take to security in semiconductor manufacturing is to look at a seemingly unrelated example: the International Space Station (ISS). Designed to protect against damage from space debris, the outer hull of the ISS is outfitted with Whipple bumpers. These multi-layered shields are placed on the hull with spaces between the layers. The intent is that impact with a layer will slow and, ideally, break apart the projectile, so that by the time it reaches the bottom layer, any potential harm has been prevented. While the bumpers slow the kinetic energy of the debris, something will eventually get through. The second line of defense is the ISS’s containment doors, which ensure any areas where air leaks have been detected can be isolated so that the astronauts are protected. Clearly, this is mission-critical.

The key word: “containment.” It’s not enough to protect – no system is infallible. You also have to contain a breach so that potential security issues don’t become pervasive and cause major damage. The challenge, from the test-cell perspective, is that the test cell sits on a test floor surrounded by all kinds of other equipment you have no control over. And not just other manufacturing tools. Everyday office products can be vulnerable to hacking – computers, software, printers, routers… even smart appliances in the break room such as IoT-enabled coffee pots. Hackers are increasingly finding ways to get to us through software update servers, routers, and printers, and even by bypassing firewalls.

Figure 2. The ACS-enabled True Zero Trust environment for the test floor is a must to ensure containment.

The bottom line is that your infrastructure is going to be vulnerable, so you must add a reliable containment structure such that, when there is an attack, you can shut down. This is what our True Zero Trust™ environment is designed to enable. The “zero trust” concept is just what it sounds like – the complete elimination of the assumption of trust from within networks and systems. This means that no default trust is granted to any user or device, either inside or outside an organization’s network. This model grants resource access on a need-to-know basis only, requiring stringent identity verification and contextual information that cannot be known or provided by another source. By preventing unauthorized access to sensitive data, companies mitigate the risks of data breaches and attacks, whether external or internal.
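The deny-by-default logic at the heart of zero trust can be caricatured in a few lines of code. The sketch below is purely illustrative of the model described above; the names, grant tuple, and verification flags are assumptions for the example, not ACS APIs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    device: str
    resource: str

# Explicit need-to-know grants. Anything not listed here is denied:
# no default trust exists inside or outside the network.
GRANTS = {("op1", "tester-07", "lot-results")}

def authorize(req: AccessRequest, identity_verified: bool,
              context_verified: bool) -> bool:
    """Deny by default: access requires verified identity, verified
    context (e.g. device posture, location), AND an explicit grant."""
    if not (identity_verified and context_verified):
        return False
    return (req.user, req.device, req.resource) in GRANTS
```

Note that the function has no “allow” branch that skips verification; failing either check, or lacking an exact grant, yields denial. That asymmetry is what makes the model containment-friendly: a compromised device simply stops matching.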

What does this mean for AI/machine learning?

New chip technologies require new measurements, relying on multi-dimensional data. Large language models (LLMs) are creating vast new opportunities in all domains. LLMs are machine learning models that can perform natural-language processing tasks such as generating and classifying text, answering questions, and translating text. LLMs are trained on massive amounts of text data and use deep learning models to process and analyze complex data. This can take several months and result in a pretty hefty power bill.

However, during training, LLMs learn patterns and relationships within a language while aiming to predict the likelihood of the next word based on the words that came before it. We’re talking about a very large number of parameters and petabytes of data. LLMs are used in a variety of fields, including natural language processing, healthcare, and software development.
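The next-word objective can be illustrated with a toy bigram model. It is nothing like a transformer in scale or capability, but it is built around the same idea: estimating the probability of the next word from the words that came before it. The corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probs(counts, prev):
    """Relative follow-frequencies serve as P(next word | previous word)."""
    total = sum(counts[prev].values())
    return {word: n / total for word, n in counts[prev].items()}
```

A real LLM conditions on a long context rather than a single preceding word, and learns billions of parameters rather than a count table, but the training signal is this same conditional prediction.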

Currently, LLMs can comprehend and link topics, and they have some understanding of math. But an app like ChatGPT – the most popular and widely used LLM application – does not understand new developments, as it is not connected to the Internet. LLMs can recognize, summarize, translate, predict, and generate human-like text and other content based on knowledge from large datasets, and they can perform such natural-language processing tasks as:

  • Sentiment analysis
  • Text categorization
  • Text editing
  • Language translation
  • Information extraction
  • Summarization

Using LLMs to summarize knowledge and feed it into the test cell or test floor can be done in a True Zero Trust environment because there is no danger of the data being manipulated in undesirable ways. With that said, LLMs aren’t self-aware – they don’t know when they make mistakes, so an LLM should be considered a data exoskeleton.

Conclusion 

Over the next few years, we can anticipate a significant shift in the types of applications being developed, moving away from traditional statistical machine learning and towards more sophisticated autonomous or semi-autonomous agents that can automate testing. In order to effectively safeguard the valuable assets and intellectual property of OSAT and fabless organizations, containment is necessary. The ACS Real-time Data Infrastructure offers a highly secure containment system called True Zero Trust. Through its innovative design, this infrastructure establishes a cutting-edge paradigm that allows for the creation of secure data highways and paves the way for building novel applications with enhanced security.
