An Expert Looks At the Issues: Paul Sakamoto on Final Test
Paul M. Sakamoto is Vice President and General Manager of the Memory Products Div. of Credence Systems, Fremont, Calif. Earlier he was Vice President of Sales for Micro Component Technology, Director of Sales Development for Megatest Corp. and a Product Engineering Supervisor at Intel Corp. He earned a bachelor's degree in electrical and computer engineering from Oregon State University, Corvallis. Paul is also a Chip Scale Review columnist.
Q Next year at this time, what will the experts say is new in final test?
A What will be new won't be new from the standpoint of invention, but rather of deployment. That is, moving performance test from final test to probe will have taken another big jump.
In the past, most of the work here has been done in memory test and in some high-end logic. By next year, we should see more mixed-signal applications, where the bulk of the test time is spent at wafer sort instead of final test. This will reduce the total demand for final test capacity. In addition, strip handling for test and built-in self-test will have some meaningful amount of usage.
Q Since ATE keeps getting bigger, faster, more automated and more integrated, why does final test remain a big production bottleneck?
A In fact, this issue is what is driving the effort to get more testing done at wafer sort as well as what drives a lot of the move towards strip handling.
One of the big waste issues at final test is that these testers are usually among the higher-performance, higher-expense pieces of test capital in the process flow, yet they spend a lot of their time waiting for handling equipment to position the devices for test.
This handler index time can run to tens of seconds for a highly parallel memory handler and usually amounts to several seconds for the various logic and mixed-signal handlers, assuming parallel socketing.
When you analyze the test time vs. handling time and then compute capital utilization, final test becomes very wasteful compared to wafer sort, where handling time is measured in sub-second intervals.
Q Has there been any real progress in test standards?
A Over time, all industries eventually reach some level of standardization if it makes sense to do so. And it has to be in the form of economic sense for the suppliers as well as the buyers, or the suppliers will not invest in the standard.
In the front-end equipment area, there have been a few standards where equipment vendors have decided it would be nice to be able to connect harmoniously to prevent redundant development. A few examples are in material I/O and in data formats such as GEM/SECS.
In the backend, there has been less forward movement. The few real examples would include the use of electrical power (but not the voltage levels of such) and the fact that the vast majority of testers use 50-ohm transmission lines for device signal paths.
I don't think that the standardization can occur until there is one vendor who so totally dominates the others in this relatively limited market, with such a superior software model, that the others will be forced to follow the leader. On the other hand, one company may just crush the others through its business practices and become the Microsoft of ATE.
Q Is vertical integration by ATE makers a long-term trend?
A I think that this is a long-term trend as the industry goes through the next maturation level. At one point, ATE companies were so vertically integrated that they made their own computers, relays, meters, etc.
This was wrong for enterprises of such relatively small size. Eventually, more and more of the product became outsourced or subcontracted to the point where there is now a large potential to lose control over critical technical aspects of the final product.
For instance, leaving the task of interface development to a garage-shop loadboard company when the sale of a million dollar tester depends on it is poor risk management.
The little guy can build the tenth one, and we would give them the documentation package to do a good job, but the first critical applications must come from the ATE vendor.
There is a reverse situation where one can really question the wisdom of leaving the development and maintenance of critical ASIC devices in our machines to large outside contractors who will have other priorities as soon as there is an upturn.
The short answer, then, is that I believe that having a vertical base capability for development is key. Having enough capacity to do it all is probably wrong.
Q What is the relative impact of software and hardware for memory test vs. logic and mixed signal testing?
A Memory testing places more emphasis on the hardware than the software element. Although memory test requires that the engineer develop the actual test patterns that stimulate and test the device, the fact is that most of the patterns in use today are descendants of the previous generation, with some incremental additions for array size or process issues.
Q Will there ever be a single, simple software test platform that will enable, for example, users to swap Company A's memory test system for Company B's system without tears?
A I would say probably not. This goes back to your question about standards. There would have to be a "Microsoft" of ATE software that could take over the total software burden and leave the hardware development to the few remaining manufacturers.
Q What impact will wafer-level processing have on final test?
A As with all new technologies, the impact of wafer-level testing and burn-in will be modulated by adoption rates, which, in turn, will depend on start-up costs, on how broadly those costs apply across the devices to be processed, and on how long it takes to set up for WLP compared with today's very established methodology.
That is, even after the technology has been proven and re-proven many times, it will still require a large changeover of logistics before it really turns over a significant share of worldwide volume. However, that being said, this is another radical step in moving test time further toward the front end and away from the final test machines.
As this evolves, the amount of total test seconds at final test will grow more slowly. I still don't see how this will work out for high-mix/low-volume devices, or even for medium volume devices, but the proponents seem to have some reasonable scenarios under which it does dominate the future.
I maintain, however, that when we look at characterization, development, analysis and other areas that require measurement instead of just testing, it is hard to see how WLP is tenable. So, I would bet that ten years from now, we still have both tools in our arsenal.
Q What is the dynamic between wafer-level processing and strip handling of packaged devices?
A In order of "distance from end application environment," wafer-level processing is the most distant; strip handling is second most distant, and individual device socketing is potentially the closest.
Depending on how the actual implementation is done, the results will overlap or change order. My assertion assumes a comparable level of effort across the three. It also assumes that the end-use environment is not a "silicon-on-silicon" hybrid, which would give WLP an advantage in correlating to the final application.