The first integrated circuits were invented at Texas Instruments in 1958 and Fairchild Semiconductor in 1959. Today, semiconductor manufacturing is a $600 billion industry, and microchips are ubiquitous, impacting our lives in ever-increasing ways. To achieve such astonishing growth, academia and industry have had to constantly innovate, researching new production technologies. While much has been said about Moore's law and the push towards higher and higher transistor densities, the innovations in how the billion-dollar factories producing these chips are run have received less attention. This article focuses on innovations in scheduling: algorithms which assign lots to machines, decide in which order they should run, and ensure any required secondary resources (e.g. reticles) are available. These decisions can significantly impact the throughput and efficiency of wafer fabs.
Many innovative technologies in scheduling were first proposed by researchers and have, over time, been adopted in manufacturing. They include:
- Dispatching: rule-based systems for deciding which lot to run next on a tool
- Optimization-based scheduling: mathematical techniques like mixed integer programming and constraint programming which can generate optimal machine assignments, sequencing, and more for entire toolsets or areas of the fab, improving fab-wide objectives like cycle-time or cost
- Simulation: computer models of the manufacturing process which are often used to run what-if analysis, evaluate performance, and aid decision making
From dispatching to mathematical programming
Early academic research on dispatching rules dates back to the 1980s. Authors at the time already highlighted the significant impact scheduling can have on semiconductor manufacturing. They experimented with different types of dispatching rules, ranging from simple first-in-first-out (FIFO) rules to bespoke rules focused on particular bottleneck tools. Over time, dispatching rules have evolved from fairly simple to increasingly complex. Rule-based dispatching systems quickly became the state of the art in the industry and continue to be popular for good reason: they are intuitive and easy to implement, yet flexible enough to cover varying requirements. There are, however, many situations in which dispatching rules perform poorly: they have no foresight and generally consider only a single tool, so they often struggle to balance load between tools. They also struggle with more advanced constraints such as time constraints or auxiliary resources, e.g. reticles in photolithography. More generally, dispatching is a mature technology that has been pushed to its limits and is unlikely to deliver further significant gains in productivity and yield.
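To make the contrast concrete, here is a minimal sketch of two dispatching rules of the kind described above: FIFO and a critical-ratio rule that prioritises lots with the least slack per unit of remaining work. The `Lot` class and the numbers are invented for illustration; real dispatching systems layer many more conditions on top of rules like these.

```python
from dataclasses import dataclass

@dataclass
class Lot:
    lot_id: str
    arrival_time: float      # time the lot joined the queue
    due_date: float          # target completion time
    remaining_work: float    # total processing time still required

def fifo(queue, now):
    """First-in-first-out: run the lot that has waited longest."""
    return min(queue, key=lambda lot: lot.arrival_time)

def critical_ratio(queue, now):
    """Run the lot with the least slack per unit of remaining work.

    Ratios below 1.0 mean the lot is already projected to be late."""
    return min(queue, key=lambda lot: (lot.due_date - now) / lot.remaining_work)

queue = [
    Lot("A", arrival_time=0.0, due_date=50.0, remaining_work=10.0),
    Lot("B", arrival_time=5.0, due_date=20.0, remaining_work=12.0),
]
print(fifo(queue, now=8.0).lot_id)            # lot A arrived first
print(critical_ratio(queue, now=8.0).lot_id)  # lot B has the tighter ratio
```

Note that both rules look only at the queue of a single tool, which is exactly the lack of foresight discussed above.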
For these reasons, focus has shifted over time to alternative technologies, especially deterministic scheduling based on mixed-integer programming or constraint programming. These approaches began to appear in the academic literature in the 1990s. Early contributions analysed the complexity of the wafer fab scheduling problem and solved the resulting optimization problems with heuristic techniques, before gradually moving towards rigorously scheduling single machines, tackling one aspect of the problem at a time. Because of the limited scope these deterministic techniques could initially handle, their adoption in industry lagged behind the academic discussion.
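The single-machine problems these early works tackled can be stated exactly. A classic example, sketched below with made-up jobs, is minimizing total weighted completion time on one machine: small instances can be solved by brute-force enumeration, and the result coincides with the well-known weighted-shortest-processing-time ordering (Smith's rule), which is optimal for this objective. Larger, rigorous formulations of such problems are precisely what mixed-integer and constraint programming solvers are used for.

```python
from itertools import permutations

# Hypothetical jobs: (name, processing_time, weight); weight reflects priority.
jobs = [("J1", 4.0, 1.0), ("J2", 2.0, 3.0), ("J3", 6.0, 2.0)]

def total_weighted_completion(sequence):
    """Objective: sum of weight * completion time over the sequence."""
    t, cost = 0.0, 0.0
    for _, p, w in sequence:
        t += p
        cost += w * t
    return cost

# Brute force: try every ordering. Only viable for tiny instances, which is
# why realistic problem sizes call for MIP/CP formulations instead.
best = min(permutations(jobs), key=total_weighted_completion)

# Smith's rule: sorting by processing_time / weight is optimal for this
# single-machine objective, and agrees with the enumerated optimum.
wspt = sorted(jobs, key=lambda j: j[1] / j[2])

print([j[0] for j in best])   # optimal order
print(total_weighted_completion(best))
```

The point of the sketch is the gap it exposes: enumeration explodes combinatorially, so scaling beyond one machine required the mathematical programming machinery discussed next.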
From single machines to fab-wide scheduling
The last twenty years have seen deterministic scheduling techniques mature and scale to larger and more complex fab areas. In the academic literature, authors moved from single (batching) tools to entire toolsets and larger areas of the fab, including re-entrant flows. They also incorporated more and more operational constraints, such as sequence-dependent setup and processing times, time constraints, and secondary resources such as reticles. To achieve this increase in scale and complexity, researchers applied a wide range of optimization techniques, often combining rigorous mathematical programming methods with heuristic approaches. Some used general-purpose meta-heuristics, such as genetic algorithms or simulated annealing, while others developed bespoke heuristics for fab scheduling, such as the shifting bottleneck heuristic.
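As an illustration of one of the meta-heuristics named above, the sketch below applies simulated annealing to a toy single-tool sequencing problem with sequence-dependent setup times. The setup matrix and annealing parameters are invented for the example; production implementations operate on far larger instances and richer constraint sets.

```python
import math
import random

random.seed(42)

# Hypothetical sequence-dependent setup times: setup[i][j] is the changeover
# time incurred when job j runs immediately after job i on the tool.
setup = [
    [0, 5, 9, 4],
    [5, 0, 3, 8],
    [9, 3, 0, 2],
    [4, 8, 2, 0],
]

def total_setup(seq):
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

def anneal(n_jobs, iters=5000, t0=10.0, cooling=0.999):
    """Simulated annealing: swap two jobs, accept worse moves with a
    probability that shrinks as the temperature cools."""
    seq = list(range(n_jobs))
    best, best_cost = seq[:], total_setup(seq)
    cost, temp = best_cost, t0
    for _ in range(iters):
        i, j = random.sample(range(n_jobs), 2)
        cand = seq[:]
        cand[i], cand[j] = cand[j], cand[i]
        cand_cost = total_setup(cand)
        if cand_cost <= cost or random.random() < math.exp((cost - cand_cost) / temp):
            seq, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = seq[:], cost
        temp *= cooling
    return best, best_cost

best_seq, best_cost = anneal(4)
print(best_seq, best_cost)  # finds the minimum-setup ordering for this instance
```

Meta-heuristics like this scale gracefully but offer no optimality guarantee, which is why the literature so often pairs them with rigorous mathematical programming.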
As the size of the problems optimization-based scheduling techniques could solve grew, the industry started to explore how to adopt these methods in practice. For example, in 2006, IBM announced that it had successfully used a combination of mixed-integer programming and constraint programming to schedule an area of a fab with up to 500 lot-steps, leading to a significant reduction in cycle time. Our own technology at Flexciton leverages mathematical optimization and smart decomposition, combined with modern cloud computing, to efficiently schedule entire fabs. One key advantage of cloud technology is access to huge amounts of computational power. This makes it possible to break down complicated problems and deliver accurate schedules every few minutes, and to adapt the solution strategy to the complexity at hand. It also enables responsive adjustments as events unfold in real time, allowing for a truly dynamic approach to scheduling.
Optimization-based scheduling’s trajectory from an academic niche to a high-impact technology has partly been accelerated by two major trends: the steady growth in available computational power, exemplified by cloud computing, and continual improvements in the optimization solvers themselves.
The process has been accompanied by considerable improvements in productivity, as scheduling overcomes many of the downsides of dispatching: it can look ahead in time, balance WIP across tools, and improve fab-wide objectives such as cost or cycle time. A major advantage of scheduling is that it can both increase throughput when demand is high and reduce cost when demand is low.
When in doubt, simulate.
A discussion of scheduling in wafer fabs would not be complete without a word on simulation models. Simulation models are technically not scheduling algorithms: they require dispatching rules or deterministic scheduling inside them to decide machine assignment and sequencing. But they have been used to evaluate and compare different scheduling approaches from the very beginning. They were also quickly adopted by industry and have, for example, been used by STMicroelectronics to re-prioritise lots and by Infineon to help identify better dispatching rules. The development of highly reliable simulation models could greatly increase their use for performance evaluation and scheduling.
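The core mechanism of such models is discrete-event simulation: jump from event to event on a clock, maintaining queues and tool states in between. The sketch below simulates one tool serving three lots under a FIFO rule, with made-up arrival and processing times, and records each lot's cycle time; fab-scale simulators follow the same event-loop pattern across thousands of tools.

```python
import heapq

# Illustrative data: (arrival_time, lot) pairs and per-lot processing times.
arrivals = [(0.0, "A"), (1.0, "B"), (2.5, "C")]
process_time = {"A": 3.0, "B": 2.0, "C": 1.0}
arrived_at = {lot: t for t, lot in arrivals}

# Event list ordered by time; each event is (time, kind, lot).
events = [(t, "arrival", lot) for t, lot in arrivals]
heapq.heapify(events)

queue, busy_until = [], 0.0
cycle_times = {}

while events:
    time, kind, lot = heapq.heappop(events)
    if kind == "arrival":
        queue.append(lot)
    else:  # "finish": record how long the lot spent in the system
        cycle_times[lot] = time - arrived_at[lot]
    # If the tool is free and lots are waiting, start the next one (FIFO).
    if time >= busy_until and queue:
        next_lot = queue.pop(0)
        busy_until = time + process_time[next_lot]
        heapq.heappush(events, (busy_until, "finish", next_lot))

print(cycle_times)  # per-lot time from arrival to completion
```

Swapping the `queue.pop(0)` line for a different selection rule is exactly how simulation is used to compare dispatching strategies before deploying them on the shop floor.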
More reliable simulation models are also important in light of recent trends in the academic literature, which may provide a glimpse into the future of wafer fab scheduling. Rigid dispatching rules that need to be (re)tuned frequently may soon be replaced by deep reinforcement learning agents that learn dispatching rules which improve overall fab objectives. In some studies, such systems have been shown to perform as well as dispatching systems based on expert knowledge. Whether and when the industry will adopt such techniques on a large scale remains to be seen. Because they require accurate simulation models as training environments, they can be extremely computationally intensive, and their adoption will largely depend on the development of faster training methods and simulation models. The combination of self-learning dispatching systems and comprehensive, scalable scheduling models may well hold the key to unlocking unprecedented improvements in fab productivity.
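To give a flavour of the learning loop involved, here is a deliberately tiny sketch: an epsilon-greedy agent that learns, by trial against a stand-in "simulator", which of two dispatching rules yields the shorter cycle time. The simulator, the rule names, and all numbers are invented stand-ins; real work in this area trains deep networks against full fab simulation models, which is where the computational cost arises.

```python
import random

random.seed(0)

# Stand-in simulator: returns a noisy reward (negative cycle time) for
# running one shift under a given dispatching rule. Invented numbers; a real
# agent would be trained against a detailed fab simulation model.
def simulate_shift(rule):
    mean_cycle_time = {"fifo": 12.0, "critical_ratio": 9.0}[rule]
    return -(mean_cycle_time + random.gauss(0, 1.0))

rules = ["fifo", "critical_ratio"]
q = {r: 0.0 for r in rules}      # estimated reward per rule
n = {r: 0 for r in rules}        # times each rule was tried
epsilon = 0.1                    # exploration rate

for episode in range(500):
    if random.random() < epsilon:
        rule = random.choice(rules)          # explore a random rule
    else:
        rule = max(rules, key=q.get)         # exploit the current best estimate
    reward = simulate_shift(rule)
    n[rule] += 1
    q[rule] += (reward - q[rule]) / n[rule]  # incremental mean update

print(max(rules, key=q.get))  # the agent settles on the lower-cycle-time rule
```

Each "episode" here is one call to a cheap function; when every episode is instead a full fab simulation, the need for faster training and simulation models discussed above becomes obvious.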
Flexciton aspires to be the key enabler in this transition, bringing state-of-the-art scheduling technology to the shop floor in a modern, sophisticated, and user-friendly platform unlike anything else on the market. Despite the enormous challenges that come with the scale of this endeavour, the initial results are very encouraging: cloud-based optimization solutions can indeed bring a step change to wafer fab scheduling while delivering consistent efficiency gains.