As next-gen designs become increasingly sophisticated, a more holistic and streamlined approach to the manufacturing process is vital.
As I’ve talked about in previous blogs, the semiconductor industry faces serious challenges on a number of fronts.
The supply chain issues caused by Covid are still a headache. While some industries (automotive in particular) are putting pressure on chip companies to ramp up production, others, such as data storage, suffer from demand downturns. Another key factor impacting chip making is an ongoing shortage of skilled labour within the industry. Then there’s the problem of manufacturing equipment, with companies either unable to source second-hand tools or new tools being too expensive due to inflation. And as the world’s energy crisis continues, power itself – and skyrocketing electricity bills – is also a major concern.
As I discussed in my presentation at last year’s Fab Management Forum, the big issue that underlies all of these challenges is complexity. In many ways, fabs and the way they operate haven’t changed much in the past decade – yet the products they make have become increasingly sophisticated and as a result, more difficult to manufacture at scale. It’s not unusual now to see chip designs going into production with over 1,600 unique steps required to produce them, in cycle times that can stretch up to nine months. And as an example of just how complex chips are becoming, Micron recently began volume production of the world’s first 232-layer NAND.
This level of sophistication is only going to increase in the coming years, and the complexity challenge will soon reach breaking point if fabs continue with current practices. Unless fabs introduce new methods to streamline and simplify the management of the production process, their performance and output will continue to suffer, hindered by the sophistication of their own products.
What’s the problem with how fabs attempt to deal with complexity? Currently, they follow the classic model of addressing a big problem by breaking it down into a series of smaller, more manageable problems, with different teams assigned specific challenges to tackle. However, this approach has created problems of its own – different teams within the fab also have different priorities and KPIs, which they often work towards in isolation. And as individual teams try to maximise their own KPIs, conflicts can arise that negatively impact production itself.
Let’s drill down into the complexity issue and look at how it affects production scheduling in particular.
There are a number of different areas within chip production – metrology, photolithography, diffusion furnace, epitaxy etc – which each have their own set of tools and rules as to how they operate. Each area also has its own team with their own KPIs. So while the overarching objective of a fab is to produce a required number of saleable wafers, each team also has more granular objectives against which they’re being measured.
Typically, teams schedule production in their areas according to a series of rules that dictate the sequence in which wafers are processed – for example, this particular recipe should always run on this particular tool. That sounds simple enough, except there can be thousands of these rules for each area – in fact, it’s so difficult for industrial engineers to properly manage and control each area’s parameters that the rules tend to be full of simplifications and shortcuts.
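To make that concrete, a rules-based area scheduler is essentially a priority-ordered list of if-then rules. Below is a minimal sketch in Python; the recipe names, tool IDs and rules are purely illustrative, not any fab’s actual logic:

```python
# A toy rules-based dispatcher: the first matching rule decides the tool.
# Recipe names, tool IDs and rules are illustrative only.
RULES = [
    # (condition, tool) – evaluated in priority order
    (lambda lot: lot["recipe"] == "OXIDE_A", "FURNACE_01"),
    (lambda lot: lot["recipe"].startswith("LITHO"), "SCANNER_02"),
    (lambda lot: lot["priority"] == "hot", "ANY_IDLE_TOOL"),
]

def dispatch(lot):
    """Return the tool assignment dictated by the first matching rule."""
    for condition, tool in RULES:
        if condition(lot):
            return tool
    return "DEFAULT_QUEUE"  # no rule matched – fall back to FIFO

print(dispatch({"recipe": "OXIDE_A", "priority": "normal"}))  # FURNACE_01
```

In a real area there can be thousands of such rules, each needing manual maintenance as fab conditions change – which is exactly why simplifications and shortcuts creep in.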
To maintain the fab’s performance, these rules also require regular maintenance to respond to different events happening in the fab on a daily basis. Yet given their sheer volume, and the growing complexity of the products being made, it’s impossible for teams to adapt every rule to address the real-time situation. An additional issue is that each area has its own software to administer these rules and monitor its KPIs, but it generally doesn’t interoperate with the software in other areas.
All of which means that the teams aren’t able to see the status of each other’s area – they can only operate based on their own data. Not only have the rules they use been simplified in an attempt to deal with complexity, but they’re designed to meet each area’s objectives, not the overarching goal of production. So while individual teams may be hitting their own KPIs, the overall performance of the fab is inconsistent.
There is no ‘big picture’ of the production process that individual teams are able to consult to guide their decision-making – and as it is, they are not being judged on overall performance, just how well their own area is doing. But this is simply not a viable way for fabs to work going forward.
So what is the solution for handling production complexity on its own terms rather than constantly diluting it? It’s counter-productive to try and simplify data when it’s that very complexity that makes it so powerful – and by genuinely engaging with every aspect of it, it’s possible to gain a more accurate and comprehensive picture of what’s happening in the fab. Rather than simplifying the data, we should instead be simplifying the process.
The first step to managing complexity is employing an intelligent scheduling system that operates based on a holistic overview of what’s actually happening in the fab at any one time, identifying and responding to bottlenecks in the WIP as they happen. It also needs to make these adjustments and deliver schedules autonomously, because as we’ve seen, the complexity and unpredictability of modern fab operations make it infeasible for conventional rules-based schedulers to deliver consistent results. The constant requirement for manual retuning is a drain on IE resources, and the intelligence in the software itself is not advanced enough to effectively tackle the hardest problems found in a wafer fab.
Is such an autonomous approach to scheduling possible? Short answer, yes it is, but it requires a willingness on the part of the semiconductor industry to a) fully embrace smart manufacturing practices, and b) switch from their conventional scheduler to deploy a best-in-class technology that leverages both the power of the cloud and the computational speed of AI.
The complexity of modern chip design demands a new approach to production that is equal to this complexity – otherwise, the industry will be forever on the back foot, constantly struggling to keep up with the future while failing to capitalise on the richness of the data available to it in the here and now.
Author: Jamie Potter, CEO and Cofounder of Flexciton
Timelinks are one of the most challenging scheduling problems found in a wafer fab and were causing a particular problem for Renesas Electronics' US fab. After seeing the potential performance gains with our software trial, they decided to go ahead with full implementation.
Timelink constraints are one of the most complex issues to handle in fab scheduling. They define the maximum allowed time between steps in the production of a wafer. Correct scheduling of timelinks is critical to helping minimise the risks of oxidation or contamination. This can happen when a wafer is queuing outside of a tool for too long, resulting in scrappage or rework that damages profitability. Renesas Electronics asked Flexciton to see if its intelligent scheduling software could improve this aspect of scheduling in the diffusion area of its wafer fab.
What makes timelink constraints very hard to schedule is their interdependence. For example, by moving from step one to step two, the wafer enters the first timelink. When moving from step two, the wafer enters a second timelink which lasts until step four. However, there can also be a third timelink constraint – known as a nested timelink – between step three and step four which overlaps the second timelink constraint (see Fig. 1). Therefore, step three has to be scheduled in a way that allows for both the second and third timelink constraints to be adhered to. The example discussed covers just a few steps but, in reality, there could be hundreds of steps and many overlapping time constraints that need to be continually considered. This creates one of the most complex scheduling problems seen in a wafer fab, and any violation of the timelinks has a negative financial impact.
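The interdependence is easier to see in code. The sketch below checks a set of step start times against overlapping timelink constraints; the step times and limits are invented to mirror the three-link example above, not taken from the Renesas fab:

```python
def timelink_violations(start_times, timelinks):
    """start_times: step index -> start time (minutes).
    timelinks: list of (from_step, to_step, max_minutes).
    Returns the timelink constraints that are violated."""
    return [
        (a, b, limit)
        for a, b, limit in timelinks
        if start_times[b] - start_times[a] > limit
    ]

steps = {1: 0, 2: 30, 3: 95, 4: 150}   # candidate schedule
links = [
    (1, 2, 60),    # first timelink: step one to step two
    (2, 4, 120),   # second timelink: step two to step four
    (3, 4, 45),    # nested timelink overlapping the second
]
print(timelink_violations(steps, links))  # [(3, 4, 45)]
```

Here the two outer timelinks hold, yet the nested one is violated – step three would need to start later (or step four earlier) for all three constraints to hold simultaneously. Multiply this by hundreds of steps and the combinatorial difficulty becomes clear.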
The most commonly used scheduling approach is based on heuristics, using a set of if-then operational rules that have been manually programmed and require constant maintenance. This is a relatively simplistic methodology that has hardly changed over the past two decades and thus cannot effectively solve today’s much more challenging scheduling problems. In modern-day fabs, complex, multi-dimensional problems arise on a daily basis, and existing heuristic approaches don’t have the built-in intelligence to look ahead to future steps.
Flexciton’s next-gen scheduling software is the only solution on the market that is able to do this. It pairs powerful mathematical optimisation technology with smart decomposition techniques to work out solutions with complete autonomy. It has the ability to generate an optimised production schedule within a few minutes by searching through billions of scenarios to select the best possible one. Importantly, its intelligent algorithms consider the knock-on effects that one change can have against all the other constraints in the fab – including timelinks. This repeating iterative process ensures that it is continually updating the schedule to allow for any changes in fab conditions or business objectives.
The software was run in a simulation environment that replicated the way that Flexciton’s scheduler would have run live at the Renesas fab. The results showed that timelink violations could be reduced by a significant 29%. On top of this, a 22% reduction in the number of batches and an 11% reduction in queue time were possible, despite these two KPIs being in conflict (see Fig. 2). They conflict because decreasing the number of batches naturally means increasing the number of wafers in each batch, but this increases the queue times for each batch as operators wait for new wafers to arrive at the tool before processing them together.
Currently, most fabs have no knowledge of the arrival times for future lots, so operators can sometimes wait unnecessarily to maximise a batch size, causing more wafers to queue and damaging productivity. Uniquely, the Flexciton scheduler can see how lots are moving in time and can thus optimise the trade-off between the number of batches and queue time to achieve the impressive gains seen on these conflicting KPIs.
Renesas were impressed with the simulation figures. Jay Maguire, Engineer at Renesas, commented, “Flexciton was able to show us several specific decisions we could have done differently to improve batching and cycle time. We are pursuing a live trial of the Flexciton software.”
Jamie Potter, Flexciton’s co-founder and CEO, explained, “The key differentiator of our approach is that our software has the intelligence to predict what may happen in the future based on the current state of a fab (or WIP in a fab). It searches for the best solution amongst billions of possibilities to continuously keep finding the optimal schedule that meets the KPIs to maximise a fab’s productivity and profitability. Humans and heuristics just can’t do that.”
The problem with complex systems is that there’s so much variability and interaction, it's hard to get actionable insights from data. In Part 1 of this blog, Ben Van Damme explains that instead of accepting the complex nature of a fab, factories can control it using advanced scheduling.
One of the consequences of the pandemic has been an incentive to deglobalise, as regions suffered from the issues with supply chains and geopolitical dependencies. Significant delivery issues in the chip industry – and in particular wafer manufacturing – have had a negative impact on the global economy. However, onshoring this high technology industry will also bring its own challenges: expertise and cost efficiency, to name a couple. Zooming in a bit closer on so-called wafer fabs, we can distinguish two types of factories: legacy and smaller fabs serving niche markets with older technology nodes, and cutting-edge giga-factories, recently built or in the making. Both types have different problems to tackle, but one key component of their roadmap could be surprisingly similar.
The newest fabs have well integrated automated systems, but operating them efficiently on such a scale is a challenge of its own. The older factories have the downside of being less automated but they realise the need to become more efficient in energy consumption, labour cost and capacity utilisation. In both situations, digital transformation is coming to the rescue. Industry 4.0 is no longer a buzzword, it has become a matter of regional technological sovereignty.
The fundamental building block of Industry 4.0 is data: an asset which is present in abundance in wafer fabs. So what is preventing these factories from levelling up? The answer is simple, the solution is not: complexity. It’s an inherent part of wafer manufacturing, stemming from increasingly high numbers of process steps, job shop factory types, re-entrant flows, product diversity, sensitivity to quality issues and so on.
The problem with complex systems is that there’s so much variability and interaction, it's hard to get actionable insights from data. Instead of accepting the stochastic and complex nature of the fab, factories can better control it by using advanced production scheduling to understand in which order lots get processed, on which tool and – the most important difference when compared with common rules-based approaches – when they get processed. To begin with, this can be employed in certain bottleneck areas; once you do it for the entire factory, you get a holistic picture of what is going to happen. Sounds great, doesn’t it? But how exactly will this benefit your fab? To explain, let’s place production scheduling in a couple of recognisable use cases.
Wafer manufacturing has complicated recipe-tool qualification matrices within a group of tools that perform similar processes. The weaker tools can process fewer recipes than the stronger ones. We want to avoid stronger tools “stealing” lots away from the weaker tools, because it leaves fewer lots for the weaker tools to process, therefore wasting capacity. The same is true for faster and slower tools: while faster tools are preferred, pushing all the WIP through the faster tools leaves the slower tools underutilised. Advanced schedulers allow for better anticipation of incoming WIP and superior use of available capacity for weak and slow tools. The bigger and more complex the matrix grows, the harder it is to find the optimal processing of WIP. On top of the scheduling itself, mathematical programming helps to optimise lot-to-tool assignments over time. This results in a capacity boost, similar to putting a turbocharger on an engine: it’s the same engine, but with more power.
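As a toy illustration of why “fastest tool first” wastes capacity, the sketch below compares a greedy assignment with an exhaustive search over a two-tool qualification matrix. Tool names, speeds and lot recipes are invented for illustration, and a real scheduler would use mathematical optimisation rather than brute force:

```python
from itertools import product

# Toy qualification matrix: recipes each tool can run, and its speed.
TOOLS = {
    "FAST_01": {"recipes": {"A", "B", "C"}, "wph": 100},  # strong, fast tool
    "SLOW_02": {"recipes": {"A"}, "wph": 60},             # weak, slow tool
}
LOTS = ["A", "A", "B", "C"]  # recipe required by each waiting lot

def makespan(assignment):
    """Hours until all lots finish, given one tool per lot (25-wafer lots)."""
    load = {t: 0.0 for t in TOOLS}
    for recipe, tool in zip(LOTS, assignment):
        if recipe not in TOOLS[tool]["recipes"]:
            return float("inf")  # tool not qualified – infeasible
        load[tool] += 25 / TOOLS[tool]["wph"]
    return max(load.values())

greedy = ("FAST_01",) * len(LOTS)  # push everything through the fast tool
best = min(product(TOOLS, repeat=len(LOTS)), key=makespan)
print(makespan(greedy), makespan(best))  # 1.0 0.75
```

Routing one of the “A” lots to the slower tool finishes everything 25% sooner than the greedy choice – and the gap widens as the matrix grows.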
Process steps with timelinks are common in wafer manufacturing to control the maximum amount of time a wafer spends between two or more process steps. If a timelink is violated, the wafer requires rework – or worse still, scrappage. A system that avoids timelink violations requires the ability to intelligently plan into the future. And that’s exactly what an advanced scheduler does. It has been proven to drastically reduce timelink violations, even in the most complex of scenarios.
Batching is a complex decision-making process, since it involves an estimate of lot arrivals and how waiting longer trades off against running smaller batches. Predicting lot arrivals is difficult in such a complex environment, and trading off wait time against batch efficiency is even harder because the costs and gains are not always clear. Determining and automating this process is well within an advanced scheduler’s remit. Once the algorithm is tuned, it makes the most efficient decision and, perhaps even more importantly, generates consistent output.
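A minimal cost model shows the shape of the trade-off. Everything below – the fixed cost per furnace run, the per-lot-minute queueing cost and the predicted arrival times – is invented for illustration:

```python
def batch_cost(wait_minutes, arrivals, run_cost=10.0, queue_cost=0.2):
    """Cost of waiting `wait_minutes` before starting the batch:
    fixed run cost shared across the batch, plus lot-minutes of queueing."""
    lots_in_batch = sum(1 for t in arrivals if t <= wait_minutes)
    if lots_in_batch == 0:
        return float("inf")  # can't start an empty batch
    queued = sum(wait_minutes - t for t in arrivals if t <= wait_minutes)
    return run_cost / lots_in_batch + queue_cost * queued

arrivals = [0, 5, 8, 40]  # predicted lot arrival times in minutes
best_wait = min(range(61), key=lambda w: batch_cost(w, arrivals))
print(best_wait)  # 8 – wait for the third lot, but not for the fourth
```

The sweet spot is waiting for the third lot: the fourth arrives so much later that holding the batch for it would cost more in queue time than it saves in batch efficiency. This is exactly the calculation that is hard to do consistently by hand.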
Another use case related to the problem of lot arrivals is the problem of changeover decisions. One toolset with different machine setups can serve multiple different toolsets down the line. A bit like a waiter in a restaurant serving multiple tables. Waiters have to make sure no table is without food or drink, and to do that, they visit the tables regularly to ask for any orders. But for machines, you can’t switch the setup too often because it only increases non-productive time. Preferably, you also plan setup changeovers at a time when planned or predicted downtime for the machine occurs, to reduce downtime variability. To put it simply, it’s a decision on when to switch over from the type A process to the type B process on a tool. An advanced scheduler can solve that equation, finding the optimal point in time. Schedulers are better at this than human reasoning or rule-based logic, as solving over a time dimension is what they are designed for.
Line balancing is – even for experienced manufacturing engineers – difficult to grasp. One can intuitively understand what it means, but how do you define “balanced” in the first place? Even if you can, it is absolutely beyond the capabilities of a human brain to manually and continuously make the decisions that control it – and, once it’s out of balance, to recover it. Again, considering the time dimension is a crucial aspect of what advanced schedulers offer, which enables them to recover faster from unforeseen circumstances and maintain better risk-control for generating continuous output.
As opposed to dispatch lists that only tell the order in which to process lots, advanced schedulers can also tell when a lot is supposed to start and finish processing on a tool. Combine that information with which operators are serving which tools, and you can move away from tool-centric dispatch lists towards operator-centric task lists. With a handheld device, that could even allow you to send push notifications when urgent intervention is needed. It can reduce idle time on tools that have no available operator. Even more so, it can allow for an entire rethink of the workflows operators are used to.
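As a small sketch of that idea, the snippet below pivots a tool-centric schedule into per-operator task lists. The tool names, operator assignments and times are invented for illustration:

```python
# Scheduler output: (tool, lot, start_min, end_min) per processing step.
schedule = [
    ("FURNACE_01", "LOT_A", 0, 90),
    ("SCANNER_02", "LOT_B", 10, 45),
    ("FURNACE_01", "LOT_C", 95, 180),
]
operator_of = {"FURNACE_01": "op_amy", "SCANNER_02": "op_raj"}  # shift plan

tasks = {}  # operator -> time-ordered list of interventions
for tool, lot, start, end in schedule:
    op = operator_of[tool]
    tasks.setdefault(op, []).append((start, f"load {lot} on {tool}"))
    tasks.setdefault(op, []).append((end, f"unload {lot} from {tool}"))
for todo in tasks.values():
    todo.sort()  # each operator sees their tasks in time order

print(tasks["op_amy"][0])  # (0, 'load LOT_A on FURNACE_01')
```

Because the schedule carries start and finish times – not just an order – the task lists can be pushed to handheld devices ahead of time rather than read off a dispatch screen at the tool.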
So far in this blog, we’ve focused on scheduling use cases where lots are scheduled on tools, leading to higher throughput on tools, toolsets or the entire factory. All these use cases can also be addressed by improving some rule-based dispatching strategies, but what advanced scheduling offers is the ability to optimise for future decisions rather than just reacting in real time. With that comes better visibility on what will happen in the factory, and it also leaves opportunities for re-organising workflow and freeing up resources. In part 2 of this blog, we’ll begin to look at the future and what could happen when we integrate even further. Enter, Industry 5.0.
Author: Ben Van Damme, Industrial Engineer and Business Consultant
The big theme at this year’s SEMI Industry Strategy Symposium (ISS) conference was ‘How does Europe fulfil its ambition by 2030?’. Jamie Potter shares his thoughts on the steps being taken to achieve this ambitious goal.
The big theme at this year’s SEMI Industry Strategy Symposium (ISS) conference was ‘How does Europe fulfil its ambition by 2030?’. It involves an ambitious target of reaching a 20% share of the global semiconductor market by 2030 whilst having a more resilient industry ecosystem. This is a huge challenge, especially when one considers that the global semiconductor market is forecast to reach $1tn by 2030. A 20% share of this would mean $200bn in just seven years. For perspective, the global market figure currently sits at $600bn, which means Europe’s present-day 8% share is worth around $48bn. Breaking it down like this reveals the magnitude of the challenge: Europe must more than quadruple its semiconductor revenue even as the size of the pie increases along the way. Looking solely at Europe’s rate of growth in the market over recent years compared with the rest of the world, I can tell you that the target looks infeasible. But before we draw conclusions, there are several aspects to consider.
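The arithmetic behind that claim, using only the figures quoted above:

```python
# Figures quoted in the text: the market today vs the 2030 forecast.
market_now = 600e9        # current global market: $600bn
market_2030 = 1_000e9     # forecast global market: $1tn
eu_share_now, eu_share_target = 0.08, 0.20

eu_now = eu_share_now * market_now          # Europe today: ~$48bn
eu_target = eu_share_target * market_2030   # Europe's 2030 target: $200bn
growth = eu_target / eu_now                 # required revenue growth: ~4.2x
```

So although the market share only needs to grow 2.5x (from 8% to 20%), Europe’s actual semiconductor revenue must grow roughly 4.2x in seven years, because the whole market is expanding underneath it.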
First, what is going to drive this extraordinary growth? Second, why has the EU – and indeed the US which currently claims 10% of global semiconductor manufacture – set these targets? And finally, what is being planned to achieve them?
Expertise can be fast-tracked by partnering with existing fab companies, such as TSMC, which is discussing building new fabs in the US and Germany. But naturally, they require government grants from the funds being created to boost the semiconductor manufacturing industries. It’s worth comparing how much each region is allocating for this. South Korea’s figure is $450bn, the US is $233bn, and China is investing $200bn. With these sizable sums already formally approved by the relevant authorities, fab construction in these nations is already starting.
The EU, on the other hand, is only planning to invest a comparatively tiny $43bn.
This figure is nowhere near enough to quadruple its current semiconductor manufacturing capabilities. In fact, Kurt Sievers, CEO of NXP, estimated that a more realistic figure to achieve a 20% market share would likely be over $500bn. Moreover, the EU’s funding has not yet been passed in parliament, so the EU is already behind on the timeline to achieve its target compared with the other market players. As for the UK, the figure has not been announced but is rumoured to be around $1bn – which is not enough to fund even one new fab at an advanced node.
It’s important that SEMI is driving this discussion around the EU Chips Act, as government funding is a critical driver for the region's growth within the global semiconductor market. But it’s not enough. As an industry, we need to take stronger action and challenge the decisions being made by the EU and the UK. They require the expertise of industry leaders to understand the full importance of microelectronics for the economy; without it, I believe the money they invest will be fruitless.
As regular readers know, our software can make existing and new build fabs smarter and substantially more productive, but in order to hit the EU’s extraordinarily ambitious targets, more funding and strategic partnerships must be considered. I suspect that one solution will entail a close relationship between the EU and the US to create a US/EU-based supply chain model, with both regions working together to share their centres of excellence to create a complete, self-contained system. Even if the ambitious targets are not met, working on de-integrating the supply chain through onshoring will provide security for the electronics that underpin today’s successful economies.
Author: Jamie Potter, CEO and Co-founder of Flexciton
Photo Credit: SEMI