Blog & News

We publish new posts on our blog every couple of weeks, sharing insightful articles from our engineers as well as company news and our opinions on recent industry topics. Subscribe to our mailing list to get great content delivered straight to your inbox.

Technical
Goodhart’s Law and the Pitfalls of Targeting Load Port Utilisation on Photo Tools

In this blog, Dominic Bealby-Wright, one of our optimization engineers, takes a look at Goodhart's Law and its relation to load port utilisation on tools in the photolithography area.

It has been described as the law that rules the modern world, and its effects can be observed in every organisation. I’m referring to Goodhart’s law, named after British economist Charles Goodhart, who wrote the maxim: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

A common flavour of this effect is described in the following cartoon, based on a possibly apocryphal story of how central planning failed in a nail factory in the Soviet Union.


We have seen (less dramatic) examples of this effect at work in semiconductor wafer fabs. For instance, teams of operators may be measured on the number of lot moves that occur during their shift. In general, more moves per shift correlates with more wafers delivered on time to customers. However, this relationship breaks down if operators ‘game the system’ by loading batch tools with small batches at the end of the shift, thus wringing a few extra moves out of their own shift but hobbling the next one.

Memorable though such examples are, they give the impression that Goodhart’s Law relies on people being uninterested in the ultimate goal that their organisation is pursuing. However, apathy is not usually the driving factor in Goodhart’s law; whenever lack of information, limited computational power or even an inability to concisely express our true preferences leads us to substitute a proxy metric for our true goal, the law is bound to rear its head. Former Intel CEO Andy Grove described the effect of such surrogate indicators as like “riding a bicycle: you will probably steer where you are looking”; and if where you’re looking isn’t perfectly correlated with the road ahead, you can expect a wobbly ride!

The intricacies of tools with multiple load ports

For a more subtle example of where using an imperfect measure as a target can lead to suboptimalities when scheduling a wafer fab, we were inspired by a post on the excellent Factory Physics and Automation blog looking at the relationship between load port utilisation and cycle time. In our experience, we have seen load port utilisation of a tool used as a target when designing both operator workflows and dispatching rules.

First, some quick definitions. Many tools in a fab have multiple ‘load ports’ where lots can be inserted into the tool, but limited chamber capacity, so that, for instance, only one wafer can be processed in a chamber at a time.

Figure 1: Example of a tool with three chambers and two load ports.


Consider the machine in Fig. 1 with three chambers and two load ports. Lots can be loaded into either load port, but then each wafer in the lot has to move through Chambers A, B and C one at a time. This means wafers may have to queue inside the tool if the next chamber they need is still processing. Lots must be unloaded at the same load port at which they were inserted. Suppose it takes each chamber 10 minutes to process a wafer, and we want to process two lots each consisting of three wafers. If we were only allowed to use a single load port, we would have to wait for the first lot to move through all three chambers and exit at the same load port before we could start processing the second lot. Fig. 2 shows that for a simple model (one that ignores transfer time between chambers), the second lot will have to wait 50 minutes before it can start processing.

Figure 2: Example of how the tool from Figure 1 would process two 3-wafer lots if only load port 1 were being utilised.


If, however, an operator loads both lots into the two load ports at the same time (Fig. 3), the machine will pick up the first wafer of the second lot as soon as the first lot has finished processing in Chamber A. Thus the second lot will only need to wait 30 minutes.

Figure 3: Same situation as Fig. 2, except that in this case both load ports are available for use. Therefore, once all three wafers of Lot 1 have finished processing in Chamber A, Lot 2 can begin processing.


Therefore, for a given level of WIP at a tool, we can expect higher load port utilisation to be correlated with reduced waiting and therefore improved cycle time.

Indeed, in cases where a wafer cannot be unloaded from a tool until all the wafers in the same lot are also ready to be unloaded (a common workflow), it can actually make sense to split lots before a chamber tool. For instance, if we have a lot of six wafers waiting at the tool from Fig. 1 and load all the wafers as a single lot into one load port, it will take 80 minutes for all six wafers to move through the three chambers before we can unload the lot. If, however, we split the original lot into two lots of three and load them into both load ports (as in Fig. 3), then the first lot can be unloaded after just 50 minutes, and potentially continue to its next step earlier.
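The waiting times above can be checked with a toy simulation of this simple model. This is a sketch under the same assumptions as the text (three chambers, 10 minutes each, transfer times ignored); the function and variable names are our own:

```python
def run_lots(lots, chambers_free, proc=10):
    """Pipeline each lot's wafers through sequential chambers.

    lots: list of (ready_time, n_wafers) pairs, processed in order.
    chambers_free: next-free time of each chamber (mutated in place).
    Returns a (start, finish) pair per lot, where start is when the
    lot's first wafer enters Chamber A.
    """
    results = []
    for ready, n_wafers in lots:
        start = finish = None
        for _ in range(n_wafers):
            t = ready
            for c in range(len(chambers_free)):
                t = max(t, chambers_free[c])   # wait for the chamber to free up
                if start is None:
                    start = t                  # first wafer enters Chamber A
                chambers_free[c] = t + proc
                t += proc
            finish = t
        results.append((start, finish))
    return results

# One load port: lot 2 can only be loaded once lot 1 has fully exited.
chambers = [0, 0, 0]
[(_, lot1_done)] = run_lots([(0, 3)], chambers)
[(lot2_start, _)] = run_lots([(lot1_done, 3)], chambers)
print(lot1_done, lot2_start)        # 50 50 -> lot 2 waits 50 minutes

# Two load ports: lot 2 starts as soon as lot 1 clears Chamber A.
chambers = [0, 0, 0]
[_, (lot2_start, _)] = run_lots([(0, 3), (0, 3)], chambers)
print(lot2_start)                   # 30 -> lot 2 waits only 30 minutes

# Lot splitting: six wafers as one lot vs. two lots of three.
[(_, single_unload)] = run_lots([(0, 6)], [0, 0, 0])
[(_, first_unload), _] = run_lots([(0, 3), (0, 3)], [0, 0, 0])
print(single_unload, first_unload)  # 80 50 -> first unload 30 minutes earlier
```

Because all chambers take equal time here, a wafer never blocks the chamber behind it, so this simple "wait for the next chamber" model reproduces the figures exactly.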

How directly targeting load port utilisation can harm cycle time

As predicted by Goodhart’s Law, the correlation between load port utilisation and fab cycle time breaks down once we try to optimize directly for load port utilisation. This breakdown is particularly stark on photolithography tools, where process steps rely on a critical secondary resource: reticles. Reticles (also called photomasks) act like stencils in the expose step of a photolithography process, patterning the wafer with the desired features. In most photo tools, reticles must be loaded onto the tool in containers called pods before the lots that require them can be loaded onto the machine. Therefore, if a lot is inserted into a load port early, its wafers may simply sit waiting inside the machine. Moreover, this also ties up a reticle that could have a more productive use elsewhere.

For a simple example, consider a toolset consisting of two of the tools from Fig. 1 (we can imagine chambers A, B and C are performing coat, expose and develop operations respectively).

Suppose we have just loaded a 3 wafer lot onto tool 1. The other load port of tool 1 remains free. Meanwhile on tool 2, both load ports are utilised, but there are only two wafers yet to be processed in Chamber A.

A lot (lot X) that requires a special reticle (of which only one exists) arrives. Due to a lot-level restriction, lot X can only run on tool 1. This sort of restriction is particularly common in photolithography, where running consecutive photo layers through the same tool (even if there are multiple tools qualified for the operation) can reduce product variability caused by idiosyncratic aspects of a particular tool’s lens (this is sometimes known as a ‘lot-to-lens’ dedication).

The operators on this toolset abide by the following rule for dispatching lots:

Rule 1: If a load port and the required reticle are available, load the reticle and the lot onto the tool.

Since tool 1 has a load port available, the operator immediately loads the reticle onto the machine, and puts lot X into the load port.

Ten minutes later, lot Y arrives at the toolset, also requiring the same reticle, and with a lot-level restriction forcing it to run on tool 2. Since the reticle is already loaded on tool 1, lot Y cannot be dispatched until lot X has finished processing and the reticle has been moved from tool 1 to tool 2. Assuming, for simplicity, that the reticle moves instantaneously, both lots will have finished processing in 130 minutes’ time (see Fig. 4).

Figure 4: Example of processing on the two machine toolset when operators follow Rule 1 for dispatching lots


Imagine, however, the operators adopted the following workflow:

Rule 2: If a load port and the required reticle are available and the tool can begin processing immediately (i.e. Chamber A is free), load the reticle and the lot onto the tool.

In this case, lot X will not be immediately loaded onto tool 1, since Chamber A is initially occupied. After only 20 minutes, though, lot Y can be loaded onto tool 2, to finish processing 50 minutes later, at which point the reticle can be moved and lot X can start on tool 1. Thus, after just 120 minutes (as opposed to 130 minutes under Rule 1), both lot X and lot Y will have finished processing. We can see that by adopting Rule 2, the cycle time, and hence the throughput, of the toolset can be improved.

Figure 5: Example of processing on the two machine toolset when operators follow Rule 2 for dispatching lots.
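The 130- vs 120-minute outcome can be reproduced with a toy pipeline model of the two tools. The assumptions match the text (10-minute chambers, transfer times ignored, instantaneous reticle moves); the code and names are an illustrative sketch, not Flexciton's scheduler:

```python
def run_lots(lots, chambers_free, proc=10):
    """Pipeline each lot's wafers through sequential chambers.

    lots: list of (ready_time, n_wafers); chambers_free: next-free
    time of each chamber (mutated). Returns (start, finish) per lot.
    """
    results = []
    for ready, n_wafers in lots:
        start = finish = None
        for _ in range(n_wafers):
            t = ready
            for c in range(len(chambers_free)):
                t = max(t, chambers_free[c])
                if start is None:
                    start = t
                chambers_free[c] = t + proc
                t += proc
            finish = t
        results.append((start, finish))
    return results

def setup():
    # Tool 1 has just been loaded with a 3-wafer lot; tool 2 still has
    # two wafers to push through Chamber A. Returns each tool's
    # chamber-free times after that existing WIP is accounted for.
    tool1, tool2 = [0, 0, 0], [0, 0, 0]
    run_lots([(0, 3)], tool1)
    run_lots([(0, 2)], tool2)
    return tool1, tool2

# Rule 1: lot X (arrives t=0, tool 1 only) takes the reticle at once,
# so lot Y (arrives t=10, tool 2 only) must wait until X finishes.
tool1, tool2 = setup()
[(_, x_done)] = run_lots([(0, 3)], tool1)
[(_, y_done)] = run_lots([(max(10, x_done), 3)], tool2)
print(x_done, y_done)    # 80 130 -> everything done after 130 minutes

# Rule 2: load only when Chamber A is free. Tool 2's Chamber A frees
# at t=20, so Y takes the reticle first; X starts once Y releases it.
tool1, tool2 = setup()
[(_, y_done)] = run_lots([(max(10, tool2[0]), 3)], tool2)
[(_, x_done)] = run_lots([(y_done, 3)], tool1)
print(y_done, x_done)    # 70 120 -> everything done after 120 minutes
```

Holding lot X back costs tool 1 nothing (Chamber A was busy anyway), but frees the reticle for lot Y, which is exactly the trade-off Rule 1 misses.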


In our experience of wafer fabs, we often see workflows akin to Rule 1, wherein operators fill the load ports of photo tools as soon as they are free, thus forfeiting the opportunity to use reticles earlier on different tools. Adopting a workflow like Rule 2, however, is more difficult, since it requires operators to know in advance when the tool will be ready to process a new lot and to react promptly to load the tool at precisely that time. In practice, particularly when operator availability is limited, failing to load a lot as soon as a machine becomes available leaves the tool underutilised and risks increasing wait time.

Using advanced optimization to handle Goodhart's Law

Flexciton’s scheduler can help to alleviate this problem by employing advanced optimization technology. It can predict when lots will arrive at the photo toolset and which reticles they will require, and then jointly schedule the reticles and lots on the toolset to obtain an optimized schedule. The knowledge of future arrivals crucially allows us to identify cases where loading a reticle onto a machine now is suboptimal, since a lot will soon arrive at another tool that can make use of the reticle sooner or that simply has a higher priority. Thus, following a Flexciton schedule, operators can dispatch to load ports when they become available, with minimal risk of harming cycle time due to locking in reticles prematurely.

However, we are still not immune to the curse of Goodhart’s Law. The cycle time of an optimized schedule is itself only a proxy for what we actually care about: producing more high-quality wafers at a low cost per wafer. Over-optimizing for cycle time may lead to a solution with so many loads and unloads that the labour cost of running the fab becomes prohibitive. Or, as described in one of our previous blog posts, the solution may require moving reticles so frequently between tools that we increase the chance of a costly breakage.

To solve this, we apply a technique suggested by Andy Grove himself: we use paired indicators. Combining indicators where one has an effect counter to the other avoids the trap of optimizing one at the expense of the other. This is why we typically pair cycle time with the number of batches (to account for limited operator availability) or the number of reticle moves (to keep the risk of reticle damage low), thus mitigating the perils of Goodhart’s Law.
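As a toy illustration of such paired indicators (the candidate schedules, numbers and penalty weight here are invented for the example): scoring on cycle time alone picks the schedule with the most reticle moves, while a paired objective trades the two off:

```python
# Hypothetical schedule summaries: a small cycle-time gain bought with
# many extra reticle moves, versus a steadier alternative.
candidates = {
    "aggressive": {"cycle_time": 96, "reticle_moves": 9},
    "steady": {"cycle_time": 100, "reticle_moves": 2},
}

def paired_score(kpis, move_penalty=1.0):
    # Cycle time plus a penalty per reticle move: the two indicators
    # pull in opposite directions, so neither can be gamed alone.
    return kpis["cycle_time"] + move_penalty * kpis["reticle_moves"]

by_cycle_time = min(candidates, key=lambda n: candidates[n]["cycle_time"])
by_paired = min(candidates, key=lambda n: paired_score(candidates[n]))
print(by_cycle_time, by_paired)   # aggressive steady
```

The penalty weight encodes how much one reticle move is "worth" in minutes of cycle time; choosing it is itself a judgement call about the true goal.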

Industry
Is Fear Holding Back The Chip Industry’s Future In The Cloud?

The semiconductor industry is at the cutting edge of technology – so why is it still so nervous about the cloud? Persisting with an outmoded security model means missing out on significant gains in manufacturing.


Only the paranoid survive?

Perhaps more than any other sector in the world, the semiconductor industry is incredibly protective of its intellectual property (IP). Given the centrality of the silicon chip to modern life, that’s not surprising – companies are in a constant arms race to design and develop ever more sophisticated chips to meet the never-ending demand for innovation from their customers. A design breakthrough could be worth billions of dollars, and so the security of the relevant data is paramount.

And that’s not the only threat that keeps semi co security teams awake at night – there’s the security of the actual chips themselves to consider. An ongoing fear within both the industry and among government security agencies is that rogue code may be inserted into a chip either during development or the manufacturing process, making any system it becomes part of vulnerable to attack.

In fact, security of manufacturing – with many companies now sub-contracting to facilities in Asia – has been explicitly cited as a key reason for building more fabs in the US. In March 2022, President Joe Biden said that semiconductors are “so critical to our national security… that we’re going to create rules to allow us to pay a little more for them if they’re made in America.” In other words, security fears are so intense that the industry is willing to put prices up just for the supposed reassurance of having chips that aren’t produced overseas.

Although Biden’s worries over the threats to national security are not cloud related, they feed into a culture of fear that has become embedded into the semiconductor industry, hindering its advancement towards next-gen technologies.  

The cloud revolution

The cloud has revolutionised the way that business works in the 21st century in a number of ways. For a start, it’s decentralised the IT function – applications that would previously have resided in on-premise server rooms are now accessed as a service via the cloud. This has significantly simplified the set-up and running of satellite offices and local branches because there’s no need to house and manage IT hardware at every location – all that’s needed is a connection to the internet.

But for hi-tech companies, the real advantage of the cloud is the ability to access vast amounts of computing power on demand. Whether it’s for data crunching a massive set of figures, running an AI model through its paces, or simply trying to crack a really complex problem, the muscle provided by cloud computing can dramatically speed the process up.

On the face of it, this would make the semiconductor industry an obvious candidate for the widespread adoption of cloud technology. But that hasn’t been the case. Limited adoption has taken place – though usually relating to ‘non-critical’ business functions – but compared to the companies they serve, semi cos have been conspicuously slow to embrace the potential of the cloud.

Outmoded assumptions and intransigence

For an industry on the cutting edge of technological innovation, the reasoning behind this state of affairs seems to be based on outdated assumptions, an indication perhaps of just how embedded the fear culture is. The security philosophy at many chip makers is still predicated on each separate facility being a castle under siege that needs to be protected from external attack. The idea of willingly opening up these defences to the cloud is anathema.

Another factor holding back the full embrace of the cloud at chip companies and fabs is the fear of change. Many IT and security managers simply don’t recognise the new world of serverless functionality that the cloud can bring, and are quite happy to stick with the existing model. And there are IT teams that do understand the possibilities of cloud, but are frightened by what they imagine will be a massive upheaval of their working lives and environment, from having to create new security policies to potentially making themselves redundant. Without the pressure to change that has come from the top in other industries, IT itself is blocking cloud adoption.

Yet as both design and manufacturing processes become more complex, this reluctance to change isn’t tenable in the long-term. As chips become more and more sophisticated, the need to access computing power at scale will increase – and that means companies either building bigger server farms and private data centres, or properly embracing the cloud paradigm.

The fact is that cloud security has improved immeasurably over the past decade. According to a recent report from Accenture, “Today’s cloud solutions offer enhanced security and automation technologies that aren’t available for on-premise systems, making cloud a better option for preventing IP theft.” And refusing to move with the times because it threatens to disrupt the status quo is an increasingly questionable excuse from an industry built on pushing the technological envelope.

Ultimately, semiconductor companies have only fear and intransigence holding them back from total cloud adoption.

The end of on-premise production scheduling?

If the industry is to continue to innovate and keep up with the demands of its customers, it needs to produce highly sophisticated, next generation chips at scale. The only way to do that is by adopting smart manufacturing practices and technologies - and that means fully embracing the cloud. Why? Because current on-premises scheduling systems are no longer fit for purpose to handle the new levels of manufacturing complexity that next gen chips demand.

In an enclosed, siloed environment, such as exists in most current fabs, a typical on-premise scheduling system only has access to limited computing power. Traditionally, these constraints have resulted in a reliance on heuristics to predict and control production workflow, as this is the best that can be achieved with the resources available. However, although these systems often use real-time data, the decisions they make are still based on rules created from past human experience. The dynamic nature of a fab means that these rules can never stay pertinent, resulting in suboptimal production decisions.

By connecting the fab to the cloud, these power constraints disappear – and with them the restrictions that previously forced fabs to use heuristics-based scheduling. With access to a new magnitude of compute, companies can deploy more sophisticated systems able to schedule production based on real-time information, and thus optimize the manufacturing process.

Thanks to the power of the cloud, this next generation of scheduling systems is able to use complex mathematical algorithms to search through the billions of possible WIP permutations and make the best scheduling decision with present-time accuracy. This AI-based approach to scheduling requires a huge amount of computing power to rapidly work out the fab’s optimal position, but the cloud makes it possible to perform these calculations at unparalleled speed.

In theory, it is possible to get good computational power on-premise. The system would most likely be chosen based on what is cost-effective at the time and the power needed to solve the problem a fab has on that day. However, new computational power becomes more available and cost-effective all the time. Moreover, fab complexity can easily change: for example, introducing a larger product mix into the fab could exponentially increase the complexity of the scheduling problem. With cloud, you can improve your hardware – and hence your KPIs – almost immediately, something that is extremely unlikely on-premise due to the practical implications for the IT department.

And what could be a greater incentive to become cloud-friendly than fab capacity increases of up to 10%, which is what we’ve seen using these next gen systems? That’s the type of figure which should help even the most security-conscious chip company to change their mind about cloud technology.

Culture
The Flex Factor with... Seb

Introducing Seb Steele; self-proclaimed 'colossal nerd', John Boyd super fan and all-round product person.


Tell us what you do at Flexciton?

Hi, I'm Seb and I work at Flexciton. In my mind my role is to "try to be helpful", but we honestly couldn't think of a job title for that so we stuck with Product Manager.

My main responsibility is to work out what constraints stand between our customers and whatever their desired future states are, and to help our engineering and R&D teams to find the right solutions to those constraints.

In the last year or so, I've been most heavily involved in developing our Fab-Wide Scheduler. Our team actually got laughed at when we explained how quickly we were planning to develop & roll it out across the entire fab - but we made it!

What does a typical day look like for you at Flexciton?

My day is typically very varied, which is ideal for me. It might be focussed on client requirements, in which case I could be meeting with them, or I could be testing a new feature on the fab floor, or maybe analysing some of their data to help with a design, or shaping some new tickets.

Sometimes I'll be getting in-deep on the logic for a feature, in which case I could be pair programming with an engineer if the code is tightly coupled to a customer's business logic, or I could be trying to keep up with people much smarter than me discussing the various consequences of different objective value formulations.

Then there's all the more internally-focussed things: providing context or knowledge sharing with teams, or doing onboarding, grabbing coffee with colleagues, or chatting about brain-computer interfaces in book club!

In short, I don't really have a typical day - and that's the joy of it.

What do you enjoy most about your role?

The variety and the fact that I get to work with such incredibly smart and talented people every day. There's a tonne of goodwill and a real culture of continuous improvement; it's down to earth and pretty flat, so there's no nonsense to deal with. I'm incredibly grateful for all of those things!

Also, as a product person, I'm grateful to be able to help solve challenges that actually make a positive impact in the world (did someone say "chip shortage"?), rather than optimising some clickthrough rates for a social media platform, or something.

If you could give one piece of advice to someone, what would it be?

Always try to have skin in the game.

If you could summarise working at Flexciton in 3 words, what would they be?

Speed, agility, humility.

What’s one thing you’re learning now or learned recently?

I've started - really just started - dipping my toes into quantum computers and the progress on creating quantum algorithms for use in optimisation: from applications of Grover's algorithm to more recent heuristic approaches, e.g. a recent paper where the global optimum of a solution landscape is found in polynomial time, where a classical computer would take exponential time. It sounds like it's still not clear whether this could actually be applied to a real-world problem, and I'd like to dig into it more to understand whether this was calculated using purely theoretical, perfect qubits, or if it would still work with real-world noisy qubits - it sounded like the overhead of requiring error correction is pretty huge. In any case, it's obviously a mind-blowing field and one that I'm excited to learn a little more about.

Tell us about your best memory at Flexciton?

I've loved the time I've spent with various teammates when on client visits. We get to know each other better and have some great conversations. But colossal nerd as I am, I'm also going to mention how much I love the book club I'm part of. We talk about science and history and strategy and psychology and kung fu, and everything in between. I was always a big fan of when John Boyd said: "When there are no new ideas or I am unable to think, I'll be dead because that's my life's sustenance."

Interested in working at Flexciton? Head over to our careers page to check what vacancies we currently have available and learn a little more about us whilst you're there.

Industry
Machine Says No – Is There A Way Around The Legacy Equipment Shortage?

Manufacturing equipment makers are under pressure to meet new fabs’ demands, with a serious knock-on effect for legacy chip makers. But can they increase capacity without increasing their number of tools?


Machines are the new bottleneck

The story of the semiconductor industry right now is dominated by shortages. There’s the chip shortage itself, as global supply chains continue to struggle to meet demand post-COVID. There’s the labour and talent shortage that we looked at in a previous blog. And now hitting the headlines is a manufacturing equipment shortage, with a lead time of up to 18 months on new lithography machines and other chip making tools.

Speaking to Reuters in April, ASML CEO Peter Wennink noted that, not only are the company’s customers having to wait over a year for its products, but that utilisation rates of ASML's machines are also at an all-time high, as semi cos try to keep up with demand. This is borne out by another industry executive, quoted recently by Nikkei Asia, who said, “Chipmakers like TSMC and UMC have told their senior executives to jump on a plane and visit all their key equipment suppliers in the US, Europe and Japan to avoid any of their rivals getting the machines ahead of them, and to personally make sure their equipment vendors are not lying to them about the lead times.”

Yet at least companies like TSMC know that manufacturers are working round the clock to fulfil their orders and provide the machines they need for the new, leading edge fabs they’re building. For many legacy chip makers – whose output is still vital to numerous industries, but is regarded as being on the ‘trailing edge’ of innovation – the problem is more acute, because most equipment manufacturers have actually stopped making the machines they need to increase capacity at their fabs.

The reason why is a simple case of economics. Chip companies around the world are making massive investments in new facilities focused on producing next generation semiconductors, which of course is why there is such a huge demand for machines to service this process. Not only has this created a new and thriving technology ecosystem, but manufacturers can also charge a premium for these machines. In contrast, supplying equipment to legacy chip makers is a lot less lucrative, and in order to meet demand from the new fabs, many manufacturers have simply stopped making the old machines.

The double bind of sourcing equipment

This presents legacy fabs with a major problem. While the focus in the industry is on increasing capacity to meet demand for next-gen chips, semiconductor shortages are occurring across all sectors, with markets such as traditional, non-electric automotive still reliant on older, legacy chipsets. As such, there is increased demand at legacy fabs as well, with companies dependent on machines that should have already been retired because there aren’t new replacements for them anymore. This also means that if a legacy fab wants to ramp up production, it has to source and recondition second-hand equipment, which is both increasingly difficult and far from ideal.

However, there’s another issue that affects companies with legacy fabs that want to boost capacity, and that’s one of increased costs. Because these facilities have been operating for a comparatively long period of time, they will almost certainly be fully depreciated, which is reflected in the price of the chips they manufacture - in other words, because their capex costs are now low, chips can be sold more cheaply than when the facility was still being paid for and capex had to be factored into the price. But this means that, even if a company manages to source additional machines, its capex will go up again, which will potentially make its chips more expensive and less competitive.

Legacy chip companies are thus caught between a rock and a hard place. On the one hand, they’re finding it increasingly difficult to source new tools to produce more chips. On the other hand, they risk becoming uncompetitive if, by increasing capacity with new machines, they are forced to increase their prices. And while using depreciated second-hand equipment is an option to get around this, finding it is another matter.

This is a huge headache for legacy fabs, but the impact in the wider world is even worse, with many industries continuing to suffer from chip shortage issues because their suppliers are unable to ramp up production.

Increase capacity, not machine count

There is another solution which bypasses the vicious circle described above, and that’s for companies to embrace smart manufacturing practices. Historically, the favoured way to significantly expand capacity was to increase the number of machines in the fab, and many companies are still wedded to this way of thinking. But advances in production scheduling software, in particular, are enabling forward-thinking companies to unlock capacity they didn’t know they had by optimising their WIP and the way their machines are used.

Most legacy fabs still run their WIP using heuristics-based scheduling software derived from SLIM methodology. However, this methodology is now over 20 years old, and was developed to work within constraints that no longer exist. Access to computational power – driven by cloud computing – has increased enormously, which means that much more sophisticated scheduling systems can now be used to make decisions about the WIP.

By using complex mathematical algorithms, this new generation of scheduling systems can make production decisions that are optimal for that exact point in time, enabling fabs to work at genuine capacity rather than the ‘false capacity’ that an over-reliance on rules-based software has created. The results that we have seen are truly game-changing, with capacity increases of up to 10% using the same number of machines and tools.

Rather than getting caught up in the equipment bottleneck, another solution exists today that can be quickly implemented with minimal upfront costs. By optimising production scheduling with AI-based precision, makers of legacy chips can increase capacity and meet new orders without having to expand their physical facilities. In a world of ongoing shortages, that’s a significant advantage.

News
Flexciton and Seagate Technology to Present at SEMI's Upcoming FutureFab Solutions Webinar

What will the future of wafer fabrication look like? With innovative AI-driven technologies paving the way for significant improvements in efficiency, quality and on-time delivery whilst also driving down costs – chip manufacturers need to be paying close attention.

In SEMI's upcoming FutureFab Solutions webinar, we explain why disruptive technologies, such as the hybrid optimization-driven scheduling that Flexciton provides, are pivotal in making progress towards Industry 4.0. We will discuss the technology behind Flexciton’s solution as well as how it performed when it was deployed live into the photolithography area of a Seagate Technology wafer fab.

This webinar will be taking place on June 21 from 17:00–18:00 CEST (16:00–17:00 BST). Can't make it? The full session will also be available on-demand for those who register.

Here’s an overview of what we’ll be covering:

  • Flexciton’s optimization-driven solution and the scheduling strategy used when challenged with increasing capacity at Seagate’s Springtown facility.
  • Real case studies presented by both Seagate and Flexciton on the results gathered from deployment into the photolithography area.
  • How Flexciton’s advanced scheduling enabled an increase in throughput and a reduction in the number of reticle moves.

This webinar is hosted by SEMI and will feature presentations from Flexciton, Seagate Technology and Lynceus AI, another disruptive AI-driven solution provider, who will present their case study from a deployment in a wafer fab supplying a Tier 1 automotive customer.

You can register your space today by following this link: https://www.semi.org/eu/events/Future-Fab-Webinar

Industry
Position Vacant: Are Chip Companies Really Running Out Of People?

The semiconductor industry worries that it won’t have enough workers to run its new fabs. But there’s a labour problem right now at legacy facilities. Could disruptive technologies help to solve this problem?

A worldwide labour crisis

It’s not just supply chain issues that are afflicting the semiconductor industry. Another major problem is a shortage of labour. There’s a significant fab-building programme underway, but companies already fear there won’t be enough manpower to run the new facilities properly.

This is a worldwide issue. A recent white paper by talent management company Eightfold shows that, to meet the capacity demands from new fabs, the US chip industry needs to increase its workforce by at least 50%. And according to Deloitte, China is also facing a labour crisis, with 400,000 more semiconductor employees required to meet its stated targets. Even Taiwan is feeling the pinch, with a huge gap opening up between rocketing demand and the ability to meet it due to a lack of skilled engineers.

Unsurprisingly, these countries are doing everything they can to boost the number of STEM graduates, with billions of dollars going into universities to support this goal. In addition, the SEMI Foundation – the non-profit arm of global industry association SEMI – has a number of programmes in place to develop a larger and more diverse workforce, with the ultimate aim to dramatically expand the pipeline of skilled workers ready to fill labour deficits.

However, while these initiatives are laudable and entirely necessary, they don’t address the labour issues that many legacy fabs are facing right now. These issues – such as experienced operators retiring and skilled engineers being poached by newer, bigger facilities – mean that it’s growing ever harder for legacy fabs to meet capacity pressures. With their workforces dwindling or under threat, what can be done to ensure that legacy fabs are still able to operate efficiently?

The problem of running fabs on gut feeling

One solution is to change the way in which fabs operate. Many legacy facilities still rely on workers on the floor to move WIP from one machine to the next, since upgrading legacy fabs to accommodate automated material handling systems (AMHS) is often too costly or too complicated. Instead, operators tend to take instructions from rules-based scheduling software. However, in some instances operators will make their own scheduling decisions based on ‘gut feeling’: if, for example, the system doesn’t take certain constraints into account and makes an implausible suggestion, or the operator simply believes they can make a better call themselves.

Because of the lack of intelligence in rules-based scheduling systems, many chip companies have to rely on experienced, highly-skilled operators to oversee the manufacturing process. Therefore, when companies look to expand facilities or replace employees, they understandably think they need to find operators experienced enough that they require minimal training to adapt to their fab – and worry that new candidates don’t exist in sufficient quantity.

However, if decision-making around scheduling in the fab can be improved, with less dependence on operators’ own judgements, then it’s possible not only to onboard new staff much quicker but also to optimize the total number of people needed to run the facility.

The simple rules-based scheduling software that many fabs rely on to guide operators’ decisions runs on predefined rules. These rules cover only a limited number of possible cases, so the software makes suboptimal decisions when faced with scenarios it was never designed for, which contributes to inconsistent results. Little wonder, then, that experienced operators often believe their own scheduling decisions are just as good, if not better. But by applying smart manufacturing practices – as SEMI is encouraging chip companies to do – it’s possible to automate and optimize production scheduling and easily add constraints that allow for far better decision making.

An AI-driven production scheduling system can analyse the state of the fab at any given moment and make scheduling decisions that are optimal for that precise point in time. The ramifications of these systems are profound. If the WIP flow is truly optimized, operators no longer need to make their own dispatching decisions; they simply follow the instructions from the smart scheduler. This reduces the pressure on fabs to find highly skilled workers and allows a corresponding reduction in the manpower required on the fab floor. Optimized scheduling also allows for significant improvements in production KPIs such as throughput and cycle time, helping a fab achieve overall performance gains.
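The difference between a one-decision-at-a-time dispatch rule and a scheduler that evaluates whole schedules can be shown on a toy single-machine example. This is a hypothetical sketch using exhaustive search over sequences to minimise total tardiness; real fab-scale systems use mathematical optimization rather than brute force, and the lot data below is invented.

```python
from itertools import permutations

# Hypothetical lots: (name, processing_hours, due_hours)
lots = [("A", 8, 10), ("B", 4, 6), ("C", 5, 20)]

def total_tardiness(sequence) -> float:
    # Run the lots back-to-back on one machine and sum how late
    # each finishes relative to its due date (0 if on time).
    t, tardiness = 0.0, 0.0
    for _, proc, due in sequence:
        t += proc
        tardiness += max(0.0, t - due)
    return tardiness

# An optimization-based scheduler scores entire schedules rather
# than making one greedy dispatch decision at a time (toy
# exhaustive search; impractical beyond a handful of lots).
best = min(permutations(lots), key=total_tardiness)
print([name for name, _, _ in best], total_tardiness(best))
# ['B', 'A', 'C'] with 2.0 hours of total tardiness
```

In this example a greedy rule that starts with the most "urgent" lot A yields 6 hours of total tardiness, while the schedule-level search finds that running B first cuts it to 2 hours: looking at the whole schedule beats following a fixed rule.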

In addition, optimized scheduling removes the need for skilled engineers to spend time analysing production data in an effort to continually tweak the rules and maintain the scheduling system. Instead, their time can more usefully be spent on other tasks. Essentially, optimization enables more to be done with less.

Why competition is sometimes counter-productive

Another area in which advanced technologies can have a positive impact on the labour issue - not just in legacy facilities but across the industry - is in the optimization of manufacturing. At every semiconductor company, a significant proportion of their engineering talent is focused on developing ways to improve the chip production process, and thus gain an advantage over their rivals. While this type of competition undoubtedly drives progress within the industry, it can also be counter-productive, with teams at each company tied up trying to solve the same problem.

It's only natural that a cutting-edge industry structured around research and science should assume the best solutions to every problem can be developed internally. Yet this is not always the case, particularly in emerging fields such as advanced AI. Rather than tie up talent and resources trying to solve issues such as optimizing scheduling in fabs, companies should be prepared to investigate ‘disruptive’ technologies from beyond their own walls that may already have cracked this problem.

By adopting a ‘best-in-class technology’ approach to the manufacturing process – rather than perpetuating a culture of trying to develop proprietary solutions for everything – companies can refocus their engineering talent on core competencies. For instance, by embracing external innovation, chip companies can redeploy internal teams to overcome efficiency obstacles elsewhere in the fab that they previously did not have the capacity to work on.

It goes without saying that continuing to encourage STEM graduates into careers in microelectronics is vital if the semiconductor industry is to meet its ambitious targets over the next decade. But while disruptive technologies can’t make the current labour shortage problem go away on their own, they can make a serious contribution to lessening its impact and changing chip companies’ attitudes towards recruitment.
