Maximising Wafer Fab Performance: Harnessing the Cloud's Competitive Edge

To cloud, or not to cloud, that is the question.

Some might consider the opening line a tad flippant in borrowing Hamlet's famous soliloquy. Yet the internal struggle our hero feels agonising over life and death holds a certain likeness to the challenges faced by Fab Managers today. Businesses live and die by their decisions to either embrace or disregard new innovations to gain a competitive edge, and nowhere is this truer than in the rough-and-tumble world of semiconductor manufacturing; Fairchild, National Semiconductor and S3 are just a few of those who did not last. [1][2][3]

Semiconductor manufacturing has had a long history of innovating, tweaking, and tinkering,[4] so it’s somewhat surprising that the sentiment towards cloud uptake has been weaker in the semiconductor industry compared to the wider market[5]. This article aims to explore some of the potential benefits of cloud adoption to better equip Fab Managers with the motivation to take another look at the cloud question.

Recap: What are the different types of Cloud?

Cloud computing encompasses public, private, and hybrid models. The public cloud (think Azure, AWS, Google Cloud and so on) offers rental of computational services over the internet, while the private cloud replicates cloud functionality on-premises. However, private clouds require a significant upfront investment, ongoing maintenance costs, and a skilled in-house IT team to manage and maintain the infrastructure, making them a less appealing option for smaller firms. Hybrid cloud blends on-site and cloud resources for flexible workloads, segregating the most sensitive workloads to on-premises environments for the greatest control; however, control does not necessarily mean security, which will be discussed in a later article!

Understanding the benefits of cloud

1. The Latest Tech

Embracing the latest cloud technology offers wafer fab facilities, not just their parent organisations, a direct path to heightened capabilities in their manufacturing processes through the use of digital and smart manufacturing technologies. By harnessing advanced computational power for real-time analytics, optimization[6], and machine learning defect detection[7], fabs can maximise all their fundamental KPIs, ultimately leading to better business outcomes. McKinsey estimates that, compared to other semiconductor activities, manufacturing has the most to gain from the AI revolution (Fig. 1), and a key enabling technology will be the vast computational power of the cloud.[8]

Fig. 1: McKinsey estimates that the AI revolution could reduce semiconductor manufacturing costs by around $38bn.

Case Study: The Latest Tech Driving Improvements in Fab KPIs

Seagate achieved a 9% increase in moves by utilising Flexciton's cloud-native platform and cutting-edge autonomous scheduling.

2. Redundancy, Scaling, Recovery and Updates

It is true that some of these technologies can be provided on-premises; however, cloud computing generally reduces downtime through redundancy, automated scaling, and disaster recovery mechanisms, ensuring seamless operation even during hardware failures or unexpected traffic spikes. Some estimates suggest that downtime can cost firms an eye-watering $1 million to $5 million per hour, depending on their size and sector. [9] Studies of cloud-based disaster recovery have demonstrated cost savings of up to 85% for public cloud compared with private alternatives. [10] It is easy to speculate that for critical wafer fab infrastructure, the cost of downtime could be significantly higher.

Furthermore, the number of wafers processed within a fab can cause computational traffic spikes during busy periods for some applications. On-premises deployments would need to provision for this peak, even if the resource is not in use all the time, which adds inefficiency, while the public cloud can elastically scale down, meaning you only pay for what you use.
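The pay-for-what-you-use point can be sketched with some simple arithmetic. All figures below are hypothetical, chosen only to illustrate the shape of the trade-off between provisioning for the peak and paying per use:

```python
# All numbers are illustrative assumptions, not real prices or workloads.
hourly_load = [40, 35, 30, 90, 100, 95, 50, 30]   # compute units needed each hour

on_prem_unit_hour = 1.0    # assumed cost of owning one unit of capacity per hour
cloud_unit_hour = 1.5      # assumed (higher) rental price per unit-hour

# On-premises capacity must be provisioned for the peak, every hour.
on_prem_cost = max(hourly_load) * on_prem_unit_hour * len(hourly_load)

# Elastic cloud capacity scales down: pay only for what each hour uses.
cloud_cost = sum(hourly_load) * cloud_unit_hour

print(f"on-prem: {on_prem_cost:.0f}, cloud: {cloud_cost:.0f}")
```

Even at a 50% higher unit price, the spiky profile makes the elastic option cheaper here; a flat, fully utilised load would tip the comparison the other way.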

Lastly, on-premises systems without the ability to monitor and update remotely are often many versions behind, prioritising perceived stability. Yet research has shown that increasing the rate of software iteration improves stability and resilience rather than weakening it. [11] Without the convenience of remote updates, legacy systems can become entrenched, with employees on the shop floor hesitant to embrace change for fear of disrupting critical infrastructure and wary of the expense of upgrading IT infrastructure. This sets in motion a self-reinforcing cycle in which the costs and risks of transitioning grow over time, ultimately resulting in significant productivity losses as users continue to rely on outdated technology from decades past.

3. Specialisation and Comparative Advantage

Stepping back from the fab and taking a holistic view of the semiconductor manufacturing organisation reveals compelling economic arguments, both on macro and micro scales, for embracing cloud.

Allowing cloud providers to specialise in cloud computing while wafer fab manufacturers focus solely on wafer fabrication benefits the latter by freeing them from the complexities of managing IT infrastructure. [12] This collaboration allows wafer fab manufacturers to allocate their resources towards core competencies, leading to increased operational efficiency and superior wafer production.

Simply put, fabs do not build the complex tools they need to make their products, such as photolithography equipment; they purchase and utilise them in ways others can’t to produce market leading products. Why should utilising the tools of the cloud be any different?

On a macro level, the argument for specialisation also applies through comparative advantage.[13] Different continents and countries hold comparative advantages in certain fields: Asia has long been a world leader in all kinds of manufacturing due to its vast populations,[14] while the United States has a tertiary education system that is the envy of the world; institutions like Stanford and MIT are household names across the globe, providing the high technical skills needed to be the home of the technology start-up. Utilisation of cloud technology and other distributed systems allows firms to take the best of what both regions have to offer: high-tech manufacturing facilities from Singapore to Taiwan running the latest technology from Silicon Valley, or perhaps London. Through the cloud, Fab Managers and organisations can leverage a single advanced technology across multiple fabs within complex supply chains. This eliminates the need for costly, experienced teams to travel across the globe, or to manage multiple teams in various locations with varying skill sets, all while locating facilities and offices where the best talent is.

In brief, semiconductor firms' fate could rest on one pivotal decision: adoption of the cloud. This choice carries the promise of leveraging cutting-edge technology, fortifying resilience, and reaping a multitude of advantages. By transitioning to cloud-native solutions, Fab Managers can usher their organisations into an era of unparalleled competitiveness while enjoying a range of substantial benefits; cloud-native architectures like Flexciton's, for example, promise a lower cost of ownership and zero-touch maintenance for fabs. We will delve deeper into the crucial aspect of security in an upcoming blog, providing a comprehensive understanding of how cloud-native solutions are actually key to safeguarding sensitive data and intellectual property, rather than compromising it. In this era of constant innovation, embracing the cloud is more than just an option; it is becoming a strategic imperative.

Author: Laurence Bigos, Product Manager at Flexciton


[1] Investor relations - Texas Instruments completes acquisition of National Semiconductor - Texas Instruments

[2] ON Semiconductor Successfully Completes Acquisition of Fairchild Semiconductor for $2.4 Billion in Cash

[3] S3 Graphics: Gone But Not Forgotten | TechSpot

[4] Miller, C. (2022). Chip War: The Fight for the World's Most Critical Technology. Scribner.

[5] Flexciton | Blog & News | Is Fear Holding Back The Chip Industry’s Future In The Cloud?

[6] Flexciton | Resources | Seagate Case Study 2.0

[7] Lynceus: Inline, Real-time, AI Based Process Control Monitoring That Can Reduce Inspection & Metrology Capex (semianalysis.com)

[8] Applying artificial intelligence at scale in semiconductor manufacturing | McKinsey

[9] Know Key Disaster Recovery Statistics And Save Your Business (invenioit.com)

[10] Wood.pdf (usenix.org)

[11] Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.

[12] Specialization Definition (investopedia.com)

[13] What Is Comparative Advantage? (investopedia.com)

[14] Why China Is "The World's Factory" (investopedia.com)

Explore more articles

The Flex Factor with... Jannik

Please give a warm welcome to Jannik, our next team member to sit in the hot seat. In this edition of The Flex Factor, find out how Jannik juggles being both an optimization engineer and customer lead, as well as what gets him excited in the world of tech.

Tell us what you do at Flexciton?

I’m an optimization engineer and technical customer lead working in the customer team. As an optimization engineer, I work on our models and the general back-end code to make sure we create optimal schedules that meet the client’s requirements.

As a customer lead, I speak to our clients to understand their unique challenges, so that I can translate them into requirements for our solution and liaise with our team to prioritise the right bits of work we want to get done.

What does a typical day look like for you at Flexciton?

To start my day I like to check in with my clients, to make sure their apps are working as expected and there are no queries waiting to be handled. Other than that, there is no such thing as a typical day.

Some days will be full of programming to create solutions for new problems we encounter, or to iron out bugs that made their way into the code during previous work. Other days might have lots of meetings to align our work with the engineering & product teams, or to speak with our customers and technology partners.

What do you enjoy most about your role?

My role has loads of connections within the company, which means I get to work with many super smart people to achieve our goals. I also really enjoy learning about the many different challenges our clients face and creating solutions for them, and occasionally I get to visit clients and peek inside the cleanroom, which never fails to amaze me.

If you could summarise working at Flexciton in 3 words, what would they be?

Challenges, curiosity, intelligence.

If you could have dinner with any historical figure, living or deceased, who would it be, and why?

Sebastião Salgado, the Brazilian photographer. Not only is he an inspirational photographer, he must also be full of stories and life lessons from many years of travelling and reforesting his family's farmland.

In the world of technology and innovation, what emerging trend or development excites you the most, and how do you see it shaping our industry?

It’s a very broad trend, but it’s amazing to see AI solutions spreading to more and more people and helping them in their daily lives. You’d think an industry like semiconductors is at the forefront of this, but we can see that there is still a lot of hidden potential which we can hopefully help to unlock over the next few years by replacing some of the legacy technology.

Tell us about your best memory at Flexciton?

This one is really tough because I love all the small moments here, from having a super technical discussion amongst engineers to finding out a new fun fact about each other over some drinks.

If I have to pick a single moment, it would be our surfing lesson near Albufeira during last year’s team trip. It was just loads of fun trying it out (and failing) together.

We're hiring! To see what vacancies we have available, check out our careers site.

B is for Batching

In the second instalment of the Flexciton Tech Glossary Series, we're taking you on an insightful journey through the world of batching. Find out about the many complexities of batching, the existing methods of solving the problem and the wider solution space.

Welcome back to the Flexciton Tech Glossary Series: A Deep Dive into Semiconductor Technology and Innovation. Our second entry of the series is all about Batching. Let's get started!

A source of variability

Let's begin with the basics: what exactly is a batch? In wafer fabrication, a wafer batch is a group of wafers that are processed (or transported) together. Efficiently forming batches is a common challenge in fabs. While both logistics and processing wrestle with this issue, our article will focus on batching for processing, which can be either simultaneous or sequential.

Figure 1: the different types of batching in a wafer fab.

Simultaneous batching is when wafers are processed at the same time on the same machine. It is very much inherent to the entire industry, as most of the machines are designed for handling lots of 25 wafers. There are also process types – such as thermal processing (e.g. diffusion, oxidation & annealing), certain deposition processes, and wet processes (e.g. cleaning) – that benefit from running multiple lots in parallel. All of these processes get higher uniformity and machine efficiency from simultaneous batching.

On the other hand, sequential batching refers to the practice of grouping lots or wafers for processing in a specific order to minimise setup changes on a machine. This method aims to maximise Overall Equipment Effectiveness (OEE) by reducing the frequency of setup adjustments needed when transitioning between different production runs. Examples in wafer fabrication include implant, photolithography (photo), and etch. 

Essentially, the entire process flow in wafer manufacturing has to deal with batching processes. To give a rough idea: in a typical complementary metal-oxide semiconductor (CMOS) architecture, up to 70% of the value-added steps in the front end of the line are batching steps. In a recent poll launched by FabTime on the top cycle time contributors, the community placed batching at number 5[1], behind tool downs, tool utilisation, holds, and one-of-a-kind tools. Batching creates lot departures in bursts, and hence inherently causes variability in arrivals downstream. Factory Physics states that:

“In a line where releases are independent of completions, variability early in a routing increases cycle time more than equivalent variability later in the routing.” [2]

Successfully controlling this source of variability will inevitably result in smoother running down the line. However, trying to reduce variability in arrival rates downstream can lead to smaller batch sizes or shorter campaign lengths, affecting the effectiveness of the batching machines themselves.

The many complexities of batching

In wafer fabs, and even more so in those with a high product mix, batching is particularly complicated. As described in Factory Physics:

"In simultaneous batching, the basic trade-off is between effective capacity utilisation, for which we want large batches, and minimal wait to batch time, for which we want small batches.” [2]

For sequential batching, changing over to a different setup of the machine will cause the new arriving lots to wait until the required setup is available again.

So in both cases, we're talking about a decision to wait or not to wait. The problem can easily be expressed mathematically if we're dealing with single-product manufacturing and a low number of machines to schedule. However, as one can imagine, the higher the product mix, the more possible setups and machines there are; the problem's complexity increases and the size of the solution space explodes. That's not all: other factors might come into play and complicate things even more. A couple of examples are:

  • Timelinks or queue time constraints: a maximum time in between processing steps
  • High-priority lots: those that need to move faster through the line for any reason
  • Downstream capacity constraints: machines that should not get starved at any cost
  • Pattern matching: when the sequence of batching processes needs to match a predefined pattern, such as AABBB

Strategies to deal with batching

Historically, the industry has used policies for batching: common rules of thumb that can essentially be split into ‘greedy’ or ‘full batch’ policies[3]. Full batch policies require lots to wait until a full batch is available. They tend to favour effective capacity utilisation and cost factors, while negatively impacting cycle time and variability. Greedy policies don't wait for full batches and favour cycle time; they assume that when utilisation levels are high, there will be enough WIP to make full batches anyway. For sequential batching on machines with setups, common rules include minimum and maximum campaign lengths, which have their own counterpart configurations for greedy versus full batching.[3]
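The trade-off between the two policy families can be illustrated with a minimal discrete-event sketch. This is purely illustrative, not any fab's or Flexciton's logic: a greedy policy loads whatever is waiting, while a full-batch policy waits for the tool's capacity to fill. The arrival times, capacity, and process time are invented:

```python
def simulate(arrivals, batch_capacity, process_time, greedy):
    """arrivals: sorted lot ready times (minutes). Returns (avg_wait, avg_batch_size)."""
    waiting, waits, sizes = [], [], []
    t, i = 0.0, 0
    while i < len(arrivals) or waiting:
        # Load newly arrived lots onto the rack.
        while i < len(arrivals) and arrivals[i] <= t:
            waiting.append(arrivals[i])
            i += 1
        if waiting and (greedy or len(waiting) >= batch_capacity):
            batch, waiting = waiting[:batch_capacity], waiting[batch_capacity:]
            waits.extend(t - a for a in batch)
            sizes.append(len(batch))
            t += process_time            # tool is busy until the batch finishes
        elif i < len(arrivals):
            t = max(t, arrivals[i])      # idle until the next lot arrives
        else:
            break                        # leftover lots never reach a full batch
    return sum(waits) / len(waits), sum(sizes) / len(sizes)

arrivals = [0, 5, 12, 14, 30, 33, 41, 60]
greedy_stats = simulate(arrivals, batch_capacity=4, process_time=20, greedy=True)
full_stats = simulate(arrivals, batch_capacity=4, process_time=20, greedy=False)
```

On this toy arrival stream, the greedy policy yields a lower average wait but smaller average batches, while the full-batch policy fills the tool at the cost of queuing, mirroring the trade-off described above.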

Batch formation, whether sequential or simultaneous, involves far more complex decisions than loading a single lot into a tool, as it necessitates determining which lots can be grouped together. Compatibility between lots must be considered, and practitioners must also optimize how long existing lots on the rack should wait for new arrivals, all with the goal of maximising batch size. [4]

Figure 2: Impact of Greedy vs. Near-full batch policy on cycle time x-factor for a tool. [4]

Industrial engineers face the challenge of deciding the best strategy for loading batch tools, such as those in the diffusion area. In articles by FabTime [4][5], the impact of the aforementioned greedy versus full (or near-full) batch policy is compared. The greedy heuristic reduces queuing time and variability but may not be cost-effective. Full batching is cost-effective but can be problematic when operational parameters change. For instance, if a tool's load decreases (it becomes less of a bottleneck), a full batch policy may increase cycle time and overall fab variability. On the other hand, a greedy approach might cause delays for individual lots arriving just after a batch is loaded, especially critical or hot lots with narrow timelink windows. Adapting these rules to changing fab conditions is essential.

In reality, these two policies are two extreme settings in a spectrum of possible trade-offs between cost and cycle time (and sometimes quality). To address the limitations of both the greedy and full batch policies, a middle-ground approach exists. It involves establishing minimum batch size rules and waiting for a set duration, X minutes, until a minimum of Y lots are ready for batching. This solution usually lacks robustness because the X and Y values depend on various operational parameters, different recipes, product mix, and WIP level. As this rule-based approach incorporates more parameters, it demands greater manual adjustments when fab/tool settings change, inevitably leading to suboptimal tool performance.
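The middle-ground rule described above can be written as a simple dispatch predicate. The parameter names follow the text's X minutes / Y lots formulation; the example values are arbitrary:

```python
def should_dispatch(now, lot_ready_times, min_lots_y, max_wait_x):
    """True if the rack should be dispatched at time `now` (all times in minutes)."""
    if not lot_ready_times:
        return False
    if len(lot_ready_times) >= min_lots_y:
        return True                            # enough lots for a good-sized batch
    oldest_wait = now - min(lot_ready_times)
    return oldest_wait >= max_wait_x           # don't starve the oldest lot

# Two lots on the rack; rule: wait up to 45 minutes for at least 3 lots.
print(should_dispatch(now=40, lot_ready_times=[10, 25], min_lots_y=3, max_wait_x=45))  # False
print(should_dispatch(now=60, lot_ready_times=[10, 25], min_lots_y=3, max_wait_x=45))  # True
```

The fragility the text describes is visible here: the right `min_lots_y` and `max_wait_x` depend on product mix, recipes, and WIP level, so every change in fab conditions means retuning them by hand.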

In all of the above solutions, timelink constraints are not taken into consideration. To address this, Sebastian Knopp[6] recently developed an advanced heuristic based on a disjunctive graph representation. The model's primary aim was to reduce the problem size while incorporating timelink constraints. The approach successfully tackled real-life industrial cases, albeit of unknown problem size.

Over the years, the wafer manufacturing industry has come up with various methodologies to help deal with the situation above, but they give no guarantee that the eventual policy is anywhere near optimal and their rules tend to stay as-is without adjusting to new situations. At times, this rigidity has been addressed using simulation software, enabling factories to experiment with various batching policy configurations. However, this approach proved to be resource-intensive and repetitive, with no guarantee of achieving optimal results.

How optimization can help master the batching problem

Optimization is the key to avoiding the inherent rigidity and unresponsiveness of heuristic approaches, helping to effectively address the batching problem. An optimization-based solution takes into account all batching constraints, including timelinks, and determines the ideal balance between batching cost and cycle time, simultaneously optimizing both objectives.

It can decide how long to wait for the next lots, considering the accumulating queuing time of the current lots and the predicted time for new lots to arrive. No predetermined rules are in place; instead, the mathematical formulation encompasses all possible solutions. With a user-defined objective function featuring customised weights, an optimization solver autonomously identifies the optimal trade-off, eliminating the need for manual intervention.
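As a toy illustration of the weighted-objective idea (a brute-force sketch, nothing like a production MILP formulation), one can enumerate every way to split a small FIFO queue into batches and score each plan with user-defined weights on queue time and batch count; a real solver's job is to find the same kind of optimum at scales where enumeration is impossible. All numbers here are invented:

```python
def plans(lots, cap):
    """Yield every split of `lots` (kept in FIFO order) into batches of size <= cap."""
    if not lots:
        yield []
        return
    for k in range(1, min(cap, len(lots)) + 1):
        for rest in plans(lots[k:], cap):
            yield [lots[:k]] + rest

def score(plan, process_time, w_q, w_b):
    """Weighted sum of total queue time and number of batches (the batching 'cost')."""
    t, queue_time = 0.0, 0.0
    for batch in plan:
        start = max(t, max(batch))   # batch starts when tool and all its lots are ready
        queue_time += sum(start - a for a in batch)
        t = start + process_time
    return w_q * queue_time + w_b * len(plan)

arrivals = [0, 2, 3, 10, 11]         # lot ready times, minutes
best = min(plans(arrivals, cap=3), key=lambda p: score(p, process_time=5, w_q=1.0, w_b=4.0))
print(best)
```

Changing the weights shifts the optimum between many small, fast batches and few large, cheap ones, which is exactly the trade-off the objective function encodes.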

The challenge with traditional optimization-based solutions is the computational time when the size and complexity of the problem increase. In an article by Mason et al.[7], an optimization-based solution is compared to heuristics. While optimization outperforms heuristics in smaller-scale problems, its performance diminishes as problem size increases. Notably, these examples did not account for timelink constraints.

This tells us that the best practice is to try to break down the overall problem into smaller problems and use optimization to maximise the benefit. At Flexciton, advanced decomposition techniques are used to break down the problem to find a good trade-off between reduced optimality from the original problem and dealing with NP-hard complexity.[8]

Many practitioners aspire to attain optimal solutions for large-scale problems through traditional optimization techniques. However, our focus lies in achieving comprehensive solutions that blend heuristics, mathematical optimization, like mixed-integer linear programming (MILP), and data analytics. This innovative hybrid approach can vastly outperform existing scheduling methods reliant on basic heuristics and rule-based approaches.

Going deeper into the solution space

In a batching context, the solution space represents the numerous ways to create batches with given WIP. Even in a small wafer fab with a basic batching toolset, this space is immense, making it impossible for a human to find the best solution in a multi-product environment. Batching policies throughout history have been like different paths for exploring this space, helping us navigate complex batching mathematics. Just as the Hubble space telescope aided space exploration in the 20th century, cloud computing and artificial intelligence now provide unprecedented capabilities for exploring the mathematical world of solution space, revealing possibilities beyond imagination.
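The size of that solution space is easy to make concrete: even ignoring sequencing, machine assignment, and timing, the number of ways to partition n lots into groups is the Bell number B(n), which can be computed with the Bell-triangle recurrence:

```python
def bell(n):
    """Bell number B(n): the number of ways to partition an n-element set."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]              # each row starts with the previous row's last entry
        for v in row:
            nxt.append(nxt[-1] + v)  # Bell-triangle recurrence
        row = nxt
    return row[-1]

print(bell(10))   # 115975 possible groupings for just 10 lots
print(bell(25))   # roughly 4.6e18 -- and this still ignores sequencing and timing
```

With machine assignment, sequencing, and timing decisions layered on top, the space grows further still, which is why exhaustive search is hopeless and smarter exploration is needed.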

With the advent of these cutting-edge technologies, it is now a matter of finding a solution that satisfies the diverse needs of a fab, including cost, lead time, delivery, quality, flexibility, safety, and sustainability. These objectives often conflict, and ultimately, finding the optimal trade-off is a business decision, but the rise of cloud and AI will enable engineers to pinpoint a batching policy that is closest to the desired optimal trade-off point. Mathematical optimization is an example of a technique that historically had hit its computational limitations and, therefore, its practical usefulness in wafer manufacturing. However, mathematicians knew there was a whole world to explore, just like astronomers always knew there were exciting things beyond our galaxy. Now, with mathematicians having their own big telescope, the wafer manufacturers are ready to set their new frontiers.

Ben Van Damme, Industrial Engineer and Business Consultant, Flexciton
Dennis Xenos, CTO and Cofounder, Flexciton


[1] FabTime Newsletter: Issue 24.03

[2] Wallace J. Hopp, Mark L. Spearman, Factory Physics: Third Edition. Waveland Press, 2011

[3] Lars Mönch,  John W. Fowler,  Scott J. Mason, 2013, Production Planning and Control for Semiconductor Wafer Fabrication Facilities, Modeling, Analysis, and Systems, Volume 52, Operations Research/Computer Science Interfaces Series 

[4] FabTime Newsletter: FabTime Cycle Time Tip of the Month #4: Use a Greedy Policy when Loading Batch Tools

[5] FabTime Newsletter: Issue 9.03 

[6] Sebastian Knopp, 2016, Complex Job-Shop Scheduling with Batching in Semiconductor Manufacturing, PhD thesis, l’École des Mines de Saint-Étienne 

[7] S. J. Mason, J. W. Fowler, W. M. Carlyle & D. C. Montgomery, 2005, Heuristics for minimizing total weighted tardiness in complex job shops, International Journal of Production Research, Vol. 43, No. 10, 1943–1963

[8] S. Elaoud, R. Williamson, B. E. Sanli and D. Xenos, Multi-Objective Parallel Batch Scheduling In Wafer Fabs With Job Timelink Constraints, 2021 Winter Simulation Conference (WSC), 2021, pp. 1-11

Autonomous Scheduling: A Tale of Three Taxis

At Flexciton, we often talk about how autonomous scheduling allows wafer fabs to surpass the need for maintaining many rules to enable the behaviours they want at different toolsets. I would like to offer an analogy to show how significant the difference is.

Navigating the City

Imagine you are a passenger in a taxi. Your driver is a local; they know every road like the back of their hand and know the best routes to avoid likely problems. They can be flexible and effective, but have to spend a long time thinking about how to get to your destination. They also can’t know about the traffic on each potential route, and for new destinations they may require some trial and error before they find a good way of getting there. Worst of all, though they might have accumulated some great stories from their years of driving, it’s only thanks to those many years that they can navigate with any level of mastery.

Now imagine you have a very basic robotic driver; this driver is so mechanical that it has a hard-coded rule for every single road and junction: “If I’m at junction 20, I wait exactly thirty seconds and then I turn left.” This rule has come from an engineer performing a time study based on traffic levels six months ago. The driver has no knowledge of local events happening (for example, if it turns out that there is no oncoming traffic right now), and doesn’t even change its decisions when you need it to navigate to a new destination!

Meanwhile, when local conditions change at all (gaps in oncoming traffic at junction 20 are now every twenty seconds on average!) an engineer needs to manually change that parameter in the robot’s logic. And if the overall conditions change everywhere, or a new destination is desired, every rule needs to be retuned.

Finally, imagine a truly autonomous taxi. This taxi has a navigation system that knows where the traffic is, assesses the speed of every potential route, reacts to changes in conditions, and can get you to exactly where you want to go. In fact, all you have to do is tell it the destination; then you can sit back and relax, knowing it will get you there in the shortest possible time.

The autonomous taxi gets you to your destination in the shortest time possible. Meanwhile, the manual taxi gets you there slowly, and a poorly tuned robotic taxi might not get you there at all!

Navigating the Fab

While many wafer fabs have moved away from relying purely on tribal knowledge of manufacturing specialists on the fab floor, the scheduling problem in semiconductor factories is so difficult that, until recently, the hard-coded robotic taxi driver was the state-of-the-art. These solutions ask industrial engineers to manually tune thousands of rules to achieve intelligent behaviour, and they must be continuously re-tuned as fab conditions change.

A common scheduling challenge is deciding when to allow wafers into a timelink (or queue time loop) at diffusion. A timelink is the maximum amount of time that can elapse between two or more consecutive process steps, and some schedulers will simply limit the number of lots allowed within the timelinked steps at any one time. Others will just use a priority weight given to all timelinked lots, so that they are more likely to move through the loop without violating their time limit. Both of these rules are manually tuned and can’t react to the conditions of that particular moment in time, leading either to rework or scrap, or unnecessarily high cycle times.
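The timelink admission decision can be sketched as a feasibility check. This is a hedged illustration, not Flexciton's algorithm: release a lot into the loop only if the predicted processing and queuing time through the timelinked steps fits within the limit. All durations are made up:

```python
def safe_to_release(predicted_step_durations, predicted_queue_times, timelink_limit):
    """All arguments in minutes; the timelink clock starts when the lot is released."""
    predicted_elapsed = sum(predicted_step_durations) + sum(predicted_queue_times)
    return predicted_elapsed <= timelink_limit

# Three timelinked steps; the queue forecasts would come from the schedule itself.
print(safe_to_release([60, 30, 45], [10, 20, 15], timelink_limit=240))  # True: 180 <= 240
print(safe_to_release([60, 30, 45], [40, 90, 30], timelink_limit=240))  # False: 295 > 240
```

The hard part, of course, is producing trustworthy queue-time predictions, which is precisely what a lot-count cap or priority weight cannot do and an optimized schedule can.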

Another typical example from a commonly-used heuristic scheduler is the application of minimum batch size rules at diffusion areas. A typical rule might be “wait for a minimum batch size of x, unless y minutes have elapsed, in which case dispatch whatever is at the rack.” Many fabs will set up this rule for every furnace-operation combination, which could mean ~3,000 manually tuned parameters just for one rule at one toolset.

Meanwhile, when micro conditions change, for example daily WIP level fluctuations, these tuned parameters cannot react. And worse, when macro conditions such as overall market demand change, it becomes very hard for the whole fab to pivot quickly, because every rule needs re-tuning manually. Despite the theory that these rules can be set once every few months, in practice most fabs end up re-tuning them continuously, even daily, in order to maintain reasonable performance, with the predictable impact on industrial engineering resources!

Optimized scheduling, however, does away with these rules entirely and directly calculates the optimal schedule to improve your chosen objectives. In the timelink example, it doesn’t need to rely on guessing how many lots can be allowed into the loop - it just calculates the optimal schedule for the multiple steps involved, ensuring no timelink violations will occur.

But still, how do you get the scheduler to do what you want?

A New Paradigm for Tuning

If you have read any of our previous articles, you may be aware that optimization-based scheduling uses objectives such as “minimise queue time” and “maximise batch size” to calculate the optimal schedule. In fact, on most of our toolsets we only use ~2-3 objective weights, and by setting these you can achieve the balance and results you want.

Even this, however, is not truly autonomous.

We’ve been working to bring forward a new paradigm: letting you choose the fab-level outcome you want directly - like setting the destination for the taxi. If you know you want to prioritise achieving higher throughput, you can just specify that and Flexciton’s autonomous scheduler will automatically figure out what the optimization objective needs to be to achieve it.

What does this mean? It means you directly control the fab outcome you want to achieve, rather than guessing what toolset-level behaviours will produce the fab-level KPIs you want.

Orders of Magnitude

So when we speak about autonomous scheduling, we are referring to this new paradigm where you can choose the outcome you want, and Flexciton automatically does the rest. Instead of ~3,000 manually tuned parameters for just one of many rules at one toolset, just pick your desired KPI trade-off, and we automatically set the handful of objective weights that drive the optimization engine.

The result: no engineering resource dedicated to tuning; consistent high performance across changes in fab conditions; and the ability to pivot the entire fab's direction easily when market conditions change.

This is how Flexciton’s scheduler is powerful enough to let you set the destination, and go.

Author: Sebastian Steele, Product Manager at Flexciton