Idea and principles of uncertainty quantification
Of the big engineering trends of the last decade or two, uncertainty quantification is definitely one of the biggest. Easily a contender for the top 3, in my opinion. There is plenty of both research and industrial interest in the area. Even more if you count some of the closely related fields, such as robust design and parameter exploration.
But what is uncertainty quantification, really?
Idea of uncertainty quantification
Let’s pick the term apart. It’s quite easy, there being only two words in total and all that.
The first one is uncertainty. This part is actually something that every design engineer knows, probably without even realizing it.
Say you first design a product and then construct it, or have it constructed. Is the final product exactly what you intended, by every significant measure and metric? Of course it’s not. We all know it isn’t.
So what do you have instead, exactly? Well, that’s kinda uncertain, wouldn’t you say?
Furthermore, if you make several supposedly-identical products, they will still have some deviations and variations between them.
That’s uncertainty, explained in a simple fashion. The state of not knowing, with infinite precision, how your product is being built and what the final product is going to be like.
The quantification part then means putting some numbers on the uncertainty described above. Typically, that means focusing on some quantity of interest, or QoI. For example, the torque of an electric motor, or the critical load of a steel structure. Next, the statistical properties of the QoI are quantified. That can mean determining only its mean value and variance (small bits of information), or its full probability distribution (all the information there is), or something in between.
Why it’s important
No production process is perfect. There are always some margins of error or reliability to consider. At the same time, the demands for performance of any kind are growing, pushing those same margins tighter and tighter. Motors need to be more power-dense, for instance.
A systematic way of analysis can be a life-saver here: if we push this margin here, we can estimate how the chances of failure change over there.
More exact definition
The simplest way to do this would be by measurements. Take your previous 1000 motors, and measure stuff. That’s quite an okay sample size, so you can easily apply high-school statistics to see how your QoI behaves.
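To make that concrete, here is a minimal Python sketch, assuming we have the torque measurements of those 1000 motors at hand (the data below is simulated purely for illustration):

```python
import numpy as np

# Simulated stand-in for 1000 measured torque values [Nm]; in practice these
# would come straight from the test bench.
rng = np.random.default_rng(0)
torque = rng.normal(50.0, 1.5, size=1000)

n = torque.size
mean = torque.mean()
std = torque.std(ddof=1)                 # sample standard deviation
ci = 1.96 * std / np.sqrt(n)             # ~95 % confidence interval of the mean

print(f"mean torque: {mean:.2f} Nm +/- {ci:.2f} Nm (95 % CI), std: {std:.2f} Nm")
```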
However, that’s not what is usually meant by uncertainty quantification. Instead, most sources and authors tend to focus on two aspects:
- Quantifying the sources of uncertainty, and then
- Studying the propagation of that uncertainty to the QoI.
For example, determining the range and type of variations in material parameters would belong to the first part. Performing stochastic simulations to quantify their effect on the QoI would then make up the second point.
One point of caution, though. Systematic limitations in the design process are typically not considered under uncertainty quantification. This means factors like the accuracy of the FEA approach. Instead, they would fall under error analysis, and other similar fields loved by mathematicians.
But other than that, pretty much everything else is fair game.
The procedure
We can divide the whole uncertainty quantification process into the following phases. In the near future, I will be writing posts about each of them, so stay tuned.
1) Determining the sources of uncertainty
This can include variations and imperfections in the materials used. For example, the iron sheets used in electrical machines are hardly homogeneous in real life. Even the ones called isotropic are very much anisotropic in reality. In simpler words, they “conduct” the magnetic flux better in one direction than the other. Which direction? Answer: uncertain.
The manufacturing process also induces some variations in the product. Tools suffer wear and tear, and tolerances are never exact. The impregnation process of a winding can also leave imperfections and faults in the resin used.
2) Quantifying the sources of uncertainty
Once the sources of uncertainty are identified, they have to be quantified. This is no easy task, to put it mildly.
Ideally, we would have to know the joint probability distribution of all the sources of uncertainty combined. Alas, in reality we may have to content ourselves with some kind of guesstimate. That way, we can at least get some bounds for a worst-case scenario.
In any case, expect to see statistics like means and variances, plus marginal distributions, during this phase.
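As a small illustration, here is one way to write such guesstimates down as marginal distributions with scipy.stats. Both sources and all the numbers below are invented for the sake of example:

```python
from scipy import stats

# Two hypothetical sources of uncertainty, described by guesstimated
# marginal distributions (all numbers invented for illustration).
sources = {
    "lamination thickness [mm]": stats.norm(loc=0.50, scale=0.01),
    "magnet remanence [T]": stats.uniform(loc=1.18, scale=0.08),  # 1.18 ... 1.26 T
}

for name, dist in sources.items():
    lo, hi = dist.ppf(0.005), dist.ppf(0.995)   # rough 99 % "worst-case" bounds
    print(f"{name}: mean {dist.mean():.3f}, std {dist.std():.3f}, "
          f"99 % range [{lo:.3f}, {hi:.3f}]")
```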
Experience is your greatest ally here. If I were you, I would begin by talking to the workers actually making the product. Especially the senior ones. They may not speak of it as uncertainty (and thus neither should you when consulting them, you arrogant idiot), but they definitely know what goes bump in the manufacturing process. Far better than you or I ever will.
Click here to read more about finding and quantifying the sources of uncertainty.
3) Expressing the uncertainty in mathematical form
This stage might be the hardest one to understand. Simply put, we model the uncertainty in a way that best suits our purposes. As you’ll probably guess, there are several ways to do this.
However, I will be focusing on something called polynomial chaos and the Karhunen-Loève expansion. My reason is that these two can quite easily be coupled to simulation models later on.
Simply put, the uncertainty is expressed in a functional form. Think of Fourier series for a second. A Fourier series consists of expressing a general time-dependence as a series of sinusoids.
Now, instead of time-dependence we have a stochastic dependence. So, instead of sinusoids we will have random polynomials as the basis. But otherwise the principle is the same. Express the dependence as a weighted sum, and then determine the weights.
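To make the analogy concrete, here is a minimal Python sketch of a one-dimensional polynomial chaos fit. The “model” below is a made-up analytical stand-in for a real simulation, and every number in it is purely illustrative:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# Made-up analytical stand-in for the simulation model: torque [Nm] as a
# nonlinear function of one standardised random input xi ~ N(0, 1).
def model(xi):
    return 50.0 + 3.0 * xi - 0.8 * xi**2

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)        # samples of the random input
y = model(xi)                        # corresponding QoI values

# Basis: probabilists' Hermite polynomials He_0 ... He_P -- the "random
# sinusoids" of the Fourier analogy.
P = 3
Psi = np.column_stack([He.hermeval(xi, np.eye(P + 1)[k]) for k in range(P + 1)])

# Determine the weights (the polynomial chaos coefficients) by least squares.
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Mean and variance of the QoI follow directly from the coefficients.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, P + 1))
print(f"PCE mean: {mean:.2f} Nm, PCE std: {np.sqrt(var):.2f} Nm")
```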
Click here to read about polynomial chaos.
Click here to read about treating random functions rather than variables.
4) Coupling to simulation models
Next, we estimate the propagation of the uncertainty to the final product. For this, we use simulation models such as FEA. There are basically two ways to model this.
Non-intrusive models don’t modify the existing FEA core. They simply feed samples of different input data to it, run several simulations, and then gather the output. The number of required simulations can be reduced by using advanced approaches.
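Here is a rough sketch of that non-intrusive loop, assuming a black-box solver function (a made-up analytical stand-in below) and invented input distributions:

```python
import numpy as np

rng = np.random.default_rng(2)

def run_fea(thickness_mm, remanence_T):
    """Stand-in for the existing, unmodified FEA solver (a black box).
    In reality this would launch a full simulation and return the QoI;
    here a made-up analytical formula plays that role."""
    return 40.0 * remanence_T - 5.0 * (thickness_mm - 0.5) ** 2

# Sample the (hypothetical) input distributions ...
n = 500
thickness = rng.normal(0.50, 0.01, size=n)
remanence = rng.uniform(1.18, 1.26, size=n)

# ... feed the samples to the solver one by one, and gather the output.
torque = np.array([run_fea(t, b) for t, b in zip(thickness, remanence)])

print(f"torque: mean {torque.mean():.2f} Nm, std {torque.std(ddof=1):.3f} Nm")
```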
By contrast, intrusive models directly include the uncertainty in the simulation itself. They can be mathematically harder, require modifying the code itself, and result in larger problems for the computer to solve. However, they can still be computationally faster and yield more reliable results. Especially today, now that we have several options for reducing the computational burden.
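To give a taste of what intrusive means, here is a toy stochastic Galerkin sketch for a single equation with one random coefficient. In a real FEA model a similar projection would be applied to the full system matrices, which is where the extra implementation effort comes from. All numbers are made up:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# Toy stochastic Galerkin problem: solve (k0 + k1*xi) * u(xi) = f with
# xi ~ N(0, 1), by expanding u(xi) = sum_j u_j He_j(xi) and projecting the
# equation onto each basis polynomial He_i. All numbers are invented.
k0, k1, f = 100.0, 5.0, 1.0
P = 4                                  # order of the expansion

# Gauss-Hermite quadrature for taking expectations over the standard normal.
x, w = He.hermegauss(30)
w = w / np.sqrt(2.0 * np.pi)           # normalise the weights to the pdf

def psi(i, pts):
    """Probabilists' Hermite polynomial He_i evaluated at pts."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return He.hermeval(pts, c)

# Assemble the coupled (P+1)-by-(P+1) system -- larger than the single
# deterministic equation it replaces, but solved only once.
A = np.zeros((P + 1, P + 1))
b = np.zeros(P + 1)
for i in range(P + 1):
    b[i] = f * np.sum(w * psi(i, x))
    for j in range(P + 1):
        A[i, j] = np.sum(w * (k0 + k1 * x) * psi(i, x) * psi(j, x))

u = np.linalg.solve(A, b)              # polynomial chaos coefficients of u
mean = u[0]
var = sum(u[j] ** 2 * factorial(j) for j in range(1, P + 1))
print(f"mean of u: {mean:.6f}, std: {np.sqrt(var):.6f}")
```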
5) Post-processing
This phase ties directly to the previous one. Remember stage 3), where the sources of uncertainty were expressed in functional form? Well, now we do the same thing to the QoI. Intrusive models do this for us directly, whereas with non-intrusive ones we have to do some fitting.
But, the beauty of the approach becomes apparent here. Now, we have a direct expression between the sources of uncertainty, and our quantities of interest. This meta-model is computationally very lightweight. This means we can very easily analyse any scenario we want, almost instantly. The time-consuming FEA model is no longer needed, you see.
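As a quick illustration, here is what using such a meta-model could look like. The polynomial chaos coefficients and the failure threshold below are assumed values, invented for the sake of example:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Hypothetical polynomial chaos coefficients of the torque, as obtained from
# a fit like the one sketched earlier; the failure threshold is invented too.
coef = np.array([49.2, 3.0, -0.8, 0.0])   # He_0, He_1, He_2, He_3 weights
threshold = 40.0                          # minimum acceptable torque [Nm]

# Evaluating the meta-model at a million scenarios is nearly instantaneous --
# no FEA runs required.
xi = np.random.default_rng(3).standard_normal(1_000_000)
torque = He.hermeval(xi, coef)

p_fail = np.mean(torque < threshold)
print(f"estimated probability of torque below {threshold} Nm: {p_fail:.2e}")
```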
6) Deciding what to do with the results
The title says it all. This is very much problem-dependent, and again requires quite a bit of insight and experience.
Conclusion
There you have it. The idea of uncertainty quantification explained, along with the typical workflow. In the future, I will be writing more detailed posts about each of the phases, so stay tuned!
Have something to add or ask? Hit us a comment!
-Antti
Check out EMDtool - Electric Motor Design toolbox for Matlab.
Need help with electric motor design or design software? Let's get in touch - satisfaction guaranteed!