
Engineering Advantage

Big Data, Stochastics and Robust Design

October 17, 2013 By: Nick Veikos

When we hear of “Big Data” these days, it is usually related to how companies will be able to improve their marketing and get better at selling to us. Fortunately, there are also truly valuable applications of Big Data.

The October 2013 issue of Mechanical Engineering Magazine featured an enlightening article on Big Data and how it can create smart factories. It reminded me of a newsletter in which my former heat transfer professor, Vijay Modi, discusses how data-driven systems will be required to increase resilience and reduce the environmental footprint of urban areas. Big Data even helped Team USA win the America's Cup.

What role does Big Data play in simulation? One could argue that we are always using Big Data in simulation. If terabytes of results from a large FEA or CFD model are not Big Data, what is? We have an infinite array of virtual sensors at our disposal, which makes the mere 300 sensors on the Oracle AC72 catamaran seem trivial by comparison.

But, by itself, this is not Big Data; it is merely Lots of Data. By analyzing only one geometric configuration, with one set of material properties and one set of boundary conditions, we are really looking at the outcome of only one point in the design space. We have loads of information about that design point, but little knowledge of how the design would perform if some combination of the original parameters were different.

The marketing analogy is asking one person a thousand questions: you will get to know a lot about that one person, but very little about how the overall population would respond to those questions.

With a single data point, we cannot infer cause and effect, reveal relationships and dependencies, or predict outcomes for even slightly modified configurations. The common practice is to compensate for this knowledge gap by applying safety factors to the results. When these don't work, everyone is left scratching their heads, because there is no clear understanding of what went wrong.
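To make that gap concrete, here is a minimal sketch (in Python, with purely illustrative numbers of my own choosing) of how a design that looks comfortable on nominal values can still carry a real failure rate once input scatter is accounted for:

```python
import numpy as np

rng = np.random.default_rng(1)

# Deterministic view: nominal strength 250 MPa, nominal stress 100 MPa,
# so a safety factor of 2.5 looks comfortable.
nominal_strength, nominal_stress = 250.0, 100.0
print("Safety factor:", nominal_strength / nominal_stress)  # 2.5

# Stochastic view of the same design: real parts and real loads scatter.
# These distributions are illustrative assumptions, not from the article.
strength = rng.normal(250.0, 25.0, 1_000_000)           # MPa
stress = rng.lognormal(np.log(100.0), 0.35, 1_000_000)  # MPa

# Despite the comfortable nominal safety factor, the scattered tails
# overlap, leaving a nonzero failure probability (a few per thousand here).
print("P(failure):", np.mean(stress > strength))
```

The single deterministic number hides the tail of the distribution, which is exactly where failures live.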

By using a statistical approach to represent the geometry, materials, and boundary conditions, a simulation can capture the full range of system behavior, rather than only one of infinitely many potential outcomes. This stochastic approach is the simulation version of Big Data.
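As a minimal sketch of what statistical inputs mean in practice, the snippet below samples geometry, material, and load from distributions and evaluates a response for every sample. The closed-form stress formula and all of the distribution parameters are stand-ins; in a real study, each sample would drive one run of the parametric FEA or CFD model:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of stochastic samples ("virtual experiments")

# Treat the inputs as distributions rather than single nominal values.
# All parameters and tolerances below are illustrative assumptions.
thickness = rng.normal(5.0, 0.1, N)           # mm, manufacturing scatter
strength = rng.normal(250.0, 15.0, N)         # MPa, material scatter
load = rng.normal(40_000.0, 3_000.0, N)       # N, operating scatter

# A closed-form stress formula stands in for a full FEA solve;
# in a real study each sample would drive one run of the model.
width = 40.0                                  # mm, held fixed in this sketch
stress = load / (width * thickness)           # MPa, simple uniaxial stress
margin = strength - stress                    # MPa; negative margin = failure
```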

By pulling together results from many simulations, engineers can quantitatively predict things like the probability of failure, the number of failures per thousand, and the useful service life. This data provides tremendous insight and becomes the enabler for robust designs, in which small changes in design parameters do not significantly impact performance.
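Continuing the sketch above, turning the batch of results into those decision-ready numbers is straightforward; the `margin` array is carried over from the sampling snippet:

```python
# Post-process the stochastic results from the sketch above.
failures = np.count_nonzero(margin < 0.0)

print(f"Probability of failure: {failures / N:.4f}")        # ~1% with these inputs
print(f"Failures per thousand:  {1000 * failures / N:.1f}")  # ~10 per thousand
print(f"Margin: mean {margin.mean():.1f} MPa, "
      f"spread {margin.std():.1f} MPa")
```

The spread of the margin relative to its mean is one simple indicator of robustness: the smaller it is for a given input scatter, the less sensitive the design is to variation.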

The use of probabilistic methods is not new, but recent improvements in computer software and hardware have made them much more accessible. To take this stochastic, Big Data approach to simulation, we need the following four components:

  • A parametric CAD model and robust simulation software that make it easy to set up and run many design variations (see the sweep sketch after this list).
  • Lots of computational capability – either in-house or via the cloud.
  • A scalable software licensing scheme to make it affordable.
  • Tools to pull the data together and help implement robust design.
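As a hypothetical illustration of how these pieces fit together, the sketch below sweeps one design parameter, runs a stochastic batch at each design point, and keeps the first variant that meets a reliability target. The closed-form model and the target of one failure per thousand are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000  # stochastic samples per design variant

def failure_probability(width_mm):
    """Stand-in for driving the parametric model at one design point;
    a closed-form stress formula plays the role of the FEA solver."""
    thickness = rng.normal(5.0, 0.1, N)            # mm
    strength = rng.normal(250.0, 15.0, N)          # MPa
    load = rng.normal(40_000.0, 3_000.0, N)        # N
    stress = load / (width_mm * thickness)         # MPa
    return np.count_nonzero(strength < stress) / N

# Sweep one design parameter and keep the first (lightest) variant
# that meets a hypothetical target of one failure per thousand.
target = 1e-3
for width in (40.0, 45.0, 50.0, 55.0):
    p_fail = failure_probability(width)
    print(f"width = {width:4.1f} mm -> P(failure) = {p_fail:.4f}")
    if p_fail <= target:
        print(f"Selected robust design: width = {width} mm")
        break
```

In practice, the loop body would submit jobs to in-house clusters or the cloud, which is why the scalable licensing and data-aggregation tools in the list above matter as much as the solver itself.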

With these key components in place, engineering organizations of all sizes can begin to implement robust design methodologies and realize the quality improvements that simulation leaders have been achieving for many years.

Like it or not, Big Data is here to stay. The good news is that, in the form of robust design, it can really help us design better, safer, and more reliable products.