Until fairly recently, a major obstacle in the use of Response Surface Optimization with finite element analysis was the overhead cost associated with generating the design point solutions. The design point data is used to develop a polynomial or surrogate model of the response that can be used in an optimization analysis. With any data fitting operation, a greater number of data points will typically result in a better surrogate representation of the response. However, the number of configurations which can be practically solved depends on available hardware, software, model size, and type of analysis.

Each design point gets its data from a complete finite element model solution. With a linear finite element model, the solution time is a function of the number of degrees of freedom in the model. If your model is small, and the solution completes in a few minutes, solving for a large number of design points is not unreasonable.

When a model includes nonlinear effects, the finite element program must iteratively determine the system stiffness as well as the degree of freedom solution. This may require several linear solutions per load increment, making the process orders of magnitude more time consuming. If your nonlinear solution requires a large number of iterations for each design point, it could become impractical to solve for many design configurations.

This is why the selection of a Design of Experiments method becomes an important part of Response Surface Optimization. The DOE method chosen will define a subset of the total number of potential design points that will be used to create the response surfaces of the system. There are many DOE methods available and care should be taken to choose the method that is best suited to the physical system and the potential design variation.

The number of design points required for a “Two Level Full Factorial Analysis” is 2^n, where n is the number of input variables. Finite element models with one to three design variables are practical to run with a full factorial DOE, but models with more variables will quickly consume your computing resources. A DOE method that can employ a fractional factorial approach, such as the “Central Composite Design (CCD)”, will reduce the number of design points dramatically.
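To make the growth concrete, the corner points of a two-level full factorial can be enumerated directly. This is a minimal Python sketch; the -1/+1 coding of each variable's low and high levels is the usual DOE convention, not tied to any particular solver:

```python
from itertools import product

def two_level_full_factorial(n_vars):
    """All 2^n corner points, each variable at its low (-1) or high (+1) level."""
    return list(product([-1, 1], repeat=n_vars))

# Each added variable doubles the number of FE solutions required.
for n in (2, 3, 5, 8):
    print(n, "variables ->", len(two_level_full_factorial(n)), "design points")
```

At three variables this is a manageable 8 runs; at eight variables it is already 256 full finite element solutions.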

Another method that can be used to reduce the number of design points is “Latin Hypercube Sampling”. This method is based on Monte Carlo sampling with an improved distribution of the points in the design space. The number of design points can be user-specified, but care should be taken when using this approach, as the number of design points affects the accuracy and predictive quality of the response surfaces.
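The stratification behind Latin Hypercube Sampling can be sketched with the standard library alone. This is an illustrative toy, assuming a unit [0, 1) design space: each dimension is split into one stratum per design point, one sample is drawn per stratum, and the strata are paired at random across dimensions:

```python
import random

def latin_hypercube(n_points, n_vars, seed=0):
    """Toy LHS: one sample per stratum in each dimension, paired at random."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # one point in each of n_points equal-width strata on [0, 1)
        col = [(i + rng.random()) / n_points for i in range(n_points)]
        rng.shuffle(col)  # random pairing across dimensions
        columns.append(col)
    return list(zip(*columns))
```

Unlike plain Monte Carlo, every dimension is guaranteed even coverage: ten points means exactly one sample in each tenth of each variable's range.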

If your model is highly nonlinear, a “Sparse Grid Initialization” model may be the best choice. This method is coupled with the “Sparse Grid Response Surface” model and can automatically add design points when it detects high gradients in the response.
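The actual sparse grid machinery is solver-specific, but the gradient-driven idea can be sketched in one dimension: insert a new design point wherever the response changes steeply between neighbouring points. This is a toy illustration of that principle, not the Sparse Grid algorithm itself:

```python
def refine_where_steep(response, xs, threshold):
    """Insert a midpoint wherever the finite-difference slope between
    neighbouring design points exceeds the threshold."""
    xs = sorted(xs)
    refined = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        slope = abs(response(b) - response(a)) / (b - a)
        if slope > threshold:
            refined.append((a + b) / 2)  # new design point in the steep region
        refined.append(b)
    return refined
```

Applied repeatedly, this concentrates the expensive FE solutions where the response is changing fastest and leaves flat regions coarsely sampled.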

Any method that allows for additional refinement points is advantageous when conducting a DOE. The initial DOE runs can be used to coarsely characterize the design space. User-defined refinement points can then increase the fidelity of regions of interest in the response, and they can be added incrementally to improve the surrogate model without regenerating the entire response from scratch. Figure 1 above illustrates the effect of adding refinement points to the DOE on a response surface. Note the gradient at the center of the refined response surface that was not identified by the initial DOE.
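The incremental idea can be sketched with a toy 1-D surrogate that caches completed solutions, so adding refinement points only triggers new solves. Here `solve` stands in for a full finite element run, and the piecewise-linear fit is purely illustrative:

```python
import bisect

class Surrogate:
    """Toy 1-D surrogate: cached design-point solves plus linear interpolation."""
    def __init__(self, solve):
        self.solve = solve   # expensive FE solution, hypothetical callable
        self.data = {}       # design point -> response

    def add_points(self, xs):
        for x in xs:
            if x not in self.data:       # refinement re-uses earlier solves
                self.data[x] = self.solve(x)

    def predict(self, x):
        """Interpolate between the two nearest solved design points."""
        xs = sorted(self.data)
        i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
        a, b = xs[i - 1], xs[i]
        t = (x - a) / (b - a)
        return (1 - t) * self.data[a] + t * self.data[b]
```

Calling `add_points` with a new region-of-interest point updates the fit there while every previously solved design point is kept as-is.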

The predictive value of the Response Surface model is dependent on both the response points generated and the metamodel that is used to fit the data. The choice of a particular metamodel type is also an important component of Response Surface Optimization, but that is a discussion for another blog.

Do you have certain DOE methods of choice? Feedback on which models are most commonly used and why they are chosen would make for an interesting discussion.