In a previous post, "How Does my DOE Model Determine How Many Design Points to Generate?," we discussed the importance of choosing an appropriate Design of Experiments method when performing a Response Surface Optimization. When the DOE part of the process is complete, the next step in the optimization is choosing a surrogate or "Meta Model" method that will be fit to the DOE data.
Consider the example shown in Figure 1. A beam is simply supported on the left end. The middle region rests on the support block. A uniform pressure is applied across the top of the beam. We can set up a Design of Experiments study to track the bending response of the beam when we modify the beam height and the offset distance of the support.
Figure 1: Parametric Beam Example

Using a DOE based on the Central Composite Design method, the design points are defined and the response for each point is determined from a finite element solution. Figure 2 graphs the response against each input variable.
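As a rough sketch of how a Central Composite Design lays out its points, the coded-unit coordinates for a two-factor study (here, beam height and support offset) can be generated as below. The rotatable alpha value shown is one common choice; the exact point set your DOE tool produces may differ.

```python
import numpy as np

def ccd_points(k=2):
    """Two-factor CCD in coded units: corner, axial (star), and center points."""
    alpha = (2 ** k) ** 0.25                 # rotatable axial distance
    # Full-factorial corner points at +/-1 on each factor
    corners = np.array(np.meshgrid(*[[-1, 1]] * k)).T.reshape(-1, k)
    # Axial points at +/-alpha along each factor axis
    axial = np.vstack([a * np.eye(k)[i] for i in range(k) for a in (-alpha, alpha)])
    center = np.zeros((1, k))
    return np.vstack([corners, axial, center])

points = ccd_points()
print(len(points))  # 4 corner + 4 axial + 1 center = 9 design points
```

Each row is one design point in coded units; the tool maps these back to physical ranges of beam height and support offset before solving.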
Figure 2: Bending Stress Result vs. the Beam Height & Support Offset

With the design point data available, the next step in the optimization process is the selection of a Meta Model, also referred to as a response surface, to represent the system response between the data points. The purpose of the Meta Model is to act as a surrogate representation of the system response so that it can be quickly sampled to determine the optimal solution.
One of the most common Meta Models is the 2nd Order Polynomial. This method, as the name suggests, fits a quadratic polynomial to the data points. This model works well when there is little local variation of the response and an exact mapping of the data is not required. If the variation of the output data follows a smooth trend, the data points will largely fall on the surface, but an exact fit of all points is not guaranteed. Large variations in the response will affect the goodness of fit as the polynomial model may not have the fidelity required to map all of the data points onto a single surface. A 2nd Order Polynomial Meta Model for this example is illustrated in Figure 3.
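As an illustration of the fitting step (not the beam solution itself), a full second-order polynomial in two variables can be fit to design point data by ordinary least squares. The sample response below is a made-up smooth quadratic, so the fit recovers its coefficients exactly; a real DOE response would leave small residuals.

```python
import numpy as np

def quad_design_matrix(x, y):
    """Columns for z = b0 + b1*x + b2*y + b3*x*y + b4*x^2 + b5*y^2."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

# Illustrative sample data: a smooth quadratic trend with no noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 25)
y = rng.uniform(-1, 1, 25)
z = 3.0 + 2.0 * x - 1.5 * y + 0.5 * x * y + 0.8 * x ** 2

# Least-squares fit of the six polynomial coefficients
coeffs, *_ = np.linalg.lstsq(quad_design_matrix(x, y), z, rcond=None)

def predict(x, y):
    """Sample the fitted response surface at new input values."""
    return quad_design_matrix(np.atleast_1d(x), np.atleast_1d(y)) @ coeffs
```

Once fitted, `predict` can be sampled thousands of times at negligible cost, which is exactly the role the Meta Model plays in the optimization loop.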
Figure 3: Second Order Polynomial Meta Model

The Kriging Meta Model is often used when the goodness of fit metrics are important. The Kriging model starts from a polynomial trend but then fits the surface through the local deviations at the response points. The goodness of fit will always be high, as the Kriging model forces the surface to pass through all of the data points. This can be advantageous for locating local deviations in the response that may require additional refinement points to fully characterize. However, if the response of the system is noisy, the Kriging model may not be the best choice, as it may create multiple local maxima and minima in the response. In Figure 4, we can see that fitting the surface to the design points in the lower left requires a curvature that creates a local minimum. The result will be underpredicted by the Meta Model, as the response surface in that region dips below the surrounding data point values.
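The pass-through-every-point behavior can be demonstrated with a bare-bones, one-dimensional Gaussian-correlation interpolator. Real Kriging implementations add a trend function and tune the correlation length; this toy version fixes both, so treat it only as an illustration of why noise in the data gets reproduced by the surface.

```python
import numpy as np

def kriging_fit(x, z, theta=10.0):
    """Solve for interpolation weights with a Gaussian correlation, no nugget."""
    K = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
    return np.linalg.solve(K, z)

def kriging_predict(xq, x, w, theta=10.0):
    """Evaluate the surrogate at query points xq."""
    k = np.exp(-theta * (xq[:, None] - x[None, :]) ** 2)
    return k @ w

# Illustrative data: a smooth trend plus an alternating "noise" term
x = np.linspace(0.0, 1.0, 6)
z = np.sin(2 * np.pi * x) + 0.1 * (-1) ** np.arange(6)
w = kriging_fit(x, z)

# With no nugget term, the surrogate reproduces the data exactly, noise included:
print(np.allclose(kriging_predict(x, x, w), z))  # True
```

Because the noise is reproduced along with the trend, the surface must bend between points, which is the mechanism behind the spurious local minimum visible in Figure 4.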
Figure 4: Kriging Meta Model Showing Data Fitting to Local Response Variations

The Nonparametric Regression model addresses the data noise issues that affect the Kriging model by creating a narrow envelope around the base polynomial fit that encompasses all of the data points. With this model, the trends of the data are captured much as they are with the 2nd Order Polynomial model, while the localized variations that would otherwise require the Kriging model should fall within the tolerance envelope. Care must also be taken with this approach, as the tolerance envelope may exaggerate the local extremes that were seen in the Kriging example. This is illustrated in Figure 5.
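The envelope idea can be sketched as a base polynomial fit plus a band just wide enough to contain every residual. Actual nonparametric regression models use a more sophisticated support-vector-style fitting procedure, so this only illustrates the "tolerance envelope around a trend" concept.

```python
import numpy as np

# Illustrative data: a quadratic trend with a small localized wiggle
x = np.linspace(-1, 1, 15)
z = x ** 2 + 0.05 * np.sin(12 * x)

# Base fit: an ordinary second-order polynomial trend
coeffs = np.polyfit(x, z, 2)
residuals = z - np.polyval(coeffs, x)

# Envelope half-width: the smallest band containing all data points
eps = np.max(np.abs(residuals))

# Every data point lies inside [trend - eps, trend + eps]:
print(np.all(np.abs(residuals) <= eps))  # True
```

The wiggle never forces the trend itself to oscillate; it only sets the width of the band, which is why this model tolerates noise better than an interpolating Kriging fit.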
Figure 5: NonParametric Regression Model Exaggerating Local Response Peaks & Valleys

The good news is that Meta Model generation is very quick compared to the DOE solution, so you can fit multiple Meta Model types to the same DOE data and compare them. Regardless of the Meta Model used, the optimal design inputs should always be checked by generating verification points (solved results) to confirm that your Meta Model is accurately predicting the response.
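The verification check itself amounts to comparing the surrogate's prediction at the candidate optimum against a freshly solved result. The 5% tolerance below is an arbitrary example value, not a recommendation; pick an acceptance threshold appropriate to your application.

```python
def verify(predicted, solved, rel_tol=0.05):
    """Accept the Meta Model prediction if it is within rel_tol of the solved result."""
    return abs(predicted - solved) / abs(solved) <= rel_tol

# Example with made-up stress values (units immaterial):
print(verify(101.2, 100.0))  # True: prediction within 5 percent of the solved result
print(verify(120.0, 100.0))  # False: refine the Meta Model near the optimum
```

A failed check is a signal to add refinement points near the candidate optimum and regenerate the response surface before trusting the result.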
If you find that the standard Meta Models are not mapping your data to your level of satisfaction, there are other approaches available besides the three models discussed here. These include manual refinement of the design point definition or using a Meta Model that will dynamically add refinement points during the generation of the response surface. Stay tuned for another addition to this blog topic where these approaches will be discussed.