A design optimization study makes all kinds of sense on paper. Why wouldn’t you want your product to be the strongest, lightest, and most cost-effective design possible? Why wouldn’t you want a way to quickly and accurately predict how your product will react to variations in manufacturing tolerances, material quality, and loading? The tools to accomplish this are readily available, and the internet is chock full of white papers, case studies, and success stories to support the methods. It should be a no-brainer, right?
Well, like anything, the devil is in the details. What most optimization newcomers realize early on is that running the optimization study is the easy part; getting the data for the optimization tool is where the real work is done. Consider performing an optimization study using a finite element model. The data used by the optimization tool comes from actual finite element solutions, and the number of solutions required grows with the number of input variables. You could manually generate a finite element model for each input combination if you had unlimited time, but who has that luxury? Ideally, you would use one model and drive it parametrically to automate the process of generating the design point results.
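The parametric loop is simple to sketch in code. Below is a minimal Python outline of that idea; `update_model()`, `solve()`, and `extract_outputs()` are hypothetical stand-ins for whatever your CAD/FEA tool actually exposes, and the placeholder output keeps the sketch runnable.

```python
# Sketch of a parametric design-point loop. The commented-out calls are
# hypothetical stand-ins for your CAD/FEA tool's API; the placeholder
# return value keeps the example self-contained.
from itertools import product

lengths = [100.0, 125.0, 150.0]   # mm, example input variable 1
thicknesses = [2.0, 3.0, 4.0]     # mm, example input variable 2

def run_design_point(length, thickness):
    """Stand-in for: update CAD parameters, solve, pull outputs."""
    # update_model(length=length, thickness=thickness)
    # solve()
    # return extract_outputs()
    return {"mass": length * thickness * 0.01}  # placeholder output

# every combination of the input variables is one design point
results = [run_design_point(L, t) for L, t in product(lengths, thicknesses)]
print(f"{len(results)} design points generated")
```

Even in this toy form, the combinatorial growth is visible: two variables with three levels each already require nine solutions.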
To do this effectively, you need a robust regenerative model. There are several levels of the design point process where conflicts in the inputs can occur. Weeding out the problem areas can be difficult, but losing resolution in your optimization model degrades its predictive quality. There are steps you can take to avoid errors like the ones described below:

1) CAD Model Regeneration: Conflicts in the geometry parameters can lead to regeneration errors in the CAD model, and the difficulty of predicting where these conflicts will occur only increases with the number of input variables. One approach is to manually test the extremes of the parameter values in the CAD program and adjust the parameter ranges as needed. Although this is a good place to begin, it will not capture all of the potential conflicts and can be very time consuming. Another approach is to use your DOE tool to set up a test run of the geometry alone: track the total volume of the geometry as the output quantity and shunt the finite element solution out of the process. This automates the identification of failed geometry regenerations for the specific DOE model you plan to use, so you can adjust the input parameter definitions or the CAD geometry to correct them.
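That geometry-only screening pass might look like the following sketch. Here `regenerate_volume()` is a hypothetical stand-in for your CAD tool's regeneration call, with a contrived conflict (a hole larger than the plate it sits in) so the failure path is exercised:

```python
# Geometry-only screening pass: try every corner of the design space,
# record the volume when regeneration succeeds, and log the failing
# parameter combination when it does not. regenerate_volume() is a
# hypothetical stand-in for a real CAD regeneration call.
from itertools import product

def regenerate_volume(width, hole_dia):
    """Stand-in CAD call: raise when the geometry cannot regenerate."""
    if hole_dia >= width:  # contrived conflict: hole larger than the plate
        raise ValueError("hole diameter exceeds plate width")
    return width * width * 5.0 - 3.14159 / 4.0 * hole_dia**2 * 5.0

widths = [20.0, 40.0]      # mm, parameter extremes
hole_dias = [10.0, 30.0]   # mm, parameter extremes

passed, failed = [], []
for w, d in product(widths, hole_dias):
    try:
        passed.append(((w, d), regenerate_volume(w, d)))
    except ValueError as err:
        failed.append(((w, d), str(err)))

print(f"{len(passed)} regenerated, {len(failed)} failed: {failed}")
```

The `failed` list is exactly what you want out of the test run: the specific input combinations whose parameter definitions or geometry need to be corrected before the real DOE starts.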

2) Finite Element Mesh Regeneration: Just because your geometry regenerated doesn’t mean the model will mesh successfully. Since we will need a mesh for the finite element solution, it is always a good idea to check this as well. As with the geometry, you can shunt the solution process and pull a mesh result, like the total number of nodes, to get a handle on the number of degrees of freedom in the model. This will give you confidence in your meshing procedure and help you anticipate the computing resources you will need.
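The node-count-to-resources translation is simple bookkeeping. A minimal sketch, assuming a 3D solid structural mesh (three translational DOF per node) and an illustrative solver budget; the node counts and budget here are made-up numbers:

```python
# Rough bookkeeping for the mesh check: given node counts pulled from a
# few trial meshes, estimate degrees of freedom and flag models that
# exceed a solver budget. Assumes a 3D solid structural mesh.
DOF_PER_NODE = 3          # ux, uy, uz for a 3D solid element
DOF_BUDGET = 1_500_000    # illustrative limit for available compute

# node counts pulled from trial mesh regenerations (illustrative values)
trial_node_counts = {"nominal": 180_000, "max_fillet": 240_000, "thin_wall": 620_000}

for name, nodes in trial_node_counts.items():
    dof = nodes * DOF_PER_NODE
    flag = "OVER BUDGET" if dof > DOF_BUDGET else "ok"
    print(f"{name}: {nodes} nodes -> {dof} DOF ({flag})")
```

A design point whose mesh blows past the budget (the thin-wall case above) is worth knowing about before the DOE queues up dozens of solutions.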

3) Boundary Condition Regeneration: If you are applying your loads and supports via geometry picking in the finite element model setup, your boundary conditions could end up in the wrong place when the geometry is regenerated. Tools like ANSYS Workbench are pretty good at maintaining consistent surface nomenclature (tags), but that depends on consistent topology across the design point geometry. If your tool supports sharing geometry groups between the CAD and finite element models, like ANSYS Named Selections, it is good practice to use these groups to define your boundary conditions in place of picking. With ANSYS, these can also be used in mesh definition.

4) Connections: If your model includes joint and contact connections, you need to be certain that they are regenerated in the correct locations. As with mesh controls and boundary conditions, using geometry groups generated at the CAD level will make the connection generation more robust. If your finite element program supports automatic contact generation, you should deactivate it so that contact pairs are not created or relocated unpredictably from one design point to the next.

5) Finite Element Solution: The previous steps will not do much good if the model fails to solve. Always run a few interactive solutions where you anticipate the solution may be more difficult to achieve, such as at load magnitude maximums or where large initial contact gaps or penetrations are possible. The interactive runs also give you the opportunity to test your solution settings, like time step size. You can also evaluate your solution output frequency and check result file sizes. You do not want to run out of storage space partway through your DOE.
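The storage check is worth doing explicitly. A back-of-envelope sketch, using an illustrative result file size observed from an interactive trial run (all numbers here are made up):

```python
# Back-of-envelope storage check for the DOE, using the result file size
# observed in an interactive trial run. All values are illustrative.
result_file_gb = 2.4        # size of one design point's result file
design_points = 45          # number of DOE design points
keep_full_results_for = 5   # points where the full result set is retained

# If only scalar output quantities are kept per point, storage is dominated
# by the handful of full result files you choose to retain.
full_storage_gb = keep_full_results_for * result_file_gb
worst_case_gb = design_points * result_file_gb  # if every file were kept

print(f"retained: {full_storage_gb:.1f} GB, worst case: {worst_case_gb:.1f} GB")
```

Comparing the worst case against available disk space before launching the DOE is much cheaper than discovering the shortfall at design point 30 of 45.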

6) Model Output: Since many programs reuse the same finite element model to generate the DOE design points, only result values for each output quantity are stored; when the next design point is run, the previous results are deleted. When this is the case, it is once again advisable to use geometry groups generated at the CAD level to define the result locations. The ANSYS program also allows you to store the full model for select design points. You will need to identify these points before the DOE begins, but they can be used to evaluate a sample of the DOE and confirm that the results are being extracted as you intended.
Following all of the above steps may seem like a lot of work, but it will be worth it when your DOE returns all of the required design point results. With a complete set of design point data you can construct surrogate models to rapidly predict the system response to input combinations that fall within the design space, like the response surface model shown in Figure 1 above.
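A minimal response-surface sketch of that idea: fit a quadratic surrogate to completed design-point data with NumPy least squares, then evaluate it at a new input combination inside the design space. The data values are made up for illustration, and this is just one simple surrogate form among many your DOE tool may offer.

```python
# Fit a full quadratic response surface to design-point data and use it
# for fast predictions without new finite element solves. Data values
# are illustrative only.
import numpy as np

# design-point inputs (x1, x2) and the output measured at each point
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [0.5, 0.5], [0.25, 0.5]])
y = np.array([1.0, 2.0, 3.0, 5.0, 2.8, 2.4])

def quad_features(x1, x2):
    # full quadratic basis: 1, x1, x2, x1*x2, x1^2, x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

A = quad_features(X[:, 0], X[:, 1])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    """Predict the response at any point inside the design space."""
    return quad_features(np.atleast_1d(x1), np.atleast_1d(x2)) @ coeffs

print(surrogate(0.3, 0.6))  # fast prediction, no new FE solve needed
```

Each surrogate evaluation is a handful of multiplications, which is what makes sweeping thousands of input combinations practical once the design points are in hand.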
The surrogate models also enable you to quickly and effectively run optimization studies for varying goals and constraint conditions. The optimization output can consist of candidate designs for you to choose from, like the output table shown in Figure 2.
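Producing such a candidate table can be as simple as screening surrogate predictions against the constraints and ranking the survivors by the goal. A sketch with made-up predicted values, minimizing mass under a hypothetical stress limit:

```python
# Turn surrogate predictions into a candidate table: screen a grid of
# input combinations against a constraint and rank the feasible points
# by the optimization goal (minimum mass here). Values are illustrative.
predictions = [
    {"thickness": 2.0, "width": 30.0, "mass": 1.10, "stress": 410.0},
    {"thickness": 2.5, "width": 30.0, "mass": 1.31, "stress": 340.0},
    {"thickness": 2.5, "width": 35.0, "mass": 1.48, "stress": 295.0},
    {"thickness": 3.0, "width": 30.0, "mass": 1.52, "stress": 280.0},
    {"thickness": 3.0, "width": 35.0, "mass": 1.72, "stress": 240.0},
]
STRESS_LIMIT = 350.0  # constraint: predicted stress must stay below this

feasible = [p for p in predictions if p["stress"] < STRESS_LIMIT]
candidates = sorted(feasible, key=lambda p: p["mass"])[:3]  # top 3 by goal

for c in candidates:
    print(c)
```

Because the candidates come from the surrogate, the usual practice is to verify the short list with full finite element solutions before committing to a design.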
Figure 2 – Candidate Designs Corresponding to an Optimization Goal and a Unique Set of Constraints

The development of the parametric model may seem like a lot of work up front, but at the end of the process you’ll have options to choose from. You will also have a more detailed understanding of the robustness of your design than you could have gained in the same amount of time by manual trial and error. Does anyone out there in optimization land have any other tips to share on making your models more robust? We would welcome a conversation.