#### Mesh convergence with examples

Deciding on the correct mesh size is difficult. The most reliable approach is to perform a mesh convergence check, and in this post I will show you an example of how to do it.

27 July 2020 · 7 minute read

In this part of the *shell design course*, we will use *Linear Bifurcation Analysis* (LBA) to estimate shell capacity. The main part of this post, however, is about choosing the correct mesh size, and about a way to estimate at which mesh size the accuracy of the solution becomes acceptable.

I wrote about the analysis itself in this post. In shell design, I use it to estimate how much capacity the shell has before instability failure. The outcome of this analysis is definitely not the correct capacity, but it gives certain insight into the model behavior.

**When setting up the analysis, the following things should be considered:**

- In different preprocessors this analysis can be called differently, but the most common names are *Buckling*, *Linear Buckling*, and *LBA*.
- When setting up the analysis, you should specify how many *eigenvalues* you wish to obtain. In my experience, 10 is a good number for shells.
- Some solvers do not have this capability, but it is always good to restrain negative load multipliers. This means that the solver cannot change the direction of the applied loads.

Below are the first few eigenvalues I obtained for the shell model used in this course; please note how edgy the deformation shapes look due to the coarse mesh. At the end of this post, a 2-minute clip shows how I set up and performed the *linear buckling* analysis in Femap.

As you can see, each *eigenvalue* comes with a *load factor*. This factor tells the user by how much the load must be scaled for the model (in ideal, linear conditions) to fail due to the instability shown in the deformation plot (more on this here). Since we applied a linear load of 50 kN/m at the top of the shell, and the first eigenvalue is 1.0481, the first ideal, linear instability will occur at a load of 50 kN/m × 1.0481 = 52.4 kN/m. For now it may seem that our shell has sufficient capacity; however, experience shows that the “real” shell capacity will be much smaller, since here we did not take geometric nonlinearity or shell imperfections into account. Also, in this case the mesh is very coarse, and this greatly influences the outcome, as shown below.
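The load-factor arithmetic can be checked in a couple of lines (values taken directly from the example above):

```python
# First critical load from an LBA: the applied load times the first eigenvalue.
applied_load = 50.0        # kN/m, linear load applied at the top of the shell
first_eigenvalue = 1.0481  # load factor of the first buckling mode

critical_load = applied_load * first_eigenvalue
print(f"First linear buckling load: {critical_load:.1f} kN/m")  # 52.4 kN/m
```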

As you can see above, the deformations look crude and poorly defined. A mesh density analysis is recommended for every simulation, so we will do one here as well. In this example, I am using R4 elements (rectangular, with 4 nodes each).

**In order to establish a suitable finite element size:**

- Perform the chosen analysis for several different mesh sizes.
- Notice where high deformations or high stresses occur; it may be worth refining the mesh in those regions.
- Collect data from the analysis of each mesh: the outcome, the number of nodes in the model, and the computing time.
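The bookkeeping for these steps can be sketched in a few lines. Here `run_lba` is a hypothetical placeholder for an actual solver run (Femap, or any FE package); it returns made-up numbers only so that the loop is runnable:

```python
def run_lba(element_size_mm):
    """Hypothetical stand-in for a real solver call. Returns synthetic
    (first_eigenvalue, node_count, cpu_time_s): finer meshes give more
    nodes and an eigenvalue closer to the converged answer."""
    nodes = int(4_000_000 / element_size_mm**2)
    return 0.6947 + 120.0 / nodes, nodes, nodes / 500.0

results = []
for size in [100, 50, 25, 10]:  # assumed trial element sizes, mm
    eigenvalue, node_count, cpu_time = run_lba(size)
    results.append((size, node_count, eigenvalue, cpu_time))

for size, n, ev, t in results:
    print(f"{size:>4} mm: {n:>6} nodes, eigenvalue {ev:.4f}, {t:.0f} s")
```

In a real study, each row of this table would come from a separate solver run with the mesh regenerated at that element size.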

For our shell, I performed several analyses with different element sizes. In the drawing below you can see the outcome for a few selected meshes. Please notice that for the biggest elements the eigenmode shape is actually different from the one obtained with more refined meshes.

Of course, the biggest problem is deciding when a mesh is “refined enough”, since decreasing the finite element size leads to rapidly growing computing times. A balance between computing time and accuracy should be sought: we may be more than doubling the computing time to improve accuracy by 1%, which seems unreasonable.

Usually, when mesh density is discussed in tutorials, problems with known analytical solutions are solved: this way it is easy to verify how big an error we have made. Unfortunately, in almost all analyses performed for commercial purposes, the exact solution is unknown. In those cases the “typical” approach is based on the chart below:

Reducing the finite element size leads to more elements, which in turn leads to more nodes in the model. If we build a chart showing how the outcome (in this case the *first eigenvalue*) depends on the node count, this chart will asymptotically approach the correct answer (in this case 0.6947). However, an exact estimate of the asymptotic value may be problematic or time-consuming. There is a simple trick that makes things easier: instead of the node count on the horizontal axis, use 1 / node count. This way the correct answer lies where the horizontal axis value reaches 0. This means that if we approximate our curve with an equation (in most cases a linear approximation is sufficient, and Excel does this automatically), it is very easy to calculate the “y” value at x = 0.
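The extrapolation trick is easy to reproduce outside of Excel with a least-squares line fit. In the sketch below the node counts and eigenvalues are illustrative placeholders, not the exact values from this model; only the extrapolated answer, 0.6947, is the value quoted in this post:

```python
import numpy as np

# Illustrative data: node counts of four meshes and the first eigenvalue
# each one produced (made up for demonstration purposes).
nodes = np.array([2_000, 8_000, 32_000, 128_000])
eigenvalues = np.array([0.7547, 0.7097, 0.69845, 0.6956375])

x = 1.0 / nodes  # horizontal axis: 1 / node count, so "exact" sits at x = 0
slope, intercept = np.polyfit(x, eigenvalues, 1)  # linear fit, as in Excel

print(f"Extrapolated eigenvalue at x = 0: {intercept:.4f}")  # 0.6947
```

The intercept of the fitted line is the estimate of the converged answer, with no need to actually run an impossibly fine mesh.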

Note that the obtained curve is almost linear, which is usually the case for most models. From the equation provided by Excel, it is easy to derive the correct answer at x = 0. Since we now know the correct answer, we can calculate how big an error was made in the result for each finite element size. Below is a chart showing the dependence between error and computing time, and between error and finite element size:
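With the extrapolated answer in hand, the per-mesh error is a one-liner. The eigenvalues below are illustrative placeholders, chosen so that the 10 mm mesh lands near the 5% error discussed in this example; only 0.6947 is the value quoted in the post:

```python
# Relative error of each mesh, measured against the extrapolated "correct"
# answer. The per-mesh eigenvalues are illustrative, not from this model.
exact = 0.6947
mesh_eigenvalues = {"50 mm": 0.8010, "25 mm": 0.7560, "10 mm": 0.7294}

for size, ev in mesh_eigenvalues.items():
    error = abs(ev - exact) / exact * 100.0
    print(f"{size}: eigenvalue {ev:.4f}, error {error:.1f}%")
```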

From the above chart it is easy to notice that after a certain point, any significant increase in accuracy “costs” an enormous amount of additional computing time. At this stage, we have all the information needed for a wise choice of mesh size. If I know how accurate the result has to be, I know how big the elements should be; or, if many similar analyses will be made for several models, I can decide which accuracy level is time-efficient. Notice that this chart asymptotically approaches 0%: if you have followed all the steps described here and your chart does not go toward 0, chances are you used elements that are too big. In this case, I decided to use the mesh with a computing time of 68 s (a finite element size of 10 mm). This gives a 5% error in the result, and for demonstration purposes I am fine with that accuracy.

To save computing time, we can decide to decrease the finite element size only in areas of the model where big deformations, stresses, or instabilities take place. This of course assumes that we can predict where all of those areas are. In that case, it is better to build the charts described in this post using the node count in the refined area instead of the node count in the whole model. For this example it makes little sense for now, since the computing time is only 100 s, but there will be a second part about mesh optimization where we will look deeper into this topic.

- *Linear Bifurcation Analysis* in shell design is used as a tool to estimate capacity against instability.
- The real capacity against instability in shells is almost always significantly lower than the outcome of LBA.
- A mesh that is too coarse can lead to results with very big errors.
- Mesh density analysis helps in deciding how refined a mesh should be used in the analysis in order to obtain results with satisfactory accuracy.
- Reducing element sizes in places where big deformations, stresses, or instabilities take place allows for greatly increased accuracy without a great expense in computing time.

If you like the course make sure to subscribe so you don’t miss the following parts.

Finally, here is a recording of me setting up the *Linear Buckling Analysis* in Femap. This is a continuation of the clip from part 1 of the course.

That is all for this week. Have a good one!

If you like FEA, you can learn some useful things in my special **free FEA course** for my subscribers. You can get it below.
