#### How to choose the correct mesh size

Mesh size will depend on the accuracy you need... and also on the computing time you can afford! Learn how to choose it!

12 December 2022 · 8 minutes read

Mesh size is one of the most common problems in FEA. There is a fine line here: bigger elements give bad results, but smaller elements make computing take so long that you don’t get the results at all. You never really know where exactly your mesh size falls on this scale. Learn how to choose the correct mesh size and estimate at which size the accuracy of the solution becomes acceptable.

As an example, I will use a simple discretely supported shell. As an “outcome” I will use the critical load multiplier of the first eigenvalue.

It’s perhaps worth mentioning that the “outcome” can be anything that interests you. If you want to know a certain stress component at a certain node, or the displacement of a selected DOF, that is ok. Whatever you choose goes, as long as it is actually influenced by the mesh size! I chose the multiplier simply because it is easy to obtain, and linear buckling computes very fast 🙂

You can see the model I used below. Notice how the deformation shape and outcomes change with mesh refinement. I should write that a mesh refinement check (often called a mesh convergence study) should be made for every problem. This is somewhat true, but let’s face it: most likely you won’t do it for every problem… it simply takes a lot of time! I would suggest you do such a study for some of the most important projects/parts, and based on that experience you can extrapolate the knowledge to similar problems.

In this example, I am using QUAD4 elements (standard 4-node quadrilateral elements, sometimes referred to as “S4”).

**In order to establish a suitable finite element size:**

- Perform the chosen analysis for several different mesh sizes.
- Notice where high deformations or high stresses occur; it may be worth refining the mesh in those regions.
- Collect data from the analysis of each mesh: outcome, number of nodes in the model, and computing time.
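The steps above can be sketched as a simple study loop. `run_analysis` is a hypothetical placeholder for your FEA tool's API; here it is stubbed with a synthetic converging result so the script runs on its own — the node counts and outcome values are illustrative, not from the article's model:

```python
import time

def run_analysis(element_size):
    """Hypothetical stand-in for a call to your FEA solver.
    Stubbed to mimic a converging linear-buckling multiplier."""
    nodes = round(1000.0 / element_size + 1) ** 2   # nodes on a square plate (assumed 1000 mm)
    outcome = 0.6947 - 23.4 / nodes                 # synthetic convergence curve
    return nodes, outcome

results = []
for size in [100.0, 50.0, 25.0, 12.5]:              # element sizes to try (mm)
    start = time.perf_counter()
    nodes, outcome = run_analysis(size)
    elapsed = time.perf_counter() - start
    results.append((size, nodes, outcome, elapsed))  # the data you collect per mesh
    print(f"size={size:6.1f}  nodes={nodes:6d}  outcome={outcome:.4f}")
```

In a real study the loop body would call your solver (or you would run the models manually) — the point is simply to record outcome, node count, and computing time for each mesh.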

For our shell, I have performed several analyses for different element sizes. On the drawing above you can see the outcome for a few selected meshes. Notice that for the biggest elements the actual *eigenvalue shape* is different than in the models with more refined meshes.

Usually a smaller mesh means more accurate results, but the computing time becomes significant as well.

You should search for a balance between computing time and accuracy. In some instances you can more than double the computing time to improve accuracy by 1% – to me, that seems unreasonable. Knowing your problem, you will know best what makes sense and what doesn’t, based on the accuracy you need.

When mesh density is discussed in tutorials, various problems with known analytical solutions are solved. You can then easily compare the FEA outcome to the known solution – you get an error value without trouble. This is a fantastic approach that can teach you a lot, but unfortunately in reality you don’t know the correct answer… so you can’t really do that, can you?

Unfortunately in almost all analyses performed for commercial or scientific purposes, the solution of the problem is unknown. In those cases, the “typical” approach doesn’t work. Instead, you will have to “guess” the correct answer based on the models with different meshes you have done. This is done with the following chart:

Reduction of the finite element size leads to more elements, which in turn leads to more nodes in the model. If we build a chart showing the dependence of the outcome (in this case the *first eigenvalue*) on the node count in the model, this chart asymptotically approaches the correct answer (in this case 0.6947). Node count is only one of the possible parameters here. Since I simply decreased the element size in the entire model, it made sense. You can just as easily use the number of elements across the width of your part, or the size of the “typical” element. If you refine the mesh only in a small area (e.g. where a stress concentration is), you can use the node count in that area instead of the entire model, etc.

Whatever metric you use will depend on the problem you are solving. Node count is the most popular one, simply because it is the easiest to obtain 🙂

The exact estimation of the asymptotic value on the chart above may be problematic or time-consuming. There is a simple trick that makes things easier to calculate: instead of node count on the horizontal axis, let us use 1 / node count. This way the correct answer will be where the horizontal axis value reaches 0. This means that if we approximate our curve with an equation (in most cases a linear approximation is sufficient, and Excel does this automatically), it is very easy to calculate the “y” value for x = 0.
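The trick above can be sketched in a few lines of Python instead of Excel. The data points below are illustrative, shaped to converge toward the article's 0.6947; the fit is an ordinary least-squares line, which is exactly what Excel's linear trendline computes:

```python
# Mesh-convergence data: (node count, first eigenvalue multiplier).
# Illustrative values, not taken from the article's actual model.
data = [(500, 0.6479), (2000, 0.6830), (8000, 0.6918), (32000, 0.6940)]

xs = [1.0 / n for n, _ in data]   # horizontal axis: 1 / node count
ys = [y for _, y in data]

# Least-squares fit of y = a + b*x (same as Excel's linear trendline).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx   # the value at x = 0, i.e. the extrapolated "correct" answer

print(f"extrapolated answer: {a:.4f}")
```

The intercept `a` is the asymptotic value you are after: as the node count grows, 1 / node count goes to 0, so evaluating the fitted line at x = 0 gives the extrapolated result.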

Note that the obtained curve is almost linear, which is the case in most models. From the equation provided by Excel, it is easy to derive the correct answer at x = 0. At this stage, since we know the correct answer, we can calculate how big the errors were in the estimation of results for each finite element size. Below is a chart showing the dependence between error and computing time, and between error and finite element size:
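Once the asymptotic value has been extrapolated (0.6947 in the article's example), the error of each mesh is just its relative deviation from that value. A minimal sketch, with illustrative outcomes and computing times that are not measurements from the article:

```python
correct = 0.6947  # "correct" answer extrapolated from the 1 / node count chart

# (first eigenvalue, computing time in seconds) for each mesh, coarse to fine.
# Both columns are made-up illustrative numbers.
meshes = [(0.6479, 12.0), (0.6830, 45.0), (0.6918, 210.0), (0.6940, 980.0)]

errors = [abs(outcome - correct) / correct * 100 for outcome, _ in meshes]
for (outcome, seconds), err in zip(meshes, errors):
    print(f"outcome={outcome:.4f}  error={err:5.2f}%  time={seconds:6.1f} s")
```

Plotting `errors` against the computing times gives exactly the error-vs-time chart discussed below: the error shrinks with each refinement, while the time per run grows much faster.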

From the above chart it is easy to notice that after a certain point, any significant increase in accuracy will “cost” an enormous amount of additional computing time. When I am asked to do a mesh convergence check, those 2 charts are the real answer (you can easily replace finite element size with node count if you like). Now you know the error each mesh size gives and the computing time it costs 🙂

Now you know how accurate the results you will get with a given mesh are, and how much time computing will take with such an approach. Making a decision is always problematic. I usually think about how sure I am about the loads and boundary conditions – usually, those are just “estimated” and then increased “just to be sure”. When that is the case, a mistake of a few percent won’t do any harm.

Time is also a factor to consider here. If you have 100 similar models to calculate, doubling the computing time will take A LOT of time… just something to consider.

Notice that this chart asymptotically approaches 0%… if you have made all the steps described here and your chart does not go toward 0, chances are you used elements that were too big. Just know that if you are not sure, it is wise to make one model with “extremely” small elements – you know… just in case.

When you first do a mesh convergence study you will realize that great accuracy comes with significant computing time. That is true, but you are not defenseless. Look at the similar shell below. The coarse mesh gives bad results for sure, but the very fine mesh takes a lot of time to compute. Knowing that the stability failure occurs at the bottom, I have made a third model (on the right) that has a very fine mesh where it is important, and a coarse mesh where “nothing happens”.

This way I got an accurate outcome without incredibly long computing time. Of course, there are limits, since in some problems you cannot be sure where failure will occur, etc. Regardless, it is always a good idea to make a coarse mesh, check where things go south, and then refine the mesh in those “hot regions” rather than in the entire model. This does not work in all cases, but it works in some 🙂

- A mesh that is too coarse can lead to results with very big errors.
- Mesh density analysis helps you decide how refined a mesh should be used in the analysis in order to obtain results with satisfactory accuracy.
- Reducing element sizes in places where big deformations/stresses/instabilities take place allows for greatly increased accuracy without a great expense in computing time.

This is one of the topics I teach in my **free FEA essentials course**. Subscribe below to get it!

If you have a spare 15 seconds, write a comment with your thoughts on the matter or any questions you might have. I have a good history of replying to each and every comment!

