In this part of the shell design course we will use *Linear Bifurcation Analysis* (LBA) to estimate shell capacity. The main part of this post, however, is about choosing the correct mesh size, and it presents a way to estimate at which mesh size the accuracy of the solution is acceptable for the analysis being performed.

#### Setting up Linear Bifurcation Analysis

I wrote about the analysis itself in this post. In shell design I use it to estimate how much capacity the shell has before it fails due to instability. The outcome of this analysis is definitely not the correct capacity, but it gives useful insight into the model behaviour.

In setting up the analysis, the following things should be considered:

- In different preprocessors this analysis can be called different names, but the most common are: Buckling, Linear Buckling, LBA.
- When setting up the analysis you should specify how many *eigenvalues* you wish to obtain. In my experience 10 is a nice number for shells.
- Some solvers do not have this capability, but it is always nice to restrain negative load multipliers. This means that the solver cannot change the direction of the provided loads.
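
For those curious what the solver actually does under the hood: LBA boils down to a generalized eigenvalue problem built from the elastic stiffness matrix K and the geometric stiffness matrix Kg assembled at the reference load. Below is a minimal Python sketch with made-up 2-DOF matrices; the values are purely illustrative, not taken from the shell model in this post:

```python
import numpy as np
from scipy.linalg import eigh

# Tiny illustrative matrices (2 DOF) -- hypothetical values,
# not from the Femap model used in this course.
K = np.array([[ 4.0, -1.0],      # elastic stiffness matrix
              [-1.0,  2.0]])
Kg = np.array([[-0.05,  0.0],    # geometric stiffness at the reference load
               [ 0.0,  -0.08]])  # (negative: compression softens the structure)

# Buckling condition: det(K + lambda * Kg) = 0, i.e. the generalized
# eigenproblem  K * phi = lambda * (-Kg) * phi.
lam, phi = eigh(K, -Kg)

# Keep only positive multipliers -- this is the "restrain negative
# load multipliers" option mentioned above: the solver is not allowed
# to reverse the direction of the applied loads.
lam_pos = np.sort(lam[lam > 0])
print("lowest load factors:", lam_pos[:10])
```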

Below are the first few *eigenvalues* I have obtained for the shell model used in this course. Please note how edgy the deformation shape looks due to the coarse mesh. At the end of this post a 2-minute clip is provided showing how I set up and performed the *linear buckling* analysis in Femap.

As you can see, each *eigenvalue* comes with a *load factor*. This factor tells the user by how much the load must be multiplied for the model (in ideal, linear conditions) to fail due to the instability shown in the corresponding deformation shape (more on this here). Since we applied a linear load of 50 kN/m at the top of the shell, and the first *eigenvalue* is 1.0481, the first ideal, linear instability will occur in the model at a load of 50 kN/m × 1.0481 = 52.4 kN/m. For now it may seem that our shell has sufficient capacity; however, experience shows that the “real” shell capacity will be much smaller, since here we did not take into account geometric nonlinearity or shell imperfections. Also, in this case the mesh is very coarse, and this greatly influences the outcome, as shown below.
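
Written out as a formula (with $\lambda_1$ denoting the first load factor):

$$q_{cr} = \lambda_1 \cdot q_{ref} = 1.0481 \times 50\ \text{kN/m} \approx 52.4\ \text{kN/m}$$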

#### Choosing a suitable mesh size

As you can see above, the deformations seem really crude and not very well defined. A mesh density analysis is recommended for every simulation, so we will do it here as well. In this example I am using R4 elements (rectangular, with 4 nodes each).

In order to establish a suitable finite element size:

- Perform the chosen analysis for several different mesh sizes.
- Notice where high deformations or high stresses occur; it may be worth refining the mesh in those regions.
- Collect data from the analysis of each mesh: the outcome, the number of nodes in the model, and the computing time (a sketch of such a study loop is shown below).
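
If your solver can be scripted, this bookkeeping is easy to automate. A minimal sketch, assuming a hypothetical `run_lba(mesh_size)` helper that meshes the model, runs the buckling analysis, and returns the first eigenvalue together with the node count (both the helper and the mesh sizes below are made up for illustration):

```python
import time

mesh_sizes = [100, 50, 25, 10, 5]  # element sizes in mm (illustrative values)
results = []

for size in mesh_sizes:
    t0 = time.perf_counter()
    # run_lba() is a hypothetical wrapper around your solver's scripting API;
    # it should return the first eigenvalue and the model's node count.
    eigenvalue, node_count = run_lba(mesh_size=size)
    elapsed = time.perf_counter() - t0
    results.append((size, node_count, eigenvalue, elapsed))
    print(f"size={size} mm  nodes={node_count}  "
          f"eigenvalue={eigenvalue:.4f}  time={elapsed:.1f} s")
```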

For our shell I have performed several analyses for different element sizes. In the drawing below you can see the outcome for a few selected meshes. Please notice that for the biggest elements the actual *eigenvalue shape* is different than in the models with a more refined mesh.

Of course the biggest problem is how to decide when a mesh is “refined enough”, since decreasing the finite element size leads to vast computing times. A balance between computing time and accuracy should also be sought: we may be more than doubling the computing time to improve accuracy by 1%, which seems unreasonable.

Usually when mesh density is discussed in tutorials, problems with a known analytical solution are solved: this way it is easy to verify how big the calculation errors are. Unfortunately, in almost all analyses performed for commercial purposes the solution of the problem is unknown. In those cases a “typical” approach is based on the chart below:

Reducing the finite element size leads to more elements, which in turn leads to more nodes in the model. If we build a chart showing how the outcome (in this case the *first eigenvalue*) depends on the node count in the model, this chart will asymptotically approach the correct answer (in this case 0.6947). However, an exact estimation of the asymptotic value may be problematic or time consuming. There is a simple trick to make things easier to calculate: instead of the node count on the horizontal axis, let us use 1 / node count. This way the correct answer will be where the horizontal axis value reaches 0. This means that if we approximate our curve with an equation (in most cases a linear approximation is sufficient, and Excel does this automatically), it is very easy to calculate the “y” value for x = 0.
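
The same extrapolation can be done in a few lines of Python instead of Excel. A minimal sketch; the node counts and eigenvalues below are made-up placeholders (constructed to be roughly linear in 1/N), not the actual values from this model:

```python
import numpy as np

# Hypothetical convergence data: (node count, first eigenvalue).
nodes = np.array([400, 1600, 6400, 25600, 102400])
eigenvalues = np.array([1.070, 0.789, 0.718, 0.701, 0.696])

# Put eigenvalue against 1/node_count and fit a straight line.
x = 1.0 / nodes
slope, intercept = np.polyfit(x, eigenvalues, 1)

# At x = 0 (infinitely many nodes) the fit gives the extrapolated
# "correct" answer -- the asymptote of the convergence curve.
print(f"extrapolated eigenvalue at x = 0: {intercept:.4f}")
```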

Note that the obtained curve is almost linear, which is usually the case in most models. From the equation provided by Excel it is easy to derive the correct answer at x = 0. At this stage, since we know the correct answer, we can calculate how big an error was made in the estimation of the result for each finite element size. Below is a chart showing the dependence between error and computing time, and between error and finite element size:
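
Once the extrapolated value is known, the relative error of each mesh follows directly. A short sketch reusing the same placeholder data as above:

```python
import numpy as np

# Placeholder convergence data from the previous snippet.
nodes = np.array([400, 1600, 6400, 25600, 102400])
eigenvalues = np.array([1.070, 0.789, 0.718, 0.701, 0.696])
extrapolated = np.polyfit(1.0 / nodes, eigenvalues, 1)[1]

# Relative error of each mesh versus the extrapolated answer.
errors = np.abs(eigenvalues - extrapolated) / extrapolated * 100.0
for n, err in zip(nodes, errors):
    print(f"nodes={n:6d}  error={err:5.1f} %")
```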

From the above chart it is easy to notice that, past a certain point, any significant increase in accuracy will “cost” an enormous amount of additional computing time. At this stage, all the information needed for a wise choice of mesh size is known. If I know how accurate the result has to be, I know how big the elements should be; or, if many similar analyses will be made for several models, I can decide which accuracy level is time-efficient. Notice that this chart asymptotically approaches 0%. If you have made all the steps described here and your chart does not go toward 0, chances are you used elements that were too big. In this case I have decided to use the mesh with a computing time of 68 s (the finite element size is 10 mm). This gives a 5% error in the result estimation, and for demonstration purposes I am fine with that accuracy.

To save computing time, we can decide to decrease the finite element size only in areas of the model where big deformations, stresses or instabilities take place. This of course assumes that we can predict where all of those areas are. In that case it is better to make the charts described in this post using the node count in the area where we refine the mesh, instead of the node count in the whole model. For this example it makes little sense for now, since the computing time is only 100 s, but there will be a second part about mesh optimization where we will look deeper into this topic.

#### Summary

- *Linear Bifurcation Analysis* in shell design is used as a tool to estimate capacity due to instability
- Real capacity due to instability in shells is almost always significantly lower than the outcome of LBA
- A mesh that is too coarse can lead to results with very big errors
- Mesh density analysis helps in deciding how refined a mesh should be used in an analysis in order to obtain results with satisfactory accuracy
- Reducing element sizes in places where big deformations / stresses / instabilities take place allows for greatly increased accuracy without a great expense in computing time

If you like the course, make sure to subscribe so you don’t miss the following parts.

Finally, here is a recording of me setting up the *Linear Buckling Analysis* in Femap. This is a continuation of the clip from part 1 of the course.

That is all for this week. Have a good one!

Łukasz
