Tuesday, October 26, 2010

The dreaded question: Is the solution converged?

This is perhaps the most frequently asked and most important, yet dreaded, question in CFD analysis. Some people will say “yes, the residuals were XXXX” just to dismiss the question. But is the answer that simple? Unfortunately, it is not. Residuals are just one measure we should use in determining whether a solution is converged. The definition of convergence, in a mathematical sense, is the approach towards a definite point. Basically, we are trying to get our numerical solution sufficiently close (accurate) to a definite point (the exact solution). To understand what convergence means in practice, we must step back and look at what we are trying to accomplish in modeling.

Overview

The goal of CFD modeling is to obtain a virtual flow field that represents the physical situation. In traditional CFD modeling, the first step in this process is to create a grid that represents the physical domain. Once the mesh has been created, the boundary conditions and other physics models are applied to complete the computational model. The computational model is then solved. As analysts, we must think of the convergence ramifications when executing both the meshing and solving steps.

Meshing Convergence

Typically, when people speak of meshing in regard to convergence, they mean refining the mesh in certain areas to reduce the residuals there. This is a commonly used technique, but it is not what we will be discussing here.

However, when you consider that we are representing the continuous flow field by discrete approximations, we must ensure that our grid is sufficiently fine that it approximates, to sufficient accuracy, the physical flow field. Typically this is done by investigating successively finer meshes to show that the solution converges to a fixed limit. Many people refer to this as “grid independence,” but in reality it is convergence of the discrete computational model to the continuous physical system.
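To make this concrete, here is a minimal Python sketch (my own illustration, not from the original post; every number is made up) of how one might estimate the observed order of convergence and a Richardson-extrapolated limit from a quantity of interest computed on three successively refined meshes:

```python
import math

# A scalar quantity of interest (e.g. a drag coefficient) from three
# systematically refined meshes; the values are purely illustrative.
f_coarse, f_medium, f_fine = 0.3142, 0.3205, 0.3221
r = 2.0  # constant grid refinement ratio between successive meshes

# Observed order of convergence implied by the three solutions.
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

# Richardson extrapolation toward the zero-spacing (continuum) limit.
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

print(f"observed order p   = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}")
```

If the extrapolated value sits close to the fine-mesh value, that is evidence the discrete model has converged to the continuous one for that quantity.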

Solver Convergence

When monitoring a solver run, people generally just examine the residuals. Residuals are a measure of the change in the solution between iterations as it tends towards the discretized solution. Different solvers specify different levels the residuals must reach for a solution to be considered “converged.” However, these are just general rules of thumb. The residuals can fall to a certain level while the flow field has not yet reached an iteration-independent solution. Conversely, the residuals could level off above the specified tolerance while the quantities of interest have reached a fixed solution (although we would still have to check the mesh convergence of that solution). Because of this, it is typically recommended to monitor several relevant quantities during the solution phase and make sure these converge in addition to the residuals.
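As a hedged illustration of what “iteration independence” of a monitored quantity might look like in practice, here is a small Python sketch; the window size, tolerance, and data are all assumptions for demonstration only:

```python
def monitor_converged(history, window=50, tol=1e-4):
    """Return True once the monitored quantity has stopped changing:
    its relative spread over the last `window` iterations is below `tol`."""
    if len(history) < window:
        return False
    recent = history[-window:]
    spread = max(recent) - min(recent)
    scale = max(abs(x) for x in recent) or 1.0  # guard against all-zero data
    return spread / scale < tol

# Illustrative use: a pressure drop recorded each iteration by the solver,
# modeled here as a decaying startup transient on top of a settled value.
pressure_drop = [101.3 + 5.0 * 0.9**n for n in range(300)]
print(monitor_converged(pressure_drop))  # True once the transient has decayed
```

The residuals and a check like this on each quantity of interest together give a much stronger convergence statement than residuals alone.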

Of course the question that naturally arises is: what variables should we monitor? This is where our years of schooling and experience come into play. In general we want to monitor variables that are relevant to the problem we are solving. For instance, for many problems we are concerned with the pressure drop through the domain, so we should monitor this quantity as the solution progresses. If we were investigating the flow over an airplane, it would be useful to monitor the lift and drag. If we were modeling a heat exchanger, it would be useful to monitor the temperatures leaving the domain and the heat flux through the various surfaces. So there is no single monitor that is sufficient to determine convergence for all problems. We must use our engineering judgment to determine the most useful quantities to monitor.

Summary

In summary, answering whether a solution is converged is complex. It is a question that, as modelers, we must always keep in the back of our minds. When we are developing a concept for the model, we should be thinking about the variables we should monitor. When developing a mesh, we should be thinking about developing another to test mesh convergence (grid independence). When crunching the numbers, we want to watch both the residuals and the monitor points we have created. When the first results are displayed, we should be looking for unphysical discontinuities or other phenomena that would indicate poor convergence.

This discussion is not meant to be the end-all, be-all in regards to convergence. In fact, entire books have been written about the subject, so I will not claim to have described it completely in this blog. I just wanted to present some questions that we, as modelers, should always be considering when developing and solving our models. We should understand the true nature of convergence and feel confident when asked whether our solution has converged, not simply rattle off the scripted response that the residuals were below some arbitrary value!

Thursday, August 12, 2010

Get started with Entry-Level HPC and ANSYS

Ansys Mechanical has supported and been tightly integrated with High Performance Computing (HPC) for many years and many versions. However, I've seen quite some hesitation from users and companies to introduce HPC into their engineering simulation environment. The reasons generally come down to cost and complexity.

True, setting up a central cluster with many nodes is costly. The complexity of configuring it, optimizing it (for Ansys and the array of other applications that will share it), and maintaining it can be daunting.

However, I've worked with a large number of customers recently getting into "entry-level HPC". Even though our primary workstations are getting more powerful (6-core processors are here, 12-core processors are coming soon) and we're able to run larger jobs on them, there's still a need to offload the job to an HPC environment. Let's face it - we've all closed our email, web browsers, and office apps during those painfully slow solves to try to free up just a few more MB of RAM, hoping the run won't crash.

What I consider "entry-level" is having, at minimum, a 2nd workstation (or server). It can be high or low end: expensive with lots of CPU/RAM/disk space, or inexpensive (assembled from all those spare components lying around). The idea here is to try HPC - a simple setup to send a solve over to a 2nd computer. If your 2nd computer has the compute power for high-end analysis, great! If not, get something set up to at least introduce yourself to the concepts and see how it works.

I recently worked with a customer who purchased a very high-end single-node compute server. Why just one? Simple answer... cost constraints. We were able to set it up, get the Ansys users up and running and accustomed to HPC (and adopting its advantages), and then, when the budget allowed, the customer added compute nodes to the existing cluster.

Off-loading the solve can be done a number of ways, including Remote Solve Manager (RSM), batch scripts, Distributed Ansys, or even simply using Remote Desktop. (Great discussion points for future topics!) This simple "entry-level HPC" setup can free up your primary workstation during those intensive solves. It is amazingly convenient to build a model on my laptop, hit "Solve", shut down my laptop, go home, and come in the next morning to a fully solved model!
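As one example of the batch-script route, here is a sketch of a launch wrapped in Python. This is an assumption-laden illustration: the executable name (ansys130 here) varies by release and installation, and the file names are hypothetical; -b, -np, -i, and -o are the standard Mechanical APDL batch switches.

```python
import subprocess

# Hypothetical example: kick off a Mechanical APDL batch solve on the
# compute machine. Executable name and paths vary by release/installation.
subprocess.run(
    ["ansys130",          # MAPDL launcher for the installed release (assumed name)
     "-b",                # batch mode, no GUI
     "-np", "4",          # number of processors to use
     "-i", "model.dat",   # input file (e.g. exported from Workbench)
     "-o", "solve.out"],  # solver output file
    check=True,           # raise if the solve exits with an error
)
```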

I'd love to hear your thoughts!

Jason.

Tuesday, August 10, 2010

When NOT to use comparative charts for CFD software

Recently there was an article in Desktop Engineering that compared several commercial CFD products in the marketplace today. I am all for such comparisons! What shocked most people in the industry who read this article was how terribly inaccurate it was and how biased it was towards one single product. I wish it had been featured in a "marketing section" rather than in an engineering magazine such as Desktop Engineering. In his rebuttal article from Mentor Graphics, Dr. J notes that he even received an apology from Desktop Engineering: "To their credit, when we notified them, Desktop Engineering apologised to us, sent out an apology to all of its readership and promised that such a chart would not go out again."

Here is a direct link to the rebuttal article: "Lies, Damned Lies, and "CFD Comparison Charts" - Part I" You can also see the original "comparative charts" here as reference.

In this article, Dr. J aptly points out the one-sided take by CF Design. Though he mainly discusses the improper comparison of Mentor Graphics' FloEFD, there are several inaccuracies in this comparative chart when it comes to ANSYS CFD products (FLUENT, CFX) as well. Any current FLUENT or CFX user can testify to these inaccuracies! Dr. J, the count of "misleading statements" in the DE article is well beyond 27 if we include the ANSYS CFD products. I still can't believe this article ever got published! It begs for more research and accurate information.

Having been in the simulation industry (FEA, CFD) for close to a decade, I find this false comparative chart of CFD products really appalling, especially coming from a magazine such as DE. Negative marketing campaigns are not a good idea. If you do run them, at least know the facts about the competition first! And do not present them as a "guide" to help engineers decide which CFD code to choose!

Update: Folks from Blue Ridge Numerics have responded on the LinkedIn forums, and it seems they are working on it. Good to know :) You can follow their comments here: www.linkedin.com/groups?home=&gid=66032

I look forward to your feedback.

Monday, July 19, 2010

Fracture Mechanics in Turbine Blade Analysis

I would like to take some time to start a discussion on fracture mechanics. The calculations for a basic fracture mechanics analysis are fairly simple, but they can play a very important role in failure analyses. This topic comes up quite frequently in turbine blade work.

So what is fracture mechanics? In short, it's a method of determining the time it takes a crack in a part to grow to failure under a specific loading condition. The crack growth stage of fatigue can make up a significant portion of a product's life. This happens in products ranging from bicycles, to airplanes, to steam turbine blades.

At the heart of fracture mechanics is the stress intensity factor K, defined as:

K = f(g) · σ · √(πa)

Where:

f(g) is a correction factor based on crack geometry. This value tends to be between 1 and 1.4.

a is the crack length

σ is the remote stress
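In code this is a one-liner; here is a small Python sketch (my illustration, with hypothetical units of MPa and meters so that K comes out in MPa·√m):

```python
import math

def stress_intensity(sigma, a, f_g=1.0):
    """K = f(g) * sigma * sqrt(pi * a).
    sigma: remote stress, a: crack length, f_g: geometry correction (1 to 1.4)."""
    return f_g * sigma * math.sqrt(math.pi * a)

# Illustrative numbers only: 100 MPa remote stress, 2 mm edge crack.
print(stress_intensity(sigma=100.0, a=0.002, f_g=1.12))  # K in MPa*sqrt(m)
```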

Fatigue crack growth is divided into three regions, as shown in the figure below. In this figure, the crack growth rate (da/dN) is plotted on the vertical axis in log scale and the stress intensity range (ΔK = Kmax − Kmin) is plotted on the horizontal axis in log scale. Region I is associated with crack threshold effects (the area where a crack first begins to grow), Region II is an area of linear growth (the Paris region), and Region III exhibits extremely high/unstable crack growth.

For design purposes the focus is on Regions I and II. Crack growth is so fast in Region III that it does not have a significant effect on the total crack propagation life. Noted on the graph is ΔKth, the threshold stress intensity range, which is determined through testing. This value marks the beginning of crack growth. Kc is the critical stress intensity, and values higher than this predict fracture.

When performing a turbine blade analysis, we often want to determine whether a dynamic stress condition is severe enough to grow a crack (refer to previous posts on blade analysis). To determine that, we run a fracture mechanics analysis for Region I of the above graph. If the stress condition and initial flaw size are not capable of growing a crack, then we need not be concerned with removing the near-resonant condition.

This analysis starts with calculating the stress intensity factor range:

ΔK = f(g) · Δσ · √(πa)

In the case of an edge crack on a turbine blade airfoil, a correction factor of f(g) = 1.12 is typically used. Δσ is the stress range, or the dynamic stress for turbine blades (again, refer to my last post on dynamic stress analysis). If ΔK is greater than ΔKth, then a crack will propagate under the given loading condition.
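Here is a minimal Python sketch of that screening check; the stress range, crack size, and threshold value are all illustrative assumptions, not data from any real blade:

```python
import math

# Will a 2 mm edge crack grow under a 40 MPa dynamic stress range?
f_g, d_sigma, a = 1.12, 40.0, 0.002          # geometry factor, stress range (MPa), crack length (m)
delta_K = f_g * d_sigma * math.sqrt(math.pi * a)

delta_K_th = 5.0  # MPa*sqrt(m); from testing at the matching R ratio (illustrative)

if delta_K > delta_K_th:
    print("Crack is predicted to propagate; address the resonant condition.")
else:
    print("Below threshold; no crack growth predicted at this condition.")
```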

One other thing to consider is the R ratio. Test results for ΔKth values are very dependent on the conditions under which they were tested. A particular ΔKth value will only apply to a loading condition that has the same R ratio as the test. The R ratio calculation is shown below:

R = σmin/σmax = (σm − σd) / (σm + σd)

Where σm is the mean steady stress and σd is the alternating or dynamic stress. If the R ratio of the ΔKth test is different from that calculated above, ΔKth will need to be adjusted to account for the difference. One common method for the compensation of ΔKth is Walker's equation:

ΔKth = ΔKth,0 · (1 − R)^γ

Where ΔKth,0 is the value at R = 0, and γ is a material constant, typically between 0.3 and 1 (steels are typically around 0.5). Using this relationship, and assuming that ΔKth,0 is a constant, you can calculate the ΔKth value for any R ratio.
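A short Python sketch of that adjustment (the threshold and stress values are illustrative assumptions; γ = 0.5 as the typical steel value mentioned above):

```python
def walker_threshold(delta_K_th0, R, gamma=0.5):
    """Walker's equation: adjust the R = 0 threshold to another R ratio.
    delta_K_th(R) = delta_K_th0 * (1 - R)**gamma"""
    return delta_K_th0 * (1.0 - R) ** gamma

# Illustrative: R from a mean steady stress of 80 and a dynamic stress of 40.
sigma_m, sigma_d = 80.0, 40.0
R = (sigma_m - sigma_d) / (sigma_m + sigma_d)    # = 1/3
print(walker_threshold(delta_K_th0=6.0, R=R))    # threshold adjusted to this R
```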

Thanks for reading,

Tuesday, June 15, 2010

Do we really need training if the software is so easy?

Firstly, I want to say sorry for the long absence from this blog. I was finishing up my MBA at the Simon School of Business, University of Rochester. A big thank you to all those who supported me during the past two years!

Recently I had a very interesting phone discussion about "training" for simulation software (ANSYS, Moldflow, etc.). The question this young gentleman asked me was: "All the simulation software firms claim that their software is really easy to use. But then they suggest 2-5 days of training! Why this mismatch?"

This question really intrigued me; I have heard several versions of it in the past 9 years of my sales career. So I thought I would put out my thoughts on this topic and also get some feedback from you on it. Let us look at some of the thinking that leads to such a question about training:

1. "Yes! The software is easy": Most of the software today is far more easier to use than it ever was before. ANSYS Workbench has changed the paradigm on the once clumsier and "specialists only" fea software. But has the ease of use of software really eliminated the need for training? The ease of use has certainly helped analysts do more analyses and more efficiently and also lowered the "barriers to entry" to some extent. Students right out of school now are getting employed as full-time analysts. Some pundits may claim that "easier" software still do not have all the bells and whistles needed and I may give it some merit. But for most analysts, a easy to use simulation user interface such as ANSYS Workbench is just fine! But that still mean one doesn't need ANSYS training atall?

2. "I can learn it on my own": I have heard this several times as well and this is true to some extent as well. Most of the simulation software today is at a point where one can install it and start using it right away with very little effort. And the in-built tutorials will probably be just enough to get you there. But then why a formal training?

3. "In-house experts are my gurus": Several larger firms have already established in-house experts who are probably well suited to teach most of the fea courses out there today using the software of their choice. And the newbie analyst can probably use the aide of some perseverance and in-house guru expert can probably get there too....

So, that being said, why pay for formal training?

1. Makes you an efficient user (I did not say flawless user):
While it is true that the software is easy today, formal training will give you a good understanding of most of the quirks and enhance your familiarity with the product in a systematic way. Within 2 days or so, you will know where most of the buttons are, how the product works, what is behind all that math and those matrices, and which solver to use and when to use it (and when not to). Most importantly, it gives you direct hands-on exposure under expert guidance. By the time you are back at work and ready to use the software, you will be able to hit the ground running. You will at least know how to set up your own models, mesh them, and solve them (it doesn't guarantee you will do it right, though; that is where your 2 or 4 years of education, or your years of experience, come in handy).

2. It gives you a jump-start:
You sure can learn it on your own. And believe me, most of what you learn about the software will actually be learned on your own (over years). Training just gives you a jump-start to get you there quicker.

3. Makes your co-workers more efficient:
Training makes life much better for your "in-house gurus." Instead of asking "What is this button?", you will now ask "Does an axisymmetric model make more sense for this?", "Can I use a submodel here?", or "Shall I use Anand's model or Neo-Hookean?" You will make your guru's time much more efficient!

4. Improves your ROI on simulation software itself!
Formal training surely enhances your chances of getting a better return on the investment in your software! Yep, you will now be able to use the software better, run more analyses, and hopefully save millions for your firm (which is why they bought the software in the first place).

I have also heard some other benefits over the years, such as "Great to know the teachers; now I have an outside resource available," "I wouldn't have been diligent enough to do this in just 2 days or even a few months," "I needed it for NY PE credits," and "I just needed to get away from work!"

However, even if you are now a little more convinced of the value of training, at the end of the day personal motivation and dedication go a lot farther. Training is just a drop in the ocean!

I am eager to hear your feedback (whether you agree or disagree, I want to hear it).

Rob.

Thursday, April 29, 2010

Turbine Blade Dynamic Stress Analysis

Here is the second installment on our turbine blade analysis discussion. (Part one is here: Turbine Blade Modal Analysis)
This time I will focus on dynamic stress analysis. Once you have created the interference diagram discussed in my last post, you will be able to identify conditions where resonance may occur. Typically I flag any case where the resonant condition is less than 3% away from the forcing frequency (the impulse line on the interference diagram). I then run a dynamic stress analysis on each of those conditions using BLADE.

Resonant conditions were covered in my last post, but the topic is important enough to summarize here. The dynamic amplitude and stress response of a structure depend on the following factors:
1. The natural frequencies of the system
2. The damping properties
3. The forcing amplitudes or stimulus ratio, defined as the ratio of the dynamic forces to the static steam loads on the blade
4. The phase angles, defined by the harmonic content (nodal diameter) of the modes.

The steam flow field is non-uniform due to nozzle asymmetry and irregular spacing geometry within the steam flow path. Other factors may include geometry variations of wakes, leakage flows and disturbances in the turbine structure such as joints and steam extractions. Since so many variables are involved and some of the fluid phenomena are still unknown, it is extremely difficult to estimate accurately the dynamic forces and consequently the stimulus ratio.

When calculating the alternating stresses using BLADE, I typically assume a 1% stimulus ratio so that results can be easily scaled. In practice the stimulus ratio varies for different machines and for different blade rows. When BLADE calculates these stresses, it assumes that the system is at a resonant condition. Therefore, the resonant stresses output by BLADE need to be detuned (i.e. reduced) if the particular stimulus is not precisely at the resonant frequency. For example, let's say a condition selected for dynamic analysis is within 1% of resonance, and resonance occurs at 3500 Hz. This could still lead to significant detuning, since the forcing frequency would be 35 Hz away from resonance.

The detuning of these resonant stresses is accomplished through the transmissibility function, sometimes referred to as the magnification factor. A derivation of the transmissibility function can be found in a mechanical vibrations text, typically in the harmonic vibration chapter. For convenience, here is the final result:

σd = σr · 2ζ / √((1 − η²)² + (2ζη)²)

Where:

σd = dynamic stress

σr = resonant stress

η = frequency ratio (excitation frequency / natural frequency)

ζ = critical damping ratio
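Here is a minimal Python sketch of the detuning step, assuming the magnification-factor form given above (normalized so that σd equals σr exactly at resonance); the stress, damping, and frequency values are illustrative only:

```python
import math

def detuned_stress(sigma_r, eta, zeta):
    """Scale the resonant stress by the magnification factor, normalized so
    that the result equals sigma_r when eta == 1 (i.e. exactly at resonance)."""
    return sigma_r * (2.0 * zeta) / math.sqrt((1.0 - eta**2) ** 2 + (2.0 * zeta * eta) ** 2)

# Illustrative: forcing 1% below a 3500 Hz resonance, 0.2% critical damping.
eta = 3465.0 / 3500.0          # excitation frequency / natural frequency
print(detuned_stress(sigma_r=100.0, eta=eta, zeta=0.002))
```

With these made-up numbers the dynamic stress drops to roughly 20% of the resonant value, which is exactly the kind of significant detuning the 1%-off-resonance example above describes.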



Once the stress values are detuned, you will know the frequency and nodal diameter of each possible resonant condition and the dynamic stresses that occur there. With this you will be able to judge whether a near-resonant condition is significant and whether the design needs to be modified to reduce the stresses or shift the frequencies to detune the resonant condition.

Thanks for reading, and I welcome your comments and suggestions.