Monday, September 5, 2011
Cloud this... Cloud that... What about Cloud Simulations? Cloud FEA/CFD?
Do you think Cloud FEA/CFD will actually see the light of day? I recently created a poll on this topic and have already seen some great comments and interest. I would love to hear from you! Here you go:
Poll: Cloud Computing for FEA/CFD? Do you like it?
Once the poll closes (28 days from today), I plan to update this post (or publish a new one) with a final report and analysis of what the simulation community thinks.
I am really interested in hearing what you have to say!
Tuesday, October 26, 2010
The dreaded question: Is the solution converged?
This is perhaps the most asked and important, yet dreaded, question in CFD analysis. Some people will say “yes, the residuals were XXXX” just to dismiss the question. But is the answer that simple? Unfortunately it is not. That is just one measure we should use in determining whether a solution is converged. The definition of convergence, in a mathematical sense, is the approach towards a definite point. Basically we are trying to get our numerical solution sufficiently close (accurate) to a definite point (the exact solution). To understand what convergence means in practice, we must step back and look at what we are trying to accomplish in modeling.
Overview
The goal of CFD modeling is to obtain a virtual flow field that represents the physical situation. In traditional CFD modeling, the first step in this process is to create a grid that represents the physical domain. Once the mesh has been created, the boundary conditions and other physics models are applied to complete the computational model. The computational model is then solved. As analysts we must think about the convergence ramifications of both the meshing and solving steps.
Meshing Convergence
Typically, when people speak of meshing in regards to convergence, they mean refining the mesh in certain areas to reduce the residuals there. That is a commonly used technique, but it is not what we will discuss here.
However, when you consider that we are representing a continuous flow field with discrete approximations, we must ensure that our grid is sufficiently fine—that it approximates, to sufficient accuracy, the physical flow field. Typically this is done by investigating successively finer meshes to show that the solution converges to a fixed limit. Many people refer to this as “grid independence,” but in reality it is convergence of the discrete computational model to the continuous physical system.
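To make this concrete, here is a minimal sketch (in Python) of how one might quantify such a study: estimating the observed order of convergence and a Richardson-extrapolated "grid-independent" value from three successively refined meshes. The pressure-drop values and the refinement ratio of 2 are made up purely for illustration.

```python
import math

# Monitored quantity (e.g., pressure drop in Pa) from three successively
# refined meshes -- values are purely illustrative.
f_coarse, f_medium, f_fine = 104.2, 101.1, 100.3
r = 2.0  # constant grid refinement ratio between levels

# Observed order of convergence from the three solutions
p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Richardson extrapolation: estimate of the grid-independent value
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Grid Convergence Index (GCI) on the fine mesh, factor of safety 1.25
gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0) * 100.0

print(f"observed order p   = {p:.2f}")    # ~1.95 for these numbers
print(f"extrapolated value = {f_exact:.2f}")
print(f"fine-grid GCI      = {gci_fine:.2f}%")
```

If the fine-grid GCI comes out small, further refinement is unlikely to change the answer appreciably; if not, another refinement level is warranted.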
Solver Convergence
When monitoring a solver run, people generally just examine the residuals. Residuals are a measure of the change in the solution between iterations as it tends towards the discretized solution. Different solvers specify different levels the residuals must reach for a “converged” solution, but these are just rules of thumb. The residuals can drop to a given level while the flow field has not yet reached an iteration-independent solution. Conversely, the residuals could level off above the specified tolerance while the quantities of interest have reached fixed values (although we would still have to check the mesh convergence of that solution). Because of this, it is typically recommended to monitor several relevant quantities during the solution phase and make sure these converge in addition to the residuals.
Of course the question that naturally arises is: which variables should we monitor? This is where our years of schooling and experience come into play. In general we want to monitor variables that are relevant to the problem we are solving. For instance, in many problems we are concerned with the pressure drop through the domain, so we should monitor this quantity as the solution progresses. If we were investigating the flow over an airplane, it would be useful to monitor the lift and drag. If we were modeling a heat exchanger, it would be useful to monitor the temperatures leaving the domain and the heat flux through the various surfaces. So there is no single monitor that is sufficient to determine convergence for all problems. We must use our engineering judgment to determine the most useful quantities to monitor.
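As a rough illustration, a monitor check might look like the sketch below, which simply asks whether a monitored quantity has stopped changing over the last N iterations. The window size and tolerance are assumptions you would tune per problem; they are not universal values.

```python
def monitor_converged(history, window=50, tol=1e-4):
    """Return True when a monitored quantity (e.g., pressure drop, lift,
    outlet temperature) has stopped changing: the spread of the last
    `window` iterations is below `tol` relative to its mean."""
    if len(history) < window:
        return False  # not enough iterations to judge yet
    recent = history[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)
    return abs(spread) <= tol * abs(mean)

# Illustrative use: in practice the solver's monitor-point output would
# append one value per iteration to this list.
pressure_drop_history = [100.0 + 0.5 * 0.9**n for n in range(300)]
print(monitor_converged(pressure_drop_history))  # True once the tail flattens
```

A check like this would run alongside, not instead of, the residual criteria.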
Summary
In summary, answering whether a solution is converged is complex. It is a question that, as modelers, we must always keep in the back of our minds. When developing the concept for a model, we should be thinking about which variables to monitor. When developing a mesh, we should be planning a refined one to test mesh convergence (grid independence). When crunching the numbers, we want to watch both the residuals and the monitor points we have created. And when the first results are displayed, we should look for unphysical discontinuities or other phenomena that would indicate poor convergence.
This discussion is not meant to be the end-all be-all on convergence. Entire books have been written about the subject, so I will not claim to have described it completely in this blog. I just wanted to present some questions that we, as modelers, should always be considering when developing and solving our models. We should understand the true nature of convergence and feel confident when asked whether our solution has converged, not simply rattle off the scripted response that the residuals were below some arbitrary value!
Thursday, August 12, 2010
Get started with Entry-Level HPC and ANSYS
Ansys Mechanical has supported, and been tightly integrated with, High Performance Computing (HPC) for many years and many versions. However, I've seen quite a bit of hesitation from users and companies about introducing HPC into their engineering simulation environment. The reasons generally come down to cost and complexity.
True, setting up a central cluster with many nodes is costly. The complexity of configuring it, optimizing it (for Ansys and the other array of applications that will share it), and maintaining it can be daunting.
However, I've worked with a large number of customers recently getting into "entry-level HPC". Even though our primary workstations are getting more powerful (6-core processors are here, 12-core processors are coming soon) and we're able to run larger jobs on them, there's still a need to offload the job to an HPC environment. Let's face it - we've all closed our email, web browsers, and office apps during those painfully slow solves to try to free up just a few more MBs of RAM, hoping the run won't crash.
What I consider "entry-level" is having, at minimum, a second workstation or server. It can be high end or low end: expensive with lots of CPU/RAM/disk space, or inexpensive and assembled from all those spare components lying around. The idea is to try HPC with a simple setup that sends a solve over to a second computer. If that second computer has the compute power for high-end analysis, great! If not, get something set up to at least introduce yourself to the concepts and see how it works.
I recently worked with a customer who purchased a very high-end single-node compute server. Why just one? Simple answer... cost constraints. We were able to set it up, get the Ansys users up and running and accustomed to HPC (and adopting its advantages) and then when the budget allowed, the customer added additional compute nodes to the existing cluster.
Off-loading the solve can be done a number of ways, including Remote Solve Manager (RSM), batch scripts, Distributed Ansys, or even simply using Remote Desktop. (Great discussion points for future topics!) This simple "entry-level HPC" setup can free up your primary workstation during those intensive solves. It is amazingly convenient to build a model on my laptop, hit "Solve", shut down my laptop, go home, and come in the next morning to a fully solved model!
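As one illustration of the batch-script route (not RSM itself), the sketch below pushes an input deck to a compute server and launches a background batch solve over ssh. The host name, directories, file names, and the exact solver command line are placeholders for whatever your environment and Ansys version use; check your install's documentation for the correct executable name and options.

```python
import subprocess

# Hypothetical host, paths, and solver invocation -- adjust for your setup.
REMOTE_HOST = "compute01"
REMOTE_DIR = "/scratch/jobs/bracket"
SOLVER_CMD = "ansys145 -b -dis -np 4 -i bracket.dat -o bracket.out"

# 1. Copy the solver input deck to the compute machine.
subprocess.run(["scp", "bracket.dat", f"{REMOTE_HOST}:{REMOTE_DIR}/"], check=True)

# 2. Kick off the batch solve remotely; nohup lets us disconnect
#    (and go home) while the job keeps running overnight.
subprocess.run(
    ["ssh", REMOTE_HOST, f"cd {REMOTE_DIR} && nohup {SOLVER_CMD} >/dev/null 2>&1 &"],
    check=True,
)

# 3. Next morning: pull the results back to the workstation, e.g.
# subprocess.run(["scp", f"{REMOTE_HOST}:{REMOTE_DIR}/bracket.out", "."], check=True)
```

RSM or a proper job scheduler does all of this (plus queuing and monitoring) far more robustly; the point of the sketch is only to show how little is actually required to get started.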
Tuesday, August 10, 2010
When NOT to use comparative Charts for CFD software
Monday, July 19, 2010
Fracture Mechanics in Turbine Blade Analysis
So what is fracture mechanics? In short, it's a method of determining the time it takes a crack in a part to grow to failure under a specific loading condition. The crack-growth stage of fatigue can make up a significant portion of a product's life. This happens in products ranging from bicycles, to airplanes, to steam turbine blades.
At the heart of fracture mechanics is the stress intensity factor K, defined as:
K = f(g) · σ · √(πa)
Where:
f(g) is a correction factor based on crack geometry. This value tends to be between 1 and 1.4.
a is the crack length
σ is the remote stress
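As a quick sanity check of that formula, here is a tiny calculation with illustrative numbers (not data from any real blade):

```python
import math

def stress_intensity(f_g, stress, a):
    """K = f(g) * sigma * sqrt(pi * a); with stress in MPa and crack
    length a in meters, K comes out in MPa*sqrt(m)."""
    return f_g * stress * math.sqrt(math.pi * a)

# Illustrative numbers: edge-crack correction factor, 200 MPa remote
# stress, 2 mm crack.
K = stress_intensity(1.12, 200.0, 0.002)
print(f"K = {K:.1f} MPa*sqrt(m)")  # ~17.8 MPa*sqrt(m)
```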
Fatigue crack growth is divided into 3 regions, as shown in the figure below. In this figure, crack growth rate (da/dN) is plotted on the vertical axis in log scale and the stress intensity factor range (ΔK) on the horizontal axis, also in log scale.
For design purposes the focus is on Regions I and II. Crack growth is so fast in Region III that it does not have a significant effect on the total crack propagation life. Noted on the graph is ΔKth, the threshold stress intensity, which is determined through testing; this value marks the beginning of crack growth. Kc is the critical stress intensity, and values higher than this predict fracture.
When performing a turbine blade analysis we often want to determine whether a dynamic stress condition is severe enough to grow a crack (refer to previous posts on blade analysis). To determine that, we run a fracture mechanics analysis for Region I of the above graph. If the stress condition and initial flaw size are not capable of growing a crack, then we need not be concerned with removing the near-resonant condition.
This analysis starts with calculating the stress intensity factor range:
ΔK = f(g) · Δσ · √(πa)
In the case of an edge crack on a turbine blade airfoil, a correction factor of f(g) = 1.12 is typically used. Δσ is the stress range, or the dynamic stress for turbine blades (again, refer to my last post on dynamic stress analysis). If ΔK is greater than ΔKth, then a crack will propagate under the given loading condition.
One other thing to consider is the R ratio. Test results for ΔKth values are very dependent on the conditions under which they were tested. A particular ΔKth value will only apply to a loading condition that has the same R ratio as the test. The R ratio calculation is shown below:
R = σmin/σmax = (σm − σd)/(σm + σd)
Where σm is the mean steady stress and σd is the alternating or dynamic stress. If the R ratio of the ΔKth test is different from the one calculated above, the ΔKth will need to be adjusted to account for the difference. One common method of compensating ΔKth is:
ΔKth = ΔKth,0 · (1 − R)^γ
Where ΔKth,0 is the value at R = 0, and γ is a material constant typically between 0.3 and 1 (steels are typically around 0.5). Using this relationship, and assuming that ΔKth,0 is a constant, you can calculate the ΔKth value for any R ratio.
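Putting the last few formulas together, here is a small, self-contained sketch of the screening calculation. All inputs, including the assumed ΔKth value at R = 0 and the value of γ, are illustrative placeholders rather than real material data:

```python
import math

def delta_K(f_g, delta_sigma, a):
    """Stress intensity factor range: dK = f(g) * dsigma * sqrt(pi * a)."""
    return f_g * delta_sigma * math.sqrt(math.pi * a)

def delta_K_th(dK_th0, R, gamma=0.5):
    """Shift the threshold measured at R = 0 to the operating R ratio:
    dK_th = dK_th0 * (1 - R)**gamma (gamma ~0.5 for steels)."""
    return dK_th0 * (1.0 - R) ** gamma

# Illustrative inputs for an edge crack on a blade airfoil.
sigma_m, sigma_d = 250.0, 40.0    # mean steady and dynamic stress, MPa
a = 0.001                         # initial flaw size, m
R = (sigma_m - sigma_d) / (sigma_m + sigma_d)

dK = delta_K(1.12, sigma_d, a)    # stress range taken as the dynamic stress
dK_th = delta_K_th(6.0, R)        # assumed threshold at R=0, MPa*sqrt(m)

print(f"R = {R:.2f}, dK = {dK:.2f}, dK_th(R) = {dK_th:.2f}")
print("crack will propagate" if dK > dK_th else "crack stays dormant")
```

If the check says the crack stays dormant, the near-resonant condition may be tolerable; if not, the design or operating point needs attention.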
Thanks for reading,
Tuesday, June 15, 2010
Do we really need training if the software is so easy?
Thursday, April 29, 2010
Turbine Blade Dynamic Stress Analysis
This time I will focus on a dynamic stress analysis. Once you have created the interference diagram discussed in my last post, you will be able to identify conditions where resonance may occur. Typically I flag any case where the resonant condition is within 3% of the forcing frequency (the impulse line on the interference diagram). I then run a dynamic stress analysis on each of those conditions using BLADE.
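That 3% screening rule is easy to automate; a sketch, with made-up frequencies:

```python
def near_resonance(f_natural, f_excitation, margin=0.03):
    """Flag a mode/excitation pair whose frequencies differ by less
    than `margin` (3% here, per the screening rule above)."""
    return abs(f_excitation - f_natural) / f_natural < margin

# Illustrative (natural frequency, excitation frequency) pairs in Hz
for fn, fe in [(1200.0, 1180.0), (2400.0, 2550.0)]:
    flag = "run dynamic stress analysis" if near_resonance(fn, fe) else "skip"
    print(f"fn={fn:.0f} Hz, fe={fe:.0f} Hz -> {flag}")
```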
Resonant conditions were covered in my last post, but I think they are important enough to summarize here. The dynamic amplitude and stress response of a structure depend on the following factors:
σd = Dynamic stress
σr = Resonant stress
η = Frequency ratio (excitation frequency/natural frequency)
ζ = Critical damping ratio
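The equation image from the original post has not survived here, but these four symbols match the standard single-degree-of-freedom amplification relation, σd = σr · 2ζ / √((1 − η²)² + (2ζη)²), which reduces to σd = σr exactly at resonance (η = 1). Assuming that form, a quick calculation shows why even a few percent of detuning matters so much at low damping:

```python
import math

def dynamic_stress(sigma_r, eta, zeta):
    """Single-DOF amplification: resonant stress scaled by
    2*zeta / sqrt((1 - eta^2)^2 + (2*zeta*eta)^2), normalized so
    that sigma_d equals sigma_r at resonance (eta = 1)."""
    return sigma_r * 2.0 * zeta / math.sqrt(
        (1.0 - eta**2) ** 2 + (2.0 * zeta * eta) ** 2
    )

# Illustrative: 100 MPa resonant stress, 0.2% critical damping,
# excitation 3% below the natural frequency.
print(f"{dynamic_stress(100.0, 0.97, 0.002):.1f} MPa")  # ~6.8 MPa
```

With only 0.2% critical damping, being 3% off resonance already knocks the dynamic stress down by more than an order of magnitude, which is exactly why the 3% screening band above is a sensible starting point.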