Dr. Steve Karman is a professor in the Graduate School of Computational Engineering at the University of Tennessee at Chattanooga and a researcher at the SimCenter: National Center for Computational Engineering. The SimCenter is a center for integrated research and education whose primary goals include establishing next-generation technologies in computational modeling, simulation, and design.
[JC] What do you see are the biggest challenges facing CFD in the next three years?
[SK] The use of CFD in design is really starting to become practical, especially for steady state cases. I am referring to using adjoint-based methods that can tackle real design problems like “optimize the wing shape for maximum lift and minimum drag”. Unsteady design is more challenging and requires a tremendous amount of computing resources.
The use of CAD throughout the CFD process should become more mainstream. Many meshing packages, Pointwise for example, are incorporating hooks into the actual CAD model. The industry needs to move toward using CAD exclusively and eliminate, as much as possible, the use of discrete models and even IGES and STEP. The designer’s intent and the ability to modify the shape for design optimization is best handled with the original CAD software. Meshing researchers are beginning to make use of vendor-neutral interfaces to CAD, such as the CAPRI package from Bob Haimes at MIT.
[JC] Can you elaborate on why you want to minimize use of standard file formats like IGES and STEP and faceted geometry models?
[SK] Just being in a standard file format like IGES or STEP does not mean the file is clean and valid. There are different flavors of the standards, and sometimes the model is not properly closed, containing gaps between parts, or it may include untrimmed surfaces, which forces the mesh generator to handle the trimming process. Discrete models, such as facet files, do not contain enough information to recover the true curvature of the geometry, making them ill-suited for mesh adaptation and design. Most CAD packages now work with solid models that maintain a properly closed model at all times at a consistent accuracy or precision. The discrete model or IGES/STEP file acts only as a transfer mechanism between the real CAD system and the meshing software. We should strive to eliminate that step of the process.
[JC] Back to the big challenges…
[SK] The meshing community needs to move toward true parallel mesh generation for two reasons. The first is that problem sizes are increasing, and it will become common practice to use meshes containing a billion elements. This is impractical in the serial mode many organizations currently use to create meshes. The other obvious reason for going parallel is speed: generating a billion-element mesh in serial is tremendously slow. Parallel mesh generation is not a trivial task. The process is inherently serial, especially where tet meshing is concerned. Researchers are beginning to work on the issues, and I expect some breakthroughs will be made over the next few years. I believe the meshing process needs to be parallel from the outset and never go back to a serial mesh. Many post-processing packages are already able to support parallel result files, so there should be no reason to require a combined serial mesh in the process stream. Parallelism for speed and size should be a focus.
[JC] To give us a datum for going forward, how big are your average meshes and when do you think a billion cells will start becoming practical and not just a “stunt”?
[SK] Our average meshes contain around 10 to 20 million points. Depending on the element types used, that could be between 20 and 50 million elements. We recently ran some turbomachinery cases in which the mesh contained 60 million points and 100 million elements (a hybrid of tets, prisms, and hexes). We used 280 processors to perform the calculations. We have a total of 1,300 processors on our in-house Linux cluster. That mesh contained one row of rotor-stator blades. If we wanted to run three stages, that would increase the size to approximately 200 million points and 300-400 million elements. But our current analysis process is limited to approximately 100 million points due to the requirements for serial mesh creation and parallel partitioning. So we already would like to run close to a half-billion elements.
LES solutions are notorious for needing exceedingly large meshes. We would like to be able to run cases now that use meshes consisting of 200 million points and close to a billion elements. One of my colleagues recently attended an ONR conference where a presenter showed results from an aerodynamic/hydrodynamic case that used a Cartesian mesh consisting of close to one billion hexes. So the need for billion-element meshes is here, but the process is not quite practical yet. I foresee us creating and analyzing billion-element meshes on a regular basis within the next couple of years.
[JC] Tell us about what you’re currently working on.
[SK] When I joined the SimCenter in 2003, my primary responsibility was to lead the research activities in mesh generation, supporting the group that had come to Chattanooga from Mississippi State and was developing the flow solver. We had purchased a couple of Gridgen licenses to use for production runs. So, with Gridgen we had the ability to create structured and unstructured viscous meshes, the typical hybrid types.
I began re-creating the mesh generation capability used by SPLITFLOW, the Cartesian flow solver I helped develop at GD/Lockheed Martin. That evolved into a code named P_HUGG, which could perform Parallel Hierarchical Unstructured Grid Generation and produced hex-dominant meshes for inviscid analyses. The output mesh was a hybrid mesh containing tetrahedral, pyramids, prisms and hexahedra. We are continuing to evolve this technology and a former student, Vincent Betro, has added capabilities for adding viscous layers and coupled the method with a tetrahedral mesher to create a versatile hybrid-meshing package. The ultimate goal is to develop an unsteady adaptive meshing capability that utilizes the hierarchical tree efficiently to modify the mesh based on boundary motion and solution gradients.
A couple of years after moving to Tennessee, I was collaborating with Kyle Anderson in the area of mesh smoothing using the Linear-Elastic relationships. We were smoothing interior meshes based on boundary perturbations and realized that the method could actually be used to create viscous meshes. We developed a code named P_VLI (Parallel Viscous Layer Insertion) with a simple premise: use Linear-Elastic smoothing to move an existing mesh away from a viscous surface to create room for viscous layers to be added to the mesh. The details are a bit more complicated than that when you have to consider sharp convex and concave corners, small gaps and bodies in close proximity to one another. It has been a very useful tool at the SimCenter and is used on a daily basis.
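As a rough illustration of that premise, here is a 1-D cartoon: push an existing wall-normal point distribution away from the wall and fill the opened gap with geometrically stretched layers. The actual P_VLI solves the linear-elastic equations on the full 3-D mesh; the linear displacement decay used below is just the 1-D Laplace solution with the analogous boundary conditions, and the spacing parameters are made-up values for the example.

```python
# 1-D cartoon of viscous layer insertion: displace the existing points away
# from the wall (linear decay = 1-D Laplace solution), then insert a
# geometrically stretched stack of layer points in the gap that opens up.
# The real method works on the full mesh with the linear-elastic equations.
import numpy as np

def insert_viscous_layers(x, first_spacing=1e-3, growth=1.2, n_layers=10):
    """x: sorted wall-normal coordinates of an existing mesh, x[0] at the wall."""
    heights = first_spacing * growth ** np.arange(n_layers)
    gap = heights.sum()                          # room needed for the new layers

    # displacement: 'gap' at the first off-wall point, zero at the outer
    # boundary, varying linearly in between (the wall itself does not move)
    s = (x[1:] - x[1]) / (x[-1] - x[1])
    pushed = x[1:] + gap * (1.0 - s)

    layer_points = x[0] + np.cumsum(heights)     # new stretched near-wall points
    return np.concatenate(([x[0]], layer_points, pushed))

coarse = np.linspace(0.0, 1.0, 11)               # isotropic spacing off the wall
print(insert_viscous_layers(coarse))
```

The hard parts Steve mentions, sharp corners, small gaps, and bodies in close proximity, are exactly what this cartoon leaves out.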
In addition to elliptic smoothing using the Linear-Elastic equations, we are developing unstructured Winslow smoothing, which has previously been used only on structured grids. Some researchers have proposed schemes for unstructured elliptic smoothing, but I haven't seen them taken to production level or extended to three dimensions. Working with a student, we devised an approach for solving the Winslow equations on a local virtual control volume that can produce smooth meshes in physical space. We are continuing to evolve the method and recently published papers describing how to control viscous spacing and grid line angularity. In addition, we are working to incorporate adaptive mesh smoothing based on gradients from flow solutions.
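For context, the classical structured-grid Winslow smoother that this unstructured work generalizes can be written compactly; the sketch below is a point-Jacobi iteration on the inverted Winslow equations for a 2-D structured grid. It is only meant to show the equations being solved; the virtual-control-volume, three-dimensional formulation described above is considerably more involved.

```python
# Point-Jacobi Winslow smoothing on a 2-D structured grid: relax the interior
# so that (xi, eta) satisfy Laplace equations in physical space, i.e. solve
#   alpha*x_xixi - 2*beta*x_xieta + gamma*x_etaeta = 0   (and the same for y).
import numpy as np

def winslow_smooth(x, y, iterations=500):
    """x, y: node coordinates indexed [i, j] = [xi, eta]; boundaries held fixed."""
    for _ in range(iterations):
        x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
        y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
        x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
        y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])

        alpha = x_eta**2 + y_eta**2
        beta  = x_xi * x_eta + y_xi * y_eta
        gamma = x_xi**2 + y_xi**2

        def update(f):
            cross = f[2:, 2:] - f[:-2, 2:] - f[2:, :-2] + f[:-2, :-2]   # 4*f_xieta
            return (alpha * (f[2:, 1:-1] + f[:-2, 1:-1])
                    + gamma * (f[1:-1, 2:] + f[1:-1, :-2])
                    - 0.5 * beta * cross) / (2.0 * (alpha + gamma))

        x_new, y_new = update(x), update(y)
        x[1:-1, 1:-1], y[1:-1, 1:-1] = x_new, y_new
    return x, y

# usage: relax a deliberately skewed algebraic grid (boundaries stay put)
ni, nj = 21, 11
xi, eta = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
x0, y0 = xi + 0.2 * eta**2, eta.copy()
winslow_smooth(x0, y0)
```

The unstructured case has no (i, j) neighbors to difference, which is the difficulty the local virtual control volume formulation addresses.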
[JC] What path did you follow to get to where you are today?
[SK] My first year of college was at the Air Force Academy, where I majored in aeronautical engineering. I then transferred to Texas A&M University and earned a B.S. in aerospace engineering in 1980 and an M.E. in aerospace engineering in 1982. I started working for the General Dynamics Fort Worth Division, which would eventually become Lockheed Martin, in the aeroanalysis group in January of 1983. I continued my education part time at the University of Texas at Arlington and earned my Ph.D. in aerospace engineering in 1990.
My initial duties at GD involved integrating the early CFD technologies into the analysis process, things like lifting line theory and panel codes. About that time, some of the early Euler solvers were being developed, such as Antony Jameson's FLO57. That code was originally intended for transport-type aircraft and had an internal mesh generation capability that used conformal mapping to make a single-block, C-topology mesh dimensioned 97×17×17. Needless to say, that mesh didn't work well for the F-16 jet fighter. So the CFD group began working on other means to create meshes for FLO57 and eventually our own Euler code, affectionately named BLEU (Block Euler). More features were added to the flow solver by a team of developers, and the computer code evolved into the Lockheed Martin workhorse solver known as FALCON.
The mesh generation activities of the group were initiated early on by Chris Reed and myself, but were eventually taken over by a guy named John Steinbrenner. About the same time, several more developers were added to the CFD team, including you and Rick Matus.
What I may be better known for is the Cartesian flow solver named SPLITFLOW, owned by Lockheed Martin. This code generated a hierarchical Cartesian mesh automatically based on an input surface triangulation. The Cartesian mesh was refined based on user-defined adaptation functions, such as pressure and Mach number. Neal Domel parallelized SPLITFLOW, and Eric Charlton was hired to apply and support the code. It was, and still is, a very useful code for performing inviscid analyses of very complicated geometries.
[JC] Who or what inspired you to get started in your career?
[SK] I was always interested in airplanes as a child. That was part of the reason for attending the Air Force Academy. Although I was pilot-qualified, I chose to leave the academy and transfer to Texas A&M University after one year. While pursuing my undergraduate degree at A&M, I enrolled in some computer programming and numerical methods courses. That continued in graduate school, when we learned to write small perturbation/potential flow codes. The thought of calculating the flow about an airplane on a computer fascinated me. By the way, I did earn a private pilot’s license after my senior year, although I haven’t flown much since leaving Texas A&M.
[JC] Can you share with us your favorite tools and resources that help you get your job done?
[SK] As I have already mentioned, we use Gridgen at the SimCenter; we also use Pointwise. In fact, I use Pointwise in one of the graduate grid generation courses I teach, making use of the Teaching Partnership Program with Pointwise. I show the students the basic mesh generation process used by Pointwise, and then we go “under the hood” to learn about the underlying methods. Students write their own programs for distributing points on curves and assembling curves to create domains. They then write their own TFI method to initialize the domains and then develop a Winslow smoother. By the end of the semester, they also write their own Delaunay triangulation program. Throughout the process, Pointwise serves as a model for the grid generation process.
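As an example of the kind of exercise involved, a bare-bones 2-D transfinite interpolation (a Coons patch) fits in a few lines. The uniform blending parameters below are a simplification made for brevity; a practical TFI would typically blend using the actual boundary point distributions.

```python
# 2-D transfinite interpolation (Coons patch): fill the interior of a
# structured domain from its four boundary curves using the boolean-sum
# formula, then subtract the doubly counted corner contributions.
import numpy as np

def tfi_2d(bottom, top, left, right):
    """bottom, top: (ni, 2) arrays; left, right: (nj, 2) arrays.
    Corners must match: bottom[0] == left[0], bottom[-1] == right[0],
    top[0] == left[-1], top[-1] == right[-1]."""
    ni, nj = bottom.shape[0], left.shape[0]
    xi  = np.linspace(0.0, 1.0, ni)[:, None, None]   # blend in the i direction
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]   # blend in the j direction

    return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1]))   # (ni, nj, 2)

# usage: a 21 x 11 grid on the unit square
s = np.linspace(0, 1, 21)[:, None]
t = np.linspace(0, 1, 11)[:, None]
grid = tfi_2d(np.hstack([s, 0 * s]), np.hstack([s, 0 * s + 1]),
              np.hstack([0 * t, t]), np.hstack([0 * t + 1, t]))
```

A Winslow smoother like the one sketched earlier is then the natural next step for cleaning up what TFI produces on less friendly boundaries.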
I participate in the AIAA Meshing, Visualization and Computational Environments technical committee and the International Meshing Roundtable. These have been valuable resources for learning about innovative techniques being developed in the field of mesh generation. My background is in aerospace, and the techniques covered in the AIAA journals are most relevant to me, but the IMR community is broader and brings a different perspective to the process. Another valuable publication is the Journal of Computational Physics. As for conferences, I have managed to attend the big AIAA Aerospace Sciences Meeting in January each year and try to attend the IMR each year as well. If funding permits, I also try to go to one of the AIAA summer meetings.
[JC] You’re obviously closely involved with students. Pointwise is, too, with our Teaching Partnership Program and our internships. What advice do you have for young people entering the field today?
[SK] The field of computational engineering is experiencing tremendous growth. It has a broad range of engineering applications and technology areas, from defense to energy, the environment, and health. Find an area you are passionate about and learn all you can. Attend conferences often to learn about the current research areas and who the people in the field are. Volunteer to serve on committees, such as the AIAA committee I mentioned earlier; they encourage young engineers to get involved. It is one of the best ways to meet the prominent people in the field and develop collegial relationships.
[JC] Are students enthusiastic about meshing, especially pursuing it in academia or industry?
[SK] The first graduate course in mesh generation that I teach is taken by most of the students in the SimCenter program. At that stage in their studies, they haven't focused on the area of research they want to pursue. Some will choose to study flow solvers, but others will elect to pursue meshing. All of my current advisees have taken my mesh generation classes at the SimCenter.
We were fortunate to have the 19th International Meshing Roundtable in Chattanooga last October. I was able to arrange for the students to attend that conference. That was an invaluable experience for many of them. They were able to see the wide range of topics and listen to presentations by researchers they had only known by name previously. They were thrilled to be attending the conference and many sat on the very first row throughout all the presentations.
[JC] You’ve already touched a little bit on how you know Pointwise.
[SK] I go back a long way with Pointwise. You could say I was the original alpha tester for Gridgen. John Steinbrenner was developing the first version of Gridgen while he was at GD. I was using the software to create a multi-block Euler mesh for an F-16. The results were published at an AGARD conference in France in the mid-'80s, probably one of the first full-aircraft analyses. I was running the software from Fort Worth on the NASA Ames computer over a 1200-baud modem using a Tektronix 4014 terminal. Yes, one of those phosphor screen types that had to be refreshed frequently.
After you and Steinbrenner left GD, we continued to use the software. I believe Lockheed Martin is one of the largest users of Gridgen/Pointwise today. I also worked with many of the Pointwise employees who also worked at GD, such as Rick Matus, Erick Gantt, Chris Fouts, and Pat Baker. And I knew George Shrewsbury when he worked for Lockheed Martin before he retired and joined Pointwise.
The group that started the SimCenter purchased the licenses for Pointwise before I arrived, and we have continued to use it to this day. We have attended several Pointwise User's Group Meetings and have made presentations at those meetings.
[JC] If we were to come visit you, where’s a good place to go out for dinner?
[SK] The obvious answer to that question would be Big River Grille. It is one of the most popular restaurants downtown, near the Tennessee Aquarium. They serve American-style food and are a microbrewery. When we have visitors to the SimCenter, it is one of the restaurants we frequently visit for lunch and dinner.
Another restaurant that is a bit more expensive, but very popular, is St. John's. They also serve American-style food, but for dinner only.
[JC] Thanks for your insight, Steve. Best of luck with your research.
Great article.
Wanted to comment RE IGES and STEP. Although they’re not always the best routes, with the right tools they can be quickly repaired. The translators typically reveal issues with the underlying geometry which may have been masked by the original CAD system.
We make a tool for FEA and CFD specialists that can clean up geometry, remove small features like rounds and holes, extract volumes from CAD data, and parameterize dumb solids for closed-loop optimization in CAE. Nifty stuff compared to ordinary CAD. http://www.spaceclaim.com
-Blake
(a founder of SpaceClaim)
Hi Blake. Thanks for the comment. It would be hard to work in the CAE world these days and not have heard about SpaceClaim. It’s all the rage.
As for IGES and STEP, I see it as a case of “don’t blame the messenger.” It’s not the formats themselves that are to blame; it’s the fact that the files often aren’t written correctly relative to the standards. Then there’s the whole tolerancing issue.
In the article, Steve implies that having a solid model solves a lot of problems. Our approach to this in Pointwise and Gridgen is a feature suite called “solid meshing.” Part of that feature set automatically (or manually) assembles the solid on import. You can hide a lot of problems that way.