At the 13th International Meshing Roundtable (http://www.imr.sandia.gov/) in 2004, Dr. Tim Tautges and co-authors David White and Robert Leland presented one of the most memorable papers on mesh generation I’ve ever read. Their “Twelve Ways to Fool the Masses When Describing Mesh Generation Performance” still rings true seven years later.
Tim’s opening sentence definitely resonates: “Mesh generation… is far from a solved problem.” That means many of us are working to improve meshing algorithms and techniques, and we are writing technical papers and product brochures that describe the results we’ve achieved.
That’s where Tim’s paper comes in. All good humor is based on a kernel of truth, and what makes Tim’s paper so funny is that it hits very close to home. He wrote: “In our efforts to describe our meshing technology ‘in the best possible light’ compared to other approaches, we often make an algorithm look better than it really is.” At one time or another, we’ve all read (or written?) documents that violate, unintentionally of course, at least one of Tim’s twelve ways.
Tim and his co-authors wrote from the perspective of providing tongue-in-cheek advice to meshing researchers. I’m going to turn this around as though it’s a buyer’s guide for meshing software users. Tim also contributes some new thoughts on the subject.
Caveat emptor: here are the twelve things to beware of when reading about a mesh generation technique.
- Generate a large mesh where a small one will do. The ultimate “smoke and mirrors” trick is when people illustrate their meshing technology by throwing hundreds of millions of mesh points at a problem. Vague references to Moore’s Law and anecdotes about “a guy who used to work at a place where they once ran a computation with a billion points” are meant to dazzle you with size and distract you from the fact that what’s often being meshed is a simplified geometry, with a mesh far too large to be practical for routine analysis.
- Simplify meshing at the cost of analysis. When you see someone focusing too much on the time to mesh relative to the CFD solver time, beware. An all-tet mesh can be generated relatively rapidly, but if you’re going to use it for viscous CFD, be prepared to run the solver for a long time, use lots of memory, and get questionable results. We meshing folks need to keep reminding ourselves that meshing is not an end unto itself. That’s why we spend so much time and effort on what happens after the meshing algorithm finishes, like cell-count reduction and plugins for customizing mesh export.
- Quote time to mesh without accounting for related work. This is one of my favorites. As Tim wrote, “it is best to compare the weaknesses of other approaches with the strengths of yours.” We are doing a lot of work on overset grids, but I’ll cite Tim’s comment on that technique anyway. Have you ever noticed overset people extolling the relative virtues of generating multiple component grids rather than one big block-structured grid? How often do those same people tell you how big a pain in the neck overset hole cutting is?
I also caution you to beware of the phrase “Starting with an analysis-ready CAD model…” Analysis-ready is a code phrase for a model that’s been extensively cleaned up: excess details removed, geometry simplified, gaps and overlaps healed. I bet a lot of grid methods work great if you start with a perfect CAD model, but how often in practice do you get one?
- Use examples which appear more complicated than they are. “It has been said that a picture is worth 1,000 words; however, people forget to say that pictures are worth plus or minus 1,000 words.” Have you seen weight loss TV commercials with the before and after photographs in which someone has lost 150 pounds? In fine print somewhere on the screen you’ll also see words to the effect of “non-typical results.” One awesome mesh picture implies all-round awesomeness, but the truth is that it actually may be the only mesh that worked.
- Obscure important details about the model. They say the devil’s in the details, so beware of descriptions of methods and meshes that ignore details, especially the important ones. What you might be missing are things like non-conforming interfaces, lack of boundary conformance, hex-dominant versus all-hex meshes, etc.
- Describe 2D algorithm assuming easy extension to 3D. Writing “extension to 3D is an exercise for the reader” (or student) is a great way to bury a ton of unknowns. We all should know that going from 2D to 3D isn’t a linear exercise – the complexity expands geometrically (pardon the pun).
- Describe “automatic” algorithm which uses arbitrary tunable parameters. Tim’s seventh point delves into a pet peeve of mine – abuse of the term “automatic.” Automatic is defined as “without human intervention.” How many things in life are truly automatic? I like to say that anything can be automatic, it just depends on when you start counting.
I once had a guy tell me he knew how to automate structured (mapped) hex meshing. I was intrigued. He described his idea in 2D (see point #6) along these lines: first the user draws four edges that bound the region to be meshed, then assigns a number of points to the edges taking care that opposite edges have the same number of points, then distributes the points along each of the four edges, and then clicks one button that solves the elliptic PDE equations of structured grid generation so the mesh is generated automatically. Yes, it’s automatic, as long as you start counting when the boundaries are fully defined. (A minimal sketch of that workflow appears after this list.)
- Quote time to mesh, leaving out important details. We all know that meshing is usually the most time-consuming part of an analysis. You’d think measuring time would be foolproof, but it’s not. Are you being quoted the time spent by the algorithm’s author or by a user who’s trying it for the first time? Have you ever completed a new mesh and then thought, “Knowing what I know now, I’d be able to remesh this in half the time”? Maybe the written material is giving you the mesh generation time for a guy’s 1,000th widget mesh. Tim says quoting meshing time in days is a particularly egregious sin. How do you know if they’re neglecting to mention those were 12-hour days, including weekends?
- Mesh single-region models. If you’re only being shown meshes for single parts, consider asking about meshing an assembly. This introduces all sorts of non-manifold conditions that put meshers to a much stronger test.
- Use an approach which works by itself but breaks other parts of the process. Hex-dominant meshes are great and much easier to generate than all-hex meshes. But does that really matter if your solver only handles hexes? If the method only works with faceted geometry from STL files, does that help you if you only use CAD models with analytic surfaces?
- Report a single quality metric (or none), where several should be used. I am one of those people who complains about the plethora of mesh quality metrics. Tim also points out how this works to your disadvantage. Because there are so many, it’s likely meshing developers can find a metric that makes their meshes look really good. (The second sketch after this list shows how easily two standard metrics can disagree about the same cell.) Worse yet, beware of newly contrived metrics that portray the meshes in a glorious light.
- Show surface mesh but not interior. Because it’s easier to get a good quality surface mesh than a volume mesh, you’ll often see volume regions drawn only by their boundaries. This is a sign that something very funky may be lurking on the inside.
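About that “automatic” structured meshing anecdote, here is the minimal sketch promised above. It’s my own illustration in Python, not anyone’s actual product code, and it substitutes transfinite interpolation plus a few Laplace smoothing passes for a full elliptic grid-generation solver; the boundary curves, point counts, and function names are all invented for the example.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Initialize a structured grid from four discretized boundary edges
    using transfinite interpolation (TFI). Each edge is an (npts, 2) array
    of x,y points; opposite edges must have equal point counts and the
    corner points must match."""
    n, m = bottom.shape[0], left.shape[0]
    s = np.linspace(0.0, 1.0, n)  # parameter along bottom/top
    t = np.linspace(0.0, 1.0, m)  # parameter along left/right
    grid = np.zeros((m, n, 2))
    for j in range(m):
        for i in range(n):
            # Boolean sum: blend the four edges, subtract the corner terms.
            grid[j, i] = ((1 - t[j]) * bottom[i] + t[j] * top[i]
                          + (1 - s[i]) * left[j] + s[i] * right[j]
                          - (1 - s[i]) * (1 - t[j]) * bottom[0]
                          - s[i] * (1 - t[j]) * bottom[-1]
                          - (1 - s[i]) * t[j] * top[0]
                          - s[i] * t[j] * top[-1])
    return grid

def laplace_smooth(grid, iterations=200):
    """Relax interior points toward the average of their four neighbors,
    a crude stand-in for solving the elliptic grid-generation PDEs."""
    for _ in range(iterations):
        grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                                   + grid[1:-1, :-2] + grid[1:-1, 2:])
    return grid

# The "automatic" part starts here, AFTER the user has drawn all four
# edges, matched the point counts, and distributed the points.
s = np.linspace(0.0, 1.0, 21)
t = np.linspace(0.0, 1.0, 17)
bottom = np.column_stack([s, 0.1 * np.sin(np.pi * s)])  # curved lower edge
top    = np.column_stack([s, np.ones_like(s)])
left   = np.column_stack([np.zeros_like(t), t])
right  = np.column_stack([np.ones_like(t), t])

mesh = laplace_smooth(tfi_grid(bottom, top, left, right))
print(mesh.shape)  # (17, 21, 2): a 17 x 21 structured mesh of x,y points
```

The one-button click corresponds to the final two lines; everything above them is the human intervention. It just depends on when you start counting.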
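And for the single-metric caveat, here’s a hypothetical example, again my own Python using common textbook definitions rather than any particular tool’s, of how two metrics can disagree about the very same cell: max/min edge-length aspect ratio versus an angle-based skewness.

```python
import numpy as np

def quad_metrics(pts):
    """Two textbook quality measures for a quad given as 4 corner points
    in counterclockwise order: edge-length aspect ratio (max/min edge)
    and an angle-based skewness (worst deviation from 90 degrees, where
    0 is a perfect rectangle and 1 is degenerate)."""
    edges = np.roll(pts, -1, axis=0) - pts
    lengths = np.linalg.norm(edges, axis=1)
    aspect = lengths.max() / lengths.min()
    angles = []
    for k in range(4):
        u, v = -edges[k - 1], edges[k]  # the two edges meeting at corner k
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    skew = max(abs(a - 90.0) for a in angles) / 90.0
    return aspect, skew

# A unit square sheared into a 60-degree parallelogram: every edge
# still has length 1, but the corner angles are badly distorted.
square  = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
sheared = np.array([[0, 0], [1, 0],
                    [1 + np.cos(np.radians(60)), np.sin(np.radians(60))],
                    [np.cos(np.radians(60)), np.sin(np.radians(60))]])

print(quad_metrics(square))   # aspect 1.0, skew 0.0  -- both metrics happy
print(quad_metrics(sheared))  # aspect 1.0, skew ~0.33 -- only skew objects
```

The sheared quad’s edges are all the same length, so aspect ratio alone would report it as flawless; only the skew metric objects. Pick your metric, pick your story.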
Tim was kind enough to participate in this look-back at his work and came up with a 13th caveat to beware of:
- Illogically use the topics of GPUs and parallel computing. Beware when GPU and multi-core computing are used to justify work on a meshing problem, especially if the performance quoted doesn’t justify the use of those technologies.
Tim and I corresponded regarding this new, lucky 13th caveat.
Chawner: I gave you the chance to invalidate any of the caveats, but it’s funny that you’ve come up with a new one instead. We’re working on some GPU stuff right now, so this is quite timely.
Tautges: The meshing community is well aware now of GPU and parallel computing, and it will always sound more impressive when you refer to those issues when discussing your meshing problem. Also, because customers have heard of these things so often now, boring them early on will cause them to stop listening by the time you get around to quoting performance numbers, which is also good. Because, when you get right down to it, using GPUs and parallel computing can be downright hard, and the performance improvements you get from them sometimes don’t justify the effort.
Chawner: Maybe that’s just a way to spice up the otherwise unglamorous subject of mesh generation. Do you have specific examples of this?
Tautges: In fact, yes I do. I just came back from a conference where, in one talk, the virtues of GPUs and exascale computing were extolled. Later in the same talk, the speedup quoted for the innermost computational kernel was six. In another talk, after justifying their project based on exascale computing, performance numbers were given for up to 128 processors, with efficiencies dropping below 60 percent at the upper end. While a speedup factor of six would normally be nothing to sneeze at, a speedup of six in a GPU-based kernel sometimes translates to only two or three for the whole code. Also, for GPU-based parallel computing, the degree of code modification can be extensive, and can come at the expense of speedup from MPI-based parallel solution approaches. For the parallel computing case, machines with 128 cores are quite common now in universities, national laboratories, and, I suspect, most companies doing production engineering simulations. Efficiencies far greater than 60 percent on 128 processors will be needed to justify the use of machines with much higher core counts (and these efficiencies are achievable across a wide variety of simulation types).
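For readers who want to check Tim’s arithmetic, it’s essentially Amdahl’s law: if a kernel accounts for a fraction f of total runtime and is accelerated by a factor s, the whole-code speedup is 1/((1 - f) + f/s). Here’s a back-of-the-envelope sketch; the kernel fractions are numbers I picked purely for illustration, not figures from either talk he describes.

```python
def amdahl_speedup(f, s):
    """Whole-code speedup when a fraction f of runtime is
    accelerated by a factor s (Amdahl's law)."""
    return 1.0 / ((1.0 - f) + f / s)

# A 6x kernel speedup, assuming the kernel is 70-80% of runtime:
for f in (0.7, 0.8):
    print(f"kernel fraction {f:.0%}: "
          f"whole-code speedup {amdahl_speedup(f, 6):.1f}x")
# -> 2.4x and 3.0x, i.e., Tim's "two or three for the whole code"

# And the efficiency point: 60% parallel efficiency on 128 cores
# delivers the work of only about 77 cores.
print(f"effective cores: {0.60 * 128:.0f} of 128")
```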
Chawner: Seriously, there have been a lot of changes in the field of meshing since you originally wrote the paper.
Tautges: I agree. There’s been progress on approaches other than the “all hexes, all the time” mindset that was more prevalent when the paper was written. I’ve seen more examples of hybrid meshes (hexes, tets, and prisms or pyramids to connect them), as well as polyhedral meshes and some inside-out hex generation tools that seem to be gaining traction.
Chawner: All this work and these advances validate your statement that meshing is far from a solved problem. Makes you wonder where meshing will be in another seven years.
Tautges: Yeah, I wonder too. However far we get on the problem, though, we’ll still probably see the use of these now-13 ways in reporting on meshing. Perhaps they’ll become a bit less common if we continue pointing them out, however tongue in cheek.
Chawner: Thanks, Tim, for your insight. I’m now going to review all our technical documentation.
Be sure to read the complete original paper. It can be found online at http://www.imr.sandia.gov/papers/imr13/tautges.pdf
The 20th International Meshing Roundtable will be held in Paris on 23-26 October 2011 and there’s still time to submit a paper. Full papers are due 6 July 2011. Just keep in mind the 13 caveats when writing.