This week’s CFD news represents only a fraction of what we have bookmarked but is still chock-full of goodness. It covers geometry modeling for CFD, design, a must-read article on programmer productivity, new software releases including one from our friends at NUMECA, several job openings, and great news about more in-person events. Shown here is a teaser image for the 2nd High-Fidelity Industrial LES/DNS Symposium.
Engineering.com interviewed Cadence’s Nick Wyman about the challenges to be overcome when dealing with geometry models for CFD and mesh generation.
In this rather high-level article about “digital development” (virtualizing the entire lifecycle of an aircraft) as applied to Northrop Grumman’s B-21 Raider, we see them coin the term MB(x) for “model-based everything.”
Today I learned about bionic surface technologies GmbH who not only provide CFD consulting services but are also experts in riblet surfaces for drag reduction.
UX Magazine shares 100 Things I Know About Design, an article I keep returning to. #9: “Design without thinking is decoration.”

While we’re on the topic of meshing, check out The Art of Meshing, a video from the Swiss Federal Laboratories for Materials Science and Technology. (It’s not just about the art; the video has a lot of ideas for how to make a good mesh.)
Cadence shares some thoughts on CFD Simulation Types: Discretization, Approximation, Algorithms.
A 4-exaflops, GPU-based “AI supercomputer” is now online at the National Energy Research Scientific Computing Center. [I really don’t understand the difference between an AI supercomputer and just a regular supercomputer.] Perlmutter (named after Nobel Laureate Saul Perlmutter) features 6,000 GPUs.
If you are a software developer or lead a team of software developers I highly recommend you read the article The SPACE of Developer Productivity in ACM Queue magazine. Performance (the “P” in SPACE) “is often best evaluated as outcomes instead of output.” [Thought-provoking articles like this are why ACM’s magazines are my favorites of all the professional societies of which I am a member.]
Calculating Near Wall Cell Size for a Given Y+ provides background on how to set the height of your first mesh cell off a no-slip boundary to achieve a certain value of the non-dimensional wall distance y+. Your target y+ value depends on a variety of factors related to how accurate you want the solution to be given your turbulence model and numerical algorithms. Roughly speaking [and these suggestions are sure to be wrong for someone], a y+ of 50 is a good target if you’re using wall functions while 1 is the target if you’re not. If you’re trying to compute heat transfer, a y+ of 0.1 is something to shoot for. Of course, the challenge is that you don’t really know the y+ until you’ve computed the solution, which means you either develop a good body of experience on which to base your best practices or compute the actual y+ values as a sanity check of your results. A back-of-the-envelope estimate is sketched below.
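If you’d rather see the arithmetic, here is a minimal Python sketch of the usual flat-plate estimate: compute a Reynolds number, estimate the skin friction, convert that to wall shear stress and friction velocity, then back out the first cell height for your target y+. The Schlichting skin-friction correlation and the function and variable names are my choices for illustration, not necessarily what the linked article (or our app) uses.

```python
# Back-of-the-envelope first cell height for a target y+ (flat-plate estimate).
# The Schlichting skin-friction fit below is one common choice, not the only one.
import math

def first_cell_height(u_inf, rho, mu, ref_length, y_plus_target):
    """Return an estimated first cell height in the same length units as ref_length."""
    re = rho * u_inf * ref_length / mu               # Reynolds number
    cf = (2.0 * math.log10(re) - 0.65) ** -2.3       # Schlichting skin friction (Re < 1e9)
    tau_w = 0.5 * cf * rho * u_inf**2                # wall shear stress
    u_tau = math.sqrt(tau_w / rho)                   # friction velocity
    return y_plus_target * mu / (rho * u_tau)        # y = y+ * nu / u_tau

# Example: air at sea level over a 1 m reference length at 50 m/s, targeting y+ = 1
print(first_cell_height(u_inf=50.0, rho=1.225, mu=1.81e-5, ref_length=1.0, y_plus_target=1.0))
```

For the numbers above this works out to a first cell height of roughly 8 microns, a good reminder of why wall-resolved meshes get big in a hurry.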
If the previous paragraph has you psyched about y+, you can download our Y+ Calculator app for iPhone and Android. [If nothing else, it’s a great conversation starter at parties.]
As of April, the OpenFOAM Foundation has raised about €175,000 through their maintenance plans to support ongoing development and maintenance of the software.

Why simulate data centers? Why indeed.
The 2nd High-Fidelity Industrial LES/DNS Symposium will be held in Toulouse and online on 22-24 September 2021.

Not to be “me too,” but we’ve done a lot of meshing for the Potsdam Propeller Test Case (PPTC), such as the on-demand (and aptly titled) webinar Meshing the PPTC.
The call for papers is open (until July 16th when your abstract is due) for the CONVERGE User Conference (27-30 Sep 2021, online). [User conference? Users conference? User’s conference? Users’ conference?]
Geometry model interoperability [shudder]. Is there a better way? nTopology describes CodeReps for the exchange of implicitly defined geometry, an approach that appears to have many positive attributes.
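For readers who haven’t played with implicit geometry, the idea is that a shape is a function you evaluate at a point (negative inside, positive outside) rather than a boundary representation you stitch together from surfaces. Here’s a toy sketch of that idea; to be clear, this is not nTopology’s CodeRep format, just an illustration of function-defined geometry.

```python
# Toy implicit geometry: shapes are signed distance functions you can query anywhere.
# This illustrates the general idea only; it is not nTopology's CodeRep format.
import math

def sphere(cx, cy, cz, r):
    """Signed distance to a sphere: negative inside, zero on the surface, positive outside."""
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def union(f, g):
    """Boolean union of two implicit shapes via a pointwise minimum."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

shape = union(sphere(0.0, 0.0, 0.0, 1.0), sphere(1.5, 0.0, 0.0, 1.0))
print(shape(0.0, 0.0, 0.0))   # negative: inside the first sphere
print(shape(5.0, 0.0, 0.0))   # positive: outside both spheres
```

Because the shape is just code, Boolean operations and watertightness come essentially for free, which is a big part of the appeal for exchanging geometry.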
Digital wind tunnels are the supposed subject of this article from Imperial College, but it’s really about a high-order DNS simulation of a turbine blade. [I will tease Prof. Vincent by noting that the article identifies him as the “lead author” whereas on the paper he’s the last of 8 authors. Isn’t the rule of thumb that the 4th author is the person who actually prepared the paper?]
Red Bull Advanced Technologies has an opening in the UK for a Junior Aerodynamics Engineer.
Cadence has two openings for Software Engineer – Mesh Generation in the Fort Worth office (i.e. with the team formerly known as Pointwise). Here’s one. Here’s the other.
Ever wonder about AI-accelerated CFD? Here’s an overview of how byteLAKE’s CFD Suite is said to reduce the time to solution by at least 10x.
What’s new in Simcenter Femap 2021.1.
CFD for Gen3 Supercars.
BETA CAE Systems released v21.1.3 of their software suite.

Running Tecplot 360 on AWS – Part 2.
FLEXI is a high-order, high-performance, open-source CFD code from the Univ. of Stuttgart.
CFD for tugboats.
Zenotech profiles work at the Univ. of Bristol involving CFD for wind energy.
Here’s Visualizing Data’s best of the visualization web for January 2021 and February 2021.

As if cicadas aren’t getting enough press coverage these days, here are simulations of how they fly.
The 2021 code_saturne/neptune_cfd Users Meeting will be held at EDF Lab Saclay on 15 September. Looks like it’ll be a hybrid event, in-person and online.
Tecplot has an opening for a Technical Support Specialist.
Cadence has a job opening for a Sr Marketing Editor/Writer in India.
Blender 2.93 was released.

You’ve heard of “budgetary convergence,” right? That’s when your CFD solution is deemed converged when you run out of computer budget. In blogging there’s “time convergence” because when you run out of time the post is done regardless of how many links remain in your bookmarks.
Re: “I really don’t understand the difference between an AI supercomputer and just a regular supercomputer.”
The article states “Perlmutter [is] the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses.” Improving speed by using reduced precision where it does no harm to the end result is a big thing in machine learning methods. This is apparently the biggest supercomputer specialized for that purpose.
Of course mixed precision techniques are not exclusive to machine learning. Here is an article describing a mixed 64-bit / 32-bit / 16-bit linear solver implementation in FUN3D: https://fun3d.larc.nasa.gov/papers/LowPrecisionSolver.pdf
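For anyone who hasn’t run into the technique, the usual pattern is iterative refinement: do the expensive solve in low precision, then compute residuals and accumulate corrections in high precision until the answer is about as accurate as a full-precision solve. Here’s a minimal NumPy sketch of that general idea; it is not FUN3D’s solver, and a real implementation would factor the low-precision matrix once and reuse it.

```python
# Minimal sketch of mixed-precision iterative refinement (not FUN3D's implementation):
# solve in float32, compute residuals and accumulate corrections in float64.
import numpy as np

def mixed_precision_solve(A, b, iterations=5):
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)
    A32 = A64.astype(np.float32)                              # low-precision copy used for solves
    x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)
    for _ in range(iterations):
        r = b64 - A64 @ x                                     # residual in high precision
        dx = np.linalg.solve(A32, r.astype(np.float32))       # correction from low-precision solve
        x += dx.astype(np.float64)                            # accumulate in high precision
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100.0 * np.eye(100)     # well-conditioned test matrix
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.max(np.abs(A @ x - b)))                              # should approach float64 round-off
```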
Thanks for the clarification. Mixed precision seems like a fine distinction but what do I know about AI or even genuine intelligence for that matter.