Analysis & Simulation Roundtable in NASA Tech Briefs

The December 2014 issue of NASA Tech Briefs included this year’s edition of their annual Analysis and Simulation Software Industry Roundtable in which several company executives are interviewed about the coming year.

You should read this article. Click the link above or the image below to access it.


I was a bit surprised that the article opened with an exploration of 3D printing’s influence (positive) on the use of simulation software. I had to think for a minute [a rare occurrence] on why that might be so. The consensus seems to be best summed up by CD-adapco’s David Vaughn who said that anything like 3D printing that reduces the time required to create a physical prototype necessarily requires more use of simulation early in the design process.

I was not surprised at all that the second topic covered was “the cloud.” Maybe it is just me [probably] but “the cloud” apparently means different things to different people. Is it the convenience of on-demand licensing? Is it the easy access to a big computing resource? Is it easier to use? Is it cheaper to use? Siemens PLM Software’s Mark Sherman seems to share at least some of my thinking as evidenced by his quote “…there’s nothing better for interacting with a large FEA model than local high-performance hardware.” [Mark, sorry to lump you in with my sorry lot.] I’m not a Luddite, nor do I suspect Mark is, but we must set clear, consistent expectations with our customers about what we want to accomplish in “the cloud.”

Simulation data management was the third topic explored and rightly so. As simulation software has matured, the simulations have gotten bigger and more numerous. The manual, often visual, inspection of results to find the “needle in a haystack” has to be shelved. I just wish the magazine had included representatives from CFD postprocessing experts Tecplot, Intelligent Light, and CEI because they have all been making advances in the automated, simultaneous interrogation of multiple datasets.

MSC Software’s Dominic Gallello sums things up quite nicely, “2015 is the year that users should poke their heads up from their work to see what is out there. I think they will be amazed with the leaps forward.”

What do you think? Include your comments in the Reply area below.

This entry was posted in News, Software. Bookmark the permalink.

10 Responses to Analysis & Simulation Roundtable in NASA Tech Briefs

  1. John, thanks for drawing my attention to this interesting article. To your remark on the cloud in your second paragraph: in the meantime (in fact, since September 2011) the cloud has been well defined by the National Institute of Standards and Technology (NIST) here: It’s essentially: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Since then, this definition has been undisputed.

    To experiment with cloud computing for engineers, we started the UberCloud CAE Experiments in July 2012, performed 165 real-life CAE cloud experiments, and published over 100 articles about the different cloud case studies, benefits, roadblocks, and lessons learned in Desktop Engineering and other magazines. A vendor-independent discussion of the cloud hurdles and their solutions can be found here: The results are encouraging; to summarize: better, cheaper, faster. But also: the cloud doesn’t solve all your problems.

    • John Chawner says:

      Thanks for making me aware of that definition of “the cloud.” I’m perfectly happy with it. However, I’m willing to bet that many potential users of “the cloud” have expectations that go beyond that (e.g. free, cheap, easy). That’s not to say they’re wrong – there could be a free cloud – but setting users’ expectations is important if for no other reason than to prevent disappointment. Thanks again.

      • Thanks, John. I agree that some people might have expectations that go beyond the standard cloud definition. But cloud is best understood by looking for real life analogies which we already are used to. Let’s look at the cloud service pattern first:
        Cloud: I have my own in-house computer on my desk which I have paid for (upfront) and I need the standard services to run it (power, software, training, etc.). If I need more compute power I can tap into the cloud, on demand, and pay per use. How nice, because I don’t have to buy another computer. Now the analogies:
        Car: I have my own in-house car in my garage which I have paid for (upfront) and I need the standard services to run it (gas, garage services, etc.). If I need a second car for a short time (because my wife needs our car, or other reasons), I can either rent a car for a short time, or call a taxi, on demand, and pay per use. How nice, because I don’t have to buy another car.
        House: I have my own house which I have paid for (upfront) and I need the standard services to maintain it (electricity, water, gas, etc.). These days, when my kids come to visit with their kids, my house becomes too small, so they sleep in a nearby hotel, on demand, and pay per use. How nice, because I don’t have to buy (or build) another house.
        So why do we naturally (without batting an eye) pay for a rental car, a taxi, a hotel, and many other services around us, but not for additional computing power in the cloud? Makes no sense. If we can get a great cloud service when we need it, why shouldn’t we pay for it adequately? And in my paper about ‘server versus cloud cost’ [1] I have shown that a cloud service is cheaper than buying your own server, in most cases.
        Finally, on the ease of use of cloud services: just check the two YouTube demos about ANSYS in the Cloud [2] and STAR-CCM+ in the Cloud [3]. It looks like you are working on your familiar desktop, but in fact the applications are running in a cloud …

  2. John Chawner says:

    Thanks, Wolfgang. I’m perfectly willing to adopt your definition of the cloud, at least as the primary definition of what it means and what we should expect from it (on-demand compute capacity beyond what we normally have permanent, local access to).

    Actually, I’ve been waiting to see who’d take the bait of my statements about cloud confusion and stake a claim that *this* is what the cloud means.

  3. Thanks, John. The good news really is that we are already beyond the point of debating the definition of cloud computing, even for our engineering colleagues who indeed face greater challenges than our enterprise colleagues (licensing, heavy data transfer, corporate assets in the cloud, etc.). But engineering cloud companies like Bull, Gompute, Nimbix, OCF, OzenCloud, Rescale, Sabalcore, and certainly our UberCloud CAE Marketplace are taking care of lowering (or even removing) these challenges very professionally. And in case an engineer is still reluctant, we always recommend first performing what we call a CAE UberCloud Experiment.
    BTW, congratulations to Pointwise on 20 years and a great achievement!

  4. OK, so how does one differentiate cloud computing from supercomputing? Isn’t supercomputing also “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”?

    • John Chawner says:

      Hi Martin: I suspect the answer to your question lies in the phrase “can be rapidly provisioned and released with minimal management effort.” That is to say, in order to use supercomputing you must have a supercomputer, which you can’t easily purchase this afternoon and start using (i.e. it cannot be rapidly provisioned). On the other hand, for those of us with a DoD background, use of remote supercomputers (e.g. an account at any of the MSRCs) could be considered cloud computing once the accounts are set up. On the third hand, MSRC accounts aren’t given away willy-nilly, so you could say that disqualifies them from being termed “cloud computing” as opposed to a commercial service where you just give your credit card and start using it. But that’s just my interpretation and I’m new to this.

      For me, what Wolfgang’s definition tells me is that cloud computing is all about easy access to remote computing resources (i.e. resources that go beyond what you can affordably maintain on your own locally, or resources that cover surge demand) as opposed to other things I’ve heard and read about the cloud – it’s free, it’s easy to use, software is cheaper, etc.

  5. Martin, I’m not sure why you are using NIST’s widely agreed cloud definition as your personal definition of supercomputing. Why not keep proven definitions, like those from Wikipedia and others, e.g., “A supercomputer is a computer at the frontline of contemporary processing capacity…”. In more detail, a supercomputer is NOT a model for enabling ubiquitous, convenient, on-demand network access, etc.: a supercomputer is not ubiquitous in that it has a very special architecture (SMP, DMP, vector, etc.), it’s among the fastest stand-alone computers in the world, without ubiquitous access (in fact, very restricted access), not convenient at all (rather, it takes months if not years to become part of the highly educated supercomputing community), it is not on-demand (John Doe can’t use it at his fingertips), and so on. On the other hand, a supercomputer could indeed be part of a cloud, e.g. if AWS decided to buy a Cray machine (which they won’t, for economic reasons).

    Martin, if you are interested in seeing how easy the access to and use of the cloud is today for engineers and scientists (in contrast to a supercomputer), please see the two demos about ANSYS and CD-adapco in the cloud here: and Thanks.

  6. In general, for me at least, when doing CFD one would like to do a few runs that push the envelope of computer power (or one’s wallet), for example grid convergence studies or a more realistic turbulence model (i.e. lower off-body eddy viscosity, which means unsteadiness and lots of vorticity).

    BTW, why would I want to do CFD on a machine/cluster/HPC system that does not have SMP, AVX, QuickPath (i.e. a Xeon or something similar), and (hopefully) IB (i.e. a very special architecture)? OK, it doesn’t need to be a supercomputer; a small rack system will usually do. But if cloud computing does not encompass that, then it’s a waste of time and money for CFD. Isn’t it? Or maybe we are not talking about the same thing.

  7. Thanks, Martin. Until recently, cloud offerings indeed were mostly based on more general-purpose hardware. But for a few years now, cloud providers like Bull, Gompute, Nimbix, OCF, OzenCloud, Rescale, Sabalcore, and others have been offering engineering software on HPC hardware (Xeon, including InfiniBand, etc.), although such hardware is expensive and the risk for the cloud provider is high because it ages quickly and needs to be fully utilized. This risk diminishes as the acceptance of HPC cloud in the engineering community grows, a trend which IDC currently observes.
