Rigorous research and development testing is an everyday necessity in engineering, automotive and aerospace manufacturing, so it’s no surprise that organizations continuously look for ways to optimize the process. Manufacturers and engineers are always trying to design the next generation of something, and typically the goal is more performance and efficiency.
Tools such as computational fluid dynamics (CFD) are the backbone of modern testing in these industries, as most of what they design interacts with the air around us. CFD is a digital tool that deepens the understanding of the air’s behavior and provides a way to digitally assess the performance, efficiency and safety of new designs. This, in turn, minimizes costly physical testing: engineers and designers can conduct their analysis from computers using specialized software, reducing the need to manufacture prototypes purely for testing purposes.
Still, running CFD workloads is demanding: it requires massive amounts of compute capacity to run sophisticated models that simulate the complex flow fields found in real-life applications. A simulation that would take years on a laptop can finish in hours on high-performance computing (HPC) facilities, and the more complex the model, the more computing power it needs to run.
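The way compute cost grows with model complexity can be illustrated with a toy problem. The sketch below is nothing like a production CFD solver; it is a minimal explicit finite-difference scheme for 2D diffusion, included only to show why refining the grid multiplies the work: doubling the resolution quadruples the cell count and, because of the explicit time-step stability limit, also multiplies the number of time steps.

```python
import numpy as np

def diffuse_2d(n, t_final=0.01, alpha=1.0):
    """Explicit finite-difference solver for the 2D heat equation on an
    n x n grid over the unit square -- a toy stand-in for the far more
    complex solvers used in real CFD codes."""
    h = 1.0 / (n - 1)
    # Explicit-scheme stability limit: dt <= h^2 / (4 * alpha),
    # so finer grids force smaller time steps as well as more cells.
    dt = 0.25 * h * h / alpha
    steps = int(np.ceil(t_final / dt))
    u = np.zeros((n, n))
    u[n // 2, n // 2] = 1.0  # point heat source at the center
    for _ in range(steps):
        # Five-point Laplacian stencil applied to the interior cells
        u[1:-1, 1:-1] += alpha * dt / (h * h) * (
            u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1]
        )
    return u, steps

# Doubling the resolution gives 4x the cells and ~4x the time steps,
# i.e. roughly 16x the total work for this explicit 2D scheme.
_, steps_coarse = diffuse_2d(32)
_, steps_fine = diffuse_2d(64)
print(steps_coarse, steps_fine)
```

In 3D the scaling is steeper still, which is why industrial-scale flow simulations need HPC-class resources rather than workstations.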
But there are ways to optimize this simulation process for better performance.
While the accuracy of CFD software poses its own challenges, simulation challenges can usually be boiled down to speed and the availability of compute capacity. Many engineers have been stuck in the queue on their supercomputers, waiting to run their simulations; when their turn finally comes, they often run against limited compute capacity, delaying their results. This has a big-picture impact on the business in terms of overall time to market.
It affects engineers’ output as well. Engineers can be tempted to shrink their simulation jobs so they run faster, which can compromise the fidelity of the results. Data handling is another constant battle when systems don’t provide enough disk space.
And sometimes, a few months down the line, if engineers don’t obtain the results they want, they have to rerun the simulation, ultimately wasting time and money.
How some companies are innovating in the CFD space
A prominent sailing champion and years-long contender in the America’s Cup used high-performance computing and CFD simulations to optimize sailing technique and yacht performance above and below the water.
One of the most important goals for the team was to ensure fast and continuous innovation so they could keep the wind in their sails and deliver a winning performance. They needed a way to flexibly and affordably scale up their HPC capabilities during critical periods.
As a solution, engineers combined a velocity prediction program with cloud-based HPC resources to create a dynamic simulation of the yacht as it moved across the race course under diverse wind and weather conditions.
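At its core, a velocity prediction program (VPP) finds the boat speed at which the aerodynamic driving force from the sails balances the hydrodynamic resistance of the hull. The snippet below is a deliberately simplified illustration of that force balance with made-up coefficients and force models; it is not the team’s actual program, which couples far richer aero and hydro models across wind and weather conditions.

```python
def drive_force(v_boat, v_wind=10.0):
    # Illustrative aerodynamic drive: falls off as boat speed
    # approaches wind speed (apparent-wind effects ignored).
    return 800.0 * max(v_wind - v_boat, 0.0)

def drag_force(v_boat):
    # Illustrative quadratic hydrodynamic resistance.
    return 120.0 * v_boat ** 2

def predict_speed(lo=0.0, hi=20.0, tol=1e-6):
    """Bisect on the force residual drive(v) - drag(v) to find the
    equilibrium boat speed, the core calculation of a toy VPP."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if drive_force(mid) > drag_force(mid):
            lo = mid  # net forward force: equilibrium lies above mid
        else:
            hi = mid  # net braking force: equilibrium lies below mid
    return 0.5 * (lo + hi)

v = predict_speed()
print(round(v, 2))  # equilibrium speed for these toy coefficients
```

A real VPP repeats a balance like this across many sail trims, headings and wind states, which is exactly the kind of embarrassingly parallel sweep that cloud HPC handles well.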
This helped the team not only optimize the physical profile of the hull, sails and other components but also train sailors to trim the sails most efficiently, creating optimized sail shapes as a race progresses.
The engineering team also leveraged the cloud to run as many CFD simulations as they wanted; computing capacity was never an obstacle to their creativity. In this way, they could investigate many complex problems and optimize every aspect of the yacht’s performance.
Another example is a race car manufacturer using CFD tools to investigate the flow fields around their vehicles with all the details needed to achieve their target.
To optimize price performance and ensure faster time to results, the company linked its on-premises workloads to cloud resources through a virtual private cloud (VPC). This solution gave the company complete control over its virtual networking environment and established secure connections between on-premises networks, remote offices, client devices and the cloud network.
Furthermore, this solution facilitated a transparent user experience for the company's aerodynamicists. With an orchestrator tool, users could ensure their workflows ran seamlessly. Upon task completion, the data was downloaded to the on-premises solution and shared with aerodynamicists.
With this solution, the race car manufacturer found the resources, flexibility and availability needed to innovate without limitations, improving their car’s performance and workflows simultaneously.
New solutions are continuously being delivered through the cloud, and companies that want to take the next step have many options to choose from. The latest instance hardware, updated software tools and upgraded solutions can give engineering teams more freedom to work according to their demands, run more simulations simultaneously and get results faster. They can also improve daily task delivery and service-level agreements, reduce simulation run times and deploy new features faster while reducing errors.
The result can be a reduced duration of product development cycles, a faster journey to the production line, better economics and improved competitiveness in the market.
Neil Ashton, Ph.D., leads computational engineering product strategy at Amazon Web Services and is a computational fluid dynamics subject-matter expert for Amazon. He is a visiting fellow at the University of Oxford and is active in the research community, leading community workshops such as the American Institute of Aeronautics and Astronautics’ (AIAA) high-lift prediction workshop and the automotive CFD prediction workshop series. He has published widely on high-fidelity CFD method development, including the role of high-performance computing (HPC) in accelerating its adoption.