Mellanox Demonstrates Record Performance for ANSYS High-Performance Computing Application with Dell HPC Solutions

  • Surpasses Previously Published Record by 25 Percent and Maintains Leadership with 1/3 of the Compute Infrastructure
  • World-Record Performance in HPC Environments Enables Industrial and Manufacturing Customers to Save on CAPEX and OPEX
  • Breakthrough Result of Continued Collaboration Between Mellanox and Dell Since 2010

SUNNYVALE, Calif. & YOKNEAM, Israel — (BUSINESS WIRE) — November 10, 2014 — Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced it has achieved, through collaboration with Dell, world-record performance for ANSYS Fluent 15.0.7, exceeding the previously submitted result by more than 25 percent*. By connecting Dell HPC Solutions' 32-node high-performance computing cluster with Mellanox's end-to-end FDR 56Gb/s InfiniBand solutions and the HPC-X v1.2 MPI software library, industrial customers can deploy systems that achieve higher application performance than systems 3X larger, resulting in 67 percent savings on CAPEX and OPEX.
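The savings claim follows directly from the node arithmetic in the release; a minimal sketch of that calculation (the 32-node count and "3X larger" comparison are from this release; treating infrastructure cost as proportional to node count is a simplifying assumption):

```python
# Savings from matching a 3X-larger cluster's performance with 32 nodes.
# Node counts come from the release; cost-proportional-to-nodes is an assumption.
MELLANOX_DELL_NODES = 32
COMPETING_NODES = 3 * MELLANOX_DELL_NODES  # the "3X larger" system: 96 nodes

savings_fraction = 1 - MELLANOX_DELL_NODES / COMPETING_NODES
print(f"Infrastructure savings: {savings_fraction:.0%}")  # → 67%
```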

ANSYS Fluent is a leading commercial high-performance computing application from simulation software provider ANSYS that offers the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications. ANSYS is deployed in applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Now industrial customers deploying a 32-node Dell-based cluster can realize a greater than 25 percent increase in performance, speeding time-to-analysis and time-to-market in their business, and achieving the full promise of a supercomputer’s performance level. Customers also can speed up their deployments, as the 64-CPU/640-core result beat the previous record, set by a 192-CPU/1,920-core Aries interconnect-based cluster, by 8.53 percent.
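The socket and core counts quoted above are internally consistent; a quick sketch checking them (the figures are from this release; "CPUs" is read here as sockets, which is an interpretation rather than a stated definition):

```python
# Figures quoted in the release; "CPUs" is interpreted as sockets (assumption).
record_sockets, record_cores = 64, 640        # 32 dual-socket Dell nodes
previous_sockets, previous_cores = 192, 1920  # Aries interconnect-based cluster

cores_per_socket = record_cores // record_sockets  # 10, matching a 10-core Xeon E5-2680 v2
size_ratio = previous_sockets // record_sockets    # previous system was 3X larger
print(cores_per_socket, size_ratio)  # → 10 3
```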

In order to compete in today’s global market, industrial organizations are increasingly required to perform data-intensive analysis more often and more quickly. Critical to achieving this analysis are HPC solutions that combine multiple systems, perform complex calculations and drive faster processing for the most demanding applications.

As a leader in HPC solutions, Dell provides robust, scalable and responsive solutions that are cost-effective and enable product innovation. They are deployed across the globe as the computational foundation for academic, governmental and industrial research critical to scientific advancement and economic and global competitiveness.

“Dell’s HPC solutions are specifically optimized to reduce the time it takes to process heavy workloads, including industrial applications, by weeks or even months,” said Jimmy Pike, vice president, senior fellow and chief architect of the Enterprise Solutions Group at Dell. “By running ANSYS software on Mellanox and Dell solutions, customers in both manufacturing and engineering can accelerate the design and optimization of their product development, at substantially less capital cost and operating expense.”

“These results are a testament to the performance and efficiency of Mellanox FDR 56Gb/s InfiniBand. As systems scale, the performance advantage of Connect-IB over other interconnects will become even more dramatic,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “With only a third of the number of servers used by the competition, the combined Dell and Mellanox platform achieved best-in-class performance. These results are significant for industrial and commercial manufacturing companies looking to quicken their time-to-market without having to increase their costs.”

The 32-node ANSYS Fluent 15.0.7 Dell HPC Solution cluster included:

  • Dell PowerEdge™ R720xd servers
  • Mellanox Connect-IB® FDR InfiniBand adapters
  • Mellanox SwitchX®-2 SX6036 FDR InfiniBand switches
  • MPI: Mellanox HPC-X v1.2.0-250
  • Dual-Socket Deca-Core Intel Xeon E5-2680 v2 @ 2.80 GHz CPUs
  • Memory: 64GB DDR3 1600 MHz
  • OS: RHEL 6.2, OFED 2.3-1.0.1 InfiniBand Software stack
  • Hard Drives: 24x 250GB 7.2K RPM SATA 2.5” in RAID 0


