Argonne National Laboratory Nuclear Engineering Division

Think, explore, discover, innovate
U.S. Department of Energy

Advanced Computation & Visualization

Advanced Computation


 
Part of the two HPC clusters Eddy and Eddy2.

The Nuclear Engineering Division has three HPC clusters as its advanced computational Multiple-Instruction Multiple-Data platform for performing computations in engineering mechanics, fluid dynamics, Monte Carlo simulation, and engineering analysis. The three clusters comprise a 15-node dual-core Ethernet cluster, a 46-node multicore (368-core) InfiniBand cluster, and an 8-node multicore/large-memory (256 cores, 128 GB per node) InfiniBand cluster.

The front-end servers have both Intel Xeon and AMD Opteron processors, and one front-end node is a large-memory system with 198 GB of RAM for CFD meshing. The compute node processors vary by cluster, using both Intel Xeon and AMD Opteron. NE also has three large-memory non-clustered machines for large meshing jobs, with 512 GB and 1 TB of total memory.

The two multicore clusters have their file systems provided by a high-performance Panasas fileserver (PAS 8).

The primary operating system used on the clusters is CentOS.

Compilers:

PGI Workstation 3.1 (F77, F90, HPF, C++, C), Intel Fortran Pro, and Absoft F90 Pro, along with MPI libraries.

 

Applications

The following applications have been ported to the cluster:

  • CFX
  • ANSYS
  • IMPACT21
  • MESHLESS
  • Monte Carlo
  • NEPTUNE
  • Star-CD
  • Star-CCM+
  • Matlab
  • COMSOL
  • Fluent
  • Abaqus
  • Transims

Part of the two HPC clusters Eddy and Eddy2 is shown in the figure above.

Last Modified: Fri, May 30, 2014 1:48 PM

 

For more information:

Roger N. Blomquist
Information Technology Section
Fax: +1 630-252-4500

 

U.S. Department of Energy | UChicago Argonne LLC
Privacy & Security Notice | Contact Us | Site Map