
Ansys

Policy
Ansys is proprietary software. Each user is required to use a valid license. A license can be obtained directly from Ansys, Inc. or from the user's respective research institution. License credentials are submitted on a per-job basis.

General
ANSYS offers a comprehensive software suite that spans the entire range of physics, providing access to virtually any field of engineering simulation that a design process requires.

Description
The Ansys software suite provides several different engineering simulation solution sets for:

  • Structural Mechanics
  • Multiphysics
  • Fluid Dynamics
  • Explicit Dynamics
  • Electromagnetics

More information can be found on the Ansys website. Product documentation can be accessed via the Ansys Customer Portal for those who have a license.

Availability
Ansys modules are available on Arctur-2. The pre-compiled binaries that were installed with the software suite support both serial and parallel job execution, limited only by the license being used.

Usage
To see what Ansys modules are available, enter:

module avail ansys
Loading a module sets up the environment variables and paths needed to use the product.
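For example, to set up the CFX environment that the job scripts below rely on:

module load ansys/cfx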

Note that you must provide proper license connection information in your job submission file in order for your job to run. See below for some examples.

If you have licenses on our servers, use the following exports:
export ANSYSLI_SERVERS=2325@licensing.arctur.io
export ANSYSLMD_LICENSE_FILE=1055@licensing.arctur.io


CFX Usage

CFX jobs can be:

  • serial (single task)
  • local parallel (several tasks on a single node)
  • distributed parallel (several tasks across multiple nodes)

Serial Example

When you have your CFX definition file ready, you need to create a Job Submission File. In this example we ask SLURM to allocate resources for just one task, and then launch the task via srun:

#!/bin/bash
#SBATCH -n 1  #ask SLURM to allocate resources for just one task, since it's a serial run
#SBATCH --time hh:mm:ss #specify a time limit for the job.
#SBATCH -J  

currDir=`pwd`
defFile=$currDir/

#specify the Ansys license server. default port is 1055
export ANSYSLMD_LICENSE_FILE=@ 

#specify the Ansys licensing interconnect server. default port is 2325
export ANSYSLI_SERVERS=@

time srun cfx5solve -def $defFile

Note that if cfx5solve can't connect to the license server and check out a valid license for the requested number of tasks (one process in this case), the job won't run.

Also, the 'time' command isn't necessary, but it can be useful to see the job's total running time when done.

Next, in the directory where you have your submit file and all your needed job files, such as the definition file, submit the job:

sbatch 

SLURM will then queue and run the job when resources become available. When the job is complete, you can view your slurm-.out file to see the job's output.
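While the job is waiting or running, you can check its state with squeue; once it has finished, view the output file (its name contains the numeric job ID that sbatch reports), for example:

squeue -u $USER   #list your queued and running jobs
cat slurm-<jobid>.out   #replace <jobid> with the ID reported by sbatch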

Local Parallel Example (single node)
When you want to run multiple parallel tasks on one node (utilizing one core per task), you should create a local parallel job. In the following example, we want to run 28 parallel tasks (convenient for Arctur-2, since one node has 28 cores):
#!/bin/bash
#SBATCH -N 1  #our job's tasks will run on 1, and only 1 node.
#SBATCH -n 28
#SBATCH --time hh:mm:ss  #time limit for our job
#SBATCH -J   #arbitrary job name to help the user identify the job

module load ansys/cfx  #setup our CFX environment

currDir=`pwd`
defFile=$currDir/

#here we call an external script which will create the node list for CFX.
#This script was made available when we loaded the ansys/cfx module.
NODES=`gen_cfx_nodelist.pl`

#specify the Ansys license server. default port is 1055 
export ANSYSLMD_LICENSE_FILE=@

#specify the Ansys licensing interconnect server. default port is 2325
export ANSYSLI_SERVERS=@

time cfx5solve -parallel -def $defFile -start-method "Platform MPI Local Parallel" -par-dist $NODES

Note that we didn't use srun to call cfx5solve; this is because cfx5solve spawns its own processes when running in parallel. We simply tell SLURM which resources to allocate via the #SBATCH directives, and then let cfx5solve start its own worker processes.

Also, note that if we request 28 tasks or fewer, such as '-n 22', but have not explicitly set '-N 1', SLURM may still distribute those tasks across more than one node; in that case the 'distributed parallel' start method should be used instead.

Distributed Parallel Example (multiple nodes)

To run more than 28 tasks we need to use multiple nodes on Arctur-2. Ansys refers to a multi-node job as a 'distributed parallel job'. In the following example, we ask SLURM to reserve resources for 224 tasks (8 nodes * 28 cores). Note that we explicitly specify we want 8 nodes, as well as exclusive access to those nodes while the job is running:

#!/bin/bash
#SBATCH -n 224  #224 tasks total (8 nodes * 28 cores)
#SBATCH -N 8  #spread the tasks across 8 nodes
#SBATCH --exclusive  #reserve the allocated nodes exclusively, as described above
#SBATCH --time hh:mm:ss  #time limit for the job
#SBATCH -J 

module load ansys/cfx

currDir=`pwd`
defFile=$currDir/boxbigorig.def
NODES=`gen_cfx_nodelist.pl`

export CFX5RSH=slurmrsh
export CFX_SOLVE_DISABLE_REMOTE_CHECKS=1

#specify the Ansys license server. default port is 1055 
export ANSYSLMD_LICENSE_FILE=@ 

#specify the Ansys licensing interconnect server. default port is 2325 
export ANSYSLI_SERVERS=@

time cfx5solve -parallel -def $defFile -start-method "Platform MPI Distributed Parallel" -par-dist $NODES

Note that the external script gen_cfx_nodelist.pl constructs a tasks-per-node list for us. We then pass this list to CFX so that it starts the proper number of tasks on each node, according to the allocation SLURM has reserved for the job.
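The contents of gen_cfx_nodelist.pl are not reproduced here; conceptually, it converts SLURM's allocation into the host list that cfx5solve expects. A minimal bash sketch of the same idea, assuming an even task spread and a 'host*tasks,host*tasks' format for -par-dist, might look like this:

#hypothetical sketch only - the real gen_cfx_nodelist.pl shipped with the module may differ
#assume tasks are spread evenly: tasks per node = total tasks / number of nodes
TPN=$(( SLURM_NTASKS / SLURM_JOB_NUM_NODES ))
NODES=""
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
  NODES="${NODES:+$NODES,}${host}*${TPN}"
done
echo "$NODES"   #e.g. node01*28,node02*28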
TIP for CFX parallel job efficiency:

For a CFX parallel job to run at peak efficiency, Ansys generally recommends one processing core for every 250,000 cells. For example, if you have a definition file with 7,000,000 cells, then you should run a parallel job with 28 tasks. Your mileage may vary, so it's always good to experiment.
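As a quick sanity check, you can compute the recommended task count directly from the cell count; a tiny sketch using the example above:

#rough sizing estimate based on the ~250,000 cells-per-core guideline
CELLS=7000000
echo $(( (CELLS + 249999) / 250000 ))   #prints 28 (rounds up to whole cores)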

Fluent Usage

Fluent jobs can be:

  • serial (single task)
  • parallel (several tasks across one or more nodes)

Journal file


You should first prepare your journal file. Here is a simple example:

; Fluent Example Input File
;----------------
; Read case file
/file/read-case LIRJ.cas
;----------------
; Run the solver for 3000 iterations
/solve/iterate 3000
;----------------
; Save the Data file
/file/write-data LIRJ.dat
/exit yes

A relevant section on journal files can be found in the CFD Online FAQ for Fluent.

Serial Job
An example submission file for a serial job could be as follows:

#!/bin/bash
#SBATCH -n 1  # only allocate 1 task 
#SBATCH -t 08:00:00  # upper limit of 8 hours to complete the job
#SBATCH -J fluent1 # sensible name for the job

module load ansys/fluent

export FLUENT_GUI=off

export ANSYSLI_SERVERS=@
export ANSYSLMD_LICENSE_FILE=@

time fluent 2ddp -g -i  > fluent1.out 2> fluent1.err

The example above will run one task, with standard output going to the fluent1.out file, and error output going to the fluent1.err file.

Parallel Job
To run several tasks in parallel on one or more nodes, the submission file could be as follows (this example is for Arctur-2; for another cluster, adjust the number of nodes (-N) and tasks (-n) you ask for, remembering how many cores each node has):

#!/bin/bash
#SBATCH -N 2  # allocate 2 nodes for the job
#SBATCH -n 56  # 56 tasks total
#SBATCH -t 04:00:00  # upper time limit of 4 hours for the job
#SBATCH -J fluentP1 # sensible name for the job

module load ansys/fluent

export FLUENT_GUI=off
export ANSYSLI_SERVERS=@
export ANSYSLMD_LICENSE_FILE=@

#work out the total number of tasks (N) from SLURM's environment
if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi
echo -e "N: $N\n"

# run fluent in batch on the allocated node(s)
time fluent 2ddp -g -slurm -t$N -mpi=pcmpi -i > fluentP1.out 2> fluentP1.err