- How to log in
- Transferring data
- Running your first program
How to log in
The only way to connect to vilje.hpc.ntnu.no is by secure shell
(ssh), e.g. from a Linux/UNIX system:
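A minimal example (the username `myuser` is a placeholder for your own account name):

```shell
# connect to Vilje with ssh; replace myuser with your username
ssh myuser@vilje.hpc.ntnu.no
```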
Windows users will need an SSH client installed on their machine, see e.g. PuTTY.
X11 forwarding is necessary to display editor windows (gvim, emacs, nedit,
etc.) or similar on your desktop. To enable X11 forwarding, log in with
the -Y option enabled.
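For example (again with `myuser` as a placeholder username):

```shell
# -Y enables trusted X11 forwarding so GUI applications display locally
ssh -Y myuser@vilje.hpc.ntnu.no
```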
Windows users need an X server to handle the local display in addition to the ssh program, see this intro (from the University of Minnesota) for PuTTY users.
Transferring data
SCP (Secure CoPy):
scp uses ssh for data transfer, and provides the same authentication and the same security as
ssh. For example, copying a file from a local system to Vilje:
SFTP (Secure File Transfer Protocol):
sftp is a file transfer program, similar to
ftp, which performs all operations over an encrypted ssh transport. For example, putting a file from the local system onto Vilje:
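A sketch of an interactive sftp session (`myfile` and `myuser` are placeholders):

```shell
# open an sftp session to Vilje
sftp myuser@vilje.hpc.ntnu.no
# then, at the sftp> prompt:
#   put myfile
#   quit
```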
For Windows users WinSCP is a free graphical SCP and SFTP client.
Work file system
In addition to the home directory, each user also owns a directory named /work/(username), which is intended as a workspace for running jobs and as temporary storage. The contents of this directory are not backed up, and unused data may be deleted after 21 days; in return, this directory has more permissive storage quotas.
Binary Data (Endianness)
If you plan to use Fortran sequential unformatted files created on a big-endian system, like Njord, for I/O on Vilje, you need to do a big-endian-to-little-endian conversion. See this section for more information.
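With the Intel Fortran compiler, one common way to do this conversion at runtime is the F_UFMTENDIAN environment variable (assuming the program was built with Intel Fortran, as is typical on Vilje):

```shell
# tell the Intel Fortran runtime to treat unformatted files as big-endian
export F_UFMTENDIAN=big
```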
Running your first program
MPI programs are executed as one or more processes; typically one process is assigned to one physical processor core. All the processes run the exact same program, but by receiving different input they can be made to do different tasks. The most common way to differentiate the processes is by their rank. Together with the total number of processes, referred to as the size, the rank forms the basic method of dividing tasks between the processes. Getting the rank of a process and the total number of processes is therefore the goal of this example. Furthermore, all MPI-related instructions must be issued between MPI_Init() and MPI_Finalize(). Regular C instructions that are to be run locally by each process, e.g. some preprocessing that is equal for all processes, can be run outside the MPI context.
Below is a simple program that, when executed, will make each process print their name and rank as well as the total number of processes.
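One minimal way to write such a program with the standard MPI C API (a sketch; the function names match the calls explained below):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, length;
    char name[MPI_MAX_PROCESSOR_NAME];

    /* Start the MPI runtime and set up MPI_COMM_WORLD */
    MPI_Init(&argc, &argv);

    /* Which process am I, and how many are we in total? */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Name of the node this process runs on */
    MPI_Get_processor_name(name, &length);

    printf("Hello world! I am process %d of %d, running on %s\n",
           rank, size, name);

    /* Shut down the MPI runtime */
    MPI_Finalize();
    return 0;
}
```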
- MPI_Init(); Is responsible for spawning processes and setting up the communication between them. The default communicator (collection of processes) MPI_COMM_WORLD is created.
- MPI_Finalize(); Ends the MPI program.
- MPI_Comm_rank( MPI_COMM_WORLD, &rank ); Returns the rank of the process within the communicator. The rank is used to divide tasks among the processes. The process with rank 0 might get some special task, while the rank of each process might correspond to distinct columns in a matrix, effectively partitioning the matrix between the processes.
- MPI_Comm_size( MPI_COMM_WORLD, &size ); Returns the total number of processes within the communicator. This can be useful to e.g. know how many columns of a matrix each process will be assigned.
- MPI_Get_processor_name( name, &length ); Is more of a curiosity than necessary in most programs; it can assure us that our MPI program is indeed running on more than one computer/node.
Compile & Run
Save the code in a file named helloworld.c. Load the Intel compiler and SGI MPT module files:
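For example (the exact module names and versions are assumptions; list what is available with `module avail`):

```shell
# load the Intel compiler and the SGI MPT MPI library
module load intelcomp
module load mpt
```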
Compile the program with the following command:
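With the Intel compiler and SGI MPT, a typical invocation looks like this (a sketch; exact flags may differ):

```shell
# compile helloworld.c and link against the MPT MPI library
icc -o helloworld helloworld.c -lmpi
```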
Make a batch job. Add the following in a file named job.sh
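A minimal PBS script sketch; the account name, resource selection, and walltime below are placeholders that must be adapted to your own project and job:

```shell
#!/bin/bash
#PBS -N my_mpi_job
#PBS -A myaccount
#PBS -l select=2:ncpus=32:mpiprocs=16
#PBS -l walltime=00:05:00

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

module load intelcomp
module load mpt

# launch the MPI program with SGI MPT's launcher
mpiexec_mpt ./helloworld
```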
Submit the job to the queue. Note that the command
qsub returns the job identifier.
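For example, submitting the script and then checking your jobs in the queue:

```shell
# submit the batch script; qsub prints the job identifier
qsub job.sh

# list your own jobs and their status
qstat -u $USER
```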
See the queue status. Note that the example runs fast; it
may finish before the status command lists it. The
job identifier is used, together with the name of the job, to name the
output from the job. The job name is given with the -N
option in the job.sh script; in this example it is 'my_mpi_job'. The
standard output from the processes is logged to a file in the
working directory named my_mpi_job.o<job identifier>. Here is the
content from one batch execution of job.sh:
Note that the file my_mpi_job.e<job identifier> contains the output to standard error from all the processes. If the processes are executed without faults, no errors are logged (the file is empty).
More examples are provided in the MPI and MPI IO Training Tutorial.