File systems on the clusters (fimm)
The following file systems exist on fimm.
User area (home directories): /home
/home/fimm -- file system for user home directories on fimm. Files are backed up daily, except for folders called "scratch" or "tmp" and their sub-folders.
General software installation directory: /local
/local -- for local installations on fimm of popular or general-purpose software. This file system is accessible from all compute nodes. For use by the system administrator only. Contact firstname.lastname@example.org if you would like software/programs installed in /local.
The fastest local file system: /scratch
/scratch -- the fastest available (local) file system for each compute node of fimm. Files are not backed up. Files created in /scratch should be deleted after the job is finished. Use this area for optimal I/O performance. On a fimm compute node, it is about 70 GB.
Work area (temporary data): /work/$USERNAME
/work -- large external storage shared by all compute nodes on fimm. Files are not backed up. Not as fast as /scratch, but /work has much larger capacity. Use /work if multiple processors or multiple compute nodes need to access the same file(s), or if /scratch is too small. /work on fimm has automatic deletion scripts that will delete from the oldest files onward when the filesystem goes beyond 80% usage. Please delete files as soon as possible from this filesystem.
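Since old files in /work are deleted automatically once usage passes 80%, it is good practice to clean up your own files proactively. The following is a minimal sketch, assuming your work area is /work/$USER; the directory check makes it a no-op elsewhere:

```shell
# Sketch: remove your files in /work that have not been accessed for a week.
# WORKDIR is an assumption; on fimm it would typically be /work/$USER.
WORKDIR="${WORKDIR:-/work/$USER}"
if [ -d "$WORKDIR" ]; then
    # -atime +7: files last accessed more than 7 days ago
    find "$WORKDIR" -type f -atime +7 -print -delete
fi
```

Adjust the `-atime` value to your own needs; `-print` lists what is removed and can be dropped for quiet operation.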
Data archive: /migrate
/migrate -- the area for archiving data, i.e., for the automatic migration of data between disk and tape. Can be used upon request only. Transfer of data to or from /migrate is only meant for users that have large collections of data (tens of Gigabytes or more, typically the result of long simulations) that need to be archived and that cannot be stored in the home directories. See here for more details.
Data archive for Bjerknes: /bcmhsm
/bcmhsm -- the equivalent of /migrate for Bjerknes users. The same rules apply as for /migrate. It is NFS-mounted on fimm as /bcmhsm. For Bjerknes users, symlinks point from /migrate/username to /net/bcmhsm.
File system for Bjerknes: /home/bjerknode
This is a dedicated file system for Bjerknes users. Note that this file system is not backed up either.
Correct usage. These are general rules that you should keep in mind.
* Use your home directory only to store permanent data (program source files, makefiles, scripts, compressed data files, etc.).
* Use /scratch and /work to store files that are needed or exist only temporarily. These include executables, object files, uncompressed data files, etc. Such files are typically needed only during execution of a job and can often be regenerated from the source files in your home directory.
* Transfer of data to and from the /scratch area is the fastest. If your program performs intensive I/O, you should ensure that all input files, intermediate files, and output files are read from and written to the /scratch area.
* Files in /work and /scratch may be deleted automatically (without notice) if they have not been used in the last seven days.
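The staging pattern described by these rules can be sketched as a small shell function. The function name and arguments are hypothetical, and SCRATCH_BASE stands in for /scratch/$USER on a compute node:

```shell
# stage_job PROG INPUT WORKDIR -- hypothetical sketch of the recommended
# I/O pattern: stage the input to the fast local /scratch area, run the
# program there, copy the result to shared /work, and clean up.
stage_job() {
    prog=$1 input=$2 workdir=$3
    scratch="${SCRATCH_BASE:-/scratch/$USER}/job.$$"
    mkdir -p "$scratch"
    cp "$input" "$scratch/"                            # stage input to local disk
    ( cd "$scratch" && "$prog" "$(basename "$input")" > output.dat )
    cp "$scratch/output.dat" "$workdir/"               # results to shared storage
    rm -rf "$scratch"                                  # delete scratch files when done
}
```

A real job script would typically inline these steps rather than use a function, but the order of operations is the same.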
$HOME and $USER -- when your account on one of the machines is created, you get a personal home directory (called $HOME) and a work directory called /work/$USER, where $USER is your account name. The processors on all compute nodes of fimm have access to your home and work directories on fimm. You can create your own /scratch/$USER directory on each of the compute nodes.
Remote access -- on the cluster fimm, a compute node can access another node's local /scratch area via the network.
See the FAQ page on how to copy data from the /scratch areas. On fimm, the command 'qstat -f jobid' or 'qstat -n jobid' lists the nodes on which your job is running.
Disk quota -- all users get by default a soft quota and a hard quota for their home directory. If the soft quota is exceeded for more than 7 days, or the hard quota is exceeded, you will not be able to create any more files in your home directory. You can check your disk usage (KB), soft quota (quota) and hard quota (limit) with the 'mmlsquota' command. There is also a limitation on the number of files in /home/fimm. The soft quota is 150000, and the hard quota is 300000.
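Besides 'mmlsquota', you can estimate your file-number usage yourself by counting the objects under your home directory. The helper below is a generic sketch, not a fimm-specific tool; the quota values are the ones quoted above:

```shell
# check_file_quota DIR -- hypothetical helper: count files and directories
# under DIR and compare the count against the /home/fimm file-number
# quotas (soft 150000, hard 300000).
check_file_quota() {
    n=$(find "$1" | wc -l | tr -d ' ')
    if [ "$n" -gt 300000 ]; then
        echo "$n objects: over the hard quota"
    elif [ "$n" -gt 150000 ]; then
        echo "$n objects: over the soft quota"
    else
        echo "$n objects: within quota"
    fi
}
```

Note that 'find' on a large home directory can take a while; 'mmlsquota' remains the authoritative source.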
Backup and Restore
Our systems are connected to a secondary storage device (tape robot) with more than 500 terabytes of tape capacity. The tape robot is used for storing backup data and for archiving (large) files. It is generally available to users of the IBM cluster fimm and the Cray XT4 hexagon.
Incremental backups (only modified files) of user home directories (/home) are made every night. All versions of a file from the last 90 days are available from backup; a deleted file remains in backup for 365 days before it expires.
The following files are excluded from backup:
* Files named "core". If you value your core files, compress them with e.g. gzip or bzip2. They will then get another name (core.gz or core.bz2) and be included in the backup.
* Files ending with the extension ".o". These are typically temporary object files generated during compilation.
* Contents of (sub)directories named tmp, TMP, temp, TEMP or scratch. Do not put any valuable data in such directories.
Backups are not made of directories for temporary storage (e.g. /work or /scratch).
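As noted above, compressing a core file renames it, so the compressed copy is picked up by the nightly backup. A minimal sketch (COREDIR is a placeholder for wherever your job dumped core; the guard makes it a no-op if no core file exists):

```shell
# gzip renames the file (core -> core.gz), so unlike a plain "core"
# file, the compressed version is included in the nightly backup.
# COREDIR is an assumption for illustration.
for f in "${COREDIR:-.}"/core; do
    if [ -f "$f" ]; then
        gzip "$f"        # produces core.gz in place of core
    fi
done
```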
Retrieving files from backup
dsmc is the command to restore files from backup. The command can be used by all users (not only root), provided you have read and write permissions on the files that you want to restore.
For the IBM e1350 cluster (fimm), only files in /home/fimm are backed up.
For the Cray XT4 (hexagon), only files in /home are backed up.
To get usage information, execute
# dsmc help
Retrieving files from backup requires some time (depending on e.g. file size and workload of the tape robot) and can take between a few seconds and several minutes.
For fimm, to restore the latest available version of the file /home/fimm/plab/utby/smit.log, execute:
# dsmc restore /export/fimm/plab/utby/smit.log
If you need to restore an older version of a file use:
# dsmc restore -inac -pick /export/fimm/plab/utby/smit.log
Select the version you want to restore, and restore it.
Multiple files can be restored using wildcards:
# dsmc restore '/export/fimm/plab/utby/smit.*'
Restoring files to a different location can be done by specifying a restore point path:
# dsmc restore '/home/plab/utby/smit' '/work/utby/smit'
Multiple files can be restored to a restore point, by using wildcards. E.g:
# dsmc restore '/home/plab/utby/smit/*.*' '/work/utby/smit/'
Important options to dsmc are:
-sub=yes              # restore a whole file tree
-pick                 # interactive mode to select which files to restore
-inac                 # select from older (inactive) versions of files
-todate=DD.MM.YYYY    # select the newest version of files up to DD.MM.YYYY
-fromdate=DD.MM.YYYY  # select the newest version of files from DD.MM.YYYY
A complete list of options can be found under:
# dsmc help
Enter the number of the desired help topic or 'q' to quit, 'd' to scroll down, 'u' to scroll up.
Problems restoring files? Send problem report to