About the cluster

PHYSON is a compact 216-core high-performance Linux cluster dedicated to supporting scientific research and education. The machine was built as part of the "Modeling and Analysis of Complex Systems" grant ВУ-Ф-205/2006, financed by the NSRF (National Scientific Research Fund), with Prof. Nikolay Vitanov as principal investigator. The cluster has received continuous support through the following grants: NSRF ДО 02-136/2008 (Prof. Ana Proykova), ДО 02-167/2008 (Prof. Ivan Petkov), ДДВУ 02/42 (Prof. Krasimir Mitev), ДО 02-90/2008 (Prof. Nikolay Vitanov), and НИС-3249/2017 (Prof. Gergana Guerova).

Network access to the cluster

Terminal session

The cluster is accessible via the SSH (Secure Shell) remote access protocol at physon.phys.uni-sofia.bg on the standard port 22. You can use any SSH client; if you intend to run graphical applications, a client that supports tunneling of the X11 protocol is recommended.
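
A minimal example from a Unix-like terminal (the user name below is a placeholder; use the account name issued to you):

    # plain terminal session on the standard port 22
    ssh username@physon.phys.uni-sofia.bg

    # session with X11 tunneling, needed for graphical applications
    ssh -X username@physon.phys.uni-sofia.bg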

NX session

Users who wish to run graphical applications may instead open an NX session. A brief guide to setting up and using an NX client and session can be found here.

For more details, see Network Access.

Resource allocation

The cluster is a machine with finite, shared resources. Their distribution is managed on two levels:

  1. administrative – the project management grants permission to use the resources and sets individual quotas;
  2. system – the batch processing system (see work-with-sge-en) takes care of the physical allocation of resources.

To share the front node resources fairly, all long-running programs (more than 30 minutes) and programs that require a lot of memory must be run through the batch processing system (see work-with-sge-en). This includes interactive programs such as Maple and Mathematica, which should be launched as interactive tasks (see work-with-sge-en#interactive), as in the sketch below. Programs that consume more than 30 minutes of CPU time on the front node are subject to automatic termination.
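
A sketch of the usual Sun Grid Engine workflow (the script name, job name, and time limit below are illustrative assumptions; consult work-with-sge-en for the cluster's actual queues and limits):

    # job.sh - a minimal SGE batch script
    #$ -N myjob              # job name (illustrative)
    #$ -cwd                  # run in the submission directory
    #$ -l h_rt=02:00:00      # requested wall-clock time (illustrative)
    ./my_program

    # submit the script and inspect the queue
    qsub job.sh
    qstat

    # interactive programs (e.g. Mathematica) get an interactive session instead
    qlogin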

Training

The Monte Carlo Group (http://cluster.phys.uni-sofia.bg/) runs a master's course on the basics of parallel programming for shared- and distributed-memory systems using the OpenMP standard and the Message Passing Interface (MPI) library. The course also includes general training on the use of batch processing systems. For more information, see the course page at http://cluster.phys.uni-sofia.bg/hpc/ on the group's site.
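
For orientation only, compiling and running such programs on a typical Linux cluster looks roughly like this (the compiler invocations are standard, but the source files and process counts are assumptions, not PHYSON-specific instructions):

    # OpenMP: compile with GCC's OpenMP support and pick the thread count
    gcc -fopenmp hello_omp.c -o hello_omp
    OMP_NUM_THREADS=4 ./hello_omp

    # MPI: compile with the MPI compiler wrapper and launch several processes
    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 4 ./hello_mpi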

A short introductory course on cluster usage: intro-course-en.

Frequently Asked Questions

Answers to cluster FAQs can be found on the Frequently Asked Questions page: faq-en.

Statistics on the usage of the cluster

The system load can be monitored through the Ganglia monitoring system at the following address; aggregated CPU statistics are generated on a monthly basis and can be viewed here.