About

An introductory and wiki page for the High Performance Computing clusters available to users from the University of Applied Sciences Esslingen, also known as Hochschule Esslingen (HE).

HE academic researchers are provided with direct access to the bwGRiD and bwUniCluster platforms, free of charge, in order to run non-commercial calculations and simulations.

Each cluster has its own infrastructure and uses cluster-specific resource management tools, software packages, libraries, development tools, etc. A user may therefore have to adjust their work procedures for each cluster accordingly.
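
As a minimal sketch of how such an adjustment looks in practice: bwHPC-style clusters typically provide their software through environment modules, so a session usually starts by inspecting and loading modules. The module names below are placeholders for illustration; use whatever module avail reports on the cluster you are working on.

    # List the software modules available on this cluster
    module avail

    # Load a compiler and an MPI library; the names below are placeholders -
    # use the exact names and versions reported by 'module avail'
    module load compiler/gnu
    module load mpi/openmpi

    # Show which modules are currently loaded
    module list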

Note: after you choose a cluster, a cluster-specific navigation menu with the corresponding wiki sections appears in the upper right corner.


bwUniCluster 2.0

As part of the bwHPC project, bwUniCluster is a modern system consisting of more than 840 SMP nodes with 64-bit Intel Xeon processors, providing access to users from multiple universities in Baden-Württemberg.

Each node on the cluster has at least two Intel Xeon processors, local memory ranging from 96 GB to 3 TB, local SSD disks, network adapters and, optionally, accelerators (NVIDIA Tesla V100). All nodes are connected via a fast InfiniBand interconnect as well as to an external file system based on Lustre.

More information about the hardware and architecture can be found here.

Workload manager: SLURM.
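
A minimal SLURM batch-script sketch is shown below; the partition name single and the resource limits are assumptions for illustration and should be checked against the current bwUniCluster documentation.

    #!/bin/bash
    # Minimal single-core job; adjust partition, time and memory to your needs.
    #SBATCH --job-name=hello
    #SBATCH --partition=single
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00
    #SBATCH --mem=2gb

    # Trivial payload: report where and when the job ran
    hostname
    date

Such a script would be submitted with sbatch job.sh and its state checked with squeue.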



bwGRiD Esslingen (no longer active)

The Baden-Württemberg Grid (bwGRiD) was part of the D-Grid initiative and provided more than 12,000 cores for research and science at 8 locations in Baden-Württemberg. Participating partners were the Universities of Freiburg, Heidelberg, Tübingen, Mannheim and the Ulm/Konstanz network, the Esslingen University of Applied Sciences, the Karlsruhe Institute of Technology and the High Performance Computing Centre (HLRS) in Stuttgart.

An NEC LX-2400 cluster, it is an older project with outdated hardware to which HE users are still granted access. It currently has about 75 nodes with Intel Nehalem processors (2.27 GHz and 2.8 GHz, 8 cores each) and 22 nodes with 32 virtual cores, totaling about 1264 active cores. The nodes have 24 GB and 64 GB of memory, respectively. None of the systems has a local hard disk.

The file system used is the NEC LXFS with 36 TB, a high-performance parallel file system based on Lustre.

Each blade is connected to the local network with Gigabit Ethernet. This network is used for administration and for logging on to the systems. Each blade has a QDR InfiniBand interface (transfer rate: 40 Gbit/s) for the transfer of data and results of calculations. The InfiniBand network is designed as a HyperCube with a total of 192 edge ports. Both the data exchange of parallel programs and the connection to the NEC LXFS are carried out via InfiniBand.

Workload manager: the MOAB scheduler combined with the TORQUE (PBS-based) resource manager.
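
For historical reference, a job on this cluster would be described by a TORQUE/PBS batch script and scheduled by MOAB. The sketch below uses only standard TORQUE directives; the queue name workq and the resource values are assumptions for illustration.

    #!/bin/bash
    # Minimal single-node job; the queue name 'workq' is an example only.
    #PBS -N hello
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=00:10:00
    #PBS -q workq

    # TORQUE starts jobs in the home directory; switch to the submit directory
    cd $PBS_O_WORKDIR

    # Trivial payload: report where and when the job ran
    hostname
    date

Such a script would be submitted with qsub job.sh; qstat (TORQUE) and showq (MOAB) display the queue state.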


Disclaimer

These systems are not designed to collect or back up data. Therefore, do not store any data on the clusters that you cannot afford to lose.
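
For example, results can be copied off the cluster with standard tools such as rsync; the user name, host name and paths below are placeholders to be replaced with your own.

    # Run on your local machine: pull results from the cluster into a local folder
    rsync -avz <user>@<cluster-login-node>:~/results/ ./results/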

For further questions you may contact M. Vögtle at: michael.voegtle[at]hs-esslingen.de

Apply

Interested? Apply now for the summer semester 2025!

Your personal contact

Prof. Dr. rer. nat. Gabriele Gühring

Tel: +49 711 397-4376
E-Mail: Gabriele.Guehring@hs-esslingen.de