Computing and storage resources for academia
Different computing and storage resources are available to LBMC staff (researchers, engineers, postdocs, students, etc.), from local to national and international ones, scaling from small individual machines (tier 3) to super-computers of increasing power (tiers 2, 1 and 0).

The following table summarizes the different categories of computing resources available for academic research (at increasing scales):
| Tier | Scale | Computing power | Storage | Usage | Access |
|---|---|---|---|---|---|
| 3 | Local | Personal computers (1-20 CPU cores, 1 GPU) | 0.5-10TB | Development, prototyping, small computations/analysis/simulations/training/inference | Direct |
| 2 | Regional (meso-centres) | Super-computers (100-10000 CPU cores, 10-100 GPUs) | 100TB-10PB | Development, prototyping, scaling up, bigger computations/analysis/simulations/training/inference | Access upon request for local members |
| 1 | National | Super-computers (10000-100000 CPU cores, 100-1000 GPUs) | 10-100PB | Massive computations/analysis/simulations/training/inference | Project call |
| 0 | European | Super-computers (10000-1000000 CPU cores, 100-1000 GPUs) | 10-100PB | Massive computations/analysis/simulations/training/inference | Project call |
Vocabulary:
- CPU: central processing unit, the main computing unit of a computer, containing several computing cores (from a few up to about a hundred cores per CPU)
- GPU: graphics processing unit, a processor originally dedicated to image processing, but also very efficient for general array/tensor operations
- TB: terabyte (10¹² bytes), storage unit
- PB: petabyte (10¹⁵ bytes), storage unit
- training/inference: in machine learning, a model is first trained on available data and can then be used for inference (prediction) on new data
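To see what tier-3 resources your own machine offers, a few standard Linux commands report core counts, memory and disk space (a minimal sketch; `free` comes from procps and is Linux-specific, so output and availability vary by machine):

```shell
# Inspect local (tier 3) resources on a Linux machine
nproc                 # number of available CPU cores
free -h               # total and available memory, in human-readable units
df -h "$HOME"         # disk space on the volume holding your home directory
```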
Usage
Generally, large computing resources (tier 2, 1 and 0) can be used for different purposes:
- High Performance Computing (HPC): general purpose computing using one or multiple powerful computing servers (using CPUs and/or GPUs) for batch processing
- Artificial Intelligence (AI) model training: AI-oriented computations, generally using one or multiple GPUs
- Cloud computing: virtual infrastructure (e.g. on-demand virtual machines), platform or software as a service
Depending on your computing needs, you should use the relevant computing resources. Switching from one computing scale (e.g. your laptop) to a bigger one (e.g. a regional meso-centre) requires checking that your code remains efficient when scaling up. You should always scale up progressively (e.g. tier 3 to tier 2, then if necessary tier 2 to tier 1, and so on), so that you can verify that your code will run efficiently on the larger computing infrastructure.
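A simple way to check scaling behaviour before moving to a larger tier is to time your code on increasing input sizes and watch how the wall time grows. The sketch below uses `seq | sort` as a stand-in for your own program (the workload and sizes are illustrative assumptions):

```shell
# Rough scaling check: time a computation at increasing input sizes.
# `seq "$size" | sort -n` stands in for your actual program.
for size in 100000 1000000; do
    start=$(date +%s%N)                     # GNU date: nanoseconds since epoch
    seq "$size" | sort -n > /dev/null       # replace with your own computation
    end=$(date +%s%N)
    echo "size=$size wall_ms=$(( (end - start) / 1000000 ))"
done
```

If the wall time grows much faster than the input size, it is worth profiling and optimizing before requesting time on a bigger machine.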
You can contact the biocomputing hub if you need help scaling up your code.
Computing facilities generally also provide storage resources (either fast-access storage for computing purposes, or slower-access storage for backup/archiving).
Computing resources generally run the Linux operating system (OS) and are accessed through a command line interface (CLI), except for some (not all) cloud computing facilities that provide a graphical user interface (GUI) through a web interface (accessible in your web browser).
If you are not confident using the Linux OS and the command line interface, you can register for the “Unix and command line interface” training course offered to ENS biology lab members each year during the fall semester by the Conseil d’Analyses Numériques (CAN) of the SFR BioSciences federative structure.
Tier 3: local computing
At LBMC, there are no shared computing resources hosted locally at the lab level. However, some of us have access to powerful desktop workstations for software development, prototyping and running small computations/analysis/simulations/training/inference.
You can ask your colleagues or the biocomputing hub if you need specific resources (e.g. a more powerful laptop or desktop) for programming/development related to your project.
Tier 2: regional centers
CBPsmn
At ENS, we have access to a dedicated meso-centre (regional scale) called the Centre Blaise Pascal de Simulation et de Modélisation Numérique (CBPsmn), which provides computing and storage resources free of charge to members of ENS labs.
The CBPsmn provides two kinds of infrastructure:
The PSMN cluster (~30000 CPU cores, 60 GPUs, 10PB storage) for batch processing:
- PSMN documentation
- PSMN account request
- Tips for PSMN access and usage
PSMN usage: you can install/compile software on the login/submission nodes, but you cannot compute on them; you should use the Slurm scheduling system to reserve computing resources for your computations (called jobs).
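A Slurm job is typically described by a batch script whose `#SBATCH` comment lines request resources; the sketch below is a minimal example (resource values are illustrative, and `my_analysis.sh` is a hypothetical program — check the PSMN documentation for the actual partitions and limits):

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # job name shown in the queue
#SBATCH --nodes=1                # run on a single node
#SBATCH --cpus-per-task=8        # CPU cores reserved for the job
#SBATCH --mem=16G                # memory reserved for the job
#SBATCH --time=02:00:00          # wall-time limit (hh:mm:ss)

# The commands below run on the reserved compute node, not the login node.
echo "Job running on $(hostname)"
# ./my_analysis.sh               # replace with your actual computation (hypothetical)
```

Submit the script with `sbatch job.sh`, monitor your jobs with `squeue -u $USER`, and cancel one with `scancel <jobid>`.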
The CBP facility provides various resources:
- cloud@CBP (~3000 CPU cores, ~160 GPUs, ~2PB storage, including a wide variety of hardware models) for interactive computing, specifically targeted for development, prototyping and testing on various hardware, see the dedicated resource monitoring page to choose a computing server
- CBP-cluster (~3000 CPU cores) for batch computing, see the dedicated resource monitoring page
- CBP account request
cloud@CBP policy: do not compute in your /home directory, use the /local volume storage available on each machine.
MesoNet
MesoNet is a federation of French (regional) computing meso-centres with a single access point, providing computing and storage resources. It is open to French academic research personnel through the eduGAIN identity federation (e.g. with your @ens-lyon.fr credentials).
MesoNet provides infrastructure for HPC, cloud computing and federated storage.
Here are some useful links:
- documentation (in French)
- account request and access (click on Connexion avec EduGain)
Upon requesting account creation, you will receive an e-mail with a link to validate your request; do not forget to do so. Then, when your account is active, you will receive a confirmation e-mail. If you used the eduGAIN federation to create your account, you should NOT use the provided link to set up a password.
Tier 1: national centers
GENCI
For larger computing needs, you can request access to one of the three national computing centers (IDRIS, CINES and TGCC) federated by the public operator GENCI.
Computing resources are allocated through project calls, either throughout the year (dynamic access for projects requesting ≤ 50 kh normalized GPU hours / 500 kh CPU hours) or through two project calls in January-February and June-July (regular access for larger resource requests).
Useful links:
IN2P3 CC
As members of the LBMC, we have free access to the IN2P3 computing center (CC IN2P3), with dedicated computing and storage quotas.
Useful links:
- Account request and access (you should ask to be part of the “LBMC” group/collaboration)
- Documentation
IFB
The Institut Français de Bioinformatique (IFB) provides computing and storage resources for academic research in biology through the eduGAIN identity federation (e.g. with your @ens-lyon.fr credentials).
- IFB cloud for cloud computing based on virtual machine deployment (through a web interface), with preconfigured services available, such as domain-specific environments (for genomics, bioimaging, metabolomics, etc.) or generic apps (Linux distributions, workflow environments, RStudio Server, Shiny Server, Jupyter notebooks, etc.)
Tier 0: European centers
See the EuroHPC initiative and PRACE infrastructure.