Berzelius Container Modules are lightweight wrappers that make it possible to transparently use Apptainer containers as environment modules.
They offer the following benefits:
Use familiar environment module commands, ensuring a minimal learning curve and no change to existing workflows
Leverage all the benefits of containers, including portability and reproducibility
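A typical session looks like the following sketch (the module commands only exist on the cluster; the module name shown is one of those documented on this page):

```shell
# Discover available container modules
module avail apptainer
# Load one; the wrapper puts the container-backed commands on PATH
module load PyTorch/2.0.0-apptainer
# Confirm what is currently loaded
module list
```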
Please use container modules on compute nodes, where you have access to Apptainer.
We store all containers at /software/sse/containers on Berzelius.
To load the AlphaFold container module:
module load AlphaFold/2.3.1-apptainer
We provide a copy of the AlphaFold database on Berzelius at /proj/nsc/users/alphafold_db_2.3.1,
which should only be used for test purposes. Please download your own copy for your research.
First, set paths for the database and the results folder:
ALPHAFOLD_DB=/proj/nsc/users/alphafold_db_2.3.1
ALPHAFOLD_RESULTS=/proj/nsc_testing/xuan/alphafold_results_2.3.1
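The results folders can be prepared up front (a minimal sketch; the input and output subfolder names match the run command further down, and the paths are cluster-specific):

```shell
ALPHAFOLD_DB=/proj/nsc/users/alphafold_db_2.3.1
ALPHAFOLD_RESULTS=/proj/nsc_testing/xuan/alphafold_results_2.3.1
# Create the folders the run command below expects
mkdir -p "${ALPHAFOLD_RESULTS}/input" "${ALPHAFOLD_RESULTS}/output"
```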
We download the database. It will take about 2.6 TB of disk space.
bash scripts/download_all_data.sh ${ALPHAFOLD_DB}
To make the best use of the GPU resources on Berzelius, we strongly suggest separating the CPU and GPU parts when running AlphaFold jobs: run the CPU part on Tetralith or on your local computer, then run the GPU part on Berzelius.
bash run_alphafold.sh \
-d ${ALPHAFOLD_DB} \
-o ${ALPHAFOLD_RESULTS}/output \
-f ${ALPHAFOLD_RESULTS}/input/T1050.fasta \
-t 2021-11-01 \
-g true \
-P 3 \
-F false
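For the CPU part, a hedged sketch is shown below. The flag meanings are assumed from the GPU run above (-g toggles GPU use); check the usage message of run_alphafold.sh before relying on them:

```shell
# Hypothetical CPU-only run (e.g. on Tetralith): -g false disables the GPU,
# the remaining flags are the same as in the GPU run above
bash run_alphafold.sh \
    -d ${ALPHAFOLD_DB} \
    -o ${ALPHAFOLD_RESULTS}/output \
    -f ${ALPHAFOLD_RESULTS}/input/T1050.fasta \
    -t 2021-11-01 \
    -g false
```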
module load PyTorch/2.0.0-apptainer
The container is triggered transparently by python commands.
For example, we can check the available GPU with:
python -c "import torch; print('GPU Name: ' + torch.cuda.get_device_name(0))"
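The one-liner above fails with an exception off the cluster or when no GPU is allocated. A more defensive sketch (plain Python, no assumptions beyond the torch API shown above) degrades gracefully in both cases:

```python
import importlib.util

def describe_gpu() -> str:
    """Report the visible GPU, degrading gracefully when torch or CUDA is absent."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed in this environment"
    import torch
    if not torch.cuda.is_available():
        return "torch installed, but no CUDA device visible"
    return "GPU Name: " + torch.cuda.get_device_name(0)

print(describe_gpu())
```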
module load PyTorch/2.0.0-apptainer
cd /tmp
git clone https://github.com/pytorch/examples
cd examples/mnist
python main.py
The results would look like:
...
Train Epoch: 14 [56960/60000 (95%)] Loss: 0.019554
Train Epoch: 14 [57600/60000 (96%)] Loss: 0.001885
Train Epoch: 14 [58240/60000 (97%)] Loss: 0.011090
Train Epoch: 14 [58880/60000 (98%)] Loss: 0.002160
Train Epoch: 14 [59520/60000 (99%)] Loss: 0.001996
Test set: Average loss: 0.0274, Accuracy: 9915/10000 (99%)
To check the version of a package (e.g. numpy) installed inside the container:
python -c "import os; os.system('mamba list numpy')"
module load TensorFlow/2.11.1-apptainer
The container is triggered transparently by python commands.
For example, we can check the GPU info with:
python -c "import tensorflow as tf; print(tf.test.gpu_device_name())"
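As with the PyTorch check, a more defensive sketch avoids an exception off the cluster; tf.config.list_physical_devices("GPU") returns an empty list when no GPU is visible:

```python
import importlib.util

def list_gpus() -> list:
    """Return the visible GPU devices, or [] when TensorFlow or a GPU is absent."""
    if importlib.util.find_spec("tensorflow") is None:
        return []
    import tensorflow as tf
    return tf.config.list_physical_devices("GPU")

print(list_gpus())
```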
module load TensorFlow/2.11.1-apptainer
cd /tmp
wget -O tensorflow_mnist_example.py "https://liuonline-my.sharepoint.com/:u:/g/personal/xuagu37_liu_se/EV416fjpNHVBujn25mbDguYBzxWlBCTdMaluxgesyiCDKw?e=8VH53p&download=1"
python tensorflow_mnist_example.py
The results would look like:
...
Epoch 1/6
469/469 [==============================] - 7s 2ms/step - loss: 0.3538 - sparse_categorical_accuracy: 0.9017 - val_loss: 0.2000 - val_sparse_categorical_accuracy: 0.9411
Epoch 2/6
469/469 [==============================] - 1s 1ms/step - loss: 0.1641 - sparse_categorical_accuracy: 0.9529 - val_loss: 0.1384 - val_sparse_categorical_accuracy: 0.9603
Epoch 3/6
469/469 [==============================] - 1s 1ms/step - loss: 0.1181 - sparse_categorical_accuracy: 0.9664 - val_loss: 0.1127 - val_sparse_categorical_accuracy: 0.9668
Epoch 4/6
469/469 [==============================] - 1s 1ms/step - loss: 0.0925 - sparse_categorical_accuracy: 0.9733 - val_loss: 0.0969 - val_sparse_categorical_accuracy: 0.9706
Epoch 5/6
469/469 [==============================] - 1s 1ms/step - loss: 0.0747 - sparse_categorical_accuracy: 0.9780 - val_loss: 0.0886 - val_sparse_categorical_accuracy: 0.9725
Epoch 6/6
469/469 [==============================] - 1s 1ms/step - loss: 0.0622 - sparse_categorical_accuracy: 0.9818 - val_loss: 0.0874 - val_sparse_categorical_accuracy: 0.9737
To check the version of a package (e.g. numpy) installed inside the container:
python -c "import os; os.system('mamba list numpy')"