Since Docker is a more capable container system, not all Docker containers will work in Singularity.
You can also download (singularity build ...) Docker and Singularity Hub containers into a local file. You can then modify them and run them without needing to download data over the network each time.
See the documentation on the Singularity website for more information.
Only run images from trusted sources (see this page for more information).
It is very convenient to just run singularity run docker://tensorflow/tensorflow, but every time you do, a lot of data is downloaded from Docker Hub to NSC, and a temporary image is built that is deleted when Singularity exits.
If you are going to run many jobs using a Docker container, this process will use a lot of network bandwidth and slow down your calculations.
Instead, whenever you are going to use an image more than once, first download it (e.g. singularity build /proj/.../myimage.sif docker://...) and then run it from disk (singularity run /proj/.../myimage.sif).
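The download-once, run-many-times workflow can be sketched as below. The project path and the tensorflow/tensorflow image are placeholders, not a prescribed layout; the filename derivation is just one convenient convention.

```shell
# Hypothetical example: cache a Docker Hub image as a local .sif once,
# then run it from disk on every subsequent use.
DOCKER_URI=docker://tensorflow/tensorflow
# Derive a .sif filename from the image name (tensorflow -> tensorflow.sif);
# the /proj path below is a placeholder for your own project directory.
SIF=/proj/myproj/users/me/$(basename "${DOCKER_URI#docker://}").sif

# One-time download and conversion; network traffic happens only here.
singularity build "$SIF" "$DOCKER_URI"

# Later runs (e.g. inside batch jobs) read the image from disk only.
singularity run "$SIF"
```

After the build step, repeated jobs reuse the same .sif file and no longer touch the network.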
If you will be running the same image many times from a single job, you might be able to speed things up even further by first copying the image to /scratch/local, then running it from there multiple times.
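A minimal job-script sketch of that pattern follows. The image path and the input_*.dat file names are hypothetical stand-ins for your own project files.

```shell
#!/bin/bash
# Sketch: copy a pre-built image to node-local scratch once, then run it
# many times from there, so repeated launches read local disk instead of
# the shared project file system.
IMAGE=/proj/myproj/users/me/myimage.sif        # hypothetical project path
LOCAL_IMAGE=/scratch/local/$(basename "$IMAGE")

# One copy per job, at job start.
cp "$IMAGE" "$LOCAL_IMAGE"

# Many runs against the local copy (input_*.dat is a placeholder pattern).
for input in input_*.dat; do
    singularity run "$LOCAL_IMAGE" "$input"
done
```

The one-time cost of the copy is amortized over all the runs in the job; for a single run it buys nothing.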
Don't do it. Download the image and run it from the file system. See the performance section of the Singularity Hub page for more information on why.