Any image/container that you download and run will have full access to your account and your data!
This is no different from running "normal" applications (binaries, scripts or source code downloaded from somewhere), but Singularity makes it very easy (e.g. singularity run docker://tensorflow/tensorflow will run potentially untrusted code as your user), so it's important to keep this in mind at all times.
If you're not sure if an image is trusted, either ask NSC for advice, or don't use the image and instead make your own from trusted sources.
The Docker official repositories (which provide e.g. basic CentOS and Ubuntu images) can be considered trusted. If you make your own images from scratch, we recommend that you always use an official image as the starting point (either from Docker or some other official source like the CentOS project).
Any application that you install on top of the operating system should also be trusted. I.e. don't add unknown code from some Ubuntu PPA repo without doing some checking first - who is behind the software, and do you trust them?
When you run Singularity, you run an operating system provided by you, not by NSC. It's good practice to keep the operating system inside your images updated. You are responsible for keeping the image updated (not NSC)!
To make it easy to update your images, we recommend that you always build your images entirely from the bootstrap file (i.e. use %post to install software and do not make any manual changes once the image is built). Then all you need to do to update the image is to re-run "sudo singularity build ..." and copy the new image to NSC.
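As a sketch, a minimal bootstrap (definition) file following this advice might look like the one below. It starts from an official Ubuntu image and does all installation in %post; the package shown is only a placeholder for your own software:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    # Update the operating system inside the image
    apt-get update && apt-get -y upgrade
    # Install your software here (wget is just an example)
    apt-get -y install wget
```

With everything in the definition file (here assumed to be named myimage.def), updating the image is a single rebuild, e.g. "sudo singularity build myimage.sif myimage.def", followed by copying the new image to NSC.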
Especially if you use operating system commands/applications (e.g. wget or curl) to access untrusted data (e.g. data files from the Internet), you should make sure you update your images regularly.
As an example, there have been bugs in wget that allowed a remote website to execute arbitrary commands as your user when you used wget to download data from it.