Storage
Hardware
The central part of the storage solution is the tape library, an IBM 3584 (renamed IBM TS3500 on May 9, 2006), which in its current configuration can hold 1575 tapes, each capable of storing 800GB of uncompressed data. The library consists of four frames: one base frame and three expansion frames. A total of 15 expansion frames can be attached to the base frame; installing a new expansion frame takes only a couple of hours and adds 440 tape slots to the system. Tape drives can be installed in place of tape slots. Today we have nine tape drives installed in the base frame.
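
As a back-of-the-envelope check, the figures above put an upper bound on the library's native (uncompressed) capacity. A small sketch:

```shell
# Rough native capacity implied by the numbers above:
# 1575 slots, 800GB per (LTO4) tape, no compression.
slots=1575
per_tape_gb=800
total_gb=$((slots * per_tape_gb))
total_tb=$((total_gb / 1000))
echo "max native capacity: ${total_gb} GB (~${total_tb} TB)"   # roughly 1.26 PB
```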

The tape drives we use are LTO3 and LTO4, which handle the LTO (Linear Tape-Open) format. The drives have a maximum native speed of 120MB/s, provided that the feeding device, usually a computer with disk storage, can deliver data at that rate steadily. If the computer becomes busy and the data rate drops, the drive slows down to a lower speed and waits a while before speeding up again, once the incoming data rate is sufficient.
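
To get a feel for these numbers, the time needed to fill one 800GB tape at the maximum native speed can be estimated as:

```shell
# Time to stream a full 800GB tape at 120MB/s (decimal units),
# assuming the feeding computer keeps the drive streaming throughout.
capacity_mb=$((800 * 1000))
speed_mb_s=120
seconds=$((capacity_mb / speed_mb_s))
echo "~$((seconds / 60)) minutes per tape"   # just under two hours
```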

The rest of the hardware consists of two TSM servers, one HSM server, a smaller server serving as interface/front end, three disk systems and two SAN switches. The two TSM servers and the HSM server are connected to the SAN network, where they have their main disk storage, and to an internal network. The front end is connected to the Internet and directly to the migration server (Jaix) via gigabit Ethernet. See the picture below.

[Figure: NSC storage solution layout]

The servers Soshine and Tsm2 run the software that tracks where data is located and are currently the only machines accessing the tape library and the tape drives. They have two main purposes: to serve as backup servers and to work as a back end for the migration server Jaix, which automatically migrates user data to and from tape on request from the users. The backup part mainly covers our internal backup needs, but we also serve other sites and provide mirror backups for others using the same software. We are currently mirroring parts of our TSM server with HPC2N in Umeå.

Performance

The total throughput of the TSM server is approximately 250-300MiB/s, depending on the type of operation. Receiving large files from the migration server while moving data to tape from two disk storage pools at the same time usually gives the best throughput. With a client that has fairly decent disk performance, backup and restore operations average 150GiB/hour. The maximum input/output is limited by the gigabit interface we use for external communication, so 2-3 fast clients can be served at full speed concurrently.
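
These throughput figures translate directly into wall-clock time for large transfers. A quick estimate, using the 30MB/s client rate mentioned later in this guide:

```shell
# Estimated transfer time for a 2TB dataset at a sustained 30MB/s.
size_mb=$((2000 * 1000))   # 2TB in MB (decimal units)
rate_mb_s=30
hours=$(( size_mb / rate_mb_s / 3600 ))
echo "~${hours} hours for 2TB at ${rate_mb_s}MB/s"
```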
 

FTP-server: IBM eServer 336

  • CPU: Dual Intel Xeon 3.2GHz
  • Memory: 2GiB

HSM-server: IBM pServer 9110-510

  • CPU: Dual POWER5 1.5GHz
  • Memory: 4GiB
  • Storage disk: 14 x 372GiB, SATA2

TSM-server1: IBM eServer 346

  • CPU: Dual Intel Xeon 3.2GHz
  • Memory: 4GiB
  • Database disk: 6 x 68GiB, SCSI 15k rpm
  • Storage disk: 12 x 372GiB, SATA2

TSM-server2: DELL 2950

  • CPU: Quad-core Intel Xeon E5420 2.5GHz
  • Memory: 8GiB
  • Database disk: 6 x 68GiB, SCSI 15k rpm
  • Storage disk: 12 x 466GiB, SATA2

Tape library: IBM 3584

  • Tape drives: 4 x LTO3, 5 x LTO4
  • Tape slots: currently 1575
  • Tape capacity: 400GB - LTO3
  • Tape capacity: 800GB - LTO4
    Software

    The most important software in the storage solution is IBM's backup and archiving software, TSM. It stores data in hierarchical, so-called storage pools. Data (files) is migrated further down the hierarchy of storage pools as the upper ones fill with data received from archiving and backup clients. Usually the first storage pools are located on (fairly) small and fast disk systems, while the final storage pools are located on cheaper tape media residing in tape libraries. With today's small price difference between disk and tape, however, the entire storage space could just as well reside on disk-only media, even for fairly large solutions.

    The backup part of TSM uses the concept of infinite incremental backup, where you never take a full backup of a system. Instead, each file is kept in a number of versions, and a new version is saved only if the file has changed. When the maximum number of versions is reached, the system discards the oldest one. The most recent version is called the active version and is the file currently stored on the client file system. Non-active versions of files can also be discarded based on age, since e.g. having the maximum number of versions set to 10 on a 1 TiB file system would essentially require 10 TiB of backup storage after a while. Also, a version of a file that is 2 years old and has changed 7 times during that period is usually not very interesting.
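
    The retention rule above (keep at most N versions, discard the oldest when a new one arrives) can be illustrated with a small sketch. This is purely illustrative: TSM tracks versions in its own database, not as suffixed files like this.

```shell
# Illustration of "keep at most N versions": hypothetical suffixed
# copies in a scratch directory stand in for backup versions.
keep=3
dir=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch "$dir/report.dat.v$i"          # v5 is the newest "version"
done
# Discard the oldest versions until only $keep remain (GNU head -n -N):
ls "$dir" | sort | head -n -"$keep" | while read -r f; do
    rm "$dir/$f"
done
ls "$dir"                                # v3 v4 v5 survive
```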

    Access to the system is provided via grid software, the Globus Toolkit. The Globus Toolkit provides one interface through GridFTP, which is basically ordinary FTP capable of authenticating users by certificates. Easy-to-use clients for the GridFTP server are scarce, but it is the fastest interface and thus most suitable for large batch transfers. The other interface currently available is provided via patched OpenSSH software. The Globus Toolkit includes GSI-OpenSSH, which also authenticates users by certificates. Using SSH makes it possible to use SFTP as the transfer protocol, which is supported by far more client software than GridFTP. Our GSI-OpenSSH has also been patched with the HPN none-cipher patch to further increase transfer rates over high-latency networks and to remove the CPU-bound limit enforced by the use of encryption. Modern CPUs (3.0+ GHz) can encrypt at approximately 35-40 MiB/s, which is a limitation on 1 Gbit/s networks.
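
    The encryption limit quoted above is easy to relate to the line rate of the gigabit interface:

```shell
# Gigabit Ethernet line rate vs. the quoted single-stream ssh
# encryption throughput -- the motivation for the none-cipher patch.
line_rate_mb=$((1000 / 8))   # 1 Gbit/s ~= 125 MB/s
enc_mb=40                    # upper end of the quoted 35-40 MiB/s
echo "wire: ${line_rate_mb} MB/s, encrypted ssh: ~${enc_mb} MB/s"
```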

     

    Operating systems

    • AIX: AIX 5.3
    • Linux: CentOS 5
    • Linux: CentOS 4

    Other

    • Tivoli Storage Manager - Server
    • Tivoli Storage Manager - HSM
    • Globus Toolkit

    User Guide - Introduction

    The user guide for this resource covers getting a grid certificate and learning how to use two commands, migftp and globus-url-copy. In order to get access to the storage area, you must have a valid grid certificate. This certificate replaces regular passwords as the authentication mechanism when accessing the storage system. There is no need for a special account/username; everything is resolved through the unique certificate ID. After having received the certificate you need to send an email to support@nsc.liu.se requesting that your certificate be activated on the storage system.

    IMPORTANT

    • The grid certificate consists of 2 files, located in ~/.globus on the host(s) from which you will be accessing resources:
      usercert.pem -- grid certificate
      userkey.pem -- the private key, be careful with this file.
    • The certificate is personal and bound only to you as a person (it consists of a name, an organisation and an e-mail address). It is not bound to a specific machine or a user name.
    • The certificate is valid for 1 year only, after which it must be renewed.
    • The private key is encrypted using a password of your choice. Anyone who can decrypt this private key will be able to authenticate as you wherever this grid certificate is used for authentication (the public key, on the other hand, is public and may be readable by others).
    • The private key should therefore be handled with great care. On every machine where it exists, it must be readable only by you (i.e. ``chmod 400 userkey.pem''). Any transfer of the private key between computers must be done using encryption only (such as scp, sftp, rsync over ssh, etc.).
    • You must choose a strong password for the private key. This password must not be used anywhere else and should not be easily cracked. Never give the password to somebody else.
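
    The permission rule above can be applied and verified like this (shown on a scratch copy; on a real system the file is ~/.globus/userkey.pem):

```shell
# Make a private key readable by its owner only and verify the mode.
g=$(mktemp -d)
touch "$g/userkey.pem"                    # stand-in for the real key
chmod 400 "$g/userkey.pem"
perms=$(stat -c '%a' "$g/userkey.pem")    # GNU stat, as on CentOS
echo "userkey.pem mode: $perms"           # 400
```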

    For more information regarding certificates and public key cryptography:
    http://en.wikipedia.org/wiki/Public-key_cryptography
    http://en.wikipedia.org/wiki/Public_key_certificate

    Getting a Grid Certificate

    The following steps tell you how to create a certificate request together with a private key. To be useful, this certificate request (a public key together with your name, address and organisation) must be digitally signed by a certificate authority that we trust. This digital signature binds the public key to you as a person.
    1. Log in to any machine that has a configured nordugrid-client:
      • gimle
      • bore
      • kappa (or any other SNIC resource in Sweden)
    2. Create a certificate request together with a private key by executing the following in the terminal on the machine you have logged in to:
      grid-cert-request -interactive
      1. Come up with a strong pass phrase that is used to encrypt the private key (write it down and keep it at a secure place, e.g. your wallet).
      2. Press the ENTER key when asked about the questions about "Level 0 Organization Name" and "Level 1 Organization Name" (i.e. they should be left unaltered as "Grid" and "Nordugrid").
      3. The domain is the part of your department's e-mail address after '@': MISU users should write the following as "Your Domain": misu.su.se
        and SMHI users should write the following as "Your Domain": smhi.se
      4. Fill in your name. WARNING: only ASCII characters are valid; for example 'å', 'ä' and 'ö' are not valid and should be replaced with their ASCII equivalents.
      5. Fill in your e-mail address used at work.

      When finished, something like the following should be displayed:

      A private key and a certificate request has been generated with the subject:
      /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist/Email=perl@nsc.liu.se
      

      The following should now be located in ~/.globus (consisting of an empty, non-valid certificate file, the certificate request and the corresponding private key):
      [perl@dunder ~]$ ls -al ~/.globus/
      total 12
      drwxr-xr-x   2 perl nsc   70 Dec  7 14:22 .
      drwxr-xr-x  19 perl nsc 4096 Dec  7 14:04 ..
      -rw-r--r--   1 perl nsc    0 Dec  7 14:16 usercert.pem
      -rw-r--r--   1 perl nsc 1354 Dec  7 14:22 usercert_request.pem
      -r--------   1 perl nsc  963 Dec  7 14:22 userkey.pem
      
    3. A Certificate Authority must now sign this certificate request:
      1. E-mail the file ~/.globus/usercert_request.pem to ca@nordugrid.org. You should use the distinguished name (DN) of your grid identity as the subject of your mail. The distinguished name is contained within ~/.globus/usercert_request.pem and has the form:
           /O=Grid/O=NorduGrid/OU=<your organization>/CN=<Your name>/Email=<your email>
           
        You may use the following command to retrieve your distinguished name from the certificate request:
           grep "^[[:blank:]]*/O=Grid/O=NorduGrid/OU=.*/CN=.*/Email=.*" ~/.globus/usercert_request.pem
           
      2. After a couple of days you will receive a signed certificate in return. Save this file as ~/.globus/usercert.pem (the file that is already there is empty and should be overwritten). usercert_request.pem is no longer needed and should be removed.
      3. Send an e-mail to support@nsc.liu.se and request your certificate being enabled on the resource. This mail must have the same subject as the mail you sent to ca@nordugrid.org (i.e. use your distinguished name as subject).
      4. These files may then be copied to other machines with configured nordugrid-clients from where you intend to connect to the storage area. Example:
           [perl@tornado ~]$ scp -r ~/.globus/ perl@dunder:
           perl@dunder's password:
           usercert.pem           100% 1155     0.0KB/s   00:00
           usercert_request.pem   100% 1354     0.0KB/s   00:00
           userkey.pem            100%  963     0.9KB/s   00:00
           
           [perl@dunder ~]$ ls -al ~/.globus/
           total 12
           drwxr-xr-x   2 perl nsc   43 Dec 7 14:23 .
           drwx------  21 perl nsc 4096 Dec 7 14:23 ..
           -rw-r--r--   1 perl nsc 1155 Dec 7 16:25 usercert.pem
           -rw-r--r--   1 perl nsc 1354 Dec 7 16:25 usercert_request.pem
           -r--------   1 perl nsc  963 Dec 7 16:25 userkey.pem
           
    4. This certificate is only valid for 1 year and must then be renewed. You should receive an e-mail a couple of weeks before it expires, telling you to request a new one.

    Renewing a Grid Certificate

    The grid certificate expires after 1 year and must at some point be "renewed". Your old certificate will not really be renewed: you will get a completely new certificate together with a new private key, and the old certificate and old private key become obsolete. The new certificate and the new private key must both be distributed again to all the systems that will use them.

    IMPORTANT: Do not mix up the old certificate and old private key with the newly generated ones. The signed certificate from the certificate authority is only valid together with the private key that was created along with the certificate request. If these files do not match and the correct pair is lost, you will have to renew your certificate again, which may take some time.

    1. Log in to a machine at NSC that has a configured nordugrid-client and where your old certificate is available (~/.globus/):
      • gimle
      • bore
    2. Rename the directory containing both the old certificate and the old private key to .globus_old:
        mv ~/.globus ~/.globus_old
        
    3. Perform step 2 in "Getting A Grid Certificate"
    4. Sign the new certificate request with your old private key. This step verifies your identity. It is not really necessary, but will speed up the processing of your certificate request by the certificate authority. The signed request will be in the file ~/sigfile:
        openssl dgst -binary -sign ~/.globus_old/userkey.pem < ~/.globus/usercert_request.pem > ~/sigfile
        
    5. Similar to step 3 in "Getting a Grid Certificate", a Certificate Authority must now sign this certificate request:
      1. E-mail the file ~/.globus/usercert_request.pem together with ~/sigfile to ca@nordugrid.org. You should use the distinguished name (DN) of your grid identity as the subject of your mail. The distinguished name is contained within ~/.globus/usercert_request.pem and has the form:
            /O=Grid/O=NorduGrid/OU=<your organization>/CN=<Your name>/Email=<your email>
            
        You may use the following command to retrieve your distinguished name from the certificate request:
            grep "^[[:blank:]]*/O=Grid/O=NorduGrid/OU=.*/CN=.*/Email=.*" ~/.globus/usercert_request.pem
            
      2. After a couple of days you will receive a signed certificate in return. Save this file as ~/.globus/usercert.pem on the system where you generated the request (the file that is already there is empty and should be overwritten). Both usercert_request.pem and sigfile are no longer needed and should be removed.

        IMPORTANT: Again, be careful not to save the new certificate with a non-corresponding private key.

      3. Redistribute the new certificate and the new private key to all the machines that will make use of them (i.e. from where you will contact NSC's storage facility) as in step 3.4 in "Getting a Grid Certificate".

    Transferring Files

    Storage Server: nuffs.nsc.liu.se

    Accessible from any computer with the NorduGrid software installed, currently available from the following clusters (contact support@nsc.liu.se if you need this to work on a computer not administered by NSC):

    • blixt
    • bluesmoke
    • dunder
    • login-2.monolith
    • pavel
    • tornado
    Using one of the following clients:
    • migftp (FTP-client, interactive)
    • globus-url-copy (advanced, for scripting purposes)

    Current directory structure at nuffs:

       /gridstorage/misu/dblcopy/
       /gridstorage/misu/nobackup/
       
       /gridstorage/rossby/dblcopy/
       /gridstorage/rossby/nobackup/
       
       /gridstorage/smhid/dblcopy/
       /gridstorage/smhid/nobackup/
       
       /gridstorage/smhip/dblcopy/
       /gridstorage/smhip/nobackup/
    
    • Recommended file size is 5GB-100GB
    • You only have write permissions below the directories listed above (e.g. you may put files in /gridstorage/smhid/nobackup and below, but not in /gridstorage or in /gridstorage/smhid).
    • All the files that are copied to nuffs will be stored immediately on a temporary disk cache. Data stored on this disk cache will be migrated to tape storage within 24 hours.
    • Files located below a dblcopy directory will be migrated to two different tapes. This is done serially: the files are first migrated to one tape and, when that is complete, copied to another tape (hence the name dblcopy).
    • Files located below a nobackup directory will be migrated to only one tape.
    • All data sent to and from nuffs will be sent unencrypted, but authentication is performed using grid certificates.
    • Expect file transfer rates between 30 and 60 MB/s; this obviously also depends on the network and system load on nuffs and on the client host.
    • Log in to the host you intend to use when accessing the storage. The following hosts may be used at the moment:
      • blixt
      • bluesmoke
      • dunder
      • login-2.monolith
      • pavel
      • tornado
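
    Since the recommended file size is 5GB-100GB, it usually pays to bundle many small files into one tar archive and check its size before uploading. A sketch with tiny demo data (real input would be your own result directories):

```shell
# Bundle a directory of small files and report the archive size.
src=$(mktemp -d)
for i in 1 2 3; do echo "data $i" > "$src/part$i"; done
out=$(mktemp)
tar -C "$src" -cf "$out" .
size=$(stat -c '%s' "$out")               # GNU stat
echo "archive is $size bytes, $(tar -tf "$out" | grep -c part) data files"
```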

    Make sure you have the correct certificate and private key in ~/.globus at the machine from where the file transfer is performed (see 3b and 3c). Also verify the file permission flags on the directory and files (most importantly, userkey.pem should only be readable by you):

    [perl@dunder ~]$ ls -al ~/.globus/
    drwxr-xr-x   2 perl nsc   43 Dec 7 14:23 .
    drwx------  21 perl nsc 4096 Dec 7 14:23 ..
    -rw-r--r--   1 perl nsc 1155 Dec 7 16:25 usercert.pem
    -rw-r--r--   1 perl nsc 1354 Dec 7 16:25 usercert_request.pem
    -r--------   1 perl nsc  963 Dec 7 16:25 userkey.pem
    

    Every time you want access to the storage area you must begin by authenticating yourself using your grid certificate. Since the private key userkey.pem is encrypted, you must decrypt it using the pass phrase you chose while creating the certificate request. This step creates a proxy certificate which is valid as authentication for up to 12 hours. After 12 hours have passed you must repeat this step (ongoing file transfers will not be terminated, but you must authenticate again before starting a new file transfer session to nuffs). To create the proxy certificate, type:

    [perl@tornado ~]$ grid-proxy-init
    Your identity: /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist
    Enter GRID pass phrase for this identity:
    Creating proxy .......................................... Done
    Your proxy is valid until: Thu Dec 22 23:14:21 2005
    

    To see other options, such as changing how long the proxy certificate is valid, type:

    [perl@tornado ~]$ grid-proxy-init -help
    

    Use one of the following clients to transfer your files: migftp or globus-url-copy.

    Regardless of which application is used, it is recommended to archive your data with tar before uploading to nuffs. This preserves the complete directory structure and is more efficient, since transferring and storing a few large files is faster than transferring many small ones.
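
    That tar round-trips symbolic links (which the transfer clients described below do not preserve) can be verified locally:

```shell
# Create a symlink, archive with tar, extract elsewhere: the link survives.
work=$(mktemp -d)
mkdir "$work/data"
echo hello > "$work/data/file"
ln -s file "$work/data/file.ln"           # relative symlink
tar -C "$work" -cf "$work/data.tar" data
mkdir "$work/out"
tar -C "$work/out" -xf "$work/data.tar"
ls -l "$work/out/data/file.ln"            # still a symlink
```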

    • migftp is a wrapper script around lftp provided by NSC (using grid certificates for authentication and unencrypted file transfers). It is suitable for interactive usage.
      • migftp takes as argument an absolute or relative path (relative to /gridstorage on nuffs). This path sets the initial working directory on nuffs. If no argument is passed, the initial working directory will be /gridstorage.
      • Some useful commands in migftp: help, cd, chmod, put, get, mput, mget, ls, mkdir, mirror, du, rm, rmdir, less, find, lcd, lpwd, queue. It supports tab-completion (e.g. press tab 2 times at an empty prompt to see all available commands). Commands may be executed in your local shell by prefixing such commands with a ``!'' (e.g. ``!ls -lh'', ``!pwd'', ``!hostname'', etc). Jobs may be backgrounded using "C-z" and put back in foreground using ``fg''.
      • One disadvantage of migftp is that symbolic links are not preserved. Archive your data before uploading to nuffs if you need to keep your file structure completely intact (with e.g. ``tar'').
      • For more information type ``help'' at the prompt in migftp, or read the man page to lftp.
    • globus-url-copy is suitable in scripts when you do not have interactive control over the file transfers. One disadvantage though is that it currently does not support recursive transfers of sub-directories and does not preserve symbolic links (a workaround is to archive the file structure with tar before uploading).
    • globus-url-copy [options] sourceURL destURL
      where sourceURL and destURL have the form: file://full_path_to_file or gsiftp://full_path_to_file
      file: specifies the full path on the cluster. NOTE: this will result in 3 forward-slashes as in file:///home/perl/file.
      gsiftp: specifies the path to the file on nuffs. For example: gsiftp://nuffs/gridstorage/rossby/nobackup/perl.
      One useful option is the verbose flag ``-vb''. For help, type: globus-url-copy -help

    Examples

    # Generate a proxy-certificate that is valid for 12 hours:
    [perl@tornado ~]$ grid-proxy-init
    Your identity: /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist
    Enter GRID pass phrase for this identity:
    Creating proxy ................................... Done
    Your proxy is valid until: Wed Jan 25 01:58:07 2006
    
    # Start migftp with current working directory at nuffs set to /gridstorage/rossby/nobackup/perl
    [perl@tornado ~]$ migftp rossby/nobackup/perl
    Connecting to: sftp://nuffs.nsc.liu.se:3022
    cd ok, cwd=/gridstorage/rossby/nobackup/perl
    [ftp] nuffs.nsc.liu.se:~/rossby/nobackup/perl>
    
    # Use ``mirror'' to recursively download whole directory trees (without preserving symbolic links) from nuffs. Use ``mirror -R'' (reverse) to upload recursively. All file transfer commands (put, mput, get, mget, mirror) can resume previously unfinished transfers using the option ``-c'' (continue).
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> !ls
    bin  example.dir  install  tmp
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> !ls example.dir -l
    total 2048004
    drwxr-xr-x  2 perl nsc         27 Mar 17 10:01 dir1
    -rw-------  1 perl nsc 1048576000 Mar 17 09:36 file100
    -rw-r--r--  1 perl nsc 1048576000 Mar 17 09:36 file100.copy.1
    lrwxrwxrwx  1 perl nsc          7 Mar 17 11:30 file100.ln -> file100
    
    # Recursively upload directory example.dir to nuffs while also ignoring symbolic links:
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> mirror -R -c example.dir
    Total: 2 directories, 3 files, 1 symlink
    New: 3 files, 0 symlinks
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> ls example.dir
    drwxr-xr-x    3 perl     nsc           256 Mar 17 14:18 .
    drwxr-xr-x    3 perl     nsc           256 Mar 17 14:17 ..
    drwxr-xr-x    2 perl     nsc           256 Mar 17 14:18 dir1
    -rw-------    1 perl     nsc      1048576000 Mar 17 14:18 file100
    -rw-r--r--    1 perl     nsc      1048576000 Mar 17 14:18 file100.copy.1
    
    # Upload directory example.dir recursively to nuffs recreating symbolic links on nuffs as the regular files they point to (using ``-L''):
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> mirror -R -c -L example.dir
    Total: 2 directories, 4 files, 0 symlinks
    New: 1 file, 0 symlinks
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> ls example.dir
    drwxr-xr-x    3 perl     nsc           256 Mar 17 14:21 .
    drwxr-xr-x    3 perl     nsc           256 Mar 17 14:17 ..
    drwxr-xr-x    2 perl     nsc           256 Mar 17 14:18 dir1
    -rw-------    1 perl     nsc    1048576000 Mar 17 14:18 file100
    -rw-r--r--    1 perl     nsc    1048576000 Mar 17 14:18 file100.copy.1
    -rw-------    1 perl     nsc    1048576000 Mar 17 14:21 file100.ln
    
    # Since symbolic links are not preserved when uploading files, archive the directory using tar, and use the ``queue'' command to upload the tar file to nuffs automatically once the archive has been created:
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> queue !tar cf example.dir.tar example.dir/
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> jobs
    [0] queue (sftp://nuffs.nsc.liu.se:3022)
        Now executing: [1] ! tar cf example.dir.tar example.dir/
    [1] ! tar cf example.dir.tar example.dir/
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> queue put example.dir.tar
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> jobs
    [0] queue (sftp://nuffs.nsc.liu.se:3022)
        Now executing: [1] ! tar cf example.dir.tar example.dir/
        Commands queued:
        1. put example.dir.tar
    [1] ! tar cf example.dir.tar example.dir/
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> jobs
    [0] queue (sftp://nuffs.nsc.liu.se:3022)
        Queue is stopped.
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> ls
    drwxr-xr-x    2 perl      nsc             256 Mar 17 12:40 .
    drwxrwxrwt   17 nfsnobody rossby         8192 Mar 16 16:14 ..
    -rw-r--r--    1 perl      nsc      3145738240 Mar 17 12:42 example.dir.tar
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> !ls -l example.dir.tar
    -rw-r--r--    1 perl      nsc      3145738240 Mar 17 12:40 example.dir.tar
    
    # Example when the proxy certificate must be renewed:
    [ftp] nuffs.nsc.liu.se:~/smhid> cd nobackup/
    cd: Login failed: Password required
    
    [ftp] nuffs.nsc.liu.se:~/smhid> suspend
    
    [1]+  Stopped  lftp sftp://nuffs.nsc.liu.se:2022
    
    [perl@tornado ~]$ grid-proxy-init
    Your identity: /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist
    Enter GRID pass phrase for this identity:
    Creating proxy ..................................................... Done
    Your proxy is valid until: Fri Dec 23 02:40:33 2005
    
    [perl@tornado ~]$ fg
    lftp sftp://nuffs.nsc.liu.se:2022
    [ftp] nuffs.nsc.liu.se:~/smhid> cd nobackup/
    cd ok, cwd=/gridstorage/smhid/nobackup
    [ftp] nuffs.nsc.liu.se:~/smhid/nobackup>
    
    # Retrieve all tar files from nuffs using mget and wildcards, while also specifying the directory where the files should be stored (without -O they are stored in the local current working directory):
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> ls
    drwxr-xr-x    3 perl      nsc             256 Mar 17 14:48 .
    drwxrwxrwt   17 nfsnobody rossby         8192 Mar 16 16:14 ..
    drwxr-xr-x    3 perl      nsc             256 Mar 17 14:21 example.dir
    -rw-rw-r--    1 perl      nsc      3145738240 Mar 17 14:49 example.dir.2.tar
    -rw-r--r--    1 perl      nsc      3145738240 Mar 17 12:42 example.dir.tar
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> mget -c example.dir.*tar -O tmp/
    5373706240 bytes transferred in 168 seconds (30.57M/s)
    Total 2 files transferred
    
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> !ls -l tmp/example.dir.*tar
    -rw-r--r--  1 perl nsc 3145738240 Mar 17 14:55 tmp/example.dir.2.tar
    -rw-r--r--  1 perl nsc 3145738240 Mar 17 14:56 tmp/example.dir.tar
    

    globus-url-copy examples:

    # copy /home/perl/example.dir.2.tar from tornado to /gridstorage/rossby/nobackup/perl/example.dir.2.tar at nuffs:
    globus-url-copy -vb file:///home/perl/example.dir.2.tar \
       gsiftp://nuffs/gridstorage/rossby/nobackup/perl/example.dir.2.tar
    
    # make a tar archive before uploading, in order to preserve symbolic links:
    [perl@tornado ~]$ tar cvf example.dir.2.tar example.dir && \
       globus-url-copy -vb file:///home/perl/example.dir.2.tar \
       gsiftp://nuffs/gridstorage/rossby/nobackup/perl/example.dir.2.tar
    
    example.dir/
    example.dir/file100
    example.dir/file100.copy.1
    example.dir/dir1/
    example.dir/dir1/file100.copy.2
    example.dir/file100.ln
    
    3123707904 bytes     28445.33 KB/sec avg     52224.00 KB/sec inst
    

    Using screen

    When using migftp or globus-url-copy we recommend using them together with the command screen.
    • Screen makes it possible to create a shell or process which you can attach to or detach from at any time; the application will continue to run regardless of what happens to your xterm or your network connection. This is obviously useful with migftp or globus-url-copy, since file transfers may not finish for a couple of days and you do not want to start over due to some problem with the connection between your desktop and the cluster.
    • In addition, if a file transfer to nuffs takes a very long time to complete, screen allows you to reattach to the same migftp or globus-url-copy session later, at any time, from a new login to the same cluster. Example:
      1. (While logged in to Tornado from a desktop at work:) Using screen together with migftp, 2TB of data is transferred to nuffs at a rate of 30MB/s.
      2. Since the expected time to completion is about 18 hours, I leave work and go home for the day.
      3. At home I log in to Tornado again and reattach to the same screen, where I can monitor (and control) the progress of the file transfer.

    screen examples

    # Using screen together with migftp on dunder (screen can be used with any application that displays its output in the terminal):
    [perl@dunder ~]$ screen
    [perl@dunder ~]$ grid-proxy-init
    Your identity: /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist
    Enter GRID pass phrase for this identity:
    Creating proxy
    ................................................................ Done
    Your proxy is valid until: Tue Mar 21 00:39:08 2006
    
    [perl@dunder ~]$ migftp rossby/nobackup/perl
    Connecting to: sftp://nuffs.nsc.liu.se:3022
    cd ok, cwd=/gridstorage/rossby/nobackup/perl
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl>
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> mput example.file
       `example.file' at 33306202 (0%) 5.38M/s eta:85m [Sending data]
    
    # Press ctrl-a followed by d (i.e. "C-a d") to detach this screen and go back to the ordinary shell. The screen session (including migftp and any other processes running in it) will continue to live even if I kill the corresponding xterm without detaching from it first:
    [detached]
    [perl@dunder ~]
    
    # To see available screen sessions:
    [perl@dunder ~]$ screen -ls
    There is a screen on:
    10972.pts-0.dunder  (Detached)
    1 Socket in /tmp/uscreens/S-perl.
    
    # To resume the screen above, specify the shortest string that uniquely identifies a screen session; in this case all of these will work:
    [perl@dunder ~]$ screen -r 10972.pts-0.dunder
    [perl@dunder ~]$ screen -r 10972
    [perl@dunder ~]$ screen -r 109
    [perl@dunder ~]$ screen -r
    
    # When resuming a screen session, all content that has been written to the screen will be displayed:
    [perl@dunder ~]$ grid-proxy-init
    Your identity: /O=Grid/O=NorduGrid/OU=nsc.liu.se/CN=Per Lundqvist
    Enter GRID pass phrase for this identity:
    Creating proxy
    ................................................................ Done
    Your proxy is valid until: Tue Mar 21 00:39:08 2006
    
    [perl@dunder ~]$ migftp rossby/nobackup/perl
    Connecting to: sftp://nuffs.nsc.liu.se:3022
    cd ok, cwd=/gridstorage/rossby/nobackup/perl
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl>
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> mput example.file
       `example.file' at 1187385685 (2%) 35.67M/s eta:18m [Sending data]
    
    # Detaching a screen that is already attached to another terminal:
    [perl@dunder ~]$ screen -ls
    There is a screen on:
          10972.pts-0.dunder  (Attached)
    1 Socket in /tmp/uscreens/S-perl.
    
    [perl@dunder ~]$ screen -d 10972
    [10972.pts-0.dunder detached.]
    
    # To terminate a screen you only have to log out from the shell you got (from where grid-proxy-init and migftp were started above), with ``exit'' or "C-d":
    [ftp] nuffs.nsc.liu.se:/gridstorage/rossby/nobackup/perl> quit
    [perl@dunder ~]$ exit
    [screen is terminating]
    [perl@dunder ~]$
    
    Summary:
    • Creating a screen:
      screen
    • Detaching a screen from within the screen: "C-a d"
    • Detaching a screen outside the screen:
      screen -d [shortest unique screen id]
    • Attaching (resuming) to a previously detached screen:
      screen -r [shortest unique screen id]
    • Listing available screens:
      screen -ls
    Advanced:
    • Creating a new window (a new shell) within a screen (the window number is displayed on the title bar of your xterm): "C-a c"
    • Switching between different windows within a screen (where n = next and p = previous): "C-a n" or "C-a p"
    • By default C-a is used for controlling screen, but since this key binding is frequently used in other applications you can tell screen to use another one (e.g. C-v, which changes the detach command from within a screen from "C-a d" to "C-v d"): screen -e^vV
    • Naming a screen: screen -S title





    Page last modified: 2013-03-13 14:50
    For more information contact us at info@nsc.liu.se.