Attach Resources to a Workstation¶
You can expedite many workflows by attaching file systems or jobs to a workstation.
When you attach a file system, the file system is mounted to the workstation node. This allows you to interact with your files as though they were stored on the node itself, without having to copy files to and from the workstation. The file system remains attached until a user detaches it or until the workstation is deleted.
When you attach a job, the job’s file system is mounted to the workstation node, giving you real-time access to the results of the simulation. This option applies only to jobs created from spaces, for which results are written to scratch storage on the job cluster. (For jobs running on CONVERGE Horizon file systems, you can simply attach the file system itself, as described above—this is the same file system being used by the job.) A job remains attached to a workstation until a user detaches it or until the job is completed.
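For example, once a running job is attached, you can follow its progress directly from a terminal on the workstation. The sketch below assumes a hypothetical mount point, case directory, and log file name; run df -h (described later in this section) to find the actual mount path for your attached job and adjust the names to match your case.

# Hypothetical paths: substitute the mount point, case directory, and
# log file name that actually exist on your workstation.
$ cd /mnt/job_1a2b3c4d5e6f/case1
$ tail -f solver.log

Because the job's file system is mounted rather than copied, the log reflects the solver's output as it is written.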
Resources can be attached to a workstation if all of the following are true:
The resource’s status is Online (for file systems) or Running (for jobs).
The resource was created in the same data center as the workstation.
The resource is owned by a member of your team (or your currently selected team, if you belong to more than one).
To attach a resource, go to the Workstation Details page and click Attach Resource. Select the resource to attach and click Attach. You can attach multiple resources to the same workstation.
On the workstation, run df -h in a terminal to see where the resource is mounted. Attached resources show the path /mnt/fs/files in the first column of the output, and the mount point appears in the last column. In the example below, an attached file system is mounted on /mnt/filesystem_db21a12f6e27.
If you attach a job that was created from a space, the job is automatically detached from the workstation when it completes. To continue post-processing on the workstation after that point, you can download the output files from the job’s output directory with the space:download command.
[opc@INST0897bn34l14d tmp]$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     4.0M     0  4.0M   0% /dev
tmpfs                        118G     0  118G   0% /dev/shm
tmpfs                         48G   18M   48G   1% /run
/dev/mapper/ocivolume-root    83G  6.9G   77G   9% /
/dev/sda2                    2.0G  498M  1.5G  26% /boot
/dev/mapper/ocivolume-oled    15G  139M   15G   1% /var/oled
/dev/sda1                    100M  6.3M   94M   7% /boot/efi
tmpfs                         24G   52K   24G   1% /run/user/1000
/dev/sdb                     1.0T   17G 1007G   2% /mnt/scratch
tmpfs                         24G   56K   24G   1% /run/user/42
10.10.168.122:/mnt/fs/files  500G   13G  487G   3% /mnt/filesystem_db21a12f6e27
[opc@INST0897bn34l14d tmp]$ cd /mnt/filesystem_db21a12f6e27
[opc@INST0897bn34l14d filesystem_db21a12f6e27]$ ls
case1 case2 case3
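Once the file system is attached, its contents behave like any other directory on the workstation, so you can read and write files in place. If a post-processing step is I/O-intensive, one option is to stage a case on the workstation's local scratch volume first. The sketch below reuses the mount point, case name, and scratch path from the example above; adjust them to match your own environment.

[opc@INST0897bn34l14d filesystem_db21a12f6e27]$ cp -r case1 /mnt/scratch/
[opc@INST0897bn34l14d filesystem_db21a12f6e27]$ cd /mnt/scratch/case1

Copying is optional; for most workflows you can work with the files on the attached file system directly.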