As of 4 PM, April 17, $ARCHIVE is undergoing an unplanned outage. The system should be restored to service by 7 PM.
RCS is investigating issues with the $ARCHIVE filesystem, which is currently offline. We will update users as further information becomes available.
Due to a circuit issue in WRRB 004, the Linux workstations are offline. The following systems are affected:
Based on user reports, we are investigating issues with the $ARCHIVE filesystem. Direct logins to bigdipper for scp transfers and batch_stages are unavailable, and files in $ARCHIVE may be inaccessible. We will post updates as more information becomes available.
Chinook is back online and available for use.
There was an unplanned power outage in the UAF Butrovich Data Center at roughly 4:15 AM this morning. Chinook is offline and the $CENTER1 filesystem is unavailable on the Linux workstations. We are assessing when they will be operational again and will distribute notifications as more information becomes available.
Due to a campus power outage on January 23rd, RCS systems are currently offline, including Chinook, VMs hosting websites, and hosted storage. We are in the process of verifying that power is stable in the data center and will be working to bring RCS systems back online throughout the day.
Fish, Pacman, and $CENTER will be retired on December 29, 2017.
Users are encouraged to migrate to the new HPC cluster, Chinook. If you are the Principal Investigator of a project and want to have an account created on Chinook, please email firstname.lastname@example.org with your request, your project ID, and the members you would like added to your project.
$ARCHIVE will be unavailable from 9 AM to 9 PM on Tuesday, January 9, 2018.
RCS will work with Principal Investigators (PIs) whose $ARCHIVE data volumes exceed the 10 TB standard quota. RCS storage rates are available at http://gi.alaska.edu/research-computing-systems/service-rates.
Chinook will be offline from 9 AM on November 1, 2017, to 5 PM on November 2, 2017, to facilitate expansion from 73 nodes (1,892 cores) to 106 nodes (2,816 cores).
This outage is required to upgrade the high-speed Lustre filesystem software and implement a new management structure for storage services.
Following this outage, all user directories in $CENTER1 will be located in subdirectories under the project(s) of which you are a member. Each project will receive a 1 TB unpurged quota. User directories currently in $CENTER1 will be moved into the new $CENTER1 project directories.
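Scripts that reference the old user-level $CENTER1 paths will need to point at the new project subdirectories. A minimal sketch of one way to update a job script, assuming a hypothetical project ID "PROJ123" and username "jdoe" (both illustrative, as is the /center1 mount point):

```shell
#!/bin/sh
# Hypothetical paths: "PROJ123" and "jdoe" are placeholders, and
# /center1 stands in for the actual $CENTER1 mount point.
CENTER1="/center1"
OLD_PATH="$CENTER1/jdoe"
NEW_PATH="$CENTER1/PROJ123/jdoe"

# Example job script referencing the old layout.
printf '%s\n' "cd $OLD_PATH/run" > myjob.slurm

# Rewrite old paths to the new project-based layout in place,
# keeping a backup copy as myjob.slurm.bak.
sed -i.bak "s|$OLD_PATH|$NEW_PATH|g" myjob.slurm
cat myjob.slurm
```

After the rewrite, the job script reads `cd /center1/PROJ123/jdoe/run`; the `|` delimiter in the sed expression avoids escaping the slashes in the paths.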