[Gimle-users] Full file systems

Kent Engström kent at nsc.liu.se
Mon Jul 18 10:17:26 CEST 2011


kent at nsc.liu.se (Kent Engström) writes:
> Dear Gimle Users,
>
> some of the Accumulus Lustre filesystems mounted on
> Gimle (and Vagn) are very, very full:
>
>   Size  Used Avail Use% Mounted on
>   6.8T  6.4T   13G 100% /nobackup/smhid4
>    17T   16T  8.7G 100% /nobackup/smhid6
>    90T   89T  153G 100% /nobackup/smhid7
>   161T  157T  2.5T  99% /nobackup/rossby14
>   8.2T  8.1T  153G  99% /nobackup/smhid5
>    72T   69T  2.1T  98% /nobackup/smhid8

We are still seeing some rather full filesystems this week, with the top
contenders being:

 Size  Used Avail Use% Mounted on
 6.8T  6.4T   13G 100% /nobackup/smhid4
  17T   16T  154G 100% /nobackup/smhid6
  90T   89T  109G 100% /nobackup/smhid7

I think it is smhid7 that hurts most at the moment. It is currently full
to the point where new files cannot be created on it (please remember
that 109G is just above 0.1 per cent of 90T and that Lustre cannot
distribute the storage 100% perfectly).

Please check whether there is data you can delete, compress, or move to a
newer /nobackup filesystem (such as smhid9).
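
For example, something like the following can help you locate and shrink
large data (the directory and file names below are only placeholders,
please substitute your own):

  du -s /nobackup/smhid7/yourdir/* | sort -n    # list subdirectories by size (in 1K blocks)
  gzip big-output.dat                           # compress a single large file
  tar czf old-run.tar.gz old-run/ && rm -r old-run/   # archive and remove a finished run directory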

If you are going to run large "du" or "find" jobs to check your files,
or if you are going to move large amounts of data, we recommend that you do
it on a compute node allocated using "interactive -N1" (to get good
performance without affecting other login node users).

We recommend that you use rsync to copy the data, verify that it looks
good at the destination, and then delete it from the source. Feel free
to contact smhi-support at nsc.liu.se if you have questions.
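
A rough sketch of that copy-verify-delete sequence (all paths are
placeholders for your own directories):

  rsync -av /nobackup/smhid7/yourdir/ /nobackup/smhid9/yourdir/                # copy the data
  rsync -avn --checksum /nobackup/smhid7/yourdir/ /nobackup/smhid9/yourdir/    # dry run with checksums; should list no files
  rm -r /nobackup/smhid7/yourdir                                               # delete the source only after checking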


Sincerely,
-- 
Kent Engström, National Supercomputer Centre
kent at nsc.liu.se, +46 13 28 4444


