[Nebula-users] Reminder: Fahrenheit available, migrate from Nebula

Peter Bortas zino at nsc.liu.se
Thu Mar 19 09:58:20 CET 2026


Dear Nebula users,

Migration from Nebula has continued apace, and Fahrenheit now has 26
of 48 nodes active. If you find that you need more nodes available to
port your workload, or if you have other concerns, talk to Martin
Lilleeng Sætra.

The deadline is still the end of March, before Easter.

Regards,
--
Peter Bortas, NSC


On 03/03/2026 05:14, Peter Bortas wrote:
> Dear Nebula users,
>
> Fahrenheit is now online with 16 of 48 nodes available. We urge you to
> move your research workloads from Nebula to Fahrenheit as soon as
> possible. More nodes will be made available on demand as we can turn
> off Nebula nodes. The deadline for porting jobs is the end of March
> (before Easter), but the sooner the better.
>
> We currently have a few accounts in the creation pipeline that will
> get done today. If you think you should have an account on Fahrenheit
> but have not received an invite by Wednesday then contact Martin.
>
> All Nebula projects are also available on Fahrenheit. Please recompile
> all code and review all job/submit scripts when porting. Do NOT
> reserve full nodes on Fahrenheit (no --exclusive flags) unless you
> have discussed it with Martin first.
>
> Note that there are only "thin" nodes in Fahrenheit, but they have
> almost as much memory as a "fat" node has in Nebula.
>
> When allocating anything larger than the minimal allocation, use the
> -n flag to specify the number of cores. -n 50 will give your job 50
> cores; -n 50 --ntasks-per-core=2 will give your job 25 cores, with
> the appropriate Slurm environment variables set to indicate that
> hyper-threading should be used, if your application is sensitive to
> that.
>
> Avoid using flags that specify how much memory the job should
> get. Just allocate more cores to get more memory.
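>
> A minimal submit-script sketch following the rules above (job name,
> time limit, and application name are placeholders, not site defaults):

```shell
#!/bin/bash
#SBATCH -J myjob          # placeholder job name
#SBATCH -t 01:00:00       # placeholder wall time
#SBATCH -n 50             # 50 cores; memory scales with the core count
# Hyper-threaded alternative: 50 tasks packed onto 25 cores
##SBATCH --ntasks-per-core=2
# Note: no --exclusive and no memory flags, per the instructions above.

srun ./my_application     # placeholder application
```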
>
> Fahrenheit has the exact same node hardware configuration as Kelvin
> and Celsius, if you have had access to those earlier:
>
> CPU: 2 x AMD EPYC 9565 72-Core Processor (288 threads per node)
>
> RAM: 768 GiB (about 2.7 GiB per thread)
>
> Local storage:
> NVMe flash in $SNIC_TMP. Size depends on how large a part of the
> node you have allocated.
>
> Common storage:
> The exact same storage as on Nebula. The new cluster is hooked up to
> the same storage servers. That also means any change made in your
> /home or in /nobackup on Nebula OR Fahrenheit will immediately show
> up on the other cluster. So take care to copy any project script to
> a new directory before modifying it on Fahrenheit if you want to
> keep the old scripts running unmodified on Nebula for now.
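>
> For example, with an entirely hypothetical directory layout (a temp
> dir stands in for your home directory here), the safe pattern is:

```shell
# Hypothetical layout for illustration only; adapt paths to your project.
base="$(mktemp -d)"
mkdir -p "$base/nebula" "$base/fahrenheit"
printf '#!/bin/bash\n#SBATCH -n 50\n' > "$base/nebula/submit.sh"   # stand-in script
cp "$base/nebula/submit.sh" "$base/fahrenheit/submit.sh"
# Edit only the Fahrenheit copy; the Nebula original keeps running as-is.
echo "copied to $base/fahrenheit/submit.sh"
```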
>
> Questions about porting/migration from Nebula to Fahrenheit may be
> directed to martinls at met.no
>
> Regards,
> -- 
> Martin Lilleeng Sætra and Peter Bortas
>

