The CrySP Group's Computing Resources

The following computing resources are available to members of the CrySP group:

  • crunch.cs.uwaterloo.ca: 4-core (* 2 for hyperthreading) Intel Xeon E5620 2.40 GHz CPU, 8 GB RAM, 1.4 TB disk, dual 1 Gbit NIC, 2 × NVIDIA M2050 GPUs, Ubuntu 18.04.2 LTS Linux 4.15.0 x86_64
  • whoomp.cs.uwaterloo.ca: dual 4-core Intel Xeon E5420 2.50 GHz CPU, 32 GB RAM, 20 TB RAID, dual 1 Gbit NIC, Ubuntu 14.04.3 LTS Linux 3.13.0 x86_64
  • modelnet3.cs.uwaterloo.ca: dual-core AMD Athlon 64 X2 5600+ 3.00 GHz CPU, 1 GB RAM, 220 GB disk, single 1 Gbit NIC, FreeBSD 6.3 (ModelNet emulation kernel module installed)
  • gurgle.cs.uwaterloo.ca: Dual-core Intel Pentium D 3.00 GHz CPU, 1 GB RAM, 80 GB disk, single 1 Gbit NIC, Ubuntu 18.04.2 LTS Linux 4.15.0 i686
  • splash.cs.uwaterloo.ca: 4-core (* 2 for hyperthreading) Intel Xeon E3-1240 V2 3.4 GHz CPU, 32 GB RAM, 2 TB disk, dual 1 Gbit NIC, Ubuntu 14.04.3 LTS Linux 3.13.0 x86_64
    • splash is the machine hosting our Tor exit node.
  • kappel.cs.uwaterloo.ca: Intel Xeon E3-1220 V6 3 GHz CPU, 8 GB RAM, 2 * 8 TB disk, Ubuntu LTS; intended for running production servers and storing datasets; contact Urs if you need access
  • ebnat.cs.uwaterloo.ca: Intel Xeon E5-2640 V4 2.4 GHz CPU, 32 GB RAM, 480 GB SSD, 4 TB disk, Ubuntu LTS; intended as a compute server; contact Urs if you need access
  • crysp-lt01.cs.uwaterloo.ca: 2-core (* 2 for hyperthreading) Intel i7-6500U 2.50 GHz CPU, 8 GB RAM, 500 GB drive, single 1 Gbit NIC, Windows 10 (Home), SGX enabled
  • teeter.cs.uwaterloo.ca: 4-core (* 2 for hyperthreading) Intel Xeon E3-1270 V6 3.8 GHz CPU, 2 TB SSD, 7.4 TB disk, dual 1 Gbit NIC (only 1 currently connected), Ubuntu 16.04, SGX enabled

There are three platforms that may be useful for large-scale and/or distributed computations or experiments.

RIPPLE:
The CrySP RIPPLE Facility contains more than 20 machines, the largest of which has 144 cores and 12 TB of RAM. The machines have very fast (more than 100 Gbps) networking, and are suitable for large-scale network simulation as well as large-scale computation. The facility also has 64 NVIDIA GPUs. You book particular machines for particular times; you will be the only user of those machines at those times, and you will have live access to them so you can run your experiments during your reservation slot. The machines run standard Ubuntu installations that you can SSH into during your time slot (see the sketch after the machine list below). Talk to Ian Goldberg to get access to the CrySP RIPPLE Facility. Once you have access, you can reserve time on the machines using the booking system. The specs of the current machines in RIPPLE are:

  • peep: 2 TB RAM, 80 cores, 128 Gbps net, spinny disk RAID, SSD RAID
  • grunt[0-7]: 1/4 TB RAM, 16 cores, 64 Gbps net, 8 NVIDIA Quadro K5000 GPUs

  • tick[0-6]: 1 TB RAM, 80 cores, 128 Gbps net
  • tock: 2 TB RAM, 80 cores, 128 Gbps net

  • clonk: 12 TB RAM, 144 cores, one 2 TB NVMe drive, one 350 GB NVMe drive

  • click[0-1]: 1/2 TB RAM, 32 cores, 160 Gbps net
  • clack[0-1]: 1/2 TB RAM, 32 cores, 160 Gbps net
  • cluck[0-15]: 16 GB RAM, 4 cores, 40 Gbps net
  • snoop[0-1]: Sandvine PTS 22600 (page 9 of sales brochure)
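
As an illustration, here is a minimal sketch of driving an experiment on a machine you have reserved, during your slot. It is not an official RIPPLE tool: the hostname and username are placeholders, and paramiko is just one convenient Python SSH library, assuming your SSH access is already set up.

    # Minimal sketch: run a command on a RIPPLE machine you have reserved.
    # Assumes SSH access is already set up; HOST and USER are placeholders.
    import paramiko  # third-party SSH library: pip install paramiko

    HOST = "tick0"          # placeholder; use the reserved machine's actual hostname
    USER = "your_username"  # placeholder

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER)

    # Sanity-check the hardware, then launch your experiment the same way.
    _, stdout, _ = client.exec_command("nproc && free -g")
    print(stdout.read().decode())
    client.close()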

SHARCNET:
A consortium of Ontario universities participates in SHARCNET; any student or faculty member at one of those universities can use it for free. On SHARCNET, you submit jobs to a batch system, specifying how many machines each job needs, and the scheduler runs your experiment at some point in the future. To get access, your supervisor has to sign up with Compute Canada and SHARCNET (free, but it takes a few days to process the signup); students can then sign up, linking their accounts to their supervisors'.
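
For concreteness, here is a minimal sketch of what a batch submission might look like, assuming the cluster uses the SLURM scheduler (as current Compute Canada systems do); the account name, resource requests, and experiment command are all placeholders, not a real job.

    # Minimal sketch of batch submission, assuming a SLURM scheduler.
    # Every value below is a placeholder, not a real account or experiment.
    import subprocess
    import textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        # Placeholder account (your supervisor's Compute Canada group):
        #SBATCH --account=def-yoursupervisor
        # How many machines you want the job to run on:
        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=16
        # Wall-clock time limit:
        #SBATCH --time=02:00:00
        srun ./my_experiment
    """)

    with open("job.sh", "w") as f:
        f.write(job_script)

    # sbatch queues the job; the scheduler decides when it actually runs.
    subprocess.run(["sbatch", "job.sh"], check=True)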

PlanetLab:
Many universities and other organizations around the world participate in PlanetLab. Each such organization contributes a couple of machines to the project, and anyone at any participating organization can use all of the machines around the world. The machines are not reserved; your experiments run at the same time as everyone else's. This makes it hard to get "clean" measurements, but it lets you see what real worldwide Internet latencies, congestion, packet losses, etc. do to your protocols. Sign up on the PlanetLab website, specifying University of Waterloo as your institution. Your request will be routed to the person in charge of PlanetLab at uWaterloo, who happens to be Ian Goldberg.
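
As a sketch of the kind of measurement PlanetLab is useful for, the following Python snippet times TCP connection setup to a list of hosts; in a real experiment you would deploy it to many PlanetLab nodes at once and compare results. The target hosts and port are placeholders.

    # Minimal sketch: time TCP connection setup to remote hosts.
    # Targets are placeholders; on PlanetLab you would run this from many
    # nodes worldwide to observe real Internet latency and loss.
    import socket
    import time

    TARGETS = ["example.com", "example.org"]  # placeholder hosts
    PORT = 80

    for host in TARGETS:
        start = time.monotonic()
        try:
            with socket.create_connection((host, PORT), timeout=5):
                rtt_ms = (time.monotonic() - start) * 1000
            print(f"{host}: TCP connect took {rtt_ms:.1f} ms")
        except OSError as e:
            # Congestion and packet loss show up as timeouts/errors.
            print(f"{host}: failed ({e})")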