Lab Setup
We now have several clients and servers functioning.
These machines are essentially part of a private network
with limited connection to the outside world.
Names are temporary and may need to be changed.
Thanks to
HP
and the
Gelato Federation
for the Itanium systems and for funding for some of the
research being conducted in this lab.
Also thanks to
NSERC
which provided funding for the client machines.
The server named nardo was purchased by Ian Munro through CITO funding.
Users
The idea behind this lab is that it is reserved for exclusive use,
as needed, by experiments involving networks, collections of machines,
and/or high-performance servers.
So far we've been loosely cooperating using the following guidelines:
-
Assume that it is OK to use any machine unless you are notified otherwise
via email (so please check your email before using the machines).
Note, however, that others may be using them at the same time.
-
If you want to reserve/lock one or more of the machines for
exclusive use, for experiments in which measuring performance
is the main goal, please email
the list of possible users stating which machines you want to use
and for how long.
Typically the type of request is something like:
I'd like to run some experiments on nardo,
the 8 client machines and the network overnight (say from 8 pm to 8 am).
Or I'd like to run an experiment on nardo that should take about
two hours starting at noon tomorrow.
Generally we ask the requester to check whether this would cause problems
or conflicts for other users.
brecht at cs.uwaterloo.ca
db2paria at student.math.uwaterloo.ca
kmsalem at uwaterloo.ca
x5li at math.uwaterloo.ca
imunro at uwaterloo.ca
alopez-ortiz at softbase.math.uwaterloo.ca
mhe at softbase.math.uwaterloo.ca
asharji at plg2.math.uwaterloo.ca
wpjvanhe at uwaterloo.ca
Hardware
Lab Layout and Access
Note that the lab is intentionally isolated from the campus network.
To access the lab you must first log into a uwaterloo.ca machine
and then ssh into grand.uwaterloo.ca (you'll go through a firewall
to get there).
From grand you can access all of the machines through a 100 Mbps network.
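The two-hop login described above can be made automatic on the client side.
A minimal sketch of an `~/.ssh/config` fragment, assuming an OpenSSH client
recent enough to support `ProxyJump`; the jump host name and userid below are
placeholders, not actual lab configuration:

```
# Hypothetical ~/.ssh/config fragment (ProxyJump needs OpenSSH 7.3+).
# First hop: any uwaterloo.ca host you have an account on (placeholder name).
Host campus
    HostName some-host.uwaterloo.ca      # placeholder: your campus login host
    User your-campus-userid              # placeholder

# Second hop: grand, reached through the campus host and the firewall.
Host grand
    HostName grand.uwaterloo.ca
    User your-grand-userid               # placeholder: your lab userid
    ProxyJump campus
```

With this in place, `ssh grand` performs both hops in one command.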
All machines except grand are connected via a 1 Gbps network.
Your home directory on grand is shared across all of the client machines
but not any of the server machines.
As a result, your userid number on all of the systems in this lab is the
same as the userid number you are assigned on grand.
This number is likely different from the userid number you use
on other machines, which is one reason we won't be mounting any other
systems' file systems on any of these machines.
None of the machines in this lab are backed up.
They are used for conducting experiments and all data that you
want to ensure is safe must be copied to another machine that
is backed up.
Note that in some cases the systems (especially the servers)
will be running experimental operating systems that may or may
not be stable.
Servers
-
nardo: 2-way Intel Xeon (server)
-
Dell Poweredge 2600
-
Two 2.4 GHz Xeon processors,
with NetBurst micro-architecture and Hyper-Threading Technology,
L2 cache 512 KB
-
400 MHz front side bus,
-
4 GB Memory (PC2100 DDR)
-
Embedded LSI Logic 53C1030 dual integrated PCI Ultra LVD SCSI controller,
PERC4/Di (dual channel) with 128 MB battery-backed cache
-
5 x 18 GB 15,000 RPM Ultra 320 SCSI drives
-
1 x 72 GB 10,000 RPM Ultra 320 SCSI drive
-
Embedded Intel 10/100/1000 NIC
-
Integrated ATI-Rage XL 8MB video controller
-
Add-in ERA/O management card
-
5U rack mount
-
tissimo: 2-way Itanium II (server) in the machine room
-
HP rx2600
-
Two 900 MHz Itanium II processors,
L1 cache 32KB, L2 cache 256KB, L3 cache 1.5 MB
-
4 GB Memory
-
36.4 GB Model: ATLAS10K3_36_SCA
-
10/100/1000BaseT NIC -- Tigon3 (PCI:66MHz:64-bit) on the motherboard
-
10/100BaseT NIC
-
10/100BT management LAN
-
lato: 1-way Itanium II (workstation) in the Shoshin lab
-
HP zx2000
-
One 900 MHz Itanium II processor,
L1 cache 32KB, L2 cache 256KB, L3 cache 1.5 MB
-
1 GB Memory
-
36.4 GB SEAGATE Model: ST336706LW
-
Intel PRO/10/100/1000 NIC (on the motherboard)
-
3COM 3C996B-T gigabit NIC
-
saugeen: 2-way P II (workstation) in the Shoshin lab
-
Dell P400
-
Two 400 MHz Pentium II processors
L1 I cache: 16K, L1 D cache: 16K, L2 cache: 512K.
-
512 MB Memory
-
40 GB Western Digital 7200 rpm IDE
-
10/100BaseT 3Com PCI 3c905B Cyclone
-
3COM 3C996B-T gigabit NIC
-
Diamond Fire Pro 1000 graphics card
Clients
-
client1 - client8: 2-way Pentium III in the machine room
-
Two 550 MHz PIII processors, L1 I cache: 16K, L1 D cache: 16K, L2 cache: 512K
-
256 MB memory
-
9.0 GB QUANTUM ATLAS SCSI disk
-
Intel PRO/1000 NIC (192.168.20.*)
-
Intel PRO/100 NIC (192.168.10.*)
Network Connections
-
Cisco Catalyst 2900 XL 24-port 100BT switch:
using subnet 192.168.10.0
through eth0 on all clients and servers.
Access the appropriate interface
by appending -100bt to the hostname.
E.g., client1-100bt, client2-100bt or tissimo-100bt.
-
Netgear 8-port GS508T 10/100/1000BT switch.
Address is 192.168.20.254
(now that the new 24-port gigabit switch has arrived,
nothing is connected to this switch).
-
Dell 5224 24-port 10/100/1000BT switch:
using subnet 192.168.20.0.
The address for this switch is 192.168.20.254.
The clients and tissimo use eth1 to interface to this network.
Currently all 8 clients and tissimo are connected to this switch through
their 1000BT interfaces.
Access the appropriate interface by appending -1k to the hostname.
E.g., client1-1k, client2-1k, or tissimo-1k.
-
Internet/Campus Network: connections into the lab
and out to the Internet (from this lab) only occur
at the host grand
(traffic is not routed through it).
Accounts on grand and the hosts client1 - client8 share
home directories.
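The hostname-suffix convention above (-100bt vs. -1k) amounts to
per-interface name-to-address mappings, one per subnet. A sketch of what the
corresponding `/etc/hosts` entries might look like; the host numbers in the
last octet are assumptions for illustration, not taken from the actual lab
configuration:

```
# Hypothetical /etc/hosts fragment for the two lab subnets.
# -100bt names select eth0 on the 100 Mbps Cisco switch (192.168.10.0);
# -1k names select eth1 on the gigabit Dell switch (192.168.20.0).
192.168.10.1    client1-100bt
192.168.20.1    client1-1k       # host numbers here are illustrative only
192.168.10.9    tissimo-100bt
192.168.20.9    tissimo-1k
```

Connecting to, say, `client1-1k` rather than `client1-100bt` thus chooses
which network an experiment's traffic traverses.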
Documentation
Software
Web Servers
-
userver: a micro web server designed for conducting experiments
with web server design and implementation,
in particular event notification and delivery mechanisms.
Clients / Workload Generators
Control Software
-
trun: a set of Perl scripts developed and used over several years
by people at HP.
Created:
Tue Jan 21 11:05:04 EST 2003
Last modified:
Wed Sep 10 08:13:23 EDT 2003