Tetherless Networking Lab (Keshav)

Desktop client systems

Keshav's students mostly use APD clients, although there are a couple of School PCs. There are also a couple of "thin client" students who kept the keyboard, mouse, and screen and returned the thin client.

The latest APDs are clone systems from Group4 (the so-called "green" systems) that have a dual NIC plus a wireless NIC. These systems are (at time of writing, 2007-07-11) called sleet, microburst, whiteout, gale, coldfront, and breeze. There is also an iMac called shamal.

Older APDs include white-box clones from Onward and a number of under-powered Mac Minis, most of which now run Linux and act as network devices, plus other miscellaneous items such as researcher-loaner equipment.

In general, there are lots of PC parts; systems are modified, rearranged, moved, and deployed at the students' discretion, and generally we are not notified. There are also dozens of wireless appliances ("Soekris boxes" and "Via mini-boxes").


The lab has a printer, slush.cs, which is a standalone HP 2200 not set up on a Unix queue. They have no interest in quotas, and in general the students never use Solaris. The printer is set up with an ACL that restricts access to campus. Individuals set up their client systems as required.

Server infrastructure

Keshav has a server infrastructure in dc3556 (racks 4--6) consisting of:

  • IP-addressable KVM (tcl-kvm3.cs) with 16 ports. There is management software for Windows and Linux; see below.
  • squall -- RedHat Enterprise Linux 4 (RHEL4), to which a 4TB RAID5 disk shelf is attached. There is a two-drive, 20-tape tape robot connected here, too (running CA Arcserve commercial backup software).
  • sandstorm -- RHEL4 compute server
  • lightning -- Win2003 Server hosting the Akimbi Slingshot virtualization management system
  • snowball1..snowball10 -- virtualization worker systems
  • blow -- Ubuntu research server with hardware RAID1 for MySQL databases
  • blizzard -- Keshav's personal wiki system, running CentOS4 with software RAID1. The Commons Committee Wiki is here.
  • snowstorm -- CentOS4 -- this runs the Ensim web-hosting software. Keshav manages this system.

Other research systems:

  • hurricane
  • hail
  • blackice
  • monsoon
  • windstorm

These run a mixture of out-of-date Fedora Core releases and some newer Ubuntu releases. They are research systems and I have no interaction with them.


The principal backup system is the tape robot on squall, which backs up squall (i.e. the RAID5) and also two remote systems: blizzard and snowstorm. Other systems are not backed up, and lab members are advised to copy important data to squall for backup purposes.
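Copying data to squall can be scripted by lab members; the following is a minimal sketch (the source directory and the destination path on squall are assumptions, and the actual rsync invocation is left commented out so it can be reviewed before running):

```python
import subprocess

# Hypothetical paths: adjust the source directory and the destination
# on squall (anywhere on the backed-up RAID5 volume will do).
SRC = "/home/alice/experiments/"
DEST = "squall.cs.uwaterloo.ca:backup/experiments/"

# -a: preserve permissions/times, -v: verbose, -z: compress in transit,
# --delete: mirror deletions so the backup matches the source.
cmd = ["rsync", "-avz", "--delete", SRC, DEST]
print("would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # enable on a lab client with SSH access to squall
```

Run from cron on a lab client, this keeps a mirrored copy on squall that the tape robot then picks up with the rest of the RAID5.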

The backup server is accessed at http://squall.uwaterloo.ca:6060/. Passwords are in the card file under "Squall backups".

Network organization

The client computers in the lab are not, in general, joined to CS-GENERAL because there is no reason for them to do so: they use the 4TB quota-free Samba server on squall, and they have their own printer. They do their own backups via squall.

The servers in dc3556 are all on the CSCF research network (VLAN 7), and there is a private network (192.168/16) that supports the virtualization infrastructure. Only squall, sandstorm, and lightning are joined to the private network.

There is a VPN running on lightning.cs to provide access to the private network; it is a standard PPTP VPN (RFC 2637). For some reason, never determined, there's a problem with Windows client routing: when a Windows client connects, it fails to set the correct route for 192.168/16. The route for the VPN base of 192.168.127/24 is set, but not the whole 192.168/16 range (this may in fact be correct behaviour, since 192.168/16 isn't really a single class B network -- it's a concatenation of many /24 networks). In any case, the problem is easily solved on the client by issuing a manual route add command:

route add 192.168.0.0 mask 255.255.0.0 192.168.127.x

where 192.168.127.x is the address of the VPN endpoint (use "ipconfig /all" to find it). It ought to be possible to write a Windows script that parses the ipconfig output and does this automatically.
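The parsing could be done with a small script run on the client after connecting; a sketch in Python (the adapter name and the output format in the sample are assumptions based on typical `ipconfig /all` output):

```python
import re

def vpn_route_command(ipconfig_output: str) -> str:
    """Find the PPP adapter's address in `ipconfig /all` output and
    build the manual route command for the 192.168/16 private network."""
    # Grab the section for the PPP (PPTP) adapter: from "PPP adapter" up to
    # the next unindented line (a new section) or the end of the output.
    ppp = re.search(r"PPP adapter.*?(?=\n\S|\Z)", ipconfig_output, re.S)
    if not ppp:
        raise ValueError("no PPP adapter section found")
    # Match both the XP-style "IP Address" and Vista-style "IPv4 Address" labels.
    addr = re.search(r"IP(?:v4)? Address[ .]*: *([\d.]+)", ppp.group(0))
    if not addr:
        raise ValueError("no IP address in PPP adapter section")
    return f"route add 192.168.0.0 mask 255.255.0.0 {addr.group(1)}"

# Abridged, hypothetical sample of `ipconfig /all` output:
sample = """\
PPP adapter VPN Connection:

   IPv4 Address. . . . . . . . . . . : 192.168.127.34
   Subnet Mask . . . . . . . . . . . : 255.255.255.255
"""
print(vpn_route_command(sample))
```

On a real client the output of `ipconfig /all` would be fed in via `subprocess`, and the resulting command run with administrator rights.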

Lightning runs DHCP and DNS servers. See those software systems' configurations for details about address reservations and assignments within the private network.


There is a 16-port IP-addressable KVM (tcl-kvm3.cs.uwaterloo.ca) that connects most of the lab systems located in DC3556. It is a Dell-branded (Dell 2161DS) Avocent device. Client software is available for Windows and Linux; an ISO of the install media is located in eDocs/rsg/tetherless/dell2161.iso. The userid is cscf-adm and the password is the standard one as of July 2009. The dual mouse pointer takes some getting used to, but it's manageable.

Using the KVM for GUI access is particularly helpful for CentOS and RHEL because of the way those systems handle GUI administrative tasks. The root passwords are generally not known, but everyone who needs admin access has sudo permissions. Unfortunately, this doesn't work with X applications over SSH because of the two layers of X forwarding (no doubt an X expert could figure it out, but I couldn't). So the approach I use is:

  • log in to the console via the kvm
  • start a shell, and sudo the admin GUI tool you want to use. The name can be determined from the properties of the menu entry (in general, under RHEL/CentOS/Fedora it will be something like "system-config-*").

There is another local KVM in DC3556 that manages the remaining TCL systems. The decision about which system goes on which KVM was somewhat arbitrary. In particular, ten ports are used for the "snowball" virtualization worker systems, which frankly are never logged into locally and could be moved to the local KVM to free up remote ports for more important systems.


There are two systems (blast.cs, cloudburst.cs) and an IP-addressable powerbar (planetlab-powerstrip.cs) in dc3556 (rack 5). We have no operational responsibilities other than to power-cycle them.

Akimbi Virtualization Lab

There's lots to say about this. My original presentation to the group might be useful, but it's a bit dated.

  • tcl-akimbi.ppt: Powerpoint presentation of Akimbi presentation to TCL members

To use and/or manage the Akimbi software, you must use Internet Explorer at http://lightning.cs.uwaterloo.ca/. There is a "cscfadm" userid (the system doesn't allow hyphens) with the usual password (updated March 2008).


Mailing lists

TCL has two active mailing lists at lists.uwaterloo.ca: tetherless-interest and tcl-software-licences. The former is managed by TCL people. The latter is used as a mailing address for vendors who insist on having a valid email address for communications (e.g. RedHat); there is useful information in its list archive. For password and related information, see the files in eDocs://dd-networks/keshav-mailing-list.

RedHat Network subscriptions

Squall and sandstorm run RedHat Enterprise Linux 4 (i.e. fee-based, $50/year). The current subscription runs until 2009. See eDocs://dd-networks/keshav-rhn and the mailing-list archive for more information.

Topic attachments
  • tcl-akimbi.ppt (PowerPoint, 66.0 K, 2007-07-27, TrevorGrove): Powerpoint presentation of Akimbi presentation to TCL members
Topic revision: r12 - 2009-11-26 - RonaldoGarcia