Bioinformatics Group



Manual Printer Setup for Direct-to-Printer Access

Linux Setup

Adding Printers manually

Your local Web Browser

  • Using a web browser, open this URL:
  • Click the Add Printer button
    • LPD/LPR is the most commonly supported type
      • Most of our printers also support IPP, HTTP and, for HP printers, AppSocket/HP JetDirect
    • Connection:
      • Example: lpd://printer-hostname/queue or socket://printer-hostname:9100
    • Name: a local name you will give this printer
    • Location: where the printer is located
    • Sharing: leave this unchecked
  • Continue
  • Model: the printer model goes here. Typically the Postscript (recommended) (en) driver is preferred
    • Example: HP LaserJet P2055 Postscript (recommended) (en)
  • Authentication
    • Provide your normal Login Userid and Password
  • Set Default Options
    • Finishing Panel - change duplexing, etc., as desired
    • Image Quality - dpi and print density
    • General - default page size and tray
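The same queue can also be created from the command line with lpadmin instead of the web interface. This is only a sketch: the host name below is a placeholder, and the make_uri helper is illustrative, not part of any BIF tooling.

```shell
# Build a CUPS device URI for a given transport and printer host.
# (Helper sketch only -- the host name used below is a placeholder.)
make_uri() {
  case "$1" in
    lpd)    echo "lpd://$2/queue" ;;       # LPD/LPR, most widely supported
    socket) echo "socket://$2:9100" ;;     # AppSocket/HP JetDirect
    ipp)    echo "ipp://$2/ipp/print" ;;   # IPP
  esac
}

make_uri lpd printer1.example.uwaterloo.ca

# To actually create and enable the queue, run (as root) something like:
#   lpadmin -p bif_bw2 -E -v "$(make_uri lpd printer1.example.uwaterloo.ca)" \
#           -P /path/to/model.ppd
```

The -E flag enables the queue and makes it accept jobs immediately; without it the new queue sits paused.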


Status and Troubleshooting

Printers Built in Web Server

  • You can view the printer's status directly to see if it is:
    1. ) out of paper
    2. ) experiencing a paper jam
    3. ) low on or out of toner
    4. ) ready to print
  • Visit the printer's web page:

print.cs print job status

  • You can view the print.cs job queue to see what is happening with your print job
    • Note: the printer can be ready but your job might have an error - or someone else's failed print job may be blocking your printout.
  • Visit:
    • If you need to delete a print job please contact Mike Gore
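The queue can also be checked from the command line with the standard CUPS client tools. A sketch (the queue name bif_bw2 is a placeholder; the commands are echoed with a description rather than run, so this is safe to paste anywhere):

```shell
# Standard CUPS client commands for inspecting a print queue from any
# machine configured against the print server (queue name is a placeholder).
QUEUE=bif_bw2
echo "lpstat -p $QUEUE   # is the queue enabled and accepting jobs?"
echo "lpstat -o $QUEUE   # list pending jobs on the queue"
echo "lpq -P $QUEUE      # BSD-style queue listing, if you prefer"
```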

Very important note about Documents and Paper size

  • Please note: if you send a PDF file that uses the A4 paper size, the job will fail and cause the printer to wait for manual intervention!
    • A vast number of published PDF documents use the A4 paper size; this is the most common printing problem we see.
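One workaround is to re-page the PDF onto Letter with Ghostscript before sending it. This is a sketch: a4_to_letter is a hypothetical helper (it echoes the command for review rather than running it), and it assumes gs is installed; the file names are placeholders.

```shell
# Sketch: re-page an A4 PDF onto Letter with Ghostscript, so the printer
# never sits waiting for A4 paper. -dFIXEDMEDIA forces the requested page
# size and -dPDFFitPage scales the content to fit it. The helper echoes
# the command for review; remove the echo to actually run it.
a4_to_letter() {
  echo gs -q -sDEVICE=pdfwrite -sPAPERSIZE=letter -dFIXEDMEDIA -dPDFFitPage -o "$2" "$1"
}

a4_to_letter input-a4.pdf output-letter.pdf
```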

Replacement Toner

  • There are spare toners in DC2582. If you use the last one you MUST contact Wendy Rush or Mike Gore to have new ones ordered!

Printing - via monod - old method do not use

  • This is just here for historical reference
  • Log into monod
    • Add the host IP to /etc/cups/cupsd.conf in the section starting with <Location> (look about 80% of the way down the file)
    • Restart cups: /etc/init.d/cupsys restart
  • Client computer needs the following changes to /etc/cups/cupsd.conf
          # Show shared printers on the local network.
          Browsing On
          BrowseOrder allow,deny
          BrowseAllow all
          BrowseLocalProtocols CUPS dnssd
          BrowseAddress @LOCAL
          BrowseInterval 3600
          BrowseTimeout 3700
  • On Client computer restart cups: /etc/init.d/cups restart
  • Add default printer: lpadmin -d bif_bw2

Ming Li - m160 research cluster

  • m160 - Ming Li's research cluster

Bioinformatics Wiki

NOVO Bin Ma - Research cluster

  • NOVO - Bin Ma research cluster

Contacting the group

BIF CGL machine room cluster - next to CS racks

  • Physical Machines:
    • Solaris Hosts: hmsbarracouta,hmsbeagle
    • Ubuntu Hosts: codon chromosome histone chromatin
  • VMs (hosted by codon)
    • dna monod evolve
  • Spare Unused machines:
    • Proteome,Genome,Element

Access from the CS Core

  • The root user on cscf can log into the BIF servers listed below as cscf-adm using a public key
  • See notes below for becoming root

Remote IPMI management


  • There is an 8-port KVM attached to all of the servers listed below; it is located at the bottom of the main BIF rack in DC2302A, with the keyboard and screen in the rack to the left.
    • KVM Port Assignments:
    • Port 1 - chromosome- note: no KM - use USB keyboard
    • Port 2 - hmsbeagle
    • Port 3 - hmsbarracouta
    • Port 4 - codon - note: no KM - use USB keyboard
    • Port 5 - chromatin - note: no KM - use USB keyboard
    • Port 6 - proteome
    • Port 7 - histone - note: no KM - use USB keyboard
    • Port 8 - genome
Networking

  • 17 Oct 2011: Must be revised - everything is connected to dc2303a-cs1b; all of the wall jack connections except one have been disconnected

  • SunFire x2200 Network Jacks - rear view:
  • Switches
    • Netgear24 (internal private network)
    • Netgear5-5 -> I11 vlan:7

Name           Model                Net 1                Net 2                      Net 3                 Net 4                      KVM
Chromosome     Sunfire x2200        NC                   DC2303a-i17 vlan:1896      DC2303a-i19 vlan:7    Netgear24-22 vlan:private  KVM1
Codon          Sunfire x2200        NC                   DC2303a-i18 vlan:1896      DC2303a-i20 vlan:7    Netgear24-23 vlan:private  KVM4
Chromatin      Sunfire x2200        NC                   NC                         Netgear5-1 vlan:7     Netgear24-11 vlan:private  KVM5
Histone        Sunfire x2200        NC                   DC2303a-i10 vlan:1896      Netgear5-3 vlan:7     Netgear24-12 vlan:private  KVM7
Hmsbarracouta  Super Micro          Netgear5-2 vlan:7    Netgear24-21 vlan:private  -                     -                          KVM3
Hmsbeagle      Super Micro          DC2303A-i9 vlan:7    Netgear24-24 vlan:private  -                     -                          KVM2
Element        Dell GX260           DC2303A-i12 vlan:7   -                          -                     -                          -
Proteome       Dell Poweredge 6650  DC2303A-i15 vlan:7   Netgear24-15 vlan:private  -                     -                          KVM6
Genome         Dell Poweredge 6650  DC2303A-i14 vlan:78  Netgear24-13 vlan:private  -                     -                          KVM8



  • See Bioinformatics and cscf-adm in the safe for all of the following

Primary File Servers

Use su to become root - password is in safe

Fixing boot errors
  • If you have a boot_archive or repository.db error, use the matching procedure below
  • Fix boot_archive
    • boot failsafe image
    • yes to mount file system on /a
    • rm -f /a/platform/i86pc/boot_archive
    • bootadm update-archive -R /a
    • reboot
  • Fix repository.db
    • /lib/svc/method/fs-root
    • /lib/svc/method/fs-usr
    • /lib/svc/bin/restore_repository
    • pick boot

Fixing NFS mount LDAP related errors
Restarting LDAP service on evolve

Make sure codon and the VMs are running

  • Dependencies: codon must be up and running and the VM evolve must be up and running
  • Manual method:
    1. ) log into evolve as cscf-adm and sudo bash to become root
    2. ) /etc/init.d/slapd restart - restarts the LDAP service
  • Automatic method:
    1. ) log onto codon as cscf-adm
    2. ) ./fixit - this will fix all services on all hosts provided they are powered up and online
Restarting LDAP clients - exporting NFS shares from hmsbarracouta and hmsbeagle

Make sure codon and the VMs are running

  • Initialize LDAP: run ldap_init on BOTH of hmsbarracouta AND hmsbeagle
  • Manual method:
    1. ) login as cscf-adm
    2. ) su - password in safe
    3. ) ./ldap_init _password (BIF_LDAP_password is in the safe)
      • Note: the ldap_init script does this:
                       ldapclient -v init \
                          -a proxyDN=cn=proxyagent,ou=profile,dc=bioinformatics,dc=uwaterloo,dc=ca \
                          -a proxyPassword=<proxypassword> evolve-local
                        Note:  <proxypassword> is the LDAP password - in safe
    4. ) exportfs -a (fixes LDAP related NFS errors)
    5. ) Debugging
      • showmount -a lists which clients currently have which directories mounted
      • showmount -d lists just the directories mounted by clients
  • Automatic method:
    1. ) log onto codon as cscf-adm
    2. ) ./fixit - this will fix all services on all hosts provided they are powered up and online
Restart all other LDAP clients and services everywhere and remount
  • We need to bring up services on dna,codon,chromosome,histone,chromatin
    1. ) login to codon as cscf-adm
      • ./fixit - ignore any errors

  • The fixit script will redo the client machines' LDAP-authenticated mounts, which are handled with autofs
  • /etc/init.d/autofs restart
  • Autofs Status Example
          root@codon:~# /etc/init.d/autofs status
       Configured Mount Points:
       /usr/sbin/automount --timeout=60 /net/home file /etc/auto.home 
       Active Mount Points:
       /usr/sbin/automount --pid-file=/var/run/autofs/ --timeout=60 /net/home file /etc/auto.home
  • /etc/auto.home example:
       root@codon:~# more /etc/auto.home
       # Mount home directories from zfs server.
       *   -fstype=nfs,rw,rsize=32768,wsize=32768,hard,intr,bg,noacl   hmsbarracouta-local:/export/zfs/&
Restarting caching name service on machines

Note: if you have problems resolving names from a host, restart the caching name service

  • The fixit script runs the nscd command on dna, codon, chromosome, histone, chromatin
    1. ) /etc/init.d/nscd restart - restarts the caching name service
LDAP on Evolve - docs
  • /etc/ldap.conf - config file

PAM: evolve pam is configured like this:

root@evolve:/etc/puppet/modules# grep ldap /etc/pam.d/common*
   /etc/pam.d/common-auth:auth    sufficient ignore_unknown_user
   /etc/pam.d/common-password:password   sufficient ignore_unknown_user


   root@evolve:/etc/puppet/modules/ldap# ls *
   ldapcert.key  ldap.conf    nsswitch.conf    pamldap.conf
   ldapcert.pem  ldap.secret  oldpamldap.conf

   authclient.pp  client.pp

  • ldap.secret contains the master password

Understanding ZFS exports and LDAP access from hmsbarracouta

  • ZFS TWIKI page: ZFS
  • Useful documents outlining Solaris and Linux shares:
  • List exports: showmount -e hmsbarracouta.cs
  • Local export file: /etc/dfs/sharetab
    • Example entry: /export/vm - nfs sec=sys,rw=codon-local,root=codon-local
      • The share /export/vm is exported read-write to codon-local, with root access granted to codon-local, using sys security (sys is a host-based trust)
      • In this case codon-local is defined in /etc/hosts as
    • Example entry: /opt - nfs sec=sys,rw=nfsclients,root=nfsclients
      • The share /opt is exported read-write to the nfsclients list, with root access granted to the same hosts, using sys security
    • Example entry: /export/databases - nfs sec=sys,rw=nfsclients,root=nfsclients
      • The share /export/databases is exported read-write to the nfsclients list, with root access granted to the same hosts, using sys security
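On the Solaris side, entries like these come from the ZFS sharenfs property on the dataset. A hedged sketch of how such an export would be set (the pool/dataset name is a placeholder; check the real pool layout on hmsbarracouta first, and note the sketch echoes the commands for review rather than running them):

```shell
# Sketch: how an entry like the /export/databases example above is produced
# on a Solaris ZFS server. The pool/dataset name below is a placeholder.
DATASET=pool0/export/databases
OPTS='sec=sys,rw=nfsclients,root=nfsclients'

# Echoed for review; run the printed commands as root on the ZFS host.
echo "zfs set sharenfs='$OPTS' $DATASET"
echo "zfs get sharenfs $DATASET   # verify the property took"
echo "shareall                    # re-share everything"
```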

Solaris 10 services

  • svcs - list service status
  • svcadm - control services
    • Example: svcadm restart puppetd
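A few more SMF idioms that go with svcs/svcadm. The FMRI below is an example only (list the real ones with svcs -a on the host), and the commands are echoed with descriptions so the sketch is safe to run anywhere:

```shell
# Common Solaris 10 SMF idioms (sketch; the FMRI is an example).
FMRI=network/ldap/client
echo "svcs -xv             # explain services that are down or in maintenance"
echo "svcadm enable $FMRI"
echo "svcadm restart $FMRI"
echo "svcadm clear $FMRI   # clear maintenance state after fixing the cause"
```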

Managing accounts

Note: You must do the next two steps before managing accounts

  1. ) Log onto evolve as cscf-adm
  2. ) make sure /net/opt/bin is in your path
Adding users
  • bif-adduser userid
    • Enter ldap password: (see BIF LDAP password in safe)
         Full name : Test User
         Office : DC1234
         Phone : 1-519-123-4567
         Current email (leave blank for default)
         Email :
         Purpose : Test User
         Date created 2010-02-09.
         ex. browndg, mli
         Supervisor: nobody
         Generated password: ucsyiderg
Deleting users

Deleting user accounts
We have no policy on deleting user accounts at this time. If you make an error in creating an account and really need to delete it, you must do so manually.
First, use ldapvi as bifadmin on evolve and delete all entries related to the user. There should be a group, a user, and an entry in the research group. You may use another LDAP editor if you prefer.
Second, log into hmsbarracouta and hmsbeagle and delete the user's ZFS home directory using zfs destroy. If you have just created the user, only hmsbarracouta will have the home directory, as the backup task will not yet have copied the new directory.

Changing password

  • To change a user's password in LDAP, they must use the bif-passwd program. This program will change both the unix and samba password hashes.
  • The default shell configuration for both bash and csh/tcsh is currently set to alias passwd to bif-passwd. If this alias is corrupted for any reason, it will result in difficulty changing passwords.
  • The bif-passwd program is subject to the same special-character bug as bif-adduser.
  • bif-passwd userid - you can change someone's password without knowing it first - even if you are root - this needs to be fixed
password or user update problems

Notes: If the LDAP server is not running you will not get an obvious error. Solution: restart the LDAP server and clients first. Example error: as root you try to change a password and get prompted for the old password; regardless, nothing you try will work.

  • /etc/dfs/shareall - automatically created

Other Servers

Use sudo bash to become root - you will not be prompted for a password

Sunfire X2200

Dell Power Edge 6650

Currently offline - spare machines

  • proteome
  • genome

Virtual Machines

Vmware Infrastructure Web Access

Note: make sure the NFS shares are working - problems with LDAP (see below) can cause access failures and other strange errors!

  • Web admin interface: https://codon.cs:8333 - must use IE Web browser
    • cscf-adm or bifadmin - password in safe
Virtual Machines
  • cscf-adm - sudo bash to become root
  • VM Locations:
    • dna // (was //hmsbarracouta/export/vm/
    • monod //
    • evolve //

Troubleshooting the VM server on Codon
  • Sometimes a VM will not start no matter what you try
    1. ) /etc/init.d/vmware stop
    2. ) /home/cscf-adm/bin/kill-vmware
    3. ) /home/bifadmin/vmware-server-distrib/bin/vmware-config.p
      • If you see any errors, like the network module not loading, see Reinstalling VMware server
      • Answer 8333 for the secure port for web management - for some reason this keeps changing to 443
    4. ) Now wait for a long time for the VMs to start

Reinstalling VMware server
  1. ) Make sure you stop all vmware tasks - see the VM server troubleshooting section first
  2. ) cd /home/bifadmin/vmware-server-distrib
    • Answer 8333 for the secure port for web management - for some reason this keeps changing to 443
  3. ) ./ - pick all defaults

Reconfiguring VMware Tools on client machines

Note: you might have to reconfigure VMware Tools after a kernel upgrade

  • Reinstall/Install: /usr/bin/
    • Note: if this does not work, open
      1. ) Vmware Infrastructure Web Access
      2. ) Inventory
      3. ) (highlight your VM)
      4. ) Commands
      5. ) Configure VM
      6. ) Power
      7. ) Check and install VMware tools before Power on
  • Network is broken
    • check /etc/udev/rules.d/70-persistent-net.rules and make sure the assumptions match /etc/network/interfaces
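The usual failure mode: after a clone or hardware change the NIC's MAC address changes, so the old rules file pins eth0 to a MAC that no longer exists and the interface comes up under a new name (or not at all). A sketch for comparing the recorded MACs against reality; the extract_macs helper is hypothetical, and it is demonstrated on a stand-in file in /tmp so it is safe to run anywhere:

```shell
# Pull the MAC addresses recorded in a persistent-net rules file so they
# can be compared against the hardware and /etc/network/interfaces.
extract_macs() {
  grep -o '..:..:..:..:..:..' "$1" | sort -u
}

# Demo on a stand-in file (the real file is the one named in the note above):
printf 'SUBSYSTEM=="net", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth0"\n' \
  > /tmp/demo-persistent-net.rules
extract_macs /tmp/demo-persistent-net.rules

# If the recorded MAC is stale, remove the real rules file and reboot so
# udev regenerates it:
#   rm /etc/udev/rules.d/70-persistent-net.rules && reboot
```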


  • To restart all services everywhere:
    1. ) log onto codon as cscf-adm
    2. ) ./fixit - this will fix all services on all hosts provided they are powered up and online
  • Manual steps follow in the next sections for reference:
Restart CPLEX license Server:
    • /etc/init.d/ilog-ilm restart
Restart Sun Grid Engine:
    • /etc/init.d/sgemaster.uwbif stop
    • /etc/init.d/sgemaster.uwbif start
Check Grid Engine
    1. ) ssh cscf-adm@dna
    2. ) qstat -f
    • Fail Example:
         cscf-adm@dna:~$ qstat -f
         error: commlib error: can't connect to service (Connection refused)
         error: unable to contact qmaster using port 6444 on host "evolve-local"
    • Working Example:
         cscf-adm@dna:~$ qstat -f
         queuename                      qtype resv/used/tot. load_avg arch          states
         all.q@chromosome.cs.uwaterloo. BP    0/0/4          0.02     lx24-amd64    E
         ---------------------------------------------------------------------------------    BP    0/0/0          -NA-     lx24-amd64    au
         ---------------------------------------------------------------------------------  BP    0/0/4          -NA-     lx24-amd64    auE
         brown@chromatin.cs.uwaterloo.c BP    0/0/3          -NA-     lx24-amd64    au
         --------------------------------------------------------------------------------- BP    0/0/1          -NA-     lx24-amd64    au
         ---------------------------------------------------------------------------------  BIP   0/0/1          -NA-     lx24-amd64    au



Procedures documentation

CUPS admin

  • CUPS printing admin commands like cupsenable can be run from the command line without the CUPS server web interface - this is great for restarting queues on monod. Print queues die rather often.


Puppet

  • evolve is the puppet master
  • resolv.conf = evolve:/etc/puppet/modules/network/files/solaris-resolv.conf
    • dns-nameservers mentioned in /etc/puppet/modules/network/templates/interfaces.erb
  • autofs
    • /etc/puppet/modules/net/files/*

Configuration files

Our puppet configuration files are on evolve in the /etc/puppet directory. The majority of the configuration logic is in the /etc/puppet/modules directory. We follow standard puppet module procedures.

Module Purpose
acct Process accounting.
apt apt config files
bash bash config files and associated default profiles
bifaccounttools Dependencies for BIF password change and user addition tools
cluster Various cluster configurations including package installation list for compute nodes and interactive nodes
cron cron entries that do not fit anywhere else University of Waterloo
csh csh package and related configuration files.
cups Printer client files.
ldap LDAP client and server configuration.
motd Message of the day.
net NFS mounts.
network Configuration for interfaces and DNS.
nfs NFS services and configs.
ntp Time management configuration through ntp.
pam Authentication services. This includes modifications for LDAP as well as limiting user logins to interactive nodes.
postfix Default mail services for all machines.
puppet Mostly client configuration. Puppet can manage itself if you are careful.
security Limits and access configuration to go with the PAM configuration files.
ssh SSH services.
tcsh tcsh and related configuration files.
user Local users such as bifadmin, cscf-admin, and database admin.


  • The remaining configuration is in /etc/puppet/manifests. Here, templates.pp provides basic templates for various types of machines including cluster nodes, interactive nodes, servers, and Solaris machines. We configure the machines themselves in the nodes.pp file. The site.pp file only loads our templates and nodes at this point.

Known problems and fixes

  • Puppet clients seem to freeze or break occasionally. We use a cron entry to restart the puppet client every two hours if it is stopped. To find frozen clients, we run puppet_check on evolve. This script looks at the puppet master database and emails a warning for any clients that have not checked in within a day.
  • Puppet's management of Solaris is currently somewhat limited, and thus we manually manage our Solaris machines for the most part. Initially we managed cron through puppet, but it turns out that you can only install root cron entries on Solaris machines.
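The two-hourly restart could be implemented with a root cron entry of roughly this shape (an illustration only, not the exact entry used on the BIF machines; the init-script path is an assumption):

```
# Illustrative /etc/cron.d entry: at minute 0 of every second hour,
# start the puppet client only if it is not already running.
0 */2 * * * root /etc/init.d/puppet status >/dev/null 2>&1 || /etc/init.d/puppet start
```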


CPLEX users on remote computers and the CS firewall
  • Have CS open port 3000 between your desktop and evolve
  • Port 3000 is the ILOG License Server port
License Server

Note: runs on evolve so make sure the machine is running

  • Restart CPLEX license Server:
  • /etc/init.d/ilog-ilm restart
License File
  • hmsbarracouta:/export/zfs/akhudek/cplex/access.ilm
  • on evolve see /home/cscf-adm/plex/access.ilm

Power Up

Circuit Breakers

  • All of the UPSes are fed from CS panel breaker 12B6A #20 and PDS breaker 1C

UPS notes

  • CS panel breaker 12B6A #20 - attached to
    • Physical Machines
      • Solaris Hosts: hmsbarracouta,hmsbeagle
      • Ubuntu Hosts: codon chromosome histone chromatin
    • VMs (hosted by codon)
      • dna monod evolve
  • PDS breaker 1C
    • secondary power supply for hmsbarracouta,hmsbeagle

No power
  • Spare Unused machines:
    • Proteome,Genome,Element

Startup order

Note: chicken-and-egg issue - evolve runs the LDAP service, evolve is a VM on codon,
codon needs NFS shares from hmsbarracouta, and hmsbarracouta needs the LDAP service from evolve! sigh

This section is for reference

Power Down

Shutdown order

If you are using cplex, change '*monod*' to '**' in your ~/cplex/access.ilm file. You may have to log out and back in to dna to pick up a new path for the cplex interactive utility as well.

Mailing Lists

External Documentation

Special Machine notes

CSCF Subscription Info

Topic attachments
I Attachment Action Size Date Who Comment
PDFpdf BIFserverdesignandpolicy.pdf manage 191.2 K 2010-04-15 - 12:50 MikeGore BIF server design and policy - 15Apr 2010 Alex Hudek
Topic revision: r61 - 2014-01-15 - MikeGore
