SSHFS

Ubuntu - Access files from a remote computer as if they were part of the local filesystem - using SSH or SFTP

What: How to use the Ubuntu file manager to access files on any server you can SSH into

Using "Files", the Ubuntu file manager - recommended for all Ubuntu GUI desktop users

This is by far the easiest way to access files on any machine you can SSH into, using Files, Ubuntu's GUI file manager (a command-line equivalent is sketched after the steps)
  • Open Files - Ubuntu's default file manager
  • Open File Menu -> Connect to Server
    • Server Address -> ssh://linux.cs.uwaterloo.ca/path
      • Replace linux.cs.uwaterloo.ca with the host you want to use
      • Replace path with the file path you wish to access - for example, your home directory path
      • -> Connect Button
    • Enter your password for linux.cs.uwaterloo.ca ->
      • Username: Your linux.cs userid
      • Password: Your linux.cs password
        • -> Connect Button
  • Open Bookmarks Menu -> Bookmark This Location
    • Right-click on the bookmark that was just created in the left-hand pane of the Files file manager
      • Rename it to give it a useful name
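If you prefer a terminal, the same GVFS mount that Files uses can be triggered from the command line. A minimal sketch, assuming the example host from above (on older Ubuntu releases the command was gvfs-mount rather than gio mount):

gio mount ssh://linux.cs.uwaterloo.ca/
# the mounted files then appear under /run/user/$(id -u)/gvfs/
# and in the Files sidebar, just as with Connect to Server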

Example: Accessing EDOCS files with the "Files" Ubuntu file manager

The same procedure as above can be used to connect directly to the EDOCS directory
  • Open Files - Ubuntu's default file manager
  • Open File Menu -> Connect to Server
    • Server Address -> ssh://linux.cs.uwaterloo.ca/var/www/cs.uwaterloo.ca/cscf/internal/edocs
      • -> Connect Button
    • Enter your password for linux.cs.uwaterloo.ca ->
      • Username: Your linux.cs userid
      • Password: Your linux.cs password
        • -> Connect Button
  • Open Bookmarks Menu -> Bookmark This Location
    • Right-click on the bookmark that was just created in the left-hand pane of the Files file manager
      • Rename it to give it a useful name like EDOCS

SSH Keys

Using SSH keys will save lots of time typing passwords when accessing your remote files
  • You can log in to or access a remote account without typing a password by following these steps (the full sequence is also sketched below)
  1. mkdir -p ~/.ssh (and keep it private: chmod 700 ~/.ssh)
  2. If you do not have an SSH key pair yet, generate one with ssh-keygen, for example
    • ssh-keygen -t ed25519
  3. Other key types (rsa, ecdsa) can be generated the same way by changing the -t argument
  4. Copy the contents of the ~/.ssh/*.pub file and add it to the end of the file ~/.ssh/authorized_keys on the remote host (create this file if it does not exist)
  • Note:
    • ssh-keygen -b 2048 -t rsa also works; DSA keys (-t dsa) are deprecated and disabled by default in current OpenSSH releases
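A minimal sketch of the whole sequence, using ssh-copy-id (shipped with the standard OpenSSH client tools) to perform step 4 automatically; replace userid with your linux.cs userid:

mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519                      # accept the defaults; a passphrase is optional
ssh-copy-id userid@linux.cs.uwaterloo.ca   # appends your public key to the remote authorized_keys
ssh userid@linux.cs.uwaterloo.ca           # should now log in without a password prompt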

Command line via SSHFS

  • You can mount file systems from any machine you can SSH into using a package called sshfs

USAGE

  • sshfs user@remote-host:remote-directory local_mount_point
    • This mounts the remote-directory from the remote-host on a local_mount_point, which is just an empty directory that you must create first (see the sketch after this list)
  • Map UID and GID: -o uid=$MY_UID -o gid=$MY_GID
  • Allow root access to the mount: -o allow_root
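A minimal sketch of a manual mount and unmount; userid is a placeholder for your own account, and leaving the remote path empty after the colon mounts your remote home directory:

mkdir -p ~/remote                          # the empty local mount point
sshfs userid@linux.cs.uwaterloo.ca: ~/remote -o uid=$(id -u) -o gid=$(id -g)
ls ~/remote                                # remote files now appear here
fusermount -u ~/remote                     # unmount when finished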

FUSE group - see install steps

  • You must be a member of the fuse group to use SSHFS
    • Note: on all CSCF research and grad PCs, users are members of the fuse group automatically; you can check your own membership as shown below
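A quick way to check your group membership from a terminal:

id -nG | grep -qw fuse && echo "in fuse group" || echo "NOT in fuse group"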

How to install sshfs and put a user into the fuse group

  • On Fedora Core install with yum install fuse-sshfs
    • Adding to fuse group: /usr/sbin/usermod -a -G fuse username - or - when creating the user, check all of the privileges
  • On Ubuntu install with apt-get install sshfs
    • Adding to fuse group: /usr/sbin/adduser username fuse
  • On Mac OSX - see http://code.google.com/p/macfuse/
  • On Solaris - support was in the final testing stage at the time of writing. http://mail.opensolaris.org/pipermail/fuse-discuss/
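A minimal sketch of the Ubuntu steps above; replace username with the account to add:

sudo apt-get install sshfs
sudo adduser username fuse    # only needed if your system restricts /dev/fuse to the fuse group
# log out and back in for the new group membership to take effect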

File Locking

  • As you would expect, file locking behaves the same way as it would if you had simply logged in using the usual SSH client. In particular, if you mount a remote directory twice (under different directories on your local machine), you can confirm (for example, by attempting to edit a single file through the two different mounted directories) that file locking occurs. Presumably, two users can share access to a joint account as long as they belong to the same group (again, it behaves the same way as if the users had logged in using SSH).
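One hedged way to test this yourself is with flock(1), assuming the same remote directory is sshfs-mounted at both ~/mnt1 and ~/mnt2:

flock ~/mnt1/testfile -c 'sleep 30' &         # hold an advisory lock via the first mount
flock -n ~/mnt2/testfile -c 'echo got lock'   # if locking propagates as described, this fails while the lock is held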

Autofs and Sshfs

  • If you have SSH keys set up as outlined above, you can have the automount daemon (contained in the autofs Ubuntu package) perform the mount step above for you automatically.
  • Suppose that, in addition to local accounts, you want to give users access to their student accounts. For simplicity, create a directory in which home directories will be mounted via sshfs, say /sshfs/cs_student/home, and have automount manage it. That is, edit /etc/auto.master and add the line

/sshfs/cs_student/home   /etc/auto.sshfs

and then create /etc/auto.sshfs with the entry

userid -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#userid@student.cs\:

(it may be better if this were invoked as the user via su, since then SSH keys could be used), where userid would be substituted with the user's userid on the student region. NOTE: the escaped # is necessary since automount would otherwise treat it as the start of a comment, but FUSE needs it; similarly, the final : must be escaped, as it would otherwise be handled by automount.

NOTE: The choice of server should probably be based on some kind of generic load-balancing name, for example via round-robin DNS. Indeed, we should really have /etc/auto.master simply take the form

/sshfs/cs_student/home auto_home_sshfs

where auto_home_sshfs is an executable located in /etc that generates the necessary mount options (see below for an example script). Unlike NFS with automount, sshfs only requires the userid and not the exact remote home directory location on the remote server.

An initial example of the executable /etc/auto_home_sshfs is given by

#!/bin/bash
# Program map for automount: accepts one argument (the userid) and
# prints the sshfs mount options and location for that user.
SERVER=some-real-hostname
case $1 in
   .Trash*)
      # ignore bogus .Trash* lookups generated by nautilus (see the bug reports below)
      exit 1;;
   *)
      echo "-fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#$1@$SERVER\:"
esac
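To put the script into service (a sketch; paths as above):

chmod 755 /etc/auto_home_sshfs
sudo service autofs restart              # reload the automount maps
ls /sshfs/cs_student/home/userid         # first access triggers the sshfs mount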

The code for avoiding the .Trash* file glob is due to a strange bug connected with nautilus, which would otherwise result in the user being prompted for the password (by ssh-askpass, a GNOME-based GUI tool) for the .Trash account, a non-existent user. See https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/210468 for a bug report concerning the .Trash strangeness; the upstream version of the bug is http://bugzilla.gnome.org/show_bug.cgi?id=525779, and a related report is https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/210586. Another nuisance is that when one changes directory into an autofs-managed directory using sshfs, the GNOME program ssh-askpass keeps running even though an SSH key has been set up to avoid prompting, and removing the program (apt-get remove ssh-askpass-gnome) makes it impossible to mount the directory at all.

Problems

Hard links are not implemented

At the time of writing (Tue Mar 12, 2013), sshfs did not implement hard links. Despite what the previous author says above about file locking, certain types of file locking and associated mechanisms will fail. For instance, you cannot run a dovecot server against an sshfs-mounted copy of your mail files; it fails when it cannot create hard links.

Hard links have since been implemented in more recent versions of sshfs, so this section should be revised.

Inode numbers are somewhat fake

Related to the hard-link problem, sshfs creates its own inode numbers for files. If you sshfs-mount the same remote directory at two different mount points, you may notice that the two copies report different inode numbers for the same file. Furthermore, if you create a hard link on the server side, you will notice that the sshfs view reports a different inode number for each name. Unexpected results, including possible data loss, can occur because of this.

To wit:

cscf.cs% echo "I am file1" > /tmp/arpepper/file1
cscf.cs% ln /tmp/arpepper/file1 /tmp/arpepper/link2
cscf.cs% ls -i1 /tmp/arpepper/file1 /tmp/arpepper/link2
   8838991 /tmp/arpepper/file1
   8838991 /tmp/arpepper/link2
cscf.cs% 

arpepper@cscfpc20:~$ df -h /tmp/arpepper
Filesystem                                   Size  Used Avail Use% Mounted on
arpepper@cscf.cs.uwaterloo.ca:/tmp/arpepper 1000G     0 1000G   0% /tmp/arpepper
arpepper@cscfpc20:~$ ls -i1 /tmp/arpepper/file1 /tmp/arpepper/link2
67 /tmp/arpepper/file1
68 /tmp/arpepper/link2
arpepper@cscfpc20:~$ ln /tmp/arpepper/file1 /tmp/arpepper/link3
ln: failed to create hard link `/tmp/arpepper/link3' => `/tmp/arpepper/file1': Function not implemented
arpepper@cscfpc20:~$ 
