Meeting 2016-07-15 10:30 DC3126
- Attendance: ldpaniak, a2brenna, nfish
Discussion:
- Demo of NextCloud and Seafile web interfaces
- Deployment and details of Ceph on Ubuntu 16.04
NextCloud and Seafile demo delayed: no network access from wifi, proxy not functioning.
Ceph:
- Native Ceph on Ubuntu 16.04 uses XFS for backend storage by default
- Current Ceph cluster: 9x OSD on Dell 515 AMD hardware supporting Casper/Apple
- Does automount support CephFS? Yes:
Edit the file /etc/auto.misc and add the line below to the end of the file:
  ceph -fstype=ceph,name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime cephmon01.starp.bnl.gov:6789:/
Now edit the file /etc/auto.master and add the line below to the end of the file:
  /mnt /etc/auto.misc --timeout 60
Restart the autofs service:
  # service autofs restart
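Accessing the key under the mount point should then trigger the mount; a quick check (the path follows from the auto.misc key above):
  # ls /mnt/ceph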
- RBD makes effective use of read cache from OSD RAM
- Ceph has 16.04 repo, no PPA: http://docs.ceph.com/docs/master/install/get-packages/
- Probably stick with Ceph version shipping with 16.04
- New RBD disk format in the 16.04 version. The shipping kernel client does not support the new format; a 4.5+ kernel is required on the client. The new client supports cluster locking (see the RBD sketch below).
- Ceph supports access control for users at the pool level, not per RBD image (~targets); see the auth example below.
- CRUSH map supports failure domains to identify machines/rooms; see the rule example below.
- ceph -w for live monitoring of cluster status
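A sketch of creating a format-2 RBD image with the newer features and mapping it on a 4.5+ kernel client; pool and image names are made up:
  # rbd create rbd/test01 --size 10240 --image-format 2 --image-feature layering --image-feature exclusive-lock
  # rbd map rbd/test01
If the client kernel cannot map the image, unsupported features can be dropped, e.g. rbd feature disable rbd/test01 exclusive-lock.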
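Pool-level access control might look like this (client name and pool are examples):
  # ceph auth get-or-create client.containers mon 'allow r' osd 'allow rwx pool=containers' -o /etc/ceph/ceph.client.containers.keyring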
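In the CRUSH map the failure domain is the bucket type in the chooseleaf step; e.g. a replicated rule (name made up) that spreads replicas across rooms:
  rule replicated_rooms {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type room
          step emit
  }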
General Ceph support for containers:
Ceph pool -> RBD per container; LXC server /var/lib/lxc metadata for targets kept on a CephFS share for all possible container hosts
-> find the container's RBD and lock it with scripts -> map the target to a local block device -> mount the block device at /var/lib/lxc/container_name for container use (see the sketch below)
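A minimal sketch of the per-container flow, assuming a pool named lxc and an image named after the container (all names are placeholders):
  POOL=lxc; IMG=container_name
  # take an advisory lock so only one host maps the image
  rbd lock add ${POOL}/${IMG} $(hostname)
  # map the image; rbd map prints the local block device path
  DEV=$(rbd map ${POOL}/${IMG})
  # mount the block device where LXC expects the container
  mkdir -p /var/lib/lxc/${IMG} && mount ${DEV} /var/lib/lxc/${IMG}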
To do:
- a2brenna to polish scripts for container support and document in twiki.
- a2brenna to create 3x containers with storage on the Ceph cluster: 1x for NextCloud, 1x for Seafile, 1x for filesystem testing
- consider reconfiguring the OSD drive backend from monolithic RAID6 to individual drives for erasure-coding testing (see the sketch below).
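For the erasure-coding tests, a profile and pool could be created along these lines (k/m values, names, and pg counts are illustrative; option names follow the Jewel-era CLI):
  # ceph osd erasure-code-profile set testprofile k=4 m=2 ruleset-failure-domain=osd
  # ceph osd pool create ecpool 64 64 erasure testprofile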
--
LoriPaniak - 2016-07-16