Filesystem decision matrix: Ceph vs Gluster

Each configuration is scored on: performance, products, features, support for configuration, usable capacity, reliability (worst case), rebuild/resilver, project status, and monitoring/maintenance.
ceph + dm-crypt + XFS, 3-replication
   Performance: CephFS 1MB seq read (48 files): 3.6-1.8 GB/s; 1MB seq write (48 files): 1 GB/s
   Products: CephFS (kernel client), RBD, object
   Features: full data scrub
   Support for configuration: fully supported at creation
   Usable capacity: 198 TB
   Reliability (worst case): 2-drive failure before OSD migration completes = offline; simultaneous single-drive failures (before migration) on each DFS server = data loss; 3-drive failure = data loss; loss of a building plus a 1-drive failure = offline
   Rebuild/resilver: rebuild traffic runs over the internal network; each drive loss moves one drive's worth of data over the internal network for OSD migration, plus additional traffic on rebuild
   Project: CephFS maturing, features converging
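All three configurations layer dm-crypt beneath the local filesystem. A minimal sketch of that layering for the XFS case (the device path and mapping name are placeholders, and luksFormat destroys existing data on the device):

```shell
# Create a LUKS container on the raw drive (placeholder device path).
cryptsetup luksFormat /dev/sdb

# Open the container; plaintext blocks appear at /dev/mapper/osd0.
cryptsetup open /dev/sdb osd0

# Put XFS on the mapped device; the Ceph OSD then uses this filesystem.
mkfs.xfs /dev/mapper/osd0
```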
gluster + ZFS RAID-Z2 + dm-crypt, 2-replication + arbiter
   Performance: GlusterFS 1MB seq read (12 files): 2.2 GB/s; 1MB seq write (6 files): 1 GB/s
   Products: GlusterFS (FUSE), libgfapi (block)/iSCSI, NFS-Ganesha
   Features: full data scrub, compression, snapshots
   Support for configuration: no native ZFS encryption; needs a custom encryption layer beneath ZFS, which introduces races
   Usable capacity: 216 TB
   Reliability (worst case): 3-drive failure on a single DFS server before resilver completes = loss of that building; a minimum-size cluster requires a 4-drive failure for data loss
   Rebuild/resilver: pool resilver generates only local ZFS/PCIe traffic; a 3-drive failure loses the pool and forces migration of the entire pool over the service network
   Project: currently refactoring; substantial changes coming for small files (DHT2, Gluster 4)
   Monitor/maintenance: ZED + smartd; drive replacement is a single command
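The single-command drive replacement refers to ZFS pool management; a sketch of the workflow (the pool name and disk ids below are placeholders):

```shell
# ZED and smartd surface drive faults; confirm which vdev is degraded.
zpool status -x tank

# One command swaps in the new drive; the resilver then runs over the
# local PCIe bus rather than the cluster network.
zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK
```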
ceph + dm-crypt + ZFS, 2-replication
   Products: CephFS (kernel client), RBD, object
   Features: full data scrub
   Support for configuration: fully supported at creation
   Reliability (worst case): 2-drive failure before ZFS resilver completes
   Rebuild/resilver: ZFS resilver runs on the local PCIe bus
   Project: CephFS maturing, features converging. Note: Ceph does not support ZFS as a backing filesystem for non-block (e.g. CephFS) applications: https://github.com/zfsonlinux/zfs/issues/4913
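The usable-capacity figures above follow from replication and parity arithmetic. A minimal sketch; the raw capacities, drive counts, and drive sizes below are placeholders chosen only to illustrate the arithmetic (the actual hardware counts are not given in this matrix):

```python
def ceph_usable_tb(raw_tb, replicas):
    """Usable capacity of an n-way replicated Ceph pool (ignores fill headroom)."""
    return raw_tb / replicas

def raidz2_vdev_usable_tb(drives, drive_tb):
    """Approximate usable capacity of one RAID-Z2 vdev: two drives' worth of parity."""
    return (drives - 2) * drive_tb

def gluster_replica_usable_tb(per_server_tb, servers, replicas):
    """Usable capacity of a Gluster replicated volume; an arbiter brick holds
    metadata only, so it does not reduce capacity."""
    return per_server_tb * servers / replicas

# Placeholder figures: 594 TB raw at 3-way replication,
# and four servers each with a 20 x 6 TB RAID-Z2 pool at replica 2.
print(ceph_usable_tb(594, 3))                                         # 198.0
print(gluster_replica_usable_tb(raidz2_vdev_usable_tb(20, 6), 4, 2))  # 216.0
```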

-- LoriPaniak - 2016-11-01

Topic revision: r5 - 2016-11-09 - LoriPaniak
 