Questions tagged [ceph]

Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available.

179 questions
3
votes
1 answer

Ceph or Gluster for implementing big NAS

We are planning to build a NAS solution which will be primarily used via NFS and CIFS, with workloads ranging from various archival applications to more “real-time processing”. The NAS will not be used as block storage for virtual machines, so the…
prema
  • 33
  • 1
  • 3
3
votes
1 answer

CTDB Samba failover not highly available

TL;DR: Failing a node in a CTDB + Samba cluster while a client is interacting with a share interrupts the share connection. here and here state there is work in progress to make this possible; here states it is already possible with Samba 3.0 (currently using…
3
votes
2 answers

Should I/O- and CPU-intensive servers be separated in a Kubernetes cluster?

We are designing a new cluster architecture for our web service and are planning to use Ceph object storage and Kubernetes for our services. To optimize our servers we have different options: use identical servers and run Ceph and our services…
3
votes
1 answer

Slow fsync() with ceph (cephfs)

I have built an experimental Ceph cluster (12 nodes, 50 OSDs, 3 MONs, 3 MDSs) for which I'm trying to run a Samba gateway. It seems that when writing lots of small files, the fsync() system calls from Samba will routinely block, presumably at the…
kdm
  • 31
  • 3
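The fsync() stalls described in this question can be reproduced outside Samba with a small timing loop. This is only a rough sketch: the file count, block size, and temp directory are arbitrary, and on a local filesystem the numbers will be far lower than on CephFS:

```shell
# Write a batch of small files, fsync()ing each one (dd's conv=fsync),
# and report the total wall time. Point "dir" at a CephFS mount to
# compare against a local-disk baseline.
dir=$(mktemp -d)
n=50
start=$(date +%s%N)
i=0
while [ "$i" -lt "$n" ]; do
    dd if=/dev/zero of="$dir/f$i" bs=4k count=1 conv=fsync status=none
    i=$((i + 1))
done
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "wrote $n files with fsync in ${elapsed_ms} ms"
rm -rf "$dir"
```

If the per-file fsync cost on CephFS is orders of magnitude above the local baseline, the blocking is in the cluster's write path rather than in Samba itself.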
3
votes
1 answer

Will running Ceph (or similar systems) virtualized degrade its performance?

I am setting up a Ceph cluster. The client is asking for it to be done in virtual machines, one hypervisor / VM per server. Given my previous (minor) experience with virtual machines, I wonder if this will be a problem (hypervisors abstracting…
3
votes
1 answer

Shared and replicated filesystem with POSIX support

I'm looking for an open-source solution that supports my use case. I currently have 4 nodes on my cluster network and need the following: store a file system (huge file list); replication, so a file saved on one node is replicated to another one; sharding of my files into 2 parts.…
3
votes
1 answer

ceph - can't start osd on rebooted cluster host

I've rebooted the server (one of Ceph's hosts) and started the cluster, but the OSD on the rebooted host is down. The OSD's id is 2, so when I try sudo /etc/init.d/ceph start osd.2 it shows: Starting ceph (via systemctl):…
igoryonya
  • 195
  • 1
  • 3
  • 14
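For an OSD that stays down after a host reboot, a first diagnostic pass on a systemd-managed install usually looks something like the sketch below. The OSD id 2 comes from the question; on recent releases the per-OSD systemd unit replaces the old /etc/init.d/ceph wrapper:

```shell
# Guarded so this is a no-op on machines without the ceph CLI.
if command -v ceph >/dev/null 2>&1; then
    ceph osd tree                              # which OSDs does the cluster see as down?
    systemctl status ceph-osd@2 --no-pager     # unit state on the rebooted host
    journalctl -u ceph-osd@2 -n 50 --no-pager  # recent log lines (mount/auth errors)
    systemctl start ceph-osd@2                 # modern equivalent of 'start osd.2'
    checked="cluster"
else
    echo "ceph CLI not found; run these on the rebooted cluster host"
    checked="skipped"
fi
```

The journal output usually narrows it down: a data partition that did not mount on boot and an authentication/keyring failure look very different in the logs.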
3
votes
1 answer

ceph osd down and rgw Initialization timeout, failed to initialize after reboot

CentOS 7.2, Ceph with 3 OSDs and 1 MON running on the same node. radosgw and all the daemons run on the same node, and everything was working fine. After rebooting the server, the OSDs apparently could not communicate and radosgw does not work…
Tiina
  • 175
  • 2
  • 9
3
votes
1 answer

Make ceph minimize spread of file parts over OSDs

I am considering Ceph as the distributed filesystem for my home-made MAID (massive array of idle drives). As far as I understand, Ceph is oriented toward cluster use, spreads data evenly over OSDs (with respect to CRUSH maps), and tries to…
gordon-quad
  • 107
  • 1
  • 7
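There is no CRUSH knob that confines one file to a single OSD, but CephFS file layouts can reduce how many objects (and therefore placement groups and OSDs) a single file spans. A minimal sketch, assuming a CephFS mount with an archive directory; the 64 MiB object size and the path are illustrative values, not tested MAID settings:

```shell
# Guarded: only applies the layout if setfattr exists and the path is
# a directory (on a real setup, a directory inside a mounted CephFS).
target=/mnt/cephfs/archive
if command -v setfattr >/dev/null 2>&1 && [ -d "$target" ]; then
    setfattr -n ceph.dir.layout.object_size -v 67108864 "$target"  # 64 MiB objects
    setfattr -n ceph.dir.layout.stripe_count -v 1 "$target"        # no striping across objects
    layout="applied"
else
    echo "needs the attr package and a mounted CephFS at $target"
    layout="skipped"
fi
```

Even with large objects, replicas are still placed by the CRUSH rules, so this reduces how many OSDs one file touches but cannot pin a file to a single idle drive.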
3
votes
1 answer

Ceph OSD always 'down' in Ubuntu 14.04.1

I am trying to install and deploy a ceph cluster. As I don't have enough physical servers, I created 4 VMs on my OpenStack using the official Ubuntu 14.04 image. I want to deploy a cluster with 1 monitor node and 3 OSD nodes with Ceph version…
user1802604
  • 131
  • 1
  • 3
3
votes
1 answer

Pooled storage with varying redundancy per file system

I have some files that I want stored mirrored, some files for which I only need single copies (e.g. scratch data, easily regenerated data), and some files that are so critical I want them mirrored in triplicate so I can handle a 2-disk…
David Pfeffer
  • 214
  • 1
  • 11
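In Ceph this kind of per-file-class redundancy is typically modeled as one pool per replication level, with CephFS directory layouts pinning directories to pools. A sketch under those assumptions; the pool names, PG counts, filesystem name, and mount paths are made up:

```shell
# Guarded: the pool/layout commands only run on a real cluster admin node.
if command -v ceph >/dev/null 2>&1; then
    ceph osd pool create scratch_data 64
    ceph osd pool set scratch_data size 1   # single copy (newer releases ask for --yes-i-really-mean-it)
    ceph osd pool create mirrored_data 64
    ceph osd pool set mirrored_data size 2  # two copies
    ceph osd pool create critical_data 64
    ceph osd pool set critical_data size 3  # three copies: survives a 2-disk failure
    ceph fs add_data_pool cephfs scratch_data
    ceph fs add_data_pool cephfs critical_data
    # Pin directories to pools (requires the attr package):
    setfattr -n ceph.dir.layout.pool -v scratch_data  /mnt/cephfs/scratch
    setfattr -n ceph.dir.layout.pool -v critical_data /mnt/cephfs/critical
    pools="configured"
else
    echo "ceph CLI not found; run on an admin node"
    pools="skipped"
fi
```

Files inherit the layout of the directory they are created in, so after this, new files under /mnt/cephfs/critical get three replicas while scratch files stay single-copy.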
3
votes
1 answer

Proxmox on Ceph performance & stability issues / Configuration doubts

We have just installed a cluster of 6 Proxmox servers, using 3 nodes as Ceph storage and 3 nodes as compute nodes. We are experiencing strange and critical performance and stability issues with our cluster. VMs and Proxmox web access tend…
Danyright
  • 203
  • 1
  • 7
3
votes
0 answers

Openstack Nova and Ceph Volume Attachment Issue

I am trying out the volume attachment function in OpenStack (version: Wallaby), attaching a volume to the server as an additional device, but it failed. The volume backend is Ceph, and all of the OSDs are up and healthy. ceph-osd/38* active idle 0 …
ony4869
  • 33
  • 3
3
votes
1 answer

Ceph e5 handle_auth_request failed to assign global_id after a host outage

I have a small 3-host Ceph cluster with Ubuntu 20.04.1 and Ceph 15.2.5 using docker containers and deployed with cephadm. Yesterday one of the hosts (s65-ceph) had a power outage. The other two hosts continued working for a while but then s63-ceph…
Paolo Celati
  • 71
  • 1
  • 8
3
votes
2 answers

How to best use large NVMe array for VMWare Datastore

I recently acquired a Supermicro SYS-1029P-N32R with 16 14TB NVMe SSDs (Micron 9300 Pro) at work. We have 100 gigabit networking from the box to our VMware hosts. We've tried using FreeNAS to host ZFS zvols as iSCSI LUNs, but were unimpressed with…