Questions tagged [ceph]

Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available.

179 questions
0
votes
1 answer

Specifying multiple clusters with ceph-ansible

I am working through a Ceph course right now (the Ceph learning path from Packt). The course is OK, but has a lot of errors and isn't always accurate. The course expects that you use ceph-ansible to do a lot of the work. You start by…
Matthew
  • 2,737
  • 8
  • 35
  • 51
0
votes
1 answer

Is more memory than required beneficial for Ceph BlueStore OSDs?

I have a cluster of servers, each of them having 128GB of RAM and 6 x 2TB spinning disks dedicated to BlueStore OSDs. The servers also act as KVM hosts, so they are not dedicated to Ceph. In the past when using FileStore we noticed that if a…
Jacket
  • 131
  • 10
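Rough arithmetic for the shared KVM/OSD host above can be sketched in a few lines. This assumes BlueStore's default `osd_memory_target` of 4 GiB per OSD (a tunable, so the real budget depends on the cluster's configuration); the function name is illustrative, not a Ceph API:

```python
# Sketch: per-host memory budget for BlueStore OSDs sharing a
# hypervisor, assuming the default osd_memory_target of 4 GiB/OSD.
OSD_MEMORY_TARGET_GIB = 4  # BlueStore default target per OSD (tunable)

def ram_left_for_guests_gib(num_osds: int, host_ram_gib: int) -> int:
    """RAM (GiB) left for other workloads (e.g. KVM guests) after
    reserving osd_memory_target for each OSD on the host."""
    return host_ram_gib - num_osds * OSD_MEMORY_TARGET_GIB

# 6 OSDs on a 128 GiB host: 128 - 6*4 = 104 GiB left for guests.
print(ram_left_for_guests_gib(6, 128))  # -> 104
```

Extra RAM beyond the target is not wasted either: BlueStore caches within `osd_memory_target`, and anything left over ends up in the page cache or is available to guests.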
0
votes
1 answer

Ceph - Erasure Coded Pools - Always get inactive PGs

I'm trying to achieve something similar to RAID6 on Ceph. But when I create erasure coded pools (k=3 + m=2 (or k=4)) I always get inactive PGs. ceph health detail shows: HEALTH_WARN Reduced data availability: 128 pgs inactive PG_AVAILABILITY Reduced…
Lisek
  • 309
  • 2
  • 7
  • 15
0
votes
0 answers

How do I use a non-standard ceph system user?

I configured a ceph user on my cluster named "cepher." I ran ceph-deploy as this user to deploy some servers. Then I see this: [block][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring…
mr.zog
  • 923
  • 3
  • 20
  • 39
0
votes
1 answer

Ceph erasure profile with k+m > 6 disks -> PGs stuck in creating+incomplete state forever

My pool has 5 nodes with 12 OSDs of 8TB each. Currently I am trying to create an erasure coded pool with k=8 m=2; however, after adding this profile and creating an ecpool with it, the pool is always stuck on creating+incomplete. If I…
Vish
  • 176
  • 5
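The usual cause of PGs stuck in creating+incomplete with a large k+m is that CRUSH's default failure domain is the host, so every one of the k+m shards must land on a distinct host. A minimal sketch of that constraint (the function is illustrative, not part of Ceph):

```python
# Sketch: with crush-failure-domain=host (the default for erasure
# code profiles), an EC pool needs at least k + m hosts so each
# shard can be placed on a distinct host; otherwise PGs cannot
# activate and sit in creating+incomplete.
def ec_pool_placeable(num_hosts: int, k: int, m: int) -> bool:
    """True if each of the k+m shards can land on a distinct host."""
    return num_hosts >= k + m

print(ec_pool_placeable(5, 8, 2))  # 5 hosts < 10 shards -> False
print(ec_pool_placeable(5, 3, 2))  # 5 hosts >= 5 shards -> True
```

With only 5 nodes, k=8 m=2 cannot satisfy the host failure domain; lowering k+m to at most 5, or setting `crush-failure-domain=osd` in the erasure code profile (at the cost of host-level fault tolerance), are the typical ways out.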
0
votes
1 answer

Mount CephFS or RBD over the Internet

Is there any suitable solution for this? I know about the security issues, but is it possible? Or are there better storage solutions? We tested s3fs+cephrgw, but it's very slow and unstable.
沈东立
  • 1
  • 1
0
votes
1 answer

How to upgrade librbd1 and librados2 for oVirt 4.2.x (node ng)

oVirt 4.2 comes with librbd1 and librados2 from the Ceph Hammer release, which is 0.94.5. I need to update both libraries to the Luminous version, which is 12.x, because my Ceph server is not able to talk to clients with the old 0.94.5 version. How to…
itsafire
  • 468
  • 3
  • 15
0
votes
1 answer

Why does an oVirt VM using a Ceph disk stay in "waiting for launch" status?

My setup comprises Ceph Mimic (CentOS 7, set up with ceph-ansible), a cinder/keystone combo on the Pike release, and oVirt 4.2.5.1. The external Cinder provider is set up and I can create disks. When creating a VM and starting it, the VM shows up in the…
itsafire
  • 468
  • 3
  • 15
0
votes
2 answers

Install GRUB on a USB drive and boot Proxmox from another drive

Good morning my friends! I'm trying a solution before buying some new hardware. So this is my situation: I have some beautiful HP DL360p Gen8 servers that come with a P420i hardware RAID controller. What I'm trying to build is a Proxmox+Ceph cluster for my…
0
votes
1 answer

Does Swift or Ceph have "Vault Lock"-like capabilities?

AWS Glacier offers Vault Lock, which enables compliance policies like “write once read many” (WORM). Google Cloud Platform Storage does not. Does OpenStack Swift or Ceph offer any similar compliance features out of the box?
SeanFromIT
  • 212
  • 1
  • 5
0
votes
1 answer

Ceph OSDs and journal drives

I have a separate drive for each of my ceph OSD servers. Each OSD host has 4 data drives. Does one journal drive serve the 4? Is the journal drive shared? Should there be a partition for each data drive?
0
votes
1 answer

OpenStack Redundant Ceph/Cinder Storage Architecture

Hello Serverfault community, I'm currently designing an OpenStack cluster. The part where I'm currently stuck is the Storage Architecture. I thought of building two redundant Ceph clusters in different racks with a different fuse and UPS. So far so…
0
votes
1 answer

Ceph installs without init scripts. How do I get them?

I have installed Ceph on 3 CentOS 7 nodes with the ceph-deploy tool. All works fine, but I don't have any scripts to manage ceph or radosgw. My /etc/init.d/ folder contains only these: functions, network, netconsole, rbdmap. Nothing else. So I can not run…
Oleksandr
  • 733
  • 2
  • 10
  • 17
0
votes
1 answer

Ceph: HEALTH_WARN clock skew detected

I have configured NTP on the Ceph nodes and time is synchronized! But ceph status always shows clock skew. ceph health detail shows: mon.node2 addr 192.168.56.102:6789/0 clock skew 7192.45s > max 0.05s (latency 0.0129368s) mon.node3 addr…
Oleksandr
  • 733
  • 2
  • 10
  • 17
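One detail worth noticing in that health output: the reported skew is almost exactly two hours, which usually points at a timezone or hardware-clock (UTC vs. local time) mismatch on the monitor nodes rather than NTP drift. The arithmetic, as a quick sketch:

```python
# The mon reports a skew of 7192.45 s against a 0.05 s tolerance.
# 7192.45 / 3600 is ~2.0 hours -- a whole-hour offset is a classic
# sign of a timezone / hwclock misconfiguration, not of NTP failing.
skew_seconds = 7192.45
hours = skew_seconds / 3600
print(round(hours, 2))  # -> 2.0
```

Checking `timedatectl` (or comparing `date` and `date -u`) on each monitor node is a reasonable first step before tuning NTP itself.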
0
votes
1 answer

OpenShift Origin and Ceph persistent volume

I have installed OpenShift Origin from the latest ansible install. (CentOS 7 - 3 masters and 7 nodes) [root@master-1 ~]# openshift version openshift v1.1.0.1-1-g2c6ff4b kubernetes v1.1.0-origin-1107-g4c8e6f4 etcd 2.1.2 I am trying to create Ceph…
calvix
  • 51
  • 6