Questions tagged [ceph]

Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available.

179 questions
3 votes · 0 answers

Ceph Rados GW Proxy

I've been given an access key to a Ceph cluster that runs radosgw to provide S3. The key allows bucket creation, object reading, etc. I don't administer the Ceph cluster or radosgw. I have a group of users who want to use the Ceph object store, but need data…
Chris · 131 · 1
2 votes · 2 answers

MountVolume.MountDevice failed operation with the given Volume ID already exists

Environment: Kubernetes cluster with 1 master and 3 nodes, Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-66-generic x86_64), VMware VMs (screenshot from dashboard). A Pod (simple nginx image) cannot be mounted to a specified Volume in the Kubernetes cluster with…
Alec · 23 · 1 · 1 · 5
2 votes · 0 answers

Unable to resize Ceph RBD PVC in Kubernetes

So I have a 4-node (VM) Kubernetes cluster spun up with Kubespray. I have a Ceph cluster set up from Proxmox, and a pool is available to k8s. I can make deployments using Ceph just fine, but when I want to resize the container, I run into a long…
cclloyd · 593 · 2 · 14 · 29
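The usual expansion flow, sketched below with placeholder names (StorageClass "rbd", PVC "data-pvc", target size 20Gi), is to enable expansion on the StorageClass and then patch the claim itself; whether the filesystem grows online or only after a pod restart depends on the RBD provisioner/CSI driver version:

    # Allow expansion on the StorageClass (assumed to be named "rbd")
    kubectl patch storageclass rbd -p '{"allowVolumeExpansion": true}'
    # Request the new size on the claim (hypothetical PVC name "data-pvc")
    kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
    # Check events to see whether the resize finished or is waiting for a pod restart
    kubectl describe pvc data-pvc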
2 votes · 1 answer

Proxmox Ceph OSD Partition Created With Only 10GB

How do you define the Ceph OSD disk partition size? It is always created with only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB, using ceph-disk prepare and ceph-disk activate (see below). The OSD is created, but only with 10 GB, not 3.7…
rwcommand · 163 · 1 · 7
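One commonly reported cause of BlueStore OSDs coming up at ~10 GB is that the "block" device was created as a plain file instead of a raw partition, in which case its size falls back to the bluestore_block_size default of 10 GiB. A quick check, sketched with a hypothetical OSD id 0:

    # Is "block" a symlink to a real partition, or a regular 10 GiB file?
    ls -l /var/lib/ceph/osd/ceph-*/block
    # Default file-backed block size (10737418240 bytes = 10 GiB); run on the OSD's host
    ceph daemon osd.0 config get bluestore_block_size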
2 votes · 0 answers

ceph shows wrong USED space in a single replicated pool

We are using ceph version 14.2.0. We have 4 hosts with 24 BlueStore OSDs, each is 1.8TB (2TB spinning disk). We have only a single pool with size 2 and I am absolutely sure that we are using more space than what ceph df shows: [root@blackmirror ~]#…
Jacket · 131 · 10
2 votes · 1 answer

Ceph status health ok, but has flag "nearfull"

Recently Ceph reported a WARN status because 3 disks were 85-87% full. I expanded the cluster by adding a server to the storage, but now I see the "nearfull" flag, which wasn't shown before. health HEALTH_OK monmap e6: 3 mons at…
akashavkin · 301 · 1 · 2 · 8
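On Luminous and later releases the nearfull/backfillfull/full thresholds are cluster-wide OSD map settings, so the flag can be inspected and, if the thresholds are simply too tight, adjusted roughly as below; treat the 0.90 value as an example, not a recommendation:

    ceph health detail                   # lists which OSDs are close to full
    ceph osd dump | grep ratio           # current full / backfillfull / nearfull ratios
    ceph osd set-nearfull-ratio 0.90     # raise the warning threshold
    ceph osd reweight-by-utilization     # or move data off the fullest OSDs instead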
2 votes · 1 answer

On what network do the clients connect to the CEPH cluster (public/private)?

It's recommended to have a public network and a cluster network when setting up Ceph. From what I understand, the cluster network is what the nodes use to replicate data across, so that would preferably be a 10 gigabit network. However, I read that only…
Maarten Ureel · 239 · 2 · 5 · 12
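Clients (and the MONs they contact) always use the public network; the cluster network carries only OSD-to-OSD replication, recovery and heartbeat traffic. A minimal ceph.conf sketch, with example subnets:

    [global]
        public network  = 192.168.1.0/24    # client, MON and MDS traffic
        cluster network = 10.10.10.0/24     # OSD replication and recovery only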
2 votes · 1 answer

Which distributed filesystem will actually fit my needs?

Hear me out: I have seen this question asked (in different forms) here, here, and perhaps the best one I found was here, but I do not think this is a duplicate because quite some time has passed since those questions were asked, and my question has its…
2 votes · 1 answer

Virtual Hosting Cluster File System Confusion

My title probably doesn't encompass the full scope of what I need, so I'll lay out what I want to accomplish. I have two Linux servers with large drive arrays, multiple CPUs, and a large amount of RAM. I have what will be the primary file…
Brent · 107 · 1 · 2 · 8
2 votes · 0 answers

What is the correct usage of ceph-objectstore-tool?

I am trying to export data for a PG, but am getting errors from ceph-objectstore-tool. I see some usage examples online which seem to match this, and reading through the usage statement that the tool generates does not at all clarify what is wrong…
blitzen9872 · 121 · 3
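For reference, a PG export generally looks like the sketch below (OSD id 7 and PG 2.1f are placeholders, and a BlueStore OSD is assumed; FileStore OSDs may also need --journal-path). The OSD has to be stopped first because the tool needs exclusive access to the object store:

    systemctl stop ceph-osd@7
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --pgid 2.1f --op export --file /tmp/pg-2.1f.export
    systemctl start ceph-osd@7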
2 votes · 0 answers

Using NFS Mounts as Long Term Container Storage

This is more a question of best practice than anything. I currently have a clustered Proxmox deployment of three servers, all accessing a Ceph cluster (self-hosted on the same servers). The Ceph cluster has two main pools, instances…
MineSQL · 21 · 2
2 votes · 2 answers

Is Ceph too slow and how to optimize it?

The setup is 3 clustered Proxmox nodes for compute and 3 clustered Ceph storage nodes: ceph01 with 8×150GB SSDs (1 for OS, 7 for storage), ceph02 with 8×150GB SSDs (1 for OS, 7 for storage), ceph03 with 8×250GB SSDs (1 for OS, 7 for storage). When I…
fcukinyahoo · 145 · 1 · 2 · 6
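To separate Ceph-level performance from the VM/virtio layer, a baseline taken directly on a storage node with rados bench is useful; "testpool" below is a placeholder pool created just for benchmarking:

    rados bench -p testpool 30 write --no-cleanup   # 30-second write test, keep the objects
    rados bench -p testpool 30 seq                  # sequential read of those objects
    rados bench -p testpool 30 rand                 # random read
    rados -p testpool cleanup                       # remove the benchmark objects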
2 votes · 1 answer

How do I mount one of multiple filesystems in a ceph cluster?

Ceph now includes (experimental) support for multiple filesystems within a single storage cluster, but the mount options don't seem to allow specifying which filesystem to mount. I have configured two testing filesystems, each with its own mds and…
rvalue · 121 · 1 · 6
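With multiple filesystems in one cluster, the filesystem is selected client-side; a sketch with hypothetical filesystem names fs_a/fs_b and monitor mon1:

    # Kernel client: select the filesystem with mds_namespace (newer kernels also accept fs=<name>)
    mount -t ceph mon1:6789:/ /mnt/fs_a \
        -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=fs_a
    # FUSE client equivalent
    ceph-fuse /mnt/fs_b --client_mds_namespace=fs_b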
2 votes · 1 answer

Using available space in a Ceph pool

Here's what my Ceph situation looks like (from ceph df): GLOBAL: SIZE 596G, AVAIL 593G, RAW USED 3633M, %RAW USED 0.59; POOLS: NAME ID USED %USED MAX AVAIL OBJECTS …
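As a rule of thumb, a pool's MAX AVAIL is roughly the remaining raw capacity divided by the pool's replica count (or scaled by the erasure-coding overhead), further limited by the fullest OSD, so it is normally much smaller than the GLOBAL AVAIL figure. The inputs can be checked with:

    ceph osd pool get <pool> size    # replica count the pool divides raw space by
    ceph osd df                      # per-OSD utilisation; the fullest OSD caps MAX AVAIL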
2 votes · 1 answer

Use Ceph as redundant storage for failover FTP service?

I need to set up a redundant FTP storage service that can survive one server crash. I have two servers, SrvA and SrvB, that share a virtual IP, IpVirt (an IP that will usually point to SrvA but will point to SrvB if SrvA were to have a…
CDuv · 242 · 1 · 3 · 12