Over Thanksgiving break, I acquired three new-to-me Dell R720 servers with the intention of creating a high-availability Proxmox cluster using CEPH as shared storage. Like many starting this journey, I found thousands of blog posts and YouTube videos on how to set this up. However, when it came time to connect VMs running on Proxmox, such as Kubernetes nodes, to the Ceph cluster, I was unable to find a working solution. The tenacious engineer in me saw this as a challenge, and after A LOT of Googling and late nights, I was finally able to succeed. Below is a proof of concept for how to configure MicroK8s with network access to CEPH via FRR + OSPF.
Goals
- Provide a way for Proxmox VMs to access CEPH
- Setup FRR on Proxmox
- Setup FRR on VM
- Kubernetes ceph-csi driver
- Migrate PVCs to CEPH
Background
Dell R720 specs
- CPU: 40 x Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz (2 Sockets)
- Memory: 220GB of DDR3
- HDD: 3x 3TB Seagate HGST Ultrastar
- SSD: Intel 300GB
How to build a Proxmox cluster?
- How to Setup a Cluster in Proxmox with Multiple Nodes
- Build a Hyper-converged Proxmox HA Cluster with Ceph
- 3 Node Hyperconverged Proxmox cluster: Failure testing, Ceph performance, 10Gb mesh network
What is Ceph?
Ceph is an open source software-defined storage solution designed to address the block, file and object storage needs of modern enterprises. Its highly scalable architecture sees it being adopted as the new norm for high-growth block storage, object stores, and data lakes.
What is ceph-csi?
Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and Ceph clusters. They enable dynamically provisioning Ceph volumes and attaching them to workloads.
What is FRR?
FRRouting (FRR) is a free and open source Internet routing protocol suite for Linux and Unix platforms. It implements BGP, OSPF, RIP, IS-IS, PIM, LDP, BFD, Babel, PBR, OpenFabric and VRRP, with alpha support for EIGRP and NHRP. FRR’s seamless integration with native Linux/Unix IP networking stacks makes it a general purpose routing stack applicable to a wide variety of use cases including connecting hosts/VMs/containers to the network, advertising network services, LAN switching and routing, Internet access routers, and Internet peering.
Proxmox interfaces
At a high level, each server has four physical interfaces: two 10Gb SFP+ interfaces (eno1, eno2) and two Gigabit Ethernet interfaces (eno3, eno4). Next, I created a Linux bond interface using eno3 and eno4 with LACP, which is reflected on my Unifi switch. That Linux bond is used by vmbr1 to serve VM traffic and the Proxmox management interface. Interfaces eno1 and eno2 are utilized by FRR to create mesh networks between all the Proxmox nodes for CEPH connectivity. FRR is configured to bind to the loopback aliases: lo:0 - fc00::1/128 for the Ceph cluster network and lo:1 - fd00::1/128 for the Ceph public network. Lastly, I created a Linux bridge (vmbr30), which acts as a bridge between the FRR mesh networks and the Proxmox VMs.
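If you want to sanity check this layout on a node once the interfaces template shown later is applied, a few standard commands will do it. The interface names below are from my R720s and may differ on your hardware:
# Show every interface, its state, and its addresses in brief form
ip -br addr
# The loopback aliases should carry the Ceph cluster/public addresses
ip -6 addr show dev lo
# Confirm the LACP bond actually negotiated 802.3ad with the switch
cat /proc/net/bonding/bond0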
Network diagram
First, let’s start with the blue lines, which are the physical SFP cables (eno1, eno2) running between each server. FRR maps all physical routes in such a way that if a connection dies, traffic can still be routed through another machine to the destination, making the mesh fault tolerant. For example, let’s say proxmox01 is attempting to send data to proxmox02 but the direct link between the two nodes is dead. In this scenario, FRR has mapped out that proxmox01 can send the data to proxmox03, which will then forward the data to proxmox02.
Next, the virtual mesh networks public (fd00::/64) and cluster (fc00::/64) are created by FRR for CEPH communication. The public network allows clients such as our Kubernetes VMs to access the CEPH storage, more specifically via the CEPH monitors hosted on each Proxmox node. The CEPH cluster network is used by CEPH to sync and replicate data between OSDs (storage devices) on each Proxmox node. As you may have noticed, this diagram does not contain a switch/router because FRR is our virtual router. Lastly, as mentioned above, the vmbr30 interface acts as a bridge between FRR and VMs.
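Once FRR is up (setup below), you can ask it directly what it has learned about the mesh. These are standard FRR vtysh show commands; fc00::2 assumes proxmox02's cluster loopback from the templates below:
# Routes OSPF has computed, including any detour via another node
vtysh -c "show ipv6 route ospf6"
# Trace the actual path a packet takes to another node's loopback (traceroute must be installed)
traceroute -6 fc00::2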
Install/Setup FRRouting on Proxmox
Setup interfaces
- sudo su
- Copy the template below to /etc/network/interfaces
- Set CEPH cluster network:
sed -i 's#{{ ceph_cluster_net }}#fc00#g' /etc/network/interfaces
- Set CEPH public network:
sed -i 's#{{ ceph_public_net }}#fd00#g' /etc/network/interfaces
- Set each node to a unique integer for its IP addresses:
sed -i 's#{{ network_entity_titles[inventory_hostname]["id"] }}#<x>#g' /etc/network/interfaces
- Set Proxmox management IP address:
sed -i 's#{{ ansible_host }}#<ip_addr>#g' /etc/network/interfaces
- Set gateway:
sed -i 's#{{ ansible_default_ipv4["gateway"] }}#<gateway>#g' /etc/network/interfaces
- Restart networking:
systemctl restart networking
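Before moving on, it is worth confirming the template actually took. On my nodes the checks looked roughly like this:
# The Ceph cluster (fc00::<x>) and public (fd00::<x>) loopback aliases should be present
ip -6 addr show dev lo | grep -E 'fc00|fd00'
# eno1/eno2 and vmbr30 should report the 9000 MTU from the template
ip link show eno1 | grep mtu
ip link show vmbr30 | grep mtu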
Setup FRR
- sudo su
- Enable IPv6 forwarding:
sysctl -w net.ipv6.conf.all.forwarding=1
- Install FRR:
apt install frr -y
- Enable OSPF routing:
sed -i 's#ospf6d=no#ospf6d=yes#g' /etc/frr/daemons
- Restart FRR:
systemctl restart frr
- Copy the template below to /etc/frr/frr.conf
- Set hostname:
sed -i 's#{{ inventory_hostname }}#<hostname>#g' /etc/frr/frr.conf
- Set IPv6 prefix for your CEPH cluster network:
sed -i 's#{{ ceph_cluster_net }}#fc00#g' /etc/frr/frr.conf
- Set IPv6 prefix for your CEPH public network:
sed -i 's#{{ ceph_public_net }}#fd00#g' /etc/frr/frr.conf
- Set each node to a unique integer:
sed -i 's#{{ network_entity_titles[inventory_hostname]["id"] }}#<x>#g' /etc/frr/frr.conf
- Restart FRR:
systemctl restart frr
- Wait about 30 seconds for all the nodes to sync
- ping each node
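For example, from proxmox01 with node IDs 1-3, I'd expect pings like the following to succeed once OSPF has converged (the addresses come from the loopback aliases in the interfaces template):
# Reach the other nodes over both the cluster (fc00) and public (fd00) networks
ping -c 3 fc00::2
ping -c 3 fd00::2
ping -c 3 fc00::3
ping -c 3 fd00::3
# All neighbors should show up as Full
vtysh -c "show ipv6 ospf6 neighbor"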
Template: frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
frr version 10.2.2
frr defaults traditional
hostname {{ inventory_hostname }}
log syslog warnings
ipv6 forwarding
service integrated-vtysh-config
!
ipv6 prefix-list CEPH_NETWORKS permit {{ ceph_cluster_net }}::/64
ipv6 prefix-list CEPH_NETWORKS permit {{ ceph_public_net }}::/64
ipv6 prefix-list CEPH_NETWORKS deny any
!
route-map CEPH_NETWORKS permit 10
 match ipv6 address prefix-list CEPH_NETWORKS
!
router ospf6
 ospf6 router-id 0.0.0.{{ network_entity_titles[inventory_hostname]["id"] }}
 #area 0.0.0.1 range {{ ceph_cluster_net }}::/64
 #area 0.0.0.2 range {{ ceph_public_net }}::/64
 redistribute connected route-map CEPH_NETWORKS
 log-adjacency-changes
exit
!
interface lo
 ipv6 ospf6 area 0.0.0.1
exit
!
interface vmbr30
 ipv6 ospf6 area 0.0.0.2
 ipv6 ospf6 hello-interval 10
 ipv6 ospf6 dead-interval 40
exit
!
interface eno1
 ipv6 ospf6 area 0
 ipv6 ospf6 network point-to-point
exit
!
interface eno2
 ipv6 ospf6 area 0
 ipv6 ospf6 network point-to-point
exit
!
Template: interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto lo:0
iface lo:0 inet6 static
    address {{ ceph_cluster_net }}::{{ network_entity_titles[inventory_hostname]["id"] }}
    netmask 128
    mtu 9000
# Ceph cluster network

auto lo:1
iface lo:1 inet6 static
    address {{ ceph_public_net }}::{{ network_entity_titles[inventory_hostname]["id"] }}
    netmask 128
    mtu 9000
# Ceph public network

auto eno1
iface eno1 inet static
    mtu 9000
# Ceph 10Gb

auto eno2
iface eno2 inet static
    mtu 9000
# Ceph 10Gb

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet static
    address {{ ansible_host }}/24
    gateway {{ ansible_default_ipv4["gateway"] }}
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr30
iface vmbr30 inet6 manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    mtu 9000
    post-up sleep 5 && /usr/bin/systemctl restart frr.service
# Ceph public network

source /etc/network/interfaces.d/*
Install/Setup FRRouting on Microk8s
Add interface
- Log into Proxmox
- Select VM > Hardware
- Add > Network Device
- Select vmbr30 for the bridge
- Enter 9000 for MTU
- Select "Add"
Setup netplan
- Copy the template below to /etc/netplan/78-k8s-frr-ceph-pub.yaml
- Apply the config:
netplan apply
Template: 78-k8s-frr-ceph-pub.yaml
The important thing to note with this config is that we are NOT setting a static IP and we are disabling DHCP. We need the interface to exist, but we want FRR to control the IP addressing for it. Lastly, we set the MTU to 9000.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens19:
      dhcp4: false
      dhcp6: false
      optional: true # Mark as optional, so it doesn't cause issues if not available on boot
      mtu: 9000
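After netplan apply, the interface should be up with jumbo frames and only a link-local (fe80::) address, since FRR, not DHCP, will own the routable addressing. ens19 is the name the new NIC received in my VM; yours may differ:
ip link show ens19
ip -6 addr show dev ens19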
UFW/Firewall
We need to allow OSPF communication through the firewall. UFW doesn’t support allowing OSPF with its native commands, so instead we add a raw iptables rule to /etc/ufw/before6.rules. The rule must sit above the final COMMIT line or iptables-restore will never load it:
sed -i '/^COMMIT/i -A ufw6-before-input -p ospf -d ff02::/16 -j ACCEPT' /etc/ufw/before6.rules
ufw disable && ufw enable
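To confirm the rule actually made it into the live ruleset after re-enabling UFW, something like this should print it back out:
ip6tables -S ufw6-before-input | grep 'ff02::/16'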
Install/setup FRR
- sudo su
- Enable IPv6 forwarding:
sysctl -w net.ipv6.conf.all.forwarding=1
- Install FRR:
apt install frr -y
- Enable OSPF routing:
sed -i 's#ospf6d=no#ospf6d=yes#g' /etc/frr/daemons
- Restart FRR:
systemctl restart frr
- Copy the template below to /etc/frr/frr.conf
- Set hostname:
sed -i 's#{{ inventory_hostname }}#<hostname>#g' /etc/frr/frr.conf
- Set IPv6 prefix for your CEPH cluster network:
sed -i 's#{{ ceph_cluster_net }}#fc00#g' /etc/frr/frr.conf
- Set IPv6 prefix for your CEPH public network:
sed -i 's#{{ ceph_public_net }}#fd00#g' /etc/frr/frr.conf
- Set each node to a unique integer:
sed -i 's#{{ network_entity_titles[inventory_hostname]["id"] }}#<x>#g' /etc/frr/frr.conf
- Restart FRR:
systemctl restart frr
- Wait about 30 seconds for all the nodes to sync
- ping each Proxmox node
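As a concrete example, from my MicroK8s VM these checks confirmed the VM had joined the CEPH public network (the fd00:: addresses assume node IDs 1-3 on the Proxmox side):
# The VM's loopback should have picked up its own fd00:: address from frr.conf
ip -6 addr show dev lo | grep fd00
# Each Proxmox node's CEPH public address should answer
ping -c 3 fd00::1
ping -c 3 fd00::2
ping -c 3 fd00::3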
Template: frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
#
!
frr defaults traditional
hostname {{ inventory_hostname }}
log syslog informational
no ip forwarding
ipv6 forwarding
service integrated-vtysh-config
!
ipv6 prefix-list CEPH_NETWORKS permit {{ ceph_public_net }}::/64
ipv6 prefix-list CEPH_NETWORKS deny any
!
route-map CEPH_NETWORKS permit 10
 match ipv6 address prefix-list CEPH_NETWORKS
!
interface lo
 ipv6 address fd00::{{ network_entity_titles[inventory_hostname]["id"] }}/128
 ipv6 ospf6 area 0.0.0.2
exit
!
interface ens19
 ipv6 ospf6 area 0.0.0.2
 #ipv6 ospf6 network broadcast
exit
!
router ospf6
 network fd00::/64 area 0.0.0.2
 ospf6 router-id 5.5.5.{{ network_entity_titles[inventory_hostname]["id"] }}
exit
!
end
Create Ceph pool for Kubernetes
Create CephFS on Proxmox
- ssh into a Proxmox node
- Create CEPH pool for CephFS metadata:
ceph osd pool create kubernetes_metadata 16 16
- Create CEPH pool for CephFS data:
ceph osd pool create kubernetes_data 32 32
- Create CephFS using the following pools:
ceph fs new kubernetes kubernetes_metadata kubernetes_data
- Enable bulk data:
ceph osd pool set kubernetes_data bulk true
- Create a subvolumegroup in CephFS, which is a requirement of ceph-csi:
ceph fs subvolumegroup create kubernetes csi
Verify CephFS
- Verify creation:
ceph fs ls
- Verify the subvolumegroup:
ceph fs subvolumegroup ls kubernetes
Generate Ceph auth
- Create auth client with permissions:
ceph auth get-or-create client.kubernetes mon 'allow r' osd 'allow rwx pool=kubernetes_metadata, allow rwx pool=kubernetes_data' mds 'allow' mgr 'allow rw'
- Verify:
ceph auth get client.kubernetes
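As an extra sanity check, you can act as the new client from a Proxmox node (which already has /etc/ceph/ceph.conf) and make sure the caps are enough to talk to the cluster. This is just a sketch; the keyring path below is the default location the ceph CLI searches:
# Export the key to a keyring file and query the cluster as client.kubernetes
ceph auth get client.kubernetes -o /etc/ceph/ceph.client.kubernetes.keyring
ceph -n client.kubernetes -s
ceph -n client.kubernetes fs ls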
Deploy ceph-csi on Microk8s via Helm
Helm init
- Create namespace:
kubectl create ns ceph-csi-cephfs
- Add Helm repo:
helm repo add ceph-csi https://ceph.github.io/csi-charts
- Relax pod security profile:
kubectl label namespaces ceph-csi-cephfs pod-security.kubernetes.io/enforce=privileged --overwrite
- Create secret with the client.kubernetes key:
kubectl create secret generic csi-cephfs-secret -n ceph-csi-cephfs --from-literal=adminID=kubernetes --from-literal=adminKey=$(ssh root@<proxmox IP addr> -C "ceph auth get client.kubernetes -f json" | jq -r '.[0].key')
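It is worth double checking the secret before handing it to the chart; the decoded adminKey should match the output of ceph auth get client.kubernetes:
kubectl get secret csi-cephfs-secret -n ceph-csi-cephfs -o jsonpath='{.data.adminID}' | base64 -d; echo
kubectl get secret csi-cephfs-secret -n ceph-csi-cephfs -o jsonpath='{.data.adminKey}' | base64 -d; echo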
Configure Helm via values.yaml
The template below is a replica from my MicroK8s environment and is highly customized for MicroK8s. You WILL NEED to change this template for your k8s flavor. At the top of my config, I use the MicroK8s node labels to deploy the ceph-csi daemonset to worker nodes only. My captain nodes have scheduling disabled, so there is no need for CEPH volumes to be mounted on those nodes. Below are some things to note and/or change for your config:
- nodeSelector – only allow ceph-csi components to be deployed to workers, since none of my captain nodes need storage
- provisioner
  - enableHostNetwork – this is set to true so the ceph-csi pods can access the FRR network on the host
  - httpMetrics – changed the default port because another service on the host was already using it, a side effect of using the host network
- csiConfig
  - Replace {{ ceph_cluster_id }} with the CEPH cluster ID, which can be retrieved from Proxmox
  - The monitors section specifies the static IP addresses of the CEPH monitors
    - IPv6 addresses need to be wrapped in brackets ([fd00::1]:6789)
    - Legacy CEPH mon ports are specified in the config below
    - See Appendix: Looking up Monitors through DNS for more details
- secret
  - Specify the k8s secret that contains the CEPH auth token
- storageClass
  - create – instruct the Helm chart to create a k8s storage class for ceph-csi
  - name – name of the storage class
  - annotations.storageclass.kubernetes.io/is-default-class – I set ceph-csi as the default storage class for my k8s cluster
  - Replace {{ ceph_cluster_id }} with the CEPH cluster ID, which can be retrieved from Proxmox
  - reclaimPolicy – instruct the CSI driver to delete volumes when the PVC is deleted
  - allowVolumeExpansion – allow volumes to be expanded
- kubeletDir – see the reference below; MicroK8s stores the kubelet directory in a NON-standard location and ceph-csi needs access to make changes
  - For NON-MicroK8s clusters, update this value for your k8s flavor
  - Reference
---
nodeplugin:
  nodeSelector:
    node.kubernetes.io/microk8s-worker: microk8s-worker
provisioner:
  replicaCount: 2
  enableHostNetwork: true
  nodeSelector:
    node.kubernetes.io/microk8s-worker: microk8s-worker
  httpMetrics:
    containerPort: 8082
csiConfig:
  - clusterID: {{ ceph_cluster_id }}
    monitors:
      ###### IPv6 using CEPH mon v2 ######
      # - "[v2:[fd00::1]:3300/0,v1:[fd00::1]:6789/0]"
      # - "[v2:[fd00::2]:3300/0,v1:[fd00::2]:6789/0]"
      # - "[v2:[fd00::3]:3300/0,v1:[fd00::3]:6789/0]"
      ###### IPv6 using CEPH mon legacy ######
      - "[fd00::1]:6789"
      - "[fd00::2]:6789"
      - "[fd00::3]:6789"
    cephFS:
      subvolumeGroup: "csi"
secret:
  name: csi-cephfs-secret
  create: false
storageClass:
  create: true
  name: k8s-cephfs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  clusterID: {{ ceph_cluster_id }}
  fsName: kubernetes
  reclaimPolicy: Delete
  allowVolumeExpansion: true
# https://github.com/rook/rook/issues/12556
kubeletDir: /var/snap/microk8s/common/var/lib/kubelet
Deploy Helm
helm upgrade --install my-ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -n ceph-csi-cephfs -f values.yaml --version 3.13.0
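Before testing a PVC, I like to confirm the chart actually came up: a provisioner deployment plus one nodeplugin pod per worker, and the storage class from values.yaml:
kubectl get pods -n ceph-csi-cephfs -o wide
kubectl get sc k8s-cephfs-sc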
Verify pvc creation
If you receive an error such as “Volume ID already exists”, please see the appendix below for some troubleshooting steps.
kubectl apply -f test-pvc.yaml
kubectl get pvc -n ceph-csi-cephfs
kubectl describe pvc -n ceph-csi-cephfs test
kubectl logs -f -n ceph-csi-cephfs deployments/my-ceph-csi-cephfs-provisioner
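The test-pvc.yaml referenced above is nothing special. Here is a minimal sketch, assuming the k8s-cephfs-sc storage class and the PVC name test used in the commands above:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
  namespace: ceph-csi-cephfs
spec:
  storageClassName: k8s-cephfs-sc
  accessModes:
    - ReadWriteMany # CephFS supports RWX; ReadWriteOnce also works
  resources:
    requests:
      storage: 1Gi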
Migrating a pre-existing PVC to Ceph
For this section, I am going to migrate my Prometheus PVC from local-storage to Ceph using a one-off rsync pod. An alternate method is the pv-migrate krew plugin.
- Create a file named pvc.yaml with code like the below, but with attributes for your environment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-kube-prometheus-prometheus-0-clone
  namespace: monitoring
spec:
  storageClassName: k8s-cephfs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1000Gi
- Create the PVC to migrate the old PVC to:
kubectl apply -f pvc.yaml
- Create a file named clone.yaml with code like the below, but with attributes for your environment:
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-rsync
  namespace: monitoring
spec:
  containers:
    - name: rsync
      image: eeacms/rsync:2.6
      command: ["rsync", "-azP", "/mnt/src/.", "/mnt/dest"]
      volumeMounts:
        - mountPath: /mnt/src # Mount PVC-1 here
          name: src
        - mountPath: /mnt/dest # Mount PVC-2 here
          name: dest
  volumes:
    - name: src
      persistentVolumeClaim:
        claimName: prometheus-kube-prometheus-prometheus-0 # Attach PVC-1 to volume-1
    - name: dest
      persistentVolumeClaim:
        claimName: prometheus-kube-prometheus-prometheus-0-clone # Attach PVC-2 to volume-2
- Create pod that uses rsync to migrate the PVC:
kubectl apply -f clone.yaml
kubectl logs -f -n <namespace> pods/busybox-rsync
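Once the rsync output shows its final transfer summary, I clean up the helper pod and confirm the clone is Bound before repointing the workload at it:
kubectl delete -f clone.yaml
kubectl get pvc -n monitoring prometheus-kube-prometheus-prometheus-0-clone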
Ugly truth with FRR
The first screenshot is the transfer speed for migrating a live VM with 16GB of memory from one server to another over FRR. As you can see, FRR doesn’t come anywhere near the 10Gb/s mark we were hoping for. I must admit that I’m not an expert in FRR or OSPF, and it’s possible that the performance issues were due to a suboptimal configuration on my part. That being said, I spent quite a bit of time tweaking the config with similar results. The second screenshot shows errors from FRR, and I have no idea why they occur or how to resolve them. I tried various variations of the FRR/OSPF config to no avail. That being said, I never noticed any issues or data loss, so maybe the errors are more like warnings.
Configure a 10Gb network with SFP+
A friend of mine donated some Unifi gear, including a 48-port Gigabit switch with four 10Gb SFP+ ports. While I lost the fault tolerance, I gained a networking device capable of delivering 10Gb/s speeds. I decommissioned FRR and migrated to good ole copper wire and network switching, which delivered near 10Gb/s speeds.
Lessons learned
New skills/knowledge
- Proxmox high availability setup
- CEPH cluster
- Learned about shared storage
- FRR and OSPF
Challenges
- ceph-csi doesn’t produce good error messages. The common error message about the Volume ID already existing can be caused by several different factors
- FRRouting
- OSPF
Future improvements
At the time of writing this, Proxmox VxLAN via SDN does not support IPv6. A member of the community has proposed a PR to fix this, but it has not made it into a release. Once this becomes available, FRR can be removed to simplify the infrastructure.
Appendix
Looking up Monitors through DNS
Below is the error produced when connecting to CEPH v2 without DNS configured. To use the modern CEPH mon protocol, you need to configure monitor lookups via DNS. For simplicity, I chose the legacy version.
Warning  FailedMount  19s  kubelet  MountVolume.MountDevice failed for volume "pvc-4cb35664-d31e-43fc-976f-afafa8bd970f" : rpc error: code = Internal desc = an error (exit status 234) occurred while running mount args: [-t ceph [v2:[fd00::1]:3300/0,v1:[fd00::1]:6789/0],[v2:[fd00::2]:3300/0,v1:[fd00::2]:6789/0],[v2:[fd00::3]:3300/0,v1:[fd00::3]:6789/0]:/volumes/csi/csi-vol-341a67c4-5eec-4534-85a2-2cd2739117be/039610c4-0964-4822-9880-903514bcb043 /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/32c3ce1f6f8630839fe646169a42a8180b08f23e53a2779bac8a5530b9be4932/globalmount -o name=kubernetes,secretfile=/tmp/csi/keys/keyfile-369101855,mds_namespace=kubernetes,_netdev] stderr: unable to get monitor info from DNS SRV with service name: ceph-mon
2024-12-23T20:32:12.663+0000 7f9174802000 -1 failed for service _ceph-mon._tcp
MAC address as all 0s
If you are unable to access an interface attached to the FRR network, check the interface's MAC address. If it shows all zeros, delete the interface and restart networking to have it re-created.
Volume ID already exists
Events:
  Type     Reason                Age                From                                                                      Message
  ----     ------                ----               ----                                                                      -------
  Warning  ProvisioningFailed    26s                cephfs.csi.ceph.com_i-86a1829a0e0e_1f46b3b8-7040-4801-a6d4-1d096072e60a  failed to provision volume with StorageClass "k8s-cephfs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   ExternalProvisioning  11s (x8 over 86s)  persistentvolume-controller                                               Waiting for a volume to be created either by the external provisioner 'cephfs.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Normal   Provisioning          11s (x6 over 86s)  cephfs.csi.ceph.com_i-86a1829a0e0e_1f46b3b8-7040-4801-a6d4-1d096072e60a  External provisioner is provisioning volume for claim "default/cephfs-pvc"
  Warning  ProvisioningFailed    11s (x5 over 26s)  cephfs.csi.ceph.com_i-86a1829a0e0e_1f46b3b8-7040-4801-a6d4-1d096072e60a  failed to provision volume with StorageClass "k8s-cephfs-sc": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-54cf8978-0498-493d-998c-d441f49de9b9 already exists
The common error message “Volume ID already exists”, like the one above, can be caused by a slew of different issues. Below are some of the causes I found, with POSSIBLE solutions.
- Different FRR versions between Proxmox and k8s VMs – need to be the same major version
- Network connectivity issues to Ceph
- Need to grant k8s access to host network interfaces
- Container can’t reach ceph network
- Ceph permission errors
- Need to force ceph-csi to use the ceph-mon legacy ports
- Time sync issues, which can be remedied with a local NTP server – I set up an NTP server on my local pfSense box.
- Using a k8s flavor like Microk8s that doesn’t allow privileged containers – https://github.com/ceph/ceph-csi/blob/afd950ebedfd6ced908b0c8ad27d37cb5ff028d1/docs/cephfs/deploy.md?plain=1#L115
- Requires the subvolumegroup to be created before provisioning the PVC. If the subvolumegroup provided in the ceph-csi-config ConfigMap is missing in the ceph cluster, the PVC creation will fail and stay in a Pending state.
  - Ensure the Helm chart includes cephFS.subvolumeGroup: "csi"
- /var/lib/kubelet exists at a different path for MicroK8s – need to tell Helm where/what to mount
- Ceph pool has a max size
DISCLAIMERS
References
- Documentation/user-guides/storage.md
- DockerHub – eeacms/rsync
- How to Copy existing PVC to another namespace
- https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
- https://github.com/utkuozdemir/pv-migrate
- Ceph is an open source software-defined
- Quagga OSPF On Ubuntu
- Looking up Monitors through DNS