### Installation
## SSH key exchange
Choose an admin node from which to run the installation.
Distribute its SSH public key to all the hosts in the cluster so that `ceph-deploy` can log in without a password.
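A minimal sketch of the exchange, run as root on the admin node (the host list matches the nodes used later in this page; adapt it to your cluster):
```
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # generate a key pair if one does not exist
for host in qn-cnfslhc ds-001 ds-002 ds-303 ds-304 ds-507; do
    ssh-copy-id root@$host                    # push the public key to each node
done
```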
## Install the deploy script
First add the Ceph repository.
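A sketch of the repo file, assuming a Nautilus release on an EL7 host (adjust `baseurl` for your release), saved as `/etc/yum.repos.d/ceph.repo`:
```
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```
Then install the deploy tool and NTP: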
```
yum install ceph-deploy
yum install ntp ntpdate ntp-doc
```
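Ceph monitors are sensitive to clock skew, so make sure the time service is enabled on every node (a quick check, assuming `ntpd` rather than `chronyd`):
```
systemctl enable ntpd     # start NTP at boot
systemctl start ntpd
ntpq -p                   # verify that time sources are reachable
```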
## Purge cluster
```
ceph-deploy purge qn-cnfslhc ds-001 ds-002 ds-303 ds-304 ds-507
ceph-deploy purgedata qn-cnfslhc ds-001 ds-002 ds-303 ds-304 ds-507
ceph-deploy forgetkeys
```
Create the first monitor node:
```
ceph-deploy new qn-cnfslhc
```
This will create the following files:
```
ceph.conf
ceph.mon.keyring
```
Add the public network to `ceph.conf`:
```
public_network = 131.154.128.0/22
```
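For context, after the edit the `ceph.conf` generated by `ceph-deploy new` looks roughly like this (the `fsid` and monitor address are generated values, shown here as placeholders):
```
[global]
fsid = <generated-by-ceph-deploy>
mon_initial_members = qn-cnfslhc
mon_host = <monitor-ip>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 131.154.128.0/22
```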
Install Ceph on the nodes:
```
ceph-deploy install node1 node2 node3
```
Deploy the initial monitor and gather the keys:
```
ceph-deploy mon create-initial
```
Then push the configuration and the admin keyring to the nodes of your cluster:
```
ceph-deploy admin node1 node2 node3
```
Next, deploy the manager daemon:
```
ceph-deploy -v mgr create qn-cnfslhc
```
If you have a dirty installation, you may receive errors like:
```
[qn-cnfslhc][ERROR ] [errno 1] error connecting to the cluster
[qn-cnfslhc][ERROR ] exit code from command was: 1
[ceph_deploy.mgr][ERROR ] could not create mgr
[ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
```
This means that stale keys were left under `/var/lib/ceph`; remove them and re-run the `mgr create` command:
```
rm -rf /var/lib/ceph/bootstrap-mgr/
```
### Enable dashboard
```
yum install ceph-mgr-dashboard # for nautilus
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/qn-cnfslhc/server_addr 131.154.130.69
ceph config set mgr mgr/dashboard/qn-cnfslhc/server_port 5000
```
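To check that the module is serving, `ceph mgr services` prints the active endpoints:
```
ceph mgr services     # returns the dashboard URL as JSON
```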
## Remove monitor node
```
ceph-deploy -v mon destroy <id>
```
### Metadata
## Add metadata server
```
ceph-deploy mds create ds-507
```
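The MDS becomes active only once a filesystem exists. A sketch of creating one (pool names and PG counts are illustrative, not taken from this setup):
```
ceph osd pool create cephfs_data 128        # data pool
ceph osd pool create cephfs_metadata 32     # metadata pool
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs status                              # the MDS should go active
```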
### OSD
## Disk preparation
```
ceph-deploy disk zap ds-507 /dev/nvme0n1
```
Prepare the data disks: list the devices, remove any stale device-mapper entry left by a previous Ceph LVM volume, zap the GPT with `gdisk`, then zap the disk with `ceph-deploy`:
```
lsblk
# remove the stale device-mapper entry of the old OSD, e.g.:
dmsetup remove ceph--c666c0d8--e77d--4d3e--931e--c7041572f747-osd--block--3414fd14--e0bf--4adf--bf5d--3c0412821d11
gdisk /dev/sdbi       # expert menu: x, then z to zap the GPT, confirm with y, y
ceph-deploy disk zap cs-001 /dev/sdap
```
Prepare the journal (DB/WAL) partitions on the SSD:
```
vgcreate ceph-db-0 /dev/sdbj1
for i in $(seq 40 59); do lvcreate -L 23GB -n db-$i ceph-db-0; done
for i in $(seq 40 59); do lvcreate -L 13GB -n wal-$i ceph-db-0; done
for i in $(seq 40 59); do lvresize -L 10G /dev/ceph-db-0/db-$i -y; done   # shrink the db LVs to 10G
```
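With the DB and WAL logical volumes in place, each OSD can be created. A sketch for one disk (the data device, LV names and host are examples; match them to your layout):
```
ceph-deploy osd create --data /dev/sdap --block-db ceph-db-0/db-40 --block-wal ceph-db-0/wal-40 cs-001
```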