[Official documentation](https://docs.ceph.com/en/latest/install/manual-deployment/)
## ssh keys and hostnames
Ensure that all the nodes in your cluster can SSH to each other without a password.
Add the list of cluster hostnames to the `/etc/hosts` file on every node.
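The two steps above can be sketched as a small script; the node names, IPs, and user below are placeholders, not values from this guide:

```shell
#!/bin/sh
# Sketch: build an /etc/hosts snippet for the cluster nodes.
# All node names and IPs here are example placeholders.
HOSTS_FILE=hosts.snippet   # review it, then append to /etc/hosts on every node
: > "$HOSTS_FILE"
for entry in "192.168.0.11 ceph-node1" \
             "192.168.0.12 ceph-node2" \
             "192.168.0.13 ceph-node3"; do
    echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"

# Then push your public key to each node so SSH becomes passwordless
# (assumes the `ceph` user and an existing key pair):
# for n in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id "ceph@$n"; done
```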
## Repositories
Create two repository files under `/etc/yum.repos.d/`, named
`ceph-<version>-noarch.repo`
and
`ceph-<version>.repo`
with the following content respectively:
```
[ceph-<version>-noarch]
name=Ceph noarch
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
and
```
[ceph-<version>]
name=Ceph <version>
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
Then issue
```
yum install ceph
```
on all the nodes of the cluster.
On a clean installation the following packages will be installed:
```
cryptsetup-libs
device-mapper
device-mapper-libs
pciutils-libs
platform-python-pip
platform-python-setuptools
binutils
ceph-base
ceph-common
ceph-mds
ceph-mgr
ceph-mgr-modules-core
ceph-mon
ceph-osd
ceph-selinux
cryptsetup
device-mapper-event
device-mapper-event-libs
device-mapper-persistent-data
fmt
gperftools-libs
leveldb
libaio
libbabeltrace
libcephfs2
libconfig
libibverbs
liboath
librabbitmq
librados2
libradosstriper1
librbd1
librdkafka
librdmacm
librgw2
libstoragemgmt
libunwind
libxslt
lttng-ust
lvm2
lvm2-libs
pciutils
python3-bcrypt
python3-beautifulsoup4
python3-ceph-argparse
python3-ceph-common
python3-cephfs
python3-cheroot
python3-cherrypy
python3-jaraco
python3-jaraco-functools
python3-libstoragemgmt
python3-libstoragemgmt-clibs
python3-logutils
python3-lxml
python3-mako
python3-more-itertools
python3-pecan
python3-pip
python3-portend
python3-rados
python3-rbd
python3-rgw
python3-setuptools
python3-simplegeneric
python3-singledispatch
python3-tempora
python3-trustme
python3-waitress
python3-webencodings
python3-webob
python3-webtest
python3-werkzeug
python3-zc-lockfile
python36
rdma-core
userspace-rcu
python3-cssselect
python3-html5lib
```
This guide assumes an installation where the user `ceph` is used and has `sudo` privileges.
## Cluster identifier
Generate a unique cluster identifier (the `fsid`):
```
/usr/bin/uuidgen
4f0be998-bcbe-4267-a866-a8f0fe74c444
```
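This id becomes the `fsid` in `ceph.conf`. A small helper can capture it and substitute it into a config template; the template file name and the `__FSID__` placeholder token below are illustrative, not part of the guide:

```shell
#!/bin/sh
# Sketch: generate a cluster id and inject it into a ceph.conf template.
# The template file name and the __FSID__ token are assumptions.
FSID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
printf '[global]\nfsid = __FSID__\n' > ceph.conf.template
sed "s/__FSID__/$FSID/" ceph.conf.template > ceph.conf
grep '^fsid = ' ceph.conf   # show the substituted line
```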
* First node
Log into the first node and ensure that the following folder exists:
```
/etc/ceph
```
Create a `ceph.conf` file with the following content:
```
[global]
fsid = <cluster id>
mon_initial_members = <hostname1, hostname2 ...>
mon_host = <ip1 , ip2 ...>
cluster_network = <network CIDR notation>
public_network = <network CIDR notation>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```
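For concreteness, a filled-in file for a three-monitor cluster might look like the following; the hostnames and IPs are example values (the fsid is the one generated above):

```
[global]
fsid = 4f0be998-bcbe-4267-a866-a8f0fe74c444
mon_initial_members = ceph-node1, ceph-node2, ceph-node3
mon_host = 192.168.0.11, 192.168.0.12, 192.168.0.13
cluster_network = 192.168.1.0/24
public_network = 192.168.0.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```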
## Keys creation
For a reference on the user management:
[User Management](https://docs.ceph.com/en/latest/rados/operations/user-management/)
Create the monitor key
```
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
```
Create admin key
```
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
```
(Note that the name client.admin stands for 'client acting with admin privileges')
Generate a bootstrap-osd keyring and a client.bootstrap-osd user and add the user to the keyring
```
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
```
(the bootstrap users are used to bootstrap daemons and add their keys)
Add the keys to the mon keyring
```
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
```
Change the ownership
```
sudo chown ceph:ceph /tmp/ceph.mon.keyring
```
Create the monitor map
```
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
```
Create the directory for the monitor
```
sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
```
Populate the monitor daemon(s) with the monitor map and keyring
```
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```
Start the monitor
```
sudo systemctl start ceph-mon@{hostname}
```
## Create the OSDs
### Bluestore
Prepare and activate the OSD. `ceph-volume lvm create` performs both steps in one; alternatively run `prepare` followed by `activate`:
```
ceph-volume lvm create --data {data-path}
# or, in two steps:
ceph-volume lvm prepare --data {data-path}
ceph-volume lvm activate {ID} {FSID}
```
For example
```
ceph-volume lvm prepare --bluestore --cluster-fsid 959f6ec8-6e8c-4492-a396-7525a5108a8f --data 26-2EH87DSV-HGST-HUH728080AL4200/sdad_data --block.wal cs-001_journal/sdad_wal --block.db cs-001_journal/sdad_db
ceph-volume lvm activate --bluestore 4 f9c9e764-6646-41ee-b773-24a11252dda5
```
### Alternative (manual disk preparation)
Consider using two disks (`/dev/sdb`, `/dev/sdc`): the first for data, the second for the `wal` and `db`.
Create a physical volume on the first one:
```
pvcreate /dev/sdb
```
Create the volume group
```
vgcreate disk1_data /dev/sdb
Volume group "disk1_data" successfully created
vgdisplay
--- Volume group ---
VG Name disk1_data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <1,82 TiB
PE Size 4,00 MiB
Total PE 476598
Alloc PE / Size 0 / 0
Free PE / Size 476598 / <1,82 TiB
VG UUID JfdKeK-35Ck-wsBF-1pvw-Uj6a-FEdf-LzDPtQ
```
Finally create the logical volume
```
lvcreate -l100%FREE -n sdb_data disk1_data
Logical volume "sdb_data" created.
[root@ds-303 manifests]# lvdisplay
--- Logical volume ---
LV Path /dev/disk1_data/sdb_data
LV Name sdb_data
VG Name disk1_data
LV UUID gFZQDt-gZ3F-w2If-Us54-ijSA-qzWT-7Uc4jE
LV Write Access read/write
LV Creation host, time ds-303.cr.cnaf.infn.it, 2020-09-30 12:22:19 +0200
LV Status available
# open 0
LV Size <1,82 TiB
Current LE 476598
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
```
Now prepare the `wal` and `db` partitions on the second disk.
```
pvcreate /dev/sdc
vgcreate disk2_journal /dev/sdc
lvcreate -L1G -n sdb_wal disk2_journal
lvcreate -L10G -n sdb_db disk2_journal
```
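The volumes created above can then be handed to `ceph-volume` in `vg/lv` notation, analogous to the earlier prepare example (a sketch, not meant to be run outside the cluster nodes):

```
ceph-volume lvm prepare --bluestore --data disk1_data/sdb_data --block.wal disk2_journal/sdb_wal --block.db disk2_journal/sdb_db
```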