# Manual Installation
[Official documentation](https://docs.ceph.com/en/latest/install/manual-deployment/)
## ssh keys and hostnames
Ensure that all the nodes in your cluster can SSH to each other without a password.
Add the list of hostnames to the `/etc/hosts` file on every node.
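As a minimal sketch with three hypothetical nodes (hostnames and addresses are placeholders):
```
# /etc/hosts entries (identical on every node)
192.168.1.11 node1
192.168.1.12 node2
192.168.1.13 node3

# generate a key once, then copy it to every node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
```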

## Repos and software
Create two repository files:
`ceph-<version>-noarch.repo`
`ceph-<version>.repo`
with the following content, respectively:
```
[ceph-<version>-noarch]
name=Ceph noarch
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
and
```
[ceph-<version>]
name=Ceph <version>
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
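For example, for the Octopus release on CentOS/RHEL 8 the first file would become (the release and OS version here are just one possible choice):
```
[ceph-octopus-noarch]
name=Ceph noarch
baseurl=http://download.ceph.com/rpm-octopus/el8/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```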

And then issue
```
yum install ceph
```
on all the nodes of the cluster.
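If passwordless SSH is already in place, a quick sketch to run the installation on every node from a single host (hostnames are the hypothetical ones used above):
```
for host in node1 node2 node3; do
  ssh "$host" "sudo yum install -y ceph"
done
```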
On a clean installation the following packages will be installed:
```
cryptsetup-libs
device-mapper
device-mapper-libs
pciutils-libs
platform-python-pip
platform-python-setuptools
binutils
ceph-base
ceph-common
ceph-mds
ceph-mgr
ceph-mgr-modules-core
ceph-mon
ceph-osd
ceph-selinux
cryptsetup
device-mapper-event
device-mapper-event-libs
device-mapper-persistent-data
fmt
gperftools-libs
leveldb
libaio
libbabeltrace
libcephfs2
libconfig
libibverbs
liboath
librabbitmq
librados2
libradosstriper1
librbd1
librdkafka
librdmacm
librgw2
libstoragemgmt
libunwind
libxslt
lttng-ust
lvm2
lvm2-libs
pciutils
python3-bcrypt
python3-beautifulsoup4
python3-ceph-argparse
python3-ceph-common
python3-cephfs
python3-cheroot
python3-cherrypy
python3-jaraco
python3-jaraco-functools
python3-libstoragemgmt
python3-libstoragemgmt-clibs
python3-logutils
python3-lxml
python3-mako
python3-more-itertools
python3-pecan
python3-pip
python3-portend
python3-rados
python3-rbd
python3-rgw
python3-setuptools
python3-simplegeneric
python3-singledispatch
python3-tempora
python3-trustme
python3-waitress
python3-webencodings
python3-webob
python3-webtest
python3-werkzeug
python3-zc-lockfile
python36
rdma-core
userspace-rcu
python3-cssselect
python3-html5lib
```

This guide assumes an installation where the user `ceph` is used and has `sudo` privileges.
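A minimal sketch of how such a user could be granted `sudo` privileges (the sudoers file name is arbitrary):
```
# on every node; skip useradd if the user already exists
sudo useradd -m ceph
echo "ceph ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
```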

## ceph.conf
Create a cluster ID with the following command:
```
/usr/bin/uuidgen
4f0be998-bcbe-4267-a866-a8f0fe74c444
``` 
* First node
Log in to the first node and ensure you have the folder
```
/etc/ceph
```
Create a `ceph.conf` file:
```
[global]
fsid = <cluster id>
mon_initial_members = <hostname1, hostname2 ...>
mon_host = <ip1 , ip2 ...>
cluster_network = <network CIDR notation>
public_network = <network CIDR notation>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```
Here you put the `fsid` previously generated, the initial monitor member(s) with the corresponding IP address(es), and the cluster network in CIDR notation. If you have an additional network to be used as the public network, add it as well.
This file can contain many other configuration parameters that can be added afterwards; this basic one is sufficient for the first cluster deployment.
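As a concrete sketch, a filled-in single-monitor `ceph.conf` could look like this (hostname, addresses and networks are hypothetical):
```
[global]
fsid = 4f0be998-bcbe-4267-a866-a8f0fe74c444
mon_initial_members = node1
mon_host = 192.168.1.11
cluster_network = 192.168.2.0/24
public_network = 192.168.1.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```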
## cephx Keys creation
For a reference on user management:
[User Management](https://docs.ceph.com/en/latest/rados/operations/user-management/)
Create the monitor key:
```
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
```
Create the admin key:
```
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
```
(Note that `client.admin` is the key for the client acting with admin privileges.)

Generate a bootstrap-osd keyring, generate a `client.bootstrap-osd` user, and add the user to the keyring:
```
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
```
(The bootstrap roles are used to bootstrap services and add their keys.)
Add the generated keys to the monitor keyring:
```
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
```
You can check and verify that `/tmp/ceph.mon.keyring` now contains the monitor key, with the admin key and the bootstrap key appended to it.
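For example, you can list its entries with:
```
sudo ceph-authtool --list /tmp/ceph.mon.keyring
```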
Change the ownership
```
sudo chown ceph:ceph /tmp/ceph.mon.keyring
```
Create the monitor map 
```
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
```
This command produces output like this:
```
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to a729979a-da01-406e-8097-11dca4c6783f
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
```
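You can inspect the resulting map with:
```
monmaptool --print /tmp/monmap
```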
Create the directory for the monitor (it is important that you do this as the `ceph` user):
```
sudo -u ceph mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
```
Populate the monitor daemon(s) with the monitor map and keyring:
```
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```
Start the monitor (note that the systemd units are installed during package installation)
```
sudo systemctl start ceph-mon@{hostname}
```
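To have the monitor start automatically at boot, you can also enable the unit:
```
sudo systemctl enable ceph-mon@{hostname}
```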
At this point you can issue 
```
ceph -s 
```
to check the status of the cluster. If the status is `HEALTH_WARN` as in this example:
```
  cluster:
    id:     a729979a-da01-406e-8097-11dca4c6783f
    health: HEALTH_WARN
            1 monitors have not enabled msgr2
 
  services:
    mon: 1 daemons, quorum falabella-cloud-1 (age 9s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
you can enable the `msgr2` protocol with the following command:
```
ceph mon enable-msgr2
```
The status at this point should be `HEALTH_OK`.
## Create the OSDs
### Bluestore
Prepare and activate
```
ceph-volume lvm create --data {data-path}
ceph-volume lvm activate {ID} {FSID}
```
For example
```
ceph-volume lvm prepare --bluestore --cluster-fsid 959f6ec8-6e8c-4492-a396-7525a5108a8f --data 26-2EH87DSV-HGST-HUH728080AL4200/sdad_data --block.wal cs-001_journal/sdad_wal --block.db cs-001_journal/sdad_db
ceph-volume lvm activate --bluestore 4 f9c9e764-6646-41ee-b773-24a11252dda5
```
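After activation you can verify the result, for example with:
```
ceph-volume lvm list
ceph osd tree
```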
### Alternative (manual disk preparation)
Consider using two disks (`/dev/sdb`, `/dev/sdc`): the first one to be used for data, the second one for the `wal` and `db`.
Create a physical volume on the first one:
```
pvcreate /dev/sdb
```
Create the volume group
```
vgcreate disk1_data /dev/sdb
  Volume group "disk1_data" successfully created
vgdisplay 
  --- Volume group ---
  VG Name               disk1_data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1,82 TiB
  PE Size               4,00 MiB
  Total PE              476598
  Alloc PE / Size       0 / 0   
  Free  PE / Size       476598 / <1,82 TiB
  VG UUID               JfdKeK-35Ck-wsBF-1pvw-Uj6a-FEdf-LzDPtQ
```
Finally create the logical volume
```
lvcreate -l100%FREE -n sdb_data disk1_data
  Logical volume "sdb_data" created.
[root@ds-303 manifests]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/disk1_data/sdb_data
  LV Name                sdb_data
  VG Name                disk1_data
  LV UUID                gFZQDt-gZ3F-w2If-Us54-ijSA-qzWT-7Uc4jE
  LV Write Access        read/write
  LV Creation host, time ds-303.cr.cnaf.infn.it, 2020-09-30 12:22:19 +0200
  LV Status              available
  # open                 0
  LV Size                <1,82 TiB
  Current LE             476598
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
```
Now prepare the `wal` and `db` logical volumes on the second disk:
```
pvcreate /dev/sdc
vgcreate disk2_journal /dev/sdc
lvcreate -L1G -n sdb_wal disk2_journal
lvcreate -L10G -n sdb_db disk2_journal
```
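With the logical volumes in place, they can be handed to `ceph-volume` in `vg/lv` notation, as in the earlier example:
```
ceph-volume lvm prepare --bluestore --data disk1_data/sdb_data --block.wal disk2_journal/sdb_wal --block.db disk2_journal/sdb_db
```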