# Manual Installation
[Official documentation](https://docs.ceph.com/en/latest/install/manual-deployment/)
## SSH keys and hostnames
Ensure that all the nodes in the cluster can SSH to each other without a password.
Add the list of hostnames to the `/etc/hosts` file of every node.
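As a minimal sketch, assuming a `ceph` user and three hypothetical hosts `node1`, `node2`, `node3`, the key distribution can be done with:
```
# generate a key pair on the admin node (skip if one already exists)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# copy the public key to every node of the cluster
for host in node1 node2 node3; do
  ssh-copy-id ceph@${host}
done
```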

## Repos and software
Create two repository files in `/etc/yum.repos.d/`:
* `ceph-<version>-noarch.repo`
* `ceph-<version>.repo`

with the following content, respectively:
```
[ceph-<version>-noarch]
name=Ceph noarch
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
and
```
[ceph-<version>]
name=Ceph <version>
baseurl=http://download.ceph.com/rpm-<version>/<os-version>/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```
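As an example (the release and OS strings are placeholders to adapt to your setup), for a hypothetical `octopus` installation on EL8 the first file would read:
```
[ceph-octopus-noarch]
name=Ceph noarch
baseurl=http://download.ceph.com/rpm-octopus/el8/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=10
```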

And then issue
```
yum install ceph
```
on all the nodes of the cluster.
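If passwordless SSH is already in place, the installation can be launched from a single node; a sketch assuming the hypothetical hostnames `node1`, `node2`, `node3`:
```
for host in node1 node2 node3; do
  ssh ${host} "sudo yum install -y ceph"
done
```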
On a clean installation the following packages will be installed:
```
cryptsetup-libs
device-mapper
device-mapper-libs
pciutils-libs
platform-python-pip
platform-python-setuptools
binutils
ceph-base
ceph-common
ceph-mds
ceph-mgr
ceph-mgr-modules-core
ceph-mon
ceph-osd
ceph-selinux
cryptsetup
device-mapper-event
device-mapper-event-libs
device-mapper-persistent-data
fmt
gperftools-libs
leveldb
libaio
libbabeltrace
libcephfs2
libconfig
libibverbs
liboath
librabbitmq
librados2
libradosstriper1
librbd1
librdkafka
librdmacm
librgw2
libstoragemgmt
libunwind
libxslt
lttng-ust
lvm2
lvm2-libs
pciutils
python3-bcrypt
python3-beautifulsoup4
python3-ceph-argparse
python3-ceph-common
python3-cephfs
python3-cheroot
python3-cherrypy
python3-jaraco
python3-jaraco-functools
python3-libstoragemgmt
python3-libstoragemgmt-clibs
python3-logutils
python3-lxml
python3-mako
python3-more-itertools
python3-pecan
python3-pip
python3-portend
python3-rados
python3-rbd
python3-rgw
python3-setuptools
python3-simplegeneric
python3-singledispatch
python3-tempora
python3-trustme
python3-waitress
python3-webencodings
python3-webob
python3-webtest
python3-werkzeug
python3-zc-lockfile
python36
rdma-core
userspace-rcu
python3-cssselect
python3-html5lib
```

This guide assumes an installation where the `ceph` user is used and has `sudo` privileges.

## Cluster identifier
Generate a unique identifier (`fsid`) for the cluster:
```
/usr/bin/uuidgen
4f0be998-bcbe-4267-a866-a8f0fe74c444
``` 
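The generated value is the cluster `fsid`; keeping it in a shell variable makes it easy to reuse in `ceph.conf` and in the monitor map below (a convenience, not a requirement):
```
FSID=$(/usr/bin/uuidgen)
echo ${FSID}
```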
## First node
Log in to the first node and make sure that the directory
```
/etc/ceph
```
exists, then create a `ceph.conf` file:
```
[global]
fsid = <cluster id>
mon_initial_members = <hostname1, hostname2 ...>
mon_host = <ip1 , ip2 ...>
cluster_network = <network CIDR notation>
public_network = <network CIDR notation>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```
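A filled-in example, with hypothetical hostnames and addresses and the `fsid` generated above:
```
[global]
fsid = 4f0be998-bcbe-4267-a866-a8f0fe74c444
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.11, 192.168.1.12, 192.168.1.13
cluster_network = 192.168.2.0/24
public_network = 192.168.1.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```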
## Keys creation
For a reference on user management, see:
[User Management](https://docs.ceph.com/en/latest/rados/operations/user-management/)
Monitor key creation
```
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
```
Create admin key
```
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
```
(Note that the name client.admin stands for 'client acting with admin privileges')

Generate a bootstrap-osd keyring and a client.bootstrap-osd user and add the user to the keyring
```
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
```
(the bootstrap users are used to bootstrap services and add their keys)
Add the keys to the mon keyring
```
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
```
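After the import the monitor keyring should contain the `mon.`, `client.admin` and `client.bootstrap-osd` keys; this can be checked with `ceph-authtool -l`:
```
sudo ceph-authtool -l /tmp/ceph.mon.keyring
```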
Change the ownership
```
sudo chown ceph:ceph /tmp/ceph.mon.keyring
```
Create the monitor map 
```
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
```
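For example, with the hypothetical first node used above (the fsid must match the one in `ceph.conf`); `monmaptool --print` shows the resulting map:
```
monmaptool --create --add node1 192.168.1.11 --fsid 4f0be998-bcbe-4267-a866-a8f0fe74c444 /tmp/monmap
monmaptool --print /tmp/monmap
```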

Create the directory for the monitor
```
sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
```
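With the default cluster name `ceph` and the hypothetical hostname `node1`, giving ownership to the `ceph` user so that it can populate the directory in the next step:
```
sudo mkdir /var/lib/ceph/mon/ceph-node1
sudo chown ceph:ceph /var/lib/ceph/mon/ceph-node1
```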
Populate the monitor daemon(s) with the monitor map and keyring
```
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```
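With the default cluster name the `--cluster` option can be omitted, for example:
```
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```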
Start the monitor
```
sudo systemctl start ceph-mon@node1
```
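The unit can also be enabled at boot, and once the monitor is up the cluster status can be checked (assuming the admin keyring is in `/etc/ceph`):
```
sudo systemctl enable ceph-mon@node1
sudo ceph -s
```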
## Create the OSDs
### Bluestore
Prepare and activate the OSD (note that `ceph-volume lvm create` performs both steps in one go):
```
ceph-volume lvm prepare --data {data-path}
ceph-volume lvm activate {ID} {FSID}
```
For example
```
ceph-volume lvm prepare --bluestore --cluster-fsid 959f6ec8-6e8c-4492-a396-7525a5108a8f --data 26-2EH87DSV-HGST-HUH728080AL4200/sdad_data --block.wal cs-001_journal/sdad_wal --block.db cs-001_journal/sdad_db
ceph-volume lvm activate --bluestore 4 f9c9e764-6646-41ee-b773-24a11252dda5
```
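Once activated, the OSDs handled by `ceph-volume` on the node can be listed with:
```
sudo ceph-volume lvm list
```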
### Alternative (manual disk preparation)
Consider an installation using two disks (`/dev/sdb`, `/dev/sdc`): the first is used for data, the second for the `wal` and `db`.
Create a physical volume on the first one:
```
pvcreate /dev/sdb
```
Create the volume group
```
vgcreate disk1_data /dev/sdb
  Volume group "disk1_data" successfully created
vgdisplay 
  --- Volume group ---
  VG Name               disk1_data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1,82 TiB
  PE Size               4,00 MiB
  Total PE              476598
  Alloc PE / Size       0 / 0   
  Free  PE / Size       476598 / <1,82 TiB
  VG UUID               JfdKeK-35Ck-wsBF-1pvw-Uj6a-FEdf-LzDPtQ
```
Finally create the logical volume
```
lvcreate -l100%FREE -n sdb_data disk1_data
  Logical volume "sdb_data" created.
[root@ds-303 manifests]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/disk1_data/sdb_data
  LV Name                sdb_data
  VG Name                disk1_data
  LV UUID                gFZQDt-gZ3F-w2If-Us54-ijSA-qzWT-7Uc4jE
  LV Write Access        read/write
  LV Creation host, time ds-303.cr.cnaf.infn.it, 2020-09-30 12:22:19 +0200
  LV Status              available
  # open                 0
  LV Size                <1,82 TiB
  Current LE             476598
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
```
Now prepare the `wal` and `db` logical volumes on the second disk.
```
pvcreate /dev/sdc
vgcreate disk2_journal /dev/sdc
lvcreate -L1G -n sdb_wal disk2_journal
lvcreate -L10G -n sdb_db disk2_journal
```
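These logical volumes can then be passed to `ceph-volume` as in the Bluestore example above; a sketch, using the volume group and logical volume names created here:
```
ceph-volume lvm prepare --bluestore --data disk1_data/sdb_data --block.wal disk2_journal/sdb_wal --block.db disk2_journal/sdb_db
```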