Commit 225f04a6
authored 4 years ago by Antonio Falabella

Docs for manual install

parent 20a0ea81
Showing 3 changed files with 168 additions and 0 deletions:

* README.md (+2 −0)
* manual_installation.md (+166 −0)
* puppet_module.md (+0 −0)
README.md (+2 −0)

```
@@ -10,3 +10,5 @@
 * [C-states](power.md)
 * [Cheatsheet](cheatsheet.md)
 * [inotify](inotify.md)
+* [Manual installation](manual_installation.md)
+* [Puppet module](puppet_module.md)
```
manual_installation.md (new file, mode 100644, +166 −0)
# Manual Installation
[Official documentation](https://docs.ceph.com/en/latest/install/manual-deployment/)
## Repos and software
For example, the Nautilus repos are:
```
https://download.ceph.com/rpm-nautilus/el7/x86_64/
https://download.ceph.com/rpm-nautilus/el7/noarch/
```
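To make these repos available to `yum`, a repo file can be placed under `/etc/yum.repos.d/`. The snippet below is an illustrative sketch following the upstream packaging conventions; adjust the release and distribution paths as needed:
```
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```
After that, `sudo yum install ceph` should pull in the packages listed below.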
A Nautilus installation consists, for example, of the following packages:
```
ceph-selinux-14.2.11-0.el7.x86_64
libcephfs2-14.2.11-0.el7.x86_64
ceph-osd-14.2.11-0.el7.x86_64
ceph-common-14.2.11-0.el7.x86_64
ceph-mds-14.2.11-0.el7.x86_64
python-cephfs-14.2.11-0.el7.x86_64
ceph-mgr-14.2.11-0.el7.x86_64
ceph-14.2.11-0.el7.x86_64
python-ceph-argparse-14.2.11-0.el7.x86_64
ceph-mon-14.2.11-0.el7.x86_64
ceph-base-14.2.11-0.el7.x86_64
```
This guide considers an installation where the user `ceph` is used and has `sudo` privileges.
## Cluster identifier
Generate a unique cluster id (`fsid`):
```
/usr/bin/uuidgen
4f0be998-bcbe-4267-a866-a8f0fe74c444
```
* First node

Log into the first node, ensure the folder `/etc/ceph` exists, and create a `ceph.conf` file:
```
[global]
fsid = <cluster id>
mon_initial_members = <hostname1, hostname2 ...>
mon_host = <ip1 , ip2 ...>
cluster_network = <network CIDR notation>
public_network = <network CIDR notation>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```
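For illustration, a filled-in file could look like this (the hostname, IP and networks below are placeholders; the fsid is the one generated above):
```
[global]
fsid = 4f0be998-bcbe-4267-a866-a8f0fe74c444
mon_initial_members = node1
mon_host = 192.168.0.10
cluster_network = 192.168.0.0/24
public_network = 192.168.0.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth_supported = cephx
```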
## Keys creation
For a reference on user management, see [User Management](https://docs.ceph.com/en/latest/rados/operations/user-management/).
Monitor key creation
```
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
```
Create admin key
```
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
```
(Note that the name client.admin stands for 'client acting with admin privileges')
Generate a bootstrap-osd keyring and a `client.bootstrap-osd` user, and add the user to the keyring:
```
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
```
(The bootstrap users are used to bootstrap daemons and add their keys to the cluster.)
Add the keys to the mon keyring
```
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
```
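As an optional check, the content of the combined keyring can be listed with `ceph-authtool`:
```
sudo ceph-authtool -l /tmp/ceph.mon.keyring
```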
Change the ownership so the `ceph` user can read the keyring:
```
sudo chown ceph:ceph /tmp/ceph.mon.keyring
```
Create the monitor map
```
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
```
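For example, with the placeholders filled in (host and IP are the same illustrative values used above), printing the result back as a check:
```
monmaptool --create --add node1 192.168.0.10 --fsid 4f0be998-bcbe-4267-a866-a8f0fe74c444 /tmp/monmap
monmaptool --print /tmp/monmap
```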
Create the directory for the monitor
```
sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
```
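With the default cluster name `ceph` and the example host `node1`, this becomes:
```
sudo mkdir /var/lib/ceph/mon/ceph-node1
```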
Populate the monitor daemon(s) with the monitor map and keyring
```
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```
Start the monitor
```
sudo systemctl start ceph-mon@node1
```
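To check that the daemon came up, query systemd and the cluster status (the monitor should reach quorum):
```
sudo systemctl status ceph-mon@node1
sudo ceph -s
```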
## Create the OSDs
### Bluestore
Prepare and activate. The `create` subcommand does both in one step:
```
sudo ceph-volume lvm create --data {data-path}
```
If the OSD was instead only prepared (with `ceph-volume lvm prepare`), activate it separately:
```
sudo ceph-volume lvm activate {ID} {FSID}
```
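The resulting OSDs, with their IDs and FSIDs, can be inspected with:
```
sudo ceph-volume lvm list
```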
### Alternative (manual disk preparation)
Consider using two disks (`/dev/sdb`, `/dev/sdc`): the first is used for data, the second for the `wal` and `db`.
Create a physical volume on the first one:
```
pvcreate /dev/sdb
```
Create the volume group
```
vgcreate disk1_data /dev/sdb
Volume group "disk1_data" successfully created
vgdisplay
--- Volume group ---
VG Name disk1_data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <1,82 TiB
PE Size 4,00 MiB
Total PE 476598
Alloc PE / Size 0 / 0
Free PE / Size 476598 / <1,82 TiB
VG UUID JfdKeK-35Ck-wsBF-1pvw-Uj6a-FEdf-LzDPtQ
```
Finally, create the logical volume:
```
lvcreate -l100%FREE -n sdb_data disk1_data
Logical volume "sdb_data" created.
[root@ds-303 manifests]# lvdisplay
--- Logical volume ---
LV Path /dev/disk1_data/sdb_data
LV Name sdb_data
VG Name disk1_data
LV UUID gFZQDt-gZ3F-w2If-Us54-ijSA-qzWT-7Uc4jE
LV Write Access read/write
LV Creation host, time ds-303.cr.cnaf.infn.it, 2020-09-30 12:22:19 +0200
LV Status available
# open 0
LV Size <1,82 TiB
Current LE 476598
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
```
Now prepare the `wal` and `db` partitions on the second disk.
```
pvcreate /dev/sdc
vgcreate disk2_journal /dev/sdc
lvcreate -L1G -n sdb_wal disk2_journal
lvcreate -L10G -n sdb_db disk2_journal
```
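The logical volumes prepared above can then be handed to `ceph-volume`. A sketch of the corresponding create call, using the volume group and LV names created above:
```
sudo ceph-volume lvm create --data disk1_data/sdb_data \
     --block.wal disk2_journal/sdb_wal \
     --block.db disk2_journal/sdb_db
```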
puppet_module.md (new file, mode 100644, +0 −0)