## References
Available options can be found directly in the source code:
```
https://github.com/ceph/ceph/tree/master/src/common/options
```
Or by querying the daemon:
```
ceph daemon <type.id> config <diff|get|set|show>
"config diff": "dump diff of current config and default config"
"config get": "config get <field>: get the config value"
"config set": "config set <field> <val> [<val> ...]: set a config variable"
"config show": "dump current config settings"
```
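A single option can also be read back with `config get`, e.g. for osd.0:
```
ceph daemon osd.0 config get osd_max_scrubs
```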
Example: Which options exist for osd scrubbing?
```
# ceph daemon osd.0 config show | grep scrub
"osd_scrub_thread_timeout": "60",
"osd_scrub_thread_suicide_timeout": "300",
"osd_scrub_finalize_thread_timeout": "600",
"osd_scrub_invalid_stats": "true",
"osd_max_scrubs": "1",
```
```
The daemon can also be addressed via the full path of its admin socket, and single options can be changed at runtime:
```
ceph daemon /var/run/ceph/ceph-osd.22.asok config show | grep memory
ceph daemon osd.22 config set osd_memory_target 2147483648
```
Or you can inject parameters with
```
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
```
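On recent releases (Mimic and later) runtime changes can also be stored in the monitors' central configuration database, so they survive daemon restarts; a minimal sketch with the memory target from above:
```
ceph config set osd osd_memory_target 2147483648
ceph config get osd osd_memory_target
```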
## Changing parameters permanently
Change the parameters in the admin node's `ceph.conf` and push the file to the other nodes:
```
ceph-deploy --overwrite-conf config push <node>
```
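The daemons only pick up the new `ceph.conf` after being restarted; on systemd-managed nodes this is typically done per daemon type, e.g.:
```
systemctl restart ceph-mon.target   # on the MON nodes
systemctl restart ceph-osd.target   # on the OSD nodes
```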
## Crush map
Change the device class of an OSD if needed:
```
ceph osd crush rm-device-class osd.1
ceph osd crush set-device-class nvme osd.1
```
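The defined device classes and the resulting tree can be checked with:
```
ceph osd crush class ls
ceph osd tree
```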
Add a bucket to the hierarchy and place an OSD in it:
```
ceph osd crush add-bucket group1 host
ceph osd crush set 0 1.0 host=group1
```
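If the new bucket should sit under an existing root, it can be moved into place (here assuming the default root):
```
ceph osd crush move group1 root=default
```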
Define the crush map rules
```
ceph osd crush rule create-replicated nvme_meta default host nvme
ceph osd crush rule create-replicated hdd_data default host hdd
```
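A rule only takes effect once a pool is assigned to it, e.g. for the metadata and data pools used later in these notes:
```
ceph osd pool set ceph_metadata crush_rule nvme_meta
ceph osd pool set ceph_data crush_rule hdd_data
```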
## Manually edit the crush map
```
# dump the current crush map and decompile it to a text file
ceph osd getcrushmap -o crush.map
crushtool -d crush.map -o decompiled.map
# edit decompiled.map by hand, then recompile and load it back
crushtool -c decompiled.map -o manual.map
ceph osd setcrushmap -i manual.map
# example follow-up: a rule using a custom bucket type, resizing a pool, forcing a PG to be created
ceph osd crush rule create-replicated data_custom default custom_group hdd
ceph osd pool set ceph_metadata size 2
ceph osd force-create-pg 2.0 --yes-i-really-mean-it
```
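Before loading a hand-edited map it can be dry-run with crushtool to see which OSDs a rule would map to (rule id and replica count below are just examples):
```
crushtool -i manual.map --test --rule 0 --num-rep 3 --show-mappings
```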
To change the number of placement groups in a pool
```
ceph osd pool set <pool name> pg_num <num>
```
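On releases before Nautilus the placement number has to be raised as well, otherwise data is not actually remapped:
```
ceph osd pool set <pool name> pgp_num <num>
```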
### Pool autotune
```
ceph osd pool autoscale-status
ceph osd pool set foo pg_autoscale_mode on
```
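The autoscaler can also be given a hint about the expected relative size of a pool (option name as in recent releases):
```
ceph osd pool set foo target_size_ratio 0.2
```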
## Erasure coding
```
ceph osd erasure-code-profile set ec-31-profile k=3 m=1 crush-failure-domain=osd crush-device-class=hdd
ceph osd pool create ec31 2000 erasure ec-31-profile
ceph osd pool set ec31 allow_ec_overwrites true
ceph osd pool application enable ec31 cephfs
ceph osd erasure-code-profile set ec-22-profile k=2 m=2 crush-failure-domain=host crush-device-class=hdd
ceph osd pool create ec22 1024 erasure ec-22-profile
ceph osd pool set ec22 allow_ec_overwrites true
ceph osd pool application enable ec22 cephfs
```
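The defined profiles and their settings can be inspected with:
```
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get ec-31-profile
```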
## Authentication
List the current authentication keys
```
ceph auth list
```
Adding a new key
```
ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=ceph_data'
```
To change ACLs you must also enable the client to write to the metadata (note the `p` in the `mds` capability):
```
ceph auth get-or-create client.2 mon 'allow rw' mds 'allow rwp' osd 'allow rwx pool=ceph_data, allow rwx pool=ceph_metadata'
```
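A single key can also be exported to a keyring file for distribution to a client (the path below is just the usual convention):
```
ceph auth get client.1 -o /etc/ceph/ceph.client.1.keyring
```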
## Adding a client
In order to add a client you need at least the ceph and epel repos, and you have to install the package `ceph-common`. The dependencies are (here for example on RedHat 8.4):
```
==============================================================================================================================================================================================================================================
Package Architecture Version Repository Size
==============================================================================================================================================================================================================================================
Installing:
ceph-common x86_64 2:16.2.6-0.el8 ceph 24 M
Installing dependencies:
daxctl-libs x86_64 71.1-2.el8 Storage-RedHat-8-BaseOS 42 k
gperftools-libs x86_64 1:2.7-9.el8 epel 306 k
leveldb x86_64 1.22-1.el8 epel 181 k
libbabeltrace x86_64 1.5.4-3.el8 Storage-RedHat-8-BaseOS 200 k
libcephfs2 x86_64 2:16.2.6-0.el8 ceph 811 k
liboath x86_64 2.6.2-3.el8 epel 59 k
libpmem x86_64 1.6.1-1.el8 Storage-RedHat-8-AppStream 79 k
libpmemobj x86_64 1.6.1-1.el8 Storage-RedHat-8-AppStream 145 k
librabbitmq x86_64 0.9.0-3.el8 Storage-RedHat-8-BaseOS 47 k
librados2 x86_64 2:16.2.6-0.el8 ceph 3.7 M
libradosstriper1 x86_64 2:16.2.6-0.el8 ceph 492 k
librbd1 x86_64 2:16.2.6-0.el8 ceph 4.0 M
librdkafka x86_64 0.11.4-1.el8 Storage-RedHat-8-AppStream 353 k
librdmacm x86_64 32.0-4.el8 Storage-RedHat-8-BaseOS 77 k
librgw2 x86_64 2:16.2.6-0.el8 ceph 3.7 M
libunwind x86_64 1.3.1-3.el8 epel 75 k
lttng-ust x86_64 2.8.1-11.el8 Storage-RedHat-8-AppStream 259 k
ndctl-libs x86_64 71.1-2.el8 Storage-RedHat-8-BaseOS 78 k
python3-ceph-argparse x86_64 2:16.2.6-0.el8 ceph 45 k
python3-ceph-common x86_64 2:16.2.6-0.el8 ceph 83 k
python3-cephfs x86_64 2:16.2.6-0.el8 ceph 214 k
python3-prettytable noarch 0.7.2-14.el8 Storage-RedHat-8-AppStream 44 k
python3-rados x86_64 2:16.2.6-0.el8 ceph 387 k
python3-rbd x86_64 2:16.2.6-0.el8 ceph 367 k
python3-rgw x86_64 2:16.2.6-0.el8 ceph 114 k
userspace-rcu x86_64 0.10.1-4.el8 Storage-RedHat-8-BaseOS 101 k
Transaction Summary
==============================================================================================================================================================================================================================================
```
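With the ceph and epel repositories in place, the installation itself is a single transaction:
```
dnf install ceph-common
```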
## Mount the filesystem
Copy the content of the key to the client that is going to mount the fs
```
[client.1]
key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXx==
```
Create a file `/etc/ceph/client.admin` on the client containing only the key
```
XXXXXXXXXXXXXXXXXXXXXXXXXXXXx==
```
and then issue
```
mount -v -t ceph 131.154.130.69:6789:/ /mnt/ceph -o name=1,secretfile=/etc/ceph/client.admin
```
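To make the mount persistent an fstab entry along these lines can be used (`_netdev` delays the mount until the network is up):
```
131.154.130.69:6789:/    /mnt/ceph    ceph    name=1,secretfile=/etc/ceph/client.admin,_netdev,noatime    0 0
```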
### Mount a different folder
To mount a different folder, specify the path both when authorizing the client and in the mount command
```
ceph fs authorize cephfs client.<client name> / r <path> rw
```
The folder must be present in the fs before mounting.
```
mount -v -t ceph 131.154.130.166:6789:<path> <mount point> -o name=<client>,secretfile=<secretfile>
```
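A concrete run-through with a hypothetical `client.3` restricted to `/projects` (names and IP follow the examples above):
```
ceph fs authorize cephfs client.3 / r /projects rw
mount -v -t ceph 131.154.130.166:6789:/projects /mnt/projects -o name=3,secretfile=/etc/ceph/client.3
```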
## Logging
If your OS disk is relatively full, you can speed up log rotation by modifying the Ceph logrotate file at `/etc/logrotate.d/ceph`: add a `size` directive after the rotation frequency so that logs are rotated (by the cron job below) as soon as they exceed that size. For example:
```
rotate 7
weekly
size 500M
compress
sharedscripts
```
And add a cron job:
```
crontab -e
```
with an entry like this
```
30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph >/dev/null 2>&1
```
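The setup can be verified by running logrotate by hand in debug mode, which only prints what would be done:
```
logrotate -d /etc/logrotate.d/ceph
```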