    ### Configuration
    
## OSD memory configuration
    
# Changing parameters of a running service
    
    ```
    ceph daemon /var/run/ceph/ceph-osd.22.asok config show | grep memory
    ceph daemon  osd.22 config set osd_memory_target 2147483648
    
    ```
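Changes made through the admin socket do not survive a daemon restart. On releases that ship the centralized configuration database (Mimic and later), the same value can be stored persistently with `ceph config set`; a minimal sketch, reusing `osd.22` from the example above:
```
ceph config set osd.22 osd_memory_target 2147483648
ceph config get osd.22 osd_memory_target
```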
    
Parameters can also be injected into running daemons with:
    ```
    ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
    ```
    
# Changing parameters permanently
Change the parameters in the `ceph.conf` file on the admin node, then push it to the other nodes:
    ```
ceph-deploy --overwrite-conf  config push <node>
```
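For reference, a hypothetical `ceph.conf` fragment carrying the parameters used above could look like this (the section placement and values are illustrative):
```
[mon]
mon_allow_pool_delete = true

[osd]
osd_memory_target = 2147483648
```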
    
    ## Crush map
Change the device class of an OSD if needed:
    ```
ceph osd crush rm-device-class osd.1
ceph osd crush set-device-class nvme osd.1
```
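The assigned device classes can be verified by listing the CRUSH tree:
```
ceph osd crush tree
ceph osd df tree
```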
    
Add a bucket to the hierarchy and place an OSD in it:
    ```
    ceph osd crush add-bucket group1 host
    ceph osd crush set 0 1.0 host=group1
    ```
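The resulting hierarchy can be checked with:
```
ceph osd tree
```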
    
    
Define the crush map rules:
    ```
ceph osd crush rule create-replicated nvme_meta default host nvme
ceph osd crush rule create-replicated hdd_data default host hdd
```
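The rules can then be listed and inspected:
```
ceph osd crush rule ls
ceph osd crush rule dump hdd_data
```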
## Manually edit the crush map
    ```
    ceph osd getcrushmap -o crush.map
    crushtool -d crush.map -o decompiled.map
    crushtool -c decompiled.map -o manual.map
    ceph osd setcrushmap -i manual.map
ceph osd crush rule create-replicated data_custom default custom_group hdd
```
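The decompiled map is plain text; the rule created above requires that the `custom_group` bucket type already exists in the map. A purely illustrative excerpt (ids and type numbering are hypothetical and depend on the cluster) might look like this:
```
# types section, with the hypothetical custom_group type added
type 0 osd
type 1 custom_group
type 2 host

# rules section
rule data_custom {
    id 2
    type replicated
    step take default class hdd
    step chooseleaf firstn 0 type custom_group
    step emit
}
```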
    
    ## Pools
    
    ```
    
ceph osd pool create ceph_data  1024  replicated hdd_data
ceph osd pool set ceph_data size 2
ceph osd pool create ceph_metadata  2  replicated nvme_meta
ceph osd pool set ceph_metadata size 2
ceph osd force-create-pg 2.0 --yes-i-really-mean-it
    ```
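The pools and their replication settings can be reviewed with:
```
ceph osd pool ls detail
ceph df
```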
    
    To change the number of placement groups in a pool
    ```
    ceph osd pool set <pool name> pg_num <num>
    ```
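On Nautilus and later, `pgp_num` follows `pg_num` automatically; on older releases it has to be raised separately so that data is actually rebalanced onto the new placement groups:
```
ceph osd pool set <pool name> pgp_num <num>
```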
    
    ### Pool autotune
    ```
    ceph osd pool autoscale-status
    ceph osd pool set foo pg_autoscale_mode on
    ```
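On recent releases the autoscaler can also be enabled by default for newly created pools through the configuration database; a sketch:
```
ceph config set global osd_pool_default_pg_autoscale_mode on
```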
    
    # Erasure coding
    ```
    ceph osd erasure-code-profile set ec-31-profile k=3 m=1 crush-failure-domain=osd crush-device-class=hdd
    ceph osd pool create ec31 2000 erasure ec-31-profile
    ceph osd pool set ec31 allow_ec_overwrites true
    ceph osd pool application enable ec31 cephfs
    
    
    ceph osd erasure-code-profile set ec-22-profile k=2 m=2 crush-failure-domain=host crush-device-class=hdd
    ceph osd pool create ec22 1024 erasure ec-22-profile
    ceph osd pool set ec22 allow_ec_overwrites true
    ceph osd pool application enable ec22 cephfs
    
```
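Existing erasure code profiles can be listed and inspected with:
```
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get ec-22-profile
```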
    ## Filesystems
    
    ```
    ceph fs new cephfs ceph_metadata ceph_data
    ```
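The state of the filesystem and of the MDS daemons can then be checked with:
```
ceph fs status
ceph mds stat
```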
    
    ## Authentication
    
    List the current authentication keys
    ```
    ceph auth list
    ```
    Adding a new key
    
    ```
    ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=ceph_data'
    ```
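The key can also be written straight to a keyring file for distribution to the client (the path is illustrative):
```
ceph auth get client.1 -o /etc/ceph/ceph.client.1.keyring
```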
    
To change ACLs, the client must also be allowed to write to the metadata (note the `p` capability on the `mds`):
    ```
    ceph auth get-or-create client.2 mon 'allow rw' mds 'allow rwp' osd 'allow rwx pool=ceph_data, allow rwx pool=ceph_metadata'
    ```
    
    ## Mount the filesystem
Copy the content of the key to the client that is going to mount the filesystem:
    ```
    [client.1]
    	key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXx==
    
    ```
Create a file `/etc/ceph/client.admin` containing only the key:
    ```
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXx==
    ```
    and then issue
    ```
    mount -v -t ceph 131.154.130.69:6789:/ /mnt/ceph -o name=1,secretfile=/etc/ceph/client.admin
    ```
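To make the mount persistent across reboots, an `/etc/fstab` entry along these lines can be used (same monitor address, client name, and secret file as above):
```
131.154.130.69:6789:/  /mnt/ceph  ceph  name=1,secretfile=/etc/ceph/client.admin,_netdev,noatime  0  0
```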
    
    ## Logging
If your OS disk is relatively full, you can accelerate log rotation by modifying the Ceph log rotation file at `/etc/logrotate.d/ceph`. Add a `size` setting after the rotation frequency so that logs are rotated (via a cron job) as soon as they exceed that size. For example, with a 500 MB threshold the configuration looks like this:
    ```
    rotate 7
    weekly
    size 500M
    compress
    sharedscripts
    ```
    And add a cron job:
    ```
    crontab -e
    ```
with an entry like this:
    ```
    30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph >/dev/null 2>&1
    ```