### Current deployment
## mds
* ds-507
* ds-304

## mon
* ds-507
* qn-cnfslhc
* ds-303
* ds-304
* cs-001

    ### Installation
    ## ssh keys exchange
Choose an admin node to drive the installation process and distribute its ssh key to all the hosts in the cluster.
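A minimal sketch of the key exchange, assuming root access and the host names listed above:
```
# generate a key pair on the admin node (accept the defaults)
ssh-keygen -t rsa

# push the public key to every host in the cluster
for host in qn-cnfslhc ds-303 ds-304 ds-507 cs-001; do
    ssh-copy-id root@$host
done
```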
    
    ## Install the deploy script
Add the Ceph repository on the admin node.
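The repository file itself is not shown here; a sketch of `/etc/yum.repos.d/ceph.repo` following the official install guide, assuming Nautilus packages on EL7:
```
# /etc/yum.repos.d/ceph.repo (assumed layout, adjust release/distro as needed)
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```
Then install `ceph-deploy` and the NTP packages: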
    ```
    yum install ceph-deploy
    yum install ntp ntpdate ntp-doc
    ```
## Purge cluster
If needed, wipe any previous installation first:
    ```
    ceph-deploy purge qn-cnfslhc ds-001 ds-002 ds-303 ds-304 ds-507
    ceph-deploy purgedata qn-cnfslhc ds-001 ds-002 ds-303 ds-304 ds-507
ceph-deploy forgetkeys
```
Create the first monitor node:
    ```
    ceph-deploy new qn-cnfslhc
    ```
    This will create the following files:
    ```
    ceph.conf
ceph.mon.keyring
```
Add the public network to the `[global]` section of the generated configuration file:
    ```
    public_network = 131.154.128.0/22
    ```
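The resulting `ceph.conf` should then look roughly like the following (the `fsid` and `mon_host` values are the ones generated by `ceph-deploy new`):
```
[global]
fsid = <generated fsid>
mon_initial_members = qn-cnfslhc
mon_host = <monitor IP>
public_network = 131.154.128.0/22
```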
    
Install the Ceph packages on the nodes:
```
ceph-deploy install node1 node2 node3 --release nautilus
```
Deploy the initial monitor(s) and gather the keys:
    ```
    ceph-deploy mon create-initial
    ```
    
Then push the configuration file and admin keys to the nodes of your cluster:
copying the `ceph.conf` and `ceph.client.admin.keyring` gathered above to all
your Ceph nodes lets you use the `ceph` CLI without having to specify the
monitor address and keyring each time you execute a command.
    
```
ceph-deploy admin qn-cnfslhc ds-001 ...
```
Then deploy the manager daemon:
    ```
    ceph-deploy -v mgr create qn-cnfslhc
    ```
If there are leftovers from a previous installation you may receive errors like:
    ```
    [qn-cnfslhc][ERROR ] [errno 1] error connecting to the cluster
    [qn-cnfslhc][ERROR ] exit code from command was: 1
    [ceph_deploy.mgr][ERROR ] could not create mgr
    [ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
    ```
This means that you must remove the old keys from `/var/lib/ceph`:
    ```
    rm -rf /var/lib/ceph/bootstrap-mgr/
    ```
    
Check that the cluster is healthy:
    ```
    sudo ceph -s
    ```
    
    ### Enable dashboard
    
The dashboard runs on a host with an active `ceph-mgr`:
```
yum install ceph-mgr-dashboard # for nautilus
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/qn-cnfslhc/server_addr 131.154.130.69
ceph config set mgr mgr/dashboard/qn-cnfslhc/server_port 5000
ceph dashboard set-login-credentials admin <password>
ceph config set mgr mgr/dashboard/ssl false
```
With SSL disabled the dashboard is then reachable at `http://131.154.130.69:5000/`.
    
    ### Monitors
    ## Add monitor node
    
```
ceph-deploy -v mon create <id>
ceph-deploy -v admin <id>
```
## Remove monitor node
```
ceph-deploy -v mon destroy <id>
```
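After adding or removing a monitor it is worth checking that the monitors have formed a quorum:
```
ceph quorum_status --format json-pretty
```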
    ### Metadata
    ## Add metadata server
    ```
    ceph-deploy mds create ds-507
    ```
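An MDS only becomes active once a filesystem exists. A sketch of the remaining steps, assuming pool names `cephfs_metadata`/`cephfs_data` and a placement-group count of 64:
```
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data
```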
    
    
    ### OSD
    ## Disk preparation
    ```
    ceph-deploy disk zap ds-507 /dev/nvme0n1
    ```
Prepare the data disks:
```
lsblk
# remove any stale device-mapper entry left over from a previous OSD, e.g.
dmsetup remove ceph--c666c0d8--e77d--4d3e--931e--c7041572f747-osd--block--3414fd14--e0bf--4adf--bf5d--3c0412821d11
# wipe the GPT with gdisk (expert menu: x, then z, confirm with y, y)
gdisk /dev/sdbi
ceph-deploy disk zap cs-001 /dev/sdap
```
Prepare the DB and WAL volumes on the SSD (here for OSDs 40 to 59):
```
vgcreate ceph-db-0 /dev/sdbj1

# one 23 GB DB logical volume per OSD
for i in $(seq 40 59); do lvcreate -L 23GB -n db-$i ceph-db-0; done

# one 13 GB WAL logical volume per OSD
for i in $(seq 40 59); do lvcreate -L 13GB -n wal-$i ceph-db-0; done

# resize the DB volumes to 10 GB if needed
for i in $(seq 40 59); do lvresize -L 10G /dev/ceph-db-0/db-$i -y; done
```
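With the disks zapped and the DB/WAL volumes in place, the OSDs themselves can be created. A sketch for a single disk, assuming host `cs-001`, data device `/dev/sdap` and the volumes created above:
```
ceph-deploy osd create --data /dev/sdap --block-db ceph-db-0/db-40 --block-wal ceph-db-0/wal-40 cs-001
```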
    
    ### Client installation
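This follows the same pattern as the cluster nodes; a minimal sketch, assuming a client host named `client-node` reachable over ssh:
```
ceph-deploy install client-node --release nautilus
ceph-deploy admin client-node
```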
    
    ### Rados gateway
    
[Official docs](https://docs.ceph.com/docs/master/install/install-ceph-gateway/)
    
    ```
    ceph-deploy install --rgw ds-517
    
ceph-deploy --overwrite-conf rgw create ds-517
```

The gateway is then immediately reachable at `http://client-node:7480`.
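From any machine that can reach the gateway, a quick check (the unauthenticated request should return an empty `ListAllMyBucketsResult` XML document):
```
curl http://client-node:7480
```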