ceph - Basic Install 2 - ceph install and configuration (Ver. 10.2.11_jewel)

By 때찌때찌맴매 - April 19, 2018

* ceph is normally documented against Debian-family distributions, but this post runs through the install on centos.
* The setup uses 7 servers in total: 1 MGMT control server (which will also serve as the 3rd mon and the mds), 2 mon monitor servers, and 4 osd storage servers.
  Testing on VMs works the same way; the only real difference is whether the storage uses RAID or individual disks.
* "ceph - Basic Install 2" is carried out on the MGMT (control) server only.
* References: http://docs.ceph.com/docs/master/
             https://ceph.com/

[ CEPH Install - MON Server Setup ]

* All of the following is done on the mgmt server.
* ceph-deploy is the management package used when installing and operating ceph. It pushes configuration changes to every node at once and makes tasks such as adding OSDs convenient.
* Run yum update first. If an old version of ceph-deploy gets installed, it cannot find the ceph download site and everything fails from the install step onward.

[root@mgmt ~]# yum install ceph-deploy -y
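
* Before going further you can confirm the installed ceph-deploy version (a quick check; any reasonably recent 1.5.x should do - the logs in this post were produced with 1.5.25 and 1.5.39):

[root@mgmt ~]# ceph-deploy --version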

* The mon-0 and mon-1 hosts are the main monitors; I added mgmt as a sub monitor, as a personal preference. You do not have to include mgmt.

[root@mgmt ~]# ceph-deploy new mon-0 mon-1 mgmt
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /usr/bin/ceph-deploy new mon-0 mon-1 mgmt
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon-0][DEBUG ] connected to host: mgmt 
[mon-0][INFO  ] Running command: ssh -CT -o BatchMode=yes mon-0
[mon-0][DEBUG ] connected to host: mon-0 
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
..
[mon-1][DEBUG ] connected to host: mgmt 
[mon-1][INFO  ] Running command: ssh -CT -o BatchMode=yes mon-1
[mon-1][DEBUG ] connected to host: mon-1 
...
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mgmt][DEBUG ] connected to host: mgmt 
...
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon-0', 'mon-1', 'mgmt']
...
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...


* Once the command above completes, the following ceph-related files are created.

[root@mgmt ~]# ll
-rw-r--r--   1 root root  1033  4월 18 11:38 .cephdeploy.conf
-rw-r--r--   1 root root   263  4월 18 11:38 ceph.conf
-rw-r--r--   1 root root  2395  4월 18 11:38 ceph.log
-rw-------   1 root root    73  4월 18 11:38 ceph.mon.keyring

* ceph.conf settings

vi ceph.conf 

[global]
fsid = 0651cfa0-8ce4-4fbd-b35b-966e3d262851
mon_initial_members = mon-0, mon-1, mgmt
mon_host = 192.168.1.12,192.168.1.13,192.168.1.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

# Everything from here on is additional configuration.
# As a baseline, add only osd_pool_default_size and rbd_default_features, and add the remaining options as your situation requires.
# (rbd_default_features = 1 enables only the layering feature, which keeps RBD images usable from older kernel clients; for pg_num sizing, see the note after this file.)
# For the ceph.conf option reference, see http://docs.ceph.com/docs/kraken/rados/configuration/ceph-conf/

osd_pool_default_size = 2
rbd_default_features = 1
osd_journal_size = 10240
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128


[mon]
mon_host = mon-0, mon-1, mgmt
mon_addr = 192.168.1.12,192.168.1.13,192.168.1.11
mon_clock_drift_allowed = .3
mon_clock_drift_warn_backoff = 30    # exponential backoff factor between repeated clock-drift warnings
mon_osd_full_ratio = .90
mon_osd_nearfull_ratio = .85
mon_osd_report_timeout = 300
debug_ms = 1
debug_mon = 20
debug_paxos = 20
debug_auth = 20


[mon.0]
host = mon-0
mon_addr = 192.168.1.12:6789

[mon.1]
host = mon-1
mon_addr = 192.168.1.13:6789

[mon.2]
host = mgmt
mon_addr = 192.168.1.11:6789

[mds]
mds_standby_replay = true
mds_cache_size = 250000
debug_ms = 1
debug_mds = 20
debug_journaler = 20

[mds.0]
host = mgmt

[osd.0]
host = osd-0

[osd.1]
host = osd-1

[osd.2]
host = osd-2

[osd.3]
host = osd-3
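
* A quick note on the pg_num values above, using the common sizing rule of thumb (an approximation, not a hard requirement): total PGs ≈ (number of OSDs x 100) / replica count, rounded to a power of two. For this cluster that is (4 x 100) / 2 = 200, so 128 or 256 are the natural choices, and 128 is fine for a small test cluster. Note that in jewel a pool's pg_num can be increased later but never decreased, so err on the low side.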



* Install the ceph packages on each server. Installation should complete as shown below; if it does not, check the firewall, network connectivity, resolv.conf, and yum update (i.e. the ceph-deploy version).
* Install across all hosts at once, by hostname.
** Additional note **
On ubuntu, ceph ver. 10.x installs as 10.x with the install command below, but on centos it occasionally jumps to a newer major version. If the version still climbs even with the ceph yum repo on mgmt pinned to jewel, pin the download path explicitly and proceed:
ex)
ceph-deploy install --repo-url 'https://download.ceph.com/rpm-jewel/el7' --gpg-url 'https://download.ceph.com/keys/release.asc' mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3

[root@mgmt ~]# ceph-deploy install mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /usr/bin/ceph-deploy install mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3
[ceph_deploy.install][DEBUG ] Detecting platform for host mgmt ...
.
.
.

[osd-3][DEBUG ] 
[osd-3][DEBUG ] Complete!
[osd-3][INFO  ] Running command: ceph --version
[osd-3][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)


* Initialize and start the ceph-mon daemons.

[root@mgmt ~]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mon-0 mon-1 mgmt
[ceph_deploy.mon][DEBUG ] detecting platform for host mon-0 ...
.
.

* Register the ceph-mon daemons with systemd and set them to start automatically.

[root@mgmt ~]# ceph-deploy mon create mon-0 mon-1 mgmt
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create mon-0 mon-1 mgmt
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['mon-0', 'mon-1', 'mgmt']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mon-0 mon-1 mgmt
[ceph_deploy.mon][DEBUG ] detecting platform for host mon-0 ...

.
.
.

* This initializes and starts the daemons on the mon servers registered when ceph.conf was generated. On each server, confirm the daemon is up with "netstat -nlpt".

[root@mgmt ~]# netstat -nlpt | grep ceph
Active Internet connections (only servers)
Proto  Recv-Q Send-Q Local Address     Foreign Address    State       PID/Program name    
tcp         0      0 192.168.1.11:6789  0.0.0.0:*          LISTEN      14772/ceph-mon      
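
* As an alternative check (a minimal sketch; the unit name follows the systemd units ceph-deploy enables on CentOS 7, instanced by the mon's short hostname):

[root@mgmt ~]# systemctl status ceph-mon@mgmt        # on mon-0 this would be ceph-mon@mon-0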

* After mon create-initial, the key files below are generated. These let the mon servers access the ceph cluster properly.
[root@mgmt ~]# ll
-rw-------   1 root root     71  4월 18 14:24 ceph.bootstrap-mds.keyring
-rw-------   1 root root     71  4월 18 14:24 ceph.bootstrap-mgr.keyring
-rw-------   1 root root    113  4월 18 14:24 ceph.bootstrap-osd.keyring
-rw-------   1 root root    113  4월 18 14:24 ceph.bootstrap-rgw.keyring
-rw-------   1 root root    129  4월 18 14:24 ceph.client.admin.keyring

* Gather the generated key files so ceph-deploy can reference them.
[root@mgmt ~]# ceph-deploy gatherkeys mon-0 mon-1 mgmt
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy gatherkeys mon-0 mon-1 mgmt
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x24e8cb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['mon-0', 'mon-1', 'mgmt']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x24cd500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpS0dvzG
[mon-0][DEBUG ] connected to host: mon-0 
[mon-0][DEBUG ] detect platform information from remote host
[mon-0][DEBUG ] detect machine type
[mon-0][DEBUG ] get remote short hostname
[mon-0][DEBUG ] fetch remote file
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon-0.asok mon_status
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon-0/keyring auth get client.admin
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon-0/keyring auth get client.bootstrap-mds
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon-0/keyring auth get client.bootstrap-mgr
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon-0/keyring auth get client.bootstrap-osd
[mon-0][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon-0/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.client.admin.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mds.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mgr.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-osd.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-rgw.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpS0dvzG


* The mon server setup is now finished. Before moving on to the osd setup, let's check the cluster status first.
[root@mgmt ~]# ceph -s
    cluster 0651cfa0-8ce4-4fbd-b35b-966e3d262851
     health HEALTH_ERR
            clock skew detected on mon.mon-0, mon.mon-1
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
            Monitor clock skew detected 
     monmap e1: 3 mons at {mgmt=192.168.1.11:6789/0,mon-0=192.168.1.12:6789/0,mon-1=192.168.1.13:6789/0}
            election epoch 4, quorum 0,1,2 mgmt,mon-0,mon-1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

* The current health state is HEALTH_ERR. Let's build out the OSDs and bring it to OK.
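
* Note that the clock skew warning will not clear just by adding OSDs - the monitor clocks need to be synchronized. A minimal sketch, assuming the stock ntp package is acceptable (run on mgmt, mon-0 and mon-1):

[root@mon-0 ~]# yum install -y ntp
[root@mon-0 ~]# systemctl enable ntpd && systemctl start ntpd
[root@mon-0 ~]# ntpq -p        # confirm a reachable time source, then repeat on the other mon hosts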


[ CEPH Install - OSD Server Setup ]

* The disks on an osd server look like the following.
  The disk ceph will use is sda, a 16TB RAID volume, and all 4 osd servers are configured for ceph this way.

[root@osd-0 ~]# parted -l
Error: /dev/sda: unrecognised disk label
Model: LSI MR9261-8i (scsi)                                               
Disk /dev/sda: 16.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 

Model: ATA P3-128 (scsi)
Disk /dev/sdb: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  128GB  128GB  primary  xfs          boot

* Use the ceph-deploy disk zap command to write a GPT partition table to the target disks on the osd servers.
* A word of caution: with the LSI MR9261-8i RAID card, a long-standing bug can reshuffle the PCI ordering at will, and after a reboot a device may change names or not show up at all.
  Check each server's disk devices before running the command below (a quick check is sketched next); otherwise you can end up wiping the OS disk and reinstalling..... ㅠㅠ
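
* A minimal pre-check along those lines (assumption: the 16TB RAID volume is the only disk of that size, so size is the safest way to identify it):

[root@osd-0 ~]# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # the 16TB device with no mountpoint is the ceph target
[root@osd-0 ~]# ls -l /dev/disk/by-path/              # stable paths help when sda/sdb ordering shifts after a reboot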

[root@mgmt ~]# ceph-deploy disk zap osd-0:sda osd-1:sda osd-2:sda osd-3:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy disk zap osd-0:sda osd-1:sda osd-2:sda osd-3:sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1649368>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x163e398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('osd-0', '/dev/sda', None), ('osd-1', '/dev/sda', None), ('osd-2', '/dev/sda', None), ('osd-3', '/dev/sda', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sda on osd-0
.
.
.
[osd-3][DEBUG ] Creating new GPT entries.
[osd-3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[osd-3][DEBUG ] other utilities.
[osd-3][DEBUG ] Creating new GPT entries.
[osd-3][DEBUG ] The operation has completed successfully.

* Checking on osd-0, the disk has been initialized to gpt as expected.

[root@osd-0 ~]# parted -l
Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 16.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags


* The next commands format the disks and bring up the ceph-osd daemons (prepare, then activate).

[root@mgmt ~]# ceph-deploy osd prepare osd-0:sda osd-1:sda osd-2:sda osd-3:sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare osd-0:sda osd-1:sda osd-2:sda osd-3:sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('osd-0', '/dev/sda', None), ('osd-1', '/dev/sda', None), ('osd-2', '/dev/sda', None), ('osd-3', '/dev/sda', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x24805a8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x2474320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks osd-0:/dev/sda: osd-1:/dev/sda: osd-2:/dev/sda: osd-3:/dev/sda:
.
.
.
[osd-3][INFO  ] checking OSD status...
[osd-3][DEBUG ] find the location of an executable
[osd-3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host osd-3 is now ready for osd use.

* Check the format result on the osd server. A journal partition and a data partition were created, and the disk is formatted correctly.

[root@osd-0 ~]# parted -l
Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 16.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  10.7GB  10.7GB               ceph journal
 1      10.7GB  16.0TB  16.0TB  xfs          ceph data


* Start the ceph-osd daemons and register them with systemd.

[root@mgmt ~]# ceph-deploy osd activate osd-0:sda1 osd-1:sda1 osd-2:sda1 osd-3:sda1

* Failures can occur during the ceph activate step (link).
* Check on the osd servers that the daemons came up properly.

[root@osd-0 ~]# netstat -nlpt | grep ceph
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address     State       PID/Program name    
tcp        0      0 0.0.0.0:6800     0.0.0.0:*           LISTEN      1382/ceph-osd       
tcp        0      0 0.0.0.0:6801     0.0.0.0:*           LISTEN      1382/ceph-osd       
tcp        0      0 0.0.0.0:6802     0.0.0.0:*           LISTEN      1382/ceph-osd       
tcp        0      0 0.0.0.0:6803     0.0.0.0:*           LISTEN      1382/ceph-osd       
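
* From mgmt you can also confirm that all four OSDs joined the cluster (a quick check; expect "4 up, 4 in", matching the final status further below):

[root@mgmt ~]# ceph osd stat
[root@mgmt ~]# ceph osd tree        # shows each osd host and its weight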


* Create the ceph-mds daemon.

[root@mgmt ~]# ceph-deploy mds create mgmt
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mds create mgmt
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x15de7e8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x15c6a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('mgmt', 'mgmt')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts mgmt:mgmt
[mgmt][DEBUG ] connected to host: mgmt 
[mgmt][DEBUG ] detect platform information from remote host
[mgmt][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to mgmt
[mgmt][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mgmt][DEBUG ] create path if it doesn't exist
[mgmt][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mgmt osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mgmt/keyring
[mgmt][INFO  ] Running command: systemctl enable ceph-mds@mgmt
[mgmt][INFO  ] Running command: systemctl start ceph-mds@mgmt
[mgmt][INFO  ] Running command: systemctl enable ceph.target

[root@mgmt ~]# netstat -nlpt | grep ceph
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address     Foreign Address     State       PID/Program name    
tcp        0      0 0.0.0.0:6800      0.0.0.0:*           LISTEN      16122/ceph-mds 

tcp        0      0 192.168.1.11:6789  0.0.0.0:*           LISTEN      14772/ceph-mon 
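
* A quick way to check the mds from mgmt (a side note: until a cephfs filesystem is created, the daemon just waits in standby):

[root@mgmt ~]# ceph mds stat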


* Distribute the configured ceph.conf and ceph.client.admin.keyring files to each server.

[root@mgmt ~]# ceph-deploy admin mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy admin mgmt mon-0 mon-1 osd-0 osd-1 osd-2 osd-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x119dfc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['mgmt', 'mon-0', 'mon-1', 'osd-0', 'osd-1', 'osd-2', 'osd-3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x10f7cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to mgmt
[mgmt][DEBUG ] connected to host: mgmt 
[mgmt][DEBUG ] detect platform information from remote host
[mgmt][DEBUG ] detect machine type
[mgmt][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
.
.
.
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd-3
[osd-3][DEBUG ] connected to host: osd-3 
[osd-3][DEBUG ] detect platform information from remote host
[osd-3][DEBUG ] detect machine type
[osd-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf


* On each server, fix the permissions on the ceph.client.admin.keyring file.
* Change 600 => 644 (a one-shot loop for the remaining hosts is sketched below).

[root@mgmt ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
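
* The command above only covers mgmt; one way to apply it to the remaining hosts in one shot (a sketch, reusing the passwordless SSH already set up for ceph-deploy):

[root@mgmt ~]# for h in mon-0 mon-1 osd-0 osd-1 osd-2 osd-3; do ssh $h chmod +r /etc/ceph/ceph.client.admin.keyring; done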


* The ceph setup is complete; check the current cluster status.

[root@mgmt ~]# ceph -s
    cluster 0651cfa0-8ce4-4fbd-b35b-966e3d262851
     health HEALTH_OK
     monmap e1: 3 mons at {mgmt=192.168.1.11:6789/0,mon-0=192.168.1.12:6789/0,mon-1=192.168.1.13:6789/0}
            election epoch 12, quorum 0,1,2 mgmt,mon-0,mon-1
     osdmap e21: 4 osds: 4 up, 4 in
            flags sortbitwise,require_jewel_osds
      pgmap v49: 64 pgs, 1 pools, 0 bytes data, 0 objects
            141 MB used, 59556 GB / 59556 GB avail
                  64 active+clean

* That completes the entire basic installation and configuration of ceph.
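
* As an optional smoke test (a sketch using the default rbd pool visible in the pgmap output above; the image name test-img is arbitrary):

[root@mgmt ~]# rbd create test-img --size 1024      # 1 GB image; rbd_default_features = 1 keeps it usable by kernel clients
[root@mgmt ~]# rbd ls
[root@mgmt ~]# rbd rm test-img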
