

    Migrating CloudStack KVM virtual machines to OpenStack

    2022-05-04 23:54:00

    1 Upgrading the Ceph cluster's OSD network (cluster network)

    1.1 Production environment

    Operating system: Ubuntu 14.04

    Ceph version: Jewel (10.2.11)

    Deployment method: deployed with ceph-deploy.

    5 monitor nodes, 11 OSD nodes
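
    Before touching the network configuration, it helps to record the current cluster state, for example that all 5 monitors are in quorum and the OSDs on all 11 hosts are up/in:

    ceph -s          # overall health and mon/osd counts
    ceph mon stat    # monitor quorum
    ceph osd tree    # OSD distribution across the OSD hosts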

    1.2 OSD network upgrade

    Edit /etc/ceph/ceph.conf on every Ceph node (mon/osd/client nodes):

     vim /etc/ceph/ceph.conf
     [global]
     ......
     public network = 10.78.0.0/16
     cluster network = 10.100.4.0/24
     .......

    Restart all Ceph services on the mon and OSD nodes:

    restart ceph-all
    # or, per service group:
    restart ceph-mon-all
    restart ceph-osd-all
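
    Once the services are back, the OSD map should show the new back-side (cluster) addresses; a quick check:

    # the second address on each osd line is the cluster address and should fall in 10.100.4.0/24
    ceph osd dump | grep "^osd\."
    ceph -s    # should return to HEALTH_OK once peering settles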

    2 Deploying OpenStack against the production Ceph cluster

    2.1 OpenStack system environment

    Operating system: CentOS 7.4

    OpenStack version: Queens

    Deployment method: Kolla, with an external Ceph cluster

    Note: the OpenStack controller and compute nodes act as Ceph clients.

    2.2 Configure the yum repositories (controller/compute nodes)

    cd /etc/yum.repos.d/
    rm  -rf * 
    
    vim CentOS-Base.repo
    [base]
    name=CentOS-$releasever - Base
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
    #released updates
    [updates]
    name=CentOS-$releasever - Updates
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
    #additional packages that may be useful
    [extras]
    name=CentOS-$releasever - Extras
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
    #additional packages that extend functionality of existing packages
    [centosplus]
    name=CentOS-$releasever - Plus
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
    vim epel.repo
    [epel]
    name=Extra Packages for Enterprise Linux 7 - $basearch
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
    #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
    failovermethod=priority
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    
    [epel-debuginfo]
    name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
    #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
    failovermethod=priority
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    gpgcheck=1
    
    [epel-source]
    name=Extra Packages for Enterprise Linux 7 - $basearch - Source
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
    #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
    failovermethod=priority
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    gpgcheck=1
    
    vim  ceph.repo 
    [ceph]
    name=ceph
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    enabled=1
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
    gpgcheck=0
    enabled=1
    [ceph-source]
    name=cephsource
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    enabled=1
    [ceph-radosgw]
    name=cephradosgw
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    enabled=1
    
    yum clean all 
    yum makecache fast

    2.3 Install the Ceph client

    On the ceph-deploy node:

    vim /etc/hosts
    # controller
    10.78.0.11 controller1
    10.78.0.12 controller2 
    10.78.0.13 controller3
    # compute
    10.78.0.14 compute01
    .....
    ssh-copy-id controller1
    ssh-copy-id controller2 
    ssh-copy-id controller3 
    
    ssh-copy-id compute01
    
    ceph-deploy install controller1 controller2  controller3 compute01
    ceph-deploy admin controller1 controller2  controller3  compute01 
    # verify
    ceph -s

    2.4 Deploy the OpenStack cluster (3 controllers, 1 compute)

    2.4.1 Environment preparation

    # All nodes: disable the firewall and SELinux
    systemctl stop firewalld.service
    systemctl disable firewalld.service
    
    vim /etc/selinux/config
    SELINUX=disabled
    # Set the hostname
    hostnamectl set-hostname $HOSTNAME
    
    # Disable NetworkManager
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    reboot

    2.4.2 Deployment host setup

    # pip setup
    yum install python2-pip
    
    cat << EOF > /etc/pip.conf
    [global]
    index-url = http://mirrors.aliyun.com/pypi/simple/
    [install]
    trusted-host=mirrors.aliyun.com
    EOF
    pip install -U pip
    yum install -y python-devel libffi-devel gcc openssl-devel libselinux-python git
    pip install -U 'ansible>=2.2.0'
    
    # docker-ce
    yum remove docker docker-common docker-selinux docker-engine
    yum install -y yum-utils device-mapper-persistent-data lvm2
    wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
    sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
    
    yum clean all 
    yum makecache fast  
    
    yum install docker-ce
    # Configure a Docker registry mirror
    mkdir /etc/docker/
    cat << EOF > /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://iby0an85.mirror.aliyuncs.com"]
    }
    EOF
    
    systemctl daemon-reload
    systemctl start docker    # the service is named "docker" even when installed from the docker-ce package
    
    # Set up a local docker registry (the images have already been built)
    docker run -d -v /opt/registry/:/var/lib/registry/ -p 4000:5000 --name=registry registry

    2.4.3 Install kolla-ansible

    # kolla-ansible has already been uploaded to /root/ on the deployment host
    cd kolla-ansible
    pip install -r test-requirements.txt -r requirements.txt
    python setup.py install
    cp -rv ./etc/kolla/ /etc/
    mkdir /etc/kolla/config
    kolla-genpwd
    
    vim /etc/kolla/passwords.yml 
    keystone_admin_password: otvcloud

    2.4.4 Create the Ceph pools

    ceph osd pool create volumes 128
    ceph osd pool set volumes size 3
    ceph osd pool create vms 128
    ceph osd pool set vms size 3
    ceph osd pool create images 64
    ceph osd pool set images size 3
    ceph osd pool create backups 64
    ceph osd pool set backups size 3
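
    A quick check that the pools exist with the intended replica count:

    ceph osd lspools
    ceph osd pool get volumes size    # expect: size: 3
    ceph df                           # per-pool capacity overview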

    2.4.5 Enable external Ceph

    vim /etc/kolla/globals.yml
    enable_ceph: "no"
    glance_backend_ceph: "yes"
    cinder_backend_ceph: "yes"
    cinder_backup_driver: "ceph"
    nova_backend_ceph: "yes"

    2.4.6 Configure the RBD storage backend for Glance

    mkdir -p /etc/kolla/config/{glance,cinder/{cinder-volume,cinder-backup},nova}
    
    vim /etc/kolla/config/glance/glance-api.conf 
    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    
    # Copy the Ceph cluster configuration file (/etc/ceph/ceph.conf) to /etc/kolla/config/glance/ceph.conf
    cp /etc/ceph/ceph.conf /etc/kolla/config/glance/ceph.conf
    
    # Generate ceph.client.glance.keyring and place it in the /etc/kolla/config/glance directory
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o ceph.client.glance.keyring
    
    cp ceph.client.glance.keyring  /etc/kolla/config/glance/

    2.4.7 Configure the RBD storage backend for Cinder

    vim /etc/kolla/config/cinder/cinder-volume.conf
    [DEFAULT]
    enabled_backends=rbd-1
    
    [rbd-1]
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    backend_host = rbd:volumes
    rbd_pool = volumes
    volume_backend_name = rbd-1
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
    
    
    vim /etc/kolla/config/cinder/cinder-backup.conf
    [DEFAULT]
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool = backups
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
    
    # Copy the Ceph configuration file (/etc/ceph/ceph.conf) to /etc/kolla/config/cinder/ceph.conf
    cp /etc/ceph/ceph.conf /etc/kolla/config/cinder
    
    # Generate ceph.client.cinder.keyring
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o ceph.client.cinder.keyring
    
    # Generate ceph.client.cinder-backup.keyring
    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' -o ceph.client.cinder-backup.keyring
    # Copy ceph.client.cinder-backup.keyring and ceph.client.cinder.keyring to /etc/kolla/config/cinder/cinder-backup/
    cp ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
    cp ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
    # Copy ceph.client.cinder.keyring to /etc/kolla/config/cinder/cinder-volume/
    cp ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
    
    # Note: cinder-backup needs both keyrings so it can reach the volumes and backups pools

    2.4.8 Configure the RBD storage backend for Nova

    vim /etc/kolla/config/nova/nova-compute.conf
    [libvirt]
    images_rbd_pool = vms
    images_type = rbd
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = nova
    
    # Generate ceph.client.nova.keyring
    ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms' -o ceph.client.nova.keyring
    
    # Copy ceph.conf, ceph.client.nova.keyring and ceph.client.cinder.keyring to /etc/kolla/config/nova
    cp /etc/ceph/ceph.conf /etc/kolla/config/nova
    cp ceph.client.nova.keyring /etc/kolla/config/nova/
    cp ceph.client.cinder.keyring /etc/kolla/config/nova

    2.4.9 Edit the inventory file

    vim /etc/kolla/globals.yml
    # copy the prepared globals.yml over after connecting to the VPN
    
    cp /root/kolla-ansible/ansible/inventory/multinode /root/
    
    vim multinode
    # adjust the host groups to match the actual environment (copying the prepared file is enough); see the sketch below
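
    For reference, the host groups in multinode would look roughly like this for the hosts used in this document (controller1-3 and compute01); the exact set of groups depends on the kolla-ansible release, so treat this only as a sketch:

    [control]
    controller1
    controller2
    controller3
    
    [network]
    controller1
    controller2
    controller3
    
    [compute]
    compute01
    
    [monitoring]
    controller1
    
    [storage]
    compute01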

    2.4.10 Deploy OpenStack

    When Kolla uses an external Ceph cluster there are no dedicated storage nodes, but by default cinder-volume and cinder-backup run on storage nodes, so with external Ceph you must explicitly pick hosts to run the cinder-volume and cinder-backup containers.

    vim multinode 
    [storage]
    compute01 
    
    kolla-ansible -i /root/multinode bootstrap-servers
    kolla-ansible -i /root/multinode prechecks
    kolla-ansible -i /root/multinode deploy  
    
    kolla-ansible -i /root/multinode post-deploy
    
    cp /etc/kolla/admin-openrc.sh  /root/
    
    source admin-openrc.sh 
    cd /root/kolla-ansible/tools 
    vim init-runonce
    EXT_NET_CIDR='10.0.2.0/24'                        # external network
    EXT_NET_RANGE='start=10.0.2.150,end=10.0.2.199'   # floating IP range
    EXT_NET_GATEWAY='10.0.2.1'                        # external network gateway
    
    /bin/bash init-runonce
    openstack server create ...
    openstack  network list 
    openstack network agent list 
    openstack compute service list 
    openstack image list

    2.5 Migrating KVM virtual machines stored on Ceph in CloudStack to OpenStack

    # Approach: 1. migrate the KVM virtual machines (boot disks)
    #           2. migrate the data disks attached to those virtual machines
    # Copy each virtual machine's configuration file to controller node 1, then run the script
    # below with the source host name and the target pool (/bin/bash -x ... $hostname volumes)
    # to complete the migration.
    # Test result: migrated CentOS guests respond to ping, but Debian guests do not.
    
    #!/bin/bash 
    # auth:gxw 
    hostname=$1
    rm -rf /data/hosts/$hostname/backup* 
    
    vm_config_list=(`ls /data/hosts/$hostname`)
    new_pool=$2
    
    for vm_config  in ${vm_config_list[@]}
     do
       vm_name=`echo $vm_config | cut -d '.' -f 1`
       echo $vm_name 
       #filter vm-rbd
       vm_rbd=`grep rbd /data/hosts/$hostname/$vm_config | awk '{print $3}' | awk -F '=|/' '{print $3}' | awk -F"'" '{print $1}' | head -1`
    
       vm_rbd_pool=`grep rbd /data/hosts/$hostname/$vm_config | awk '{print $3}' | awk -F "='|/" '{print $2}' | head -1`
       old_pool=$vm_rbd_pool 
       
       vm_rbd_size=`rbd info $old_pool/$vm_rbd | head -2 | tail -1 | awk '{print $2}'`
       vm_rbd_unit=`rbd info $old_pool/$vm_rbd | head -2 | tail -1 | awk '{print $3}'`
       echo $vm_rbd_unit 
       if [ "$vm_rbd_unit"x = "GB"x ]
         then 
    	 vm_rbd_size_GB=$vm_rbd_size
       else 
            if [ $vm_rbd_size -le 1024 ]
               then 
                   #echo "$hostname-$vm_rbd size :$vm_rbd_size less than 1024MB, can't create boot volume! Please change to another method!"
                   #echo "$hostname-$vm_name" >> /root/special_vm
                   vm_rbd_size_GB=1  
            else
         	   vm_rbd_size_GB=`echo $vm_rbd_size/1024 | bc`
            fi 
       fi 
       #echo $vm_rbd $vm_rdb_size_MB $vm_rbd_size 
    
       #export vm_rbd
       backup_vm_rbd=/data/hosts/$hostname/backup.$vm_rbd
    
       rbd export -p $old_pool $vm_rbd $backup_vm_rbd
       
       #create boot start disk
       new_vm_rbd=$hostname-$vm_rbd 
       openstack volume create $new_vm_rbd --size  $vm_rbd_size_GB  --bootable 
       vm_rbd_boot_uuid=`openstack volume list | grep $new_vm_rbd | awk '{print $2}'`
       echo $vm_rbd_boot_uuid 
       rbd rm -p $new_pool volume-$vm_rbd_boot_uuid 
       
       # import vm_rbd
       rbd import -p $new_pool  $backup_vm_rbd volume-$vm_rbd_boot_uuid 
       rm -rf /data/hosts/$hostname/backup* 
    
       #create flavor
       vm_memory_KB=`grep "memory unit" /data/hosts/$hostname/$vm_config | awk -F '>|<' '{print $3}'`
       vm_memory_MB=`echo $vm_memory_KB/1024 | bc`
       vm_vcpus=`grep "vcpu" /data/hosts/$hostname/$vm_config |  tail -1 | awk -F '>|<' '{print $3}'`
       vm_flavor_id=$vm_rbd
       new_vm_flavor_id=$hostname-$vm_flavor_id 
       openstack flavor delete $new_vm_flavor_id 
       #openstack flavor create --id $new_vm_flavor_id --ram $vm_memory_MB --vcpus $vm_vcpus --disk $vm_rbd_size $new_vm_flavor_id
       openstack flavor create --id $new_vm_flavor_id --ram $vm_memory_MB --vcpus $vm_vcpus --disk $vm_rbd_size_GB $new_vm_flavor_id
    
       #create vm 
       new_vm_name=$hostname-$vm_name
       openstack server delete $new_vm_name 
       openstack server create $new_vm_name --volume $vm_rbd_boot_uuid --flavor $new_vm_flavor_id --security-group 40f3bf48-2889-4be2-b763-e823ba13a652 --nic net-id=eb68f477-8bb1-42cc-b3d5-f89775fed16e
       
      #create data disk 
       data_rbd=`grep rbd /data/hosts/$hostname/$vm_config | awk '{print $3}' | awk -F '=|/' '{print $3}' | awk -F"'" '{print $1}' | tail -1`
       echo $data_rbd
       if [ "$data_rbd"x = "$vm_rbd"x ]
          then
               echo "$new_vm_name has no data disk!"
       else
         data_rbd_pool=`grep rbd /data/hosts/$hostname/$vm_config | awk '{print $3}' | awk -F "='|/" '{print $2}' | tail -1`
         old_pool=$data_rbd_pool 
    
         data_rbd_size=`rbd info $old_pool/$data_rbd | head -2 | tail -1 | awk '{print $2}'`
         data_rbd_unit=`rbd info $old_pool/$data_rbd | head -2 | tail -1 | awk '{print $3}'`
       
            #echo $data_rbd_unit
           if [ "$data_rbd_unit"x = "GB"x ]
              then 
                data_rbd_size_GB=$data_rbd_size
           else
              if [ $data_rbd_size -le 1024 ]
                then 
                   data_rbd_size_GB=1
             else
                  data_rbd_size_GB=`echo $data_rbd_size/1024 | bc`
             fi 
           fi
         
         #export  data_rbd
         backup_data_rbd=/data/hosts/$hostname/backup.$data_rbd
         rbd export -p $old_pool $data_rbd $backup_data_rbd
         #create data disk
         new_data_rbd=$hostname-$data_rbd
         openstack volume create $new_data_rbd  --size $data_rbd_size_GB
         data_rbd_uuid=` openstack volume list | grep $new_data_rbd | awk '{print $2}'`
         rbd rm -p $new_pool volume-$data_rbd_uuid 
    
       # import data_rbd
       rbd import -p $new_pool $backup_data_rbd  volume-$data_rbd_uuid
       rm -rf /data/hosts/$hostname/backup*
       
       # attach data_rbd to vm_rbd
        openstack server add volume $new_vm_name  $data_rbd_uuid 
     fi 
       # attach floating ip to virtual server
       openstack floating ip create public1 
       floating_ip=$(openstack floating ip list  | grep None  | head -1 | awk '{print $4}')
       openstack server add floating ip $new_vm_name $floating_ip 
       if [ $? -eq 0 ]
         then 
            rm -rf /data/hosts/$hostname/$vm_config 
       fi 
    done
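
    A usage sketch for the script above, assuming it has been saved as migrate_vm.sh on controller node 1 (the original does not name the file) and the VM configuration files of the source host have been copied to /data/hosts/<hostname>/ beforehand; the host name S06 is only an example:

    source /root/admin-openrc.sh              # OpenStack credentials (see 2.4.10)
    /bin/bash -x migrate_vm.sh S06 volumes    # $1 = source CloudStack host, $2 = target Ceph pool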

    2.5.1 How to recover from a forgotten root password on CentOS 7

    1) Reboot the system and stop at the paused boot (GRUB) screen.

    2) Move the cursor to the end of the line containing LANG=en_US.UTF-8, add a space, then append init=/bin/sh. Note that this must stay on the same line.

    3) Press Ctrl+X to boot; you will land at a shell prompt:

    sh-4.2#

    4) Run the following commands:

     mount -o remount,rw /
     passwd root
    # if SELinux is enabled, also run:
    touch /.autorelabel
    exec /sbin/init    # or: exec /sbin/reboot

    2.6 Adding compute nodes

    1. Environment preparation (section 2.4.1)
    2. Configure the yum repositories (section 2.2)
    3. Install the Ceph client (section 2.3)
    4. Edit the /root/multinode inventory and add the new compute node, then run:
    kolla-ansible -i /root/multinode bootstrap-servers
    kolla-ansible -i /root/multinode prechecks
    kolla-ansible -i /root/multinode deploy  
    
    openstack compute service list | grep nova-compute

    3 Moving the Ceph journal disks

    3.1 Partition planning and creation

    In the production environment, each OSD node gained 4 new 480 GB SSDs, and we want to move the Ceph journals onto these SSDs.

    The 4 SSDs therefore need to be partitioned; how many partitions, and of what size?

    For example, host S06 has 27 OSDs, so the 4 SSDs are split into 27 partitions in total. The split used here is 7 + 7 + 7 + 6 = 27: the first three SSDs each get 7 partitions, the first 6 of which are 66 GB, with the last partition taking the remaining ~84 GB by default. The last SSD gets 6 partitions of 80 GB each.
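
    A sketch of how those partitions could be created with parted on one of the new SSDs. Using an msdos label with an extended partition reproduces the device names assumed by the script below (sdn1, sdn5-sdn10); the boundaries are illustrative and follow the 6 x 66 GB + remainder plan, and the fourth SSD would instead get six 80 GB partitions:

    parted -s /dev/sdn mklabel msdos
    parted -s /dev/sdn mkpart primary  1MiB  66GB     # -> sdn1
    parted -s /dev/sdn mkpart extended 66GB  100%     # container for the logical partitions
    parted -s /dev/sdn mkpart logical  66GB  132GB    # -> sdn5
    parted -s /dev/sdn mkpart logical  132GB 198GB    # -> sdn6
    parted -s /dev/sdn mkpart logical  198GB 264GB    # -> sdn7
    parted -s /dev/sdn mkpart logical  264GB 330GB    # -> sdn8
    parted -s /dev/sdn mkpart logical  330GB 396GB    # -> sdn9
    parted -s /dev/sdn mkpart logical  396GB 100%     # -> sdn10, remaining ~84 GB
    # repeat for /dev/sdo and /dev/sdp; partition /dev/sdq into six 80 GB logical partitions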

    3.2 Migrating the journals to the new disks

    set -e 
    /usr/bin/ceph osd set noout 
    
    PARTUUIDDIR=/dev/disk/by-partuuid
    OSDS=($(lsblk | grep ceph | awk -F'/|-' '{print $NF}'))   # array of OSD ids mounted on this host
    
    #DEVICES=(sdn1 sdn5 sdn6 sdn7 sdn8 sdn9 sdn10 sdo1 sdo5 sdo6 sdo7 sdo8 sdo9 sdo10 sdp1 sdp5 sdp6 sdp7 sdp8 sdp9 sdp10 sdq1 sdq5 sdq6 sdq7 sdq8 sdq9)
    DEVICES=(sdn5 sdn6 sdn7 sdn8 sdn9 sdn10 sdo1 sdo5 sdo6 sdo7 sdo8 sdo9 sdo10 sdp1 sdp5 sdp6 sdp7 sdp8 sdp9 sdp10 sdq1 sdq5 sdq6 sdq7 sdq8 sdq9)
    
    #for i in {1..27}
    for i in {2..27}    # bash arrays are zero-indexed; adjust the range to the entries still to migrate
    do
      DEVICE=${DEVICES[$i]}
      OSD_ID=${OSDS[$i]}
      OSD_Journal=/var/lib/ceph/osd/ceph-$OSD_ID/journal
      UUID=$(uuidgen)
    
      ln -s /dev/$DEVICE  $PARTUUIDDIR/$UUID
    
      stop ceph-osd id=$OSD_ID
      ceph-osd -i $OSD_ID  --flush-journal
      rm $OSD_Journal
      ln -s $PARTUUIDDIR/$UUID $OSD_Journal
      chown ceph:ceph $OSD_Journal
      echo $UUID  > /var/lib/ceph/osd/ceph-$OSD_ID/journal_uuid
      ceph-osd -i $OSD_ID --mkjournal
      restart ceph-osd id=$OSD_ID
    done
    ceph osd unset noout

     
