Cloud-Native Storage: Rook

I. Rook Deployment

StorageClass: a storage class created by the Kubernetes administrator for dynamic PV management. It can point at different backend storage systems, such as Ceph or GlusterFS. Storage requests are then directed at the StorageClass, which creates and deletes PVs automatically.
Implementation approaches:

in-tree: built into the Kubernetes core code; supporting a storage backend means writing the corresponding code inside Kubernetes itself.
out-of-tree: the storage vendor provides a driver (CSI or FlexVolume) that is installed into the cluster; the StorageClass then only needs to reference that driver, and the driver manages the storage on behalf of the StorageClass.

StorageClass documentation: https://kubernetes.io/docs/concepts/storage/storage-classes/
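As a minimal sketch (all names below are hypothetical), a StorageClass simply names the provisioner, i.e. the driver that will create and delete PVs on demand for PVCs that reference this class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                     # hypothetical name
provisioner: example.csi.vendor.com    # the CSI driver that provisions volumes for this class
reclaimPolicy: Delete                  # delete the backing volume when the PVC is deleted
allowVolumeExpansion: true             # allow PVCs of this class to be resized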

1. Rook Concepts

Rook is a self-managing distributed storage orchestration system. It is not a storage system itself; instead it acts as a bridge between storage systems and Kubernetes, making it much simpler to deploy and maintain a storage system. Rook turns a distributed storage system into a self-managing, self-scaling, self-healing storage service, automating operations such as deployment, configuration, scaling, upgrades, migration, disaster recovery, monitoring, and resource management. Rook also supports CSI, which can be used for PVC snapshots, expansion, cloning, and similar operations.
Official site: https://rook.io/

2. Installing Rook

1. Download Rook (v1.6.3 at the time of writing):

git clone --single-branch --branch v1.6.3 https://github.com/rook/rook.git

2. Adjust the configuration
Change the Rook CSI image addresses. The defaults point at gcr.io images, which cannot be pulled from mainland China, so the gcr images need to be synced to an Alibaba Cloud (or other reachable) registry.
Add the following around line 75 of operator.yaml; mind the indentation:

vim operator.yaml
ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar:v2.0.1"
ROOK_CSI_RESIZER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer:v1.0.1"
ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4"
ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter:v4.0.0"
ROOK_CSI_ATTACHER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher:v3.0.2"

If you use a different Rook version, you will need to sync the images yourself; articles describing the procedure are easy to find, for example https://blog.csdn.net/sinat_35543900/article/details/103290782
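If you do need to sync the images yourself, the usual pattern (sketched below; the target registry is only a placeholder) is to pull each image from a reachable mirror, retag it for your own registry, and push it:

# Sketch only: mirror one CSI image into your own registry (replace the target path)
docker pull registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4
docker tag registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4 harbor.example.com/rook/csi-provisioner:v2.0.4
docker push harbor.example.com/rook/csi-provisioner:v2.0.4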
Still in the operator file: newer Rook versions disable deployment of the discovery containers by default, so find ROOK_ENABLE_DISCOVERY_DAEMON and set it to true:
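In the v1.6 operator.yaml this setting sits in the rook-ceph-operator-config ConfigMap; after the change the entry should look roughly like this:

# excerpt from the rook-ceph-operator-config ConfigMap in operator.yaml
data:
  ROOK_ENABLE_DISCOVERY_DAEMON: "true"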

3. Deploy

cd cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

Wait for the operator and discover pods to start:

[root@k8s-master01 ceph]# kubectl -n rook-ceph get pod
NAME                                                     READY   STATUS      RESTARTS   AGE
rook-ceph-operator-7c7d8846f4-fsv9f                      1/1     Running     0          25h
rook-discover-qw2ln                                      1/1     Running     0          28h
rook-discover-wf8t7                                      1/1     Running     0          28h
rook-discover-z6dhq                                      1/1     Running     0          28h
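Instead of polling with kubectl get pod, you can optionally block until the operator reports Ready (the label below is the one Rook normally puts on the operator pod; adjust it if yours differs):

kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s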

II. Building a Ceph Cluster with Rook

1. Modify the configuration file

Edit cluster.yaml: configure the image (sync it to a private registry first), the local data path, the mon count (at least 3), the mgr count (at least 2 in production), and so on. Set both useAllNodes and useAllDevices to false, then list the nodes and devices explicitly:

[root@k8s-master01 ceph]# vim cluster.yaml
...
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
....
# nodes below will be used as storage resources.  Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
    - name: "k8s-master03"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
    - name: "k8s-node01"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
    - name: "k8s-node02"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
...

2. Create the cluster

[root@k8s-master01 ceph]# kubectl create -f cluster.yaml

[root@k8s-master01 ceph]# kubectl get pod -n rook-ceph    # check the newly created pods; this took over an hour here (pre-pull the images into a private registry if you can, the downloads are very slow)
NAME                                                     READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-c2g9n                                   3/3     Running     0          89m
csi-cephfsplugin-c6kt2                                   3/3     Running     0          89m
csi-cephfsplugin-dgc2k                                   3/3     Running     0          89m
csi-cephfsplugin-provisioner-846ffc6cb4-2949b            6/6     Running     2          89m
csi-cephfsplugin-provisioner-846ffc6cb4-zzwvn            6/6     Running     5          89m
csi-cephfsplugin-rxvlh                                   3/3     Running     0          89m
csi-cephfsplugin-zst4j                                   3/3     Running     0          89m
csi-rbdplugin-5f8df                                      3/3     Running     0          89m
csi-rbdplugin-5wgw4                                      3/3     Running     0          89m
csi-rbdplugin-7c4lc                                      3/3     Running     0          89m
csi-rbdplugin-c7g9w                                      3/3     Running     0          89m
csi-rbdplugin-provisioner-75fd5c779f-2264d               6/6     Running     0          89m
csi-rbdplugin-provisioner-75fd5c779f-qlhtz               6/6     Running     0          89m
csi-rbdplugin-xn4cd                                      3/3     Running     0          89m
rook-ceph-crashcollector-k8s-master03-6f8447cdff-rj7pc   1/1     Running     0          32m
rook-ceph-crashcollector-k8s-node01-747795874c-d44c8     1/1     Running     0          32m
rook-ceph-crashcollector-k8s-node02-5d4867cfb8-qrpb8     1/1     Running     0          32m
rook-ceph-mgr-a-645f487fbb-27g5v                         1/1     Running     0          32m
rook-ceph-mon-a-5f6488c96c-bgj98                         1/1     Running     0          89m
rook-ceph-mon-b-cb5c9c669-kjdwg                          1/1     Running     0          69m
rook-ceph-mon-c-59bf47bb86-vx8hj                         1/1     Running     0          57m
rook-ceph-operator-65965c66b5-2z6nf                      1/1     Running     0          120m
rook-ceph-osd-0-5fdd49695b-5srzl                         1/1     Running     0          32m
rook-ceph-osd-1-5db9d74fb5-pkthf                         1/1     Running     0          32m
rook-ceph-osd-2-756ccbbcbd-mgmrj                         1/1     Running     0          32m
rook-ceph-osd-prepare-k8s-master03-k5g89                 0/1     Completed   0          32m
rook-ceph-osd-prepare-k8s-node01-fkks9                   0/1     Completed   0          32m
rook-ceph-osd-prepare-k8s-node02-jf985                   0/1     Completed   0          32m
rook-discover-5bttf                                      1/1     Running     0          117m
rook-discover-ghxxf                                      1/1     Running     1          117m
rook-discover-kf74h                                      1/1     Running     0          117m
rook-discover-q6ss9                                      1/1     Running     0          117m
rook-discover-znjml                                      1/1     Running     1          117m

3. Install the Ceph client tools (toolbox)

[root@k8s-master01 ceph]# kubectl  create -f toolbox.yaml -n rook-ceph
deployment.apps/rook-ceph-tools created

[root@k8s-master01 ceph]# kubectl get pod -n rook-ceph | grep tool
rook-ceph-tools-fc5f9586c-g5q5d                          1/1     Running     0          3m31s

Exec into the toolbox pod and use the ceph commands to check the cluster status:

[root@k8s-master01 ceph]# kubectl exec -it rook-ceph-tools-fc5f9586c-g5q5d -n rook-ceph -- bash 
[root@rook-ceph-tools-fc5f9586c-g5q5d /]# ceph status
  cluster:
    id:     37055851-a453-4a79-a032-4f8a94099896
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum a,c,b (age 61m)
    mgr: a(active, since 61m)
    osd: 3 osds: 3 up (since 61m), 3 in (since 61m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
 
[root@rook-ceph-tools-fc5f9586c-g5q5d /]# ceph osd status
ID  HOST           USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  k8s-node01    1026M  18.9G      0        0       0        0   exists,up  
 1  k8s-node02    1026M  18.9G      0        0       0        0   exists,up  
 2  k8s-master03  1026M  18.9G      0        0       0        0   exists,up  

[root@rook-ceph-tools-fc5f9586c-g5q5d /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    60 GiB  57 GiB  6.2 MiB   3.0 GiB       5.01
TOTAL  60 GiB  57 GiB  6.2 MiB   3.0 GiB       5.01
--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1    1     0 B        0   0 B      0     18 GiB

4. Ceph Dashboard

The Ceph dashboard is enabled by default. Since no Ingress is installed in this test environment, create a NodePort Service to expose it:

[root@k8s-master01 ceph]# kubectl get svc -n rook-ceph  | grep dashboard
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr-dashboard    ClusterIP   10.107.189.100   <none>        7000/TCP            74m    # created by default; can also be exposed through an Ingress

Create the Service:

[root@k8s-master01 ceph]# vim dashboard-np.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  name: rook-ceph-mgr-dashboard-np
  namespace: rook-ceph
spec:
  ports:
  - name: http-dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort

[root@k8s-master01 ceph]# kubectl create -f dashboard-np.yaml 
service/rook-ceph-mgr-dashboard-np created

[root@k8s-master01 ceph]# kubectl get svc -n rook-ceph  | grep dashboard  # note the assigned NodePort, 32377 here
rook-ceph-mgr-dashboard      ClusterIP   10.107.189.100   <none>        7000/TCP            79m
rook-ceph-mgr-dashboard-np   NodePort    10.109.245.0     <none>        7000:32377/TCP      71s

The dashboard can then be reached on any node IP plus that port, e.g. http://192.168.0.102:32377


Login: the username is admin by default; retrieve the password with:

[root@k8s-master01 ceph]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
oa....a60


Resolving the HEALTH_WARN warning: https://docs.ceph.com/en/octopus/rados/operations/health-checks/

[root@k8s-master01 ceph]# kubectl exec -it rook-ceph-tools-fc5f9586c-g5q5d -n rook-ceph  -- bash
[root@rook-ceph-tools-fc5f9586c-g5q5d /]# ceph config set mon auth_allow_insecure_global_id_reclaim false

III. StorageClass Dynamic Storage: Block Storage

Block storage is typically used by a single Pod mounting its own volume, much like attaching a new disk to a server for just one application.
Reference: https://rook.io/docs/rook/v1.6/ceph-block.html

1. Setup

Docs: https://rook.github.io/docs/rook/v1.6/ceph-pool-crd.html

[root@k8s-master01 ceph]# pwd
/root/rook/cluster/examples/kubernetes/ceph

[root@k8s-master01 ceph]# vim csi/rbd/storageclass.yaml   # the example manifest shipped with Rook
apiVersion: ceph.rook.io/v1
kind: CephBlockPool     # the pool to create
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2   # 2 replicas for this test environment; must not exceed the number of OSDs
    requireSafeReplicaSize: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass   # the StorageClass that connects to the pool
metadata:
  name: rook-ceph-block   # name of the block-storage StorageClass
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph  # name of the Rook cluster, usually the same as its namespace
  pool: replicapool    # name of the pool defined above
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner    # credential secrets created by Rook
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/fstype: ext4  # filesystem type
allowVolumeExpansion: true   # whether volume expansion is allowed
reclaimPolicy: Delete    # reclaim policy

Create the StorageClass and the storage pool:

[root@k8s-master01 ceph]# kubectl  create -f csi/rbd/storageclass.yaml -n rook-ceph
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

Check the created CephBlockPool and StorageClass (StorageClasses are cluster-scoped, not namespaced):

[root@k8s-master01 ceph]# kubectl get cephblockpools.ceph.rook.io -n rook-ceph  # the block-storage pool
NAME          AGE
replicapool   29s

[root@k8s-master01 ceph]# kubectl get sc   # the StorageClass
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   38s

2. Mount test: MySQL Deployment

Create a MySQL service. The manifest below contains a PVC section; the PVC references the StorageClass created above, which dynamically creates a PV that in turn maps to storage in Ceph.
From then on, any PVC only needs its storageClassName set to that StorageClass name to be backed by Rook's Ceph. For a StatefulSet, set the storageClassName inside volumeClaimTemplates instead, and a PVC is created automatically for each Pod.
The MySQL Deployment's volumes section mounts this PVC:

[root@k8s-master01 kubernetes]# pwd
/root/rook/cluster/examples/kubernetes
[root@k8s-master01 kubernetes]# vim mysql.yaml
apiVersion: v1
kind: Service    # the MySQL Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim   # the MySQL PVC
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block     # reference the StorageClass created above to get storage from Rook's Ceph
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql   # mount path
      volumes:
        - name: mysql-persistent-storage  
          persistentVolumeClaim:
            claimName: mysql-pv-claim   # name of the PVC defined above

[root@k8s-master01 kubernetes]# kubectl create -f mysql.yaml 
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created

Because multiple MySQL instances cannot share the same data directory, block storage is generally the only option here; it behaves like a new disk attached just for MySQL.
After creation, check the generated PVC and PV:

[root@k8s-master01 kubernetes]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-65583bf8-33c1-45f5-a99c-351c88014576   2Gi        RWO            rook-ceph-block   9m31s
[root@k8s-master01 kubernetes]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-65583bf8-33c1-45f5-a99c-351c88014576   2Gi        RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            9m33s

At this point the corresponding image can also be seen in the Ceph dashboard.
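Besides the dashboard, the backing RBD image can also be listed from the toolbox pod; the pool name is the CephBlockPool created earlier:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd ls -p replicapool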


The MySQL pod is now running. After testing, delete the resources; back in the Ceph dashboard the image has been removed as well.

[root@k8s-master01 kubernetes]# kubectl get po 
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-mysql-6965fc8cc8-m5vdk   1/1     Running   0          6m52s

[root@k8s-master01 kubernetes]# kubectl delete -f mysql.yaml   
service "wordpress-mysql" deleted
persistentvolumeclaim "mysql-pv-claim" deleted
deployment.apps "wordpress-mysql" deleted

3. Mount test: nginx StatefulSet

A StatefulSet creates several replicas and each Pod needs its own storage, so the block-storage approach of creating a PVC up front does not work: a block-storage PVC can only be mounted by one Pod. Instead, use volumeClaimTemplates, which is specific to StatefulSets and simple to use: reference the storageClassName and a PVC is created automatically for each Pod.

[root@k8s-master01 kubernetes]# vim sts-sc.yaml
apiVersion: v1
kind: Service   #首先创建一个SVC
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx 
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 1Gi

[root@k8s-master01 kubernetes]# kubectl create -f sts-sc.yaml   # create the StatefulSet
service/nginx created
statefulset.apps/web created

[root@k8s-master01 kubernetes]# kubectl get pvc   # the StorageClass has automatically created one sequentially numbered PVC per Pod
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
www-web-0   Bound    pvc-60238e27-31b1-48b6-9e6f-5309b11a0b56   1Gi        RWO            rook-ceph-block   118s
www-web-1   Bound    pvc-cbf20374-5e9d-4bb3-8361-59002878ecdb   1Gi        RWO            rook-ceph-block   84s
www-web-2   Bound    pvc-1390c984-003d-4d38-a0fd-00ceefda01ed   1Gi        RWO            rook-ceph-block   7s

IV. StorageClass Dynamic Storage: Shared Filesystem

A shared filesystem is typically used when multiple Pods share the same storage.
Official docs: https://rook.io/docs/rook/v1.6/ceph-filesystem.html

1. Setup

[root@k8s-master01 ceph]# vim filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  metadataPool:
    replicated:
      size: 3    
      requireSafeReplicaSize: true
    parameters:
      compression_mode:
        none
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode:
          none
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      podAntiAffinity:  # anti-affinity spreads the MDS pods across nodes for better availability
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              topologyKey: topology.kubernetes.io/zone
    annotations:
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false

[root@k8s-master01 ceph]# kubectl  create -f filesystem.yaml  # create the filesystem
cephfilesystem.ceph.rook.io/myfs created


[root@k8s-master01 ceph]#  kubectl -n rook-ceph get pod -l app=rook-ceph-mds   # check the MDS pods
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-5c8749c777-bhrdh   1/1     Running   0          51s
rook-ceph-mds-myfs-b-8547d89787-jznhf   1/1     Running   0          50s

Then create the StorageClass:

[root@k8s-master01 ceph]# vim csi/cephfs/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # driver name is <operator namespace>.cephfs.csi.ceph.com; see the csidrivers listing below
parameters:
  clusterID: rook-ceph # namespace:cluster
  fsName: myfs
  pool: myfs-data0

  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner   # secrets used to authenticate against the cluster
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster

reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:  

[root@k8s-master01 ceph]# kubectl get csidrivers  # must exist and match the provisioner above
NAME                            ATTACHREQUIRED   PODINFOONMOUNT   MODES        AGE
rook-ceph.cephfs.csi.ceph.com   true             false            Persistent   3d23h
rook-ceph.rbd.csi.ceph.com      true             false            Persistent   3d23h

[root@k8s-master01 ceph]# kubectl create -f csi/cephfs/storageclass.yaml   # create the StorageClass
storageclass.storage.k8s.io/rook-cephfs created

[root@k8s-master01 ceph]# kubectl get sc  
NAME          PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   27s

2. Mount test: nginx

Create an nginx Service, then a PVC, then an nginx Deployment, all in one manifest:

[root@k8s-master01 cephfs]# vim nginx-file.yaml
apiVersion: v1
kind: Service   # the Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
  type: ClusterIP
---
kind: PersistentVolumeClaim  # the PVC
apiVersion: v1
metadata:
  name: nginx-share-pvc  # custom PVC name
spec:
  storageClassName: rook-cephfs  # name of the StorageClass created earlier
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment   # nginx Deployment for the mount test
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html    # the shared directory
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc   # name of the PVC defined above

Create the resources and check them:

[root@k8s-master01 cephfs]# kubectl create -f nginx-file.yaml
service/nginx created
persistentvolumeclaim/nginx-share-pvc created
deployment.apps/web created

[root@k8s-master01 cephfs]# kubectl get pvc 
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-share-pvc   Bound    pvc-06c21214-79ff-4bde-8b18-bd12414963b2   1Gi        RWX            rook-cephfs    6s

[root@k8s-master01 cephfs]# kubectl get pv   # the automatically created PV
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-06c21214-79ff-4bde-8b18-bd12414963b2   1Gi        RWX            Delete           Bound    default/nginx-share-pvc   rook-cephfs             33s


[root@k8s-master01 cephfs]# kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   20d
nginx        ClusterIP   10.102.186.144   <none>        80/TCP    12s

[root@k8s-master01 cephfs]# kubectl get pod   # the pods are up
NAME                   READY   STATUS    RESTARTS   AGE
web-7bf54cbc8d-cr767   1/1     Running   0          112s
web-7bf54cbc8d-hjx6n   1/1     Running   0          112s
web-7bf54cbc8d-xnb2h   1/1     Running   0          112s

Exec into one of the pods, modify index.html, and check whether the storage is shared:

[root@k8s-master01 cephfs]# kubectl exec -it web-7bf54cbc8d-cr767 -- bash
root@web-7bf54cbc8d-cr767:/usr/share/nginx/html# echo "123 mutouren" > index.html   # write data from inside the container

[root@k8s-master01 cephfs]# curl 10.102.186.144  # repeated curls against the Service return the same content
123 mutouren
[root@k8s-master01 cephfs]# curl 10.102.186.144
123 mutouren
[root@k8s-master01 cephfs]# curl 10.102.186.144
123 mutouren
[root@k8s-master01 cephfs]# curl 10.102.186.144
123 mutouren
[root@k8s-master01 cephfs]# kubectl exec web-7bf54cbc8d-xnb2h -- cat /usr/share/nginx/html/index.html    # the file also exists in another pod
123 mutouren

V. PVC Expansion

Expanding a shared-filesystem PVC requires Kubernetes 1.15+.
Expanding a block-storage PVC requires Kubernetes 1.16+.
PVC expansion also requires the ExpandCSIVolumes feature gate. Recent Kubernetes versions enable it by default; check whether yours does:

[root@k8s-master01 ~]# kubectl exec kube-apiserver-k8s-master01 -n kube-system -- kube-apiserver -h |grep ExpandCSIVolumes
                                                     ExpandCSIVolumes=true|false (BETA - default=true)

If the default is true nothing needs to be done; if it is false, the feature gate has to be enabled.
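If it does show default=false on your version, the gate can be enabled by adding a feature-gates flag to the kube-apiserver (and to the kubelet and controller-manager if required). For a kubeadm-style static-pod apiserver that is roughly the following; the file path and surrounding flags depend on your installation:

# /etc/kubernetes/manifests/kube-apiserver.yaml -- add to the kube-apiserver command args
- --feature-gates=ExpandCSIVolumes=true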

Block storage and the shared filesystem are expanded the same way; block storage is used as the example here.
Capacity before expansion:

[root@k8s-master01 kubernetes]# kubectl exec wordpress-mysql-6965fc8cc8-ph6nn  -- df -h | grep mysql
/dev/rbd0                 20G  160M   20G   1% /var/lib/mysql

To expand, simply kubectl edit the PVC and change the size to 25Gi. The PVC does not show the new size immediately, but the PV is expanded right away and the image in the Ceph dashboard also shows the new size; the PVC and the filesystem inside the Pod lag behind, and after roughly 5-10 minutes the expansion completes:

[root@k8s-master01 kubernetes]# kubectl edit pvc mysql-pv-claim 
persistentvolumeclaim/mysql-pv-claim edited
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi  # changed from 20Gi to 25Gi
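The same resize can also be requested non-interactively with kubectl patch instead of kubectl edit:

kubectl patch pvc mysql-pv-claim -p '{"spec":{"resources":{"requests":{"storage":"25Gi"}}}}'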

[root@k8s-master01 kubernetes]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-ef45d834-eaad-40dc-a6fa-83b567182924   25Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            4m26s


A few minutes later the PVC shows the new capacity:

[root@k8s-master01 kubernetes]# kubectl get pvc  
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-ef45d834-eaad-40dc-a6fa-83b567182924   25Gi       RWO            rook-ceph-block   

[root@k8s-master01 kubernetes]# kubectl exec wordpress-mysql-6965fc8cc8-ph6nn  -- df -h | grep mysql 
/dev/rbd0                 25G  160M   25G   1% /var/lib/mysql

VI. PVC Snapshots and Restore

1. Creating a PVC snapshot

PVC snapshots require Kubernetes 1.17+.
Block storage is used as the example.
First create the VolumeSnapshotClass:

[root@k8s-master01 rbd]# pwd
/root/rook/cluster/examples/kubernetes/ceph/csi/rbd
[root@k8s-master01 rbd]# kubectl create -f snapshotclass.yaml 
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created

Before taking the snapshot, write some data inside the container:

[root@k8s-master01 rbd]# kubectl exec -it wordpress-mysql-6965fc8cc8-ph6nn -- bash
root@wordpress-mysql-6965fc8cc8-ph6nn:/# df -h
/dev/rbd0                 25G  160M   25G   1% /var/lib/mysql

root@wordpress-mysql-6965fc8cc8-ph6nn:/var/lib/mysql# touch  123.log
root@wordpress-mysql-6965fc8cc8-ph6nn:/var/lib/mysql# echo 123 > 123.log

Then create the snapshot and check it:

[root@k8s-master01 rbd]# vim snapshot.yaml
---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot  # name of the snapshot object
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim  # the PVC to snapshot

[root@k8s-master01 rbd]# kubectl create -f snapshot.yaml 
volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created

[root@k8s-master01 rbd]# kubectl get volumesnapshot  # created successfully
NAME               READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         mysql-pv-claim                           25Gi          csi-rbdplugin-snapclass   snapcontent-e4412f69-eba4-48db-8a55-4a9ad65c1d62   16s            17s

The snapshot can also be viewed in the dashboard.
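The bound VolumeSnapshotContent object backing the snapshot can also be inspected from the CLI:

kubectl get volumesnapshotcontent
kubectl describe volumesnapshot rbd-pvc-snapshot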

2. Restoring from a PVC snapshot

To create a PVC containing the data from a particular point in time, restore it from a snapshot:

[root@k8s-master01 rbd]# cat pvc-restore.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore  # name of the new PVC restored from the snapshot
spec:
  storageClassName: rook-ceph-block    # StorageClass of the new PVC; must match the source PVC's
  dataSource:
    name: rbd-pvc-snapshot  # name of the source VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi # must not be smaller than the original PVC

[root@k8s-master01 rbd]# kubectl create -f pvc-restore.yaml 
persistentvolumeclaim/rbd-pvc-restore created

[root@k8s-master01 rbd]# kubectl get pvc  # check the PVCs
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim    Bound    pvc-ef45d834-eaad-40dc-a6fa-83b567182924   25Gi       RWO            rook-ceph-block   
rbd-pvc-restore   Bound    pvc-b7be06bd-fa78-415e-a153-8167ecd1ecf0   25Gi       RWO            rook-ceph-block   5s

Next, create a Pod that mounts the restored PVC and verify the recovered data:

[root@k8s-master01 kubernetes]# vim mysql-restore.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: check-mysql
spec:
  selector:
    matchLabels:
      app: check
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: check
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: check-mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: check-mysql-persistent-storage
          persistentVolumeClaim:
            claimName: rbd-pvc-restore   # use the restored PVC

[root@k8s-master01 kubernetes]# kubectl create -f mysql-restore.yaml   # create it
deployment.apps/check-mysql created
[root@k8s-master01 kubernetes]# kubectl exec -it check-mysql-656b9f555b-dfzsb -- bash
root@check-mysql-656b9f555b-dfzsb:/# cd /var/lib/mysql
root@check-mysql-656b9f555b-dfzsb:/var/lib/mysql# ls  # the files match the state at snapshot time, so data can be recovered from here
123.log  auto.cnf  ib_logfile0	ib_logfile1  ibdata1  lost+found  mysql  performance_schema

VII. PVC Cloning

Cloning a PVC works much like restoring from a snapshot: first create a new PVC from a dataSource, then create a Pod that uses it.
The difference is that a clone uses the PVC currently in use as the dataSource, while a restore uses the backed-up snapshot; the dataSource kind is PersistentVolumeClaim for a clone and VolumeSnapshot for a restore.

[root@k8s-master01 rbd]# cat pvc-clone.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone  # name of the new cloned PVC
spec:
  storageClassName: rook-ceph-block  # StorageClass of the new PVC; must match the source PVC's
  dataSource:
    name: mysql-pv-claim     # the existing PVC to clone
    kind: PersistentVolumeClaim    
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi

[root@k8s-master01 rbd]# kubectl create -f pvc-clone.yaml 
persistentvolumeclaim/rbd-pvc-clone created

[root@k8s-master01 rbd]# kubectl get pvc 
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim    Bound    pvc-ef45d834-eaad-40dc-a6fa-83b567182924   25Gi       RWO            rook-ceph-block   
rbd-pvc-clone     Bound    pvc-74fdc858-8bc3-4235-9255-c7c79a940db0   25Gi       RWO            rook-ceph-block   4s
rbd-pvc-restore   Bound    pvc-b7be06bd-fa78-415e-a153-8167ecd1ecf0   25Gi       RWO            rook-ceph-block   28m

[root@k8s-master01 rbd]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS      REASON   AGE
pvc-74fdc858-8bc3-4235-9255-c7c79a940db0   25Gi       RWO            Delete           Bound    default/rbd-pvc-clone     rook-ceph-block            9s
pvc-b7be06bd-fa78-415e-a153-8167ecd1ecf0   25Gi       RWO            Delete           Bound    default/rbd-pvc-restore   rook-ceph-block            28m
pvc-ef45d834-eaad-40dc-a6fa-83b567182924   25Gi       RWO            Delete           Bound    default/mysql-pv-claim    rook-ceph-block            84m

VIII. Data Cleanup

If Rook is going to stay in use, it is enough to clean up only the Deployments, Pods, and PVCs that were created; the cluster can then be used again directly.

Cleanup steps:
1. First delete the Pods and Deployments (and any other higher-level resources) that mount the PVCs.
2. Then delete the PVCs; after all PVCs created through the Ceph StorageClasses are gone, check that the PVs were removed as well.
3. Then delete the snapshots: kubectl delete volumesnapshot XXXXXXXX
4. Then delete the pools that were created, both block storage and filesystem:
a)kubectl delete -n rook-ceph cephblockpool replicapool
b)kubectl delete -n rook-ceph cephfilesystem myfs
5. Delete the StorageClasses: kubectl delete sc rook-ceph-block rook-cephfs
6. Delete the Ceph cluster: kubectl -n rook-ceph delete cephcluster rook-ceph
7. Delete the Rook resources:
a)kubectl delete -f operator.yaml
b)kubectl delete -f common.yaml
c)kubectl delete -f crds.yaml
8. If the deletion hangs, refer to Rook's troubleshooting guide:

a)https://rook.io/docs/rook/v1.6/ceph-teardown.html#troubleshooting
for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
  kubectl get -n rook-ceph "$CRD" -o name | \
    xargs -I {} kubectl patch {} --type merge -p '{"metadata":{"finalizers": [null]}}' -n rook-ceph
done

9. Clean up the data directory and the disks on each storage node.
Reference: https://rook.io/docs/rook/v1.6/ceph-teardown.html#delete-the-data-on-hosts
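In short, per the linked teardown doc, every node that hosted OSDs needs the Rook data directory removed and its OSD disk wiped; a destructive sketch (the device name is an example, double-check it before running anything like this):

# On each node that hosted OSDs -- destructive, verify the device first
rm -rf /var/lib/rook                                            # the cluster's dataDirHostPath
sgdisk --zap-all /dev/sdb                                       # wipe the OSD disk's partition table
dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct,dsync  # clear remaining Ceph metadata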

Reference: https://rook.io/docs/rook/v1.6/ceph-teardown.html
