K8S Series -- RBAC Access Control and Deploying the Dashboard Cluster-Management Web UI

I. Deploying Dashboard v1.8.3

        Managing a K8S cluster with kubectl means typing commands with long strings of arguments, which is fairly cumbersome. K8S therefore provides a web UI, the Dashboard add-on, that shows all cluster resources at a glance and makes the cluster much easier to manage. In this section we deploy it; once that is done, we will also deploy heapster, a companion add-on that extends the Dashboard's functionality.

    1. Prepare the image

[root@k8s7-200 ~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@k8s7-200 ~]# docker images | grep dashboard
k8scn/kubernetes-dashboard-amd64   v1.8.3                     fcac9aa03fd6        21 months ago       102MB
[root@k8s7-200 ~]# docker tag fcac9aa03fd6 harbor.od.com/public/dashboard:v1.8.3
[root@k8s7-200 ~]# docker push harbor.od.com/public/dashboard:v1.8.3

    2. Prepare the resource manifests

        a. RBAC

[root@k8s7-200 ~]# cd /data/k8s-yaml/
[root@k8s7-200 k8s-yaml]# mkdir dashboard
[root@k8s7-200 k8s-yaml]# cd dashboard/
[root@k8s7-200 dashboard]# vim rbac.yaml
[root@k8s7-200 dashboard]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

        b. Deployment

[root@k8s7-200 dashboard]# cat dp.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        env:
        - name: ACCEPT_LANGUAGE
          value: english
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

        c. Service

[root@k8s7-200 dashboard]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443

        d. Ingress

[root@k8s7-200 dashboard]# cat ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

    3. Create the resources

[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
service/kubernetes-dashboard created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
ingress.extensions/kubernetes-dashboard created

II. Deploying the heapster add-on

    1. Prepare the image

[root@hdss7-200 ~]# docker pull quay.io/bitnami/heapster:1.5.4
[root@hdss7-200 ~]# docker image ls | grep heapster
quay.io/bitnami/heapster        1.5.4                      c359b95ad38b        13 months ago       136MB
[root@hdss7-200 ~]# docker tag c359b95ad38b harbor.od.com/public/heapster:v1.5.4
[root@hdss7-200 ~]# docker push harbor.od.com/public/heapster:v1.5.4

    2. Prepare the resource manifests

        a. RBAC

[root@hdss7-200 ~]# cd /data/k8s-yaml/dashboard/
[root@hdss7-200 dashboard]# mkdir heapster
[root@hdss7-200 dashboard]# cd heapster/
[root@hdss7-200 heapster]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

        b. Deployment

[root@hdss7-200 heapster]# cat dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.od.com/public/heapster:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /opt/bitnami/heapster/bin/heapster
        - --source=kubernetes:https://kubernetes.default

        c. Service

[root@hdss7-200 heapster]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

    3. Create the resources

[root@k8s-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/rbac.yaml
serviceaccount/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
[root@k8s-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/dp.yaml
deployment.extensions/heapster created
[root@k8s-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/svc.yaml
service/heapster created

        At this point the K8S Dashboard is deployed. After adding a DNS record, visit dashboard.od.com to see the result.

Login page (screenshot)

Page after logging in (screenshot)

III. Upgrading the Dashboard

        The dashboard is now deployed, but on the login screen we can click the "SKIP" button and drop straight into the cluster with the highest level of access. That is clearly unsafe, so we need access control. Dashboard 1.8.3 permits this unauthenticated login, so the first step is to upgrade the dashboard, in this case to 1.10.1. Upgrading an image in K8S is easy: prepare the corresponding image, change the image tag in the Deployment, and apply it again.

# Pull the image
[root@hdss7-200 ~]# docker pull hexun/kubernetes-dashboard-amd64:v1.10.1
[root@hdss7-200 ~]# docker images | grep dashboard
hexun/kubernetes-dashboard-amd64   v1.10.1                    f9aed6605b81        15 months ago       122MB
[root@hdss7-200 ~]# docker tag f9aed6605b81 harbor.od.com/public/dashboard:v1.10.1
[root@hdss7-200 ~]# docker push harbor.od.com/public/dashboard:v1.10.1

# Update the image tag in the dashboard's dp.yaml
[root@hdss7-200 ~]# cat /data/k8s-yaml/dashboard/dp.yaml
...
        image: harbor.od.com/public/dashboard:v1.10.1
...

# Apply the updated dp.yaml
[root@k8s-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard configured

        The dashboard is now upgraded. On the next visit we are told that authentication is required, and the login page looks like the screenshot below:

The 1.10.1 login page (screenshot)

        As you can see, the 1.10.1 login page no longer has a "SKIP" button, so we now need K8S RBAC authorization before we can log in. Next, let's look at how K8S controls access.

IV. RBAC

        K8S enforces access control through RBAC, i.e. Role-Based Access Control. The mechanism is built from several related concepts: accounts, roles, permission rules, and role bindings. Let's go through them one by one:

    1. Accounts

        In K8S, accounts are what access the resources of the cluster. They come in two kinds, UserAccount and ServiceAccount, i.e. user accounts and service accounts.

        User accounts exist outside the K8S cluster and are used to access cluster resources from the outside. For example, the kubelet.kubeconfig we generated when deploying kubelet configures a UserAccount named k8s-node, which is used to join nodes to the cluster.

        Service accounts are managed inside the cluster and are used by pods to access cluster resources. Every namespace has a default service account named default; a pod created without an explicit serviceAccount uses this default account.

        So the first step in restricting permissions is to create a ServiceAccount.
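
        As a minimal sketch (the names here are illustrative, not taken from the cluster above), a ServiceAccount and a pod that runs under it might look like this:

```yaml
# Illustrative ServiceAccount; pods referencing it get its token mounted
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
# A pod that uses the account; omit serviceAccountName and the
# namespace's "default" ServiceAccount is used instead
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
spec:
  serviceAccountName: demo-sa
  containers:
  - name: app
    image: nginx          # any image
```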

    2. Roles

        In RBAC, creating an account is not enough: we cannot grant permissions to the account directly. Instead we grant them through a role, and a role is simply a named collection of permissions. K8S has two kinds of role, Role and ClusterRole: a Role is effective only within one namespace, while a ClusterRole is effective across the whole cluster:

[root@k8s-21 conf]# kubectl get role -n kube-system
NAME                                             AGE
extension-apiserver-authentication-reader        6d5h
system::leader-locking-kube-controller-manager   6d5h
system::leader-locking-kube-scheduler            6d5h
system:controller:bootstrap-signer               6d5h
system:controller:cloud-provider                 6d5h
system:controller:token-cleaner                  6d5h
[root@k8s-21 conf]# kubectl get clusterrole
NAME                                                                   AGE
admin                                                                  6d5h
cluster-admin                                                          6d5h
edit                                                                   6d5h
system:aggregate-to-admin                                              6d5h
system:aggregate-to-edit                                               6d5h
system:aggregate-to-view                                               6d5h
system:auth-delegator                                                  6d5h
system:basic-user                                                      6d5h
system:certificates.k8s.io:certificatesigningrequests:nodeclient       6d5h
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   6d5h
system:controller:attachdetach-controller                              6d5h
...
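
        To make the distinction concrete, here is a minimal namespace-scoped Role (the name and namespace are illustrative); the same rules declared as kind ClusterRole would grant pod read access in every namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                  # effective only in metadata.namespace
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]           # "" = the core apiGroup
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```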

    3. Permission rules

        When we define a Role or a ClusterRole we must give it rules, stating which verbs it grants on which resources of which API groups. Let's look at a concrete manifest:

[root@k8s-21 ~]# kubectl get clusterrole traefik-ingress-controller -o yaml
apiVersion: rbac.authorization.k8s.io/v1      # API version
kind: ClusterRole                       # resource kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRole","metadata":{"annotations":{},"name":"traefik-ingress-controller"},"rules":[{"apiGroups":[""],"resources":["services","endpoints","secrets"],"verbs":["get","list","watch"]},{"apiGroups":["extensions"],"resources":["ingresses"],"verbs":["get","list","watch"]}]}
  creationTimestamp: "2020-03-20T07:00:17Z"
  name: traefik-ingress-controller        # name of this ClusterRole
  resourceVersion: "19036"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/traefik-ingress-controller
  uid: 9f750690-ce43-45cd-bbd2-ef893a84cbab
rules:                    # the permission rules
- apiGroups:              # which apiGroups this rule covers
  - ""                    # "" means the core apiGroup
  resources:              # which resources are covered
  - services
  - endpoints
  - secrets
  verbs:                  # which operations are allowed on those resources
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch

    4. Role bindings

        With the account, the role, and its rules in place, we bind the role to the account. K8S provides two resources for this, RoleBinding and ClusterRoleBinding, which bind a Role and a ClusterRole respectively:

[root@k8s-21 ~]# kubectl get clusterrolebinding traefik-ingress-controller -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"traefik-ingress-controller"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"traefik-ingress-controller"},"subjects":[{"kind":"ServiceAccount","name":"traefik-ingress-controller","namespace":"kube-system"}]}
  creationTimestamp: "2020-03-20T07:00:17Z"
  name: traefik-ingress-controller
  resourceVersion: "19037"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/traefik-ingress-controller
  uid: abe11a81-0f1f-4fdc-92f3-1c25f6c4898b
roleRef:                                  # the role to bind
  apiGroup: rbac.authorization.k8s.io     # required: the role's apiGroup
  kind: ClusterRole                       # required: the bound role's kind, here ClusterRole
  name: traefik-ingress-controller        # required: the bound role's name
subjects:                                 # the accounts being bound
- kind: ServiceAccount                    # required: the account type
  name: traefik-ingress-controller        # required: the account name
  namespace: kube-system                  # the subject's namespace; for kinds without a namespace attribute (User, Group), setting this field is an error

        Once all of the above is done, the account is ready to use: in a pod controller we specify our serviceAccount, and the pods then access the cluster under that custom account. That is exactly what the dashboard deployment above did: in the spec of the dashboard's dp.yaml we set serviceAccountName: kubernetes-dashboard-admin, the account we created in rbac.yaml, where we also created a ClusterRoleBinding for it; for the role we simply reused the cluster's built-in cluster-admin ClusterRole, which carries full cluster-administrator permissions.

        When a ServiceAccount is created, K8S automatically creates a secret resource for it, and that secret holds a token. This token is what we use to log in to the dashboard UI.

V. Logging in to the Dashboard with a token

        Having upgraded the dashboard to 1.10.1, the login page no longer offers a "SKIP" button: to get in we must authenticate with a token, so first we have to find that token. In the dashboard's rbac.yaml we created an account named kubernetes-dashboard-admin, and K8S automatically created a secret for it; the token is stored in that secret:

[root@k8s-21 ~]# kubectl get secret -n kube-system
NAME                                     TYPE                                  DATA   AGE
coredns-token-mk5xp                      kubernetes.io/service-account-token   3      6d21h
default-token-dj988                      kubernetes.io/service-account-token   3      7d
heapster-token-rh85d                     kubernetes.io/service-account-token   3      43h
kubernetes-dashboard-admin-token-ljcjk   kubernetes.io/service-account-token   3      2d18h
kubernetes-dashboard-key-holder          Opaque                                2      2d18h
traefik-ingress-controller-token-4vddr   kubernetes.io/service-account-token   3      6d20h
[root@k8s-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-ljcjk -n kube-system
Name:         kubernetes-dashboard-admin-token-ljcjk
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: e0e02fc4-4632-4416-b0b4-9e52095d3a62

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1346 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1samNqayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImUwZTAyZmM0LTQ2MzItNDQxNi1iMGI0LTllNTIwOTVkM2E2MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.TvDXDaYQw55_UApA7nCNnICkKP0BsiAHvJ1v4sa7KSzRIa_29Dg54kCK6Ge7gy62vHD7B7kmmHUFmGra7Nz3mrGIc97Frp8RaIz_OIGSDLiS82-5XKNawa4quOQnbsL-fCrbiREI5rAwxB5_q2iVZEolYntWUbPGu8_e3qbzIh8ceISvtSiCDKabSDCh5Dfmm13L6jBogWxgdNGvUBmHqOQ_TbD23770kxjHqManddZ0CgLHT3KwbH6myBI_sxV36T5eDsphPfDha31Nv-MriV8857rPoZE3XH4eUtH0vDJkRSG8XLWfJC6_Nh_dD-pvEThhijgb7lS1UawViZFJdw
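
        The token above is a JWT: three dot-separated base64url segments, the middle one being JSON claims that name the service account. A self-contained sketch of how to inspect such a payload (the token here is fabricated inside the script, since a real one is cluster-specific):

```shell
# Fabricated claims for illustration; a real token carries the same
# structure with cluster-specific values.
claims='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:kube-system:kubernetes-dashboard-admin"}'
# Build a JWT-shaped string: header.payload.signature, base64url, no padding
seg=$(printf '%s' "$claims" | base64 | tr -d '\n=' | tr '+/' '-_')
jwt="eyJhbGciOiJSUzI1NiJ9.${seg}.fakesignature"
# To inspect a token: take the middle segment, undo the url-safe
# characters, restore the '=' padding, then base64-decode
p=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s' "$p" | base64 -d
echo
```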

        Next we paste this token into the token field on the dashboard login page. But clicking through, the login does not succeed: token authentication only works over HTTPS, so we must issue a certificate for dashboard.od.com and serve the site over HTTPS before we can get in. Let's issue the certificate:

[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
Generating RSA private key, 2048 bit long modulus
.................................................+++
............................................................+++
e is 65537 (0x10001)
[root@hdss7-200 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OD/OU=ops"
[root@hdss7-200 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
Signature ok
subject=/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OD/OU=ops
Getting CA Private Key
[root@hdss7-200 certs]# ls dashboard.od.com.*
dashboard.od.com.crt  dashboard.od.com.csr  dashboard.od.com.key
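
        Before wiring the pair into nginx it is worth checking that the certificate and key actually match and that the certificate chains to the CA. The sketch below reproduces the same signing flow with a throwaway CA in a temp directory (so it is runnable anywhere) and then performs both checks; the same two openssl checks can be run against the real files in /opt/certs:

```shell
# Work in a scratch directory with a throwaway CA; all filenames here
# are illustrative, not the real ones on k8s7-200.
workdir=$(mktemp -d)
cd "$workdir"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -x509 -new -key ca-key.pem -out ca.pem -days 3650 -subj "/CN=od-ca"
# Server key + CSR + CA-signed cert, mirroring the commands above
(umask 077; openssl genrsa -out dashboard.od.com.key 2048 2>/dev/null)
openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr \
  -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OD/OU=ops"
openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out dashboard.od.com.crt -days 3650 2>/dev/null
# Check 1: cert and key moduli must match, or nginx rejects the pair
c=$(openssl x509 -noout -modulus -in dashboard.od.com.crt)
k=$(openssl rsa  -noout -modulus -in dashboard.od.com.key)
[ "$c" = "$k" ] && echo "modulus match"
# Check 2: the cert must verify against the CA
openssl verify -CAfile ca.pem dashboard.od.com.crt
```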

        Configure HTTPS on 10.4.7.11 and 10.4.7.12

[root@k8s-11 ~]# cd /etc/nginx/
[root@k8s-11 nginx]# mkdir certs
[root@k8s-11 nginx]# cd  certs/
[root@k8s-11 certs]# scp 10.4.7.200:/opt/certs/dashboard.* ./
[root@k8s-11 certs]# rm -rf dashboard.od.com.csr 
[root@k8s-11 certs]# cd ../conf.d/
[root@k8s-11 conf.d]# vim dashboard.od.com.conf
[root@k8s-11 conf.d]# cat dashboard.od.com.conf 
server {
    listen       80;
    server_name  dashboard.od.com;

    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen       443 ssl;
    server_name  dashboard.od.com;

    ssl_certificate "certs/dashboard.od.com.crt";
    ssl_certificate_key "certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
[root@k8s-11 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s-11 conf.d]# nginx -s reload

        With HTTPS configured we can refresh the dashboard page. Because the certificate is self-signed, the browser warns that the connection is not private; just choose to proceed anyway. Now paste the token obtained earlier into the Token field on the login page, sign in, and we are in the dashboard:

Entering the token (screenshot)

Login succeeded (screenshot)

        With that, our dashboard deployment is fully complete.