K8S_06 Installing and Deploying CoreDNS

Simply put, service discovery is the process by which services (applications) locate one another.

Service discovery is not unique to the cloud-computing era; traditional monolithic architectures used it too. It matters most in scenarios where:
services (applications) are highly dynamic
services (applications) are updated and released frequently
services (applications) scale automatically

In a K8S cluster, Pods are constantly changing. How do we stay stable amid that churn?
The Service resource was abstracted out: through a label selector, it binds to a group of Pods.

A cluster network was abstracted out: a relatively fixed "cluster IP" gives each service a stable access point.
So how do we automatically associate a Service's "name" with its "cluster IP", so that services can be discovered by the cluster automatically?

Consider the traditional DNS model: hdss7-21.host.com ----> 10.4.7.21
Can we build the same model inside K8S: nginx-ds ----> 192.168.0.5

##########++++++++++++
Service discovery in K8S ----- DNS

Plugins (software) that implement DNS inside K8S:
kube-dns: Kubernetes v1.2 through v1.10

CoreDNS: Kubernetes v1.11 to the present
Note:

DNS in K8S is not a cure-all! It is only responsible for automatically maintaining the "service name" ---> "cluster IP" mapping.
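Concretely, the name being maintained has a fixed shape; a small sketch of the mapping (the service and namespace names are the ones used in the verification steps later in this document):

```python
def service_fqdn(service, namespace, zone="cluster.local"):
    """Build the DNS name that cluster DNS serves for a Service;
    cluster DNS resolves this name to the Service's cluster IP."""
    return f"{service}.{namespace}.svc.{zone}"

# names taken from the examples later in this document
print(service_fqdn("nginx-dp", "kube-public"))
```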

########++++++++++
K8S service discovery component --- deploying CoreDNS

First, stand up an HTTP service for the cluster's internal resource manifests.
On the ops host hdss7-200.host.com (10.4.7.200), configure an nginx virtual host that serves as the unified entry point for k8s resource manifests.

mkdir -p /data/k8s-yaml/coredns
vim /etc/nginx/conf.d/k8s-yaml.od.com.conf

server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}

nginx -t
nginx -s reload
Next, add an A record on hdss7-11 (10.4.7.11), because the nginx vhost above was configured with a domain name:

vim /var/named/od.com.zone
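The edit itself was only captured as a screenshot; a minimal sketch of what gets added (the record name matches the nginx server_name and points at the ops host; the zone serial must also be incremented):

```
; in /var/named/od.com.zone -- increment the serial, then add:
k8s-yaml           A    10.4.7.200
```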


Restart named and test resolution:

systemctl restart named
dig -t A k8s-yaml.od.com @10.4.7.11 +short
The query should return 10.4.7.200 (the ops host).



Then open http://k8s-yaml.od.com/ in a browser to confirm the directory index is served.


###########+++++++++
Deploy CoreDNS ## run on hdss7-200 (10.4.7.200)

Project site: https://github.com/coredns/coredns
Official CoreDNS Docker images: https://hub.docker.com/r/coredns/coredns/tags

This walkthrough uses version 1.6.1.
Deliver the software into k8s as a container:

cd /data/k8s-yaml/coredns/
docker pull docker.io/coredns/coredns:1.6.1

docker images|grep coredns



docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1 ## the IMAGE ID shown by docker images above
docker login harbor.od.com
docker push harbor.od.com/public/coredns:v1.6.1


########+++++++
Prepare the resource manifests (on 10.4.7.200):
cd /data/k8s-yaml/coredns/
vim rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
###########++++
vim cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.4.7.11
        cache 30
        loop
        reload
        loadbalance
    }
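For reference, the same Corefile with each plugin annotated (Corefile syntax accepts '#' comments; the health and ready ports mentioned are those plugins' defaults):

```
.:53 {
    errors                                   # log errors to stdout
    log                                      # log every query
    health                                   # liveness endpoint on :8080/health
    ready                                    # readiness endpoint on :8181/ready
    kubernetes cluster.local 192.168.0.0/16  # serve the cluster zone (and reverse lookups for the service CIDR)
    forward . 10.4.7.11                      # send everything else to the upstream bind server
    cache 30                                 # cache answers for 30 seconds
    loop                                     # abort on forwarding loops
    reload                                   # pick up Corefile changes without a restart
    loadbalance                              # randomize the order of A/AAAA answers
}
```

Note that the health port (8080) is the same one the Deployment's livenessProbe checks.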
####+++++
vim dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
##########++++++++
vim svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
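One consistency constraint worth calling out: the pinned clusterIP must fall inside the service CIDR declared on the Corefile's kubernetes plugin line, and it must match the DNS IP kubelet hands to pods. A quick sketch of the check:

```python
import ipaddress

# values copied from the Corefile (cm.yaml) and the Service (svc.yaml) above
service_cidr = ipaddress.ip_network("192.168.0.0/16")
dns_ip = ipaddress.ip_address("192.168.0.2")
print(dns_ip in service_cidr)  # True
```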
######++++++++++

The four YAML files above were adapted from the upstream template:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

### How to find it: browse that path (cluster/addons/dns/coredns/) in the kubernetes repo.
######+++++++++ Run on hdss7-21.host.com (10.4.7.21):

kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml



kubectl get all -o wide -n kube-system



The cluster IP 192.168.0.2 here was already fixed when kubelet was started.
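Kubelet hands this DNS IP to every pod it starts; on this cluster it was set with flags along these lines (the actual kubelet unit file is not shown here, so treat this as a sketch):

```
--cluster-dns=192.168.0.2
--cluster-domain=cluster.local
```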



Verify CoreDNS:
dig -t A www.baidu.com @192.168.0.2 +short
dig -t A hdss7-21.host.com @192.168.0.2 +short
The second query should return 10.4.7.21; the first goes through the forward plugin to the upstream at 10.4.7.11.



+++++++++
kubectl get pods -o wide -n kube-public ## created earlier



kubectl get svc -n kube-public ## created earlier



Resolve the service:
dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
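The trailing dot above makes the name fully qualified. From inside a pod, the short name `nginx-dp` works too, because kubelet writes search domains into the pod's /etc/resolv.conf. A rough sketch of that expansion logic (the search list shown is an assumption based on the cluster.local domain used in this deployment):

```python
def expand_query(name, search, ndots=5):
    """Mimic how a pod's resolver expands a short name: names with fewer
    than `ndots` dots are tried against each search domain first, then
    verbatim; a trailing dot skips expansion entirely."""
    if name.endswith("."):
        return [name]
    expanded = [f"{name}.{domain}." for domain in search]
    if name.count(".") < ndots:
        return expanded + [name + "."]
    return [name + "."] + expanded

# the search list kubelet would write for a pod in namespace kube-public
search = ["kube-public.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(expand_query("nginx-dp", search)[0])
```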