High Availability Kubernetes Cluster on Alibaba Cloud

Note: Not everything written here is necessarily correct, nor should it be implemented as-is. I have only reached the point of "it works"; I don't yet know which parts are still suboptimal and need tuning.

Estu

Hey, it's been a while since I wrote anything technical; for the past few weeks and months I've been under the spell of travel writing. So, back to the title, and back to Kubernetes.

Previously I ran Kubernetes on AWS, rather smugly. Using kops, no less. Smug because I had been running Kubernetes in production since August 2018, back when friends in Jogja were all using k8s on GCP. People just looked puzzled if you asked about k8s on AWS with kops at the time.

Then in mid-August 2019, I moved to a new place to type. First assignment: tidy up the machines and migrate k8s. I had said up front that I could handle it. On my first day there was a magic incantation:

We use Alicloud, bro :D.

Someone

I grinned pretty widely. How do you install Kubernetes on AliCloud? First thought: use kops on Alicloud. Digging through the notes on the kops GitHub (https://github.com/kubernetes/kops/issues/4127) showed it was still a work in progress, with no known release date.

Checking the pricing for ACK, the managed Kubernetes service on Alicloud, it apparently wasn't available yet. Honestly, I didn't really understand the Alicloud dashboard either; I was just clicking around at random. So, back to semi-bare-metal Kubernetes.

Searching for other write-ups and proofs of concept on HA K8S led me to a few articles:

I then followed the second guide, with a few adjustments as needed.

Below are the final steps for installing the Kubernetes cluster on Alicloud, including the changes/fixes made along the way to reach the current state:

  • Private IPs only, no public IPs for the cluster
  • HA: from HAProxy to a fully internal SLB
  • Network migrated from Weave Net to Calico
  • SLB tuning
  • YAML config paths
  • Way of deployment
  • Ansible way
  • Billing switched to subscription
  • Instance type upgrades
  • Kubernetes version locked

Design

HA K8S cluster design

3 masters, 1 HA, 3 nodes

ha-01 : 192.168.68.56 ecs.t5-lc1m1.small 1core, 1GB, 40GB, Xenial64
internal-slb: 192.168.68.66 slb.s1.small

s-ext-slb : ip public slb.s1.small
p-ext-slb : ip public slb.s1.small

master-01 : 192.168.68.58 ecs.sn1ne.large 2core, 4GB, 40GB, Xenial64
master-02 : 192.168.68.59 ecs.sn1ne.large 2core, 4GB, 40GB, Xenial64
master-03 : 192.168.68.60 ecs.sn1ne.large 2core, 4GB, 40GB, Xenial64

node-01 : 192.168.68.61 ecs.hfg5.xlarge 4core, 16GB, 120GB, Xenial64
node-02 : 192.168.68.62 ecs.hfg5.xlarge 4core, 16GB, 120GB, Xenial64
node-03 : 192.168.68.63 ecs.hfg5.xlarge 4core, 16GB, 120GB, Xenial64

Create Servers

Still manual; the Terraform way is still on the way.

Ceph

100GB on each node.

Install the Kubernetes Cluster

HA: Install and Configure HAProxy

The HA machine's job here is to act as a bridge and load balancer between all those nodes and the masters. The nodes see what looks like a single master at the HA IP, but behind it three masters serve every API request from the nodes. In this setup, HA is only used for the initial bootstrap: by design, a single HA machine is a potential SPOF, so once the cluster is initialized its role is taken over by the internal SLB, which is still free. The HA machine is then hibernated (powered off) but kept (not released). The set of masters forms the cluster's control plane.

SSH to the HA machine and install:

sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install haproxy -y
sudo vim /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3
defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
frontend k8s-api
   bind 0.0.0.0:6443
   mode tcp
   option tcplog
   default_backend k8s-api
backend k8s-api
   mode tcp
   option tcplog
   option tcp-check
   balance roundrobin
   default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server master-01 192.168.68.58:6443 check fall 3 rise 2
        server master-02 192.168.68.59:6443 check fall 3 rise 2
        server master-03 192.168.68.60:6443 check fall 3 rise 2

Test the HAProxy config and restart:

sudo haproxy -f /etc/haproxy/haproxy.cfg -c
sudo systemctl restart haproxy
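
A quick sanity check, assuming nc (netcat) is available: the HAProxy frontend should accept TCP connections on 6443 even before any master is up.

# From any machine in the VPC: confirm HAProxy is listening on 6443
nc -zv 192.168.68.56 6443
# On the HA machine itself: confirm the service came back up
sudo systemctl status haproxy --no-pager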

Generating Our Own Certificates

Further reading:

We need certificates for communication between the components of the Kubernetes cluster. They only need to be created once; after creating them, store them somewhere safe. I created them on my laptop, as follows.

Prepare the Template Files

Create the following three files and adjust the contents as needed.
43800h = 43800 / 24 / 365 = 5 years.

mkdir ca && cd ca
vim ca-config.json
vim ca-csr.json
vim kubernetes-csr.json

ca-config.json

{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

ca-csr.json

{
    "CN": "Tuan CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "ID",
            "L": "DKI Jakarta",
            "O": "Tuan",
            "ST": "Jakarta Pusat",
            "OU": "Tuan Team"
        }
    ]
}

kubernetes-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "ID",
            "L": "DKI Jakarta",
            "O": "TUAN",
            "ST": "Jakarta Pusat",
            "OU": "Tuan Team"
        }
    ]
}

Install Packages

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl*
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
cfssl version

Run the Commands

Generate ca-key.pem & ca.pem

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the Kubernetes Cert

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=192.168.68.66,192.168.68.56,192.168.68.57,192.168.68.58,192.168.68.59,192.168.68.60,127.0.0.1 \
-profile=kubernetes kubernetes-csr.json | \
cfssljson -bare kubernetes

The output files that will be used:

  • ca.pem
  • kubernetes.pem
  • kubernetes-key.pem
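
Before distributing them, it's worth inspecting the generated cert, assuming openssl is installed; the SANs must include every IP passed to -hostname:

# Check the SANs and the validity period
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
openssl x509 -in kubernetes.pem -noout -dates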

Standard Installation on Every Master and Node

SSH in as the root user.

  • Create an ubuntu user with sudo
  • Set up the standard SSH keys
  • Set the timezone to Jakarta
adduser ubuntu
usermod -aG sudo ubuntu
su ubuntu
cd ~
mkdir .ssh
vim .ssh/authorized_keys
vim .ssh/id_rsa
vim .ssh/id_rsa.pub
chmod 600 .ssh/id_rsa
chmod 644 .ssh/id_rsa.pub .ssh/authorized_keys
sudo dpkg-reconfigure tzdata
exit

I made the three tasks above into a base playbook in Ansible.
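
I won't paste the playbook here, but running it looks roughly like this (the inventory and playbook names are illustrative, not the actual files):

# Hypothetical invocation of the base playbook against all masters and nodes
ansible-playbook -i inventory.ini base.yml --become -u root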

Then the packages specific to the Kubernetes cluster:

  • Install docker
sudo apt autoremove -y
sudo apt-get update && sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
sudo apt-get update && sudo apt-get install docker-ce=18.06.2~ce~3-0~ubuntu -y
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo usermod -aG docker ubuntu
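
To confirm the daemon actually picked up daemon.json, check the drivers; kubeadm preflight will complain later if the cgroup driver isn't systemd:

# Should print "Cgroup Driver: systemd" and "Storage Driver: overlay2"
docker info | grep -iE "cgroup driver|storage driver"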
  • Install kubelet kubeadm kubectl nfs-client
sudo su
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl nfs-client
apt-mark hold kubelet kubeadm kubectl
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
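
A quick check that the tooling landed and the packages are pinned, before moving on:

# Versions should match on every machine; the three packages should show as held
kubeadm version -o short
kubectl version --client --short
apt-mark showhold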

Prepare the Master Servers

Copy the CA files to each master:

rsync -avz --progress ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.68.58:~
rsync -avz --progress ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.68.59:~
rsync -avz --progress ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.68.60:~

Install etcd on each master

SSH to each master:

sudo mkdir /etc/etcd /var/lib/etcd
sudo mv ~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem /etc/etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz
tar xvzf etcd-v3.3.15-linux-amd64.tar.gz
sudo mv etcd-v3.3.15-linux-amd64/etcd* /usr/local/bin/.
sudo vim /etc/systemd/system/etcd.service

In the unit file, set the name and every peer/client URL to each master's own IP.

etcd.service on master-01

[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.68.58 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.68.58:2380 \
  --listen-peer-urls https://192.168.68.58:2380 \
  --listen-client-urls https://192.168.68.58:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.68.58:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.68.58=https://192.168.68.58:2380,192.168.68.59=https://192.168.68.59:2380,192.168.68.60=https://192.168.68.60:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

etcd.service on master-02

[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.68.59 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.68.59:2380 \
  --listen-peer-urls https://192.168.68.59:2380 \
  --listen-client-urls https://192.168.68.59:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.68.59:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.68.58=https://192.168.68.58:2380,192.168.68.59=https://192.168.68.59:2380,192.168.68.60=https://192.168.68.60:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

etcd.service on master-03

[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.68.60 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.68.60:2380 \
  --listen-peer-urls https://192.168.68.60:2380 \
  --listen-client-urls https://192.168.68.60:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.68.60:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.68.58=https://192.168.68.58:2380,192.168.68.59=https://192.168.68.59:2380,192.168.68.60=https://192.168.68.60:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Enable and start etcd on each master:

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
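
To verify that the three members formed a healthy cluster, a check that can be run from any master, reusing the same certs (etcdctl v3 syntax):

# All three endpoints should report "is healthy"
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.68.58:2379,https://192.168.68.59:2379,https://192.168.68.60:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health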

Configure the Internal SLB

This configuration makes the internal SLB forward requests arriving at the internal SLB IP on to the HA machine's IP.

Rule:
TCP 192.168.68.66:6443 forward 192.168.68.56:6443
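
Once the rule is in place, the SLB VIP should behave exactly like the HA machine on port 6443. Worth a quick check from one of the masters before initializing:

# The internal SLB should forward this to HAProxy on ha-01
nc -zv 192.168.68.66 6443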

Initialize the Kubernetes Cluster

Config File on Master-01

SSH to master-01. Create the file kubeadm-config.yaml and point controlPlaneEndpoint at the internal SLB IP.

kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "192.168.68.66"
controlPlaneEndpoint: "192.168.68.66:6443"
etcd:
  external:
    endpoints:
    - https://192.168.68.58:2379
    - https://192.168.68.59:2379
    - https://192.168.68.60:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/16

Initialize the cluster

SSH to master-01:

sudo kubeadm init --config kubeadm-config.yaml --upload-certs

Output: copy it and keep it somewhere safe.

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.68.66:6443 --token ntiajl.etedzjdzw7pn6853 \
    --discovery-token-ca-cert-hash sha256:100750a346c9051nb2ce58c270641cd60e068594e2c766efc95ec2fce4357b8 \
    --control-plane --certificate-key a1f8f73ff48e17taeoidoaeb7a7f85231007ba73ced5d3429e26d35b2d9ade7
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.68.66:6443 --token ntiajl.etedzjdzw7pn6853 \
    --discovery-token-ca-cert-hash sha256:100750a346c9051nb2ce58c270641cd60e068594e2c766efc95ec2fce4357b8 

Copy the kubectl configuration for the non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Also copy the config file to your laptop.
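
One way to do that, staying consistent with the rsync steps above (adjust the destination path to taste):

# From your laptop: pull the admin kubeconfig off master-01
rsync -avz --progress ubuntu@192.168.68.58:~/.kube/config ~/.kube/config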

Join Master-02 and Master-03 to the Cluster

SSH to master-02 and master-03, then run:

sudo kubeadm join 192.168.68.66:6443 --token ntiajl.etedzjdzw7pn6853 \
    --discovery-token-ca-cert-hash sha256:100750a346c9051nb2ce58c270641cd60e068594e2c766efc95ec2fce4357b8 \
    --control-plane --certificate-key a1f8f73ff48e17taeoidoaeb7a7f85231007ba73ced5d3429e26d35b2d9ade7

Prepare the Node Servers

Make sure the base config is already in place on node-01, node-02, and node-03: the user, SSH, docker, kubelet, kubeadm, and kubectl.

SSH to the nodes one by one and join each as a worker node:

sudo kubeadm join 192.168.68.66:6443 --token ntiajl.etedzjdzw7pn6853 \
    --discovery-token-ca-cert-hash sha256:100750a346c9051nb2ce58c270641cd60e068594e2c766efc95ec2fce4357b8

Prepare the Workspace

Copy the config from master-01 to your laptop:

mkdir -p $HOME/.kube
vim $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

Install the Network and Namespaces

For the network I use Calico. However, Calico's default pod network (192.168.0.0/16) sits in the same range as my node network, so I changed it to a class A range, 10.30.0.0/16.

From the workspace terminal:

kubectl get nodes
cd /tmp/
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
sed -i "s/192.168.0.0/10.30.0.0/g" "calico.yaml"
kubectl apply -f calico.yaml
kubectl apply -f namespace.yaml

namespace.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
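
After a minute or two, the Calico pods should be running, the nodes should flip to Ready, and the three namespaces should exist:

# Calico runs as a DaemonSet in kube-system; every node gets one pod
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl get nodes
kubectl get ns development staging production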

Install the Kubernetes Dashboard

The dashboard follows this guide: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply the manifests and fetch the login token. Note that the admin-user ServiceAccount lives in the kubernetes-dashboard namespace, so the secret has to be looked up there, not in kube-system:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f dashboard.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Save the generated token; it will be used to log in to the dashboard via kubectl proxy.
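
Access then looks like this; the proxy URL is the standard one for dashboard v2:

# Run a local proxy, then open the URL below in a browser and paste the token
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy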

Internal Proxy

s-ext-slb to pods s-proxy

Why am I still using s-proxy as an internal proxy?
Because I haven't managed to tune Alicloud's s-ext-slb yet, so nginx needs to be tuned directly in the pods that run the internal proxy.

So s-proxy is exposed as a NodePort on port 30000 and registered in the SLB's VServer group against the three Kubernetes node IPs. I set each node to a weight of 40 on the load balancer side.
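
Before wiring up the SLB, the NodePort can be tested directly against any node, assuming example.net is one of the vhosts in the configmap below:

# Hit the NodePort on a node directly, overriding the Host header
curl -H "Host: example.net" http://192.168.68.61:30000/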

Here is the YAML config for s-proxy:

s-proxy-nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: s-proxy-nginx-deployment
  namespace: default
  labels:
    app: proxy-nginx
    env: staging
spec:
  selector:
    matchLabels:
      app: s-proxy-nginx
      env: staging
  replicas: 3
  template:
    metadata:
      labels:
        app: s-proxy-nginx
        env: staging
    spec:
      volumes:
        - name: s-nginx-log
          persistentVolumeClaim:
            claimName: pvc-log
        - name: s-proxy-nginx-configmap
          configMap:
            name: s-proxy-nginx-configmap
      containers:
        - name: s-proxy-nginx
          image: proxy:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: s-nginx-log
              mountPath: "/var/log/nginx"
              subPath: "proxy/nginx/staging"
            - name: s-proxy-nginx-configmap
              mountPath: "/etc/nginx/conf.d/default.conf"
              subPath: "default.conf"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: s-proxy-nginx-service
  namespace: default
  labels:
    app: proxy-nginx
    env: staging
spec:
  type: NodePort
  selector:
    app: s-proxy-nginx
  ports:
    - nodePort: 30000
      port: 80
      protocol: TCP
      targetPort: 80
  sessionAffinity: None
---

s-configmap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: s-proxy-nginx-configmap
  namespace: default
  labels:
     app: proxy-nginx
     env: staging
data:
  default.conf: |
    server {
      listen    80;
      server_name  _;
      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }
    server {
      listen       80;
      server_name  example.net;
      access_log      /var/log/nginx/example.net_access.log elk;
      error_log       /var/log/nginx/example.net_error.log;
      proxy_set_header HOST $host;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      location / {
        proxy_pass http://s-example-service.staging:80/;
      }
    }

However, make sure the example service is already up before deploying s-proxy; nginx resolves the proxy_pass hostname at startup and will fail if it doesn't exist yet.
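
A quick way to check, assuming the service lives in the staging namespace as the proxy_pass above implies:

# nginx will refuse to start if this name doesn't resolve at boot
kubectl -n staging get svc s-example-service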

Deploy s-proxy

Here is how to deploy the proxy; note the anomaly in the deployment order.

kubectl apply -f s-proxy-nginx.yml
kubectl replace -f s-configmap.yml
kubectl rollout restart deploy/s-proxy-nginx-deployment
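
And to confirm the restart picked up the new configmap:

# Wait for the rollout to finish, then confirm all 3 replicas are up
kubectl rollout status deploy/s-proxy-nginx-deployment
kubectl get pods -l app=s-proxy-nginx -o wide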

Domain

Set the domain and subdomains in your DNS management to point at the public IP of s-ext-slb or p-ext-slb.

SSL

I installed the wildcard SSL certificate on s-ext-slb when setting up the listener on port 443.

That's all. Hope it's useful.

Estu~
