<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[focaaby's Notes]]></title><description><![CDATA[趨勢不可擋，流行不可跟，探索與思考技術的本質]]></description><link>https://focaaby.com/</link><image><url>https://focaaby.com/favicon.png</url><title>focaaby&apos;s Notes</title><link>https://focaaby.com/</link></image><generator>Ghost 5.80</generator><lastBuildDate>Tue, 19 Mar 2024 08:43:00 GMT</lastBuildDate><atom:link href="https://focaaby.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[為什麼 EKS cluster 知道預設 CNI Plugin 為 Amazon VPC CNI plugin]]></title><description><![CDATA[預設 EKS cluster 使用 VPC CNI Plugin 作為 CNI。本文將探討「為什麼 EKS cluster 知道預設 CNI Plugin 為 VPC CNI plugin」，希望理解 EKS CNI plugin 設定過程。]]></description><link>https://focaaby.com/why-eks-cluster-recognize-amazon-vpc-cni-k8s-as-default-cni/</link><guid isPermaLink="false">63299179d3f23c00013954a6</guid><category><![CDATA[ironman-2022]]></category><category><![CDATA[eks]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Tue, 20 Sep 2022 10:12:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1491485880348-85d48a9e5312?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxjYXR8ZW58MHx8fHwxNjYzNzA0Mzgw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1491485880348-85d48a9e5312?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxjYXR8ZW58MHx8fHwxNjYzNzA0Mzgw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="&#x70BA;&#x4EC0;&#x9EBC; EKS cluster &#x77E5;&#x9053;&#x9810;&#x8A2D; CNI Plugin &#x70BA; Amazon VPC CNI plugin"><p>&#x9810;&#x8A2D; EKS cluster 
uses the Amazon VPC Container Network Interface (CNI) plugin as its CNI. Once the EKS cluster has been created, we can see the DaemonSet <code>aws-node</code> via kubectl without installing any CNI plugin manually.</p><pre><code>$ kubectl -n kube-system get ds
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aws-node     3         3         3       3            3           &lt;none&gt;          5d11h
kube-proxy   3         3         3       3            3           &lt;none&gt;          5d11h
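# (Illustrative follow-up, not part of the original output:) the container
# image confirms that the aws-node DaemonSet is the Amazon VPC CNI plugin.
$ kubectl -n kube-system get ds aws-node \
    -o jsonpath='{.spec.template.spec.containers[0].image}'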
</code></pre><p>This post explores why an EKS cluster knows that its default CNI plugin is the Amazon VPC CNI plugin, with the goal of understanding how the EKS CNI plugin gets configured.</p><h2 id="work-node">Worker node</h2><p>When we looked at the <code>kubelet</code> systemd unit file in the previous post, we already saw that kubelet enables <code>--network-plugin=cni</code>. kubelet reads the CNI configuration files under the directory given by <code>--cni-conf-dir</code> and uses them to set up each Pod's network.</p><ul><li>kubelet systemd unit</li></ul><pre><code>[ec2-user@ip-192-168-65-212 ~]$ systemctl cat kubelet
# /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service iptables-restore.service
Requires=docker.service

[Service]
ExecStartPre=/sbin/iptables -P FORWARD ACCEPT -w 5
ExecStart=/usr/bin/kubelet --cloud-provider aws \
    --config /etc/kubernetes/kubelet/kubelet-config.json \
    --kubeconfig /var/lib/kubelet/kubeconfig \
    --container-runtime docker \
    --network-plugin cni $KUBELET_ARGS $KUBELET_EXTRA_ARGS
...
... 
...
</code></pre><ul><li>Log in to the EKS worker node and inspect kubelet to confirm the default <a href="https://v1-22.docs.kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/?ref=focaaby.com#cni">CNI plugin parameters</a> [2]:<ul><li><code>--cni-bin-dir=&quot;/opt/cni/bin&quot;</code>: kubelet looks in this directory at startup</li><li><code>--cni-conf-dir=&quot;/etc/cni/net.d&quot;</code></li></ul></li></ul><pre><code>[ec2-user@ip-192-168-65-212 ~]$ journalctl -u kubelet | grep -e &quot;--cni&quot;
Sep 19 09:46:56 ip-192-168-65-212.eu-west-1.compute.internal kubelet[3347]: I0919 09:46:56.765003    3347 flags.go:59] FLAG: --cni-bin-dir=&quot;/opt/cni/bin&quot;
Sep 19 09:46:56 ip-192-168-65-212.eu-west-1.compute.internal kubelet[3347]: I0919 09:46:56.765008    3347 flags.go:59] FLAG: --cni-cache-dir=&quot;/var/lib/cni/cache&quot;
Sep 19 09:46:56 ip-192-168-65-212.eu-west-1.compute.internal kubelet[3347]: I0919 09:46:56.765014    3347 flags.go:59] FLAG: --cni-conf-dir=&quot;/etc/cni/net.d&quot;
</code></pre><p>To verify this, we can launch an EC2 instance directly from the Amazon EKS optimized Amazon Linux AMI and compare:</p><h3 id="%E9%80%8F%E9%81%8E-ec2-%E5%95%9F%E7%94%A8-eks-ami-%E7%9A%84-node">A node launched from the EKS AMI via EC2</h3><ul><li>AMI used: <code>amazon/amazon-eks-node-1.22-v20220824</code></li></ul><pre><code>[ec2-user@ip-172-31-44-117 ~]$ ls /opt/cni/bin
bandwidth  bridge  dhcp  firewall  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan

[ec2-user@ip-172-31-44-117 ~]$ ls -al /etc/cni/net.d/
ls: cannot access /etc/cni/net.d/: No such file or directory
</code></pre><h3 id="%E5%B7%B2%E7%B6%93%E6%9C%89-vpc-cni-plugin-pod-%E7%9A%84-node">A node that already runs a VPC CNI plugin Pod</h3><pre><code>[ec2-user@ip-192-168-65-212 ~]$ ls /opt/cni/bin
aws-cni  aws-cni-support.sh  bandwidth  bridge  dhcp  egress-v4-cni  firewall  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan
</code></pre><pre><code>[ec2-user@ip-192-168-65-212 ~]$ sudo cat /etc/cni/net.d/10-aws.conflist
{
  &quot;cniVersion&quot;: &quot;0.3.1&quot;,
  &quot;name&quot;: &quot;aws-cni&quot;,
  &quot;plugins&quot;: [
    {
      &quot;name&quot;: &quot;aws-cni&quot;,
      &quot;type&quot;: &quot;aws-cni&quot;,
      &quot;vethPrefix&quot;: &quot;eni&quot;,
      &quot;mtu&quot;: &quot;9001&quot;,
      &quot;pluginLogFile&quot;: &quot;/var/log/aws-routed-eni/plugin.log&quot;,
      &quot;pluginLogLevel&quot;: &quot;DEBUG&quot;
    },
    {
      &quot;name&quot;: &quot;egress-v4-cni&quot;,
      &quot;type&quot;: &quot;egress-v4-cni&quot;,
      &quot;mtu&quot;: 9001,
      &quot;enabled&quot;: &quot;false&quot;,
      &quot;nodeIP&quot;: &quot;192.168.65.212&quot;,
      &quot;ipam&quot;: {
         &quot;type&quot;: &quot;host-local&quot;,
         &quot;ranges&quot;: [[{&quot;subnet&quot;: &quot;169.254.172.0/22&quot;}]],
         &quot;routes&quot;: [{&quot;dst&quot;: &quot;0.0.0.0/0&quot;}],
         &quot;dataDir&quot;: &quot;/run/cni/v6pd/egress-v4-ipam&quot;
      },
      &quot;pluginLogFile&quot;: &quot;/var/log/aws-routed-eni/egress-v4-plugin.log&quot;,
      &quot;pluginLogLevel&quot;: &quot;DEBUG&quot;
    },
    {
      &quot;type&quot;: &quot;portmap&quot;,
      &quot;capabilities&quot;: {&quot;portMappings&quot;: true},
      &quot;snat&quot;: true
    }
  ]
}
</code></pre><p>As the CNI flag names suggest, <code>/opt/cni/bin</code> is the path for CNI binaries. When EKS builds the worker node AMI, a <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/scripts/install-worker.sh?ref=focaaby.com#L240">script</a> [3] installs the <a href="https://www.cni.dev/plugins/current/?ref=focaaby.com">common CNI plugin binaries</a> [4]. The comparison above shows that before the <code>aws-node</code> Pod has run, neither the <code>aws-cni</code> binary nor the config file <code>/etc/cni/net.d/10-aws.conflist</code> has been installed or created.</p><p>Next, we can inspect the <code>aws-node</code> Pod logs with kubectl. The <code>amazon-k8s-cni-init</code> initContainer used by default by the Amazon VPC CNI plugin installs <code>loopback</code>, <code>portmap</code>, <code>bandwidth</code> and <code>aws-cni-support.sh</code>; after the init container finishes, the <code>aws-node</code> container also copies the config file and binary. The logs:</p><pre><code>$ kubectl -n kube-system logs aws-node-5c6w5 --all-containers --timestamps
2022-09-19T09:48:57.333672597Z Copying CNI plugin binaries ...
2022-09-19T09:48:57.333810096Z + PLUGIN_BINS=&apos;loopback portmap bandwidth aws-cni-support.sh&apos;
2022-09-19T09:48:57.333831182Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.333835571Z + &apos;[&apos; &apos;!&apos; -f loopback &apos;]&apos;
2022-09-19T09:48:57.333838091Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.333840648Z + &apos;[&apos; &apos;!&apos; -f portmap &apos;]&apos;
2022-09-19T09:48:57.333843085Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.333845490Z + &apos;[&apos; &apos;!&apos; -f bandwidth &apos;]&apos;
2022-09-19T09:48:57.333847874Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.333850503Z + &apos;[&apos; &apos;!&apos; -f aws-cni-support.sh &apos;]&apos;
2022-09-19T09:48:57.333908878Z + HOST_CNI_BIN_PATH=/host/opt/cni/bin
2022-09-19T09:48:57.333911360Z + echo &apos;Copying CNI plugin binaries ... &apos;
2022-09-19T09:48:57.333914044Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.333916695Z + install loopback /host/opt/cni/bin
2022-09-19T09:48:57.358632542Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.358667213Z + install portmap /host/opt/cni/bin
2022-09-19T09:48:57.371180218Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.371397188Z + install bandwidth /host/opt/cni/bin
2022-09-19T09:48:57.386902686Z + for b in &apos;$PLUGIN_BINS&apos;
2022-09-19T09:48:57.387089749Z + install aws-cni-support.sh /host/opt/cni/bin
2022-09-19T09:48:57.389252423Z + echo &apos;Configure rp_filter loose... &apos;
2022-09-19T09:48:57.389584940Z Configure rp_filter loose...
2022-09-19T09:48:57.389945375Z ++ get_metadata local-ipv4
2022-09-19T09:48:57.391264947Z +++ curl -X PUT http://169.254.169.254/latest/api/token -H &apos;X-aws-ec2-metadata-token-ttl-seconds: 60&apos;
2022-09-19T09:48:57.473320665Z   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2022-09-19T09:48:57.473705220Z                                  Dload  Upload   Total   Spent    Left  Speed
100    56  100    56    0     0  28000      0 --:--:-- --:--:-- --:--:-- 56000
2022-09-19T09:48:57.478526895Z ++ TOKEN=AQAEAJKJMlzdfIbRn8BQh4U5I2i0dMhKrW4DxRx64XuuP_g4rPzuOw==
2022-09-19T09:48:57.478907025Z ++ attempts=60
2022-09-19T09:48:57.478914505Z ++ false
2022-09-19T09:48:57.478917627Z ++ &apos;[&apos; 1 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.478920837Z ++ &apos;[&apos; 60 -eq 0 &apos;]&apos;
2022-09-19T09:48:57.479064447Z +++ curl -H &apos;X-aws-ec2-metadata-token: AQAEAJKJMlzdfIbRn8BQh4U5I2i0dMhKrW4DxRx64XuuP_g4rPzuOw==&apos; http://169.254.169.254/latest/meta-data/local-ipv4
2022-09-19T09:48:57.486678298Z   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2022-09-19T09:48:57.486692175Z                                  Dload  Upload   Total   Spent    Left  Speed
100    14  100    14    0     0  14000      0 --:--:-- --:--:-- --:--:-- 14000
2022-09-19T09:48:57.488090438Z ++ meta=192.168.65.212
2022-09-19T09:48:57.488210992Z ++ &apos;[&apos; 0 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.488331224Z ++ &apos;[&apos; 0 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.488430469Z ++ echo 192.168.65.212
2022-09-19T09:48:57.488788529Z + HOST_IP=192.168.65.212
2022-09-19T09:48:57.489299361Z ++ get_metadata mac
2022-09-19T09:48:57.489716539Z +++ curl -X PUT http://169.254.169.254/latest/api/token -H &apos;X-aws-ec2-metadata-token-ttl-seconds: 60&apos;
2022-09-19T09:48:57.497186993Z   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2022-09-19T09:48:57.497200745Z                                  Dload  Upload   Total   Spent    Left  Speed
100    56  100    56    0     0  56000      0 --:--:-- --:--:-- --:--:-- 56000
2022-09-19T09:48:57.499248303Z ++ TOKEN=AQAEAJKJMlxeFjZlUL_0INYoCXkf7UWmVm4nIKV6nnDeG_VvdZ9-Ig==
2022-09-19T09:48:57.499358341Z ++ attempts=60
2022-09-19T09:48:57.499440488Z ++ false
2022-09-19T09:48:57.499594430Z ++ &apos;[&apos; 1 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.499599969Z ++ &apos;[&apos; 60 -eq 0 &apos;]&apos;
2022-09-19T09:48:57.500015110Z +++ curl -H &apos;X-aws-ec2-metadata-token: AQAEAJKJMlxeFjZlUL_0INYoCXkf7UWmVm4nIKV6nnDeG_VvdZ9-Ig==&apos; http://169.254.169.254/latest/meta-data/mac
2022-09-19T09:48:57.511676682Z   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2022-09-19T09:48:57.511709736Z                                  Dload  Upload   Total   Spent    Left  Speed
100    17  100    17    0     0  17000      0 --:--:-- --:--:-- --:--:-- 17000
2022-09-19T09:48:57.515247164Z ++ meta=06:1b:3e:40:af:fd
2022-09-19T09:48:57.515648955Z ++ &apos;[&apos; 0 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.515656468Z ++ &apos;[&apos; 0 -gt 0 &apos;]&apos;
2022-09-19T09:48:57.515659528Z ++ echo 06:1b:3e:40:af:fd
2022-09-19T09:48:57.515827055Z + PRIMARY_MAC=06:1b:3e:40:af:fd
2022-09-19T09:48:57.516696407Z ++ grep -F &apos;link/ether 06:1b:3e:40:af:fd&apos;
2022-09-19T09:48:57.516908826Z ++ awk &apos;-F[ :]+&apos; &apos;{print $2}&apos;
2022-09-19T09:48:57.517104498Z ++ ip -o link show
2022-09-19T09:48:57.529715651Z + PRIMARY_IF=eth0
2022-09-19T09:48:57.529881853Z + sysctl -w net.ipv4.conf.eth0.rp_filter=2
2022-09-19T09:48:57.549511061Z net.ipv4.conf.eth0.rp_filter = 2
2022-09-19T09:48:57.550035252Z + cat /proc/sys/net/ipv4/conf/eth0/rp_filter
2022-09-19T09:48:57.552013538Z 2
2022-09-19T09:48:57.552296409Z + &apos;[&apos; false == true &apos;]&apos;
2022-09-19T09:48:57.552638323Z + sysctl -e -w net.ipv4.tcp_early_demux=1
2022-09-19T09:48:57.557945154Z net.ipv4.tcp_early_demux = 1
2022-09-19T09:48:57.559523168Z + &apos;[&apos; false == true &apos;]&apos;
2022-09-19T09:48:57.560537081Z + echo &apos;CNI init container done&apos;
2022-09-19T09:48:57.560663349Z CNI init container done
2022-09-19T09:48:58.240326675Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:48:58.237Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Validating env variables ...&quot;}
2022-09-19T09:48:58.241282124Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:48:58.240Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Install CNI binaries..&quot;}
2022-09-19T09:48:58.316896574Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:48:58.316Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Starting IPAM daemon in the background ... &quot;}
2022-09-19T09:48:58.318121492Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:48:58.317Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Checking for IPAM connectivity ... &quot;}
2022-09-19T09:49:00.401119692Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:49:00.397Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Retrying waiting for IPAM-D&quot;}
2022-09-19T09:49:00.441859850Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:49:00.441Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Copying config file ... &quot;}
2022-09-19T09:49:00.450179414Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:49:00.449Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Successfully copied CNI plugin binary and config file.&quot;}
2022-09-19T09:49:00.451045960Z {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2022-09-19T09:49:00.450Z&quot;,&quot;caller&quot;:&quot;entrypoint.sh&quot;,&quot;msg&quot;:&quot;Foregrounding IPAM daemon ...&quot;}
</code></pre><p>The <a href="https://github.com/aws/amazon-vpc-cni-k8s/blob/master/scripts/entrypoint.sh?ref=focaaby.com">Amazon VPC CNI plugin entrypoint</a> [5] confirms that the <code>aws-cni</code> binary is copied onto the host when the container starts; the DaemonSet's volumeMounts also map the corresponding host directories:</p><pre><code>$ kubectl -n kube-system get ds aws-node -o yaml
...
...
		image: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.10.1-eksbuild.1
...
...
        volumeMounts:
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
          name: cni-net-dir
        - mountPath: /host/var/log/aws-routed-eni
          name: log-dir
        - mountPath: /var/run/aws-node
          name: run-dir
        - mountPath: /var/run/dockershim.sock
          name: dockershim
        - mountPath: /run/xtables.lock
          name: xtables-lock
...
...
      volumes:
      - hostPath:
          path: /opt/cni/bin
          type: &quot;&quot;
        name: cni-bin-dir
      - hostPath:
          path: /etc/cni/net.d
          type: &quot;&quot;
        name: cni-net-dir
      - hostPath:
          path: /var/run/dockershim.sock
          type: &quot;&quot;
        name: dockershim
      - hostPath:
          path: /run/xtables.lock
          type: &quot;&quot;
        name: xtables-lock
      - hostPath:
          path: /var/log/aws-routed-eni
          type: DirectoryOrCreate
        name: log-dir
      - hostPath:
          path: /var/run/aws-node
          type: DirectoryOrCreate
        name: run-dir
</code></pre><h2 id="%E9%82%A3%E7%82%BA%E4%BB%80%E9%BA%BC%E9%9C%80%E8%A6%81%E9%80%99%E6%A8%A3%E8%A8%AD%E5%AE%9A%E4%B8%80%E5%80%8B-entrypoint-script-%E5%AE%89%E8%A3%9D-cni-binary-%E5%91%A2%EF%BC%9F">So why install the CNI binary through an entrypoint script at all?</h2><p>The VPC CNI plugin has two main components:</p><ul><li><a href="https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/?ref=focaaby.com#cni">CNI plugin</a>: wires up host and Pod networking.</li><li><code>ipamd</code>: a long-running, node-local IP Address Management (IPAM) daemon responsible for two things:<ul><li>maintaining a warm pool of available IP addresses</li><li>assigning IP addresses to Pods</li></ul></li></ul><p>The comments in the <a href="https://github.com/aws/amazon-vpc-cni-k8s/blob/master/scripts/entrypoint.sh?ref=focaaby.com">Amazon VPC CNI plugin entrypoint</a> [5] explain why: kubelet normally treats the CNI plugin as Ready as soon as it finds files in the well-known directories (here, <code>/opt/cni/bin</code> and <code>/etc/cni/net.d</code>). Because the VPC CNI plugin ships the two components above, it first starts the IPAM daemon and makes sure it can reach both Kubernetes and the local EC2 metadata service, and only then copies the <code>aws-cni</code> binary 
to the well-known directories.</p><h2 id="api-server">API server</h2><p>Using CloudWatch Logs Insights syntax, inspect the <code>kube-apiserver-audit</code> logs, filtering for events whose verb is create:</p><pre><code>filter @logStream like /^kube-apiserver-audit/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter objectRef.name == &apos;aws-node&apos; AND objectRef.resource == &apos;daemonsets&apos; AND verb == &apos;create&apos;
 | limit 10000
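# (Hypothetical refinement, not part of the original query:) surface the
# caller fields discussed in the audit-log findings:
# | display @timestamp, user.username, userAgent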
</code></pre><p>&#x80FD;&#x767C;&#x73FE; username &#x70BA; <code>eks: cluster-bootstrap</code> &#x900F;&#x904E; <code>kubectl</code> &#x65B9;&#x5F0F;&#x90E8;&#x7F72; create &#x4E86; <code>aws-node</code> DaemonSet&#x3002;</p><ul><li><code>userAgent</code>: <code>kubectl/v1.22.12 (linux/amd64) kubernetes/dade57b</code></li><li><code>user.username</code>: <code>eks: cluster-bootstrap</code></li></ul><h2 id="%E7%B8%BD%E7%B5%90">&#x7E3D;&#x7D50;</h2><p>&#x7531;&#x4E0A;&#x8FF0;&#x9A57;&#x8B49;&#x5F8C;&#xFF0C;&#x6211;&#x5011;&#x53EF;&#x4EE5;&#x4E86;&#x89E3; EKS &#x5728; cluster bootstrap &#x904E;&#x7A0B;&#x4E2D;&#x900F;&#x904E; kubectl &#x65B9;&#x5F0F;&#x90E8;&#x7F72; VPC CNI plugin&#x3002;&#x800C;&#x5728; worker node &#x5C64;&#x7D1A;&#xFF0C;&#x8A2D;&#x5B9A;&#x4E86; kubelet CNI &#x76F8;&#x95DC;&#x53C3;&#x6578;&#xFF0C;&#x4E26;&#x7531; VPC CNI plugin &#x8907;&#x88FD; binary &#x6A94;&#x6848;&#x81F3;&#x4E3B;&#x6A5F;&#x4E0A;&#x4F7F;&#x7528;&#x3002;</p><hr><p>&#x4E0A;&#x8FF0;&#x8CC7;&#x8A0A;&#x900F;&#x904E; EKS &#x6240;&#x63D0;&#x4F9B; Logs &#x4F86;&#x9A57;&#x8B49;&#x4E0A;&#x6E38; Kubernetes &#x904B;&#x4F5C;&#x539F;&#x7406;&#xFF0C;&#x5018;&#x82E5;&#x4E0A;&#x8FF0;&#x5167;&#x6587;&#x6709;&#x6240;&#x932F;&#x8AA4;&#xFF0C;&#x96A8;&#x6642;&#x53EF;&#x4EE5;&#x7559;&#x8A00;&#x6216;&#x662F;&#x79C1;&#x8A0A;&#x6211;&#x3002;</p><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">&#x53C3;&#x8003;&#x6587;&#x4EF6;</h2><ol><li>Pod networking in Amazon EKS using the Amazon VPC CNI plugin for Kubernetes - <a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html</a></li><li>Network Plugins | Kubernetes 1.22 - <a href="https://v1-22.docs.kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/?ref=focaaby.com#cni">https://v1-22.docs.kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni</a></li><li><a 
href="https://github.com/awslabs/amazon-eks-ami/blob/master/scripts/install-worker.sh?ref=focaaby.com#L240">https://github.com/awslabs/amazon-eks-ami/blob/master/scripts/install-worker.sh#L240</a></li><li>The Container Network Interface - <a href="https://www.cni.dev/plugins/current/?ref=focaaby.com">https://www.cni.dev/plugins/current/</a></li><li><a href="https://github.com/aws/amazon-vpc-cni-k8s/blob/master/scripts/entrypoint.sh?ref=focaaby.com">https://github.com/aws/amazon-vpc-cni-k8s/blob/master/scripts/entrypoint.sh</a></li></ol>]]></content:encoded></item><item><title><![CDATA[為什麼 EKS worker node 可以自動加入 EKS cluster（二）？]]></title><description><![CDATA[<p>&#x63A5;&#x7E8C;&#x524D;&#x4E00;&#x7BC7;&#xFF0C;&#x6211;&#x5011;&#x4E86;&#x89E3;&#x4E86; EKS node group &#x5B9A;&#x7FA9;&#xFF0C;&#x4E26;&#x77E5;&#x9053;&#x4E86; EKS worker node &#x4F7F;&#x7528;&#x4E86; EKS optimized Amazon Linux AMI &#x5167;&#x9810;&#x5148;&#x8A2D;&#x7F6E;&#x597D;&#x7684; bootstrap &#x8A2D;&#x5B9A; container runtime&#x3001;kubelet &#x7B49;&#x8A2D;&#x5B9A;&#x3002;</p>]]></description><link>https://focaaby.com/why-eks-node-can-join-cluster-automatically-2/</link><guid isPermaLink="false">632882b3d3f23c0001395489</guid><category><![CDATA[ironman-2022]]></category><category><![CDATA[eks]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Mon, 19 Sep 2022 14:55:28 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1574144611937-0df059b5ef3e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIwfHxjYXR8ZW58MHx8fHwxNjYzNTE1NjMw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1574144611937-0df059b5ef3e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIwfHxjYXR8ZW58MHx8fHwxNjYzNTE1NjMw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="&#x70BA;&#x4EC0;&#x9EBC; EKS worker node 
can automatically join the EKS cluster (part 2)?"><p>Following on from the previous post, we covered the EKS node group definition and learned that EKS worker nodes use the bootstrap settings baked into the EKS optimized Amazon Linux AMI to configure the container runtime, kubelet and so on.</p><p>To secure communication between the Control Plane and worker nodes, <a href="https://github.com/kubernetes/kubernetes/pull/20439/files?ref=focaaby.com">Kubernetes 1.4 introduced the certificate request and signing API</a> [1]. Since EKS worker nodes are built on vanilla Kubernetes, they use the kubelet TLS bootstrapping flow to communicate with the API server.</p><p>This post therefore continues with how an EKS worker node lets kubelet complete the TLS bootstrap flow automatically and join the EKS cluster.</p><h2 id="kubelet-bootstrap-%E5%88%9D%E5%A7%8B%E5%8C%96%E6%AD%A5%E9%A9%9F">kubelet bootstrap initialization steps</h2><p>According to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/?ref=focaaby.com">Kubernetes TLS bootstrapping</a> [2] documentation, kubelet bootstraps as follows:</p><ol><li>kubelet starts.</li><li>kubelet checks whether a corresponding kubeconfig file exists.</li><li>kubelet checks whether a corresponding bootstrap-kubeconfig file exists.</li><li>From the bootstrap config, kubelet obtains the API server endpoint and a limited-privilege token.</li><li>kubelet authenticates to the API server with that token.</li><li>Using those limited-privilege credentials, kubelet can create and retrieve <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/?ref=focaaby.com">Certificate Signing Requests (CSR)</a> [3].</li><li>kubelet creates a CSR with signerName set to kubernetes.io/kube-apiserver-client-kubelet.</li><li>The CSR is approved, in one of two ways:</li></ol><ul><li><code>kube-controller-manager</code> approves the CSR automatically</li><li>an external process or a human approves the CSR, via kubectl or the Kubernetes API</li></ul><ol start="9"><li>The certificate kubelet needs is created.</li><li>The certificate is sent to kubelet.</li><li>kubelet retrieves the certificate.</li><li>kubelet creates a kubeconfig containing the key and the signed certificate.</li><li>kubelet starts running normally.</li><li>(Optional) kubelet automatically requests certificate renewal as the certificate nears expiry.</li><li>The renewed certificate is approved and issued.</li></ol><h2 id="%E9%A9%97%E8%AD%89-eks-%E8%A8%AD%E5%AE%9A">Verifying the EKS setup</h2><p>To automate the approval and issuance flow, the following components must be configured, along with the Kubernetes Certificate Authority (CA):</p><ul><li>kube-apiserver</li><li>kube-controller-manager</li><li>kubelet</li></ul><h3 id="kubelet">kubelet</h3><p>First, view the <code>kubelet</code> systemd unit file with <code>systemctl</code>:</p><pre><code>[ec2-user@ip-192-168-65-212 ~]$ systemctl cat kubelet
# /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service iptables-restore.service
Requires=docker.service

[Service]
ExecStartPre=/sbin/iptables -P FORWARD ACCEPT -w 5
ExecStart=/usr/bin/kubelet --cloud-provider aws \
    --config /etc/kubernetes/kubelet/kubelet-config.json \
    --kubeconfig /var/lib/kubelet/kubeconfig \
    --container-runtime docker \
    --network-plugin cni $KUBELET_ARGS $KUBELET_EXTRA_ARGS

Restart=always
RestartSec=5
KillMode=process

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
[Service]
Environment=&apos;KUBELET_ARGS=--node-ip=192.168.65.212 --pod-infra-container-image=602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/pause:3.5 --v=2&apos;
# /etc/systemd/system/kubelet.service.d/30-kubelet-extra-args.conf
[Service]
Environment=&apos;KUBELET_EXTRA_ARGS=--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/nodegroup-name=ng1-public-ssh,alpha.eksctl.io/cluster-name=ironman-2022,eks.amazonaws.com/nodegroup-image=ami-0ec9e1727a24fb788,eks.a
</code></pre><p>We can see the following configuration:</p><ul><li>kubeconfig: <code>/var/lib/kubelet/kubeconfig</code> uses the kubelet username and obtains a cluster token via <code>aws-iam-authenticator</code>.</li></ul><pre><code>[ec2-user@ip-192-168-65-212 ~]$ cat /var/lib/kubelet/kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://A8E7A39CAEBEF6AA9250DFA9366FDFA2.gr7.eu-west-1.eks.amazonaws.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/aws-iam-authenticator
      args:
        - &quot;token&quot;
        - &quot;-i&quot;
        - &quot;ironman-2022&quot;
        - --region
        - &quot;eu-west-1&quot;
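</code></pre><p>The <code>exec</code> stanza above follows the client-go credential plugin protocol: the kubelet runs the configured command and reads an ExecCredential object from its stdout, then presents <code>status.token</code> as a bearer token. A minimal stand-in sketch of that contract (this is not the real <code>aws-iam-authenticator</code>; the function name and token value are fabricated purely for illustration):</p><pre><code># Toy credential plugin: emit an ExecCredential the way a client-go
# exec plugin does. The apiVersion matches the kubeconfig above;
# the token below is a fake placeholder.
fake_token_plugin() {
  cat &lt;&lt;&apos;EOF&apos;
{
  &quot;kind&quot;: &quot;ExecCredential&quot;,
  &quot;apiVersion&quot;: &quot;client.authentication.k8s.io/v1alpha1&quot;,
  &quot;status&quot;: { &quot;token&quot;: &quot;k8s-aws-v1.EXAMPLE-FAKE-TOKEN&quot; }
}
EOF
}
fake_token_plugin
</code></pre><pre><code>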

</code></pre><p>This kubeconfig also confirms that the EKS optimized Amazon Linux AMI ships with <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com"><code>aws-iam-authenticator</code></a> [4] preinstalled to provide the authentication flow for the kubelet on each EKS worker node.</p><ul><li>kubelet config: <code>/etc/kubernetes/kubelet/kubelet-config.json</code></li></ul><pre><code>[ec2-user@ip-192-168-90-19 ~]$ cat /etc/kubernetes/kubelet/kubelet-config.json
{
  &quot;kind&quot;: &quot;KubeletConfiguration&quot;,
  &quot;apiVersion&quot;: &quot;kubelet.config.k8s.io/v1beta1&quot;,
  &quot;address&quot;: &quot;0.0.0.0&quot;,
  &quot;authentication&quot;: {
    &quot;anonymous&quot;: {
      &quot;enabled&quot;: false
    },
    &quot;webhook&quot;: {
      &quot;cacheTTL&quot;: &quot;2m0s&quot;,
      &quot;enabled&quot;: true
    },
    &quot;x509&quot;: {
      &quot;clientCAFile&quot;: &quot;/etc/kubernetes/pki/ca.crt&quot;
    }
  },
  &quot;authorization&quot;: {
    &quot;mode&quot;: &quot;Webhook&quot;,
    &quot;webhook&quot;: {
      &quot;cacheAuthorizedTTL&quot;: &quot;5m0s&quot;,
      &quot;cacheUnauthorizedTTL&quot;: &quot;30s&quot;
    }
  },
  &quot;clusterDomain&quot;: &quot;cluster.local&quot;,
  &quot;hairpinMode&quot;: &quot;hairpin-veth&quot;,
  &quot;readOnlyPort&quot;: 0,
  &quot;cgroupDriver&quot;: &quot;cgroupfs&quot;,
  &quot;cgroupRoot&quot;: &quot;/&quot;,
  &quot;featureGates&quot;: {
    &quot;RotateKubeletServerCertificate&quot;: true
  },
  &quot;protectKernelDefaults&quot;: true,
  &quot;serializeImagePulls&quot;: false,
  &quot;serverTLSBootstrap&quot;: true,
  &quot;tlsCipherSuites&quot;: [
    &quot;TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256&quot;,
    &quot;TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256&quot;,
    &quot;TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305&quot;,
    &quot;TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384&quot;,
    &quot;TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305&quot;,
    &quot;TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384&quot;,
    &quot;TLS_RSA_WITH_AES_256_GCM_SHA384&quot;,
    &quot;TLS_RSA_WITH_AES_128_GCM_SHA256&quot;
  ],
  &quot;clusterDNS&quot;: [
    &quot;10.100.0.10&quot;
  ],
  &quot;evictionHard&quot;: {
    &quot;memory.available&quot;: &quot;100Mi&quot;,
    &quot;nodefs.available&quot;: &quot;10%&quot;,
    &quot;nodefs.inodesFree&quot;: &quot;5%&quot;
  },
  &quot;kubeReserved&quot;: {
    &quot;cpu&quot;: &quot;70m&quot;,
    &quot;ephemeral-storage&quot;: &quot;1Gi&quot;,
    &quot;memory&quot;: &quot;574Mi&quot;
  }
}
</code></pre><p>Among these <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/?ref=focaaby.com">kubelet flags</a> [5], the ones relevant to TLS bootstrap are:</p><ul><li><code>serverTLSBootstrap: true</code>: allows the kubelet to obtain its serving certificate from the <code>certificates.k8s.io</code> API. However, due to a <a href="https://github.com/kubernetes/community/pull/1982?ref=focaaby.com">known security limitation</a> [6], CSRs for this type of certificate cannot be auto-approved by <code>kube-controller-manager</code> through the <code>kubernetes.io/kubelet-serving</code> signer; approval must instead come from a user or a third-party controller.</li><li><code>featureGates.RotateKubeletServerCertificate: true</code>: after bootstrapping its client credentials, the kubelet requests a serving certificate and rotates it automatically.</li></ul><h3 id="kube-apiserver">kube-apiserver</h3><p>The kube-apiserver needs the following three things in place to enable TLS bootstrapping:</p><ul><li>recognize the CA that signs client certificates</li><li>authenticate the bootstrapping kubelet as a member of the <code>system:bootstrappers</code> group</li><li>authorize the bootstrapping kubelet to create CSRs</li></ul><p>For the authentication part, as mentioned above, the bootstrap.sh script defined by the EKS optimized Amazon Linux AMI generates a kubeconfig that uses <code>aws-iam-authenticator</code>. Once the kubelet has been authenticated by the API server, the RBAC <code>system:node</code> role and the <a href="https://kubernetes.io/docs/reference/access-authn-authz/node/?ref=focaaby.com">Node authorizer</a> [7] allow the node to create and read CSRs. We can therefore inspect the EKS default <code>eks:node-bootstrapper</code> role, which:</p><ul><li>grants the kubelet permission to submit CSRs</li></ul><pre><code>$ kubectl get clusterrole eks:node-bootstrapper -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;rbac.authorization.k8s.io/v1&quot;,&quot;kind&quot;:&quot;ClusterRole&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;eks.amazonaws.com/component&quot;:&quot;node&quot;},&quot;name&quot;:&quot;eks:node-bootstrapper&quot;},&quot;rules&quot;:[{&quot;apiGroups&quot;:[&quot;certificates.k8s.io&quot;],&quot;resources&quot;:[&quot;certificatesigningrequests/selfnodeserver&quot;],&quot;verbs&quot;:[&quot;create&quot;]}]}
  creationTimestamp: &quot;2022-09-14T09:46:17Z&quot;
  labels:
    eks.amazonaws.com/component: node
  name: eks:node-bootstrapper
  resourceVersion: &quot;283&quot;
  uid: eb23d8fe-dfdf-4f01-aba7-72ca32b52ad7
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/selfnodeserver
  verbs:
  - create
</code></pre><ul><li>the cluster role <code>eks:node-bootstrapper</code> is bound to the <code>system:bootstrappers</code> and <code>system:nodes</code> groups</li></ul><pre><code>$ kubectl get clusterrolebindings eks:node-bootstrapper -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;rbac.authorization.k8s.io/v1&quot;,&quot;kind&quot;:&quot;ClusterRoleBinding&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;eks.amazonaws.com/component&quot;:&quot;node&quot;},&quot;name&quot;:&quot;eks:node-bootstrapper&quot;},&quot;roleRef&quot;:{&quot;apiGroup&quot;:&quot;rbac.authorization.k8s.io&quot;,&quot;kind&quot;:&quot;ClusterRole&quot;,&quot;name&quot;:&quot;eks:node-bootstrapper&quot;},&quot;subjects&quot;:[{&quot;apiGroup&quot;:&quot;rbac.authorization.k8s.io&quot;,&quot;kind&quot;:&quot;Group&quot;,&quot;name&quot;:&quot;system:bootstrappers&quot;},{&quot;apiGroup&quot;:&quot;rbac.authorization.k8s.io&quot;,&quot;kind&quot;:&quot;Group&quot;,&quot;name&quot;:&quot;system:nodes&quot;}]}
  creationTimestamp: &quot;2022-09-14T09:46:16Z&quot;
  labels:
    eks.amazonaws.com/component: node
  name: eks:node-bootstrapper
  resourceVersion: &quot;282&quot;
  uid: 867196bc-b84a-410d-8d4b-cbf52d840108
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
</code></pre><ul><li>Querying the <code>kube-apiserver</code> logs with CloudWatch Logs Insights likewise shows that <code>--authorization-mode</code> uses both the Node and RBAC authorization modes.</li></ul><pre><code>filter @logStream not like /^kube-apiserver-audit/
 | filter @logStream like /^kube-apiserver-/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter @message like &quot;--authorization-mode&quot;
 | limit 10000
</code></pre><pre><code>I0914 09:46:06.295709 10 flags.go:59] FLAG: --authorization-mode=&quot;[Node,RBAC]&quot;
</code></pre><h3 id="kube-controller-manager">kube-controller-manager</h3><p>EKS uses kubelet serving certificates, which can only be approved by a third-party controller. Here too we can use CloudWatch Logs Insights to inspect the relevant flag, and observe that the native Kubernetes <code>csrsigning</code> controller is disabled.</p><pre><code>filter @logStream like /^kube-controller-manager/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter @message like &quot;--controller&quot;
 | limit 10000
</code></pre><pre><code>I0914 09:51:40.684692 11 flags.go:59] FLAG: --controllers=&quot;[*,-csrsigning]&quot;
</code></pre><h3 id="%E5%AF%A6%E9%9A%9B%E5%95%9F%E7%94%A8-eks-%E7%AF%80%E9%BB%9E">Bringing Up an EKS Node in Practice</h3><p>Let's scale up one node with the <code>eksctl scale</code> command and watch the corresponding component settings and the TLS bootstrap flow:</p><pre><code>$ eksctl scale ng --nodes=3 --name=ng1-public-ssh --cluster=ironman-2022
2022-09-19 09:44:52 [&#x2139;]  scaling nodegroup &quot;ng1-public-ssh&quot; in cluster ironman-2022
2022-09-19 09:44:53 [&#x2139;]  waiting for scaling of nodegroup &quot;ng1-public-ssh&quot; to complete
2022-09-19 09:46:31 [&#x2139;]  nodegroup successfully scaled
</code></pre><p>With <code>kubectl get csr</code> we first see a CSR initiated with the node <code>system:node:ip-192-168-65-212.eu-west-1.compute.internal</code> as the requestor. Within about 11 seconds the CSR shows as Approved, and a few seconds later it is also Issued to the node.</p><pre><code>
$ kubectl get node
NAME                                           STATUS     ROLES    AGE     VERSION
...
ip-192-168-65-212.eu-west-1.compute.internal   NotReady   &lt;none&gt;   0s      v1.22.12-eks-ba74326

$ kubectl get csr
NAME        AGE   SIGNERNAME                      REQUESTOR                                                  REQUESTEDDURATION   CONDITION
csr-kdkll   11s   kubernetes.io/kubelet-serving   system:node:ip-192-168-65-212.eu-west-1.compute.internal   &lt;none&gt;              Approved

$ kubectl get csr
NAME        AGE   SIGNERNAME                      REQUESTOR                                                  REQUESTEDDURATION   CONDITION
csr-kdkll   16s   kubernetes.io/kubelet-serving   system:node:ip-192-168-65-212.eu-west-1.compute.internal   &lt;none&gt;              Approved,Issued

$ kubectl get node
NAME                                                STATUS   ROLES    AGE     VERSION
node/ip-192-168-18-254.eu-west-1.compute.internal   Ready    &lt;none&gt;   4d17h   v1.22.12-eks-ba74326
node/ip-192-168-40-16.eu-west-1.compute.internal    Ready    &lt;none&gt;   19h     v1.22.12-eks-ba74326
node/ip-192-168-65-212.eu-west-1.compute.internal   Ready    &lt;none&gt;   43s     v1.22.12-eks-ba74326
</code></pre><pre><code>$ kubectl describe csr csr-kdkll
Name:               csr-kdkll
Labels:             &lt;none&gt;
Annotations:        &lt;none&gt;
CreationTimestamp:  Mon, 19 Sep 2022 09:46:59 +0000
Requesting User:    system:node:ip-192-168-65-212.eu-west-1.compute.internal
Signer:             kubernetes.io/kubelet-serving
Status:             Approved,Issued
Subject:
  Common Name:    system:node:ip-192-168-65-212.eu-west-1.compute.internal
  Serial Number:
  Organization:   system:nodes
Subject Alternative Names:
         DNS Names:     ec2-52-211-162-59.eu-west-1.compute.amazonaws.com
                        ip-192-168-65-212.eu-west-1.compute.internal
         IP Addresses:  192.168.65.212
                        52.211.162.59
Events:  &lt;none&gt;
</code></pre><p>Since we cannot see directly whether EKS uses a third-party controller to approve the CSR, we query the <code>kube-apiserver-audit</code> logs with CloudWatch Logs Insights to trace how CSR <code>csr-kdkll</code> changed over time.</p><pre><code>filter @logStream like /^kube-apiserver-audit/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter @message like &apos;csr-kdkll&apos;
 | limit 10000
</code></pre><ul><li>2022-09-19 09:46:59 UTC+0: <code>kubelet/v1.22.12 (linux/amd64) kubernetes/1fc8914</code> created CSR <code>csr-kdkll</code><ul><li><code>user.username</code>: <code>system:node:ip-192-168-65-212.eu-west-1.compute.internal</code></li><li><code>user.uid</code>: <code>aws-iam-authenticator:111111111111:AROAYFMQSNSE3QYOZUIO6</code></li><li><code>responseObject.spec.signerName</code>: <code>kubernetes.io/kubelet-serving</code></li><li><code>responseObject.spec.usages</code>: <code>digital signature</code>, <code>key encipherment</code>, <code>server auth</code></li></ul></li><li>2022-09-19 09:46:59 UTC+0: <code>eks:certificate-controller</code> updated the CSR and approved it<ul><li><code>user.username</code>: <code>eks:certificate-controller</code></li><li><code>responseObject.status.conditions.0.message</code>: <code>Auto approving self kubelet server certificate after SubjectAccessReview.</code></li><li><code>responseObject.status.conditions.0.reason</code>: <code>AutoApproved</code></li></ul></li></ul><p>From the above we can see that this CSR carries the Kubernetes <code>signerName</code> <code>kubernetes.io/kubelet-serving</code>, so it is not auto-approved by <code>kube-controller-manager</code>; instead it is auto-approved by the EKS <code>eks:certificate-controller</code>.</p><p>Once the CSR is approved, the certificate is issued to the node; we can also find the <code>kubelet-server-current.pem</code> certificate under <code>/var/lib/kubelet/pki/</code>.</p><pre><code>[ec2-user@ip-192-168-65-212 ~]$ journalctl -u kubelet | grep &quot;certificate signing request&quot;
Sep 19 09:47:00 ip-192-168-65-212.eu-west-1.compute.internal kubelet[3347]: I0919 09:47:00.509710    3347 csr.go:262] certificate signing request csr-kdkll is approved, waiting to be issued
Sep 19 09:47:14 ip-192-168-65-212.eu-west-1.compute.internal kubelet[3347]: I0919 09:47:14.707915    3347 csr.go:258] certificate signing request csr-kdkll is issued
</code></pre><pre><code>[ec2-user@ip-192-168-65-212 ~]$ sudo ls -al /var/lib/kubelet/pki/
total 4
drwxr-xr-x 2 root root   86 Sep 14 16:39 .
drwxr-xr-x 8 root root  182 Sep 14 16:39 ..
-rw------- 1 root root 1370 Sep 14 16:39 kubelet-server-2022-09-14-16-39-07.pem
lrwxrwxrwx 1 root root   59 Sep 14 16:39 kubelet-server-current.pem -&gt; /var/lib/kubelet/pki/kubelet-server-2022-09-14-16-39-07.pem
</code></pre><h2 id="%E7%B8%BD%E7%B5%90">Summary</h2><p>After comparing the upstream Kubernetes documentation with the kube-apiserver, kube-controller-manager, and kubelet settings on EKS, we can confirm that EKS runs an <code>eks:certificate-controller</code> on the Control Plane side that auto-approves these CSRs and issues the certificates for the kubelet to use. With the issued certificate in hand, the node can join the EKS cluster successfully.</p><hr><p>The information above uses the logs provided by EKS to verify how upstream Kubernetes works; if anything in this post is incorrect, feel free to leave a comment or message me.</p><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">References</h2><ol><li>Add proposal for kubelet TLS bootstrap - <a href="https://github.com/kubernetes/kubernetes/pull/20439/files?ref=focaaby.com">https://github.com/kubernetes/kubernetes/pull/20439/files</a></li><li>TLS bootstrapping - <a href="https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/?ref=focaaby.com">https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</a></li><li>Certificate Signing Requests - <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/?ref=focaaby.com">https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/</a></li><li>AWS IAM Authenticator for Kubernetes - <a 
href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com">https://github.com/kubernetes-sigs/aws-iam-authenticator</a></li><li>kubelet - <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/?ref=focaaby.com">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</a></li><li>design: reduce scope of node on node object w.r.t ip #1982 - <a href="https://github.com/kubernetes/community/pull/1982?ref=focaaby.com">https://github.com/kubernetes/community/pull/1982</a></li><li>Using Node Authorization - <a href="https://kubernetes.io/docs/reference/access-authn-authz/node/?ref=focaaby.com">https://kubernetes.io/docs/reference/access-authn-authz/node/</a></li><li>Certificate rotation - <a href="https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/?ref=focaaby.com#certificate-rotation">https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why EKS worker nodes can join the EKS cluster automatically (Part 1)]]></title><description><![CDATA[<p>For a self-built Kubernetes cluster, we would need to provision Certificate Authority certificates for the <a href="https://kubernetes.io/docs/concepts/overview/components/?ref=focaaby.com#node-components">Kubernetes Components</a> [1] to use, as in <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md?ref=focaaby.com">Kubernetes The Hard Way</a> [2], or the following <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/?ref=focaaby.com"><code>kubeadm join</code></a> [3] command can be used.</p>]]></description><link>https://focaaby.com/why-eks-node-can-join-cluster-automatically-1/</link><guid 
isPermaLink="false">63273bd9d3f23c000139547c</guid><category><![CDATA[ironman-2022]]></category><category><![CDATA[eks]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Sun, 18 Sep 2022 15:40:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1618826411640-d6df44dd3f7a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDZ8fGNhdHxlbnwwfHx8fDE2NjM1MTU2MzA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1618826411640-d6df44dd3f7a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDZ8fGNhdHxlbnwwfHx8fDE2NjM1MTU2MzA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why EKS worker nodes can join the EKS cluster automatically (Part 1)"><p>For a self-built Kubernetes cluster, we would need to provision Certificate Authority certificates for the <a href="https://kubernetes.io/docs/concepts/overview/components/?ref=focaaby.com#node-components">Kubernetes Components</a> [1] to use, as in <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md?ref=focaaby.com">Kubernetes The Hard Way</a> [2]; alternatively, the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/?ref=focaaby.com"><code>kubeadm join</code></a> [3] command below can make a Kubernetes node join the cluster:</p><pre><code>$ kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
</code></pre><p>In an EKS environment, the term nodegroup refers to a set of Kubernetes nodes, and we can scale the nodes up with the <a href="https://eksctl.io/usage/managing-nodegroups/?ref=focaaby.com"><code>eksctl scale</code></a> [4] command.</p><pre><code>$ eksctl scale ng --nodes=3 --nodes-max=10 --name=ng1-public-ssh --cluster=ironman-2022
2022-09-18 14:14:18 [&#x2139;]  scaling nodegroup &quot;ng1-public-ssh&quot; in cluster ironman-2022
2022-09-18 14:14:18 [&#x2139;]  waiting for scaling of nodegroup &quot;ng1-public-ssh&quot; to complete
2022-09-18 14:14:48 [&#x2139;]  nodegroup successfully scaled
</code></pre><p>Within just a few minutes, and without any manual TLS certificate setup, the EKS node joins the cluster automatically.</p><pre><code>$ kubectl get node
NAME                                           STATUS   ROLES    AGE     VERSION
...
ip-192-168-40-16.eu-west-1.compute.internal    Ready    &lt;none&gt;   34s     v1.22.12-eks-ba74326
...
</code></pre><p>This post therefore explores why EKS nodes can join the cluster automatically, aiming to understand which default settings on an EKS node make that possible. It focuses on:</p><ul><li>What is an EKS node group?</li><li>What defaults does the EKS AMI bootstrap script configure?</li></ul><h2 id="eks-node-group">EKS node group</h2><p>A complete set of <a href="https://kubernetes.io/docs/concepts/overview/components/?ref=focaaby.com">Kubernetes Components</a> [1] can be divided into the Control Plane side and the Node side. In an EKS cluster, the worker-node resources that AWS assembles from services such as EC2 and Auto Scaling groups are called a nodegroup. According to the EKS node documentation, nodegroups come in two flavors: <a href="https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html?ref=focaaby.com">Managed node groups</a> [6] and <a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html?ref=focaaby.com">Self-managed nodes</a> [7].</p><p>The <code>eksctl</code> ClusterConfig from <a href="https://ithelp.ithome.com.tw/articles/10291924?ref=focaaby.com">Day 1</a> defines a Managed node group named <code>ng1-public-ssh</code>.</p><pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ironman-2022
  region: eu-west-1

managedNodeGroups:
  - name: &quot;ng1-public-ssh&quot;
    desiredCapacity: 2
    ssh:
      # Enable ssh access (via the admin container)
      allow: true
      publicKeyName: &quot;ironman-2022&quot;
    iam:
      withAddonPolicies:
        ebs: true
        fsx: true
        efs: true
        awsLoadBalancerController: true
        autoScaler: true

iam:
  withOIDC: true

cloudWatch:
  clusterLogging:
    enableTypes: [&quot;*&quot;]
</code></pre><p>After the environment is created, we have a Managed node group named <code>ng1-public-ssh</code>:</p><pre><code>$ eksctl get ng --cluster=ironman-2022
CLUSTER NODEGROUP       STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID        ASG NAME                    TYPE
ironman-2022 ng1-public-ssh  ACTIVE  2022-09-14T09:54:36Z    0               10              3                       m5.large        AL2_x86_64      eks-ng1-public-ssh-62c19db5-f965-bdb7-373a-147e04d9f124      managed
</code></pre><p>We can also view the nodegroup's details with the <a href="https://docs.aws.amazon.com/cli/latest/reference/eks/describe-nodegroup.html?ref=focaaby.com"><code>aws eks describe-nodegroup</code></a> [8] command:</p><pre><code>$ aws eks describe-nodegroup --cluster-name ironman-2022 --nodegroup-name ng1-public-ssh
{
    &quot;nodegroup&quot;: {
        &quot;nodegroupName&quot;: &quot;ng1-public-ssh&quot;,
        &quot;nodegroupArn&quot;: &quot;arn:aws:eks:eu-west-1:111111111111:nodegroup/ironman-2022/ng1-public-ssh/62c19db5-f965-bdb7-373a-147e04d9f124&quot;,
        &quot;clusterName&quot;: &quot;ironman-2022&quot;,
        &quot;version&quot;: &quot;1.22&quot;,
        &quot;releaseVersion&quot;: &quot;1.22.12-20220824&quot;,
        &quot;createdAt&quot;: &quot;2022-09-14T09:54:36.211000+00:00&quot;,
        &quot;modifiedAt&quot;: &quot;2022-09-18T13:45:07.322000+00:00&quot;,
        &quot;status&quot;: &quot;ACTIVE&quot;,
        &quot;capacityType&quot;: &quot;ON_DEMAND&quot;,
        &quot;scalingConfig&quot;: {
            &quot;minSize&quot;: 0,
            &quot;maxSize&quot;: 2,
            &quot;desiredSize&quot;: 2
        },
        &quot;instanceTypes&quot;: [
            &quot;m5.large&quot;
        ],
        &quot;subnets&quot;: [
            &quot;subnet-0e863f9fbcda592a3&quot;,
            &quot;subnet-00ebeb2e8903fb3f9&quot;,
            &quot;subnet-02d98be342d8ab2a7&quot;
        ],
        &quot;amiType&quot;: &quot;AL2_x86_64&quot;,
        &quot;nodeRole&quot;: &quot;arn:aws:iam::111111111111:role/eksctl-ironman-2022-nodegroup-ng1-publ-NodeInstanceRole-HN27OZ18JS6U&quot;,
        &quot;labels&quot;: {
            &quot;alpha.eksctl.io/cluster-name&quot;: &quot;ironman-2022&quot;,
            &quot;alpha.eksctl.io/nodegroup-name&quot;: &quot;ng1-public-ssh&quot;
        },
        &quot;resources&quot;: {
            &quot;autoScalingGroups&quot;: [
                {
                    &quot;name&quot;: &quot;eks-ng1-public-ssh-62c19db5-f965-bdb7-373a-147e04d9f124&quot;
                }
            ]
        },
        &quot;health&quot;: {
            &quot;issues&quot;: []
        },
        &quot;updateConfig&quot;: {
            &quot;maxUnavailable&quot;: 1
        },
        &quot;launchTemplate&quot;: {
            &quot;name&quot;: &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;,
            &quot;version&quot;: &quot;1&quot;,
            &quot;id&quot;: &quot;lt-000b4417a7baebbf8&quot;
        },
        &quot;tags&quot;: {
            &quot;aws:cloudformation:stack-name&quot;: &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;,
            &quot;alpha.eksctl.io/cluster-name&quot;: &quot;ironman-2022&quot;,
            &quot;alpha.eksctl.io/nodegroup-name&quot;: &quot;ng1-public-ssh&quot;,
            &quot;aws:cloudformation:stack-id&quot;: &quot;arn:aws:cloudformation:eu-west-1:111111111111:stack/eksctl-ironman-2022-nodegroup-ng1-public-ssh/2b5ad730-3413-11ed-9adb-0296792ac05b&quot;,
            &quot;auto-delete&quot;: &quot;never&quot;,
            &quot;eksctl.cluster.k8s.io/v1alpha1/cluster-name&quot;: &quot;ironman-2022&quot;,
            &quot;aws:cloudformation:logical-id&quot;: &quot;ManagedNodeGroup&quot;,
            &quot;alpha.eksctl.io/nodegroup-type&quot;: &quot;managed&quot;,
            &quot;alpha.eksctl.io/eksctl-version&quot;: &quot;0.111.0&quot;
        }
    }
}
</code></pre><p>From the output above, the nodegroup resource includes the node's IAM role, Auto Scaling Group name, Launch Template, and CloudFormation information. We can also see that the default AMI type is <code>AL2_x86_64</code>, one of the <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html?ref=focaaby.com">Amazon EKS optimized Amazon Linux AMIs</a> [9] that AWS maintains on top of Amazon Linux 2.</p><h2 id="amazon-eks-ami-build-specification">Amazon EKS AMI Build Specification</h2><p>Use the <a href="https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-attribute.html?ref=focaaby.com"><code>aws ec2 describe-instance-attribute</code></a> command to fetch the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html?ref=focaaby.com">EC2 userdata</a> [11] and decode it with base64:</p><pre><code>$ aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 --attribute userData | jq -r .UserData.Value | base64 -d
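</code></pre><p>The <code>base64 -d</code> stage of this pipeline simply reverses the base64 encoding that EC2 applies to stored userdata. The round trip can be tried in isolation (the sample string here is just the first header line of the decoded output shown below):</p><pre><code># Encode a sample string the way EC2 stores userdata, then decode it back.
encoded=$(printf &apos;MIME-Version: 1.0&apos; | base64)
printf &apos;%s&apos; &quot;$encoded&quot; | base64 -d
</code></pre><pre><code>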
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=&quot;//&quot;

--//
Content-Type: text/x-shellscript; charset=&quot;us-ascii&quot;
#!/bin/bash
set -ex
B64_CLUSTER_CA=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHV... SKIP    ...
... CA      ...
... CONTENT ...
kZtcnI5V1lZRExMUDdLCm0xVUJQUWdzTzRQQlREUjlaLzhpbnZDV0FiT0szM2Z6OVZqU3dBbjlhQ0lXbU5FY2dVMkFUWm1FN0N4WEUrbFkKOFhjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
API_SERVER_URL=https://1234567890ABCDEFGHIJKLMNOPQRSTUV.gr7.eu-west-1.eks.amazonaws.com
K8S_CLUSTER_DNS_IP=10.100.0.10
/etc/eks/bootstrap.sh ironman-2022 --kubelet-extra-args &apos;--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/nodegroup-name=ng1-public-ssh,alpha.eksctl.io/cluster-name=ironman-2022,eks.amazonaws.com/nodegroup-image=ami-0ec9e1727a24fb788,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=ng1-public-ssh,eks.amazonaws.com/sourceLaunchTemplateId=lt-000b4417a7baebbf8 --max-pods=29&apos; --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL --dns-cluster-ip $K8S_CLUSTER_DNS_IP --use-max-pods false

--//--
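The `--max-pods=29` value in the bootstrap line above comes from the VPC CNI IP math: each instance type supports a fixed number of ENIs and IPv4 addresses per ENI, and max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. A minimal sketch (the 3/10 figures correspond to an m5.large; the +2 is commonly explained as covering host-networked pods such as kube-proxy and aws-node):

```shell
# Sketch: reproduce the --max-pods value from ENI limits.
# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods() {
  echo $(( $1 * ($2 - 1) + 2 ))
}
max_pods 3 10   # m5.large: 3 ENIs x 10 IPs each -> 29
max_pods 3 6    # t3.medium: 3 ENIs x 6 IPs each -> 17
```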
</code></pre><p>From this output we can see that the bash script defines:</p><ul><li>The <code>B64_CLUSTER_CA</code>, <code>API_SERVER_URL</code>, and <code>K8S_CLUSTER_DNS_IP</code> environment variables.</li><li>An invocation of the script at the path <code>/etc/eks/bootstrap.sh</code>.</li></ul><p>The <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ami-build-scripts.html?ref=focaaby.com">Amazon EKS optimized Amazon Linux AMI build script</a> [12] documentation points to the <a href="https://github.com/awslabs/amazon-eks-ami?ref=focaaby.com">GitHub Amazon EKS AMI Build Specification</a> [13], which defines the EKS node bootstrap script and the information it takes: certificate data, the control plane endpoint, the cluster name, and so on.</p><p>The default userdata script sets the following parameters:</p><ul><li>The cluster name.</li><li><code>--kubelet-extra-args</code>: defines extra kubelet arguments, convenient for adding labels or taints.</li><li><code>--b64-cluster-ca</code>: the base64-encoded EKS cluster CA. It is obtained with the AWS CLI <code>aws eks describe-cluster</code> command and stored at <code>/etc/kubernetes/pki/ca.crt</code>.</li><li><code>--apiserver-endpoint</code>: the EKS cluster API server endpoint. Like <code>--b64-cluster-ca</code>, it comes from the <code>aws eks describe-cluster</code> command and is written into the kubeconfig file used by kubelet (<code>/var/lib/kubelet/kubeconfig</code>).</li><li><code>--dns-cluster-ip</code>: sets the DNS IP address used inside the EKS cluster in the kubelet configuration file <code>/etc/kubernetes/kubelet/kubelet-config.json</code>; this is exactly the default CoreDNS (<code>kube-dns</code>) Service Cluster IP. The default is <code>10.100.0.10</code>, but <code>172.20.0.10</code> is used when the node's address falls under the 10. prefix <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh?ref=focaaby.com#L462-L485">[14]</a>.</li></ul><pre><code>$ kubectl -n kube-system get svc
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.100.0.10   &lt;none&gt;        53/UDP,53/TCP   4d5h
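The 10.100.0.10 / 172.20.0.10 rule can be sketched as a tiny function. This is a simplification of the check bootstrap.sh performs on the node's own IP ranges [14]:

```shell
# Simplified sketch of bootstrap.sh's DNS cluster IP selection:
# nodes whose IP is inside 10.0.0.0/8 get 172.20.0.10, everyone else 10.100.0.10.
pick_dns_cluster_ip() {
  case "$1" in
    10.*) echo "172.20.0.10" ;;
    *)    echo "10.100.0.10" ;;
  esac
}
pick_dns_cluster_ip 10.0.51.57    # -> 172.20.0.10
pick_dns_cluster_ip 192.168.1.20  # -> 10.100.0.10
```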
</code></pre><ul><li><code>--use-max-pods</code>: controls whether the kubelet parameter <code>--max-pods</code> is set in the kubelet configuration file <code>/etc/kubernetes/kubelet/kubelet-config.json</code>.</li></ul><h2 id="%E7%B8%BD%E7%B5%90">Summary</h2><p>The above covers the script and kubelet parameter settings baked into the default EKS optimized Amazon Linux AMI before an EKS worker node starts. With this initial understanding, the next article will examine the kubelet TLS bootstrapping process to see what kubelet and the control plane need so that nodes can join the cluster automatically.</p><hr><p>The information above uses the logs EKS provides to verify how upstream Kubernetes works. If anything in this article is wrong, feel free to leave a comment or message me.</p><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">References</h2><ol><li>Kubernetes Components - <a href="https://kubernetes.io/docs/concepts/overview/components/?ref=focaaby.com#node-components">https://kubernetes.io/docs/concepts/overview/components/#node-components</a></li><li>Provisioning a CA and Generating TLS Certificates | Kubernetes The Hard Way - <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md?ref=focaaby.com">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md</a></li><li>kubeadm join - <a 
href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/?ref=focaaby.com">https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/</a></li><li>Managing nodegroups | eksctl - <a href="https://eksctl.io/usage/managing-nodegroups/?ref=focaaby.com">https://eksctl.io/usage/managing-nodegroups/</a></li><li>Amazon EKS nodes - <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html</a></li><li>Managed node groups - <a href="https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html</a></li><li>Self-managed nodes - <a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/worker.html</a></li><li>aws eks describe-nodegroup - <a href="https://docs.aws.amazon.com/cli/latest/reference/eks/describe-nodegroup.html?ref=focaaby.com">https://docs.aws.amazon.com/cli/latest/reference/eks/describe-nodegroup.html</a></li><li>Amazon EKS optimized Amazon Linux AMIs - <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html</a></li><li>aws ec2 describe-instance-attribute - <a href="https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-attribute.html?ref=focaaby.com">https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-attribute.html</a></li><li>Run commands on your Linux instance at launch - <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html?ref=focaaby.com">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html</a></li><li>Amazon EKS optimized Amazon Linux AMI build script - <a 
href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ami-build-scripts.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/eks-ami-build-scripts.html</a></li><li>Amazon EKS AMI Build Specification - <a href="https://github.com/awslabs/amazon-eks-ami?ref=focaaby.com">https://github.com/awslabs/amazon-eks-ami</a></li><li><a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh?ref=focaaby.com#L462-L485">https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L462-L485</a></li><li>TLS bootstrapping - <a href="https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/?ref=focaaby.com">https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why kubectl can access the EKS cluster]]></title><description><![CDATA[This article explores two questions: why a vanilla kubectl can access an EKS cluster directly, and how AWS CLI credentials grant access to an EKS cluster, in order to understand how kubectl integrates with IAM permissions to allow access to an EKS cluster.]]></description><link>https://focaaby.com/why-kubectl-can-access-eks-cluster-with-iam/</link><guid isPermaLink="false">63232e8cd3f23c0001395462</guid><category><![CDATA[ironman-2022]]></category><category><![CDATA[eks]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Fri, 16 Sep 2022 16:55:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1533738363-b7f9aef128ce?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGNhdHxlbnwwfHx8fDE2NjMxNzA0MTQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1533738363-b7f9aef128ce?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGNhdHxlbnwwfHx8fDE2NjMxNzA0MTQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why kubectl can access the EKS cluster"><p>According to the EKS documentation <a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html?ref=focaaby.com">Installing or updating kubectl</a> [1], as well as the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/?ref=focaaby.com">official Kubernetes kubectl installation docs</a> [2], you simply install the kubectl binary, place it on the appropriate Linux path, and set its permissions, and it is ready to use.</p><p>EKS, however, requires AWS CLI credentials as a prerequisite. So why can a vanilla kubectl access an EKS cluster directly, and how do AWS CLI credentials grant access to an EKS cluster? This article explores these two questions to understand how kubectl integrates with IAM permissions to allow access to an EKS cluster.</p><h2 id="aws-iam-authenticator">AWS IAM Authenticator</h2><p>The EKS documentation does not state a minimum kubectl version, but the <a href="https://github.com/weaveworks/eksctl?ref=focaaby.com"><code>eksctl</code></a> [3] GitHub page notes that it must be paired with the AWS CLI (<code>aws eks get-token</code>) or with the <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com">AWS IAM Authenticator for Kubernetes</a> [4].</p><p><a href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com">AWS IAM Authenticator for Kubernetes</a> is a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster.</p><p>For Kubernetes to support the IAM Authenticator in an AWS environment, five steps are required:</p><ol><li>Create an IAM role</li><li>Run the Authenticator server as a DaemonSet</li><li>Configure the Kubernetes API server to integrate with the Authenticator server</li><li>Create the IAM role/user to Kubernetes user/group mappings</li><li>Configure kubectl to use the authentication tokens provided by AWS IAM Authenticator</li></ol><p>Interestingly, the mapping of IAM roles/users to Kubernetes users/groups is exactly the EKS-style <code>kube-system/aws-auth</code> ConfigMap mentioned in the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html?ref=focaaby.com#aws-auth-users">EKS documentation</a> [5]. By default, eksctl automatically associates the EKS worker node IAM role with this <code>kube-system/aws-auth</code> ConfigMap at cluster creation, as shown below.</p><pre><code>$ kubectl -n kube-system get cm aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111111111111:role/eksctl-ironman-2022-nodegroup-ng1-publ-NodeInstanceRole-HN27OZ18JS6U
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: &quot;2022-09-14T09:56:19Z&quot;
  name: aws-auth
  namespace: kube-system
  resourceVersion: &quot;2011&quot;
  uid: f4cb3fae-17ef-421d-871c-5be15efbe73f
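Besides the node role mapping above, additional IAM identities can be granted access through the same ConfigMap. A hypothetical sketch follows; the account ID, user name, and group choice are placeholders, and the kubectl invocation is left commented out:

```shell
# Hypothetical patch adding an IAM user to the aws-auth ConfigMap.
cat <<'EOF' > aws-auth-patch.yaml
data:
  mapUsers: |
    - userarn: arn:aws:iam::111111111111:user/alice
      username: alice
      groups:
        - system:masters
EOF
# Apply it with:
#   kubectl -n kube-system patch cm aws-auth --patch-file aws-auth-patch.yaml
grep -c 'system:masters' aws-auth-patch.yaml
```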
</code></pre><p>Looking further at the <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html?ref=focaaby.com">EKS control plane logging</a> [6] documentation, we can confirm that EKS provides logs for the following control plane components:</p><ul><li>Kubernetes API server</li><li>Audit</li><li>Controller manager</li><li>Scheduler</li><li>Authenticator (authenticator) – Authenticator logs are unique to the EKS environment. They come from the control plane component EKS uses to perform Kubernetes <a href="https://kubernetes.io/docs/admin/authorization/rbac/?ref=focaaby.com">RBAC</a> authentication with IAM credentials.</li></ul><h2 id="%E9%A9%97%E8%AD%89">Verification</h2><h3 id="kubectl">kubectl</h3><p>In general, the kubectl configuration file is stored at <code>~/.kube/config</code>, and creating a cluster with the <code>eksctl</code> command generates this file automatically. In this kubeconfig, the <code>users.user.exec</code> stanza invokes the external AWS CLI command.</p><pre><code>$ cat ~/.kube/config
apiVersion: v1
clusters:
... 
... SKIPPING certificate-authority-data INFORMATION ... 
...
current-context: arn:aws:eks:eu-west-1:111111111111:cluster/ironman-2022
kind: Config
preferences: {}
users:
- name: cli@ironman-2022.eu-west-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - ironman-2022
      - --region
      - eu-west-1
      command: aws
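The exec flow above can be simulated offline: client-go simply runs the configured command and reads the ExecCredential JSON from its stdout. Here `fake-aws` is a stand-in for `aws eks get-token`, and the token value is made up:

```shell
# Stub credential plugin: prints the same ExecCredential shape client-go expects.
cat <<'EOF' > fake-aws
#!/bin/sh
echo '{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","status":{"token":"k8s-aws-v1.demo"}}'
EOF
chmod +x fake-aws
# client-go extracts .status.token and sends it as a Bearer header:
./fake-aws | jq -r .status.token
```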
</code></pre><p>In Kubernetes 1.10, the <code>k8s.io/client-go</code> library gained alpha support for <a href="https://github.com/kubernetes/kubernetes/pull/59495?ref=focaaby.com">exec-based credential providers</a> [7], and the <code>kubectl</code> command itself uses this same <code>k8s.io/client-go</code> library.</p><p>From this we also know that when a client authenticates to the API server via a client-go credential plugin [8], the flow is:</p><ul><li>The user issues a kubectl command</li><li>kubectl invokes the external credential plugin, which obtains a token from an external service</li><li>The credential plugin returns the token to the client-go client, which uses it to access the API server</li><li>The API server uses its webhook token authenticator component to send a TokenReview request to the external service</li><li>The external service verifies the signature on the token and returns the Kubernetes user name and groups</li></ul><p>So we can inspect this token with the <code>aws eks get-token</code> command:</p><pre><code>$ aws eks get-token --cluster ironman-2022 | jq .
{
  &quot;kind&quot;: &quot;ExecCredential&quot;,
  &quot;apiVersion&quot;: &quot;client.authentication.k8s.io/v1beta1&quot;,
  &quot;spec&quot;: {},
  &quot;status&quot;: {
    &quot;expirationTimestamp&quot;: &quot;2022-09-14T23:04:55Z&quot;,
    &quot;token&quot;: &quot;k8s-aws-v1.aHR0cHM6Ly9zdHMuZXUtd2VzdC0xLmFtYXpvbmF3cy5jb20vP0FjdGlvbj1HZXRDYWxsZXJJZGVudGl0eSZWZXJzaW9uPTIwMTEtMDYtMTUmWC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBWUZNUVNOU0U1SDVaVEpERSUyRjIwMjIwOTE0JTJGZXUtd2VzdC0xJTJGc3RzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyMjA5MTRUMjI1MDU1WiZYLUFtei1FeHBpcmVzPTYwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCUzQngtazhzLWF3cy1pZCZYLUFtei1TaWduYXR1cmU9MDU5ZjQwNzEzNTY0OGYwNjdlMWZjOThjZjhhYmY3YjdkMmIzNDgxMjE1ZWEzNGE1NTI4YzcyODczMmU1YjJkYw&quot;
  }
}
</code></pre><p>Every token, as <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com#api-authorization-from-outside-a-cluster">AWS IAM Authenticator</a> defines it, starts with the prefix string <code>k8s-aws-v1.</code> followed by a base64-encoded payload, so the base64 command recovers the STS presigned URL [9]:</p><pre><code>$ aws eks get-token --cluster ironman-2022 | jq -r .status.token | awk -F. '{print $2}' | base64 -d
https://sts.eu-west-1.amazonaws.com/?Action=GetCallerIdentity&amp;Version=2011-06-15&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Credential=AKIAYFMQSNSE5H5ZTJDE%2F20220914%2Feu-west-1%2Fsts%2Faws4_request&amp;X-Amz-Date=20220914T225747Z&amp;X-Amz-Expires=60&amp;X-Amz-SignedHeaders=host%3Bx-k8s-aws-id&amp;X-Amz-Signature=9688c99e7abd33807bc7b3af3542a59235e501a0d9c807100707410fc3da5d33
</code></pre><h3 id="control-plane">Control plane</h3><p>The following CloudWatch Logs Insights query shows the flags <code>kube-apiserver</code> was started with:</p><pre><code>filter @logStream not like /^kube-apiserver-audit/
 | filter @logStream like /^kube-apiserver-/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter @message like &quot;--authentication&quot;
 | limit 10000
</code></pre><p>The kube-apiserver logs show that <code>--authentication-token-webhook-config-file</code> is set, which means <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/?ref=focaaby.com#webhook-token-authentication">Webhook Token Authentication</a> [10] is in use. Unfortunately, we cannot look inside the managed API server component to inspect the referenced file <code>/etc/kubernetes/authenticator/apiserver-webhook-kubeconfig.yaml</code> itself.</p><pre><code>I0914 09:46:06.295581 10 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="7m0s"
I0914 09:46:06.295687 10 flags.go:59] FLAG: --authentication-token-webhook-config-file=&quot;/etc/kubernetes/authenticator/apiserver-webhook-kubeconfig.yaml&quot;
I0914 09:46:06.295703 10 flags.go:59] FLAG: --authentication-token-webhook-version=&quot;v1beta1&quot;
</code></pre><p>In addition, querying the <code>authenticator</code> log stream the same way through CloudWatch Logs Insights shows the matching setting in its logs:</p><pre><code>filter @logStream like /^authenticator/
 | fields @timestamp, @message
 | sort @timestamp asc
 | filter @message like &quot;--authentication-token-webhook-config-file&quot;
 | limit 10000
</code></pre><pre><code>time=&quot;2022-09-14T09:46:01Z&quot; level=info msg=&quot;reconfigure your apiserver with `--authentication-token-webhook-config-file=/etc/kubernetes/authenticator/apiserver-webhook-kubeconfig.yaml` to enable (assuming default hostPath mounts)&quot;
</code></pre><h2 id="%E7%B8%BD%E7%B5%90">Summary</h2><p>Vanilla kubectl uses the <code>k8s.io/client-go</code> library, which has supported exec commands with external credentials since Kubernetes 1.10. On the API server side, webhook token authentication is configured so that AWS IAM Authenticator, integrated with the IAM STS service, verifies the IAM caller; after the STS endpoint validates the identity, the authenticator maps it to Kubernetes RBAC users and groups via the <code>kube-system/aws-auth</code> ConfigMap and returns the result to the API server.</p><p>Knowing that the <code>aws eks get-token</code> output is a Kubernetes Bearer token, we can also access the EKS cluster endpoint directly with the curl command and this token.</p><pre><code>$ TOKEN=$(aws eks get-token --cluster ironman-2022 | jq -r .status.token)
$ APISERVER=$(aws eks describe-cluster --name ironman-2022 | jq -r .cluster.endpoint) 
$ curl $APISERVER/api --header &quot;Authorization: Bearer $TOKEN&quot; --insecure
{
  &quot;kind&quot;: &quot;APIVersions&quot;,
  &quot;versions&quot;: [
    &quot;v1&quot;
  ],
  &quot;serverAddressByClientCIDRs&quot;: [
    {
      &quot;clientCIDR&quot;: &quot;0.0.0.0/0&quot;,
      &quot;serverAddress&quot;: &quot;ip-10-0-51-57.eu-west-1.compute.internal:443&quot;
    }
  ]
}
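Server side, the API server wraps a Bearer token like this one in a TokenReview object before forwarding it to the authenticator. That exchange can be sketched with static data; the token value here is a placeholder, and v1beta1 matches the `--authentication-token-webhook-version` flag seen in the logs:

```shell
# The TokenReview body the API server POSTs to the webhook authenticator.
TOKEN='k8s-aws-v1.demo'
REVIEW=$(printf '{"apiVersion":"authentication.k8s.io/v1beta1","kind":"TokenReview","spec":{"token":"%s"}}' "$TOKEN")
echo "$REVIEW" | jq -r .kind
```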
</code></pre><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">References</h2><ol><li>Installing or updating kubectl - <a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html</a></li><li>Install and Set Up kubectl on Linux | Kubernetes Documentation - <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/?ref=focaaby.com">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/</a></li><li>eksctl - The official CLI for Amazon EKS - <a href="https://github.com/weaveworks/eksctl?ref=focaaby.com">https://github.com/weaveworks/eksctl</a></li><li>AWS IAM Authenticator for Kubernetes - <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com">https://github.com/kubernetes-sigs/aws-iam-authenticator</a></li><li>Enabling IAM user and role access to your cluster - Add IAM users or roles to your Amazon EKS cluster - <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html?ref=focaaby.com#aws-auth-users">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#aws-auth-users</a></li><li>Amazon EKS control plane logging - <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html</a></li><li>client-go: add an exec-based client auth provider #59495 - <a href="https://github.com/kubernetes/kubernetes/pull/59495?ref=focaaby.com">https://github.com/kubernetes/kubernetes/pull/59495</a></li><li>client-go credential plugins | Authenticating - <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/?ref=focaaby.com#client-go-credential-plugins">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins</a></li><li><a 
href="https://github.com/kubernetes-sigs/aws-iam-authenticator?ref=focaaby.com#api-authorization-from-outside-a-cluster">https://github.com/kubernetes-sigs/aws-iam-authenticator#api-authorization-from-outside-a-cluster</a></li><li>Webhook Token Authentication | Authenticating - <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/?ref=focaaby.com#webhook-token-authentication">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Creating an EKS Cluster]]></title><description><![CDATA[This series explores how some interesting EKS features are implemented on the AWS platform, and how EKS integrates upstream Kubernetes functionality with AWS services.]]></description><link>https://focaaby.com/ironman2022-create-eks-cluster/</link><guid isPermaLink="false">6321e735d3f23c00013953fa</guid><category><![CDATA[ironman-2022]]></category><category><![CDATA[eks]]></category><category><![CDATA[aws]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Thu, 15 Sep 2022 16:55:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1584290867415-527a8475726d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDY1fHxjYXR8ZW58MHx8fHwxNjYzMTcwNDQz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="%E5%89%8D%E8%A8%80">Foreword</h2><img src="https://images.unsplash.com/photo-1584290867415-527a8475726d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDY1fHxjYXR8ZW58MHx8fHwxNjYzMTcwNDQz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Creating an EKS Cluster"><p>According to the official Kubernetes announcement [1], Kubernetes is released roughly once every 15 weeks; in other words, a new version ships nearly every 4 months. The Kubernetes platforms offered by cloud providers must keep updating as well, with a new release roughly every three to five months, as the GKE [2] and EKS [3] release cadences show.</p><p>Accordingly, we can observe that GKE [2] and EKS [4][5] update their documentation frequently, introducing new features from upstream Kubernetes and integrating them with cloud provider capabilities.</p><p>This series therefore explores how some interesting EKS features are implemented on the AWS platform, focusing on things the official EKS documentation does not mention, such as how kubectl uses IAM user permissions with EKS and how EKS worker nodes join the cluster automatically, to ultimately understand how EKS integrates upstream Kubernetes functionality with AWS services.</p><h2 id="%E5%BB%BA%E7%AB%8B-eks-cluster-%E7%92%B0%E5%A2%83">Setting up the EKS Cluster environment</h2><p>This series manages the EKS Cluster with <code>eksctl</code>. The steps to create the EKS cluster, and the command versions used, are as follows.</p><ol><li>Launch an EC2 instance using the Amazon Linux 2 Kernel 5.10 AMI as the workstation for managing the EKS Cluster.</li><li>Install the AWS CLI [6].</li></ol><pre><code>$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 44.8M  100 44.8M    0     0  69.0M      0 --:--:-- --:--:-- --:--:-- 68.9M

$ unzip awscliv2.zip

$ sudo ./aws/install

$ aws --version
aws-cli/2.4.27 Python/3.8.8 Linux/4.14.290-217.505.amzn2.x86_64 exe/x86_64.amzn.2 prompt/off
</code></pre><p>3. Install the <code>eksctl</code> command [7].</p><pre><code>$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

$ sudo mv /tmp/eksctl /usr/local/bin

$ eksctl version
0.111.0
</code></pre><p>4. The current <code>eksctl</code> default Kubernetes version is 1.22 [8], so install kubectl 1.22 [9].</p><pre><code>$ curl -LO https://dl.k8s.io/release/v1.22.13/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0    836      0 --:--:-- --:--:-- --:--:--   841
100 44.7M  100 44.7M    0     0  82.8M      0 --:--:-- --:--:-- --:--:-- 82.8M

$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

$ kubectl version
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run &apos;aws eks update-kubeconfig&apos; to update.
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.13&quot;, GitCommit:&quot;a43c0904d0de10f92aa3956c74489c45e6453d6e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-08-17T18:28:56Z&quot;, GoVersion:&quot;go1.16.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22+&quot;, GitVersion:&quot;v1.22.12-eks-6d3986b&quot;, GitCommit:&quot;dade57bbf0e318a6492808cf6e276ea3956aecbf&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-07-20T22:06:30Z&quot;, GoVersion:&quot;go1.16.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
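The "Kubeconfig user entry is using deprecated API version" warning above refers to the exec stanza in `~/.kube/config`. An offline sketch of the check, where `demo-kubeconfig` is a stand-in for the real file:

```shell
# Detect which exec-plugin API version a kubeconfig pins.
cat <<'EOF' > demo-kubeconfig
users:
- name: cli
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
EOF
grep -o 'client\.authentication\.k8s\.io/v1[a-z0-9]*' demo-kubeconfig
# Regenerating the file (e.g. `aws eks update-kubeconfig --name ironman-2022`)
# rewrites the entry with a newer apiVersion.
```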
</code></pre><p>5. Configure AWS CLI credentials [10]. eksctl will use this IAM user as the EKS Cluster creator and create the EKS Cluster, nodegroups, and other AWS resources. To scope the IAM user down to least privilege, see the eksctl minimum IAM policies documentation [11].</p><pre><code>$ aws configure
</code></pre><p>6. Create an <code>eksctl</code> ClusterConfig file and enable control plane logs [12].</p><pre><code>$ cat ./ironman-2022.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ironman-2022
  region: eu-west-1

managedNodeGroups:
  - name: &quot;ng1-public-ssh&quot;
    desiredCapacity: 2
    ssh:
      # Enable ssh access (via the admin container)
      allow: true
      publicKeyName: &quot;ironman-2022&quot;
    iam:
      withAddonPolicies:
        ebs: true
        fsx: true
        efs: true
        awsLoadBalancerController: true
        autoScaler: true

iam:
  withOIDC: true

cloudWatch:
  clusterLogging:
    enableTypes: [&quot;*&quot;]

$ eksctl create cluster -f ./ironman-2022.yaml
2022-09-14 09:39:32 [&#x2139;]  eksctl version 0.111.0
2022-09-14 09:39:32 [&#x2139;]  using region eu-west-1
2022-09-14 09:39:32 [&#x2139;]  setting availability zones to [eu-west-1a eu-west-1c eu-west-1b]
2022-09-14 09:39:32 [&#x2139;]  subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
2022-09-14 09:39:32 [&#x2139;]  subnets for eu-west-1c - public:192.168.32.0/19 private:192.168.128.0/19
2022-09-14 09:39:32 [&#x2139;]  subnets for eu-west-1b - public:192.168.64.0/19 private:192.168.160.0/19
2022-09-14 09:39:32 [&#x2139;]  nodegroup &quot;ng1-public-ssh&quot; will use &quot;&quot; [AmazonLinux2/1.22]
2022-09-14 09:39:32 [&#x2139;]  using EC2 key pair &quot;ironman-2022&quot;
2022-09-14 09:39:32 [&#x2139;]  using Kubernetes version 1.22
2022-09-14 09:39:32 [&#x2139;]  creating EKS cluster &quot;ironman-2022&quot; in &quot;eu-west-1&quot; region with managed nodes
2022-09-14 09:39:32 [&#x2139;]  1 nodegroup (ng1-public-ssh) was included (based on the include/exclude rules)
2022-09-14 09:39:32 [&#x2139;]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2022-09-14 09:39:32 [&#x2139;]  will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2022-09-14 09:39:32 [&#x2139;]  if you encounter any issues, check CloudFormation console or try &apos;eksctl utils describe-stacks --region=eu-west-1 --cluster=ironman-2022&apos;
2022-09-14 09:39:32 [&#x2139;]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster &quot;ironman-2022&quot; in &quot;eu-west-1&quot;
2022-09-14 09:39:32 [&#x2139;]  configuring CloudWatch logging for cluster &quot;ironman-2022&quot; in &quot;eu-west-1&quot; (enabled types: api, audit, authenticator, controllerManager, scheduler &amp; no types disabled)
2022-09-14 09:39:32 [&#x2139;]
2 sequential tasks: { create cluster control plane &quot;ironman-2022&quot;,
    2 sequential sub-tasks: {
        4 sequential sub-tasks: {
            wait for control plane to become ready,
            associate IAM OIDC provider,
            2 sequential sub-tasks: {
                create IAM role for serviceaccount &quot;kube-system/aws-node&quot;,
                create serviceaccount &quot;kube-system/aws-node&quot;,
            },
            restart daemonset &quot;kube-system/aws-node&quot;,
        },
        create managed nodegroup &quot;ng1-public-ssh&quot;,
    }
}
2022-09-14 09:39:32 [&#x2139;]  building cluster stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:39:33 [&#x2139;]  deploying stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:40:03 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:40:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:41:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:42:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:43:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:44:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:45:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:46:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:47:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:48:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:49:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:50:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:51:33 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-cluster&quot;
2022-09-14 09:53:34 [&#x2139;]  building iamserviceaccount stack &quot;eksctl-ironman-2022-addon-iamserviceaccount-kube-system-aws-node&quot;
2022-09-14 09:53:35 [&#x2139;]  deploying stack &quot;eksctl-ironman-2022-addon-iamserviceaccount-kube-system-aws-node&quot;
2022-09-14 09:53:35 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-addon-iamserviceaccount-kube-system-aws-node&quot;
2022-09-14 09:54:05 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-addon-iamserviceaccount-kube-system-aws-node&quot;
2022-09-14 09:54:05 [&#x2139;]  serviceaccount &quot;kube-system/aws-node&quot; already exists
2022-09-14 09:54:05 [&#x2139;]  updated serviceaccount &quot;kube-system/aws-node&quot;
2022-09-14 09:54:05 [&#x2139;]  daemonset &quot;kube-system/aws-node&quot; restarted
2022-09-14 09:54:05 [&#x2139;]  building managed nodegroup stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:54:05 [&#x2139;]  deploying stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:54:05 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:54:35 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:55:18 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:56:04 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:57:24 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:59:10 [&#x2139;]  waiting for CloudFormation stack &quot;eksctl-ironman-2022-nodegroup-ng1-public-ssh&quot;
2022-09-14 09:59:10 [&#x2139;]  waiting for the control plane availability...
2022-09-14 09:59:12 [&#x2714;]  saved kubeconfig as &quot;/home/ec2-user/.kube/config&quot;
2022-09-14 09:59:12 [&#x2139;]  no tasks
2022-09-14 09:59:12 [&#x2714;]  all EKS cluster resources for &quot;ironman-2022&quot; have been created
2022-09-14 09:59:12 [&#x2139;]  nodegroup &quot;ng1-public-ssh&quot; has 2 node(s)
2022-09-14 09:59:12 [&#x2139;]  node &quot;ip-192-168-29-179.eu-west-1.compute.internal&quot; is ready
2022-09-14 09:59:12 [&#x2139;]  node &quot;ip-192-168-78-165.eu-west-1.compute.internal&quot; is ready
2022-09-14 09:59:12 [&#x2139;]  waiting for at least 2 node(s) to become ready in &quot;ng1-public-ssh&quot;
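
# With the cluster up, the Amazon VPC CNI plugin should already be running as
# the aws-node DaemonSet in kube-system, with no manual CNI installation:
$ kubectl -n kube-system get ds aws-node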
</code></pre><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">References</h2><ol><li>Kubernetes Release Cadence Change: Here&#x2019;s What You Need To Know - <a href="https://kubernetes.io/blog/2021/07/20/new-kubernetes-release-cadence?ref=focaaby.com">https://kubernetes.io/blog/2021/07/20/new-kubernetes-release-cadence</a></li><li>Amazon EKS Kubernetes versions - Amazon EKS Kubernetes release calendar - <a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html?ref=focaaby.com#kubernetes-release-calendar">https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar</a></li><li>GKE release notes - <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes?ref=focaaby.com">https://cloud.google.com/kubernetes-engine/docs/release-notes</a></li><li>Document history for Amazon EKS - <a href="https://docs.aws.amazon.com/eks/latest/userguide/doc-history.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/doc-history.html</a></li><li>Amazon EKS User Guide - <a href="https://github.com/awsdocs/amazon-eks-user-guide?ref=focaaby.com">https://github.com/awsdocs/amazon-eks-user-guide</a></li><li>Installing or updating the latest version of the AWS CLI - <a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html?ref=focaaby.com">https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html</a></li><li>Installing or updating eksctl - <a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html</a></li><li>Introduction | eksctl - <a href="https://eksctl.io/introduction/?ref=focaaby.com">https://eksctl.io/introduction/</a></li><li>Install and Set Up kubectl on Linux - <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/?ref=focaaby.com">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/</a></li><li>Configuration
and credential file settings - <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html?ref=focaaby.com">https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html</a></li><li>Minimum IAM policies | eksctl - <a href="https://eksctl.io/usage/minimum-iam-policies/?ref=focaaby.com#minimum-iam-policies">https://eksctl.io/usage/minimum-iam-policies/#minimum-iam-policies</a></li><li>Amazon EKS control plane logging - <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html</a></li></ol>]]></content:encoded></item><item><title><![CDATA[How to control your external display brightness on macOS and Arch Linux]]></title><description><![CDATA[This article shows how to adjust the brightness of an external display on macOS and Arch Linux.]]></description><link>https://focaaby.com/how-to-control-your-external-display-brightness-on-macos-and-arch-linux/</link><guid isPermaLink="false">622f4a657894f50001237ac7</guid><category><![CDATA[hardware]]></category><category><![CDATA[macOS]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Mon, 14 Mar 2022 14:05:33 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1588200908342-23b585c03e26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fG1vbml0b3J8ZW58MHx8fHwxNjQ3MjY2NzUx&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="preface">Preface</h2><img src="https://images.unsplash.com/photo-1588200908342-23b585c03e26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fG1vbml0b3J8ZW58MHx8fHwxNjQ3MjY2NzUx&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="How to control your external display brightness on macOS and Arch Linux"><p>Laptops almost always ship with a built-in control for adjusting screen brightness. When an external display is attached, however, you can usually only adjust it through the display's own hardware buttons; the laptop's built-in brightness controls do not reach the external display.</p><blockquote>Is there a way to control an external display's brightness and volume directly from the laptop?</blockquote><h2 id="ddcci">DDC/CI</h2><p>The first question is whether the operating system exposes an interface for talking to an external display. According to Wikipedia, most displays released after 2016 support the <a href="https://en.wikipedia.org/wiki/Display_Data_Channel?ref=focaaby.com#DDC.2FCI">Display Data Channel Command Interface (DDC/CI)</a> [1]: a signaling interface between the computer and the display that lets software adjust settings such as the external display's brightness.</p><h2 id="macos">macOS</h2><p>On macOS, I use <a href="https://github.com/MonitorControl/MonitorControl?ref=focaaby.com">MonitorControl</a> [2] to adjust the external display's brightness. Testing with an LG 32ML600M external display, you can see the display's volume being adjusted as well.</p><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/macos-ddc-volume.png" class="kg-image" alt="How to control your external display brightness on macOS and Arch Linux" loading="lazy" width="1280" height="960" srcset="https://focaaby.com/content/images/size/w600/2022/03/macos-ddc-volume.png 600w, https://focaaby.com/content/images/size/w1000/2022/03/macos-ddc-volume.png 1000w, https://focaaby.com/content/images/2022/03/macos-ddc-volume.png 1280w" sizes="(min-width: 720px) 720px"></figure><p>In addition, macOS already provides brightness control for the built-in display, and since <a href="https://github.com/MonitorControl/MonitorControl/discussions/596?ref=focaaby.com">MonitorControl 4.0.0</a> [3] the brightness of the built-in and external displays can be kept in sync.</p><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/macos-ddc-monitorcontrol-sync.png" class="kg-image" alt="How to control your external display brightness on macOS and Arch Linux" loading="lazy" width="1460" height="1104" srcset="https://focaaby.com/content/images/size/w600/2022/03/macos-ddc-monitorcontrol-sync.png 600w, https://focaaby.com/content/images/size/w1000/2022/03/macos-ddc-monitorcontrol-sync.png 1000w, https://focaaby.com/content/images/2022/03/macos-ddc-monitorcontrol-sync.png 1460w" sizes="(min-width: 720px) 720px"></figure><p>One caveat: if you usually keep your MacBook's lid closed, the ambient light sensor cannot take readings, so the external display's brightness will not adapt automatically.</p><h2 id="linuxarch-linux">Linux - Arch Linux</h2><p>On Linux you can integrate with <a href="https://www.kernel.org/doc/Documentation/i2c/dev-interface?ref=focaaby.com">i2c-dev</a> [4] or <a href="https://gitlab.com/ddcci-driver-linux/ddcci-driver-linux?ref=focaaby.com">ddcci-driver-linux</a> [5].</p><ul><li><a href="https://www.kernel.org/doc/Documentation/i2c/dev-interface?ref=focaaby.com">i2c-dev</a>: already supported by the mainline kernel; you only need an entry under <code>/etc/modules-load.d</code> so the module is loaded at boot. See the <a href="https://www.ddcutil.com/kernel_module/?ref=focaaby.com">ddcutil kernel module setup</a> [6].</li></ul><p>The end goal is to adjust the external display from the desktop environment. Here is my Arch Linux environment:</p><pre><code>$ uname -a
Linux nuc-arch 5.16.14-arch1-1 #1 SMP PREEMPT Fri, 11 Mar 2022 17:40:36 +0000 x86_64 GNU/Linux

$ plasmashell -v     
plasmashell 5.24.3

$ kf5-config --version
Qt: 5.15.3
KDE Frameworks: 5.92.0
kf5-config: 1.0
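
# Before involving KDE, you can check that the display answers DDC/CI queries
# with ddcutil (assuming ddcutil is installed and i2c-dev is loaded);
# VCP feature code 10 is brightness:
$ ddcutil detect
$ ddcutil getvcp 10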
</code></pre><p>Since my Arch Linux setup runs the KDE Plasma environment, Plasma's <a href="https://docs.kde.org/stable5/en/powerdevil/kcontrol/powerdevil/index.html?ref=focaaby.com">PowerDevil</a> [7] module provides power-management controls, covering screen brightness, device power, and so on.</p><p>However, the <a href="https://github.com/KDE/powerdevil/blob/master/CMakeLists.txt?ref=focaaby.com#L67-L78">PowerDevil package default</a> [8] disables <code>ddcutil</code> (<code>-DHAVE_DDCUTIL=Off</code>), so you would have to compile it yourself to enable <code>ddcutil</code> support.</p><p>Fortunately, an Arch user maintains the <a href="https://aur.archlinux.org/packages/powerdevil-ddcutil?ref=focaaby.com">powerdevil-ddcutil AUR</a> package [9]:</p><pre><code>$ sudo gpasswd -a $USER i2c
$ sudo sh -c &apos;echo i2c-dev &gt; /etc/modules-load.d/ddc.conf&apos;
$ sudo reboot
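
# After the reboot, confirm the i2c-dev module is loaded and that brightness
# (VCP feature code 10) can be set over DDC/CI, independently of Plasma:
$ lsmod | grep i2c_dev
$ ddcutil setvcp 10 70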
</code></pre><p>After rebooting, KDE Plasma can adjust the external display directly:</p><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/kde-ddc.png" class="kg-image" alt="How to control your external display brightness on macOS and Arch Linux" loading="lazy" width="1920" height="1080" srcset="https://focaaby.com/content/images/size/w600/2022/03/kde-ddc.png 600w, https://focaaby.com/content/images/size/w1000/2022/03/kde-ddc.png 1000w, https://focaaby.com/content/images/size/w1600/2022/03/kde-ddc.png 1600w, https://focaaby.com/content/images/2022/03/kde-ddc.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="references">References</h2><ol><li><a href="https://en.wikipedia.org/wiki/Display_Data_Channel?ref=focaaby.com#DDC.2FCI">Display Data Channel Command Interface (DDC/CI)</a></li><li><a href="https://github.com/MonitorControl/MonitorControl?ref=focaaby.com">MonitorControl</a></li><li><a href="https://github.com/MonitorControl/MonitorControl/discussions/596?ref=focaaby.com">MonitorControl 4.0.0</a></li><li><a href="https://www.kernel.org/doc/Documentation/i2c/dev-interface?ref=focaaby.com">i2c-dev</a></li><li><a href="https://gitlab.com/ddcci-driver-linux/ddcci-driver-linux?ref=focaaby.com">ddcci-driver-linux</a></li><li><a href="https://www.ddcutil.com/kernel_module/?ref=focaaby.com">ddcutil kernel module setup</a></li><li><a href="https://docs.kde.org/stable5/en/powerdevil/kcontrol/powerdevil/index.html?ref=focaaby.com">PowerDevil</a></li><li><a href="https://github.com/KDE/powerdevil/blob/master/CMakeLists.txt?ref=focaaby.com#L67-L78">PowerDevil package default</a></li><li><a href="https://aur.archlinux.org/packages/powerdevil-ddcutil?ref=focaaby.com">powerdevil-ddcutil AUR</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why logging systems collect container logs from a specific directory]]></title><description><![CDATA[This article introduces the common Kubernetes logging architectures and explores why, in the node-agent pattern, logs are collected from the /var/log/containers/ directory rather than any other directory.]]></description><link>https://focaaby.com/why-logging-system-collects-container-logs-from-specific-directory/</link><guid isPermaLink="false">622229937894f500012379db</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[logging]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Fri, 04 Mar 2022 16:00:02 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1456255985051-dcbc4f615823?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDQ0fHxsb2d8ZW58MHx8fHwxNjQ2NDA5NDk3&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="logging-architecture">Logging Architecture</h2><img src="https://images.unsplash.com/photo-1456255985051-dcbc4f615823?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDQ0fHxsb2d8ZW58MHx8fHwxNjQ2NDA5NDk3&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why logging systems collect container logs from a specific directory"><p>According to the Kubernetes Logging Architecture [1] documentation, logging is roughly divided into two categories: node-level and cluster-level.</p><h3 id="logging-at-the-node-level">Logging at the node level</h3><p>At the node level, logs are rotated either by the <code>logrotate</code> command or by the kubelet <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/?ref=focaaby.com#kubelet-config-k8s-io-v1beta1-KubeletConfiguration">containerLogMaxSize and containerLogMaxFiles parameters</a>.</p><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/image.png" class="kg-image" alt="Why logging systems collect container logs from a specific directory" loading="lazy" width="500" height="300"></figure><h3 id="cluster-level-logging-architectures">Cluster-level logging architectures</h3><p>At the cluster level, logging is implemented in one of the following ways:</p><h4 id="using-a-node-logging-agent">Using a node logging agent</h4><p>Deploy a logging agent on every node to collect application logs and ship them back to the logging system. Fluentd and Grafana Loki, for example, are close to this model: deployed on every node as a DaemonSet, which amounts to one agent process collecting each node's logs. Note that if the logging system itself also depends on the Kubernetes environment, it may go down whenever Kubernetes is not working properly, so pay attention to environment dependencies and single points of failure.</p><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/image-12.png" class="kg-image" alt="Why logging systems collect container logs from a specific directory" loading="lazy" width="500" height="350"></figure><h4 id="using-a-sidecar-container-with-the-logging-agent">Using a sidecar container with the logging agent</h4><ul><li>Streaming sidecar container: in general, a container's stdout and stderr are written to the container runtime's default directory, but some applications do not write to stdout or stderr at all. If a legacy system writes logs to a fixed directory, a sidecar container can read that location with the <code>tail</code> command and stream it, so the logging agent can collect the logs into the logging system.</li></ul><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/image-13.png" class="kg-image" alt="Why logging systems collect container logs from a specific directory" loading="lazy" width="500" height="400"></figure><ul><li>Sidecar container with a logging agent: collect the application logs and ship them to the logging system directly from a sidecar container. For example, if you only need to collect a specific application's logs, you can write your own script or use an HTTP API to push the logs to the logging system.</li></ul><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/image-16.png" class="kg-image" alt="Why logging systems collect container logs from a specific directory" loading="lazy" width="500" height="250"></figure><ul><li>Exposing logs directly from the application: the application pushes its logs to the logging system itself.</li></ul><figure class="kg-card kg-image-card"><img src="https://focaaby.com/content/images/2022/03/image-17.png" class="kg-image" alt="Why logging systems collect container logs from a specific directory" loading="lazy" width="500" height="150"></figure><h2 id="why-we-collects-logs-from-varlogcontainers-directory">Why we collect logs from the <code>/var/log/containers/</code> directory</h2><p>So how do these logging systems actually collect application logs on each node?</p><p>Take the official EKS <a href="https://github.com/aws-samples/amazon-cloudwatch-container-insights/tree/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring?ref=focaaby.com">CloudWatch Agent for Container Insights Kubernetes Monitoring</a> solution as an example; it offers two logging systems, <a href="https://www.fluentd.org/?ref=focaaby.com">Fluentd</a> and Fluent Bit.</p><p>Interestingly, these two different logging systems both collect application logs from the same <code>/var/log/containers/*.log</code> path. Their default configs are shown below:</p><h3 id="flutend">Fluentd</h3><p>The default Fluentd <a href="https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluentd/fluentd.yaml?ref=focaaby.com#L68">containers.conf</a> collects <code>/var/log/containers/*.log</code>:</p><pre><code>  containers.conf: |
    &lt;source&gt;
      @type tail
      @id in_tail_container_logs
      @label @containers
      path /var/log/containers/*.log
      exclude_path [&quot;/var/log/containers/cloudwatch-agent*&quot;, &quot;/var/log/containers/fluentd*&quot;]
      pos_file /var/log/fluentd-containers.log.pos
      tag *
      read_from_head true
      &lt;parse&gt;
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      &lt;/parse&gt;
    &lt;/source&gt;
    ...
    ...
</code></pre><h3 id="fluent-bit">Fluent Bit</h3><p>The default Fluent Bit <a href="https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml?ref=focaaby.com#L62-L78">application-log.conf</a> collects <code>/var/log/containers/*.log</code>:</p><pre><code>  application-log.conf: |
    [INPUT]
        Name                tail
        Tag                 application.*
        Exclude_Path        /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path                /var/log/containers/*.log
        Docker_Mode         On
        Docker_Mode_Flush   5
        Docker_Mode_Parser  container_firstline
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       50MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}
        ...</code></pre><blockquote class="kg-blockquote-alt">Does a Kubernetes cluster write container logs to the <code>/var/log/containers</code> directory by itself?</blockquote><p>According to the <a href="https://github.com/kubernetes/design-proposals-archive/blob/main/node/kubelet-cri-logging.md?ref=focaaby.com">Kubernetes Proposals</a>, this is by design for cluster-level log collection: the kubelet creates soft links from the container runtime's (e.g. Docker's) log files into the <code>/var/log/containers</code> directory, naming each log file in the <code>&lt;pod_name&gt;_&lt;pod_namespace&gt;_&lt;container_name&gt;-&lt;container_id&gt;.log</code> format.</p><pre><code>In a production cluster, logs are usually collected, aggregated, and shipped to a remote store where advanced analysis/search/archiving functions are supported. In kubernetes, the default cluster-addons includes a per-node log collection daemon, `fluentd`. To facilitate the log collection, kubelet creates symbolic links to all the docker containers logs under `/var/log/containers` with pod and container metadata embedded in the filename.

	`/var/log/containers/&lt;pod_name&gt;_&lt;pod_namespace&gt;_&lt;container_name&gt;-&lt;container_id&gt;.log`

The fluentd daemon watches the `/var/log/containers/` directory and extract the metadata associated with the log from the path. Note that this integration requires kubelet to know where the container runtime stores the logs, and will not be directly applicable to CRI.
</code></pre><blockquote>Note: as of <a href="https://www.cncf.io/blog/2021/04/12/enhancing-the-kubernetes-enhancements-process/?ref=focaaby.com">April 2021</a>, Kubernetes proposals have been migrated to the <a href="https://github.com/kubernetes?ref=focaaby.com" rel="author">kubernetes</a>/<a href="https://github.com/kubernetes/enhancements?ref=focaaby.com"><strong>enhancements</strong></a><strong> GitHub repository.</strong></blockquote><h2 id="summary">Summary</h2><p>So today, cluster-level architectures that use a node logging agent all collect logs from the <code>/var/log/containers</code> directory; it has become the common Kubernetes convention for collecting container application logs.</p><h2 id="%E5%8F%83%E8%80%83%E6%96%87%E4%BB%B6">References</h2><ol><li><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/?ref=focaaby.com">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a></li><li><a href="https://github.com/kubernetes/design-proposals-archive/blob/main/node/kubelet-cri-logging.md?ref=focaaby.com">https://github.com/kubernetes/design-proposals-archive/blob/main/node/kubelet-cri-logging.md</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why Linux OS takes over 5 minutes during the boot-up process]]></title><description><![CDATA[In this article, I will show you how to find which systemd unit slows down the boot-up process, using the `systemd-analyze`[1] command.
]]></description><link>https://focaaby.com/why-linux-os-takes-over-5-minutes-during-boot-up-process/</link><guid isPermaLink="false">620ad6797894f500012379b5</guid><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Mon, 14 Feb 2022 22:28:56 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1498184103684-bc1a70b0c068?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fHdhaXRpbmd8ZW58MHx8fHwxNjQ0ODc3Njk5&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="problem">Problem</h2><img src="https://images.unsplash.com/photo-1498184103684-bc1a70b0c068?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fHdhaXRpbmd8ZW58MHx8fHwxNjQ0ODc3Njk5&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why Linux OS takes over 5 minutes during the boot-up process"><p>As the following <code>journalctl</code> output shows, the network unit takes around 5 minutes between starting the LSB init script and failing to start:</p><ul><li>Jun 01 15:59:45: <code>Starting LSB: Bring up/down networking...</code></li><li>Jun 01 16:04:45: <code>Failed to start LSB: Bring up/down networking.</code></li></ul><pre><code>$ journalctl --unit=network
-- Logs begin at Tue 2021-06-01 15:59:40 UTC, end at Tue 2021-06-01 16:05:37 UTC. --
Jun 01 15:59:45 ip-172-31-29-127.eu-west-1.compute.internal systemd[1]: Starting LSB: Bring up/down networking...
Jun 01 15:59:45 ip-172-31-29-127.eu-west-1.compute.internal network[666]: Bringing up loopback interface:  [  OK  ]
Jun 01 15:59:45 ip-172-31-29-127.eu-west-1.compute.internal network[666]: Bringing up interface eth0:
Jun 01 15:59:45 ip-172-31-29-127.eu-west-1.compute.internal dhclient[789]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x7487dbae)
Jun 01 15:59:45 ip-172-31-29-127.eu-west-1.compute.internal dhclient[789]: DHCPACK from 172.31.16.1 (xid=0x7487dbae)
Jun 01 15:59:47 ip-172-31-29-127.eu-west-1.compute.internal NET[833]: /usr/sbin/dhclient-script : updated /etc/resolv.conf
Jun 01 15:59:47 ip-172-31-29-127.eu-west-1.compute.internal network[666]: Determining IP information for eth0... done.
Jun 01 15:59:48 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 1020ms.
Jun 01 15:59:49 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 2120ms.
Jun 01 15:59:51 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 4450ms.
Jun 01 15:59:55 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 9080ms.
Jun 01 16:00:05 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 18980ms.
Jun 01 16:00:24 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 37130ms.
Jun 01 16:01:01 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 77810ms.
Jun 01 16:02:19 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 113620ms.
Jun 01 16:04:12 ip-172-31-29-127.eu-west-1.compute.internal dhclient[856]: XMT: Solicit on eth0, interval 35520ms.
Jun 01 16:04:45 ip-172-31-29-127.eu-west-1.compute.internal systemd[1]: network.service start operation timed out. Terminating.
Jun 01 16:04:45 ip-172-31-29-127.eu-west-1.compute.internal systemd[1]: Failed to start LSB: Bring up/down networking.
Jun 01 16:04:45 ip-172-31-29-127.eu-west-1.compute.internal systemd[1]: Unit network.service entered failed state.
Jun 01 16:04:45 ip-172-31-29-127.eu-west-1.compute.internal systemd[1]: network.service failed.
Jun 01 16:04:48 ip-172-31-29-127.eu-west-1.compute.internal network[666]: Determining IPv6 information for eth0... failed.
Jun 01 16:04:48 ip-172-31-29-127.eu-west-1.compute.internal network[666]: WARN      : [/etc/sysconfig/network-scripts/ifup-eth] Unable to obtain IPv6 DHCP address eth0.
</code></pre><h2 id="find-the-systemd-unit-that-take-a-long-time">Find the systemd unit that takes a long time</h2><p>In this case, we can use the <code>systemd-analyze blame</code> command to list all running units, ordered by the time they took to initialize.</p><pre><code>$ sudo systemd-analyze blame
  5min 115ms network.service
      3.866s dev-xvda1.device
      3.101s cloud-init-local.service
      1.206s cloud-init.service
</code></pre><p>From the output of <code>systemd-analyze blame</code>, we can see that <code>network.service</code> takes around 5 minutes. Moreover, we can also check the configuration of <code>network.service</code>, which defines <code>TimeoutSec=5min</code> by default[2].</p><pre><code>$ sudo systemctl edit --full network.service
...
...
[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/etc/rc.d/init.d/network start
ExecStop=/etc/rc.d/init.d/network stop

</code></pre><p>This means that <code>network.service</code> might be waiting for something, causing the timeout. Therefore, we should check the networking configurations.</p><pre><code>$ cat /etc/sysconfig/network-scripts/ifcfg-ens5

# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=ens5
DHCPV6C=yes
IPV6INIT=yes
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
</code></pre><p>From this configuration of the interface <code>ens5</code>, we can see that IPv6 is enabled. However, the configurations in <code>/etc/sysconfig/network-scripts</code> are created automatically by cloud-init at instance boot.</p><p>By checking the configurations in <code>/etc/cloud/cloud.cfg.d/</code>, we can confirm that IPv6 is enabled in the cloud-init network configuration.</p><pre><code>...
    network:
      version: 1
      config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
          - type: dhcp6
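          # Note (annotation added for illustration, not part of the original
          # file): the dhcp6 subnet entry above makes cloud-init configure
          # DHCPv6 on eth0. If the VPC has no IPv6 support, removing this
          # entry (or enabling IPv6 on the VPC) avoids the 5-minute DHCPv6
          # wait during boot.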
</code></pre><p>Meanwhile, we can confirm that IPv6 support was not enabled on the VPC[3].</p><h2 id="summary">Summary</h2><p>If IPv6 is enabled at the Linux OS level but IPv6 support is not enabled for the VPC and its resources, then during the boot-up process <code>network.service</code> tries to obtain an IPv6 address and waits for a DHCPv6 server for 5 minutes. After the timeout period, systemd continues the boot process, resulting in a boot-up time of over 5 minutes.</p><h2 id="references">References</h2><ol><li><a href="https://www.freedesktop.org/software/systemd/man/systemd-analyze.html?ref=focaaby.com">https://www.freedesktop.org/software/systemd/man/systemd-analyze.html</a></li><li><a href="https://www.freedesktop.org/software/systemd/man/systemd.service.html?ref=focaaby.com">https://www.freedesktop.org/software/systemd/man/systemd.service.html</a></li><li>Get started with IPv6 for Amazon VPC - <a href="https://docs.aws.amazon.com/vpc/latest/userguide/get-started-IPv6.html?ref=focaaby.com">https://docs.aws.amazon.com/vpc/latest/userguide/get-started-IPv6.html</a></li></ol>]]></content:encoded></item><item><title><![CDATA[AWS VPC CNI plugin random livenessProbe failures after upgrading to Kubernetes 1.20]]></title><description><![CDATA[In this article, I will show you how the kubelet exec probe timeout fix, which enforces the default 1s timeout, triggers probe failures and pod restarts.
]]></description><link>https://focaaby.com/aws-vpc-cni-plugin-random-livenessprobe-failures-after-upgrading-to-kubernetes-1-20/</link><guid isPermaLink="false">61edbd3e7894f5000123798e</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[vpc-cni]]></category><category><![CDATA[eks]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Sun, 23 Jan 2022 20:42:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1477506252414-b2954dbdacf3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE1fHxwb2RzfGVufDB8fHx8MTY0Mjk3MDU2Mw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="problem">Problem</h2><img src="https://images.unsplash.com/photo-1477506252414-b2954dbdacf3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE1fHxwb2RzfGVufDB8fHx8MTY0Mjk3MDU2Mw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="AWS VPC CNI plugin random livenessProbe failures after upgrading to Kubernetes 1.20"><p>After upgrading EKS from 1.19 to 1.20, we found some pods failing their readiness and liveness probes without a reason message. Take the AWS VPC CNI plugin <code>aws-node</code> pod as an example:</p><pre><code># Pod event
...
Normal   Killing    85m                    kubelet  Container aws-node failed liveness probe, will be restarted
Normal   Pulling    85m (x2 over 106m)     kubelet  Pulling image &quot;602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.8.0-eksbuild.1&quot;
Normal   Created    85m (x2 over 106m)     kubelet  Created container aws-node
Normal   Pulled     85m                    kubelet  Successfully pulled image &quot;602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.8.0-eksbuild.1&quot; in 158.21763ms
Normal   Started    85m (x2 over 106m)     kubelet  Started container aws-node
Warning  Unhealthy  11m (x31 over 106m)    kubelet  Readiness probe failed:
Warning  Unhealthy  4m57s (x28 over 100m)  kubelet  Liveness probe failed:
</code></pre><h2 id="what-changes-in-kubernetes-120">What changes in Kubernetes 1.20</h2><p>According to the <a href="https://kubernetes.io/blog/2020/12/08/kubernetes-1-20-release-announcement/?ref=focaaby.com">Kubernetes 1.20: The Raddest Release[1]</a>: </p><blockquote>A longstanding bug regarding exec probe timeouts that may impact existing pod definitions has been fixed. Prior to this fix, the field <code>timeoutSeconds</code> was not respected for exec probes. Instead, probes would run indefinitely, even past their configured deadline, until a result was returned. With this change, the default value of <code>1 second</code> will be applied if a value is not specified and existing pod definitions may no longer be sufficient if a probe takes longer than one second.</blockquote><p>This refers to a bug fix in Kubernetes 1.20: <a href="https://github.com/kubernetes/enhancements/issues/1972?ref=focaaby.com">Fixing Kubelet Exec Probe Timeouts[2]</a> and <a href="https://github.com/kubernetes/enhancements/pull/1973?ref=focaaby.com">KEP-1972: kubelet exec probe timeouts[3]</a>. 
Now the default timeout of <code>1s</code> is respected, but it is occasionally too short, causing probes to fail and pods to restart.</p><h2 id="workaround">Workaround</h2><p>Here are 2 methods to mitigate the issue:</p><ol><li>Disable the feature gate <code>ExecProbeTimeout</code> on <code>kubelet</code>: As a cluster administrator, we can disable the feature gate <code>ExecProbeTimeout</code> (set it to false) on each <code>kubelet</code> to restore the behavior from older versions, then remove that override once all the exec probes in the cluster have a <code>timeoutSeconds</code> value set[4].</li><li>Increase <code>timeoutSeconds</code> to a proper value: If you have pods that are impacted by the default 1-second timeout, you should update their probe timeout so that you&#x2019;re ready for the eventual removal of that feature gate.</li></ol><p>For the VPC CNI plugin, however, issue <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/1425?ref=focaaby.com">#1425[5]</a> is still considered a bug and needs follow-up.</p><h2 id="reference">Reference</h2><ol><li><a href="https://kubernetes.io/blog/2020/12/08/kubernetes-1-20-release-announcement/?ref=focaaby.com">Kubernetes 1.20: The Raddest Release</a></li><li><a href="https://github.com/kubernetes/enhancements/issues/1972?ref=focaaby.com">Fixing Kubelet Exec Probe Timeouts</a></li><li><a href="https://github.com/kubernetes/enhancements/pull/1973?ref=focaaby.com">KEP-1972: kubelet exec probe timeouts</a></li><li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/?ref=focaaby.com">Configure Liveness, Readiness and Startup Probes</a></li><li><a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/1425?ref=focaaby.com">aws-node is restarting (Crashing, exiting on 137) sporadically which causes all pods on that node to stuck on ContainerCreating state. 
#1425</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why my CDK VPC constructor does not respect maximum availability zones]]></title><description><![CDATA[In this article, we will see that the account and region environment values are required for an environment-specific CDK stack, and that StackProps must be passed through to the parent Stack as well.]]></description><link>https://focaaby.com/why-my-cdk-vpc-constructor-does-not-repct-availability-zones/</link><guid isPermaLink="false">61ddcc537894f50001237932</guid><category><![CDATA[cdk]]></category><category><![CDATA[cloudformation]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Sat, 15 Jan 2022 11:30:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1499346030926-9a72daac6c63?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGNsb3VkfGVufDB8fHx8MTY0MTkzMjU1Nw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="problem">Problem</h2><img src="https://images.unsplash.com/photo-1499346030926-9a72daac6c63?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGNsb3VkfGVufDB8fHx8MTY0MTkzMjU1Nw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why my CDK VPC constructor does not respect maximum availability zones"><p>In the following code snippet, we configure the maximum number of availability zones (AZs). However, CDK does not respect the value.</p><pre><code>$ cdk --version
2.5.0 (build 0951122)
</code></pre><pre><code class="language-bash">$ cat ./bin/vpc-additional-subnet-stack.ts
#!/usr/bin/env node
import &apos;source-map-support/register&apos;;
import * as cdk from &apos;aws-cdk-lib&apos;;
import { VpcAdditionalSubnetStackStack } from &apos;../lib/vpc-additional-subnet-stack-stack&apos;;

const app = new cdk.App();
new VpcAdditionalSubnetStackStack(app, &apos;VpcAdditionalSubnetStackStack&apos;, {

  env: { account: &apos;111111111111&apos;, region: &apos;eu-west-1&apos; },

});%

</code></pre><pre><code class="language-typescript">import { Stack, StackProps } from &apos;aws-cdk-lib&apos;;
import { Construct } from &apos;constructs&apos;;
import * as ec2 from &quot;aws-cdk-lib/aws-ec2&quot;;

export class VpcAdditionalSubnetStackStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id);

    const subnetConfig = [
      {
          cidrMask: 22,
          name: &quot;outputSubnet&quot;,
          subnetType: ec2.SubnetType.PUBLIC,
      },
      {
          cidrMask: 22,
          name: &quot;database&quot;,
          subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
      },
      {
          cidrMask: 22,
          name: &quot;application&quot;,
          subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
      },
    ];

    // VPC
    const vpc = new ec2.Vpc(this, &quot;Lab-VPC&quot;, {
      cidr: &quot;10.0.0.0/16&quot;,
      maxAzs: 3,
      subnetConfiguration: subnetConfig,
    });

  }
}
</code></pre><p>Also, we can view there are 3 AZs in <code>eu-west-1</code> region.</p><pre><code>$ aws ec2 describe-availability-zones

{
    &quot;AvailabilityZones&quot;: [
        {
            &quot;State&quot;: &quot;available&quot;,
            &quot;OptInStatus&quot;: &quot;opt-in-not-required&quot;,
            &quot;Messages&quot;: [],
            &quot;RegionName&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneName&quot;: &quot;eu-west-1a&quot;,
            &quot;ZoneId&quot;: &quot;euw1-az1&quot;,
            &quot;GroupName&quot;: &quot;eu-west-1&quot;,
            &quot;NetworkBorderGroup&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneType&quot;: &quot;availability-zone&quot;
        },
        {
            &quot;State&quot;: &quot;available&quot;,
            &quot;OptInStatus&quot;: &quot;opt-in-not-required&quot;,
            &quot;Messages&quot;: [],
            &quot;RegionName&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneName&quot;: &quot;eu-west-1b&quot;,
            &quot;ZoneId&quot;: &quot;euw1-az2&quot;,
            &quot;GroupName&quot;: &quot;eu-west-1&quot;,
            &quot;NetworkBorderGroup&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneType&quot;: &quot;availability-zone&quot;
        },
        {
            &quot;State&quot;: &quot;available&quot;,
            &quot;OptInStatus&quot;: &quot;opt-in-not-required&quot;,
            &quot;Messages&quot;: [],
            &quot;RegionName&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneName&quot;: &quot;eu-west-1c&quot;,
            &quot;ZoneId&quot;: &quot;euw1-az3&quot;,
            &quot;GroupName&quot;: &quot;eu-west-1&quot;,
            &quot;NetworkBorderGroup&quot;: &quot;eu-west-1&quot;,
            &quot;ZoneType&quot;: &quot;availability-zone&quot;
        }
    ]
}
</code></pre><h2 id="dive-into-the-problem">Dive into the problem</h2><p>According to the <a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.Stack.html?ref=focaaby.com#availabilityzones">Stack.availabilityZones</a>:</p><blockquote>If the stack is environment-agnostic (either account and/or region are tokens), this property will return an array with 2 tokens that will resolve at deploy-time to the first two availability zones returned from CloudFormation&apos;s <code>Fn::GetAZs</code> intrinsic function.</blockquote><p>In our code snippet, we had already set the account and region, so it seems that the stack does not inherit the environment. Thus, we can check the stack&apos;s account and region again, as in the following snippet.</p><pre><code>export class VpcAdditionalSubnetStackStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    console.log(&apos;account: &apos;, Stack.of(this).account);
    console.log(&apos;region: &apos;, Stack.of(this).region);
    console.log(&apos;availability zones&apos;, Stack.of(this).availabilityZones);
</code></pre><pre><code>$ cdk synth
account:  ${Token[AWS.AccountId.6]}
region:  ${Token[AWS.Region.10]}
availability zones [ &apos;${Token[TOKEN.200]}&apos;, &apos;${Token[TOKEN.202]}&apos; ]
...
... 
</code></pre><p>AWS CDK encodes a token whose value is not yet known at construction time[2]. We can see that the environment values are not being used in the <code>VpcAdditionalSubnetStackStack</code> stack.</p><pre><code>    super(scope, id);
</code></pre><p>This code snippet does not pass <code>props</code> in the call to <code>super()</code>, so the environment we pass when creating <code>VpcAdditionalSubnetStackStack</code> is ignored. Therefore, CDK considers the stack environment-agnostic and creates only 2 AZs.</p><pre><code>$ cat ./lib/vpc-additional-subnet-stack-stack.ts
import { Stack, StackProps } from &apos;aws-cdk-lib&apos;;
import { Construct } from &apos;constructs&apos;;
// import * as sqs from &apos;aws-cdk-lib/aws-sqs&apos;;

export class VpcAdditionalSubnetStackStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    // example resource
    // const queue = new sqs.Queue(this, &apos;VpcAdditionalSubnetStackQueue&apos;, {
    //   visibilityTimeout: cdk.Duration.seconds(300)
    // });
  }
}
</code></pre><p>By default, the CDK project scaffolding generates a basic constructor that already passes <code>props</code> to <code>super()</code>:</p><pre><code>    super(scope, id, props);
</code></pre><pre><code>$ cdk synth
account:  111111111111
region:  eu-west-1
availability zones [ &apos;eu-west-1a&apos;, &apos;eu-west-1b&apos;, &apos;eu-west-1c&apos; ]
...
</code></pre><p>After updating the <code>super()</code> call, we can see the resolved account and region in the output.</p><h2 id="summary">Summary</h2><p>In order for CDK to use all availability zones of an AWS region, we must set up the following items:</p><ul><li>The <code>account</code> and <code>region</code> environment, which can be set via AWS credentials or the <code>env</code> property.</li><li>Confirm that the stack passes the <code>StackProps</code> to <code>super()</code>; otherwise the stack ignores the environment we set up in the previous step.</li></ul><h2 id="references">References</h2><ol><li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2-readme.html?ref=focaaby.com#advanced-subnet-configuration">https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2-readme.html#advanced-subnet-configuration</a></li><li>Tokens - <a href="https://docs.aws.amazon.com/zh_cn/cdk/v2/guide/tokens.html?ref=focaaby.com">https://docs.aws.amazon.com/zh_cn/cdk/v2/guide/tokens.html</a></li><li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.Stack.html?ref=focaaby.com#availabilityzones">https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.Stack.html#availabilityzones</a></li><li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.StackProps.html?ref=focaaby.com#env">https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.StackProps.html#env</a></li></ol>]]></content:encoded></item><item><title><![CDATA[How OpsWorks mounts EBS Volumes automatically]]></title><description><![CDATA[The ephemeral device mount points are mounted by the cloud-init module and the EBS volumes are mounted by the default OpsWorks recipe aws_opsworks_ebs. 
In this article, we will see how OpsWorks mounts the EBS volumes.]]></description><link>https://focaaby.com/how-opsworks-mount-ebs-volume-automatically/</link><guid isPermaLink="false">61d725437894f500012378ba</guid><category><![CDATA[opsworks]]></category><category><![CDATA[ebs]]></category><category><![CDATA[aws]]></category><category><![CDATA[mount]]></category><category><![CDATA[linux]]></category><category><![CDATA[cloud-init]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Sat, 08 Jan 2022 11:30:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1495546968767-f0573cca821e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDh8fHJlY2lwZXxlbnwwfHx8fDE2NDI5NzA3MjM&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><ol>
<li>Add an additional EBS volume in the OpsWorks layer&apos;s configuration.</li>
</ol>
<pre><code class="language-json">$ aws opsworks --region us-west-2 describe-layers --stack-id 433a1bd3-699d-4306-9533-b44530293ab5 --query &apos;Layers[*].VolumeConfigurations&apos;


[
    [
        {
            &quot;MountPoint&quot;: &quot;/mnt/workspace&quot;,
            &quot;NumberOfDisks&quot;: 1,
            &quot;Size&quot;: 10,
            &quot;VolumeType&quot;: &quot;gp2&quot;,
            &quot;Encrypted&quot;: false
        }
    ]
]
</code></pre>
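<p>The same volume configuration could also be applied from the CLI via the <code>update-layer</code> API. The following is a sketch only (the layer id is a placeholder, and the command string is printed rather than executed):</p>

```shell
# Dry-run sketch: build and print the update-layer call that would attach
# the same EBS volume configuration to a layer (layer id is a placeholder).
LAYER_ID="00000000-0000-0000-0000-000000000000"
CMD="aws opsworks --region us-west-2 update-layer --layer-id $LAYER_ID --volume-configurations MountPoint=/mnt/workspace,NumberOfDisks=1,Size=10,VolumeType=gp2"
echo "$CMD"
```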
<ol start="2">
<li>Add an SSH key in the stack configuration so that we can access the instance via SSH.</li>
<li>Access the OpsWorks instance. We can see the EBS volume is mounted on <code>/mnt/workspace</code>.</li>
</ol>
<pre><code class="language-bash">$ mount | grep &quot;workspace&quot;
/dev/xvdi on /mnt/workspace type xfs (rw,relatime,attr2,inode64,noquota)
</code></pre>
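<p>This check can also be scripted. The snippet below parses a <code>mount</code> line (fed a sample line matching the output above) to pull out the mount point and filesystem type:</p>

```shell
# Parse a mount(8) output line: field 3 is the mount point, field 5 the fstype.
mount_line='/dev/xvdi on /mnt/workspace type xfs (rw,relatime,attr2,inode64,noquota)'
mountpoint=$(printf '%s\n' "$mount_line" | awk '{print $3}')
fstype=$(printf '%s\n' "$mount_line" | awk '{print $5}')
echo "$mountpoint is $fstype"
```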
<ol start="4">
<li>The ephemeral device <code>ephemeral0</code> is mounted on <code>/media/ephemeral0</code> by cloud-init.</li>
</ol>
<pre><code class="language-yaml">$ tail -n 5 /etc/cloud/cloud.cfg

mounts:
 - [ ephemeral0, /media/ephemeral0 ]
 - [ swap, none, swap, sw, &quot;0&quot;, &quot;0&quot; ]
# vim:syntax=yaml
</code></pre>
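<p>cloud-init&apos;s <code>mounts</code> module turns each entry of that list into a line in <code>/etc/fstab</code>. The snippet below shows an illustrative example of the kind of entry it renders for <code>ephemeral0</code> (the actual device name and mount options vary by platform) and checks that it has the six fields a well-formed fstab entry needs:</p>

```shell
# Illustrative fstab entry cloud-init might render for the ephemeral0 alias.
fstab_line='/dev/xvdb /media/ephemeral0 auto defaults,nofail,comment=cloudconfig 0 2'
# A well-formed fstab entry has six whitespace-separated fields.
set -- $fstab_line
nfields=$#
mp=$2
echo "fields=$nfields mountpoint=$mp"
```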
<ol start="5">
<li>The EBS volume configuration is written to <code>/var/lib/aws/opsworks/chef/xxxxxxxxx.json</code>. After that, the default OpsWorks recipe <code>aws_opsworks_ebs</code> mounts the volume to the configured mount points.</li>
</ol>
<pre><code class="language-json">$ cat /var/lib/aws/opsworks/chef/2022-01-06-10-35-28-01.json
...
      &quot;volumes&quot;: [
        {
          &quot;name&quot;: &quot;Created for nodejs-server2&quot;,
          &quot;mount_point&quot;: &quot;/mnt/workspace&quot;,
          &quot;device&quot;: &quot;/dev/sdi&quot;,
          &quot;volume_id&quot;: &quot;vol-032b79ca4f686b2ee&quot;
        }
      ]
    },
</code></pre>
<pre><code class="language-bash">$ cat /var/lib/aws/opsworks/chef/2022-01-06-10-33-35-01.log
...
[2022-01-06T10:33:51+00:00] INFO: Processing ruby_block[delete_lines_from_fstab] action run (aws_opsworks_ebs::default line 1)
[2022-01-06T10:33:51+00:00] INFO: ruby_block[delete_lines_from_fstab] called
[2022-01-06T10:33:51+00:00] INFO: Processing yum_package[xfsprogs] action install (aws_opsworks_ebs::default line 9)
[2022-01-06T10:33:51+00:00] INFO: Processing ruby_block[add xfs to list of known filesystems] action run (aws_opsworks_ebs::default line 16)
[2022-01-06T10:33:51+00:00] INFO: ruby_block[add xfs to list of known filesystems] called
[2022-01-06T10:33:51+00:00] INFO: Processing apt_repository[add_required_repository_for_nvme-cli_for_ubuntu-14.04] action add (aws_opsworks_ebs::default line 23)
[2022-01-06T10:33:51+00:00] INFO: Processing yum_package[nvme-cli] action install (aws_opsworks_ebs::default line 28)
[2022-01-06T10:33:51+00:00] INFO: Processing ebs_volume[vol-032b79ca4f686b2ee] action mount (aws_opsworks_ebs::default line 44)
[2022-01-06T10:33:51+00:00] INFO: Processing execute[mkfs /dev/xvdi] action run (/var/lib/aws/opsworks/cache.internal/cookbooks/aws_opsworks_ebs/resources/ebs_volume.rb line 15)
[2022-01-06T10:33:52+00:00] INFO: execute[mkfs /dev/xvdi] ran successfully
[2022-01-06T10:33:52+00:00] INFO: Processing directory[/mnt/workspace] action create (/var/lib/aws/opsworks/cache.internal/cookbooks/aws_opsworks_ebs/resources/ebs_volume.rb line 29)
[2022-01-06T10:33:52+00:00] INFO: directory[/mnt/workspace] created directory /mnt/workspace
[2022-01-06T10:33:52+00:00] INFO: directory[/mnt/workspace] mode changed to 755
[2022-01-06T10:33:52+00:00] INFO: Processing ruby_block[delete existing fstab entries for this mount point and device] action run (/var/lib/aws/opsworks/cache.internal/cookbooks/aws_opsworks_ebs/resources/ebs_volume.rb line 35)
[2022-01-06T10:33:52+00:00] INFO: ruby_block[delete existing fstab entries for this mount point and device] called
[2022-01-06T10:33:52+00:00] INFO: Processing mount[/mnt/workspace] action mount (/var/lib/aws/opsworks/cache.internal/cookbooks/aws_opsworks_ebs/resources/ebs_volume.rb line 44)
[2022-01-06T10:33:52+00:00] INFO: mount[/mnt/workspace] mounted
[2022-01-06T10:33:52+00:00] INFO: Processing mount[/mnt/workspace] action enable (/var/lib/aws/opsworks/cache.internal/cookbooks/aws_opsworks_ebs/resources/ebs_volume.rb line 44)
[2022-01-06T10:33:52+00:00] INFO: mount[/mnt/workspace] enabled
...
</code></pre>
<img src="https://images.unsplash.com/photo-1495546968767-f0573cca821e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDh8fHJlY2lwZXxlbnwwfHx8fDE2NDI5NzA3MjM&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="How OpsWorks mounts EBS Volumes automatically"><p>The following is the default <code>aws_opsworks_ebs</code> recipe.</p>
<pre><code class="language-ruby">$ cat /opt/aws/opsworks/current/cookbooks/aws_opsworks_ebs/recipes/default.rb
ruby_block &quot;delete_lines_from_fstab&quot; do
  block do
    file = Chef::Util::FileEdit.new(&quot;/etc/fstab&quot;)
    file.search_file_delete_line(&quot;/dev/nvme&quot;)
    file.write_file
  end
end

package &quot;xfsprogs&quot; do
  # RedHat 6 does not provide xfsprogs
  not_if { rhel6? }
  retries 2
end

# add xfs to list of known filesystems
ruby_block &quot;add xfs to list of known filesystems&quot; do
  block do
    Filesystems.add_xfs_to_known_filesystems
  end
  only_if { ::File.exist?(&quot;/etc/filesystems&quot;) &amp;&amp; node[&quot;aws_opsworks_agent&quot;][&quot;resources&quot;][&quot;volumes&quot;].size &gt; 0 }
end

apt_repository &apos;add_required_repository_for_nvme-cli_for_ubuntu-14.04&apos; do
  only_if { EbsVolumeHelpers.nvme_based? &amp;&amp; !EbsVolumeHelpers.has_ebs_tooling? &amp;&amp; platform?(&quot;ubuntu&quot;) &amp;&amp; node[:platform_version] == &quot;14.04&quot; &amp;&amp; node[&quot;aws_opsworks_agent&quot;][&quot;resources&quot;][&quot;volumes&quot;].size &gt; 0 }
  uri &apos;ppa:sbates&apos;
end

package &quot;nvme-cli&quot; do
  only_if { EbsVolumeHelpers.nvme_based? &amp;&amp; !EbsVolumeHelpers.has_ebs_tooling? &amp;&amp; !rhel6? &amp;&amp; node[&quot;aws_opsworks_agent&quot;][&quot;resources&quot;][&quot;volumes&quot;].size &gt; 0 }
  retries 2
end

node[&quot;aws_opsworks_agent&quot;][&quot;resources&quot;][&quot;volumes&quot;].each do |volume|
  if rhel6?
    log &quot;skipping volume #{volume[&quot;device&quot;]} - no EBS volume support for Red Hat Enterprise Linux 6&quot;
    next
  end

  if volume[&quot;mount_point&quot;].nil? || volume[&quot;mount_point&quot;].empty?
    log &quot;skip mounting volume #{volume[&quot;device&quot;]} (#{volume[&quot;volume_id&quot;]}) because no mount_point specified&quot;
    next
  end

  ebs_volume volume[&quot;volume_id&quot;] do
    mount_point volume[&quot;mount_point&quot;]
    volume_id volume[&quot;volume_id&quot;]
    device volume[&quot;device&quot;]
    fstype volume[&quot;fstype&quot;] || &quot;xfs&quot;
  end
end
</code></pre>
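<p>Stripped of the Chef resource plumbing, the <code>ebs_volume</code> resource used above boils down to roughly the following sequence of commands per volume. This is a dry-run sketch only (it prints the commands instead of running them; the device, mount point, and fstype come from the example above, and the real resource adds idempotency checks):</p>

```shell
# Dry-run: print the effective steps of the ebs_volume resource.
run() { echo "+ $*"; }
DEVICE=/dev/xvdi
MOUNT_POINT=/mnt/workspace
FSTYPE=xfs

run mkfs -t "$FSTYPE" "$DEVICE"                  # execute[mkfs /dev/xvdi]
run mkdir -p "$MOUNT_POINT"                      # directory[/mnt/workspace], mode 755
run mount -t "$FSTYPE" "$DEVICE" "$MOUNT_POINT"  # mount[...] action mount
run sh -c "echo '$DEVICE $MOUNT_POINT $FSTYPE defaults 0 0' >> /etc/fstab"  # action enable
```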
<!--kg-card-end: markdown--><h2 id="references">References</h2><ol><li>Editing an OpsWorks Layer&apos;s Configuration - EBS Volumes - <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html?ref=focaaby.com#workinglayers-basics-edit-ebs">https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html#workinglayers-basics-edit-ebs</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Automatically validating multiple SANs through DNS validation within multiple hosted zones by using a CloudFormation template]]></title><description><![CDATA[ACM supports automatically renewing DNS-validated certificates. This article will go through how to request a public certificate with multiple SANs, including a domain name in a public hosted zone and one in a private hosted zone.]]></description><link>https://focaaby.com/automatically-validating-the-multiple-san-through-dns-validation-within-multiple-hosted-zone-by-using-cloudformation-template/</link><guid isPermaLink="false">616376e4e5011b000117fa8f</guid><category><![CDATA[cloudformation]]></category><category><![CDATA[r53]]></category><category><![CDATA[acm]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Tue, 18 May 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="summary">Summary</h2><p>ACM supports automatically renewing DNS-validated certificates[1]. This article will go through how to request a public certificate with multiple Subject Alternative Names (SANs)[2], including a domain name in a public hosted zone and one in a private hosted zone.</p><h2 id="steps">Steps</h2><!--kg-card-begin: markdown--><ol>
<li>
<p>Create a public hosted zone, and record the hosted zone id.</p>
</li>
<li>
<p>Create a CloudFormation template with the following YAML file.</p>
<pre><code class="language-yaml">---
AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Description: Test
Resources:
  MyPrivateHostedZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: dev.example.com
      VPCs:
      - VPCId: vpc-000e0266
        VPCRegion: eu-west-1
  MyCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: &quot;*.example.com&quot;
      DomainValidationOptions:
      - DomainName: example.com
        HostedZoneId: ZZYK0LLL1NN1XX
      - DomainName: &quot;*.dev.example.com&quot;
        HostedZoneId: !Ref MyPrivateHostedZone
      SubjectAlternativeNames:
      - &quot;*.dev.example.com&quot;
      - &quot;*.example.com&quot;
      ValidationMethod: DNS
</code></pre>
</li>
<li>
<p>Create the CloudFormation stack.</p>
</li>
<li>
<p>While the certificate is being created, ACM will create both CNAME records in the hosted zones specified in the CloudFormation template.</p>
<pre><code class="language-bash">$ aws cloudformation describe-stack-events --stack-name my53-certificate
...
{
    &quot;StackId&quot;: &quot;arn:aws:cloudformation:eu-west-1:111222333444:stack/ttttteeset/eb245330-b821-11eb-89af-061b29697291&quot;,
    &quot;EventId&quot;: &quot;MyCertificate-0820989e-77e7-480e-8f57-2b3aaf3d59f4&quot;,
    &quot;StackName&quot;: &quot;my53-certificate&quot;,
    &quot;LogicalResourceId&quot;: &quot;MyCertificate&quot;,
    &quot;PhysicalResourceId&quot;: &quot;&quot;,
    &quot;ResourceType&quot;: &quot;AWS::CertificateManager::Certificate&quot;,
    &quot;Timestamp&quot;: &quot;2021-05-18T21:44:15.531000+00:00&quot;,
    &quot;ResourceStatus&quot;: &quot;CREATE_IN_PROGRESS&quot;,
    &quot;ResourceStatusReason&quot;: &quot;Content of DNS Record is: {Name: _7758ffb3838c6cf7c3ec68de36d03fe0.example.com.,
Type: CNAME,Value: _3402713d1c23665051ea05c1963caf81.olprtlswtu.acm-validations.aws.}&quot;,
    &quot;ClientRequestToken&quot;: &quot;Console-CreateStack-b05e9d4a-9da0-dfe3-387c-4890b84060c2&quot;
},
...
</code></pre>
</li>
</ol>
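<p>The validation record embedded in <code>ResourceStatusReason</code> above can be extracted with a small shell snippet, for example when you need to create the CNAME record manually in another DNS provider. This is only a sketch that parses the sample event text from this post:</p>

```shell
# Parse the validation CNAME out of a CloudFormation ResourceStatusReason
# string. The sample text below is copied from the stack event shown above.
reason='Content of DNS Record is: {Name: _7758ffb3838c6cf7c3ec68de36d03fe0.example.com.,Type: CNAME,Value: _3402713d1c23665051ea05c1963caf81.olprtlswtu.acm-validations.aws.}'
name=$(echo "$reason" | sed -n 's/.*{Name: \([^,]*\),.*/\1/p')
value=$(echo "$reason" | sed -n 's/.*Value: \([^}]*\)}.*/\1/p')
echo "$name CNAME $value"
```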
<!--kg-card-end: markdown--><h2 id="wrapping-up">Wrapping up</h2><ol><li>AWS ACM supports DNS validation for both public and private hosted zones.</li><li>We can add multiple SANs with <code>DomainValidationOptions</code>[3] in CloudFormation.</li><li>ACM creates a validation CNAME record in the domain&apos;s hosted zone.</li></ol><h2 id="references">References</h2><ol><li>Validating domain ownership - <a href="https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html?ref=focaaby.com">https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html</a></li><li>Subject Alternative Name - <a href="https://en.wikipedia.org/wiki/Subject_Alternative_Name?ref=focaaby.com">https://en.wikipedia.org/wiki/Subject_Alternative_Name</a></li><li>AWS::CertificateManager::Certificate DomainValidationOption - <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-certificatemanager-certificate-domainvalidationoption.html?ref=focaaby.com">https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-certificatemanager-certificate-domainvalidationoption.html</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Push a Helm Chart to AWS ECR]]></title><description><![CDATA[This article will go through how to push a Helm chart to ECR. I also try to use Helm dependencies with ECR. 
ECR accepts a chart that declares a Helm dependency, but ECR repositories currently cannot be added as a Helm repo.]]></description><link>https://focaaby.com/push-a-helm-chart-to-aws-ecr/</link><guid isPermaLink="false">616366f320b2b300016b8722</guid><category><![CDATA[ecr]]></category><category><![CDATA[oci]]></category><category><![CDATA[helm]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Tue, 04 May 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="summary">Summary</h2><p>AWS ECR supports pushing Open Container Initiative (OCI) artifacts[1][2] to your repositories, and this article will go through how to push a Helm chart to ECR. I also try to use Helm dependencies with ECR. ECR accepts a chart that declares a Helm dependency, but ECR repositories currently cannot be added as a Helm repo.</p><h2 id="testing-steps">Testing Steps</h2><!--kg-card-begin: markdown--><ol>
<li>
<p>Follow the steps in the documentation[2] to install the Helm client version 3.</p>
<pre><code class="language-bash">$ helm version

version.BuildInfo{Version:&quot;v3.5.3&quot;, GitCommit:&quot;041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622&quot;, GitTreeState:&quot;dirty&quot;, GoVersion:&quot;go1.15.8&quot;}
</code></pre>
</li>
<li>
<p>Enable OCI support in the Helm 3 client.</p>
<pre><code class="language-bash">$ export HELM_EXPERIMENTAL_OCI=1
</code></pre>
</li>
<li>
<p>Create a repository to store your Helm chart.</p>
<pre><code class="language-bash">$ aws ecr create-repository \
        --repository-name helm-test \
        --region eu-west-1
{
    &quot;repository&quot;: {
        &quot;repositoryArn&quot;: &quot;arn:aws:ecr:eu-west-1:123456789012:repository/helm-test&quot;,
        &quot;registryId&quot;: &quot;123456789012&quot;,
        &quot;repositoryName&quot;: &quot;helm-test&quot;,
        &quot;repositoryUri&quot;: &quot;123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test&quot;,
        &quot;createdAt&quot;: &quot;2021-05-04T15:07:49+00:00&quot;,
        &quot;imageTagMutability&quot;: &quot;MUTABLE&quot;,
        &quot;imageScanningConfiguration&quot;: {
            &quot;scanOnPush&quot;: false
        },
        &quot;encryptionConfiguration&quot;: {
            &quot;encryptionType&quot;: &quot;AES256&quot;
        }
    }
}
</code></pre>
</li>
<li>
<p>Authenticate your Helm client to the Amazon ECR registry to which you intend to push your Helm chart.</p>
<pre><code class="language-bash">$ aws ecr get-login-password \
    --region eu-west-1 | helm registry login \
    --username AWS \
    --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
Login succeeded
</code></pre>
</li>
<li>
<p>Use the following steps to create a test Helm chart.</p>
<pre><code class="language-bash">$ mkdir helm-tutorial
$ cd helm-tutorial

$ helm create mychart
Creating mychart

$ rm -rf ./mychart/templates/*

$ cd mychart/templates
$ cat &lt;&lt;EOF &gt; configmap.yaml
&gt; apiVersion: v1
&gt; kind: ConfigMap
&gt; metadata:
&gt;   name: mychart-configmap
&gt; data:
&gt;   myvalue: &quot;Hello World&quot;
&gt; EOF
</code></pre>
</li>
<li>
<p>Save the chart locally and create an alias for the chart with your registry URI.</p>
<pre><code class="language-bash">$ cd ..
$ helm chart save . mychart
ref:     mychart:0.1.0
digest:  66ac23dc1e383370778e2a8db2bf1d93e73f169af14569618dcec38a086405a4
size:    1.4 KiB
name:    mychart
version: 0.1.0
0.1.0: saved

$ helm chart save . 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart
ref:     123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart
digest:  2935917ffbc7492eae674f1e82308a2acd4e41b3b6db4c25b79a25095481ebca
size:    1.4 KiB
name:    mychart
version: 0.1.0
mychart: saved
</code></pre>
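<p>The alias used with <code>helm chart save</code> above is an OCI-style reference. As a quick sketch with plain shell parameter expansion, the reference splits into the ECR registry host, the repository name, and the chart tag (the sample URI is this post&apos;s placeholder account and region):</p>

```shell
# Split an OCI-style chart reference into its parts.
ref="123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart"
registry=${ref%%/*}   # registry host: everything before the first "/"
rest=${ref#*/}        # "helm-test:mychart"
repo=${rest%%:*}      # repository name
tag=${ref##*:}        # chart tag
echo "$registry | $repo | $tag"
```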
</li>
<li>
<p>Identify the Helm chart to push. Run the helm chart list command to list the Helm charts on your system.</p>
<pre><code class="language-bash">$ helm chart list
REF                                                             NAME    VERSION DIGEST  SIZE    CREATED
123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:my...    mychart 0.1.0   66ac23d 1.4 KiB 2 minutes
mychart:0.1.0
</code></pre>
</li>
<li>
<p>Push the Helm chart using the helm chart push command.</p>
<pre><code class="language-bash">$ helm chart push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart
The push refers to repository [123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test]
ref:     123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart
digest:  2935917ffbc7492eae674f1e82308a2acd4e41b3b6db4c25b79a25095481ebca
size:    1.4 KiB
name:    mychart
version: 0.1.0
mychart: pushed to remote (1 layer, 1.4 KiB total)
</code></pre>
</li>
<li>
<p>Describe your Helm chart.</p>
<pre><code class="language-bash">$ aws ecr describe-images \
    --repository-name helm-test \
    --region eu-west-1
{
    &quot;imageDetails&quot;: [
        {
            &quot;registryId&quot;: &quot;123456789012&quot;,
            &quot;repositoryName&quot;: &quot;helm-test&quot;,
            &quot;imageDigest&quot;: &quot;sha256:2935917ffbc7492eae674f1e82308a2acd4e41b3b6db4c25b79a25095481ebca&quot;,
            &quot;imageTags&quot;: [
                &quot;mychart&quot;
            ],
            &quot;imageSizeInBytes&quot;: 1610,
            &quot;imagePushedAt&quot;: &quot;2021-05-04T15:16:24+00:00&quot;,
            &quot;imageManifestMediaType&quot;: &quot;application/vnd.oci.image.manifest.v1+json&quot;,
            &quot;artifactMediaType&quot;: &quot;application/vnd.cncf.helm.config.v1+json&quot;
        }
    ]
}
</code></pre>
</li>
<li>
<p>Copy the mychart directory to mychart-dependency.</p>
<pre><code class="language-bash">$ cp -r mychart/ mychart-dependency
</code></pre>
</li>
<li>
<p>Create another new ECR repository named helm-dependency.</p>
<pre><code class="language-bash">$ aws ecr create-repository \
        --repository-name helm-dependency \
        --region eu-west-1

{
    &quot;repository&quot;: {
        &quot;repositoryArn&quot;: &quot;arn:aws:ecr:eu-west-1:123456789012:repository/helm-dependency&quot;,
        &quot;registryId&quot;: &quot;123456789012&quot;,
        &quot;repositoryName&quot;: &quot;helm-dependency&quot;,
        &quot;repositoryUri&quot;: &quot;123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency&quot;,
        &quot;createdAt&quot;: &quot;2021-05-04T15:21:25+00:00&quot;,
        &quot;imageTagMutability&quot;: &quot;MUTABLE&quot;,
        &quot;imageScanningConfiguration&quot;: {
            &quot;scanOnPush&quot;: false
        },
        &quot;encryptionConfiguration&quot;: {
            &quot;encryptionType&quot;: &quot;AES256&quot;
        }
    }
}
</code></pre>
</li>
<li>
<p>Update the Chart.yaml and add the configuration for Helm dependencies.</p>
<pre><code class="language-bash">$ cat Chart.yaml
apiVersion: v2
name: mychart-dependency
description: A Helm chart for Kubernetes

type: application
version: 0.1.0
appVersion: &quot;1.16.0&quot;

dependencies:
- name: mychart
  version: &quot;0.1.0&quot;
  repository: &quot;123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart&quot;
</code></pre>
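<p>The <code>dependencies</code> stanza above can also be appended from the shell with a heredoc. A sketch, assuming you run it in the chart root (the registry URI is this post&apos;s sample account and region):</p>

```shell
# Append the Helm dependencies stanza to Chart.yaml (creates the file if absent).
cat <<'EOF' >> Chart.yaml
dependencies:
- name: mychart
  version: "0.1.0"
  repository: "123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:mychart"
EOF
grep -c 'helm-test:mychart' Chart.yaml
```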
</li>
<li>
<p>Push Helm chart <code>mychart-dependency</code> using the helm chart push command.</p>
<pre><code class="language-bash">$ helm chart save . mychart-dependency
ref:     mychart-dependency:0.1.0
digest:  5d948b5d95b7e455215fd957930c057a2e556b437656713a7deefae0c8017dfa
size:    1.5 KiB
name:    mychart-dependency
version: 0.1.0
0.1.0: saved

$ helm chart save . 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency:mychart-dependency
ref:     123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency:mychart-dependency
digest:  a1200cf17bfae3a733c83b8981da2b2e2b44a79963ffc1dab53cbc58e6f8e8c9
size:    1.5 KiB
name:    mychart-dependency
version: 0.1.0
mychart-dependency: saved

$ helm chart list
REF                                                             NAME                    VERSION DIGEST  SIZE    CREATED
123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-depende...    mychart-dependency      0.1.0   a1200cf 1.5 KiB 20 seconds
123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-test:my...    mychart                 0.1.0   a1200cf 1.4 KiB 14 minutes
mychart-dependency:0.1.0                                        mychart-dependency      0.1.0   a1200cf 1.5 KiB 58 seconds
mychart:0.1.0                                                   mychart                 0.1.0   a1200cf 1.4 KiB 15 minutes

$ helm chart push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency:mychart-dependency
The push refers to repository [123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency]
ref:     123456789012.dkr.ecr.eu-west-1.amazonaws.com/helm-dependency:mychart-dependency
digest:  a1200cf17bfae3a733c83b8981da2b2e2b44a79963ffc1dab53cbc58e6f8e8c9
size:    1.5 KiB
name:    mychart-dependency
version: 0.1.0
mychart-dependency: pushed to remote (1 layer, 1.5 KiB total)
</code></pre>
</li>
</ol>
<p>A Helm dependency&apos;s &quot;repository&quot; URL should point to a Chart Repository[4], which ECR does not support yet[5].</p>
<!--kg-card-end: markdown--><h2 id="references">References</h2><ol><li>OCI Artifact Support In Amazon ECR - <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html?ref=focaaby.com">https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html</a></li><li>ECR supports pushing Open Container Initiative (OCI) artifacts to your repositories - <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html?ref=focaaby.com">https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html</a></li><li>Using Amazon ECR Images with Amazon EKS - Installing a Helm chart hosted on Amazon ECR with Amazon EKS - <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_EKS.html?ref=focaaby.com#using-helm-charts-eks">https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_EKS.html#using-helm-charts-eks</a></li><li>Helm Dependency - <a href="https://helm.sh/docs/helm/helm_dependency/?ref=focaaby.com">https://helm.sh/docs/helm/helm_dependency/</a></li><li>Allow Helm to automatically install from a chart stored in an ECR repository - <a href="https://github.com/aws/containers-roadmap/issues/1116?ref=focaaby.com">https://github.com/aws/containers-roadmap/issues/1116</a></li></ol>]]></content:encoded></item><item><title><![CDATA[EKS service with internal NLB, but gets timeouts randomly]]></title><description><![CDATA[When both the client and server are deployed on the same EKS cluster and communicate through an internal NLB, the connections time out randomly.]]></description><link>https://focaaby.com/eks-service-with-internal-nlb-but-gets-timeouts-randomly/</link><guid isPermaLink="false">6163699920b2b300016b8796</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[eks]]></category><category><![CDATA[elb]]></category><category><![CDATA[k8s]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Fri, 23 Apr 2021 00:00:00 
GMT</pubDate><media:content url="https://focaaby.com/content/images/2021/10/EKS-internal-NLB-netshoot-2.png" medium="image"/><content:encoded><![CDATA[<h2 id="summary">Summary</h2><img src="https://focaaby.com/content/images/2021/10/EKS-internal-NLB-netshoot-2.png" alt="EKS service with internal NLB, but gets timeouts randomly"><p>When both the client and server are deployed on the same EKS cluster and communicate through an internal NLB, the connections time out randomly.</p><h2 id="problem">Problem</h2><p>In Kubernetes, we can expose pods to the cluster with a Kubernetes Service. When we create a Kubernetes Service of type LoadBalancer, an AWS NLB or CLB is provisioned to load balance network traffic.</p><figure class="kg-card kg-image-card"><img src="https://i0.wp.com/www.docker.com/blog/wp-content/uploads/2019/09/Kubernetes-NodePort-Service-2.png" class="kg-image" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></figure><p>Figure source: <a href="https://www.docker.com/blog/designing-your-first-application-kubernetes-communication-services-part3/?ref=focaaby.com">https://www.docker.com/blog/designing-your-first-application-kubernetes-communication-services-part3/</a></p><p><br>To manage the workload easily, we might want to deploy both the client and server sides on Kubernetes. However, under the following conditions, you might notice the timeout issue happen randomly.</p><p>This use case has both the server side and the client side in the same EKS cluster, and the server side must use a Kubernetes Service with an internal NLB in instance target type.</p><ol><li>Create a Kubernetes Deployment as the server side. For example, a self-hosted Redis, an Nginx server, etc.</li><li>Create a Kubernetes Service to expose the server application through the internal NLB.</li><li>Create another Kubernetes Deployment to test the application as the client side, and try to connect to the server&apos;s service.</li></ol><h2 id="reproduce">Reproduce</h2><!--kg-card-begin: markdown--><p>In my testing, I launched the Kubernetes cluster with Amazon EKS (1.8.9) with default deployments (such as CoreDNS, AWS CNI Plugin, and kube-proxy). The issue can be reproduced with the steps below:</p>
<ol>
<li>
<p>Deploy an Nginx Deployment and a troubleshooting container. I used netshoot as the troubleshooting pod, which has the <code>curl</code> command preinstalled. Also, you can find the <code>nginx-nlb-service.yaml</code> and <code>netshoot.yaml</code> in the attachments.</p>
<pre><code class="language-bash">$ kubectl apply -f ./netshoot.yaml
$ kubectl apply -f ./nginx-nlb-service.yaml
</code></pre>
<p><img src="https://focaaby.com/content/images/2021/10/EKS-internal-NLB-netshoot-1.png" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></p>
</li>
<li>
<p>Use the <code>kubectl exec</code> command to attach to the troubleshooting (netshoot) container.</p>
<pre><code class="language-bash">$ kubectl exec -it netshoot -- bash
bash-5.1#
</code></pre>
</li>
<li>
<p>Access the internal NLB domain name (Nginx&apos;s Service) via the <code>curl</code> command; the connection will time out randomly. At the same time, I capture the packets with the <code>tcpdump</code> command on the EKS worker node.</p>
<pre><code class="language-bash"># In the EKS worker node(192.168.2.3)
$ sudo tcpdump -i any  -w packets.pcap

bash-5.1# date
Mon Mar 22 23:35:13 UTC 2021

bash-5.1# curl a7487a27201f2434490bada8096adce3-221f6e1d21fadd5b.elb.eu-west-1.amazonaws.com
... It will timeout randomly, you might need to try more times.
...

$ date
Mon Mar 22 23:35:32 UTC 2021
</code></pre>
</li>
</ol>
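<p>From the client&apos;s point of view, the failed attempts end with no TCP connection at all rather than an HTTP error. The snippet below only illustrates what that looks like in <code>curl</code>; it uses the RFC 5737 test address 192.0.2.1 as a stand-in destination that never answers the SYN, which is an assumption for the demo rather than the NLB itself:</p>

```shell
# When the SYN-ACK never arrives, curl cannot establish the TCP connection,
# so the reported HTTP code stays "000".
code=$(curl -s -o /dev/null --connect-timeout 2 -w '%{http_code}' http://192.0.2.1/ || true)
echo "connection result: HTTP $code"
```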
<h2 id="why-do-we-get-the-timeout-from-the-internal-nlb">Why do we get the timeout from the Internal NLB</h2>
<p>First, we must know that the NLB introduced the source IP address preservation feature[3]: the original IP addresses and source ports of incoming connections remain unmodified. When the backend answers a request, the VPC internals capture this packet and forward it to the NLB, which then forwards it to its destination.</p>
<p>Therefore, if the NLB forwards traffic back to the same worker node that is running the client side, it generates a random connection timeout. We can dive into this process:</p>
<p><img src="https://focaaby.com/content/images/2021/10/EKS-internal-NLB-seq.png" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></p>
<ol>
<li>
<p>The troubleshooting container starts from an ephemeral port and sends a SYN packet.</p>
<ul>
<li>src: 192.168.27.239:49134</li>
<li>dst: 192.168.168.103:80</li>
</ul>
</li>
<li>
<p>The NLB receives the SYN packet and forwards it to the backend EKS worker node 192.168.2.3 on target port 31468, which was registered by the Kubernetes Service. Because of the client IP preservation feature, the NLB only rewrites the destination IP address and port. Thus, the EKS worker node receives a SYN packet:</p>
<ul>
<li>src: 192.168.27.239:49134</li>
<li>dst: 192.168.2.3:31468</li>
</ul>
</li>
<li>
<p>Based on the iptables rules (Nginx Service) on the EKS worker node, this SYN packet is forwarded to the Nginx pod.</p>
<ul>
<li>src: 192.168.2.3:19969</li>
<li>dst: 192.168.16.142:80<pre><code class="language-bash"># Kubernetes Service will update the following iptables rules in every node.
$ sudo iptables-save | grep &quot;service-nginx-demo&quot;
-A KUBE-NODEPORTS -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m tcp --dport 31919 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m tcp --dport 31919 -j KUBE-SVC-7D7VEXWNCKBQRZ7W
-A KUBE-SEP-2IF7DICDPRGPK5UI -s 192.168.39.2/32 -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-2IF7DICDPRGPK5UI -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m tcp -j DNAT --to-destination 192.168.39.2:80
-A KUBE-SEP-2V4THTSJVFKRX4LC -s 192.168.16.142/32 -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-2V4THTSJVFKRX4LC -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m tcp -j DNAT --to-destination 192.168.16.142:80
-A KUBE-SEP-73EDW25F4C3YFWYZ -s 192.168.49.70/32 -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-73EDW25F4C3YFWYZ -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m tcp -j DNAT --to-destination 192.168.49.70:80
-A KUBE-SERVICES -d 10.100.235.20/32 -p tcp -m comment --comment &quot;nginx-demo/service-nginx-demo: cluster IP&quot; -m tcp --dport 80 -j KUBE-SVC-7D7VEXWNCKBQRZ7W
-A KUBE-SVC-7D7VEXWNCKBQRZ7W -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-2IF7DICDPRGPK5UI
-A KUBE-SVC-7D7VEXWNCKBQRZ7W -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-73EDW25F4C3YFWYZ
-A KUBE-SVC-7D7VEXWNCKBQRZ7W -m comment --comment &quot;nginx-demo/service-nginx-demo:&quot; -j KUBE-SEP-2V4THTSJVFKRX4LC
</code></pre>
</li>
</ul>
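<p>The three <code>KUBE-SVC-7D7VEXWNCKBQRZ7W</code> rules above use uneven probabilities (0.33, then 0.5, then an unconditional match), yet each endpoint receives about one third of the traffic, because every rule only sees the packets the earlier rules passed over. A small simulation of that chain (a sketch of the statistic-module behaviour, not kube-proxy itself):</p>

```shell
# Rule 1 matches 1/3 of all connections; rule 2 matches 1/2 of the remaining
# 2/3; rule 3 takes what is left, so every endpoint ends up with about 1/3.
shares=$(awk 'BEGIN {
  srand(7)
  for (i = 0; i < 30000; i++) {
    if      (rand() < 1/3) a++   # first rule: --probability 0.333...
    else if (rand() < 0.5) b++   # second rule: --probability 0.5 of the rest
    else                   c++   # third rule: unconditional fallthrough
  }
  print a, b, c
}')
echo "endpoint shares: $shares"
```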
</li>
<li>
<p>The Nginx pod replies with a SYN-ACK packet to the EKS worker node. We can view <code>tcp.stream eq 9</code> for steps 3 and 4.</p>
<p><img src="https://focaaby.com/content/images/2021/10/internal-nlb-tcp9.png" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></p>
</li>
<li>
<p>The VPC CNI plugin maintains the route table that binds ENIs to pods. Based on the route table on Node 1, the node delivers the SYN-ACK, which answers the SYN packet from step 2, directly to the troubleshooting pod.</p>
<pre><code class="language-bash"># Node 1 - routing table

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.32.1    0.0.0.0         UG    0      0        0 eth0
169.254.169.254 0.0.0.0         255.255.255.255 UH    0      0        0 eth0
192.168.32.0    0.0.0.0         255.255.224.0   U     0      0        0 eth0
192.168.27.239  0.0.0.0         255.255.255.255 UH    0      0        0 eni708fb089496
...
</code></pre>
</li>
<li>
<p>The troubleshooting pod notices this SYN-ACK belongs to an abnormal connection, and sends an RST packet to close the connection to <code>192.168.2.3:31468</code>.</p>
</li>
<li>
<p>The RST packet is forwarded to the Nginx pod through the iptables rules (Nginx Service) as well. Also, the troubleshooting container sends both TCP retransmissions and RSTs several times until it times out.</p>
<p><img src="https://focaaby.com/content/images/2021/10/internal-nlb-tcp8.png" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></p>
</li>
<li>
<p>The initiating socket (src: 192.168.27.239:49134, dst: 192.168.168.103:80) still expects the SYN-ACK from the NLB 192.168.168.103:80. However, the troubleshooting pod never receives that SYN-ACK, so it retransmits the SYN several times until it times out.</p>
<p><img src="https://focaaby.com/content/images/2021/10/internal-nlb-tcp7.png" alt="EKS service with internal NLB, but gets timeouts randomly" loading="lazy"></p>
</li>
</ol>
<!--kg-card-end: markdown--><h2 id="workaround">Workaround</h2><p>If you don&apos;t need to keep the source IP address, you can consider using the NLB IP mode[4].</p><pre><code class="language-yaml">metadata:
      name: my-service
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb-ip&quot;
</code></pre><h3 id="2021-05-21-update">2021-05-21 Update</h3><p>The NLB now supports configuring client IP preservation[5] in instance mode, which the AWS Load Balancer Controller supports since version v2.2.0[6]. Thus, we can use the following annotations to disable client IP preservation in instance mode.</p><ul><li>Note: Please use the annotation <code>service.beta.kubernetes.io/aws-load-balancer-type: &quot;external&quot;</code> so that the in-tree <code>cloud-controller-manager</code>[7] does not create an in-tree NLB service.</li></ul><pre><code class="language-yaml">...
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: &quot;external&quot;
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: &quot;preserve_client_ip.enabled=false&quot;
...
</code></pre><h2 id="references">References</h2><ol><li>Connecting Applications with Services - <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/?ref=focaaby.com">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></li><li>Network load balancing on Amazon EKS - <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html?ref=focaaby.com">https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html</a></li><li>Target groups for your Network Load Balancers - Client IP preservation<a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html?ref=focaaby.com#client-ip-preservation">https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation</a></li><li><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/nlb_ip_mode/?ref=focaaby.com#nlb-ip-mode">https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/nlb_ip_mode/#nlb-ip-mode</a></li><li><a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html?ref=focaaby.com#client-ip-preservation">https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation</a></li><li>AWS Load Balancer Controller - v2.2.0 - <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.2.0?ref=focaaby.com">https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.2.0</a></li><li>AWS Load Balancer Controller - Network Load Balancer - <a 
href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/?ref=focaaby.com#configuration">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/#configuration</a></li></ol>]]></content:encoded></item><item><title><![CDATA[WordPress Tuning]]></title><description><![CDATA[<h2 id="%E5%89%8D%E8%A8%80">&#x524D;&#x8A00;</h2><p>&#x4EE5;&#x524D;&#x5728;&#x517C;&#x5DEE;&#x5E6B;&#x5FD9;&#x7BA1;&#x7406;&#x7CFB;&#x6240;&#x7DB2;&#x7AD9;&#x6642;&#xFF0C;&#x90A3;&#x6642;&#x5019;&#x5B78;&#x6821;&#x4E3B;&#x6D41; <a href="https://en.wikipedia.org/wiki/Content_management_system?ref=focaaby.com">CMS</a> &#x4F7F;&#x7528;&#x7684;&#x662F; Joomla&#xFF0C;&#x7576;&#x5E74;&#x4E5F;&#x5F9E; apache2 &#x79FB;&#x8F49;&#x4F7F;&#x7528; NGINX &#x505A;&#x4E86;&#x4E9B;&#x5FAE;&#x7684;&#x8ABF;&#x6559;&#xFF0C;&#x4F46;</p>]]></description><link>https://focaaby.com/wordpress-tuning/</link><guid isPermaLink="false">615e26af20b2b300016b8689</guid><category><![CDATA[wordpress]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Mao-Lin Wang]]></dc:creator><pubDate>Sun, 03 Mar 2019 22:38:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1593642532842-98d0fd5ebc1a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wxfDF8YWxsfDF8fHx8fHwyfHwxNjMzNzMyNzUw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="%E5%89%8D%E8%A8%80">&#x524D;&#x8A00;</h2><img src="https://images.unsplash.com/photo-1593642532842-98d0fd5ebc1a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wxfDF8YWxsfDF8fHx8fHwyfHwxNjMzNzMyNzUw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="WordPress Tuning"><p>&#x4EE5;&#x524D;&#x5728;&#x517C;&#x5DEE;&#x5E6B;&#x5FD9;&#x7BA1;&#x7406;&#x7CFB;&#x6240;&#x7DB2;&#x7AD9;&#x6642;&#xFF0C;&#x90A3;&#x6642;&#x5019;&#x5B78;&#x6821;&#x4E3B;&#x6D41; <a href="https://en.wikipedia.org/wiki/Content_management_system?ref=focaaby.com">CMS</a> 
&#x4F7F;&#x7528;&#x7684;&#x662F; Joomla&#xFF0C;&#x7576;&#x5E74;&#x4E5F;&#x5F9E; apache2 &#x79FB;&#x8F49;&#x4F7F;&#x7528; NGINX &#x505A;&#x4E86;&#x4E9B;&#x5FAE;&#x7684;&#x8ABF;&#x6559;&#xFF0C;&#x4F46;&#x662F;&#x4E5F;&#x6C92;&#x6709;&#x8A8D;&#x771F;&#x505A;&#x500B;&#x7B46;&#x8A18;&#x3002;&#x525B;&#x597D;&#x516C;&#x53F8;&#x7684;&#x5B98;&#x7DB2;&#x4E5F;&#x8981;&#x5F9E; windows &#x79FB;&#x8F49;&#x5230; linux &#x4F3A;&#x670D;&#x5668;&#x4E0A;&#xFF0C;&#x4E0D;&#x904E;&#x4F7F;&#x7528;&#x7684;&#x662F; WordPress&#xFF0C;&#x4E5F;&#x7A0D;&#x5FAE; survey &#x4E86;&#x76F8;&#x95DC;&#x8ABF;&#x6559;&#xFF0C;&#x672C;&#x7BC7;&#x70BA;&#x79FB;&#x8F49;&#x904E;&#x7A0B;&#x53CA;&#x76F8;&#x95DC;&#x8ABF;&#x6559;&#x7B46;&#x8A18;&#x3002;</p><h2 id="%E5%AE%89%E8%A3%9D%E8%88%87%E7%A7%BB%E8%BD%89">&#x5B89;&#x88DD;&#x8207;&#x79FB;&#x8F49;</h2><p>&#x4F7F;&#x7528;&#x4F3A;&#x670D;&#x5668;&#x67B6;&#x69CB;&#x70BA;&#x57FA;&#x672C;&#x7684; LNMP&#xFF0C;&#x53EF;&#x4EE5;&#x53C3;&#x8003; <a href="https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04?ref=focaaby.com">DigitalOcean - How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 16.04</a>&#xFF0C;&#x984D;&#x5916;&#x5B89;&#x88DD;&#x7684; php &#x5957;&#x4EF6;&#x6709; <code>php7.0-curl</code>&#x3001;<code>php7.0-zip</code>&#x3002;</p><pre><code class="language-bash">$ apt install php7.0-curl php7.0-zip</code></pre><p>&#x6BD4;&#x8F03;&#x9EBB;&#x7169;&#x7684;&#x9EDE;&#x5728;&#x65BC; windows &#x63DB;&#x884C;&#x7B26;&#x865F;&#x70BA; <code>^M</code> &#xFF0C;&#x56E0;&#x6B64;&#x5FC5;&#x9808;&#x628A;&#x6BCF;&#x500B;&#x6A94;&#x6848;&#x7684;&#x63DB;&#x884C;&#x7B26;&#x865F;&#x5207;&#x63DB;&#x6210; linux &#x7684;&#x63DB;&#x884C;&#x7B26;&#x865F;&#x3002;</p><pre><code class="language-bash"># &#x66FF;&#x63DB;&#x63DB;&#x884C;&#x7B26;&#x865F;
find . -type f -exec dos2unix {} \;
# &#x78BA;&#x4FDD;&#x8CC7;&#x6599;&#x593E;&#x53CA;&#x6A94;&#x6848;&#x6B0A;&#x9650;
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
</code></pre><h2 id="nginx-%E8%A8%AD%E5%AE%9A">Nginx &#x8A2D;&#x5B9A;</h2><h3 id="gzip-%E5%8F%8A-cache">gzip &#x53CA; cache</h3><pre><code class="language-nginx">
# gzip
gzip on;
gzip_static on;
gzip_disable &quot;msie6&quot;;

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
# Don&apos;t compress files smaller than 256 bytes, as size reduction will be negligible.
gzip_min_length 256;
gzip_types
    application/atom+xml
    application/javascript
    application/json
    application/ld+json
    application/manifest+json
    application/rss+xml
    application/vnd.geo+json
    application/vnd.ms-fontobject
    application/x-font-ttf
    application/x-web-app-manifest+json
    application/xhtml+xml
    application/xml
    font/opentype
    image/bmp
    image/svg+xml
    image/x-icon
    text/cache-manifest
    text/css
    text/plain
    text/vcard
    text/vnd.rim.location.xloc
    text/vtt
    text/x-component
    text/x-cross-domain-policy;

# cache
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
</code></pre><h3 id="%E7%B6%B2%E7%AB%99%E8%A8%AD%E5%AE%9A">&#x7DB2;&#x7AD9;&#x8A2D;&#x5B9A;</h3><p>&#x53C3;&#x8003;&#x5B98;&#x7DB2; <a href="https://codex.wordpress.org/Nginx?ref=focaaby.com">Wordpress for Nginx config</a></p><ul><li>General WordPress rules&#xFF1A;&#x4E00;&#x822C; WordPress &#x53CA; php &#x76F8;&#x95DC;&#x8A2D;&#x5B9A;&#x3002;</li><li>Global restrictions file&#xFF1A;&#x4E0D;&#x8A72;&#x958B;&#x653E;&#x6B0A;&#x9650;&#x7D66;&#x4EBA;&#x5B58;&#x53D6;&#x76F8;&#x95DC;&#x8A2D;&#x5B9A;&#x3002;</li></ul><h2 id="tuning">Tuning</h2><h3 id="nginx-%E7%B6%B2%E7%AB%99%E8%A8%AD%E5%AE%9A">NGINX &#x7DB2;&#x7AD9;&#x8A2D;&#x5B9A;</h3><ul><li>&#x76E1;&#x53EF;&#x80FD;&#x7684; cache &#x4E00;&#x4E9B;&#x975C;&#x614B;&#x6A94;&#x6848;&#x3002;</li></ul><pre><code class="language-nginx">location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    expires max;
    log_not_found off;
    access_log off;
}
</code></pre><ul><li>HTTP/2 configuration.</li></ul><h3 id="wordpress-plugin">WordPress Plugin</h3><ul><li><a href="https://wordpress.org/plugins/wp-super-cache/?ref=focaaby.com">WP Super Cache</a>: WordPress relies on PHP dynamically querying the database to render each HTML page. Super Cache pre-generates frequently visited pages as static HTML files and caches them, so visitors can be served directly without hitting PHP. Installing it by following the official guide is sufficient.</li><li><a href="https://wordpress.org/plugins/fast-velocity-minify/?ref=focaaby.com">Fast Velocity Minify</a>: merges and minifies the CSS and JS files scattered across plugins, and also consolidates the files whose URLs carry the WordPress version number, effectively hiding the version string.</li><li><a href="https://wordpress.org/plugins/wp-smushit/?ref=focaaby.com">WP SMush</a>: compresses uploaded images. The free tier compresses 50 images per run, so you have to click through the compression repeatedly.</li><li><a href="https://tw.wordpress.org/plugins/all-in-one-seo-pack/?ref=focaaby.com">All in One SEO</a>: was already configured before I took over the site, but it has helped the site's SEO scoring. Each page can have its own SEO tags, and the sitemap is generated automatically, which noticeably improved the site's search ranking.</li></ul><h2 id="%E5%AE%89%E5%85%A8%E6%80%A7">Security</h2><h3 id="nginx">Nginx</h3><pre><code class="language-nginx"># Block script execution under the uploads directory
location ~* ^/wp-content/uploads/.*\.(html|htm|shtml|php|js|swf)$ {
    deny all;
}
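
# (Illustrative sketch for the "HTTP/2 configuration" bullet earlier in the
# post; this directive is an assumption about your TLS server block, not
# part of the original article. In nginx >= 1.9.5, HTTP/2 is enabled by
# adding the http2 parameter to the TLS listener:)
#     listen 443 ssl http2;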

location ~ /(wp-config|xmlrpc)\.php$ {
    deny all;
}

location ~ /\.ht {
    deny all;
}

# Only allow localhost to log in to and view the admin backend.
# Note: the trailing $ anchor matches only /wp-admin and /wp-login.php
# themselves, not deeper paths such as /wp-admin/admin-ajax.php.
location ~ /(wp-admin|wp-login\.php)$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    allow 127.0.0.1;
    deny  all;
}
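
# (Suggested addition, not in the original post: WordPress also ships
# readme.html in its document root, which discloses the installed version.
# Blocking it complements stripping version strings from asset URLs.)
location ~* /readme\.html$ {
    deny all;
}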
</code></pre><h3 id="ssl">SSL</h3><p>See <a href="https://cipherli.st/?ref=focaaby.com">Cipherli.st - Strong Ciphers for Apache, nginx and Lighttpd</a>.</p><p>Many WordPress and Joomla plugins embed iFrames in their settings pages, so check whether the following directive breaks anything before deciding to add it to your configuration file:</p><pre><code>add_header X-Frame-Options DENY;
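# (Suggested companions to X-Frame-Options, not in the original post;
# verify each against your theme and plugins before enabling:)
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy strict-origin-when-cross-origin;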
</code></pre><h2 id="%E6%B8%AC%E6%95%88%E7%B6%B2%E7%AB%99">Performance Testing Sites</h2><ul><li><a href="https://gtmetrix.com/?ref=focaaby.com">GTMetrix</a>: scores your site on PageSpeed and YSlow metrics, and points out concrete improvements, such as images that could be compressed further or responses missing server-side cache settings.</li><li><a href="https://developers.google.com/web/tools/lighthouse/?ref=focaaby.com">Lighthouse</a>: Google's open-source automated auditing tool. Many of its checks apply very strict standards to page load performance, and it also includes Progressive Web App validation.</li></ul><h2 id="%E7%9B%B8%E9%97%9C%E9%80%A3%E7%B5%90">Related Links</h2><ul><li><a href="https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04?ref=focaaby.com">DigitalOcean - How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 16.04</a></li><li><a href="https://codex.wordpress.org/Nginx?ref=focaaby.com">Wordpress for Nginx config</a></li><li><a href="https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/?ref=focaaby.com">9 Tips for Improving WordPress Performance</a></li><li>WordPress Plugins</li><li><a href="https://wordpress.org/plugins/wp-super-cache/?ref=focaaby.com">WP Super Cache</a></li><li><a href="https://wordpress.org/plugins/fast-velocity-minify/?ref=focaaby.com">Fast Velocity Minify</a></li><li><a href="https://wordpress.org/plugins/wp-smushit/?ref=focaaby.com">WP SMush</a></li><li><a href="https://tw.wordpress.org/plugins/all-in-one-seo-pack/?ref=focaaby.com">All in One SEO</a></li><li><a href="https://cipherli.st/?ref=focaaby.com">Cipherli.st - Strong Ciphers for Apache, nginx and Lighttpd</a></li></ul>]]></content:encoded></item></channel></rss>