Revision as of 22:06, 8 August 2021
Kubernetes @ FIXME
Information
- Endpoint: k8s.fixme.ch
- Credentials are available in file k8s:/etc/kubernetes/admin.conf.
- Currently running on Bellatrix
- Backup: https://git.fixme.ch/Comite/fixme-kube-backup (restricted for secret access)
Services
- Some of the services deployed on our instance:
- Chat
- Etherpad
- Power monitoring
- Fablab wiki
- Led API endpoint
- Trigger
- MQTT gateway
- GitLab: deployment ongoing
Add your own
SSL
- We use cert-manager to manage Let's Encrypt certificates. To have it manage a cert, you only need to add this annotation to your Ingress:
ubuntu@k8s:~$ k edit -n mattermost ingress/mattermost-ingress
[...]
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
[...]
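For a new service, the annotation slots into the Ingress manifest like this. This is a hypothetical sketch: the `example` names, host, and backend service are placeholders, not our actual config; only the `cert-manager.io/cluster-issuer: letsencrypt` annotation is what our cluster expects.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: example
  annotations:
    # Tells our cert-manager to issue/renew a Let's Encrypt cert for the hosts below
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - example.fixme.ch
      secretName: example-tls   # cert-manager stores the issued certificate in this Secret
  rules:
    - host: example.fixme.ch
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```

Apply it with `kubectl apply -f ingress.yaml`; cert-manager then creates the Certificate and fills in the `example-tls` Secret on its own.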
Debug
Access impossible
Sometimes the Ethernet interface loses its address (root cause still to be investigated) and you have to reconfigure it by hand:
ubuntu@k8s:~$ sudo ip addr add 62.220.135.219/32 dev ens6
It should look like this
ubuntu@k8s:~$ ip -4 a show ens6
2: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 62.220.135.205/26 brd 62.220.135.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet 62.220.135.219/32 scope global ens6
       valid_lft forever preferred_lft forever
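Note that a manual `ip addr add` does not survive a reboot. One way to persist the extra /32 would be netplan, sketched below under the assumption that the node uses Ubuntu's default netplan setup (the file name `50-cloud-init.yaml` is the usual cloud-image default; adjust to whatever actually exists in `/etc/netplan/`):

```yaml
# /etc/netplan/50-cloud-init.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    ens6:
      addresses:
        - 62.220.135.205/26   # primary address
        - 62.220.135.219/32   # the extra address that keeps disappearing
```

Then `sudo netplan apply` to activate it.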
Certificate expiration
Sometimes K8S itself breaks because its internal certificates have expired; something like the following can help regenerate them:
# Service state
systemctl stop kubelet.service
systemctl restart docker.service

# Backup
rsync -av /etc/kubernetes/ /root/kubernetes-$(date +%s)/
rsync -av /var/lib/etcd/ /root/etcd-$(date +%s)/

cd /etc/kubernetes
rm {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf}
cd /etc/kubernetes/pki
rm {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt}

# Regen certificates
cd
kubeadm init phase certs all --apiserver-advertise-address 62.220.135.205 --ignore-preflight-errors=all
kubeadm init phase kubeconfig all
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# Check states
kubeadm join 62.220.135.205:6443 --token XXX --discovery-token-ca-cert-hash YYY --ignore-preflight-errors=all
kubectl get nodes
kubectl get all
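Before going through the full regeneration above, it can be worth checking how long the current certs actually have left. A minimal sketch using plain openssl (demonstrated here on a throwaway self-signed cert; on the real node you would point it at `/etc/kubernetes/pki/apiserver.crt` instead):

```shell
# Stand-in for /etc/kubernetes/pki/apiserver.crt: a throwaway self-signed cert
cert=$(mktemp --suffix=.crt)
key=$(mktemp --suffix=.key)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
  -days 365 -keyout "$key" -out "$cert" 2>/dev/null

# Print the expiry date of the cert
openssl x509 -enddate -noout -in "$cert"

# Exit 0 if the cert is still valid 30 days from now, non-zero otherwise
if openssl x509 -checkend $((30*24*3600)) -noout -in "$cert"; then
  echo "cert valid for at least 30 more days"
else
  echo "cert expires within 30 days - consider regenerating"
fi
```

Recent kubeadm versions can also report this directly with `kubeadm certs check-expiration`.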