Compare commits


4 commits
v2 ... main

2 changed files with 54 additions and 1 deletion


@@ -5,6 +5,8 @@ using Docker, with the intention to efficiently deploy to a k3s or k8s cluster u
# How to Use
## How to Configure in .github/workflows/main.yaml
```yaml
jobs:
  deploy_staging:
@@ -15,11 +17,22 @@ jobs:
      with:
        kust_config: kustomize/overlays/testing
      env:
        K3S_YAML: ${{ secrets.K3S_YAML }} # assumes the K3S_YAML secret is set up as described below
    - name: Check output of previous step (kinda dummy)
      run: echo "The start time was ${{ steps.deploy.outputs.time }}"
```
## How to Set Up K3S_YAML
We assume you use k3s; otherwise, adapt these steps to your kubectl configuration.
Do it all in one command (experimental): `wget -q -O - https://source.c3.uber5.com/uber5-public/gha-deploy-to-k3s/raw/branch/main/encode-k3s-yaml.sh | bash`
- On the master node of the k3s cluster, copy `k3s.yaml` (`/etc/rancher/k3s/k3s.yaml`) to `/tmp/` and make it readable for you, then copy it to your machine: `scp your-node-123.uber5.com:/tmp/k3s.yaml /tmp/`
- Change the `server` entry to use the server's public DNS name
- Insert `tls-server-name: worker1` underneath the `server` key. The value (`worker1` in this case) needs to be one of the names in the cert. If you get it wrong, the error message in the pipeline will tell you.
- Encode k3s.yaml with `base64 -i /tmp/k3s.yaml -o /tmp/encoded`, and set the result as the value of a secret `K3S_YAML` in Gitea for the repository under "Settings > Actions > Secrets"
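After these edits, the relevant part of `k3s.yaml` should look roughly like the fragment below (`your-node-123.uber5.com` and `worker1` are placeholders for your own server name and cert name):

```yaml
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://your-node-123.uber5.com:6443
    tls-server-name: worker1
  name: default
```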
# Open Questions
- We use [kustomize](https://kustomize.io/). Is this overkill? Since the complexity of our deployments is usually not that high, this may be more technical complexity than necessary. We could go back to using plain Kubernetes manifests, and just have different ones for staging and prod.

encode-k3s-yaml.sh Executable file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail

# Read a value from the terminal, even when the script itself is piped into bash
read_from_terminal() {
  if [[ -t 0 ]]; then
    # Standard input is an interactive terminal
    read -p "$1" user_input
  else
    # Not interactive (e.g. piped): redirect input for 'read' to the terminal device
    printf "\n" > /dev/tty
    read -p "$1" user_input < /dev/tty
  fi
  echo "$user_input"
}

server_url=$(read_from_terminal "Server URL (public DNS name or IP address for the k3s server): ")
workdir=$(mktemp -d /tmp/encode-k3s-yaml.XXXXXX)
cd "$workdir"
echo "Working directory: $workdir"
# Run this on the k3s master node; reading k3s.yaml may require root
cp /etc/rancher/k3s/k3s.yaml ./
# Update the server URL
sed -i "s/127.0.0.1/$server_url/g" k3s.yaml
# Append 'tls-server-name: kubernetes' after the 'server:' line
sed -i "/server:/a\ \ \ \ tls-server-name: kubernetes" k3s.yaml
# Base64-encode the yaml file on a single line (GNU base64)
base64 -w 0 k3s.yaml > k3s.yaml.b64
echo "Base64 encoded k3s.yaml for use as K3S_YAML for deployment scripts:"
cat k3s.yaml.b64
# Cleanup
cd /
rm -rf "$workdir"
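To sanity-check the encoded output before pasting it into the secret, you can decode it back and compare it to the original. A small sketch, assuming `k3s.yaml` and `k3s.yaml.b64` are the files produced by the script above:

```shell
#!/usr/bin/env bash
# Round-trip check: decoding the base64 output must reproduce the original file.
# Assumes k3s.yaml and k3s.yaml.b64 exist in the current directory.
set -euo pipefail
base64 -d k3s.yaml.b64 | diff - k3s.yaml && echo "round-trip OK"
```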