feat: Add kubeconfig refresh script and fix Ansible Finalize to use public IP
All checks were successful
Deploy Cluster / Terraform (push) Successful in 53s
Deploy Cluster / Ansible (push) Successful in 5m25s

- scripts/refresh-kubeconfig.sh fetches a fresh kubeconfig from CP1
- Ansible site.yml Finalize step now uses public IP instead of Tailscale
  hostname for the kubeconfig server address
- Updated AGENTS.md with kubeconfig refresh instructions
2026-03-29 03:31:36 +00:00
parent 905d069e91
commit 6e5b0518be
3 changed files with 41 additions and 1 deletions


@@ -66,6 +66,13 @@ Repository guide for agentic contributors working in this repo.
- Prefer validating the exact directory you edited, not the whole repo, unless the change is cross-cutting.
- For Flux changes, verify the relevant `Kustomization`/`HelmRelease`/`ExternalSecret` manifests render cleanly before committing.
### Kubeconfig refresh
After a full cluster rebuild, the kubeconfig goes stale (new certs, new IPs). Refresh it with:
- `scripts/refresh-kubeconfig.sh <cp1-public-ip>` (preferred)
- Or manually: `ssh -i ~/.ssh/infra root@<cp1-ip> "cat /etc/rancher/k3s/k3s.yaml" | sed 's/127.0.0.1/<cp1-ip>/g' > outputs/kubeconfig`
- The Ansible `site.yml` Finalize step also rewrites the server address to the public IP during bootstrap.
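As a sketch of what the manual pipeline does (the IP below is a placeholder): k3s writes `127.0.0.1` as the API server address in `k3s.yaml`, and the `sed` step rewrites it to the reachable address:

```shell
# Hypothetical CP1 public IP for illustration.
CP1_IP="203.0.113.10"
# Stand-in for the `server:` line fetched from /etc/rancher/k3s/k3s.yaml.
line='    server: https://127.0.0.1:6443'
# Same substitution as the manual command above.
printf '%s\n' "$line" | sed "s/127.0.0.1/$CP1_IP/g"
```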
## Code Style
### General


@@ -134,7 +134,7 @@
  tasks:
    - name: Update kubeconfig server address
      command: |
-        sed -i 's/127.0.0.1/{{ groups["control_plane"][0] }}.{{ tailscale_tailnet }}/g' ../outputs/kubeconfig
+        sed -i 's/127.0.0.1/{{ hostvars[groups["control_plane"][0]]["ansible_host"] }}/g' ../outputs/kubeconfig
      changed_when: true
    - name: Display success message
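With the Jinja2 expressions expanded by hand (the hostname and IP below are made up), the old and new substitutions differ like this:

```shell
line='    server: https://127.0.0.1:6443'
# Old behavior: Tailscale MagicDNS name -- only resolvable from inside the tailnet.
printf '%s\n' "$line" | sed 's/127.0.0.1/cp1.example.ts.net/g'
# New behavior: the node's public IP from the inventory -- reachable from anywhere.
printf '%s\n' "$line" | sed 's/127.0.0.1/203.0.113.10/g'
```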

scripts/refresh-kubeconfig.sh Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
KUBECONFIG_PATH="$REPO_ROOT/outputs/kubeconfig"
SSH_KEY="${SSH_KEY:-$HOME/.ssh/infra}"
CP1_PUBLIC_IP="${1:-}"
if [ -z "$CP1_PUBLIC_IP" ]; then
  if [ -f "$REPO_ROOT/ansible/inventory.ini" ]; then
    CP1_PUBLIC_IP=$(grep -A2 '\[control_plane\]' "$REPO_ROOT/ansible/inventory.ini" | grep -oP '\d+\.\d+\.\d+\.\d+' | head -1)
  fi
fi
if [ -z "$CP1_PUBLIC_IP" ]; then
  echo "Usage: $0 <control-plane-1-public-ip>"
  echo "  Or ensure ansible/inventory.ini exists with control plane IPs."
  exit 1
fi
echo "Fetching kubeconfig from $CP1_PUBLIC_IP ..."
ssh -i "$SSH_KEY" \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
"root@$CP1_PUBLIC_IP" "cat /etc/rancher/k3s/k3s.yaml" \
| sed "s/127.0.0.1/$CP1_PUBLIC_IP/g" \
> "$KUBECONFIG_PATH"
chmod 600 "$KUBECONFIG_PATH"
echo "Kubeconfig saved to $KUBECONFIG_PATH"
echo "Run: export KUBECONFIG=$KUBECONFIG_PATH"
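The inventory fallback above can be exercised in isolation. The inventory layout here is an assumption (host lines carrying `ansible_host=<ip>`); the extraction only needs an IPv4 address within two lines of the group header:

```shell
# Build a throwaway inventory matching the assumed layout.
inv=$(mktemp)
cat > "$inv" <<'EOF'
[control_plane]
cp1 ansible_host=198.51.100.7

[workers]
w1 ansible_host=198.51.100.8
EOF
# Same extraction the script uses: first IPv4 within 2 lines of [control_plane].
grep -A2 '\[control_plane\]' "$inv" | grep -oP '\d+\.\d+\.\d+\.\d+' | head -1
rm -f "$inv"
```

Note that `grep -oP` requires GNU grep; on BSD/macOS the Perl-regex flag is unavailable.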