Infrastructure-as-Code to deploy services needed by measure 4.2
This project delivers a VS Code dev container that contains all the tools needed. To use that container
you need a running Docker installation; after opening the project in VS Code, choose 'Clone in Volume'.
Building the container will take quite some time.
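
Before opening the project you can quickly verify that the Docker daemon is actually reachable; a minimal sanity check, not part of the project tooling:

```sh
# Succeeds silently if the Docker daemon is up, fails otherwise
docker info > /dev/null 2>&1 && echo "Docker is running" || echo "Start Docker first"
```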
As the project contains encrypted secrets (that are also instantiated when running the container),
you will need to enter your personal gpg passphrase. This of course requires that your gpg key
is registered within the project. If not, please ask a colleague to add your key; any colleague
already registered for the project will be able to do so. You can find the list of registered people
within the `public_gpg_keys` folder of an environment (so every environment has its own list of
registered users).
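
To check whether your key is already registered in a given environment, you can compare your key's fingerprint against the files in that folder; a sketch, with the environment folder name as a placeholder:

```sh
# Fingerprint recipe as in the steps below; <environment> is a placeholder
fingerprint=$(gpg --list-keys -a "<your uid>" | sed '2!d' | tr -d " ")
ls <environment>/public_gpg_keys | grep -i "$fingerprint"
```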
To add a new user to the project, please perform these steps.

On the machine where you have generated your key:

- Find out the key fingerprint:

  ```sh
  userid="Any string that identifies your gpg key"
  fingerprint=$(gpg --list-keys -a "$userid" | sed '2!d' | tr -d " ")
  echo $fingerprint
  ```

  `userid` can be any sub-string that identifies your key, i.e. that is listed behind 'uid' when executing `gpg --list-keys` without the `-a` switch. In this example it could be 'carsten':

  ```
  pub   ed25519 2023-07-24 [SC]
        CC7B10CE8D78010ABB043F8DB1C462E90012ECFE                             # the fingerprint
  uid          [ultimate] Carsten Scharfenberg <carsten.scharfenberg@zalf.de> # the full user id
  sub   cv25519 2023-07-24 [E]
  ```

- Export the public key:

  ```sh
  username=Your_Name
  filename=${fingerprint}_${username}.asc
  gpg --export --armor -o $filename $fingerprint
  ```

  In this case `username` is the actual user name consisting of capitalized first and last name, divided by an underscore, e.g. `Carsten_Scharfenberg`.
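
Note that the `sed '2!d'` recipe depends on gpg's human-readable output layout. If it does not work with your gpg version, a machine-readable variant (a sketch, not the project's canonical command) is:

```sh
# --with-colons prints stable records; field 10 of the 'fpr' record is the fingerprint
fingerprint=$(gpg --list-keys --with-colons "$userid" | awk -F: '/^fpr/ {print $10; exit}')
echo $fingerprint
```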
Within this project structure, on any machine/dev container:

- Import the key manually:

  ```sh
  gpg --import <filename>
  ```

- cd into an environment folder
- Add the public gpg key file to the folder `public_gpg_keys` without changing the filename
- Add the key fingerprint to the file `.sops.yaml` (see the sketch after this list)
- Re-encrypt all secret files:

  ```sh
  find . -name '*.enc.yaml' | xargs sops updatekeys
  ```
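
For orientation, sops registers the allowed pgp keys in the `creation_rules` of `.sops.yaml`; the exact layout in this repo may differ, so treat this as a sketch:

```sh
# Typical shape of .sops.yaml (fingerprint list abbreviated):
#   creation_rules:
#     - path_regex: .*\.enc\.yaml$
#       pgp: >-
#         CC7B10CE8D78010ABB043F8DB1C462E90012ECFE,
#         <fingerprint of the new user>
# Quick check that the new fingerprint made it in:
grep "$fingerprint" .sops.yaml
```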
Find some useful `kubectl` calls below.

The normal call to list 'everything' within a namespace is, e.g.:

```sh
kubectl get all -n fairagro-nextcloud
```

This won't really list everything. Instead you can use:

```sh
kubectl api-resources --namespaced=true --verbs=list -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n fairagro-nextcloud
```

To delete everything from a namespace you would normally call, e.g.:
```sh
kubectl delete all --all -n fairagro-nextcloud
```

In fact this does not delete everything, but keeps, e.g., ConfigMaps, Secrets and
PersistentVolumeClaims. You could also delete the whole namespace to really delete everything:

```sh
kubectl delete namespace fairagro-nextcloud
```

But this is not ideal for this project, because it will also delete Roles and RoleBindings
that are created by the zalf-rdm/misc project and are thus not easy to re-create.
Instead, this is a good alternative:

```sh
kubectl delete all,pdb,configmap,secret,pvc,ingress,serviceaccount --all -n fairagro-nextcloud
```

Consider listing all resources afterwards to check that really
everything except Roles and RoleBindings has been deleted. Note that kubernetes will
automatically recreate `configmap/kube-root-ca.crt` and `serviceaccount/default`.
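
A quick way to perform that check for the protected resources (same example namespace as above):

```sh
# Roles and RoleBindings created by zalf-rdm/misc should still be present
kubectl get role,rolebinding -n fairagro-nextcloud
```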
In case you would like to access the underlying etcd database of kubernetes directly, log into one of the kubernetes nodes. Then:

```sh
sudo -i
set -a
. /etc/ssl/etdctl
set +a
/usr/local/bin/etcdctl get / --prefix --keys-only   # an example
```
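
The values stored under these keys are protobuf-encoded and mostly unreadable in raw form, but listing key names is still useful for orientation. A sketch that assumes the standard upstream `/registry/...` key layout:

```sh
# All etcd keys belonging to one namespace (key layout may differ between setups)
/usr/local/bin/etcdctl get /registry --prefix --keys-only | grep fairagro-nextcloud
```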
We create a CronJob that runs the middleware. Sometimes it's useful to manually
create a pod from this job and execute a shell within it for debugging. The solution
is not that straightforward:
First we need to terminate any running/scheduled jobs/pods that have been created from the cronjob, as otherwise ReadWriteOnce volumes might be mounted several times:

```sh
for job in $(kubectl get jobs -o json | jq -r '
  .items[]
  | select(.metadata.ownerReferences[]?.kind == "CronJob")
  | select(.metadata.ownerReferences[]?.name == "basic-middleware-fairagro-basic-middleware")
  | select((.status.succeeded | not) or (.status.succeeded == 0))
  | .metadata.name
'); do
  kubectl delete job "$job"
done
```

Now we can create a debug job and exec into it:
```sh
JOB_NAME=debug-job
# Create a job/pod in suspended state, so it does not run immediately, and replace
# the command so it does not do anything but wait.
kubectl create job $JOB_NAME \
  --from=cronjob/basic-middleware-fairagro-basic-middleware \
  -o yaml --dry-run=client | \
yq '
  .spec.suspend = true
  | .spec.template.spec.containers[0].command = ["/bin/sh","-c","sleep infinity"]
  | del(.spec.template.spec.containers[0].args)
' | \
kubectl apply -f -
# Now start the job/pod
kubectl patch job $JOB_NAME -p '{"spec":{"suspend":false}}'
# Wait for the job/pod to be running
kubectl wait --for=condition=ready pod -l batch.kubernetes.io/job-name=$JOB_NAME --timeout=300s
# And exec into it
kubectl exec -it $(kubectl get pod -l batch.kubernetes.io/job-name=$JOB_NAME \
  -o jsonpath='{.items[0].metadata.name}') -- /bin/sh
```
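
When you are done debugging, remove the job again so its pod does not keep the ReadWriteOnce volume mounted:

```sh
kubectl delete job $JOB_NAME
```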