I am using CrunchyData Postgres Operator to manage postgres databases on my Kubernetes cluster. This cluster will host multiple databases used by many projects.
I have set up the operator and a PostgresCluster named `postgres` in the namespace `postgres-operator`.
Now I am deploying a Laravel application in the namespace `rates` that needs access to the database with the user `rates` and the database `rates`. Using the CrunchyData Helm chart, I have the following in `values.yaml`:
```yaml
users:
  - name: rates
    databases:
      - rates
```
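For context, here is roughly the PostgresCluster resource this produces, assuming the chart's `users` value maps directly onto the CRD's `spec.users` field (the PostgresCluster user/database fields themselves are part of the operator's API):

```yaml
# Sketch of the rendered resource; assumes the chart passes `users`
# straight through to spec.users.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: postgres
  namespace: postgres-operator
spec:
  users:
    - name: rates
      databases:
        - rates
```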
This creates the secret `postgres-pguser-rates` in the namespace `postgres-operator`. In my Laravel Deployment, I have the following in the container's `env` section:
```yaml
- name: DB_HOST
  valueFrom: { secretKeyRef: { name: postgres-pguser-rates, key: host } }
- name: DB_PORT
  valueFrom: { secretKeyRef: { name: postgres-pguser-rates, key: port } }
- name: DB_DATABASE
  valueFrom: { secretKeyRef: { name: postgres-pguser-rates, key: dbname } }
- name: DB_USERNAME
  valueFrom: { secretKeyRef: { name: postgres-pguser-rates, key: user } }
- name: DB_PASSWORD
  valueFrom: { secretKeyRef: { name: postgres-pguser-rates, key: password } }
```
This reads the connection details from the secret the Postgres operator creates.
The problem: since my Laravel Deployment and my Postgres cluster live in different namespaces, the operator creates the secret in the namespace `postgres-operator`, and I am trying to access it from the Deployment in the namespace `rates`.
I have tried using Role, RoleBinding, and other Kubernetes RBAC features. But since I am referencing the secret from the Deployment manifest to set up environment variables, this method doesn't work.
I also explored kubernetes-reflector; however, since the operator cannot be modified to add the required labels to the secrets it generates, this method doesn't work either.
I have two questions:

1. Mainly, how can I access the DB secret created by the operator from a different namespace?
2. Or, would it be better to deploy a separate DB cluster for each project? It seems too resource-intensive for a lot of simple projects.
How can I access the DB secret created by the operator from a different namespace?
As you have discovered, this isn't an RBAC issue; fundamentally, the `envFrom`/`valueFrom` directives don't support referencing secrets in another namespace. The only way to get what you want is to ensure your application and the secrets are in the same namespace.
I think your idea of using kubernetes-reflector is a good one. It would involve manually labelling the secrets after they have been created, but that's fine; you're going to have to select which secrets get placed in which namespace somehow, and this seems reasonable.
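For reference, kubernetes-reflector selects secrets via annotations rather than labels, so "labelling" here means adding something like the following to the operator-generated secret (annotation keys taken from reflector's documentation; the `rates` namespace is your target):

```yaml
# Annotations to add manually (e.g. with `kubectl annotate secret ...`)
# to postgres-pguser-rates in the postgres-operator namespace.
metadata:
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "rates"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "rates"
```

The catch, as you noted, is that the operator may overwrite or recreate the secret, so you'd have to re-apply the annotations after changes.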
If you're naming things consistently, so that the secret `postgres-pguser-<name>` should be exposed in namespace `<name>`, then you could write your own operator that watches for secrets matching that pattern and copies them to the appropriate namespace. The Operator SDK makes this relatively easy; you could also accomplish the same thing with a shell script that runs `kubectl get secret --watch`. Something like:
```bash
#!/bin/bash
# Fail fast if the namespace to watch isn't configured.
if [[ -z "$WATCH_NAMESPACE" ]]; then
  echo "ERROR: WATCH_NAMESPACE is undefined." >&2
  exit 1
fi

while :; do
  # Watch for secret create/update/delete events; each event prints "secret/<name>".
  kubectl -n "$WATCH_NAMESPACE" get secret -o name --watch | while read -r thing; do
    # Extract the name from "secret/<name>".
    name=${thing#*/}
    echo "found secret name: $name"

    # Only act on secrets matching the operator's naming pattern.
    if [[ $name = postgres-pguser-* ]]; then
      # Derive the target namespace from the secret name
      # (assumes the user name itself contains no hyphens).
      target_ns=${name##*-}

      if kubectl -n "$WATCH_NAMESPACE" get secret "$name" > /dev/null 2>&1; then
        # The secret still exists: copy it to the target namespace,
        # stripping fields that are unique to the source object.
        echo "  copy to $target_ns"
        kubectl -n "$WATCH_NAMESPACE" get secret "$name" -o json |
          jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion)' |
          kubectl -n "$target_ns" apply -f -
      else
        # The secret was deleted: remove the copy from the target namespace.
        echo "  delete from $target_ns"
        kubectl -n "$target_ns" delete secret "$name"
      fi
    fi
  done
  sleep 1
done
```
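The name parsing in that script relies only on standard bash parameter expansion, which you can check in isolation:

```shell
#!/bin/bash
# Simulate one line of "kubectl get secret -o name --watch" output.
thing="secret/postgres-pguser-rates"
name=${thing#*/}       # strip up to the first "/"  -> "postgres-pguser-rates"
target_ns=${name##*-}  # keep text after the last "-" -> "rates"
echo "$name $target_ns"
```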
Run that in a Pod with appropriate permissions and it should do what you want.
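"Appropriate permissions" here means cluster-scoped rights, since the script reads secrets in one namespace and writes them in others. A minimal sketch (all names are illustrative, not prescribed by anything):

```yaml
# ClusterRole/ClusterRoleBinding for the secret-copier Pod's ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-copier
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-copier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-copier
subjects:
  - kind: ServiceAccount
    name: secret-copier
    namespace: postgres-operator
```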
Or, would it be better to deploy a separate DB cluster for each project? It seems too resource-intensive for a lot of simple projects.
I think this would be a much more common configuration.
Depending on the nature of your simple projects, it may make more sense to just deploy a stock Postgres container rather than using all the features offered by the Crunchy Postgres Operator.
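For a small project, that can be as simple as the following sketch (resource names, password handling, and the lack of persistent storage are illustrative only, not production-ready):

```yaml
# Minimal single-instance Postgres using the stock image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rates-postgres
  namespace: rates
spec:
  replicas: 1
  selector:
    matchLabels: { app: rates-postgres }
  template:
    metadata:
      labels: { app: rates-postgres }
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_USER
              value: rates
            - name: POSTGRES_DB
              value: rates
            - name: POSTGRES_PASSWORD
              valueFrom:
                # Assumes you create this secret yourself in the rates namespace.
                secretKeyRef: { name: rates-postgres, key: password }
          ports:
            - containerPort: 5432
```

Since everything lives in the `rates` namespace, the cross-namespace secret problem disappears entirely.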