Kubernetes integration with external Conjur

We have multiple Kubernetes clusters and would like a single Conjur server in a tools cluster to serve them all.

The documentation asks me to create a RoleBinding in each namespace and point it at the Conjur server's namespace.

How do I use authn-k8s with an external Conjur?

Hi @presidenten,

For starters, I recommend disconnecting the concept of a follower from the concept of an authenticator webservice. While our documentation makes it seem like these two things are intrinsically linked, in reality they are not. All we need to understand is that the authenticator webservice needs to be enabled on the follower, and that each cluster will have its own authenticator webservice ID, since the webservice definition holds the connection details the follower uses to reach that cluster's API.
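For example, each cluster gets its own policy branch and webservice in Conjur. A rough sketch of that policy (dev-cluster is just a placeholder service ID, and the variable names follow the pattern used in the authn-k8s documentation):

- !policy
  id: conjur/authn-k8s/dev-cluster
  body:
    - !webservice

    # Connection details the follower uses to reach this cluster's API
    - !variable kubernetes/api-url
    - !variable kubernetes/ca-cert
    - !variable kubernetes/service-account-token

    # CA material this authenticator uses to sign client certificates
    - !variable ca/cert
    - !variable ca/key

    # Application identities permitted to authenticate through this webservice
    - !group apps

    - !permit
      role: !group apps
      privilege: [ read, authenticate ]
      resource: !webservice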

Now with that out of the way, let's look at what needs to be defined on the app side. First, each authenticator webservice will have a Kubernetes service account associated with it. This is the account the follower will use to authenticate to the k8s API so it can inject the client certificate. Let's refer to this service account going forward as conjur-authn-sa. The permissions we intend to give conjur-authn-sa are outlined in a ClusterRole named conjur-authn-role. We would typically recommend creating a dedicated namespace for the conjur-authn-sa account, named conjur-authn-ns, which lets us restrict who has access to the service account. To recap: we create a namespace conjur-authn-ns containing the service account conjur-authn-sa, and we create a ClusterRole called conjur-authn-role.
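Sketched as Kubernetes manifests, that looks roughly like this (the rules mirror the permissions the authenticator needs to enumerate workloads and exec into pods to inject the certificate; check the Conjur docs for the exact list for your version):

apiVersion: v1
kind: Namespace
metadata:
  name: conjur-authn-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: conjur-authn-sa
  namespace: conjur-authn-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: conjur-authn-role
rules:
  # Read workloads so the authenticator can validate a pod's identity
  - apiGroups: [""]
    resources: ["pods", "serviceaccounts"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "replicasets"]
    verbs: ["get", "list"]
  # Exec into the pod to inject the signed client certificate
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "get"]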

Next, we need to bind the conjur-authn-sa service account to the conjur-authn-role ClusterRole in each application namespace. This allows the service account to enumerate the pods in that namespace and inject the client certificate through the k8s API. Typically the application owner would perform this step, then configure their app deployment to use the authn-k8s client and pull the cluster-specific shared connection details from a ConfigMap.
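A RoleBinding for a single application namespace might look like this (my-app-ns and the binding name are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: conjur-authn-role-binding   # placeholder name
  namespace: my-app-ns              # the application's namespace
subjects:
  - kind: ServiceAccount
    name: conjur-authn-sa
    namespace: conjur-authn-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: conjur-authn-role

The shared ConfigMap would typically carry the Conjur appliance URL, account, the cluster's authenticator service ID, and the Conjur SSL certificate, so every deployment in the cluster reuses the same connection details.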

One final note: the follower deployment needs to be modified to force it to talk to the k8s API using the authenticator webservice's configuration details. Otherwise it will default to connecting to the API of the cluster it is deployed on, using the in-cluster service account token and API environment variables. To force the follower to do that:

Add the following to the follower's pod spec:
enableServiceLinks: false
automountServiceAccountToken: false

And add the following to the follower's environment variables:

- name: KUBERNETES_SERVICE_HOST
value: ""
- name: KUBERNETES_SERVICE_PORT
value: ""

Hope that helps!
Regards,
Nate