This article was first published on Official TenX Blog - Medium
If you use Kubernetes, you’ve probably heard of Helm, the Kubernetes package manager, by now. Helm is very useful for quickly installing packages on a Kubernetes cluster, and it offers extras such as release tracking for easy rollbacks. Unfortunately, Helm has an Achilles heel if you want to use it in a shared cluster: its server-side component, Tiller. Tiller is a service that essentially accepts manifests from the Helm client and executes them on the user’s behalf.
Having a server-side component is useful because it allows tasks to be executed asynchronously. For example, you can deploy a chart that uses Helm hooks from your laptop, and the hooks can keep running even after your laptop is closed or disconnected.
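To make that concrete, a hook is just a manifest in the chart carrying a `helm.sh/hook` annotation; Tiller, not the client, drives its execution. A minimal sketch (the Job name, image, and health-check URL are illustrative, not from the original article):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-smoke-test        # hypothetical name
  annotations:
    "helm.sh/hook": post-install              # run after the release is installed
    "helm.sh/hook-delete-policy": hook-succeeded  # clean up the Job if it passes
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: smoke-test
        image: busybox
        # hypothetical smoke test against a service in the release
        command: ["wget", "-qO-", "http://my-service/healthz"]
```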
By default, however, Tiller is effectively the cluster administrator, and anyone with access to the cluster can ask it to act on their behalf. So even though an engineer may only have access to staging, they can run a `helm delete production`, and Tiller willingly executes it for them. For more details, check out this article. So, how do we fix this?
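You can check whether your own cluster is exposed this way. Assuming the common setup where `helm init` puts Tiller in the `kube-system` namespace (names below are the defaults, not something from the article), something like:

```shell
# Which service account does Tiller run as?
# (assumes the default tiller-deploy created by `helm init`)
kubectl -n kube-system get deployment tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# Is that service account bound to cluster-admin?
kubectl get clusterrolebindings -o wide | grep -i tiller
```

If the second command shows a binding to `cluster-admin`, every user who can reach Tiller inherits those powers.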
Step one: set up a local Tiller instance
As that article explains, Helm can run in a “Tiller-less” mode: instead of a shared, cluster-wide Tiller service running as cluster admin, you run a local Tiller that uses your own credentials!
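A minimal sketch of what that looks like with the Helm 2 `tiller` binary (the flags shown exist in Tiller, but the port choice and example chart are assumptions, not taken from the article; the `helm-tiller` plugin automates the same idea):

```shell
# Run Tiller locally; it authenticates to the cluster with your own
# kubeconfig credentials instead of an in-cluster service account.
# --storage=secret keeps release data in Secrets rather than ConfigMaps.
tiller --storage=secret --listen=localhost:44134 &

# Point the Helm client at the local Tiller instead of the in-cluster one.
export HELM_HOST=localhost:44134

# Now every Helm operation is limited by *your* RBAC permissions.
helm install stable/nginx-ingress --name my-ingress   # example chart
```

The key design point: no pod in the cluster holds admin credentials; authorization is decided per user by Kubernetes RBAC, exactly as it would be for `kubectl`.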
Here at TenX, we use Concourse as our CI system and, consequently, the concourse-helm-resource to perform deployments. Since the default image doesn’t support running Helm this way, we made some changes that we hope will be accepted upstream. With the first hurdle out of the way, the next problem was that the CI didn’t have an account of its own to connect to GCP, so on to the next step.
To keep reading, please go to the original article at:
Official TenX Blog - Medium