Installation
Requirements
- Helm 3 is required when installing the Capsule Operator chart. Follow Helm’s official documentation for installing Helm on your operating system.
- A Kubernetes cluster (v1.16+) with the following Admission Controllers enabled:
  - PodNodeSelector
  - LimitRanger
  - ResourceQuota
  - MutatingAdmissionWebhook
  - ValidatingAdmissionWebhook
- A Kubeconfig file accessing the Kubernetes cluster with cluster admin permissions.
- Cert-Manager is required by default but can be disabled. It is used to manage the TLS certificates for the Capsule Admission Webhooks.
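Before installing, you can sanity-check these requirements from your workstation. The commands below are only illustrative: the admission-plugin check assumes a kubeadm-style cluster where the API server runs as a static pod in kube-system, and the cert-manager check assumes it was installed into the cert-manager namespace.
# Confirm the kubeconfig grants cluster-admin-equivalent access
kubectl auth can-i '*' '*' --all-namespaces
# On kubeadm-style clusters, inspect the enabled admission plugins of the API server
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins
# If relying on cert-manager (the default), make sure it is running
kubectl get pods -n cert-manager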
Installation
We officially support installing Capsule only via the Helm chart. The chart itself handles the installation and upgrade of the required CustomResourceDefinitions. The chart is published to the official projectcapsule Helm repository and as an OCI artifact; both are used in the steps below.
Perform the following steps to install the Capsule operator:
Add repository:
helm repo add projectcapsule https://projectcapsule.github.io/charts
Install Capsule:
helm install capsule projectcapsule/capsule --version 0.12.4 -n capsule-system --create-namespace
or (OCI):
helm install capsule oci://ghcr.io/projectcapsule/charts/capsule --version 0.12.4 -n capsule-system --create-namespace
Show the status:
helm status capsule -n capsule-system
Upgrade the Chart
helm upgrade capsule projectcapsule/capsule -n capsule-system
or (OCI):
helm upgrade capsule oci://ghcr.io/projectcapsule/charts/capsule --version 0.12.4 -n capsule-system
Uninstall the Chart
helm uninstall capsule -n capsule-system
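Regardless of which installation path you choose, you can verify after an install or upgrade that the operator and its CRDs are in place. A simple sanity check, assuming the default capsule-system namespace:
kubectl get pods -n capsule-system
kubectl get crds | grep -i capsule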
Production
Here are some key considerations to keep in mind when installing Capsule. Also check out the Best Practices for more information.
Strict RBAC
By default, the Capsule controller runs with the ClusterRole cluster-admin, which provides full access to the cluster. The controller creates RoleBindings on a per-namespace basis that by default reference the ClusterRole admin, and Kubernetes RBAC only lets a subject grant permissions it already holds, so the controller's own permissions must cover everything it hands out. For production environments, however, we recommend configuring stricter RBAC permissions for the Capsule controller. You can enable the minimal required permissions by setting the following value in the Helm chart:
manager:
  rbac:
    strict: true
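As a sketch, the same value can also be applied to an existing release from the command line, reusing the release name and namespace from the installation steps above:
helm upgrade capsule projectcapsule/capsule -n capsule-system --reuse-values --set manager.rbac.strict=true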
This grants the controller the minimal permissions required for its own operation. However, that alone is not sufficient for it to function properly. The ClusterRole for the controller allows aggregating further permissions to it via the following labels:
projectcapsule.dev/aggregate-to-controller: "true"
projectcapsule.dev/aggregate-to-controller-instance: {{ .Release.Name }}
In other words, you must aggregate all ClusterRoles that are assigned to Tenant owners or used for additional RoleBindings. This applies only to ClusterRoles that are not managed by Capsule (see Configuration). By default, the only such ClusterRole granted to owners is admin (not managed by Capsule).
kubectl label clusterrole admin projectcapsule.dev/aggregate-to-controller=true
Verify that the label has been applied:
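For example, by inspecting the ClusterRole directly:
kubectl get clusterrole admin -o yaml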
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
  labels:
    projectcapsule.dev/aggregate-to-controller: "true"
rules:
...
If permissions are missing, the affected Tenants will show an error status reflecting the failure:
kubectl get tnt
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR READY STATUS AGE
green Active 2 False cannot sync rolebindings items: rolebindings.rbac.authorization.k8s.io "capsule:managed:658936e7f2a30e35" is forbidden: user "system:serviceaccount:capsule-system:capsule" (groups=["system:serviceaccounts" "system:serviceaccounts:capsule-system" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:... 5s
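For example, if Tenant owners are additionally granted a custom ClusterRole (here a hypothetical tenant-networking role used in additional RoleBindings), that ClusterRole must carry the aggregation label as well:
kubectl label clusterrole tenant-networking projectcapsule.dev/aggregate-to-controller=true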
Alternatively, you can enable only the minimal required permissions by setting the following value in the Helm chart:
manager:
  rbac:
    minimal: true
Before you enable this option, you must implement the required permissions for your use case. Depending on which features you are using, you may need to take manual action, for example by aggregating extra permissions to the controller as sketched below.
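As a sketch of such manual action, you could aggregate additional permissions to the controller through a dedicated ClusterRole. The name and the rules below (NetworkPolicy management) are purely illustrative; which permissions are actually required depends on the Capsule features you use.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # hypothetical name, for illustration only
  name: capsule-extra-permissions
  labels:
    projectcapsule.dev/aggregate-to-controller: "true"
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]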
Admission Policies
While Capsule provides a robust framework for managing multi-tenancy in Kubernetes, it does not include built-in admission policies for enforcing specific security or operational standards for all possible aspects of a Kubernetes cluster. We provide additional policy recommendations here.
Certificate Management
By default, Capsule delegates its certificate management to cert-manager. This is the recommended way to manage the TLS certificates for Capsule. However, you can also use Capsule’s built-in TLS reconciler to manage the certificates. This is not recommended for production environments. To enable the TLS reconciler, use the following values:
certManager:
  generateCertificates: false
tls:
  enableController: true
  create: true
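Whichever approach you use, you can check that the certificate material was created. The commands below are only a sanity check; the exact secret name (for example capsule-tls) depends on your chart values and is an assumption here:
# List secrets in the Capsule namespace; a TLS secret for the webhooks should be present
kubectl get secrets -n capsule-system
# Confirm the Capsule webhook configurations exist
kubectl get validatingwebhookconfigurations | grep -i capsule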
Webhooks
Capsule makes use of webhooks for admission control. Ensure that your cluster supports webhooks and that they are properly configured. The webhooks are automatically created by Capsule during installation. However, some of these webhooks will cause problems when Capsule is not running (this is especially problematic in single-node clusters). Here are the webhooks you need to watch out for.
Generally, we recommend using matchConditions for all webhooks to avoid problems when Capsule is not running. You should exclude your system-critical components from the Capsule webhooks. For namespaced resources (pods, services, etc.) the webhooks select only namespaces that are part of a Capsule Tenant. If your system-critical components are not part of a Capsule Tenant, they will not be affected by the webhooks. However, if you have system-critical components that are part of a Capsule Tenant, you should exclude them from the Capsule webhooks by using matchConditions as well, or add more specific namespaceSelectors/objectSelectors to exclude them. This can also improve performance.
The Webhooks below are the most important ones to consider.
Nodes
There is a webhook which catches interactions with the Node resource. This webhook is mainly relevant when you make use of Node metadata. In most other cases, it will only cause problems. By default, the webhook is disabled, but you can enable it by setting the following value:
webhooks:
  hooks:
    nodes:
      enabled: true
Alternatively, consider setting the failure policy to Ignore if you don't want to disrupt critical node operations:
webhooks:
  hooks:
    nodes:
      failurePolicy: Ignore
If you still want to use the feature, you can exclude specific requests from the webhook, for example those coming from kubelets or from kube-system service accounts, by setting the following value:
webhooks:
  hooks:
    nodes:
      matchConditions:
        - name: 'exclude-kubelet-requests'
          expression: '!("system:nodes" in request.userInfo.groups)'
        - name: 'exclude-kube-system'
          expression: '!("system:serviceaccounts:kube-system" in request.userInfo.groups)'
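To review which failure policies and match conditions are actually deployed, you can inspect the webhook configurations. The exact configuration names depend on your release, so list them first:
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i capsule
# Then inspect a specific configuration, e.g.:
kubectl get validatingwebhookconfiguration <name> -o yaml | grep -B2 -A6 -E 'failurePolicy|matchConditions'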
Namespaces
Namespaces are the most important resource in Capsule. The Namespace webhook is responsible for enforcing the Capsule Tenant boundaries. It is enabled by default and should not be disabled. However, you may change the matchConditions to exclude certain namespaces from the Capsule Tenant boundaries. For example, you can exclude the kube-system namespace by setting the following value:
webhooks:
  hooks:
    namespaces:
      matchConditions:
        - name: 'exclude-kube-system'
          expression: '!("system:serviceaccounts:kube-system" in request.userInfo.groups)'
GitOps
There are no specific requirements for using Capsule with GitOps tools like ArgoCD or FluxCD. You can manage Capsule resources as you would with any other Kubernetes resource.
ArgoCD
Visit the ArgoCD Integration for more options to integrate Capsule with ArgoCD.
The following manifests will get you started with ArgoCD. You might need to skip validation of the CapsuleConfiguration resource, otherwise there may be errors on the first install:
Information
The Validate=false option is required for the CapsuleConfiguration resource, because ArgoCD tries to validate the resource before the Capsule CRDs are installed via our CRD Lifecycle hook. Upstream Issue. This has mainly been observed in ArgoCD Applications using Server-Side Diff/Apply.
manager:
  options:
    annotations:
      argocd.argoproj.io/sync-options: "Validate=false,SkipDryRunOnMissingResource=true"
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capsule
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: system
  source:
    repoURL: ghcr.io/projectcapsule/charts
    targetRevision: 0.12.4
    chart: capsule
    helm:
      valuesObject:
        crds:
          install: true
        manager:
          options:
            annotations:
              argocd.argoproj.io/sync-options: "Validate=false,SkipDryRunOnMissingResource=true"
            capsuleConfiguration: default
            ignoreUserGroups:
              - oidc:administrators
            users:
              - kind: Group
                name: oidc:kubernetes-users
              - kind: Group
                name: system:serviceaccounts:tenants-system
        monitoring:
          dashboards:
            enabled: true
          serviceMonitor:
            enabled: true
            annotations:
              argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  destination:
    server: https://kubernetes.default.svc
    namespace: capsule-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - ServerSideApply=true
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
      - RespectIgnoreDifferences=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
---
apiVersion: v1
kind: Secret
metadata:
  name: capsule-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: ghcr.io/projectcapsule/charts
  name: capsule
  project: system
  type: helm
  enableOCI: "true"
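Once these manifests are applied, you can watch the Application sync. The file name below is hypothetical; the commands assume the Application was created in the argocd namespace as shown above:
kubectl apply -f capsule-argocd.yaml
kubectl -n argocd get application capsule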
FluxCD
Manifests to get you started with FluxCD:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: capsule
  namespace: flux-system
spec:
  serviceAccountName: kustomize-controller
  targetNamespace: "capsule-system"
  interval: 10m
  releaseName: "capsule"
  chart:
    spec:
      chart: capsule
      version: "0.12.4"
      sourceRef:
        kind: HelmRepository
        name: capsule
      interval: 24h
  install:
    createNamespace: true
  upgrade:
    remediation:
      remediateLastFailure: true
  driftDetection:
    mode: enabled
  values:
    crds:
      install: true
    manager:
      options:
        capsuleConfiguration: default
        ignoreUserGroups:
          - oidc:administrators
        users:
          - kind: Group
            name: oidc:kubernetes-users
          - kind: Group
            name: system:serviceaccounts:tenants-system
    monitoring:
      dashboards:
        enabled: true
      serviceMonitor:
        enabled: true
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: capsule
  namespace: flux-system
spec:
  type: "oci"
  interval: 12h0m0s
  url: oci://ghcr.io/projectcapsule/charts
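With the Flux CLI installed, you can trigger and verify the reconciliation; the names and the flux-system namespace below match the manifests above:
flux reconcile source helm capsule -n flux-system
flux reconcile helmrelease capsule -n flux-system
flux get helmreleases -n flux-system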
Security
Signature
To verify artifacts you need to have cosign installed. This guide assumes you are using v2.x of cosign. All of the signatures are created using keyless signing. You can set the environment variable COSIGN_REPOSITORY to point to the repository holding the signatures. For example:
# Docker Image
export COSIGN_REPOSITORY=ghcr.io/projectcapsule/capsule
# Helm Chart
export COSIGN_REPOSITORY=ghcr.io/projectcapsule/charts/capsule
To verify the signature of the docker image, run the following command.
COSIGN_REPOSITORY=ghcr.io/projectcapsule/capsule cosign verify ghcr.io/projectcapsule/capsule:<release_tag> \
--certificate-identity-regexp="https://github.com/projectcapsule/capsule/.github/workflows/docker-publish.yml@refs/tags/*" \
--certificate-oidc-issuer="https://token.actions.githubusercontent.com" | jq
To verify the signature of the Helm chart, run the following command.
COSIGN_REPOSITORY=ghcr.io/projectcapsule/charts/capsule cosign verify ghcr.io/projectcapsule/charts/capsule:<release_tag> \
--certificate-identity-regexp="https://github.com/projectcapsule/capsule/.github/workflows/helm-publish.yml@refs/tags/*" \
--certificate-oidc-issuer="https://token.actions.githubusercontent.com" | jq
Provenance
Capsule creates and attests to the provenance of its builds using the SLSA standard and meets the SLSA Level 3 specification. The attested provenance may be verified using the cosign tool.
Verify the provenance of the docker image.
cosign verify-attestation --type slsaprovenance \
--certificate-identity-regexp="https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/*" \
--certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
ghcr.io/projectcapsule/capsule:0.12.4 | jq .payload -r | base64 --decode | jq
Verify the provenance of the Helm chart.
cosign verify-attestation --type slsaprovenance \
--certificate-identity-regexp="https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/*" \
--certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
ghcr.io/projectcapsule/charts/capsule:0.12.4 | jq .payload -r | base64 --decode | jq
Software Bill of Materials (SBOM)
An SBOM (Software Bill of Materials) in CycloneDX JSON format is published for each release, including pre-releases. You can set the environment variable COSIGN_REPOSITORY to point to this repository. For example:
# Docker Image
export COSIGN_REPOSITORY=ghcr.io/projectcapsule/capsule
# Helm Chart
export COSIGN_REPOSITORY=ghcr.io/projectcapsule/charts/capsule
To inspect the SBOM of the docker image, run the following command.
COSIGN_REPOSITORY=ghcr.io/projectcapsule/capsule cosign download sbom ghcr.io/projectcapsule/capsule:0.12.4
To inspect the SBOM of the Helm chart, run the following command.
COSIGN_REPOSITORY=ghcr.io/projectcapsule/charts/capsule cosign download sbom ghcr.io/projectcapsule/charts/capsule:0.12.4
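The downloaded SBOM can then be fed into a vulnerability scanner of your choice. As a sketch, using grype (not part of the Capsule tooling; the file name is arbitrary):
COSIGN_REPOSITORY=ghcr.io/projectcapsule/capsule cosign download sbom ghcr.io/projectcapsule/capsule:0.12.4 > capsule-sbom.json
grype sbom:capsule-sbom.json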
Compatibility
Kubernetes compatibility is announced for each release. Generally, Capsule keeps up with the latest upstream Kubernetes version. Note that the Capsule project offers support only for the latest minor version of Kubernetes; backwards compatibility with older versions of Kubernetes and OpenShift is offered by vendors.