ArgoCD helps to deliver applications to Kubernetes by using the GitOps approach, i.e. a Git repository is used as the source of truth, thus all manifests, configs, and other data are stored in a repository.
It can be used with plain Kubernetes manifests, Helm charts, Kustomize, and Jsonnet.
ArgoCD spins up its controller in the cluster and watches for changes in a repository, comparing them with the resources deployed in the cluster and synchronizing their states.
Some really useful additional features: SSO with SAML, so we can integrate it with our Okta; deployments to multiple clusters; Kubernetes RBAC support; a great WebUI and CLI; GitHub, GitLab, etc. webhooks integration; plus Prometheus metrics out of the box.
I’m planning to use ArgoCD to replace our current deployment process that uses Jenkins and Helm, see the Helm: пошаговое создание чарта и деплоймента из Jenkins post (Rus).
Components
ArgoCD consists of three main components – API server, Repository Server, and Application Controller.
- API server (pod: argocd-server): controls the whole ArgoCD instance and all its operations, authentication, access to secrets (which are stored as Kubernetes Secrets), etc
- Repository Server (pod: argocd-repo-server): stores and synchronizes data from configured Git-repositories and generates Kubernetes manifests
- Application Controller (pod: argocd-application-controller): monitors applications in a Kubernetes cluster to keep them in the state described in the repository, and controls the PreSync, Sync, and PostSync hooks
ArgoCD CLI installation
In macOS:
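For example, with Homebrew:

brew install argocd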
In Linux – from the Github repository:
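Something like this (the version below is just an example – pick the one you need from the project's Releases page):

VERSION=v1.7.6
sudo curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/${VERSION}/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd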
Check it:
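For example:

argocd version --client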
Running ArgoCD in Kubernetes
And let’s spin up an ArgoCD instance. We can use a manifest from the documentation which will create all the necessary resources such as CRDs, ServiceAccounts, RBAC roles and bindings, ConfigMaps, Secrets, Services, and Deployments.
I’m pretty sure there is an existing Helm chart for ArgoCD, but this time let’s use the manifest as it is described in the Getting Started guide.
The documentation suggests using the argocd namespace, which would be simpler, but we are not looking for simplicity, so let’s create our own namespace:
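Something like this (dev-1-devops-argocd-ns is the namespace used throughout this post):

kubectl create namespace dev-1-devops-argocd-ns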
Deploy resources:
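For example, with the install.yaml from the official repository:

kubectl -n dev-1-devops-argocd-ns apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml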
Edit the argocd-server Service – change its type to LoadBalancer to get access to the WebUI from the world:
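One way is to patch it in place:

kubectl -n dev-1-devops-argocd-ns patch svc argocd-server -p '{"spec": {"type": "LoadBalancer"}}'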
Find its URL:
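For example:

kubectl -n dev-1-devops-argocd-ns get svc argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'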
The initial password for ArgoCD is generated automatically and is set to the name of its pod, get it with the next command:
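In ArgoCD 1.x it is the argocd-server pod’s name:

kubectl -n dev-1-devops-argocd-ns get pods -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2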
Log in via CLI, don’t pay attention to the certificate error:
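Something like this, using the ELB hostname from the previous step (the placeholder below is not a real address):

argocd login <ELB-DNS-NAME> --username admin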
Change the password:
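With the CLI:

argocd account update-password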
Open the WebUI – again, ignore the SSL warning, we will set it up in a moment – and log in:
LoadBalancer, SSL, and DNS
Cool – we have all services running, now let’s configure a DNS name and an SSL certificate.
AWS ALB and ELB didn’t support gRPC at the time of writing.
Let’s leave the Service with the LoadBalancer type as we did above – it creates an AWS Classic LoadBalancer.
An SSL certificate can be issued with AWS Certificate Manager (ACM).
Download the manifest file:
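For example:

wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml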
Find the argocd-server Service:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
Add the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation, set spec.type to LoadBalancer, and limit access with loadBalancerSourceRanges:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:534***385:certificate/ddaf55b0-***-53d57c5ca706"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - "31.***.***.117/32"
  - "194.***.***.24/29"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
Deploy it:
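Something like:

kubectl -n dev-1-devops-argocd-ns apply -f install.yaml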
Check the ELB’s SSL:
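One way to check which certificate the ELB is serving (the hostname is a placeholder):

openssl s_client -connect <ELB-DNS-NAME>:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates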
Create a DNS record:
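For example, a CNAME to the ELB via the AWS CLI (the zone ID, record name, and ELB hostname below are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "argocd.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "<ELB-DNS-NAME>"}]
    }
  }]
}'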
Open the URL – and get the ERR_TOO_MANY_REDIRECTS error:
ArgoCD SSL: ERR_TOO_MANY_REDIRECTS
Go to Google and find this topic –
Go back to the install.yaml, and in the argocd-server Deployment add the --insecure flag:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: argocd-server
    spec:
      containers:
      - command:
        - argocd-server
        - --staticassets
        - /shared/app
        - --insecure
...
Deploy it again, and check:
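For example (the URL is a placeholder for the DNS record created above):

kubectl -n dev-1-devops-argocd-ns apply -f install.yaml
curl -sI https://argocd.example.com | head -1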
Okay – we are done here.
ArgoCD: deploy from a Github repository
And let’s proceed with the example from the Getting started guide – a Helm deployment will be covered in the following part.
Click on the New App, specify a name, and set Project == default:
In the Git section set the repository URL (the Getting Started guide uses the https://github.com/argoproj/argocd-example-apps repository):
In the Destination, set https://kubernetes.default.svc and the default namespace, then click Create:
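The same can be done from the CLI – a sketch using the guestbook example app from the Getting Started guide:

argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default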
Looks like it was created, but why is its Sync Status Unknown?
Something went wrong:
Try the sync command from the CLI – it didn’t work either:
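Assuming the application was named guestbook, as in the Getting Started guide:

argocd app sync guestbook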
ArgoCD: ComparisonError failed to load initial state of resource
Actually, the more interesting part is the error itself: User "system:serviceaccount:dev-1-devops-argocd-ns:argocd-application-controller" cannot list resource "pods" in API group "" at the cluster scope.
Check the argocd-application-controller ServiceAccount (the Kubernetes: ServiceAccounts, JWT-tokens, authentication, and RBAC authorization post is really helpful here):
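For example:

kubectl -n dev-1-devops-argocd-ns get serviceaccount argocd-application-controller -o yaml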
Yup, we have the system:serviceaccount:dev-1-devops-argocd-ns:argocd-application-controller user created.
And it has the argocd-application-controller ClusterRoleBinding mapped:
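E.g.:

kubectl get clusterrolebinding argocd-application-controller -o wide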
But it can’t perform the list pods action:
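One way to verify this is with impersonation (requires rights to impersonate the ServiceAccount):

kubectl auth can-i list pods --all-namespaces --as system:serviceaccount:dev-1-devops-argocd-ns:argocd-application-controller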
Although its ClusterRole gives all the permissions:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: application-controller
    app.kubernetes.io/name: argocd-application-controller
    app.kubernetes.io/part-of: argocd
  name: argocd-application-controller
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
Check the ClusterRoleBinding again, but this time with the -o yaml output:
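kubectl get clusterrolebinding argocd-application-controller -o yaml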
namespace: argocd – aha, here we are: the binding’s subject points to the ServiceAccount in the default argocd namespace, while our ServiceAccount lives in dev-1-devops-argocd-ns.
Find the two ClusterRoleBindings in the install.yaml – argocd-application-controller and argocd-server – and update their namespaces:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: application-controller
    app.kubernetes.io/name: argocd-application-controller
    app.kubernetes.io/part-of: argocd
  name: argocd-application-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-application-controller
subjects:
- kind: ServiceAccount
  name: argocd-application-controller
  namespace: dev-1-devops-argocd-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-server
subjects:
- kind: ServiceAccount
  name: argocd-server
  namespace: dev-1-devops-argocd-ns
Well, that’s why I mentioned the default argocd namespace and why it’s simpler to use it: with a custom namespace you have to update those bindings yourself.
So, fix it, deploy again:
Try to execute the sync one more time:
And it’s working:
A pod’s logs:
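For example, assuming the guestbook example app was deployed to the default namespace (the pod name is a placeholder – pick the real one from the get pods output):

kubectl -n default get pods
kubectl -n default logs guestbook-ui-<POD-SUFFIX>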
The next step will be to deploy a Helm chart and figure out how to work with Helm Secrets.