Install prerequisite tooling first.
helm repo add atlassian-data-center https://atlassian.github.io/data-center-helm-charts
helm repo update
Use a more powerful instance type in the cluster config first (something like c5.xlarge), otherwise the pods will fail to schedule due to insufficient CPU. Then run:
eksctl create cluster -f eks-cluster-config.yml
helm show values atlassian-data-center/<product> > values.yaml
Create an AWS RDS MySQL 8.0 database. We also need to set some MySQL parameters.
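The parameters can be set through a custom RDS parameter group attached to the instance. A minimal sketch using the AWS CLI (the group name confluence-mysql8 is an assumption; attach the group to your RDS instance and reboot it afterwards):

```shell
# Create a parameter group for the MySQL 8.0 family (name is illustrative)
aws rds create-db-parameter-group \
  --db-parameter-group-name confluence-mysql8 \
  --db-parameter-group-family mysql8.0 \
  --description "Confluence MySQL settings" \
  --region ap-southeast-2

# Allow trigger creation without SUPER; Confluence setup needs this
# when binary logging is enabled
aws rds modify-db-parameter-group \
  --db-parameter-group-name confluence-mysql8 \
  --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate" \
  --region ap-southeast-2
```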
You might also need to append ?sessionVariables=transaction_isolation='READ-COMMITTED' to the JDBC URL in values.yaml, e.g.
url: "jdbc:mysql://your.confluence.ap-southeast-2.rds.amazonaws.com/confluencedb?sessionVariables=transaction_isolation='READ-COMMITTED'"
Connect to the database (use the URL from the console) and create the DB:
CREATE DATABASE confluencedb CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
CREATE USER 'confluenceusr'@'%' IDENTIFIED BY 'somelongpassword';
GRANT ALL PRIVILEGES ON confluencedb.* TO 'confluenceusr'@'%';
Also create a Kubernetes secret and reference it in values.yaml:
kubectl create secret generic counfluencedb --from-literal=username='user' --from-literal=password='pwd'
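The database section of values.yaml would then look roughly like this (a sketch based on the chart's documented values; the host name is a placeholder):

```yaml
database:
  type: mysql
  url: "jdbc:mysql://your.confluence.ap-southeast-2.rds.amazonaws.com/confluencedb?sessionVariables=transaction_isolation='READ-COMMITTED'"
  credentials:
    secretName: counfluencedb       # the secret created above
    usernameSecretKey: username
    passwordSecretKey: password
```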
kubectl create namespace ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress
Configure the shared and local volumes; see https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/efs-create-filesystem.md. In short, run the following, replacing any parameters that differ in your setup:
vpc_id=$(aws eks describe-cluster \
  --name atlassian-cluster \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text \
  --region ap-southeast-2)

cidr_range=$(aws ec2 describe-vpcs \
  --vpc-ids $vpc_id \
  --query "Vpcs[].CidrBlock" \
  --output text \
  --region ap-southeast-2)

security_group_id=$(aws ec2 create-security-group \
  --group-name confluence_efs \
  --description "confluence EFS security group" \
  --vpc-id $vpc_id \
  --output text \
  --region ap-southeast-2)

aws ec2 authorize-security-group-ingress \
  --group-id $security_group_id \
  --protocol tcp \
  --port 2049 \
  --cidr $cidr_range \
  --region ap-southeast-2

file_system_id=$(aws efs create-file-system \
  --region ap-southeast-2 \
  --performance-mode generalPurpose \
  --query 'FileSystemId' \
  --output text)
Get the subnets in the VPC:
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' --output table --region ap-southeast-2
Check the node IPs:
kubectl get nodes
Now run the command below once for each subnet (from the output above) that contains a node, matching by the node's IP address:
aws efs create-mount-target --file-system-id $file_system_id --subnet-id subnet-035ce5337e5f51cbb --security-groups $security_group_id --region ap-southeast-2
The EBS CSI driver is used for the pods' local storage.
First we need to create the IAM service account:
You might need to append the region, e.g.:
eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve --region ap-southeast-2
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster atlassian-cluster --role-name AmazonEKS_EBS_CSI_DriverRole --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --region ap-southeast-2
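Creating the role alone doesn't deploy the driver; you will likely also need to create the EBS CSI addon, mirroring the EFS addon step further below (replace your account id):

```shell
eksctl create addon --name aws-ebs-csi-driver --cluster atlassian-cluster \
  --service-account-role-arn arn:aws:iam::<account id>:role/AmazonEKS_EBS_CSI_DriverRole \
  --force --region ap-southeast-2
```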
You should now see new ebs pods in kube-system namespace:
$ kubectl get pods -n kube-system | grep -i ebs
ebs-csi-controller-76f5dcb948-4fkbb   3/3   Running   0   31s
ebs-csi-controller-76f5dcb948-xxl7t   3/3   Running   0   31s
ebs-csi-node-45rhc                    3/3   Running   0   31s
ebs-csi-node-9xxhh                    3/3   Running   0   31s
The EFS CSI driver is used for the pods' shared storage.
Run this replacing any values with yours:
export cluster_name=atlassian-cluster
export role_name=AmazonEKS_EFS_CSI_DriverRole

eksctl create iamserviceaccount \
  --name efs-csi-controller-sa \
  --namespace kube-system \
  --cluster $cluster_name \
  --role-name $role_name \
  --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve --region ap-southeast-2

eksctl create iamserviceaccount \
  --name efs-csi-node-sa \
  --namespace kube-system \
  --cluster $cluster_name \
  --role-name $role_name \
  --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve --region ap-southeast-2

TRUST_POLICY=$(aws iam get-role --role-name $role_name --query 'Role.AssumeRolePolicyDocument' --output json | \
  sed -e 's/efs-csi-controller-sa/efs-csi-*/' -e 's/StringEquals/StringLike/')
aws iam update-assume-role-policy --role-name $role_name --policy-document "$TRUST_POLICY"
Now create the addon (replace your account id):
eksctl create addon --name aws-efs-csi-driver --cluster atlassian-cluster --service-account-role-arn arn:aws:iam::<account id>:role/AmazonEKS_EFS_CSI_DriverRole --force --region ap-southeast-2
You should now see new efs pods in kube-system namespace:
$ kubectl get pods -n kube-system | grep -i efs efs-csi-controller-76f5dcb948-4fkbb 3/3 Running 0 31s efs-csi-controller-76f5dcb948-xxl7t 3/3 Running 0 31s efs-csi-node-45rhc 3/3 Running 0 31s efs-csi-node-9xxhh 3/3 Running 0 31s
Create/update the YAML files for the local and shared storage volumes, then apply them:
kubectl apply -f confluence-volume-local.yml
kubectl apply -f confluence-volume-shared.yml
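For reference, confluence-volume-shared.yml could look roughly like this (a sketch for the EFS CSI driver; the storage size, storage class name and fs-xxxxxxxx are placeholders — use your $file_system_id):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: confluence-shared-vol-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx   # your $file_system_id
```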
Create the licence secret. Use your own licence or a trial one:
kubectl create secret generic counfluence-licence --from-literal=license-key='xxxxxxxxxxxxxxxxxxx'
Use BX02-9YO1-IN86-LO5G as the Server ID if generating a trial licence.
Update values.yaml with this licence.
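Assuming the chart's documented licence values, the relevant values.yaml section would be roughly:

```yaml
confluence:
  license:
    secretName: counfluence-licence   # the secret created above
    secretKey: license-key
```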
Finally, install Confluence:
helm install confluence-test-install atlassian-data-center/confluence --values values.yaml
Check events for any warnings/errors with this:
kubectl describe pods
To tear the cluster down when you are done:
eksctl delete cluster -f eks-cluster-config.yml
The command above will probably fail part-way; you may need to go to the AWS console and manually delete leftover resources such as VPCs and the EFS file system.
Also uninstall the Helm releases:
helm uninstall confluence-test-install
helm uninstall ingress-nginx --namespace ingress
Check with:
kubectl describe pods
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  45m (x6 over 70m)    default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod..
  Warning  FailedScheduling  4m1s (x11 over 44m)  default-scheduler  0/2 nodes are available: 2 Insufficient cpu. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod..
  ...
  Warning  FailedScheduling  8m57s (x2 over 18m)  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
Make sure the EFS/EBS drivers above are installed correctly.
$ helm test confluence-test-install -n default --logs
POD LOGS: confluence-test-install-db-connectivity-test
Oct 15, 2023 2:20:00 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Exception java.sql.SQLException: No suitable driver found for jdbc:mysql://confluencedb.chesfhqzuwnk.ap-southeast-2.rds.amazonaws.com/confluencedb
      at DriverManager.getConnection (DriverManager.java:702)
      at DriverManager.getConnection (DriverManager.java:228)
      at (#5:1)
Establishing connection to jdbc:mysql://confl.something.ap-southeast-2.rds.amazonaws.com/confluencedb
Error: unable to get pod logs for confluence-test-install-shared-home-permissions-test: pods "confluence-test-install-shared-home-permissions-test" not found
We need to install the MySQL JDBC driver. Before that, we need another pod that mounts the shared volume so we can copy the jar file into it:
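A minimal shared-home-browser.yaml could look like this (a sketch; the image and the claim name confluence-shared-vol-pvc are assumptions — point it at whatever PVC is bound to your shared EFS volume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-home-browser
spec:
  containers:
    - name: browser
      image: debian:stable-slim
      command: ["sleep", "infinity"]   # keep the pod alive for kubectl exec/cp
      volumeMounts:
        - name: shared-home
          mountPath: /shared-home
  volumes:
    - name: shared-home
      persistentVolumeClaim:
        claimName: confluence-shared-vol-pvc   # assumption: your shared-home PVC name
```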
kubectl apply -f shared-home-browser.yaml
If you get an error like:
$ kubectl describe pod shared-home-browser
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    85s                default-scheduler  Successfully assigned default/shared-home-browser to ip-192-168-15-3.ap-southeast-2.compute.internal
  Warning  FailedMount  21s (x8 over 84s)  kubelet            MountVolume.MountDevice failed for volume "confluence-shared-vol-pv" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name efs.csi.aws.com not found in the list of registered CSI drivers
As the message states, make sure the EFS CSI driver is installed (see above). Then continue:
kubectl exec -it shared-home-browser -- bash -c "mkdir -p /shared-home/libraries"
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-j-8.1.0.tar.gz
tar xf mysql-connector-j-8.1.0.tar.gz
kubectl cp $PWD/mysql-connector-j-8.1.0/mysql-connector-j-8.1.0.jar shared-home-browser:/shared-home/libraries
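To confirm the jar landed in the shared home:

```shell
# List the libraries directory on the shared volume
kubectl exec -it shared-home-browser -- ls -l /shared-home/libraries
```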
You might also hit this error if a node sits in an availability zone without an EFS mount target:
No matching mount target in the az ap-southeast-2c. Please create one mount target in ap-southeast-2c, or try the mount target in another AZ by passing the availability zone name option. Available mount target(s) are in az ['ap-southeast-2b', 'ap-southeast-2a']
Go to Amazon EFS > File systems > select the EFS > Network tab > Manage, and add a mount target in the missing AZ (ap-southeast-2c in this instance).
Setup Database Failed The following error(s) occurred: Configuring database failed StatementCallback; uncategorized SQLException for SQL [CREATE TRIGGER denormalised_space_trigger_on_update AFTER UPDATE ON SPACES FOR EACH ROW sp: BEGIN DECLARE isServiceDisabled BOOL DEFAULT TRUE; CALL space_procedure_for_denormalised_permissions(isServiceDisabled); IF (isServiceDisabled) THEN LEAVE sp; END IF; IF (NEW.LOWERSPACEKEY = OLD.LOWERSPACEKEY) THEN LEAVE sp; END IF; INSERT INTO DENORMALISED_SPACE_CHANGE_LOG(SPACE_ID) VALUES (NEW.SPACEID); END;]; SQL state [HY000]; error code [1419]; You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable); nested exception is java.sql.SQLException: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
And
Setup Database Failed The following error(s) occurred: Configuring database failed MySQL session isolation level 'REPEATABLE-READ' is no longer supported. Session isolation level must be 'READ-COMMITTED'. See http://confluence.atlassian.com/x/GAtmDg
Make sure you set the correct options in the RDS parameter group from the database configuration step: log_bin_trust_function_creators = 1 for the first error, and the READ-COMMITTED sessionVariables option in the JDBC URL for the second.