Bootstrapping CrateDB Cloud regions
New regions added to CrateDB Cloud usually no longer run the full CrateDB Cloud stack, but only the “agent” part. This part consists of the essential services required to deploy CrateDB clusters in the respective Kubernetes cluster.
This document covers the steps for bootstrapping such CrateDB Cloud regions, which run without the Cloud API.
Note
In the following steps, names in curly brackets ({}) refer to variable
values that are defined in the same step or one of the previous steps, e.g.
{region} can always be replaced by the actual full region name picked
in the first step.
Steps
1. Pick a name for the region. The region identifier needs to be globally unique across all regions that exist in CrateDB Cloud. The region object itself will be created later on. A good name has the format <purpose>.<location>.<provider>, e.g. eks1.eu-central-1.aws.

2. Generate a password for the Docker service principal and add it to Vault at the path /crate/infra/pillar/cr8/spn with the key {region}:

   $ pwgen -s 32 1
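Before creating anything, the chosen name can be sanity-checked. A minimal sketch; the regex and the provider list are assumptions inferred from the example name, not an official validation rule:

```shell
# Validate a candidate region name against the <purpose>.<location>.<provider>
# convention. The regex and provider list are guesses based on the example
# eks1.eu-central-1.aws, not an authoritative rule.
REGION="eks1.eu-central-1.aws"

if [[ "$REGION" =~ ^[a-z0-9]+\.[a-z0-9-]+\.(aws|azure|gcp)$ ]]; then
  echo "ok: $REGION"
else
  echo "bad region name: $REGION" >&2
  exit 1
fi
```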
3. Add the region to the Terraform state file in the crate/salt GitHub repository and run apply on the cratedb-cloud state:

   $ cd terraform/cratedb-cloud
   $ vim regions.tf   # add new region
   $ vim outputs.tf   # add outputs for new region
   $ terraform init
   $ terraform apply
4. Take the output of the keys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_IAM_USER, ZONE_NAME, and ZONE_ID and add them to Vault at the path /crate/infra/pillar/application/cratedb/{env}/external-dns/{region}. Take the DOCKER_LOGIN credentials and add them to Vault at the path /crate/infra/pillar/common/docker/registries/cr8cloud/{region}.

   Example:

   $ REGION_ID="k8s-1.eu-central-1.aws"
   $ REGION_ENV="prod"
   $ DNS_SECRET="/crate/infra/pillar/application/cratedb/$REGION_ENV/external-dns/$REGION_ID"
   $ vault kv put "$DNS_SECRET" \
       zone_id="..." \
       zone_name="..." \
       aws_iam_user="..." \
       aws_access_key_id="..." \
       aws_secret_access_key="..."
   $ DOCKER_SECRET="/crate/infra/pillar/common/docker/registries/cr8cloud/$REGION_ID"
   $ vault kv put "$DOCKER_SECRET" \
       username="..." \
       password="..."
5. Create a new region in the database by running the following SQL statement. Example values: agent_url https://agent.eks1.us-east-1.aws.cratedb-dev.net, prometheus_url https://prometheus.eks1.us-east-1.aws.cratedb-dev.net, aws_region us-east-1, aws_bucket cratedb-cloud-backup-us-east-1-aws-dev-bucket.

   INSERT INTO "core"."regions" (
       "name",
       "description",
       "organization_id",
       "agent_url",
       "prometheus_url",
       "aws_region",
       "aws_bucket",
       "deprecated",
       "provider",
       "cert_type"
   ) VALUES (
       '{region}',
       'Description for region',
       'region still unavailable',
       'https://agent.{region}.cratedb.net',
       'https://prometheus.{region}.cratedb.net',
       '...',  -- pick appropriate region
       '...',  -- pick appropriate bucket
       FALSE,
       '...',  -- pick appropriate provider
       'wildcard'
   )
Note
Any non-NULL string for organization_id that does not match an existing organization id ensures that the region does not show up in the region listing yet.
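The URL columns follow a fixed pattern derived from the region name. A small sketch of that derivation, using the dev example values from this step (the split between cratedb.net for prod and cratedb-dev.net for dev is taken from the examples above):

```shell
# Derive agent_url and prometheus_url from the region name, following the
# pattern shown in the INSERT statement above. Example values match the
# dev region eks1.us-east-1.aws from this step.
REGION="eks1.us-east-1.aws"
DOMAIN="cratedb-dev.net"   # the INSERT template uses cratedb.net for prod

AGENT_URL="https://agent.$REGION.$DOMAIN"
PROMETHEUS_URL="https://prometheus.$REGION.$DOMAIN"

echo "$AGENT_URL"        # -> https://agent.eks1.us-east-1.aws.cratedb-dev.net
echo "$PROMETHEUS_URL"   # -> https://prometheus.eks1.us-east-1.aws.cratedb-dev.net
```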
6. Run the script create-region-credentials.sh in the crate/cloud GitHub repository to create the credentials used for authentication from the Cloud API and Prometheus to the Cloud Agent and Cloud Telemetry. For now, please ignore the warning about the command being deprecated. Example parameters: eks1.us-east-1.aws and dev.

   $ ./deploy/create-region-credentials.sh {region} {env}
Note
This script requires access to Vault via the command line!
7. Add the newly created credentials to the database. This is done by executing into one of the cloud-api pods and using the region-credentials command to insert the password hash:

   $ region-credentials create {region}
   Password: {password from previous step}
8. Create a service principal for the services in the region. This is done by executing into one of the cloud-api pods and using the service-principals command to create a new service principal. The command warns that it is deprecated, but it is still the way to go for now:

   $ service-principals create {region}

   Then store the generated Access Key and Secret Key in Vault under the path /crate/infra/pillar/application/cloud_app/{env}/service-principals/{region}/ with the keys access_key and access_secret.

9. Set up the region in kubernetes-gitops, similar to how other regions are set up. Use Flux v2 (in the spirit of the infra-westeurope branch) for new clusters. Check out a similar region, preferably on the same cloud provider, and run git checkout --orphan NewRegion there.
Note
You need to update secrets, specific names, URLs, Terraform state, etc., but everything you need is already there in the duplicated branch.
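The orphan-checkout flow from the gitops step can be rehearsed on a throwaway local repository. The region name below is hypothetical; in practice the last two commands run inside a clone of crate/kubernetes-gitops, starting from the branch of a similar region:

```shell
set -e
# Throwaway repository standing in for a clone of crate/kubernetes-gitops.
git init -q demo-gitops
cd demo-gitops
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "state of a similar region (e.g. infra-westeurope)"

# The actual step: an orphan branch shares no history with the source branch,
# but the working tree keeps all files, ready to be edited for the new region.
git checkout -q --orphan eks1.eu-central-1.aws   # hypothetical region name
git symbolic-ref --short HEAD                    # -> eks1.eu-central-1.aws
```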
10. Add the new region as a Jenkins deployment target. This must be configured in crate/salt in various places, i.e. the kubeconfig_shared file and others, as well as in crate/jenkins-dsl, where it must be added to the numerous lists of regions. Sadly there is no single tutorial for this.

11. Add the newly created Kubernetes cluster endpoint to wireguard.rst (infrastructure repo).

12. Add the Kubernetes shared-config to the Ansible runner config (infrastructure repo).

13. Add the Kubernetes user-config to Vault.

14. Re-create the jenkins-runners snapshots with a Packer build and switch the labels to the new snapshots.

15. Deploy the required services using the regular Jenkins jobs.
16. Make the region available for everyone by running the following SQL statement against the database:

    UPDATE "core"."regions" SET "organization_id" = NULL WHERE "name" = '{region}'
17. Add a product to dojo on https://dojo.cr8.net.

18. Add the region to the Jenkins cloud-dr-backup-manifests.
FAQ
Help! I deployed the agent but the API cannot log in (403)!
The agent is normally IP-restricted so that it is only accessible from the outbound IP of where the API lives (the “prod” or “dev” clusters). This means that even if you specify the right credentials, the ingress controller will return a 403 if the IP is not whitelisted. Check the cloud-agent ingress configuration to verify that the correct IPs are allowed.
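One way to debug this is to compare the API's outbound IP against the ingress whitelist. The sketch below uses hard-coded RFC 5737 example addresses; in a real region the list would come from the cloud-agent ingress (for ingress-nginx, the whitelist-source-range annotation), and the exact annotation setup in these clusters is an assumption:

```shell
# Check whether the API's outbound IP is covered by the agent's whitelist.
# Values are illustrative (RFC 5737 documentation addresses); the real list
# comes from the cloud-agent ingress configuration.
WHITELIST="203.0.113.10/32,198.51.100.7/32"
API_IP="198.51.100.7"

# Only handles exact /32 entries; broader CIDR matching needs a real tool.
case ",$WHITELIST," in
  *",$API_IP/32,"*) echo "allowed: $API_IP" ;;
  *)                echo "blocked: add $API_IP/32 to the whitelist" ;;
esac
```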