Hybrid Deployment with Kubernetes Setup Guide (Private Preview)
Follow this guide to set up the Hybrid Deployment model with Kubernetes.
Prerequisites
To use Hybrid Deployment with Kubernetes, you need the following:
- Kubernetes v1.29.x or above.
- Helm v3.16.1 or above to install the Hybrid Deployment Agent Helm chart.
- One of the following cloud-based Kubernetes services:
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Azure Kubernetes Service (AKS)
- Google Kubernetes Engine (GKE)
NOTE: We do not support AWS Fargate for Hybrid Deployment.
- Worker nodes: Minimum 2 vCPUs with x86-64 processors and 4 GB of RAM for each pipeline processing job. The CPU and memory requirements depend on the number of jobs running concurrently. For example, the minimum requirements for a worker node running 8 concurrent jobs are 16 vCPUs with x86-64 processors and 32 GB of RAM.
- Storage:
- The storage must be sufficient to accommodate the total dataset volume for all the connectors you plan to deploy in the cluster. We recommend using storage (for example, a Network File System) that is available to all worker nodes. Some of the recommended storage options are:
- A Persistent Volume Claim (PVC) with ReadWriteMany access mode (see the example manifest after this list).
NOTE: To create a PVC, you need a Storage Class and Persistent Volume with appropriate permissions. For more information, see your cloud service provider's documentation. For more information about checking the status of your PVC, see our FAQ documentation.
- Reliable connectivity between the cluster, source, and destination.
- (Optional) A dedicated namespace for your deployment. If you do not have a dedicated namespace, we will use the default namespace. You can only run one agent per namespace.
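If you plan to use a PVC with ReadWriteMany access, the manifest below is a minimal sketch. The claim name (fivetran-hd-data), namespace, storage class (efs-sc), and size are placeholder assumptions, not Fivetran requirements; use a storage class in your cluster that supports ReadWriteMany and size the volume for your total dataset volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fivetran-hd-data        # placeholder claim name
  namespace: default            # or your dedicated agent namespace
spec:
  accessModes:
    - ReadWriteMany             # required so all worker nodes can mount the volume
  storageClassName: efs-sc      # placeholder; must support ReadWriteMany
  resources:
    requests:
      storage: 100Gi            # size for the total dataset volume of your connectors
After applying the manifest with kubectl apply -f <file>, you can check the claim's status with kubectl get pvc -n <namespace>.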
Setup instructions
Create agent
Log in to your Fivetran account.
Go to the Destinations page and click Add destination.
Select your destination type.
Enter a Destination name of your choice.
Click Add.
In the destination setup form, choose Hybrid Deployment as your deployment model.
Click + Configure new agent.
In the Configure a new agent pane, read the Fivetran On-Prem Software License Addendum, and select the I have read and agree to the terms of the License Addendum and the Software Specific Requirements checkbox.
Click Next.
Choose Kubernetes as the environment for your deployment.
Click Next.
Enter an Agent name.
Click Generate agent token to generate the token and installation command for your agent.
NOTE: Each Hybrid Deployment Agent has a unique token and installation command.
Make a note of the agent token. You will need it to install and start the agent.
Copy the installation command and paste it in a separate file where you can edit it, and then make the following changes to the command:
- (Optional) Replace the default deployment name (hd-agent) with a name of your choice.
- (Optional) Replace the default deployment namespace (default) with the namespace you want to use for your agent.
- Set the value of the data_volume_pvc parameter to your Persistent Volume Claim name. By default, this parameter contains a dummy value (VOL_CLAIM_HERE).
- Update the Helm chart version by replacing the default value of version (0.1.0) with a version that meets your requirements. For more information about the latest version, see the latest releases.
Make a note of the updated command. You will need it to install and start the agent.
Go back to the Fivetran dashboard and click Save.
IMPORTANT: You must install and start the agent before completing the destination setup.
Install and start agent
Log in to the environment where kubectl and Helm are configured to connect to your Kubernetes cluster.
NOTE: You can test the connectivity to your cluster using kubectl cluster-info and helm list --all-namespaces.
Run the agent installation command to install the Helm chart and start the agent.
IMPORTANT: You must run the command with the changes you made and not the default command that you copied from the Fivetran dashboard.
Example:
$ helm upgrade --install hd-agent \
oci://us-docker.pkg.dev/prod-eng-fivetran-ldp/public-docker-us/helm/hybrid-deployment-agent \
--create-namespace \
--namespace default \
--set config.data_volume_pvc=VOL_CLAIM_HERE \
--set config.token="YOUR_TOKEN_HERE" \
--version 0.1.0
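For illustration only, a command edited per the previous step might look like the following sketch, where the deployment name (my-hd-agent), namespace (fivetran), and PVC name (fivetran-hd-data) are hypothetical values; keep your own agent token and replace 0.1.0 with the chart version you chose:
$ helm upgrade --install my-hd-agent \
oci://us-docker.pkg.dev/prod-eng-fivetran-ldp/public-docker-us/helm/hybrid-deployment-agent \
--create-namespace \
--namespace fivetran \
--set config.data_volume_pvc=fivetran-hd-data \
--set config.token="YOUR_TOKEN_HERE" \
--version 0.1.0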
The installation command does the following:
- Creates a ConfigMap with the agent configurations
- Creates the necessary service account, role, and role bindings
- Deploys the Hybrid Deployment Agent Pod, which pulls the latest agent container image
NOTE: For more information about the Helm chart, see our GitHub repository.
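If you want to confirm that these resources were created, the commands below are one way to inspect them; they assume the default hd-agent deployment name and the default namespace, so adjust both to match your installation:
kubectl get configmap -n default
kubectl get serviceaccount,role,rolebinding -n default
kubectl get deployment hd-agent -n default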
Verify whether the agent is up and running.
Run the following commands to verify the agent status:
kubectl get deployments -n <namespace>
kubectl get pods -n <namespace>
Run either of the following commands to verify the agent log:
kubectl logs -l app.kubernetes.io/name=<deployment_name> -n <namespace>
or
kubectl logs pod/<pod_name> --follow -n <namespace>
TIP:
- Once the agent is deployed, you can go to the Fivetran dashboard and view the agent details and status in Account Settings > General > Hybrid Deployment Agents.
- You can use helm list -a to view the list of all Helm charts installed in your environment and helm uninstall <deployment_name> --namespace <namespace> to uninstall a Helm chart.
Agent configuration parameters
The following are the mandatory agent configuration parameters:
- Deployment name
- Namespace
- Persistent Volume Claim name
- Agent token
- Helm chart version
You can set the configuration parameters as command line options using the --set flag. For example:
$ helm upgrade --install hd-agent \
oci://us-docker.pkg.dev/prod-eng-fivetran-ldp/public-docker-us/helm/hybrid-deployment-agent \
--create-namespace \
--namespace default \
--set config.data_volume_pvc=VOL_CLAIM_HERE \
--set config.token="YOUR_TOKEN_HERE" \
--version 0.1.0
Advanced users can also use a values.yaml file to set these parameters. For example:
image: "us-docker.pkg.dev/prod-eng-fivetran-ldp/public-docker-us/ldp-agent:production"
image_pull_policy: "Always"
config:
data_volume_pvc: VOL_CLAIM_HERE
token: YOUR_TOKEN_HERE
labels: {}
node_selector: {}
tolerations: []
agent:
resources:
requests:
cpu: "2000m"
memory: "4Gi"
limits:
cpu: "4000m"
memory: "4Gi"
NOTE:
- The above example of a values.yaml file also contains the default agent container resources.
- If required, you can specify additional labels or node selection options in the values.yaml file.
- You must specify the agent configuration parameters and their values only in the config section of the values.yaml file.
The configuration options supported by the agent in Kubernetes, along with their default values, are listed in the table below. You must specify these parameters only in the config section of the values.yaml file. The ConfigMap created by the Helm chart during the agent installation will by default contain all the configuration parameters you specify in the values.yaml file.
| Parameter | Default Value | Description |
| --- | --- | --- |
| token | your-agent-token | The agent token that appears on the Fivetran dashboard. |
| cleanup_jobs_interval_seconds | 60 | Job cleanup interval in seconds. |
| donkey_container_min_cpu_request | 2 | Minimum CPU request for pipeline processing job. |
| donkey_container_max_cpu_limit | unlimited | Maximum CPU limit for pipeline processing job. |
| donkey_container_min_memory_request | 4Gi | Minimum memory request for pipeline processing job. |
| donkey_container_max_memory_limit | 4Gi | Maximum memory limit for pipeline processing job. |
| test_runner_container_min_cpu_request | 2 | Minimum CPU request for connectivity test job. |
| test_runner_container_max_cpu_limit | unlimited | Maximum CPU limit for connectivity test job. |
| test_runner_container_min_memory_request | 4Gi | Minimum memory request for connectivity test job. |
| test_runner_container_max_memory_limit | 4Gi | Maximum memory limit for connectivity test job. |
| standard_config_container_min_cpu_request | 2 | Minimum CPU request for getting schema (standard configuration) job. |
| standard_config_container_max_cpu_limit | unlimited | Maximum CPU limit for getting schema (standard configuration) job. |
| standard_config_container_min_memory_request | 4Gi | Minimum memory request for getting schema (standard configuration) job. |
| standard_config_container_max_memory_limit | 4Gi | Maximum memory limit for getting schema (standard configuration) job. |
| hva_container_min_cpu_request | 2 | Minimum CPU request for HVA sidecar. |
| hva_container_max_cpu_limit | unlimited | Maximum CPU limit for HVA sidecar. |
| hva_container_min_memory_request | 4Gi | Minimum memory request for HVA sidecar. |
| hva_container_max_memory_limit | 4Gi | Maximum memory limit for HVA sidecar. |
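For example, a config section that shortens the job cleanup interval and raises the memory limit for pipeline processing jobs might look like the sketch below; the values shown are illustrative assumptions, not recommendations, and any parameter you omit keeps the default listed in the table above:
config:
  token: YOUR_TOKEN_HERE                    # agent token from the Fivetran dashboard
  data_volume_pvc: VOL_CLAIM_HERE           # your Persistent Volume Claim name
  cleanup_jobs_interval_seconds: 30         # illustrative value; default is 60
  donkey_container_min_memory_request: 4Gi
  donkey_container_max_memory_limit: 8Gi    # illustrative value; default is 4Gi
Because the Helm chart writes this section into the ConfigMap, you can check the applied values with kubectl describe configmap -n <namespace> after installation.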
Related articles
- Hybrid Deployment Overview
- Hybrid Deployment FAQ