Five strategies for building error-free data pipelines

We take data integrity seriously so your business doesn’t pay the price.
March 29, 2021

Bugs and errors are more than just an annoyance when it comes to your data strategy. At best, they can leave you blind by taking down your day-to-day business reporting. At worst, they can cripple your company’s ability to function if you rely on the data to power parts of your business. And in the world of data pipelines, things inevitably break from time to time. The cause might be an unannounced API change from one of our source partners, a hiccup from a recent update to your warehouse, or something else.

Diagnosing and resolving bugs can take up a great deal of your data engineers’ time. These “fire drill” efforts often derail their current work. We understand how painful this can be, which is why we’ve adopted five strategies for keeping your pipelines bug-free.

1. Build a solid foundation

Building an error-free data pipeline has to start at the beginning. Otherwise, you’ll be constantly paying down your tech debt. Our approach to new connector builds is to work as closely as possible with the platform itself to understand 1) the business processes in the source system, 2) the data model that reflects those processes, and 3) how the processes change the data, so we can capture those changes with perfect integrity.
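
As an illustration of point 3, here is a minimal sketch of cursor-based change capture. The in-memory source, field names and upsert stand-in are assumptions made for the example, not Fivetran’s actual connector code:

```python
from datetime import datetime, timezone

# Toy in-memory "source" standing in for a real API or database table.
SOURCE = [
    {"id": 1, "name": "a", "updated_at": datetime(2021, 3, 1, tzinfo=timezone.utc)},
    {"id": 2, "name": "b", "updated_at": datetime(2021, 3, 15, tzinfo=timezone.utc)},
]

def fetch_changed_rows(since: datetime) -> list[dict]:
    """The source-side change query: only rows updated after `since`."""
    return [r for r in SOURCE if r["updated_at"] > since]

def incremental_sync(cursor: datetime) -> datetime:
    """Capture changes since the last sync and advance the high-water mark.

    Upserting on the primary key keeps the destination consistent even if
    a row shows up in more than one sync window.
    """
    for row in fetch_changed_rows(cursor):
        print("upsert id", row["id"])            # stand-in for the warehouse write
        cursor = max(cursor, row["updated_at"])  # never move the cursor backward
    return cursor  # persist this so the next sync resumes where it left off

next_cursor = incremental_sync(datetime(2021, 3, 10, tzinfo=timezone.utc))
```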

All our new connectors go through rigorous QA testing and release phases, ensuring we’ve pressure-tested every available connector and exposed our code to as many edge cases as possible. Compare this to an open-source route, where you are at the mercy of strangers on the internet: There’s little incentive to find and fix bugs, update pipelines for API changes, or build out new ones.
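
To give a flavor of what that edge-case pressure-testing looks like in practice, here is a minimal sketch using pytest and a hypothetical timestamp-normalizing helper; none of this is Fivetran’s actual test suite:

```python
from datetime import datetime, timezone
import pytest

def parse_updated_at(value):
    """Hypothetical connector helper: normalize a source timestamp field."""
    if value in (None, ""):
        return None  # sources routinely omit or blank out the field
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

@pytest.mark.parametrize("raw, expected", [
    ("2021-03-29T00:00:00Z", datetime(2021, 3, 29, tzinfo=timezone.utc)),
    ("", None),    # empty string, a common API quirk
    (None, None),  # field missing from the payload entirely
])
def test_parse_updated_at_edge_cases(raw, expected):
    assert parse_updated_at(raw) == expected
```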

2. Escalate instantly

We have a global team of engineers and product managers, so Fivetranners are always working on data pipelines somewhere in the world. When a bug does come in, it is instantly escalated to whichever offices are up and running, and all other engineering work is deprioritized. Our “zero-bug policy” requires that any bug be addressed and fixed within 21 days; on average, we resolve them in less than six.

By making bugs the number one priority, we are often able to squash them before our customers ever experience the effects. You don’t need to worry about being on pager duty over the weekend, because we are.

3. Debug securely

Sometimes our engineers will need access to your source or destination for debugging or incident resolution. Because of how we process and replicate the data, this is actually harder than it may seem. Customer data and credentials are encrypted with ephemeral keys both in transit and at rest. The only way our engineers can do any debugging within your source or data warehouse is by getting explicit approval from you, the customer, via your Fivetran dashboard.
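
To make the ephemeral-key idea concrete, here is a minimal sketch using the open-source cryptography package. It illustrates the concept only, under assumed names, and is not Fivetran’s actual encryption scheme:

```python
from cryptography.fernet import Fernet

def process_batch(plaintext: bytes) -> bytes:
    """Encrypt a payload with a key that exists only for this operation.

    The key is generated per call and never persisted, so anyone who
    later captures the ciphertext alone cannot read it.
    """
    key = Fernet.generate_key()   # fresh key, used once and discarded
    f = Fernet(key)
    token = f.encrypt(plaintext)  # what actually travels or sits on disk
    return f.decrypt(token)       # round trip shown for completeness

assert process_batch(b"customer-credentials") == b"customer-credentials"
```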

4. Make uptime and incident reporting public

If something does go wrong with your data pipeline, it’s important to know when, where and why. If your data engineering team is doing this alone, they will spend many hours going line by line to find the error and then fix it. It is therefore of utmost importance that when things do go wrong, everyone gets the most pertinent information as quickly as possible. To that end, we have both a public uptime calendar and public incident reporting. Those looking for even more real-time insights can subscribe to email updates.
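
If you want to wire those public signals into your own alerting, a small poller along these lines works against any Statuspage-style JSON endpoint; the URL and response shape below are assumptions for illustration, not a documented API:

```python
import requests

# Hypothetical endpoint with a Statuspage-style JSON schema; substitute
# the real status page's API URL.
STATUS_URL = "https://status.example.com/api/v2/status.json"

def current_status() -> str:
    """Poll the public status API so on-call tooling can react to
    incidents without anyone refreshing the status page by hand."""
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]["description"]  # e.g. "All Systems Operational"
```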

Offering this public view is important to ensure that everyone has the information they need to solve problems with their data stack. It’s also a way to increase trust and transparency with our customers.

5. Stay proactive behind the scenes

Much of the work we do in steps one through four is visible, but a whole lot more goes on behind the scenes to enable our customers to set and forget their data pipelines. Every quarter, when we go into our development planning process, we carve out significant time and resources to maintain and improve reliability. Platforms, tools, databases, events and more change and update regularly, and we have to keep up.

It’s one thing to be highly reliable, and it’s another to stay that way while continuously driving up performance and maintaining best-in-class security. Pushing the throttle on all three (reliability, performance and security) at the same time is a constant balancing act that we’ve worked hard to perfect.

Test-drive error-free pipelines

Being error-free is about ensuring that your end-to-end data pipeline is always up and running and that your data is always correct. This involves much more than simply keeping our own software up and running at all times so you can check whether your pipelines are down.

See how well our strategies work firsthand by signing up for a 14-day free trial and testing our system yourself.
