Approaches to Data Integration

In this excerpt from The Essential Guide to Data Integration, we describe data integration, the rise of the cloud, and the difference between ETL and ELT.

The following blog post is an excerpt from the book, The Essential Guide to Data Integration: How to Thrive in an Age of Infinite Data. The rest of the book is available to you for free here.

Continued from An Overview of Data Integration and Analytics

Data integration consists of the following steps:

  1. Data is gathered from sensor feeds, manual data entry or software, and stored in files or databases.
  2. Data is extracted from files, databases and API endpoints and centralized in a data warehouse.
  3. Data is cleansed and modeled to meet the analytics needs of various business units.
  4. Data is used to power products or generate business intelligence.

A scalable, sustainable approach to analytics requires a systematic, replicable approach to data integration — a data stack. 

Data Integration With a Data Stack

A data stack consists of tools and technologies that collectively integrate and analyze data from a variety of sources. The components of a data stack include the following (a configuration sketch follows the list):

  1. Data sources:
    1. Applications
    2. Databases
    3. Files
    4. Digital events
  2. Data pipeline and data connectors. Software used to extract data from a data source and load it into a data warehouse.
  3. Data warehouse and/or data lake. A data repository of record designed to permanently accommodate large amounts of data.
  4. Data modeling and/or transformations. It is often necessary to prepare data for analysis by applying custom business logic, such as renaming columns or computing aggregations.
  5. Business intelligence tool. Software meant for summarizing, visualizing and modeling data in order to guide business decisions.
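
To make these components concrete, they can be written down as a single declarative configuration. The sketch below is purely illustrative; every source, table and tool name is a placeholder rather than a reference to a real product.

```python
# A declarative sketch of a small data stack. Every name below is a
# placeholder, not a reference to a real product or service.
data_stack = {
    "sources": {
        "applications": ["crm", "billing"],        # SaaS applications
        "databases": ["postgres://prod-db/app"],   # operational databases
        "files": ["s3://exports/*.csv"],           # file drops
        "digital_events": ["web_clickstream"],     # event streams
    },
    "pipeline": {
        # One connector per source, each extracting and loading data.
        "connectors": ["crm", "billing", "app_db", "clickstream"],
    },
    "warehouse": "analytics",          # repository of record
    "transformations": [
        "rename_columns",              # light cleanup
        "daily_revenue_rollup",        # custom business logic
    ],
    "bi_tool": "dashboards",           # analysis and visualization layer
}
```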

The most basic unit in a data pipeline is a piece of software called a data connector. A data pipeline may contain one or several connectors, each of which extracts data from a source and routes it to a data warehouse. Transformations can either be performed before the data arrives in a data warehouse, or within the data warehouse after the data arrives. Finally, the data is analyzed with the help of a business intelligence tool. The individual components of a data stack can be hosted on-premises or in the cloud.
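
As a deliberately simplified illustration, the Python sketch below shows what a single connector does: extract records from a source and load them into a warehouse table. The API endpoint, credentials and table schema are hypothetical, and a production connector would also handle pagination, retries, type mapping and schema changes.

```python
import requests
import psycopg2

# Hypothetical source endpoint and warehouse connection.
SOURCE_URL = "https://api.example.com/orders"
warehouse = psycopg2.connect("dbname=analytics user=loader")

def sync_orders():
    """Extract rows from the source API and load them into the warehouse."""
    rows = requests.get(SOURCE_URL, timeout=30).json()
    with warehouse, warehouse.cursor() as cur:
        for row in rows:
            # Upsert so repeated runs keep the destination in step with the source.
            cur.execute(
                """
                INSERT INTO orders (id, customer_id, amount)
                VALUES (%s, %s, %s)
                ON CONFLICT (id) DO UPDATE
                SET customer_id = EXCLUDED.customer_id,
                    amount = EXCLUDED.amount
                """,
                (row["id"], row["customer_id"], row["amount"]),
            )

sync_orders()
```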

The Traditional Approach to Data Integration (ETL)

The traditional approach to data integration, known as extract-transform-load (ETL), has been predominant since the 1970s. The acronym ETL is often used colloquially to describe data integration activities in general. ETL evolved at a time when computing power, storage and bandwidth were scarce and expensive.

An ETL system performs the following steps, sketched in code after the list:

  1. Extract – data is extracted from data sources via connectors
  2. Transform – through a series of transformations, the data is rearranged into models as needed by analysts and end-users
  3. Load – data is loaded into a data warehouse
  4. Visualize – the data is summarized and visualized through a business intelligence tool
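
The toy Python pipeline below walks through these steps; the file name, columns and target table are assumptions made for the example. Note that the business-specific model is computed in the pipeline code itself, before anything reaches the warehouse.

```python
import csv
from collections import defaultdict

def extract(path):
    # Extract: read raw records from a source file.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: aggregate into the model analysts asked for -- total
    # revenue per region. A new business question means changing this code.
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += float(row["amount"])
    return totals

def load(totals):
    # Load: only the modeled result is persisted; the raw rows never reach
    # the warehouse. (Printing stands in for a real warehouse insert.)
    for region, revenue in sorted(totals.items()):
        print(f"INSERT INTO revenue_by_region VALUES ('{region}', {revenue})")

load(transform(extract("orders.csv")))
```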

Transformations must be specifically tailored to the unique schemas of both the source data and the destination data models. This means that upstream changes to data schemas, as well as downstream changes to business requirements and data models, can break the software that performs the transformations.

Since ETL does not directly replicate data from each source to the data warehouse, there is no comprehensive repository of record for analytics. Failures at any stage of the process will render the data inaccessible to analysts and require engineering effort to repair.

Limitations of ETL

The traditional ETL process has three serious and related downsides:

  1. Complexity. Data pipelines run on custom code dictated by the specific needs of each transformation.
  2. Brittleness. Parts of the code base can become nonfunctional with little warning, and new business requirements and use cases require extensive revisions of the code.
  3. Inaccessibility. ETL is all but inaccessible to smaller organizations without dedicated data engineers.

The Emergence of Cloud Technology

Even a casual observer of technological trends knows that computation, storage and bandwidth have become cheap and ubiquitous. 

The convergence of these three cost-reduction trends has created the cloud — namely, the use of remote, decentralized, web-enabled computational resources. Cloud technology, in turn, has given rise to a huge range of cloud-native applications and services unshackled from physical infrastructure. 

The Modern Approach to Data Integration: ELT

An extract-load-transform (ELT) stack replaces on-premises technologies with cloud-native SaaS technologies. Properly implemented, the modern data stack delivers continuous data integration and organization-wide accessibility, with a minimum of manual intervention and bespoke code.

Switching the order of the loading and transformation stages addresses each of the three major shortcomings of ETL:

  1. Complexity. The pipeline is simplified — warehousing standard schemas shifts a great deal of pipeline-related work downstream to analysts instead of data engineers.
  2. Brittleness. The pipeline is more resilient and less risky — because transformations are applied after the data is warehoused, breakages caused by changes in source systems mainly affect the analytics layer.
  3. Accessibility. The pipeline is more accessible because it's less labor-intensive to maintain. 

In-warehouse transformations enable the creation of derivative tables, called “views,” without altering the source data. This allows organizations to create a repository of record that is immune to changing business needs or upstream schema changes.

Once the data is warehoused, analysts can use SQL to perform transformations at their discretion. Stoppages and failures will no longer cripple the entire data pipeline or consume significant engineering resources.
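
As an illustration, an in-warehouse transformation can be a single SQL statement that defines a view over raw, already-loaded data. The snippet below assumes a Postgres-compatible warehouse and hypothetical table and column names:

```python
import psycopg2

# Define a derived "view" inside the warehouse. The raw orders table is left
# untouched; only the derived model is (re)created.
warehouse = psycopg2.connect("dbname=analytics user=analyst")
with warehouse, warehouse.cursor() as cur:
    cur.execute(
        """
        CREATE OR REPLACE VIEW revenue_by_region AS
        SELECT region, SUM(amount) AS revenue
        FROM orders
        GROUP BY region
        """
    )
```

Compare this with the ETL sketch earlier: the same business model is produced, but a changed requirement now means editing one SQL statement rather than revising and redeploying pipeline code.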

A Better Way Forward: Automated ELT

The simplified, cloud-based nature of an ELT data stack lends itself easily to automation and outsourcing.

The specific activities involved in ELT include detecting and replicating data changes, lightly cleaning and normalizing data, and updating and creating tables. These activities require a deep knowledge of data sources, extensive data modeling and analytics expertise, and the engineering know-how to build robust software systems. Without an automated data integration tool, your team must perform these activities and develop the requisite capabilities. 
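
The sketch below illustrates one of these activities, incremental change detection, using a persisted high-water mark; the endpoint, query parameter and state table are hypothetical.

```python
import requests
import psycopg2

warehouse = psycopg2.connect("dbname=analytics user=loader")

def incremental_sync():
    with warehouse, warehouse.cursor() as cur:
        # Resume from the last sync point instead of re-reading everything.
        cur.execute("SELECT last_synced_at FROM sync_state WHERE source = 'orders'")
        (since,) = cur.fetchone()
        changed = requests.get(
            "https://api.example.com/orders",
            params={"updated_since": since.isoformat()},
            timeout=30,
        ).json()
        for row in changed:
            # Upsert each changed row into the warehouse's raw table.
            cur.execute(
                """
                INSERT INTO orders (id, amount, updated_at)
                VALUES (%(id)s, %(amount)s, %(updated_at)s)
                ON CONFLICT (id) DO UPDATE
                SET amount = EXCLUDED.amount, updated_at = EXCLUDED.updated_at
                """,
                row,
            )
        # Advance the high-water mark. A real tool would also handle deletes,
        # clock skew and schema changes.
        cur.execute(
            "UPDATE sync_state SET last_synced_at = now() WHERE source = 'orders'"
        )
```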

The main benefits of automated ELT, as with most forms of automation, are savings of time, effort and money. Your data or business intelligence team should focus on providing actionable insights, not on routine, upstream work focused on problems that have already been identified and solved.

Data engineers can leverage the time savings of automated ELT to shift their efforts toward problems impacting external customers, or to pursue higher-value data activities such as machine learning and artificial intelligence. Automated ELT is best thought of as a force multiplier rather than as a replacement for human talent.

Click here to read the next installment in this series!

The excerpt above is from The Essential Guide to Data Integration: How to Thrive in an Age of Infinite Data. The book covers topics such as how data integration fuels analytics, the evolution from ETL to ELT to automated data integration, the benefits of automated data integration, and tips on how to evaluate data integration providers.
