
What Is ETL?

How do data integration architectures work, and how are they continuing to evolve?

By Charles Wang, May 21, 2021

Today’s digital businesses share a common overarching goal: move data in all shapes and forms to a central platform from its moment of creation, and deliver actionable insights at the moment of need. To meet this goal, data needs to be extracted from a plethora of data sources, transformed into an easy-to-understand standardized format, and loaded into a central data warehouse. ETL is what makes that happen, and it stands for “extract, transform, load.”

ETL happens behind the scenes, but it is a key enabler of several business-critical use cases. Here are a few examples:

  • Master data management (MDM) improves the consistency and quality of critical data assets such as customers, suppliers, employees, and locations by bringing them together in a single source of truth using ETL. For many business-critical applications, MDM provides the golden record they can depend on without relying on human data entry or complex manual processes. ETL is the lifeblood that brings MDM to enterprise apps.

  • Cross-application data consistency and sharing are key to delivering a more agile business. ETL makes sharing valuable application metadata and user data easy, allowing businesses to get a complete 360-degree view of their day-to-day operations. Imagine that the moment one of your customers calls or emails your support department, the support representative immediately knows about the issue the customer is facing and the appropriate next step, without having to browse multiple systems or ask another representative in a different department.

  • Data migration using ETL is more than just transferring data from a source system to a destination system. It is about preserving data integrity, moving data securely, and ensuring that the transfer happens quickly yet seamlessly. Whether it’s modernizing to newer hardware or making a replica copy for disaster recovery, no one wants data lost or a migration task that takes hours to finish.

The Three Stages of ETL 

ETL has three main stages – extraction, transformation, and loading. Let’s take a deeper look into these stages. 

Extraction

First, relevant data is discovered and pulled together from multiple data sources and put in a staging area, which is an intermediary destination before the data warehouse. Since it may not be feasible to extract all the data from all the sources at the same time, a staging area helps bring all the data together at different times without overwhelming the data sources. A staging area is also useful if there is a problem loading data into the data warehouse, allowing syncs to be rolled back and resumed as needed.
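
To make this concrete, below is a minimal sketch of an extraction step in Python, assuming a relational source system and a local file-based staging area. The source database, table name, and staging layout are hypothetical stand-ins, not part of any particular ETL product.

```python
# Minimal extraction sketch: pull rows from a source system into a staging
# area. sqlite3 stands in for any source database driver; the database file
# and table names are hypothetical.
import csv
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

STAGING_DIR = Path("staging")

def extract_table(conn: sqlite3.Connection, table: str) -> Path:
    """Copy all rows of a source table into a timestamped staging file."""
    STAGING_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_path = STAGING_DIR / f"{table}_{stamp}.csv"

    cursor = conn.execute(f"SELECT * FROM {table}")
    with out_path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])  # header row
        writer.writerows(cursor)  # stream rows instead of loading all at once
    return out_path

# Each source can be extracted on its own schedule into the staging area,
# so no single source system is overwhelmed.
conn = sqlite3.connect("source_crm.db")  # hypothetical source system
extract_table(conn, "customers")
```

Because each staged file is timestamped, a failed warehouse load can be rolled back and the staged files replayed later, which is exactly the recovery property described above.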

Transformation

This step is all about making data understandable. Data is complicated, comes from a wide variety of sources, and can be challenging to interpret, reconcile, and utilize. For this reason, data needs to be structured, cleaned, and reformatted before it can be used for analytics. Only then can the information be used by data scientists and business analysts to make decisions. Making data understandable involves enriching it with a consistent naming scheme, removing duplicates, cleansing invalid data, standardizing units of measurement, and structuring data, e.g. by splitting and merging data items, to make it easy to connect with business needs.
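
As a rough illustration, here is a minimal sketch of these cleaning steps using pandas on a staged file. The column names, units, and file paths are hypothetical.

```python
# Minimal transformation sketch: consistent naming, deduplication, cleansing,
# and unit standardization on a staged file. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("staging/orders.csv")

# Enrich with a consistent naming scheme.
df = df.rename(columns={"custID": "customer_id", "ord_dt": "order_date"})

# Remove duplicates and cleanse rows with invalid (missing) keys.
df = df.drop_duplicates()
df = df.dropna(subset=["customer_id"])

# Standardize units of measurement: convert weights recorded in lb to kg.
in_lb = df["weight_unit"] == "lb"
df.loc[in_lb, "weight"] = df.loc[in_lb, "weight"] * 0.453592
df.loc[in_lb, "weight_unit"] = "kg"

df.to_csv("staging/orders_clean.csv", index=False)
```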

Loading

The final step in this journey is loading the transformed data into the data warehouse, using batch or stream loading. Within the data warehouse, data is typically organized in a star schema, consisting of large fact tables with millions of rows and smaller dimension tables with fewer rows. Once the data is loaded, materialized views can be constructed to accelerate BI and reporting analysts’ tasks.
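
Below is a minimal sketch of a batch load into a star schema, with SQLite standing in for the warehouse. The fact and dimension tables are hypothetical, and since SQLite lacks materialized views, a precomputed summary table stands in for one.

```python
# Minimal loading sketch: batch-load transformed data into a star schema.
# SQLite stands in for the warehouse; tables and columns are hypothetical.
import csv
import sqlite3

warehouse = sqlite3.connect("warehouse.db")
warehouse.executescript("""
CREATE TABLE IF NOT EXISTS dim_customer (   -- small dimension table
    customer_id INTEGER PRIMARY KEY,
    name        TEXT
);
CREATE TABLE IF NOT EXISTS fact_orders (    -- large fact table
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    amount      REAL,
    order_date  TEXT
);
""")

# Batch-load the cleaned staging file into the fact table.
with open("staging/orders_clean.csv", newline="") as f:
    rows = [(int(r["order_id"]), int(r["customer_id"]),
             float(r["amount"]), r["order_date"])
            for r in csv.DictReader(f)]
warehouse.executemany("INSERT INTO fact_orders VALUES (?, ?, ?, ?)", rows)

# SQLite has no materialized views, so a precomputed summary table stands in.
warehouse.execute("""
CREATE TABLE IF NOT EXISTS mv_revenue_by_customer AS
SELECT customer_id, SUM(amount) AS revenue
FROM fact_orders
GROUP BY customer_id
""")
warehouse.commit()
```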

Challenges of Traditional ETL Systems

ETL has been around since the 1970s and has changed the way data is managed by enterprises. However, it comes with its share of challenges. 

Monolithic, unscalable architectures

Today’s application demands have become more complex, and the volume of data is ever-increasing. Beefier hardware alone cannot solve the problem; the data processing infrastructure needs to scale out across more servers. Traditional ETL systems are not engineered for scale, which means they typically sacrifice the granularity of raw data processing as data volumes increase. They are monolithic, complex beasts to manage, and they require very expensive scale-up SMP (symmetric multiprocessing) machines to run. Most IT organizations thus minimize the impact on the data warehouse by scheduling batch ETL tasks during off-peak hours. As business needs become more real-time, ETL batch processing will no longer meet common business requirements.

Labor-intensive development

Typically, traditional ETL engines are written in low-level languages such as COBOL and C++ and leverage parallel processing frameworks. They need to be manually tuned to meet specific configuration requirements, making them expensive to build and hard to manage without specially trained staff. With this level of manual coding and technical know-how required, any small change or customization in the technology stack can easily destabilize the infrastructure and disrupt daily operations.

Handling newer data formats

Traditionally, ETL tools were designed with relational data in mind. However, today’s businesses leverage all sorts of constantly changing big data: videos, social media, server logs, crowdsourced data, and more. It is tedious to constantly monitor changes to source interfaces and rebuild integrations. To support these varied data formats, traditional ETL tools often feature additional data transformation options. However, without a purpose-built solution for unstructured data, these band-aid fixes are expensive and do not scale.

Moving Beyond ETL

A lot has changed since the origins of ETL. The future of ETL is bright, but it requires a reimagining of the data integration workflow. Today’s computing paradigm has shifted from on-premises to the cloud, and this movement has re-energized ETL in several ways.

ETL reimagined: ELT

With modern cloud-native ETL technologies, companies do not need to design, build, or manage ETL hardware — they simply choose the connectors for their pipelines and get billed based on consumption. If enterprises still host their data on-premises, some ETL technologies support remote engines that can extract data behind the firewall and migrate it to the cloud. If more processing power is needed, more servers are automatically started in the cloud, and when storage needs grow, massive storage pools with better price-to-performance ratios can be attached on demand.

Data warehousing technology has also leapt forward, allowing raw data to be written directly to underlying cloud storage as-is, without any prior transformations. There is no more fear of an “exploding” data warehouse, since the cloud supports ample data storage and scalable computing power. This brings us to ELT, or “extract, load, transform.” Because ELT pipelines load data into the data warehouse, where it is eventually processed, staged, and transformed, analysts can postpone the transformation step until after the data arrives. This preserves the original granularity of the data and allows transformations to be performed using SQL. If existing data models don’t meet business needs, analysts can easily re-compute newer materialized views. This is far more efficient than running the entire extract and transform steps again, as would be necessary under ETL.
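
As a sketch of the ELT pattern, assume raw data has already been landed in the warehouse untouched; the transformation is then just SQL that can be re-run whenever the model changes. SQLite again stands in for a cloud warehouse, and the raw_orders table is hypothetical.

```python
# Minimal ELT sketch: raw data was loaded untransformed, preserving its
# original granularity; models are derived inside the warehouse with SQL.
# SQLite stands in for a cloud warehouse; raw_orders is hypothetical.
import sqlite3

warehouse = sqlite3.connect("warehouse.db")
warehouse.executescript("""
DROP TABLE IF EXISTS daily_revenue;
CREATE TABLE daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM raw_orders
WHERE amount IS NOT NULL      -- cleansing happens after the load
GROUP BY order_date;
""")
warehouse.commit()
# If the model no longer fits business needs, only this SQL is re-run
# against the raw data; there is no need to repeat extract and load.
```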

Operationalizing data: Reverse ETL

To break down business silos, essential data needs to be delivered to the right team at the right time in an accessible manner. Reverse ETL empowers organizations to meet this requirement by syncing transformed data from data warehouses into operational systems and tools such as CRMs and ERPs. This ensures a consistent view of critical business data across all the SaaS systems in the enterprise. Imagine the boost to the customer experience if the marketing department can personalize emails based on the customer’s past buying data in a CRM, or if, using the same data, a customer service representative knows exactly how to help the customer. 
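
Here is a minimal sketch of a reverse ETL sync: modeled rows are read from the warehouse and pushed to an operational tool over HTTP. The CRM endpoint, payload fields, and token are hypothetical placeholders rather than a real API.

```python
# Minimal reverse ETL sketch: sync a modeled warehouse table to a CRM.
# The endpoint, payload fields, and token below are hypothetical.
import json
import sqlite3
from urllib import request

warehouse = sqlite3.connect("warehouse.db")
rows = warehouse.execute(
    "SELECT customer_id, revenue FROM mv_revenue_by_customer").fetchall()

for customer_id, revenue in rows:
    req = request.Request(
        f"https://crm.example.com/api/customers/{customer_id}",  # hypothetical
        data=json.dumps({"lifetime_value": revenue}).encode(),
        method="PATCH",
        headers={"Authorization": "Bearer <token>",
                 "Content-Type": "application/json"},
    )
    # request.urlopen(req)  # would perform the sync against a real endpoint
```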

Data Integration Is Here to Stay

In the ever-evolving landscape of data management, ETL has evolved into fully-managed ELT as well as reverse ETL. Data integration will continue to grow in importance as newer cloud data technologies become available. As data becomes an increasingly central part of your enterprise, there will always be a need to scale, transform, and extract actionable business insights from it. 
