A primer to the world of data

How does data become useful knowledge and products?
July 15, 2019

The world is saturated with data. Websites, apps, devices and sensors embedded in machines, buildings and vehicles continuously collect and stream enormous volumes of information. This data is used to guide business decisions and power artificially intelligent products that we interact with daily.

How is data transformed from raw signals and scraps of information into useful knowledge and products? The process involves several stages, in roughly the following order:

  1. Data is gathered from sensor feeds, manual data entry or software and stored in files or databases.
  2. Data is extracted from files, databases, and API endpoints and centralized in data warehouses.
  3. Data is processed to meet the needs of various business units.
  4. Data is used for business intelligence or to power products.

The tools and technologies an organization uses to execute this process form its data stack. Modern data stacks are hosted in the cloud.

What is the cloud and what does it have to do with data?

The “cloud” refers to the use of internet-enabled, decentralized computation and storage. Cloud technology distributes software and data across internet-connected machines as needed, allowing organizations to easily scale their operations up and down. Traditionally, organizations hosted their code and data on-premises, on hardware that they owned; at larger scales, they operated their own data centers, designing and building proprietary IT infrastructure.

There is little need today for most organizations to host their software and data on-premises. With the plummeting cost of storage, computation and internet bandwidth, the cloud offers accessible, cheap, performant and scalable off-the-shelf solutions to a range of IT infrastructure needs.

Many products and services are now “cloud-native,” meaning they are designed from the ground up to leverage web infrastructure. These products and services include every element of the data stack: data sources, data pipelines, data warehouses and business intelligence tools. The providers of such products and services frequently strive to make their clients’ experiences as easy and painless as possible. Such services are called fully managed services.

Today, the third-party data centers that host these products and services are frequently provided by large tech companies such as Amazon, Google and Microsoft.

1. Where data comes from

Data can originate from sensor inputs, such as scans at a checkout line; manual data entry, such as forms collected by the Census Bureau; digital documents and content, such as social media posts; or digital activity recorded by software triggers, such as clicks on a website. The data is typically stored in cloud-based digital files and operational databases. These files and databases may be made directly accessible to the parties that need them, or exposed through API endpoints that stream the data.
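
As a minimal illustration of this first stage, the sketch below reads a CSV export and a table from an operational database into Python. The file, table and column names are hypothetical, and SQLite stands in for a production database.

    import csv
    import sqlite3

    # Read a CSV export, e.g. point-of-sale scans (file name is hypothetical)
    with open("checkout_scans.csv", newline="") as f:
        scans = list(csv.DictReader(f))

    # Read rows from an operational database (SQLite stands in for a
    # production system; table and column names are hypothetical)
    conn = sqlite3.connect("store.db")
    orders = conn.execute(
        "SELECT order_id, customer_id, total FROM orders"
    ).fetchall()
    conn.close()

    print(f"{len(scans)} scan records, {len(orders)} order records")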

Organizations today use a wide range of cloud applications to provide services such as customer relationship management, payment processing, enterprise resource planning and more. Data generated by these applications not only provides a high-level overview of an organization’s performance, but also invaluable insight at the level of individual accounts. A highly capable data science team using a rich data set can predict customers’ needs as well as seasonal trends with uncanny accuracy.

Concepts:

  1. Digital files – Files that store structured or semi-structured data, as in a spreadsheet. Examples include CSV (comma-separated values), JSON (JavaScript Object Notation) and TSV (tab-separated values).
  2. Database – A software application that stores data in a structured, typically relational manner
  3. Operational database – A database that is updated in real time and meant to support day-to-day operations. For instance, an ecommerce website will likely have an operational database to record transactions and store listings and customer profiles.
  4. API endpoint – Application programming interfaces allow applications to communicate with each other. An endpoint is one end of such a communications channel. An endpoint streams data in a machine-readable format such as XML or JSON (a minimal sketch follows this list).
  5. Data science – An umbrella term for the analytical use of data
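
To make the API endpoint concept concrete, the sketch below requests JSON from a hypothetical endpoint and parses the response. The URL and field names are illustrative only, and a real endpoint would also require authentication.

    import json
    from urllib.request import urlopen

    # Hypothetical endpoint that streams recent events as JSON
    url = "https://api.example.com/v1/events?since=2019-07-01"

    with urlopen(url) as response:
        events = json.loads(response.read().decode("utf-8"))

    # Each event is a machine-readable record, e.g. a click on a website
    for event in events[:5]:
        print(event.get("event_type"), event.get("timestamp"))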

Notable data sources:

  1. Salesforce – Leading customer relationship management (CRM) platform
  2. NetSuite – Popular suite of enterprise resource planning (ERP) software
  3. Zendesk – Used for customer service ticketing
  4. Zuora – Used to manage subscriptions and billing
  5. Shopify – Popular ecommerce platform
  6. Square – Popular software for retail transactions
  7. Google AdWords – Common online advertising platform

See more data sources here.

2. Centralizing data

An organization will typically contain multiple teams using a variety of applications to aid different parts of its workflow. In order to fully leverage its data, the organization must extract and load it into a central environment to gain a comprehensive view of its operations and track individual entities across multiple applications. The destination for this data is typically a data warehouse, which, unlike an operational database, is meant to be a structured repository of record for the purposes of analytics and business intelligence. Some organizations opt to use data lakes, which store both structured and raw, unstructured data.

This work can be conducted on an ad-hoc basis, or a dedicated data engineering team can build custom software to ingest the various files, database tables and API feeds. A more practical approach is to use pre-built software to outsource or automate some or all of the process, i.e., using a fully managed service. These tools are referred to as data connectors or data pipelines.
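
At its core, a data connector simply extracts records from a source and loads them into the warehouse. The toy sketch below assumes a hypothetical CSV export and uses SQLite as a stand-in for a cloud data warehouse; real connectors add incremental updates, schema handling and error recovery.

    import csv
    import sqlite3

    # Extract: read records from a source file (hypothetical export)
    with open("checkout_scans.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Load: write the records into a warehouse table (SQLite stands in here)
    warehouse = sqlite3.connect("warehouse.db")
    warehouse.execute(
        "CREATE TABLE IF NOT EXISTS scans (sku TEXT, store_id TEXT, scanned_at TEXT)"
    )
    warehouse.executemany(
        "INSERT INTO scans (sku, store_id, scanned_at) VALUES (?, ?, ?)",
        [(r["sku"], r["store_id"], r["scanned_at"]) for r in rows],
    )
    warehouse.commit()
    warehouse.close()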

Concepts:

  1. Extraction – Reading data from a data source
  2. Loading – Writing data to a data warehouse
  3. Data connector/data pipeline – Software used to extract data from a source and load it into a data warehouse
  4. Data warehouse – A data repository that, like a database, typically has a relational structure but, unlike a database, is meant to be a central repository of record for the purposes of analytics
  5. Data lake – A data repository meant to permanently accommodate large amounts of raw, unstructured data

Notable data pipeline tools:

  1. Fivetran – (That’s us!) A data pipeline and ELT tool featuring a wide range of proprietary data connectors
  2. Stitch – A cheap, accessible data pipeline and ELT tool that relies extensively on open-source data connectors
  3. Informatica – A legacy ETL tool that was originally designed to work with on-premises systems

Notable data warehouses:

  1. Google BigQuery – A true serverless data warehouse that activates (and deactivates) additional computation and storage resources on the fly
  2. Snowflake – A quasi-serverless data warehouse that scales easily, though with some manual configuration
  3. Amazon Redshift – A widely used cloud data warehouse offered through Amazon Web Services
  4. Microsoft Azure SQL Data Warehouse – Microsoft’s cloud data warehouse offering

3. Processing the data

Data from the aforementioned sources is not always provided in a readily usable format. The data must be transformed to comply with data models that organize the data in a way suitable for reporting, dashboards or machine learning. Transformations include data cleaning, summarizing and pivoting tables, as well as joining records from multiple sources together.
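
For example, the pandas sketch below cleans, joins, summarizes and pivots two small tables. The tables and their values are invented for illustration, but the operations are the ones described above.

    import pandas as pd

    # Illustrative source tables (names and values are hypothetical)
    orders = pd.DataFrame({
        "order_id": [1, 2, 3, 4],
        "customer_id": [10, 10, 11, None],   # one record is missing a customer
        "region": ["west", "west", "east", "east"],
        "total": [20.0, 35.0, 15.0, 50.0],
    })
    customers = pd.DataFrame({
        "customer_id": [10, 11],
        "segment": ["retail", "wholesale"],
    })

    # Clean: drop records with missing keys and restore integer keys
    orders = orders.dropna(subset=["customer_id"])
    orders["customer_id"] = orders["customer_id"].astype(int)

    # Join: attach customer attributes to each order
    joined = orders.merge(customers, on="customer_id", how="left")

    # Summarize and pivot: total revenue by region and segment
    summary = joined.pivot_table(
        index="region", columns="segment", values="total", aggfunc="sum"
    )
    print(summary)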

Stages 2 and 3 are collectively known as "data integration" and are commonly referred to by the acronyms ELT (extract-load-transform) and ETL (extract-transform-load). Traditionally, organizations used ETL because transforming data before loading it lessened the computational and storage load on an on-premises data warehouse. The chief disadvantage of ETL is brittleness: both downstream changes to business needs and upstream changes to the structure of the source data can force a revision of the data pipeline.

Because the transformation stage sits between extraction and loading, automating the ETL process can also require careful, rule-based coordination of the different pieces of transformation software, a practice called orchestration.
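
The sketch below illustrates the idea in plain Python rather than any particular tool: each transformation step declares its dependencies, and a coordinator runs the steps in a valid order. The step names are hypothetical.

    # A toy orchestrator: run each step only after its dependencies finish
    steps = {
        "clean_orders": [],
        "clean_customers": [],
        "join_orders_customers": ["clean_orders", "clean_customers"],
        "build_revenue_summary": ["join_orders_customers"],
    }

    def run(name):
        print(f"running {name}")  # a real step would transform data here

    completed = set()
    while len(completed) < len(steps):
        for name, deps in steps.items():
            if name not in completed and all(d in completed for d in deps):
                run(name)
                completed.add(name)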

The rise of cloud technology has made the labor-intensive ETL approach obsolete. By replicating data straight from the source and allowing transformations to be performed at the discretion of analysts, ELT does away with the brittleness of ETL. This approach leverages the cloud, outsourcing and automation to save labor, time and money.

An important concern that arises alongside centralizing and processing data is data governance. Organizations must use data in a manner that is operationally efficient and complies with legal transparency and privacy regulations. This means that organizations must dictate and document (data cataloging) what data is integrated and how, govern how that data is used, ensure that the data is accurate and consistent, and make it accessible only to the appropriate parties.

Concepts:

  1. Data model – An abstract representation of real-world entities and their relations, meant primarily to inform business intelligence. A semantic layer translates elements in a data model into a human-readable lexicon.
  2. Transformation – The process of altering data so that it complies with the requirements of a data model
  3. Data cleaning – Removing errors, inconsistencies and irrelevant records from data
  4. ELT – Extract-load-transform, a data pipeline doctrine involving extraction, loading and transformation, in that order. Enabled by the low cost of computation, storage and internet bandwidth.
  5. ETL – Extract-transform-load, the traditional data pipeline doctrine involving extraction, transformation and loading. Formerly necessary to preserve scarce computation, storage and bandwidth resources.
  6. Orchestration – The process of scheduling and coordinating multiple pieces of software in order to perform transformations
  7. Data integration – A general term for the process of extracting, centralizing and processing data. Used interchangeably with data acquisition and data ingestion.
  8. Data governance – Processes related to ensuring the proper usage, integrity and security of data
  9. Data cataloging – Documentation about the meaning, relationships and origin of data. Sometimes used interchangeably with data dictionary, which refers to database-specific documentation.

Notable tools with transformation, orchestration and governance features:

  1. Matillion – A cloud-based ETL tool featuring GUI-based orchestration
  2. Airflow – An open-source orchestration tool originally developed by Airbnb. Best known for its use of directed acyclic graphs (DAGs) to define workflows.
  3. Luigi – A Python-based open-source orchestration tool originally developed by Spotify
  4. Alteryx – A tool that supports orchestration, transformation, governance and analytics

4. Using the data

Ultimately, the data furnished by a data stack is meant to guide decisions made at every level of an organization and to power artificially intelligent products. The use of data to support decisions within an organization is known as analytics or business intelligence. A relatively technical and difficult approach to conducting analytics is to use a language like R or Python to build visualizations, dashboards, and tables of summary statistics. The advantage of this approach is that a highly code-savvy analyst or data scientist can build entire custom websites and applications from scratch or even prototype applications of machine learning.
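
As a small taste of the code-based approach, the sketch below computes summary statistics and a basic chart with pandas and matplotlib; the monthly figures are invented for illustration.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented monthly revenue figures for illustration
    revenue = pd.DataFrame({
        "month": ["2019-04", "2019-05", "2019-06"],
        "revenue": [120000, 135000, 128000],
    })

    # Summary statistics for a report
    print(revenue["revenue"].describe())

    # A simple visualization suitable for a dashboard
    revenue.plot(x="month", y="revenue", kind="bar", legend=False,
                 title="Monthly revenue")
    plt.tight_layout()
    plt.savefig("monthly_revenue.png")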

The disadvantages of the technical approach are that it is extremely labor-intensive, often does not integrate easily with the rest of the data stack, and requires skills that few people have. A more accessible approach is to use a business intelligence platform. These tools typically integrate directly with data warehouses, do not require coding competency beyond SQL, and feature a large selection of visualization and dashboard templates.

The low-level, technical approach remains necessary to achieve the pinnacle of data science: machine learning. Well-known practical examples of machine learning range from targeted ads and self-driving vehicles to IBM Watson. Early signs indicate that machine learning will become more accessible in the future. Google BigQuery features machine learning conducted entirely using SQL and integrates with the BI tool Looker.
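
As a hedged illustration, the sketch below trains a linear regression model with BigQuery’s SQL interface, submitted through the google-cloud-bigquery Python client. The project, dataset, table and column names are hypothetical, and the statement assumes the BigQuery ML syntax documented at the time of writing.

    from google.cloud import bigquery

    client = bigquery.Client()  # assumes credentials are already configured

    # Hypothetical dataset and table; label and feature columns are illustrative
    sql = """
    CREATE OR REPLACE MODEL `my_project.analytics.revenue_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['monthly_revenue']) AS
    SELECT monthly_revenue, ad_spend, active_customers
    FROM `my_project.analytics.monthly_metrics`
    """

    client.query(sql).result()  # wait for the training job to finish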

Concepts:

  1. Analytics – A general term that encompasses the use of data to guide decisions. Often used interchangeably in corporate settings with business intelligence.
  2. R – A programming language designed for statistical computing. It is a high-level scripting language.
  3. Python – A popular general-purpose programming language with extensive libraries for statistical computing, machine learning, web development and other purposes. It is a high-level scripting language, but many of its packages are actually wrappers for lower-level languages and are thus highly performant.
  4. SQL – “Structured Query Language,” the standard language used to import, read, modify and delete data in relational databases. It is commonly used by analysts and non-technical users alike.
  5. Visualization – An image meant to quickly convey numerical information
  6. Dashboard – An organized collection of visualizations
  7. Machine learning – The application of mathematics to pattern recognition and prediction. A simple and common example is linear regression (see the sketch after this list).
  8. Supervised learning – Learning from a training set with known correct answers (labels); classification is a common example.
  9. Unsupervised learning – Learning from data without known answers; clustering is a common example.
  10. Reinforcement learning – Uses feedback from the environment and/or an adversary to optimize the agent’s behavior
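
The sketch referenced in the list above shows both everyday examples with scikit-learn on small invented data: a supervised linear regression and an unsupervised k-means clustering.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    # Supervised: fit a line to labeled examples (invented data)
    X = np.array([[1.0], [2.0], [3.0], [4.0]])   # feature, e.g. ad spend
    y = np.array([2.1, 4.0, 6.2, 7.9])           # label, e.g. revenue
    model = LinearRegression().fit(X, y)
    print("predicted:", model.predict([[5.0]]))

    # Unsupervised: group unlabeled points into clusters
    points = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
    clusters = KMeans(n_clusters=2, random_state=0).fit_predict(points)
    print("cluster labels:", clusters)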

Notable business intelligence platforms:

  1. Looker – Features a proprietary language called LookML as an additional layer of abstraction over SQL
  2. Tableau – Capable of generating stunning visualizations
  3. Mode – Includes Python integration for those inclined to machine learning and predictive modeling
  4. Domo – All-in-one data integration and business intelligence tool

Notable tools for machine learning:

  1. Google BigQuery – Uses SQL for machine learning! As of this writing, it supports linear regression, logistic regression and k-means clustering.
  2. Python packages – Including Jupyter, scikit-learn, TensorFlow, pandas and PyTorch
  3. R packages – Including RODBC, gmodels, class and tm

Notable applications of machine learning:

  1. Recommendation engines – Google Search, Netflix recommendations, Facebook targeted ads
  2. Vision recognition – Self-driving cars, automated photo and video tagging
  3. Speech and natural language recognition – Siri, Alexa, Watson
  4. Prediction – Medical diagnostics, fraud detection, crime prediction

The information age has just begun

The modern preeminence of data makes it essential for organizations to conduct themselves with the guidance of facts and develop products that leverage the wealth of data their activities generate. In order to be competitive and innovative, your organization must have a solid foundation in data engineering and data integration. With the proliferation of managed services, this foundation is within your grasp.

Learn how Fivetran can help with a demo or a free trial.
