Logs
Fivetran generates and logs several types of data related to your account and destinations:
- Structured log events from connections, dashboard user actions, and Fivetran API calls
- Account- and destination-related metadata that includes:
  - Role/membership information
  - Data flow
  - Granular consumption information
You can use this data for the following purposes:
- Monitoring and troubleshooting of connections
- Tracking your usage
- Conducting audits
You can monitor and process this data in your Fivetran account by using either of the following:
- Our free Fivetran Platform Connector. We automatically add a Fivetran Platform connection to every destination you create.
- External log services
Fivetran Platform Connector
Fivetran Platform Connector is a free connector that delivers your logs and account metadata to a schema in your destination. We automatically add a Fivetran Platform connection to every destination you create. The Fivetran Platform Connector is available on all plans. Learn more in our Fivetran Platform Connector documentation.
IMPORTANT: The MAR that Fivetran Platform connections generate is free, though you may incur costs in your destination. Learn more in our pricing documentation.
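Once the Fivetran Platform connection has synced, you can query the delivered log data directly in your destination. The sketch below is a minimal example, not an official query: it assumes a DB-API-style connection object, the default fivetran_log schema name, a log table, and the column names time_stamp, connector_id, event, and message_data, any of which may differ in your destination.

```python
# Minimal sketch: read recent sync_end events from the log table delivered by the
# Fivetran Platform Connector. The schema name "fivetran_log", the table name "log",
# and the column names below are assumptions; adjust them to match your destination.
RECENT_SYNC_ENDS = """
SELECT time_stamp, connector_id, event, message_data
FROM fivetran_log.log
WHERE event = 'sync_end'
ORDER BY time_stamp DESC
LIMIT 100
"""

def recent_sync_end_events(connection):
    # `connection` is any DB-API compatible connection to your destination.
    with connection.cursor() as cursor:
        cursor.execute(RECENT_SYNC_ENDS)
        return cursor.fetchall()
```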
External log services
As an alternative to the Fivetran Platform Connector, you can use a supported external log service.
You can connect one external logging service per destination. Fivetran writes log events for all connections in the destination to the connected service. If there is a logging service you would like us to support but we don't yet, let us know.
IMPORTANT: You must be on an Enterprise or Business Critical plan to use an external logging service.
Log event format
The log events are in a standardized JSON format:
{
"event": <Event name>,
"data": {
// Event-specific data. This section is optional and varies for each relevant event.
},
"created": <Event creation timestamp in UTC>,
"connector_type": <Connector type>,
"connector_id": <Connector ID>,
"connector_name": <Connector name>,
"sync_id": <Sync identifier as UUID>,
"exception_id": <Fivetran error identifier>
}
Field | Description |
---|---|
event | Event name |
data | Optional object that contains event type-specific data. Its contents vary depending on log event type |
created | Event creation timestamp in UTC |
connector_type | Connector type |
connector_id | Connection ID |
connector_name | Connection name. It is either schema name, schema prefix name, or schema table prefix name |
sync_id | Sync identifier as UUID. This optional field is only present in sync-related events generated by Fivetran connections |
exception_id | Optional field that is only present if Fivetran encountered an unexpected problem |
See our Log event details documentation to learn about event-specific data.
NOTE: Some events may not be defined in the Log event details documentation as they are either connector type-specific or don't have the data object.
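For illustration, here is a minimal sketch that splits a raw log event into the common fields described above and the optional, event-specific data object. The sample payload at the end is made up and only shows the shape of a sync_end event.

```python
import json

# Minimal sketch: separate the common log event fields from the optional "data" object.
def parse_log_event(raw: str) -> dict:
    event = json.loads(raw)
    return {
        "event": event["event"],
        "created": event["created"],
        "connector_type": event.get("connector_type"),
        "connector_id": event.get("connector_id"),
        "connector_name": event.get("connector_name"),
        "sync_id": event.get("sync_id"),            # only present in sync-related events
        "exception_id": event.get("exception_id"),  # only present on unexpected problems
        "data": event.get("data", {}),              # optional, varies by event type
    }

# Illustrative payload (values are made up):
sample = '{"event": "sync_end", "created": "2024-01-01T00:00:00Z", "connector_id": "my_connection", "data": {"status": "SUCCESSFUL"}}'
print(parse_log_event(sample)["data"]["status"])  # SUCCESSFUL
```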
Log event list
Connector log events
Fivetran connectors generate the following events:
Connector Event Name | Description |
---|---|
alter_table | Table columns added to, modified in or dropped from destination table |
change_schema_config_via_sync | Schema configuration updated during a sync |
connection_failure | Failed to establish connection with source system |
connection_successful | Successfully established connection with source system |
copy_rows | Data copied to staging table |
create_schema | Schema created in destination |
create_table | Table created in destination |
delete_rows | Stale rows deleted from main table |
diagnostic_access_ended | Data access was removed because the related Zendesk ticket was resolved or deleted |
diagnostic_access_expired | Data access expired |
diagnostic_access_granted | Data accessed by Fivetran support for diagnostic purposes |
drop_table | Table dropped from destination |
error | Error during data sync |
extract_summary | Summary of data extracted and API call count |
forced_resync_connector | Forced re-sync of a connector |
forced_resync_table | Forced re-sync of a table |
import_progress | Rough estimate of import progress |
info | Information during data sync |
insert_rows | Updated rows inserted in main table |
json_value_too_long | A JSON value was too long for your destination and had to be truncated |
processed_records | Number of records read from source system |
read_end | Data reading ended |
read_start | Data reading started |
records_modified | Number of records upserted, updated, or deleted in table within single operation during a sync |
schema_migration_end | Schema migration ended |
schema_migration_start | Schema migration started |
sql_query | SQL query executed on a source database |
sync_end | Data sync completed |
sync_start | Connection started syncing data |
sync_stats | Current sync metadata. This event is only generated for a successful sync of supported connector types |
update_rows | Existing rows in main table updated with new values |
update_state | Connector-specific data you provided |
warning | Warning during data sync |
write_to_table_end | Finished writing records to destination table |
write_to_table_start | Started writing records to destination table |
Dashboard activity log events
Dashboard activities generate the following events:
Dashboard Activity Event Name | Description |
---|---|
change_schema_config | Schema configuration updated |
connect_logger | Logging service is connected |
create_connector | New connection is created |
create_warehouse | New destination is created |
delete_connector | Connection is deleted |
delete_warehouse | Destination is deleted |
diagnostic_access_approved | Data access granted for 21 days |
diagnostic_access_denied | Data access denied |
diagnostic_access_revoked | Data access revoked |
disconnect_logger | Logging service is disconnected |
edit_connector | Connection's credentials, sync period, or delay notification period is edited |
force_update_connector | Trigger manual update for connection |
pause_connector | Connection is paused |
pause_logging | Logging service is paused |
resume_connector | Connection is resumed |
resume_logging | Logging service is resumed |
resync_connector | Connection's re-sync is triggered |
resync_table | Connection's table re-sync is triggered |
test_connector_connection | Connection test(s) run |
update_logger | Logging service credential is updated |
update_warehouse | Destination configuration is updated |
REST API call log events
REST API calls generate the following events:
API Call Event Name | Description |
---|---|
change_schema_config_via_api | Schema configuration updated using API call |
Transformation log events
Transformations for dbt Core generate the following events:
Transformations for dbt Core Event Name | Description |
---|---|
dbt_run_start | The dbt transformation started. |
dbt_run_succeeded | The dbt transformation was successfully finished. |
dbt_run_failed | The dbt transformation failed. |
New user-defined dbt jobs, new Quickstart Data Models, and External Orchestration jobs (dbt Cloud and Coalesce) generate the following events:
Transformations Event Name | Description |
---|---|
transformation_start | The transformation started. |
transformation_succeeded | The transformation was successfully finished. |
transformation_failed | The transformation failed. |
Connection stages and related log events
This section describes the life cycle of a connection. It lists the log events generated at each stage of the connection's life cycle and by connection-related dashboard activities. This will help you recognize the events in the logs and understand how their ordering relates to the operations of the connection.
The connection-related log events are included in the logs captured by the Fivetran Platform Connector and external logging services.
NOTE: The connection life cycle stages are listed in chronological order where possible.
1. Connection creation initialized
When you create a connection, Fivetran writes its ID in Fivetran's database and assigns the connection status “New.”
NOTE: Fivetran writes encrypted credentials in its database.
If you create a connection in your Fivetran dashboard, you need to specify the required properties and authorization credentials.
If you create a connection using the Create a Connector API endpoint, you need to specify the required properties. However, you can omit the authorization credentials and authorize the connection later using the Connect Card.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
create_connector | New connection is created |
2. Connection setup tests run
During these tests, Fivetran verifies that credentials such as authentication details, paths, and IDs are correct and valid, and resources are available and accessible.
When you click Save & Test while creating a connection in your Fivetran dashboard, Fivetran runs the setup tests. Also, you can run the setup tests by using the Run Connector Setup Tests endpoint. If the setup tests have succeeded, it means the connection has been successfully created.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
test_connector_connection | Connection test(s) run |
sql_query | SQL query executed on a source database |
3. Connection successfully created
After the setup tests have succeeded, Fivetran records the Connection Created user action in the User Actions Log. At this stage, the connection is paused. It does not extract, process, or load any data from the source to the destination.
After the connection has been successfully created, you can trigger the historical sync.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
connection_successful | Successfully established connection with source system |
connection_failure | Failed to establish connection with source system |
4. Connection schema changed
Connection schema changes include changes to the connection's schemas, tables, and table columns.
You change your connection's schema in the following cases:
- You want to switch the sync mode
- You want to change what data your connection syncs, which includes using the data blocking and column hashing features.
- You need to fix a broken connection.
- You need to change a connection schema as part of the schema review. For certain connector types, a schema review is required after you create a connection and before you run the historical sync.
You can change your schema in the following ways:
- In your Fivetran dashboard
- With the Fivetran REST API, using the relevant schema configuration endpoints
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
change_schema_config | Schema configuration updated |
NOTE: For an unpaused connection, changing the schema will trigger a sync to run. If a sync is already running, Fivetran will cancel the running sync and immediately initiate a new one with the new schema configuration.
5. Sync triggered
You need to run the historical sync for the connection to start working as intended. The first historical sync that Fivetran does for a connection is called the initial sync. During the historical sync, we extract and process all the historical data from the selected tables in the source. Periodically, we will load data into the destination.
After a successful historical sync, the connection runs in incremental sync mode. In this mode, whenever possible, only data that has been modified or added (incremental changes) is extracted, processed, and loaded on schedule. We re-import tables where it is not possible to fetch only incremental changes. We use cursors to record the history of the syncs.
The connection sync frequency that you set in your Fivetran dashboard or by using the Modify a Connector endpoint defines how often the incremental sync is run.
IMPORTANT: Incremental sync runs on schedule at a set sync frequency only when the connection's sync scheduling type is set to auto in our REST API. Setting the scheduling type to manual effectively disables the schedule. In this case, you can trigger a manual sync to sync the connection.
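As an illustration, the sketch below calls the Modify a Connector endpoint to set a connection's sync frequency and scheduling type. It is a minimal example that assumes API-key basic authentication and the sync_frequency and schedule_type request fields; check the REST API reference for the exact request and response shapes.

```python
import requests

API_KEY = "your_api_key"        # placeholder
API_SECRET = "your_api_secret"  # placeholder
CONNECTOR_ID = "db2ihva_test5"  # example connection ID used elsewhere on this page

# Minimal sketch: PATCH the connection with a 60-minute sync frequency and
# automatic scheduling. Field names are taken from the REST API reference;
# verify them against your API version before relying on this.
response = requests.patch(
    f"https://api.fivetran.com/v1/connectors/{CONNECTOR_ID}",
    auth=(API_KEY, API_SECRET),
    json={
        "sync_frequency": 60,     # minutes between incremental syncs
        "schedule_type": "auto",  # "manual" effectively disables the schedule
    },
    timeout=30,
)
response.raise_for_status()
```

Setting schedule_type to manual in the same request is how you switch the connection to on-demand syncs only.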
NOTE: Some connector types support the priority-first sync mode.
For this stage, Fivetran generates the following log events:
Event name | Description | Step |
---|---|---|
alter_table | Table columns added to, modified in or dropped from destination table | Load |
change_schema_config_via_sync | Schema configuration updated during a sync. Updates are done when a new table was created during the sync and the user selected to automatically include new tables in the schema. | Process |
copy_rows | Data copied to staging table | Load |
create_schema | Schema created in destination | Load |
create_table | Table created in destination | Load |
delete_rows | Stale rows deleted from main table | Load |
drop_table | Table dropped from destination | Load |
insert_rows | Updated rows inserted in main table | Load |
json_value_too_long | A JSON value was too long for your destination and had to be truncated | Process |
read_end | Data reading ended | Extract |
read_start | Data reading started | Extract |
records_modified | Number of records modified during sync | Load |
schema_migration_end | Schema migration ended | Load |
schema_migration_start | Schema migration started | Load |
sql_query | SQL query executed on a source database | Extract |
sync_end | Data sync completed. Valid status field values: SUCCESSFUL, FAILURE, FAILURE_WITH_TASK, and RESCHEDULED | Load
sync_start | Connection started syncing data | Extract |
update_rows | Existing rows in main table updated with new values | Load |
write_to_table_end | Finished writing records to destination table | Load |
6. Connection paused/resumed
When you have just created a connection, it is paused, which means it does not extract, process, or load data. After you successfully run the setup tests, the connection becomes enabled/resumed. After the successful initial sync, it starts working in incremental sync mode.
You can pause and resume the connection either in your Fivetran dashboard, or by using various Connector endpoints in our API.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
pause_connector | The connection was paused |
resume_connector | The connection was resumed |
NOTE: Resuming a connection will trigger either the initial sync, or an incremental sync, depending on the stage the connection was at when it was paused.
7. Re-sync triggered
In some cases you may need to re-run a historical sync to fix a data integrity error. This is called a re-sync. We sync all historical data in the tables and their columns in the source as selected in the connection configuration.
You can trigger a re-sync from your Fivetran dashboard. You can also trigger a re-sync by using the Modify a Connector endpoint.
For connections that support table re-sync, you can trigger it either in the dashboard or by using the Re-sync connector table data endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
resync_connector | Connection's re-sync is triggered |
resync_table | Connection's table re-sync is triggered |
8. Manual sync triggered
You can trigger an incremental sync manually without waiting for the scheduled incremental sync.
You can do this either by clicking Sync Now in your Fivetran dashboard or by using the Sync Connector Data endpoint.
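For example, a manual sync can be triggered programmatically. This is a minimal sketch assuming API-key basic authentication against the Sync Connector Data endpoint; the optional force flag shown here is based on the REST API reference and, when true, cancels a running sync before starting a new one.

```python
import requests

API_KEY = "your_api_key"        # placeholder
API_SECRET = "your_api_secret"  # placeholder
CONNECTOR_ID = "db2ihva_test5"  # example connection ID used elsewhere on this page

# Minimal sketch: trigger an on-demand incremental sync for one connection.
response = requests.post(
    f"https://api.fivetran.com/v1/connectors/{CONNECTOR_ID}/sync",
    auth=(API_KEY, API_SECRET),
    json={"force": False},  # set to True to cancel a running sync and start a new one
    timeout=30,
)
response.raise_for_status()
```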
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
force_update_connector | Trigger manual update for connection |
9. Sync ended
A sync can end in one of the following states:
- Successful - the sync was completed without issue and data in the destination is up to date.
- Failure - the sync failed due to an unknown issue.
- Failure with Error - the sync failed due to a known issue that requires the user to take action to fix it. An Error is generated and displayed on the Alerts page in the dashboard.
- Rescheduled - the sync was unable to complete at this time and will automatically resume syncing when it can complete. This is most commonly caused by hitting API quotas.
- Canceled - the sync was canceled by the user.
When the sync ends, Fivetran generates the following log events:
Event name | Description |
---|---|
sync_end | Final status of sync. |
sync_stats | Current sync metadata. This event is only generated for a successful sync of supported connector types |
IMPORTANT: If you set the sync scheduling for your connection to manual, you need to manually trigger your syncs after you make this change. If a manually triggered sync was rescheduled, you need to manually re-trigger that sync, since sync rescheduling only works with automatic sync scheduling.
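To make these end states concrete, here is a minimal sketch that scans already-parsed log events (in the format described in the Log event format section) and collects the sync_end events whose status indicates a failure. The events argument is any iterable of event dictionaries you have collected, for example from the Fivetran Platform Connector's log table.

```python
# Minimal sketch: surface sync_end events that need attention.
NEEDS_ATTENTION = {"FAILURE", "FAILURE_WITH_TASK"}

def failed_syncs(events):
    for event in events:
        if event.get("event") != "sync_end":
            continue
        data = event.get("data", {})
        if data.get("status") in NEEDS_ATTENTION:
            yield {
                "connector_id": event.get("connector_id"),
                "status": data.get("status"),
                "reason": data.get("reason"),       # present for FAILURE / FAILURE_WITH_TASK
                "task_type": data.get("taskType"),  # present for FAILURE_WITH_TASK / RESCHEDULED
            }
```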
10. Connection broken
IMPORTANT: This is an abnormal state for a connection. It commonly happens due to transient networking or server errors and most often resolves itself with no action on your part.
A connection is considered broken when, during a sync, it fails to extract, process, or load data.
In the Connection List, a broken connection has the red Broken label.
In the Connection dashboard, a broken connection also has the red Broken label.
If we know the breaking issue, we create a corresponding Error and notify you by email with instructions on how to resolve the issue. The Error is displayed on the Alerts page in your Fivetran dashboard. You need to take the actions listed in the Error message to fix the connection. We resend the Error email every seven hours until the Error is resolved.
If we don't know the breaking issue, we generate an Unknown Error in your dashboard after three failed syncs in a row. After a connection is broken with an Unknown Error for 48 hours, Fivetran automatically escalates the issue to our support and engineering teams.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
sync_end | The value of the status field is either FAILURE or FAILURE_WITH_TASK |
11. Connection modified
You can change a connection's credentials, incremental sync frequency, delay notification period, and other connector-specific details. You can modify the connection in your Fivetran dashboard or by using the Modify a Connector endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
edit_connector | Connection's credentials, sync period, or delay notification period is edited |
NOTE: After you have modified and saved your connection, Fivetran automatically runs the setup tests.
12. Connection deleted
When you delete a connection, we delete all of its data from Fivetran's database. You can delete connections both in your Fivetran dashboard and by using the Delete a Connector endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
delete_connector | Connection is deleted. Events of this type can be only seen in the logs generated by the Fivetran Platform Connector and external logging services because the corresponding connection's details page becomes unavailable after the connection has been deleted. |
Log event details
The following sections provide details for log events that have event-specific data.
alter_table
"data" : {
"type": "ADD_COLUMN",
"table": "competitor_profile_pvo",
"properties" : {
"columnName": "object_version_number",
"dataType": "INTEGER",
"byteLength": null,
"precision": null,
"scale": null,
"notNull": null
}
}
fields | description |
---|---|
type | Table change type (e.g., ADD_COLUMN) |
table | Table name |
properties | Column properties |
columnName | Column name |
dataType | Column data type |
byteLength | Column value byte length |
precision | Column value precision |
scale | Column value scale |
notNull | Whether the column disallows NULL values |
fields | description |
---|---|
method | API method |
uri | Endpoint URI |
body | Request body. Optional |
change_schema_config
"data": {
"actor": "john.doe@company.com",
"connectorId": "sql_server_test",
"properties": {
"ENABLED_COLUMNS": [
{
"schema": "testSchema",
"table": "testTable2",
"columns": [
"ID"
]
}
],
"ENABLED_TABLES": [
{
"schema": "testSchema",
"tables": [
"testTable2"
]
}
]
}
}
fields | description |
---|---|
actor | Actor's account login email |
connectorId | Connection ID |
properties | Contains schema change type and the relevant entities |
DISABLED | Array of names of schemas disabled to sync |
ENABLED | Array of names of schemas enabled to sync |
DISABLED_TABLES | Array of names of tables disabled to sync |
ENABLED_TABLES | Array of names of tables enabled to sync |
DISABLED_COLUMNS | Array of names of columns disabled to sync |
ENABLED_COLUMNS | Array of names of columns enabled to sync |
HASHED_COLUMNS | Array of names of hashed columns |
UNHASHED_COLUMNS | Array of names of unhashed columns |
SYNC_MODE_CHANGE_FOR_TABLES | Array of schemas containing tables and columns with updated sync mode |
tablesSyncModes | Array of tables with updated sync mode within schema |
schema | Schema name |
tables | Array of table names |
table | Table name |
columns | Array of column names |
syncMode | Updated sync mode. Possible values: Legacy, History |
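Because the properties object nests entities by schema, post-processing these events usually means walking those arrays. The following is a minimal sketch that lists the tables a change_schema_config event enabled for sync, based on the ENABLED_TABLES structure in the sample payload above; the data argument is the event's data object.

```python
# Minimal sketch: collect "schema.table" names from the ENABLED_TABLES entries.
def enabled_tables(data: dict) -> list:
    tables = []
    for entry in data.get("properties", {}).get("ENABLED_TABLES", []):
        schema = entry.get("schema", "")
        for table in entry.get("tables", []):
            tables.append(f"{schema}.{table}")
    return tables

# With the sample payload above, this returns ["testSchema.testTable2"].
```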
change_schema_config_via_api
"data": {
"actor": "john.doe@company.com",
"connectorId": "purported_substituting",
"properties": {
"DISABLED_TABLES": [
{
"schema": "shopify_pbf_schema28",
"tables": [
"price_rule"
]
}
],
"ENABLED_TABLES": [
{
"schema": "shopify_pbf_schema28",
"tables": [
"order_rule"
]
}
],
"includeNewByDefault": false,
"ENABLED": [
"shopify_pbf_schema28"
],
"DISABLED": [
"shopify_pbf_schema30"
],
"includeNewColumnsByDefault": false
}
}
fields | description |
---|---|
actor | Actor's account login email |
connectorId | Connection ID |
properties | Contains schema change types and relevant entities |
DISABLED | Array of names of schemas disabled to sync |
ENABLED | Array of names of schemas enabled to sync |
DISABLED_TABLES | Array of names of tables disabled to sync |
ENABLED_TABLES | Array of names of tables enabled to sync |
schema | Schema name |
tables | Array of table names |
columns | Array of column names |
includeNewByDefault | If set to true , all new schemas, tables, and columns are enabled to sync. |
includeNewColumnsByDefault | If set to true , only new columns are enabled to sync. |
change_schema_config_via_sync
"data": {
"connectorId": "documentdb",
"properties": {
"ADDITION": [
{
"schema": "docdb_1",
"tables": [
"STRING_table"
]
}
],
"REMOVAL": [
{
"schema": "docdb_2",
"tables": [
"STRING_table"
]
}
]
}
}
fields | description |
---|---|
connectorId | Connection ID |
properties | Contains schema change types and relevant entities |
ADDITION | Contains schemas and tables enabled to sync |
REMOVAL | Contains schemas and tables disabled to sync |
schema | Schema name |
tables | Array of table names |
connection_failure
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testName" : "Connecting to SSH tunnel",
"message" : "The ssh key might have changed"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
testName | Name of failed test |
message | Message |
connection_successful
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testName" : "DB2i DB accessibility test",
"message" : ""
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
testName | Name of succeeded test |
message | Message |
copy_rows
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_ad_set_attribution_2022_11_03_73te45jo6q34mvjfqfbcwjaea",
"destinationName" : "ad_set_attribution",
"destinationSchema" : "facebook_ads",
"copyType" : "WRITE_TRUNCATE"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
destinationName | Table name in destination. Optional |
destinationSchema | Schema name in destination. Optional |
copyType | Copy type. Optional |
create_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"properties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 11,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1112,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.11",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "VBVJRN",
"logJournalSchema" : "DBETEST1",
"agentToken" : null,
"sshHostFromSbft" : null
}
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
properties | Connector type-specific properties |
create_table
"data" : {
"schema" : "facebooka_ads",
"name" : "account_history",
"columns" : {
"schema" : "STRING",
"update_id" : "STRING",
"_fivetran_synced" : "TIMESTAMP",
"rows_updated_or_inserted" : "INTEGER",
"update_started" : "TIMESTAMP",
"start" : "TIMESTAMP",
"progress" : "TIMESTAMP",
"id" : "STRING",
"message" : "STRING",
"done" : "TIMESTAMP",
"table" : "STRING",
"status" : "STRING"
"primary_key_clause" : ""\"_FIVETRAN_ID\""
}
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
columns | Table columns. Contains table column names and their data types |
primary_key_clause | Column or set of columns forming primary key. Optional |
create_warehouse
"data": {
"actor": "john.doe@company.com",
"id": "Warehouse",
"properties": {
"projectId": "werwe-ert-234567",
"dataSetLocation": "US",
"bucket": null,
"secretKey": "************",
"secretKeyRemoteExecution": "************"
}
}
fields | description |
---|---|
actor | Actor's account login email |
id | Destination ID |
properties | Destination type-specific properties |
dbt_run_failed
"data": {
"dbtJobId": "upriver_avidity",
"dbtJobName": "Every minute run and test project models",
"dbtJobType": "Custom dbt job",
"models": [],
"startTime": "2023-05-26T00:44:02.306Z",
"startupDetails": {
"type": "scheduled"
},
"endTime": "2023-05-26T00:44:16.136Z",
"result": {
"stepResults": [
{
"step": {
"name": "Run project models",
"command": "dbt run --models dbt_demo_project"
},
"success": false,
"startTime": "2023-05-26T00:44:13.258Z",
"endTime": "2023-05-26T00:44:16.054Z",
"commandResult": {
"exitCode": 1,
"output": "Running with dbt=0.20.1\nFound 20 models, 19 tests, 0 snapshots, 0 analyses, 453 macros, 0 operations, 0 seed files, 9 sources, 0 exposures\n\nERROR: Database Error\n connection to server at \"testing.cw43lptekopo.us-east-1.redshift.amazonaws.com\" (34.204.122.158), port 5439 failed: FATAL: password authentication failed for user \"developers\"\n connection to server at \"testing.cw43lptekopo.us-east-1.redshift.amazonaws.com\" (34.204.122.158), port 5439 failed: FATAL: password authentication failed for user \"developers\"\n ",
"error": ""
},
"successfulModelRuns": 0,
"failedModelRuns": 0
}
],
"description": "Steps: successful 0, failed 1"
}
},
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
endTime | Run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
description | Step description |
dbt_run_start
"data": {
"dbtJobId": "skepticism_filled",
"dbtJobName": "RUN_MODELS:wait",
"dbtJobType": "Scheduled: Run",
"models": [
{
"id": "blessed_enjoyer",
"name": "wait"
}
],
"startTime": "2023-05-26T00:27:24.356Z",
"startupDetails": {
"type": "integrated_scheduler",
"jobId": 1111111
}
},
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
jobId | Startup job ID |
dbt_run_succeeded
"data": {
"dbtJobId": "canine_extravagant",
"dbtJobName": "every5minutes1",
"dbtJobType": "Custom dbt job",
"models": [
{
"id": "splashed_obliterated",
"name": "simple_model"
}
],
"startTime": "2023-05-26T00:40:09.790Z",
"startupDetails": {
"type": "scheduled"
},
"endTime": "2023-05-26T00:41:03.350Z",
"result": {
"stepResults": [
{
"step": {
"name": "run models",
"command": "dbt run --models simple_model"
},
"success": true,
"startTime": "2023-05-26T00:40:42.620Z",
"endTime": "2023-05-26T00:41:03.060Z",
"commandResult": {
"exitCode": 0,
"output": "00:40:46 Running with dbt=1.4.5\n00:40:47 Unable to do partial parsing because saved manifest not found. Starting full parse.\n00:40:49 Found 3 models, 4 tests, 0 snapshots, 0 analyses, 571 macros, 0 operations, 0 seed files, 1 source, 0 exposures, 0 metrics\n00:40:49 \n00:40:54 Concurrency: 1 threads (target='prod')\n00:40:54 \n00:40:54 1 of 1 START sql table model google_sheets.simple_model ........................ [RUN]\n00:41:01 1 of 1 OK created sql table model google_sheets.simple_model ................... [OK in 7.13s]\n00:41:02 \n00:41:02 Finished running 1 table model in 0 hours 0 minutes and 12.87 seconds (12.87s).\n00:41:02 \n00:41:02 Completed successfully\n00:41:02 \n00:41:02 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1",
"error": ""
},
"successfulModelRuns": 1,
"failedModelRuns": 0
}
],
"description": "Steps: successful 1, failed 0"
}
},
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
endTime | Run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
description | Step description |
transformation_failed
"data": {
"id": "rivalry_occupier",
"name": "core-v2-cron_0_8",
"startTime": "2024-12-18T16:57:41.671Z",
"transformationType": "DBT_CORE",
"schedule": { "type": "cron", "entries": ["*/3 * * * *"] },
"endTime": "2024-12-18T16:58:43.323Z",
"result": {
"stepResults": [
{
"step": {
"name": "a_step_1_1",
"command": "dbt run --select +modela",
"processBuilderCommand": null
},
"success": false,
"startTime": "2024-12-18T16:57:51.671Z",
"endTime": "2024-12-18T16:58:38.323Z",
"commandResult": {
"exitCode": 1,
"output": "16:57:54 Running with dbt=1.7.3\n16:57:55 Registered adapter: bigquery=1.7.2\n16:57:55 Unable to do partial parsing because saved manifest not found. Starting full parse.\n16:57:56 Found 3 models, 1 source, 0 exposures, 0 metrics, 447 macros, 0 groups, 0 semantic models\n16:57:56 \n16:58:37 Concurrency: 7 threads (target='prod')\n16:58:37 \n16:58:37 1 of 1 START sql view model test_schema.modela .................................. [RUN]\n16:58:37 BigQuery adapter: https://console.cloud.google.com/bigquery\n16:58:37 1 of 1 ERROR creating sql view model test_schema.modela ......................... [ERROR in 0.35s]\n16:58:37 \n16:58:37 Finished running 1 view model in 0 hours 0 minutes and 40.91 seconds (40.91s).\n16:58:37 \n16:58:37 Completed with 1 error and 0 warnings:\n16:58:37 \n16:58:37 Database Error in model modela (models/modela.sql)\n Quota exceeded: Your table exceeded quota for imports or query appends per table. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas\n compiled Code at target/run/test_schema/models/modela.sql\n16:58:37 \n16:58:37 Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1",
"error": ""
},
"error": null,
"successfulModelRuns": 0,
"failedModelRuns": 1,
"modelResults": [
{
"name": "test_schema.modela",
"errorCategory": "UNCATEGORIZED",
"errorData": null,
"succeeded": false
}
]
}
],
"error": null,
"description": "Steps: successful 0, failed 1"
}
},
fields | description |
---|---|
id | Job ID |
name | Job name |
startTime | Job run start time |
transformationType | Job type (DBT_CORE, QUICKSTART, DBT_CLOUD or COALESCE) |
schedule | Job schedule |
endTime | Job run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command for DBT_CORE |
processBuilderCommand | Step command for QUICKSTART |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
error | An exception message that occurred during step execution |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
modelResults | Model run results |
name | Model name |
errorCategory | Model error category |
errorData | Model error data |
succeeded | Boolean specifying whether model run was successful |
error | Error message if the job failed outside of step execution |
description | Step description |
transformation_start
"data": {
"id": "rivalry_occupier",
"name": "core-v2-cron_0_8",
"startTime": "2024-12-19T02:28:40.122Z",
"transformationType": "DBT_CORE",
"schedule": { "type": "cron", "entries": ["*/3 * * * *"] }
},
fields | description |
---|---|
id | Job ID |
name | Job name |
startTime | Job run start time |
transformationType | Job type (DBT_CORE, QUICKSTART, DBT_CLOUD or COALESCE) |
schedule | Job schedule |
transformation_succeeded
"data": {
"id": "rivalry_occupier",
"name": "core-v2-cron_0_8",
"startTime": "2024-12-18T16:42:38.557Z",
"transformationType": "DBT_CORE",
"schedule": { "type": "cron", "entries": ["*/3 * * * *"] },
"endTime": "2024-12-18T16:43:41.933Z",
"result": {
"stepResults": [
{
"step": {
"name": "a_step_1_1",
"command": "dbt run --select +modela",
"processBuilderCommand": null
},
"success": true,
"startTime": "2024-12-18T16:42:48.557Z",
"endTime": "2024-12-18T16:43:36.933Z",
"commandResult": {
"exitCode": 0,
"output": "16:42:51 Running with dbt=1.7.3\n16:42:52 Registered adapter: bigquery=1.7.2\n16:42:52 Unable to do partial parsing because saved manifest not found. Starting full parse.\n16:42:53 Found 3 models, 1 source, 0 exposures, 0 metrics, 447 macros, 0 groups, 0 semantic models\n16:42:53 \n16:43:35 Concurrency: 7 threads (target='prod')\n16:43:35 \n16:43:35 1 of 1 START sql view model test_schema.modela .................................. [RUN]\n16:43:36 1 of 1 OK created sql view model test_schema.modela ............................. [CREATE VIEW (0 processed) in 0.83s]\n16:43:36 \n16:43:36 Finished running 1 view model in 0 hours 0 minutes and 42.27 seconds (42.27s).\n16:43:36 \n16:43:36 Completed successfully\n16:43:36 \n16:43:36 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1",
"error": ""
},
"error": null,
"successfulModelRuns": 1,
"failedModelRuns": 0,
"modelResults": [
{
"name": "test_schema.modela",
"errorCategory": null,
"errorData": null,
"succeeded": true
}
]
}
],
"error": null,
"description": "Steps: successful 1, failed 0"
}
},
fields | description |
---|---|
id | Job ID |
name | Job name |
startTime | Job run start time |
transformationType | Job type (DBT_CORE, QUICKSTART, DBT_CLOUD or COALESCE) |
schedule | Job schedule |
endTime | Job run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command for DBT_CORE |
processBuilderCommand | Step command for QUICKSTART |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
error | An exception message that occurred during step execution |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
modelResults | Model run results |
name | Model name |
errorCategory | Model error category |
errorData | Model error data |
succeeded | Boolean specifying whether model run was successful |
error | Error message if the job failed outside of step execution |
description | Step description |
delete_connector
"data": {
"actor": "john.doe@company.com",
"id": "hva_main_metrics_test_qe_benchmark"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
delete_rows
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_ad_set_attribution_2022_11_03_73er44io6q0m1dfgjfbcghjea",
"deleteCondition" : "`#existing`.`ad_set_id` = `#scratch`.`ad_set_id` AND `#existing`.`ad_set_updated_time` = `#scratch`.`ad_set_updated_time`"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
deleteCondition | Delete condition |
delete_warehouse
"data": {
"actor": "john.doe@company.com",
"id": "Warehouse"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Destination ID |
diagnostic_access_approved
"data": {
"message": "Data access granted for 21 days.",
"ticketId": "123456",
"destinationName": "destination",
"connectorName": "facebook_ads",
"actor": "actor"
}
fields | description |
---|---|
message | Diagnostic data access message |
ticketId | Zendesk support ticket number |
destinationName | Destination name |
connectorName | Connection name |
actor | Requester name as specified in Zendesk |
diagnostic_access_granted
"data": {
"message": "Data accessed by Fivetran support for diagnostic purposes",
"connectorName": "connector",
"destinationName": "destination",
"requester": "requester-name",
"supportTicket": "1234"
}
fields | description |
---|---|
message | Diagnostic data access message |
connectorName | Connection name |
destinationName | Destination name |
requester | Requester name as specified in Zendesk |
supportTicket | Zendesk support ticket number |
drop_table
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_company_audit_2022_11_03_puqdfgi35r6e1odfgy36rdfgv"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
reason | Reason why table was dropped. Optional |
edit_connector
"data" : {
"actor" : "john.doe@company.com",
"editType" : "CREDENTIALS",
"id" : "db2ihva_test5",
"properties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 11,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1111,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.12",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "SDSJRN",
"logJournalSchema" : "SFGTEST1",
"agentToken" : null,
"sshHostFromSbft" : null
},
"oldProperties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 12,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1111,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.12",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "SDSJRN",
"logJournalSchema" : "SFGTEST1",
"agentToken" : null,
"sshHostFromSbft" : null
}
}
fields | description |
---|---|
actor | Actor's account login email |
editType | Edit type (e.g., CREDENTIALS) |
id | Connection ID |
properties | Connector type-specific properties |
oldProperties | Connector type-specific properties before the change |
extract_summary
NOTE: The objects field of the extract_summary log event is not available for all connectors. If you need a specific connector to support this log event, submit a feature request.
"data" : {
"status": "SUCCESS",
"total_queries" : 984,
"total_rows": 3746,
"total_size": 8098757,
"rounded_total_size": "7 MB",
"objects": [
{
"name": "https://aggregated_endpoint_a",
"queries": 562
},
{
"name": "https://aggregated_endpoint_b",
"queries": 78
},
{
"name": "https://aggregated_endpoint_c",
"queries": 344
}
]
}
fields | description |
---|---|
status | The overall status of the query operation. Possible values: SUCCESS , FAIL |
total_queries | Total count of API calls |
total_rows | Total number of rows extracted |
total_size | Total size of the data extracted in bytes |
rounded_total_size | A human-readable format of the total size. The value is rounded down to the nearest unit. For example, if the size is 3430KB, the number is rounded to display 3MB. The same logic applies to round to KB or GB, whatever the nearest unit is |
objects | An array of objects. Each object contains an aggregation template designed to consolidate similar API calls into a single count |
name | The URL or identifier of the endpoint |
queries | The number of API calls to a specific endpoint |
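The rounding described for rounded_total_size can be reproduced with a short sketch. The 1024-based units are an assumption that matches the sample payload, where 8098757 bytes is reported as 7 MB.

```python
# Minimal sketch: round a byte count down to the nearest unit (assumes 1024-based units).
def rounded_size(total_size_bytes: int) -> str:
    units = ["B", "KB", "MB", "GB", "TB"]
    size = total_size_bytes
    unit = units[0]
    for next_unit in units[1:]:
        if size < 1024:
            break
        size //= 1024
        unit = next_unit
    return f"{size} {unit}"

print(rounded_size(8098757))  # "7 MB", matching the sample payload above
```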
force_update_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
forced_resync_connector
"data": {
"reason": "Credit Card Payment resync",
"cause": "MIGRATION"
}
fields | description |
---|---|
reason | Re-sync reason |
cause | Re-sync cause |
forced_resync_table
"data" : {
"schema" : "hubspot_test",
"table" : "ticket_property_history",
"reason" : "Ticket's cursor is older than a day, triggering re-sync for TICKET and its child tables.",
"cause" : "STRATEGY"
}
fields | description |
---|---|
schema | Schema name |
table | Table name |
reason | Re-sync reason |
cause | Re-sync cause |
import_progress
"data": {
"tableProgress": {
"dbo.orders": "NOT_STARTED",
"dbo.history": "NOT_STARTED",
"dbo.district": "COMPLETE",
"dbo.new_order": "NOT_STARTED"
}
}
fields | description |
---|---|
tableProgress | Table progress as a list of tables with their import status (e.g., NOT_STARTED, COMPLETE) |
info
"data" : {
"type" : "extraction_start",
"message" : "{currentTable: DBIHVA.JOHNDOE_TEST_DATE, imported: 2, selected: 3}"
}
fields | description |
---|---|
type | Information message type |
message | Information message |
insert_rows
"data": {
"schema": "quickbooks",
"name": "journal_entry_line"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
pause_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
"data" : {
"actor" : "Fivetran",
"id" : "db2ihva_test5",
"reason" : "Connector is paused because trial period has ended"
}
fields | description |
---|---|
actor | Actor's account login email or Fivetran if connector has been paused automatically |
id | Connection ID |
reason | If a connection has been paused automatically, this field contains a short description of why this happened |
processed_records
"data": {
"table": "ITEM_PRICES",
"recordsCount": 24
}
fields | description |
---|---|
table | Table name |
recordsCount | Number of processed records |
read_end
"data": {
"source":"be822a.csv.gz"
}
fields | description |
---|---|
source | The connector-specific source of read data |
read_start
"data": {
"source":"incremental update"
}
fields | description |
---|---|
source | The connector-specific source of read data |
records_modified
"data" : {
"schema" : "facebook_ads",
"table" : "company_audit",
"operationType" : "REPLACED_OR_INSERTED",
"count" : 12
}
fields | description |
---|---|
schema | Schema name |
table | Table name |
operationType | Operation type |
count | Number of operations |
resume_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
resync_connector
"data": {
"actor": "john.doe@company.com",
"id": "bench_10g"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
resync_table
"data": {
"actor": "john.doe@company.com",
"id": "ash_hopper_staging",
"schema": "public",
"table": "big"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
schema | Schema name |
table | Table name |
schema_migration_end
"data" : {
"migrationStatus" : "SUCCESS"
}
fields | description |
---|---|
migrationStatus | Migration status (e.g., SUCCESS) |
sql_query
"data" : {
"query" : "SELECT OBJECT_SCHEMA_NAME(sc.object_id) as TABLE_SCHEMA, OBJECT_NAME(sc.object_id) as TABLE_NAME, sc.name as COLUMN_NAME, sc.column_id, ISNULL(TYPE_NAME(sc.system_type_id), t.name) as DATA_TYPE, COLUMNPROPERTY(sc.object_id, sc.name, 'ordinal') as ORDINAL_POSITION, CONVERT(nvarchar(4000), OBJECT_DEFINITION(sc.default_object_id)) as COLUMN_DEFAULT, ISNULL(TYPE_NAME(sc.system_type_id), t.name) as IS_NULLABLE, COLUMNPROPERTY(sc.object_id, sc.name, 'octetmaxlen') as CHARACTER_OCTET_LENGTH, convert(tinyint, CASE WHEN sc.system_type_id IN (48, 52, 56, 59, 60, 62, 106, 108, 122, 127) THEN sc.precision END) as NUMERIC_PRECISION, convert(int, CASE WHEN sc.system_type_id IN (40, 41, 42, 43, 58, 61) THEN NULL ELSE ODBCSCALE(sc.system_type_id, sc.scale) END) as NUMERIC_SCALE FROM sys.columns AS sc LEFT JOIN sys.types t ON sc.user_type_id = t.user_type_id LEFT JOIN sys.tables as tbs ON sc.object_id = tbs.object_id WHERE tbs.is_ms_shipped = 0",
"number" : 5,
"executionTime" : 44
}
fields | description |
---|---|
query | SQL query |
number | Number of SQL queries run against the source |
executionTime | Execution time in seconds |
sync_end
"data" : {
"status" : "SUCCESSFUL"
}
fields | description |
---|---|
status | Sync status. Valid values: "SUCCESSFUL", "RESCHEDULED", "FAILURE", "FAILURE_WITH_TASK" |
reason | If status is FAILURE , this is the description of the reason why the sync failed. If status is FAILURE_WITH_TASK , this is the description of the Error. If status is RESCHEDULED , this is the description of the reason why the sync is rescheduled. |
taskType | If status is FAILURE_WITH_TASK or RESCHEDULED , this field displays the type of the Error that caused the failure or rescheduling, respectively, e.g., reconnect , update_service_account , etc. |
rescheduledAt | If status is RESCHEDULED , this field displays the scheduled time to resume the sync. The scheduled time depends on the reason it was rescheduled for |
sync_stats
NOTE: The sync_stats event is only generated for a successful sync of supported connector types.
"data" : {
"extract_time_s" : 63,
"extract_volume_mb" : 0,
"process_time_s" : 21,
"process_volume_mb" : 0,
"load_time_s" : 34,
"load_volume_mb" : 0,
"total_time_s" : 129
}
fields | description |
---|---|
extract_time_s | Extract time in seconds |
extract_volume_mb | Extracted data volume in MB |
process_time_s | Process time in seconds |
process_volume_mb | Processed data volume in MB |
load_time_s | Load time in seconds |
load_volume_mb | Loaded data volume in MB |
total_time_s | Total time in seconds |
test_connector_connection
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testCount" : 6
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connection ID |
testCount | Number of tests |
update_rows
"data" : {
"schema" : "hubspot_johndoe",
"name" : "company"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
update_state
"data": {
"state": 681
}
fields | description |
---|---|
state | Connector-specific data you provide to us as JSON. Supports nested objects |
update_warehouse
"data": {
"actor": "john.doe@company.com",
"id": "redshift_tst_1",
"properties": {
"region": "us-east-2"
},
"oldProperties": {
"region": "us-east-1"
}
}
fields | description |
---|---|
actor | Actor's account login email |
id | Destination ID |
properties | Destination type-specific properties |
oldProperties | Destination type-specific properties before the change |
warning
Example 1
"data" : {
"type" : "skip_table",
"table" : "api_access_requests",
"reason" : "No changed data in named range"
}
Example 2
"data" : {
"type" : "retry_api_call",
"message" : "Retrying after 60 seconds. Error : ErrorResponse{msg='Exceeded rate limit for endpoint: /api/export/data.csv, project: 11111 ', code='RateLimitExceeded', params='{}'}"
}
fields | description |
---|---|
type | Warning type |
table | Table name |
reason | Warning reason |
message | Warning message |
write_to_table_start
"data" : {
"table" : "company_audit"
}
fields | description |
---|---|
table | Table name |
write_to_table_end
"data" : {
"table" : "company_audit"
}
fields | description |
---|---|
table | Table name |