Logs
Fivetran generates and logs several types of data related to your account and destinations:
- Structured log events from connectors, dashboard user actions, and Fivetran API calls
- Account- and destination-related metadata that includes:
  - Role/membership information
  - Data flow
  - Granular consumption information
You can use this data for the following purposes:
- Monitoring and troubleshooting of connectors
- Tracking your usage
- Conducting audits
You can monitor and process this data in your Fivetran account by using either of the following:
- Our free Fivetran Platform Connector, which we automatically add to every destination you create.
- External log services
Fivetran Platform Connector
The Fivetran Platform Connector is a free connector that delivers your logs and account metadata to a schema in your destination. We automatically add it to every destination you create. It is available on all plans. Learn more in our Fivetran Platform Connector documentation.
IMPORTANT: The MAR that the Fivetran Platform Connector generates is free, though you may incur costs in your destination. Learn more in our pricing documentation.
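Once the Fivetran Platform Connector lands log events in your destination, you can monitor them with plain SQL. A minimal sketch, using an in-memory SQLite table as a stand-in for the destination; the table and column names below are illustrative assumptions, not the connector's actual schema (see the Fivetran Platform Connector documentation for that):

```python
# Illustrative sketch: querying Platform Connector log events with SQL.
# An in-memory SQLite table stands in for the destination; the table and
# column names are assumptions, not the connector's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fivetran_log (event TEXT, connector_id TEXT, created TEXT)"
)
conn.executemany(
    "INSERT INTO fivetran_log VALUES (?, ?, ?)",
    [
        ("sync_start", "crm", "2024-01-01T00:00:00Z"),
        ("error", "crm", "2024-01-01T00:05:00Z"),
        ("sync_end", "crm", "2024-01-01T00:06:00Z"),
    ],
)
# Count error events per connector, a typical monitoring query.
rows = conn.execute(
    "SELECT connector_id, COUNT(*) FROM fivetran_log "
    "WHERE event = 'error' GROUP BY connector_id"
).fetchall()
```

The same query shape works unchanged against a real destination once you substitute the schema and table the connector actually creates.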
External log services
As an alternative to the Fivetran Platform Connector, you can use a supported external log service.
You can connect one external logging service per destination. Fivetran will write log events for all connectors in the destination group to the connected service. If there is a logging service that you would like but that is not yet supported, let us know.
IMPORTANT: You must be on an Enterprise or Business Critical plan to use an external logging service.
Log event format
The log events are in a standardized JSON format:
{
"event": <Event name>,
"data": {
// Event-specific data. This section is optional and varies for each relevant event.
},
"created": <Event creation timestamp in UTC>,
"connector_type": <Connector type>,
"connector_id": <Connector ID>,
"connector_name": <Connector name>,
"sync_id": <Sync identifier as UUID>,
"exception_id": <Fivetran error identifier>
}
Field | Description |
---|---|
event | Event name |
data | Optional object that contains event type-specific data. Its contents vary depending on log event type |
created | Event creation timestamp in UTC |
connector_type | Connector type |
connector_id | Connector ID |
connector_name | Connector name. It is either schema name, schema prefix name, or schema table prefix name |
sync_id | Sync identifier as UUID. This optional field is only present in sync-related events generated by Fivetran connectors |
exception_id | Optional field that is only present if Fivetran encountered an unexpected problem |
See our Log event details documentation to learn about event-specific data.
NOTE: Some events may not be defined in the Log event details documentation as they are either connector type-specific or don't have the data object.
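As a sketch of how the standardized format above can be consumed, the following parses a raw log event into a typed record, treating data, sync_id, and exception_id as optional per the field table; the helper itself is illustrative, not part of any Fivetran SDK:

```python
# Sketch: parsing a log event in the standardized JSON format above.
# Optional fields (data, sync_id, exception_id) may be absent.
import json
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LogEvent:
    event: str
    created: str
    connector_type: Optional[str] = None
    connector_id: Optional[str] = None
    connector_name: Optional[str] = None
    data: dict = field(default_factory=dict)
    sync_id: Optional[str] = None
    exception_id: Optional[str] = None

def parse_event(raw: str) -> LogEvent:
    obj = json.loads(raw)
    return LogEvent(
        event=obj["event"],
        created=obj["created"],
        connector_type=obj.get("connector_type"),
        connector_id=obj.get("connector_id"),
        connector_name=obj.get("connector_name"),
        data=obj.get("data") or {},  # optional; default to empty object
        sync_id=obj.get("sync_id"),
        exception_id=obj.get("exception_id"),
    )

sample = '{"event": "sync_start", "created": "2024-01-01T00:00:00Z", "connector_id": "crm"}'
evt = parse_event(sample)
```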
Log event list
Connector log events
Fivetran connectors generate the following events:
Connector Event Name | Description |
---|---|
api_call | API calls made to a source service. Each event represents up to 10 API calls |
sql_query | SQL query executed on a source database |
create_schema | Schema created in destination |
create_table | Table created in destination |
drop_table | Table dropped from destination |
alter_table | Table columns added to, modified in or dropped from destination table |
connection_successful | Successfully established connection with source system |
connection_failure | Failed to establish connection with source system |
sync_start | Connector started syncing data |
import_progress | Rough estimate of import progress |
processed_records | Number of records read from source system |
write_to_table_start | Started writing records to destination table |
write_to_table_end | Finished writing records to destination table |
schema_migration_start | Schema migration started |
schema_migration_end | Schema migration ended |
copy_rows | Data copied to staging table |
delete_rows | Stale rows deleted from main table |
insert_rows | Updated rows inserted in main table |
update_rows | Existing rows in main table updated with new values |
records_modified | Number of records upserted, updated, or deleted in table within single operation during a sync |
sync_end | Data sync completed |
sync_stats | Current sync metadata. This event is only displayed after a successful sync for certain connector types |
json_value_too_long | A JSON value was too long for your destination and had to be truncated |
info | Information during data sync |
warning | Warning during data sync |
error | Error during data sync |
forced_resync_table | Forced re-sync of a table |
forced_resync_connector | Forced re-sync of a connector |
change_schema_config_via_sync | Schema configuration updated during a sync |
diagnostic_access_expired | Data access expired |
diagnostic_access_ended | Data access was removed because the related Zendesk ticket was resolved or deleted |
diagnostic_access_granted | Data accessed by Fivetran support for diagnostic purposes |
update_state | Connection-specific data you provided |
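Since each api_call event represents up to 10 API calls, a simple tally over the event stream gives an upper bound on calls made to the source. A minimal sketch over hypothetical events:

```python
# Sketch: tallying connector log events by name. Each api_call event
# stands for up to 10 API calls, so the tally yields an upper bound.
from collections import Counter

events = [
    {"event": "sync_start"},
    {"event": "api_call"},
    {"event": "api_call"},
    {"event": "sync_end"},
]
counts = Counter(e["event"] for e in events)
max_api_calls = counts["api_call"] * 10  # upper bound on source API calls
```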
Dashboard activity log events
Dashboard activities generate the following events:
Dashboard Activity Event Name | Description |
---|---|
change_schema_config | Schema configuration updated |
create_connector | New connector is created |
edit_connector | Connector's credential, sync period or delay notification period is edited |
delete_connector | Connector is deleted |
pause_connector | Connector is paused |
resume_connector | Connector is resumed |
resync_connector | Connector's re-sync is triggered |
resync_table | Connector's table re-sync is triggered |
force_update_connector | Manual update triggered for connector |
connect_logger | Logging service is connected |
update_logger | Logging service credential is updated |
pause_logging | Logging service is paused |
resume_logging | Logging service is resumed |
disconnect_logger | Logging service is disconnected |
update_warehouse | Destination configuration is updated |
test_connector_connection | Connector test(s) run |
diagnostic_access_approved | Data access granted for 21 days |
diagnostic_access_denied | Data access denied |
diagnostic_access_revoked | Data access revoked |
REST API call log events
REST API calls generate the following events:
API Call Event Name | Description |
---|---|
change_schema_config_via_api | Schema configuration updated using API call |
Transformation log events
Transformations for dbt Core generate the following events:
Transformations for dbt Core Event Name | Description |
---|---|
dbt_run_start | The dbt transformation started. |
dbt_run_succeeded | The dbt transformation was successfully finished. |
dbt_run_failed | The dbt transformation failed. |
Connector stages and related log events
This section describes the life cycle of a connector. It lists the log events generated at each stage of the connector life cycle and by connector-related dashboard activities. This helps you recognize the events in the logs and understand how their ordering relates to the connector's operations.
The connector-related log events are included in the logs captured by the Fivetran Platform Connector and external logging services.
NOTE: The connector life cycle stages are listed in chronological order where possible.
1. Connector creation initialized
When you create a connector, Fivetran writes its ID in Fivetran's database and assigns the connector status “New.”
NOTE: Fivetran writes encrypted credentials in its database.
If you create a connector in your Fivetran dashboard, you need to specify the required properties and authorization credentials.
If you create a connector using the Create a Connector API endpoint, you need to specify the required properties. However, you can omit the authorization credentials and authorize the connector later using the Connect Card.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
create_connector | New connector is created |
2. Connector setup tests run
During these tests, Fivetran verifies that credentials such as authentication details, paths, and IDs are correct and valid, and resources are available and accessible.
When you click Save & Test while creating a connector in your Fivetran dashboard, Fivetran runs the setup tests.
You can also run the setup tests by using the Run Connector Setup Tests endpoint. If the setup tests succeed, the connector has been successfully created.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
test_connector_connection | Connector test(s) run |
api_call | API calls made to a source service. Each event represents up to 10 API calls |
sql_query | SQL query executed on a source database |
3. Connector successfully created
After the setup tests have succeeded, Fivetran records the Connection Created user action in the User Actions Log. At this stage, the connector is paused. It does not extract, process, or load any data from the source to the destination.
After the connector has been successfully created, you can trigger the historical sync.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
connection_successful | Successfully established connection with source system |
connection_failure | Failed to establish connection with source system |
4. Connector schema changed
Connector schema changes include changes to the connector's schemas, tables, and table columns.
You change your connector's schema in the following cases:
- You want to switch the sync mode
- You want to change what data your connector syncs, which includes using the data blocking and column hashing features.
- You need to fix a broken connector.
- You need to change a connector schema as part of the schema review. For certain connector types, a schema review is required after you create the connector and before you run the historical sync.
You can change your schema in the following ways:
- In your Fivetran dashboard
- With the Fivetran REST API, using the relevant schema configuration endpoints
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
change_schema_config | Schema configuration updated |
NOTE: For an un-paused connector, changing the schema will trigger a sync to run. If a sync is already running, Fivetran will cancel the running sync and immediately initiate a new one with the new schema configuration.
5. Sync triggered
You need to run the historical sync for the connector to start working as intended. The first historical sync that Fivetran does for a connector is called the initial sync. During the historical sync, we extract and process all the historical data from the selected tables in the source. Periodically we will load data into the destination.
After a successful historical sync, the connector runs in incremental sync mode. In this mode, whenever possible, only data that has been modified or added (incremental changes) is extracted, processed, and loaded on schedule. We reimport tables where it is not possible to fetch only incremental changes. We use cursors to record the history of the syncs.
The connector sync frequency that you set in your Fivetran dashboard or by using the Modify a Connector endpoint defines how often the incremental sync is run.
IMPORTANT: Incremental sync runs on schedule at the set sync frequency only when the connector's sync scheduling type is set to auto in our REST API. Setting the scheduling type to manual effectively disables the schedule. In this case, you can trigger a manual sync to sync the connector.
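For example, switching a connector's scheduling type can be done through the Modify a Connector endpoint. A hedged sketch that only builds the HTTP request; the schedule_type field name and endpoint path should be verified against the REST API documentation, and the key/secret are placeholders:

```python
# Hedged sketch: building a Modify a Connector request that sets the
# sync scheduling type. The field name and path are assumptions to verify
# against Fivetran's REST API docs; credentials are placeholders.
import base64
import json

API_KEY, API_SECRET = "api-key", "api-secret"  # placeholders

def modify_connector_request(connector_id: str, schedule_type: str) -> dict:
    """Describe the PATCH request without sending it."""
    token = base64.b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
    return {
        "method": "PATCH",
        "url": f"https://api.fivetran.com/v1/connectors/{connector_id}",
        "headers": {
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"schedule_type": schedule_type}),
    }

req = modify_connector_request("db2ihva_test5", "auto")
```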
NOTE: Some connector types support the priority-first sync mode.
For this stage, Fivetran generates the following log events:
Event name | Description | Step |
---|---|---|
sync_start | Connector started syncing data | Extract |
api_call | API calls made to a source service. Each event represents up to 10 API calls | Extract |
sql_query | SQL query executed on a source database | Extract |
schema_migration_start | Schema migration started | Load |
schema_migration_end | Schema migration ended | Load |
write_to_table_start | Started writing records to destination table | Load |
copy_rows | Data copied to staging table | Load |
delete_rows | Stale rows deleted from main table | Load |
insert_rows | Updated rows inserted in main table | Load |
update_rows | Existing rows in main table updated with new values | Load |
create_schema | Schema created in destination | Load |
create_table | Table created in destination | Load |
json_value_too_long | A JSON value was too long for your destination and had to be truncated | Process |
drop_table | Table dropped from destination | Load |
alter_table | Table columns added to, modified in or dropped from destination table | Load |
change_schema_config_via_sync | Schema configuration updated during a sync. Updates occur when a new table is created during the sync and the user has chosen to automatically include new tables in the schema. | Process
write_to_table_end | Finished writing records to destination table | Load |
records_modified | Number of records modified during sync | Load |
sync_end | Data sync completed. Valid status field values: SUCCESSFUL, FAILURE, FAILURE_WITH_TASK, and RESCHEDULED | Load
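The Step column above indicates how a single sync's events interleave across Extract, Process, and Load. Because sync-related events carry a sync_id, grouping on it and sorting by created reconstructs each sync's timeline; a minimal sketch:

```python
# Sketch: reconstructing a sync timeline by grouping sync-stage events
# on sync_id and sorting by their created timestamp.
from collections import defaultdict

events = [
    {"event": "sync_end", "sync_id": "s1", "created": "2024-01-01T00:10:00Z"},
    {"event": "sync_start", "sync_id": "s1", "created": "2024-01-01T00:00:00Z"},
    {"event": "write_to_table_start", "sync_id": "s1", "created": "2024-01-01T00:05:00Z"},
]
timelines = defaultdict(list)
for e in events:
    timelines[e["sync_id"]].append(e)
for sync_id in timelines:
    # ISO-8601 UTC timestamps sort correctly as strings.
    timelines[sync_id].sort(key=lambda e: e["created"])

order = [e["event"] for e in timelines["s1"]]
# order == ["sync_start", "write_to_table_start", "sync_end"]
```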
6. Connector paused/resumed
When you have just created a connector, it is paused, which means it does not extract, process, or load data. After you successfully run the setup tests, the connector becomes enabled/resumed. After the successful initial sync, it starts working in incremental sync mode.
You can pause and resume the connector either in your Fivetran dashboard, or by using various Connector endpoints in our API.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
pause_connector | The connector was paused |
resume_connector | The connector was resumed |
NOTE: Resuming a connector will trigger either the initial sync, or an incremental sync, depending on the stage the connector was at when it was paused.
7. Re-sync triggered
In some cases you may need to re-run a historical sync to fix a data integrity error. This is called a re-sync. We sync all historical data in the tables and their columns in the source as selected in the connector configuration.
You can trigger a re-sync from your Fivetran dashboard. You can also trigger a re-sync by using the Modify a Connector endpoint.
For connectors that support table re-sync, you can trigger it either in the dashboard or by using the Re-sync connector table data endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
resync_connector | Connector's re-sync is triggered |
resync_table | Connector's table re-sync is triggered |
8. Manual sync triggered
You can trigger an incremental sync manually without waiting for the scheduled incremental sync.
You can do this either by clicking Sync Now in your Fivetran dashboard or by using the Sync Connector Data endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
force_update_connector | Manual update triggered for connector |
9. Sync ended
A sync can end in one of the following states:
- Successful - the sync was completed without issue and data in the destination is up to date.
- Failure - the sync failed due to an unknown issue.
- Failure with Error - the sync failed due to a known issue that requires user action to fix. An Error is generated and displayed on the Alerts page in the dashboard.
- Rescheduled - the sync was unable to complete at this time and will automatically resume syncing when it can complete. This is most commonly caused by hitting API quotas.
- Canceled - the sync was canceled by the user.
When the sync ends, Fivetran generates the following log events:
Event name | Explanation |
---|---|
sync_end | Final status of sync. |
sync_stats | Current sync metadata. This event is only displayed after a successful sync for certain connector types |
IMPORTANT: If you set the sync scheduling for your connector to manual, you need to manually trigger your syncs after you make this change. If a manually-triggered sync was rescheduled, you need to manually re-trigger that sync, since sync rescheduling only works with automatic sync scheduling.
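The end states above can be mapped directly from a sync_end event's status. A sketch of such a mapping; the status location (data.status) and the handling of canceled syncs are assumptions to verify against your own logs:

```python
# Sketch: mapping sync_end status values to the end states above.
# Reading the status from data.status is an assumption to verify
# against your own logs.
END_STATES = {
    "SUCCESSFUL": "completed without issue; destination is up to date",
    "FAILURE": "failed due to an unknown issue",
    "FAILURE_WITH_TASK": "failed with a known Error; user action required",
    "RESCHEDULED": "will automatically resume when it can complete",
}

def describe_sync_end(event: dict) -> str:
    status = event.get("data", {}).get("status", "")
    return END_STATES.get(status, f"unrecognized status: {status}")

desc = describe_sync_end({"event": "sync_end", "data": {"status": "RESCHEDULED"}})
```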
10. Connector broken
IMPORTANT: This is an abnormal state for a connector. It commonly happens due to transient networking or server errors and most often resolves itself with no action on your part.
A connector is considered broken when it fails to extract, process, or load data during a sync.
A broken connector has a red Broken label both in the Connector List and in the Connector dashboard.
If we know the breaking issue, we create a corresponding Error and notify you by email with instructions on how to resolve the issue. The Error is displayed on the Alerts page in your Fivetran dashboard. You need to take the actions listed in the Error message to fix the connector. We resend the Error email every seven hours until the Error is resolved.
If we don't know the breaking issue, we generate an Unknown Error in your dashboard after three failed syncs in a row. After a connector is broken with an Unknown Error for 48 hours, Fivetran automatically escalates the issue to our support and engineering teams.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
sync_end | The value of the status field is either FAILURE or FAILURE_WITH_TASK |
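The three-failures-in-a-row rule above can be checked from the log stream itself. A minimal sketch over a connector's recent sync_end statuses; treating both failure statuses as failures here is an assumption:

```python
# Sketch: flagging a connector after three failed syncs in a row,
# mirroring the Unknown Error escalation rule described above.
def is_broken(statuses: list) -> bool:
    """True if the three most recent sync_end statuses are all failures."""
    failures = {"FAILURE", "FAILURE_WITH_TASK"}
    return len(statuses) >= 3 and all(s in failures for s in statuses[-3:])

recent = ["SUCCESSFUL", "FAILURE", "FAILURE", "FAILURE"]
broken = is_broken(recent)
```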
11. Connector modified
You can change a connector's credentials, incremental sync frequency, delay notification period, and other connector-specific details. You can modify the connector in your Fivetran dashboard or by using the Modify a Connector endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
edit_connector | Connector's credential, sync period or delay notification period is edited |
NOTE: After you have modified and saved your connector, Fivetran automatically runs the setup tests.
12. Connector deleted
When you delete a connector, we delete all of its data from Fivetran's database. You can delete connectors both in your Fivetran dashboard and by using the Delete a Connector endpoint.
For this stage, Fivetran generates the following log events:
Event name | Description |
---|---|
delete_connector | Connector is deleted. Events of this type can be only seen in the logs generated by the Fivetran Platform Connector and external logging services because the corresponding connector's Dashboard becomes unavailable after the connector has been deleted. |
Log event details
The following sections provide details for log events that have event-specific data.
alter_table
"data" : {
"type": "ADD_COLUMN",
"table": "competitor_profile_pvo",
"properties" : {
"columnName": "object_version_number",
"dataType": "INTEGER",
"byteLength": null,
"precision": null,
"scale": null,
"notNull": null
}
}
fields | description |
---|---|
type | Table change type. For example, ADD_COLUMN as in the sample above |
table | Table name |
properties | Column properties |
columnName | Column name |
dataType | Column data type |
byteLength | Column value byte length |
precision | Column value precision |
scale | Column value scale |
notNull | Whether the column disallows NULL values |
api_call
"data" : {
"method" : "GET",
"uri" : "https://sheets.googleapis.com/v4/spreadsheets/1frvH9KzuMXN8MoIdrTpw7UAROS2FhTNmf_JZkj2CiK0/values/Requests"
}
fields | description |
---|---|
method | API method |
uri | Endpoint URI |
body | Request body. Optional |
change_schema_config
"data": {
"actor": "john.doe@company.com",
"connectorId": "sql_server_test",
"properties": {
"ENABLED_COLUMNS": [
{
"schema": "testSchema",
"table": "testTable2",
"columns": [
"ID"
]
}
],
"ENABLED_TABLES": [
{
"schema": "testSchema",
"tables": [
"testTable2"
]
}
]
}
}
fields | description |
---|---|
actor | Actor's account login email |
connectorId | Connector ID |
properties | Contains schema change type and the relevant entities |
DISABLED | Array of names of schemas disabled to sync |
ENABLED | Array of names of schemas enabled to sync |
DISABLED_TABLES | Array of names of tables disabled to sync |
ENABLED_TABLES | Array of names of tables enabled to sync |
DISABLED_COLUMNS | Array of names of columns disabled to sync |
ENABLED_COLUMNS | Array of names of columns enabled to sync |
HASHED_COLUMNS | Array of names of hashed columns |
UNHASHED_COLUMNS | Array of names of unhashed columns |
SYNC_MODE_CHANGE_FOR_TABLES | Array of schemas containing tables and columns with updated sync mode |
tablesSyncModes | Array of tables with updated sync mode within schema |
schema | Schema name |
tables | Array of table names |
table | Table name |
columns | Array of column names |
syncMode | Updated sync mode. Possible values: Legacy, History |
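Working with the nested properties object above is mostly dictionary traversal. A sketch that lists which tables a change_schema_config event enabled, per schema, using the sample payload:

```python
# Sketch: extracting enabled tables from a change_schema_config event's
# data payload (structure as in the sample above).
data = {
    "actor": "john.doe@company.com",
    "connectorId": "sql_server_test",
    "properties": {
        "ENABLED_TABLES": [
            {"schema": "testSchema", "tables": ["testTable2"]}
        ]
    },
}
# Map each schema to the tables enabled in it; ENABLED_TABLES may be absent.
enabled = {
    entry["schema"]: entry["tables"]
    for entry in data["properties"].get("ENABLED_TABLES", [])
}
```

The same pattern applies to the other change-type keys (DISABLED_TABLES, ENABLED_COLUMNS, and so on).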
change_schema_config_via_api
"data": {
"actor": "john.doe@company.com",
"connectorId": "purported_substituting",
"properties": {
"DISABLED_TABLES": [
{
"schema": "shopify_pbf_schema28",
"tables": [
"price_rule"
]
}
],
"ENABLED_TABLES": [
{
"schema": "shopify_pbf_schema28",
"tables": [
"order_rule"
]
}
],
"includeNewByDefault": false,
"ENABLED": [
"shopify_pbf_schema28"
],
"DISABLED": [
"shopify_pbf_schema30"
],
"includeNewColumnsByDefault": false
}
}
fields | description |
---|---|
actor | Actor's account login email |
connectorId | Connector ID |
properties | Contains schema change types and relevant entities |
DISABLED | Array of names of schemas disabled to sync |
ENABLED | Array of names of schemas enabled to sync |
DISABLED_TABLES | Array of names of tables disabled to sync |
ENABLED_TABLES | Array of names of tables enabled to sync |
schema | Schema name |
tables | Array of table names |
columns | Array of column names |
includeNewByDefault | If set to true, all new schemas, tables, and columns are enabled to sync |
includeNewColumnsByDefault | If set to true, only new columns are enabled to sync |
change_schema_config_via_sync
"data": {
"connectorId": "documentdb",
"properties": {
"ADDITION": [
{
"schema": "docdb_1",
"tables": [
"STRING_table"
]
}
],
"REMOVAL": [
{
"schema": "docdb_2",
"tables": [
"STRING_table"
]
}
]
}
}
fields | description |
---|---|
connectorId | Connector ID |
properties | Contains schema change types and relevant entities |
ADDITION | Contains schemas and tables enabled to sync |
REMOVAL | Contains schemas and tables disabled to sync |
schema | Schema name |
tables | Array of table names |
connection_failure
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testName" : "Connecting to SSH tunnel",
"message" : "The ssh key might have changed"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
testName | Name of failed test |
message | Message |
connection_successful
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testName" : "DB2i DB accessibility test",
"message" : ""
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
testName | Name of succeeded test |
message | Message |
copy_rows
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_ad_set_attribution_2022_11_03_73te45jo6q34mvjfqfbcwjaea",
"destinationName" : "ad_set_attribution",
"destinationSchema" : "facebook_ads",
"copyType" : "WRITE_TRUNCATE"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
destinationName | Table name in destination. Optional |
destinationSchema | Schema name in destination. Optional |
copyType | Copy type. Optional |
create_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"properties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 11,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1112,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.11",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "VBVJRN",
"logJournalSchema" : "DBETEST1",
"agentToken" : null,
"sshHostFromSbft" : null
}
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
properties | Connector type-specific properties |
create_table
"data" : {
"schema" : "facebook_ads",
"name" : "account_history",
"columns" : {
"schema" : "STRING",
"update_id" : "STRING",
"_fivetran_synced" : "TIMESTAMP",
"rows_updated_or_inserted" : "INTEGER",
"update_started" : "TIMESTAMP",
"start" : "TIMESTAMP",
"progress" : "TIMESTAMP",
"id" : "STRING",
"message" : "STRING",
"done" : "TIMESTAMP",
"table" : "STRING",
"status" : "STRING",
"primary_key_clause" : "\"_FIVETRAN_ID\""
}
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
columns | Table columns. Contains table column names and their data types |
primary_key_clause | Column or set of columns forming primary key. Optional |
dbt_run_failed
"data": {
"dbtJobId": "upriver_avidity",
"dbtJobName": "Every minute run and test project models",
"dbtJobType": "Custom dbt job",
"models": [],
"startTime": "2023-05-26T00:44:02.306Z",
"startupDetails": {
"type": "scheduled"
},
"endTime": "2023-05-26T00:44:16.136Z",
"result": {
"stepResults": [
{
"step": {
"name": "Run project models",
"command": "dbt run --models dbt_demo_project"
},
"success": false,
"startTime": "2023-05-26T00:44:13.258Z",
"endTime": "2023-05-26T00:44:16.054Z",
"commandResult": {
"exitCode": 1,
"output": "Running with dbt=0.20.1\nFound 20 models, 19 tests, 0 snapshots, 0 analyses, 453 macros, 0 operations, 0 seed files, 9 sources, 0 exposures\n\nERROR: Database Error\n connection to server at \"testing.cw43lptekopo.us-east-1.redshift.amazonaws.com\" (34.204.122.158), port 5439 failed: FATAL: password authentication failed for user \"developers\"\n connection to server at \"testing.cw43lptekopo.us-east-1.redshift.amazonaws.com\" (34.204.122.158), port 5439 failed: FATAL: password authentication failed for user \"developers\"\n ",
"error": ""
},
"successfulModelRuns": 0,
"failedModelRuns": 0
}
],
"description": "Steps: successful 0, failed 1"
}
}
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
endTime | Run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
description | Step description |
dbt_run_start
"data": {
"dbtJobId": "skepticism_filled",
"dbtJobName": "RUN_MODELS:wait",
"dbtJobType": "Scheduled: Run",
"models": [
{
"id": "blessed_enjoyer",
"name": "wait"
}
],
"startTime": "2023-05-26T00:27:24.356Z",
"startupDetails": {
"type": "integrated_scheduler",
"jobId": 1111111
}
}
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
jobId | Startup job ID |
dbt_run_succeeded
"data": {
"dbtJobId": "canine_extravagant",
"dbtJobName": "every5minutes1",
"dbtJobType": "Custom dbt job",
"models": [
{
"id": "splashed_obliterated",
"name": "simple_model"
}
],
"startTime": "2023-05-26T00:40:09.790Z",
"startupDetails": {
"type": "scheduled"
},
"endTime": "2023-05-26T00:41:03.350Z",
"result": {
"stepResults": [
{
"step": {
"name": "run models",
"command": "dbt run --models simple_model"
},
"success": true,
"startTime": "2023-05-26T00:40:42.620Z",
"endTime": "2023-05-26T00:41:03.060Z",
"commandResult": {
"exitCode": 0,
"output": "00:40:46 Running with dbt=1.4.5\n00:40:47 Unable to do partial parsing because saved manifest not found. Starting full parse.\n00:40:49 Found 3 models, 4 tests, 0 snapshots, 0 analyses, 571 macros, 0 operations, 0 seed files, 1 source, 0 exposures, 0 metrics\n00:40:49 \n00:40:54 Concurrency: 1 threads (target='prod')\n00:40:54 \n00:40:54 1 of 1 START sql table model google_sheets.simple_model ........................ [RUN]\n00:41:01 1 of 1 OK created sql table model google_sheets.simple_model ................... [OK in 7.13s]\n00:41:02 \n00:41:02 Finished running 1 table model in 0 hours 0 minutes and 12.87 seconds (12.87s).\n00:41:02 \n00:41:02 Completed successfully\n00:41:02 \n00:41:02 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1",
"error": ""
},
"successfulModelRuns": 1,
"failedModelRuns": 0
}
],
"description": "Steps: successful 1, failed 0"
}
}
fields | description |
---|---|
dbtJobId | dbt job ID |
dbtJobName | dbt job name |
dbtJobType | dbt job type |
models | Array of models |
id | Model ID |
name | Model name |
startTime | Run start time |
startupDetails | Startup details |
type | Startup type |
endTime | Run end time |
result | Result details |
stepResults | Step results |
step | Step details |
name | Step name |
command | Step command |
success | Boolean specifying whether step was successful |
startTime | Step run start time |
endTime | Step run end time |
commandResult | Command run result details |
exitCode | Command exit code |
output | Command output |
error | Command execution errors |
successfulModelRuns | Number of successful model runs |
failedModelRuns | Number of failed model runs |
description | Step description |
delete_connector
"data": {
"actor": "john.doe@company.com",
"id": "hva_main_metrics_test_qe_benchmark"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
delete_rows
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_ad_set_attribution_2022_11_03_73er44io6q0m1dfgjfbcghjea",
"deleteCondition" : "`#existing`.`ad_set_id` = `#scratch`.`ad_set_id` AND `#existing`.`ad_set_updated_time` = `#scratch`.`ad_set_updated_time`"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
deleteCondition | Delete condition |
diagnostic_access_approved
"data": {
"message": "Data access granted for 21 days.",
"ticketId": "123456",
"destinationName": "destination",
"connectorName": "facebook_ads",
"actor": "actor"
}
fields | description |
---|---|
message | Diagnostic data access message |
ticketId | Zendesk support ticket number |
destinationName | Destination name |
connectorName | Connector name |
actor | Requester name as specified in Zendesk |
diagnostic_access_granted
"data": {
"message": "Data accessed by Fivetran support for diagnostic purposes",
"connectorName": "connector",
"destinationName": "destination",
"requester": "requester-name",
"supportTicket": "1234"
}
fields | description |
---|---|
message | Diagnostic data access message |
connectorName | Connector name |
destinationName | Destination name |
requester | Requester name as specified in Zendesk |
supportTicket | Zendesk support ticket number |
drop_table
"data" : {
"schema" : "company_bronco_shade_staging",
"name" : "facebook_ads_company_audit_2022_11_03_puqdfgi35r6e1odfgy36rdfgv"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
reason | Reason why the table was dropped (optional) |
edit_connector
"data" : {
"actor" : "john.doe@company.com",
"editType" : "CREDENTIALS",
"id" : "db2ihva_test5",
"properties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 11,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1111,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.12",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "SDSJRN",
"logJournalSchema" : "SFGTEST1",
"agentToken" : null,
"sshHostFromSbft" : null
},
"oldProperties" : {
"host" : "111.111.11.11",
"password" : "************",
"user" : "dbihvatest",
"database" : "dbitest1",
"port" : 1111,
"tunnelHost" : "11.111.111.111",
"tunnelPort" : 12,
"tunnelUser" : "hvr",
"alwaysEncrypted" : true,
"agentPublicCert" : "...",
"agentUser" : "johndoe",
"agentPassword" : "************",
"agentHost" : "localhost",
"agentPort" : 1111,
"publicKey" : "...",
"parameters" : null,
"connectionType" : "SshTunnel",
"databaseHost" : "111.111.11.12",
"databasePassword" : "dbihvatest",
"databaseUser" : "dbihvatest",
"logJournal" : "SDSJRN",
"logJournalSchema" : "SFGTEST1",
"agentToken" : null,
"sshHostFromSbft" : null
}
}
fields | description |
---|---|
actor | Actor's account login email |
editType | Edit type, e.g. `CREDENTIALS` |
id | Connector ID |
properties | Connector type-specific properties after the edit |
oldProperties | Connector type-specific properties before the edit |
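Because the event carries both `properties` and `oldProperties`, you can diff the two objects to see exactly which fields an edit changed. A minimal sketch (the `changed_fields` helper is hypothetical; the field values are taken from the example above, where `tunnelPort` changed from 12 to 11):

```python
def changed_fields(data):
    """Return the property names whose values differ between
    oldProperties and properties in an edit_connector event."""
    new, old = data["properties"], data["oldProperties"]
    return sorted(k for k in new if old.get(k) != new.get(k))

event_data = {
    "properties": {"tunnelPort": 11, "tunnelUser": "hvr"},
    "oldProperties": {"tunnelPort": 12, "tunnelUser": "hvr"},
}
print(changed_fields(event_data))  # → ['tunnelPort']
```

Note that masked values such as `"password": "************"` compare equal before and after an edit, so credential changes are only visible through the `editType` field, not the diff.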
force_update_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
forced_resync_connector
"data": {
"reason": "Credit Card Payment resync",
"cause": "MIGRATION"
}
fields | description |
---|---|
reason | Re-sync reason |
cause | Re-sync cause |
forced_resync_table
"data" : {
"schema" : "hubspot_test",
"table" : "ticket_property_history",
"reason" : "Ticket's cursor is older than a day, triggering re-sync for TICKET and it's child tables.",
"cause" : "STRATEGY"
}
fields | description |
---|---|
schema | Schema name |
table | Table name |
reason | Resync reason |
cause | Resync cause |
import_progress
"data": {
"tableProgress": {
"dbo.orders": "NOT_STARTED",
"dbo.history": "NOT_STARTED",
"dbo.district": "COMPLETE",
"dbo.new_order": "NOT_STARTED"
}
}
fields | description |
---|---|
tableProgress | Table progress as a list of tables with their import status, e.g. `NOT_STARTED`, `COMPLETE` |
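The `tableProgress` map makes it easy to track how far an initial import has gotten. A minimal sketch that tallies tables by status (the `import_summary` helper is hypothetical; statuses are those shown in the example above):

```python
from collections import Counter

def import_summary(data):
    """Tally import_progress statuses across tables."""
    return Counter(data["tableProgress"].values())

event_data = {"tableProgress": {
    "dbo.orders": "NOT_STARTED",
    "dbo.history": "NOT_STARTED",
    "dbo.district": "COMPLETE",
    "dbo.new_order": "NOT_STARTED",
}}
summary = import_summary(event_data)
print(summary["COMPLETE"], summary["NOT_STARTED"])  # → 1 3
```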
info
"data" : {
"type" : "extraction_start",
"message" : "{currentTable: DBIHVA.JOHNDOE_TEST_DATE, imported: 2, selected: 3}"
}
fields | description |
---|---|
type | Information message type |
message | Information message |
insert_rows
"data": {
"schema": "quickbooks",
"name": "journal_entry_line"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
pause_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
processed_records
"data": {
"table": "ITEM_PRICES",
"recordsCount": 24
}
fields | description |
---|---|
table | Table name |
recordsCount | Number of processed records |
records_modified
"data" : {
"schema" : "facebook_ads",
"table" : "company_audit",
"operationType" : "REPLACED_OR_INSERTED",
"count" : 12
}
fields | description |
---|---|
schema | Schema name |
table | Table name |
operationType | Operation type |
count | Number of operations |
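Summing `records_modified` events over a sync gives a quick per-operation activity count. A minimal sketch (the `rows_by_operation` helper is hypothetical; field names match the example above):

```python
from collections import defaultdict

def rows_by_operation(events):
    """Sum records_modified counts per operation type across events."""
    totals = defaultdict(int)
    for event in events:
        data = event["data"]
        totals[data["operationType"]] += data["count"]
    return dict(totals)

events = [
    {"data": {"schema": "facebook_ads", "table": "company_audit",
              "operationType": "REPLACED_OR_INSERTED", "count": 12}},
    {"data": {"schema": "facebook_ads", "table": "company_audit",
              "operationType": "REPLACED_OR_INSERTED", "count": 3}},
]
print(rows_by_operation(events))  # → {'REPLACED_OR_INSERTED': 15}
```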
resume_connector
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
resync_connector
"data": {
"actor": "john.doe@company.com",
"id": "bench_10g"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
resync_table
"data": {
"actor": "john.doe@company.com",
"id": "ash_hopper_staging",
"schema": "public",
"table": "big"
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
schema | Schema name |
table | Table name |
schema_migration_end
"data" : {
"migrationStatus" : "SUCCESS"
}
fields | description |
---|---|
migrationStatus | Migration status, e.g. `SUCCESS` |
sql_query
"data" : {
"query" : "SELECT OBJECT_SCHEMA_NAME(sc.object_id) as TABLE_SCHEMA, OBJECT_NAME(sc.object_id) as TABLE_NAME, sc.name as COLUMN_NAME, sc.column_id, ISNULL(TYPE_NAME(sc.system_type_id), t.name) as DATA_TYPE, COLUMNPROPERTY(sc.object_id, sc.name, 'ordinal') as ORDINAL_POSITION, CONVERT(nvarchar(4000), OBJECT_DEFINITION(sc.default_object_id)) as COLUMN_DEFAULT, ISNULL(TYPE_NAME(sc.system_type_id), t.name) as IS_NULLABLE, COLUMNPROPERTY(sc.object_id, sc.name, 'octetmaxlen') as CHARACTER_OCTET_LENGTH, convert(tinyint, CASE WHEN sc.system_type_id IN (48, 52, 56, 59, 60, 62, 106, 108, 122, 127) THEN sc.precision END) as NUMERIC_PRECISION, convert(int, CASE WHEN sc.system_type_id IN (40, 41, 42, 43, 58, 61) THEN NULL ELSE ODBCSCALE(sc.system_type_id, sc.scale) END) as NUMERIC_SCALE FROM sys.columns AS sc LEFT JOIN sys.types t ON sc.user_type_id = t.user_type_id LEFT JOIN sys.tables as tbs ON sc.object_id = tbs.object_id WHERE tbs.is_ms_shipped = 0",
"number" : 5,
"executionTime" : 44
}
fields | description |
---|---|
query | SQL query |
number | Number of SQL queries run against the source |
executionTime | Execution time in seconds |
sync_end
"data" : {
"status" : "SUCCESSFUL"
}
fields | description |
---|---|
status | Sync status. Valid values: "SUCCESSFUL", "RESCHEDULED", "FAILURE", "FAILURE_WITH_TASK" |
reason | If status is FAILURE, the description of the reason why the sync failed. If status is FAILURE_WITH_TASK, the description of the Error. If status is RESCHEDULED, the description of the reason why the sync was rescheduled. |
task_type | If status is FAILURE_WITH_TASK or RESCHEDULED, the type of the Error that caused the failure or rescheduling, respectively, e.g., reconnect, update_service_account |
rescheduledAt | If status is RESCHEDULED, the scheduled time to resume the sync. The scheduled time depends on the reason the sync was rescheduled |
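Since `reason`, `task_type`, and `rescheduledAt` are only present for non-successful outcomes, a consumer should read them defensively. A minimal sketch of dispatching on `status` (the `describe_sync_end` helper is hypothetical):

```python
def describe_sync_end(data):
    """Summarize a sync_end event; optional fields are only present
    for non-successful outcomes, so read them with .get()."""
    status = data["status"]
    if status == "SUCCESSFUL":
        return "sync completed"
    if status == "RESCHEDULED":
        return f"rescheduled until {data.get('rescheduledAt', 'unknown')}: " \
               f"{data.get('reason', 'unknown')}"
    # FAILURE or FAILURE_WITH_TASK
    return f"{status}: {data.get('reason', 'unknown')}"

print(describe_sync_end({"status": "SUCCESSFUL"}))  # → sync completed
print(describe_sync_end({"status": "FAILURE", "reason": "auth expired"}))
```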
sync_stats
NOTE: The sync_stats event is only generated for a successful sync for the following connectors:
"data" : {
"extract_time_s" : 63,
"extract_volume_mb" : 0,
"process_time_s" : 21,
"process_volume_mb" : 0,
"load_time_s" : 34,
"load_volume_mb" : 0,
"total_time_s" : 129
}
fields | description |
---|---|
extract_time_s | Extract time in seconds |
extract_volume_mb | Extracted data volume in MB |
process_time_s | Process time in seconds |
process_volume_mb | Processed data volume in MB |
load_time_s | Load time in seconds |
load_volume_mb | Loaded data volume in MB |
total_time_s | Total time in seconds |
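Note that the three stage timings need not sum exactly to `total_time_s`; in the example above they account for 118 of the 129 seconds. A minimal sketch that extracts the per-stage breakdown and the remainder (the `stage_breakdown` helper is hypothetical, and attributing the remainder to overhead is an assumption):

```python
def stage_breakdown(data):
    """Split a sync_stats event into stage times and the remainder
    of total_time_s not covered by the three stages."""
    stages = {key: data[key] for key in
              ("extract_time_s", "process_time_s", "load_time_s")}
    remainder = data["total_time_s"] - sum(stages.values())
    return stages, remainder

stages, remainder = stage_breakdown({
    "extract_time_s": 63, "process_time_s": 21,
    "load_time_s": 34, "total_time_s": 129,
})
print(remainder)  # → 11
```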
test_connector_connection
"data" : {
"actor" : "john.doe@company.com",
"id" : "db2ihva_test5",
"testCount" : 6
}
fields | description |
---|---|
actor | Actor's account login email |
id | Connector ID |
testCount | Number of tests |
update_rows
"data" : {
"schema" : "hubspot_johndoe",
"name" : "company"
}
fields | description |
---|---|
schema | Schema name |
name | Table name |
update_state
"data": {
"state": 681
}
fields | description |
---|---|
state | Connection-specific data you provide to us as JSON. Supports nested objects |
warning
Example 1
"data" : {
"type" : "skip_table",
"table" : "api_access_requests",
"reason" : "No changed data in named range"
}
Example 2
"data" : {
"type" : "retry_api_call",
"message" : "Retrying after 60 seconds. Error : ErrorResponse{msg='Exceeded rate limit for endpoint: /api/export/data.csv, project: 11111 ', code='RateLimitExceeded', params='{}'}"
}
fields | description |
---|---|
type | Warning type |
table | Table name |
reason | Warning reason |
message | Warning message |
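As the two examples show, which fields accompany a warning depends on its `type`: a `skip_table` warning carries `table` and `reason`, while other types carry `message`. A minimal sketch of rendering either shape (the `warning_summary` helper is hypothetical):

```python
def warning_summary(data):
    """Render a warning event; which fields appear depends on the
    warning type, so fall back to the generic message form."""
    if data["type"] == "skip_table":
        return f"skipped {data['table']}: {data['reason']}"
    return f"{data['type']}: {data.get('message', '')}"

print(warning_summary({
    "type": "skip_table",
    "table": "api_access_requests",
    "reason": "No changed data in named range",
}))  # → skipped api_access_requests: No changed data in named range
```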
write_to_table_start
"data" : {
"table" : "company_audit"
}
fields | description |
---|---|
table | Table name |
write_to_table_end
"data" : {
"table" : "company_audit"
}
fields | description |
---|---|
table | Table name |