Creating Destinations with Hevo API

Hevo supports a number of databases and data warehouses as Destinations. You can use the Hevo API to create and work with Destinations for your data Pipelines. Read Create Destinations to know about the available endpoints.

You need to provide a Basic token to make the API request for creating the Destination. The Basic token is a combination of your API Secret and Key from your API credentials. Read Generating your API credentials for the steps to generate them.
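
The token follows the standard HTTP Basic authentication scheme, that is, the Base64 encoding of the two credential values joined by a colon. Here is a minimal sketch for generating it in a Unix-like shell, assuming the API Key serves as the username and the API Secret as the password; <api_key> and <api_secret> are placeholders for your own credentials:

# Base64-encode "<api_key>:<api_secret>" to get the Basic token
# (assumes the Key is the username and the Secret is the password)
echo -n '<api_key>:<api_secret>' | base64

Alternatively, you can pass --user '<api_key>:<api_secret>' to curl, which computes the Authorization header for you.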

Once you provide the Basic token in the Authorization header of the request, you can specify the Destination details such as its name, type, and configuration settings in the API request body.
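
For reference, the request body in each case has the following general shape. This is only a sketch; the parameters inside the config object vary by Destination type, as the samples below show:

{
  "name": "<destination_name>", // a name for the Destination
  "type": "<DESTINATION_TYPE>", // for example, SNOWFLAKE, BIGQUERY, or MYSQL
  "config": {
      // Destination-specific connection settings
  }
}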

Here are a few sample API requests for creating Destinations using the Hevo API. The inline // comments in these samples are explanatory annotations only; remove them before sending a request, as JSON does not support comments.

Snowflake

Snowflake offers a cloud-based data storage and analytics service, generally termed a data warehouse-as-a-service. Companies can use it to store and analyze data using cloud-based hardware and software. Read more about Snowflake as a Destination.

In the API Request:

  • Replace the values in the sample API request given below with your own to create your Snowflake Destination.
  • For the region parameter, specify the region in which your Snowflake account is located. Read Obtain your Snowflake account and region names.
  • Set populate_loaded_timestamp = true to append the __hevo_loaded_at column to the Destination table to indicate the time when the Event was loaded to the Destination. Else, set it to false.

curl --location -g --request POST 'https://us.hevodata.com/api/public/v2.0/destinations' \
--header 'Authorization: Basic <auth_token>' \
--header 'Content-Type: application/json' \
--data-raw '{
              "name": "snowflake_dest_name", // replace "snowflake_dest_name" with a name for the Destination
              "type": "SNOWFLAKE",
              "config": {
                          "db_name": "sf_db", // replace "sf_db" with the name of your database
                          "account_name": "sf_acc", // replace "sf_acc" with your Snowflake account name
                          "db_user": "super_user", // replace "super_user" with the user you created for Hevo
                          "schema_name": "PUBLIC", // replace "PUBLIC" with the database schema name
                          "region": "ap-south-1", // replace "ap-south-1" with your Snowflake region
                          "warehouse": "COMPUTE_WH", // replace "COMPUTE_WH" with your warehouse name
                          "db_password": "password", // replace "password" with the user's password
                          "populate_loaded_timestamp": true
                        }
            }'

Google BigQuery

Google BigQuery is a fully managed, serverless data warehouse that supports Structured Query Language (SQL) to derive meaningful insights from the data. Read more about Google BigQuery.

In the API Request:

  • Replace the values in the sample API request below with your own to create your Google BigQuery Destination.
  • Set populate_loaded_timestamp = true to append the __hevo_loaded_at column to the Destination table to indicate the time when the Event was loaded to the Destination. Else, set it to false.
  • Set enable_streaming_inserts = true to stream data to your BigQuery Destination as it is received from the Source, instead of loading it via a job as per the defined Pipeline schedule. Else, set it to false.

curl --location -g --request POST 'https://us.hevodata.com/api/public/v2.0/destinations' \
--header 'Authorization: Basic <auth_token>' \
--header 'Content-Type: application/json' \
--data-raw '{
           "name": "bq_dest_name", // replace "bq_dest_name" with the name you want for your Destination
           "type": "BIGQUERY",
           "config": {
                    "project_id": "hevo-test-project", // replace "hevo-test-project" with your project ID in BigQuery
                    "dataset_name": "hevo_test_dataset", // replace "hevo_test_dataset" with your dataset ID
                    "bucket": "hevo-test-bucket", // replace "hevo-test-bucket" with your GCS bucket name
                    "oauth_account_id": 1, // replace "1" with your OAuth client ID
                    "sanitize_name": true, // set to true to replace non-alphanumeric characters and spaces in table and column names with underscores
                    "populate_loaded_timestamp": true,
                    "enable_streaming_inserts": false
                  }
            }'

MySQL

Hevo can load data from any of your Pipelines into a MySQL database. Read more about MySQL as a Destination.

In the API Request:

  • Replace the values of the parameters given in the sample API request below with your own to create your MySQL Destination.
  • Set sanitize_name = true to remove all non-alphanumeric characters and spaces in a table or column name and replace them with an underscore; for example, a column named order no. would be loaded as order_no_. Else, set it to false.

Read Configure MySQL settings for details on other configuration parameters.

curl --location -g --request POST 'https://us.hevodata.com/api/public/v2.0/destinations' \
--header 'Authorization: Basic <auth_token>' \
--header 'Content-Type: application/json' \
--data-raw '{
          "name": "mysql_db", // replace "mysql_db" with the name you want for your Destination
          "type": "MYSQL",
          "config": {
                 "db_host": "demo.mysql.com", // replace with your MySQL host's IP address or DNS
                 "db_port": 3306, // replace with the port on which your MySQL server is listening for connections
                 "db_name": "demo", // replace with your Destination database name
                 "db_user": "root", // replace with a user that has a non-administrative role in the MySQL database
                 "db_password": "dbpassword", // replace with the user's database password
                 "sanitize_name": false
                 }
          }'