Prerequisites

Using TD Console

Create a new connection

Fill in the required credentials. Note that the API User Name and API Password are different from your Cvent application username and password. Contact your Cvent admin to generate an API User Name and API Password.

Select the Sandbox checkbox if you are using a test API account against the Cvent Sandbox environment.

Create a new transfer

Start Date: The starting point of the data time window. In the example shown in the preceding image, all Registrations modified from 2018-08-01 00:00:00 UTC onward are fetched.

Duration: The length of the time window. In the preceding example, the window spans 2018-08-01 00:00:00 UTC to 2018-09-01 00:00:00 UTC.

Incremental: When running on a schedule, the time window of the fetched data shifts forward automatically on each run. For example, if the initial configuration is January 1 with a 10-day duration, the first run fetches data modified from January 1 to January 10, the second run fetches data modified from January 11 to January 20, and so on.
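The window arithmetic above can be sketched in a few lines of shell. This is only an illustration of the incremental behavior, not part of the connector; it assumes GNU date and day-granularity windows, and START, DURATION_DAYS, and RUN are made-up example values.

```shell
# Illustrative sketch of how the incremental window shifts per run (GNU date).
START="2018-01-01"        # initial Start Date
DURATION_DAYS=10          # Duration of each window, in days
RUN=2                     # 1 = first scheduled run, 2 = second, ...

# Each run starts where the previous window ended.
OFFSET=$(( (RUN - 1) * DURATION_DAYS ))
FROM=$(date -u -d "$START + $OFFSET days" +%Y-%m-%d)
TO=$(date -u -d "$FROM + $((DURATION_DAYS - 1)) days" +%Y-%m-%d)
echo "run $RUN fetches records modified from $FROM to $TO"
```

For RUN=2 this prints the second window, January 11 to January 20, matching the example above.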

Preview

This shows a preview of the actual data for the specified transfer configuration. The columns are sorted alphabetically, except that custom field columns (if any) are placed at the end. The same order applies to the final results in the target database.

Choose the target database and table

Choose an existing database and table, or create new ones.

Note that the connector inserts the literal time values received from the Cvent API. These values are relative to the implicit time zone of the Cvent server and are assumed to be UTC. Also, the Event data type's time-related fields are relative to the event's own time zone. Therefore, the Data Storage Timezone setting does not indicate the actual time zone of the data.

Scheduling

Set a schedule if you want one. The import starts at the scheduled time, or immediately if you choose ‘Once now’.

Execute Load Job

You must specify the database and table where the data is stored.

It is recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time. If the option is not given, the data connector selects the first long or timestamp column as the partitioning time. The column specified by --time-column must be of long or timestamp type (use the Preview results to check the available column names and types; most data types have a last_modified_date column).

If your data doesn’t have a time column, you can add one by using the add_time filter option. See the add_time filter plugin documentation for details.
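As a sketch, a filters section in the load configuration file using add_time might look like the following; verify the exact option names against the add_time filter plugin documentation before relying on them.

```yaml
# Hypothetical fragment of a load config: add a "time" column
# populated with the upload time of each record.
filters:
  - type: add_time
    to_column:
      name: time
      type: timestamp
    from_value:
      mode: upload_time
```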

Submit the load job. It may take a couple of hours, depending on the data size.

The connector:issue command assumes that you have already created a database (sample_db) and a table (sample_table). If the database or the table does not exist in TD, the connector:issue command fails. Therefore, create the database and table manually, or use the --auto-create-table option with the td connector:issue command to create them automatically:
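For example, assuming a configuration file named load.yml (a placeholder name) and the sample_db and sample_table names from above:

```shell
# Issue the load job, creating the database and table if they do not exist
td connector:issue load.yml \
  --database sample_db \
  --table sample_table \
  --time-column last_modified_date \
  --auto-create-table
```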

Scheduled execution

You can schedule periodic data connector execution for recurring Cvent imports. With this feature, you no longer need a cron daemon in your local data center.

Create the schedule

A new schedule can be created by using the td connector:create command. The name of the schedule, a cron-style schedule, the database and table where the data will be stored, and the Data Connector configuration file are required.

The `cron` parameter also accepts three special options: `@hourly`, `@daily`, and `@monthly`. For more detail, see Scheduled Jobs.

By default, the schedule is set up in the UTC timezone. You can set the schedule in another timezone by using the -t or --timezone option. The --timezone option supports only extended timezone formats such as ‘Asia/Tokyo’ and ‘America/Los_Angeles’. Timezone abbreviations such as PST and CST are not supported and may lead to unexpected schedules.
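For example, with a hypothetical schedule name (daily_cvent_import) and config file (load.yml), the following cron expression runs the import daily at 00:10 in the Asia/Tokyo timezone:

```shell
# Schedule a daily Cvent import at 00:10 Asia/Tokyo time
td connector:create daily_cvent_import "10 0 * * *" \
  sample_db sample_table load.yml \
  --time-column last_modified_date \
  --timezone Asia/Tokyo
```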

List the Schedules

You can see the list of currently scheduled entries with the td connector:list command.

Appendix A. How column names are mapped

Due to an upstream issue in the Cvent API, the "RSVP By Date" field is mapped to an unusual column name: "rsv_pby_date".

Custom field names are mapped to snake case by the following steps:

1. Replace all non-alphanumeric characters with underscores ("_").

2. Remove all leading and trailing underscores left after step 1.

3. If the first character is a digit, prefix the name with "col_".

4. Collapse all consecutive underscores ("_") into a single underscore.

5. If the name is empty after the preceding steps, name it "custom_field" (this column field name is different from the column name in the Console).

6. Lowercase all of the characters.

For example, "Hello @ World" is mapped to "hello_world", and "" (empty) is mapped to "custom_field". If there are naming conflicts, the conflicting custom field name is appended with the ID of the field. For example, if there is a custom field named "First Name" on the Contact type (which already has a predefined field with that name), the custom field name is mapped to something like "first_name_A3E3_ERQNIHOIU_324AE".
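The mapping steps above can be sketched as a shell function. This is an illustration of the rules, not the connector's actual implementation:

```shell
# Illustrative reimplementation of the custom-field name mapping steps.
to_col() {
  s=$(printf '%s' "$1" | sed -E 's/[^A-Za-z0-9]/_/g')  # step 1: non-alphanumerics -> "_"
  s=$(printf '%s' "$s" | sed -E 's/^_+//; s/_+$//')    # step 2: trim leading/trailing "_"
  case "$s" in [0-9]*) s="col_$s" ;; esac              # step 3: digit-first -> "col_" prefix
  s=$(printf '%s' "$s" | sed -E 's/_+/_/g')            # step 4: collapse consecutive "_"
  [ -n "$s" ] || s="custom_field"                      # step 5: empty -> "custom_field"
  printf '%s\n' "$s" | tr '[:upper:]' '[:lower:]'      # step 6: lowercase
}

to_col "Hello @ World"   # hello_world
to_col ""                # custom_field
```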

Appendix B. Event's time zone

Unlike other datetime fields, the preceding four event-related datetime fields are imported as text and can be recognized in the database by a slightly different format (for example, the Start Date is imported as "2018-10-09T17:59:00", whereas it would be "2018-10-09 17:59:00.000" if it were a default datetime value). Event-related datetime fields are relative to their own event's time zone. Therefore, these event-related datetime fields are not treated as absolute time references like other fields.

The event-related datetime fields correspond to the following columns after import: