Amazon S3


You can create an Amazon S3 Destination Connector to write CSV files (by default) or JSONL files to an S3 bucket or folder using an Access Key ID and Secret Access Key.
Supported file formats: CSV and JSONL


Required information:
  • Bucket Name
  • Region
  • Access Key ID
  • Secret Access Key

Creating an Amazon S3 Destination Connector

Step 1: After selecting + New Connector on the Connectors page, click Amazon S3 under the System prompt.
Step 2: Provide a Connector Name.
Step 3: Select Destination Connector.

S3 Bucket Information

Step 4: Provide the name of the Amazon S3 bucket in the Bucket Name field.
Step 5: Provide the folder name, including the trailing “/”. Subfolders within the provided folder will be ignored.
If this field is left blank, we will write to the root of the Amazon S3 bucket and ignore any folders within the bucket.
Step 6: Provide the region for the S3 bucket.
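
For reference, these three values map onto a standard S3 client configuration. A minimal boto3 sketch with placeholder bucket, folder, and region names (this illustrates the folder-prefix semantics, not the Connector's internals):

```python
import boto3

# Placeholder values; substitute your own bucket, folder, and region.
BUCKET_NAME = "my-data-bucket"
FOLDER = "exports/"          # trailing "/" included, matching Step 5
REGION = "us-east-1"

s3 = boto3.client("s3", region_name=REGION)

# The Connector writes objects under the folder prefix, e.g.:
#   exports/chunk-0-<GUID>.csv
# Listing with Prefix=FOLDER shows only those objects.
response = s3.list_objects_v2(Bucket=BUCKET_NAME, Prefix=FOLDER)
for obj in response.get("Contents", []):
    print(obj["Key"])
```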

Access Key

Step 7: Provide an access key ID for the S3 bucket. To learn more about creating an access key ID, see the AWS documentation on managing access keys.
Step 8: Provide the secret access key for the S3 bucket.
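
These are the same credentials a standard S3 client would use. A minimal boto3 sketch with placeholder values (never hard-code real keys; this only illustrates what the Connector needs, not how it authenticates internally):

```python
import boto3

# Placeholder credentials; in practice, load these from a secrets manager
# or environment variables rather than hard-coding them.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...EXAMPLE",
    aws_secret_access_key="EXAMPLE-SECRET-KEY",
    region_name="us-east-1",
)

# A quick way to verify the key pair can reach the bucket:
s3.head_bucket(Bucket="my-data-bucket")
```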

Building the Schema for the Destination Connector

Use the schema designer to build the output schema for this Destination Connector.
  • Field Name: Provide a field name for each output field. These names will be used as the column headers or field names in the output file you are writing to.
  • Field Type: Define the type of each field. The field types will be used to enforce rules when you send data to this Connector.
  • Nullable: Check this box if the field is nullable. If the field is not nullable, you will be required to provide values for this field when sending data to this Connector.
  • Delete: Deletes the field.
  • Add Field: Adds another field to the schema.
Step 1: Click Add Field for each additional field required in the Schema.
Step 2: Select Create Schema once you have built the schema.
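
To make the type and nullability rules concrete, here is an illustrative sketch of how such a schema could be enforced (the schema contents and validation logic are placeholders, not Osmos internals):

```python
# Hypothetical schema mirroring what the schema designer builds:
# each field has a name, a type, and a nullable flag.
SCHEMA = [
    {"name": "order_id", "type": int,   "nullable": False},
    {"name": "customer", "type": str,   "nullable": False},
    {"name": "discount", "type": float, "nullable": True},
]

def validate(record: dict) -> list[str]:
    """Return a list of violations for one record."""
    errors = []
    for field in SCHEMA:
        value = record.get(field["name"])
        if value is None:
            if not field["nullable"]:
                errors.append(f"{field['name']} is required")
        elif not isinstance(value, field["type"]):
            errors.append(f"{field['name']} must be {field['type'].__name__}")
    return errors

print(validate({"order_id": 17, "customer": "Acme"}))  # [] -> valid
print(validate({"discount": 0.1}))                     # two missing required fields
```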

Advanced Options

Output File Format
By default, this Connector writes CSV files, and each Pipeline run produces a new file. If preferred, you can change the output format to JSONL instead of CSV.
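
For comparison, the same records in each format (illustrative field names):

```python
import csv
import io
import json

records = [
    {"order_id": 17, "customer": "Acme", "discount": 0.1},
    {"order_id": 18, "customer": "Globex", "discount": None},
]

# CSV (default): one header row, then one row per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["order_id", "customer", "discount"])
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())

# JSONL: one self-describing JSON object per line, no header.
for record in records:
    print(json.dumps(record))
```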

File Prefix Format String

File prefixes make it easier to manage the output of this Connector. The contents of this field are included at the start of the filename of each file this Connector writes. If a prefix is specified, a UUID will be appended to it to prevent filename conflicts. You can embed the UUID of the job by including {jobId} in your prefix format string, and strftime syntax is also supported.
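
As an illustrative sketch of how a prefix format string might expand (the expansion order below is an assumption based on this description, not Osmos internals):

```python
import uuid
from datetime import datetime, timezone

def expand_prefix(prefix: str, job_id: str) -> str:
    """Illustrative expansion: substitute {jobId}, apply strftime,
    then append a UUID to prevent filename conflicts."""
    expanded = prefix.replace("{jobId}", job_id)
    expanded = datetime.now(timezone.utc).strftime(expanded)
    return f"{expanded}{uuid.uuid4()}"

# Hypothetical prefix mixing strftime tokens with the {jobId} magic string.
print(expand_prefix("exports/%Y-%m-%d/{jobId}/run_", "8f14e45f"))
# e.g. exports/2024-05-01/8f14e45f/run_3fa85f64-5717-4562-b3fc-2c963f66afa6
```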

Limit Records Per File

By default, we do not limit the number of records written to a single destination file by a single job (i.e., a single run of a Pipeline or Uploader). If this box is checked, the data written to the destination will be "chunked" into separate files, each containing at most the number of records designated here. These "chunked" files are suffixed with their position in the sequence, e.g., filename_part_1.csv, filename_part_2.csv, etc.
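
A minimal sketch of this chunking behavior, using the _part_N naming from this section (the writing logic itself is illustrative):

```python
import csv

def write_chunked(records: list[dict], base: str, limit: int) -> list[str]:
    """Write records to CSV files of at most `limit` rows each,
    suffixed _part_1, _part_2, ... as described above."""
    filenames = []
    for part, start in enumerate(range(0, len(records), limit), start=1):
        filename = f"{base}_part_{part}.csv"
        chunk = records[start:start + limit]
        with open(filename, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(chunk[0]))
            writer.writeheader()
            writer.writerows(chunk)
        filenames.append(filename)
    return filenames

# 250 records with a limit of 100 -> filename_part_1.csv .. filename_part_3.csv
rows = [{"order_id": i, "customer": f"c{i}"} for i in range(250)]
print(write_chunked(rows, "filename", 100))
```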

Validation Webhook

We support the use of Validation Webhooks to prevent bad data from being written to your systems, adding another layer of protection on top of the built-in validations that Osmos provides. Enter the Webhook URL here.
For more information on Validation Webhook configuration, see Server-Side Validation Webhooks.
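
As a purely hypothetical sketch of what a receiving endpoint might look like (the request and response payload shapes below are illustrative assumptions, not the actual contract; see Server-Side Validation Webhooks for that):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/validate")
def validate():
    # Assumed payload shape: a JSON object containing a list of records.
    records = request.get_json().get("records", [])
    errors = []
    for i, record in enumerate(records):
        # Example business rule: reject negative discounts.
        if (record.get("discount") or 0) < 0:
            errors.append({"row": i, "message": "discount must be >= 0"})
    # Assumed response shape: report which rows failed validation.
    return jsonify({"errors": errors})

if __name__ == "__main__":
    app.run(port=8080)
```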

Overwrite Output Column with Raw Input Data

Enter the name of the destination column where you'd like to store the entire raw source record data. The raw source record data will be stored as a JSON string in the provided destination column.
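
Conceptually, the raw source record is serialized with JSON and stored alongside the mapped fields. A small illustration (field and column names are placeholders):

```python
import json

# A raw source record, exactly as it arrived.
source_record = {"Order Id": "17", "Cust.": "Acme", "disc%": "10"}

# The mapped output row, with the raw record stored as a JSON string
# in a destination column (here called "raw_source", a placeholder name).
output_row = {
    "order_id": 17,
    "customer": "Acme",
    "raw_source": json.dumps(source_record),
}
print(output_row["raw_source"])
# {"Order Id": "17", "Cust.": "Acme", "disc%": "10"}
```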

Additional Options

Organizing File Structure
You can chunk files and route output to different folders based on job_id. Osmos supports the magic string {jobId} in the file prefix, which is expanded in the output filenames for these file-based Destination Connectors. To set a file prefix, go to the Destination Connector > Show Advanced Options and populate the File Prefix Format String field.
Output Scenarios:
  1. No file prefix
     Output: <user base path>/chunk-<chunk num>-<GUID>.<file extension>
  2. File includes a description in the prefix
     Sample prefix: my_osmos_output_
     Output: <user base path>/my_osmos_output_chunk-<chunk num>-<GUID>.<file extension>
  3. File includes a description and job_id in the prefix
     Sample prefix: my_osmos_output/{jobId}/
     Output: <user base path>/my_osmos_output/<ACTUAL JOB_ID HERE>/chunk-<chunk num>-<GUID>.<file extension>
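
All three scenarios reduce to simple key composition. An illustrative sketch that reproduces the output paths above (the composition logic is inferred from this section, not from Osmos internals):

```python
import uuid

def output_key(base_path: str, prefix: str, chunk_num: int,
               job_id: str, extension: str = "csv") -> str:
    """Compose an output key per the scenarios above: expand {jobId},
    then append the standard chunk-<num>-<GUID> filename."""
    expanded = prefix.replace("{jobId}", job_id)
    return f"{base_path}/{expanded}chunk-{chunk_num}-{uuid.uuid4()}.{extension}"

job = "8f14e45f"
print(output_key("exports", "", 0, job))                          # scenario 1
print(output_key("exports", "my_osmos_output_", 0, job))          # scenario 2
print(output_key("exports", "my_osmos_output/{jobId}/", 0, job))  # scenario 3
```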