# Amazon S3

## Overview

You can create an Amazon S3 Destination Connector to write CSV files (by default) or JSONL files to an S3 bucket or folder using an Access Key ID and Secret Access Key.

**Supported file formats:** CSV and JSONL

## Prerequisites

Required information:

* Bucket Name
* Region
* Access Key ID
* Secret Access Key

## Creating an Amazon S3 Destination Connector

**Step 1:** After selecting **+ New Connector**, under the system prompt, click **Amazon S3**.

![](https://353417064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MYrsDW6vGBTygB1qqSE%2Fuploads%2FmKsMkgCnRzzDmQfe3Gjp%2FAmazon%20S3.png?alt=media\&token=2c1ec890-1d26-40f9-b961-188b98d18c43)

**Step 2:** Provide a **Connector Name.**

**Step 3:** Select **Destination Connector.**

### S3 Bucket Information

**Step 4:** Provide the name of the Amazon S3 bucket in the **Bucket Name** field.

**Step 5:** Provide the **folder** name with a trailing “/”. Subfolders within the folder provided will be ignored.

{% hint style="info" %}
If this field is left blank, we will write to the root of the Amazon S3 bucket and ignore any folders within the bucket.
{% endhint %}

**Step 6:** Provide the **region** for the S3 bucket.

### Access Key

**Step 7:** Provide an **access key ID** for the S3 bucket.\
\
To learn more about creating an access key ID, visit: <https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html>

**Step 8:** Provide the **secret access key** for the S3 bucket.

### **Destination Schema**

Design the output schema in one of two ways: import a schema file, or build the schema within Osmos.

#### Option 1: Schema Import

Upload or drag & drop the schema file.

<figure><img src="https://353417064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MYrsDW6vGBTygB1qqSE%2Fuploads%2FfKAbBpaYvcPNKrVCnCka%2FCleanShot%202023-08-31%20at%2011.39.16%402x.png?alt=media&#x26;token=09d81450-71fb-4710-b15d-239825b24a1e" alt=""><figcaption><p>Schema Upload</p></figcaption></figure>

{% hint style="info" %}
Import a file with the headers along with one row of sample data. This data is used only for schema creation.
{% endhint %}
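
For example, a minimal schema-import file could look like the following CSV, with a header row and one row of sample data (the column names and values here are purely illustrative):

```csv
order_id,customer_email,quantity
1001,jane@example.com,2
```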

#### **Option 2: Building the Schema for the Destination Connector**

Use the schema designer to build the output schema for this Destination Connector.

![](https://353417064-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MYrsDW6vGBTygB1qqSE%2F-MYzModXo7qYLoVm0Fuv%2F-MZAhLIR_rAexEPvI1Y4%2Fimage.png?alt=media\&token=2d9f9118-0b5e-488f-9792-4772ef069931)

| Parameter  | Description                                                                                                                                                       |
| ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Field Name | Provide a field name for the output fields. These names will be used as the column headers or field names in the output file you are writing to.                  |
| Type       | Define the type of each field. The field types will be used to enforce rules when you send data to this Connector.                                                |
| Nullable   | Check this box if the field is nullable. If the field is not nullable, you will be required to provide values for this field when sending data to this Connector. |
| Delete     | Deletes the field.                                                                                                                                                |
| Add Field  | Adds another field to the schema.                                                                                                                                 |

**Step 1:** Click **Add Field** for each additional field required in the schema.

**Step 2:** Select **Create Schema** once you have built the schema.

## **Advanced Options**

**Output File Format**

By default, this Connector writes CSV files, and each Pipeline run produces a new file. If preferred, you can change the output to a JSONL file instead of a CSV file.

#### File Prefix Format String

We support the designation of file prefixes to more easily manage the output of this Connector. The contents of this field will be written into the filename of the data this Connector writes. If a prefix is specified, a UUID will be appended to it to prevent filename conflicts.

For additional configuration, go to [Additional Configuration for File Prefix Format](#additional-configuration-for-file-prefix-format).

#### Limit Records Per File

By default, we do not set a limit on the number of records to be written to a single destination file by a single job (i.e. a single run of a Pipeline or Uploader). If this box is checked, the data written to the destination will be "chunked" into separate files which contain at most the number of records designated here. These "chunked" files will be suffixed with their position in the sequence, e.g. *filename*\_part\_1.csv, *filename*\_part\_2.csv, etc.
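
The chunking behavior described above can be sketched in Python (the function and file names are illustrative, not Osmos internals):

```python
import math

def chunk_records(records, limit, base_name="filename", ext="csv"):
    """Split records into files of at most `limit` records each,
    mirroring the _part_N suffix described above (names illustrative)."""
    files = {}
    for part in range(math.ceil(len(records) / limit)):
        name = f"{base_name}_part_{part + 1}.{ext}"
        files[name] = records[part * limit:(part + 1) * limit]
    return files

# 250 records with a 100-record limit produce three files:
# filename_part_1.csv and filename_part_2.csv with 100 records each,
# and filename_part_3.csv with the remaining 50.
files = chunk_records(list(range(250)), limit=100)
```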

#### Validation Webhook

We support the use of Validation Webhooks to prevent bad data from being written to your systems, adding another layer of protection on top of the built-in validations that Osmos provides. Enter the Webhook URL here.

{% hint style="info" %}
For more information on Validation webhook configuration, see [Server Side Validation Webhooks](https://docs.osmos.io/developer-docs/validation-and-transformation-webhooks)
{% endhint %}

#### Overwrite Output Column with Raw Input Data

Enter the name of the destination column where you'd like to store the entire raw source record data. The raw source record data will be stored as a JSON string in the provided destination column.
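
A minimal sketch of that behavior, assuming the raw record is serialized with JSON (the column name `raw_source` and function name are illustrative):

```python
import json

def add_raw_column(source_record, output_row, column_name="raw_source"):
    """Store the entire raw source record as a JSON string in the
    named destination column ("raw_source" is an example name)."""
    row = dict(output_row)
    row[column_name] = json.dumps(source_record)
    return row

# The mapped output row keeps its columns, plus the raw record as JSON:
row = add_raw_column({"sku": "A-1", "qty": "3"}, {"item": "A-1"})
```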

## Additional Configuration for File Prefix Format

**Organizing File Structure**

You can chunk files and route output into separate locations per job based on the job ID. Osmos recognizes the case-sensitive magic string {jobId} in the file prefix and the output filenames for these file-based Destination Connectors. To set a file prefix, go to the Destination Connector > Show Advanced Options and populate the **File Prefix Format String** field.

#### File Output

The contents of the File Prefix Format String field will be written into the filename of the data this Connector writes, and a UUID will be appended to the filename to prevent writing conflicts. Osmos supports two types of magic string identifiers for including additional information in your file prefix.

**Job\_Id**: You can include an identifier that corresponds to each individual Job (a run of an Osmos Uploader or Pipeline) by including ***{jobId}*** in your prefix format string. See examples 3 and 4.

**DateTime**: You can include datetime values in your file output using [String from time (strftime) format specifiers](https://docs.osmos.io/data-transformations/formulas/date-and-time-formulas/date-format-specifiers). The time values created here correspond to Osmos internal system time at the moment the job started. See example 4.

Output Scenarios:

1. No file prefix\
   Output: *\<user base path>/chunk-\<chunk num>-\<UUID>.\<file extension>*
2. File includes description in the prefix\
   Sample prefix: *my\_osmos\_output\_*\
   Output: *\<user base path>/my\_osmos\_output\_chunk-\<chunk num>-\<UUID>.\<file extension>*
3. File includes description and job\_id in the prefix\
   Sample prefix: *my\_osmos\_output/{jobId}/*\
   Output: *\<user base path>/my\_osmos\_output/\<ACTUAL JOB\_ID HERE>/chunk-\<chunk num>-\<UUID>.\<file extension>*
4. File includes datetime specifiers and job\_id in the prefix\
   Sample prefix: *{jobId}/%F\_%T/*\
   Output: *\<user base path>/\<ACTUAL JOB\_ID HERE>/\<YYYY-MM-DD>\_\<HH:mm:ss>/chunk-\<chunk num>-\<UUID>.\<file extension>*
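
The expansion in the scenarios above can be sketched in Python: substitute the {jobId} magic string, apply the strftime specifiers, then append the chunk/UUID suffix. The function name and the fixed chunk number are illustrative, not Osmos internals (and `%F`/`%T` depend on platform strftime support):

```python
import uuid
from datetime import datetime, timezone

def expand_prefix(prefix, job_id, now=None):
    """Illustrative expansion of a File Prefix Format String:
    replace {jobId}, apply strftime specifiers, append chunk/UUID."""
    now = now or datetime.now(timezone.utc)
    expanded = now.strftime(prefix.replace("{jobId}", job_id))
    return f"{expanded}chunk-1-{uuid.uuid4()}.csv"

# Scenario 4 above, with a fixed timestamp for illustration:
name = expand_prefix("{jobId}/%F_%T/", "job-123",
                     now=datetime(2024, 1, 4, 20, 53, 21))
# e.g. "job-123/2024-01-04_20:53:21/chunk-1-<UUID>.csv"
```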

## Connector Options

The connector can be deleted, edited, and duplicated.

#### Duplication

To save time, you can duplicate the connector. Name the new connector, then edit it as needed.

<figure><img src="https://353417064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MYrsDW6vGBTygB1qqSE%2Fuploads%2FANgVInC7cO2LYR2ozlVF%2FCleanShot%202024-01-04%20at%2020.53.21%402x.png?alt=media&#x26;token=8e186811-e6b2-4a19-89c3-b94f452ce655" alt="" width="563"><figcaption></figcaption></figure>
