You can create an Amazon S3 Source Connector to read from your S3 bucket or folder by providing your Access Key ID and Secret Access Key.
The schema for an S3 Source Connector is defined by the newest file in the S3 bucket or folder. All files should ideally have the same schema (number and order of columns). If a new file that does not match the original schema is picked up, Osmos will make a best effort to match the schema, throw errors if one or more of the mapped fields are missing from the file, or skip the file entirely if there is no schema overlap.
Supported file formats: CSV, XLSX, XLS, TXT (Comma separated), JSONL, and ZIP files containing these files.
- S3 Bucket Information
- Access Key ID
- Secret Access Key
Prior to setting up the Source Connector, ensure there is at least one file (with at least one row of data) within the S3 bucket/folder you are connecting to.
Step 1: After selecting + New Connector, under the System prompt, click Amazon S3.
Step 2: Provide a Connector Name.
Step 3: Select Source Connector.
Step 4: Provide the Amazon S3 bucket name.
The schema for this Source Connector is defined by the newest file in the bucket (or in the folder provided below). All files must have the same schema (number and order of columns).
Step 5: To read from a specific folder, provide the folder name with the trailing “/”. Subfolders within the folder provided will be ignored.
If this field is left blank, we will read from the root of the S3 bucket and ignore any folders within the bucket.
Step 6: Provide the region for the S3 bucket.
Step 7: Provide an access key ID for the S3 bucket.
Step 8: Provide the secret access key for the access key ID above.
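Before saving the Connector, it can help to confirm that the credentials, region, and folder path are valid and that the folder already contains at least one file. The sketch below is an optional pre-flight check using boto3 outside of Osmos; the bucket name, folder prefix, region, and key values are placeholders for your own values.

```python
# Optional pre-flight check (run outside Osmos): confirm the bucket, folder,
# region, and credentials work, and that at least one file exists under the prefix.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",          # Step 6: bucket region (placeholder)
    aws_access_key_id="AKIA...",      # Step 7: access key ID (placeholder)
    aws_secret_access_key="...",      # Step 8: secret access key (placeholder)
)

# Step 4/5: bucket name and folder prefix with the trailing "/" (placeholders).
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="incoming/", MaxKeys=10)

objects = [o for o in resp.get("Contents", []) if not o["Key"].endswith("/")]
if not objects:
    print("No files found - add at least one file with data before connecting.")
else:
    newest = max(objects, key=lambda o: o["LastModified"])
    print("Newest file (this file defines the schema):", newest["Key"])
```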
You can choose to process all source files, or filter the files based on the file name. Any files that do not meet the filter criteria will be ignored. Select one of the options:
1. Include all files: If this option is chosen, all of the files in the folder will be processed in chronological order.
2. Only include files that: If you choose this option, you can filter which files to process from the source folder based on three options:
- File names starting with,
- File names containing, or
- File names ending with.
If you provide a ZIP file with a name that contains the filter criteria, all files within the ZIP file will be processed (if the files match with the Connector’s schema). The file filter does not filter any files within a ZIP file.
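As a rough illustration of how the three filter modes behave, the sketch below applies a starts-with / contains / ends-with check to file names before processing. The function name, mode strings, and example file names are hypothetical, and, as noted above, files inside a ZIP are not filtered further.

```python
# Illustrative only: how "Only include files that ..." narrows the set of
# source files by name. Files that fail the check are ignored.
def matches_filter(file_name: str, mode: str, value: str) -> bool:
    if mode == "starting_with":
        return file_name.startswith(value)
    if mode == "containing":
        return value in file_name
    if mode == "ending_with":
        return file_name.endswith(value)
    return True  # "Include all files"

files = ["orders_2024.csv", "inventory.xlsx", "orders_2023.zip"]
selected = [f for f in files if matches_filter(f, "starting_with", "orders_")]
print(selected)  # ['orders_2024.csv', 'orders_2023.zip'] - ZIP contents are not filtered further
```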
Within the source folder, either all files must contain column header names or none of them may. Select one of the options:
1. All source files contain headers: If this option is selected, we will use the first row as column header names to label the schema within Osmos. Rows two and up will be read as data records.
2. No source files contain headers: If this option is selected, we autogenerate column names for the schema within Osmos. All rows, including the first row, will be read as data records.
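The difference between the two options can be pictured with a plain CSV reader: with headers, row one names the columns and the remaining rows are data; without headers, column names are generated and every row is data. The snippet below is a simplified sketch, not Osmos's internal logic, and the autogenerated names shown (column_1, column_2, ...) are only an assumption.

```python
import csv, io

sample = "item,quantity\napple,3\norange,9\n"
rows = list(csv.reader(io.StringIO(sample)))

has_headers = True  # "All source files contain headers"
if has_headers:
    headers, records = rows[0], rows[1:]   # row 1 labels the schema
else:
    headers = [f"column_{i + 1}" for i in range(len(rows[0]))]  # assumed naming scheme
    records = rows                         # every row, including the first, is data

print(headers)   # ['item', 'quantity']
print(records)   # [['apple', '3'], ['orange', '9']]
```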
The delimiter to use when reading files is selected from a dropdown list:
There are then two available options for how these delimiters should be applied:
- Selected delimiter applies to .TXT files only...: By default, the delimiter selected from the dropdown list will only apply to .txt files; .csv (comma-separated values) and .tsv (tab-separated values) files will continue to be processed according to their file extension designation.
- Selected delimiter applies to all files in the folder...: Select this option when file extension designations should be ignored and the delimiter selected from the dropdown menu should be the exclusive delimiter for all files processed by the Connector.
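As an informal model of the two delimiter options, the sketch below picks a delimiter per file: in the default mode the dropdown delimiter is used only for .txt files while .csv and .tsv keep the delimiter implied by their extension; in the second mode the dropdown delimiter is used for every file. The helper function is hypothetical, not an Osmos API.

```python
import csv, io, pathlib

def read_rows(path: str, text: str, selected_delimiter: str, apply_to_all: bool):
    """Pick the delimiter the way the two connector options describe (illustrative)."""
    ext = pathlib.Path(path).suffix.lower()
    if apply_to_all:
        delimiter = selected_delimiter  # ignore the extension entirely
    else:
        delimiter = {".csv": ",", ".tsv": "\t"}.get(ext, selected_delimiter)  # .txt uses the dropdown value
    return list(csv.reader(io.StringIO(text), delimiter=delimiter))

print(read_rows("data.txt", "a|b|c\n1|2|3\n", selected_delimiter="|", apply_to_all=False))
print(read_rows("data.csv", "a,b\n1,2\n", selected_delimiter="|", apply_to_all=False))
```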
The source file's headers may have characters at the start or end that include spaces, tabs, carriage returns, and line endings. You can choose to keep all characters from the source or remove this extra whitespace. Select one of the options:
1. Don't normalize headers. Use headers exactly as they appear in the source: If this option is selected, we will retain all characters from the source file.
2. Remove extra whitespace and other common untypable characters from headers: If this option is selected, we remove all whitespace (spaces, tabs, carriage returns, line endings) at the start and end of headers.
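The effect of the second option on a header row can be approximated by stripping leading and trailing whitespace; the exact set of "untypable" characters Osmos removes is not documented here, so treat this as an assumption.

```python
raw_headers = ["  Item Name\t", "Quantity\r\n", " Unit Price "]

# Option 2 (approximation): drop whitespace such as spaces, tabs, carriage
# returns, and line endings from the start and end of each header.
normalized = [h.strip(" \t\r\n") for h in raw_headers]

print(normalized)  # ['Item Name', 'Quantity', 'Unit Price']
```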
Handle Invalid Characters
The source file may contain characters that are not valid. You can choose to keep all characters from the source or to strip null characters. Select one of the options:
1. Keep all characters from source: If this option is selected, we will retain all characters from the source file, replacing characters we cannot decode with the Unicode undefined character.
2. Strip null characters: If this option is selected, we filter out all characters that are equal to 0. This is useful when dealing with null-terminated strings.
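A rough picture of the two behaviors in plain Python: decoding with a replacement character keeps every character position, while stripping removes NUL (0x00) characters entirely. The replacement character used here (U+FFFD) is an assumption about what "the Unicode undefined character" refers to.

```python
raw = b"appl\xffe,3\x00\x00"

# Option 1 (approximation): keep everything, substituting undecodable bytes
# with a replacement character.
kept = raw.decode("utf-8", errors="replace")

# Option 2: strip characters equal to 0 (NUL), as found in null-terminated strings.
stripped = kept.replace("\x00", "")

print(repr(kept))      # 'appl�e,3\x00\x00'
print(repr(stripped))  # 'appl�e,3'
```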
We support four different deduplication methods. You can choose to deduplicate at file level, or record level. Select one of the following options:
1. File level Deduplication - If this option is selected, deduplication will be performed at a file level only. If the metadata or the contents of a file are changed, the entire file will be processed in subsequent runs. Note that for some file types, changing the filename alone is not sufficient for the metadata to update. Likewise, even if a file is created with the same data and filename as another file, their metadata will differ.
2. Record level Deduplication across all historical data - When this is selected, in addition to file-level deduplication, deduplication will be performed at a record level across all the files processed by this Pipeline. An identical record that was already processed in a previous Pipeline run will not be processed in the current file, nor will duplicated records within the same file. Example: `file_a.csv` contains the rows `item, quantity`, `apple, 3`, `orange, 9`, `banana, 2`; `file_b.csv` contains the rows `item, quantity`, `pear, 9`, `apple, 3`, `banana, 2`. After processing `file_a.csv`, if we add `file_b.csv` to the same directory and run a job, only the row containing `pear, 9` will be processed, as `apple, 3` and `banana, 2` were already seen when `file_a.csv` was processed. The same applies within the same file - if we'd added those records to `file_a.csv` instead of creating `file_b.csv`, the net result would be the same: `pear, 9` would be the only new row.
3. Record level Deduplication within individual files - When this is selected, in addition to file-level deduplication, deduplication will be performed at a record level, but only within the same file. If the file being processed has the same record appearing multiple times, the record will be processed only once. Example: using the same `file_a.csv` and `file_b.csv` as above, after processing `file_a.csv`, if we add `file_b.csv` to the same directory and run a job, all three records in `file_b.csv` will be processed. If instead we'd added those records to `file_a.csv`, the duplicated records (`apple, 3` and `banana, 2`) would be skipped, and the new record `pear, 9` would be the only new record processed.
4. No deduplication - No deduplication is performed, neither for files nor for records. All rows of all files will be processed.
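To make the record-level options concrete, here is a small sketch of deduplication by hashing whole rows: a set of hashes that persists across files models option 2, while a set reset for each file models option 3. This is an illustration of the idea, not how Osmos stores its deduplication state.

```python
import hashlib

def row_key(row: list[str]) -> str:
    # Hash the full row so identical records collide (illustrative).
    return hashlib.sha256("\x1f".join(row).encode()).hexdigest()

file_a = [["apple", "3"], ["orange", "9"], ["banana", "2"]]
file_b = [["pear", "9"], ["apple", "3"], ["banana", "2"]]

seen: set[str] = set()  # option 2: persists across all historical files
for name, rows in [("file_a.csv", file_a), ("file_b.csv", file_b)]:
    # For option 3, reset the state per file instead: seen = set()
    new_rows = []
    for row in rows:
        key = row_key(row)
        if key not in seen:
            seen.add(key)
            new_rows.append(row)
    print(name, "->", new_rows)

# file_a.csv -> [['apple', '3'], ['orange', '9'], ['banana', '2']]
# file_b.csv -> [['pear', '9']]   (option 2); under option 3 all three rows would be new
```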
We support a Starting Cell offset for spreadsheet-type data (.xsv, etc.) in order to crop unnecessary information out of a dataset and to ensure headers are correctly mapped.
The coordinates provided will serve as the starting location from which the data will be read. By default, the data read begins at coordinates (1,1), which results in a read of all the data in the document. The example below shows in blue where the data has been read, and in white where data has been omitted, based on a configuration of Row 2, Column 2.
Note that even with no Starting Cell offset in place (i.e., a Row 1, Column 1 configuration), the data read will begin at the first row that contains data, omitting any leading rows that contain no data.
Leading rows that are completely devoid of data will be omitted.
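For reference, the same cropping can be reproduced locally with openpyxl by starting the iteration at the configured row and column; the file name, sheet, and offsets below are placeholders for whatever Starting Cell you configure.

```python
from openpyxl import load_workbook

start_row, start_col = 2, 2   # Starting Cell offset: Row 2, Column 2 (placeholder)

wb = load_workbook("report.xlsx", read_only=True)   # placeholder file name
ws = wb.active

rows = [
    [cell.value for cell in row]
    for row in ws.iter_rows(min_row=start_row, min_col=start_col)
    if any(cell.value is not None for cell in row)   # rows with no data are dropped
]

headers, records = rows[0], rows[1:]   # first kept row labels the schema
print(headers, records[:3])
```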
If no sheet names are designated, this connector will read the schema of the first sheet of a document, then will continue to search subsequent sheets for data that matches this schema. If sheet name(s) are designated, they will be read exclusively, allowing the connector to skip non-relevant sheets, and to read multiple sheets from a single workbook.
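To preview which sheets would be picked up, the behavior can be mirrored with pandas: reading the first sheet alone models the default, while passing a list of sheet names reads exactly those sheets. The workbook and sheet names below are placeholders.

```python
import pandas as pd

# Default behavior (no sheet names designated): the first sheet defines the schema.
first_sheet = pd.read_excel("workbook.xlsx", sheet_name=0)

# Designated sheet names: only these sheets are read; non-relevant sheets are skipped.
designated = pd.read_excel("workbook.xlsx", sheet_name=["Orders", "Returns"])
for name, frame in designated.items():
    print(name, list(frame.columns))
```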
We support the use of a parser webhook for the purpose of pre-processing data. This field allows for the designation of a webhook URL. The webhook protocol must also be designated here. Currently, gRPC webhooks are supported.
A webhook must first be built and configured before it can be used by a Connector; please contact Support for more information.
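Because the parser webhook is called over gRPC, the receiving side is typically a small gRPC service. The outline below only shows the generic grpcio server bootstrap in Python; the service definition, generated stubs, registration call, and port are entirely hypothetical, since the actual contract comes from the webhook configuration done with Support.

```python
from concurrent import futures
import grpc

# The parser service itself would be defined by a .proto supplied during webhook
# setup; the generated module and servicer referenced below are hypothetical.
# import parser_pb2_grpc

def serve() -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    # parser_pb2_grpc.add_ParserServicer_to_server(MyParser(), server)  # hypothetical registration
    server.add_insecure_port("[::]:50051")   # placeholder port
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```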