Google Cloud Storage (GCS)

Overview

You can create a Google Cloud Storage Connector to read from your Google Cloud Storage bucket.

To set up this Connector using a GCP Service Account Key, you will need a GCP Service Account that has access to the bucket where the resources reside. To learn more about creating and managing service accounts within GCP, visit: https://cloud.google.com/iam/docs/creating-managing-service-accounts.

The schema for this Source Connector is defined by the newest file in the folder. All files should ideally have the same schema (number and order of columns). If a new file is picked up that does not match the original schema, Osmos will make a best-effort attempt to match the schema, throw errors if one or more of the mapped fields are missing from the file, or skip the file entirely if there is no schema overlap.

Supported file formats: CSV, XLSX, XLS, TXT (comma separated), JSONL, and ZIP files containing these files.

Prerequisites

Required information:

  • Service Account Key with the proper privileges

  • Existing GCS Bucket Name

Creating a GCS Source Connector

Step 1: After selecting + New Connector, under the System prompt, click GCS

Step 2: Enter a Connector Name

Step 3: Select Source Connector

Step 4: Authentication

Authentication is accomplished using Service Account Keys. Provide the Service Account JSON key for the account you wish to connect to.

  • Service accounts associated with a GCS Source Connector will need the proper Cloud Storage Privileges in order to successfully establish a connection.

Creating a Service Account Key in the Google Cloud Console

  1. To create a Service Account JSON key, first navigate to the Service Accounts page in the Google Cloud Console.

  2. Click the project dropdown in the top navigation bar to view all of your projects, choose the project you want to create a service account key for, and then click Open.

  3. Find the row of the service account that you want to create a key for. In that row, click the More button, and then click Create key.

  4. Select the JSON Key type and click Create.

Note: to set up a Source Connector using your service account, the service account you select needs to have access to the project you want to connect to.
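
Once the JSON key is downloaded, you can optionally confirm that it grants access to your bucket before configuring the Connector. A minimal check with the google-cloud-storage Python client (illustrative only; the key path and bucket name below are placeholders):

    from google.cloud import storage

    # Authenticate with the downloaded Service Account JSON key.
    client = storage.Client.from_service_account_json("my-service-account-key.json")

    # List a few objects to confirm the account can read the bucket.
    for blob in client.list_blobs("my-bucket", max_results=5):
        print(blob.name)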

Step 5: Bucket Name

1. To find the Bucket Name, first select the Google Cloud Navigation menu, then scroll to Cloud Storage and select Buckets.

2. On the Buckets page, select the name of the bucket you would like to connect to.

3. The Bucket Name can then be copied from the top of the resulting page.
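
If you would rather look the name up programmatically than in the console, the same Python client can list the buckets visible to the service account (a sketch, assuming the key file from the previous step):

    from google.cloud import storage

    client = storage.Client.from_service_account_json("my-service-account-key.json")

    # Print the name of every bucket the service account can see in its project.
    for bucket in client.list_buckets():
        print(bucket.name)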

Advanced Options

File Filtering

You can choose to process all source files, or filter the files based on the file name. Any files that do not meet the filter criteria will be ignored. Select one of the options:

  1. Include all files: If this option is chosen, all of the files in the folder will be processed in chronological order.

  2. Only include files that: If you choose this option, you can filter which files to process from the source folder based on three options:

    • File names starting with,

    • File names containing, or

    • File names ending with.

    Any files that do not meet the filter criteria will be ignored.

If you provide a ZIP file with a name that matches the filter criteria, all files within the ZIP file will be processed (provided they match the Connector’s schema). The file filter does not filter files within a ZIP file.
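
For intuition, the three filters correspond to simple prefix, substring, and suffix matching on the file name. A rough sketch in Python (illustrative only, not Osmos internals):

    def matches(filename: str, mode: str, pattern: str) -> bool:
        # mode is one of "starts", "contains", or "ends"; anything else means "include all".
        if mode == "starts":
            return filename.startswith(pattern)
        if mode == "contains":
            return pattern in filename
        if mode == "ends":
            return filename.endswith(pattern)
        return True

    # Example: only include files whose names end with "_orders.csv".
    files = ["2024_orders.csv", "2024_returns.csv", "archive_orders.zip"]
    print([f for f in files if matches(f, "ends", "_orders.csv")])
    # -> ['2024_orders.csv']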

File Headers

Within the source folder, either all of the files should contain column header names or none of them should. Select one of the options:

  1. All source files contain headers: If this option is selected, we will use the first row as column header names to label the schema within Osmos. Rows two and up will be read as data records.

  2. No source files contain headers: If this option is selected, we autogenerate column names for the schema within Osmos. All rows, including the first row, will be read as data records.
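
The distinction is similar to the header argument in the pandas library, shown here purely for illustration (the file name is a placeholder):

    import pandas as pd

    # "All source files contain headers": the first row supplies the column names.
    with_headers = pd.read_csv("orders.csv", header=0)

    # "No source files contain headers": column names are autogenerated
    # (pandas numbers them 0, 1, 2, ...; Osmos generates its own names),
    # and the first row is treated as data.
    without_headers = pd.read_csv("orders.csv", header=None)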

Delimiter for TXT Files

This option sets the delimiter to use when reading files. The delimiter is selected from a dropdown list:

  • Comma ,

  • Tab

  • Pipe |

  • Semicolon ;

There are then two available options for how these delimiters should be applied:

  • Selected delimiter applies to .TXT file only...: By default, the delimiter selected from the dropdown list will only apply to .txt files; .csv (comma-separated values) and .tsv (tab-separated values) files will continue to be processed according to their file extension.

  • Selected delimiter applies to all files in the folder...: Select this option when file extension designations should be ignored and the delimiter selected from the dropdown menu should be the exclusive delimiter for all files processed by the Connector.
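
A sketch of the difference between the two options, using Python's csv module (illustrative only; the file name is a placeholder):

    import csv
    import os

    def read_rows(path, selected_delimiter, apply_to_all_files=False):
        ext = os.path.splitext(path)[1].lower()
        if apply_to_all_files:
            delimiter = selected_delimiter        # ignore the file extension
        elif ext == ".csv":
            delimiter = ","                       # comma-separated by extension
        elif ext == ".tsv":
            delimiter = "\t"                      # tab-separated by extension
        else:
            delimiter = selected_delimiter        # .txt uses the selected delimiter
        with open(path, newline="") as f:
            return list(csv.reader(f, delimiter=delimiter))

    # Example: pipe-delimited .txt files alongside ordinary .csv files.
    rows = read_rows("inventory.txt", selected_delimiter="|")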

Header Normalization

The source file's headers may have leading or trailing characters such as spaces, tabs, carriage returns, and line endings. You can choose to keep all characters as they appear in the source or to remove this surrounding whitespace. Select one of the options:

  1. Don't normalize headers. Use headers exactly as they appear in the source: If this option is selected, we will retain all characters from the source file.

  2. Remove extra whitespace and other common untypable characters from headers: If this option is selected, we remove all whitespace (spaces, tabs, carriage returns, line endings) at start/end.
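
In effect, the second option trims headers the way str.strip() does in Python (a minimal illustration):

    headers = ["  item\r\n", "\tquantity "]

    # "Remove extra whitespace...": trim spaces, tabs, carriage returns, and line endings.
    normalized = [h.strip() for h in headers]
    print(normalized)   # ['item', 'quantity']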

Handle Invalid Characters

The source file may contain characters that are not valid. You can choose to keep all characters from the source or to strip out null characters. Select one of the options:

  1. Keep all characters from source: If this option is selected, we will retain all characters from the source file, replacing characters we cannot decode with the Unicode replacement character.

  2. Strip null characters: If this option is selected, we filter out all characters that are equal to 0. Useful when dealing with null-terminated strings.
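
A minimal Python illustration of the two behaviours (the byte string is a made-up example; errors="replace" substitutes the Unicode replacement character for bytes that cannot be decoded):

    raw = b"apple\x00,3\xff"

    # "Keep all characters from source": undecodable bytes become the replacement character.
    kept = raw.decode("utf-8", errors="replace")

    # "Strip null characters": additionally drop characters equal to 0.
    stripped = kept.replace("\x00", "")
    print(stripped)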

Deduplication Method

We support three different deduplication methods, plus the option to disable deduplication entirely. You can choose to deduplicate at the file level or the record level. Select one of the following options (the sketch after the list illustrates the difference between the two record-level modes):

  1. File level deduplication: If this option is selected, deduplication will be performed at a file level only. If the metadata or the contents of a file are changed, the entire file will be processed in subsequent runs. Note that for some file types, changing the filename alone is not sufficient for the metadata to update. Likewise, even if a file is created with the same data and filename as another file, their metadata will differ.

  2. Record level deduplication across all historical data: When this is selected, in addition to file level deduplication, deduplication will be performed at a record level across all the files processed by this Pipeline. An identical record that was already processed in a previous Osmos Pipeline run will not be processed in the current file, nor will duplicated records within the same file.

    Example:

    file_a.csv:
    item, quantity
    apple, 3
    orange, 9
    banana, 2
    file_b.csv:
    item, quantity
    pear, 9
    apple, 3
    banana, 2

    After processing file_a.csv, if we add file_b.csv to the same directory and run a job, only the row containing pear, 9 will be processed, as apple, 3 and banana, 2 were already seen when file_a.csv was processed. The same applies within the same file - if we'd added pear, 9 to file_a.csv instead of creating file_b.csv, the net result would be the same: pear, 9 would be the only new row.

  3. Record level deduplication within individual files: When this is selected, in addition to file level deduplication, deduplication will be performed at a record level, but only within the same file. If the file being processed has the same record appearing multiple times, the record will be processed only once.

    Example:

    file_a.csv:
    item, quantity
    apple, 3
    orange, 9
    banana, 2
    file_b.csv:
    item, quantity
    pear, 9
    apple, 3
    banana, 2

    After processing file_a.csv, if we add file_b.csv to the same directory and run a job, all three records in file_b.csv will be processed. If instead we'd added those records to file_a.csv, the duplicated records (apple, 3, banana, 2) would be skipped, and the new record pear, 9 would be the only new record processed.

  4. No deduplication: No deduplication is performed, for either files or records. All rows of all files will be processed.
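
The sketch below restates the difference between the two record-level modes using Python's csv module (illustrative only; not how Osmos implements deduplication):

    import csv

    def deduplicated_records(paths, across_files=True):
        seen_across_files = set()
        for path in paths:
            seen = seen_across_files if across_files else set()
            with open(path, newline="") as f:
                reader = csv.reader(f)
                next(reader)                  # skip the header row in this sketch
                for row in reader:
                    key = tuple(row)
                    if key in seen:
                        continue              # duplicate record, skipped
                    seen.add(key)
                    yield path, row

    # With across_files=True, "pear, 9" is the only new row yielded from file_b.csv;
    # with across_files=False, all three rows of file_b.csv are yielded.
    for source, record in deduplicated_records(["file_a.csv", "file_b.csv"]):
        print(source, record)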

Starting Cell

We support a Starting Cell offset for spreadsheet-type data (.csv, .xls, .xlsx, etc.) in order to crop unnecessary information out of a dataset and to ensure headers are correctly mapped.

The coordinates provided serve as the starting location from which the data will be read. By default, the read begins at coordinates (1,1), which results in reading all the data in the document. The example below shows in blue where data has been read, and in white where data has been omitted, based on a configuration of Row 2, Column 2.

Note that even with no Starting Cell offset in place (i.e., a Row 1, Column 1 configuration), reading begins at the first row that contains data; any leading rows containing no data are omitted.

Leading rows that are completely void of data will be omitted
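
For intuition, a Starting Cell of Row 2, Column 2 means reading begins at cell B2. With the openpyxl library (used here purely for illustration; the file name is a placeholder), the equivalent read would look roughly like:

    from openpyxl import load_workbook

    wb = load_workbook("report.xlsx", read_only=True)
    ws = wb.active

    # Starting Cell of Row 2, Column 2: skip row 1 and column A entirely.
    for row in ws.iter_rows(min_row=2, min_col=2, values_only=True):
        print(row)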

Sheet Names

If no sheet names are designated, this connector will read the schema of the first sheet of a document, then will continue to search subsequent sheets for data that matches this schema. If sheet name(s) are designated, they will be read exclusively, allowing the connector to skip non-relevant sheets, and to read multiple sheets from a single workbook.
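
If you want to preview which sheets a designated list of names would pick up, pandas can read a specific set of sheets by name (a sketch; the workbook and sheet names are placeholders):

    import pandas as pd

    # Only the named sheets are read; all other sheets in the workbook are skipped.
    frames = pd.read_excel("workbook.xlsx", sheet_name=["Orders", "Returns"])
    for name, df in frames.items():
        print(name, df.shape)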

Parser Webhook

We support the use of a parser webhook for the purpose of pre-processing data. This field allows for the designation of a webhook URL. The webhook protocol must also be designated here. Currently, gRPC webhooks are supported.

A webhook must first be built and configured in order to be utilized by a Connector. Please contact Support for more information.

Connector Options

The connector can be deleted, edited and duplicated.

Duplication

To save time, the Connector can be duplicated. The new Connector needs to be named and can be edited as needed.
