Google Cloud Storage (GCS)
You can create a GCS (Google Cloud Storage) Destination Connector to write CSV files (by default) or JSONL files to a Cloud Storage Bucket.
To set up this Connector, you will need a GCP Service Account Key associated with a GCP Service Account that has access to the project(s) where you wish the resources to reside. To learn more about creating and managing service accounts within GCP, visit: https://cloud.google.com/iam/docs/creating-managing-service-accounts.
Supported file formats: CSV and JSONL
Required information:
Service Account Key with the proper privileges
Existing GCS Bucket Name
Step 1: After selecting + New Connector, under the System prompt, click GCS
Step 2: Enter a Connector Name
Step 3: Select Destination Connector
Authentication is accomplished using Service Account Keys. Provide the Service Account JSON key for the account you wish to connect to.
Service accounts associated with a GCS Destination Connector will need the proper Cloud Storage privileges in order to successfully establish a connection.
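Osmos does not specify the exact IAM role here, so the snippet below is a hedged sketch: it grants a service account roles/storage.objectAdmin on a bucket using the google-cloud-storage Python library (pip install google-cloud-storage). The bucket name, project, and role are placeholders; confirm the privileges your organization actually requires.

```python
from google.cloud import storage

# Runs with your own (admin) credentials, not the connector's key.
client = storage.Client()
bucket = client.bucket("your-bucket-name")  # placeholder bucket name

# Version 3 policies expose bindings as a list of role/member dicts.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectAdmin",  # assumed role; adjust as needed
    "members": {"serviceAccount:connector@your-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)
```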
Creating a Service Account Key in the Google Cloud Console
To create a Service Account JSON key, first navigate to the Service Accounts page in the Google Cloud Console.
Click the project dropdown in the top navigation bar to view all of your projects, choose the project you want to create a service account key for, and then click Open.
Find the row of the service account that you want to create a key for. In that row, click the More button, and then click Create key.
Select the JSON Key type and click Create.
Note: to set up a Destination Connector using your service account, the service account you select needs to have access to the project you want to connect to.
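Before pasting the key into the Connector, you can optionally sanity-check it locally. A minimal sketch, assuming google-cloud-storage is installed and using placeholder key and bucket names:

```python
from google.cloud import storage

# Authenticate with the downloaded JSON key (placeholder path).
client = storage.Client.from_service_account_json("service-account-key.json")

# Listing a single object fails fast if the key is invalid or the
# service account lacks privileges on the bucket.
blobs = client.list_blobs("your-bucket-name", max_results=1)
print(list(blobs))
```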
Step 1: To find the Bucket Name, first select the Google Cloud Navigation menu, then scroll to Cloud Storage and select Buckets.
Step 2: On the Buckets page, select the name of the bucket you would like to connect to.
Step 3: The Bucket Name can then be copied from the top of the resulting page.
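As an alternative to the console steps, you can list bucket names programmatically. A hedged sketch (placeholder key path); note that listing buckets requires a role that includes storage.buckets.list, which object-level roles alone may not grant:

```python
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account-key.json")
for bucket in client.list_buckets():  # project is read from the key file
    print(bucket.name)
```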
Use the schema designer to build the output schema for this Destination Connector.
| Parameter | Description |
| --- | --- |
| Field Name | Provide a name for each output field. These names will be used as the column headers or field names in the output file you are writing to. |
| Type | Define the type of each field. The field types will be used to enforce rules when you send data to this Connector. |
| Nullable | Check this box if the field is nullable. If a field is not nullable, you will be required to provide values for it when sending data to this Connector. |
| Delete | Deletes the field. |
| Add Field | Adds another field to the schema. |
Step 1: Click Add Field for each additional field required in the schema.
Step 2: Select Create Schema once you have built the schema.
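To illustrate what the schema enforces at write time (a sketch, not Osmos internals), a non-nullable field simply requires a value on every record. The field names and types below are hypothetical:

```python
# Hypothetical schema: "id" is required, "notes" may be null.
schema = [
    {"name": "id", "type": "number", "nullable": False},
    {"name": "notes", "type": "string", "nullable": True},
]

def missing_required_fields(record: dict) -> list[str]:
    """Return the non-nullable fields this record leaves empty."""
    return [
        field["name"]
        for field in schema
        if record.get(field["name"]) is None and not field["nullable"]
    ]

print(missing_required_fields({"notes": "hello"}))  # ['id']
```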
Output File Format
By default, this Destination Connector writes CSV files, and each Osmos Pipeline run produces a new file. If preferred, you can change the output to JSONL instead of CSV.
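For illustration, here is the same record rendered in both formats (the field names are made up):

```python
import csv
import io
import json

record = {"id": "42", "name": "Ada", "email": "ada@example.com"}

# CSV (the default): a header row plus one row per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())   # id,name,email / 42,Ada,ada@example.com

# JSONL: one self-describing JSON object per line, no header row.
print(json.dumps(record))
```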
We support the designation of file prefixes in order to more easily manage the output of this connector. The contents of this field will be written into the filename of the data this connector writes. If a prefix is specified, a UUID will be appended to it to prevent filename conflicts.
For additional configuration go to Additional Configuration for File Prefix Format.
By default, we do not limit the number of records written to a single destination file by a single job (i.e., a single run of a Pipeline or Uploader). If this box is checked, the data written to the destination will be "chunked" into separate files, each containing at most the number of records designated here. These chunked files are suffixed with their position in the sequence, e.g., filename_part_1.csv, filename_part_2.csv, etc.
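For example, with a hypothetical limit of 10,000 records, a 25,000-record job would produce three files:

```python
import math

total_records = 25_000  # made-up job size
max_per_file = 10_000   # the per-file limit configured above
for part in range(1, math.ceil(total_records / max_per_file) + 1):
    print(f"filename_part_{part}.csv")
# filename_part_1.csv, filename_part_2.csv, filename_part_3.csv
```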
We support the use of Validation Webhooks to prevent bad data from being written to your systems, adding another layer of protection to the built-in validations that Osmos provides. Enter the Webhook URL here.
For more information on Validation Webhook configuration, see Server Side Validation Webhooks.
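As a hedged sketch only: a Validation Webhook is an HTTP endpoint you host. The request and response field names below ("records", "errors") are invented placeholders, not the documented Osmos contract; see Server Side Validation Webhooks for the real schema.

```python
# Minimal receiver using Flask (pip install flask).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    payload = request.get_json(force=True)
    errors = []
    for i, record in enumerate(payload.get("records", [])):  # hypothetical field
        if not record.get("email"):  # example business rule
            errors.append({"row": i, "message": "email is required"})
    return jsonify({"errors": errors})  # hypothetical response shape

if __name__ == "__main__":
    app.run(port=8080)
```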
Enter the name of the destination column where you'd like to store the entire raw source record data. The raw source record data will be stored as a JSON string in the provided destination column.
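Illustration (the column and field names are hypothetical): the entire source record is serialized to a JSON string and written alongside the mapped fields.

```python
import json

source_record = {"Name": "Ada Lovelace", "E-mail": "ada@example.com"}
output_row = {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    # Hypothetical destination column holding the raw record.
    "raw_record": json.dumps(source_record),
}
print(output_row["raw_record"])  # {"Name": "Ada Lovelace", "E-mail": "ada@example.com"}
```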
Organizing File Structure
You can chunk files and control the output naming structure based on job ID. To set a file prefix, go to the Destination Connector > Show Advanced Options and populate the File Prefix Format String field. The contents of this field are written into the filename of the data this Connector writes, and a UUID is appended to the filename to prevent writing conflicts. Osmos supports two types of case-sensitive magic string identifiers for including additional information in your file prefix:
Job_Id: You can include an identifier that corresponds to each individual Job (a run of an Osmos Uploader or Pipeline) by including {jobId} in your prefix format string. See examples 3 and 4.
DateTime: You can include datetime values in your file output using strftime ("string from time") format specifiers. The time values correspond to Osmos internal system time at the moment the job was started. See example 4 and the sketch below.
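For reference, %F expands to YYYY-MM-DD and %T to HH:MM:SS. A quick sketch of the expansion (Python delegates these specifiers to the platform C library, so support can vary):

```python
from datetime import datetime

ts = datetime(2024, 5, 1, 13, 45, 30)  # made-up job start time
print(ts.strftime("%F_%T"))            # 2024-05-01_13:45:30
```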
Output Scenarios:
Example 1: No file prefix
Output: <user base path>/chunk-<chunk num>-<UUID>.<file extension>
Example 2: File includes description in the prefix
Sample prefix: my_osmos_output_
Output: <user base path>/my_osmos_output_chunk-<chunk num>-<UUID>.<file extension>
Example 3: File includes description and job_id in the prefix
Sample prefix: my_osmos_output_{jobId}_
Output: <user base path>/my_osmos_output_<ACTUAL JOB_ID HERE>_chunk-<chunk num>-<UUID>.<file extension>
Example 4: File includes datetime specifiers and job_id in the prefix
Sample prefix: {jobId}_%F_%T_
Output: <user base path>/<ACTUAL JOB_ID HERE>_<YYYY-MM-DD>_<HH:mm:ss>_chunk-<chunk num>-<UUID>.<file extension>
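If you consume these files downstream, a prefix like example 3 makes outputs easy to group by job. A hedged sketch with made-up filenames, assuming the prefix format string my_osmos_output_{jobId}_:

```python
import re
from collections import defaultdict

filenames = [
    "my_osmos_output_job123_chunk-0-0a1b2c3d.csv",
    "my_osmos_output_job123_chunk-1-4e5f6a7b.csv",
    "my_osmos_output_job456_chunk-0-8c9d0e1f.csv",
]

pattern = re.compile(r"my_osmos_output_(?P<job_id>[^_]+)_chunk-(?P<chunk>\d+)-")
by_job = defaultdict(list)
for name in filenames:
    match = pattern.search(name)
    if match:
        by_job[match.group("job_id")].append(name)

for job_id, files in sorted(by_job.items()):
    print(job_id, files)
```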
The connector can be deleted, edited and duplicated.
Duplication
To save time, you can duplicate the connector. The new connector must be given a name and can then be edited as needed.