Airbyte is a user-friendly tool that enables companies to gather data from various sources and load it into a variety of locations for analytics and business intelligence. In this blog, we demonstrate how to use Trino as a source connector for Airbyte.
Airbyte offers a user-friendly and extensible platform that can connect to different data sources, transform the data, and load it into various data storage services, including data warehouses, databases, and cloud storage.
In addition to connecting to numerous data sources like databases, APIs, file storage, and SaaS programs, Airbyte also supports a broad variety of data destinations like Snowflake, BigQuery, Redshift, and others. Additionally, it provides pre-built connectors, making it simple to link to various data sources and destinations without putting much effort into development.
Due to its distributed and parallelized architecture, Airbyte is built to manage large amounts of data. Additionally, it is simple to use and has a user-friendly web interface and flexible data integration framework that can be tailored to meet various use cases.
We frequently want to gather data from various sources into a centralized location so that it can be analyzed and used to support the decisions a company makes or plans to make.
Consider HRmon, a startup that set out to change the way companies handle human resources by offering a cloud-based platform for managing employee records, payroll, and benefits.
To accomplish this, HRmon needed to integrate with a variety of data sources, including various HR systems, financial software, and APIs, and the team was struggling to keep up with the complexity of creating and maintaining custom integrations for all of these sources.
That's when they came across Airbyte, a modern and open-source data integration platform with pre-built connectors for a wide range of common data sources, including many of the systems they needed to integrate with.
With Airbyte, HRmon was able to easily integrate their platform with all of the data sources they needed, without having to build custom integrations from scratch. They were able to connect to their customers' data sources quickly, saving them time and resources.
However, there were still some data sources that were not supported by Airbyte's pre-built connectors. That's when they decided to build a custom connector using Airbyte's Custom Connector feature. The development team was able to build the custom connector quickly and easily, thanks to Airbyte's SDK.
With the custom connector in place, HRmon was able to connect to their customers' unique data sources, providing a more personalized and flexible solution. Their customers loved the flexibility and ease of use of the platform, and HRmon's customer base continued to grow.
In the end, HRmon was able to deliver a powerful and flexible platform to manage HR, payroll, and benefits, thanks to Airbyte's pre-built connectors and Custom Connector feature. They were able to save time and resources while providing their customers with a flexible and personalized solution that met their unique needs.
As you can see, users can build custom connectors to link to data sources or destinations that the pre-built connectors do not support. These connectors are especially helpful for companies that need specialized integration methods or have private or uncommon data sources. Developers can create custom connectors using the Airbyte Connector Development Kit (CDK). Building a custom connector typically requires programming skills to handle the authentication, data extraction, and data loading procedures particular to the data source or destination, but the CDK provides a streamlined structure that simplifies the development process.
Once users build custom connectors, they can integrate them with Airbyte's data integration pipeline, alongside the pre-built connectors.
Trino is a distributed SQL query engine that allows users to perform complex, interactive queries across multiple data sources. It originated at Facebook (as the Presto project) and is now an independent open-source project that can query petabytes of data quickly and at scale.
The Trino query engine comprises several parts that work together to deliver high-performance SQL query processing. The Coordinator component is the central part that manages query execution, planning, and result coordination. The Worker nodes execute the tasks assigned by the Coordinator, while the Connectors act as Trino's interface to various data sources, including relational databases, NoSQL databases, and file systems.
Trino's distributed architecture enables horizontal scaling, which makes it ideal for processing massive amounts of data. Moreover, its ability to integrate with a broad range of data sources makes it a versatile tool for data querying and analysis.
How do I create a Trino Source Connector for Airbyte?
1. Generate a Source Connector Template using Airbyte’s CDK (Connector Development Kit)
This can be achieved through the following commands:
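The commands below are a sketch based on the generator script that ships with the Airbyte repository; they assume you have already cloned the repo:

```bash
# From the root of the cloned Airbyte repository
cd airbyte-integrations/connector-templates/generator
./generate.sh
```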
2. From the prompt choose:
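The generator asks which template to scaffold; for this connector we want the Python source template (the exact prompt wording may vary between Airbyte versions):

```text
? Select the template you want to use: Python Source
```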
3. Give the source a name, for example trino. This creates a new folder named source-trino, containing the Airbyte source connector code template, under the airbyte-integrations/connectors folder.
4. The template contains a few important files that we need to modify for our connector to work, namely:
5. The spec.yaml file contains the specification of the source connector, including the mandatory fields the connector needs in order to work; these fields are also rendered in the UI. In this case:
a. Hostname
b. Username
c. Catalog
d. Schema
The above fields are required for the connector to work with Trino.
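A minimal spec.yaml along these lines should work; the exact property names (host, username, catalog, schema) are our own choice here and must match whatever keys the connector code reads from the config:

```yaml
documentationUrl: https://docs.airbyte.com/integrations/sources/trino
connectionSpecification:
  $schema: http://json-schema.org/draft-07/schema#
  title: Trino Source Spec
  type: object
  required:
    - host
    - username
    - catalog
    - schema
  properties:
    host:
      type: string
      description: Hostname of the Trino coordinator.
    username:
      type: string
      description: Username to connect to Trino with.
    catalog:
      type: string
      description: Trino catalog to read from.
    schema:
      type: string
      description: Schema within the catalog to read from.
```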
6. The source_specs.yaml file contains the Docker image name and other information similar to what is in spec.yaml. We need to configure the Docker image name as follows:
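Given the connector name and image tag used later in this post, the image name in our entry would be:

```yaml
dockerImage: "airbyte/source-trino:0.1.0"
```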
7. Open airbyte-config/init/src/main/resources/seed/source_definitions.yaml. You'll find a list of all the connectors that Airbyte displays in the UI. Pattern-match the existing entries to add your own connector, and make sure to generate a new, unique UUIDv4 for the sourceDefinitionId field; any online UUIDv4 generator will do.
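Following the pattern of the existing entries, ours might look like this (the all-zero UUID is a placeholder; substitute the one you generated):

```yaml
- name: Trino
  sourceDefinitionId: 00000000-0000-0000-0000-000000000000
  dockerRepository: airbyte/source-trino
  dockerImageTag: 0.1.0
  documentationUrl: https://docs.airbyte.com/integrations/sources/trino
```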
Note: Modifications made to source_definitions.yaml are only picked up the first time you start Airbyte, when you upgrade Airbyte, or if you entirely wipe your instance of Airbyte and start from scratch.
8. source.py contains the main code where the magic happens; this is where we write the read(), discover(), and check() functions that Airbyte calls to use the connector.
9. Once we understand the structure of the template, we need to set up a Python virtual environment to develop and test our connector code. To do this:
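From the connector's folder, create the environment (`.venv` is just a conventional name):

```bash
cd airbyte-integrations/connectors/source-trino
python -m venv .venv
```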
Then activate the virtual environment:
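On Linux or macOS this is:

```bash
source .venv/bin/activate
```

(On Windows, run `.venv\Scripts\activate` instead.)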
10. We need to install the requirements for the connector to work, which include the airbyte-cdk module and the trino Python client.
By default, the template comes with a requirements.txt file; to add extra requirements, we edit the setup.py file. Once our requirements are planned out, we install them through pip:
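For instance, the setup.py excerpt below adds the trino client next to the CDK (the exact version pins are up to you), after which everything is installed with pip:

```python
# setup.py (excerpt): declare the connector's dependencies
install_requires = [
    "airbyte-cdk",  # Airbyte's connector development kit
    "trino",        # Trino Python client
]
```

```bash
pip install -r requirements.txt
```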
11. With the requirements installed, we update the spec file as described in Step 5.
12. We set a sample configuration based on spec.yaml inside secrets/config.json. For example:
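The keys must match the properties declared in spec.yaml; the values below are purely illustrative (a local Trino instance serving the built-in tpch catalog):

```json
{
  "host": "localhost",
  "username": "trino_user",
  "catalog": "tpch",
  "schema": "sf1"
}
```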
13. Next, we edit the source.py file to meet our needs for the connector. Here are the required functions:
a. Check: the check step verifies whether the connector can establish a connection with the underlying data source using the user-provided configuration.
b. Discover: the discover phase identifies the various data streams that the connector can produce.
c. Read: the read step involves extracting data from the underlying data source.
For example, our check function will contain the following:
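Here is a minimal sketch, assuming the trino client and the config keys from our spec.yaml; the method lives on the SourceTrino class that the template generated:

```python
from airbyte_cdk.models import AirbyteConnectionStatus, Status
import trino

def check(self, logger, config) -> AirbyteConnectionStatus:
    """Verify we can reach Trino with the user-provided config."""
    try:
        conn = trino.dbapi.connect(
            host=config["host"],
            port=config.get("port", 8080),  # assume the default Trino port if none given
            user=config["username"],
            catalog=config["catalog"],
            schema=config["schema"],
        )
        cursor = conn.cursor()
        cursor.execute("SELECT 1")  # cheap round trip to prove connectivity
        cursor.fetchone()
        return AirbyteConnectionStatus(status=Status.SUCCEEDED)
    except Exception as e:
        return AirbyteConnectionStatus(
            status=Status.FAILED,
            message=f"Could not connect to Trino: {repr(e)}",
        )
```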
Similarly, the discover and read functions are written to perform their respective actions, as sketched in the next two steps.
14. In discover(), we add streams = [], which will be a list of AirbyteStream objects. Let's walk through what each field in a stream means, using the sketch below.
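A hedged sketch, with one hard-coded stream for illustration (a real connector would derive its streams from Trino's information_schema); the comments explain each field:

```python
from airbyte_cdk.models import AirbyteCatalog, AirbyteStream

def discover(self, logger, config) -> AirbyteCatalog:
    streams = []
    json_schema = {
        # JSON Schema describing the shape of every record in this stream
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "id": {"type": "number"},
            "name": {"type": "string"},
        },
    }
    streams.append(
        AirbyteStream(
            name="example_table",                   # the stream's name, shown in the Airbyte UI
            json_schema=json_schema,                # schema of the records the stream emits
            supported_sync_modes=["full_refresh"],  # sync modes this stream supports
        )
    )
    return AirbyteCatalog(streams=streams)
```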
15. The read() function takes a ConfiguredAirbyteCatalog parameter. Just as with the AirbyteCatalog, the ConfiguredAirbyteCatalog contains a list, this time of ConfiguredAirbyteStream objects (instead of just AirbyteStream). It essentially filters the streams down to just the data the user wants (see the sketch below). For testing, we add a new folder, sample_files, and create a configured_catalog.json inside it.
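Under the same assumptions (full refresh only, one table per stream), a minimal read() might look like this; error handling and careful type mapping are deliberately omitted:

```python
from datetime import datetime
from airbyte_cdk.models import (
    AirbyteMessage,
    AirbyteRecordMessage,
    ConfiguredAirbyteCatalog,
    Type,
)
import trino

def read(self, logger, config, catalog: ConfiguredAirbyteCatalog, state):
    conn = trino.dbapi.connect(
        host=config["host"],
        user=config["username"],
        catalog=config["catalog"],
        schema=config["schema"],
    )
    cursor = conn.cursor()
    for configured_stream in catalog.streams:
        table = configured_stream.stream.name
        cursor.execute(f"SELECT * FROM {table}")  # sketch only: no quoting/escaping
        columns = [col[0] for col in cursor.description]
        for row in cursor.fetchall():
            # Emit one RECORD message per row, keyed by column name
            yield AirbyteMessage(
                type=Type.RECORD,
                record=AirbyteRecordMessage(
                    stream=table,
                    data=dict(zip(columns, row)),
                    emitted_at=int(datetime.now().timestamp() * 1000),
                ),
            )
```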
Here is an example of what the ConfiguredAirbyteCatalog looks like:
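Continuing with the example stream from discover():

```json
{
  "streams": [
    {
      "stream": {
        "name": "example_table",
        "json_schema": {
          "$schema": "http://json-schema.org/draft-07/schema#",
          "type": "object",
          "properties": {
            "id": { "type": "number" },
            "name": { "type": "string" }
          }
        },
        "supported_sync_modes": ["full_refresh"]
      },
      "sync_mode": "full_refresh",
      "destination_sync_mode": "overwrite"
    }
  ]
}
```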
16. To test our connection, we can invoke the check function using the following:
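Using the main.py entrypoint that the template generated:

```bash
python main.py check --config secrets/config.json
```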
On a successful connection, the output is:
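Something along these lines (the connector prints Airbyte protocol messages as JSON):

```text
{"type": "CONNECTION_STATUS", "connectionStatus": {"status": "SUCCEEDED"}}
```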
17. Similarly, the discover and read functions are tested to ensure that they work according to our requirements.
18. Once our connector is working locally, we are ready to build the Docker image for our connector, using:
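From the connector directory:

```bash
docker build . -t airbyte/source-trino:0.1.0
```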
Note: `source-trino` is the name of the Airbyte source and `0.1.0` is the dockerImageTag specified in the source_definitions.yaml file.
Once our image is built locally, we can build the entire application by running the Gradle build (`./gradlew build`) from the root of the Airbyte repository.
Note: Before the Gradle build, make sure to run docker-compose down and remove all the images, build the Trino image again using the docker build command mentioned above, and then run the Gradle build. After it succeeds, you can run:
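From the root of the Airbyte repository:

```bash
docker-compose up
```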
Performing the above steps adds Trino as a source connector within Airbyte. Now we can easily sync Trino with other available destinations like Postgres, MySQL, etc. For example:
1. Trino as a Source and Postgres as Destination
2. File (pre-built connector) and Postgres as Destination
3. Zendesk Support as Source Connector and Postgres as Destination
Different connectors can be set up in Airbyte in a similar way, making it a flexible and powerful hub for integrations!