The following article shows how to create a custom connector in Matillion Data Loader that loads SAP data via Xtract Universal into Snowflake.
Matillion Data Loader is a cloud-based data loading platform that extracts data from popular sources and loads it into cloud destinations, see Official Website: Matillion Data Loader.

Prerequisites #

Setup in Xtract Universal #

  1. Create an extraction in Xtract Universal, see Online Help: Defining an Extraction.
    The depicted example scenario extracts the SAP table KNA1 (General Data in Customer Master).
  2. Assign the http-json destination to the extraction, see Online Help: Assigning Destinations.

Create a Custom Connector in Matillion #

To extract SAP data via Xtract Universal you must define a custom connector that contains the connection details of Xtract Universal, see Matillion Documentation: Matillion Custom Connector Overview.

  1. Open the Matillion Custom Connector website and log in to create the custom connector.
  2. Click [Add Connector] (1) to create a new custom connector.
  3. Click on the edit icon to change the name of the connector (2).
  4. Copy the URL of the extraction into the designated input field and select GET as the HTTP method (3).
    The URL has the following format: <Protocol>://<HOST or IP address>:<Port>/?name=<Name of the Extraction>{&<parameter_i>=<value_i>}
    Example: the URL calls the extraction “kna1” via ngrok. For more information about calling extractions via web services, see Online Help: Call via Webservice.
  5. To test the connection, enter your authentication details and click [Send] (4). If the connection is successful, the HTTP response contains the SAP customer data extracted by Xtract Universal (5).
  6. Click on the edit icon to edit the structure (names and data types) of the HTTP response (6).
    The structure is used when loading data into your destination. This example scenario only extracts the KNA1 columns City_ORT01, Name 1_NAME1, Country Key_LAND1 and Customer Number_KUNNR.
  7. Optional: If your extraction uses parameters, open the Parameters tab and define the parameters.
  8. Click [Save] (7) to save the connector.

The custom connector can now be used in a Matillion Data Loader pipeline.
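The extraction call that the connector wraps can also be reproduced with any HTTP client, which is useful for verifying the Xtract Universal endpoint independently of Matillion. A minimal Python sketch following the URL format above; the host, port, and credentials are placeholders for your own setup:

```python
import base64
import json
import urllib.request
from urllib.parse import urlencode

# Placeholder for your Xtract Universal server (protocol, host, port).
XU_BASE = "http://localhost:8065"

def extraction_url(name, params=None):
    """Build the call URL:
    <Protocol>://<HOST or IP>:<Port>/?name=<Extraction>{&<param_i>=<value_i>}"""
    query = {"name": name, **(params or {})}
    return f"{XU_BASE}/?{urlencode(query)}"

def run_extraction(name, user, password, params=None):
    """GET the extraction endpoint with basic authentication and
    return the parsed JSON rows produced by the http-json destination."""
    req = urllib.request.Request(extraction_url(name, params))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.load(resp)

# Example (not executed here): rows = run_extraction("kna1", "user", "password")
print(extraction_url("kna1", {"LAND1": "DE"}))
# -> http://localhost:8065/?name=kna1&LAND1=DE
```

Such a standalone call is a quick way to confirm that the URL, port, and credentials are correct before testing the connector in Matillion.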

Note: The Matillion Custom Connector must be set to the same region as Matillion Data Loader, e.g., US (N. Virginia).

Create a Pipeline in Matillion Data Loader #

Create a pipeline that triggers the extraction and writes the data to a destination, see Matillion Documentation: Create a pipeline with custom connectors.

  1. Open your Matillion Data Loader dashboard.
  2. Click [Add Pipeline] to create a new pipeline (1).
  3. Open the Custom Connectors tab and select the custom connector (2) that contains the connection settings for Xtract Universal.
  4. Select the endpoint that calls the Xtract Universal extraction and use the arrow buttons to add the endpoint to the list Endpoints to extract and load. Note that a custom connector can have multiple endpoints.
  5. Click [Continue with x endpoint] (3).
  6. In the General tab enter a name for the target table (4) under Data warehouse table name.
  7. Open the Authentication tab and enter the authentication details for the Xtract Universal webservice.
  8. Open the Behaviour tab and select the elements you want to include as columns in the target table. By default, all elements are selected.
  9. Optional: If your endpoint uses parameters, open the Parameters tab to define the parameters.
  10. Open the Keys tab and select a key column that is used to match existing data and prevent duplicates, e.g., Customer Number_KUNNR (5).
  11. Click [Continue] (6).
  12. Select the destination to which the data is written, e.g., Snowflake (7). For more information on how to connect to Snowflake, see Matillion Documentation: Connect to Snowflake.
  13. Configure the destination, see Matillion Documentation: Configure Snowflake.
  14. Click [Continue].
  15. Enter a name for the pipeline (8).
  16. Select the interval at which the pipeline is executed (9). The pipeline first runs after it is created and then continues at the specified frequency.
  17. Click [Create pipeline] to create and run the pipeline (10). The pipeline is now listed in your dashboard.
  18. Check if the data was successfully uploaded to the destination.

The pipeline now runs automatically at the specified frequency.
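The key column selected in the Keys tab determines how incoming rows are matched against rows already in the target table. Conceptually, each load behaves like the following upsert, sketched here in Python with hypothetical rows keyed on KUNNR; this illustrates the matching logic only, not Matillion's actual implementation:

```python
def upsert(existing, incoming, key):
    """Merge incoming rows into existing rows: a row whose key value is
    already present replaces the old row, rows with new keys are appended."""
    index = {row[key]: i for i, row in enumerate(existing)}
    for row in incoming:
        if row[key] in index:
            existing[index[row[key]]] = row  # key matched -> no duplicate
        else:
            index[row[key]] = len(existing)
            existing.append(row)             # new key -> new row
    return existing

# Hypothetical sample data shaped like the KNA1 columns used above.
table = [{"KUNNR": "0001", "ORT01": "Berlin"}]
new_rows = [
    {"KUNNR": "0001", "ORT01": "Hamburg"},  # existing key: row is updated
    {"KUNNR": "0002", "ORT01": "Munich"},   # new key: row is appended
]
print(upsert(table, new_rows, "KUNNR"))
```

This is why choosing a genuinely unique column such as Customer Number_KUNNR matters: a non-unique key would cause distinct source rows to overwrite each other in the target table.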