How to Set Up an ADF Pipeline from Oracle Fusion Cloud to Snowflake Using BI Connector

Modified on Sun, 19 Apr at 11:05 PM

Most companies using Oracle Fusion Cloud have a pressing need to push Fusion data to Snowflake to maintain a single source of truth. These companies typically encounter some combination of the following data migration/ETL needs for each table they want to pull from Oracle Fusion:

  1. Type of data pull from source table – full data pull or incremental data pull
  2. Data insertion in target table – insert new rows, update existing rows, or both
  3. Frequency of automated refresh – hour-based scheduling (runs every X hours based on FREQUENCY_IN_HOURS)
  4. Pipeline monitoring – tracks execution status, start time, end time, and error description for each table run
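The hour-based schedule in point 3 boils down to a simple comparison: a table is due for its next refresh when the time since its last successful run meets or exceeds its FREQUENCY_IN_HOURS value. A minimal sketch of that check (in Python; the function and field names are illustrative, not part of the template):

```python
from datetime import datetime, timedelta

def is_due(last_run: datetime, frequency_in_hours: int, now: datetime) -> bool:
    """Return True when a lookup-table entry is due for its next refresh."""
    return now - last_run >= timedelta(hours=frequency_in_hours)

# A table refreshed every 4 hours, last run 5 hours ago, is due;
# one last run 2 hours ago is not.
now = datetime(2025, 4, 19, 23, 0)
print(is_due(datetime(2025, 4, 19, 18, 0), 4, now))  # True
print(is_due(datetime(2025, 4, 19, 21, 0), 4, now))  # False
```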


In some cases, tables may have a composite primary key (a set of concatenated values from different columns that together uniquely identify each row). Other tables may lack a time-based column to drive incremental loads; for those, the primary key is used as the reference for performing the incremental load.
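To make the two cases concrete, here is a hedged sketch (Python; the table and column names are hypothetical) of how a composite key can be concatenated into a single row identifier, and how the source-side filter differs between a watermark-driven pull and a key-driven one:

```python
def composite_key_expr(columns):
    """Concatenate several columns into one expression that uniquely
    identifies each row (a composite primary key)."""
    return " || '-' || ".join(columns)

def incremental_filter(watermark_column=None, last_watermark=None):
    """With a watermark column, pull only rows changed since the last run.
    Without one, pull everything and let the key-based merge decide,
    per row, whether to insert or update on the target side."""
    if watermark_column:
        return f"WHERE {watermark_column} > '{last_watermark}'"
    return ""  # full pull; dedupe happens during the merge

print(composite_key_expr(["ORDER_ID", "LINE_NUMBER"]))
# ORDER_ID || '-' || LINE_NUMBER
print(incremental_filter("LAST_UPDATE_DATE", "2025-04-19 18:00:00"))
# WHERE LAST_UPDATE_DATE > '2025-04-19 18:00:00'
```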


This article will help you handle all of the above scenarios with a single data pipeline that reads its configuration from a lookup table.

The lookup table acts as a configuration controller for data pipelines. Each row in the table represents a data load job, including source/destination table details, load type, execution frequency, and watermark column information for incremental loads.
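As an illustration only (the real column names come from the CREATE TABLE script that BI Connector generates in Step 3), one lookup-table row can be pictured as a record like this:

```python
# Hypothetical shape of one lookup-table row; the actual columns are
# defined by the script BI Connector generates, not by this sketch.
lookup_row = {
    "SOURCE_TABLE": "PO_HEADERS_ALL",        # Oracle Fusion source table
    "TARGET_TABLE": "PO_HEADERS_ALL",        # Snowflake destination table
    "LOAD_TYPE": "INCREMENTAL",              # or "FULL"
    "PRIMARY_KEY": "PO_HEADER_ID",           # used by the upsert/merge
    "WATERMARK_COLUMN": "LAST_UPDATE_DATE",  # drives the incremental filter
    "FREQUENCY_IN_HOURS": 4,                 # refresh cadence
    "LAST_RUN_STATUS": "SUCCESS",            # maintained per pipeline run
}

# Each pipeline run reads rows like this to decide what to load and how.
assert lookup_row["LOAD_TYPE"] in ("FULL", "INCREMENTAL")
print(lookup_row["SOURCE_TABLE"], "->", lookup_row["TARGET_TABLE"])
```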


Prerequisites

1. Required Setup:

  1. Windows Server with Microsoft Integration Runtime and BI Connector
    1. Install and configure the Microsoft Self-hosted Integration Runtime (SHIR)
    2. Note: An existing SHIR can be reused if already available
    3. Install BI Connector Server Edition and activate using a valid license key
  2. Admin Access to Gateway Server
    Admin access is required to set up System DSNs in ODBC
  3. Network Connectivity
    The Self-hosted Integration Runtime (SHIR) server must be accessible from the Azure Data Factory workspace
  4. Snowflake Access
    1. Snowflake must be available as the target database
    2. Write permissions required on target schema.
    3. The ACCOUNTADMIN role is required to set up the Storage Integration and External Stage
  5. Azure Blob Storage Access
    1. Write permissions required on staging container
    2. Snowflake Azure Blob Storage Integration must be configured
    3. Refer to the Snowflake Azure Blob Storage Integration Setup Guide for detailed setup instructions.

2. Recommended Configurations:

  1. Windows Server 2016 or above (can also be set up on a VM on the cloud)
  2. .NET Framework 4.7.2 or above installed
  3. Minimum of 8 CPU cores, 8 GB RAM, and 80 GB hard disk space
  4. Strongly recommended: disable hibernate/sleep on this server.
    Note: The SHIR will not work while the server is in hibernate/sleep mode



Step 1: Download the ADF Pipeline Template

Please contact us or email support@biconnector.com to get our ready-to-deploy ADF pipeline template.


Step 2: Import the ADF Pipeline Using the Template


Note: Before proceeding, ensure the BI Connector DSN is configured and that the required BIC tables (in case of Custom SQL Queries or existing Data Models) are already available. These are essential for the ODBC Linked Service to connect successfully and retrieve data.

For guidance on how to create a DSN and BIC tables, refer to this article.


2a. Extract the ZIP Folder

  • Download the provided ZIP folder.

  • Right-click the file and select Extract All.

  • After extraction, you will find the following two files:

    • Pipeline Template named FUSION_TO_SNOWFLAKE_USING_BIC (ZIP Folder)
      → Contains the ADF pipeline template to be imported.

    • UPDATE_LOOKUP_STATUS.txt
      → Contains the SQL script for the UPDATE_LOOKUP_STATUS stored procedure.
      This script must be executed in the Snowflake database before running the pipeline.
    • DYNAMIC_UPSERT.txt
      → Contains the SQL script for the DYNAMIC_UPSERT stored procedure.
      This script must be executed in the Snowflake database before running the pipeline.
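The DYNAMIC_UPSERT.txt script you downloaded is the authoritative implementation; conceptually, though, an upsert in Snowflake is a MERGE keyed on the primary key, updating matched rows and inserting unmatched ones. The sketch below (Python composing the SQL text; table and column names are hypothetical, and this is not the vendor's actual procedure) shows the shape of such a statement:

```python
def build_merge_sql(target, staging, key_columns, value_columns):
    """Compose a Snowflake-style MERGE that updates matching rows and
    inserts new ones -- the 'insert new rows or update existing rows
    or both' case described earlier."""
    on = " AND ".join(f"t.{c} = s.{c}" for c in key_columns)
    sets = ", ".join(f"t.{c} = s.{c}" for c in value_columns)
    cols = ", ".join(key_columns + value_columns)
    vals = ", ".join(f"s.{c}" for c in key_columns + value_columns)
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {sets} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals})"
    )

print(build_merge_sql("PO_HEADERS_ALL", "PO_HEADERS_ALL_STG",
                      ["PO_HEADER_ID"], ["STATUS", "LAST_UPDATE_DATE"]))
```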


2b. Import the Pipeline Template

  • Click the + (plus) icon under Integrate.
  • Choose Import from pipeline template from the options shown.
  • Add the ZIP folder containing your pipeline template.

     


2c. Map the Linked Services

After importing the pipeline template, you need to map the linked services.


  • Snowflake Linked Service
    Either:
    • Select an existing Snowflake Linked Service configured to point to your Snowflake account.
      OR
    • Click + New to create a new one:
      • In the New Snowflake Linked Service window:
        • Give the Linked Service a name
        • Under Connect via Integration Runtime, choose Auto Resolve Integration Runtime.
        • Choose Version as 1.1
      • Add the appropriate details:
        • Account Name

        • Database Name

        • Warehouse Name

      • Select the required Authentication type and provide the necessary authentication details.
      • Click Test Connection to ensure successful connectivity.
      • Click Create.
  • Azure Blob Storage Linked Service
    Either:
    • Select an existing Linked Service configured to point to your Azure Blob Storage.
      OR
    • Click + New to create a new one:
      • In the Linked Service window:
        • Give the Linked Service a name
        • Under Connect via Integration Runtime, choose Auto Resolve Integration Runtime.
    • Select System Assigned Managed Identity authentication
    • Select your Azure subscription.
    • Select your storage account name.
    • Under Test Connection, choose To linked service.
    • Click Test Connection to ensure successful connectivity.
    • Click Create.
  • ODBC Linked Service
    Either:
    • Select an existing ODBC Linked Service that is already configured with the BI Connector DSN.
      OR
    • Click + New to create a new one:
      • In the New Linked Service window:
        • Give the ODBC Linked Service a name.
        • Select Self Hosted Integration Runtime under Connect via Integration Runtime.
      • Under Connection String:
        • Add the DSN name that was created using BI Connector, in the format: 
          dsn=<<your_dsn_name>>
      • Under Authentication type: Choose the appropriate method (Basic)
      • Provide the username and password of your Oracle Fusion account.
      • Click Test Connection to verify.
      • Click Create.
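The same dsn=<<your_dsn_name>> connection string the ODBC Linked Service assembles can be exercised outside ADF to sanity-check the DSN on the SHIR server. A minimal sketch (Python; the DSN name and credentials are placeholders, and the commented-out lines assume the third-party pyodbc package is installed):

```python
def odbc_connection_string(dsn, user, password):
    """Build the ODBC connection string that a DSN plus
    Basic-authentication credentials amounts to."""
    return f"DSN={dsn};UID={user};PWD={password}"

# Placeholder values -- substitute your own DSN name and credentials.
conn_str = odbc_connection_string("BIC_Fusion_DSN", "fusion_user", "********")
print(conn_str)  # DSN=BIC_Fusion_DSN;UID=fusion_user;PWD=********

# To actually test connectivity on the SHIR server (requires pyodbc):
# import pyodbc
# with pyodbc.connect(conn_str) as conn:
#     conn.cursor().execute("SELECT 1")
```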


2d. Configure Copy Data Activities

  • After importing the pipeline template, locate the following Copy Data activities:
    • GET_ALL_DATA_FROM_FUSION
    • GET_DELTA_DATA
  • For each activity perform the following steps:
    • Click on the Copy Data activity
    • Go to the Sink tab
    • Select Storage Integration
    • Go to the Settings tab
    • Under Staging Settings → select Storage Path
  • Repeat the above steps for both activities
  • Click Publish All to save changes


2e. Create Stored Procedure

  • Navigate to the Worksheets section in Snowflake
  • Click on + Worksheet to open a new SQL editor
  • Select the appropriate Database from the top dropdown
  • Open the file UPDATE_LOOKUP_STATUS.txt (extracted earlier)
  • Copy the SQL script from the file
  • Paste the script into the Snowflake worksheet editor
  • Click Run to execute the script
  • Verify that the stored procedure UPDATE_LOOKUP_STATUS is created successfully under:
    • Database → Your Database → Schemas → Your Schema → Procedures
  • Confirm the procedure appears in the list
  • Repeat the above steps for DYNAMIC_UPSERT.txt and verify that the stored procedure DYNAMIC_UPSERT is created successfully under the same location 



Step 3: Create the Lookup Table in BI Connector

3a. Create New Lookup Table 

  1. In BI Connector, navigate to the Data Pipelines section.

  2. Click + New Lookup Table.

  3. Enter the desired Lookup Table Name.

  4. Select the Fusion Data Source created in BI Connector.

  5. Enter the Destination Schema Name.

  6. Select the Destination Warehouse as Snowflake.

  7. Click Next.


3b. Select Base Tables and BIC Tables 

  • In the Base Tables and BIC Tables section:
    • You can manually select the required tables from a searchable drop-down
      OR

    • Bulk paste table names if needed.
  • For Base Tables:

BI Connector will automatically:

  • Fetch the Primary Key

  • Retrieve all Column Names

  • Automatically select LAST_UPDATE_DATE as the watermark column (if available)

  • Assign the correct Watermark Format

  • For BIC Tables (Custom SQL Queries or Existing Data Models):

You must manually provide:

  • Primary Key – Add if available; mandatory for incremental load. Without it, only full load is supported
  • Watermark Column – Required for incremental load
  • Watermark Format – Only needed if the Watermark Column is a timestamp; otherwise, leave blank

  • Click Save after configuration.


3c. Download and Execute Table Scripts

  • In the Actions column, select the required option:

    • Download CREATE TABLE statement

    • Download INSERT statement

    • Download as CSV

    • Download as Excel

  • Use the downloaded CREATE TABLE and INSERT queries to create and populate the lookup table in the Snowflake database.


3d. Set the Lookup Table Name in ADF Pipeline

  • Under the pipeline Parameters, set schema_name to the schema where the lookup table was created
  • Set table_name to the name of the lookup table
  • Publish the changes.


To learn more about the lookup table, please refer to this article.




Step 4: Run the ADF Pipeline


4a. Run the Pipeline Manually

  • Click Add Trigger and then click Trigger now


  • Enter the Parameters
  • Under schema_name: Enter the name of the schema where lookup table was created 
  • Under table_name: Enter the name of the lookup table you created 
  • Click OK

 

4b. Run the Pipeline on a Schedule

  • Add the Trigger

       Click Add Trigger, then New/Edit


  • Create the Trigger

    Click Choose trigger, then click + New

  • Configure the Trigger
    • Under Name: Enter trigger name
    • Under Type: Choose Schedule
    • Under Start date: Choose appropriate date with time
    • Under Time zone: Choose appropriate time zone
    • Under Recurrence: Choose Every 1 hour
    • Click OK



  • Enter the Parameters
  • Under schema_name: Enter the name of the schema where lookup table was created 
  • Under table_name: Enter the name of the lookup table you created 
  • Click OK


 

  •  Publish the Pipeline
    After creating the trigger, click Publish All.




For more frequently asked questions, please refer to this article.