How to Set Up a Power BI Fabric Data Pipeline from Oracle Fusion Cloud to SQL Server Using BI Connector

Modified on Fri, 18 Jul at 7:19 AM

Most companies using Oracle Fusion Cloud have a pressing need to push their Fusion data to a SQL Server database to maintain a single source of truth. For each table they want to pull from Oracle Fusion, these companies typically face some combination of the following data migration/ETL needs:


  1. Type of data pull from the source table - full data pull or incremental data pull
  2. Data insertion in the target table - insert new rows, update existing rows, or both (conceptually an upsert; see the sketch after this list)
  3. Frequency of automated refresh - daily, weekly, bi-weekly, or once every x days
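
For the second need, "insert or update" corresponds to an upsert. The T-SQL sketch below is purely illustrative of that concept; the table and column names are hypothetical placeholders, not the pipeline's actual implementation:

-- Illustrative upsert: rows from a staging table are merged into the target,
-- matching on the primary key column.
MERGE INTO dbo.TARGET_TABLE AS tgt
USING dbo.STAGING_TABLE AS src
    ON tgt.ID_COLUMN = src.ID_COLUMN                           -- match on the key
WHEN MATCHED THEN
    UPDATE SET tgt.LAST_UPDATE_DATE = src.LAST_UPDATE_DATE     -- update existing rows
WHEN NOT MATCHED THEN
    INSERT (ID_COLUMN, LAST_UPDATE_DATE)                       -- insert new rows
    VALUES (src.ID_COLUMN, src.LAST_UPDATE_DATE);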


In some cases, a table might have a composite primary key (a set of values concatenated from different columns to uniquely identify each row, as illustrated below). Some tables may not have time-based columns to facilitate incremental loads; for those, the primary key is used as the reference to perform the incremental load.
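
As a minimal illustration of a composite key (using a hypothetical dbo.ORDER_LINES table with hypothetical ORDER_ID and LINE_NUMBER columns):

-- Neither column is unique on its own, but their concatenation
-- uniquely identifies each row.
SELECT CONCAT(ORDER_ID, '-', LINE_NUMBER) AS COMPOSITE_KEY
FROM dbo.ORDER_LINES;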


This article will help you handle all of the above scenarios with a Power BI Fabric Data Pipeline that reads its configuration from a lookup table.


Need a visual guide? Watch the full step-by-step setup video here: Click to Watch.


The lookup table acts as a configuration controller for Power BI Fabric Data Pipelines. Each row in the table represents a data load job, including source/destination table details, load type, execution frequency, and watermark column information for incremental loads.


Prerequisites:

1. Required Setup:

  1. Windows Server with On-premises Data Gateway and BI Connector
    1. Install and configure On-premises Data Gateway from Microsoft
      Note: It's possible to use an existing gateway as well 
    2. Install BI Connector Server edition and activate it using a license key.
  2. Admin Access to Gateway Server
    1. Admin access is required to set up System DSNs in ODBC.

2. Recommended Configurations:

  1. Windows Server 2016 or above (can also be set up on a cloud VM)
  2. .NET Framework 4.7.2 or above installed
  3. Minimum of 8 CPU cores, 8 GB RAM, and 80 GB hard disk space
  4. It is strongly recommended to disable hibernate/sleep on this server.

     Note: The gateway will not work while the server is in hibernate/sleep mode.


Step 1: Setting Up Connections in Power BI Fabric

Note: Ensure that the SQL Server and ODBC connections are set up before importing the pipeline. If already configured, you can skip ahead to Step 2. 


1a. Creating SQL Server Connection:

  • In your Power BI Fabric, go to the Settings option and select "Manage connections and gateways".

  • Select "+ New".

  • Select your Gateway cluster.
  • Enter a descriptive name for the connection: Your_Connection_Name (e.g., TestDB_1).
  • Select the connection type "SQL Server".
  • Fill in the connection details:
    • Server: Your SQL Server name
    • Database: TestDB (or your database name)
    • Authentication method: Basic
    • Username: Your SQL Server username
    • Password: Your SQL Server password
    • Privacy Level: You can select “Organizational”
  • Click "Create".


1b. Creating BI Connector ODBC Connection:

Note: Before proceeding, ensure the BI Connector DSN is configured and the required BIC tables (source tables created in BI Connector) are already available. These are essential for the BI Connector ODBC connection. For further assistance with creating the DSN and BIC tables, refer to this article.

  • Stay in "Manage connections and gateways".
  • Click "+ New" again.
  • Select your Gateway cluster (same as above).
  • Enter a descriptive name for the connection: Your_Connection_Name (e.g., Oracle_BIC_Source).
  • Select the connection type "ODBC".
  • Fill in details:
    • Connection string: dsn=YOUR_DSN_NAME (replace YOUR_DSN_NAME with your actual BI Connector DSN)
    • Authentication: Basic
    • Username: Your Oracle Fusion username
    • Password: Your Oracle Fusion password
  • Click "Create".


Step 2: Importing the Pipeline Files


To download the pipeline files, please refer to this link.

After downloading, unzip the file — you will find two separate ZIP files inside:

  • FUSION_TO_SQL_SERVER_WORKER_PIPELINE 
  • FUSION_TO_SQL_SERVER_PARENT_PIPELINE


2a. Importing FUSION_TO_SQL_SERVER_WORKER_PIPELINE 


  • In your workspace, click "+ New items".
  • Select "Data pipeline".
  • Enter a descriptive name for your pipeline (e.g., FUSION_TO_SQL_SERVER_WORKER_PIPELINE) and click "Create".
  • Then choose the "Import" option.
  • Import pipeline (FUSION_TO_SQL_SERVER_WORKER_PIPELINE):
    • Upload the FUSION_TO_SQL_SERVER_WORKER_PIPELINE.zip file.
    • From the first dropdown, select the SQL Server connection configured in Step 1 (refer to the image below).
    • From the second dropdown, select the BI Connector ODBC connection configured in Step 1 (refer to the image below).
    • Click "Use this template".
    • Save the pipeline.

2b. Importing FUSION_TO_SQL_SERVER_PARENT_PIPELINE


Note: This will be your main pipeline.

  • In your workspace, click "+ New items".
  • Select "Data pipeline".
  • Enter a descriptive name for your pipeline (e.g. FUSION_TO_SQL_SERVER_PARENT_PIPELINE) and click “Create”.
  • Then choose the "Import" option.
  • Import pipeline (FUSION_TO_SQL_SERVER_PARENT_PIPELINE)
    • Upload the FUSION_TO_SQL_SERVER_PARENT_PIPELINE.zip file.
    • In the first dropdown, select your SQL Server connection.
    • In the second dropdown, select your Fabric Data Pipeline connection. (If it has not been created yet, follow the steps below.)
      • Select the second dropdown and click "More".

      • Select "Fabric Data Pipeline" option

      • Provide a name for the connection and sign in with your account.

      • Then click "Connect".
    • In the third dropdown, select your BI Connector ODBC connection (as shown in the image below).
    • Click "Use this template"
    • Now configure the RUN_INCREMENTAL_PIPELINE activity
      • In the pipeline, locate and select the RUN_INCREMENTAL_PIPELINE activity
      • Then click the Settings option
      • Select the workspace where the initial pipeline was created, and then choose the same pipeline (FUSION_TO_SQL_SERVER_WORKER_PIPELINE).
      • Save the pipeline 


Step 3: Create the Lookup Table in the SQL Server Database


3a. Run the Table Creation Script:

CREATE TABLE <your_schema_name>.<your_lookup_table_name>
(
    TABLE_SRC               VARCHAR(255),  -- Source table name from which data is extracted
    TABLE_DEST              VARCHAR(255),  -- Destination table name where data is loaded
    PRIMARY_KEY             VARCHAR(255),  -- Primary key column(s) used to uniquely identify records
    SCHEMA_DEST             VARCHAR(255),  -- Target schema for the destination table
    WATERMARK_COLUMN        VARCHAR(255),  -- Column used for incremental loading
    WATERMARK_COLUMN_FORMAT VARCHAR(255),  -- Format of the watermark column
    LAST_RUN_DATE           DATE,  -- Date when the data pipeline last ran successfully
    FREQUENCY_IN_DAYS       INT,  -- Number of days between scheduled data refreshes
    REFRESH_TYPE            VARCHAR(50)  -- Type of data refresh: FULL_LOAD, INCR_LOAD
);
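
Optionally, if you want to prevent duplicate job definitions for the same source/destination pair, you could add a uniqueness constraint. This is an illustrative extra, not part of the required setup:

-- Optional and illustrative: enforce one configuration row
-- per source/destination pair.
ALTER TABLE <your_schema_name>.<your_lookup_table_name>
ADD CONSTRAINT UQ_LOOKUP_JOB UNIQUE (TABLE_SRC, TABLE_DEST);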


3b. Insert Rows into the Lookup Table Using a SQL INSERT Query:


Sample Query: 

INSERT INTO <your_schema_name>.<your_lookup_table_name>
(
  TABLE_SRC,
  TABLE_DEST,
  PRIMARY_KEY,
  SCHEMA_DEST,
  WATERMARK_COLUMN,
  WATERMARK_COLUMN_FORMAT,
  LAST_RUN_DATE,
  FREQUENCY_IN_DAYS,
  REFRESH_TYPE
)
VALUES (
    'BIC_TABLE_NAME',           -- Source table created in BI Connector
    'SQL_TABLE_NAME',           -- Destination table in SQL Server
    'ID_COLUMN',                -- Primary key column
    'BIC_TEST',                 -- Destination schema
    'LAST_UPDATE_DATE',         -- Watermark column for incremental loading
    'YYYY-MM-DD"T"HH24:MI:SS',  -- Format of the watermark column
    NULL,                       -- Last successful run date (NULL before the first run)
    1,                          -- Refresh every 1 day
    'INCR_LOAD'                 -- Refresh type: incremental load
);
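
For a table that should always be reloaded in full (for example, one with no reliable watermark column), a row can be configured with REFRESH_TYPE set to FULL_LOAD. The values below are illustrative, and the assumption that WATERMARK_COLUMN and WATERMARK_COLUMN_FORMAT may be left NULL for full loads should be verified for your pipeline:

-- Illustrative full-load job: reloads the whole table every 7 days.
-- Assumption: watermark columns may be NULL when REFRESH_TYPE is FULL_LOAD.
INSERT INTO <your_schema_name>.<your_lookup_table_name>
(TABLE_SRC, TABLE_DEST, PRIMARY_KEY, SCHEMA_DEST, WATERMARK_COLUMN,
 WATERMARK_COLUMN_FORMAT, LAST_RUN_DATE, FREQUENCY_IN_DAYS, REFRESH_TYPE)
VALUES ('BIC_DIM_TABLE', 'SQL_DIM_TABLE', 'ID_COLUMN', 'BIC_TEST',
        NULL, NULL, NULL, 7, 'FULL_LOAD');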

3c. (Optional) Updating and Deleting Rows in the Lookup Table 

This step is required only if you want to make any changes to the Lookup table. If not, you can move to the next step.


To Update a Row:

Add the UPDATE statement in the SQL editor and execute it:

UPDATE <your_schema_name>.<your_lookup_table_name>
SET FREQUENCY_IN_DAYS = 1
WHERE TABLE_SRC = 'BIC_TABLE_NAME';
  • FREQUENCY_IN_DAYS = 1 → sets this job to run every day
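
Another common edit is forcing a table to reload from scratch. Assuming the pipeline treats a NULL LAST_RUN_DATE the same way as a job that has never run (as in the initial INSERT above), clearing the value should trigger a fresh load on the next run. This is an assumption about the pipeline's logic, so verify it in your environment before relying on it:

-- Assumption: NULL LAST_RUN_DATE means "never run", so the next
-- scheduled execution reloads this table from the beginning.
UPDATE <your_schema_name>.<your_lookup_table_name>
SET LAST_RUN_DATE = NULL
WHERE TABLE_SRC = 'BIC_TABLE_NAME';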

To Delete a Row:
Use the DELETE statement with a proper condition:

DELETE FROM <your_schema_name>.<your_lookup_table_name>
WHERE TABLE_SRC = 'BIC_TABLE_NAME';
  • This condition ensures that only the row(s) where the TABLE_SRC column has the value 'BIC_TABLE_NAME' will be deleted.
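
After any insert, update, or delete, you can sanity-check the lookup table's contents with a simple query:

-- Review every configured load job and its current state.
SELECT TABLE_SRC, TABLE_DEST, REFRESH_TYPE, FREQUENCY_IN_DAYS, LAST_RUN_DATE
FROM <your_schema_name>.<your_lookup_table_name>
ORDER BY TABLE_SRC;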


3d. Set the Lookup Table Name in Fabric Pipeline

  • In the main pipeline (e.g., FUSION_TO_SQL_SERVER_PARENT_PIPELINE) in Power BI Fabric, replace the existing default value of the table_name parameter with the name of the lookup table you created.
  • Click Save to save your changes.


To learn more about the lookup table, please refer to this article.


Step 4: Configure the Pipeline Schedule

  • Open your main pipeline (FUSION_TO_SQL_SERVER_PARENT_PIPELINE)
  • Click on "Schedule" in the top ribbon/menu
  • Turn on "Scheduled run" (toggle to "On")
  • Configure Repeat interval: select from the dropdown options (Daily, Weekly, or Monthly)
  • Set Execution Time: choose an optimal time (e.g., 2:00 AM)
  • Set Start date: when scheduling should begin
  • Set End date: a far-future date (e.g., 12/31/2099) for continuous running
    Note: This field is mandatory and cannot be left empty, so we use a distant future date to ensure the pipeline runs indefinitely without manual intervention.
  • Select Time zone: your regional time zone (e.g., UTC+05:30 for India)
  • Then click "Apply" to save the changes.
     
    Note: You can also run the pipeline on-demand anytime by clicking the "Run" button in the top ribbon, regardless of the schedule.


To see more frequently asked questions, please refer to this article.