Most companies using Oracle Fusion Cloud have a pressing need to push Fusion data to their SQL Server database to maintain a single source of truth. These companies typically encounter some combination of the following Data Migration/ETL needs for each table they want to pull from Oracle Fusion:
- Type of data pull from source table - Full data pull or Incremental data pull
- Data insertion in target table - Insert new rows or update existing rows or both
- Frequency of automated refresh - daily/weekly/bi-weekly or once in x days
In some cases, a table might have a composite primary key (a combination of values from multiple columns that together uniquely identify each row). Some tables may not have a time-based column to facilitate incremental loads; for those, the primary key is used as the reference to perform the incremental load.
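To make the incremental-with-upsert combination concrete, it can be sketched in T-SQL as below. This is only an illustrative sketch, not what the pipeline literally executes: the table, column, and watermark names (ETL_WATERMARK, STG_INVOICES, INVOICE_ID, LAST_UPDATE_DATE) are assumptions, not objects created by this setup.

```sql
-- Hypothetical sketch of an incremental load with upsert (insert + update).
-- All table and column names below are illustrative placeholders.

-- 1. Read the last watermark recorded for this table:
DECLARE @LastWatermark DATETIME =
    (SELECT MAX(WATERMARK_VALUE) FROM ETL_WATERMARK WHERE TABLE_NAME = 'INVOICES');

-- 2. Upsert only the rows changed since that watermark:
MERGE INTO DBO.INVOICES AS TGT
USING (SELECT * FROM DBO.STG_INVOICES
       WHERE LAST_UPDATE_DATE > @LastWatermark) AS SRC
    ON TGT.INVOICE_ID = SRC.INVOICE_ID   -- composite keys would match on multiple columns
WHEN MATCHED THEN
    UPDATE SET TGT.AMOUNT = SRC.AMOUNT,
               TGT.LAST_UPDATE_DATE = SRC.LAST_UPDATE_DATE
WHEN NOT MATCHED THEN
    INSERT (INVOICE_ID, AMOUNT, LAST_UPDATE_DATE)
    VALUES (SRC.INVOICE_ID, SRC.AMOUNT, SRC.LAST_UPDATE_DATE);
```

A full load, by contrast, would skip the watermark filter and reload the entire source table.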
This article will help you handle all of the above scenarios with a Power BI Fabric Data Pipeline that refers to a Lookup table.
Need a visual guide? Watch the full step-by-step setup video here: Click to Watch.
The lookup table acts as a configuration controller for Power BI Fabric Data Pipelines. Each row in the table represents a data load job, including source/destination table details, load type, execution frequency, and watermark column information for incremental loads.
Prerequisites:
1. Required Setup:
- Windows Server with On-premises Data Gateway and BI Connector
- Install and configure the On-premises Data Gateway from Microsoft.
Note: It's possible to use an existing gateway as well.
- Install the BI Connector Server edition and activate it using a license key.
- Admin Access to Gateway Server
- Admin access is required to set up System DSNs in ODBC.
2. Recommended Configurations:
- Windows Server 2016 or above (can also be set up on a VM on the cloud)
- .NET Framework 4.7.2 or above installed
- Minimum of 8 CPU cores, 8 GB RAM, and 80 GB hard disk space
- Strongly recommended: disable hibernate/sleep on this server.
Note: The gateway will not work while the server is in hibernate/sleep mode.
Step 1: Setting Up Connections in Power BI Fabric:
Note: Ensure that the SQL Server and ODBC connections are set up before importing the pipeline. If already configured, you can skip ahead to Step 2.
1a. Creating SQL Server Connection:
- In Power BI Fabric, go to Settings and select "Manage connections and gateways":
- Select "+ New":
- Select your Gateway cluster.
- Enter a descriptive name for the connection: Your_Connection_Name (e.g., TestDB_1):
- Select Connection type “SQL Server”.
- Fill in the connection details:
- Server: Your SQL Server name
- Database: TestDB (or your database name)
- Authentication method: Basic
- Username: Your SQL Server username
- Password: Your SQL Server password
- Privacy Level: You can select “Organizational”
- Click "Create".
1b. Creating BI Connector ODBC Connection:
Note: Before proceeding, ensure the BI Connector DSN is configured and the required BIC tables (source tables created in BI Connector) are already available. These are essential for the BI Connector ODBC connection. For further assistance on creating the DSN and BIC tables, refer to this article.
- Stay in "Manage connections and gateways".
- Click "+ New" again.
- Select your Gateway cluster (same as above).
- Enter a descriptive name for the connection: Your_Connection_Name (e.g., Oracle_BIC_Source).
- Select Connection type “ODBC”.
- Fill in details:
- Connection string: dsn=YOUR_DSN_NAME (replace YOUR_DSN_NAME with your actual BI Connector DSN)
- Authentication: Basic
- Username: Your Oracle Fusion username
- Password: Your Oracle Fusion password
- Click “Create”.
Step 2: Importing the Pipeline Files
To download the pipeline files, please refer to this link.
After downloading, unzip the file — you will find two separate ZIP files inside:
- FUSION_TO_SQL_SERVER_WORKER_PIPELINE
- FUSION_TO_SQL_SERVER_PARENT_PIPELINE
2a. Importing FUSION_TO_SQL_SERVER_WORKER_PIPELINE
- In your workspace, click "+ New items":
- Select "Data pipeline":
- Enter a descriptive name for your pipeline (e.g., FUSION_TO_SQL_SERVER_WORKER_PIPELINE) and click “Create”.
- Then choose the "Import" option:
- Import pipeline (FUSION_TO_SQL_SERVER_WORKER_PIPELINE):
- Upload the FUSION_TO_SQL_SERVER_WORKER_PIPELINE.zip file.
- From the first dropdown, select the SQL Server connection configured in Step 1 (refer to the image below).
- From the second dropdown, select the BI Connector ODBC connection configured in Step 1 (refer to the image below).
- Click "Use this template".
- Save the pipeline:
2b. Importing FUSION_TO_SQL_SERVER_PARENT_PIPELINE
Note: This will be your main pipeline
- In your workspace, click "+ New items".
- Select "Data pipeline".
- Enter a descriptive name for your pipeline (e.g. FUSION_TO_SQL_SERVER_PARENT_PIPELINE) and click “Create”.
- Then choose the "Import" option.
- Import pipeline (FUSION_TO_SQL_SERVER_PARENT_PIPELINE)
- Upload the FUSION_TO_SQL_SERVER_PARENT_PIPELINE.zip file.
- In the first dropdown, select your SQL Server connection.
- In the second dropdown, select your Fabric Data Pipeline connection. (If it is not created yet, follow the steps below.)
- Select the second dropdown and Click "More"
- Select "Fabric Data Pipeline" option
- Provide a name for the connection and Sign in by selecting your respective account.
- Then click "Connect"
- In the third dropdown, select your BI Connector ODBC connection (as shown in the image below).
- Click "Use this template"
- Now configure the RUN_INCREMENTAL_PIPELINE activity
- In the pipeline, locate and select the RUN_INCREMENTAL_PIPELINE activity
- Then click the Settings option
- Select the workspace where the initial pipeline was created, and then choose the same pipeline (FUSION_TO_SQL_SERVER_WORKER_PIPELINE).
- Save the pipeline
Step 3: Create the Lookup Table in the SQL Server Database
3a. Run the Table Creation Script:
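If the table creation script is not at hand, the lookup table can be created along these lines. This is a hypothetical sketch: only the TABLE_SRC and FREQUENCY_IN_DAYS columns appear elsewhere in this article; the table name and the remaining columns are assumptions and must match what your imported pipeline actually expects.

```sql
-- Hypothetical lookup-table definition; the actual column names must match
-- the ones referenced by the imported pipeline.
CREATE TABLE DBO.PIPELINE_LOOKUP (
    TABLE_SRC            VARCHAR(200) NOT NULL,  -- source (BIC) table name
    TABLE_DEST           VARCHAR(200) NOT NULL,  -- destination SQL Server table
    LOAD_TYPE            VARCHAR(20)  NOT NULL,  -- 'FULL' or 'INCREMENTAL'
    PRIMARY_KEY_COLUMNS  VARCHAR(500) NULL,      -- comma-separated for composite keys
    WATERMARK_COLUMN     VARCHAR(200) NULL,      -- time-based column for incremental loads
    LAST_REFRESH_DATE    DATETIME     NULL,      -- updated by the pipeline after each run
    FREQUENCY_IN_DAYS    INT          NOT NULL   -- run every N days
);
```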
3b. Insert Rows into the Lookup Table using SQL INSERT Query:
Sample Query:
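A hedged example of registering one load job follows; the table name (DBO.PIPELINE_LOOKUP) and column list here are assumptions for illustration — use the columns defined by your actual table creation script.

```sql
-- Example row: daily incremental load of a hypothetical BIC table.
-- Table and column names are placeholders; match them to your lookup table.
INSERT INTO DBO.PIPELINE_LOOKUP
    (TABLE_SRC, TABLE_DEST, LOAD_TYPE, PRIMARY_KEY_COLUMNS,
     WATERMARK_COLUMN, LAST_REFRESH_DATE, FREQUENCY_IN_DAYS)
VALUES
    ('BIC_INVOICES', 'DBO.INVOICES', 'INCREMENTAL', 'INVOICE_ID',
     'LAST_UPDATE_DATE', NULL, 1);
```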
3c. (Optional) Updating and Deleting Rows in the Lookup Table
This step is required only if you want to make any changes to the Lookup table. If not, you can move to the next step.
To Update a Row:
Add the UPDATE statement in the SQL editor and execute it.
FREQUENCY_IN_DAYS = 1 → sets the pipeline to run that load every day
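For example, a minimal UPDATE sketch; the lookup-table name (DBO.PIPELINE_LOOKUP) and the filter value are placeholders:

```sql
-- Change one job's refresh frequency to daily.
-- Always include a WHERE clause so only the intended row is updated.
UPDATE DBO.PIPELINE_LOOKUP
SET    FREQUENCY_IN_DAYS = 1
WHERE  TABLE_SRC = 'BIC_TABLE_Name';
```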
To Delete a Row:
Use the DELETE statement with a proper condition:
- This condition ensures that only the row(s) where the TABLE_SRC column has the value 'BIC_TABLE_Name' will be deleted.
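A minimal sketch of such a DELETE, assuming the lookup table is named DBO.PIPELINE_LOOKUP (replace with your actual table name):

```sql
-- Remove the load job(s) whose source table is 'BIC_TABLE_Name'.
DELETE FROM DBO.PIPELINE_LOOKUP
WHERE TABLE_SRC = 'BIC_TABLE_Name';
```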
3d. Set the Lookup Table Name in Fabric Pipeline
- In the main pipeline (e.g., FUSION_TO_SQL_SERVER_PARENT_PIPELINE) in Power BI Fabric, replace the existing default value of the table_name parameter with the name of the lookup table you created.
- Click Save to save your changes
To learn more about the lookup table, please refer to this article.
Step 4: Configure Pipeline Schedule:
- Open your main pipeline (FUSION_TO_SQL_SERVER_PARENT_PIPELINE)
- Click on "Schedule" in the top ribbon/menu
- Turn on "Scheduled run" (toggle to "On")
- Configure Repeat interval: Select from dropdown options like Daily, Weekly or Monthly
- Set Execution Time: Choose optimal time (e.g., 2:00 AM)
- Set Start date: When scheduling should begin
- Set End date: A far-future date (e.g., 12/31/2099) for continuous running
Note: This field is mandatory and cannot be left empty, so we use a distant future date to ensure the pipeline runs indefinitely without manual intervention.
- Select Time zone: Your regional time zone (e.g., UTC+05:30 for India)
- Then click "Apply" to save the changes
Note: You can also run the pipeline on-demand anytime by clicking the "Run" button in the top ribbon, regardless of the schedule.
For more frequently asked questions, please refer to this article.