Using the BI Connector with Azure Data Factory (ADF), you can set up and automate data pipelines from Oracle Fusion Cloud ERP, SCM, HCM, and CX application data sources (via the BI Publisher Data Models) to your preferred Data Warehouse (DW) or Lakehouse sinks, such as SQL Server, Snowflake, Azure SQL, Azure Lakehouse, Amazon S3, and Oracle Database.
This article walks you through the steps to set up these pipelines.
Before you start
Who is this guide for?
This guide is for BI Connector/ADF users looking to set up data pipelines from the BI Publisher Data Models (created out of their Oracle Fusion cloud applications data) to their desired data warehouse.
Compatibility
BI Connector Versions
Prerequisites
Required Setup
Windows Server with Microsoft Integration Runtime and BI Connector
Install and configure the Microsoft Self-hosted Integration Runtime (SHIR) from Microsoft.
Note: It's possible to use an existing SHIR as well.
Install BI Connector Server edition and activate it using a license key.
Admin Access to Gateway Server
Admin access is required to set up System DSNs in ODBC.
Recommended Configurations
- Windows Server:
- Windows Server 2016 or above (can also be set up on a VM on the cloud)
- .NET Framework 4.7.2 or above installed
- Minimum of 8 CPU cores, 8 GB RAM, and 80 GB hard disk space
- Strongly recommended to disable hibernate/sleep on this server.
Note: The SHIR won't work while the server is in hibernate/sleep mode.
Oracle Fusion and Sink DW Authentication & Permissions
- Oracle Fusion Authentication and Permissions
Basic authentication is required (Username/password only).
To confirm: Log in to Oracle Fusion in a private browser window. If prompted for only a username and password, you have Basic Authentication.
If MFA is enabled: Contact your Oracle Fusion admin to create a service account for BI Connector.
Role Assignments
Assign the BI Publisher Data Model Developer role to your Oracle Fusion account.
Add additional roles based on data access needs:
Example: Budget Manager and Budget Analyst roles for budget data.
Custom Roles and Role Hierarchies are supported.
Tip: To ensure the account has access to the right set of data, it's recommended that you assign all the roles and privileges of the Oracle reporting user. In addition, add the BI Publisher Data Model Developer role.
- Sink Data Warehouse (DW) or Lakehouse Authentication and Permissions
- The necessary credentials or auth tokens to make a connection to the Sink DW or Lakehouse
- Write permissions to the desired Database/Schema/Table
- ADF Permissions
- The ADF user must have permission to create, use, and publish the following:
- Linked Services
- Integration Runtimes (use an existing SHIR or create a new one)
- Datasets
- Pipelines
Step-by-Step Process
Step 1: Create BI Connector Data Source on the Server
Log in to the Windows Server as an Admin user.
Open BI Connector (“Run as Administrator”).
Click New Data Source and provide a name that's convenient for you.
Example: Oracle_Fusion_ERP_Prod_Data_Model
Select Data Model as the Data Source Type.
Choose your Oracle application:
Fusion (Oracle Fusion Cloud ERP/SCM/HCM/CX)
Analytics Cloud (Oracle Analytics Cloud)
OBIEE (Oracle Analytics Server)
Custom (if using a custom SSO setup—contact BI Connector Support).
- Enter your Oracle Account URL:
Log in to Oracle Fusion, navigate to the Analytics/OTBI/Catalog page, and copy the URL up to oraclecloud.com.
Example: https://acmeprod.ocp.oraclecloud.com
Enter Oracle Username and Password.
Add the folder path:
Locate the Data Model folder in Oracle Fusion, click More, select Properties, and copy the Location.
Example: /shared/Custom/ERP/GL
Paste the folder path into the BI Connector Folder Path field.
To add multiple folder paths, repeat the steps above.
Sub-folders are automatically included.
- Click Verify & Add to validate the folder path.
- Test the connection by clicking Test & Save.
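Before moving to ADF, you can optionally sanity-check the new data source over ODBC from the same Windows Server. Below is a minimal sketch using Python's pyodbc package; the DSN name is the example from above, and the credential values are placeholders.

```python
import pyodbc

# List the DSNs visible to ODBC and confirm the BI Connector
# data source created in Step 1 is registered.
print(pyodbc.dataSources())

# Open a connection through the BI Connector DSN.
# If the DSN already stores the Oracle credentials, the UID/PWD
# attributes may not be needed.
conn = pyodbc.connect(
    "DSN=Oracle_Fusion_ERP_Prod_Data_Model;"
    "UID=your_oracle_username;"
    "PWD=your_oracle_password"
)
print("Connected via BI Connector DSN")
conn.close()
```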
Step 2: Create a BI Connector Table
Open the Tables tab in the BI Connector application.
Click Add Table and provide a name. Note that the table name cannot contain spaces.
Example: Monthly_GL_Trial_Balance
Select the Data Source created in Step 1.
Navigate through the folder paths to locate the desired Data Model and dataset, or write your own SQL query by choosing the appropriate option.
Enter parameter values if applicable:
Enter values for mandatory parameters.
Use flexible date options (TODAY, YESTERDAY, etc.) based on your needs.
Preview the data and click Save.
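Since ADF's Copy activity will read this table over the same ODBC path, it's worth confirming it returns rows. A minimal sketch, assuming the example DSN and table names used above:

```python
import pyodbc

conn = pyodbc.connect(
    "DSN=Oracle_Fusion_ERP_Prod_Data_Model;"
    "UID=your_oracle_username;PWD=your_oracle_password"
)
cursor = conn.cursor()

# Query the BI Connector table exactly as ADF's Copy activity will.
cursor.execute("SELECT * FROM Monthly_GL_Trial_Balance")

# Print the column names, then a small sample of rows.
print([col[0] for col in cursor.description])
for row in cursor.fetchmany(5):
    print(row)

conn.close()
```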
Step 3: Create a Linked Service and Dataset For Oracle Fusion Source in ADF
- Log in to your ADF account
- Click on Manage in the left-side pane
- Navigate to Linked Services, and click the + New button
- Search and select ODBC, click Continue
- Fill out the required info for creating the Linked Service to Oracle Fusion:
- Name: Give an appropriate name to the Linked Service
Example: Source_Oracle_Fusion
- Select the appropriate Self-hosted Integration Runtime (the one installed on the same server as BI Connector Server edition)
- Connection String: Enter the name of the BI Connector Data Source, preceded by "dsn="
Example: If your BI Connector Data Source is Oracle_Fusion_ERP_Prod_Data_Model, then the connection string is:
dsn=Oracle_Fusion_ERP_Prod_Data_Model
- Keep the Authentication Type as Basic, and enter your Oracle Fusion username and password in the appropriate fields
- Click Test Connection at the bottom right to verify the connection
- Finally, click the Create button
- Next, click on Author in the left pane. Hover over Datasets, click on the 3 dots, and select New Dataset
- Search and select ODBC, click Continue
- Name the dataset:
Example: Source_Monthly_GL_Trial_Balance_Table
- Select the Linked Service (created in the previous step) from the drop-down
- Select the Integration Runtime from the drop-down
- Select the table you created in BI Connector from the drop-down
- Click OK
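If you prefer to script this step rather than use the ADF Studio UI, the same ODBC Linked Service can be created with the azure-mgmt-datafactory Python SDK. The sketch below is illustrative only: the subscription, resource group, factory, and Integration Runtime names are assumptions, and it presumes your environment is already set up for DefaultAzureCredential.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeReference,
    LinkedServiceResource,
    OdbcLinkedService,
    SecureString,
)

# Placeholder subscription and factory details -- replace with your own.
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "my-resource-group", "my-data-factory"

# ODBC Linked Service pointing at the BI Connector DSN, routed through
# the Self-hosted Integration Runtime on the gateway server.
odbc_ls = OdbcLinkedService(
    connection_string="dsn=Oracle_Fusion_ERP_Prod_Data_Model",
    authentication_type="Basic",
    user_name="your_oracle_username",
    password=SecureString(value="your_oracle_password"),
    connect_via=IntegrationRuntimeReference(reference_name="MySelfHostedIR"),
)
adf_client.linked_services.create_or_update(
    rg_name, df_name, "Source_Oracle_Fusion",
    LinkedServiceResource(properties=odbc_ls),
)
```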
Step 4: Create a Linked Service to the Sink System and a Dataset For the Appropriate Destination table
- In ADF, click on Manage in the left-side pane
- Navigate to Linked Services, and click the + New button
- Search and select the appropriate option, click Continue
- Name the Linked Service appropriately and fill out the required info for creating the Linked Service to your sink system
- Next, click on Author in the left pane. Hover over Datasets, click on the 3 dots, and select New Dataset
- Name the Dataset appropriately and fill out the required info by pointing to your Sink system, and selecting the appropriate destination table
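Note that the Copy activity needs the destination table to exist, unless you use ADF's auto-create table option on sinks that support it. As an illustration, here is a hedged sketch that creates a destination table on a SQL Server sink; the server, database, credentials, and column definitions are all placeholders to adapt to your own schema.

```python
import pyodbc

# Placeholder connection details for a SQL Server sink --
# substitute your own server, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-sql-server.database.windows.net;"
    "DATABASE=my_warehouse;UID=my_user;PWD=my_password"
)
cursor = conn.cursor()

# Example destination table; adjust the columns to match the
# BI Connector table created in Step 2.
cursor.execute("""
    CREATE TABLE dbo.Monthly_GL_Trial_Balance (
        Ledger_Name     NVARCHAR(100),
        Period_Name     NVARCHAR(30),
        Account_Code    NVARCHAR(60),
        Ending_Balance  DECIMAL(18, 2)
    )
""")
conn.commit()
conn.close()
```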
Step 5: Create a Pipeline in ADF and Publish it
In ADF, click on Author in the left pane. Hover over Pipelines, click on the 3 dots, and select New Pipeline
Name the Pipeline appropriately
Under activities, search for Copy Data
Drag and drop the Copy Data activity onto the Pipeline canvas, and click on it
In the General tab at the bottom, give a name for the activity, and use other settings as required
In the Source tab, pick the source dataset from the drop-down, and keep other settings as required
In the Sink tab, pick the sink dataset from the drop-down, and keep other settings as required
In the Mapping tab, click Import Schemas to make ADF pull the columns from both source and sink datasets and map them automatically
Select the columns needed in the Sink system's destination table
Click Debug at the top to do a trial run.
Once the trial run succeeds, click the Add Trigger option
Select New/Edit
Under Choose Triggers drop-down, click + New
Name the trigger
Select the Type as Schedule
Set the appropriate timeframe (every x hours, days, weeks, or months) to automatically run the pipeline.
Finally, click the Publish All button at the top left to publish the pipeline, datasets, trigger, and Linked Services.
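Once published, the schedule trigger runs the pipeline automatically, but you can also start and monitor a run on demand. A minimal sketch using the azure-mgmt-datafactory SDK, reusing the placeholder names from the earlier sketch and a hypothetical pipeline name:

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "my-resource-group", "my-data-factory"

# Start an on-demand run of the published pipeline.
run = adf_client.pipelines.create_run(rg_name, df_name, "Oracle_Fusion_To_DW_Pipeline")

# Poll the run until it reaches a terminal state.
while True:
    pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run.run_id)
    print("Status:", pipeline_run.status)
    if pipeline_run.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
```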