How to track data lineage with SQL Server Graph Tables – Part 3 Populate Graph Tables

In this third article, I will show you how to use the procedures we created in the previous post to populate the graph tables with useful data lineage information.

Data Flow

We will simulate the flow of data from two source systems (Finance and HR) into a data warehouse/business intelligence system as illustrated in the diagram below.

Staging

Most modern BI systems include a staging area, sometimes also referred to as a landing area. Data is collected from various disparate systems and loaded into the staging database with minimal transformations, typically only data type conversions. The goal is to quickly grab the data from the different source systems and load it into the staging area for further processing, so that any performance impact on the source systems is minimized.
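As a quick illustration, a hop like this is often just an INSERT ... SELECT with casts. Below is a minimal sketch, assuming hypothetical staging table definitions and data types (the actual DDL is not shown in this series):

-- Illustrative staging load: pull the source rows as-is,
-- applying only data type conversions (table and column types are assumed).
INSERT INTO Staging.dbo.F0411 (RPKCO, RPDOC, RPDIVJ)
SELECT
	CAST(RPKCO AS NCHAR(5)),       -- document company
	CAST(RPDOC AS INT),            -- document number
	CAST(RPDIVJ AS NUMERIC(6, 0))  -- invoice date (JDE Julian format)
FROM JDE.dbo.F0411;
GO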

To capture the data lineage for this first hop from source system to staging, we can use our procedure as follows.

-- JDE to Staging
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Finance',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'JDE',
		@TargetDatabase = 'Staging',
		
		--Table Level
		@SourceTable = 'F0411',
		@TargetTable = 'F0411',

		--Column Level
		@SourceColumnList = 'RPKCO, RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ',
		@TargetColumnList = 'RPKCO, RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ',

		@ProcessName = 'Load F0411';
GO



-- HR to Staging 
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'HR',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Payroll',
		@TargetDatabase = 'Staging',
		
		--Table Level
		@SourceTable = 'EMP',
		@TargetTable = 'EMP',

		--Column Level
		@SourceColumnList = 'EMPID, HIREDT, JOBTITLE',
		@TargetColumnList = 'EMPID, HIREDT, JOBTITLE',

		@ProcessName = 'Load EMP';
GO 

As you can see from the code above, it is very easy to map an input table to an output table using the procedures we created in the previous steps.

EDW

Next, we will capture the data lineage from Staging to the Enterprise Data Warehouse (EDW).

-- Staging to EDW for Finance tables

EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Staging',
		@TargetDatabase = 'EDW',
		
		--Table Level
		@SourceTable = 'F0411',
		@TargetTable = 'Fact Accounts Payable',

		--Column Level
		@SourceColumnList = 'RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ, RPDIVJ',
		@TargetColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number, Payee Address Number, Approvers Address Number, Invoice Date - Julian, Invoice Date - Gregorian',

		@ProcessName = 'Load Fact Accounts Payable';
GO



-- Staging to EDW for HR tables
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Staging',
		@TargetDatabase = 'EDW',
		
		--Table Level
		@SourceTable = 'EMP',
		@TargetTable = 'Dim Employee',

		--Column Level
		@SourceColumnList = 'EMPID, HIREDT, HIREDT, JOBTITLE',
		@TargetColumnList = 'Employee Number, Hire Date, Tenure, Job Title',

		@ProcessName = 'Load Dim Employee';
GO

If you have a keen eye, you might have noticed that we are mapping some of the fields twice. The field RPDIVJ in the JDE table F0411 is mapped to both Invoice Date - Julian and Invoice Date - Gregorian. The field HIREDT in the HR EMP table is mapped to both the Hire Date and the Tenure columns. In both scenarios this implies some transformation logic between these columns, as sketched below.
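To make that concrete, here is a hedged sketch of the kind of logic those double mappings imply. The actual load code is not part of this series, and the JDE Julian date conversion shown is one common approach rather than a confirmed detail of this system:

-- RPDIVJ feeds two target columns. JDE stores dates as CYYDDD
-- (century/year plus day of year); e.g. 119091 = April 1, 2019.
SELECT
	RPDIVJ AS [Invoice Date - Julian],
	DATEADD(DAY, CAST(RPDIVJ AS INT) % 1000 - 1,
		DATEADD(YEAR, CAST(RPDIVJ AS INT) / 1000, '19000101')) AS [Invoice Date - Gregorian]
FROM Staging.dbo.F0411;

-- HIREDT feeds both Hire Date and a derived Tenure (in years).
SELECT
	HIREDT AS [Hire Date],
	DATEDIFF(YEAR, HIREDT, GETDATE()) AS [Tenure]
FROM Staging.dbo.EMP;
GO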

Accounts Payable Model

The final step in our example is the process that loads the SQL Server Analysis Services (SSAS) model.

-- EDW to SSAS Finance
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'EDW',
		@TargetDatabase = 'SSAS Accounts Payable Model',
		
		--Table Level
		@SourceTable = 'Fact Accounts Payable',
		@TargetTable = 'Fact Accounts Payable',

		--Column Level
		@SourceColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number, Invoice Date - Gregorian',
		@TargetColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number, Invoice Date',

		@ProcessName = 'Process SSAS Accounts Payable Model';
GO



-- EDW to SSAS HR
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'EDW',
		@TargetDatabase = 'SSAS Accounts Payable Model',
		
		--Table Level
		@SourceTable = 'Dim Employee',
		@TargetTable = 'Dim Employee',

		--Column Level
		@SourceColumnList = 'Employee Number, Hire Date, Tenure, Job Title',
		@TargetColumnList = 'Employee Number, Hire Date, Tenure, Job Title',

		@ProcessName = 'Process SSAS Accounts Payable Model';


GO

As you can see in the code above, the Invoice Date - Gregorian column from the Finance system is used to populate Invoice Date, while Invoice Date - Julian did not make it into the SSAS cube.

Summary

The database procedures make it really easy to capture data lineage. They could even be enhanced to accept and parse JSON or XML inputs so that data lineage could be dropped in and maintained from a web application... can someone say Power App?
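As a hedged sketch of that idea (the procedure name, JSON shape, and parameter sizes below are my own assumptions, not part of the original solution), a wrapper could parse a JSON array of mappings with OPENJSON and call Create_Data_Lineage once per element:

CREATE PROCEDURE dbo.Create_Data_Lineage_From_JSON
	@Mappings NVARCHAR(MAX)
AS
BEGIN
	SET NOCOUNT ON;

	DECLARE @SourceSystem NVARCHAR(100), @TargetSystem NVARCHAR(100),
			@SourceDatabase NVARCHAR(100), @TargetDatabase NVARCHAR(100),
			@SourceTable NVARCHAR(128), @TargetTable NVARCHAR(128),
			@SourceColumnList NVARCHAR(MAX), @TargetColumnList NVARCHAR(MAX),
			@ProcessName NVARCHAR(200);

	-- Shred the JSON array into one row per mapping.
	DECLARE mapping_cursor CURSOR LOCAL FAST_FORWARD FOR
		SELECT SourceSystem, TargetSystem, SourceDatabase, TargetDatabase,
			   SourceTable, TargetTable, SourceColumnList, TargetColumnList, ProcessName
		FROM OPENJSON(@Mappings)
		WITH (
			SourceSystem     NVARCHAR(100) '$.sourceSystem',
			TargetSystem     NVARCHAR(100) '$.targetSystem',
			SourceDatabase   NVARCHAR(100) '$.sourceDatabase',
			TargetDatabase   NVARCHAR(100) '$.targetDatabase',
			SourceTable      NVARCHAR(128) '$.sourceTable',
			TargetTable      NVARCHAR(128) '$.targetTable',
			SourceColumnList NVARCHAR(MAX) '$.sourceColumns',
			TargetColumnList NVARCHAR(MAX) '$.targetColumns',
			ProcessName      NVARCHAR(200) '$.processName'
		);

	OPEN mapping_cursor;
	FETCH NEXT FROM mapping_cursor INTO @SourceSystem, @TargetSystem,
		@SourceDatabase, @TargetDatabase, @SourceTable, @TargetTable,
		@SourceColumnList, @TargetColumnList, @ProcessName;

	WHILE @@FETCH_STATUS = 0
	BEGIN
		-- Reuse the existing procedure for each mapping in the array.
		EXEC dbo.Create_Data_Lineage
			@SourceSystem = @SourceSystem, @TargetSystem = @TargetSystem,
			@SourceDatabase = @SourceDatabase, @TargetDatabase = @TargetDatabase,
			@SourceTable = @SourceTable, @TargetTable = @TargetTable,
			@SourceColumnList = @SourceColumnList, @TargetColumnList = @TargetColumnList,
			@ProcessName = @ProcessName;

		FETCH NEXT FROM mapping_cursor INTO @SourceSystem, @TargetSystem,
			@SourceDatabase, @TargetDatabase, @SourceTable, @TargetTable,
			@SourceColumnList, @TargetColumnList, @ProcessName;
	END

	CLOSE mapping_cursor;
	DEALLOCATE mapping_cursor;
END
GO

A web app (or Power App) could then post a single JSON document per batch of mappings, for example:

EXEC dbo.Create_Data_Lineage_From_JSON @Mappings = N'[
  {"sourceSystem":"HR","targetSystem":"Business Intelligence System",
   "sourceDatabase":"Payroll","targetDatabase":"Staging",
   "sourceTable":"EMP","targetTable":"EMP",
   "sourceColumns":"EMPID, HIREDT, JOBTITLE",
   "targetColumns":"EMPID, HIREDT, JOBTITLE",
   "processName":"Load EMP"}
]';
GO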

In the next article I will cover how to query the graph tables as well as visualize the data in order to answer data lineage questions.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

Abstract and encapsulate with Power BI and SQL Server Table-Valued Functions – Use Case 2: Change results based on a database point in time and a user-defined parameter

This is the second article in which I cover different use cases for using SQL Server table-valued functions with Power BI.

In the previous article I showed you how a table-valued function can be used to hide lower levels of a hierarchy based on user ID. This is handy if you need to prevent certain users from breaking down aggregate measures while interacting with a report or dashboard in Power BI.

In this article I will show you how to use table-valued functions to filter results based on a date range stored in your database, and how to let the report author control how many years of data to bring back with an optional input parameter.

Prerequisites

To follow along you will need a SQL Server instance with the WideWorldImportersDW sample database, and Power BI Desktop.

Use case 2: Change results based on time

This use case comes in handy if you want your results to be filtered based on a date and time in your database and not on the date time of your Power BI users. This situation may occur in a globally distributed system in which Power BI users and the database they are querying are located in different time zones or in situations where the data lags behind the current date/time of your users. I will cover a situation where you want the data set to only show data up to a point in time stored in your database.

Using the Wide World Importers sample database, suppose you wanted to limit the orders a person can report on to the last year for which there are orders in the database. Run the following query to see the date range of the orders.

USE WideWorldImportersDW
GO

SELECT
	MIN([Order Date Key]) AS [Earliest Order Date]
	, MAX([Order Date Key]) AS [Latest Order Date]
FROM
	WideWorldImportersDW.Fact.[Order]
GO

As you can see from the results, we have orders from 2013 up to 2016. If we were to use the Power BI relative date slicer and set it to only show data from the past year, we would not see any results, because the relative date slicer is based on today's date (April 6, 2019) and not on a date in the database. One way to overcome this problem is to use a SQL Server table-valued function and encapsulate the logic to only show orders from June 1, 2015 to May 31, 2016. To do this we will create a new function using the code below.

Create Function

Run the following code in the Wide World Importers database to create a new function. We are including an optional parameter so that the report author can change how many years they want to go back when they connect to the data.

--DROP FUNCTION dbo.ufn_Orders_PastYear
--GO


CREATE FUNCTION dbo.ufn_Orders_PastYear(@NumberOfYears INT = NULL)
RETURNS TABLE
AS
RETURN
(
	SELECT
		*
	FROM
		WideWorldImportersDW.Fact.[Order] ord
	WHERE
		ord.[Order Date Key]
			BETWEEN (SELECT DATEADD(year, -ISNULL(@NumberOfYears, 1), MAX([Order Date Key]))
					 FROM WideWorldImportersDW.Fact.[Order])
			AND     (SELECT MAX([Order Date Key])
					 FROM WideWorldImportersDW.Fact.[Order])
);
GO

Notice that I negate the number of years by adding a minus sign in front of the ISNULL function in the subquery; this is to simplify the report authoring experience when using this function.
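Before connecting Power BI, you can give the function a quick smoke test in Management Studio (these queries are just a sanity check, not part of the original walkthrough):

-- DEFAULT falls back to one year from the latest order date.
SELECT COUNT(*) AS OrdersPastYear FROM dbo.ufn_Orders_PastYear(DEFAULT);

-- Pass a number to widen the window, e.g. three years.
SELECT COUNT(*) AS OrdersPastThreeYears FROM dbo.ufn_Orders_PastYear(3);
GO

Next, we will make a DirectQuery connection to the function using Power BI.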

Connect with Power BI

Similar to before, connect to the SQL Server database using a DirectQuery connection and select the function ufn_Orders_PastYear from the list of database objects.

As you can see in the image above, the function parameter @NumberOfYears appears in Power BI as an optional parameter. If you leave it blank and click Apply, it will pull back one year's worth of data based on what is available in the database. You can enter your own number to control how many years back you query the Orders fact table. Incorporating parameters is a very powerful way to give the report author control of the results.

Once the data has been loaded, let's visualize it using a simple bar chart.

Your results should look like the following image.

As you can see in the chart, we only have data from 2015 to 2016. To make the chart a bit easier to read, let's add a proper date hierarchy. We will need to build it ourselves because we are using DirectQuery to access the data, so the auto-generated date hierarchies are not available; those are only created when you import data into Power BI and set the column's data type to a date.

Create Year Column

Create a new calculated column and use the following DAX code to pull out the year value from the Order Date Key field.

OrderYear = Year([Order Date Key])

Create Month Columns

Next, we will create two month columns: one will be used for sorting and the other to display the month name on the report.

Use the following DAX code to create a new month number calculated column.

OrderMonth = Month([Order Date Key])

Now create a new calculated column to store the month name using the following DAX code.

Order Month Name = 
SWITCH (
    [OrderMonth],
    1, "January",
    2, "February",
    3, "March",
    4, "April",
    5, "May",
    6, "June",
    7, "July",
    8, "August",
    9, "September",
    10, "October",
    11, "November",
    12, "December"
)

Now we need to set the Sort by column property of the Order Month Name column to use the value of the OrderMonth column. To do this, navigate to the Model view, click on the Order Month Name field, and set the Sort by column to OrderMonth.

Now we will create a new hierarchy based on OrderYear and Order Month Name.

Click on the chart and replace the Axis value with the new hierarchy we just created. Drill down a level to see the years and months.

As you can see, this makes the chart much easier to read. Now let's insert some new data into the table and refresh the report.

Add some data

Run the following SQL to create new Date and Order records.

INSERT INTO [Dimension].[Date]
([Date]
,[Day Number]
,[Day]
,[Month]
,[Short Month]
,[Calendar Month Number]
,[Calendar Month Label]
,[Calendar Year]
,[Calendar Year Label]
,[Fiscal Month Number]
,[Fiscal Month Label]
,[Fiscal Year]
,[Fiscal Year Label]
,[ISO Week Number])
VALUES
('4/1/2019'
,1
,1
,'April'
,'Apr'
,4
,'CY2019-Apr'
,2019
,'CY2019'
,6
,'FY2019-APR'
,2019
,'FY2019'
,14)
GO
INSERT INTO [Fact].[Order]
([City Key]
,[Customer Key]
,[Stock Item Key]
,[Order Date Key]
,[Picked Date Key]
,[Salesperson Key]
,[Picker Key]
,[WWI Order ID]
,[WWI Backorder ID]
,[Description]
,[Package]
,[Quantity]
,[Unit Price]
,[Tax Rate]
,[Total Excluding Tax]
,[Tax Amount]
,[Total Including Tax]
,[Lineage Key])
VALUES
(45901
,0
,175
,'4/1/2019'
,'4/1/2019'
,76
,67
,9073
,NULL
,'April Fools ain''t no joke'
,'Each'
,3
,13.00
,15.00
,39.00
,5.85
,44.85
,9)
GO

Refresh the report and notice how it updates so that it only has one column.

This is because only one record falls within the function's window, which now runs one year back from the new maximum order date (April 1, 2018 to April 1, 2019).
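You can confirm this by querying the function again in Management Studio (a quick check, not part of the original walkthrough):

-- The window now runs one year back from the new maximum order date (2019-04-01),
-- so only the single row we just inserted is returned.
SELECT
	YEAR([Order Date Key]) AS OrderYear,
	MONTH([Order Date Key]) AS OrderMonth,
	COUNT(*) AS Orders
FROM dbo.ufn_Orders_PastYear(DEFAULT)
GROUP BY YEAR([Order Date Key]), MONTH([Order Date Key])
ORDER BY OrderYear, OrderMonth;
GO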

Combining SQL Server database functions with Power BI is a powerful way to abstract and encapsulate logic in the database, simplifying the report author's job and ensuring the right data is presented to report consumers.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

Extend your information reach without overstretching by virtualizing data using SQL Server 2019 and MongoDB – Part 2 SQL Server 2019

In this series of blog posts, I will explain how you can connect MongoDB to SQL Server 2019 using PolyBase so that you have the benefit of both schemaless and relational database technologies, integrated and working together to form a modern data ecosystem that can handle both traditional and “big data”.

In my previous post I explained how to install and configure MongoDB in an Azure VM running Linux. In this post I will walk you through the process of setting up SQL Server 2019, which is the area on the right of the diagram below.

Spin up Azure VM with SQL Server 2019

Microsoft has a pre-built VM with the latest release of SQL Server 2019 (at the time of this writing it is CTP 2.3), which makes it really quick and easy to set up. Simply navigate to your Azure portal and search for SQL Server 2019. You should see a Free SQL Server : (CTP2.3) SQL 2019 Developer option; once you select it, you should see the following.

Click create and fill out the subsequent screens as follows.

Step 1 Basics

NOTE Be sure to remember the Username and Password because they will be required later when we connect to the VM and install PolyBase.

Step 2 Size

I went for a DS2_v2, but you are free to pick a size that suits your needs. NOTE Microsoft recommends a DS2 or higher for development and functional testing.

Step 3 Settings

You will need to open a public inbound port (3389, RDP) so that you can remotely connect to the VM and install PolyBase.

Step 4 SQL Server settings

This last step is optional. You can enable external connections directly into the SQL Server database, which is handy if you want to connect to the database without having to log onto the VM. Once the VM is created, we will need to log into it to install PolyBase.

Install PolyBase

Unfortunately, the pre-built VM does not have PolyBase installed, so you will need to log onto the VM and install it. To connect to the VM, go to the resource in the Azure Portal and select Connect. You should see a screen like this.

Download the RDP file and enter the credentials you used when you first created the VM.

Once you have logged onto the VM you will need to navigate to the SQL Server 2019 installation media, which you can find at C:\SQLServerFull. Double-click on setup.

In the SQL Server Installation Center menu select New SQL Server stand-alone installation or add features to an existing installation.

For the Installation Type, select Add features to an existing instance of SQL Server 2019 CTP2.3.

On the Feature Selection screen select PolyBase Query Service for External Data.

On the PolyBase Configuration screen, select the first option (a standalone PolyBase-enabled instance). The second option, a PolyBase scale-out group, is ideal for scenarios in which you have multiple external data sources to connect to and need to optimize performance.

We will use the defaults for Server Configuration.

Review the summary and click the install button. If all goes well you should see the following screen when complete.
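You can also verify the installation from a query window. The checks below are standard SQL Server commands; note that PolyBase still has to be enabled at the instance level before it can be used:

-- Returns 1 if the PolyBase feature is installed.
SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;

-- Enable PolyBase at the instance level.
EXEC sp_configure 'polybase enabled', 1;
RECONFIGURE;
GO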

In the next post I will explain how to configure PolyBase to connect to your MongoDB database installed on a separate VM. This setup allows you to extend your reach without overstretching, by letting the data stay where it is while still making it available for integration and analytics with line-of-business application data.

Hopefully you have found this to be another practical post.

Until next time.

Anthony