How to track data lineage with SQL Server Graph Tables – Part 5 How to Visualize the Data Using R

In my previous article I explained how you can query the graph tables in SQL Server to pull out pertinent information regarding the lineage of various data fields in your business intelligence system. In this article I will show you how you can visualize the results using R inside of SQL Server.

I completed these steps using a docker image of SQL Server 2019. I was curious to test the performance of using a SQL Server docker image and I thought I would kill two birds with one stone and finish the series using docker. There were a couple of unexpected extra steps that I had to figure out to get this working properly inside of a docker container which I will call out and explain along the way.

Install R packages

We will use a package called igraph to visualize the data. Before you can use this package, you will need to download and install it.

To properly add packages to SQL Server 2019 or later you must use sqlmlutils.

Docker image issue #1

In order to use Machine Learning Services in the SQL Server docker image you must accept the End-User License Agreements for both SQL Server and SQL Server Machine Learning Services. I also mounted a volume so that the images generated by the R script are easily accessible from outside the container. To do this, start the docker image with the following command.

docker run -d -e MSSQL_PID=Developer -e ACCEPT_EULA=Y -e ACCEPT_EULA_ML=Y -e MSSQL_SA_PASSWORD=YourPassword -v C:\Temp:/var/opt/mssql/images -p 1433:1433 mssql-server-mlservices

Next you will need to install sqlmlutils into the docker image. The instructions provided by Microsoft, which can be found here: Install R packages with sqlmlutils – SQL Server Machine Learning Services | Microsoft Docs, do not work for a docker image.

Docker image issue #2

The quick and dirty way to install sqlmlutils to the docker image is to bash into it. This is considered hacky and should be avoided in a production environment. For production I would create a dockerfile so that package dependencies are part of the docker image definition and are installed on build.

To bash into the container running SQL Server, use the following command.

docker exec -it <your container id> "bash"

To find out the ID of the container running SQL Server, run the following command.

docker ps --filter "ancestor=mssql-server-mlservices"

Hopefully you will have only one container built off of the mssql-server-mlservices image.

Once you have connected to the container you will need to install some additional software into the Ubuntu Linux instance running inside the container.

Docker image issue #3

Before you can install new R packages into the container you will need to update and install some additional dependencies.

sudo apt-get update


sudo apt-get install build-essential gdb


sudo apt-get install gfortran

Once you have installed these dependencies you can add packages to the container. For simplicity I have added the igraph package directly using the following command.

sudo su - -c "R -e \"install.packages('igraph', repos='http://cran.rstudio.com/')\""

In production I would have installed sqlmlutils and then used that package to install igraph; however, to do this I would have to first install unixodbc-dev, then ODBC, and then manually install sqlmlutils from the zip file.

If you have a problem installing igraph you most likely need to create the ~/.R/Makevars file using the following commands.

cd ~
mkdir -p .R
cat > .R/Makevars
CC=gcc-5
CXX=g++
CXX1X = g++-5
CFLAGS = -std=c99 -Wall -pedantic

Press CTRL-D to save and exit the file.

Enable External Scripts

Now that your environment is set up you need to enable external scripts to run in your SQL Server instance. To do this, connect to your SQL Server instance using SQL Server Management Studio or Azure Data Studio and then run the following command.

EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;
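
Before moving on, it is worth running a quick sanity check to confirm that the external script runtime is reachable. The snippet below simply prints the R version string; this is just a hedged check of my own, and if it fails you may need to restart the container so the configuration change takes effect.

EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'
    # Print the R version to confirm the external runtime is working
    print(R.version.string)';
GO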

Generate image using R

Now that we are all set up we can use R to generate the following visuals. These images will be stored as PNG files in your C:\Temp folder (the folder mounted into your container, if you are using a container).

System Data Flow

The system level image was generated with the following code.

--System level
EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'
    require(igraph)
    g <- graph.data.frame(graphdf)
    V(g)$label.cex <- 2
    png(filename = "/var/opt/mssql/images/SystemDataFlow.png", height = 800, width = 1500, res = 100);
    plot(g, vertex.label.family = "sans", vertex.size = 5)
    dev.off()',
    @input_data_1 = N'
    SELECT
        src.SystemName AS Source,
        trgt.SystemName AS Target
    FROM
        [DataLineage].[dbo].[DataEntity] src,
        [DataLineage].[dbo].[DataFlowsTo] df,
        [DataLineage].[dbo].[DataEntity] trgt
    WHERE MATCH(src-(df)->trgt);',
    @input_data_1_name = N'graphdf'
GO

Database Level

The database level image was generated with the following code.

--Database level
EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'
    require(igraph)
    g <- graph.data.frame(graphdf)
    V(g)$label.cex <- 2
    png(filename = "/var/opt/mssql/images/DatabaseDataFlow.png", height = 2400, width = 3000, res = 50);
    plot(g, vertex.label.family = "sans", vertex.size = 5)
    dev.off()',
    @input_data_1 = N'
    SELECT
        src.DatabaseName AS Source,
        trgt.DatabaseName AS Target
    FROM
        [DataLineage].[dbo].[DataEntity] src,
        [DataLineage].[dbo].[DataFlowsTo] df,
        [DataLineage].[dbo].[DataEntity] trgt
    WHERE MATCH(src-(df)->trgt);',
    @input_data_1_name = N'graphdf'
GO

Table Level

The table level image was generated with the following code.

--Table level
EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'
    require(igraph)
    g <- graph.data.frame(graphdf)
    V(g)$label.cex <- 2
    png(filename = "/var/opt/mssql/images/TableDataFlow.png", height = 800, width = 1500, res = 100);
    plot(g, vertex.label.family = "sans", vertex.size = 5)
    dev.off()',
    @input_data_1 = N'
    SELECT
        src.TableName AS Source,
        trgt.TableName AS Target
    FROM
        [DataLineage].[dbo].[DataEntity] src,
        [DataLineage].[dbo].[DataFlowsTo] df,
        [DataLineage].[dbo].[DataEntity] trgt
    WHERE MATCH(src-(df)->trgt);',
    @input_data_1_name = N'graphdf'
GO

Column Level

The column level image was generated with the following code.

--Column level
EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'
    require(igraph)
    g <- graph.data.frame(graphdf)
    V(g)$label.cex <- 2
    png(filename = "/var/opt/mssql/images/ColumnDataFlow.png", height = 2400, width = 3000, res = 50);
    plot(g, vertex.label.family = "sans", vertex.size = 5)
    dev.off()',
    @input_data_1 = N'
    SELECT
        CONCAT(src.TableName, ''.'', src.ColumnName) AS Source,
        CONCAT(trgt.TableName, ''.'', trgt.ColumnName) AS Target
    FROM
        [DataLineage].[dbo].[DataEntity] src,
        [DataLineage].[dbo].[DataFlowsTo] df,
        [DataLineage].[dbo].[DataEntity] trgt
    WHERE MATCH(src-(df)->trgt);',
    @input_data_1_name = N'graphdf'
GO

Security

With SQL Server 2019 there is a known issue when writing files to disk using an external script.

igraph png not working on SQL Server 2019 (microsoft.com)

I was able to bypass this issue by writing my PNG files to the folder /var/opt/mssql/images/ inside the docker container. This folder is mounted to C:\Temp, which makes the files accessible from my local host. I mounted the folder when running the docker image using the -v argument.

docker run -d -e MSSQL_PID=Developer -e ACCEPT_EULA=Y -e ACCEPT_EULA_ML=Y -e MSSQL_SA_PASSWORD=YourPassword -v C:\Temp:/var/opt/mssql/images -p 1433:1433 mssql-server-mlservices

Summary

I was able to visualize the data by generating PNG files using R and the igraph package, which is very handy for producing documentation that is easily consumable by anyone. The basic concepts explained in this article can be extended and made dynamic, so that a user can pick which level to see lineage at, or enhanced so that the full hierarchy is shown across all levels based on user input.

In the next article I will cover how to visualize data in the graph tables using PowerBI.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

How to track data lineage with SQL Server Graph Tables – Part 4 Querying the Graph Tables

In the previous article I showed you how you can populate the graph tables using the database procedures we created in an earlier post. In this article I will show you how you can query the data contained in the graph tables to quickly answer questions regarding data lineage.

Data Flow

As you may recall from the previous article, we created data to store the flow of data from two source systems into our BI system as depicted in the illustration below.

MATCH

Starting in SQL Server 2017 the MATCH clause can be used with node and edge graph tables to traverse a path in the graph tables. Imagine a scenario where you want to know the impact of removing a field in your Staging database which you no longer believe you need. In the previous article we captured data lineage for a data flow coming out of JDE; one of the columns was an invoice date in Julian format called RPDIVJ, which comes from the table F0411. For the purpose of this article let's assume that the business believes this date field is not useful and needs to be removed. Without this data lineage solution you would need to manually search through ETL processes to determine the impact of removing this column, which is a time-consuming task. Thankfully we have a more elegant solution.

We can query our graph tables in our DataLineage database to quickly see which processes will need to be modified. We will start by querying Staging which is the start of the data flow.

Run the following code to see which ETL process will need to be modified.

SELECT
    df.ProcessName,
    df.CreatedDt
FROM
    [DataLineage].[dbo].[DataEntity] src,
    [DataLineage].[dbo].[DataFlowsTo] df,
    [DataLineage].[dbo].[DataEntity] trgt
WHERE MATCH(src-(df)->trgt)
    AND src.SystemName = 'Business Intelligence System'
    AND src.DatabaseName = 'Staging'
    AND src.TableName = 'F0411'
    AND src.ColumnName = 'RPDIVJ'


This should give you results like the table below.

ProcessName                  CreatedDt
Load Fact Accounts Payable   2020-02-12
Load Fact Accounts Payable   2020-02-12

This tells you that the ETL process Load Fact Accounts Payable uses the field RPDIVJ in the Staging database twice. I have also included the CreatedDt field in the query so that you know whether the data is still relevant; typically, the older the data the less reliable it is for making decisions. If you have built your BI system correctly you should be able to compare the age of this data with the last modified date of the ETL process to see if there is a significant difference; if there is, odds are you need to update the data lineage for this process.
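
If you want to routinely spot lineage metadata that may be out of date, a simple age check against the edge table works well. The sketch below flags any process whose lineage record is older than 90 days; the 90-day threshold is my own assumption and should be tuned to your release cadence.

--Flag lineage records that may be stale (the 90-day threshold is an assumption)
SELECT
    df.ProcessName,
    df.CreatedDt,
    DATEDIFF(DAY, df.CreatedDt, GETDATE()) AS AgeInDays
FROM
    [DataLineage].[dbo].[DataFlowsTo] df
WHERE
    DATEDIFF(DAY, df.CreatedDt, GETDATE()) > 90
ORDER BY AgeInDays DESC;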

Now let’s modify the query above to get some more meaningful insights.

SELECT 
			src.SystemName AS SourceSystem,
			src.DatabaseName AS SourceDatabase,
			src.TableName AS SourceTable,
			src.ColumnName AS SourceColumn,
			src.CreatedDt AS SourceNodeCreatedDt,
			
			df.ProcessName,
			df.CreatedDt AS ProcessCreatedDt,
			
			trgt.SystemName AS TargetSystem,
			trgt.DatabaseName AS TargetDatabase,
			trgt.TableName AS TargetTable,
			trgt.ColumnName AS TargetColumn,
			trgt.CreatedDt AS TargetNodeCreatedDt

 FROM 
	  [DataLineage].[dbo].[DataEntity] src,
          [DataLineage].[dbo].[DataFlowsTo] df,
          [DataLineage].[dbo].[DataEntity] trgt
 
WHERE MATCH(src-(df)->trgt)
	 AND src.SystemName = 'Business Intelligence System'
	 AND src.DatabaseName = 'Staging'
	 AND src.TableName = 'F0411'
	 AND src.ColumnName = 'RPDIVJ'

This query provides you with details on the source, target and the process that copies the data from source to target. When you run this query, it should provide results similar to the table below.

Field                 First row                       Second row
SourceSystem          Business Intelligence System    Business Intelligence System
SourceDatabase        Staging                         Staging
SourceTable           F0411                           F0411
SourceColumn          RPDIVJ                          RPDIVJ
SourceNodeCreatedDt   2020-02-12                      2020-02-12
ProcessName           Load Fact Accounts Payable      Load Fact Accounts Payable
ProcessCreatedDt      2020-02-12                      2020-02-12
TargetSystem          Business Intelligence System    Business Intelligence System
TargetDatabase        EDW                             EDW
TargetTable           Fact Accounts Payable           Fact Accounts Payable
TargetColumn          InvoiceDate-Julian              InvoiceDate-Gregorian
TargetNodeCreatedDt   2020-02-12                      2020-02-12

The MATCH clause is great; however, it is still difficult to understand the full impact of removing the RPDIVJ column. This only tells us about the impact on the staging load process and does not tell us the downstream impacts on the EDW and the cubes which may rely on this field. What we really want to know is the shortest path across all layers of the BI system.

SHORTEST_PATH()

To get a better understanding of the end-to-end impact of removing this field we will use the SHORTEST_PATH option. This is a new feature available only in SQL Server 2019 and later. To use this feature, write a new query as follows.

SELECT
	CONCAT(src.SystemName,'.', src.DatabaseName, '.', src.TableName, '.', src.ColumnName) AS Source,
	COUNT(trgt.ColumnName) WITHIN GROUP (GRAPH PATH) AS [BI System Layer],
	STRING_AGG(CONCAT(trgt.SystemName, '.', trgt.DatabaseName, '.', trgt.TableName, '.', trgt.ColumnName) , ' -> ') WITHIN GROUP (GRAPH PATH) AS [Data Lineage],
	STRING_AGG(df.ProcessName , ', ') WITHIN GROUP (GRAPH PATH) AS [ETL Processes]

FROM
	[DataLineage].[dbo].[DataEntity] AS src,
	[DataLineage].[dbo].[DataFlowsTo] FOR PATH AS df,
	[DataLineage].[dbo].[DataEntity] FOR PATH AS trgt
WHERE
	MATCH (SHORTEST_PATH(src(-(df)->trgt)+))
	 AND src.SystemName = 'Finance'
	 AND src.DatabaseName = 'JDE'
	 AND src.TableName = 'F0411'
	 AND src.ColumnName = 'RPDIVJ'
ORDER BY 2 DESC

After running the query above you should get results like the table below.

Row 1
  Source:          Finance.JDE.F0411.RPDIVJ
  BI System Layer: 3
  Data Lineage:    Business Intelligence System.Staging.F0411.RPDIVJ -> Business Intelligence System.EDW.Fact Accounts Payable.InvoiceDate-Gregorian -> Business Intelligence System.SSAS Accounts Payable Model.Fact Accounts Payable.InvoiceDate
  ETL Processes:   Load F0411, Load Fact Accounts Payable, Process SSAS Accounts Payable Model

Row 2
  Source:          Finance.JDE.F0411.RPDIVJ
  BI System Layer: 2
  Data Lineage:    Business Intelligence System.Staging.F0411.RPDIVJ -> Business Intelligence System.EDW.Fact Accounts Payable.InvoiceDate-Julian
  ETL Processes:   Load F0411, Load Fact Accounts Payable

Row 3
  Source:          Finance.JDE.F0411.RPDIVJ
  BI System Layer: 2
  Data Lineage:    Business Intelligence System.Staging.F0411.RPDIVJ -> Business Intelligence System.EDW.Fact Accounts Payable.InvoiceDate-Gregorian
  ETL Processes:   Load F0411, Load Fact Accounts Payable

Row 4
  Source:          Finance.JDE.F0411.RPDIVJ
  BI System Layer: 1
  Data Lineage:    Business Intelligence System.Staging.F0411.RPDIVJ
  ETL Processes:   Load F0411

Let’s break down the results and look at the first row returned by the query.

The first column, Source, is the fully qualified source for the data:

Finance.JDE.F0411.RPDIVJ

The second field, BI System Layer, tells you how many layers up the data travels; in this case the data makes its way through all 3 layers of the BI system (Staging, EDW, SSAS).

The third field, Data Lineage, gives you the source-to-target mapping from end to end:

Business Intelligence System.Staging.F0411.RPDIVJ -> Business Intelligence System.EDW.Fact Accounts Payable.InvoiceDate-Gregorian -> Business Intelligence System.SSAS Accounts Payable Model.Fact Accounts Payable.InvoiceDate

The last field, ETL Processes, lets you know the names of all the ETL processes that depend on the data coming from Finance.JDE.F0411.RPDIVJ:

Load F0411, Load Fact Accounts Payable, Process SSAS Accounts Payable Model
The query is using SHORTEST_PATH, STRING_AGG and COUNT to provide meaningful information regarding data lineage on a specific field used as a source for the BI system. With this query you can quickly understand the impact of removing this field across all layers of the BI system.
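
If you only care about where the data ultimately ends up, rather than every hop along the way, the same SHORTEST_PATH pattern can be combined with LAST_VALUE, which is also supported WITHIN GROUP (GRAPH PATH). The sketch below is a variation of my own on the query above and returns just the final table and column reached on each path.

SELECT
	CONCAT(src.SystemName, '.', src.DatabaseName, '.', src.TableName, '.', src.ColumnName) AS Source,
	LAST_VALUE(trgt.TableName) WITHIN GROUP (GRAPH PATH) AS [Final Table],
	LAST_VALUE(trgt.ColumnName) WITHIN GROUP (GRAPH PATH) AS [Final Column]
FROM
	[DataLineage].[dbo].[DataEntity] AS src,
	[DataLineage].[dbo].[DataFlowsTo] FOR PATH AS df,
	[DataLineage].[dbo].[DataEntity] FOR PATH AS trgt
WHERE
	MATCH (SHORTEST_PATH(src(-(df)->trgt)+))
	AND src.SystemName = 'Finance'
	AND src.DatabaseName = 'JDE'
	AND src.TableName = 'F0411'
	AND src.ColumnName = 'RPDIVJ'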

Summary

By leveraging Graph tables and new querying capabilities associated with these tables you can quickly answer questions regarding data lineage and provide greater trust, understanding and ultimately adoption of the BI system.

In the next article I will cover how to visualize data in the graph tables using R.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

How to track data lineage with SQL Server Graph Tables – Part 3 Populate Graph Tables


In this 3rd article I will show you how to use the procedures that we created in the previous post to populate the graph tables with useful data lineage information.

Data Flow

We will simulate the flow of data from two source systems (Finance and HR) into a data warehouse/business intelligence system as illustrated in the diagram below.

Staging

Most modern BI systems include a staging area, also sometimes referred to as a landing area. Data is collected from various disparate systems and loaded into the staging database with very minimal transformations, typically only data type conversions. The goal is to quickly grab the data from the different source systems and load it into the staging area for further processing so that any performance impact to the source systems is minimized.

To capture the data lineage for this first hop from source system to staging we can use our procedure as follows.

-- JDE to Staging
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Finance',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'JDE',
		@TargetDatabase = 'Staging',
		
		--Table Level
		@SourceTable = 'F0411',
		@TargetTable = 'F0411',

		--Column Level
		@SourceColumnList = 'RPKCO, RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ',
		@TargetColumnList = 'RPKCO, RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ',

		@ProcessName = 'Load F0411';
GO



-- HR to Staging 
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'HR',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Payroll',
		@TargetDatabase = 'Staging',
		
		--Table Level
		@SourceTable = 'EMP',
		@TargetTable = 'EMP',

		--Column Level
		@SourceColumnList = 'EMPID, HIREDT, JOBTITLE',
		@TargetColumnList = 'EMPID, HIREDT, JOBTITLE',

		@ProcessName = 'Load EMP';
GO 

As you can see from the code above it is very easy to map an input table to an output table using the procedures we created in the previous steps.
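
After running these calls it is worth confirming that the expected nodes and edges landed in the graph tables. A couple of simple counts, as sketched below, are enough for a quick sanity check.

--Quick sanity check on the graph tables
SELECT COUNT(*) AS NodeCount
FROM [DataLineage].[dbo].[DataEntity];

SELECT ProcessName, COUNT(*) AS EdgeCount
FROM [DataLineage].[dbo].[DataFlowsTo]
GROUP BY ProcessName;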

EDW

Next, we will capture the data lineage from Staging to the Enterprise Data Warehouse (EDW)

-- Staging to EDW for Finance tables

EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Staging',
		@TargetDatabase = 'EDW',
		
		--Table Level
		@SourceTable = 'F0411',
		@TargetTable = 'Fact Accounts Payable',

		--Column Level
		@SourceColumnList = 'RPDOC, RPDCT, RPSFX, RPSFXE, RPDCTA, RPAN8, RPPYE, RPSNTO, RPDIVJ, RPDIVJ',
		@TargetColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number, Payee Address Number, 	Approvers Address Number, Invoice Date - Julian, Invoice Date - Gregorian',

		@ProcessName = 'Load Fact Accounts Payable';
GO



-- Staging to EDW for HR tables
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'Staging',
		@TargetDatabase = 'EDW',
		
		--Table Level
		@SourceTable = 'EMP',
		@TargetTable = 'Dim Employee',

		--Column Level
		@SourceColumnList = 'EMPID, HIREDT, HIREDT, JOBTITLE',
		@TargetColumnList = 'Employee Number, Hire Date, Tenure, Job Title',

		@ProcessName = 'Load Dim Employee';
GO

If you have a keen eye you might have noticed that we are mapping some of the fields twice. The field RPDIVJ in the JDE table F0411 is mapped to both Invoice Date – Julian and Invoice Date – Gregorian. The field HIREDT in the HR EMP table is mapped to both the Hire Date and the Tenure columns. In both scenarios this implies some transformation logic between these columns.
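
These one-to-many mappings are easy to find later on. The query below, a small sketch that reuses the MATCH pattern from the next article in the series, lists every source column that feeds more than one target column and therefore implies transformation logic.

--Source columns that feed more than one target column
SELECT
    src.TableName,
    src.ColumnName,
    COUNT(*) AS TargetColumnCount
FROM
    [DataLineage].[dbo].[DataEntity] src,
    [DataLineage].[dbo].[DataFlowsTo] df,
    [DataLineage].[dbo].[DataEntity] trgt
WHERE MATCH(src-(df)->trgt)
GROUP BY src.TableName, src.ColumnName
HAVING COUNT(*) > 1;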

Accounts Payable Model

The final step in our example is the process that loads the SQL Server Analysis Services (SSAS) model.

-- EDW to SSAS Finance
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'EDW',
		@TargetDatabase = 'SSAS Accounts Payable Model',
		
		--Table Level
		@SourceTable = 'Fact Accounts Payable',
		@TargetTable = 'Fact Accounts Payable',

		--Column Level
		@SourceColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number,  Invoice Date - Gregorian',
		@TargetColumnList = 'Document, Document Type, Document Pay Item, Pay Item Extension Number, Adjusting Document Type, Address Number,  Invoice Date',

		@ProcessName = 'Process SSAS Accounts Payable Model'



-- EDW to SSAS HR
EXEC	[dbo].[Create_Data_Lineage]
		
		--System level
		@SourceSystem = 'Business Intelligence System',
		@TargetSystem = 'Business Intelligence System',

		--Database level 
		@SourceDatabase = 'EDW',
		@TargetDatabase = 'SSAS Accounts Payable Model',
		
		--Table Level
		@SourceTable = 'Dim Employee',
		@TargetTable = 'Dim Employee',

		--Column Level
		@SourceColumnList = 'Employee Number, Hire Date, Tenure,  Job Title',
		@TargetColumnList = 'Employee Number, Hire Date, Tenure,  Job Title',

		@ProcessName = 'Process SSAS Accounts Payable Model'


GO

As you can see in the code above, the Invoice Date – Gregorian from the Finance system is used to populate the Invoice Date, and the Invoice Date – Julian did not make it into the SSAS cube.
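
You can confirm this directly from the graph tables once the lineage above has been loaded. The query below, using the same MATCH pattern as elsewhere in this series, lists every column that flows into the SSAS Accounts Payable Model.

--Columns that flow into the SSAS model
SELECT
    src.TableName  AS SourceTable,
    src.ColumnName AS SourceColumn,
    trgt.TableName  AS TargetTable,
    trgt.ColumnName AS TargetColumn
FROM
    [DataLineage].[dbo].[DataEntity] src,
    [DataLineage].[dbo].[DataFlowsTo] df,
    [DataLineage].[dbo].[DataEntity] trgt
WHERE MATCH(src-(df)->trgt)
    AND trgt.DatabaseName = 'SSAS Accounts Payable Model';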

Summary

The database procedures make it really easy to capture data lineage. They could even be enhanced to accept and parse JSON or XML inputs so that data lineage could be dropped in and maintained using a web application… can someone say Power App?
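
As a rough illustration of the JSON idea, the sketch below shows a hypothetical wrapper procedure (the procedure name and JSON shape are my own assumptions, not something built earlier in this series) that parses a single JSON document with JSON_VALUE and forwards the values to Create_Data_Lineage.

--Hypothetical wrapper: accepts one JSON document and forwards it to Create_Data_Lineage
CREATE PROCEDURE dbo.Create_Data_Lineage_FromJson
    @LineageJson NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    --JSON_VALUE returns nvarchar(4000) by default, which is enough for a sketch
    DECLARE
        @SourceSystem     VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.source.system'),
        @SourceDatabase   VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.source.database'),
        @SourceTable      VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.source.table'),
        @SourceColumnList VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.source.columns'),
        @TargetSystem     VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.target.system'),
        @TargetDatabase   VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.target.database'),
        @TargetTable      VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.target.table'),
        @TargetColumnList VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.target.columns'),
        @ProcessName      VARCHAR(MAX) = JSON_VALUE(@LineageJson, '$.processName');

    EXEC dbo.Create_Data_Lineage
        @SourceSystem     = @SourceSystem,
        @SourceDatabase   = @SourceDatabase,
        @SourceTable      = @SourceTable,
        @SourceColumnList = @SourceColumnList,
        @TargetSystem     = @TargetSystem,
        @TargetDatabase   = @TargetDatabase,
        @TargetTable      = @TargetTable,
        @TargetColumnList = @TargetColumnList,
        @ProcessName      = @ProcessName;
END
GO

A web application or Power App could then submit a single JSON document per mapping rather than calling the procedure with ten separate parameters.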

In the next article I will cover how to query the graph tables as well as visualize the data in order to answer data lineage questions.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

How to track data lineage with SQL Server Graph Tables – Part 2 Create Database Procedures

This is my second post in a series which explains how to use SQL Server Graph Tables to track data lineage.

In my previous post I covered creating the node and edges tables required to store data lineage. In this post I will show you how to create database procedures in order to simplify the process of populating the graph tables. These procedures are deliberately basic so that they are easy to understand but they are also easily extensible and can serve as the starting point for a more sophisticated solution.

Overview

Before diving into the code it is always good to have a conceptual understanding of the process. The flowchart below explains the logic of these procedures.

As you can see in the diagram above, we check if the source or target is new and then, based on the condition, create the appropriate nodes and edges. In order to avoid duplicating code we will create one procedure for creating the nodes and another for creating the edges, and then call these sub-processes from a parent procedure that controls the logic flow.

Create Nodes Procedure

First, we will create a procedure to populate our node table. As you recall from the previous post our node table is called [dbo].[DataEntity].

DROP PROCEDURE IF EXISTS dbo.Create_Nodes
GO

CREATE PROCEDURE dbo.Create_Nodes

    @ColumnList VARCHAR(MAX),
    @Table VARCHAR(MAX),
    @Database VARCHAR(MAX),
    @System VARCHAR(MAX)

AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @NodeIDs TABLE (ID NVARCHAR(1000));

    --Insert data into the Node table and keep the new node IDs so that we can populate the Edge table
    INSERT INTO [dbo].[DataEntity]
    OUTPUT INSERTED.$NODE_ID AS TOID INTO @NodeIDs(ID)
    SELECT
        value AS [ColumnName],
        @Table AS [TableName],
        @Database AS [DatabaseName],
        @System AS [SystemName],
        GETDATE() AS [CreatedDt]

    FROM  
        STRING_SPLIT(@ColumnList, ',');

    SELECT ID FROM @NodeIDs;    
    RETURN 
END

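
If you want to test Create_Nodes on its own before wiring it into the rest of the solution, you can capture the returned node IDs into a table variable. The call below is only an example of my own using values from later in this series; normally the parent procedure created below is the only caller.

--Stand-alone test of Create_Nodes (example values only)
DECLARE @NewNodeIDs TABLE (ID NVARCHAR(1000));

INSERT INTO @NewNodeIDs
EXEC dbo.Create_Nodes
    @ColumnList = 'EMPID,HIREDT,JOBTITLE',
    @Table      = 'EMP',
    @Database   = 'Payroll',
    @System     = 'HR';

SELECT ID FROM @NewNodeIDs;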
Now that we have a procedure to create our nodes, we will create another procedure to create the edges.

Create Edges Procedure

We need to pass the procedure a list of node IDs to insert into the [dbo].[DataFlowsTo] table, so we will create a user-defined table type to store and pass this information.

/* Create a table type in order to pass a list of Node IDs to this procedure*/
DROP TYPE IF EXISTS dbo.NodeIDTableType
CREATE TYPE dbo.NodeIDTableType  AS TABLE  ( ID NVARCHAR(1000) );

GO

After the user defined table type is created we can create the procedure to populate the edge table.

DROP PROCEDURE IF EXISTS dbo.Create_Edges
GO

CREATE PROCEDURE dbo.Create_Edges

    @Source  NodeIDTableType READONLY,
    @Target  NodeIDTableType READONLY,
    @Process VARCHAR(MAX)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

   --Get the FROM and TO node ids into a common table expression and populate the edge table in batch.
   
    WITH CTE (FROMID, TOID) AS
    (
        SELECT 
            A.FROMID,
            B.TOID
        FROM
            (SELECT ID AS FROMID, ROW_NUMBER() OVER(ORDER BY ID ASC) AS RowNum FROM @Source) AS A
        JOIN 
            (SELECT ID AS TOID, ROW_NUMBER() OVER(ORDER BY ID ASC) AS RowNum FROM @Target) AS B
        ON A.RowNum = B.RowNum
    )
    
    --Insert Lineage data into DataFlowsTo Edge table
     INSERT INTO [dbo].[DataFlowsTo] ($from_id, $to_id, ProcessName)
     SELECT CTE.FROMID, CTE.TOID, @Process FROM CTE;

    
END

In case you are wondering I used a common table expression (CTE) so that the edge records could be created in bulk rather than one at a time.

Create Data Lineage Procedure

This is the final procedure, which checks to see if the source or target is new and calls the procedures above as required. The conditional logic check is premised on the assumption that the combination of system name, database name, table name and column name is unique.

USE [DataLineage]
GO

DROP PROCEDURE IF EXISTS dbo.Create_Data_Lineage
GO


CREATE PROCEDURE [dbo].[Create_Data_Lineage] 

    --Source data
    @SourceSystem VARCHAR(MAX),
    @SourceDatabase VARCHAR(MAX),
    @SourceTable VARCHAR(MAX),
    @SourceColumnList VARCHAR(MAX),

    --Target data
    @TargetSystem VARCHAR(MAX),
    @TargetDatabase VARCHAR(MAX),
    @TargetTable VARCHAR(MAX),
    @TargetColumnList VARCHAR(MAX),

    @ProcessName VARCHAR(MAX)

AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

        
    --DECLARE @SourceIDs TABLE (ID NVARCHAR(1000));
    --DECLARE @TargetIDs TABLE (ID NVARCHAR(1000));
    
    DECLARE @SourceIDs NodeIDTableType;
    DECLARE @TargetIDs NodeIDTableType;

    
    --Prepare the list of fields by removing any spaces or tabs in the comma-separated lists of source and target columns
    SET @SourceColumnList = REPLACE(REPLACE(@SourceColumnList,' ', ''), char(9), '');
    SET @TargetColumnList = REPLACE(REPLACE(@TargetColumnList,' ', ''), char(9), '');

    --Check for existing source nodes; if found use those IDs, otherwise create new nodes and use the new IDs
    INSERT INTO @SourceIDs
    SELECT 
        $NODE_ID
    FROM 
        [dbo].[DataEntity] DE
    INNER JOIN 
            (SELECT
                value AS [ColumnName],
                @SourceTable AS [TableName],
                @SourceDatabase AS [DatabaseName],
                @SourceSystem AS [SystemName]
            FROM  
                STRING_SPLIT(@SourceColumnList, ',')
            ) SRC
    ON
        DE.[ColumnName] = SRC.[ColumnName]
    AND 
        DE.[TableName] = SRC.[TableName]
    AND
        DE.[DatabaseName] = SRC.[DatabaseName]
    AND
        DE.[SystemName] = SRC.[SystemName];

    --Check for existing target nodes; if found use those IDs, otherwise create new nodes and use the new IDs
    INSERT INTO @TargetIDs
    SELECT 
        $NODE_ID
    FROM 
        [dbo].[DataEntity] DE
    INNER JOIN 
            (SELECT
                value AS [ColumnName],
                @TargetTable AS [TableName],
                @TargetDatabase AS [DatabaseName],
                @TargetSystem AS [SystemName]
            FROM  
                STRING_SPLIT(@TargetColumnList, ',')
            ) TRGT
    ON
        DE.[ColumnName] = TRGT.[ColumnName]
    AND 
        DE.[TableName] = TRGT.[TableName]
    AND
        DE.[DatabaseName] = TRGT.[DatabaseName]
    AND
        DE.[SystemName] = TRGT.[SystemName];

    IF (NOT EXISTS (SELECT 1 FROM @SourceIDs)) AND (NOT EXISTS (SELECT 1 FROM @TargetIDs))
        BEGIN

            --Create source nodes
            INSERT @SourceIDs
            EXEC  dbo.CREATE_NODES 
                    @System = @SourceSystem,
                    @Database = @SourceDatabase,
                    @Table = @SourceTable,
                    @ColumnList = @SourceColumnList;
        
            --Create target nodes
            INSERT @TargetIDs
            EXEC  dbo.CREATE_NODES 
                    @System = @TargetSystem,
                    @Database = @TargetDatabase,
                    @Table = @TargetTable,
                    @ColumnList = @TargetColumnList;
        
        
            --Create edges between source and target
            EXEC dbo.Create_Edges @Source = @SourceIDs, @Target = @TargetIDs, @Process = @ProcessName;
        END
    ELSE IF (EXISTS (SELECT 1 FROM @SourceIDs)) AND (NOT EXISTS (SELECT 1 FROM @TargetIDs))
        BEGIN
        
            --create target nodes
            INSERT @TargetIDs
            EXEC  dbo.CREATE_NODES 
                    @System = @TargetSystem,
                    @Database = @TargetDatabase,
                    @Table = @TargetTable,
                    @ColumnList = @TargetColumnList;

            --Create edges between source and target
            EXEC dbo.Create_Edges @Source = @SourceIDs, @Target = @TargetIDs, @Process = @ProcessName;
        END
    ELSE IF (NOT EXISTS (SELECT 1 FROM @SourceIDs)) AND (EXISTS (SELECT 1 FROM @TargetIDs))
        BEGIN
        
            --create source nodes
            INSERT @SourceIDs
            EXEC  dbo.CREATE_NODES 
                    @System = @SourceSystem,
                    @Database = @SourceDatabase,
                    @Table = @SourceTable,
                    @ColumnList = @SourceColumnList;

            --Create edges between source and target
            EXEC dbo.Create_Edges @Source = @SourceIDs, @Target = @TargetIDs, @Process = @ProcessName
        END
    ELSE 
        --No new nodes required
        --Create edges between source and target
        EXEC dbo.Create_Edges @Source = @SourceIDs, @Target = @TargetIDs, @Process = @ProcessName
    

END
GO
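
Because the conditional logic assumes that the combination of system, database, table and column name is unique in the node table, it is worth checking for duplicates from time to time. The query below is a simple sketch that surfaces any node definitions that would break that assumption.

--Node definitions that violate the uniqueness assumption
SELECT
    SystemName,
    DatabaseName,
    TableName,
    ColumnName,
    COUNT(*) AS DuplicateCount
FROM [dbo].[DataEntity]
GROUP BY SystemName, DatabaseName, TableName, ColumnName
HAVING COUNT(*) > 1;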

Summary

In this article we have created the database procedures to simplify the process of creating the necessary node and edge records to capture data lineage between source and target fields.

In the next article I will cover how to call these procedures and query the graph tables to answer data lineage questions.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

Abstract and encapsulate with Power BI and SQL Server Table-Valued Functions – Use Case 2: Change results based on a database point in time and a user defined parameter

This is the second article in which I cover different use cases for using SQL Server Table-Valued Functions with Power BI.

In the previous article I showed you how a table valued function can be used to hide lower levels of a hierarchy based on user id. This is handy if you need to prevent certain users from breaking down aggregate measures while interacting with a report or dashboard in Power BI.

In this article I will show you how you can use table valued functions to filter the results based on a date range stored in your database and allow the report author to control how many years of data to bring back with an optional input parameter.

Prerequisites

The prerequisites are the same as in the previous article: SQL Server 2016 or later, the Wide World Importers sample database and Power BI Desktop.

Use case 2: Change results based on time

This use case comes in handy if you want your results to be filtered based on a date and time in your database and not on the date time of your Power BI users. This situation may occur in a globally distributed system in which Power BI users and the database they are querying are located in different time zones or in situations where the data lags behind the current date/time of your users. I will cover a situation where you want the data set to only show data up to a point in time stored in your database.

Using the Wide World Importers sample database suppose you wanted to limit the orders that a person can report on to the last year in which there were orders in the database. Run the following query.

USE WideWorldImportersDW
GO

SELECT
MIN([Order Date Key]) AS [Earliest Order Date]
, MAX([Order Date Key]) AS [Latest Order Date]
FROM
WideWorldImportersDW.Fact.[Order]
GO

As you can see from the results, we have orders from 2013 up to 2016. If we were to use the Power BI relative date slicer and set it to only show data from the past 1 year, we would not see any results, because the relative date slicer is based on today's date (April 6, 2019) and not on a date in the database. One way to overcome this problem is to use a SQL Server table-valued function and encapsulate the logic to only show orders from June 1, 2015 to May 31, 2016. To do this we will create a new function using the code below.

Create Function

Run the following code in the Wide World Importers database to create a new function. We are including an optional parameter so that the report author can change how many years they want to go back when they connect to the data.

--DROP FUNCTION dbo.ufn_Orders_PastYear
--GO


CREATE FUNCTION dbo.ufn_Orders_PastYear(@NumberOfYears INT = NULL)
RETURNS TABLE
AS
RETURN
(
SELECT
*
FROM
WideWorldImportersDW.Fact.[Order] ord
WHERE
ord.[Order Date Key]
BETWEEN
(SELECT DATEADD(year, -ISNULL(@NumberOfYears, 1), MAX([Order Date Key])) FROM WideWorldImportersDW.Fact.[Order])
AND
(SELECT MAX([Order Date Key]) FROM WideWorldImportersDW.Fact.[Order])
);
GO

Notice that I negate the number of years by adding a negative sign in front of the ISNULL function in the select statement; this is to simplify the report authoring experience when using this function. Next, we will make a DirectQuery connection to the function using Power BI.

Connect with Power BI

Similar to before connect to the SQL Server database using a DirectQuery connection and select the function ufn_Orders_PastYear from the list of database objects.

As you can see in the image above the function parameter @NumberOfYears appears in Power BI as an optional parameter. If you leave it blank and click apply it will pull back 1 year’s worth of data based on what is available in the database. You can enter in your own number to control how many years back you query the Orders fact table. Incorporating parameters is a very powerful way to give the report author control of the results.
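
You can also test the parameter behaviour directly in SQL Server Management Studio before connecting from Power BI. Passing DEFAULT (or NULL) falls back to one year, while an explicit value widens the window; the two test queries below are just a quick sketch of that.

--Test the function with and without an explicit number of years
SELECT COUNT(*) AS OrderRows
FROM dbo.ufn_Orders_PastYear(DEFAULT);   --defaults to the most recent year of orders

SELECT COUNT(*) AS OrderRows
FROM dbo.ufn_Orders_PastYear(2);         --most recent two years of orders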

Once the data has been loaded in let’s visualize it using a simple bar chart.

Your results should look like the following image below.

As you can see in the chart we only have data from 2015 to 2016. To make the chart a bit easier to read let's add a proper date hierarchy. We will need to build it ourselves because we are using DirectQuery to access the data, so the auto-generated date hierarchies are not available; those are only created when you import data into Power BI and set the data type to be a date.

Create Year Column

Create a new calculated column and use the following DAX code to pull out the year value from the Order Date Key field.

OrderYear = Year([Order Date Key])

Create Month Columns

Next, we will create two month columns one will be used to sort and the other will be used to display the month name on the report.

Use the following DAX code to create a new month number calculated column.

OrderMonth = Month([Order Date Key])

Now create a new calculated column to store the month name using the following DAX code.

Order Month Name = 
SWITCH (
    [OrderMonth],
    1, "January",
    2, "February",
    3, "March",
    4, "April",
    5, "May",
    6, "June",
    7, "July",
    8, "August",
    9, "September",
    10, "October",
    11, "November",
    12, "December"
)

Now we need to set the sort by column property of the Order Month Name to use the value of the OrderMonth column. To do this navigate to the model viewer and click on the Order Month Name field and then set the Sort by column to OrderMonth.

Now we will create a new Hierarchy based on OrderYear and Order Month Name.

Click on the chart and the Axis value with the new hierarchy we just created. Drill down a level to see the years and months.

As you can see this makes the chart much easier to read. Now let's insert some new data into the table and refresh the report.

Add some data

Run the following SQL to create new Date and Order records.

INSERT INTO [Dimension].[Date]
([Date]
,[Day Number]
,[Day]
,[Month]
,[Short Month]
,[Calendar Month Number]
,[Calendar Month Label]
,[Calendar Year]
,[Calendar Year Label]
,[Fiscal Month Number]
,[Fiscal Month Label]
,[Fiscal Year]
,[Fiscal Year Label]
,[ISO Week Number])
VALUES
('4/1/2019'
,1
,1
,'April'
,'Apr'
,4
,'CY2019-Apr'
,2019
,'CY2019'
,6
,'FY2019-APR'
,2019
,'FY2019'
,14)
GO
INSERT INTO [Fact].[Order]
([City Key]
,[Customer Key]
,[Stock Item Key]
,[Order Date Key]
,[Picked Date Key]
,[Salesperson Key]
,[Picker Key]
,[WWI Order ID]
,[WWI Backorder ID]
,[Description]
,[Package]
,[Quantity]
,[Unit Price]
,[Tax Rate]
,[Total Excluding Tax]
,[Tax Amount]
,[Total Including Tax]
,[Lineage Key])
VALUES
(45901
,0
,175
,'4/1/2019'
,'4/1/2019'
,76
,67
,9073
,NULL
,'April Fools ain''t no joke'
,'Each'
,3
,13.00
,15.00
,39.00
,5.85
,44.85
,9)
GO

Refresh the report and notice how it updates so that it only has one column.

This is because, relative to the latest order date we just inserted, the past year now contains only the single April 2019 record.

Combining SQL Server database functions with Power BI is a powerful way to abstract and encapsulate logic in the database thus simplifying the report authors job and ensuring the right data is presented to report consumers.

Hopefully you have found this to be another practical post.

Until next time.

Anthony

Abstract and encapsulate with Power BI and SQL Server Table-Valued Functions – Use Case 1: Change results based on user

If you’ve ever required a dynamic data source in Power BI that can change based on who the user is, when they are querying the data source or if certain data elements have changed you can leverage the ability for Power BI to connect to a table value function in SQL Server.

Table-valued functions allow you to abstract complex business logic from the report author and encapsulate it into a database object. This simplifies report building and enables you to do things like hide hierarchy levels, filter data based on a certain point in time stored in the database or check for certain data conditions and alter the query results as appropriate.

Prerequisites

  • SQL Server 2016 or later. You can download the SQL Server 2017 developer edition HERE
  • Wide World Importer sample database. A copy can be found HERE
  • Power BI Desktop. You can download the latest version from HERE

In this series of articles I will step through several use cases for direct queries from SQL Server Table-valued Functions in Power BI.

Use case 1: Change results based on user

For this first use case we will cover how you can embed some simple logic in your table-valued function to hide lower levels of a hierarchy. This is useful if you want to prevent certain individuals from breaking down aggregated values but still allow them to use data at a summary level.

Create user accounts

For the purposes of simplicity, we will create some users in the database using SQL Server authentication. Connect to your SQL Server database and execute the following SQL code.

USE [master]

GO
--Create Bob
CREATE LOGIN [Bob] WITH PASSWORD='Bob', DEFAULT_DATABASE=[WideWorldImportersDW], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

--Create Mary
CREATE LOGIN [Mary] WITH PASSWORD='Mary', DEFAULT_DATABASE=[WideWorldImportersDW], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

USE [WideWorldImportersDW]
GO

--Grant Bob access to WideWorldImportersDW
CREATE USER [Bob] FOR LOGIN [Bob] WITH DEFAULT_SCHEMA=[dbo]
GO

--Grant Mary access to WideWorldImportersDW
CREATE USER [Mary] FOR LOGIN [Mary] WITH DEFAULT_SCHEMA=[dbo]
GO

--Grant Bob access to read access to WideWorldImportersDW
ALTER ROLE db_datareader ADD MEMBER [Bob]
GO

--Grant Mary access to read access to WideWorldImportersDW
ALTER ROLE db_datareader ADD MEMBER [Mary]
GO

 

Next, we will create a function in SQL Server with the following code.

Create function

Use the code below to create a new Table-Valued function in SQL Server. The function is what we will directly connect to in Power BI.

CREATE FUNCTION dbo.ufn_Customer()  
RETURNS TABLE 
AS
RETURN   
(  
    SELECT  
       [Customer Key]
      ,[WWI Customer ID]
      ,[Customer]
      ,[Bill To Customer]
      ,[Category]
      ,CASE SYSTEM_USER
        WHEN 'Bob' THEN
            NULL
        WHEN 'Mary' THEN
            [Buying Group]
        ELSE
            [Buying Group]
        END AS [Buying Group]
      ,[Primary Contact]
      ,[Postal Code]
      ,[Valid From]
      ,[Valid To]
      ,[Lineage Key]
  FROM [WideWorldImportersDW].[Dimension].[Customer]
);  

GO

As you can see in the code above, I have created a function called dbo.ufn_Customer which returns the data from the Customer dimension table. In the code I have added a simple CASE statement that returns different data for the Buying Group based on who is executing the function.
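
Before switching over to Power BI you can verify the behaviour of the function from SQL Server Management Studio by impersonating the two logins we created earlier; SYSTEM_USER picks up the impersonated login. The quick check below is a sketch of my own and assumes you have permission to impersonate those logins.

--Check the function as Bob (Buying Group should come back as NULL)
EXECUTE AS LOGIN = 'Bob';
SELECT TOP (5) [Customer], [Buying Group] FROM dbo.ufn_Customer();
REVERT;

--Check the function as Mary (Buying Group should be populated)
EXECUTE AS LOGIN = 'Mary';
SELECT TOP (5) [Customer], [Buying Group] FROM dbo.ufn_Customer();
REVERT;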

Next we will bring this function into Power BI and see the results.

Connect with Power BI

Open Power BI and get data from SQL Server. Enter in the server name and select DirectQuery.

Click on OK. Log in using a Database account. First, we will try using Bob.

Click on connect and select the function ufn_Customer from the list of available objects.

If you wanted to force the report author to use the function rather than the actual customer table you can use database security to only expose the function and not the table. I typically use custom database roles and schemas because it is easier to manage and allows me to enable “data discovery with guard rails”.

Load the data in and create a simple hierarchy using the fields Category and Buying Group.

Drop a Matrix onto the canvas of the report and use the Category Hierarchy you just created for the rows and the Customer Key for the values.

You should have a report that looks similar to the image below. Notice how the Buying Group is null, because for Bob the function returns NULL instead of the Buying Group value.

Now let's switch to Mary and see how the lower level values of the hierarchy appear in the report. Click on Home > Edit Queries > Data Source Settings. Select the data source that you are using for this report and click on Edit Permissions…

In the Edit Permissions pop-up menu click on Edit…, then under Database enter Mary and Mary for the ID and password.

Click on Save and OK and Close. Refresh the report.

Notice how the lower levels of the hierarchy now appear in the matrix visual, because for Mary the actual Buying Group value is being returned as specified in the function.

NOTE: The reason why you need to refresh is because of caching. When using this technique to obfuscate lower levels of a hierarchy, make sure to build your visuals so that the default view of the report is at an aggregated level, and minimize the amount of caching so that Power BI re-queries the source and updates the results appropriately.

If you want to add an enterprise semantic layer such as an Analysis Services Tabular model and still have the same dynamic results you will need to build your SSAS model using DirectQuery mode because the results of the model need to change and cannot be processed and stored in memory in advance of the user querying it.

In the next article I will cover how you can use a database function to curate the results based on a date time in your database.


Hopefully you have found this to be another practical post.

Until next time.

Anthony

Instant insights, automation and action – Part 6 Integrate Power BI, Power Apps, Azure Machine Learning and Dynamics 365 using MS Flow

This is the last article in a 6-part series in which I will explain how you can integrate Power BI, Power Apps, Azure Machine Learning and Dynamics 365 using MS Flow.

For reference here are the descriptions and links to the previous articles.

Instant insights, automation and action – Part 1 Create Power App

Instant insights, automation and action – Part 2 Create Azure Machine Learning Experiment

Instant insights, automation and action – Part 3 Create the Power BI Report

Instant insights, automation and action – Part 4 Register Power BI in Azure Active Directory

Instant insights, automation and action – Part 5 Integrate with MS Flow

In this article I will explain how you can kick off a MS Flow by adding an action to your Power App and then how you can integrate the Power App into a Power BI Dashboard. Data alerts can be tied to tiles in the Power BI Dashboard that can kick off additional flows which will insert records into Dynamics. The complete system is depicted in the diagram below.


Modify the Power APP

In Part 1 of this series we created a simple app that allowed a user to enter new sales data. We now need to go back to this app and modify it. Navigate to Power Apps and edit the app


Once the app is open click on the submit button to select it and then from the Action menu at the top select Flows.


This will open up a new pane in which you can select the flow that we created in Part 5 of this series. Once you have selected the flow enter the following code into the formula expression bar.

PowerApptoAzureMLtoPowerBIbkp.Run(NAME.Text, CHANNEL.Text, REGION.Text, FRESH.Text, MILK.Text, GROCERY.Text, FROZEN.Text, DETERGENT.Text, DELICASSEN.Text,CATEGORY.Text)


This will execute the flow and pass the data values from each of the text input boxes into the flow. You can test the flow by clicking on the play button in the top right-hand corner of the screen.

Save the app and publish it so that the new version with the flow attached to the submit button is available to integrate into Power BI.

Modify the Power BI Report

Next, we will need to modify the Power BI report to drop in a PowerApps visual. Open the Power BI report that we created in Part 3 and add a new custom visual from the marketplace. We need to add the Power App custom visual to the report.


Once the new visual has been successfully added we will add it to a new page in the report. In the Power BI report create a new page and call it Data Entry. We are doing this to keep the report clean and simple. We will integrate various visuals including the Power App in a Power BI Dashboard once we have finished putting the necessary polish in the report.

Drop the new visual onto the canvas of the new page in the report and add any field from the list of fields in the dataset, I used customer name. You should see a screen like the image below.


We are not creating or editing an app since we already built it in Part 1. Click OK and then select Choose app. Select the app we created for entering new wholesale customer sales data.


Click Add. You may see another warning about creating or editing the app, just ignore this by clicking ok.


The new report page should now look like the image below.


Rename Page 1 and call it Wholesale Customer Report. You can spruce up the first page to make it look more appealing. I modified my report to make it look like this.


Once you are happy with the design of the report you need to publish it to Power BI. You can replace the existing report that we created in Part 3. Once the report has been published navigate to the cloud service and go the report that you just published.

Build the Dashboard

It’s now time to build a dashboard. With the report open pin the following visuals to a new dashboard.


To pin a visual to a dashboard click on the visual and select the pin from the menu bar.


A menu like the one below will pop up. Give the new dashboard a name such as Wholesale customer dashboard.


Select pin to create and add the visual to the new dashboard. Repeat this for all of the card visuals in the report except instead of selecting New Dashboard select Existing dashboard and if not already selected pick the Wholesale customer dashboard that we just created.

Next, we will need to pin the Power App visual. Go to the Data Entry page and pin the Power App just like we did for the card visuals. If you are having trouble selecting the pin option you may need to edit the report to pin the visual.

Your dashboard should now look something like this.


Let’s rearrange the tiles and add some new visuals by using Q&A.

First add a new visual by typing the following questions in the Q&A bar at the top of the screen.

Fresh by customer sort by fresh

Pin the visual to the existing Wholesale customer dashboard.


Then place this at the bottom of the dashboard.

Repeat these steps using the following questions:

Milk by customer sort by milk

Grocery by customer sort by grocery

Frozen by customer sort by frozen

Detergent paper by customer sort by detergent paper

Delicassen by customer sort by delicassen

Your dashboard should now look similar to the image below.


Try adding a new customer by using the Power App embedded in the Power BI Dashboard. After you have entered data into each of the input boxes in the Power App, hit the submit button and in about 5 seconds or less you should see the customer count go up and your new customer on the dashboard in real-time. Also try entering a new customer but leave the Category field blank. Notice how even though the field is blank it is still populated by the time it shows up in Power BI; that is because the Azure Machine Learning model is supplying this data.

Integrate with Dynamics 365

The last step is to add a data alert to one of the tiles which will create a record in Dynamics 365. Navigate to the dashboard if not already there and click the … in the top right hand corner of the Fresh tile.


Then select Manage alerts.


This will open a new menu on the right-hand side of the screen. From this screen click + Add alert rule. Create an alert that will fire once the Fresh value goes above a certain threshold. In my case I used 60,000.


For the purposes of this tutorial an alert based on an absolute value is adequate however a better choice would be to create an alert on a relative value such as % change since you do not want to have to go in and modify the alert to increase its threshold every time you surpass it. Click Save and close.

Go back to Manage alerts for this tile (Fresh) and this time select Use Microsoft Flow to trigger additional actions.



This will launch MS Flow. Use the default template to create a new flow triggered from a Power BI alert.


Use the template and select the Alert for Fresh from the Alert id drop-down menu. Next select Add new step and search for Dynamics 365. Then select the Dynamics 365 Create a new record action.

Your flow should now look like this.


Enter the details for the Dynamics 365 tenant and select the Entity that you want a record created in. For my purposes I created a new task to follow-up with the customer by using the tasks entity. Save the flow and test it out by entering in new sales data using the Power App embedded in the Power BI report. If you have wired up the flow correctly a new record should be created in Dynamics 365 once you have triggered the data alert in your Power BI dashboard.

We have now reached the end of this series. Hopefully you have realized that by combining Power BI, Power Apps, Flow, Azure Machine Learning and Dynamics 365 you can open up new possibilities which lead to insights, automation and action at the speed of business.

Until next time.

Anthony


Instant insights, automation and action – Part 4 Register Power BI in Azure Active Directory

This is the fourth post in a series of articles in which I explain how to integrate Power BI, Power Apps, Flow, Azure Machine Learning and Dynamics 365 to rapidly build a functioning system which allows users to analyze, insert, automate and action data.

In the previous article I covered building the Power BI Report.

In this article I will cover how to enable data to be pushed into Power BI using Flow. This is a fast, no-code solution.


This is a one-time setup that is required in order to use the Power BI connector in MS Flow. If you do not do this step you will see an error screen in MS Flow like the screen clip below.


Prerequisites

In order to complete this tutorial, you will need permission to register applications in your Azure Active Directory tenant.


For more information on the Azure AD Tenant you can click the following link.

https://docs.microsoft.com/en-us/power-bi/developer/create-an-azure-active-directory-tenant

Power BI Development Center

Log onto the Power BI Development Center and enable API features and get the key to register the app in Azure.

Go to the following URL and sign in.

https://dev.powerbi.com/apps


Enter a meaningful name for your app; I called mine AnthonysPowerBIApp but you can call yours whatever you would like. Choose Native for the Application Type and select Read all datasets and Read and write all datasets for the API Access.


Click on Register. A screen like the one below should pop up. Be sure to copy down the Application ID as this is needed to register the application in Azure.


Azure Portal

Next log onto the azure portal using the following URL https://portal.azure.com/#home

Once in the portal admin page navigate to the Azure Active Directory menu blade


Next click on App registrations and select the app that we created using the Power BI Development Center.


You can change settings in the app if you wish to tailor it by clicking on Properties.

Now that the Power BI App has been registered in Azure Active Directory you can use it in various Microsoft cloud services such as Flow.


As you can see in the image above, I no longer get a permission error and I am able to select the workspace, dataset and table.


In the next post we will build out the flow so that data is passed from the Power App to an Azure Machine Learning experiment for scoring and then into the Power BI API Enabled Dataset for real-time analytics.

Hopefully you have found this to be another practical post.

Until next time

Anthony

References

Here is the official documentation from Microsoft on how to register Power BI to push data into it using REST API calls.

https://docs.microsoft.com/en-us/power-bi/developer/overview-of-power-bi-rest-api


Instant insights, automation and action – Part 3 Create the Power BI Report

This is the third post in a series of articles in which I explain how to integrate Power BI, Power Apps, Flow, Azure Machine Learning and Dynamics 365 to rapidly build a functioning system which allows users to analyze, insert, automate and action data.

In the previous article I covered building the Power App. In this article I will cover the Power BI report.

We will build out this system in the following order: Power App, Azure Machine Learning, Power BI and finally MS Flow to connect the components. Before you can begin this tutorial there are some required prerequisites.

Prerequisites

  • Power Apps
  • MS Flow
  • Power BI Pro or Premium
  • Access to Azure Active Directory to register Power BI App
  • Dynamics 365

Create the API Enabled Dataset

Log onto Power BI and create a new app workspace called Customer Segmentation. This step is not required; however, if you are like me you create a lot of different content, so it's a good habit to get into so that you can better manage your work.


In case you are wondering, the screen clip above is using the new App Workspace experience. Next, we will create a new streaming dataset.

On the splash page for the app click Skip at the bottom right corner of the page.


Now select +Create > Streaming dataset.


Select API and click next.


Next create the WholeSaleCustomer dataset.

It will have the following field names and data types:

Field Name           Data Type
Customer Name        Text
Channel              Number
Region               Number
Fresh                Number
Milk                 Number
Grocery              Number
Frozen               Number
Detergents_Paper     Number
Delicassen           Number
Category             Number


Click the Create button to generate the dataset.
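As a side note, the same schema can also be created programmatically through the Power BI REST API instead of through the UI. The Python sketch below is only an illustration of that alternative: it assumes you already have an Azure AD access token for the Power BI API (see the app registration covered elsewhere in this series), and the dataType values are the REST API names for the Text and Number types shown above.

# Sketch: create the WholeSaleCustomer push dataset via the Power BI REST API.
# "access_token" is assumed to come from an Azure AD sign-in with a registered app.
import requests

access_token = "<access token for the Power BI API>"

number_columns = ["Channel", "Region", "Fresh", "Milk", "Grocery", "Frozen",
                  "Detergents_Paper", "Delicassen", "Category"]

dataset_definition = {
    "name": "WholeSaleCustomer",
    "defaultMode": "Push",
    "tables": [{
        "name": "WholeSaleCustomer",
        "columns": [{"name": "Customer Name", "dataType": "string"}] +
                   [{"name": c, "dataType": "Double"} for c in number_columns],
    }],
}

response = requests.post("https://api.powerbi.com/v1.0/myorg/datasets",
                         json=dataset_definition,
                         headers={"Authorization": f"Bearer {access_token}"})
response.raise_for_status()
print(response.json())  # the response includes the id of the new dataset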

Next, we will leverage the generated PowerShell script to create some test records in our newly formed dataset. Click on PowerShell and copy the code into Notepad.


We will create three test records by running the PowerShell code below. Modify the code you copied into Notepad so that it looks similar to the code below. Before you can run it, you will need to replace <Your Key> with the key displayed in your Power BI service.

$endpoint = "https://api.powerbi.com/beta/8c17d9d4-2652-4573-8a9c-d5dde0750715/datasets/13b74183-5eb2-480b-ba11-c0af0ecbdd26/rows?
key=<Your Key>

$payload = @{
"Customer Name" ="Test1"
"Channel" =1
"Region" =1
"Fresh" =98.6
"Milk" =98.6
"Grocery" =98.6
"Frozen" =98.6
"Detergents_Paper" =98.6
"Delicassen" =98.6
"Category" =0
}


Invoke-RestMethod -Method Post -Uri "$endpoint" -Body (ConvertTo-Json @($payload))
$payload = @{
"Customer Name" ="Test2"
"Channel" =2
"Region" =2
"Fresh" =98.6
"Milk" =98.6
"Grocery" =98.6
"Frozen" =98.6
"Detergents_Paper" =98.6
"Delicassen" =98.6
"Category" =1
}


Invoke-RestMethod -Method Post -Uri "$endpoint" -Body (ConvertTo-Json @($payload))
$payload = @{
"Customer Name" ="Test3"
"Channel" =3
"Region" =3
"Fresh" =98.6
"Milk" =98.6
"Grocery" =98.6
"Frozen" =98.6
"Detergents_Paper" =98.6
"Delicassen" =98.6
"Category" =2
}


Invoke-RestMethod -Method Post -Uri "$endpoint" -Body (ConvertTo-Json @($payload))

To do this, launch PowerShell in Administrator mode and copy and paste the code into the PowerShell desktop app.
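If you prefer Python to PowerShell, the sketch below pushes the same three test rows using the requests package; the endpoint is the push URL (including your key) that the Power BI service displays for the dataset.

# Sketch: push the same three test rows to the dataset's push URL with Python.
# Replace the endpoint with the URL (including key) shown in your Power BI service.
import requests

endpoint = "https://api.powerbi.com/beta/<tenant id>/datasets/<dataset id>/rows?key=<Your Key>"

for i in range(1, 4):
    row = {"Customer Name": f"Test{i}", "Channel": i, "Region": i,
           "Fresh": 98.6, "Milk": 98.6, "Grocery": 98.6, "Frozen": 98.6,
           "Detergents_Paper": 98.6, "Delicassen": 98.6, "Category": i - 1}
    response = requests.post(endpoint, json=[row])  # the API expects a JSON array of rows
    response.raise_for_status()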


The dataset now has three records in it and you can start to use it in Power BI. To do this, go to the dataset, click the three dots beside its name, and select Create report. This will open a new report with a blank canvas. Add a table visual and drop all of the fields from the dataset into it.


As you may notice from the screenshot above, the fields Fresh, Milk, Grocery, Frozen, Detergents_Paper and Delicassen are not formatted as currency but should be. Unfortunately, API-enabled datasets only have three data types (Text, Number and Date) and no formatting options, so we cannot specify that these fields are currency fields.

Thankfully, we can leverage the report-level measures for live connections to Analysis Services tabular models and Power BI service datasets (a feature released in May 2017) to add new measures with the proper currency formatting defined.

Instant insights, automation and action – Part 2 Create Azure Machine Learning Experiment

This is the second post in a series of articles in which I explain how to integrate Power BI, Power Apps, Flow, Azure Machine Learning and Dynamics 365 to rapidly build a functioning system which allows users to analyze, insert, automate and action data.

In the previous article I covered building the Power App. In this article I will cover the Azure Machine Learning Studio Experiment.

We will build out this system in the following order: Power App, Azure Machine Learning, Power BI, and lastly MS Flow to connect the components. Before you can begin this tutorial, there are some required prerequisites.

Prerequisites

  • Power Apps
  • MS Flow
  • Power BI Pro or Premium
  • Access to Azure Active Directory to register Power BI App
  • Dynamics 365

Build the Azure Machine Learning Experiment

The Azure Machine Learning Studio platform is a powerful cloud service from Microsoft that allows data scientists to rapidly build and deploy machine learning experiments. For the purpose of brevity, we will leverage an existing template from the Azure AI Gallery. The Azure AI Gallery is a great resource for creating and learning about Machine Learning experiments in the Microsoft platform.

Weehyong Tok from Microsoft created an experiment that segments customers based on the Wholesale customers Data Set from the UCI Machine Learning Repository, which is perfect for our purposes.

You can find the experiment here https://gallery.azure.ai/Experiment/Customer-Segmentation-of-Wholesale-Customers-3 .

Open the experiment in the Azure Machine Learning Studio by clicking on Open in Studio. Be sure to log in using the same account that you used to build the Power App.

This will launch the Azure Machine Learning Studio platform and create an experiment for you based on Weehyong Tok's template. You may notice that the experiment has to be updated; click OK.

This is because the Assign to Clusters module has been deprecated and replaced by a new module called Assign Data to Clusters. Thankfully, the upgrade takes care of the necessary changes and we can use the experiment as is without having to modify it.

Click the Run button at the bottom of the page.

Once the experiment has finished running, click on the output of the Assign Data to Clusters module and select Visualize from the drop-down menu.

As you can see in the image, the data is grouped into clusters.

This experiment uses the K-Means clustering algorithm to assign the data points to groups. As you can see in the image below, it currently uses 2 centroids, which essentially means that each row will be assigned to 1 of 2 groups based on the distance of the data points in that row to each centroid.
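To make the idea of assignment by distance concrete, here is a tiny NumPy illustration with made-up numbers; it is not part of the experiment, it just shows the nearest-centroid calculation that K-Means performs once the centroids are fixed.

# Illustration only: assign each row to the closest of two fixed centroids
# using Euclidean distance, which is how K-Means assigns cluster membership.
import numpy as np

rows = np.array([[2.0, 10.0], [9.0, 3.0], [3.0, 9.0]])   # made-up data points
centroids = np.array([[2.5, 9.5], [8.0, 3.5]])            # two centroids

# Distance of every row to every centroid -> shape (3 rows, 2 centroids).
distances = np.linalg.norm(rows[:, None, :] - centroids[None, :, :], axis=2)
assignments = distances.argmin(axis=1)
print(assignments)  # [0 1 0]: rows 1 and 3 fall in cluster 0, row 2 in cluster 1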

Modify the experiment to determine the optimum number of centroids

Now you may wonder whether this is the optimal number of clusters. Thankfully, we can use an elbow chart to help determine the optimal number of centroids. To do this, we will add a Python module and drop some code into our experiment.

Search for the Execute Python Script module and drop it onto the canvas of the experiment. Connect the first output (the one on the left) of the Split Data module to the first input of the Execute Python Script module. Your experiment should look as follows.

Now you will need to add the following code to the Execute Python Script module. Replace the generated script with the code below.

Python Code

# The script MUST contain a function named azureml_main
# which is the entry point for this module.

# Imports up here can be used throughout the script.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

# The entry point function can contain up to two input arguments:
#   Param<dataframe1>: a pandas.DataFrame
#   Param<dataframe2>: a pandas.DataFrame
def azureml_main(dataframe1 = None, dataframe2 = None):

    # Execution logic goes here
    # print('Input pandas.DataFrame #1:\r\n\r\n{0}'.format(dataframe1))  # We don't need this, we just want the visual.

    # Fit K-Means for 1 to 9 centroids and record the average distance of each
    # data point to its nearest cluster centre (the "distortion").
    distortions = []
    centroids = range(1, 10)
    for i in centroids:
        kmeanModel = KMeans(n_clusters=i).fit(dataframe1)
        distortions.append(sum(np.min(cdist(dataframe1, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / dataframe1.shape[0])

    # Plot the distortions against the number of centroids to produce the elbow chart.
    plt.plot(centroids, distortions, 'bx-')
    plt.xlabel('Number of centroids')
    plt.ylabel('Distortions')
    plt.title('Elbow chart showing the optimal number of centroids')
    plt.show()

    plt.savefig("elbow.png")  # To see the chart in Azure Machine Learning Studio we need to save the image as a png.

    # If a zip file is connected to the third input port, it is unzipped under
    # ".\Script Bundle". This directory is added to sys.path. Therefore, if your
    # zip file contains a Python file mymodule.py you can import it using:
    # import mymodule

    # Return value must be a sequence of pandas.DataFrame
    return dataframe1,

Run the experiment, click on the second output, Python device (Dataset), of the Execute Python Script module, and select Visualize. You should see something like the image below.

The optimal number of centroids is at the “elbow” of the chart above, which looks to be about 5. Based on this insight, we will update the algorithm and change the number of centroids to 5. We will also increase the number of iterations to 500 since we have more centroids.
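For readers more familiar with scikit-learn than with the Azure ML modules, the change we are about to make corresponds roughly to the sketch below; the column names come from the Wholesale customers dataset, while the file name and random seed are assumptions for illustration only.

# Rough scikit-learn equivalent of the updated module settings: 5 centroids
# and up to 500 iterations per run. The CSV file name is assumed.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("Wholesale customers data.csv")   # local copy of the UCI dataset
features = df[["Channel", "Region", "Fresh", "Milk", "Grocery",
               "Frozen", "Detergents_Paper", "Delicassen"]]

kmeans = KMeans(n_clusters=5, max_iter=500, random_state=42)
df["Assignments"] = kmeans.fit_predict(features)
print(df["Assignments"].value_counts())  # number of customers in each of the 5 clusters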

Run the experiment, click on the output of the Assign Data to Clusters module, and select Visualize from the drop-down menu. The output should look like the image below.

Next, we will convert this experiment into a predictive web service. At the bottom of the screen, select Set Up Web Service > Predictive Web Service [Recommended].

Once the predictive experiment has been set up, we are going to modify it slightly so that it only returns the Assignments field. To do this, we need to drop in the Select Columns in Dataset module and place it between the Assign Data to Clusters module and the Web service output.

Launch the column selector and enter the Assignments column as the only value to be passed through to the web service output.

Run the experiment and Deploy Web Service.
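Once deployed, the web service can be called from anything that can issue an HTTP POST, which is exactly what Flow will do later in the series. The Python sketch below shows the general shape of a request-response call; the endpoint URL, API key and input name should be taken from the sample code on your own web service dashboard rather than from here.

# Sketch of a request-response call to the deployed Azure ML Studio (classic)
# web service. The URL, API key and input name are placeholders; check them
# against the sample code generated on the web service dashboard.
import requests

url = "<request-response endpoint URL from the web service dashboard>"
api_key = "<API key from the web service dashboard>"

body = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Channel", "Region", "Fresh", "Milk", "Grocery",
                            "Frozen", "Detergents_Paper", "Delicassen"],
            "Values": [[1, 1, 98.6, 98.6, 98.6, 98.6, 98.6, 98.6]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {api_key}"})
response.raise_for_status()
print(response.json())  # contains the Assignments value for the submitted row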

This concludes the second part of this series. Next, we will build the API-enabled dataset in Power BI, which will store the data that we will use in the Power BI reports and dashboards. Since the dataset is API-enabled, we can push data into it using Flow.

Hopefully you have found this to be another practical post.

Until next time

Anthony

References

@Python Programming has a good site for understanding the Python code to plot an elbow chart.

https://pythonprogramminglanguage.com/kmeans-elbow-method/