Writing for BIDN

Looking to give back to the community or learn through teaching others? Anyone can post blogs by clicking Add Blog Post after contacting us for permission.


Power BI and Big Data

If you’ve worked in the wide and diverse field of information technology for almost any amount of time, it probably hasn’t taken you long to discover that the one constant in IT is that the technologies and strategies involved change faster than you can learn them. And if you work in business intelligence like I do, you don’t have to look very far to see change. The Microsoft Power BI team rolls out a software update every month! If I want to stay current on the technology, I have to really be on top of things.

About ten years ago, when Hadoop was first being developed at Yahoo, I don’t think anyone could have anticipated the size of the ripples (more like cannonball-sized splashes) that access to Big Data could and would have on the IT industry. Hadoop (along with other advances in hardware and software technologies) gave us something we never had before: the ability to access and report on data in real time at a scale never previously imagined, letting an organization identify and understand trends and patterns in the data and gain previously unknown insights. The organizations that are able to leverage big data will be the organizations that leave their competition in the dust.

Set Up and Configure the Hortonworks Sandbox in Azure

Not only does Power BI Desktop give us the ability to connect to the Hadoop Distributed File System (HDFS) for reporting, but we can also mash that data up with other, more traditional and structured data sources with minimal effort. But that’s not what this blog post is all about. This post is about setting up a virtual machine in Azure running Hadoop and connecting to our Hortonworks Sandbox with Power BI Desktop :).

The first thing to do, if you don’t have access to a Hadoop cluster, is to set up the Hortonworks Sandbox on Azure. The good news is it’s free (for the duration of the trial) and it’s super easy. Just follow the instructions at this link to set up the Hortonworks Sandbox.

Hadoop in Azure

Once that’s set up, you’ll need to add a mapping for the IP address and host name to your hosts file. Devin Knight has a blog post on this that you’ll find helpful.
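As a rough sketch, the hosts file entry (on Windows, C:\Windows\System32\drivers\etc\hosts) maps your VM’s public IP address to the sandbox host name. The IP address below is only a placeholder; use the one Azure assigned to your VM, and the host name Devin’s post calls for:

```
# Hortonworks Sandbox VM in Azure -- replace with your VM's actual public IP
138.91.xxx.xxx    sandbox.hortonworks.com
```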

Connecting to Hadoop with Power BI Desktop

Once your Hortonworks Sandbox is set up, you’re ready to create your connection to Hadoop in Power BI Desktop. Start up Power BI Desktop and click Get Data. Scroll down, select Hadoop File (HDFS), and click Connect.

Get Data with Power BI

From there you can follow the rest of the wizard to load the data into the semantic model.

Load Data with Power BI

Once the data is loaded, you’ll need to modify the query to navigate to the data you wish to use in your model.

In Power BI Desktop, go to the Home ribbon and click Edit Queries.

Read more


Three Best Practices for Power BI

Since the release of Power BI Desktop this past week, I’ve really been spending my extra time digging into the application, focusing on learning and experimenting as much as I can. When my wife has been watching Law and Order: SVU reruns at night after the rug rats are in bed, I’ve been right there next to her designing Power BI dashboards like the total data nerd that I am. When my kids have been taking their naps during the weekend, I’ve been writing calculations in the model for my test dashboards. And when I’ve been riding in the car back and forth to work, I’ve been thinking of new things to do with Power BI Desktop.

Since I’ve been spending a decent amount of time with Power BI Desktop, I thought I’d take a moment to share three things to know and remember when designing your Power BI models and dashboards that I think will help you make the most of this tool and be effective at providing the data your business needs to succeed.

1. Optimize your Power BI Semantic Model

It probably hasn’t taken you long to figure this one out if you’ve built Power Pivot/Tabular models, and if not, it won’t take long once you start developing Power BI dashboards. The visualizations in Power BI and Power View are heavily metadata driven, which means that column names, table or query names, formatting and more are surfaced to the user in the dashboard. So if you’re using a really wacky naming convention in your data warehouse for your tables, like “dim_Product_scd2_v2”, and the column names aren’t much better, these naming conventions are going to be shown to the users in the report visualizations and field list.

For example, take a look at the following report.

Power BI Dashboard without formatting

Notice anything wonky about it? Check the field names, report titles and number formatting. Not very pretty, is it? Now take a look at this report.

Power BI Dashboard with formatting

See the difference a little cleaned-up metadata makes? All I did was spend a few minutes giving the fields user-friendly names and formatting the data types. This obviously makes a huge difference in the way the dashboard appears to the users. By the way, I should get into the movie production business. ;)

My point is that the names of columns, formatting, data types, data categories and relationships are all super important to creating clean, meaningful and user-friendly dashboards. The importance of a well-defined semantic model cannot be overstated, in my opinion. A good rule of thumb is to spend 80% to 90% of your time on the data model (besides, designing the reports is the easy part).

I’d also like to mention the importance of the relationships between the objects in the semantic model. Chances are you will have a small group of power users who will want to design their own dashboards to meet their job’s requirements, and that’s one of the beauties of Power BI. But when users begin developing reports, they may query your model in unexpected ways that generate unexpected behaviors and results. I only mention this because the relationships between the objects in the model will impact the results your users see in their reports. Double-check your relationships and ensure that they are correct, especially after you add new objects to the model.

Read more

Power BI Fantasy Football Player Stats Dashboards for Download

Every year at Pragmatic Works, some coworkers, including consultants, marketing staff, support team members, software development staff and project management, partake in a company fantasy football league. And with the recent release of the new Power BI Desktop, I thought: what better way to prepare to completely annihilate my coworkers and friends in an imaginary, nonsensical game than by creating some nifty Power BI dashboards based on last year’s player stats as recorded by Yahoo! Sports? So I thought I’d walk you through some of the steps I followed to leverage the Yahoo! Sports NFL player stats page as a data source, and some of the query transformations I applied to prepare the data for reporting.

Power BI dashboard with Power BI Desktop

Click here to download my Fantasy Football Dashboards Power BI .pbix file.

If you’re completely new to Power BI Desktop, I highly suggest you watch my video walkthrough of Power BI Desktop or read my blog post, which walks you through each step of creating your first Power BI dashboards with Power BI Desktop. Last Friday, I also blogged about my three best practices for designing a killer Power BI solution, so take a look at that.

To create these dashboards, I simply navigated to the Yahoo! Sports NFL stats page, found the page for each position I’m interested in for this fantasy football season, and copied the URL to my clipboard. In Power BI Desktop, click Get Data and then use the Web data source option. Then all you have to do is paste the URL into the text box and click OK.

Get data from web with Power BI Desktop

Then select the HTML table that contains your data and click Edit. We need to edit our query because there are some issues with the data. By clicking Edit, we can apply transformations to our query which will allow us to do things like rename columns, remove unwanted columns, modify data types, create custom columns and much more.

Get data from web with Power BI Desktop

One thing you’ll notice in the above screen grab is that the column names are in the first row, so we need to fix that.

On the Home ribbon of the Query Editor, just click the Use First Row As Headers button.

Read more

SQL Internals Reading Data Records Part 1: Tag Bytes

  • 26 June 2012
  • Author: BradleyBall

Hello Dear Reader! The more you learn about SQL Server, the more you start looking at SQL internals and how data is stored on disk.  As you go deeper you will eventually use commands like DBCC IND to find page numbers, or DBCC PAGE to look at the contents of a data page and the actual data records on it.  When you do, you will also see a numeric output along the left-hand portion of your screen; it is the data as it is stored on the page, but it isn’t in a format that we would normally read.  I’ve been in presentations and read books written by some very smart people, and I’ve heard them say that reading a data record takes a little practice.  This isn’t so much something that you will use in your job as it is having some fun exploring the tool we use every day, so come along Dear Reader and let’s practice together.


We’ll start by looking at a graphical representation of a Data Record.  This is the best I’ve ever seen and it comes directly from Paul Randal (@PaulRandal | Blog) and the MCM Video series on Data Structures.


When we look at a data record, the first part of it is the tag bytes.  The tag bytes are two bytes, made up of two parts: Status Bits A and Status Bits B.  Status Bits A are detailed in Kalen Delaney’s excellent book, SQL Server 2008 Internals.

Bit 0 contains versioning information.  Bits 1 to 3 are used as a 3-bit value that covers a host of things (we’ll get to it in a moment).  Bit 4 indicates that a NULL bitmap exists; there will always be a NULL bitmap.  Bit 5 shows whether a variable-length field exists.  Bit 6 tells us if there is any versioning information in the record, and bit 7 is not used.  Status Bits B are only used to indicate a ghost forwarded record.  All of that information is stored in two bytes per data record: one byte for Status A and one byte for Status B.
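To make the bit layout above concrete, here is a quick sketch in Python (just for illustration; the function and field names are mine, not SQL Server’s) that decodes a Status Bits A byte:

```python
# Record types encoded in bits 1-3 of Status Bits A,
# per Kalen Delaney's SQL Server 2008 Internals
RECORD_TYPES = {
    0: "primary record",
    1: "forwarded record",
    2: "forwarding stub",
    3: "index record",
    4: "blob fragment or row-overflow data",
    5: "ghost index record",
    6: "ghost data record",
    7: "ghost version record",
}

def decode_status_a(byte):
    """Break a Status Bits A byte into the fields described above."""
    return {
        "has_versioning_info": bool(byte & 0x01),         # bit 0
        "record_type": RECORD_TYPES[(byte >> 1) & 0x07],  # bits 1-3
        "has_null_bitmap": bool(byte & 0x10),             # bit 4
        "has_variable_length_cols": bool(byte & 0x20),    # bit 5
        "has_version_tag": bool(byte & 0x40),             # bit 6
    }

# The 0x10 byte we are about to pull off the page with DBCC PAGE
print(decode_status_a(0x10))
```

We’ll see each of these fields again as we read real bytes off the page below.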


So that was pretty technical; how do we relate this to English?  First off, let’s talk about how a record is displayed.  On a page it is displayed in hexadecimal; in order to read it we have to translate that to binary and then flip the binary.  It’s clear as mud, but let’s walk through it real quick.

First we’ll create a database and a table for this and insert a record.

USE master;
GO
IF EXISTS(SELECT name FROM sys.databases WHERE name=N'demoInternals')
       DROP DATABASE demoInternals;
GO
CREATE DATABASE demoInternals;
GO
USE demoInternals;
GO


Let's create a Clustered Index


IF EXISTS(SELECT name FROM sys.tables WHERE name=N'myTable1')
       DROP TABLE dbo.myTable1;
GO
CREATE TABLE dbo.myTable1(
       myID INT IDENTITY(1,1)
              CONSTRAINT pk_myTable1 PRIMARY KEY CLUSTERED
       ,productName CHAR(500) DEFAULT 'some product'
       ,productDescription CHAR(1000) DEFAULT 'Product Description'
) ;
GO


Insert one data record


INSERT INTO dbo.myTable1 DEFAULT VALUES;
GO

Now that we have our table and our data record, let’s take a closer look.  I’ll be using DBCC IND and DBCC PAGE over and over again, the only thing that will change in future examples is the structure of the table we are creating.

DBCC IND(demoInternals, 'myTable1', 1)



Now we have the pages in our table.  PageType 10 is an allocation page (the IAM page); our data page is PageType 1.  We can get our page number and plug it into DBCC PAGE.  Don’t forget to turn on trace flag 3604 so we get the output of DBCC PAGE in our SSMS window.


DBCC TRACEON(3604);
DBCC PAGE('demoInternals', 1, 276, 3)


I’m only going to post the part that is relevant to us, so this output is extremely pared down, and I’m only grabbing our first bit of data.

0000000000000000:   10 00e405 01000000 736f6d65 2070726f 64756374  ..ä.....some product


The first byte, 10, is our Status A tag byte, and the second byte, 00, is our Status B tag byte.


I hear what you’re saying, Dear Reader: how is 10 all of that information I talked about up top?  It doesn’t look like much.  But this is actually 0x10 in hexadecimal; you can use the converter tool at http://easycalculation.com/hex-converter.php (plug in the value 10) and watch as it is converted to binary.  In binary it translates to 00010000, but we are not quite finished; remember, we need to flip this value to get what we are looking for.  So how do we flip it?  First, we read it backwards, one section at a time.
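Here is that convert-and-flip as a tiny Python sketch, just to illustrate the steps above:

```python
# 0x10 rendered as 8 binary digits, exactly like the online converter shows
bits = format(0x10, "08b")
print(bits)            # 00010000

# "Flipping" the value reads it backwards, so bit 0 comes first
flipped = bits[::-1]
print(flipped)         # 00001000

# Reading the flipped string left to right, position 4 is bit 4:
# the NULL bitmap flag, which is set for this record
print(flipped[4])      # 1
```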


Now, one important thing to note is that our 3 bits, even though they hold the values 0-7 just like everything else, are binary values as well: 0=000, 1=001, 2=010, 3=011, 4=100, 5=101, 6=110, and 7=111.  Each value maps to a record type, per Kalen Delaney’s SQL Server 2008 Internals:

3-bit binary value    Decimal value    Record type
000                   0                Primary record
001                   1                Forwarded record
010                   2                Forwarding stub
011                   3                Index record
100                   4                Blob fragment or row-overflow data
101                   5                Ghost index record
110                   6                Ghost data record
111                   7                Ghost version record

So these tag bytes tell us that this is a primary data record; it contains no versioning information, no variable-length fields, and no row-versioning information.


So let’s take this same row in our table and delete it, inside an open transaction, to make it a ghost record and see how our tag bytes change.


BEGIN TRANSACTION;
DELETE FROM dbo.myTable1
WHERE myID = 1;

DBCC PAGE('demoInternals', 1, 276, 3)


We don’t want to commit or roll back the transaction just yet; we want the record marked as ghosted.  Now let’s look at our record.

0000000000000000:   1c 00e405 01000000 736f6d65 2070726f 64756374  ..ä.....some product


Remember to convert your 1c value from Hexadecimal to binary, and flip your binary value.       



Remember our chart 110=6, so this is indeed a Ghost Data Record.  So now let’s drop our table and add a variable length column to it.  Then we’ll insert a record.

ROLLBACK TRANSACTION;  -- close out the open transaction from the ghost-record demo
GO
IF EXISTS(SELECT name FROM sys.tables WHERE name=N'myTable1')
       DROP TABLE dbo.myTable1;
GO
CREATE TABLE dbo.myTable1(
       myID INT IDENTITY(1,1)
              CONSTRAINT pk_myTable1 PRIMARY KEY CLUSTERED
       ,productName CHAR(500) DEFAULT 'some product'
       ,productDescription CHAR(1000) DEFAULT 'Product Description'
       ,productExtendedDescription VARCHAR(2000) DEFAULT 'Extended Product Description'
) ;
GO

INSERT INTO dbo.myTable1 DEFAULT VALUES;
GO

Remember to use DBCC IND and DBCC PAGE to get our internal information; then let’s look at our record and translate our Status A tag byte.

0000000000000000:   30 00e405 01000000 736f6d65 2070726f 64756374  0.ä.....some product


Here are our results.



And once again we see, translated, that we have our primary data record with a null bitmap and variable-length columns.  One interesting note: if you alter your original table to add a variable-length column, you have to rebuild the table in order for the tag bytes to change from 0x10 to 0x30.


Hope you enjoyed the read, I had fun writing this up.





Categories: Analysis Services

