Posted by Andy Brown on 23 July 2018
How to load a large fact table into SSAS Tabular
So you have a large fact table, consisting of perhaps millions of records:
The sample fact table we'll use for this example. You can try out the steps below yourself by downloading the Excel workbook containing these rows.
You want to load this into a tabular data model bit by bit. How do you manage the process?
I couldn't find an official approach anywhere, but the method shown below certainly works. I'm using Visual Studio 2017 and SSAS Tabular 2016, but I think this method will work for all software versions.
Importing the table structure
The first thing to do is to import the table's structure:
While importing your fact table, click on the Preview & Filter button.
Set a filter which effectively excludes all of your rows from import:
Here we set the filter to import only rows where the purchase id is less than or equal to 0 (which can never happen, since this is an identity column).
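If you'd rather type the query yourself than use the filter dialog, the same trick can be expressed directly in SQL. This is only a sketch: the table name (tblPurchase) and column name (PurchaseId) are assumptions, since the real names aren't shown here.

```sql
-- Import the table's structure but none of its rows.
-- PurchaseId is an identity column (values start at 1),
-- so this condition can never be true.
SELECT *
FROM tblPurchase
WHERE PurchaseId <= 0
```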
The result? An empty table:
No rows have been imported.
Creating partitions on the table
Having loaded the structure for the table, you now want to load some data in a controlled way. To do this, create partitions:
Start by choosing to create table partitions (you should have the table in question selected when you choose this menu option).
You can now create one or more partitions of your data:
Here I've divided the data into pre-2016, 2016, 2017, 2018 and post-2018.
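Behind the scenes, each partition is just a query with its own WHERE clause. A sketch of the five partitions above might look like this (again, the table name and the PurchaseDate column are assumptions):

```sql
-- Pre-2016 partition
SELECT * FROM tblPurchase WHERE PurchaseDate < '2016-01-01'

-- 2016 partition
SELECT * FROM tblPurchase
WHERE PurchaseDate >= '2016-01-01' AND PurchaseDate < '2017-01-01'

-- 2017 partition
SELECT * FROM tblPurchase
WHERE PurchaseDate >= '2017-01-01' AND PurchaseDate < '2018-01-01'

-- 2018 partition
SELECT * FROM tblPurchase
WHERE PurchaseDate >= '2018-01-01' AND PurchaseDate < '2019-01-01'

-- Post-2018 partition
SELECT * FROM tblPurchase WHERE PurchaseDate >= '2019-01-01'
```

Make sure the date ranges don't overlap, otherwise rows falling on the boundaries will be loaded into two partitions and double-counted.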
You can now load the data bit by bit! First bring up the Process Partitions dialog box:
First choose this menu option to load data for specific partitions.
Now choose which partitions you want to process:
Tick the partitions you want to process.
OK, so I didn't actually have any data after 2016 (this is an older dataset).
By setting up your partitions sensibly, you now have complete control over how you process your fact table!
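Once the model is deployed, you don't have to open Visual Studio to process a partition: you can run a TMSL refresh script instead (from an XMLA query window in SSMS, for example, or from a scheduled job). The database, table and partition names in this sketch are assumptions; substitute your own.

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "PurchaseModel",
        "table": "Purchases",
        "partition": "2016"
      },
      {
        "database": "PurchaseModel",
        "table": "Purchases",
        "partition": "2017"
      }
    ]
  }
}
```

Listing only some partitions, as here, processes just those; omit the "partition" property to process a whole table.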