Sunday 5 November 2017

Snowflake or star? Which one is the better approach for designing a project from scratch?

It's a very old question and certainly an important one to settle before starting a new project from scratch. Though there are a lot of factors that must be considered before coming to a final decision, here are some insights.

MicroStrategy prefers a snowflake schema over a star schema. The parent-child relationship is one of the most important parts of schema design, and it's always easier to model when each dimension has its own lookup table. Though de-normalized dimensions work without any issues and avoid a lot of joins, if you have prompts in your reports then it's better to stick with snowflake.

Let's take an example: suppose you have a prompt on month. In a star schema, date and month sit in the same table, but in a snowflake schema you'll have a separate month table. So the date table has 365 rows while the month table has only 12. The prompt gets its values by firing a SELECT DISTINCT against the lookup table, and you can imagine the performance impact of a DISTINCT query as the number of rows grows. It may not matter much in this example, but when your lookup table has thousands of records the difference is huge. Hope this helps.
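To make this concrete, here is a rough sketch of the SQL such a month prompt might fire in each design (the table and column names LKP_DATE, LKP_MONTH, MONTH_ID and MONTH_DESC are hypothetical, not taken from any real project):

```sql
-- Star schema: month columns live inside the date lookup table,
-- so the prompt must scan and de-duplicate all 365 date rows
SELECT DISTINCT MONTH_ID, MONTH_DESC
FROM   LKP_DATE;

-- Snowflake schema: month has its own lookup table,
-- so the same prompt scans only 12 rows
SELECT DISTINCT MONTH_ID, MONTH_DESC
FROM   LKP_MONTH;
```

Both queries return the same 12 months; the difference is only in how many rows the database has to read and de-duplicate to produce them.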

Friday 3 November 2017

The key specified is the true key for the warehouse table

It's a very common issue when the aggregation just doesn't show up in the query. For example, let's say a report has one attribute and one metric, and the aggregation defined in the metric is "Sum". When you view the SQL, you see just the fact column with no aggregation. What could be the reason for this?

There is a small trick to make this work. A setting in the Logical Table Editor might be causing the issue: "The key specified is the true key for the warehouse table", found in the logical view of the warehouse table.



This setting applies only when the attributes on the template comprise all the attributes listed as table keys in the logical view of the fact table. When checked, it forces the SQL to skip aggregation and pull directly from the fact table.

You can just uncheck the check box and the issue will be resolved.
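To illustrate the difference, here is a sketch of the two SQL shapes you might see (FACT_SALES, STORE_ID and SALES are hypothetical names, not from any real schema):

```sql
-- "True key" checked, and the template attributes match the table keys:
-- the engine trusts that each row is already unique, so no aggregation
SELECT STORE_ID, SALES
FROM   FACT_SALES;

-- Setting unchecked: the metric's Sum aggregation appears as expected
SELECT   STORE_ID, SUM(SALES) AS SALES
FROM     FACT_SALES
GROUP BY STORE_ID;
```

If the key really is a true key of the table, both forms return the same data, which is exactly why the engine is allowed to skip the GROUP BY in the first case.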

Monday 30 October 2017

Unchecking a table from attribute editor throws error

It's a very common bug seen across almost all recent versions of MicroStrategy. It's a silly thing, but it often makes it difficult to modify an attribute.

Problem:
Let's say you have an attribute that has been linked to multiple tables a, b, c, d. Now you want to remove table b from the definition. When you try, it doesn't allow you to and throws an error.

Solution:
Though it's an irritating bug, there is a very simple fix. Just rename the table you want to remove so that it comes last when the table names are sorted. So let's say we rename table b to table z. If you now remove z from the attribute definition, it will not throw the error. After doing this, do not forget to rename the table back to its original name.

Thank you

Tuesday 25 April 2017

MicroStrategy best practices around dashboard performance

Dashboard performance depends mostly on the performance of the reports used in the dashboard, but there are other factors that can affect dashboard loading time. Below is a short list of things to keep in mind while developing dashboards:

1. Reduce the Number of Datasets and Combine Datasets Where Possible. In general, the more datasets you include in a dashboard, the longer it takes for that dashboard to execute, given the same amount of data returned. 

2.  Remove Unused Attributes and Metrics from the Datasets.

3. Keep the Datasets to the Appropriate Level of Aggregation. Avoid lower-level attributes if data is not required at that detailed level.

4. Test Datasets for Performance before Including Them in Dashboards. Test each dataset before including it in the dashboard to ensure it passes your performance requirements. For long-running datasets, check the SQL generation and adjust the report's VLDB settings, indexing or caching strategy to improve performance.

5. Enable Report Caching. Whenever possible, and especially with longer-running dashboards, use the caching capabilities of the Intelligence Server to save dashboard results for future use. Caches can be created and managed on a scheduled basis.

6. Use Datasets That Are Based on Intelligent Cubes. Highly prompted reports generally do not make good caching candidates. For these reports, consider building datasets based on Intelligent Cubes. Note that not all datasets are cube candidates (e.g. reports with conditional metrics, custom groups/consolidations, AND NOT or OR NOT logical operators in the filter, pass-through functions, or reports used as filters).