29 October 2014

Tuning Cognos Transformer Cubes – Part I

Performance tuning has always been a subject dear to me, and fortunately my professional path has given me several opportunities to work on it. Most tuning work starts with reading documentation and googling for answers; however, when it comes to Cognos Transformer cubes, I have always found that performance tuning information is scarce online and, when found, is scattered across multiple websites.

There is one very interesting guide provided by IBM which covers a lot of tuning and best practices (find it here: https://www.ibm.com/developerworks/data/library/cognos/page354.html). That article is a very good start and should clearly be the first step in any tuning initiative. With this three-part post I intend to share part of my knowledge and the lessons I have learnt tuning Transformer cubes: where to start, how to debug and how to fix. I will cover some real-case scenarios I have come across, and perhaps you will be lucky enough to find the solution to your problem here. Off we go!

Picture it

Every tuning initiative starts with a request or intention to make a process faster, whatever that process is. You are not usually told which part of the process to focus on; you are just asked to make the “whole thing better”. So, the first question you will need an answer to is: where can I make this better? I have found that, in order to successfully identify the bottlenecks or issues in a process, it always helps to visualize the whole process and cut it into meaningful slices. Then it’s a matter of isolating the problematic slice(s) with, for example, a “binary search”:

  1. Cut process in half (left/right)

  2. Is the issue on the left or right slice?

  3. Select the problematic slice, and restart from step 1.

For that purpose, I created a diagram of how I visualise the cube processing in Transformer:

This may not be 100% accurate, so please bear that in mind: it is simply a picture or sketch I use myself for my analysis, and you may choose to use it for yours. Because the cube logs are the only detailed output from the entire cube build process that we can use, each one of the blocks maps to one of the critical log steps/phases. Find some of the mappings below:

Dimension – Generate Categories

Fact – Extract Data

Publish

Depending on the infrastructure, you may identify different bottlenecks in different parts of the process. On this specific implementation we had powerful servers and plenty of machine power, so I will not focus on situations where the CPU/memory capacity is not up to par (note: some optimizations for those types of issues can be found in the IBM guide mentioned above). What kind of issues might you find when your infrastructure is not the problem, you may ask?

Long Running “Open Data Source”

In the past, I have come across both dimensions and fact tables with very long Open Data Source times. Here’s an example of this behaviour in a fact table load:

The Open Data Source step represents the time elapsed from sending the SQL query to the database until you get your first row, a behaviour you can easily replicate by querying the database directly (it should be the time you wait before the result set starts showing). As you know, a certain amount of work needs to be performed by the database engine before it can start returning rows of data, more for some queries than for others.

Select distinct … from DIM_TABLE

This is the typical query for a dimension that is configured to “Auto Summarize”.

For a database, working out a distinct is far less trivial than a regular select. The biggest issue with a select distinct is that the database will not give you any rows until it has worked out the whole set: exceptions aside, it cannot know whether the rows already at hand are unique until it has evaluated the entire set, and only then can it return the first unique row. This forces Transformer to wait a long time for the Open Data Source to complete, and no work is done during that time (delaying the build).
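
To make this concrete, here is a minimal sketch of the two query shapes (table and column names are hypothetical, not taken from any real implementation):

  -- With “Auto Summarize”: the database must de-duplicate the whole
  -- input (typically via a sort or hash) before it can emit the first
  -- row, so all of that work lands inside the Open Data Source step.
  SELECT DISTINCT product_key, product_name, product_line
  FROM DIM_TABLE;

  -- Without it: rows stream back as soon as the scan starts, and the
  -- first row arrives almost immediately.
  SELECT product_key, product_name, product_line
  FROM DIM_TABLE;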

This option is there to be used, but use it on an as-needed basis. As an example, if your query already guarantees row uniqueness, there is no reason to create overhead with a distinct. Where uniqueness is not already guaranteed but the duplication is minimal, you may want to let Transformer handle it anyway.
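
As a hypothetical example of uniqueness already being guaranteed: if the dimension is fed from a view that groups to the desired grain, every row is unique by construction, and a distinct on top of it is pure overhead.

  -- Hypothetical source view: the GROUP BY already yields exactly one
  -- row per product_key, so the output is unique by construction.
  CREATE VIEW V_DIM_PRODUCT AS
  SELECT product_key,
         MAX(product_name) AS product_name,
         MAX(product_line) AS product_line
  FROM STG_PRODUCT
  GROUP BY product_key;

  -- Adding DISTINCT on top returns exactly the same rows, but forces a
  -- second full de-duplication pass before the first row is returned.
  SELECT DISTINCT product_key, product_name, product_line
  FROM V_DIM_PRODUCT;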

Select … from FACT_TABLE … group by…

This is the typical query for a fact table configured with “Auto Summarize”, and the reasons for the delay in getting the first row are almost the same as in the previous situation: you may need to wait for the full aggregated set to be computed before you can get the first row. The question here is: are you aggregating enough to make it worth the wait? You do not need to aggregate in the database, as Transformer will do its own aggregations anyway (even though it may be less efficient performance-wise). You may want to try building the cube with and without the database aggregation, and reflect on the results.
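
Again as a sketch with hypothetical names, these are the two variants worth comparing:

  -- With “Auto Summarize”: the database aggregates the full fact set
  -- before returning anything, so the first row waits on the GROUP BY.
  SELECT store_key, date_key, SUM(sales_amount) AS sales_amount
  FROM FACT_TABLE
  GROUP BY store_key, date_key;

  -- Without it: detail rows stream back immediately, and Transformer
  -- aggregates them itself as it consumes the result set.
  SELECT store_key, date_key, sales_amount
  FROM FACT_TABLE;

As a rule of thumb, if the fact data collapses heavily (many detail rows per group), the database-side aggregation usually pays for the wait; if it barely collapses at all, you are paying the Open Data Source delay for very little gain.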


    João Marques
       Consultant