Thursday, July 31, 2008
I found another one
Code in 8.2
set( emptySet([My Dimension].[My Hierarchy]) )
Code in 8.3
set( item(emptySet([My Dimension].[My Hierarchy]), 0) )
Without the use of item() you get an error, even though the expression validates:
Invalid coercion from 'memberSet' to 'member, measure'
This is important when you want to create prompts that return MUNs but are optional.
As a recap from my older posts, "optional" for a prompt means that it has a default value. The default is... no members selected
...in other words: an empty set.
Tuesday, July 29, 2008
Difference in dimensional expressions between RS8.2 and 8.3
We are just upgrading to Cognos 8.3 (I know, I know...) and I've just found this difference in the use of the filter() function:
I think it actually makes more sense in 8.3... probably that's why they fixed it.
Code in 8.2
filter(
[My Dimension].[My Hierarchy].[My Level],
roleValue('_businessKey', currentMember([My Dimension].[My Hierarchy])) contains ?Param1?
)
Code in 8.3
filter(
[My Dimension].[My Hierarchy].[My Level],
roleValue('_businessKey', [My Dimension].[My Hierarchy].[My Level]) contains ?Param1?
)
In 8.2 one had to use the currentMember() function to refer to the member being filtered.
In 8.3 it is not needed. Instead one can just use the level.
Tuesday, June 24, 2008
Version 2 is Coming ;-)
My second baby is about to be born this week.
You won't see much blogging for the next couple of weeks.
When I come back I'll write about our experiences switching the DMR model to OLAP.
We're evaluating both Cognos PowerPlay and MS Analysis Services...
Monday, June 23, 2008
See generated MDX when using DMR model
When you use a DMR model you'd expect Cognos Report Studio to generate MDX expressions, since the framework model is supposed to look just like any other cube from a report author's perspective.
Yet, when you click on a query and select "Generated SQL/MDX" from the properties you can only see Native and Cognos SQL.
I've recently learned that there is a way to see MDX...
Click "Tools"/"Show Generated SQL/MDX" and you'll have the option to see Native MDX.
Why you cannot see the same when you go from the properties... ask Cognos.
Another interesting thing is that the generated MDX ends with something like this:
FROM [F:/Cognos8.2\./bin\../temp\dmbTemp_5324_0004_12142423860.dmc]
Looks like the generated SQL will result in a query whose result gets fetched into a temporary cache (or cube?) that can then process MDX expressions.
PS: I tried this on 8.2; I wonder if there are any changes in 8.3.
Tuesday, June 3, 2008
Advanced TreePrompt
I have played around with treeprompts lately and would like to show you two different uses of this handy tool.
A treeprompt provides a hierarchical selection tool:
you just point it at the top of the hierarchy tree that you want to select from and it lets the user make selections at any level.


Actually you don't have much control over it; it goes down to the leaf nodes (lowest granularity) whether you want it or not.
E.g. there is no way to "stop" this treeprompt at the product line level...
it will always allow the selection of products.
I find Treeprompts very useful in two scenarios:
As a replacement for a list of subsequent combo boxes (having "cascading" master-detail relationships).
This provides the user with fine grained selection in an easy way.
The treeprompt returns the members selected, so in this case we want to automagically "expand" each member that's at a higher level to include all lowest-level members.
e.g.
the user makes a selection at the product type level but we still want to display all "selected" products.
Selection: Cooking Gear, Rope

We can see what the prompt returns by dropping a Text Item, setting its Source Type to "Report Expression" and setting the Expression to ParamValue('pProduct'),
where pProduct is how I named the prompt's parameter.
This will return:
[Sales].[Product].[Product].[Product type]->[all].[1].[1], [Sales].[Product].[Product].[Product type]->[all].[2].[6]
In a crosstab, summary kind of report we want to see all products that the user selected.
In the crosstab's query we need to get the descendants, at the product level, of every member returned by the prompt:
union(
  descendants(
    set( # join(',', split(';', promptmany('pProduct', 'MUN')) )# ),
    [Sales].[Product].[Product].[Product]
  ),
  emptySet([Sales].[Product].[Product])
)
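As a side note, the macro part of that expression is plain string manipulation done at query-preparation time. Here is a sketch of what it does, written in Python rather than the Cognos macro language, and assuming promptmany hands back the selected MUNs as a semicolon-separated string (which is what the split(';', ...) implies):

```python
# Python sketch of the macro join(',', split(';', promptmany('pProduct', 'MUN'))).
# The macro rewrites the prompt's semicolon-separated MUN list as a
# comma-separated one, so it can be spliced into set( ... ) as a member list.

def expand_prompt(prompt_value: str) -> str:
    """Turn 'MUN1;MUN2;...' into 'MUN1,MUN2,...'."""
    return ",".join(prompt_value.split(";"))

# Two members picked in the treeprompt:
muns = ("[Sales].[Product].[Product].[Product type]->[all].[1].[1];"
        "[Sales].[Product].[Product].[Product type]->[all].[2].[6]")

print(expand_prompt(muns))
```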
Charts
If we are dealing with charts, treeprompts are also very handy, because we can allow the user full control over how many series to display in the chart.
In this case we do not want to "expand" the selection.


The expression to use here is simpler:
set( #promptmany('pProduct', 'MUN')# )
Monday, May 26, 2008
Design Utility
I had this idea to create a utility that can be used during the dimensional design process.
The design starts off from looking at the requirements.
- What are the reports? This is usually well known...
- What metrics are there?
This is sometimes less well known, as the business analyst may not recognize that different metrics are really the same, just rolled up differently.
E.g. he/she may have a "sales total" metric and a "branch sale for the month" metric.
I suggest using two terms to differentiate here: e.g. in the requirements document call it a metric, in the design document call it a measure. "Measure" can then become sort of the "normalized" metric.
Start the design by collecting these; why not enter this information into two tables: Report and Metric.
- Collect the different levels where the metrics appear.
- Figure out which levels belong to the same dimension.
- What are the attributes that have to be displayed for each level?
- Categorize the levels into hierarchies. Is there a need for more than one hierarchy per dimension?
Store all this in tables also. So far we added Level, Attribute, Dimension, Hierarchy and HierarchyLevel.
- Which metric appears at which level? What is the roll-up aggregation?
- Which report uses which metric at which level?
Yes... store this info in tables also. Add MetricLevel and ReportMetricLevel... and there you go!
Then you can create very interesting reports from this:
- Dimension hierarchy:
Report on the Dimension, Hierarchy, Level and Attribute tables.
Will describe what dimensions to create and how to set them up.
- Cube design:
Report on the MetricLevel, Level, Hierarchy, Dimension tables.
Will describe what cubes to create and what the roll-up aggregations are.
- Report usage of metrics:
Report on the ReportMetricLevel, MetricLevel, Level, Metric, Report tables.
Will describe how the requirements are met and how each report will be built from the dimensional model.
- Star schema design:
Report on the Metric, MetricLevel, Level, HierarchyLevel tables. Find the lowest-granularity level in each hierarchy.
Will describe the star schema structure.
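To make the idea concrete, here is a toy sketch of the "report usage of metrics" report. The table names come from the post; the sample rows are made up, and Python lists stand in for what would really be database tables:

```python
# MetricLevel: which metric appears at which level, with its roll-up
# aggregation. Sample rows are hypothetical.
metric_level = [
    ("Sales total", "Product line", "sum"),
    ("Branch sale for the month", "Month", "sum"),
]

# ReportMetricLevel: which report uses which metric at which level.
report_metric_level = [
    ("Regional Overview", "Sales total", "Product line"),
    ("Branch Monthly", "Branch sale for the month", "Month"),
]

def report_usage():
    """Join ReportMetricLevel to MetricLevel on (metric, level) to show
    how each report is served by the dimensional model."""
    rollup = {(m, l): agg for (m, l, agg) in metric_level}
    return [(report, metric, level, rollup[(metric, level)])
            for (report, metric, level) in report_metric_level
            if (metric, level) in rollup]

for row in report_usage():
    print(row)
```

The same join, done in whatever reporting tool sits on top of the design tables, yields the "report usage of metrics" section of the design document.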
It helps to verify the requirements and produces all sections (of what I can think of) of a "design document".
It can be reviewed and has sufficient information to start implementation.
Furthermore I would suggest to create a placeholder model i.e. one with no real values for measures, but all dimensions hooked up.
The bulk of the work goes into building the "real" metrics.
The placeholder model serves a double purpose:
- Development can be split between report authors and modelers; here, modeling really means implementing the logic for each metric.
- A mock-up of each report can be created that, in my opinion, should still belong to the design phase. This can be reviewed with the customer and serves to verify both the design and the requirements.
Wednesday, May 21, 2008
Cognos jobs are fragile
While working with Cognos jobs to automate testing I found that jobs are sometimes not very well suited to do this.
The issue is that it's very easy to break them while working on the reports.
Certain types of updates to a report invalidate the job, e.g. if you rename a report or even if you use copy-paste through the clipboard.
Another problem is if there are any changes to the prompt pages or the parameter values. Since these are stored in the job, it won't be able to fill out the prompt pages properly when it tries to run the reports.
Conclusion
Since I don't know any better I still think it's a good idea to use jobs to automate unit and smoke testing. It's just that we have to keep an eye on them to make sure they're not broken.