Archive for June 18, 2010

Distinguishing Business Intelligence from Corporate Performance Management Software

We have been doing some thinking about the essential differences between BI and CPM solutions so that we can more effectively communicate the value of each to our clients. A colleague directed me to two insightful white papers from Prophix on the nature of, and differences between, BI and CPM software packages.

PROPHIX and Corporate Performance Management

This white paper describes the benefits of Corporate Performance Management software and attempts to answer some of the commonly asked questions. It also illustrates PROPHIX’s expertise in CPM software and OLAP database technology and its flexible offerings to mid-market companies.

Form: http://www2.prophix.com/e/444/ontact-php-tag-prophix-and-cpm/FRXAQ/240014456

PDF: http://www2.prophix.com/e/444/hitepapers-prophix-and-cpm-pdf/FRXBA/240014456

I think there is some good thinking in this CPM article around differentiating CPM from BI. The list of five types of software applications that make up a CPM product is helpful in identifying which functions are specific to CPM, as opposed to BI.

  1. Budgeting, Planning and Forecasting Software
  2. Software used for Financial, Statutory and Management Reporting
  3. Applications used for formal Financial Consolidation
  4. Software used for Profitability Modeling and Optimization
  5. Strategy Management Software

I take issue with #4, only in that I have yet to see a CPM product with as robust an optimization algorithm as that found in Data Mining tools such as SSAS Data Mining. If you would like to do predictive analysis in order to optimize profit margins, I suggest you consider doing this outside of the CPM suite.

Also, this statement on Page 11 struck a chord: “What CPM really does is automate processes that otherwise are performed with spreadsheets.” A simple statement, but underneath is a universe of multi-user collaboration, potentially intense financial calculations, and a manageable process for getting to an elusive compliance and reporting outcome.

PROPHIX and Business Intelligence

This white paper explains the difference between BI and CPM software and describes how PROPHIX fits in the BI software category. Being an open system, PROPHIX can be accessed by any BI tool capable of reading data from Microsoft SQL Server Analysis Services.

Form: http://www2.prophix.com/e/444/contact-php-tag-prophix-and-bi/FRXBU/240014456

PDF: http://www2.prophix.com/e/444/whitepapers-prophix-and-bi-pdf/FRXCE/240014456

The second article, on BI, does not meet the same standard; in my opinion, it includes some flimsy assertions about what BI is and is not. In particular, the assertion that CPM software offers “structured” data while BI offers “un-structured” data is an over-simplification. Also, many of the observations about BI seem to be heavily influenced by the Microsoft BI Vision. Ten years ago, Cognos dominated the BI market, so BI meant OLAP. Now Microsoft has shaped the perception, and BI means Collaboration. I’m not sure what is next.

Business Intelligence continues to evolve, but I think we can say that it has some core tenets:

  1. Data Visualization
  2. Flexible and Rich Data Models (optimally supported by OLAP)
  3. Reduced dependence upon Technology to extract information from business systems

I think the definition of CPM is much easier to get your arms around. There is a great deal of overlap with Business Intelligence in terms of tools, but CPM refers to something much more specialized and complex.

Brian Berry is a Director of Technology Consulting with BlumShapiro, focusing on Microsoft Business Intelligence solutions, with a strong focus on Systems Integration, Master Data Management and PerformancePoint Services. He has been helping companies optimize their investments in Microsoft technology for over 12 years.

Simple Solutions in Dynamics CRM – Reporting

I continue to be amazed at how easy it is to work with, extend, and build complete solutions with Dynamics CRM 4. Some Marketing users I support have been building reports with the Report Wizard tool. This is a great tool for very simple reports. However, if you are looking for any kind of summary information, or for calculations that are not built in as a calculated attribute, the wizard cannot do it.

I had heard that there were 3 easy ways to develop reports in CRM.

  1. Use the Report Wizard
  2. Start with the Report Wizard and Customize in Visual Studio
  3. Develop in Visual Studio

Option #2 looks appealing, as it seems to hold out the promise of getting you 80% of the way to completion before you touch the RDL. However, there is one big problem: you won’t be able to preview the report in Visual Studio. This is because the main dataset for the report (named DSMain) actually uses dynamic SQL to generate its query. The query builders that come with Report Designer do not support dynamic SQL, and even if they did, you would not be able to set the parameters the way you want to. I would say this option is only well suited for styling and adding a company brand to your custom CRM reports.
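
To see why the designer balks, here is a heavily simplified sketch of the kind of dynamic-SQL dataset query involved; the parameter name and column list are illustrative, not the wizard’s actual output.

    -- Heavily simplified sketch only: the real wizard-generated query is more involved,
    -- but it follows this general pattern of assembling its SQL at run time.
    -- @CRM_FilteredAccount stands in for a CRM pre-filtering report parameter whose
    -- value is itself a query (e.g. 'SELECT * FROM FilteredAccount').
    DECLARE @SQL nvarchar(max);
    SET @SQL = N'SELECT name, createdon FROM (' + @CRM_FilteredAccount + N') AS account';
    EXEC (@SQL);

Because the statement is only built when the report runs, the Report Designer query builder cannot determine the field list, and Preview fails.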

This morning I tried Option #3, and I found it to be the easiest to work with. There did not seem to be any real limitation on how the report was designed. However, I can tell you that when designing your SQL queries, you want to go against the set of Filtered Views in CRM, which are designed to offer precisely this kind of access to the underlying data. I have found that as I build out my custom entities, attributes and relationships, all of these customizations generate a corresponding FilteredX view in the Organization data store. In other words, if you create a custom entity named “Photo” (I am removing any customization prefix for simplicity), you should look to report on Photos from a view named FilteredPhoto. Relationships to other entities are manifested in these views as well; as you might expect, a Many-to-Many relationship defined between entities (custom or system) becomes a view which can be used to represent that relationship. Not only are your customizations First Class Citizens in the CRM data schema, but easy access to the data is made available immediately.
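
For example, a report query for the hypothetical Photo entity above might look something like this; the attribute and relationship names are illustrative, and in practice the entity and its attributes would carry your customization prefix.

    -- Illustrative query against CRM filtered views for a hypothetical custom "Photo" entity.
    -- Filtered views enforce CRM security for the calling user, so the report returns only
    -- the rows that user is permitted to see.
    SELECT  p.photoid,                       -- illustrative primary key attribute
            p.name        AS PhotoName,
            p.createdon,
            a.name        AS AccountName
    FROM    FilteredPhoto   AS p
    JOIN    FilteredAccount AS a
            ON a.accountid = p.accountid     -- illustrative lookup attribute on Photo
    WHERE   p.createdon >= DATEADD(month, -1, GETDATE());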

 

Brian Berry is a Director of Technology Consulting with BlumShapiro, focusing on Microsoft Business Intelligence solutions, with a strong focus on Systems Integration, Master Data Management and PerformancePoint Services. He has been helping companies optimize their investments in Microsoft technology for over 12 years.

Day 11 – Location-Based Metadata

One of SharePoint’s greatest strengths is also an area that many organizations struggle with – metadata. Metadata, by definition, is data about data. Within the context of SharePoint it often refers to the columns of information you store on a document library. For example, in a standard document library, “Created By” tracks who created the document. Other columns track the date the document was created, modified, etc.

This type of metadata has traditionally been automatic. In other words, a user didn’t need to specify the date they created the document or that they were the one that created it. However, other metadata columns, especially those that are created to track specific, custom pieces of data have not been automatic.

Say, for example, that you wanted to add a column called “Project” to a document library so that documents are easily findable through a search on the project name. This works well if you can get your users to fill in the name of the project, and you can make the column a required value, but that’s not always possible. So, how does SharePoint 2010 address this? Location-based metadata!

Location-based metadata manages the default values of metadata fields based on location and applies them so that they are available when the user edits a document. When a user interacts with a Microsoft SharePoint Server 2010 site, SharePoint Server applies default values so that they appear the first time that the user sees a document edit form. Microsoft Office 2010 applications such as Microsoft Word 2010 get the default values for a location when the document is saved. When saving the document, the client application gets content type information for the location where the content item is saved, and the server applies the default values and builds the property schema inside the Office 2010 document.

Returning to our “Project” example: if we had a document library with a “Project” column on it, we could set up location-based metadata to pull the name of the site the library is in, which perhaps represents the project we’re working with, and automatically apply that value.

Location-based metadata goes a long way toward enriching content within a SharePoint environment, making it easier to find and organize.

As a partner with BlumShapiro Consulting, Michael Pelletier leads our Technology Consulting Practice. He consults with a range of businesses and industries on issues related to technology strategy and direction, enterprise and solution architecture, service oriented architecture and solution delivery.

Master Data Management – Candidate Microsoft Technologies

In my last post, I talked about 4 types of Solution Architecture for an MDM Hub Solution. These architectures seek to address questions of data ownership, data residency and data publication patterns. In this post, I’d like to talk a little bit about our vision for how Microsoft technologies could be used as candidates for implementing each of these.

As I mentioned previously, the Repository describes something of an idealized state, with a single data source for all master data assets. Access to the repository is managed by a well-architected services layer (either a Windows Communication Foundation service or an ASMX-SOAP endpoint), and distributed enterprise clients read from and write to the repository through it. The key technology here is the services layer, implemented with WCF services.

It is interesting to note that SQL Server 2008 R2 Master Data Services includes a WCF services layer; MDS can be used as your central repository. However, there is no opportunity to inject custom business logic into this layer to prevent data from entering the MDS hub – data is loaded into MDS “optimistically”. If you need to inject logic into the services layer, consider rolling your own WCF services, or consider a metadata-driven application like Microsoft Dynamics CRM. I have been very impressed with Dynamics CRM’s ability to quickly design business entities and expose them as strongly typed objects through service contracts (more on that in another post).

With Registry, there is no central data store at all; instead, the Hub is a lookup into data residing in another system. SQL Server Linked Servers are often employed here to redirect clients to the actual data store. However, this can pose scalability problems with large data volumes, because linked servers rely upon the Distributed Transaction Coordinator (DTC) for distributed transactions. We prefer taking a Service Bus approach, meaning that, as with Repository, clients work with data stores via a services interface. Unlike Repository, though, the services client is not fully aware of the service on the other end. This poses security challenges as well.
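
As a purely illustrative sketch of the linked-server flavor of Registry (the server, database and object names below are hypothetical), the hub holds no master data of its own and simply exposes a thin view over a four-part name:

    -- Hypothetical registry hub: no local master data, only a pointer to the owning system.
    EXEC sp_addlinkedserver
         @server     = N'ERP_SRV',     -- illustrative linked server name
         @srvproduct = N'',
         @provider   = N'SQLNCLI',
         @datasrc    = N'erp-sql01';   -- illustrative remote SQL Server instance
    GO

    -- Hub clients query this view; the data continues to live in, and be owned by, the ERP database.
    CREATE VIEW dbo.Customer_Registry
    AS
    SELECT CustomerId, CustomerName, TaxId
    FROM   ERP_SRV.ErpDb.dbo.Customer;
    GO

Any distributed transaction spanning this view and local tables will enlist MSDTC, which is where the scalability concern comes from; the service-bus alternative hides the owning system behind a service contract instead of a four-part name.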

For a lightweight implementation of this, you might consider the Windows Azure AppFabric Service Bus; this will allow you to put your registry in the cloud. However, the registry will be point-to-point: if you need more control or orchestration capabilities, leverage Microsoft BizTalk Server as a full-featured Enterprise Service Bus.

Master Data Services does not have a great deal to offer a Registry Solution.

The Federation pattern is quite well known and has been around for some time: SQL Server replication can help you quickly publish data from point to point. However, the question is: where is the data governance? If the publishing database is not governed by business rules, the architecture will not solve the core MDM issues.
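
For reference, here is a minimal sketch of what that point-to-point publishing looks like with transactional replication; the database, publication, table and server names are hypothetical, and snapshot and agent configuration are omitted.

    -- Minimal transactional replication sketch (hypothetical names; snapshot/agent setup omitted).
    USE MasterDataPub;
    EXEC sp_replicationdboption @dbname = N'MasterDataPub', @optname = N'publish', @value = N'true';

    EXEC sp_addpublication @publication = N'CustomerMaster', @status = N'active';

    EXEC sp_addarticle @publication   = N'CustomerMaster',
                       @article       = N'Customer',
                       @source_owner  = N'dbo',
                       @source_object = N'Customer';

    -- Push the publication to a hypothetical subscribing system.
    EXEC sp_addsubscription @publication       = N'CustomerMaster',
                            @subscriber        = N'CRM_SRV',
                            @destination_db    = N'CrmMasterCopy',
                            @subscription_type = N'Push';

Notice that nothing in this plumbing enforces business rules; replication moves whatever the publishing database contains, which is exactly the governance gap described above.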

Master Data Services provides great value to this architecture through its Subscription Views, which expose master data to subscribing systems as standard SQL Server views. These views can be tied to model versions, preventing subscribers from seeing invalid data in the MDS hub.
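
Once a subscription view has been defined in MDS, a downstream system simply queries it like any other SQL Server view; the view and attribute names below are hypothetical.

    -- Hypothetical MDS subscription view created against a specific (validated) model version.
    -- Subscribing systems read from the view rather than from the internal MDS hub tables.
    SELECT  Code,          -- member code
            Name,          -- member name
            CreditLimit    -- illustrative custom attribute
    FROM    mdm.CustomerSubscriptionView;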

Finally, and this is the tough one, the Hybrid architecture is a full-featured MDM solution which involves the least impact on existing systems. In order to achieve an MDM solution which delivers on the promise of data co-residency, low latency for updates, and full management and data stewardship, we recommend:

  1. Master Data Services for the Business Rules
  2. SharePoint Foundation 2010 / SharePoint Server 2010 for Master Asset collaboration
  3. Microsoft BizTalk Server for Update Integration

In my next post, I’ll talk about some integration scenarios between MDS and SharePoint 2010.

Brian Berry is a Director of Technology Consulting with BlumShapiro, focusing on Microsoft Business Intelligence solutions, with a strong focus on Systems Integration, Master Data Management and PerformancePoint Services. He has been helping companies optimize their investments in Microsoft technology for over 12 years.