The Official System Center Service Manager Blog

Service Manager Data Warehouse and Reporting


Hi, my name is Chad Rowe and I am a Program Manager for the System Center Service Manager team working on the data warehouse and reporting features. I’ve been with the team for about 8 months and prior to that I worked within Microsoft’s IT organization for 9+ years managing operations teams that supported our line of business applications.

Service Manager data warehouse and reporting will give folks the ability to report on operational data in near-real time. It will also have historical and analytical functions to drive strategic service delivery and operations decision making.

There are 3 different elements that make up data warehouse and reporting in Service Manager:

  • Data warehouse – the processes and databases that manage the data from its raw source form to its end form that is used for reporting
  • Reporting infrastructure – the framework that is used for in-console reporting, including the custom controls that support the interface between reports and the DW
  • Reports – consumed by the end user for operational and analytical data

The data warehouse is optimized for reporting purposes and can be extended via management packs. As Travis pointed out in his common platform post, the data warehouse and the reporting infrastructure are a couple of the components that will be used in other System Center products over time, which drives the need for both to be highly extensible.

I plan on doing a series of posts going into detail about the data warehouse and reporting features, starting with the data warehouse and working my way up the stack. As mentioned in John’s setup post, there are two components that make up the data warehouse at a high level: the data warehouse databases and the data warehouse management server.

Data Warehouse Databases - We have 3 databases, DWStagingAndConfig, DWRepository, and DWDataMart.

  • DWStagingAndConfig is where we store all of our management packs, ETL (extract, transform, load) configuration, and other configuration information. It is also the initial store for the source data coming from the Service Manager CMDB.
  • DWRepository is where we transform the extracted source data into the reporting optimized structure.
  • DWDataMart is the database for our published data that gets consumed by the reports.

Data Warehouse Management Server - The DW management server controls all the workflow processes associated with the data warehouse. There are three main processes:

  • Management Pack synchronization
  • Data warehouse schema and report deployment
  • Extract, Transform, Load

These workflow processes are what make the data warehouse tick, from bringing the management packs in to deploying the reports out and all the stuff in between. In my next post, we will dig into the MP synchronization process and see what it does and why it does it. Come back soon to see the MP synchronization diagram (below) in full detail. In the meantime, please leave comments with any questions you may have or any requests for coverage on a particular topic in the data warehouse and reporting areas of Service Manager.

[Figure: MP synchronization diagram]


Data Warehouse – Anatomy of DW/Reporting Deployment


 

In my last post I went over the management pack synchronization process that brings MPs over from Service Manager, and how those MPs drive the structure, data, and reports for data warehouse and reporting. Once those MPs are synchronized between Service Manager and the data warehouse, we need to get the data and/or reports deployed for user consumption.

Sequentially, deployment works in this way (see figure below):

  1. Once all identified MPs are synchronized with the DW, MP sync triggers the report deployment workflow.
  2. Since DWStagingAndConfig is the final destination of the synchronized MPs, the deployment workflow queries the DWStagingAndConfig database for any new or changed reports to deploy and any reports to remove.
  3. The deployment workflow then publishes any new or updated reports to the SQL Server Reporting Services server via the SSRS web services.
  4. SSRS then stores the reports and the appropriate metadata.
  5. The schema deployment workflow is triggered by MP sync.
  6. Once again, the information driving schema changes is retrieved from the DWStagingAndConfig database, based on the newly synchronized MPs.
  7. The schema changes are then deployed to the DWRepository.
  8. Any necessary changes to Extract, Transform, and Load modules are made to the DWStagingAndConfig database.

MPs that contain only Service Manager-specific information will not trigger the deployment activities; they are triggered only by new DW/Reporting-specific elements. In my next post I will dive into what Extract, Transform, and Load (ETL) is, its benefits, and why deployment makes changes to it.

 

[Figure: Anatomy of DW/Reporting deployment]

Data Retention Policies (aka "Grooming") in the Service Manager Database


What is Grooming?  Why Groom?

The ServiceManager database is the “online” or “operational” database in Service Manager.  It is the database which most users are interacting with when they are using Service Manager.  Therefore, you want to keep it running as fast as possible.  Databases are kind of like car engines.  The longer they run, the more they fill up with “junk” and need to be cleansed to operate at peak efficiency.  “Junk” in the case of databases is extraneous data that nobody cares about anymore.  At one point it was useful, but days, weeks, months, or even years later it is highly unlikely to be accessed in the normal course of operations.  Having that extra data in there makes the queries for data that people really care about slower.

The System Center Data Warehouse in Service Manager is intended to be the long term storage and reporting store.  It exists in part to offload the “junk” data from the ServiceManager database.  You see, people care about “junk” data in aggregate – for example, “How many incidents did we have last month compared to the same month last year?”  For the most part, nobody really cares about incident ID: 5233 that occurred last February though.  Sure, it’s there if you really need it in the warehouse, but it’s highly unlikely you’ll need to go back that far and look at the details of that incident.

How Does Grooming Work?

So, to keep the ServiceManager database running at peak efficiency, we periodically cleanse it of that junk via a process we call “Grooming”.  Basically, it works like this…

1)      The extract workflow copies the data (new and updated objects) from the ServiceManager database to the System Center Data Warehouse every 5 minutes (by default).  This creates a copy of the data for long term storage and reporting.  The object still remains in the ServiceManager database so it can be further acted upon as appropriate. It's important to note that even though the data gets into the warehouse every 5 minutes, the reports themselves get access to the fresh data once an hour by default. If you'd like to learn more about what happens to the data between the time it gets copied out of the  ServiceManager database and when it shows up in the reports, see our blog about the Extract, Transform and Load process.

2)      As time goes on, the data loses its relevance.  At that point, one of two things happens to mark an object deleted in the database:

a.       A user, connector, or some other client makes a call to the Data Access Service and says ‘Delete this object’ or

b.      An automated workflow looks for objects which match certain criteria and marks those objects deleted.  For example: Delete all incidents which are Status=Closed and the incident hasn’t been modified in 90 days.

3)      An automated purging workflow actually removes the objects marked deleted from the database forever (but don’t worry, you have a copy in the data warehouse, remember?).

Grooming Data Using the Console

Out of the box, we provide two ways to groom objects using the console:

1)      Grooming configuration items (CIs) or “Manual Grooming” – Only users in the Advanced Operators and Administrators user roles can select a configuration item in the Configuration Items workspace in their security scope and click Delete in the task pane.

 

 

 

Despite what it sounds like, this does not actually delete the object from the database.  It merely changes the Object Status enumeration data type property on the configuration item from ‘Active’ (System.ConfigItem.ObjectStatusEnum.Active)  to ‘Pending Delete’ (System.ConfigItem.ObjectStatusEnum.PendingDelete).  When the object is in this ‘Pending Delete’ state it will no longer be displayed in any configuration item views or in the Select Object dialog.  It will only be shown in the Deleted Items view in the Administration workspace.  From there you can choose to Restore the object by clicking Restore Items or Remove the object by clicking Remove Items in the task pane. 

 

 

 

Restore sets the Object Status property back to ‘Active’ so it will show up in the Configuration Items views again.  Remove sets the Object Status property to ‘Deleted’ (System.ConfigItem.ObjectStatusEnum.Deleted) and tells the Data Access service to mark the object as Deleted (#2a above).  This changes the value of the IsDeleted field on the BaseManagedEntity table in the database from 0 to 1.  The object is not yet actually dropped from the database.  That’s what the purging workflow (#3 above) is for.
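If you are curious which objects are currently sitting in this marked-deleted state, a quick peek is possible with a query like the sketch below against the ServiceManager database. Keep in mind this is unsupported direct querying, offered only for illustration; the columns used are the ones described above.

SELECT TOP 100 BME.BaseManagedEntityId, BME.FullName, BME.LastModified
FROM dbo.BaseManagedEntity BME (nolock)
WHERE BME.IsDeleted = 1   -- marked deleted, waiting for the purging workflow
ORDER BY BME.LastModified DESC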

 

2)      Grooming Work Items or “Automated Grooming” - The second method is for Problem, Incident, and Change Request work items which are automatically groomed based on certain criteria.  You can go to the Settings view in the Administration workspace and configure the Retention Period settings for each of these.

 

 

 

The default retention period for these work items is:

·         Incidents: 90 days

·         Change requests: 365 days

·         Problems: 365 days

You can configure these values to be anything from 0 up to 10,000 days.  If you set it to 0, there is essentially no retention, and any object that matches the criteria will be deleted the next time the grooming workflow runs.

The retention period here is really talking about how many days must have passed since the last time the object was modified before it can be deleted.

The object must also meet other criteria.  For the work items we provide out of the box, the additional criteria are:

·         Incidents: Status = Closed (IncidentStatusEnum.Closed)

·         Change request: Status = Closed (ChangeStatusEnum.Closed)

·         Problem: Status = Closed (ProblemStatusEnum.Closed)

So, to summarize and provide an example, an incident must be Status = Closed AND not be modified for more than 90 days before it will be deleted.
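To get a feel for how many incidents currently qualify, you could adapt the grooming criteria (shown in full in the Grooming Extensibility section below) into a count. This is only an illustrative, unsupported sketch; it reuses the status column and the Closed enumeration GUID from the incident grooming criteria later in this post, and assumes LastModified is stored in UTC:

SELECT COUNT(*) AS GroomableIncidents
FROM dbo.BaseManagedEntity BME (nolock)
JOIN dbo.MT_System$WorkItem$Incident I ON I.BaseManagedEntityId = BME.BaseManagedEntityId
WHERE BME.IsDeleted = 0
  AND BME.LastModified < DATEADD(DAY, -90, GETUTCDATE())   -- not modified in 90 days
  AND I.Status_785407A9_729D_3A74_A383_575DB0CD50ED = 'AED4C69E-1891-A855-AEB4-6E456C6FA33F'   -- Closed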

 

When an object is deleted, the objects that are hosted (System.Hosting) by that object or have a membership relationship (System.Membership) with that object are also deleted.  For example, when a change request is deleted, the activities that are members of that change request are also deleted.  Objects which are related to (System.Reference) or contained by (System.Containment) the object being deleted are not deleted.  For example, if an incident is deleted, the computer it is related to is not deleted.

 

Once an object is marked deleted it will no longer be shown in the console or returned by a query to the Data Access Service.  As with the configuration items, the grooming workflow will only mark the objects IsDeleted = 1.  It is not actually dropped from the database yet though.  That’s the job of the purging workflow.

Grooming Workflows

There are five grooming workflows for the ServiceManager database.  They are implemented as Rule workflows:

·         Microsoft.ServiceManager.Grooming.GroomEntities

Management Pack: ServiceManager.Grooming.Configuration

This rule is the primary grooming workflow for objects.  It runs every night at midnight on the management server that is currently configured to run workflows within the management group.

·         Microsoft.ServiceManager.Grooming.GroomSubscriptionLogs

Management Pack: ServiceManager.Grooming.Configuration

This rule grooms the subscription data source logs.  This is an internal log store that you should never need to worry about.  It runs every 15 minutes on the management server.

·         Microsoft.ServiceManager.Grooming.GroomChangeLogs

Management Pack: ServiceManager.Grooming.Configuration

This rule grooms the object history logs that keep track of every property and relationship change of many of the objects in the database.  It runs every day at 2:00 in the morning on the management server.

·         Microsoft.SystemCenter.SqlJobs.PartitioningAndGrooming

Management Pack: ServiceManager.Core.Library

This rule does some additional grooming of system data types and partitions tables so that entire tables of internal data can be dropped.  This workflow isn’t really relevant to Service Manager.  It runs every day at midnight on the management server.

·         Microsoft.SystemCenter.SqlJobs.DiscoveryDataPurging

Management Pack: ServiceManager.Core.Library

This rule is the purging workflow that looks for objects marked IsDeleted = 1 and actually drops those records from the database completely.  It runs every day at 2:00 in the morning on the management server.

 

A couple of notes about these:

·         There are actually two rules with the ID Microsoft.SystemCenter.SqlJobs.PartitioningAndGrooming.  One of them is in the ServiceManager.Core.Library MP and the other is in the Microsoft.SystemCenter.Internal MP.  The rule in the Microsoft.SystemCenter.Internal MP is a legacy rule that is disabled by a rule override in the ServiceManager.Core.Library such that only the rule in the ServiceManager.Core.Library MP runs.  The same is true of the Microsoft.SystemCenter.SqlJobs.DiscoveryDataPurging rule.

·         The ServiceManager.Grooming.Configuration MP is an unsealed MP, which means you can change the configuration of those grooming rules, such as the schedule and whether or not they are enabled.  For example, you could change the schedule for the GroomEntities rule from midnight to 3:00 in the morning like this:

            <Scheduler>
              <SimpleReccuringSchedule>
                <Interval Unit="Days">1</Interval>
                <SyncTime>03:00</SyncTime>
              </SimpleReccuringSchedule>
              <ExcludeDates />
            </Scheduler>

 

Or you could disable object grooming altogether:

 

<RuleID="Microsoft.ServiceManager.Grooming.GroomEntities"Enabled="false"…

 

To do this, just export out the MP, modify it, increment the version number in the Manifest section and reimport it.  See Hacking Management Pack XML Like a Pro.

Grooming Extensibility

Partners and customers can extend the grooming infrastructure to add their own grooming criteria.  To do this you need to create a new instance of the System.GroomingConfiguration class.  This is the model of that class:

<ClassTypeID="System.GroomingConfiguration"Base="System.SolutionSettings">

  <PropertyID="TargetId"Type="guid"Key="true"Required="true" />

  <PropertyID="Category"Type="enum"EnumType="GroomingCategory"Key="true"Required="true" />

  <PropertyID="RetentionPeriodInMinutes"Type="int"Required="true" />

  <PropertyID="IsInternal"Type="bool"Required="true" />

  <PropertyID="BatchSize"Type="int"Required="true" />

  <PropertyID="StoredProcedure"Type="string"Required="true" />

  <PropertyID="Criteria"Type="string"MaxLength="10000" />

</ClassType>

Here is some guidance on creating instances of this class:

·         Set the TargetId to the ID of the class whose objects you are grooming.

·         Set the Category equal to GroomingCategory.Entity. This enum hierarchy is in the System.AdminItem.Library MP.

·         Set RetentionPeriodInMinutes to whatever you want – remember this is in minutes.  This is how long you want to keep an object after the last time it was modified.

·         Set IsInternal = false.

·         Set BatchSize to some positive integer.  We are using 1,000 for incident, problem, and change requests.

·         Set StoredProcedure to ‘p_GroomManagedEntity’ (without the single quotes)

·         Set the Criteria equal to the T-SQL statement that selects just the objects you want to have deleted.  The @Retention variable will be converted and set for you at runtime according to the value defined in RetentionPeriodInMinutes. The @TargetTypeId variable will be set for you at runtime with the value in TargetId.  For example, here are the Criteria for Incident grooming:

SELECT BME.[BaseManagedEntityId]
FROM dbo.[BaseManagedEntity] BME
JOIN dbo.MT_System$WorkItem$Incident I ON I.BaseManagedEntityId = BME.BaseManagedEntityId
WHERE
BME.[IsDeleted] = 0 AND
BME.[LastModified] < @Retention AND
BME.BaseManagedTypeId = @TargetTypeId AND
I.Status_785407A9_729D_3A74_A383_575DB0CD50ED = 'AED4C69E-1891-A855-AEB4-6E456C6FA33F'

As a partner delivering a solution to customers, you may want to groom your objects out of the database and possibly give customers control over when things are groomed.  You can provide a user interface for creating or updating these instances, similar to what we did for work item grooming configuration above.  Also, you can create instances of this class during the setup of your solution on top of the Service Manager product.

Data Warehouse Grooming

For more information on DW grooming see this blog post:

http://blogs.technet.com/b/servicemanager/archive/2011/06/07/how-much-data-do-we-retain-in-the-service-manager-data-warehouse.aspx

Update Oct 25, 2012 - Content contributed by Manoj Parvathaneni

If you want to look at a history of the grooming job execution you can run this query in the ServiceManager database:

 

SELECT TOP 1000 *
FROM InternalJobHistory (nolock)
--WHERE Command LIKE 'Exec dbo.p_GroomManagedEntity A604B942%'    -- Incident grooming (once a day)
--WHERE Command LIKE 'Exec dbo.p_DataPurging'                     -- Incident purging (once a day)
--WHERE Command LIKE 'exec dbo.p_GroomChangeLogs%'                -- History grooming (EntityChangeLog) (once a day)
--WHERE Command LIKE 'exec dbo.p_GroomSubscriptionSpecificECL%'   -- Grooming subscription-specific EntityChangeLog rows (every fifteen minutes)
--WHERE Command LIKE 'exec dbo.p_GroomSubscriptionSpecificRECL%'  -- Grooming subscription-specific RelatedEntityChangeLog rows (every fifteen minutes)
ORDER BY InternalJobHistoryId DESC

Each of the commented WHERE clauses filters for a different type of grooming; uncomment one and the query will return the last 1000 occurrences of that type of grooming. If you see a non-NULL TimeFinished value and a Status of 1, that means grooming completed successfully. If you see a NULL value for TimeFinished and a Status of 0 or 2, it means something went wrong:

1.      There were too many rows to groom and we couldn’t finish it within 30 minutes.

2.      A deadlock happened and SQL killed the grooming spid.

3.      Some other failure, in which case Status code would be 2.

In all these cases you should see an event log entry in the Management Server with Event Id 10880. Note that for the first two cases above we would retry once more before we give up. So you will see two calls back to back if the first one fails.
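If you only want to see the failures, you can narrow the same history query down to jobs that never finished. A minimal sketch, using the columns described above:

SELECT TOP 100 InternalJobHistoryId, Command, TimeFinished, Status
FROM InternalJobHistory (nolock)
WHERE TimeFinished IS NULL AND Status IN (0, 2)   -- jobs that did not complete successfully
ORDER BY InternalJobHistoryId DESC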

Introduction to the Data Warehouse: Custom Fact Tables, Dimensions and Outriggers


So you’ve deployed the data warehouse and tried out the reports, but now you want to make it your own. You may be trying to recreate some reports you’ve been using forever and now need to run them against the Service Manager platform, or perhaps you simply want to take full advantage of the customizations you’re doing in the Incident or Change Management solutions and want those changes to flow through to the reports. Either way, if you’re stumped on how to proceed, our latest posts from the Platform team will help you extend and customize the Data Warehouse to enable the in-depth analyses you’re aiming for.

Danny Chen, one of the Developers on our common Platform team, did a great job writing up how to create fact tables, dimensions and outriggers. If you’re not familiar with data warehousing principles, I’ll provide some clarity as to what these terms mean and how they apply to Service Manager below. If you understand the principles well enough and are chomping at the bit to dig into the details, here are the links:

1. A Deep Dive on Creating Relationship Facts in the Data Warehouse

2. A Deep Dive on Creating Outriggers and Dimensions in the Data Warehouse

Principles behind the Platform: Dimensional modeling and the star schema

The data warehouse is a set of databases and processes to populate those databases automatically. At a high level, the end goal is to populate the data mart where users will run reports and perform analyses to help them manage their business. We keep this data around longer in the warehouse than in the CMDB because its usefulness for trending and analysis generally outlives its usefulness for normal transactional processing needs.

A data warehouse is optimized for aggregating and analyzing a lot of data at once in a lot of different, unpredictable ways. This differs from transactional processing systems, which are optimized for write access on few records in any given transaction, and those transactions are more predictable in behavior.

To optimize the data warehouse for performance and ease of use, we use the Kimball approach to dimensional modeling. What this means to you is that tables in the DWDataMart database are logically grouped into subject matter areas which resemble a star when laid out in a diagram, so these groupings are often called “star schemas”.

  1. In the center of the star is a Fact table. Fact tables represent relationships, measures & key performance indicators. They are normally long and skinny as they have relatively few columns but contain a large number of transactions.
  2. The fact table joins to Dimension tables, which represent classes, properties & enumerations. Dimension tables usually contain far fewer rows than fact tables but are wider, as they have the interesting attributes by which users slice and dice reports (i.e. status, classifications, date attributes of a class like Created Date or Resolved Date, etc).
  3. An outrigger is a special kind of dimension table which hangs off another dimension table for performance and/or usability reasons.

Generalized representation of a star schema:

[Figure: generalized representation of a star schema]

Consider what a star schema for a local coffee shop might look like. The transactions are the coffee purchases themselves, whereas the dimensions might include:

  1. Date dimension (to roll up the transactions by both Gregorian and fiscal calendars)
  2. Customer dimension (bought the coffee)
  3. Employee dimension (made the coffee)
  4. Product dimension (espresso, drip, latte, breve, etc., and this could get quite complicated if you track the details of Seattleites’ drink orders)
  5. Store dimension
  6. And more

What measures might the fact table have? You could easily imagine:

  1. Quantity sold
  2. Price per Unit
  3. Total Sales
  4. Total Discounts
  5. etc
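To make the coffee shop example concrete, here is a minimal T-SQL sketch of what such a star schema could look like. Every table and column name here is invented purely for illustration; this is not the Service Manager schema:

-- One dimension table per "thing you slice by"
CREATE TABLE DateDim (
    DateKey int PRIMARY KEY,
    CalendarDate date,
    FiscalPeriod varchar(10)
);

CREATE TABLE ProductDim (
    ProductKey int PRIMARY KEY,
    ProductName varchar(50)    -- espresso, drip, latte, ...
);

-- The long, skinny fact table: one row per purchase, foreign keys plus measures
CREATE TABLE CoffeeSalesFact (
    DateKey int REFERENCES DateDim (DateKey),
    ProductKey int REFERENCES ProductDim (ProductKey),
    QuantitySold int,
    PricePerUnit money,
    TotalSales money
);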

IT processes aren’t so different from the local coffee shop when it comes time to design your dimensional model. There is a set of transactions which happen, like incident creation/resolution/closure, which produce some interesting and useful metrics (time to resolution, resolution target adherence, billable time incurred by analysts, duration in status, etc).

When thinking about extending and customizing your data warehouse, think about the business questions you’d like to be able to answer, read up on dimensional modeling for some tips on best practices, and then check out Danny’s posts on creating fact tables, dimensions and outriggers for the technical know-how.

 

And of course, we’re always here to help so feel free to send me your questions.

Create a report model with localized outriggers (aka “Lists”)


If you've watched my Reporting and Business Intelligence with Service Manager 2010 webcast and followed along in your environment, you may have unintentionally created a report which displays enumeration guids instead of Incident Classification strings, like below. Not too useful. In this post I'll tell you the simple way to fix your report model to include the display strings for outriggers for a specific language, and in a follow-on post I'll share more details as to how to localize your reports and report models.

You may be wondering what happened. This is because we made a change in SP1 to consistently handle outrigger values, which removed the special handling we had for our out-of-the-box enumerations in outriggers. If you're now wondering what outriggers are, read up on the types of tables in the data warehouse in my last post, in which I provided the Service Manager data warehouse schema.

Here's the screenshot of the report we need to fix; the rest of the post will explain how to fix it.

 

Replace table binding in the Data Source view with Query binding

Rather than including references to the outriggers directly (in the screenshot below the outriggers are IncidentClassificationvw, IncidentSourcevw, IncidentUrgencyvw, and IncidentStatusvw) we'll replace these with named queries.

To do this, you simply right click the "table" and select Replace Table > With New Named Query.

 

You then paste in your query, which joins to DisplayStringDimvw and filters on the language of your choice. Repeat for each outrigger.

SELECT outrigger.IncidentClassificationId, Strings.DisplayName AS Classification

FROM IncidentClassificationvw AS outrigger INNER JOIN

DisplayStringDimvw AS Strings ON outrigger.EnumTypeId = Strings.BaseManagedEntityId

WHERE (Strings.LanguageCode = 'ENU')
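For example, the named query for the Source outrigger would follow the same pattern. Note that the key column name I use here (IncidentSourceId) is an assumption based on the naming convention of the Classification outrigger above, so check the actual column name in your view:

SELECT outrigger.IncidentSourceId, Strings.DisplayName AS Source
FROM IncidentSourcevw AS outrigger
INNER JOIN DisplayStringDimvw AS Strings ON outrigger.EnumTypeId = Strings.BaseManagedEntityId
WHERE (Strings.LanguageCode = 'ENU')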

 

Create & publish your report model

To create a simple report model, right click the Report Models node in the Solution Explorer (right pane) and select Add New Report Model. Follow the wizard, selecting the default options.

 

If you want to clean it up a little, double click the Report Model, then select IncidentDim on the left.

Scroll down the properties in the center and you'll notice there is now a Role added to the IncidentDim named Classification Incident Classification, along with an Attribute named Classification. This is because using outriggers to describe dimensions is an industry standard approach and SQL BI Dev Studio understands that these outriggers should essentially get added as properties directly to the Incident dimension for the easiest end user report authoring experience.

The attribute is populated directly by the column I mentioned you should not use in reports, so you should select and delete that attribute from your model. You may also rename the Role "Classification Incident Classification" to a more user-friendly name like "Incident Classification" if you'd like to.

 

Now save, right click your report model and click Deploy.

Create a report to try out your new report model

Open up SQL Reporting Services Report Builder (below screenshots are using Report Builder 3.0). If you haven't gotten a chance to check it out yet, here's a good jump start guide.

 

Follow the wizard, select your newly published report model:

 

Drag & drop your Incident Classification and Incidents measure. Hit the red ! to preview.

 

Drag & drop to lay out the report

 

Continue with the wizard, selecting the formatting options of your choice. If you would like, you can then resize the columns, add images and more. For our quick and simple example, though, I'm going to intentionally leave formatting reports for another post. If you've been following along, your report should now look like this:

 

Go ahead and publish to the SSRS server under the /SystemCenter/ServiceManager/ folder of your choice to make the report show up in the console.

 

 

How long does the Service Manager Data Warehouse retain historical data?


The short answer is that we keep data in the warehouse for 3 years for fact tables and forever for dimension and outrigger tables. Antoni Hanus, a Premier Field Engineer with Microsoft, has put together the detailed steps on how to adjust this retention period so you can retain data longer or groom it out more aggressively.

DISCLAIMER: Microsoft does not support direct querying or manipulation of the SQL Databases. 

To learn more about the different types of tables in the data warehouse, see the blog post which describes the data warehouse schema.

To determine which are the fact tables and which are the dimension tables, you can run the appropriate query against your DWDataMart database:

SELECT WarehouseEntityName, ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity (nolock) we
JOIN etl.WarehouseEntityType (nolock) wet ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

SELECT WarehouseEntityName, ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity (nolock) we
JOIN etl.WarehouseEntityType (nolock) wet ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Dimension'

NOTE: Microsoft does not support directly accessing or managing the tables (dimensions, facts, or outriggers).

Instead, please use the views as defined by the ‘ViewName’ column in the above query.

Fact Table Retention Settings

There are two types of retention settings in the data warehouse:

1) Global - The global retention period (set to 3 years by default) which any subsequently created fact tables use as their default retention setting.

2) Individual fact – The granular retention period for each individual fact table (uses the global setting of 3 years, unless individually modified).

Global:

The default global retention period for data stored in the Service Manager Data Warehouse is 3 years, so all OOB (out-of-the-box) fact tables use 3 years as the default retention setting.

Any subsequently created fact tables will use this setting upon creation for their individual retention setting.

The default Global setting value is 1576800, which is 3 years (1576800 = 1440 minutes per day * 365 days * 3 years)

This value can be verified by running the following SQL Query against the DWDataMart database:

select ConfiguredValue from etl.Configuration where ConfigurationFilter = 'DWMaintenance.grooming'
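If you wanted to change that global default, the same configuration row could in principle be updated. Treat the following as an unsupported, illustrative sketch only; remember the value is in minutes, so 5 years is 1440 * 365 * 5 = 2,628,000:

USE DWDataMart
UPDATE etl.Configuration
SET ConfiguredValue = '2628000'   -- 5 years, in minutes
WHERE ConfigurationFilter = 'DWMaintenance.grooming'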

Individual Fact Tables:

Individual fact tables will inherit the global retention value upon creation, or can be customized to a value that is different from the default global setting. 

OOB individual fact tables that were created upon installation can also be individually configured with a specific retention value as required.

All of the fact tables in the database can be returned by running the following query against the DWDataMart database:

SELECT WarehouseEntityName, ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity (nolock) we
JOIN etl.WarehouseEntityType (nolock) wet ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

An example of an OOB fact table returned is ActivityStatusDurationFact, which has a WarehouseEntityId of 81:

[Figure: query results showing the ActivityStatusDurationFact row with WarehouseEntityId 81]

The corresponding retention setting for this fact table is stored in the etl.WarehouseEntityGroomingInfo table, so if we run the following query, the ‘RetentionPeriodInMinutes’ field will show us the individual retention configured for that particular table.

Query:

select warehouseEntityID, RetentionPeriodInMinutes from etl.WarehouseEntityGroomingInfo where WarehouseEntityId = 81

Result:

[Figure: query result showing RetentionPeriodInMinutes for WarehouseEntityId 81]

A SQL statement such as the following could be used to update an individual fact table to an appropriate value:

USE DWDataMart
UPDATE etl.WarehouseEntityGroomingInfo
SET RetentionPeriodInMinutes = [number of minutes to retain data]
WHERE WarehouseEntityId = [WarehouseEntityId of fact table to update]
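For example, to retain the ActivityStatusDurationFact table (WarehouseEntityId 81 from the earlier query) for 5 years instead of 3, the statement would look like the sketch below. Again, this is unsupported direct manipulation of the database, offered for illustration only:

USE DWDataMart
UPDATE etl.WarehouseEntityGroomingInfo
SET RetentionPeriodInMinutes = 2628000   -- 1440 minutes * 365 days * 5 years
WHERE WarehouseEntityId = 81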

Service Manager Data Warehouse and Reporting

$
0
0

Hi, my name is Chad Rowe and I am a Program Manager for the System Center Service Manager team working on the data warehouse and reporting features. I’ve been with the team for about 8 months and prior to that I worked within Microsoft’s IT organization for 9+ years managing operations teams that supported our line of business applications.

Service Manager data warehouse and reporting will give folks the ability to report on operational data in near-real time. It will also have historical and analytical functions to drive strategic service delivery and operations decision making.

There are 3 different elements that make up data warehouse and reporting in Service Manager:

  • Data warehouse – the processes and databases that manage the data from it’s raw source form to it’s end form that is used for reporting
  • Reporting infra – the framework that is used for in-console reporting that includes the custom controls used to support the interface between reports and the DW
  • Reports – consumed by the end user for operational and analytical data

The data warehouse is optimized for reporting purposes and the data warehouse can be extended via management packs. As Travis pointed out in his common platform post, data warehouse and the reporting infrastructure are a couple of those components that will be used in other System Center products over time which drives the need for both to be highly extensible.

I plan on doing a series of posts going into detail about the data warehouse and reporting features, starting with data warehouse and working my way up the stack. As mentioned in John’s setup post there are a couple of different components that make up the data warehouse at a high level, the data warehouse databases and the data warehouse management server.

Data Warehouse Databases - We have 3 databases, DWStagingAndConfig, DWRepository, and DWDataMart.

  • DWStagingAndConfig is where we store all of our management packs, ETL (extract, transform, load) configuration, and other configuration information. It is also the initial store for the source data coming from the Service Manager CMDB.
  • DWRepository is where we transform the extracted source data into the reporting optimized structure.
  • DWDataMart is the database for our published data that gets consumed by the reports.

Data Warehouse Management Server - The DW management server controls all the workflow processes associated with the data warehouse. There are three main processes:

  • Management Pack synchronization
  • Data warehouse schema and report deployment
  • Extract, Transform, Load

These workflow processes are what make the data warehouse tick, from bringing the management packs in to deploying the reports out and all the stuff in between. In my next post, we will dig into the MP synchronization process and see what it does and why it does it. Come back soon to see the MP synchronization diagram (below) in full detail. In the meantime, please leave comments with any questions you may have or any requests for coverage on a particular topic in the data warehouse and reporting areas of Service Manager.

clip_image001

Data Warehouse – Anatomy of DW/Reporting Deployment

$
0
0

 

In my last post I went over the management pack synchronization process that brings over MP’s from Service Manager and how those MP’s drive the structure, data, and reports for data warehouse and reporting. Once those MP’s are synchronized between Service Manager and the data warehouse, we need to get the data and/or reports deployed for user consumption.

Sequentially, deployment works in this way (see figure below):

  1. Once all identified MP’s are synchronized with DW, MP sync triggers the report deployment workflow
  2. Since DWStagingandConfig is the final destination of the MP’s that have been synchronized, the deployment workflow will query the DWStagingandConfig database for any new or changed reports to deploy or any reports to remove.
  3. The deployment workflow will then publish any new or updated reports to the SQL Server Reporting Services server via the SSRS webservices.
  4. SSRS then stores the reports and appropriate metadata.
  5. Schema deployment workflow is triggered by MP sync
  6. Once again, information that is driving schema changes is retrieved from the DWStagingandConfig database based off the newly synchronized MP’s that are driving the changes.
  7. The schema changes are then deployed to the DWRepository.
  8. Any necessary changes to Extract, Transform, and Load modules are made to the DWStagingandConfig database

MP’s that contain only Service Manager specific information will not trigger the deployment activities to execute. They will only be triggered for new DW/Reporting specific elements. In my next post I will dive into what is Extract, Transform, and Load (ETL), its benefits, and why deployment makes changes to it.

 

DeploymentAnatomy


Data Retention Policies (aka "Grooming") in the Service Manager Database

$
0
0

What is Grooming?  Why Groom?

The ServiceManager database is the “online” or “operational” database in Service Manager.  It is the database which most users are interacting with when they are using Service Manager.  Therefore, you want to keep it running as fast as possible.  Databases are kind of like car engines.  The longer they run, the more they fill up with “junk” and need to be cleansed to operate at peak efficiency.  “Junk” in the case of databases is extraneous data that nobody cares about anymore.  At one point it was useful, but days, weeks, months, or even years later it is highly unlikely to be accessed in the normal course of operations.  Having that extra data in there makes the queries for data that people really care about slower.

The System Center Data Warehouse in Service Manager is intended to be the long term storage and reporting store.  It exists in part to offload the “junk” data from the ServiceManager database.  You see, people care about “junk” data in aggregate – for example, “How many incidents did we have last month compared to the same month last year?”  For the most part, nobody really cares about incident ID: 5233 that occurred last February though.  Sure, it’s there if you really need it in the warehouse, but it’s highly unlikely you’ll need to go back that far and look at the details of that incident.

How Does Grooming Work?

So, to keep the ServiceManager database running at peak efficiency we periodically cleanse it with engine dejunker via a process we call “Grooming”.  Basically, it works like this…

1)      The extract workflow copies the data (new and updated objects) from the ServiceManager database to the System Center Data Warehouse every 5 minutes (by default).  This creates a copy of the data for long term storage and reporting.  The object still remains in the ServiceManager database so it can be further acted upon as appropriate. It's important to note that even though the data gets into the warehouse every 5 minutes, the reports themselves get access to the fresh data once an hour by default. If you'd like to learn more about what happens to the data between the time it gets copied out of the  ServiceManager database and when it shows up in the reports, see our blog about the Extract, Transform and Load process.

2)      As time goes on, the data loses its relevance.  Now, one of two things happen to mark an object deleted in the database:

a.       A user, connector, or some other client makes a call to the Data Access Service and says ‘Delete this object’ or

b.      An automated workflow looks for objects which match certain criteria and marks those objects deleted.  For example: Delete all incidents which are Status=Closed and the incident hasn’t been modified in 90 days.

3)      An automated purging workflow actually removes the objects marked deleted from the database forever (but don’t worry you have a copy in the data warehouse, remember?).

Grooming Data Using the Console

Out of the box, we provide two ways to groom objects using the console:

1)      Grooming configuration items (CIs) or “Manual Grooming”– Only users in the Advanced Operators and Administrators user roles can select any configuration item in the Configuration Items workspace in their security scope and click Delete in the task pane.

 

 

 

Despite what it sounds like, this does not actually delete the object from the database.  It merely changes the Object Status enumeration data type property on the configuration item from ‘Active’ (System.ConfigItem.ObjectStatusEnum.Active)  to ‘Pending Delete’ (System.ConfigItem.ObjectStatusEnum.PendingDelete).  When the object is in this ‘Pending Delete’ state it will no longer be displayed in any configuration item views or in the Select Object dialog.  It will only be shown in the Deleted Items view in the Administration workspace.  From there you can choose to Restore the object by clicking Restore Items or Remove the object by clicking Remove Items in the task pane. 

 

 

 

Restore sets the Object Status property back to ‘Active’ so it will show up in the Configuration Items views again.  Remove sets the Object Status property to ‘Deleted’ (System.ConfigItem.ObjectStatusEnum.Deleted) and tells the Data Access service to mark the object to Deleted (#2a above).  This changes the value of the IsDeleted field on the BaseManagedEntity table in the database from 0 to 1.  The object is not yet actually dropped from the database.  That’s what the purging workflow (#3 above) is for.

 

2)      Grooming Work Items or “Automated Grooming” - The second method is for Problem, Incident, and Change Request work items which are automatically groomed based on certain criteria.  You can go to the Settings view in the Administration workspace and configure the Retention Period settings for each of these.

 

 

 

The default retention period for these work items is:

·         Incidents: 90 days

·         Change requests: 365 days

·         Problems: 365 days

You can configure these values to be anything from 0 up to 10,000 days.  If you set it to 0 then there is essentially no retention and any object that matches the criteria the next time the grooming workflow runs will be deleted.

The retention period here is really talking about how many days must have passed since the last time the object was modified before it can be deleted.

The object must also meet other criteria.  For the work items we provide out of the box, the additional criteria are:

·         Incidents: Status = Closed (IncidentStatusEnum.Closed)

·         Change request: Status = Closed (ChangeStatusEnum.Closed)

·         Problem: Status = Closed (ProblemStatusEnum.Closed)

So, to summarize and provide an example, an incident must be Status = Closed AND not be modified for more than 90 days before it will be deleted.

 

When an object is deleted, the objects that are hosted (System.Hosting) by that object or have a membership relationship (System.Membership) with that object are also deleted.  For example, when a change request is deleted, the activities that are members of that change request are also deleted.  Objects which are related to (System.Reference) or contained by (System.Containment) the object being deleted are not deleted.  For example, if an incident is deleted, the computer it is related to is not deleted.

 

Once an object is marked deleted it will no longer be shown in the console or returned by a query to the Data Access Service.  As with the configuration items, the grooming workflow will only mark the objects IsDeleted = 1.  It is not actually dropped from the database yet though.  That’s the job of the purging workflow.

Grooming Workflows

There are five grooming workflows for the ServiceManager database.  They are implemented as Rule workflows:

·         Microsoft.ServiceManager.Grooming.GroomEntities

Management Pack: ServiceManager.Grooming.Configuration

This rule is the primary grooming workflow for objects.  It runs every night at midnight on the management server that is currently configured to run workflows within the management group.

·         Microsoft.ServiceManager.Grooming.GroomSubscriptionLogs

Management Pack: ServiceManager.Grooming.Configuration

This rule grooms the subscription data source logs.  This is an internal log store that you should never need to worry about.  It runs every 15 minutes on the management server.

·         Microsoft.ServiceManager.Grooming.GroomChangeLogs

Management Pack: ServiceManager.Grooming.Configuration

This rule grooms the object history logs that keep track of every property and relationship change of many of the objects in the database.  It runs every day at 2:00 in the morning on the management server.

·         Microsoft.SystemCenter.SqlJobs.PartitioningAndGrooming

Management Pack: ServiceManager.Core.Library

This rule does some additional grooming of system data types and partitions tables so that entire tables of internal data can be dropped.  This workflow isn’t really relevant to Service Manager.  It runs every day at midnight on the management server.

·         Microsoft.SystemCenter.SqlJobs.DiscoveryDataPurging

Management Pack: ServiceManager.Core.Library

This rule is the purging workflow that looks for objects marked IsDeleted = 1 and actually drops those records from the database completely.  It runs every day at 2:00 in the morning on the management server.

 

A couple of notes about these:

·         There are actually two rules with the ID Microsoft.SystemCenter.SqlJobs.PartitioningAndGrooming.  One of them is in the ServiceManager.Core.Library MP and the other is in the Microsoft.SystemCenter.Internal MP.  The rule in the Microsoft.SystemCenter.Internal MP is a legacy rule that is disabled by a rule override in the ServiceManager.Core.Library such that only the rule in the ServiceManager.Core.Library MP runs.  The same is true of the Microsoft.SystemCenter.SqlJobs.DiscoveryDataPurging rule.

·         The ServiceManager.Grooming.Configuration MP is an unsealed MP which means you can change the configuration of those grooming rules such as the schedule and whether or not they are enabled.  For example, you could change the schedule for the from midnight to 3:00 in the morning like this:

              <SimpleReccuringSchedule>

                <IntervalUnit="Days">1</Interval>

                <SyncTime>03:00</SyncTime>

              </SimpleReccuringSchedule>

              <ExcludeDates />

            </Scheduler>

 

Or you could disable object grooming altogether:

 

<RuleID="Microsoft.ServiceManager.Grooming.GroomEntities"Enabled="false"…

 

To do this, just export out the MP, modify it, increment the version number in the Manifest section and reimport it.  See Hacking Management Pack XML Like a Pro.

Grooming Extensibility

Partners and customers can extend the grooming infrastructure to add their own grooming criteria.  To do this you need to create a new instance of the System.GroomingConfiguration class.  This is the model of that class:

<ClassTypeID="System.GroomingConfiguration"Base="System.SolutionSettings">

  <PropertyID="TargetId"Type="guid"Key="true"Required="true" />

  <PropertyID="Category"Type="enum"EnumType="GroomingCategory"Key="true"Required="true" />

  <PropertyID="RetentionPeriodInMinutes"Type="int"Required="true" />

  <PropertyID="IsInternal"Type="bool"Required="true" />

  <PropertyID="BatchSize"Type="int"Required="true" />

  <PropertyID="StoredProcedure"Type="string"Required="true" />

  <PropertyID="Criteria"Type="string"MaxLength="10000" />

</ClassType>

Here is some guidance on creating instances of this class:

·         You should set the TargetId to the ID of the class that you are grooming objects of.

·         Set the Category equal to GroomingCategory.Entity. This enum hierarchy is in the System.AdminItem.Library MP.

·         Set RetentionPeriodInMinutes to whatever you want – remember this is in minutes.  This is how long you want to keep the object for from the last time the object was modified.

·         Set IsInternal = false.

·         Set BatchSize to some positive integer.  We are using 1,000 for incident, problem, and change requests.

·         Set StoredProcedure to ‘p_GroomManagedEntity’ (without the single quotes)

·         Set the Criteria equal to the T-SQL statement that selects just the objects you want to have deleted.  The @Retention variable will be converted and set for you at runtime according to the value defined in RetentionPeriodInMinutes. The @TargetTypeIdvariable will be set for you at runtime with the value in TargetId.  For example, here are the Criteria for Incident grooming:

SELECT BME.[BaseManagedEntityId]

FROM dbo.[BaseManagedEntity] BME

JOIN dbo.MT_System$WorkItem$Incident I ON I.BaseManagedEntityId = BME.BaseManagedEntityId

WHERE

BME.[IsDeleted] = 0 AND

BME.[LastModified] <@RetentionAND

BME.BaseManagedTypeId =@TargetTypeIdAND

I.Status_785407A9_729D_3A74_A383_575DB0CD50ED ='AED4C69E-1891-A855-AEB4-6E456C6FA33F'

As a partner delivering a solution to customers, you may want to groom your objects out of the database and possibly give the customers the control over when things are groomed.  You can provide a user interface for creating these instances or updating them similar to what we did for work item grooming configuration above.  Also, similar to what we did, you can create instances of this class during the setup of your solution on top of the Service Manager product.

Data Warehouse Grooming

For more information on DW grooming see this blog post:

http://blogs.technet.com/b/servicemanager/archive/2011/06/07/how-much-data-do-we-retain-in-the-service-manager-data-warehouse.aspx

Update Oct 25, 2012 - Content contributed by Manoj Parvathaneni

If you want to look at a history of the grooming job execution you can run this query in the ServiceManager database:

 

select
top 1000 * from InternalJobHistory (nolock)

--where
Command like 'Exec dbo.p_GroomManagedEntity A604B942%'    -- Incident
grooming (once a day)

--where
Command like 'Exec
dbo.p_DataPurging'                    
-- Incident purging (once a day)

--where
Command like 'exec
dbo.p_GroomChangeLogs%'               
-- History grooming (EntityChangeLog) (once a day)

--where
Command like 'exec dbo.p_GroomSubscriptionSpecificECL%'   -- Grooming
subscription specifc EntityChangeLog rows (every fifteen minutes)

--where
Command like 'exec dbo.p_GroomSubscriptionSpecificRECL%'  -- Grooming
subscription specifc RelatedEntityChangeLog rows  (every fifteen minutes)

order
by InternalJobHistoryId desc

Each of these clauses queries a different type of grooming. The query will return the last 1000 occurrences of that type of grooming. If you see a non-NULL TimeFinished value and a Status of 1 that means grooming successfully completed. If you see a NULL value for TimeFinished and a Status of 0 or 2, it means something went wrong:

1.      There were too many rows to groom and we couldn’t finish it within 30 minutes.

2.      A deadlock happened and SQL killed the grooming spid.

3.      Some other failure, in which case Status code would be 2.

In all these cases you should see an event log entry in the Management Server with Event Id 10880. Note that for the first two cases above we would retry once more before we give up. So you will see two calls back to back if the first one fails.

Introduction to the Data Warehouse: Custom Fact Tables, Dimensions and Outriggers

$
0
0

So you’ve deployed the data warehouse and tried out the reports but now you want to want to make it your own. You may be trying to recreate some reports you’ve been using forever and now need to run them against the Service Manager platform, or perhaps you simply want to take full advantage of the customizations you’re doing in the Incident or Change Management solutions and want those changes to flow through to the reports. Either way, if you’re stumped on how to proceed our latest posts from the Platform team will help you extend and customize the Data Warehouse to enable the in-depth analyses you’re aiming for.

Danny Chen, one of the Developers on our common Platform team, did a great job writing up how to create fact tables, dimensions and outriggers. If you’re not familiar with data warehousing principles, I’ll provide some clarity as to what these terms mean and how they apply to Service Manager below. If you understand the principles well enough and are chomping at the bit to dig into the details, here are the links:

1. A Deep Dive on Creating Relationship Facts in the Data Warehouse

2. A Deep Dive on Creating Outriggers and Dimensions in the Data Warehouse

Principles behind the Platform: Dimensional modeling and the star schema

The data warehouse is a set of databases and processes to populate those databases automatically. At a high level, the end goal is to populate the data mart where users will run reports and perform analyses to help them manage their business. We keep this data around longer in the warehouse than in the CMDB because it’s usefulness for trending and analysis generally outlives it’s usefulness for normal transactional processing needs.

A data warehouse is optimized for aggregating and analyzing a lot of data at once in a lot of different, unpredictable ways. This differs from transactional processing systems which are optimized for write access on few records in any given transaction , and those transactions are more predictable in behavior.

To optimize the data warehouse for performance and ease of use, we use the Kimball approach to dimensional modeling. What this means to you is that tables in the DWDataMart database are logically grouped into subject matter areas which resemble a star when laid out in a diagram, so these groupings are often called “star schemas”.

  1. In the center of the star is a Fact table. Fact tables represent relationships, measures & key performance indicators. They are normally long and skinny as they have relatively few columns but contain a large number of transactions.
  2. The fact table joins to Dimension tables, which represent classes, properties & enumerations. Dimension tables usually contain far fewer rows than fact tables but are wider as they have the interesting attributes by which users slice and dice reports (ie status, classifications, date attributes of a class like Created Date or Resolved Date, etc).
  3. An outrigger is a special kind of dimension table which hangs off another dimension table for performance and/or usability reasons.

Generalized representation of a star schema:

Star Schema

Consider what a star schema for a local coffee shop might look like. The transactions are the coffee purchases themselves, whereas the dimensions might include:

  1. Date dimension (to rollup the transaction by both gregorian and fiscal calendars)
  2. Customer dimension (bought the coffee)
  3. Employee dimension (made the coffee)
  4. Product dimension (espresso, drip, latte, breve,  etc etc…. and this could get quite complicated if you track the details of Seattleites drink orders)
  5. Store dimension
  6. And more

What measures might the fact table have? You could easily imagine:

  1. Quantity sold
  2. Price per Unit
  3. Total Sales
  4. Total Discounts
  5. etc

IT processes aren’t so different from the local coffee shop when it comes time to designing your dimensional model. There are a set of transactions which happen, like incident creation/resolution/closure which produce some interesting and useful metrics (time to resolution, resolution target adherence, billable time incurred by analysts, duration in status, etc).

When thinking about extending and customizing your data warehouse, think about the business questions you’d like to be able to answer, read up on dimensional modeling for some tips on best practices, and then check out Danny’s posts on creating fact tables, dimensions and outriggers for the technical know-how.

 

And of course, we’re always here to help so feel free to send me your questions.

Create a report model with localized outriggers (aka “Lists”)

$
0
0

If you've watched my Reporting and Business Intelligence with Service Manager 2010 webcast and followed along in your environment, you may have unintentionally created a report which displays enumeration guids instead of Incident Classification strings, like below. Not too useful. In this post I'll tell you the simple way to fix your report model to include the display strings for outriggers for a specific language, and in a follow on post I'll share more details as to how to localize your reports and report models.

You may be wondering what happened. This is because we made a change in SP1 to handle outrigger values consistently, which removed the special handling we had for our out-of-the-box enumerations in outriggers. If you're now wondering what outriggers are, read up on the types of tables in the data warehouse in my last post, in which I provided the Service Manager data warehouse schema.

Here's the screenshot of the report we need to fix; the rest of the post will explain how to fix it.

[Screenshot: report displaying enumeration GUIDs instead of Incident Classification display strings]

Replace table binding in the Data Source view with Query binding

Rather than including references to the outriggers directly (in the screenshot below the outriggers are IncidentClassificationvw, IncidentSourcevw, IncidentUrgencyvw, and IncidentStatusvw) we'll replace these with named queries.

To do this, you simply right-click the "table" and select Replace Table > With New Named Query.

 

You then paste in your query, which joins to DisplayStringDimvw and filters on the language of your choice. Repeat for each outrigger.

SELECT outrigger.IncidentClassificationId, Strings.DisplayName AS Classification
FROM IncidentClassificationvw AS outrigger
INNER JOIN DisplayStringDimvw AS Strings
    ON outrigger.EnumTypeId = Strings.BaseManagedEntityId
WHERE (Strings.LanguageCode = 'ENU')
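
The named queries for the remaining outriggers should look almost identical. Here is a sketch for the status outrigger; it assumes IncidentStatusvw follows the same naming pattern as the classification view (an IncidentStatusId key plus an EnumTypeId column), which you can confirm in the Data Source View before pasting:

SELECT outrigger.IncidentStatusId, Strings.DisplayName AS Status
FROM IncidentStatusvw AS outrigger
INNER JOIN DisplayStringDimvw AS Strings
    ON outrigger.EnumTypeId = Strings.BaseManagedEntityId
WHERE (Strings.LanguageCode = 'ENU')

If you are not sure which language codes exist in your warehouse, running SELECT DISTINCT LanguageCode FROM DisplayStringDimvw should list the available values.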

 

Create & publish your report model

To create a simple report model, right-click the Report Models node in the Solution Explorer (right pane) and select Add New Report Model. Follow the wizard, selecting the default options.

 

If you want to clean it up a little, double-click the Report Model, then select IncidentDim on the left.

Scroll down the properties in the center and you'll notice there is now a Role added to the IncidentDim named Classification Incident Classification, along with an Attribute named Classification. This is because using outriggers to describe dimensions is an industry standard approach and SQL BI Dev Studio understands that these outriggers should essentially get added as properties directly to the Incident dimension for the easiest end user report authoring experience.

The attribute is populated directly by the column I mentioned you should not use in reports, so you should select and delete that attribute from your model. You may also rename the Role "Classification Incident Classification" to a more user-friendly name like "Incident Classification" if you'd like to.

 

Now save, then right-click your report model and click Deploy.

Create a report to try out your new report model

Open up SQL Reporting Services Report Builder (below screenshots are using Report Builder 3.0). If you haven't gotten a chance to check it out yet, here's a good jump start guide.

 

Follow the wizard, select your newly published report model:

 

Drag & drop your Incident Classification and Incidents measure. Hit the red ! to preview.

 

Drag & drop to lay out the report

 

Continue with the wizard, selecting the formatting options of your choice. If you would like, you can then resize the columns, add images and more. For our quick and simple example, though, I'm going to intentionally leave formatting reports for another post. If you've been following along, your report should now look like this:

 

Go ahead and publish to the SSRS server under the /SystemCenter/ServiceManager/ folder of your choice to make the report show up in the console.

 

 

How long does the Service Manager Data Warehouse retain historical data?


The short answer is that we keep data in the warehouse for 3 years for fact tables and forever for dimension and outrigger tables. Antoni Hanus, a Premier Field Engineer with Microsoft, has put together the detailed steps on how to adjust this retention period so you can retain data longer or groom it out more aggressively.

DISCLAIMER: Microsoft does not support direct querying or manipulation of the SQL Databases. 

To learn more about the different types of tables in the data warehouse, see the blog post which describes the data warehouse schema.

To determine which are the fact tables and which are the dimension tables, you can run the appropriate query below against your DWDataMart database:

SELECT we.WarehouseEntityName, we.ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

SELECT we.WarehouseEntityName, we.ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Dimension'
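
If you also want to see the outrigger tables, or simply confirm which type names are valid in the WHERE clause above, this small variant lists every entity type the warehouse tracks:

SELECT DISTINCT WarehouseEntityTypeName
FROM etl.WarehouseEntityType WITH (NOLOCK)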

NOTE: Microsoft does not support directly accessing or managing the tables (dimensions, facts, or outriggers).

Instead, please use the views as defined by the 'ViewName' column in the queries above.

Fact Table Retention Settings

There are two types of retention settings in the data warehouse:

1) Global - The global retention period (set to 3 years by default) which any subsequently created fact tables use as their default retention setting.

2) Individual fact - The granular retention period for each individual fact table (uses the global setting of 3 years, unless individually modified).

Global:

The default global retention period for data stored in the Service Manager data warehouse is 3 years, so all OOB (out-of-the-box) fact tables use 3 years as the default retention setting.

Any subsequently created fact tables will use this setting upon creation for their individual retention setting.

The default global setting value is 1576800 minutes, which is 3 years (1576800 = 1440 minutes per day * 365 days * 3 years).

This value can be verified by running the following SQL Query against the DWDataMart database:

SELECT ConfiguredValue FROM etl.Configuration WHERE ConfigurationFilter = 'DWMaintenance.grooming'
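
If you need a different global default, the corresponding change would be an UPDATE against the same configuration row. This is only a sketch (the 2628000 value equals 5 years at 1440 minutes per day * 365 days), and per the disclaimer above, direct manipulation of these databases is unsupported and at your own risk:

USE DWDataMart
UPDATE etl.Configuration
SET ConfiguredValue = '2628000'  -- 1440 * 365 * 5 = five years, in minutes
WHERE ConfigurationFilter = 'DWMaintenance.grooming'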

Individual Fact Tables:

Individual fact tables inherit the global retention value upon creation, and can be customized to a value that is different from the default global setting.

OOB individual fact tables that were created upon installation can also be individually configured with a specific retention value as required.

All of the fact tables in the database can be returned by running the following query against the DWDataMart database:

SELECT we.WarehouseEntityName, we.ViewName, wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

An example of an OOB fact table returned is ActivityStatusDurationFact, which has a WarehouseEntityId of 81:

[Screenshot: query result showing the ActivityStatusDurationFact row with WarehouseEntityId 81]

The corresponding retention setting for this fact table is stored in the etl.WarehouseEntityGroomingInfo table, so if we run the following query, the 'RetentionPeriodInMinutes' column will show the individual retention period configured for that particular table.
Query:

SELECT WarehouseEntityId, RetentionPeriodInMinutes FROM etl.WarehouseEntityGroomingInfo WHERE WarehouseEntityId = 81

Result:

[Screenshot: query result showing the RetentionPeriodInMinutes value for WarehouseEntityId 81]

A SQL statement such as the following could be used to update an individual fact table to an appropriate value:

USE DWDataMart
UPDATE etl.WarehouseEntityGroomingInfo
SET RetentionPeriodInMinutes = [number of minutes to retain data]
WHERE WarehouseEntityId = [WarehouseEntityID of Fact table to update]
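
For instance, using the ActivityStatusDurationFact example from earlier (WarehouseEntityId 81), retaining five years of that fact's data would look something like this:

USE DWDataMart
UPDATE etl.WarehouseEntityGroomingInfo
SET RetentionPeriodInMinutes = 2628000  -- 1440 * 365 * 5 = five years
WHERE WarehouseEntityId = 81  -- ActivityStatusDurationFact, per the query above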


Service Manager Lync Up call available

Happy New Year! Our first Service Manager #LyncUp call for 2016 is now available for download. Even though SCU was also on, our call was well attended by our customers and partners. The January call covered the following topics:
 
  • Cased Dimensions' Management Pack introduction: Thank you to Managing Director Liam Murray for his time and quick overview.
  • Discussed the upcoming Update Rollup 9 and the next hotfix for the portal, coming January 26, 2016. More details on UR9 are available in last month’s LyncUp call here:

Service Manager December 2015 #LyncUp Call available for download 

  • Service Manager Technical Preview 4. The new iteration in the SCSM 2016 Technical Preview series has many new improvements:
    • Configuration Manager and AD connectors become 65%* and 50%* faster respectively with the optional ECL log disabling feature
    • Incident workflows have become 50%* faster
    • AD group expansion enhancements have made it schedulable and 75%* faster
    • ECL grooming has become 3 times* faster

Read more about what’s new here:

 

https://technet.microsoft.com/en-us/library/mt346039.aspx

You can also get the evaluation VHD here:

 

https://www.microsoft.com/en-us/download/details.aspx?id=49967

You can download the recording for January’s call here: https://t.co/mQAvO7c1QU
 
Thank you,
 
Kathleen Wilson and Harsh Verma



Update Rollup 9 for System Center 2012 R2 Service Manager is now available


Update Rollup 9 for System Center 2012 R2 Service Manager (SCSM 2012 R2 UR9) is now available to download. For complete details including issues fixed, installation instructions and a download link, please see the following:

3129780 - Update Rollup 9 for System Center 2012 R2 Service Manager (https://support.microsoft.com/en-us/kb/3129780)

3134286 - Update for System Center 2012 R2 Service Manager Self Service Portal (https://support.microsoft.com/en-us/kb/3134286)

J.C. Hornbeck | Solution Asset PM | Microsoft


 

Update is available for the System Center 2012 R2 Service Manager Self-Service Portal


We are pleased to announce the second cumulative update for the new HTML5 Self-Service Portal; it can be downloaded from here. As this is a cumulative update, you can install it directly on the RTM release, whether or not you have the previous update installed.

Please note that this release is independent of the Service Manager UR9 release and does not require UR9 to be installed. This patch needs to be applied only on the machine(s) hosting the new Self-Service Portal to enable the following features and fixes; installing UR9 alone will not fix Self-Service Portal issues.

Here are the key features of this release –

  • Query form element in request offerings is now supported on all configurations

  • Portal now filters the service offerings based on language.

  • Portal now lets you configure a generic request button to associate default and language-specific request offerings. Read more about this here.

  • A share button has been added in all Portal pages to easily share a request offering, submitted request, activity, or Help article.

  • User inputs are now populated in service requests when they are filed from the Self-Service Portal.

  • The date picker is now available for date type fields in Request Offerings forms.

Here are the other bugs fixed in this release –

  • You cannot remove a Favorites item for request offerings.

  • Query UI issues fixed –

    • Query allows multiple selections even when it is configured to allow a single selection only.

    • Pagination in a query does not work if the query is configured not to show object details.

    • Search box in query form element does not appear if the query is configured not to show object details.

    • Query field shows “No results found” by default instead of asking the user to refresh and fetch the results.

    • Multiple refreshes distort the query UI.

  • Attachment issues fixed –

    • The Attached by field is not populated in the work item when you attach a file from the Portal.

    • Mandatory attachments inside a service request do not work.

    • Action log does not show an entry if a file is attached during the creation or update of a request.

    • Full path is displayed in file name when you attach a file from the Portal.

  • Submitting a request to create an SR in the end-user role reports an error.

  • Access for Announcements can now be controlled by using user roles.

  • Submitted service requests and incidents in the Portal enter a closed state.

  • Help Article Search does not return results from all Knowledge Base articles.

  • Activities listed under the "My Request" section are not sorted by their sequence.

  • Regex tooltip does not appear for text fields in request offerings if it is constrained by .Net Regular Expression criteria.

  • Action logs don’t expand when any request is opened by using a direct URL.

  • My Activity does not show the priority display name correctly.

  • User Inputs data is displayed as XML in the related request of My Activities.

  • Request Offerings user templates that have predefined service items or configuration items generate an error during submission.

  • Notes tab is missing from My Activities.

  • Service offerings appear in an arbitrary order (they are now sorted alphabetically).

  • Description/Notes field does not show multiple lines.

  • My Activities filter does not work in the Portuguese-Brazil language locale.

Service Manager UR9 re-release – status


[Current State – 18th Feb 2016, 11AM IST] A new patch for Update Rollup 9 is now available for download.

[State – 15th Feb 2016, 11AM IST] Beta testing is in progress. No issues have been reported so far; if this trend continues, the re-release will happen by Wednesday (17th Feb).

[State – 12th Feb 2016, 11AM IST] Updated UR9 bits are being shared with affected customers for their validation. Please reach out to us at veharshv@microsoft.com if you are also interested in validating the bits.

[State- 11th Feb 2016,  11 AM IST] Updated UR9 bits will go into beta testing by tomorrow and will be made public after validation.

——

We would like to inform you that the re-release of Update Rollup 9 has been pulled back after reports of console crashes in non-English deployments. We are investigating the cause of the problem and ask that you refrain from installing the existing UR9 patches (if you have already downloaded them) until updated patches are available from our side.

We have reports of two issues which we are actively looking into -

  • Update Rollup 9 (v 3079.571 – Released Jan 26th) resets customizations because it replaces the following unsealed Management Packs:
    1. ServiceManager.IncidentManagement.Configuration.xml
    2. ServiceManager.ServiceRequest.Configuration.xml
    3. ServiceManager.ProblemManagement.Configuration.xml
    4. ServiceManager.KnowledgeManagement.Configuration.xml
  • The re-release of Update Rollup 9 (v 3079.601 – Released Feb 8th) causes console crashes while opening forms in the non-English Service Manager console.

We sincerely regret the inconvenience caused and would like to assure you that delivering UR9 is our top priority; we are actively working to resolve all the issues. We will use this blog post as the medium to keep you regularly updated on progress, and you can contact us at servicemanager@microsoft.com with any further queries or concerns.

Announcing support of SQL 2012 SP3


Hi everyone. This is just a small post to make an announcement. For those of you who have been waiting to upgrade your SQL servers: Service Manager 2012 R2 with Update Rollup 9 installed now officially supports Service Pack 3 for SQL Server 2012.

We have done a fair amount of validation to make sure that everything continues to work as expected. That said, if anything seems suspicious, let us know via your comments below.

Our February Service Manager #LyncUp Call is Available for Download.


The February call had over 110 participants and a lively audience, as always. It covered:

  • Chris Ross and Will Udovich joined as our special partner guests and presented/demoed Cireson's new Portal as well as their other amazing add-ins from the Cireson Platform
  • Service Manager UR9 re-release
  • Technical Preview 5 highlights:
    • Improved data processing
    • Improved console performance
    • Higher work-item-per-minute processing

You can download the recording here: http://1drv.ms/1KX9VRp

Thank you, and we will chat next month.

Kathleen Wilson and Harsh Verma
