IVI Framework Viewer

Empirical Model

A1

Define and quantify the relationships between IT infrastructure (for example, CPU load or latency) and IT services, business processes, and ultimately, the organization. Populate models with data to act as a basis for all analysis (including historical and projected).

Improvement Planning

Practices-Outcomes-Metrics (POM)

Representative POMs are described for the Empirical Model at each level of maturity.

1 Initial
  • Practice
    Rely on the best endeavours of staff prior to formal data analysis being in place.
    Outcomes
    • Reactive environment with minimal predictability and poor corrective actions, leading to budget overruns and/or over-investment (“sledgehammer to crack a nut”).
    • Bulk of historical infrastructure performance measurements are discarded as perceived to be of minimal value.
    • Focus on IT functionality with no visibility of what constitutes an end-to-end service.
    Metrics
    • Number of outages
    • Total downtime
    • Variance from planned BAU cost
    • Helpdesk calls
    • MTTR
2 Basic
  • Practice
    Define and model IT resources for each technology building block, with resolution to individual component level, e.g. Network (LAN, WAN), Servers (database, Webserver, application), Storage (SAN, Fileshare, …).
    Outcome
    Capability to get visibility of response times, availability, capacity, utilization and cost for each infrastructure category.
    Metrics
    • IT component availability
    • IT component response times
    • IT component capacity (storage capacity, %CPU utilization, bandwidth capacity, …)
    • Cost per discrete component
    • % IT infrastructure covered
    • MTTR
  • Practice
    Condition performance measures for each individual component to enable analysis on a consistent basis (e.g. time synchronisation, nomenclature synchronisation, data cleansing, data gap filling); an illustrative sketch follows this level's metrics.
    Outcome
    Drive continuous improvement in infrastructure performance metrics, leading to better overall predictability.
    Metrics
    • Availability trend by category
    • Capacity utilization trend by category
    • Response times trend by category
    • Number of IT component SLA breaches
    • MTTR
    • MTBF
    • Variance from IT budget
    • NOTE: Database server availability must exceed 99.97% during core hours
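
The two Level 2 practices above lend themselves to a small worked illustration. The sketch below (Python with pandas; the component names, metric names, and figures are hypothetical and not part of the framework) shows per-component measures being conditioned onto a common basis: metric names are synchronised, readings are aligned to a shared time grid, and short gaps are filled.

```python
# Minimal sketch, assuming pandas is available. All component names, metric names
# and values are hypothetical examples of Level 2 component-level modelling and
# conditioning (time/nomenclature synchronisation, gap filling).
import pandas as pd

# Raw measurements arrive per component, on different clocks, with different
# metric names and with gaps.
web_raw = pd.DataFrame(
    {"resp_ms": [120, 135, None, 128]},
    index=pd.to_datetime(["2024-01-01 09:00:13", "2024-01-01 09:05:02",
                          "2024-01-01 09:10:41", "2024-01-01 09:15:07"]),
)
db_raw = pd.DataFrame(
    {"QueryLatencyMillis": [30, 28, 45, 31]},
    index=pd.to_datetime(["2024-01-01 09:01:00", "2024-01-01 09:06:00",
                          "2024-01-01 09:11:00", "2024-01-01 09:16:00"]),
)

def condition(raw: pd.DataFrame, rename: dict) -> pd.DataFrame:
    """Nomenclature synchronisation, time synchronisation and gap filling."""
    df = raw.rename(columns=rename)        # common metric names
    df = df.resample("5min").mean()        # common time grid
    return df.interpolate(limit=1)         # fill short gaps only

web = condition(web_raw, {"resp_ms": "response_time_ms"})
db = condition(db_raw, {"QueryLatencyMillis": "response_time_ms"})

# Conditioned series can now be analysed on a consistent basis per component category.
print(pd.concat({"web_server": web, "db_server": db}, axis=1))
```
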
3 Intermediate
  • Practice
    Establish end-to-end service view(s) for each IT service comprising all relevant metrics from underlying IT components.
    Outcome
    Capability to get business-understandable view of the availability, capacity, response times, utilization and cost of each IT service, and how the performance of each infrastructure component contributes to these metrics (e.g. end-to-end service response time).
    Metrics
    • Service availability (trended)
    • Service utilization (trended)
    • Service capacity (trended)
    • Service latency (trended)
    • Service cost (trended)
    • % IT Services covered
    • Service MTTR
    • Service MTBF
    • Cost per IT Service vs Industry Benchmark
    • Adoption of IT service metrics
  • Practice
    Correlate the behaviour of underlying IT infrastructure components against IT service performance metrics in order to understand how each service is performing and the causes of deviations from normal behaviour; an illustrative sketch follows this level's metrics.
    Outcomes
    • Empirical Modelling:
    • Improved service metrics and predictability within the IT environment.
    • Improved user experience (as there is potential to minimise infrastructure threats before they impact users)
    • Monitoring:
    • Increased availability
    • Reduction in the time taken to resolve IT service issues and reduction in the number of recurring issues.
    • Improved user experience.
    Metrics
    • Empirical Modelling:
    • Service response times (trended)
    • Service availability (trended)
    • Service capacity (trended)
    • Service latency (trended)
    • Service utilization (trended)
    • Number of SLA breaches
    • Theoretical Weighted Business Impact (Estimate of hours lost due to service outages/shortages).
    • Monitoring:
    • Availability
    • MTTR
    • MTBF
    • Number of issues (by incident type)
    • IT component metrics (actual vs SLA)
    • IT service metrics (actual vs SLA)
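
As a rough illustration of the two Level 3 practices, the following sketch (standard-library Python; the service, components, and figures are hypothetical) rolls per-component metrics up into an end-to-end service view and then correlates one component's utilisation against the service response time to suggest a likely driver of deviations.

```python
# Minimal sketch, standard library only. The "order entry" service, its component
# chain and all readings are hypothetical.
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

# Per 5-minute interval: response times (ms) and availability for the components
# the service depends on in series.
components = {
    "load_balancer": {"resp_ms": [5, 5, 6, 5, 7],        "availability": 1.000},
    "app_server":    {"resp_ms": [80, 85, 140, 90, 160], "availability": 0.999},
    "db_server":     {"resp_ms": [30, 32, 95, 35, 110],  "availability": 0.998},
}
db_cpu_pct = [40, 45, 92, 50, 95]   # database CPU utilisation per interval

# End-to-end service view: response time is the sum along the chain per interval;
# availability is the product of the serially-required component availabilities.
service_resp_ms = [sum(vals) for vals in
                   zip(*(c["resp_ms"] for c in components.values()))]
service_availability = 1.0
for c in components.values():
    service_availability *= c["availability"]

print("service response time per interval (ms):", service_resp_ms)
print("service availability (serial chain):", round(service_availability, 4))
print("mean service response time (ms):", mean(service_resp_ms))

# Correlate an infrastructure metric against the service metric to identify a
# likely cause of deviations from normal behaviour.
print("corr(db CPU %, service response time):",
      round(correlation(db_cpu_pct, service_resp_ms), 2))
```

The serial-chain availability product is only appropriate where every component is required; redundant or load-balanced components would need a different roll-up.
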
4 Advanced
  • Practice
    Define and model business processes in terms of the IT services which are used to implement them (the Level 3 model provides the mapping onto the infrastructure components).
    Outcomes
    • Abstracted view of IT services defined in terms of the business processes delivered, with an understanding of how each business process is performing.
    • Ability to monitor the IT contribution to business process performance via metrics such as business process availability and response times.
    Metrics
    • % Business processes mapped to IT services & monitored
    • Industry benchmark cost per IT-enabled business process
  • Practice
    Correlate business process performance (e.g. the number of client sales bookings arranged) against the performance of underlying IT services (e.g. 5 hours of degraded performance on the CRM application) in order to understand how the business process is performing and the causes of deviations from expected business process performance; an illustrative sketch follows this level's metrics.
    Outcomes
    • Provides insight into Actual vs Planned SLA performance (and thereby identifies areas where IT is hampering business performance)
    • Drive continuous improvement in business process performance, helping to identify where IT investments will deliver the most benefit.
    • NOTE: By understanding how changes in business demand impact IT services, the IT function is better placed to divert scarce IT resources from one IT service to another.
    • Following the introduction of the competitor's Fleximortgate product, the number of new applications.
    Metrics
    • IT SLAs breached
    • Business process SLAs breached
    • Business process hours lost (or Number of transactions lost)
    • Transactions handled per IT dollar
    • Business process availability
    • Business process response times
    • Business process utilization
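
The Level 4 practices can be illustrated with a similarly small sketch (Python; the processes, services, and figures are hypothetical): business processes are defined in terms of the IT services that implement them, and a business-process KPI is compared across periods of normal and degraded IT service performance to estimate the business impact.

```python
# Minimal sketch, standard library only. The process-to-service mapping, the KPI
# series and the degradation flags are hypothetical.
from statistics import mean

# Business processes defined in terms of the IT services that implement them
# (Level 3 already maps those services onto infrastructure components).
process_to_services = {
    "mortgage_application": ["crm", "credit_check", "document_store"],
    "client_sales_booking": ["crm", "order_entry"],
    "payroll":              [],   # not yet mapped/monitored
}
mapped = [p for p, svcs in process_to_services.items() if svcs]
print("% business processes mapped to IT services:",
      round(100 * len(mapped) / len(process_to_services)))

# Hourly KPI for one process (bookings arranged) and whether the CRM service was
# degraded in that hour.
bookings_per_hour = [42, 45, 18, 16, 44, 40, 15, 43]
crm_degraded      = [False, False, True, True, False, False, True, False]

normal   = [k for k, d in zip(bookings_per_hour, crm_degraded) if not d]
degraded = [k for k, d in zip(bookings_per_hour, crm_degraded) if d]

# Rough estimate of the business impact attributable to the service degradation.
lost_per_degraded_hour = mean(normal) - mean(degraded)
print("bookings lost per degraded hour (estimate):", round(lost_per_degraded_hour, 1))
print("business process hours degraded:", sum(crm_degraded))
```
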
5 Optimized
  • Practice
    Construct the model at organization level, comprising all IT-enabled business processes for each of the underlying business entities (thereby establishing the link from the business to the business processes, to the IT services, and to the infrastructure); an illustrative sketch follows this level's metrics.
    Outcomes
    • Provides a 2-way understanding of the linkage from the business to the underlying IT environment, allowing the impact of IT on the business, and the impact of business plans on IT to be understood and quantified.
    • Facilitates business-level scenario planning.
    Metrics
    • % Organization processes covered by model
    • % Business decisions which leverage model input
    • % Variance from plan (i.e. how well does the model reflect reality?)
    • IT-contribution/loss ($) for each business process
    • ROI (per IT dollar)
  • Practice
    Validate the organization-level model, by comparing actual vs planned, and amending the model as required to improve accuracy.
    Outcomes
    • Improved strategic planning capabilities (business expansion/contraction, contingency planning, risk management, …), supporting optimised allocation of scarce IT resources to enable alignment with business priorities.
    • Quantification of IT's contribution/costs.
    • NOTE: Different user profiles place differing demands on IT services. Service analytics allow the business to accurately understand the usage and associated IT costs of, say, recruiting 400 new tax graduates for a UK professional services firm. Service analytics are improving.
    Metrics
    • Actual cost of achieving strategic goal vs Baseline percentage estimate
    • Cost of excess/under capacity
    • Business growth headroom capacity (% and Time-based), for each process
    • Improved operating margin
    • IT Cost/Value relationship for each business process.
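
Finally, a minimal sketch of the Level 5 practices (Python; the business entities, processes, headroom figures, and costs are hypothetical): an organization-level linkage from business entities down to IT services supports a simple demand scenario, and the model is then validated by comparing planned against actual values.

```python
# Minimal sketch, standard library only. All entities, processes, capacity
# headroom figures and costs are hypothetical.

# Organization-level linkage: business entity -> business processes -> IT services
# (each service in turn maps onto infrastructure components at Level 3).
organization = {
    "retail_banking": {
        "mortgage_application": ["crm", "credit_check", "document_store"],
        "client_sales_booking": ["crm", "order_entry"],
    },
    "wealth_management": {
        "portfolio_review": ["crm", "analytics_platform"],
    },
}

# Scenario planning: planned demand growth per process is checked against the
# capacity headroom of every IT service the process depends on.
capacity_headroom_pct = {"crm": 35, "credit_check": 60, "order_entry": 20,
                         "document_store": 50, "analytics_platform": 45}
planned_growth_pct = {"client_sales_booking": 25}

for process, growth in planned_growth_pct.items():
    for entity, processes in organization.items():
        for service in processes.get(process, []):
            if growth > capacity_headroom_pct[service]:
                print(f"{entity}/{process}: +{growth}% demand exceeds "
                      f"{service} headroom ({capacity_headroom_pct[service]}%)")

# Model validation: compare planned vs actual and quantify the error (here MAPE)
# so the model can be amended where it no longer reflects reality.
planned_cost_k = {"crm": 120, "credit_check": 40, "order_entry": 60}
actual_cost_k  = {"crm": 150, "credit_check": 38, "order_entry": 66}
mape = sum(abs(actual_cost_k[s] - planned_cost_k[s]) / actual_cost_k[s]
           for s in planned_cost_k) / len(planned_cost_k) * 100
print("model error vs actuals (MAPE, %):", round(mape, 1))
```

MAPE is used here only as one simple measure of how well the model reflects reality; any agreed variance-from-plan measure would serve the same validation purpose.
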