Having established in part one that the best way to plan for Modernization is to build a robust understanding of your current situation, this second part of our Modernization blog series looks at how best to approach detailed planning, avoiding common pitfalls along the way.
In the first blog article we discussed the inevitability of change across business and IT. Change means the world today is a very different place from the one in which your core business applications were originally devised. Businesses diversify, customers evolve, technology shifts, application requirements change, and the people who work on these systems have themselves moved on, been promoted, or retired.
The first and fundamental activity on your modernization journey is getting a clear view of your enterprise application ecosystem. Before thinking about how to plan your Modernization project, you should consider the decision you are trying to reach… and why. A large-scale internal IT change program typically supports a business imperative. This could be a cost-driven issue such as system consolidation, platform retirement or replacement; a new business initiative such as provisioning a new application or reaching out to clients through new interfaces or devices; or a compliance or risk-management activity such as upgrades, process improvement and security work. Whatever the reasons behind the major change, it is always accompanied by a hypothesis – a view of “what ought to happen” – that has been factored into the original plan.
For the plan to be sound and the risk of IT failure mitigated, the hypothesis needs to be tested against factual information.
What do you need to know? What factors influence your decision? Let’s look at the major attributes – Cost, Value, and Risk – and consider the important information needed for each.
Cost. It’s important to consider all the associated IT and operational costs together. This includes the cost of the application platform and supporting software, including maintenance, plus the cost of wages and attributed overheads (e.g. data centre expense). There are also the downstream costs of current systems, e.g. likely system upgrades, leasing renewals and support-staff contracts. Additionally, any ‘new’ system costs need to be evaluated along the same lines for comparison. Usually there is a new hardware CapEx element as well as the purchase of new system software, and re-training on any new system also has to be costed and incorporated.
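As a simple illustration of the comparison described above, the sketch below totals the one-off (CapEx) and recurring costs of an incumbent system against a proposed replacement over a planning horizon. All figures and cost categories here are invented for demonstration, not real data.

```python
# Hypothetical TCO comparison: incumbent system vs. proposed replacement.
# Every figure and category name below is an illustrative assumption.

def total_cost(annual_costs, one_off_costs, years):
    """Sum one-off (CapEx) costs plus recurring costs over a horizon."""
    return sum(one_off_costs.values()) + years * sum(annual_costs.values())

incumbent = total_cost(
    annual_costs={"platform": 120_000, "maintenance": 40_000,
                  "staff": 300_000, "data_centre": 30_000},
    one_off_costs={"planned_upgrade": 80_000},
    years=5,
)

replacement = total_cost(
    annual_costs={"platform": 60_000, "maintenance": 25_000,
                  "staff": 280_000, "data_centre": 15_000},
    one_off_costs={"hardware_capex": 200_000, "software_licences": 150_000,
                   "retraining": 90_000},
    years=5,
)

print(f"Incumbent 5-year TCO:   {incumbent:,}")    # 2,530,000
print(f"Replacement 5-year TCO: {replacement:,}")  # 2,340,000
```

Even this toy model shows why the comparison must include the downstream costs of the incumbent and the one-off CapEx and retraining costs of the replacement: either side looks artificially cheap without them.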
Value. You need to know how much your incumbent or outgoing systems return. Equally important are the projected new revenues – taken from a business case rather than from any internal reporting system. “Revenue” is an aggregate of a number of variables – geography, time, product line, etc. However, value isn’t restricted to revenue; it also includes customer experience. Customer satisfaction, measurable through a range of means, is just as important in the decision-making process.
Risk / Operational Metrics. Typically plotting “cost” against “value” will give you a pretty decent measure of any system from a business perspective. However there are dangers in ignoring the readily-available and valuable operational data points:
- Rate of Resolution – the average number of issues “solved” against the system. This gives a good view of how readily any system can be changed
- Number of Defects – an overall measure of outstanding issues, indicating the system’s robustness
- Rate of Change – the amount of recorded change made to the application is an important factor
- Application Complexity – objective measurements can be taken against application code to deduce ‘complexity’
- Customer / User Views – customer opinion can be determined through help desk systems or surveys
- Strategic Fit – systems conform to an internal architectural blueprint to a greater or lesser extent, and this conformance is typically recorded in a way that can be rated
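One common way to make these operational data points comparable across a portfolio is to normalize each metric to a common scale and combine them into a weighted score. The sketch below shows that idea; the metric names, weights and scores are all assumptions chosen for illustration, not a prescribed scheme.

```python
# Illustrative weighted risk score built from the operational metrics above.
# All metrics are assumed normalized to 0-10, where higher means riskier;
# the weights and the example application's scores are invented.

WEIGHTS = {
    "change_backlog": 0.2,        # unresolved change requests
    "defects": 0.25,              # outstanding defects
    "complexity": 0.25,           # application complexity rating
    "user_dissatisfaction": 0.15, # from help desk / surveys
    "strategic_misfit": 0.15,     # distance from the architectural blueprint
}

def risk_score(metrics):
    """Weighted average of normalized (0-10) risk metrics."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

payroll = {"change_backlog": 2, "defects": 1, "complexity": 8,
           "user_dissatisfaction": 3, "strategic_misfit": 2}

print(f"Payroll risk score: {risk_score(payroll):.2f} / 10")
```

The weights are where the hypothesis meets the data: an organization prioritizing strategic fit would weight that metric more heavily than one worried mainly about defect rates.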
So, if an application appears costly to maintain with modest revenues, at first glance it might seem a good candidate for replacement. But take into account the variables above – for example, learning that the strategic fit is an exact match, that the application (and its supporting skills base) is highly flexible, and that the number of defects is low – and the idea that the application is deficient may turn out to be incorrect.
Gathering this level of detail is a laborious exercise to undertake manually. The information collected would, in any case, be subjective (factual data is difficult to ascertain by visual inspection alone) and prone to error. The task of manually collecting, storing, assessing and reporting on this data is also extremely costly.
A smarter move is to exploit enabling technology – Application Portfolio Management (APM) – to inspect data that covers the IT landscape. Being able to interrogate the systems that house this information directly means information can be pulled, stored, assessed and reported on instantaneously. Computer programs, SCM systems, defect systems, help desk systems and financial systems can all feed into a meta-repository that serves as the basis for detailed analysis. For information recorded outside of systems, such as customer surveys, APM technology can be used to create questionnaires and measure the results.
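Conceptually, the meta-repository merges per-application records from each source system into one queryable store. The sketch below shows that shape in miniature; the source-system names and record fields are invented for illustration and bear no relation to any particular APM product.

```python
# Toy sketch of an APM-style meta-repository: records from different
# source systems (SCM, defect tracker, help desk) are merged into one
# store keyed by application. Feed names and record shapes are assumptions.

from collections import defaultdict

def build_repository(*feeds):
    """Merge per-application records from several source-system feeds."""
    repo = defaultdict(dict)
    for source_name, records in feeds:
        for app, payload in records.items():
            repo[app][source_name] = payload
    return dict(repo)

scm = ("scm", {"billing": {"commits_last_year": 412}})
defects = ("defects", {"billing": {"open": 37, "closed": 590}})
helpdesk = ("helpdesk", {"billing": {"tickets": 120, "csat": 3.8}})

repo = build_repository(scm, defects, helpdesk)
print(repo["billing"]["defects"]["open"])  # → 37
```

The point of the pattern is that once every system feeds the same store, cross-cutting questions ("which high-defect applications also score poorly on customer satisfaction?") become straightforward queries instead of manual collation.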
What this leads to is a review of applications on a Business Value versus Operational Cost chart. This is a simplified view, as the amount of analysis will vary according to the hypothesis you are trying to test. Reviewing the factual information through the different views APM technology provides enables you to prioritize which applications should be modernized according to the key criteria you are considering.
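The value-versus-cost review is often summarized as a quadrant model. The sketch below classifies applications that way; the threshold, the action labels and the example portfolio are all illustrative assumptions, since in practice the cut-offs would come from your own hypothesis and data.

```python
# Illustrative quadrant classification on a Business Value vs Operational
# Cost chart. Scores (0-10), the threshold and the action labels are
# invented assumptions for demonstration.

def quadrant(value, cost, threshold=5):
    """Place an application into a modernization quadrant."""
    if value >= threshold and cost < threshold:
        return "maintain"        # high value, low cost: leave well alone
    if value >= threshold and cost >= threshold:
        return "modernize"       # high value, but costly to run
    if value < threshold and cost >= threshold:
        return "retire/replace"  # low value, high cost
    return "tolerate"            # low value, low cost: revisit later

portfolio = {"billing": (8, 7), "archive": (2, 6), "crm": (9, 3)}
for app, (value, cost) in portfolio.items():
    print(f"{app}: {quadrant(value, cost)}")
```

A two-axis view like this is only the starting point; the operational metrics discussed earlier (defects, complexity, strategic fit) are what stop a "retire/replace" verdict being handed to an application that is merely expensive on paper.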
In the next blog we will describe how you can determine the appropriate supporting technology to modernize your chosen application subset(s). Once again, the use of enabling technology will be driven by the issues discovered during your application portfolio assessment and the Modernization activities proposed.