So, here’s the irony: the person with the word “Information” in their job title, the CIO, has no information about how their business is running.
Yes, I know they think they have information, but in reality they have opinion masquerading as data: pretty Gantt charts that fool them into thinking they know the state of their business. If we continue to believe that attending status meetings is the way to get status, we will continue to struggle to run IT, and we will never deliver the kind of quality, ROI and timeliness the business is demanding.
When the CFO asks if invoices are paid we do not have a status meeting. We offer up empirical data showing the exact disposition of each and every invoice, accurate to the penny from our computerized systems.
So why can’t we do that for IT projects?
As one customer said to me recently in New York, “because developing software is not like invoicing,” and she’s right. Invoicing is incredibly complex and woven so deeply into the fabric of the business that just about everything we do affects invoices.
Developing software is a straightforward process and follows a more-or-less linear progression. The rules are really clear and we have software to enforce them. My mentor and friend, Doug Troxel, once said to me, “How can your job be so difficult when all you have is a ‘1’ and a ‘0’?”
So why is it that we continue to try and run IT as a manual process?
I know we think we have automation, but what we have are hundreds of point tools (723 according to a VP of development at EADS, and over 200 according to a VP of development at Ford). These tools help individuals complete their part of the lifecycle, but they do nothing to assist in the transfer of information from one silo in IT to the next.
This is what I call the Archipelago Problem: lots of islands of technology but no integration between them other than rickety rope bridges and a few dugout logs acting as row boats. What we need is Highway 1 from Key Largo to Key West that is high speed, uniform and consistent.
Today’s software delivery processes are error prone, full of duplicated effort and riddled with rework. If we sent invoices out that were wrong, what would we do? If we sent invoices out twice, what would we do? If we had to correct invoices and resend them, what would we do?
That’s right – we would find where the error was in the process, fix it, and automate it so it doesn’t happen again.
This is what we must do for the software delivery process. Let’s take a closer look at what is going on in the development community by examining two common application delivery myths.
Application Delivery Myths
Application Delivery Myth #1: Best-in-class is best
Each part of the Software Development Lifecycle (SDLC) is a silo. No matter your role, whether architect, designer, developer, tester, release engineer or anything else, you are blessed with incredibly good technology tools to support your ability to conceive and deliver your part of the application. These are point-solution tools. They usually come from specialist vendors with decades of experience in developing tools specifically to support that part of the lifecycle. These tools speak your language, they are optimized for how you work and they encapsulate the best practices and modern methodologies of your part of the application delivery experience.
However, these tools are not well suited to collaboration across the lifecycle; they rarely provide any automation for the handoff from one phase of the SDLC to the next. There is frequent cut-and-paste from one database to another, often requiring some kind of data migration and transformation process to run. The tools don’t “talk” to each other, so when data changes in one it is not reflected in the others. And when the tools are integrated, that integration is often brittle and fails as soon as one or other of the tools is upgraded.
Application Delivery Myth #2: One-size-fits-all fits me
So enter the one-size-fits-all (OSFA), one-stop-shop, everything-you-wanted-all-in-one-place tools. These tools try to provide an end-to-end solution that supports the whole lifecycle. The problem is that they come from vendors who have to reach a mass audience, so they develop very generic solutions that are entirely agnostic as to your role in the SDLC, your organization’s domain or methodology, best practices, policies and procedures. Their big selling point is often the “single repository” database: all my artifacts in one database, and I can control access to everything from that one place.
While the concept seems to make sense, you have to dig a little deeper and ask some questions. Most of these tools are incredible “bloatware” solutions with hundreds of menus and options and a dizzying battery of configuration and customization needs. But perhaps the most insidious feature of these tools is that they are optimized for a single platform. Whether it is the OS, the hardware, the language or the methodology, these tools try to force you to focus on just one. But today’s applications are highly heterogeneous, because we constantly innovate in our solutions, embrace new ideas and incorporate them into extant systems. Of course, migrating all your existing data into one of these OSFA products is expensive, time consuming and very error prone. And let’s not forget that none of these vendors is the domain expert in every part of the lifecycle. So these bloatware tools provide adequate-to-capable functionality throughout, but never the deeper capabilities needed for the inevitable corner cases we all have in our systems.
So what is the answer? There is a hybrid solution here that needs to be considered. You want best-in-class tools but you want them to work together and support your end-to-end delivery process.
Orchestrated Application Delivery addresses this problem directly and simply, and preserves your existing investment in solutions you have chosen because they meet your technology needs. It supports your methodology, your topology and your geographical challenges. It is designed to allow the most flexible implementation without massive data migration, and it supports an evolutionary roll-out strategy (not a revolutionary “overturn everything we hold dear because we have a new tool to implement” strategy!).
How to Orchestrate Application Delivery
To orchestrate application delivery we need to think about five things:
- Processes: in terms of micro and macro processes.
- Tools: the underlying data storage they use, the processes they encapsulate and the integration points they offer.
- Integrations: the other great application delivery myth.
- Interfaces: how the user interacts, if the user interacts.
- Reports, controls, measures and the whole battery of mechanisms that make Orchestrated Application Delivery the reason we are making this quantum leap.
Processes
In the quest for the perfect development process we strive to create the one, the simple, the direct, the optimized definition of how we develop software. When we follow this path we inevitably run into trouble. There is no such thing as the one path for development. Better to solve the problem in chunks.
Macro Processes
Start by defining the macro-level process into major phases that are meaningful to your development process. You probably have this already. You most likely call it your SDLC, your software development lifecycle. In the most straightforward terms it is the process from the Demand arriving at the door of IT, through the Development phases that design, create and test the solution to the Deployment of the solution into the production environment. Or Planning, Analysis, Design, Implementation and Maintenance (according to Wikipedia) or any one of a hundred different methodologies.
This macro-level process needs to have as many phases as are required to track your project and provide meaningful feedback to senior management. However, keep it high level so that the granularity of the phases stays meaningful. A good rule of thumb: an odd number of phases between three and nine is about right. Implement these phases directly in your process automation tool. From this you will create business-level dashboards.
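As a concrete illustration, here is a minimal sketch of a five-phase macro process and a business-level rollup, in plain Python. The phase names and the data model are assumptions for the example, not any particular vendor’s automation tool.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative macro-level phases -- substitute the names your SDLC uses.
class Phase(Enum):
    DEMAND = 1
    DESIGN = 2
    DEVELOP = 3
    TEST = 4
    DEPLOY = 5

@dataclass
class Project:
    name: str
    phase: Phase = Phase.DEMAND

def advance(project: Project) -> None:
    """Move a project to its next macro phase; a real automation tool
    would fire sign-off and notification events at this point."""
    if project.phase is not Phase.DEPLOY:
        project.phase = Phase(project.phase.value + 1)

def dashboard(projects: list[Project]) -> dict[Phase, int]:
    """Business-level rollup: how many projects sit in each phase."""
    counts = {phase: 0 for phase in Phase}
    for p in projects:
        counts[p.phase] += 1
    return counts

portfolio = [Project("billing-revamp"), Project("mobile-portal")]
advance(portfolio[0])
print({phase.name: n for phase, n in dashboard(portfolio).items()})
```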
Micro Processes
Within each of these macro-level phases, think about what the micro-level detail steps need to look like and how they will be implemented. Will a tool be used extensively in this phase? If so, does the tool have the ability to implement the process model you need for this phase at a detailed level? Does the tool have the ability to communicate with other tools in the lifecycle? Does the tool have the ability to communicate with the high-level process automation tool managing the SDLC at the macro level?
What if there are point-solution tools that are completely isolated in one phase and have no integration points? What if we have no tools at all? In either case we must automate the hand-offs and sign-offs so that everyone involved in the project is included in the automation. Essentially, we need to create interfaces for them to interact, as in the sketch below.
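Here is a minimal sketch of such an automated sign-off gate. The notify function and the approver names are hypothetical stand-ins for whatever notification channel and directory you actually use.

```python
# Minimal sketch of an automated sign-off gate for a phase hand-off.
# notify() and the approver names are hypothetical stand-ins.

def notify(person: str, message: str) -> None:
    print(f"[to {person}] {message}")  # stand-in for email/chat/ticketing

def signoff_gate(phase: str, approvers: list[str], approvals: set[str]) -> bool:
    """Allow the hand-off only when every required stakeholder has
    approved; chase anyone who has not."""
    missing = [a for a in approvers if a not in approvals]
    for person in missing:
        notify(person, f"Sign-off required before '{phase}' can complete.")
    return not missing

# Usage: block the Design -> Develop hand-off until both leads approve.
allowed = signoff_gate("Design", ["arch_lead", "qa_lead"], {"arch_lead"})
print("hand-off allowed" if allowed else "hand-off blocked")
```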
Next, we should ask whether the process steps in the phase will be the same each time through. The answer ought to be yes, but before we rush to it, consider these possibilities:
- We have web, client-server and mainframe development. Tools will certainly be different. Time-frames and sign-offs will also. Perhaps they need separate micro-level processes.
- We have agile, lean and waterfall development styles. Inputs and outputs may be different; measurement certainly is.
- We have high risk projects, mission critical projects, time-to-market sensitive projects. These often lead to different development processes at the micro-level.
- We have new development, enhancement, maintenance and emergency projects. Are their processes the same?
We need to implement the micro-process in the tool of choice for that phase of the lifecycle, or introduce automation using our process automation tool of choice. Where the variants multiply, it helps to select the micro-process from project attributes, as in the sketch below.
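A minimal sketch of that selection step, with hypothetical template names and attributes:

```python
# Minimal sketch: choose a micro-process template from project attributes.
# The template names and the attributes are illustrative assumptions.

def pick_micro_process(platform: str, style: str, kind: str) -> str:
    """Map project attributes to a micro-level process template."""
    if kind == "emergency":
        return "expedited-fix"          # short path, mandatory post-review
    if platform == "mainframe":
        return f"mainframe-{style}"     # different tools, time-frames, sign-offs
    if style == "agile":
        return "agile-iteration"        # measured per sprint, not per phase
    return f"standard-{style}"

print(pick_micro_process("web", "agile", "new"))              # agile-iteration
print(pick_micro_process("mainframe", "waterfall", "maint"))  # mainframe-waterfall
```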
Application Delivery Tools
As we have seen, every application development group has a huge extant investment in technologies to support their efforts. Ripping them out and replacing them with something generic, but integrated, is not the answer.
We need to step up our requirements in the identification and selection of tools for application development. In fact, I want you to demand two things from each of your vendors, starting today:
- Is the tool process-centric and does it support a flexible process model? Any tool that insists you follow its model rather than your own should be crossed off the shortlist. Insist on tools that are process-centric and that support your ability to customize the process without having to write code to do so!
- Is the tool open? Does the tool have an open, standards-based API, and does that include the ability to run the tool without a user interface? Does the API support both a push and a pull model? Can the tool be driven from an event generator? Does the tool generate events? Only with the broadest and most open API will you ever get the depth of integration you need for Orchestrated ALM.
Make these essential requirements in your vendor RFPs and RFIs today. Integration at the process level is essential.
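To make the push-and-pull distinction concrete, here is a minimal sketch of both directions. The endpoint URL and payload fields are hypothetical; they stand in for whatever the vendor’s open API actually exposes.

```python
# Minimal sketch of pull vs. push integration with a lifecycle tool.
# The URL and payload fields are hypothetical, not a real vendor API.
import json
import urllib.request

BASE = "https://tool.example.com/api"  # hypothetical endpoint

def pull_status(item_id: str) -> dict:
    """Pull model: the orchestrator polls the tool for an item's state."""
    with urllib.request.urlopen(f"{BASE}/items/{item_id}") as resp:
        return json.load(resp)

def on_event(event: dict) -> None:
    """Push model: the tool posts events to us, so we react without polling."""
    if event.get("type") == "phase.completed":
        print(f"Item {event['item']} finished phase {event['phase']}")

# e.g. a pushed event from the tool's event generator:
on_event({"type": "phase.completed", "item": "REQ-42", "phase": "Test"})
```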
Integrations: The Other Great Application Delivery Myth
“Point-to-point integrations are the answer,” the vendors say. “We can integrate our point tool to their point tool.” But when they do, the point-to-point integrations are usually limited in functionality, barely offer more than automated cut-and-paste and are all too often very brittle. Upgrade the software at one end of the integration and the integration falls apart. And you, the customer, are left trying to get the vendors to fix it.
Interfaces
Ideally the tools one uses have role-specific user interfaces.
There has been a trend recently to create the all-singing-all-dancing IDE, with every conceivable feature and function buried down amongst many layers of menus. But the one-size-fits-all myth applies to interfaces too. We need role-specific user interfaces.
“Classes” and “libraries” are jargon that is fine for a developer, but “resources” and “collections” might be better for a user interface designer. Painting pictures with a stylus might be right for the web designer, but a Java IDE is better for the web developer. One size does not fit all.
So it is essential to use tools that meet your UI needs, not those of the vendor. And if you have no tool in that part of the lifecycle, you can easily create an interface by automating the process steps with your automation tool.
Reports, controls and measurement
If we implement all of these features and connect them together with the automation tool we will be able to:
- Create the kind of reporting dashboards that allow us to manage our business.
- Implement controls so we can ensure the right stakeholders give informed consent to projects as they move through the lifecycle.
- Get real-time data, trending over time, so we can see where we are improving development efforts and where we are making them worse.
When we look at most dashboards they are awash with charts and grids, more colorful than Harlequin’s suit. What we need are the KPIs (key performance indicators), and we need to limit these to fewer than ten.
Application Delivery KPIs for the CIO:
- Percentage of projects delivered on time
- Percentage of planned content delivered
- Percentage of projects delivered to budget
- Percentage of defects reported post delivery
Application Delivery KPIs for the VP of AppDev:
- Percentage of requirements changed post freeze
- Average number of items in developer queues
- Average number of closed tickets per day
- Number of severity 1 issues outstanding more than 24 hours
- Percentage of automated tests that fail
- Percentage of automated test coverage
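As an illustration of how process automation makes these KPIs cheap to produce, here is a minimal sketch computing two of the CIO-level figures from project records. The record fields are assumptions; real data would come from the automation tool’s reporting API.

```python
# Minimal sketch: compute two CIO-level KPIs from project records.
# The record fields are hypothetical illustrations.

projects = [
    {"name": "billing-revamp", "on_time": True,  "on_budget": True},
    {"name": "mobile-portal",  "on_time": False, "on_budget": True},
    {"name": "etl-migration",  "on_time": True,  "on_budget": False},
]

def pct(flag: str) -> float:
    """Percentage of projects for which the given flag is True."""
    return 100.0 * sum(p[flag] for p in projects) / len(projects)

print(f"Projects delivered on time:   {pct('on_time'):.0f}%")
print(f"Projects delivered to budget: {pct('on_budget'):.0f}%")
```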
Whatever we choose as our key indicators, we can now have this information delivered to us because of process automation. But another key benefit of automation is the guarantee that processes will be followed and that the designated individuals will be able to insert themselves into the process and record their approval (or disapproval) at each step of the lifecycle. This is essential for accountability and traceability.
Defining Processes
So you need to define your high-level process. And then your low-level ones.
We implement the high-level ones in our automation tool. The low-level ones are implemented in the tool of choice for the phase. If there is no tool, we implement in the process automation tool.
We connect the low-level tools to the high-level process via web-services based integrations. We do this based on the process needs, not on point-to-point capabilities.
Where there are parts of the lifecycle that are not supported by tools we create the interfaces we need so that every stakeholder is required to participate in the process.
We develop dashboards, controls and key performance indicators based on the automation.
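A minimal sketch of that web-services connection: a low-level tool reporting a phase transition up to the high-level process. The orchestrator URL and payload shape are hypothetical assumptions for the example.

```python
# Minimal sketch: a low-level tool reports a phase transition to the
# high-level orchestrator over a web-services call. The URL and the
# payload shape are hypothetical; the call is driven by the process
# model, not by whatever hooks two point tools happen to share.
import json
import urllib.request

ORCHESTRATOR = "https://orchestrator.example.com/api/transitions"  # hypothetical

def report_transition(project: str, phase: str, status: str) -> None:
    payload = json.dumps(
        {"project": project, "phase": phase, "status": status}
    ).encode()
    req = urllib.request.Request(
        ORCHESTRATOR, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# e.g. the test tool signals that system testing is complete:
# report_transition("billing-revamp", "Test", "complete")
```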
I know it sounds easy, and it is, really. It just requires effort and dedication, supported by commitment and open-mindedness. Not a lot to ask.
Automating Application Delivery
It doesn’t matter what your application development processes look like, what methodology you’re following or which platform you are developing for. Whatever your technology topology, you will find that automation significantly improves how you deliver applications.
Of course, it is all about the process and not the tools. But it is easier to improve automated processes than manual ones: to enforce compliance, obtain metrics, manage exceptions and report on status once automation is in place. So the rule of thumb here is “automate, then optimize”. Yes, this sometimes means automating a bad process, but a bad automated process is easier to fix in the long run than a bad manual one.
So, who do we start with when we want to automate application delivery processes?
The short answer is: we start where the most pain is being experienced. Automation brings rapid relief to organizational issues. These are often within a silo and can be implemented quickly, as all the decision-making authority is in one command structure.
They are, equally often, between silos, managing the boundary conditions and hand-offs, and can be implemented here too with speed and efficiency, though with more negotiation as two (or more) groups have the decision-making authority. As a result, it is essential to have senior management commitment to the automation project.
Whether it is managing the exchange of tasks and artifacts or just keeping track of status, the importance of automation cannot be overstated. Incredible amounts of effort are expended in every IT department doing manually what could be done with automation. You need to find a champion on each team, in each silo, and involve them in every step of the automation process. Empower them to make the decisions for their team and make them responsible for communicating the project detail to their team members.
As we enter the deployment phase, your virtual team needs to celebrate the victories and highlight the benefits accruing from the automation. This means regularly comparing results with previous baselines of data, so that productivity gains can be demonstrated and error rates can be seen to fall.