Unified AppDev – DevOps Mainframe Efficiency

While the DevOps methodology appears to promise significant improvements in efficiency, it relies on significant changes to internal practices in environments which are – frankly – quite hard to change. In his third blog in the series, Derek Britton explores whether taking a smarter approach to mainframe DevOps might bring the goal of development efficiency within reach.

Introduction

‘We’re all about smaller, focused teams, working on manageable sizes of work, with a regular and clean delivery process’. That’s Agile, or Scrum or one of its equivalents, talking. DevOps, to paraphrase, would respond ‘that’s fine, but let’s include the testing teams as well, so those regular deliveries can stand up to scrutiny over quality, fitness for purpose and user feedback’.

Using a focused, multi-skilled team to work together on manageable deliveries and testing the whole delivery as a single unit of work? Well, frankly, it takes courage to argue against it. Anyone doing so would be presumably thinking wistfully of protracted waterfall models with huge projects that never fully sign off the scope and cannot test that volume of changes in the time allotted, where precious skilled resources are pulled from pillar to post to rectify unplanned issues. Leaving that all behind is certainly a good strategy. But how does that stack up for the mainframe environment?

Regular and Clean Delivery Process?

This is where the problems could start.

Of course, the mainframe processes are regimented and rule-based, yes, orchestrated and established, certainly.

But the frequency of delivery is based on the payload, other priorities, resource availability, and often – crucially – mainframe resources too.

And ‘clean’ is slightly subjective perhaps, but there’s nothing simple about the quality of required deliveries. Rudimentary errors such as compile errors aren’t uncovered until the next day. The COBOL team has their part, the Java team another – and what about the middleware? Even straightforward debugging or basic unit testing requires QA to be on stand-by to help set up the environment, as well as the sys-admin team.

So, clean? A little tainted, maybe. One Micro Focus customer faced an application change reality – not uncommon in the mainframe world – where regular challenges faced the beleaguered development team and application development cycles were slow compared to other language teams (though the mainframe applications were where the business value resided). The cause was diagnosed as a combination of cumbersome tooling and restrictions in available mainframe processing time.

Delivering applications composed of mainframe COBOL and distributed Java assets was becoming more regular, and therefore more critical, yet the teams involved were split by structure, process and technology usage. Collaboration over bug fixing, unit testing and QA was extremely difficult.

The Solution – AppDev Sans Frontiers

The solution was as dramatic as it was straightforward: a single, unifying toolset, as modern as it needed to be, but mindful of more established technology.

Figure 1 – Smart z/OS development with Enterprise Developer

Micro Focus’ IBM mainframe application delivery suite flagship – Micro Focus Enterprise Developer (figure 1) – enabled the teams to use a common development toolset, independent of scarce mainframe resources. The powerful Eclipse-based IDE provided freedom for mainframe developers to work in isolation when necessary, but also share source code and other application resources as a group, without burdening the busy mainframe.

To summarise the net benefits for this particular customer – and many other current Micro Focus users:

  • Composite core applications can be developed, debugged and unit tested using a single development environment
  • COBOL and Java application developers can collaborate and interact at a development level, using the same environment

All application developers, including mainframe COBOL developers, can achieve improved levels of efficiency, through access to contemporary development tooling. Over 25% efficiency improvements are possible.

Figure 2 – Side by side COBOL and Java development, courtesy of Micro Focus

Figure 2 illustrates how Micro Focus technology provides side-by-side development of both COBOL and Java application assets from within the same Eclipse IDE.

Additionally, customers looking to unite COBOL and Java, for example, can utilise Enterprise Test Server as a basis for composite application testing, without imposing further on mainframe resources – perfect for integration and functional testing.

A timeless, ageless solution

Furthermore, many customers are also grappling with questions over future resourcing. After all, these COBOL systems have outlasted any reasonable prediction of their life-span, such is the robust and enduring value of both the applications and the underlying technology. Micro Focus’ development tools help with this – they are a smooth introduction to a more efficient development model for mainframers, but also a perfect training ground in the mainframe world (and the COBOL language) for younger developers with no such background.

In a recent article, a 19-year-old intern at a technology company described a two-hour learning process during which they learnt how to code in COBOL using a contemporary IDE and built their first application. In our client scenario, the results were astounding – skills are no longer a concern, as the average age of the mainframe developer for this client has been reduced to 26.

Unity is Strength

The Micro Focus environment now acts as the basis for all core application development work in that organization, such have been the far-reaching benefits to the technical teams in meeting efficiency targets. So whether the application development requirements are directly connected to the mainframe or fully offloaded, and whether the language requirements are COBOL, Java, PL/I or a mixture, there is a unified development framework for all.

The phrases ‘shorter development cycle’ and ‘increased release velocity’ are often cited as the major outcomes of adopting DevOps. One measurement the client shared was that, thanks to reduced mainframe dependency and improved tooling, a full core system compilation task has been reduced from a full day on the mainframe to 23 minutes under Enterprise Developer.

Find Out More…

Better development efficiency through modern tooling and collaboration is not the only DevOps objective, but it was the major issue for this client, whose goal was improved efficiency to provide better services, faster, to their clients.

Micro Focus technology acted as a platform for improvements in working practices to enable a step change in efficiency, based on the central DevOps principle of collaborative development. Visit us to learn how Micro Focus can assist your DevOps adoption and resolve your mainframe skills questions.

In our next and final blog of this series I will take a look at the DevOps challenge of testing efficiencies, in the context of mainframe application delivery.

Artix : Modernizing and Maximizing CORBA

John McHugh looks into 25 years of CORBA’s reliability, security and proven performance, and how Micro Focus Artix can connect CORBA with Web Services, .NET, and REST to maximize and modernize your existing investment.

I often hear that CORBA has made itself absolutely indispensable to the organizations that have relied upon it for the past 25 years. Why? CORBA’s reliability, security and proven performance are second to none.

However, like any technology, CORBA is often just one piece of a much larger solution. And for anyone who is responsible for ensuring that CORBA and additional applications function as one, integration may appear a daunting task. Not only do you have to deal with different technologies with their own protocols, products and applications, but, in many cases, you also have to do so without the expertise that was used to develop the system in the first place.

All too often, solution providers expect you to replace perfectly functioning systems and rewrite them using their products. Clearly, this is a costly exercise you likely want to avoid. It also introduces risk that may not become apparent until late in the project, or worse, in production. For this reason, integrating rather than replacing is a vital part of any IT solution, which, thanks to Artix, is a very straightforward undertaking, especially so when it comes to CORBA.


Why Replace When You Can Extend Your Reach?

You likely have a wealth of technologies and products within your IT infrastructure that are essential to everyday business operations. As these solutions have typically been deployed over a long period of time, ensuring that critical business data contained within them remains accessible and actionable can become a challenge. Moreover, the ubiquitous nature of mobile and web means that the programming paradigms associated with emerging forms of communication can introduce additional complexity when your organization tries interfacing established solutions with newer applications.

I frequently see organizations quickly resign themselves to the fact that integration is too much to take on or that their data is irreparably siloed, and move to rip and replace well-established technology entirely – or rely upon ad-hoc data bridges that are difficult and expensive to maintain.

Artix puts the data that some may (mistakenly) consider locked away in multiple protocols, particularly CORBA, into easy reach and ensures that you can connect cutting-edge yet disparate technologies in sustainable fashion. So, instead of ripping, replacing or recoding the technology you rely on every day, you can extend your reach with Artix. No matter the CORBA solution deployed in your organization, Artix can integrate seamlessly, bringing cost savings and multiple new use cases to the fore.

Eliminate Risk with Seamless Interoperability

Through Artix, existing CORBA applications can seamlessly interoperate with non-CORBA technologies (including those built on .NET, Web Services and REST) without any modification – and they will continue to function with existing applications and systems exactly as before.

Working alongside CORBA, Artix operates in tandem allowing you to modernize using the latest industry standards, extending the functionality without any additional investment in CORBA. It’s a mature and proven technology, already powering robust and resilient integration infrastructures for industry leaders in a variety of markets including Telecommunications, Financial Services and Manufacturing.

For CORBA users, who typically operate in a request-reply world, Artix’s ability to convert an IIOP message into a SOAP message and transmit it over HTTP is most interesting. However, where you may need more flexibility and want to integrate through REST, that’s not a problem either. Thanks to Artix, there’s no need to change the behaviour of your CORBA applications, so they can continue communicating with your existing CORBA assets, thereby allowing its introduction with little to no risk. You can continue to rely upon CORBA to power your mission-critical applications with best-in-class performance, security and scalability while bringing the data contained within CORBA to the widest range of devices and systems.

Artix in Action

Check out our Modernizing CORBA through Artix video series to learn how you can maximize existing assets using Artix and dig into the details of connecting CORBA with Web Services, .NET, and REST with these educational videos.

You can also sign up for a free 30-day trial and download the code in the video series to see for yourself how Artix can help you introduce new revenue models and operational efficiencies in far less time. Contact us with questions and share your experiences with CORBA and Artix in the comments section below. Also, don’t forget that you can follow and talk to @MicroFocus on Twitter.


DevOps – a culture of collaboration

DevOps is a software development methodology which promises greater efficiency and throughput of change through transparency, collaboration and flexibility. But it depends on significant changes to internal practices for many. In the second of our Mainframe DevOps series, Derek Britton opens up the discussion on a key DevOps principle: a culture of collaboration.

Introduction

The Agile manifesto suggests good practice for building software is to establish manageable iterations in which smaller chunks of work are taken, worked on and delivered in smaller timeframes by a dedicated team. DevOps seeks to add further discipline and flexibility by driving a more focused and inclusive process across IT.

We need to talk about IT

Fundamentally, DevOps aims to provide a sensible transparent point of intersection between the disciplines of Development, QA/Testing and Operations. This espouses (and largely depends upon) collaboration, flexibility and – ultimately – internal operational efficiency.

But is that really possible in a mainframe environment? Industry evidence talks about two thirds of all development shops being aware of DevOps, but that doesn’t mean they are aligned or structured – or even inclined – to collaborate.


Teams and Tools

In one particular client situation, a major application set was the centre of a significant industrialised development process to service several lines of business from the same source code. However, both in terms of team structure and tooling (development, testing, configuration management), there were significant restrictions in terms of the level of flexibility and parallel activity, meaning the line of business deliveries needed to be managed in sequence rather than in parallel.

Failure to collaborate and adopt modern development methods was, in effect, jeopardizing business efficiency. The client needed flexible and more frequent releases to accommodate a range of unrelated application changes, and sought a model for parallel development across features, releases, or teams to achieve this.

A Contemporary Approach

While there was going to be some change in working practices, adopting modern tooling was the nucleus of the solution.

The flagship of our IBM mainframe application delivery suite – Micro Focus Enterprise Developer – enabled the teams to use a common development toolset which was not dependent on scarce mainframe resources. The powerful Eclipse-based IDE provided freedom for mainframe developers to work in isolation when necessary, but also share source code and other application resources as a group, without burdening the busy mainframe.

At the same time the team adopted a distributed SCM tool (in this case Micro Focus AccuRev) in order to provide a secure, multi-user approach to source code management for the earlier phases of the development and testing cycle. Integrating with their mainframe based SCM later in the delivery cycle maintained the integrity of their promotional model while building in greater flexibility at the early development stages.

A New Chapter

The client has embarked on a new chapter where its traditional mainframe development practices have evolved, without undergoing any risky seismic overhaul, to support contemporary practices. They’ve done this simply because that’s what the business needed.

The upshot is that a greater number of IT delivery projects are running concurrently, delivering more functionality to the business in less time. Improved tooling and visibility also means less time spent fixing regressions or backing out changes.

The phrases ‘shorter development cycle’ and ‘increased release velocity’ are often cited as the major outcomes of a DevOps model. While by no means a full-blown DevOps shop, the client is already benefitting from one of DevOps’ fundamental tenets.

Want A World View? – You Need an Atlas

Another potentially vital consideration is the flexibility with which the workload can be prioritized and shaped in the process. We’ve spoken so far about improving the production process of building and delivering in efficient parallel streams. But what about capturing the tasks in the first place? Establishing an equally flexible, transparent method of capturing and managing those tasks matters too. Agile requirements management has benefited from a significant recent investment by the Borland team, as covered in the press.


Need More?

Improved collaboration and a parallel development model are by no means the full DevOps story. But this was the major issue with this client. And their goal was improved efficiency. Micro Focus technology acted as a platform for improvements in working practices to enable a step change in efficiency, based on the central DevOps principle of collaborative development. We’ll look in our next blogs at improving development and testing efficiencies. You might also like to read my introductory DevOps blog post here.

We are asked a lot about how to build greater efficiency into core systems delivery. We spend a lot of time in the #DevDay agenda talking about precisely that. Why not come to an event near you soon where you can see for yourself what’s possible?

Monte Carlo Method & Silk Performer solving Pi?

The Monte Carlo method uses random sampling in order to approximate calculations by performing random experiments on a computer. Can Silk Performer estimate Pi using the Monte Carlo method? You can bet your bottom dollar it can – here’s how:

The Monte Carlo method uses random sampling in order to approximate calculations by performing random experiments on a computer.

A simple example of this is calculating an estimate for Pi and we can use Silk Performer to do this.

Here is the logic and process that we can use:

  • The square below is 2×2 and the circle inside the square has a radius of 1.
  • The area of a circle is πr², therefore the area of the circle below is π*1² = π.
  • The area of the square is 2*2 = 4.
  • If we perform some clicks inside the square using completely random X and Y coordinates then the probability of the random click falling inside the circle is equal to the area of the circle divided by the area of the square. This is equal to π divided by 4 (π/4).
  • Using this logic and a lot of random clicks we can then provide an estimate for the value of π by counting the number of clicks that fall within the circle and the total number of clicks performed.
  • Some simple mathematics leads to the estimate for π being equal to the number of clicks inside the circle multiplied by 4 and divided by the total number of clicks.
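The steps above can be sketched outside Silk Performer too. Here is a minimal Python simulation of the same logic, where each iteration stands in for one random browser click (the function name and the click count are illustrative, not part of the BDL script shown in the video):

```python
import random

def estimate_pi(num_clicks: int, seed: int = 42) -> float:
    """Estimate pi by 'clicking' random points in a 2x2 square
    centred on the origin and counting hits inside the unit circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_clicks):
        x = rng.uniform(-1.0, 1.0)  # random X coordinate within the square
        y = rng.uniform(-1.0, 1.0)  # random Y coordinate within the square
        if x * x + y * y <= 1.0:    # the click landed inside the circle
            inside += 1
    # P(inside) is roughly pi/4, so pi is roughly 4 * inside / total
    return 4.0 * inside / num_clicks

print(estimate_pi(100_000))
```

With 100,000 simulated clicks the estimate typically lands within a few hundredths of π; accuracy improves slowly (with the square root of the click count), which is why a lot of clicks are needed.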

So how can we use Silk Performer to calculate an estimate for π using the logic above?

Using Silk Performer’s powerful Browser Driven Load Testing (BDLT) technology we can perform actual random browser clicks on the hosted image above and keep a running total of where the clicks take place (inside or outside the circle).

Check out the video to see how this can be done using Silk Performer’s simple to use BDL scripting language.

Neil McMurtry

Neil is a Senior Product Engineer, located in Belfast, Northern Ireland. You can bump into Neil in many places on the Micro Focus Community Site, but you are most likely to find him answering customer questions on the Silk Performer Forum, as he has over 10 years of expertise using Silk Performer!

extend 10 is here and ready for next gen development!

Ed Airey talks us through the ACUCOBOL extend 10 launch. This latest and major release of extend offers new and exciting capabilities designed to optimize ACU development and accelerate your move to new innovation.

Take your first steps towards the future of ACU

As an ACUCOBOL-GT user, are you looking for a clear path forward? Maybe a future-proof strategy to support, even modernize, your existing ACUCOBOL apps? If you are, you’re in very good hands with Micro Focus and extend 10.

Check it out! This latest and major release of extend offers new and exciting capabilities designed to optimize ACU development and accelerate your move to new innovation. Whether you’re considering a new UI strategy, a move to the Cloud or adopting the latest deployment platform, extend 10 delivers an impressive line-up of new features that promises to excite any ACUCOBOL user.

What’s New?

Well, here are just a few highlights:

  • ACUCOBOL support for the latest technology platforms including Windows 10, Apache 2.4 and .NET 4.5
  • Single files over 2GB are now supported
  • ACUCOBOL-GT enhancements including AutoFill Entry-field
  • Relational database support is now included as a standard ‘feature’
  • Tracing and profiler enhancements
  • Plus many more including customer reported fixes

To learn more, check out the ‘What’s New in extend 10’ datasheet.

Protect what you have

If your core business systems are built on ACUCOBOL technology, you’ll want to protect them and leverage them to your advantage. In many ways, these systems represent your core company identity, your unique IP – and it would be difficult to imagine a business environment without them!

With extend 10, you protect what you have but also optimize it – effectively use what already works and innovate from there. Re-use is the smarter play. Want proof? Read how Rainer Obermeit chose Micro Focus to move into the future using ACUCOBOL technology. The secret to application re-use success is to do this without adding significant cost or risk to the business. Easier said than done, I know, but with the right tools (designed for this task) it’s simpler than you may think. extend 10 makes this process straightforward by giving you the foundational platform for new development capability – to quickly deliver the new business features that your customers are asking for! The tools are here and they’re waiting for you.

Familiar faces

Why are we so confident in extend 10? We’re reinvesting in ACUCOBOL in new ways, including new feature roadmaps, new technology deliverables and new community events. Plus, our ACUCOBOL community just got a lot stronger with the recent return of an ACUCOBOL legend – Drake Coker. We’re very happy to have Drake back with us and focused on delivering new ACU capabilities for the extend product line. So, with renewed energy, a feature-rich roadmap and the addition of new (and familiar) ACU friends, we’re ready to go.

Join us and let’s take our ACUCOBOL apps into the future!

Don’t miss this opportunity to experience extend 10 for yourself.

ACUCOBOL users: learn more about this exciting new release now!


Enterprise united – IBM TXSeries and Micro Focus Visual COBOL

Ed Airey from Micro Focus looks at the tighter integration between Micro Focus Visual COBOL and IBM TXSeries. Enterprise developers can now build, test and deploy enterprise COBOL applications using modern IDEs – Visual Studio or Eclipse – that are seamlessly integrated within Micro Focus Visual COBOL.


Many global organizations rely on application portfolios and core systems written up to half a century ago to serve the needs of the business and support business-critical functions such as payroll, banking, retail services and logistics.

For the modern enterprise, re-using application logic and data technology is good business sense. With the right technology, companies can leverage these applications and create a bridge to new technology such as cloud, mobile and next generation architectures.

This way, companies with heritage IT can still be relevant in the new digital economy. The closer alignment between IBM middleware and our forward-thinking development environment offers new options for companies meeting that profile.

What are the products called and what do they do?

IBM® TXSeries® for Multiplatforms is distributed transaction processing middleware. It supports new and longer-established applications in both cloud environments and traditional data centers. It is a scalable and highly available platform for developing, deploying and hosting mission-critical applications that also integrates with mixed-language, multiplatform and service-oriented architecture (SOA) environments.

Micro Focus Visual COBOL

Visual COBOL is a powerful development environment for COBOL applications. By integrating with Eclipse and Visual Studio, Visual COBOL enables a new age of application innovation, with unrivalled portability and performance for distributed COBOL applications. Smart editing features, multi-platform compilation and advanced debugging tools ensure that modernizing your COBOL applications through Visual COBOL is simple and straightforward. It supports a wide range of the latest environments including Cloud, mobile, .NET and JVM.


How do they work together?

Modern enterprises deploying these two solutions together will enable their application developers to leverage long-established and unique business logic and data while bridging the gap between the old and the new. Business-critical applications written in C, C++, COBOL, Java and PL/I can be deployed across an array of on-premise, virtual and cloud platforms. Enterprise organizations will be able to develop mission-critical applications with high scalability, fast performance and 24×7 availability.

What does it all mean?

Because IBM TXSeries for Multiplatforms V8.2 now supports Micro Focus Visual COBOL, developers can build, test and deploy enterprise COBOL applications using modern IDEs – Visual Studio or Eclipse – that are seamlessly integrated within Micro Focus Visual COBOL.

The tighter integration between Micro Focus Visual COBOL and IBM TXSeries is highly streamlined and validated through a simple ‘single step command’ within the Install Verification Programs (IVP) process. Adoption is further streamlined by providing CICS and COBOL sample applications ‘out-of-the-box’ for easy reference and integration.

Perhaps more significantly, internal test simulations suggest this tighter alignment will deliver a powerful 40% improvement in COBOL application performance. Application developers are more enabled than ever before to optimize application services for business users.

So, good news, right?

Right. “We’re very excited about this new integration capability between IBM and Micro Focus. This new solution further demonstrates our shared partner commitment to the ongoing support of enterprise business applications. Visual COBOL’s certification with IBM TXSeries provides our mutual customers with new development capability, a wide array of platform options and unrivaled performance.” (Me! That’s Ed Airey, Micro Focus, and you can find me on Twitter.)

Right. “It is always our priority to bring together the best in class experience for our application development and management teams. With support for Visual COBOL we have taken our user experience to the next level by providing a well-integrated, seamless development and deployment of COBOL apps to our enterprise-grade platform.” (Kasturi Mohan, IBM)

Learn more about IBM TXSeries for Multiplatforms V8.2 here and start your journey to innovation with Micro Focus Visual COBOL from here.

Ed

Discovering DevOps … for the Mainframe

That font of knowledge Wikipedia tells us that DevOps “aims to help an organization rapidly produce software products” – but can this much-lauded IT delivery methodology make light work of some of the most complex, and business-critical, mainframe development tasks? In the first of a series of blogs on DevOps, Micro Focus Product Marketing Director Derek Britton lifts the lid…

The Case for Mainframe DevOps: All Quiet on Planet z?

In many if not most organizations we speak to, there are some defining characteristics of the mainframe environment. First is its enduring value: market surveys confirm continued loyalty from the mainframe community. Indeed, if we look at the IT estate that has built up over the years, the investments made over time have led to incredible returns in terms of business value and competitive advantage. But usually – very often in fact, and this is our second characteristic – these tremendous returns have come at the cost of what is now very high complexity in the IT estate.

A third defining characteristic from all that complexity is where the time is spent today by mainframe application teams. The reality for most organizations is that the majority of budget and time is spent simply ‘keeping the lights on’ (by which I mean the day-to-day running of the organization). Some organizations will spend 70, 80 or even 90%[1] of their available IT resources just doing ‘lights on’ activities, such as maintenance updates, managing the maintenance backlog, regulatory changes, and bug fixes to core systems.

This leaves very little time and budget to spend on innovation – innovation to drive success. And business success in 2015 needs greater flexibility, and velocity in deliveries from IT.


Enter the Hero to Save the Day

IT faces an interesting reality in which core applications have outlived the very processes and technologies used to create them originally. These processes and technologies are now showing signs of age and are unable to support a more flexible and inclusive model that systems of 2015 need.

And that’s where DevOps comes in…

Building on the concepts of the agile manifesto, where smaller chunks of work are taken, worked on and delivered in smaller timeframes by a dedicated team, DevOps seeks to add further discipline and flexibility by driving a more focused and inclusive process across IT. This espouses collaboration, flexibility and – ultimately – internal operational efficiency.

DevOps makes a lot of sense for small teams building new code with few distractions. It lends itself to new developments among co-located teams using modern technology across an open structure.

All that’s needed is for development teams to adopt it in the mainframe environment…


Sorry, we need to do what?

However, this is far from straightforward for the mainframe world. There are a number of reasons; let’s look at three of the more obvious concerns.

Firstly, much of the work is NOT new code. The core systems and the updates they require make up most of the IT backlog. And these need to be planned and undertaken in a regimented fashion, where mainframe resources, available skills, business priority and a serious risk aversion drive the operational culture. Multiple updates to the same system may not be a smart move when the cost of production failure is significant.

Secondly, DevOps is built on an agile model. Most mainframe shops are not, with large teams, a formalised structure and a fairly fixed delivery model. Culturally, DevOps is misaligned with the mainframe world today. One commentator phrased it this way: “one of the biggest challenges is the cultural shift, which will be necessary to break away from the old siloed ways of working”.

And thirdly, the issue is technical: in all but a few cases the underlying mainframe delivery technology is not contemporary, and typically doesn’t lend itself to a collaborative or efficient workflow. It works for the way the processes were established years ago. It doesn’t necessarily bend to support another model.

But all is not lost. If we remind ourselves of our core objectives – to make delivery cycles more efficient, to foster collaboration, to improve the velocity of deliveries – we can look at practical ways of removing barriers to success. The online Agile Coach journal and Roger Brown recently did just that in the blog post ‘Agile in a COBOL world‘, which I’d recommend you read after you finish up here…

Reality Bites

Let’s look at three common situations in the mainframe world. In fact, these are three problems we have encountered recently with customers. Let’s consider them as our barriers to better efficiency.

One goal of DevOps is to accelerate throughput by enabling many resources to work simultaneously on code deliveries, or ‘concurrent deliveries’. This isn’t always how things are set up in the mainframe world, and is often restricted by promotional models, Source Code Management configuration, and usually, mainframe development procedures.

Another objective is to accelerate the task of application development, and associated analysis and testing, to enable each individual to get more done. Inefficient or out of date tooling might not allow for this.

Finally, and a real concern for many of our clients, is that irrespective of development speed, there is a rigid, fixed and lengthy test phase which is dictated by QA procedures and available mainframe resources. Most clients we speak to are maxed out in terms of mainframe test resources due to the volume of testing effort needed.

Conclusion – Get Smarter

The challenges above are by no means uncommon; they are, however, major rocks in the road in moving towards a more efficient delivery model, the cornerstone of the DevOps ethos.

But there is a way forward. We will explore each of these challenges – and a practical solution for each that is based on contemporary technology – in forthcoming blogs. For those looking to streamline mainframe delivery using DevOps principles, the news is good, and is timely. To quote a recent publication: “Not only is DevOps on the mainframe Mission Possible, but also it’s becoming Mission Critical[2]”. In the meantime, find out more by contacting me on Twitter or an expert at Micro Focus.

[1] Sources – Gartner, Forrester and anecdotal customer information

[2] Mobile to Mainframe DevOps for Dummies – Rosalind Radcliffe (IBM Limited Edition), 2015
