Academics, Analysts and Anchormen: Saluting the Admiral

Introduction

In 1987 I sat my first semester (we call them terms in the UK) at university, studying for a Bachelor’s degree in Computer Science. One of my first assignments was to pick up and learn one of a broad range of computer languages. COBOL was picked first because it was a “good place to start as it’s easy to learn[1]”. Originally designed for business users, its instructions were straightforward to learn. They were right: it was a great place to start, and my relationship with COBOL is a long way from over, more than 30 years later.

A Great New Idea?

Little did I know I was using a technology that had been conceived 30 years beforehand. In 2019, one of the greatest technology inventions of the last century, the COBOL computer programming language, will celebrate its diamond anniversary. While not as widely known, or anywhere near as popular, as in its 1960s and 70s heyday, it remains the stalwart behind a vast number of vital commercial IT systems globally. Anecdotal evidence suggests the majority of the world’s key business transactions still use a COBOL back-end process.

However, the celebrated, windswept technology pioneers – Jobs, Turing, Berners-Lee, Torvalds – were not even in the room when this idea first germinated. Instead, a committee of US Government and industry experts had assembled to discuss the matter of computer programming for the masses, a concept without which, they felt, technological progress would stall. Step forward the prodigious talent of Grace Murray Hopper. With her present on the CODASYL committee, the notion of a programming language that was “English-like” and which “anyone could read” was devised and added to the requirements. The original aim of making the language cross-platform was only achieved later, but the ideas still stood as the blueprint.

Soon enough, as is the way with scientists, the inevitable acronym-based name arrived –

  • Everyone can do it? Common.
  • Designed with commerce in mind? Business Oriented.
  • A way of operating the computer? Language.

This was 1959. To provide some context, food rationing in the UK had ended only a few years earlier, and the IBM System/360 mainframe would not arrive for another five years. Bill Haley was still rockin’ ‘til broad daylight, or so the contemporary tune said.

Grace Murray Hopper was already the embodiment of dedication. Too slight to meet the US Navy’s physical entrance requirements, she obtained a waiver and joined the Naval Reserve in 1944. And while her stature was diminutive, her intellect knew no bounds. She was credited with a range of accolades during an illustrious career, as wide and varied as –

  1. Popularising the term ‘debug’ to refer to taking errors out of program code. The term was a literal reference to a bug (a moth) found disrupting a relay in a computer her team was using
  2. Hopper’s later work on language standards, where she was instrumental in defining the relevant test cases to prove language compliance, ensured longer-term portability could be planned for and verified. Anyone from a testing background can thank Hopper for furthering the concept of test cases in computing
  3. Coining the phrase, which I will paraphrase rather than misquote, that it is sometimes easier to seek forgiveness than permission. I can only speculate that the inventors of “seize the day” and “just do it” would have been impressed with the notion. Her pioneering spirit and organizational skills ensured she delivered on many of her ideas.
  4. Characterising time using a visual aid: she invited people to conceptualize the speed of light by showing how far electricity can travel in a nanosecond. She handed out short lengths of wire, each labelled a “nanosecond” – across the internet people still boast about receiving a nanosecond from Hopper
  5. Cutting the TV chat-show host David Letterman down to size. A formidable and sometimes brusque lady, her appearance on the Letterman Show in the 1980s is still hilarious.

A lasting legacy

Later rising to the rank of Rear Admiral, and employed by the Navy until she was 79, Hopper is, however, best known for being the guiding hand behind COBOL, a project that concluded in 1959 and found commercial breakthroughs a few years later. Within a decade, the world’s largest (and richest) organisations had invested in mainframe-hosted COBOL data processing systems. Many of them retain that model today, though most of the systems themselves (machinery, language usage, storage, interfaces etc.) have changed almost beyond recognition. Mainframes and COBOL are still running most of the world’s biggest banks, insurers and government departments, plus significant numbers of healthcare, manufacturing, transportation and even retail systems.

Hopper died in 1992 at the age of 85. In 2016 Hopper posthumously received the Presidential Medal of Freedom from Barack Obama. In February 2017, Yale University announced it would rename one of its colleges in Hopper’s honour.

Grace Hopper remains an inspiration to scientists, academics, women in technology, biographers, film-makers, COBOL and computing enthusiasts and pioneers, and anyone who has worked in business computing over the last five decades. We also happen to think she’d like our new COBOL product too. The legacy of technological innovation she embodied lives on.

[1] The environment provided was something called COBOL/2, a PC-based COBOL development system. The vendor was Micro Focus.

The 5 Longest Lead Times in Software Delivery

The Pressure to Go Fast

Rapid business change, fueled by software innovation, is transforming how software delivery organizations define, develop, test, and release business applications. For these organizations to keep their competitive advantage in today’s complex and volatile digital marketplace, they must become more agile, adaptive, and integrated into the business, and embrace digital transformation practices. Unfortunately, most current software delivery practices can’t keep pace with the demands of the business.

Long software delivery cycles are a significant impediment to business technology innovation. Agile development teams have shortened development cycles, but Agile by itself is insufficient: it does not remove the cultural and technical barriers between development and operations. DevOps principles and practices, developed in response to this problem, facilitate cooperation and coordination among teams to deliver software faster and with better quality.

The goal of scaling “DevOps” for the enterprise is to prioritize and optimize deployment pipelines and reduce lead times to deliver better business outcomes. Creating new and optimizing existing deployment pipelines in large IT organizations is key to improving their efficiency and effectiveness in delivering software at the speed that the business requires.

Long Lead Times

Every enterprise IT organization is unique in that it will have different bottlenecks and constraints in its deployment pipelines. I recommend conducting a value stream mapping exercise to identify specific problem areas. “Starting and Scaling DevOps in the Enterprise”, by Gary Gruver, is a great book that provides a good framework for getting started. The following are some of the most common areas that generate the longest lead times:

Handoffs

DevOps culture strives to break down organizational silos and transition to product teams. The current siloed organizational structure works against the objective of short lead times and continuous flow. Organizational silos are artifacts of the industrial era, designed for “Batch and Queue” processing, which drives up lead times through handoffs from one team or organization to another. Each handoff is potentially a queue in itself. Resolving ambiguities requires additional communication between teams and can result in significant delays, high costs, and failed releases.

You need to strive to reduce the number of handoffs by automating a significant portion of the work and enabling the teams to continuously work on creating customer value – the faster the flow, the better the quality, resulting in lower lead times.

Approval Processes

Approval processes were originally developed to mitigate risk and provide oversight, ensuring adherence to auditable standards for moving changes into production. However, the approval process within most large enterprises is slow and complex, often comprising a set of manual, stovepiped processes that use email and Microsoft Office tools to track, manage, and, more often than not, wait on people for approval of a software change. Missing or insufficient data leads to hasty or faulty approvals, or bounce-backs, further frustrating software delivery teams, reducing quality, and impeding deployments.

Continuous delivery practices and deployment pipeline automation enable a more rigorous approval process and a dramatic improvement in speed. Releasing into production might need approval from the business, but everything up to that point can be automated, dramatically reducing lead times.
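
To make that concrete, here is a minimal Python sketch of an automated approval gate. The stage names and the promotion rule are simplified assumptions rather than a prescribed implementation; the point is that every pre-production promotion can be decided by automated checks, leaving human approval only for the production step.

```python
# Minimal sketch of an automated approval gate, assuming a pipeline that records
# the results of earlier stages (unit tests, security scan, performance check)
# as simple booleans. Only the final push to production asks a human.

from dataclasses import dataclass

@dataclass
class StageResults:
    unit_tests_passed: bool
    security_scan_passed: bool
    performance_ok: bool

def can_auto_promote(results: StageResults, target_env: str) -> bool:
    """Promote automatically to every pre-production environment once all
    automated checks are green; require manual sign-off only for production."""
    all_green = (results.unit_tests_passed
                 and results.security_scan_passed
                 and results.performance_ok)
    if not all_green:
        return False                      # fail fast: nothing is promoted
    return target_env != "production"     # production still needs a human

if __name__ == "__main__":
    results = StageResults(True, True, True)
    for env in ("test", "staging", "production"):
        action = "auto-promote" if can_auto_promote(results, env) else "request approval"
        print(f"{env}: {action}")
```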

Environment Management and Provisioning

There is nothing more demoralizing to a dev team than having to wait to get an environment to test a new feature. Lack of environment availability and/or environment contention due to manual processes and poor scheduling can create extremely long lead times, delay releases, and increase the cost of release deployments.

Creating environments is a highly repetitive task that should be documented, automated, and put under version control. An automated, self-service process to schedule, manage, track, and provision all the environments in the deployment pipeline will greatly reduce lead times and drive down costs, while increasing the productivity of your Dev and QA teams.
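
As a simple illustration, the sketch below assumes environment definitions are plain JSON files kept under version control, and uses a hypothetical provision_vm() placeholder in place of whatever provisioning tooling (scripts, IaC products, an internal API) an organization actually runs.

```python
# Minimal sketch of self-service environment provisioning. The provision_vm()
# call is a hypothetical stand-in for real provisioning tooling; the point is
# that the environment definition lives in source control and is reproducible.

import json
from pathlib import Path

def provision_vm(name: str, spec: dict) -> None:
    # Placeholder: in a real pipeline this would call your provisioning tooling.
    print(f"provisioning {name}: {spec['cpus']} CPUs, {spec['memory_gb']} GB, {spec['os']}")

def provision_environment(definition_file: str) -> None:
    """Read a version-controlled environment definition and provision each machine."""
    definition = json.loads(Path(definition_file).read_text())
    for name, spec in definition["machines"].items():
        provision_vm(name, spec)

if __name__ == "__main__":
    # Example definition a team might commit alongside its application code.
    Path("qa-env.json").write_text(json.dumps({
        "machines": {
            "app-server": {"cpus": 4, "memory_gb": 8, "os": "linux"},
            "db-server":  {"cpus": 8, "memory_gb": 32, "os": "linux"},
        }
    }))
    provision_environment("qa-env.json")
```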

Manual Software Deployments

Machines are far better and much more consistent at deploying applications than humans, yet a significant number of organizations still deploy their code manually. Automating manual deployment can be a quick win for these organizations, and can be delivered rapidly without major organizational changes. It is not uncommon for organizations to see deployment lead times reduced by over 90%.

The more automated this process is, the more repeatable and reliable it will be. When it’s time to deploy to production, it will be a non-event. This translates into dramatically lower lead times and less downtime, keeping the business open so that it can make more money.
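
Here is a minimal sketch of what such a scripted deployment might look like, assuming a hypothetical artifact path, install directory, systemd service name and health-check URL. The specifics will differ everywhere; the value is that every step is codified and therefore identical on every run.

```python
# Minimal deployment script sketch using only the standard library. All paths,
# the service name and the health URL are hypothetical placeholders.

import shutil
import subprocess
import urllib.request

ARTIFACT = "build/myapp-1.2.3.tar.gz"        # hypothetical build output
TARGET_DIR = "/opt/myapp"                    # hypothetical install location
HEALTH_URL = "http://localhost:8080/health"  # hypothetical health endpoint

def deploy() -> None:
    # 1. Unpack the new release into the target directory.
    shutil.unpack_archive(ARTIFACT, TARGET_DIR)
    # 2. Restart the service (assumes a systemd-managed unit called "myapp").
    subprocess.run(["systemctl", "restart", "myapp"], check=True)
    # 3. Verify the application answers its health check before declaring success.
    with urllib.request.urlopen(HEALTH_URL, timeout=10) as response:
        if response.status != 200:
            raise RuntimeError("deployment failed health check")
    print("deployment complete and healthy")

if __name__ == "__main__":
    deploy()
```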

Manual Software Testing

Once the environment is ready and the code is deployed, it’s time to test that the code works as expected and does not break anything else. The problem is that most organizations today test their code base manually. Manual software testing drives lead times up because the process is slow, error-prone, and expensive to scale out across large organizations.

Automated testing is a prime area to focus on to reduce lead times. Automated testing is less expensive, more reliable and repeatable, can provide broader coverage, and is a lot faster. There will be an initial cost of developing the automated test scripts, but much of that can be absorbed by shifting manual testers into “Test Development Engineer” roles focused on automated API-based testing. Over time, manual testing costs and lead times will go down as quality goes up.
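
For example, an automated API-based test can be a few lines of code that run unattended in the deployment pipeline on every build. The sketch below assumes a hypothetical /accounts endpoint and is written in pytest style.

```python
# Minimal sketch of an automated API-based test. The base URL and the
# /accounts endpoint are hypothetical examples of a service under test.

import json
import urllib.request

BASE_URL = "http://localhost:8080"   # hypothetical service under test

def get_json(path: str):
    with urllib.request.urlopen(BASE_URL + path, timeout=5) as response:
        return response.status, json.loads(response.read())

def test_account_lookup_returns_expected_fields():
    status, body = get_json("/accounts/12345")
    assert status == 200
    # The contract the UI depends on: these fields must always be present.
    assert {"id", "owner", "balance"} <= set(body.keys())
```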

 The velocity and complexity of software delivery continues to increase as businesses adapt to new economic conditions. Optimizing and automating deployment pipelines using DevOps practices will dramatically reduce lead times and enable the delivery of software faster and with better quality.

To learn more about how to optimize your deployment pipelines, listen to our popular on-demand webcast with Gary Gruver, where he talks about how to start your DevOps journey and how to scale it in large enterprises where change is usually difficult. He shares his recommendations from his new book on scaling DevOps and answers audience questions on how to adopt those best practices in their organizations.

Fill in the form to listen to the recording and get your free copy of Gary’s new book, Starting and Scaling DevOps in the Enterprise.

Multifactor Authentication for the Mainframe?

Is the password dead or dying?

Lots of articles talk about the death of passwords. Google aims to kill them off by the end of 2017. According to the company, Android users will soon be able to log in to services using a combination of face, typing, and movement patterns. Apple figured this out long ago (Apple Pay) and continues to move away from passwords. Even the U.S. government is coming to grips with the fact that passwords don’t cut it anymore.

Enter multifactor authentication, or MFA. Almost everyone agrees that MFA – combining factors such as something you know, something you have, and something you are – provides the strongest level of authentication possible. It’s great for users, too. My iPhone is a great example. While I like many things about it, Touch ID is my favorite feature. I never have to remember my thumb print (it’s always with me), and no one can steal it (except James Bond). Touch ID makes secure access so easy.

Given the riskiness of passwords and the rise of MFA solutions, I have to ask why it’s still okay to rely on passwords for mainframe access. Here’s my guess: This question has never occurred to many mainframe system admins because there’s never been any other way to authenticate host access—especially for older mainframe applications.

 Are mainframe passwords secure?

When you think about passwords, it’s clear that the longer and more complex the password, the more secure it will be. But mainframe applications—especially those written decades ago, the ones that pretty much run your business—were hardcoded to use only weak eight-character, case-insensitive passwords.  Ask any IT security person if they think these passwords provide adequate protection for mission-critical applications and you will get a resounding “No way!”

As far as anyone knows, though, they’ve been the only option available. Until now. At Micro Focus, we are bridging the old and the new, helping our digitally empowered customers to innovate faster, with less risk. One of our latest solutions provides a safe, manageable, economical way for you to use multifactor authentication to authorize mainframe access for all your users—from employees to business partners.

Multifactor authentication to authorize mainframe access?

It’s a logical solution because it uses any of our modern terminal emulators—the tool used for accessing host applications—and a newer product called Host Access Management and Security Server (MSS). Working alongside your emulator, MSS makes it possible for you to say goodbye to mainframe passwords, or reinforce them with other authentication options. In fact, you can use up to 14 different types of authentication methods—from smart cards and mobile text-based verification codes to fingerprint and retina scans. You’re free to choose the best solution for your business.
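
One common form of verification code is the time-based one-time password (TOTP, RFC 6238) generated by mobile authenticator apps. Purely as a generic illustration of how such a second factor works – not as a description of how MSS is configured or implemented – here is a minimal Python sketch:

```python
# Generic sketch of a time-based one-time password (TOTP, RFC 6238): the kind
# of code a mobile authenticator app shows as a second factor.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret and the clock."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # time step number
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret
    print("current code:", totp(shared_secret))
```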

In addition to strengthening security, there’s another big benefit that can come with multifactor authentication for host systems: No more passwords means no more mainframe password-reset headaches!

Yes, it’s finally possible to give your mainframe applications the same level of protection your other applications enjoy. Using MFA for your mainframes brings them into the modern world of security. You’ll get rid of your password headaches and be better equipped to comply with industry and governmental regulations. All you need is a little “focus”—Micro Focus.

Trying to Transform (Part 2): the 420 million mph rate of change

Introduction

Organizations continually have to innovate to match the marketplace-driven rate of change. Readers of the Micro Focus blogsite know that I’m continually banging this drum. The issue seems relentless. Some even refer to tsunamis. But how fast is it?

An article from a recent edition of the UK Guardian newspaper attempted to gauge what the pace of change actually is, using the tried and tested motoring analogy. Here’s a quote.

“If a 1971 car had improved at the same rate as computer chips, then 2015 models would have had top speeds of about 420 million mph. Before the end of 2017 models that go twice as fast again will arrive in showrooms.” Still trying to keep up? Good luck with that.

Of course this is taking Moore’s law to a slightly dubious conclusion. However, the point holds: the clamour for change – the need for constant reinvention, innovation and improvement – is not letting up any time soon.

The slow need not apply

But how quickly an organisation can achieve the innovation needed to compete in the digitally-enabled marketplace may depend on its IT infrastructure. Clearly, innovation is easier for funky, smaller start-ups with no core systems or customer data to drag along with them. But the established enterprise needn’t be left in the slow lane. Indeed, look at some of the astonishing advances in mainframe performance and any nagging concern that it can’t support today’s business quickly dissipates.

Meanwhile, innovation through smart software can improve speed, efficiency, collaboration, and customer engagement. With the help of the right enabling technology, mainframe and other large organizations can match external digital disruption with their own brand of innovation. Innovation isn’t any one thing, so the solution must be as comprehensive as the challenge. So what’s the secret to getting the enterprise up to speed? The answer for many is digital transformation.

Digital what?

OK, Digital Transformation may be a neologism rather than accepted parlance, but the term is common enough that Gartner gets it and it has its own wiki definition:

“Digital transformation is the change associated with the application of digital technology in all aspects of human society”

Our customers have told us they are trying to transform, and while they have different ideas about what digital transformation means to them, Micro Focus is very clear about what it means to us.

Digital transformation is how we help keep our mainframe and enterprise customers competitive in a digital world. It can be tangible – a better mobile app, a better web interface onto a core system, getting into new markets quicker, ensuring a better overall customer experience – or simply doing things better to answer the challenges posed by the digital economy.

For us, the future is a place where, to keep up with change, organizations will need to change the way everything happens. For IT, that means Building smarter systems even faster, continuing to Operate them carefully and efficiently, and keeping the organization’s systems and data – especially the critical mainframe-based information – Secure. These are the things that matter to the CIO, not to mention the rest of the executive team.

This is the practical, business incarnation of innovation, but to us the solution is as smart as it is efficient: realizing new value from old. Squeezing extra organizational benefit through increased efficiency, agility and cost savings from the data and business logic you already own. The pace of change is accelerating, so why opt for a standing start? We suggest you use what is, quite literally, already running.

Talking Transformation

Your digital story is your own journey, but the conversation is hotting up. Hear more by joining us at an upcoming event. Taste the Micro Focus flavor of innovation at the upcoming SHARE event. Or join us at the forthcoming Micro Focus #Summit2017.

Digital transformation – buzzword or business opportunity?

Digital transformation is what analysts, tech vendors and IT professionals call the latest move towards business innovation.

As this blog explains, organizations are finding that their digital strategy is increasingly being driven by changing market events – digital disruption – and the desire to improve the customer experience (CX) by better understanding how they engage with their products and services.

There will be no turning back. As Accenture research discovered, more than 65% of consumers rely on digital channels for product and service selection, speed and information accuracy, with the same percentage judging companies primarily on the quality of their customer experience. And expectations have never been higher.

Keep the (digital) customer satisfied

Digital technologies such as web, mobile, cloud and the Internet of Things (IoT) have changed the way we live our personal lives and how we engage with businesses. Gartner predicts that this year more than half of the world’s population will become digital subscribers, fuelling expectations of easily consumed, tailor-made content that is available on demand and accessible from their device of choice.

It’s not something companies can get wrong. Organizations will probably be aware that more than 80% of the customers who switched to more digitally-savvy competitors could have been retained with a better digital experience. So who is best prepared for the new world of digital natural selection?

Smaller IT shops, born of new technology and running flexible processes, have a clear advantage. Better prepared to leverage consumer feedback in creating and delivering more focused products, faster, they can use disruption to challenge established incumbents. Across the sectors, from medical to manufacturing, entrenched enterprises are being left behind by the pace of change.

COBOL in the digital world?

In the new, shiny world of digitalization, systems of record written in COBOL are viewed as obstacles to progress, to faster delivery and new innovation. So what are the options for businesses with decades of IT investment and a new CIO directive to suddenly shift to a digital-first strategy?

When transforming any aspect of a business, whether it’s a single application or process, or the wider culture, business leaders must begin with a clear understanding of what ‘digital’ success looks like to that organization. CIOs, IT Managers, and dev teams have different definitions – some are strategic, others more tactical – and some are more concerned with the purely practical. However, the imperative to improve the customer experience is paramount. Intuitive, content-rich, adaptive to new technologies and easily accessed, the CX should modify business behavior towards a customer-first approach.

Digital – the new frontier for business

The customer experience underpins digital transformation and improves business productivity. New hires with access to a better, faster, more intuitive application experience need less training and have more time for customers. In this world, every business is a software organization that creates a USP – a clear differentiator – using unique, software-generated ‘experiences’ for the end user. In this definition, success is building enduring customer loyalty and using it to access new markets.

But what about longer-established enterprise shops running COBOL? Where can their products take them? There’s good news. Micro Focus remains committed to a strong technology roadmap, supported by millions of R&D dollars, focused on innovation and customer success. Creating and refreshing technology solutions that enable the enterprise to reinvent their IT assets for the new reality remains a core tenet and guiding principle. And here’s the proof.

Drum roll, please

Say hello to a new UI transformation solution for ACUCOBOL customers—AcuToWeb®. With the latest version of extend® (v10.1) and AcuToWeb, organizations can instantly transform their COBOL applications by enabling browser access across multiple platforms.

SHARE 2017: Do you know the way to San Jose?

Introduction

While we’re no strangers to SHARE, our customers are entering unfamiliar territories in many ways so it’s fitting we should all pitch up somewhere new for this year’s event. And if this song gets stuck in your head for days and days – then welcome to my world.

It’s the first SHARE event of 2017 and a great platform for meeting the mainframe community. It’s also a classic 1960s song, so I thought I’d reference it to look ahead to what SHAREgoers can expect this year.

Our best people are there with good news on digital transformation. Here’s what it all means. Just imagine Dionne Warwick singing it.

“I’m going back to find some peace of mind in San Jose”

Peace of mind. Important for every IT organization, business-critical for the enterprise mainframe world. Risk-averse, security conscious, presiding over their must-not-fail core systems. Oh – and they must also find the bandwidth and resources to support innovation. Peace of mind? Good luck with that.

A few things, there. First up, we’ll be demonstrating how we’ve added greater security provision to the mainframe and terminal emulation environments to ensure the critical data remains protected, secured.

Second, peace of mind is about knowing what the future has in store. And that’s digital transformation. Transformation is essential for remaining competitive in a digital world. The ‘new speed of business’ shifts up a gear every year. Enterprise software innovation on the mainframe can improve speed, efficiency, collaboration, and customer engagement. You just need to know how to do it.

For many of our customers, enterprise IT and the mainframe are almost synonymous. Connecting the two to create the forward-thinking innovation needed to compete in the digitally-enabled marketplace is why people are coming to SHARE.

SHARE is where you taste the Micro Focus flavor of innovation. New is good, but realizing extra value through increased efficiency, agility and cost savings from the data and business logic you already own is even better. If you’re looking to make some smart IT investments this year, then SHARE participation could realize a pretty good return.

I spoke to Ed Airey, Solutions Marketing Director here at Micro Focus, about finding this peace of mind. “As we hear often, keeping pace with change remains a challenge for most mainframe shops. In this digital age, expectations for the enterprise couldn’t be higher. Transforming the business to move faster, improve efficiency and security while modernizing core applications is key. Success requires a new strategy that delivers on that digital promise to delight the customer. Our Micro Focus solutions supporting the IBM Mainframe make that happen – helping customers innovate faster and with lower risk …and peace of mind.”

 “I’ve got lots of friends in San Jose”

This one is as simple as it is literal. Lots of our mainframe friends will be in San Jose, so share a space with seasoned enterprise IT professionals, hear their successes and lessons learned.

The full lineup includes more than 500 technical sessions. Check out these highlights:

It’s good to see the EXECUForum back for San Jose. This two-day, on-site event unites enterprise IT leaders and executives for strategic business discussions on technology topics. We address key business challenges and share corporate strategies around business direction with industry peers. Micro Focus will participate, having put the topic of ‘new workload’ on the agenda – the growth opportunities for z systems remain impressive, as we recently mentioned.  Check out the agenda of EXECUForum here.

 “You can really breathe in San Jose”

The final lyrical metaphor for me is about taking time to understand, to witness all that the technology has to offer. To really breathe in the possibilities. To think about what digital transformation might look like for your mainframe organization – and how Micro Focus might deliver that vision.

We all want to use resources wisely, so save time and money and decrease the chance of error by talking to the product experts at the user- and vendor-led sessions, workshops and hands-on labs. Our booth will be full of mainframe experts ready to talk enterprise IT security, DevOps, AppDev, modernization and more. Stop by the SHARE Technology Exchange Expo, take a breather, maybe even play a game of Plinko.

We’re ready when you are.

New Year – new snapshot: the Arcati Mainframe Yearbook 2017

Introduction

Trends come and go in the IT industry, and predictions often dominate the headlines at the turn of the year. Speculation and no small amount of idle guesswork starts to fill the pages of the IT press. What welcome news therefore when Arcati publishes its annual Mainframe Yearbook.  Aside from the usual vendor-sponsored material, the hidden gem is the Mainframe User Survey. Testing the water of the global mainframe market, the survey aims to capture a snapshot of what Arcati describes as “the System z user community’s existing hardware and software configuration, and … their plans and concerns for 2017”.

While the sample of 100 respondents is relatively modest, the findings of its survey conducted in November 2016 were well worth a read. Here are a few observations from my reading of the Report.

Big Business

The first data point that jumps off the page is the sort of organization that uses the mainframe. A couple of questions help us deduce an obvious conclusion – the mainframe still means big business. This hasn’t changed, with the study revealing that over 50% of respondents have mainframe estates of over 10,000 MIPS, and nearly half work in organizations of more than 5,000 employees (major sectors include banking, insurance, manufacturing, retail and government). Such organizations have committed to the mainframe: over a quarter have already invested in the new IBM z13 mainframe.

…And Growing

A few other pointers suggest the trend is upward, at least in terms of overall usage. Nearly half are seeing single-digit MIPS growth this year, while nearly a third are witnessing over 10% growth in MIPS usage. For a hardware platform so often cited as being in decline, that’s a significant amount of new workload. While the survey doesn’t make it clear what form that increase takes, I’ve published my view on that before. Whatever the reason, it seemed unsurprising that the number of respondents who regard the mainframe as a “legacy platform” has actually fallen by 12 percentage points since the previous survey.

Linux is in the (Main) Frame

The survey asked a few questions about Linux in the mainframe arena, and the responses were positive. Linux on z is in play at a third of all those surveyed, with another 13% aiming to adopt it soon. Meanwhile, IBM’s new dedicated Linux box, LinuxONE, is installed, or planned to be, at a quarter of those surveyed.

Destination DevOps

With a mere 5% of respondents confirming their use of DevOps, the survey suggests at first glance a lack of uptake of the approach. However, with 48% planning to use it soon, a majority of respondents are on a DevOps trajectory. This is consistent with Gartner’s 2015 prediction that 45% of enterprises were planning to adopt DevOps (see my blog here). Whatever the numbers turn out to be, DevOps looks set to become an integral part of the enterprise IT landscape.

Cost of Support

Considering the line of questioning around the cost of support across various platforms, it seems only worth mentioning that the author noted that “Support costs of Linux and Windows were growing faster than the mainframe’s”. The questions around “Support”, however, did not extend to available skills, or indeed to training programs or other investments to ensure support could continue.

Future considerations?

It is hard to make any material observations about the mainframe in the broader enterprise IT context because there was no questioning around multi-platform applications or workload balancing, where a hybrid platform model, with a mainframe at its core, serves a variety of business needs, applications and workload types. So often, the mainframe is the mother-ship, but by no means the only enterprise platform. For the next iteration of the survey, we would welcome further lines of questioning around workload, skills, security and cloud as sensible additions.

Conclusion

There are a small number of important independent perspectives on the mainframe community, about which we report from time to time, and Arcati is one such voice. The survey reflects an important set of data about the continued reliance upon and usage of the mainframe environment. Get your copy here.

Another such community voice is, of course, the annual SHARE event. This year it takes place in San Jose, California. Micro Focus will be there, as part of the mainframe community. See you there.

Security 1st: Which IT Trends Will Shape 2017?

Christoph Stoica, Regional General Manager at Micro Focus, reveals which IT trends will shape the coming year

As part of the #DiscoverMF Tour 2017, a joint roadshow by Micro Focus and Open Horizons, the leading community of interest for Micro Focus and SUSE technologies, Macro Mikulits, a member of the Open Horizons core team, had the opportunity to talk to Christoph Stoica, Regional General Manager at Micro Focus, about the IT trends for 2017. In Christoph Stoica’s view, IT security should once again play a central role in how companies evaluate their IT strategy in the new year.

Looking back, 2016 was marked by many, at times spectacular, cyber attacks. In your opinion, which cyber threats currently pose the greatest danger to corporate networks?

Christoph Stoica:
The overall IT security situation is very tense in the face of ever larger, record-breaking thefts of customer data. Unauthorized access resulting from identity theft remains, alongside the spread of malware, the most common cause of security incidents in companies. Many attacks focus first on stealing passwords and credentials from the private sphere – social networks, email accounts, shopping portals – in order to obtain, in a second step, the login details for the corporate network. In their attacks, professional cyber criminals try above all to move vertically through the layers in order to extend their privileges, exploit a vulnerability, or gain access to data and applications. Digital extortion through targeted ransomware attacks is also increasingly becoming a threat to companies and society, as the cyber attacks on several German hospitals at the beginning of 2016 show. Ransomware infections cause immediate damage to the affected companies, which further increases the business risk. The digitalization and networking of so-called intelligent things (IoT) plays further into the hands of the ransomware model.


Let’s venture an outlook on what awaits us this year. Which cyber crime trends should we expect in 2017?

Christoph Stoica:
Identity theft and ransomware will remain serious threats in 2017. With ransomware in particular, the monetary gains for cyber criminals are very high, and because the ransom demand is not made in conventional currencies but in the blockchain-based cryptocurrency Bitcoin, the risk of discovery for the perpetrators is very low.
The Internet of Things will expand massively in 2017 – driven in particular by IoT solutions in the consumer business, but also by industrial scenarios such as building automation. As a result, the real threat will continue to grow considerably. As soon as intelligent machines and devices are connected to networks, and direct machine-to-machine communication finds ever more use in areas such as payment systems, fleet management, building technology, or the Internet of Things in general, the question of cyber security must also be asked. At first glance you might think that such “smart things” pose no serious threat and that malware only has a local effect. But once malware has infected a “smart thing” and thereby cleared the first hurdle of perimeter security measures, it can identify and infect further networked systems from there. Even if it becomes possible in future to provide automatic installation of updates for IoT devices with limited processor and storage capacity, in order to close acute security gaps, this can at the same time become a trap: to install such updates the device has to access the internet, and an attacker could pose as the update server and install a Trojan by that route.


Traditionally, the turn of the year is a good time to make resolutions for the new year. From the perspective of a security software vendor, which three security tasks would you recommend customers put on their list of good resolutions in order to ward off threats as effectively as possible?

Christoph Stoica: New Year’s resolutions are always a tricky business… they usually fall victim to old habits fairly quickly, which brings us straight to the topic of password security.

“The password should become dynamic.”

Although people are well aware of the danger, passwords that are far too simple are still used in many places, the same password is used for several accounts, passwords are not changed regularly, account details are written down, and so on and so on. Passwords and credentials remain the weakest link in the chain. According to a recent Deloitte study, three quarters of all cyber attacks on companies can be traced back to stolen or weak passwords. Strong authentication methods, by contrast, can stop unwanted access right at the front door and offer effective protection against identity theft. Risk-based access controls take a whole range of factors into account in order to achieve the appropriate level of security for each system and each access – the heart of the business gets the greatest possible protection, while access to less critical components is not hampered by disproportionate security measures.

“Put an end to the sprawl in directory services and entitlement management.”

Many companies manage their employees’ access rights rather poorly; when it comes to rights management there is often a great deal of confusion, and the consequences are improper permissions or orphaned accounts. At a time when compromised privileged user accounts are the gateway for data sabotage and data theft, companies must reduce these attack surfaces, detect attacks in good time, and initiate countermeasures immediately. This requires intelligent analysis tools on the basis of which the right decisions can be made. In their preventive measures, companies should therefore look at simplifying and automating so-called access certification processes in order to establish identity governance initiatives within the business.

“Shorten your response time to security incidents.”

Crucial to a better and more effective defence against threats and data misuse is shortening response times after security incidents. Yet determining which activities represent real or potential threats and need closer investigation is extremely difficult. To detect threats quickly, before they cause damage, you need real-time information on, and analysis of, the security events occurring right now. SIEM solutions enable comprehensive evaluation of security information and, through correlation, can also trigger countermeasures automatically. In many cases, however, much simpler change monitoring solutions already deliver a noticeable improvement in response times to security incidents.
About Open Horizons:
Open Horizons is the largest and leading community of interest
for Micro Focus & SUSE software technologies.

As a link between vendor, user and customer, our goal is to continuously improve collaboration. The greater the benefit companies and their employees derive from the Micro Focus and SUSE software solutions they use, the better they can meet today’s IT challenges. That is why the exchange of experience and knowledge forms the cornerstone of the Open Horizons community philosophy. Following this core idea, the Open Horizons community was founded in 2004. Its projects include publishing the Open Horizons member magazine, running user and admin trainings, operating the GW@home mail server, and organizing a variety of high-quality events such as the Open Horizons Summit and roadshows like the YES Tour 2015 and the #DiscoverMF Tour 2016/2017.
www.open-horizons.de

Appreciating IT’s Thankless Tasks

Introduction

We have an IT team at our company. Most mid- to large-sized organizations do; even small companies usually have “an IT guy”. I know who they are here. I got my new, larger SSD (solid state drive) from them; they gave me my eagerly-awaited smartphone upgrade; they help me with troubleshooting issues; they advised when the core systems needed to go offline for a vital update. These are smart, busy folks.

They’re so busy, in fact, that they have to prioritize carefully. They aren’t always immediately on hand for everything. After all, an outage on an always-on server that runs core systems is more business-critical than a call about a cell-phone battery that doesn’t last as long as it used to. But the accepted wisdom is that while IT believes it is delivering value, customer satisfaction ratings are less positive. In fact, studies show an alarming rise in IT backlog, which will do nothing to help its reputation.

Such negativity, however, cannot be the whole story. After all, consider the value IT helps organizations deliver. Without a functioning IT infrastructure, most organizations would just grind to a halt.

So the question then becomes, for the IT administrators, what’s taking the time and effort, and what can be done to fix it? Let’s look at three key areas:

Keeping customers satisfied

Challenge – servers, printers, mail, files, desktops, mobile devices… multiplied by the number of employees. Just the basic day-to-day of keeping users’ systems active, current, connected and collaborating is anything but trivial. Employees need to be able to securely share files to collaborate effectively, and when they travel to a different office, they need to connect to the network, access their files on corporate servers, and print.

Micro Focus View – Simply receiving all those requests can be overwhelming, let alone actually resolving them. If the organization doesn’t already have one, an IT Service Management tool can help relieve the pressure, and provide the end-user communication and transparency that can increase satisfaction ratings. With that in place, you can look at solutions to the challenges below.

When machines retire

Challenge – remember the last time you updated your home PC? Backing things up, uninstalling, reinstalling, setting up, rummaging for long-forgotten passwords and serial numbers, buying new software, and deciphering cryptic error messages. It was less straight-forward than you’d hoped, and took days to complete. Now imagine that for every PC in your organization. Who is doing that? Correct – the IT administrators.

Micro Focus View – Of course end users look forward to new laptops and smartphones, but rolling them out to dozens, hundreds, or thousands of employees doesn’t have to be an IT administration nightmare. Organizations that have mastered this process take advantage of specialized tools and solutions that automate much of the work.

And when the subject is corporate servers, not endpoint devices, it’s even more critical to get it done faster, and to get it right.

When machines expire early

Challenge – a scheduled upgrade of servers or desktops is at least a planned event. What’s harder to manage and control are unplanned outages; they don’t happen on anyone’s schedule, and somehow seem to happen at the most inopportune times. Sure, there may be a disaster recovery plan for the data, but the whole environment? Even if the data is protected, sourcing and standing up new servers to restore a functional environment can take days or weeks.

Micro Focus View – Whether the environment comprises physical servers, virtual machines, or a mix of both, getting back to business as quickly as possible is a major priority. Troubleshooting what went wrong can prolong the outage, and is best left until after services have been restored. Options for whole workload disaster recovery – in which the entire server workload, including the operating system, applications, and middleware, is protected and can be quickly recovered – include all-in-one physical hardware appliances and disaster recovery software solutions, each of which can recover servers and get users back to productivity in a matter of minutes.

Conclusion

The administrative burden involved in IT operations is a significant ongoing commitment, and not always a well-understood one. Automating and improving the efficiency of these vital activities can free up IT time and investment for more visible projects.

Micro Focus’ broad range of IT Operations technology is designed specifically to ease the administrative burden, be it operations management, workload migration or disaster recovery. Our studies reveal a huge saving in the time taken for each task. For more information, visit Micro Focus.com.

You’ve Solved Password Resets for Your Network. Now What About Your Mainframe?

Humans. For the person managing network access, we are nothing but a pain. That’s because network access involves passwords, and passwords are hard for humans. We hide them, lose them, forget them, share them, and fail to update them.

The struggle is real, and understandable. We are buried in passwords. They’re needed for every aspect of our lives. To keep track of them, most of us write them down and use the “increment” strategy to avoid recreating and trying to memorize a different password at every turn. But the struggle continues.

Yes, passwords are hard for humans. And that makes them an incredibly weak security solution.

If you’ve been in IT for any length of time, you get it. For years, password resets were a constant interruption and source of irritation for IT. Fortunately, that changed when password-reset tools came along. Now used by most enterprises, these tools help IT shops get out of the password-reset business and onto more strategic tasks.

What About Mainframe Passwords?

Mainframe-password resets are even more costly and time consuming than network-password resets. That’s because mainframe passwords have to be reset in RACF, on the mainframe, which means someone who has mainframe access and knows how to execute this type of command has to do it—typically a mainframe systems programmer/admin. Plus, mainframe users often need access to multiple hosts and applications. And each application requires a separate username and password.

There are no automated password-reset tools for the mainframe—your wealthiest data bank of all. But what if there were a completely different way to solve this problem? What if you could get rid of mainframe passwords altogether and strengthen security for mainframe access in the process?

In fact, there is a way that you can do just that. Two Micro Focus products—Host Access Management and Security Server (MSS) and an MSS add-on product called Automated Sign-On for Mainframe (ASM)—make it possible.

How Do MSS and ASM Work?

MSS puts a security control point between mainframe users and your host systems. It uses your existing Identity and Access Management structure—specifically, strong authentication—to authorize access to the mainframe. The MSS-ASM combo enables automatic sign-on all the way to the mainframe application—eliminating the need for users to enter any IDs or passwords.

Here’s what’s happening behind the scenes: When a user launches a mainframe session through a Micro Focus terminal emulator’s logon macro, the emulator requests the user’s mainframe credentials from MSS and ASM. ASM employs the user’s enterprise identity to get the mainframe user ID.

Then, working with the IBM z/OS Digital Certificate Access Server (DCAS) component, ASM obtains a time-limited, single-use RACF PassTicket for the target application. In case you didn’t know, PassTickets are dynamically generated by RACF each time users attempt to sign on to mainframe applications. Unlike static passwords, PassTickets offer replay protection because they can be used only once. PassTickets also expire after a defined period of time (10 minutes by default), even if they have never been used. These features all translate into secure access.

ASM returns the PassTicket and mainframe user ID to the terminal emulator’s logon macro, which sends the credentials to the mainframe to sign the user on to the application.
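
To make the sequence easier to follow, here is an illustrative Python sketch of the flow just described. Every function in it is a hypothetical stand-in; in practice these steps are carried out by the terminal emulator’s logon macro, MSS/ASM and DCAS, not by code you write yourself.

```python
# Illustrative sketch of the automated sign-on sequence described above.
# All functions are hypothetical stand-ins for work done by the emulator,
# MSS/ASM and DCAS.

def lookup_mainframe_user_id(enterprise_identity: str) -> str:
    # Stand-in for ASM resolving the enterprise identity to a mainframe user ID.
    return "MFUSER01"

def request_passticket(user_id: str, application: str) -> str:
    # Stand-in for ASM asking DCAS for a single-use, time-limited RACF PassTicket.
    return "A1B2C3D4"

def sign_on_to_host(application: str, user_id: str, passticket: str) -> None:
    # Stand-in for the emulator's logon macro sending the credentials to the host.
    print(f"signed {user_id} on to {application} with a one-time PassTicket")

def automated_sign_on(enterprise_identity: str, application: str) -> None:
    user_id = lookup_mainframe_user_id(enterprise_identity)   # step 1: identity mapping
    ticket = request_passticket(user_id, application)         # step 2: PassTicket via DCAS
    sign_on_to_host(application, user_id, ticket)             # step 3: sign-on, no password

if __name__ == "__main__":
    automated_sign_on("jane.doe@example.com", "CICSPROD")
```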

No interaction is needed from the user other than starting the session in the usual way. Imagine that. They don’t have to deal with passwords, and neither do you.

No More Mainframe Passwords

Humans. We are a messy, forgetful, chaotic bunch. But fortunately, we humans know that. That’s why we humans at Micro Focus build solutions to help keep systems secure and humans moving forward. Learn more about Host Access Management and Security Server and its Automated Sign-On Add-On.

Rapid, Reliable: How System z can be the best of both

Background – BiModal Woes

I’ve spent a good deal of time speaking with IT leaders in mainframe shops around the world. A theme I keep hearing again and again is “We need to speed up our release cycles”.

It often emerges that one of the obstacles to accelerating the release process is the difference in release tools and practices between the mainframe and distributed application development teams. Over time many mainframe shops converged on a linear, hierarchical release and deployment model (often referred to as the Waterfall model). Software modifications are performed in a shared development environment and promoted (copied) through progressively restrictive test environments before being moved into production (deployment). Products such as Micro Focus Serena ChangeMan ZMF and CA Endevor® automate part of this approach. While seemingly cumbersome in today’s environment, this approach evolved because it has been shown, over the decades, to provide the degree of security and reliability for sensitive data and business rules that the business demands.

But, the software development landscape continues to evolve. As an example, a large Financial Services customer came to us recently and told us of the difficulty they are starting to have with coordinating releases of their mainframe and distributed portfolios using a leading mainframe solution: CA Endevor®. They told us: “it’s a top down hierarchical model with code merging at the end – our inefficient tooling and processes do not allow us to support the volume of parallel development we need”.

What is happening is that in distributed shops, newer, less expensive technologies have emerged that can support parallel development and other newer, agile practices. These new capabilities enable organizations to build more flexible business solutions, and new means of engaging with customers, vendors and other third parties. These solutions have grown up mostly outside of the mainframe environment, but they place new demands for speed, flexibility, and access to the mainframe assets that continue to run the business.

Proven Assets, New Business Opportunities

The increasing speed and volume of these changes to the application portfolio mean that the practice of 3, 6 or 12 month release cycles is giving way to demands for daily or hourly releases. It is not uncommon for work to take place on multiple updates to an application simultaneously. This is a cultural change that is taking place across the industry. “DevOps” applies to practices that enable an organization to use agile development and continuous release techniques, where development and operations operate in near synchrony.

This is where a bottleneck has started to appear for some mainframe shops. The traditional serial, hierarchical release processes and tools don’t easily accommodate newer practices like parallel development and continuous test and release.

As we know, most organizations with mainframes also use them to safeguard source code and build scripts along with the binaries. This is considered good practice, and is usually followed for compliance, regulatory or due diligence reasons. So the mainframe acts as not only the production environment, but also as the formal source code repository for the assets in production.

The distributed landscape has long had solutions that support agile development, so as the demand to incorporate Agile practices grows, the logical next step would be to adopt these solutions for the mainframe portfolio. IBM Rational Team Concert and Compuware’s ISPW take this approach. The problem is that adopting these solutions means mainframe developers must take on practices they are relatively unfamiliar with, incur the expense of migrating from existing tried and trusted mainframe SCM processes to unknown and untested solutions, and disrupt familiar and effective practices.

Why Not Have it Both Ways?

So, the question is, how can mainframe shops add modern practices to their mainframe application delivery workflow, without sacrificing the substantial investment and familiarity of the established mainframe environment?

Micro Focus has the answer. As part of the broader Micro Focus Enterprise solution, we’ve recently introduced the Enterprise Sync product. Enterprise Sync allows developers to seamlessly extend the newer practices of distributed tools – parallel development, automatic merges, visual version trees, and so forth – to the mainframe, while preserving the established means for release and promotion.

Enterprise Sync establishes an automatic and continuous two-way synchronization between your mainframe CA Endevor® libraries and your distributed SCM repositories. Changes made in one environment instantly appear in the other, and in the right place in the workflow. This synchronization approach allows the organization to adopt stream-based parallel development and preserve the existing CA Endevor® model that has worked well over the decades, in the same way that the rest of the Micro Focus’ development and mainframe solutions help organizations preserve and extend the value of their mainframe assets.

With Enterprise Sync, multiple developers work simultaneously on the same file, whether stored in a controlled mainframe environment or in the distributed repository. Regardless, Enterprise Sync automates the work of merging, reconciling and annotating any conflicting changes it detects.
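
As a deliberately simplified illustration of the reconciliation decision at the heart of any two-way synchronization – and emphatically not Enterprise Sync’s actual implementation – the Python sketch below compares both copies of an asset against the last synchronized version: if only one side changed, that change can be propagated; if both changed, a merge is required.

```python
# Simplified, generic sketch of two-way synchronization using content hashes.
# Not the product's implementation; just the underlying decision it automates.

import hashlib
from typing import Optional

def digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def reconcile(base: str, mainframe: str, distributed: str) -> Optional[str]:
    """Compare both copies of an asset against the last synchronized version.
    Returns the content to propagate, or None when both sides changed and a
    merge (automatic or manual) is required."""
    if digest(mainframe) == digest(distributed):
        return mainframe                 # already identical: nothing to do
    if digest(mainframe) == digest(base):
        return distributed               # only the distributed side changed
    if digest(distributed) == digest(base):
        return mainframe                 # only the mainframe side changed
    return None                          # changed on both sides: merge needed

if __name__ == "__main__":
    base = "MOVE A TO B.\n"
    print(reconcile(base, base, "MOVE A TO C.\n"))                 # propagate distributed change
    print(reconcile(base, "MOVE X TO B.\n", "MOVE A TO C.\n"))     # None: merge required
```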

This screenshot from a live production environment shows a typical mainframe production hierarchy represented as streams in the distributed SCM. Work took place in parallel on two separate versions of the same asset. The versions were automatically reconciled, merged and promoted to the TEST environment by Enterprise Sync. This hierarchical representation of the existing environment structure should look and feel familiar to mainframe developers, which should make Enterprise Sync relatively simple to adopt.

It is the automatic, real time synchronization between the mainframe and distributed environments without significant modification to either that makes Enterprise Sync a uniquely effective solution to the increasing problem of coordinating releases of mainframe and distributed assets.

By making Enterprise Sync part of a DevOps solution, customers can get the best of both worlds: layering on modern practices to the proven, reliable mainframe SCM solution, and implementing an environment that supports parallel synchronized deployment, with no disruption to the mainframe workflow. Learn more here or download our datasheet.