DevOps – a faster voyage of discovery

Tackling IT change is made harder by the complexity of the application landscape. Yet problems getting up to speed with enterprise IT systems might be a thing of the past, as David Lawrence explains in his first Micro Focus blog

Accelerating delivery starts with automating understanding

Anyone been asked to do less this year? Thought not.

Anyone been able to simplify their IT systems recently? Figured as much…

As IT teams continue their turnover, and the rate of change required to keep decades-old portfolios productive increases, the ability to mobilize and plan for change is coming into sharp relief.

Yet, as the article from CIO magazine describes, the impending shortage of COBOL programmers will complicate efforts to keep these assets productive. Moreover, the increasing IT backlog (referred to by others as “IT Debt,” for example in this 2010 Gartner report) illustrates the urgency of improving the productivity of new developers as quickly as possible. A team that has been in place for decades, and has probably created a significant proportion of the portfolio they are now maintaining, will have an easier time keeping up with the backlog than will a team of individuals who are unfamiliar with the code.

Application discovery is a necessary part of the work of a developer, or programmer, who is new to a project or to a part of the application portfolio they are unfamiliar with. Traditionally, it is a trial and error process consisting of searching through tens or hundreds of source files, deciphering cryptic comments and locating references to significant data elements. And the language of these core systems? More often than not, COBOL.


A DevOps Approach?

The benefits of replacing error-prone manual tasks with automated tools are well understood and form the bedrock of the rationale for the DevOps initiative.

Understanding of an application is crucial not just to get the new programmer up to speed. It’s also necessary for performing due diligence and following good practice. Compliance and oversight rules in organizations I speak with mandate that the impact of a proposed change to an application in production must be thoroughly understood, and usually documented in the form of an impact analysis, before the change can be deployed to the production environment.

DevOps is about automating as much of the application lifecycle as is feasible, to shorten time to production and reduce errors and resulting delays. This includes the early stages of discovery, analysis, requirements gathering, and so on.

The traditional means of discovery and analysis of mainframe applications is a manual and usually unbounded task, difficult to schedule and plan.

Automating the Discovery process

If we take the DevOps perspective and ask what could be done about application discovery – usually a laborious, manual effort – it follows that this activity is ripe for automation. What if, instead of chasing through one file after another, the programmer had at their disposal a means to quickly and accurately visualize the structure and flow of the application? Such a solution would not only reduce the effort of discovery; it could also automate another crucial task: complete and accurate impact analysis. Application updates have been known to fail in production because the impact of the update was not adequately understood.
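
As a rough illustration of what such automation does under the hood, here is a deliberately simplified, hypothetical sketch (not how Micro Focus Enterprise Analyzer works internally). It scans a folder of COBOL sources for CALL statements, builds a call graph, and answers a basic impact question: which programs are affected if one module changes?

```python
# Minimal sketch: build a COBOL call graph and answer a basic impact question.
# Hypothetical example only -- real analysis tools parse far more than CALL statements.
import re
from pathlib import Path
from collections import defaultdict

CALL_PATTERN = re.compile(r"\bCALL\s+['\"]([A-Z0-9-]+)['\"]", re.IGNORECASE)

def build_call_graph(source_dir):
    """Map each program (file stem) to the set of programs it calls."""
    graph = defaultdict(set)
    for path in Path(source_dir).glob("*.cbl"):
        caller = path.stem.upper()
        for callee in CALL_PATTERN.findall(path.read_text(errors="ignore")):
            graph[caller].add(callee.upper())
    return graph

def impacted_by(graph, changed_program):
    """Return every program that directly or indirectly calls the changed one."""
    reverse = defaultdict(set)
    for caller, callees in graph.items():
        for callee in callees:
            reverse[callee].add(caller)
    impacted, stack = set(), [changed_program.upper()]
    while stack:
        current = stack.pop()
        for caller in reverse.get(current, ()):
            if caller not in impacted:
                impacted.add(caller)
                stack.append(caller)
    return impacted

if __name__ == "__main__":
    graph = build_call_graph("cobol-src")      # hypothetical source folder
    print(impacted_by(graph, "ACCTUPDT"))      # hypothetical program name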

Application Discovery Benefits

Solutions from Micro Focus and other vendors help automate discovery by automatically creating a visual representation of the application. By revealing artifacts like control flow and data references in an IDE instead of through the ISPF editor, the new programmer's task of familiarizing themselves with a new application is simplified. At the same time, the capability to automatically create impact analysis reports helps move your organization further along the path to DevOps.

Better yet, the same analysis information can be provided not only at the stage of initial examination (potentially scoping out a task for others), but also at the point of change, when the developer needs to know what to change, where and why, and what impacts this will have.

Figure 1: Automated analysis at the point of change

Conclusion – Automating the Journey

Demographic trends in the IT world are exacerbating the IT backlog issue. The people who know these systems may have moved on, or maintenance may have been sub-contracted to a team with no familiarity with the system. The increasing velocity of business and new models of customer interaction add further to the workload of COBOL programmers. A solution that speeds up development activities and reduces risk by eliminating or reducing manual steps makes a lot of sense. Moving the organization closer to its own DevOps objectives means automating as much as possible – starting with understanding the systems being changed – and technology such as Micro Focus Enterprise Analyzer should be seriously considered for that task.

David Lawrence

Global Sales Enablement Specialist


Feelin’ Locky, Punk?

Ransomware, malware that locks away your data until you pay to get it back, is spreading like a rash and affects both businesses and consumers. Simon Puleo lifts the lid on Locky and delivers some timely advice.

What is Ransomware?

Ransomware, malware that locks away your data until you pay to get it back, is spreading like a rash and affects both businesses and consumers. The FBI has identified that ransomware is on the rise, affecting not only PCs but smartphones as well. The Verizon 2016 Data Breach Investigations Report listed ransomware in its number two spot for crimeware, where it showed the biggest jump in reported attacks. While first seen as a semi-sophisticated crime against hospitals and financial institutions, this nefarious crime is quickly becoming commonplace in all types of organizations and against targeted individuals.

“Why is Ransomware new?” an IT Manager asked me, “isn’t this just a throwback to other malware schemes?” 

I replied with one word, “Bitcoin.”

While ransomware has been around for a while, it's becoming more prevalent because Bitcoin (BTC) has made it easier than ever for the bad guys to get paid. Because it is largely anonymous, Bitcoin enables the criminal to receive his or her payment without the scrutiny of law enforcement, the government, or Visa and MasterCard for that matter. In short, criminals use BTC as another tool to receive funds without being easily tracked.

Bitcoin started showing financial traction back in 2013 and not coincidentally, that is around the same time that the CryptoLocker virus started asking for payment in that manner.

As of May 3, 2016, one of the latest ransomware creations is called Locky, which Microsoft regards as a “severe threat.”


Locky

While “Locky” sounds like a cute character from one of the shows my kids watch on TV, it will cause more pain and headache than a teething toddler. After it is downloaded, Locky encrypts all of your files, including photos, documents and videos, using strong AES encryption (the same type of encryption the FBI uses). When it's done, it pops up a screen demanding payment in Bitcoin in return for the key to decrypt and retrieve your data.

Obviously no one is going to download and run ransomware on purpose, are they? You might be surprised. The criminals behind Locky get you to download and run it yourself by sending it to you in a phishing email. The targeted user is part of a social engineering scheme: the criminals are on the hunt for users in accounting, who often receive invoices from vendors. The attack unfolds like this:

  1. The user receives an email with an invoice attached, asking for review.
  2. The invoice email is socially engineered: it is addressed to the company and asks for payment.
  3. The curious user opens the attached Word document and is prompted to “Run Macros”.
  4. A set of malicious executables is installed on the target machine and begins to encrypt data.

And Locky won't just attack your local files; it will attempt to encrypt attached network and storage devices too. When it's done, it displays a ransom demand screen.


Protecting your organization

Antivirus tools provide a measure of protection by identifying harmful macros; however, motivated attackers will find a way past antivirus. But you can take destiny into your own hands by following these best practices:

Awareness is the place to start. Educate your users about ransomware and cyber security best practices. Start a campaign to ensure that all users are aware of the dangers of phishing and how to avoid becoming a victim. Several companies make software that sends simulated phishing messages, tracks responses, and trains users not to be fooled by them.

Take control and monitor user access and change with Micro Focus Sentinel and Change Guardian.  These solutions help security teams quickly identify threats before they cause damage with real-time integrity monitoring and analysis of security events as they occur. Rapidly spot file changes and new extensions like .locky that are out of the ordinary and take action with intelligence.
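
As a trivial illustration of one signal such monitoring looks for (a hypothetical sketch, not how Sentinel or Change Guardian operate), the script below polls a directory tree and flags newly created files with a suspicious extension such as .locky.

```python
# Minimal sketch: poll a directory tree and flag files with suspicious extensions.
# Hypothetical illustration only; real monitoring products hook file-system events,
# correlate them with other telemetry, and alert in real time.
import time
from pathlib import Path

SUSPICIOUS_EXTENSIONS = {".locky", ".encrypted", ".crypt"}  # assumed indicator list

def scan(root):
    """Return the set of files under root with a suspicious extension."""
    return {p for p in Path(root).rglob("*") if p.suffix.lower() in SUSPICIOUS_EXTENSIONS}

def watch(root, interval_seconds=30):
    """Print an alert whenever a new suspicious file appears."""
    seen = scan(root)
    while True:
        time.sleep(interval_seconds)
        current = scan(root)
        for new_file in current - seen:
            print(f"ALERT: possible ransomware artifact: {new_file}")
        seen = current

if __name__ == "__main__":
    watch("/srv/shared")   # hypothetical network share mount point
```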

Enforce least privilege for your most sensitive data and systems.  Micro Focus Privileged Identity Management solutions ensure the right users have access to the right systems at the right times.  Trojans and malware like Locky typically need elevated rights to execute; you can stop them by simply not letting them start.  Protect the integrity of your critical systems by limiting use and monitoring who has access to what files during designated time periods.

Finally ensure you have a disaster recovery plan in place which includes keeping offline copies of critical data in both physical and virtual environments.   Micro Focus PlateSpin technology offers solutions that can quickly restore workloads to their original location or to a new location while the original is being repaired.

Put the right measures in place to create awareness, monitor user activity, enforce least privilege and create a disaster recovery plan.  Be vigilant and use great technology to protect you and your organization.

Start over, or with what you know?

Derek Britton’s last blog looked at the appetite for change in IT. This time, he looks at real-world tactics for implementing large-scale change, and assesses the risks involved.

Introduction

In my recent blog I drew upon overwhelming market evidence to conclude that today’s IT leadership faces unprecedented demand for change in an age of bewildering complexity. That “change”, however, can arrive in many shapes and forms, and the choice of strategy may differ according to a whole range of criteria – technical investments to date, available skills, organizational strategy, customer preference, marketing strategy, cost of implementation, and many more besides. This blog explores and contrasts a couple of the options IT leaders have.

Starting Over?

Ever felt like just starting over? The difficulty of changing complex back-end IT systems, when staffing is so tight, where the pressure to change is so high, with an ever-growing backlog – there is a point at which the temptation to swap out the hulking, seething old system for something new, functional and modern will arrive.

Sizing Up the Task

We're sometimes asked by senior managers in enterprise development shops how they should weigh rewriting or replacing a system against keeping it going and modernizing it. They sense there is danger in replacing the current system, but can't quantify that risk for other stakeholders.

Of course, it is impossible to give a simple answer for every case, but there are some very common pitfalls in embarking on a major system overhaul. These can include:

  • High Risk and High Cost involved
  • Lost business opportunity while embarking on this project
  • Little ‘new’ value in what is fundamentally a replacement activity

This sounds like a rather unpleasant list. Not only is it unpleasant, but the ramifications in the industry are all too stark. These are just a few randomly selected examples of high-profile “project failures” where major organizations have attempted an IT overhaul project.

  • State of Washington pulled the plug on their $40M LAMP project, which ended up six times more expensive than the original system
  • HCA ended their MARS project, taking a $110M-$130M charge as a result
  • State of California abandoned a $2 billion court management system (a five-year, $27 million plan to develop a system for keeping track of the state’s 31 million drivers’ licenses and 38 million vehicle registrations)
  • The U.S. Navy spent $1 Billion on a failed ERP project

Exceptional Stuff?

OK, so there have been some high-profile mistakes. But might they be merely the exception rather than the rule? Another source of truth is those who spend their time following and reporting on the IT industry. Two such organizations, Gartner and Standish, have reported more than once on the frequency of failed overhaul projects. A variety of studies over the years keeps coming back to the risks involved. Failure rates of up to 70% are cited in analyst studies of core-system rewrites.

Building a case for a rewrite

Either way, many IT leaders will want specific projections for their own business, not abstract or vague examples from elsewhere.

Let's take as an example a rewrite project[1] – one where a new system is built from scratch, by hand (as opposed to automatically generated), in another language such as Java. Let's allow some improvement in productivity because we're using a new, modern tool to build the new system (by the way, COBOL works in this modern environment too, but let's just ignore that for now).

Let’s calculate the cost – conceptually

Rewrite Cost = (application size) × (80% efficiency factor from modern frameworks) × (developer cost per day) ÷ (lines of code written per day)

The constants being used in this case were as follows –

  • The size of the application, a very modest system, was roughly 2 Million lines of code, written in COBOL
  • The per-day developer cost was $410/day
  • The assumed throughput of building new applications was estimated at 100 lines of code per day, which is a very generous daily rate.

Calculated, that comes to about 16,000 days of effort, or roughly $6.5M.
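
For transparency, here is that arithmetic spelled out as a short script, using only the illustrative constants listed above:

```python
# Minimal sketch of the conceptual rewrite-cost estimate described above.
lines_of_code = 2_000_000      # modest COBOL application
efficiency_factor = 0.8        # assumed gain from modern frameworks
cost_per_day = 410             # developer cost in USD/day
lines_per_day = 100            # generous rewrite throughput

effort_days = lines_of_code * efficiency_factor / lines_per_day
rewrite_cost = effort_days * cost_per_day

print(f"Effort: {effort_days:,.0f} days")     # Effort: 16,000 days
print(f"Cost:   ${rewrite_cost:,.0f}")        # Cost:   $6,560,000
```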

Considerations worth stating:

  • This is purely to build the new application. Not to test it in any way. You would need, of course, rigorous QA and end-user acceptance testing.
  • This is purely to pay for this rewrite. In 10 years when this system gets outmoded, or the appetite for another technology is high, or if there are concerns over IT skills, do you earmark similar budget?
  • This assumes a lot about whether the new application could replicate the unique business rules captured in the COBOL code – rules that are unlikely to be well understood or documented today.

A well-trodden path to modernization

Another client, one of the world’s largest retailers, looked at a variety of options for change, among them modernizing, and rewriting. They concluded the rewrite would be at least 4 times more expensive to build, and would take 7 or 8 times longer to deliver, than modernizing what they had. They opted to modernize.


Elsewhere, other clients have drawn the same conclusions.

“Because of the flexibility and choice within [Micro Focus] COBOL, we were able to realize an eight month ROI on this project – which allowed us to go to market much faster than planned.”

— Mauro Cancellieri, Manager, Ramao Calcados

“Some of our competitors have written their applications in Java, and they've proven not to be as stable, fast or scalable as our systems. Our COBOL-based [banking solution], however, has proved very robust under high workloads and delivers a speed that can't be matched by Java applications.”

— Dean Mathieson, Product Development Manager, FNS / TCS

Our Recommendation

Core business systems define the organization; they – in many cases – are the organization. The applications that provide mortgage decisions, make insurance calculations, confirm holiday bookings, manage the production lines at car manufacturers, process and track parcel deliveries, they offer priceless value. Protecting their value and embracing the future needs a pragmatic, low-risk approach that leverages the valued IT assets that already work, delivers innovation and an ROI faster than other approaches, and is considerably less expensive.

If you are looking at strategic IT change, talk to us. We'd love to discuss our approach.



[1] We can’t speculate on the costs involved with package replacement projects – it wouldn’t be fair for us to estimate the price of an ERP or CRM package, for example.

3-2-1: The #DevDay Countdown has begun

With dozens of cities and thousands of delegates in the past four years – our #DevDay event is more popular than ever. Jackie Anglin previews this year’s exciting updates to the COBOL community’s must-attend show.

Introduction

It’s spring. And to mark the season of renewal and growth, we’re announcing the latest incarnation of our highly popular event series, Micro Focus #DevDay!  Now in its fourth year, #DevDay offers an out-of-this-world lineup of technical information, case studies and networking opportunities for you.  What’s new and different about this year? Let’s take a closer look….

The only constant is change

This year’s #DevDay is all about embracing change and let’s face it – change within IT is constant.  Platforms, architectures, applications, and delivery processes are continually adapting to meet new business requirements and market pressures.  But in order to achieve successful, lasting change, IT skills must also evolve and that’s what Micro Focus #DevDay is all about – technical education, building new skills and stronger community engagement. #DevDay delivers on this promise with a rocket booster of innovative content just for the enterprise application development community.

Today’s need: skill and speed

According to a recent Accenture survey, 91% of respondents believe organizational success is linked to the ability to adapt and evolve workforce skills. For starters, the business needs to respond to new competitive pressures, keep existing customers, retain market share and capitalize on new business opportunities, and these are just a few of the reasons. What makes this change proposition more challenging today is the plethora of new innovations in areas like mobile, cloud or IoT technologies, including the connected devices we wear, drive or use to secure our homes. This requires an unprecedented technological prowess in IT.

But being smart won't be enough on its own. This surge in the digital marketplace requires that IT shops adapt faster than ever in order to keep pace with unprecedented consumer demand for instant, accurate and elegantly designed content. Standing still is no longer acceptable. Delivering services in this new era requires tight business and IT alignment, better application delivery processes, greater efficiency and of course speed. For organizations large and small, IT capability is the new competitive differentiator and, as your responsive IT partner, Micro Focus will help you meet these challenges. Which brings us back to this year's #DevDay lineup.

There is space for you at #DevDay

If your organization has IBM mainframe or other enterprise COBOL applications that need to move faster (without breaking things), #DevDay is for you. Whether you manage COBOL apps in a distributed environment, work with critical systems on the mainframe, or work with those who do, here's a list of reasons you should attend: the latest tech content, real-world case studies, hands-on experience, a peer networking reception and our famously difficult ‘stump a Micro Focus expert’ contest.


A universe of technology

This year’s #DevDay series is packed with new technology topics including platform portability, app development using Visual Studio and Eclipse IDEs, mainframe DevOps, .NET, Java integration and much more.

Here are just a few of today’s highly relevant topics on the agenda:

  • REST assured with COBOL: API-enable your business systems
  • Dealing with Data: COBOL and RDBMS integration made simple
  • The modern mainframe: Deliver applications faster. Get better results

You – at the controls

#DevDay now offers a brand new opportunity to build hands on experience with our latest COBOL products.  Led by our experts, you can test drive for yourself some of the powerful new capabilities available to the enterprise application developer. You must pre-register to participate.  To do so, click here.

#DevDay: Future AppDev takes off

#DevDay is focused on you – the enterprise COBOL development community.  This is a perfect chance to learn best practices and experiences, connect with like-minded professionals, as well as build new technical skills.  Don’t miss this opportunity. Join us for a truly intergalactic #DevDay experience.  Seating is limited, so register now before the space-time continuum distorts!


Touching Down Near You Soon

United States

Canada

Brazil

See what happens at a #DevDay and find us on social media.

Enterprise DevOps is different: here’s why

Many of the world’s largest enterprises are looking at DevOps. But, as many are discovering, implementing it is not without its pitfalls. In his first Micro Focus blog, software industry guru Kevin Parker outlines what DevOps means at the enterprise scale.

Introduction

The DevOps movement evolved to allow organizations to innovate fast and reduce risk. DevOps rethinks how software development and delivery occurs and it reshapes how IT is organized and how IT delivers value to the business. However, some “pure” DevOps ideas are difficult to implement in highly regulated, large enterprises.

A question of scale

When the organization is required to meet strict government audit and compliance standards, when you have optimized IT delivery around a monolithic, centralized infrastructure, and when you have specialist teams to manage discrete technologies, it is very difficult to relax those controls and remove the barriers in order to adopt a shared-ownership model like DevOps. Yet implementing DevOps is exactly what over a quarter of the largest global IT teams are doing today.

So how do highly regulated, large enterprises benefit and succeed with DevOps?

Preparing for Change

Enterprise scale adoption requires enterprise-wide change. As Derek Britton said in a recent perspective on the cultural impact of DevOps, “[it is] those who preside over larger systems, [where] that chaos will be most keenly felt.”

There has to be acceptance that changes to practices, processes, policies, procedures and plans will occur as the ownership of responsibility and accountability moves to more logical places in the lifecycle. Trust must be freely given. Every action taken must have transparent verification through common access to project data. This will be disruptive so there must be strong leadership and commitment through the chaos that will occur.

Not just for the Purists

The table below shows some of the differences between “pure” DevOps and DevOps as implemented in highly regulated, large enterprises:

“Pure” DevOps | Enterprise DevOps
Pure agile teams | Variable-speed IT with waterfall, agile and hybrid development and deployment
Multidisciplinary team members with shared ownership and accountability | Teams maintain strict Separation of Duties (SoD) with clear boundaries and concentrations of technical specialists
Drawn primarily from Dev and Ops teams | Drawn primarily from Change and Release teams
Limited variability in platforms, technologies and methodologies, and a generally standardized toolset – often Open Source Solutions (OSS) | Wide variance in platforms, technologies, methodologies and toolsets, with many so-called legacy, and often competing, solutions – occasionally Open Source Solutions (OSS)
Generally collocated small teams | Generally geographically dispersed large teams
Frequent micro-sourcing and contingent workforce | Frequent outsourcing, inshore and offshore
Light compliance culture | Strong compliance culture
Limited cross-project dependencies | Complex cross-project dependencies
Application architecture strongly influenced by a microservices approach | Architecture bound by legacy systems steadily being replaced by encircling them with newer ones
Experimental, A/B testing, fail-fast culture | Innovate-fast-and-reduce-risk culture
Team developing the app runs the app | Team developing the app kept separate from the team running the app

The key takeaway is this – an enterprise-scale adoption requires some very smart planning and consideration.

Automate to Accelerate

The key to successful DevOps adoption comes down to automation. Whether your DevOps initiative starts as a grassroots movement from the project teams in a line of business or from an executive mandate across the corporation, bringing automation to as much of the lifecycle as is practicable is what ensures the success of the transformation. Only through automation is it possible to cement the changes necessary to effect lasting improvements in behavior and culture.

Through automation, we can achieve transparency into the development and delivery process and identify where bottlenecks and errors occur. With the telemetry thrown off by automation we are able to track, audit and measure the velocity, volume and value of the changes flowing through the system, constantly optimize, and improve the process. With this comes the ability to identify success and head off failure allowing for everyone to share in the continuous improvement in software delivery.
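
As a deliberately simplified, hypothetical illustration of the kind of telemetry meant here, the sketch below computes two common delivery metrics, lead time and change failure rate, from a handful of change records such as a pipeline might emit.

```python
# Minimal sketch: compute delivery metrics from pipeline telemetry records.
# Hypothetical data shape; real toolchains emit far richer events.
from datetime import datetime
from statistics import mean

changes = [  # assumed records: commit time, deploy time, whether the deploy failed
    {"committed": "2016-05-02T09:00", "deployed": "2016-05-03T17:00", "failed": False},
    {"committed": "2016-05-04T10:30", "deployed": "2016-05-04T15:00", "failed": True},
    {"committed": "2016-05-05T08:00", "deployed": "2016-05-06T11:00", "failed": False},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

lead_times = [hours_between(c["committed"], c["deployed"]) for c in changes]
failure_rate = sum(c["failed"] for c in changes) / len(changes)

print(f"Average lead time: {mean(lead_times):.1f} hours")
print(f"Change failure rate: {failure_rate:.0%}")
```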


The Time is Now

Nothing is more important in IT than the timely delivery of working software safely into production. The last decade has seen astonishing growth in the complexity of releases and the consequences of failure, and an equally astounding rise in the volume and velocity of change. As each market, technology and methodology shift has occurred, it has become ever more critical for Dev and Ops to execute software changes flawlessly.

With the extraordinary synergies between the Micro Focus and newly-acquired Serena solutions, it is now possible to create an end-to-end automated software development and delivery lifecycle from the mainframe to mobile and beyond, and to effect your DevOps transformation in a successful and sustained manner. Read more here.

Kevin Parker

Vice President – Worldwide Marketing, Serena Software


Market Attitudes to Modernization

The tried-and-trusted enterprise-scale server of choice is casually regarded as an unchanging world. Yet today’s digital world means the mainframe is being asked to do greater and greater things. Derek Britton investigates big-iron market attitudes to change.

Keeping the Mainframe Modern

A Firm Foundation

The IBM Mainframe environment has been on active duty since the mid-1960s and remains the platform of choice for the vast majority of the world's most successful organizations. However, technology has evolved at an unprecedented pace in the last generation, and today's enterprise server market is more competitive than ever. So it would be wholly fair to ask whether the mainframe remains as popular as ever.

You don’t have to look too hard for the answer. Whether you are reading reports from surveys conducted by CA, Compuware, Syncsort, BMC, IBM or Micro Focus, the response is loud and clear – the mainframe is the heart of the business.

Summarizing the surveys we’ve seen, for many organizations the Mainframe remains an unequivocally strategic asset. Typical survey responses depict up to 90% of the industry seeing the mainframe platform as being strategic for at least another decade (Sources: BMC, Compuware and others).

It could also be argued that the value of the platform is a reflection of the applications which it supports. So perhaps unsurprisingly, a survey conducted by Micro Focus showed that over 85% of Mainframe applications are considered strategic.

plus ça change

However, the appetite for change is also evident. Again, this holds true in the digital age. An unprecedentedly large global market, with more vocal users than ever, is demanding greater change across an unprecedented variety of access methods (devices). No system devised in the 1960s or 1970s could possibly have anticipated the internet, the mobile age or the internet of things; yet that is exactly what those systems must cope with today. Understandably, surveys reflect that: Micro Focus found two-thirds of those surveyed recognize a need to ‘do things differently’ in terms of application/service delivery and are seeking a more efficient approach.

The scale of change seems to be a growing problem that is impossible to avoid. In another survey, results show that IT is failing to keep up with the pace of change. A study by Vanson Bourne revealed that IT Backlogs (also referred to as IT Debt) had increased by 29% in just 18 months. Extrapolated, that’s the same as the workload doubling in less than five years. Supply is utterly failing demand.
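
As a quick sanity check on that extrapolation, using only the figures quoted above, compounding 29% growth every 18 months gives a doubling time of roughly four years:

```python
# Minimal sketch: how long does a backlog growing 29% every 18 months take to double?
from math import log

growth_per_period = 1.29          # 29% growth
period_years = 1.5                # every 18 months

doubling_time_years = period_years * log(2) / log(growth_per_period)
print(f"{doubling_time_years:.1f} years")   # about 4.1 years, i.e. under five
```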

Supporting this, and driven by new customer demands in today’s digital economy, over 40% of respondents confirmed that they are actively seeking to modernize their applications to support next generation technologies including Java, RDBMS, REST-based web services and .NET.  Many are also seeking to leverage modern development tools (Source: Micro Focus)

And it isn’t just technical change. The process of delivery is also being reviewed. We know from Gartner that 25% of the Global 2000 have adopted DevOps in response to the need for accelerated change, and that this figure is growing at 21% each year, suggesting the market is evolving towards a model of more frequent delivery.


Crossroads

Taking what works and improving it is not, however, the only option. Intrepid technologists might be tempted by a more draconian approach, hoping to manage and mitigate the associated cost and risk.

Package replacements take considerable budget and time, only to deliver – typically – a rough equivalent of the old system. Unique competitive advantage is compromised, and of course the same packages are available to competitors on the open market. Such an approach is known to have a 40% failure rate, according to Standish Group. Custom rewrite projects appear to be riskier still, with the same report citing a 70% failure rate and extremely lengthy and costly projects.

Worse still, reports from CAST Software suggest that Java (a typical replacement language choice) is around 4 times more costly to maintain than equivalent COBOL-based core systems. The risks of such a drastic change are clear.

Moving Ahead

Meeting future need never ends. Today’s innovation is tomorrow’s standard. Change is the only true constant. As such, the established methods of providing business value need to be constantly scrutinized and challenged. The mainframe market sees its inherent value and regards the platform as well-placed to support the future.

And to meet the demands of the digital age, the mainframe world is evolving: new complementary technology and methods will provide the greater efficiencies it needs to keep up with the pace of change. Find out more at MicroFocus.com and/or download the white paper ‘Discover the State of Enterprise IT’.

Rockin’ Role-Based Security – Least Privilege

With over 40,000 attendees, 500 exhibitors, and hundreds of sessions, this year's RSA Security Conference was the place to be for anyone interested in keeping their networks, systems, and information safe from threats, including insider threats. That, in turn, got me thinking about least privilege.

The “between a rock and a hard place” discussion at this year’s RSA Security Conference was the battle between Apple and the FBI to unlock an iPhone that was used by one of the San Bernardino shooters. But with over 40,000 attendees, 500 exhibitors, and hundreds of sessions, other topics were discussed as well.


According to a survey done at the conference by Bromium, 70% of a sampling of attendees stated that users are their biggest security headache. This jibes with previous surveys, which means that users were, are, and probably will continue to be one of the biggest security holes that organizations face.

Whether the “user” is an actual employee (the insider threat) or a cyber criminal who’s appropriated the credentials of an employee (making a guest appearance as the insider threat) is immaterial. Employees are our biggest threat, not only because they can maliciously or unintentionally cause data breaches, but because they are not equipped to deal with the tactics of cybercriminals, who covet their credentials – especially those of insiders with privilege. In either case, your employee is still a threat to your organization. So how do you eliminate threats from your users? You can’t!


Just like you cannot stop a hurricane, you cannot eliminate cyber threats. But just as you can harden buildings and build surge barriers to protect against hurricane damage, you can use appropriate user management and access controls to prevent or mitigate a breach caused by a cyber threat.

Let me postulate that the best way to prevent a breach is to not allow the actor (or threat) to access the information that they are targeting. In plain English, this means least privilege. Rather than giving your employees access to everything and anything, use proper access controls and user management to lock down your employees so that they can only access the systems and information that they specifically need to perform their jobs. You are not eliminating the threat, but rather are trying to minimize it through compartmentalization. For example, marketing people don't need access to your finances, so lock them out. Similarly, programmers should never be granted access to production systems except in extreme circumstances. And have you even considered time- or location-based access? When should your employees have access to key information, and where should they be sitting when they are allowed to access it? Should I be able to download the plans for a new product after hours from a country different from my office?
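
To make the idea concrete, here is a deliberately simplified, hypothetical sketch of such a policy check: access is granted only if the user's role covers the resource, the request falls inside an allowed time window, and it comes from an expected location.

```python
# Minimal sketch of a least-privilege access check combining role, time and location.
# Hypothetical policy data; a real IAM product evaluates far richer policies.
from datetime import datetime

POLICIES = {
    # resource: (roles allowed, allowed hours (start, end), allowed countries)
    "finance-reports":   ({"finance"},           (8, 18), {"US"}),
    "production-deploy": ({"release-manager"},   (6, 22), {"US", "CA"}),
    "product-plans":     ({"engineering", "pm"}, (7, 20), {"US"}),
}

def is_allowed(resource, role, country, when=None):
    """Return True only if role, time window and location all match the policy."""
    when = when or datetime.now()
    policy = POLICIES.get(resource)
    if policy is None:
        return False                      # default deny: unknown resources are off-limits
    roles, (start_hour, end_hour), countries = policy
    return (role in roles
            and start_hour <= when.hour < end_hour
            and country in countries)

# Example: a marketing user asking for finance data after hours from abroad is denied.
print(is_allowed("finance-reports", "marketing", "FR", datetime(2016, 5, 10, 23)))  # False
print(is_allowed("finance-reports", "finance",   "US", datetime(2016, 5, 10, 10)))  # True
```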

When an employee changes roles, ensure that their access changes with them, without a time lag that could give them a window to attack. Using an Identity Lifecycle Manager (ILM) tied to your HR database is a good way to ensure proper initial provisioning along with ongoing access maintenance. An employee's access lifecycle needs to stay congruent with their HR lifecycle. If your ILM also includes an analytics engine that can flag nonsensical or out-of-the-ordinary access grants, so much the better.

But you cannot just buy an ILM and tell your board that your work is done. An ILM is useless unless you know what each role should be allowed to access. And that means working with your business units to define the roles within them.

Don't accept that departments need dozens to hundreds of roles; that just means someone is being lazy. Nor do you want too few roles, forcing the system into a large number of individual access grants. Like Goldilocks and her three bears, there is a “just right” which you will need to work out.

This is where access risk scoring might help you out. A risk score provides a means for determining or calculating risk for users, applications, business roles, or permissions. If the risk is low, perhaps you don’t need to create another role that manages access to a specific resource. But if the risk is high and you have to split the user population, then another role might be needed.
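
One way to picture such a score (a toy model, not a description of any particular product's scoring) is to weight a few attributes of a permission and compare the total against a threshold to decide whether a dedicated role is warranted:

```python
# Minimal sketch: score the risk of a permission to decide whether it needs its own role.
# The weights and threshold are illustrative assumptions, not product defaults.
RISK_WEIGHTS = {
    "touches_financial_data": 40,
    "grants_admin_rights":    30,
    "reaches_production":     20,
    "external_access":        10,
}

def risk_score(permission_attributes):
    """Sum the weights of every attribute the permission has."""
    return sum(weight for attr, weight in RISK_WEIGHTS.items()
               if permission_attributes.get(attr, False))

def needs_dedicated_role(permission_attributes, threshold=50):
    return risk_score(permission_attributes) >= threshold

payroll_export = {"touches_financial_data": True, "external_access": True}
wiki_edit      = {"external_access": True}

print(needs_dedicated_role(payroll_export))  # True  (score 50): split into its own role
print(needs_dedicated_role(wiki_edit))       # False (score 10): fold into a broader role
```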

Finally, you want to combine least privilege with automated role changes, policy-based access, and change monitoring. This powerful combination can help to ensure that users don't have access to what they shouldn't, and allow you to determine if someone is doing something out of the ordinary with something they can access. By combining user activity and change monitoring you can watch how users (especially privileged users like sysadmins) use the rights they've been granted. It helps you spot and address unauthorized activity with concise, easy-to-read alerts that provide the who, what, when and where of that activity.


Ron LaPedis

Global Sales Enablement Specialist

This post was originally published on the NetIQ Cool Solutions blog site on April 21, 2016.

New security standard PCI DSS 3.2 – the thumbscrews tighten on the financial industry

With the recently ratified new version PCI DSS 3.2, the PCI Security Standards Council has once again significantly tightened its security standards for the financial industry. The recently disclosed attacks on the SWIFT system have shown that even the already high security standards and compliance requirements are still not enough to comprehensively protect sensitive data. Why is that? Read on to learn more about the many reasons, and about the consequences facing retailers, banks and everyone else who works with credit cards.

Cybercrime in the financial world continues to advance. As mentioned in my last blog post, a KPMG study found that cybercrime has developed into an extremely lucrative business. Geographically, economic crime in Switzerland was concentrated above all in the Zurich area, followed by Ticino in second place. That these two regions top the list is hardly surprising, since Zurich and Lugano are the largest financial centers in Switzerland, and financial institutions remain a prime target for cybercriminals. Although the financial industry is among the most highly regulated sectors, with high security standards and compliance requirements, there are still countless cases of data misuse and data theft at financial service providers. The recently disclosed attacks on the SWIFT system are a case in point.

PCI DSS introduced a security standard for the payment industry many years ago, based on the Visa AIS (Account Information Security) and MasterCard SDP (Site Data Protection) security programs. Yet the card companies' safeguards are apparently not enough. Why is that? The reasons are many:

  • exploitation of existing privileged accounts in advanced persistent threat scenarios,
  • the increased use of mobility solutions and cloud services,
  • more web application attacks using stolen credentials,
  • the exploitation of vulnerabilities in web applications, and
  • a constantly rising threat level from new malware and cyber espionage.

Companies also appear to struggle with PCI compliance itself. According to Verizon's 2015 PCI Compliance Report, three out of four companies fail when it comes to maintaining PCI compliance, leaving all of them vulnerable to cyberattacks on credit card transactions and customer data.

In response both to the massive data thefts at financial service providers and to the growing complexity of threats, the PCI Security Standards Council has now significantly raised its security requirements. With version 3.2, ratified on April 28, 2016, the Council requires, among other things, the consistent use of multi-factor authentication (MFA) for banks, merchants and anyone else who works with credit cards. Previous versions required two-factor authentication only for remote access; now the requirement applies to all administrators, including access from within the Cardholder Data Environment (CDE), and the standard explicitly speaks of multi-factor authentication. With the new requirements, the PCI Security Standards Council is clearly signaling that current protection mechanisms for critical and sensitive data, such as cardholder data, are not sufficient and must be extended.

In doing so, the PCI Security Standards Council is taking the same direction on passwords that the British intelligence agency GCHQ recommended last autumn in its publication "Password guidance: simplifying your approach". Given today's threat scenarios, longer and more complex passwords alone are no longer enough.
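
As a purely illustrative sketch of one common second factor, a time-based one-time password (TOTP, RFC 6238), here is how a server-side check of a six-digit code might look. This is not a description of any specific PCI-mandated mechanism, and production systems should rely on a vetted authentication product.

```python
# Minimal sketch: verify a time-based one-time password (TOTP, RFC 6238).
# Illustrative only; use a proven authentication solution in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, timestep=30, digits=6, when=None):
    """Derive the current TOTP code from a shared base32 secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int((when or time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

def verify(secret_base32, submitted_code, window=1):
    """Accept the code for the current timestep or one step either side (clock drift)."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_base32, when=now + drift * 30), submitted_code)
               for drift in range(-window, window + 1))

SECRET = "JBSWY3DPEHPK3PXP"          # hypothetical enrolled secret
print(verify(SECRET, totp(SECRET)))  # True
```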

The end of static passwords and simple PINs… there are better solutions for a secure future

Even though companies have been granted a grace period until February 1, 2018 to implement the new requirements, they should start laying the groundwork today. A large number of vendors offer different multi-factor authentication methods, and the number of authentication methods keeps growing rapidly. HSBC in the UK, for example, recently announced that from summer 2016 it will introduce a combination of voice biometrics and fingerprint recognition to authenticate more than 15 million eBanking customers. Authentication for access to one's own bank account will then take place via smartphone, voice ID and fingerprint. Innovative hardware and software can uniquely identify a voice by more than 100 characteristics, such as speed, emphasis and rhythm – even when the speaker has a cold. Another interesting approach, which Micro Focus and Nymi are currently working on, is authentication via the user's own heartbeat: the user wears a wristband that reads the heartbeat as an ECG and recognizes and verifies the individual pattern.

Every company has different requirements and preconditions for implementing such MFA solutions, so there is no one-size-fits-all answer. The differences lie above all in how well the solutions integrate with remote access systems and cloud applications. So what is the best way to solve the password problem?

An effective authentication strategy

There are three core points companies should consider when planning an authentication approach that fits them:

  • Map business policies into modular rules. Existing policies should be reusable, updatable and extensible to mobile devices. That simplifies access control management for IT security, because a device's access can be revoked quickly in the event of a security incident.
  • Improve usability on mobile platforms. Some legacy applications use a web interface but were designed neither for mobile access nor for regular updates. Single sign-on (SSO) mechanisms for native and web applications can be very helpful here.
  • Use different authentication mechanisms flexibly to strike an appropriate balance between security requirements, operational agility and user convenience. The authentication method should always be adaptable to the level of protection actually required. Different users or situations call for different authentication; the method used must fit both the user's role and the situation.

When planning a suitable multi-factor authentication approach, however, companies should not focus only on the status quo of their requirements; they should also look at future needs. Key considerations include central administration and control of users and endpoints, total cost of ownership (TCO), and whether new requirements such as cloud services and mobile devices can be secured with the same MFA product without additional add-on modules.

Thomas Hofmann

Systems Engineer – Micro Focus Switzerland


The art of simplicity is the complexity of the puzzle

In the last blog post we covered the growing acceptance of cloud services and the access management challenges that come with it. In this post Christoph Stoica looks at further stumbling blocks to watch out for when integrating cloud services into an existing infrastructure, in order to guarantee business continuity, security and compliance in heterogeneous environments.

According to IDC, at least half of IT spending will flow into cloud-based solutions by 2018. In future nothing will work without cloud services, because the cloud is the foundation of new digital products and services alike. There is no question that cloud computing has now arrived in German mid-sized companies too, and adoption is advancing rapidly. Almost every company now uses some form of cloud service, but running this service model brings plenty of challenges. For executives, the decision to use cloud resources is driven primarily by goals such as flexibility, scalability and speed. A modern IT infrastructure is no longer complete without a mobile component and without cloud services. Cloud services also score points above all because they deliver standardized services faster and at a lower price than companies can achieve with their internal IT.

From a business perspective, with IT budgets ever tighter, this looks like the ideal move – away from complexity towards lean IT. Yet however tempting the operational benefits may be, a 100% migration to the cloud is unlikely to happen in practice, not least because of the still considerable security and data protection concerns around public cloud services. Cloud integration is therefore the key phrase. It is safe to assume that a mix of different models will emerge in most companies: alongside conventional, local IT operations (legacy or static IT) there will be a dynamic IT layer in the form of virtualized private clouds and public clouds.


The convergence of conventional, static IT and dynamic IT is not just about finding best-of-breed solutions; it is about fitting them together. How deeply should you integrate? How do you organize data storage so that contradictory information is not held in different places? How do you automate access management across the different platforms? Alongside challenges such as integration and management, the attack surface for IT organizations naturally grows as well. The classic protection of IT networks and systems at the company's outer perimeter is visibly eroding. The integration of hybrid cloud architectures must therefore go hand in hand with an evolution of IT security measures at every level, so that the defenders do not fall technologically behind the attackers. The agility gained through virtualization and 'cloudification' inevitably means that a large number of access rules across dozens or even hundreds of systems must be modified constantly, and above all quickly. To guarantee business continuity, security and compliance in environments as heterogeneous as hybrid clouds, choosing the right tools is essential. Manual processes simply can no longer keep up with the dynamics of these hybrid worlds.

Fortunately, solutions now exist, such as those from Micro Focus, that enable exactly this kind of orchestration of security policies. They provide a holistic view of the entire heterogeneous environment and allow automated change management of security policies – from design through implementation to tracking for audit purposes.


Christoph Stoica

Regional General Manager DACH

Micro Focus

Cloud nine or cloudburst? – Cloud computing is reality, hybrid solutions are the consequence

In 2016, cloud computing is moving into focus for many German mid-sized companies. Understandably so: driven by digital transformation, cloud computing optimizes the capital base by shifting selected IT costs from a capital expenditure to an operating cost model. But what about security risks and compliance enforcement? Is data in the cloud really secure, where does it reside, and who controls it? In this new blog post Christoph Stoica explains which aspects should be considered from an IT security point of view.

A glance at Bitkom's current Cloud Monitor 2015 removes any doubt: cloud computing has now arrived in German mid-sized companies too, and adoption is advancing rapidly. One of the decisive drivers of the cloud's growing acceptance in Germany is digital transformation. New technologies and applications are reshaping products, services and processes, so that companies are gradually evolving into fully networked digital organizations. Anyone who thinks this is all a distant prospect and does not belong on the list of top priorities should think again!

We are already racing at top speed towards a fully networked world. More and more people carry mobile devices, leave digital traces in social networks and wear wearables that transmit their personal data – voluntarily or not – and make it available to companies. Machines and objects can be addressed digitally at any time via sensors and SIM cards, which leads to changed and extended value chains. The wealth of data collected in this way is an important raw material for companies and, used properly with smart analytics tools, can provide the decisive competitive advantage. So the question is not whether digital transformation will happen, but how quickly company leadership sets the right course in its IT infrastructure.

Digital transformation requires scalable infrastructures – both technically and in terms of international reach. Cloud services, whether public or private, with characteristics such as agility, adaptability, flexibility and responsiveness, are ideally suited to this. But what about the security risks and compliance enforcement? Is data in the cloud secure? Where exactly does my data reside, and who controls it? Even if, following the recent Safe Harbor ruling, big players such as Amazon Web Services, ProfitBricks, Salesforce and Microsoft are now moving their data centers to Germany or at least to an EU location, that still does not answer all the security questions. Given the larger attack surface, is access management based on simple authentication with username and password still enough?


Usernames and passwords are easy to outwit these days; the new magic ingredient is multi-factor authentication. An extended authentication method that uses additional factors enables fast and precise identification. Different users or situations require different authentication: the method used must fit both the user's role and context, and of course match the risk classification of the information being requested. Not every interaction carries the same risk for a company; some pose a greater danger. A risky interaction calls for stronger authentication, provided for example by an additional piece of information known only to the user, by additional verification of identity over separate channels – known as out-of-band verification – or by other elements.
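
A highly simplified, hypothetical sketch of this idea (not a description of Micro Focus Advanced Authentication) might score the risk of a request from its context and require stronger authentication the higher the score:

```python
# Minimal sketch: pick an authentication requirement based on the risk of the request.
# Thresholds and factors are illustrative assumptions only.
def risk_score(request):
    score = 0
    if request["data_classification"] == "confidential":
        score += 40
    if request["network"] != "corporate":                    # coming in from outside
        score += 30
    if request["country"] not in request["usual_countries"]:
        score += 20
    if request["outside_business_hours"]:
        score += 10
    return score

def required_authentication(request):
    score = risk_score(request)
    if score >= 70:
        return "password + out-of-band confirmation"   # e.g. push to a separate channel
    if score >= 40:
        return "password + one-time password"
    return "password only"

request = {
    "data_classification": "confidential",
    "network": "public",
    "country": "FR",
    "usual_countries": {"DE", "CH"},
    "outside_business_hours": True,
}
print(required_authentication(request))   # password + out-of-band confirmation
```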

However, using and managing such multi-step authentication methods can become costly and unwieldy. With Advanced Authentication, Micro Focus offers a solution for centrally managing all authentication methods – whether for your employees, suppliers or devices.


Christoph Stoica

Regional General Manager DACH

Micro Focus