While automated IT services and processes have the potential to drive more seamless IT operations, they often exist in silos. This is what Gartner means by “islands of automation”[1] in areas like the service desk, virtual machine and application deployments, storage, and networking. Imagine how much extra speed and throughput you’d gain by joining up these “islands” – especially for mainframe platforms that may be sitting in disconnected silos. With that in mind, here are three opportunities that illustrate the point.

1. Making the mainframe available to line of business developers
Automation, in the form of DevOps practices, helps drive an agile approach that enables continuous delivery of apps and updates. However, these practices are less widespread among mainframe developers. Manual processes, like provisioning and launching mainframe test environments or managing test data, effectively make the mainframe less accessible to enterprise developers. In the near future, automation will make mainframe infrastructure available “on demand,” and ultimately as code, so developers can access a development environment on the mainframe as easily as they can in the cloud.
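To make the “infrastructure as code” idea concrete, here is a minimal sketch of a declarative environment spec and an on-demand provisioner. Everything here is hypothetical and illustrative – the `EnvironmentSpec` fields and `MainframeProvisioner` class are not a real product API; real automation would drive tooling such as z/OSMF behind an interface like this.

```python
from dataclasses import dataclass, field

# Hypothetical declarative spec for a mainframe test environment.
# The fields are illustrative, not a real provisioning interface.
@dataclass
class EnvironmentSpec:
    name: str
    subsystems: list = field(default_factory=list)  # e.g. ["CICS", "Db2"]
    test_dataset: str = "masked-sample"             # masked test data to load

class MainframeProvisioner:
    """Simulates on-demand provisioning, so a developer can request a
    mainframe dev/test environment the way they would in the cloud."""

    def __init__(self):
        self.environments = {}

    def provision(self, spec: EnvironmentSpec) -> str:
        # In a real pipeline this step would invoke platform automation;
        # here we simply record the request and mark it ready.
        self.environments[spec.name] = {"status": "ready", "spec": spec}
        return f"{spec.name}: ready"

provisioner = MainframeProvisioner()
spec = EnvironmentSpec(name="payments-test", subsystems=["CICS", "Db2"])
result = provisioner.provision(spec)
```

The point of the sketch is the shape of the workflow: the environment is described declaratively and requested programmatically, rather than assembled by hand.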

2. Orchestrating disaster recovery for IT operations
Mainframes are also less geared towards efficient recovery. In distributed environments, automation enables granular control over rollbacks, whereas restoring the mainframe is typically a hands-on, manual process. With orchestration, however, it becomes possible to deliver more granular disaster recovery on the mainframe and to support rollbacks as well. The goal is to move towards Disaster Recovery as a Service across the whole IT environment, rather than the current scenario where some infrastructure is recovered via automation and the rest manually.
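The orchestration-with-rollback pattern described above can be sketched in a few lines. This is a toy model, not real DR tooling: the step names and the lambda actions are hypothetical stand-ins for platform-specific recovery automation.

```python
# Minimal sketch of orchestrated recovery with rollback support.
# Each step is (name, action, rollback); on failure, completed steps
# are undone in reverse order, as distributed DR tooling typically does.

def run_recovery_plan(steps):
    completed = []
    for name, action, rollback in steps:
        try:
            action()
            completed.append((name, rollback))
        except Exception:
            # Roll back everything that already succeeded, newest first.
            for done_name, undo in reversed(completed):
                undo()
            return f"failed at {name}, rolled back {len(completed)} steps"
    return f"recovered {len(completed)} steps"

log = []
plan = [
    ("restore-storage",   lambda: log.append("storage up"),
                          lambda: log.append("storage undone")),
    ("restart-mainframe", lambda: log.append("mainframe up"),
                          lambda: log.append("mainframe undone")),
]
status = run_recovery_plan(plan)
```

The design choice worth noting is that rollback is a first-class part of each step, so the orchestrator can treat mainframe and distributed targets uniformly.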

3. Making mainframe data available to Line of Business users and Big Data analytics
Many manual processes are typically required before Line of Business users can start unearthing the nuggets of insight hiding in your mainframe data: tasks like data extraction, formatting, and scrubbing remain predominantly hands-on. Today, however, big data stacks like Hadoop and Spark are already helping to automate the process of extracting near-real-time data from the mainframe and readying it for analytics. The ultimate goals are twofold. First, to give data scientists an end-to-end workflow that extends from data extraction to setting up an analytics infrastructure, data ingestion and analysis, and the final delivery of results. Second, to optimize the costs of this data extraction, since there are costs associated with VSAM, Datacom, and other such access methods.
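The extract–scrub–analyze flow above can be sketched as a simple staged pipeline. The record layout, masking rule, and toy “analytics” are all hypothetical; in practice the extract stage would pull near-real-time data from VSAM, Datacom, or similar sources.

```python
# Minimal sketch of the end-to-end flow: extract mainframe records,
# scrub and format them, then hand them to an analytics stage.

def extract(records):
    # Stand-in for pulling records from a mainframe data source.
    return list(records)

def scrub(records):
    # Drop incomplete records and mask the account number,
    # keeping only its last four digits.
    return [
        {**r, "account": "****" + r["account"][-4:]}
        for r in records
        if r.get("amount") is not None
    ]

def analyze(records):
    # Toy analytics: total transaction amount.
    return sum(r["amount"] for r in records)

raw = [
    {"account": "12345678", "amount": 100.0},
    {"account": "87654321", "amount": None},   # incomplete, scrubbed out
    {"account": "11112222", "amount": 50.0},
]
clean = scrub(extract(raw))
total = analyze(clean)
```

Automating these stages as one pipeline is what turns ad hoc, hands-on extraction into a repeatable workflow a data scientist can run on demand.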

It’s this kind of holistic automation that IDC has in mind when it says, “Unified automation of workload management, service orchestration, and application release activities provides vital enabling technology for almost every dimension of [digital] transformation.”[2] As automated services are brought together across your hybrid enterprise, those inefficient islands of automation will start to disappear.

Learn more about delivering a self-driven mainframe and let me know what you think.

Jeff Henry is responsible for Broadcom’s Mainframe Division strategy, product management and design. In this role, Jeff is leading innovation across Broadcom solutions to help transform how clients build, operate and secure their mission-critical applications, and to enable their organizations to optimize and transform into digital businesses, leveraging DevSecOps, machine learning/AI, and security and compliance. Jeff has over 30 years of industry experience leading software organizations, specializing in business services delivery, cloud deployments, and bringing Systems of Engagement together with Systems of Record for enterprise customers. He lives in Raleigh, NC with his wife and two children, ages 15 & 16.
