Accelerating Digital Business Decision-Making and Execution with Workload Intelligence and Analytics

Broadcom just unveiled Automation.ai – the industry’s first AI-driven software intelligence platform, purpose-built to accelerate decision-making across the multiple business and technology domains that support digital transformation initiatives.

Automation.ai harnesses the power of advanced machine learning (ML), intelligent automation, and internet-scale open source frameworks to transform massive volumes of data from disparate toolsets, providing a unified approach to enterprise decision-making. Key attributes include:

  • AI-driven: Provides a predefined, out-of-the-box set of AI-driven analysis, correlation, recommendation, and remediation services that are fully automated.
  • Open: Ingests AIOps, DevOps, ValueOps, and Automation CoE domain data from a full range of software, including Broadcom, third-party, and open source tools.
  • Always Learning: Continuously validates and improves decisions based on real-world outcomes.
  • Extensible: Operates independently or within existing AI and machine learning ecosystems.
  • Multi-cloud: Fully containerized Kubernetes-based orchestration on public or private cloud.

As you may have heard, Broadcom also recently completed the acquisition of Terma Software, whose solutions model workload dependencies, optimize workloads, enable intelligent SLAs, and improve overall operational efficiency in real time. As the person responsible for automation technologies at Broadcom, I can honestly say these are truly exciting times.

Terma Software is a global leader in adaptive intelligence and predictive analytics for workload optimization. Its advanced AIOps platform for workload automation allows you to predict and prevent workload issues before they happen, helping you meet service levels.

This cross-platform, cross-vendor solution enables you to monitor all your workloads from a single pane of glass. It discovers dependencies across automation providers such as CA 7, AutoSys, Jobtrac, Tidal, and IBM mainframe and distributed solutions, allowing companies to improve service delivery through proactive management of SLAs across these automation platforms.

At Broadcom, we understand that it is critical to simplify operational control, which can be a challenge if you use multiple scheduling technologies. Having a single point of view across your entire portfolio, combined with enhanced workload analytics, helps with this simplification. We also understand that insight into the potential business impact of delays enables you to resolve issues before end users and the business are affected.

What is key to Broadcom is what the Terma technology will allow us to provide in the future. We have believed for decades that automation is critical to successful digital transformation; it is the lifeblood of any business, yet it is often thought of as old and not transformative.

Terma’s technology enables this change by offering AIOps for workload automation, enriching our Automation.ai platform that powers our Digital BizOps solution, and providing actionable workload automation insights from multiple vendors. Ultimately it will improve both service delivery and customer experience. Unified workload management will drive faster results and operational improvement, and increase confidence and trust across IT and the business.

The beauty of the solution is that you do not have to change your workload automation toolset – a decision that many organizations put off for far too long because of the perceived business risk of changing scheduling engines. With this, we can offer advanced SLA management, critical path analysis, dependency discovery, and analytics to all, and then use AI/ML to expose previously unseen insights into your operations, allowing for continual improvement and better delivery to the business.

Feel free to join me at a webinar we will be presenting on the subject on the 12th of December at 11 AM Eastern Time, so you can see what this can mean for your operations environment. If you are reading my blog after the event, don’t worry; the link will take you to a replay of the webinar, so you will not miss out.

Empowering Automation Center of Excellence Initiatives

I Googled “Digital Transformation” and got more than 40,000,000 results … a pretty crowded field. If it wasn’t already obvious, it makes clear that most companies are still trying to figure out how best to use emerging technologies. IDC envisions that 55% of organizations will be digitally determined by 2020, pushing transformation initiatives and spending up to nearly $6T. In fact, as disruption threatens every market, the so-called digital transformation appears to be unavoidable for any organization.

So, I am sure you know the digital transformation story well: it is all about a greater level of business pressure – the pressure that drives technological and organizational change. Why are you transforming? Simply because you need to innovate fast and stay ahead of the competition, constantly delight your customers, and avoid churn – all while dealing with speed and volumes you have never reached before. So, there are a few key areas to focus on when tackling digital transformation initiatives to increase agility in an organization.

Key areas to focus on when tackling digital transformation

The first area is managing and controlling business processes end-to-end. This wasn’t a challenge when all functions were centralized and integrated, but introducing multi-cloud and SaaS into your application infrastructure brings a new set of challenges, because you still need to stay in control of the entire business process execution.

Another area is delivering an innovative digital experience to the market. With constant pressure to transform at scale, DevOps organizations need to create a robust framework of tools and processes to enable continuous delivery. However, coordinating tools and teams is often done manually or through ad-hoc scripting, which causes errors and delays that put the business at risk.

The last area can be seen as a consequence of the other two. Managing service delivery is becoming difficult for the operations teams with siloed tools and applications spread across mobile, the cloud and the mainframe. As digital technologies are dramatically increasing volume and velocity, it is now imperative to include AI and machine learning to keep increasingly complex infrastructures and processes under control.

Automation finds its place as an indispensable tool

For improving enterprise agility, automation finds its place as an indispensable tool. And if there was a time when automation was seen as just a way to cut costs, modern automation technology now has the potential to put you a step ahead of the competition. To get there, many organizations are putting an Automation Center of Excellence (CoE) in place. As a matter of fact, automation that is developed without proper controls can have hazardous consequences for the business. With a CoE, automation efforts do not belong to individuals; the CoE enables sharing of best practices and thus enhances adoption while increasing speed and ROI. Of course, the CoE is not just a matter of technology. But it is important to understand that automation at scale requires organizations to consider a platform approach for effective centralized governance.

Empowering the Automation CoE

Empowering the Automation CoE is an inherent objective of the new Automic 12.3 release. The new Automic release provides a solid platform, powered by AI and machine learning capabilities, for driving agility across both digital processes and continuous delivery pipelines. Automic 12.3 helps meet business expectations by increasing the speed and scale of application deployment while ensuring stability and a high-quality customer experience. The new Automic release bridges silos of automation and provides a single control point for all automation. Automic is the backbone IT organizations need to support their CoE initiatives:

  • Continuous Delivery – you easily construct intelligent pipelines for hybrid and multi-cloud, and automate toolchains and testing processes driven by ML and AI
  • Digital Business Automation – you automate complex workloads, integrate business processes across platforms, control flows for big data and analytics, give control back to business users with self-service
  • AIOps – you automate detection and remediation of issues, implement self-healing operations, leverage intelligent insights to improve service levels

By bringing greater intelligence into automated processes and tightening the alignment between teams and tools, Automic 12.3 is more than ever the right platform to empower your Automation CoE initiatives. Ultimately, by taming complex environments, bridging islands of automation, and preventing teams from constantly reinventing the wheel, it will bring an unprecedented level of consistency to drive your digital initiatives. That may well be the best asset for accelerating your transformation, don’t you think?

White Paper – Unlocking the Value of the SRE Model

This paper examines the key tool requirements that are integral to supporting SRE models, and it reveals how Broadcom offers a unique approach that helps organizations realize more value from the SRE model, and do so more rapidly and securely. Review this paper and discover how Broadcom solutions deliver complete ecosystem observability and AI-fueled intelligence, enabling teams to optimize the customer experience and boost business outcomes.

Read White Paper

What is a Unified Data Model and Why Would You Use It

Managing modern application environments is hard. A unified data model can make it easier. Here’s how.

The nature of modern app environments

Modern distributed application systems are growing increasingly complex. Not only are they larger and spread across scale-out environments, but they are also composed of more layers, due especially to the trend toward software-defined networking, storage and everything else. The environments are also highly dynamic, with configurations that are auto-updated on a recurring basis.

Add to this picture microservices architectures and hybrid clouds, and things get even more complex.

Whereas in the past you would typically have run a monolithic application in a static environment on a single server, today you probably have containerized microservices distributed across clusters of servers, using software-defined networks and storage layers. Even if you have simpler virtual machines, your infrastructure is still likely to be highly distributed, and your machine images might move between host servers.

This complexity makes it difficult to map, manage and integrate multiple tools within your environment, especially when each tool uses its own data model. It creates multiple issues for DevOps practitioners and developers alike.

What is a Unified Data Model?

This is why organizations are increasingly adopting unified data models. A unified data model creates an opportunity for an organization to analyze data from multiple sources in the context of shared business initiatives.

A unified data model forces your DevOps and development teams to determine the methods, practices, and architectural patterns that correlate to the best outcomes in your organization. It also forces you to future-proof your data architecture by leveraging new technology data types and attributes.

As the complexity of systems increases, maintaining a separate data model for each system yields diminishing returns and impacts our ability to maintain and monitor web applications. Individual modeling for different systems creates a contextual gap in regard to the overarching infrastructure. Broadcom’s white paper on the Essentials of Root Cause Analytics goes into much greater detail on this.

A unified data model acts as a bridge between your different ecosystems, allowing you to contextualize data sources across multiple services. It acts as a foundation upon which data can be consistently consumed, combined and correlated, which allows for machine learning application across different data sets. It could be argued that this is the future of DevOps monitoring and maintenance.
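To make the idea concrete, here is a minimal Python sketch, with hypothetical tool payloads and field names that are not taken from any specific product, of how events from different tools might be normalized into one shared shape so they can be correlated:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedEvent:
    """One shared shape for telemetry from any tool in the environment."""
    source: str          # which tool emitted the event
    entity: str          # service, host, or container the event refers to
    kind: str            # e.g. "alert", "deploy", "metric"
    severity: str
    timestamp: datetime
    attributes: dict     # tool-specific details, preserved for drill-down

def from_monitoring_alert(alert: dict) -> UnifiedEvent:
    """Map a (hypothetical) monitoring tool's alert payload to the unified model."""
    return UnifiedEvent(
        source="monitoring",
        entity=alert["host"],
        kind="alert",
        severity=alert.get("level", "unknown"),
        timestamp=datetime.fromtimestamp(alert["ts"], tz=timezone.utc),
        attributes={"check": alert.get("check_name")},
    )

def from_ci_deploy(deploy: dict) -> UnifiedEvent:
    """Map a (hypothetical) CI/CD tool's deployment record to the unified model."""
    return UnifiedEvent(
        source="ci",
        entity=deploy["service"],
        kind="deploy",
        severity="info",
        timestamp=datetime.fromisoformat(deploy["finished_at"]),
        attributes={"version": deploy.get("version")},
    )

# Once everything shares one shape, correlation becomes a simple query,
# e.g. "which deploys touched the same entity shortly before an alert?"
def deploys_before_alert(events: list, window_s: int = 900):
    alerts = [e for e in events if e.kind == "alert"]
    deploys = [e for e in events if e.kind == "deploy"]
    return [
        (d, a) for a in alerts for d in deploys
        if d.entity == a.entity
        and 0 <= (a.timestamp - d.timestamp).total_seconds() <= window_s
    ]
```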

Lastly, a unified data model will allow for refactoring and migration of data across your infrastructure. As a result, careful consideration should be given to the flexibility of the data components in your organization’s ecosystem, and design should be addressed with future-proofing in mind. Every data layer and every data source should serve to increase the understanding of your overarching data model and ecosystem.

Too Much Work, Too Little Time

Seventy percent of transformation efforts fail. It’s a statistic that’s been thrown around for 25 years, most recently cited by McKinsey. PMI reports that 9.9% of every dollar spent on projects is wasted – and that’s an improvement on previous studies. There are many reasons for these failures, but organizational fatigue is a big one – and it’s getting bigger.

The problem is that the speed of business has increased, but the approach hasn’t. As a result, planning is occurring more frequently, with more projects being approved, adjusted and cancelled, but it’s still based on individual proposals from different business areas. At the same time, many organizations still approve way more projects than they are capable of delivering. This causes frustration, lost productivity and an overall sense of organizational fatigue – and the more frequent planning becomes, the worse it gets.

Organizations must be more strategic, even as they adjust their tactical work more frequently. Planning must change to align approved work with long-term roadmaps that guide the strategic direction of a product, service, or the entire business. Adjustments in the short-term must still contribute to progress on that strategy, and work that doesn’t contribute should never be approved in the first place.

Organizations talk a lot about creating an environment where their employees can “work smarter, not harder,” but they still operate with legacy planning techniques that are anything but smart. Change those planning fundamentals and you’ll go a long way to alleviating organizational fatigue.

Top 3 Reasons to do Big Room Planning

When you have delivery teams dispersed across different locations, planning can get messy. Luckily, collaborative planning in one big room has proven to be an effective remedy to this situation. In fact, here at Rally Software, we’ve been doing Big Room Planning since our inception. Here are three reasons why you should, too.

When you plan together, you reach your goals faster

Our Agile advisors have estimated that producing a mid-range plan for a delivery group takes 20,000+ informed decisions. The reality is, when you have everyone in the same room, you can make those decisions and deal with any dependencies quickly. At the end of planning, your path forward may only be mostly right, but that’s why you put mechanisms in place to adjust along the way.

What’s even better is that you can form a sound plan in just two days. And when you get really good at it, you can get it done in a day. Compare that to planning cycles that take weeks or even months when you’re using email, chat, and a never-ending exchange of spreadsheets.

Big Room Planning at Rally Software
Planning together results in better plans.

When the people who do the work are included, you get better plans

Traditionally, plans were created by managers. While managers may think that they have great experience, being removed from the work means that their experience is no longer current. The only experts on the details are the people who do the work every day. What managers bring to the table is context, which is why you need them in the room, too.

By including both parties in the planning process, you significantly mitigate risk right from the start. All of the implementation questions are easily resolved simply by getting out of your chair and speaking with the people who have experience in that area. Further, the diversity of viewpoints means that you’ll consider angles that you might have missed.

Rally Software team during Big Room Planning event.
At Rally, we even invite customers to observe our processes.

When you plan together, you get happier people

Bringing together the entire delivery group transforms an email signature and voice on the phone into a living, breathing person. This is someone you’re far more likely to help and support when things get tough. It also helps build the culture of your organization.

Another outcome is commitment. You can’t commit on behalf of someone else. When the people who are doing the work also plan it, the result is not only a plan, but a team of teams who believe in that plan and are deeply committed to delivering on it.

Rally Software Big Room Planning

It may sound crazy, but put everyone in a room for a day or two, and the result is a sound plan. Trust us – it works – time and time again.

The importance of identity in your ALM data strategy

In the first part of this blog series, we presented the analogy that the benefits of blockchain technology are the same benefits we seek to have in an ALM data strategy. This is potentially a multi-million-dollar analogy because there is a lot to gain throughout your product development flow once you tie in the key tenets and benefits of blockchain. Enter, the first benefit, which is identity.

Every time I talk to a customer, I almost always witness a spoof of the old Abbott & Costello “Who’s On First” skit. The simple act of trying to identify ‘the work’ being done in an organization, and by ‘whom’, turns into chaos. It is a very real thing in organizations today. This lack of identity around the work breeds ambiguity and uncertainty, and erodes trust. We want the direct opposite result in product development.

Blockchain technology lowers uncertainty during the exchange of value because it provides identity about What is being transacted, by Whom, and When. We will focus on these three W’s for the moment, although identity certainly gives us clarity around all five W’s. In blockchain, transactions are linked together to provide a chronological history, or audit trail. The same is needed in product development: an overview of all the transactions made from inception to delivery.

In the context of product development, the ability to quickly identify the pertinent information about the ‘work’ being planned and executed, in real time, means you can have quick conversations and make quick decisions; the domino effect is shortened delay times across the lifecycle and tighter feedback loops.

With blockchain, technology is used to eliminate the need for a third-party intermediary to prove the identity of an asset, and who/what/when the work for that asset was completed. Let’s think about that in the context of agile planning. What acts like a third-party intermediary in product development?

Spreadsheets.

Spreadsheets are manually compiled, take a small army of people to do the mundane work, and lead to unproductive conversations about data accuracy – none of which helps build trust or empower our knowledge workers. We go to great lengths to create and maintain spreadsheets, all in the name of simply trying to identify what work is being done, by whom, and when. What if you just had a single registry of all the linked transactions in product development, in real time, to get an accurate big picture at a glance?

In my first blog post, we established the notion that product development is a series of transactions involving a multitude of stakeholders and artifacts. Use technology to identify all the transactions that occur during the product development lifecycle, and identify how these transactions are linked together. Now, have all that identifying data in a single registry to eliminate the need for spreadsheets; would that not be a game changer?
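As a thought experiment, and not a description of any Rally or Broadcom feature, here is a minimal Python sketch of such a registry: each transaction records what changed, by whom, and when, and is chained to the previous entry the way blockchain links blocks, producing a tamper-evident audit trail. The artifact names and event types are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

class WorkRegistry:
    """A single, append-only registry of linked product-development transactions."""

    def __init__(self):
        self.entries = []

    def record(self, artifact: str, who: str, what: str, details=None):
        """Append a transaction: what happened to which artifact, by whom, and when."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "artifact": artifact,
            "who": who,
            "what": what,                     # e.g. "created", "accepted", "linked-to-feature"
            "when": datetime.now(timezone.utc).isoformat(),
            "details": details or {},
            "prev_hash": prev_hash,
        }
        # Chain the entry to its predecessor so history cannot be rewritten silently.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def audit_trail(self, artifact: str):
        """Chronological history of a single artifact, from inception to delivery."""
        return [e for e in self.entries if e["artifact"] == artifact]

# Usage: answer "what, by whom, when" without a spreadsheet.
registry = WorkRegistry()
registry.record("US1234", "alice", "created", {"type": "User Story"})
registry.record("US1234", "bob", "linked-to-feature", {"parent": "F42"})
for entry in registry.audit_trail("US1234"):
    print(entry["when"], entry["who"], entry["what"])
```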

Let’s explore this a bit more; they say technology should help enable your business processes by freeing up capacity and providing data to guide decision making. To obtain that higher degree of trust and certainty in your ALM data, you must be able to identify the provenance, custodianship and attestations of the artifacts in your ALM data set.

Ideally, you should be able to answer the following questions:

  • Can I easily identify what the work item is? E.g. Is it a User Story? Task? Feature?
  • Can I identify the acceptance criteria, target objectives, child artifacts, and what status it is currently in?
  • Can I easily identify who is the owner of those work items?
  • What parent artifact is it linked to? Why is this work important? What problem are we solving?
  • Can I easily pull a full audit trail of the artifact to show all transactions: test cases, investments, tasks, dependencies/risk, scheduled dates, etc.?

Having all this information in a single registry is the foundation stone of business agility. Yet, from my experience, ALM data infrastructure is often treated as an afterthought. If you want to attempt business or enterprise agility, then your technology must act as a robust backplane. Identifying the ALM data from top to bottom, across the different planning horizons, is something you should be solving with technology.

If you are not able to easily identify if your organization is building the right thing, and who/what/when the work is being done, then I would safely assume your product planning and status meetings involve a great deal of amnesia. There are hidden costs to operating in that way.

If you find these concepts interesting, and you have read this far, then may I suggest a couple of simple things to experiment with:

  • Data standardization: Put in place a standard for the work items in the work item hierarchy of your product development. Being able to easily identify the work item assets you manage is a good first step.
  • Single registry: To have the right conversations with the right people at the right time, you need to be able to easily identify and retrieve the information. A single management system that supports one truth throughout the organization is the catalyst for decentralized decision making and empowered teams.

Forrester Research: Why Digital Businesses Require Agile Financial Planning

Disruptive competitors and discerning customers have companies shifting to agile development practices in order to deliver better customer experiences — fast and effectively. However, many organizations are finding that traditional planning and execution methods prevent them from realizing the full benefits of switching to agile. To be successful, companies must shift not only their development practices, but also their strategic and financial planning practices, to an agile mindset by leveraging the power of agile and PPM solution capabilities.

In this webinar, guest speaker Margo Visitacion, vice president and principal analyst at Forrester, will discuss how organizations can evolve toward agile planning and funding practices by:

– Rethinking business cases for agile at scale
– Discovering how agile financial planning requires lean thinking
– Prioritizing investments based on customer lifecycles and value streams
– Driving continuous funding allocations to support an agile business
– Using fast feedback in agile delivery to validate funding decisions

Watch Webinar

The Power of “Defining Done”: A Simple Concept to Ignite Company-wide Change

I see many people struggling to find the benefits of agility. There is so much noise in the market today, so many people telling you how to make radical changes to your organization, that it’s hard to know what to believe and what direction to move in. This blog is the first of a multi-blog series in which I will focus on sharing small changes you can make to create ripple effects of goodness in your organization.

What is Definition of Done?

Let’s start with the ‘Definition of Done’ (DoD). The Definition of Done is not Acceptance Criteria. Acceptance criteria are specific to a story and tell the person working on the story and those who test it how far they need to take it. Acceptance criteria should be specific to that one piece of work and should not be overloaded with things everyone has to do like security scans and unit tests. This is what standardized Definitions of Done are all about.

As a leader in a technology organization, do you have a clear understanding of the state of done in your organization? What does “yes, that’s done” mean to you? Does it mean the same to everyone whose name is aligned to that work? It should.

But don’t go overboard.

Start with defining ‘Done’ for Stories, Iterations, and Releases. Most companies have legacy release criteria already defined. Pull that out, dust it off, and see how close you get with each release. If it’s pretty good, then you have a starting DoD for releases in your organization. If it needs a lot of work, set it aside and start with stories. I’ll create a future post regarding DoD for Iterations and Releases.

Stories: Are we done yet?

For any story in your organization, it’s important to identify what state you want it in prior to it being called done. Here’s an example of one set of criteria that I’ve used at a company.

For Stories, until the following criteria are met, the work does not get shown to the team and product owner for acceptance:

  • All tasks completed (dev, QA, tech pub, UX)
  • QA has run and passed all “happy path” tests
  • No known defects for all new code*
  • Code reviews completed, when needed
  • Unit tests implemented
  • Automated tests implemented for features/areas that the team agreed to automate
  • All development documentation is complete

When I show people this list, I often get told “we can’t do that here.” But I created this at a real company—just like yours—that didn’t start off practicing agility. After sharing this list, there was a lot of grumbling among the teams, who said, “this is impossible; we cannot do that in a 2-week sprint.” However, after about 3.5 years, nearly every team had accomplished these items.

*Zero defects of any severity was our ideal state. I knew we couldn’t get there quickly, but it was where we needed to end up. Every release, each team took one step towards these lofty goals. Eventually, we met them.

I could tell which teams were taking agile principles and practices seriously because they were the first to meet the Definitions of Done. The laggard teams had all the same problems they always had—missed dates, poor planning, overcommitment, and underdelivery. Every release. Yet those teams that worked to hit these goals, and put in the infrastructure and automation needed, actually did hit them—all of them—and morale improved. More on that in a later post.

It’s important to note that we didn’t get there overnight. There’s an evolution to meeting these goals that I will describe in future posts, but establishing these definitions was key to setting that stake in the ground for everyone to strive towards.

Why DoD matters

It’s very important that you create a DoD for all teams, worldwide, who are creating products for your business. At first, they might not be able to meet it because the infrastructure or automation doesn’t exist. But during release planning, they should be able to tell you how close they can get, which then becomes part of their commitment during planning. And with each release or planning cycle, they should be getting closer. Eventually they will get there—it took the company I was working at roughly 3-4 years, and by the end, all but one team was meeting this goal.

The upside was that everyone, from team members to our CTO, knew what ‘Done’ meant. Our defect leakage and cost of maintenance decreased dramatically, while our quality skyrocketed. Teams were being interrupted less frequently to fix bugs from previous releases and they were able to focus on new features and get more to market.

DoD in Rally

You should visualize your Definitions of Done to make them easy for everyone to see. Here are a couple ways you can do this in Rally:

  • Use the Custom HTML app to display your definitions directly on a dashboard.
  • Or post them on your intranet and link them into a dashboard using the Custom URL app.

Either way, make sure that all teams using Rally can see your definitions and what they should aspire to achieve.

As a leader, you should know what “Done” means in your organization. A word of caution—don’t bloat these definitions. Make them attainable if automation, infrastructure, and good agile practices are in place. It makes a huge difference to your organization the minute no one needs to ask whether something is ‘done’, or ‘done-done’, anymore.

Provide a Seamless Customer Experience Across the Enterprise

Global businesses are prioritizing mobile-first strategies that cater to the needs of a growing, tech-savvy customer base that expects frictionless access to online goods and services. But payment fraud shows no sign of letting up, and verifying identities continues to be a tricky proposition for banks as cybercriminals diversify and increase their attacks.

And while financial institutions find themselves in a cat-and-mouse game with fraudsters, customers want financial services that meet their online lifestyle.

I’m the Same Person. Why Is Every Login Different?

It’s simple. Beyond their desire for great products and services, consumers expect three things from their bank and financial service providers—security, privacy, and convenience. What they don’t want is a different authentication experience—biometrics for mobile banking, passwords (weak and/or forgotten) for brokerage, and yet another login for mortgage payments.

And while the majority of millennials purchase goods and services online, according to one study, 38 percent of them abandoned mobile banking activities because the process took too long.

At the same time, concerns about identity theft, wire-transfer fraud, ATM skimming, and credit card scams loom large in the minds of consumers. And rightly so. Payment fraud is on the rise and in the news.

People depend on their financial service providers to protect them from fraud. Even the most sophisticated consumers are unsure how best to protect themselves.

This is driving the need to deliver a consistently positive digital experience across products and services. And with today’s economic, competitive, and regulatory pressures, banks can benefit by offering streamlined cross-channel access—creating a new way to grow customer loyalty and sell more products and services.

Imagine having a single view and deeper understanding of customers, enhancing security, while at the same time offering a common authentication experience. State-of-the-art payment fraud prevention technology is making this happen.

Enter Identity Risk Insight Suite (IRIS)

IRIS—a single cross-channel fraud prevention solution—enables banks to use the same capabilities available in Broadcom’s proven CNP payment security systems across the entire enterprise.

It provides the ability to recognize good, returning customers by piecing together their digital identity from the complex digital DNA users create as they transact online. High-risk behavior can be pinpointed in real time, whether at new account applications, logins, or payments—reducing friction and unnecessary step-ups.
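Conceptually, that real-time decision could look something like the following Python sketch. The signal names, weights, and thresholds are purely illustrative assumptions, not the IRIS model:

```python
def score_event(event: dict) -> float:
    """Combine device, behaviour, and history signals into a 0-1 risk score.

    The signals and weights here are illustrative, not the IRIS model.
    """
    score = 0.0
    if not event.get("device_seen_before", False):
        score += 0.35                     # unknown device raises risk
    if event.get("geo_distance_km", 0) > 500:
        score += 0.25                     # far from the user's usual location
    if event.get("velocity_last_hour", 0) > 5:
        score += 0.25                     # many recent attempts across channels
    if event.get("channel") == "new_account_application":
        score += 0.15                     # higher-risk interaction type
    return min(score, 1.0)

def decide(event: dict) -> str:
    """Let good, returning customers through; step up only when risk is high."""
    risk = score_event(event)
    if risk < 0.3:
        return "allow"                    # frictionless for recognised customers
    if risk < 0.7:
        return "step_up_authentication"   # e.g. one-time passcode or biometric
    return "deny_or_review"

print(decide({"device_seen_before": True, "geo_distance_km": 12, "channel": "login"}))
```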

The solution leverages Broadcom’s robust data set by synthesizing different types of anonymized interaction information. And by robust, we mean the largest consortium of global fraud data—with hundreds of millions of devices associated with billions of global e-commerce payments.

IRIS supports financial services across the enterprise, including online banking, mortgage, insurance, lending, and more. And our patented predictive neural network modeling and risk-based authentication capabilities enable banks to share anonymized customer data across all digital financial services and offerings.

IRIS extends our proven risk-based authentication capabilities—currently applied to the highly regulated payments space—across the enterprise to drive out fraud on all digital channels and satisfy industry mandates.

Bottom line: With a single view and risk-based authentication across the enterprise, banks can detect and prevent fraud in real time. The result is reduced fraud, trusted transactions, and an omnichannel authentication experience for customers.


GDPR – Never Mind the Buzz Words

It’s About People, Not Data

Over an 18-month period, those in the technology industry bore witness to the foretelling of an impending cataclysm, an Earth-shattering event of unprecedented destruction with epic consequences. An all-encompassing state of mass hysteria ensued, instigating a state of doomsday prepping – leaving all but emergency essentials, the mass evacuations (of data!) began, akin to an Orson Welles radio broadcast telling of a Martian invasion, with many simply running to the hills. This was fueled (mostly) by waves and waves of doom-mongering – men in flat caps with sandwich boards, stating ‘The End is Nigh’, with an almost constant ringing of bells (yes, I’m talking about technology vendors) – as we waited in trepidation for the forthcoming onslaught of the vast, meteoric impact of… General Data Protection Regulation (GDPR)!

Yet, as some of us congregated on beaches, holding hands and waiting, with a sense of reluctant acceptance that, this indeed, was it – the devouring, biblical Tsunami of economy-collapsing fines and apocalyptic brand damage that was prophesied turned out to be not much more than an ocean swell… a bit rough out there, but nothing a decent umbrella couldn’t handle. We all breathed a collective sigh of relief, went home, put the kettle on and had a nice cup of tea.

A dramatic start. But this is a recurring scenario: new legislation is announced, or an existing one is updated, and we struggle to understand the context, fail to accurately foresee the consequences, and end up in a whirlwind of differing views and priorities.

Mocking aside, GDPR is an ongoing imperative – one that organisations must take seriously, and one for which large fines have already been levied for infringements, notably against Marriott Hotels and British Airways, to name just two. So how do we dissect the seemingly endless and overly complex wording of the requirement as a whole? How do we then map those prerogatives to a business strategy – one that is ultimately governed by technology tooling?

The Two Major Themes of GDPR

The main categories of GDPR are:

  • Rights of Data Subjects
  • Accountability
  • Data Protection by Design and by Default
  • Data Breach Reporting
  • Anonymisation and Pseudonymisation
  • Cross-Border Data Transfers and Binding Corporate Rules
  • Certifications, Codes of Conduct and Seals

Cut out the jargon and we see two themes emerge: Identity and Data. Indeed, the first emphasis, which makes complete sense, is on the data aspect – it’s right there in the text, and there is already a range of technologies that can address it: anonymisation technologies (data and identity), data classification and governance, secure transfers and integrity. These are the points that can be addressed early on – we can look at these subheadings and, as industry ‘experts’, recommend a course of action and the tools to help achieve and ultimately maintain compliance.

But let’s look at the other aspect, Identity, which, in context, has the most bearing of all. Let’s take an example: Data Breach Reporting – simply a case of reporting a data breach (something that some of the people I’ve spoken to who are responsible for GDPR in their organisations accept as an inevitability) in a timely manner, mitigating the impact on those individuals (customers for the most part) affected. Now, reporting a breach may reduce or even eliminate the chance of a hefty penalty, but it doesn’t prevent the damage done to a company’s reputation, which nine times out of ten will cost a whole lot more. So isn’t prevention better than the cure?

The first thing that emerges when discussing data breaches is, perhaps not surprisingly, data – the protection, classification and integrity of that data, building a wall around it – which is perfectly valid. But what if we look at it another way? Data doesn’t ‘leave the door’ of its own accord; someone (a human being, whether a single person or a group) causes that breach, and they do it for a reason – money, activism, terrorism, etc. That’s the why. So now let’s look at the how. If we delve into the statistics, 74% of data breaches involve the compromise of privileged accounts – a big number! With so many privileged accounts – Operating System, Database, Cloud services, Application accounts – in an organisation, and now in so many locations – co-locations, cloud platforms, cloud services, exposed devices, IoT – gaining access to data via a privileged account has to be the most attractive target for any would-be attacker.

It comes back to identity – an attacker (an identity) wants to access an organisation’s data using a privileged account (an identity) that has access to that data, so a successful defensive strategy should be identity-based, and that’s where Privileged Access Management comes in. The ability to control, monitor and enforce who has access to privileged accounts is key; combined with secure, centralised management of those accounts, it provides a powerful combination – the foundation of a preventative strategy against the risk of a data breach. These capabilities also need to extend out to cloud, IoT and the applications that use privileged accounts to run, giving a comprehensive policy of control and action in a post-digital-transformation world.
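In practice, that preventative control boils down to a brokered, audited check before any privileged credential is released. Here is a minimal, generic Python sketch; the identities, accounts, and policy rules are invented for illustration, and this is not a description of any specific PAM product:

```python
from datetime import datetime, timezone

# Who may check out which privileged account, and under what conditions (illustrative only).
POLICY = {
    ("alice", "prod-db-admin"): {"requires_mfa": True, "max_minutes": 30},
    ("deploy-bot", "app-service-account"): {"requires_mfa": False, "max_minutes": 10},
}

AUDIT_LOG = []

def checkout(identity: str, account: str, mfa_passed: bool) -> bool:
    """Grant time-boxed access to a privileged account only if policy allows it."""
    rule = POLICY.get((identity, account))
    allowed = bool(rule) and (mfa_passed or not rule["requires_mfa"])
    # Every attempt is recorded, successful or not, so access can be monitored.
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "identity": identity,
        "account": account,
        "allowed": allowed,
    })
    return allowed

print(checkout("alice", "prod-db-admin", mfa_passed=True))    # True, and logged
print(checkout("mallory", "prod-db-admin", mfa_passed=True))  # False, and logged
```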

In summary, GDPR is already here and organisations don’t want to be the next headline – a Privileged Access Management strategy goes a long way toward reducing the risk. And it won’t end with GDPR… organisations need to be ready for the next piece of legislation.

Forced Re-Authentication: What Is It?

As businesses move toward engaging their customers and employees through more digital experiences, there is an increasing risk to security due to the widening of the attack surface. This is driven by the adoption of emerging technologies across distributed architectures and the proliferation of devices and other digital interfaces.

With the anywhere, anytime access expected by consumers, identity becomes the new enterprise perimeter, maintaining data security and privacy while granting appropriate access based on the user, device, and application. But security concerns don’t end when a user successfully authenticates.

Session hijacking is a growing and dangerous attack vector, potentially leading to malicious actions being performed using a legitimate user’s identity. Despite the many session hijacking mitigation techniques that one may have put in place, there can be instances where a session can still be compromised. This is where dynamic, forced re-authentication can come to your rescue and add another layer of protection to reduce the risk of such scenarios in your applications.

For some time, SiteMinder has offered the ability to designate sensitive resources and force a user to re-authenticate to access those resources, even if the user is logged in and has a valid SMSession cookie. Forced re-authentication provides additional protection for designated resources and has been extended in our 12.8.3 release to work with Layer7 Advanced Authentication as well. This means that you can force a just-in-time risk computation and then dynamically enforce step-up multi-factor authentication (MFA) by Advanced Authentication every time the sensitive resource is accessed. This enhanced re-authentication capability in SiteMinder assumes that the user has initially logged in and that a valid SMSession exists for the user.

This feature will be very useful in scenarios dealing with sensitive resources while not adding additional friction to more mundane requests. The “Transfer Money” section of an online banking website is one example of a sensitive resource. Normally, once a user logs in, they would not have to re-authenticate to transfer money. By defining the money transfer section as a sensitive resource, users must provide their credentials before accessing that section. This sensitive resource protection prevents an unauthorized user from transferring money in case the user stepped away from their system without first logging out or the existing session was compromised in a cross-site scripting attack. By challenging the user to re-confirm their identity, it prevents unauthorized users from taking advantage.
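The flow can be pictured with a small, framework-agnostic sketch in Python. This only illustrates the concept; it is not SiteMinder configuration or its API, and the paths and thresholds are assumptions:

```python
import time

SENSITIVE_RESOURCES = {"/banking/transfer-money"}   # resources that always re-challenge
REAUTH_MAX_AGE_S = 0            # 0 = require fresh credentials on every access

def handle_request(path: str, session: dict) -> str:
    """Assume the user already holds a valid session; force re-auth on sensitive paths."""
    if not session.get("valid"):
        return "redirect_to_login"

    if path in SENSITIVE_RESOURCES:
        age = time.time() - session.get("last_authenticated_at", 0)
        if age > REAUTH_MAX_AGE_S:
            # Just-in-time risk check plus step-up MFA before releasing the resource.
            return "challenge_step_up_mfa"

    return "serve_resource"

session = {"valid": True, "last_authenticated_at": time.time() - 600}
print(handle_request("/banking/accounts", session))        # serve_resource
print(handle_request("/banking/transfer-money", session))  # challenge_step_up_mfa
```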

Please refer to DocOps for more information on how to configure this capability.

Forced re-authentication is one of many SiteMinder capabilities to ensure high session security. Read about some more highlights here.

Code Smarter, Not Harder

We are coding software faster than ever before. A recent survey showed 85 percent of developers use Agile methodologies. Another 75 percent of coders expect new coworkers to be productive in three months, with a third saying it should take less than 90 days. Continuous delivery is thus becoming more real every day, as organizations look for new and innovative ways to improve speed without compromising quality or cost.

The days when organizations could just ask employees to work harder are long gone – or mostly gone. The focus now is on working smarter. From no-code to low-code, IT departments around the world have tried to squeeze efficiency out of all kinds of innovative technologies.

Yet there’s a more fundamental, and far simpler, way of working smarter.

Just improve the way software development is integrated into the rest of the organization – not from a technical standpoint, but from a business and cultural perspective.

Traditional agile development focuses on software that delights customers, and that’s good. But it’s far more important to develop solutions that delight both customers and your own organization. You need a solid foundation, a loyal team, to keep customers happy. You need to understand not just what customers want, but what your organization wants, why it wants it, and how it supports the business.

Give your teams that context, and they’ll give you better solutions all round.

Fighting Fraud with Data Science

“Predictions are hard—especially about the future.” – Yogi Berra

Beyond a doubt, eCommerce crime is big business. And CNP fraud accounts for 81 percent of it—equaling billions of dollars in losses annually, according to Javelin Research.

Today’s reality is that fraudster behavior is quickly adapting to the many ways issuers attempt to prevent it, while legitimate customer transactions can look like fraud. Conventional authentication methods are not enough to keep up with, let alone stay ahead of, these trends.

It takes advanced data science—predictive analytics, neural networks, and machine learning—to change the game in reducing fraud risk while maintaining a positive cardholder experience.

Big Data + Deep Expertise

With real-time analytics, every transaction, such as a log-in event or CNP purchase, is examined using contextual data. This analysis can make fine-grained decisions about any transaction’s implied or inherent risk.

For CNP fraud prevention, predictive analytics is nothing new. But effective analytics take an incredible amount of relevant, globally diverse risk and fraud data. We’re talking about—just as an example—hundreds of millions of devices associated with billions of global e-commerce payments. Plus, the known good or bad behavior associated with these devices and transactions—across both issuers and merchants.

Yet, as important as this data is, it’s easy to overlook that the expertise required to wrap data science around it is just as critical. As analytics become more prevalent in fraud prevention schemes, it’s easy to make mistakes and misapply machine learning algorithms. There are plenty of war stories where a model was biased in the wrong direction.

This proficiency is critical to knowing how to accurately apply the techniques of data science to understand legitimate and fraudulent behavior in the context of the individual cardholder, in real time. And while we’re talking about real time, it’s important to understand that some data models are based only on confirmed historical fraud—essentially chasing after it instead of predicting fraud before it happens.

In half of all CNP fraud schemes, the second transaction occurs within 3.6 minutes of the first, and in 15 percent of these cases it’s less than one second. So when it comes to major fraud events, it’s all about quickly recognizing the first fraudulent transaction to avoid the second, the third, and so on.

This can happen only with sophisticated analytics, using neural networks and a system that continues to learn from all purchase transactions—as they happen.
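To make the point about catching the first fraudulent transaction concrete, here is a toy streaming check in Python. The feature names, weights, and thresholds are illustrative assumptions, not Broadcom’s production analytics, which rely on neural network models rather than hand-set rules:

```python
from collections import defaultdict

last_seen = defaultdict(lambda: None)   # card_id -> timestamp of the previous transaction
flagged = set()                         # cards blocked after a suspicious event

def score(txn: dict) -> float:
    """Toy contextual score: transaction velocity plus a couple of illustrative signals."""
    risk = 0.0
    prev = last_seen[txn["card_id"]]
    if prev is not None and txn["ts"] - prev < 60:
        risk += 0.5                     # a second purchase within a minute is unusual
    if txn.get("new_device"):
        risk += 0.4                     # unfamiliar device
    if txn["amount"] > 1000:
        risk += 0.3                     # unusually large purchase
    return risk

def process(txn: dict) -> str:
    """Score each transaction as it arrives; once a card is flagged, later attempts never clear."""
    if txn["card_id"] in flagged:
        return "decline"
    decision = "decline" if score(txn) >= 0.7 else "approve"
    if decision == "decline":
        flagged.add(txn["card_id"])
    last_seen[txn["card_id"]] = txn["ts"]
    return decision

print(process({"card_id": "c1", "ts": 0,  "amount": 1200, "new_device": True}))   # decline
print(process({"card_id": "c1", "ts": 30, "amount": 20,   "new_device": False}))  # decline
```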

Finding the Right Balance

In the end, when data science is accurately applied to payment fraud prevention, it allows issuers to find the right balance between risk mitigation and cardholder experience. The result is security that doesn’t get in the way of genuine online transactions, minimizes fraud, avoids false declines, and keeps cardholders happy.

Bottom line? More data means more powerful solutions built on predictive analytics. But building the advanced machine learning needed to optimize user experience and drive out fraud depends on how the data is leveraged.

And that’s where our world-class team of data scientists—with a combined hundreds of years of experience in payment fraud prevention—lead the way.

Redefining the Customer Experience

A closer look at how some organizations are truly redefining the customer experience

The world is now in the early stages of the Fourth Industrial Revolution – the Digital Age. The digital revolution that began in the middle of the last century has exploded into the App Economy, which is fundamentally altering the way we live, work, and relate to one another. Software is the key driver of growth, innovation, and efficiency in this new age, but how you deliver it says a lot about how you’ll be able to compete in this world.

Consumer demands and expectations have never been higher, and neither has the number of options from which to choose. How do you ensure that you are providing the best combination of features, security, and convenience? One area that is getting a lot of focus is friction – specifically, how do you minimize the friction you add to the customer journey? Shep Hyken, New York Times bestselling business author, digs into ten key ways businesses trigger points of friction in his new book, The Convenience Revolution. But in this blog, we will focus on two examples of companies that addressed friction using technology.

Ending a Cruise and Saving Face

In 2013, my wife and I took our son on his first cruise. We had been struggling to find that perfect getaway that was super easy and convenient for the parents, but also fun for a child. Someone recommended a cruise, and that fall we sailed from Baltimore to Bermuda. We were hooked, and cruises have been a part of our annual vacation planning ever since. But as we moved to larger ships, we were struck by one annoying thing – something that ruins almost any travel experience: security.

At the end of our relaxing and peaceful vacation, we disembarked the ship only to be faced with the lines for customs and immigration.  It is sad enough waking up knowing that your vacation is over, and it is now time to get back to work, but leaving the ship and standing in line with 6,000 fellow passengers is just painful. Enter technology.

Royal Caribbean knew that their customers were frustrated by the customs process, and wanted to reduce this friction, so they partnered with the US Customs and Border Protection (CBP) to introduce facial recognition at the Port of Miami and Cape Liberty. It was a truly inspired idea.

When each passenger checks in before their cruise, their picture is taken and loaded onto their sea pass card. This is used to identify the passenger when they get on and off the ship in different ports. This same technology can be used to clear the passengers through immigration by comparing facial scans taken at the beginning of the cruise with those taken at departure, and by also comparing them to a government database of passport photos. The process takes about 2 seconds per passenger, making this final step of clearing customs an extremely fast, secure, and frictionless process.

Banding Together to Extend the Magic

Along this same theme, Disney also wanted to transform the customer experience within its parks. If you have never had the opportunity to visit the beast that is Walt Disney World, especially with young children, then you have missed one of the true modern-day rites of passage for a parent. Just to put things in context: the Royal Caribbean cruise ship Symphony of the Seas is the world’s largest and carries, on average, 5,400 passengers each voyage. This equates to about 280,000 visitors annually. Walt Disney World averages about 250,000 visitors a day. Which of these sounds like a more relaxing and peaceful vacation to a parent?

But like Royal Caribbean, Disney is also laser-focused on delivering the best experience to its guests. In fact, some would argue that Disney sets the bar for creating the ultimate customer experience. And in 2015, Disney took a $1 Billion bet on IoT when it introduced the MagicBand. Disney recognized that lines of any kind detracted from the magical experience that they were trying to create. The MagicBand helps to do this.

Worn on each guest’s wrist, the MagicBand transmits a signal more than 40 feet in every direction. It can do everything from unlocking your hotel door to ordering food and buying merchandise with the wave of a hand. It can also notify restaurants of your impending arrival, so the hostess can greet you by name and your food order can be processed to minimize your wait. And not only do the bands work like magic, they have also been adopted by the cruise industry: Royal Caribbean introduced similar bands on its Oasis- and Quantum-class ships, and Princess introduced the Ocean Medallion in 2017.

Summary

Royal Caribbean and Disney both leveraged technology to remove friction and enhance customer experience, but both were also addressing security. In the first case, Royal leveraged facial recognition to positively identify users to remove friction from the customs and immigration process. They were authenticating the users and under the covers, were comparing digital images against those housed in a government database via API calls. And what about Disney?  They are using an IoT device to make API calls to identify the user, to unlock a hotel room door, and to conduct payment transactions. This is exactly what Layer7 delivers – one portfolio to secure and accelerate digital transformation.

User Experience Friction in Security is Real

Here’s how analytics can help manage it

There’s a delicate balancing act when it comes to access, the user’s experience and how much friction is added for security.

If there’s one area in software where we need to dynamically modulate user experience friction (UXF), it’s in security. And for good reason. While we always want to provide a great user experience with as few security check points as possible, sometimes there’s a need for additional authentication measures to ensure a user is who he or she claims to be. The trick is knowing when you truly need to step up authentication.

User experience friction – real and not-so-spectacular

According to Pfeiffer Consulting, UXF is basically the slow-down or friction that occurs when the user experience deviates from our expectation or knowledge.

When designing security features and introducing them into the user experience, we find ourselves in a delicate balancing act between security and accessibility. This tends to be a zero-sum game, with any incremental increase in security resulting in greater friction for the user to overcome.

Externalities like friction have been studied in economics, which provides us with at least some analogous wisdom. For instance, in the hypothetical scenario where we introduce incremental increases in production, what is the result? We introduce pollution, pure and simple. Efficient production comes at a cost.

Applying this to security gives us a corollary – if you want to have a secure user engagement, you’re going to have to accept some friction in the user experience. This brings us right back to the delicate balancing act of security and accessibility.

What if we could have it all – a great user experience with a stronger level of security – without the friction?

What is Shift-Right Testing?

Before we begin to talk about shift-left or shift-right testing, it’s helpful to visualize the steps of typical software development in a straight line, or what would be considered a standard waterfall development process:

In this process, testing would occur at the end of the development lifecycle, usually run by a team of expert testers. Any bugs or errors that were found would result in lengthy changes, with a long deployment process.

Testing only before release meant that testing was frequently a bottleneck in the development process. As more and more companies embraced agile software development, shift-left testing emerged as key to faster and more efficient software development.

What do we mean by Shift Left Testing?

Shifting left refers to the idea of performing actions earlier in the development process. As it relates to testing, shift-left testing means applying testing practices earlier in the development process than is usually common. When we talk about shift-left testing, we mean that instead of testing occurring only at the end of the development process, testing now happens within every stage of development. Instead of waiting until the end of the process to test the entire application, testing occurs in much smaller units, and much more frequently.

What do we mean by Shift Right Testing?

Shifting right refers to the idea of performing actions later in the development process, usually in the steps after deployment or release. Shift-right testing practices can be applied not only to the process of releasing software, but also to the processes of deploying, configuring, operating, and monitoring applications. These testing practices are also closely tied to DevOps and its rise in popularity and implementation across industries and teams. Under a DevOps model, development and operations teams work much more closely together, and engineers can have many more responsibilities that extend past the point of simply deploying an application to production.

Shift-right testing practices such as API monitoring, feature toggles, and using production traffic to test applications are just a few examples of how teams can extend their continuous testing culture to a more holistic approach across the development lifecycle.
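As a small illustration of one of those practices, here is what a basic synthetic API monitor could look like in Python. The endpoint, thresholds, and scheduling are placeholder assumptions, not a prescription for any particular tool:

```python
import time
import requests

CHECKS = [
    # Placeholder endpoint and expectations; swap in your own services.
    {"name": "orders-api", "url": "https://example.com/api/health", "max_ms": 500},
]

def run_checks() -> list:
    """Probe production endpoints and report anything slow or failing."""
    problems = []
    for check in CHECKS:
        start = time.monotonic()
        try:
            resp = requests.get(check["url"], timeout=5)
            elapsed_ms = (time.monotonic() - start) * 1000
            if resp.status_code != 200:
                problems.append(f'{check["name"]}: HTTP {resp.status_code}')
            elif elapsed_ms > check["max_ms"]:
                problems.append(f'{check["name"]}: slow ({elapsed_ms:.0f} ms)')
        except requests.RequestException as exc:
            problems.append(f'{check["name"]}: unreachable ({exc})')
    return problems

if __name__ == "__main__":
    # Run on a schedule (cron, CI job, etc.) and alert on any problems found.
    for problem in run_checks():
        print("ALERT:", problem)
```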

A more accurate representation of an agile process that follows the DevOps model will look like an infinity symbol:

By shifting our testing left, and simultaneously shifting our testing right, we are now testing at every stage of the software development cycle. This is known as Continuous Testing.

Tests Run When Shifting Left

So, when we talk about shift-left and shift-right testing, we are really talking about transforming every step of the development process by applying different testing practices to it, no matter what approach your team might use.

For shift-left testing, some of these practices include:

Tests Run When Shifting Right

Shift-right testing means focusing on applying testing practices after our application has been deployed or released. That can include:

How Can Shift Right Testing Benefit Your Development Process?

Applying practices of shift-right testing to a team can have a big impact on how applications are treated after deployment.

Shifting right creates a continuous feedback loop from real user experience with a production application back to the development process. As more and more teams go through agile transformations and move from deploying once every few months or once a year to every few weeks, every few days, or on demand, that continuous feedback loop between real users and development becomes crucial for successful teams.

How Can Shift Right Testing Benefit Your Business?

Shift-right testing can also have a big impact on not just development teams, but on product, customer success, and management teams as well.

Applying test practices to production means bringing real-world users and their experiences into the development process. For example, certain companies will duplicate live traffic to their production application and route it to their staging environment, so they can test an application with live traffic before deploying it to production.
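
As a rough sketch of that idea, the Python snippet below mirrors each production request to a staging environment on a best-effort basis. The PROD_URL and STAGING_URL endpoints are hypothetical, and real deployments usually mirror traffic at the load balancer or service mesh rather than in application code.

```python
# Illustrative sketch of request mirroring: serve production normally while replaying a
# copy of each request to staging for testing against live traffic.
import threading

import requests

PROD_URL = "https://api.example.com"         # hypothetical production backend
STAGING_URL = "https://staging.example.com"  # hypothetical staging backend


def _safe_replay(method: str, url: str, **kwargs) -> None:
    """Replay a request to staging; mirrored traffic is strictly best-effort."""
    try:
        requests.request(method, url, timeout=5, **kwargs)
    except requests.RequestException:
        pass  # failures in staging must never affect production users


def mirror_request(method: str, path: str, **kwargs) -> requests.Response:
    """Send the request to production and asynchronously replay a copy to staging."""
    threading.Thread(
        target=lambda: _safe_replay(method, STAGING_URL + path, **kwargs),
        daemon=True,
    ).start()
    return requests.request(method, PROD_URL + path, **kwargs)


# Usage: mirror_request("GET", "/orders/123")
```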

By creating a continuous feedback loop between users and development, you can delight customers by collecting their feedback, fixing or improving your application, and following up with them. It means your application evolves not just from internal input, but also from your users.

With automation and monitoring practices, it also means your team can have more confidence in your application’s resilience and in fast deployment practices, delivering continuous quality to users.

Self-Driving Continuous Delivery?!

Your journey to continuous delivery is a lot more closely related to autonomous driving than you may think. In fact, just like continuous delivery, self-driving cars are all about levels of automation. And intelligence, you may ask? Intelligence is a level of automation in and of itself!

Self-Driving Cars… 

Self-driving cars are all the rage these days, but they are not really here yet. If you dig a little deeper, you’ll quickly find that the NHTSA (yes, that exists; it’s the National Highway Traffic Safety Administration in the USA) has adopted what it calls the 5 levels of automation (6 levels, actually, if you count level zero, “no automation”, as a level). You can read about it on the NHTSA website here.

The 5 Levels of Automation in Cars

The interesting thing to notice about the NHTSA stairway is that it doesn’t really mention artificial intelligence or deep learning at all. Instead, it makes its distinctions based on the “level of automation” a vehicle is capable of. For example, level 2 is defined as “partial automation,” where “the vehicle has combined automated functions … but the driver must remain engaged with the driving task and monitor the environment at all times.” At level 3, the driver is still responsible for driving and must remain alert, even though the vehicle practically handles all the driving (some modern cars, like Teslas and others, are considered almost level 3). At level 4, the car can drive autonomously under certain conditions (e.g., highways, specific weather conditions), and at level 5 the driver is completely optional, or indeed, not even given the option to control the vehicle.

Where is the Intelligence?

When you explore these increasing levels of automation and think more closely about the technology required to achieve each new level on the NHTSA list, the necessity of artificial intelligence or deep learning technologies starts to become clearer. For a car to drive safely in a busy city and avoid hitting the pedestrian who suddenly jumps into the road (or at least give it its best shot, better than a human driver could), the car needs to “understand” the situation in a split second, even if it has never been in a similar situation before, just like most of us. This means it needs to “learn” about many things before it ever encounters them in real life; it needs to be “taught how to react.” Intelligent stuff. Or consider the vehicle’s ability to adjust to rain and snow. Without delving too deep into the philosophical, I hope we can all see there is hardly even a thin line between what counts as “intelligence” and what counts as “automation” in this context. The key to setting goals for ourselves is not an IQ test but the specific capabilities and scenarios (i.e., use cases) our cars can handle for us. The same logic applies, and should be considered, for digital transformation and for continuous delivery as a key component of it.

Humans Don’t Ship in Containers

Surely this whole story is completely irrelevant to continuous delivery or release automation! Or is it? My argument here is that any transformation of a human activity into an automated one follows a similar, staged evolution. Recognizing the main levels for your particular circumstance is key to ensuring your goals are set and monitored correctly, in a structured way forward that stays in your line of sight. We may give a continuous delivery transformation far less importance and attention than a self-driving car because we do not have to, literally, sit in it. However, digital transformation and continuous delivery both share a progress continuum that revolves around levels of automation, not dissimilar to those set by the NHTSA for cars.

The 5 Levels of Automation in Continuous Delivery – Welcome Intelligent Pipelines

Much like a level 2-3 autonomous vehicle, many organizations already boast automation in their release pipelines, and even “continuous delivery.” But, much like autonomous vehicles, there is not one organization in the world today where human supervision of the CD process is truly redundant. Shall we call that level 5 CD? We may have autonomous deployment processes and some autonomous rollback implemented in some instances (high-end cars, anyone?), but when you consider true continuous delivery, from ideas to user feedback, we can’t ignore the gaps that still need to be overcome before we can claim “level 5”: removing human supervision of the process, and particularly of quality assurance, as an integrated part of “self-driving continuous delivery” vehicles.

Read more about intelligent pipelines here.

Free trial of Automic Continuous Delivery Director here.

Join or Die: The Case for Unifying the API Lifecycle to Transform Digital Experiences

Join or Die, and its Relevance Today

In 1754, a political cartoon attributed to none other than Benjamin Franklin appeared. The cartoon depicted a severed snake, with each piece labeled to represent one of the American colonies. Beneath the picture were these words: “Join, or Die.”

The cartoon made a direct, easily understood appeal to readers: The only way the colonies could survive would be through uniting, and working together to pursue shared objectives and defeat a common enemy. Why the history lesson? It struck me recently that dev, sec and ops teams tasked with managing APIs aren’t all that different from our American colonists.

On today’s competitive battleground, an organization’s success is increasingly determined by its digital prowess. Digitally advanced companies and new technologies are disrupting competitors and inventing new markets. That’s why adoption of clouds, containers, service mesh, and other modern architectures is so pervasive. For these efforts to truly pay off, however, teams that once worked in isolation now need to collaborate and operate in a unified way. And the stakes are high: if teams keep operating independently, the business’ very survival could be on the line.

Unifying the API Lifecycle

When it comes to unifying previously disparate teams, APIs represent a strategic asset. By uniting data and logic from many distributed systems, APIs play an integral role in the modern application development architecture.

Just like software, APIs have a lifecycle, which must now be managed in an optimal, intelligent, and unified fashion. This is a key requirement in order to fundamentally advance development, agility and insight so teams can thrive amidst disruption, and deliver the optimized digital experiences customers and employees require.

Now more than ever, it’s vital to effectively manage the entire API lifecycle, including planning, creation, testing, security, management, discovery, development, and observation. Each of these efforts has a critical role to play in the digital experience users ultimately receive. However, the power of APIs won’t truly pay off if these efforts are handled in isolation. Following are a few key areas where teams can realize the biggest benefits by establishing a solution that enables a truly unified API lifecycle approach.

Testing

With leading solutions, teams can effectively capture clear requirements and model optimal API test designs. As soon as API code is written, developers should be able to test it. Solutions should also enable developers to test on their local machines, and then seamlessly push code into enterprise-grade testing tools.

To foster optimal collaboration, teams should be able to share test assets across the business. This helps maximize test coverage while reducing waste throughout the lifecycle.

Finally, teams need to continue validating APIs in production, employing the same scripts that were used to do API testing in development, in order to monitor API calls in real-world conditions. With these advanced and integrated capabilities, teams can establish a continuous feedback loop that feeds insights back into the development process.
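One way to picture this, as a hedged sketch rather than a reference to any specific Broadcom tool, is a functional API check written once and reused both as a pre-release test and as a scheduled production monitor. The endpoint, base URLs, and assertions below are illustrative assumptions.

```python
# Sketch: the same functional check runs in CI during development and on a schedule
# against production, feeding results back into the development process.
import requests


def check_orders_api(base_url: str) -> None:
    """Functional assertion that can run in CI or as a production monitor."""
    resp = requests.get(f"{base_url}/orders", params={"limit": 1}, timeout=10)
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    assert resp.headers.get("Content-Type", "").startswith("application/json")
    assert isinstance(resp.json(), list), "expected a JSON array of orders"


if __name__ == "__main__":
    # In development: check_orders_api("http://localhost:8080")
    # In production monitoring: run the same check on a schedule against the live URL.
    check_orders_api("https://api.example.com")
```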

Security

Today’s teams need robust, easy-to-implement security controls. Solutions should provide pre-built policies that can be leveraged immediately and adapted efficiently. Developers should be able to use native SDKs and pre-built backend services to create rich, secure experiences, without having to write thousands of lines of complex security code.

With advanced solutions, teams can get visibility and fine-grained control over who has access to APIs at run time, based on details about the end user, device, application, context, and transaction. Plus, teams can leverage rich threat protection capabilities and advanced threat analytics. Solutions should feature advanced capabilities, including frictionless biometric login, step-up authentication, single sign-on to multiple applications, and secure session transfer between devices.
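The sketch below illustrates, in generic Python rather than any particular gateway’s policy language, how an access decision might combine user, device, and transaction context and trigger step-up authentication for riskier requests. The roles, thresholds, and outcomes are assumptions chosen purely for illustration.

```python
# Generic sketch of context-aware API access control (not any specific product's policy
# engine): the decision combines user, device, and transaction context, and can demand
# step-up authentication for riskier requests.
from dataclasses import dataclass


@dataclass
class RequestContext:
    user_role: str            # e.g. "customer", "admin"
    device_trusted: bool      # device previously registered and verified
    transaction_amount: float
    mfa_completed: bool       # whether step-up authentication already happened


def authorize(ctx: RequestContext) -> str:
    """Return 'allow', 'step_up', or 'deny' based on the request context."""
    if ctx.user_role not in ("customer", "admin"):
        return "deny"
    # Risky combination: a large transaction from an untrusted device.
    if ctx.transaction_amount > 1000 and not ctx.device_trusted:
        return "allow" if ctx.mfa_completed else "step_up"
    return "allow"


# Usage: authorize(RequestContext("customer", False, 2500.0, False)) -> "step_up"
```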

Observation

To be more productive and deliver more value, today’s DevSecOps teams need intelligence and automation. Teams need to get insights into real-world usage of APIs and applications, and observe activities across the entire API lifecycle. With this comprehensive visibility and traceability, teams can quickly determine the root cause of an issue, proactively deliver code changes where needed, and mitigate quality concerns well before end users are affected.

Solutions should help teams learn continuously, so they can keep improving the quality of their APIs. It is important that solutions constantly sift through massive volumes of operational and testing data to deliver actionable insights. With these capabilities, teams can spot potential problems, including gaps in test plans and security threats. In addition, they can uncover new opportunities and user requirements. By aggregating test metrics from development and production, teams can establish effective baselines and troubleshoot more efficiently. Together, these capabilities help teams streamline their releases while boosting the resiliency of their APIs.
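As a simple illustration of establishing baselines from aggregated metrics, the following sketch computes a latency baseline from historical samples and flags values that drift far beyond it. The sample data and three-sigma threshold are assumptions, not a description of how any specific solution works.

```python
# Sketch: build a latency baseline from aggregated test/production samples and flag
# responses that drift well beyond it, so problems surface before users notice.
import statistics


def build_baseline(samples_ms: list[float]) -> tuple[float, float]:
    """Return (mean, stdev) latency from historical samples."""
    return statistics.mean(samples_ms), statistics.stdev(samples_ms)


def is_anomalous(latency_ms: float, baseline: tuple[float, float], sigmas: float = 3.0) -> bool:
    """Flag a latency sample that exceeds the baseline by more than `sigmas` deviations."""
    mean, stdev = baseline
    return latency_ms > mean + sigmas * stdev


history = [120.0, 131.5, 118.2, 127.9, 140.3, 122.7]  # illustrative latencies in ms
baseline = build_baseline(history)
print(is_anomalous(480.0, baseline))  # True: investigate before end users are affected
```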

Conclusion

While the phrase “Join, or Die” was penned hundreds of years ago and for a very different purpose, the concept carries an important message for today’s DevSecOps teams. In today’s competitive digital battlefield, the market victors will be the ones that can streamline innovation and deliver consistently optimized digital experiences. Teams simply can’t meet these objectives if they’re operating in an isolated, fragmented way. By leveraging advanced API lifecycle management solutions, teams can harness an invaluable asset in boosting collaboration and accelerating innovation to transform digital experiences.

Why SREs are Protectors of the User Experience

Being a site reliability engineer isn’t easy. As described by Andrew Widdowson, “it’s like being a part of the world’s most intense pit crew. We change the tires of a race car as it’s going 100mph.”

Known as the “automaters,” SREs are often asked to observe application environments and manage incidents… at all hours of the day. Because, as everyone knows, when your app is down, so is your business.

The SRE’s job is to secure a flawless user experience. To deliver site reliability. SREs bridge Dev and Ops, ensuring new releases improve the product rather than breaking it.

The Challenge

The trouble with monitoring application environments is that there are hundreds of thousands of monitoring data points. How do you prioritize which data points are useful, and which can be ignored? Alarm storms aren’t helpful. They prompt panic, instead of resolution.

…And when a crucial incident does occur, how do you quickly mitigate it? The common SRE approach is to spend a ton of time and energy manually sifting through data – often at the expense of other initiatives, or worse, personal time (e.g. responding to the dinner-time incident alert).

What if you could get to that Aha! moment faster? What if instead of the typical hair-on-fire response, you had a trusted guide that could quickly lead you to the source of the incident?

Automation.ai as your trusted guide

What if you could empower SREs with the insights needed to drive improvements? What if instead of the typical war rooms and on-call burn out, SREs had a trusted guide to quickly fix problems?

Broadcom provides a “just add water” approach that can help your IT teams automate incident response through Automation.ai, our AI-driven, self-healing platform. Leveraging our deep domain expertise, we can help your SRE teams prevent alert fatigue by continuously triaging alerting rules, using a combination of notification rules, process changes, dashboards, and machine learning (ML) to proactively monitor the SRE four golden signals and measure what really matters for customer experience.
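
To ground the idea of the four golden signals (latency, traffic, errors, and saturation), here is a minimal Python sketch that computes them over a window of request records and decides whether an alert deserves a human’s attention. The record format and thresholds are illustrative assumptions, not Automation.ai’s implementation.

```python
# Sketch: compute the four golden signals from a window of request records and page only
# on user-impacting symptoms, rather than on every noisy data point.
from dataclasses import dataclass


@dataclass
class Request:
    latency_ms: float
    status: int


def golden_signals(window: list[Request], cpu_utilization: float) -> dict:
    """Summarize latency, traffic, errors, and saturation for one observation window."""
    errors = sum(1 for r in window if r.status >= 500)
    latencies = sorted(r.latency_ms for r in window)
    p95 = latencies[int(0.95 * (len(latencies) - 1))] if latencies else 0.0
    return {
        "latency_p95_ms": p95,
        "traffic_rps": len(window) / 60.0,        # assuming a 60-second window
        "error_rate": errors / max(len(window), 1),
        "saturation": cpu_utilization,
    }


def should_page(signals: dict) -> bool:
    """Alert only when users are likely affected; thresholds are illustrative."""
    return signals["error_rate"] > 0.05 or signals["latency_p95_ms"] > 1000


# Usage:
window = [Request(120, 200), Request(90, 200), Request(1500, 500), Request(110, 200)]
print(should_page(golden_signals(window, cpu_utilization=0.62)))
```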