To keep pace with sweeping changes in the technology landscape, companies must make radical improvements in their business insights and operations. That means adopting solutions with artificial intelligence (AI) and machine learning capabilities to gain new levels of speed, intelligence, and sophistication.

In our last post, we examined why artificial intelligence (AI) and machine learning are becoming critical for today’s enterprises. We also introduced Digital BizOps from Broadcom, an innovative solution that addresses the growing demands businesses face. Powered by the industry’s first AI-driven software intelligence platform, Digital BizOps enables teams to continuously improve decision making and the execution of digital initiatives. In this post, we offer a detailed look at the architectural approaches Broadcom has taken in developing the platform.

How We Do It: The Architecture

In developing the platform, we set out to create a modern platform for organizations, one that provides a comprehensive model for business, IT operations, and development. Instead of trying to rearchitect our existing tools, we chose to establish an intelligence layer that could sit on top of a number of specific solutions. We also set out to make it possible to access data from across a number of silos, while preserving the context of the source content and enabling this context to be shared efficiently. This contextual, comprehensive visibility is vital in delivering the actionable insights today’s decision makers, developers, and operations teams need.

In the following sections, we offer a detailed look at some of the key approaches we’ve taken in developing the platform.

Building a knowledge graph to optimize algorithm usage

Often, when it comes to deriving value from machine learning, it isn’t the algorithms themselves that matter; it’s how those algorithms are orchestrated and scoped. To use algorithms effectively, we set out to maintain a multi-domain knowledge graph that describes the enterprise in great detail. Once this detailed information is available, our platform decides which machine learning technique to apply, and which specific data set to apply it to. In addition, the platform evolves dynamically as the organization and its environment change.
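To make the idea concrete, here is a minimal sketch of how a knowledge graph could drive technique selection. The entity names, domains, properties, and the selection rules are all invented for illustration; the real platform's graph and dispatch logic are far richer.

```python
# Hypothetical sketch: a tiny multi-domain knowledge graph whose contents
# decide which analysis technique is applied to which entity.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    domain: str                              # e.g. "apm", "infrastructure"
    properties: dict = field(default_factory=dict)
    neighbors: list = field(default_factory=list)

class KnowledgeGraph:
    def __init__(self):
        self.entities = {}

    def add(self, entity):
        self.entities[entity.name] = entity

    def relate(self, a, b):
        # Record an undirected relationship between two entities.
        self.entities[a].neighbors.append(b)
        self.entities[b].neighbors.append(a)

def pick_technique(entity):
    """Choose an analysis technique from what the graph knows about the entity."""
    if entity.properties.get("metric_type") == "seasonal":
        return "seasonal-decomposition"
    if len(entity.neighbors) > 2:
        return "graph-correlation"
    return "threshold-anomaly-detection"

graph = KnowledgeGraph()
graph.add(Entity("checkout-service", "apm", {"metric_type": "seasonal"}))
graph.add(Entity("edge-router-1", "infrastructure"))
graph.relate("checkout-service", "edge-router-1")

print(pick_technique(graph.entities["checkout-service"]))  # seasonal-decomposition
```

Because the decision is driven by graph contents rather than hard-coded per data set, the same dispatcher adapts as entities, properties, and relationships change.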

In effect, we sought to match the situational awareness of a real human analyst, so that our platform can pick and apply the right analysis technique to the right data set, dynamically, based on what it discovers about the environment. This approach means that any specific analysis we apply can change and adapt to the enterprise as it evolves. This is very different from the rule-based expert systems of the past, which were simply too brittle for today’s dynamic enterprises.

Harnessing domain expertise

Domain experts can leverage data effectively because they know what questions to ask. To harness this knowledge, we are working with domain experts and studying how users interact with our products and other vendors’ tools. So far, we’ve interviewed hundreds of experts about the types of analysis they perform in recurring situations. We asked them what signals matter while they test their hypotheses about a situation, including what patterns they scan for, what correlations they seek, and so on. We then captured these heuristics in machine learning robots so that our system can leverage this expertise and apply it to the right scenarios, technologies, and problems.
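One simple way to picture captured heuristics is as named, matchable rules over observed signals. The rule names, signal fields, and thresholds below are invented for demonstration and are not the platform's actual heuristics.

```python
# Illustrative sketch: expert heuristics encoded as (hypothesis, predicate) pairs
# that can be evaluated against a bundle of observed signals.
heuristics = [
    ("memory leak suspected",
     lambda s: s["heap_used_mb"] > 0.9 * s["heap_max_mb"] and s["gc_time_pct"] > 20),
    ("downstream dependency slow",
     lambda s: s["p99_latency_ms"] > 500 and s["cpu_pct"] < 30),
]

def diagnose(signals):
    """Return every expert hypothesis whose conditions match the signals."""
    return [name for name, matches in heuristics if matches(signals)]

signals = {"heap_used_mb": 950, "heap_max_mb": 1024, "gc_time_pct": 35,
           "p99_latency_ms": 120, "cpu_pct": 80}
print(diagnose(signals))  # ['memory leak suspected']
```

In practice such rules would be learned and refined from expert interviews and usage data rather than hand-written, but the pattern-matching shape is the same.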

Constraining problem scope and employing small, reusable components

Powerful machine learning techniques are often costly to run, and their efficacy typically increases with input curation. Consequently, to be most effective, machine learning has to be employed within the right guardrails, and it is important to constrain the scope of the problem you’re trying to solve. The platform is built on an approach in which we narrowly define tasks and use small, reusable components to build robots. This approach makes robots fast and efficient to run, and extends their utility. We’ve created one robot whose sole responsibility is detecting incidents, for example, while another handles incident response. Each of these robots can be managed independently; they don’t need to run on the same computer or be built by the same team. Further, these robots can be enhanced and optimized on independent schedules, according to evolving priorities.
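The detection/response split described above can be sketched as two single-responsibility components that only agree on the shape of the data passed between them. The class names and the simple threshold logic are illustrative assumptions, not the platform's implementation.

```python
# Hypothetical sketch: narrowly scoped "robots" composed into a pipeline.
class IncidentDetectionRobot:
    """Sole job: decide which metric samples constitute incidents."""
    def __init__(self, threshold):
        self.threshold = threshold

    def run(self, metrics):
        return [m for m in metrics if m["error_rate"] > self.threshold]

class IncidentResponseRobot:
    """Sole job: turn detected incidents into response actions."""
    def run(self, incidents):
        return [f"page on-call for {i['service']}" for i in incidents]

# Each robot can be built, deployed, and optimized independently.
detector = IncidentDetectionRobot(threshold=0.05)
responder = IncidentResponseRobot()

metrics = [{"service": "checkout", "error_rate": 0.12},
           {"service": "search", "error_rate": 0.01}]
actions = responder.run(detector.run(metrics))
print(actions)  # ['page on-call for checkout']
```

Because the contract between the two robots is just the incident records, either side can be replaced or rescheduled without touching the other.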

Employing ontological abstractions to establish a product-agnostic, software intelligence platform

In developing the platform’s machine learning architecture, we’ve focused on establishing capabilities around ontologies, rather than tying them to specific products. For example, in the area of application performance management (APM), we didn’t build around any specific product, each of which can have distinct collection methods, terminology, and so on. Instead, we focused on the common, industry-accepted ontology that all APM solutions share. Consequently, our architecture can work with all APM solutions, including those from Broadcom as well as third parties.

At the same time, it’s important to recognize that ontologies vary across domains. For example, while an infrastructure monitoring ontology will be concerned with elements like routers and switches, a DevOps ontology will be focused on testing and production rules. That’s why we’ve built our architecture to accommodate different ontologies, including those for APM, infrastructure, networks, DevOps, security, and more. Most importantly, the platform can incorporate and integrate the intelligence from all these different domains.
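A minimal sketch of the product-agnostic idea: vendor-specific records are normalized into one shared APM ontology before any analysis runs. The vendor names and field mappings are invented for illustration.

```python
# Illustrative sketch: translating product-specific APM records into a
# common ontology so downstream analysis never sees vendor terminology.
COMMON_FIELDS = ("service", "latency_ms", "error_rate")

VENDOR_MAPPINGS = {
    "vendor_a": {"app_name": "service", "resp_time": "latency_ms",
                 "err_pct": "error_rate"},
    "vendor_b": {"component": "service", "duration": "latency_ms",
                 "failure_ratio": "error_rate"},
}

def to_ontology(vendor, record):
    """Translate a product-specific record into the common APM ontology."""
    mapping = VENDOR_MAPPINGS[vendor]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_ontology("vendor_a", {"app_name": "billing", "resp_time": 210, "err_pct": 0.02})
b = to_ontology("vendor_b", {"component": "billing", "duration": 230, "failure_ratio": 0.03})
assert set(a) == set(b) == set(COMMON_FIELDS)   # both vendors share one schema
```

Once every source speaks the same ontology, a single analysis robot can serve records from any vendor's tooling.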

Developing an open, flexible architecture

In the market today, many topology approaches are closed in nature, bound by specific technological approaches and linear models. By contrast, our platform employs an open, source-agnostic approach. The platform’s architecture is flexible in several key ways:

  • Data source extensibility. The platform’s architecture is not bound by any specific product, but features an open data lake, algorithms, and more. Customers can easily accommodate new data sources, including those from multiple Broadcom solutions as well as solutions from a wide range of third-party vendors.
  • Architecture extensibility. With the platform, Broadcom, partners, and customers can introduce entirely new ontologies, without having to make any architectural changes.
  • Ontology extensibility. Teams can add different properties onto existing ontologies, and so easily accommodate organization-specific information, including tribal knowledge, naming or classification approaches, and so on.
  • Robot extensibility. The architecture can efficiently accommodate new robots as needed, while at the same time, enabling each robot to be employed against a unified, consistent data set.
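The data-source extensibility point above can be pictured as a registration pattern: a new source supplies a normalizer, and the platform ingests it uniformly. The decorator API and field names here are hypothetical, not the platform's actual interface.

```python
# Hypothetical sketch of plugin-style data-source registration.
normalizers = {}

def register_source(name):
    """Decorator that registers a normalizer for a new data source."""
    def wrap(fn):
        normalizers[name] = fn
        return fn
    return wrap

@register_source("third_party_apm")
def normalize_third_party(raw):
    # Map the source's raw fields onto the platform's common record shape.
    return {"service": raw["svc"], "latency_ms": raw["ms"]}

def ingest(source, raw):
    return normalizers[source](raw)

print(ingest("third_party_apm", {"svc": "auth", "ms": 42}))
# {'service': 'auth', 'latency_ms': 42}
```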

In addition, by employing the platform’s documented, public APIs, customers can use external machine learning tools to access all the platform’s correlated data, knowledge graphs, and more.

Employing an intelligent, flexible, scalable data model

In developing the platform’s data model, we’ve employed a patented, history-aware approach based on property graphs. The model is structured around entities, relationships, and their properties, and is journaled over time. Each time-stamped record represents an immutable data point, providing a valuable way to build an incremental observation of the environment.
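A minimal sketch of the journaling idea: every observation is an immutable, time-stamped record appended to a log, so an entity's state at any past moment can be reconstructed by replay. The API and record shapes are invented for illustration.

```python
# Illustrative sketch: an append-only, time-stamped journal over graph entities.
import time

class JournaledGraph:
    def __init__(self):
        self.journal = []          # append-only list of immutable observations

    def observe(self, kind, subject, data, ts=None):
        record = {"ts": ts if ts is not None else time.time(),
                  "kind": kind, "subject": subject, "data": dict(data)}
        self.journal.append(record)
        return record

    def state_at(self, subject, ts):
        """Replay the journal to reconstruct an entity's properties at time ts."""
        state = {}
        for rec in self.journal:
            if rec["subject"] == subject and rec["ts"] <= ts:
                state.update(rec["data"])
        return state

g = JournaledGraph()
g.observe("entity", "db-01", {"status": "healthy"}, ts=100)
g.observe("entity", "db-01", {"status": "degraded"}, ts=200)
print(g.state_at("db-01", 150))  # {'status': 'healthy'}
print(g.state_at("db-01", 250))  # {'status': 'degraded'}
```

Because records are never mutated, the journal doubles as an audit trail, and historical states remain queryable indefinitely.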

With property graphs, complex relational lookups can be performed nearly instantaneously, making them an excellent structure for ontological inference. By comparison, using a traditional relational database management system (RDBMS) for this model would require an impractical number of join queries across schemas and tables, introducing an unacceptable level of performance-degrading latency.


Through the design principles outlined above, our machine learning platform provides users with an unparalleled mix of characteristics. It equips customers to gain value immediately, and to leverage the flexibility they need to realize maximum benefits over the long term. To learn more, be sure to read our white paper, How Delivers Scalable, Powerful, and Agile Machine Learning.

