An excerpt follows from an interview with Dion Eusepi, Director of Enterprise Architecture, on the Chief Architect Forum Podcast. He has advised companies such as Stanley Black & Decker, where he provided guidance on data and cloud integration and automation platforms. He is also a member of the CAF. Brice Ominski, global chief technology officer at DeepDive World, conducted the full interview, which can be heard at the link below.
Question: When we talk about systems of intelligence versus the older systems of record and engagement, do you believe we will simply add to those older systems, or will we address them more broadly and modernize them? One observation is that only 4% of companies are adopting generative AI; are their current legacy systems obstacles to adoption?
Dion Eusepi: Think about the cloud and what we saw with the advent of cloud migration: a massive push of transformation. The last time we saw a major inflection point in our industry, we saw this kind of massive march in unison to just move everything over. We saw a tremendous amount of marketing, as there usually is, and erroneous terms like "lift and shift."
As we all know, legacy systems don't go away, and they are rarely transformative. What I have seen work, and what makes more sense, are approaches like modernization in place or transformation in place. If you have an AS/400 system, as many organizations still do, the likelihood that you will uproot that legacy system is extremely low.
And suppose you're a large enterprise that has been in the acquisition business over the last decade or more. In that case, the likelihood of you disrupting the flow of data and transactions from those systems of record is also very low. When the value of transformation is weighed against the disruption to the existing business, there is usually an obvious loser. So, generally, that is how modernization has played out with the cloud.
Looking ahead, I see the same thing potentially happening with systems of intelligence. Let's look at ecosystem enablement, right? We're already beginning to see the shift toward native interfaces for systems of intelligence in ecosystem-specific environments. Take SAP, for example. Often, when you think of integration, you think of interoperability between ecosystems and standard interfaces between them.
Those interfaces facilitate it, but it is unnatural for an OEM to want to facilitate interoperability. The more natural behaviour is to keep you inside that ecosystem's bubble and build capabilities that solve the problems others solve through interoperability. For example, if I'm a Salesforce customer using Salesforce CRM, Salesforce Commerce Cloud, or Salesforce Marketing Cloud, I probably rely on SKU data in an ERP for product information. My only choice in the past has been to aggregate that data into one place and build interfaces to exchange data between those ecosystems. With systems of intelligence, we can rank the confidence of data in those systems, supporting data availability in large, production-focused environments like consumer goods.
In e-commerce, in a digital commerce play, for example, availability is a big deal in the consumer goods arena. So reliable core systems are important when you think about interfaces. When you think about confidence rankings and trusted sources for data, we usually broker those relationships through human-mediated decisions about what counts as a trusted source for product information and what counts as a trusted, available resource for inventory data.
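The idea of replacing those human-mediated "trusted source" decisions with ranked confidence scores can be sketched in a few lines. This is a minimal illustration only: the source names, scores, and threshold below are hypothetical, not a vendor API, and a real system of intelligence would compute such scores from data quality and freshness signals rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class SourceReading:
    source: str        # hypothetical system name, e.g. "erp"
    units_on_hand: int  # inventory figure reported by that system
    confidence: float   # assumed score in [0, 1] for this source/field

def pick_trusted(readings: list[SourceReading],
                 threshold: float = 0.5) -> SourceReading:
    """Return the highest-confidence reading at or above the threshold."""
    eligible = [r for r in readings if r.confidence >= threshold]
    if not eligible:
        raise ValueError("no source meets the confidence threshold")
    return max(eligible, key=lambda r: r.confidence)

# Three hypothetical systems reporting inventory for the same SKU.
readings = [
    SourceReading("erp", 120, 0.92),
    SourceReading("commerce_cloud", 95, 0.60),
    SourceReading("stale_cache", 200, 0.20),
]
best = pick_trusted(readings)
```

Here the ERP wins because it carries the highest confidence score, and the stale cache is excluded by the threshold, mirroring the human judgment that the ERP is the trusted source for inventory data.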
The full podcast can be heard at https://www.youtube.com/watch?v=AcoUHGTaP64