An excerpt follows from an interview with Stephen Kaufman, Microsoft’s Chief Architect with the Customer Success unit, on the Chief Architect Forum Podcast. Stephen was interviewed by Brice Ominski, DeepDive World’s global chief technology officer. The full interview can be heard here.
Stephen Kaufman, the Chief Architect for Microsoft Customer Success, specializes in AI and large language models. Earlier this year, Kaufman authored the following article for A&G, entitled “Generative AI – Examining the Risks and Mitigations.”
Question: You recently wrote an article for Architecture and Governance Magazine in which you said people can’t wait to enter this field. Are people starting to adopt it, considering there are many obstacles to getting this in place? One statistic suggested that 80% of companies may not even be ready to use LLMs outside of the sandbox.
Answer: There are several things. You mentioned that there are so many different obstacles, but another is that there are so many different options. What ends up happening is that companies start looking at the different options and try to figure out whether they are on the right path. In many cases, this is their first foray into adopting AI. They want to make sure that they get it right and are doing things along the way that are, I’m going to say, appropriate and responsible. It’s not just the technology: how do we ensure that when we get to the end state, we are practicing responsible AI and getting things done safely and securely?
Question: One of the big things is understanding your company’s policy as you go forward. How do you temper that with building a strategy, a forward-looking strategy? These almost seem like two different things, but are they?
Answer: Yes, they merge. You start with whatever is happening in the sandbox, and then you want to go beyond the sandbox. Two things should be happening: one on the technical side and the other on the governance and operations side.
And in many large companies, many different groups are already moving forward and doing things within their own sandboxes.
Sometimes they do these things independently, without management realizing or even knowing about it. So it’s important to ensure you have a governance practice and policy in place. How are we going to do this? What are some of the guardrails? What do we need to implement across the board in the company?
It might be that we standardize on specific models so that, as we look across the company, we are dealing with that tech debt ahead of time as models get retired or new models come out. Where does that activity sit, and what governance helps organize what these different teams are doing?
Then you start to introduce some sharing across teams. Whether it’s components, best practices, or failures, getting teams to communicate and share is significant, because otherwise we have all these different silos, all these different teams going off and doing things on their own. How do I help the different teams so that they’re not all failing on the same things because they’re not learning from each other?