By Jitendra Gupta, Head of AI and Data Science, Wolters Kluwer ELM Solutions
Businesses are approaching the rise of generative AI with a mix of excitement and caution. While Wolters Kluwer’s Future Ready Lawyer Survey Report found that 76% of legal departments use the technology at least once a week, another report highlighted businesses’ concerns over generative AI’s reliability and security.
What’s clear is that generative AI is here to stay and will only become more prevalent as enterprises strive to become more efficient and innovative. The question, then, is how can companies that build AI models help their customers overcome their trepidation about using generative AI? Here are five steps they can take to create generative AI models that businesses will trust and use.
1. Understand the business challenge
Before beginning any project, it’s important to understand the end goal. For AI, that usually means solving a particular problem. Thus, the first step in creating any AI model should involve a simple question: How can the technology we develop help solve our customers’ unique challenges?
That’s as much a business question as a technology one, so it’s important to have developers, data scientists, and customer representatives working on the answer together. Developers know the technology, data scientists know the data and how to build models, and client-facing representatives understand their customers’ challenges. Collectively, they can create AI models that meet their customers’ needs.
This first step is one of the most important in the development of responsible AI. It’s where the business showcases its understanding of its customers’ needs and how AI can be applied, setting a foundation of trust.
2. Cultivate high-quality data
Feeding an AI model outdated, inaccurate, or unreliable data will lead to poor and untrustworthy results. That’s why cultivating large sets of high-quality data is so important. Data scientists must collect, analyze, and curate datasets to ensure they are complete, relevant, and sufficiently representative to minimize bias. They must also identify the specific data features that will help address their customers’ challenges.
After the data has been cleansed, data scientists can begin identifying and training the models best suited to address their customers’ needs. Then they can feed the curated data into those models so the models can learn and make predictions.
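As a minimal illustration of such curation checks, assuming a tabular dataset loaded with pandas, a team might screen for completeness, duplication, and skewed representation before any training begins (the column names and data below are hypothetical):

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame, label_col: str) -> dict:
    """Basic completeness, duplication, and representation checks."""
    report = {}
    # Completeness: share of missing values per column.
    report["missing_rate"] = df.isna().mean().to_dict()
    # Duplication: exact duplicate rows inflate apparent data volume.
    report["duplicate_rows"] = int(df.duplicated().sum())
    # Representation: a heavily skewed label distribution is an
    # early warning sign of bias in the curated dataset.
    report["label_distribution"] = (
        df[label_col].value_counts(normalize=True).to_dict()
    )
    return report

# Hypothetical toy dataset standing in for a real legal-matter corpus.
df = pd.DataFrame({
    "matter_type": ["litigation", "litigation", "ip"],
    "summary": ["draft text", None, "draft text"],
})
print(run_quality_checks(df, label_col="matter_type"))
```

Checks like these don’t guarantee an unbiased dataset, but they catch the most common quality problems before they propagate into the model.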
3. Implement a “human-centric” approach to model testing
Model testing with human oversight is critically important. It allows data scientists to confirm that the models they’ve built function as intended and to root out errors, anomalies, and biases.

However, organizations should not rely solely on the acumen of their data scientists. Enlisting the input of business leaders who are close to the customers helps ensure the models truly address customers’ needs. Involving those leaders in testing also gives them the firsthand perspective needed to explain the process to customers and alleviate their concerns.
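One common way to operationalize this human-centric testing, sketched below with hypothetical names and thresholds, is to route low-confidence outputs, plus a random sample of confident ones, into a shared review queue that both data scientists and business reviewers work through:

```python
import random
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    prompt: str
    model_output: str
    confidence: float
    reviewer_notes: list[str] = field(default_factory=list)

def needs_human_review(item: ReviewItem,
                       threshold: float = 0.8,
                       sample_rate: float = 0.05) -> bool:
    """Flag low-confidence outputs, plus a random sample of the rest,
    so reviewers also see cases the model seems sure about."""
    return item.confidence < threshold or random.random() < sample_rate

# Hypothetical test batch: every flagged item lands in a shared queue
# that data scientists and client-facing reviewers work through together.
batch = [
    ReviewItem("Summarize clause 4.2", "placeholder summary", 0.65),
    ReviewItem("List filing deadlines", "placeholder list", 0.93),
]
review_queue = [item for item in batch if needs_human_review(item)]
print(f"{len(review_queue)} of {len(batch)} outputs routed to human review")
```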
4. Be transparent
Many organizations do not trust information from an opaque “black box.” They want to know how a model is trained and the methods it uses to craft its responses. Secrecy around model development and data processing will only deepen skepticism about the model’s output.
To alleviate distrust, organizations must provide transparency into their model-building processes. They must clearly show how their models are trained and how they make decisions, so customers understand the basis for the AI’s conclusions.
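One widely used vehicle for this kind of transparency is a model card: a structured, publishable record of how a model was built and evaluated. A minimal sketch, with placeholder values throughout, might look like this:

```python
import json

# A minimal "model card" capturing training provenance, evaluation,
# intended use, and known limitations. All values are placeholders.
model_card = {
    "model_name": "contract-summarizer",          # hypothetical name
    "training_data": {
        "sources": ["licensed legal corpus"],     # describe provenance
        "date_range": "2018-2024",
        "known_gaps": ["non-English contracts"],
    },
    "evaluation": {
        "metrics": {"rouge_l": 0.41},             # placeholder score
        "test_sets": ["held-out client matters"],
    },
    "intended_use": "Drafting assistance; outputs require attorney review.",
    "limitations": ["May miss jurisdiction-specific terms."],
}

# Publish alongside the model so customers can inspect how it was built.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```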
5. Continuously improve, adjust, and refine
After completing the previous four steps, it’s finally time to deploy the AI model. But that is just the beginning.
Following deployment, organizations must continuously check, refine, and recalibrate their models to ensure they deliver the expected results. Models should adjust as business needs and challenges change. Therefore, step five should involve a combination of proactive monitoring by data scientists, customer feedback, and development work that delivers updates quickly.
Continuous improvement might be the final step in creating trusted AI, but it’s just part of an ongoing process. Organizations must continue to capture, cultivate, and feed data into the model to keep it relevant. They must also consider customer feedback and recommendations on ways to improve their models.
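A minimal sketch of that monitoring loop, assuming customer feedback arrives as simple correct/incorrect signals, might track rolling accuracy and alert when it dips below an agreed threshold:

```python
from collections import deque

class RollingAccuracyMonitor:
    """Track accuracy over the most recent N customer-scored outputs
    and alert when it dips below an agreed threshold."""

    def __init__(self, window: int = 500, threshold: float = 0.9):
        self.results = deque(maxlen=window)  # keeps only the last N scores
        self.threshold = threshold

    def record(self, was_correct: bool) -> None:
        self.results.append(was_correct)

    def healthy(self) -> bool:
        if len(self.results) < 50:           # not enough feedback yet
            return True
        return sum(self.results) / len(self.results) >= self.threshold

# Hypothetical feedback loop: each customer thumbs-up/down feeds the
# monitor; a failing check triggers retraining or a model review.
monitor = RollingAccuracyMonitor()
for feedback in [True, True, False, True]:
    monitor.record(feedback)
if not monitor.healthy():
    print("Accuracy below threshold - trigger model review")
```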
These steps form an essential foundation for trustworthy AI, but they’re not the only practices organizations should follow. Other elements that instill trust include creating models that are fair, diverse, and free of bias; maintaining proper security around the models; and focusing on developing models that add to human, societal, and environmental well-being. Companies should invest in all of these practices to make their technology as trustworthy as possible.
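As one concrete example of a fairness check, a team might compare positive-outcome rates across groups, often called the demographic parity gap. The sketch below uses hypothetical data and a deliberately coarse group attribute:

```python
def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest positive-outcome
    rates across groups; 0.0 means perfectly equal rates."""
    rates: dict[str, list[bool]] = {}
    for group, positive in outcomes:
        rates.setdefault(group, []).append(positive)
    group_rates = [sum(v) / len(v) for v in rates.values()]
    return max(group_rates) - min(group_rates)

# Hypothetical predictions tagged with a group attribute: group A gets
# positive outcomes 100% of the time, group B only 50%.
preds = [("A", True), ("A", True), ("B", True), ("B", False)]
gap = demographic_parity_gap(preds)
print(f"Parity gap: {gap:.2f}")  # flag if above an agreed tolerance
```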
Proper and diligent governance is also important. Organizations must implement ongoing controls to ensure the steps outlined above are followed at all times. Failure to do so will compromise the trust companies have worked so hard to build in their AI.
In short, building responsible, reliable, and trustworthy models is a long-term commitment. Over time, customers will come to depend on the AI and experience the technology’s benefits with minimal concern.