AI governance touches many functional areas within the enterprise — data privacy, algorithm bias, compliance, ethics, and much more. As a result, governing the use of artificial intelligence technologies requires action on many levels.
“It does not start at the IT level or the project level,” says Kamlesh Mhashilkar, head of the data and analytics practice at Tata Consultancy Services. AI governance also happens at the government level, at the board of directors level, and at the CSO level, he says.
In healthcare, for example, AI models must pass stringent audits and inspections, he says. Many other industries also have applicable regulations. “And at the board level, it’s about economic behaviors,” Mhashilkar says. “What kinds of risks do you embrace when you introduce AI?”
As for the C-suite, AI agendas are purpose-driven. The CFO, for example, will be attuned to shareholder value and profitability. CIOs and chief data officers are also key stakeholders, as are marketing and compliance chiefs — not to mention customers and suppliers.
Not all companies will need to take action on all fronts in building out an AI governance strategy. Smaller companies in particular may have little influence on what big vendors or regulatory groups do. Still, all companies are or will soon be using artificial intelligence and related technologies, even if they are simply embedded in the third-party tools and services they use.