by Maria Korolov

AI governance: Reducing risk while reaping rewards

Feature
Mar 18, 2021
Analytics | Artificial Intelligence | Data Management

With AI moving from pilots to production, enterprises must establish cross-departmental oversight strategies focused on data quality, compliance, ethics, and more.


AI governance touches many functional areas within the enterprise — data privacy, algorithm bias, compliance, ethics, and much more. As a result, addressing governance of the use of artificial intelligence technologies requires action on many levels.

“It does not start at the IT level or the project level,” says Kamlesh Mhashilkar, head of the data and analytics practice at Tata Consultancy Services. AI governance also happens at the government level, at the board of directors level, and at the CSO level, he says.

In healthcare, for example, AI models must pass stringent audits and inspections, he says. Many other industries also have applicable regulations. “And at the board level, it’s about economic behaviors,” Mhashilkar says. “What kinds of risks do you embrace when you introduce AI?”

As for the C-suite, AI agendas are purpose-driven. For example, the CFO will be attuned to shareholder value and profitability. CIOs and chief data officers are also key stakeholders, as are marketing and compliance chiefs. And that’s not to mention customers and suppliers.

Not all companies will need to take action on all fronts in building out an AI governance strategy. Smaller companies in particular may have little influence on what big vendors or regulatory groups do. Still, all companies are or will soon be using artificial intelligence and related technologies, even if they are simply embedded in the third-party tools and services they use.

And when used without proper oversight, AI has the potential to make mistakes that harm business operations, violate privacy rights, run afoul of industry regulations, or create bad publicity for a company.

Here’s how forward-thinking companies are starting to address AI governance as they expand AI projects from pilots to production, focusing on data quality, algorithmic performance, compliance, and ethics.

Facing the ethics of AI

Few areas are as fraught with ethical concerns today as facial recognition. There is great potential for abuse, and companies that offer facial recognition technologies are receiving pushback from the public, and sometimes from their own employees.

That’s the case at Xtract AI, a subsidiary of Patriot One Technologies, which uses image recognition to identify people who are carrying weapons.

The technology can also be used in other contexts, such as to identify people who aren’t complying with mask or social distancing guidelines, says Justin Granek, Xtract’s VP of operations.

Kamlesh Mhashilkar, head of the data and analytics practice, Tata Consultancy Services

Ethics is a major topic of conversation, he says. “For us, we’ve seen a lot of this coming from the bottom up. Our staff is saying, ‘What are we doing about this,’ and forcing leadership to develop our governance policy.”

Customers have their own set of requirements, and there is a balance that needs to be determined, he says. “One of our clients is the Canadian Department of Defense, and some of our clients are in healthcare. They’re looking at it from different perspectives.”

The biggest questions, he says, are which clients to work with and what kind of work the technology should be doing. Those are big-picture decisions tied to the mission of the company. But there are also technical issues that have to be addressed, and those start with data.

Getting data right

The biggest source of algorithmic bias is in data sets. For facial recognition, for example, data sets have historically not been representative of the general population. “They are biased towards white males,” Granek says. “It’s being corrected, but there’s still a lot of work to do.”

Experts can help fix data bias issues, and commercial data providers are working to fill gaps in the data they provide. There are also ways to create synthetic datasets, but often the solution comes down to going out and getting better data, Granek says.

Justin Granek, vice president of operations, Xtract AI

For Xtract’s gun detection algorithm, that meant setting up lab space, filling it with a wide variety of decommissioned firearms, and bringing in lots of people to walk in different ways, in different locations.

“One naive approach is just to look to Hollywood for images of people walking with guns, but that’s not representative of the world,” he says.

Instead, Xtract made an effort to recruit a wide range of individuals for its training data. “There’s no prescription for who might carry a weapon. We get some students. We get older individuals; we have a whole bunch of different individuals,” Granek says.

For some AI applications, accurate, representative data sets can be the difference between life and death and have significant moral and ethical implications. But even when the effects of bad data sets don’t lead to public disasters, they can still cause operational or financial damage to firms or result in regulatory or compliance issues.

The latter was the concern for Mexico-based Cemex, one of the world’s largest distributors of building materials. The company is more than 100 years old but is reinventing itself through the use of artificial intelligence in supply chain management and operations.

Cemex began looking at AI and related technologies to grow market share, improve customer service, and boost the bottom line about three years ago.

“Last year and this year we’re actually seeing the value of AI on a global scale — not just in a small pilot here or there,” says Nir Kaldero, the company’s chief AI officer.

With AI firmly baked into the company’s DNA, Cemex realized the need to put governance structures around it, he says.

It all starts with data. “There is no good, reliable AI without good information architecture,” Kaldero says. “You cannot have good, reliable models without good information.”

At Cemex, data governance spans security, monitoring, privacy, compliance, and ethics. The company needs to know where data is located, where and how it is used, whether it meets regulatory requirements, and whether it’s free from biases.

Cemex, which relies on the Snowflake cloud data platform to manage its data and Satori to manage access, has a senior executive focused solely on data and another senior executive focused on governance who heads a governance team, Kaldero says.

Getting the models right

In addition to data governance, Cemex has begun to create governance around AI models and results. “That is something new,” Kaldero says. “Not just for Cemex, but for the world.”

This task is shared between Kaldero’s AI and data science group and the CIO group.

Cemex currently uses AI to predict part needs so it can save money by negotiating better deals with its vendors. It is also using AI for routing and scheduling trucks, as well as in sales and pricing. If any of these calculations are off, the company stands to lose a great deal of money.

So, to guard against model drift and algorithmic biases, Cemex uses technology from Seattle-based Algorithmia.

Muhammad Aurangzeb Ahmad, principal data scientist, KenSci

KenSci is another company concerned about the downstream consequences of AI models. The Seattle-based company uses AI to analyze healthcare data, an area where accurate AI models can literally be a matter of life and death.

“We always begin with reviewing the goals of AI models with representative and diverse stakeholders,” says Muhammad Aurangzeb Ahmad, the company’s principal data scientist. To ensure those models are transparent and accountable, explainability is a core component.

“We have even released an open-source Python package — fairMLHealth — that can be used by anyone to measure fairness of machine learning models,” he says.

Ahmad also recommends auditing AI models for performance across different groups, to make sure that minorities and other vulnerable groups are treated equitably.

“Transparency and explainability of AI models makes them more likely to be used and trusted by end users,” he says. “And more amenable to be audited — and thus corrected when needed.”
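
As an illustration of the kind of group-level audit Ahmad describes, a data science team might compare standard performance metrics across demographic groups and flag disparities. The sketch below is a minimal example of that idea, not the fairMLHealth API; the DataFrame columns ("label", "prediction", "group") are hypothetical.

```python
# A minimal sketch of a group-wise performance audit (not the fairMLHealth API).
# Assumes a DataFrame with hypothetical binary "label" and "prediction" columns
# and a demographic "group" column.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def audit_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Compare headline metrics across groups to surface disparities."""
    rows = []
    for group, subset in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(subset),
            "positive_rate": subset["prediction"].mean(),
            "precision": precision_score(subset["label"], subset["prediction"], zero_division=0),
            "recall": recall_score(subset["label"], subset["prediction"], zero_division=0),
        })
    report = pd.DataFrame(rows)
    # Flag groups whose recall lags well behind the best-served group.
    report["recall_gap"] = report["recall"].max() - report["recall"]
    return report
```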

AI and ethics

Another key area to consider in shaping a governance strategy is the ethics of AI use. “Legislation has not caught up with technology,” Ahmad says. “It is the responsibility of creators of machine learning systems to value-align them with ethical goals. When a trade-off is needed, one should err on the side of caution.”

Joe Tobolski, CTO, Nerdery

Joe Tobolski, CTO at digital services consultancy Nerdery, sees companies becoming increasingly aware of the possible ethical hazards of AI. “But are they completely aware in the sense of what systems they’re running and what training data they have under their covers? Probably not,” he says.

Few companies have a clear code of AI ethics to apply to their AI projects, data sources, and uses of the technology. “That’s where I’d like to see us go: to have this strong, codified framework for how to address these things,” he says.

Cemex is one company that has deliberately limited its use of AI to minimize potential ethical complications. For example, it is prioritizing projects that improve services and help customers over those that would, say, simply reduce headcount, Kaldero says.

“The employees are at the center of the organization — not the technology,” he says. “We could automate all our customer call centers, but that’s not in our interest. Cemex is very proud to be an employer that provides job opportunities to people. There is something beautiful about this, to have that in our mission of the company.”

AI projects are chosen to have a positive impact on the workforce. Take safety, for example. “That’s a huge initiative for AI,” Kaldero says. “Cemex has already reduced accidents very dramatically, to almost zero. And the way to get it all the way to zero is through image recognition.”

AI governance strategies

For Springfield, Mass.-based life insurance company MassMutual, AI governance is based around an ever-evolving set of data ethics principles that guide actions and decision-making.

Sears Merritt, head of data, strategy, and architecture, MassMutual

“We specifically created a set of principles for using AI to grow our business aligned with company values and the interests of our policyowners,” says Sears Merritt, the company’s head of data, strategy, and architecture. “We also built a team to oversee the use of AI through the creation of a policy framework.”

MassMutual started looking at AI ethics and governance about a year ago, when the company realized it needed to demonstrate and ensure it was using AI for the benefit of its policyholders.

Merritt now oversees a team of six people, including AI ethics and governance consultants, who track whether algorithms adhere to governance principles and how they change over time, creating a formal structure for the approaches that the company was already following.

“We believe our work has a tremendous impact on all of our stakeholders,” says Merritt, who recommends starting with core principles aligned with company values and customer interests, and working with partners in law, compliance, ethics, and business to implement them consistently.

Next, he says, MassMutual plans to promote its framework as an industry best practice.

The importance of guardrails

John Larson, SVP at Booz Allen Hamilton, says many of the best practices around AI governance should be somewhat familiar.

John Larson, senior vice president, Booz Allen Hamilton

“I’ve been doing this for 25 years,” he says. “The same principles of how you develop the software, the algorithms, they existed before. But what didn’t exist was the speed of the data, the processing power, and the learning algorithms.”

AI systems, hungry for training data, typically work with larger datasets than ever before, and, thanks to the digitization of today’s companies, the data is coming in from websites, network sensors, IoT devices, and other sources at unprecedented rates.

The ability to process this data is also dramatically higher than ever before, thanks in large part to cloud resources that can scale up in an almost unlimited way.

Finally, the feedback nature of some AI systems means that they, in effect, learn as they go, on their own, and those learnings can take them in unexpected directions at a pace too fast for humans to react to.

“The governance models from 25 years ago — the principles are the same, but they can’t just scale to the challenges that we’re facing,” says Larson, adding that the solution is to build automated safeguards into AI systems.

For example, developers can set guardrails. If a model’s prediction accuracy drifts beyond a predefined target, or the model otherwise stops performing within design parameters, then some form of intervention could be called for. Similarly, if data coming into the system no longer reflects the features required, that could raise an alert to reevaluate the data sources, or to choose a different model that better fits the incoming data.
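
In code, a guardrail of the kind Larson describes can be as simple as a rolling accuracy check that triggers an intervention when performance drifts past a predefined target. The sketch below is illustrative only; the threshold values and the intervention hook are assumptions that a real system would wire to alerting, a fallback model, or a human review step.

```python
# An illustrative guardrail: track rolling prediction accuracy and trigger an
# intervention when it drifts past a predefined target. Thresholds are assumptions.
from collections import deque

class AccuracyGuardrail:
    def __init__(self, target_accuracy=0.90, max_drift=0.05, window=1000):
        self.target = target_accuracy
        self.max_drift = max_drift
        self.outcomes = deque(maxlen=window)  # rolling record of hits and misses

    def record(self, prediction, actual):
        """Log one prediction once the true outcome is known."""
        self.outcomes.append(prediction == actual)
        if len(self.outcomes) == self.outcomes.maxlen and self.has_drifted():
            self.intervene()

    def has_drifted(self):
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return (self.target - rolling_accuracy) > self.max_drift

    def intervene(self):
        # In practice: alert the model owner, fall back to a simpler model, or
        # pause automated decisions until the data sources are reevaluated.
        raise RuntimeError("Model accuracy has drifted outside guardrails")
```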

Jessica Lee, partner and co-chair of the privacy and security practice, Loeb & Loeb

There are other ways AI systems can be monitored. Testing final recommendations for prohibited correlations with attributes such as race, age, or religious affiliation, for example, could help catch problems before they result in regulatory fines or public relations disasters.
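
One simple way to implement such a test is to compare outcome rates across protected attributes before results are released. The sketch below is a hypothetical example, not any vendor's tool; the column names and the tolerance value are assumptions.

```python
# A sketch of a pre-release check for prohibited correlations: compare outcome
# rates across protected attributes. Column names and tolerance are assumptions.
import pandas as pd

PROTECTED_ATTRIBUTES = ["race", "age_band", "religious_affiliation"]
MAX_ALLOWED_GAP = 0.05  # illustrative tolerance on the spread of approval rates

def flag_prohibited_correlations(results: pd.DataFrame, outcome_col: str = "recommended"):
    """Return the attributes whose groups receive noticeably different outcomes."""
    flagged = []
    for attr in PROTECTED_ATTRIBUTES:
        rates = results.groupby(attr)[outcome_col].mean()
        if rates.max() - rates.min() > MAX_ALLOWED_GAP:
            flagged.append(attr)
    return flagged
```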

“There are tools that have been developed — Google has them, Microsoft has them — that can assess whether a model is biased against certain things,” says Larson. “At Booz Allen, we are also developing some of those tool kits and are trying to provide tools to all our data scientists.”

Finally, any good AI governance program needs ownership and accountability, says Jessica Lee, partner and co-chair of the privacy and security practice at law firm Loeb & Loeb. “Who will steer the program and how will we address missteps?”

“Companies that don’t do this well risk being the companies we read about,” she says.

There’s no guarantee that companies can avoid unintended consequences of their algorithms, bias, or discriminatory outcomes, or other harms, she says. “But good governance certainly helps.”