For every company rushing to adopt artificial intelligence, there seems to be a dire prediction about the technology surpassing humans or taking their jobs.
There’s a good reason for all this speculation: Panelists at Fortune’s Most Powerful Women Summit in Laguna Niguel, Calif., said responsible artificial intelligence has yet to be defined.
Susan Athey, former chief economist of the U.S. Department of Justice’s antitrust division and professor at Stanford University’s Graduate School of Business, explained that academia, government, and business don’t yet know the answer.
“This will be a joint research agenda among academia, government, and industry,” Athey said. “I think it’s going to take at least 10 years until we actually have an answer to the question: can we say that this system is deeply responsible in all the important ways?”
Artificial intelligence regulations won’t solve all problems
Barbara Cosgrove, vice president and chief privacy officer at Workday, said she does not expect upcoming regulations to solve the problem of implementing responsible artificial intelligence. For now, responsible AI is defined at the company level, Cosgrove said, driven largely by corporate values and what is legally permissible. At Workday, she said, the team worked out how to set guardrails for responsible AI and established AI ethics principles at the company level so it could build a governance plan on top of them.
“The most important thing for us is to make sure that we take a human-centered approach: we are not replacing humans; humans remain at the center of what we do,” Cosgrove said. “We are improving the experience, we are helping to amplify human potential, but we are not replacing it.”
Karin Klein, founding partner at Bloomberg Beta, explained that responsible AI really means stepping back and examining the process by which data flows through an organization and out to the customer.
“So start with what data is used: Is the data set fair and transparent? How are the algorithms and models created? What are the applications for the output? Then rigorously test, step back, and see whether the values that were originally modeled hold up consistently,” Klein said.
Privacy risk
Susannah Stroud Wright, general counsel at Credit Karma, said building and understanding responsible AI depends on the individual company and what it wants to achieve. Klein said we are at a stage where, in deciding how to develop and use artificial intelligence responsibly, you have to assume that something can and will go wrong; that mindset is exactly what she looks for when backing founders.
“We live in a world now, with artificial intelligence, where you have to assume that something is going to go wrong,” Klein said. “There will be hallucinations, and there may be data being used in ways it shouldn’t be. So as long as the people you’re working with, whether it’s a startup or a large company, have the right focus on transparency and communication, you’ll be equipped to meet those challenges.”
As for privacy and the impact AI may have on it, the two can coexist despite what some may say; privacy laws and regulations that apply to the use of AI already exist, Cosgrove said.
“They don’t conflict,” she said. “But I do hear it all the time, ‘I’m just going to stop using AI because I’m worried about privacy,’ but there’s no stopping it. I mean, every organization is already using it.”
Athey said that consumers and businesses alike want privacy, and because businesses are themselves AI customers demanding it, the market is responding. Companies that promise privacy must, of course, actually deliver on those promises, she said, but that demand is pushing the privacy conversation around artificial intelligence forward.