DeepMind’s new AI ethics unit is the company’s next big move

As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial. For DeepMind, whose stated mission is to “solve intelligence”, that task will be the work of a new initiative tackling one of the most fundamental challenges of the digital age: technology is not neutral.

DeepMind Ethics & Society (DMES), a unit comprising both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impact of the technologies it creates. In development for the past 18 months, the unit currently consists of around eight DeepMind staffers and six external, unpaid fellows. The full-time team is expected to swell to around 25 people within the next 12 months.

Headed by technology consultant Sean Legassick and former Google UK and EU policy manager and government adviser Verity Harding, DMES will work alongside technologists within DeepMind and fund external research across six areas: privacy, transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges. Within those broad themes, specific areas of focus will include algorithmic bias, the future of work and lethal autonomous weapons. Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and to help society decide how AI can be beneficial.

For DeepMind co-founder Mustafa Suleyman, it’s a significant moment. “We’re going to be putting together a very meaningful team, we’re going to be funding a lot of independent research,” he says when we meet at the firm’s London headquarters. Suleyman is bullish about his company’s efforts not just to break new ground in artificial intelligence, but also to keep a grip on the ethical implications. “We’re going to be collaborating with all kinds of think tanks and academics. I think it’s exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”

To explain where the idea for DMES came from, Suleyman looks back to before the founding of DeepMind in 2010. “My background before that was pretty much seven or eight years as an activist,” he says. An Oxford University drop-out at 19, Suleyman went on to found a telephone counselling service for young Muslims before working as an adviser to then Mayor of London Ken Livingstone, followed by spells at the UN, the Dutch government and WWF. He sums up his ambition: “How do you get people who speak very different social languages to put purpose ahead of profit in the heart of their organisations and coordinate effectively?”

Understanding the implications of artificial intelligence systems isn’t an exercise in chin-stroking. In 2015, Google’s AI-powered Photos app started automatically labelling some photos of black people as “gorillas”. More damningly, an algorithm used in the American criminal justice system has been found to be biased against black people. Earlier this year, a facial recognition research group from Stanford University claimed its AI could distinguish between gay and heterosexual people based on their facial features. “Gay men had narrower jaws and longer noses, while lesbians had larger jaws,” the researchers claimed. And in London, a startup called RAVN has developed an AI that can sift through dull legal paperwork with little-to-no human assistance – the sort of automation that raises questions about the future of work.

“The topics that we’re concerned with are how do you scrutinise an algorithm; how do you hold an algorithm accountable when it’s making very important decisions that actually affect the experiences and life outcomes of people,” Suleyman says. “We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.”

DMES is separate to the firm’s secretive internal ethics and safety board, which has been in operation since around the time DeepMind was acquired by Google for £400 million in January 2014. Suleyman says the board, which has some external representation, has been “reasonably successful”, but further experimentation with how DeepMind explores the ethics of AI has always been the plan. “The ethics board is focussed on [artificial general intelligence],” he explains. “That was always long-term, over ten, 20, 30 years, as we build systems which are more and more autonomous, which are genuinely capable of real human skills. That body is there to help us navigate the challenges that arise specifically from general intelligence.”

While little is known about the ruminations of DeepMind’s internal ethics and safety board, DMES is intended to be open and transparent. All its research will be published online in full, and its six external fellows – who include the economist Professor Diane Coyle; the philosopher and existential risk expert Professor Nick Bostrom; the international diplomat Christiana Figueres; and the economist Professor Jeffrey Sachs – have not signed any non-disclosure agreements.

“I can say what I like about anything, anywhere, there’s no constraining agreement,” Professor Coyle tells me. She describes her role as a DMES fellow as conducting academic reviews of research, taking part in workshops and giving feedback. She adds that concerns about DeepMind taking crucial AI ethics research in-house are broadly unfounded. “DeepMind is obviously owned by Google now. But you can be too cynical about it. I think they are sincere and genuinely want to achieve some understanding and do some very good research. And I think if a lot more companies did it we’d be in a better place.”

Such cynicism might be founded on DeepMind’s collaboration with the Royal Free NHS Trust to develop an app used to detect acute kidney injury. An investigation by the Information Commissioner’s Office into the data-sharing agreement between the firm and the NHS found that the Royal Free had failed to comply with the Data Protection Act when it handed over details of 1.6 million patients to DeepMind. At the time, a DeepMind spokesperson said the firm “underestimated the complexity of the NHS and of the rules around patient data”. The ICO warned that such work should never come down to “a choice between privacy or innovation”.

When I raise the issue with Suleyman, he counters that DeepMind created a panel of independent reviewers to scrutinise its work with the NHS, adding that the company took “serious measures” to be open and transparent. “There’s so much more that we need to do when it comes to interacting with this kind of data in the NHS. And that is really, really challenging,” he says. “It is ambiguous, a lot of the texts are very tough to navigate when it comes to regulation and a lot of it is emerging and trying to catch up with the technology.” If it works as DeepMind hopes, one role for DMES could be to spot similar issues early and tackle them in the open, with input from all parties.

Continue Reading via Wired
