Google DeepMind has launched a new research unit in a bid to help it understand the real-world impacts of artificial intelligence (AI).
The London-based research lab announced its "Ethics & Society" unit on Wednesday, saying it will aim to "help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all."
Around eight DeepMind staff and six external, unpaid fellows will work in the unit to begin with, but the number is expected to grow to around 25 over the next 12 months.
The unit will be led by Verity Harding and Sean Legassick. Harding, a former special adviser to ex-Deputy Prime Minister Nick Clegg, was previously a policy manager at DeepMind, while Legassick was a policy adviser at the company.
It will focus on six areas:

- Privacy, transparency and fairness
- Economic impacts
- Governance and accountability
- Managing AI risk
- AI morality and values
- How AI can address the world's challenges
“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” the duo wrote in a blog post announcing the news.
The formation of the Ethics & Society unit comes as technology companies and governments start to acknowledge the enormous impact that AI will have on the world, be it for good or for ill.
AI is the field of building computer systems that understand and learn from data without being explicitly programmed. The goal of these systems is to perform increasingly human-like cognitive functions, with their outputs improved by learning from data rather than by hand-written rules.
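To make the idea of "learning from data without being explicitly programmed" concrete, here is a minimal, illustrative sketch (not DeepMind's code): a program that fits a straight line to example points by gradient descent. The fitting rule is never written into the program; it emerges from the data.

```python
# Minimal sketch of machine learning: fit y = w*x + b to example points
# by gradient descent, rather than hard-coding the relationship.
def fit_line(points, steps=5000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The data follows the hidden rule y = 2x + 1; the program is never told this,
# yet it recovers w close to 2 and b close to 1 purely from the examples.
w, b = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
```

Modern AI systems apply the same principle at vastly larger scale, with millions of parameters in place of the two here.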
The Ethics & Society team within DeepMind said it plans to carry out interdisciplinary research on AI’s potential that brings together experts from the humanities, social sciences and other areas, along with voices from civil society and technical insights from other DeepMind divisions.
“If AI technologies are to serve society, they must be shaped by society’s priorities and concerns,” said Harding and Legassick.
“This isn’t a quest for closed solutions but rather an attempt to scrutinise and help design collective responses to the future impacts of AI technologies. With the creation of DeepMind Ethics & Society, we hope to challenge assumptions, including our own, and pave the way for truly beneficial and responsible AI.”
Google also has a separate AI ethics board, though little is publicly known about it. The board was established when Google acquired DeepMind in 2014, but it remains one of the biggest mysteries in tech, with both Google and DeepMind refusing to reveal who sits on it.
DeepMind is also studying how to ensure AI remains a benefit to humanity through the Partnership on AI consortium, which includes Facebook, Amazon, Apple, Microsoft, and many other tech companies.