EU aims to control AI ethics with stringent guidelines
European Commission lays out seven ethical areas for AI development to follow
The European Commission (EC) has published a set of guidelines to help businesses and governments develop artificial intelligence systems that are ethical and fair.
Building on the work of a group of independent experts appointed in June 2018, the EC is launching a pilot phase to see how these guidelines can be implemented in practice.
The Commission's framework lays out seven areas: human agency and oversight; technical robustness and safety; privacy and data governance; transparency, including traceability; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
But these rules aren't like Isaac Asimov's Three Laws of Robotics: they don't offer a moral framework for reining in rogue robots. Instead, they address problems that will affect society as AI becomes further embedded in sectors like health care, education, and consumer technology. The Commission's initial aim is to build trust in AI development.
"The ethical dimension of AI is not a luxury feature or an add-on," said Andrus Ansip, VP for the digital single market. "It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."
Take, for example, an AI system used to diagnose cancer. The EU's guidelines are there to ensure that certain safeguards are in place, such as keeping the software free from bias and preventing it from overriding human objections, including those made by a doctor.
Keeping bias out of AI has proven to be a problem for some of the biggest tech companies. Last week, Google's AI ethics council was disbanded before it had even begun its work, after the company's own employees protested the appointment of Kay Coles James over her record of anti-LGBT and anti-immigration positions.