Accountability primary for companies using AI technology, say new EU guidelines


Devdiscourse News Desk | Updated: 08-04-2019 17:59 IST | Created: 08-04-2019 17:26 IST
Image Credit: Pixabay

Companies working with artificial intelligence need to install accountability mechanisms to prevent it from being misused, the European Commission said on Monday, issuing new ethical guidelines for a technology open to abuse by authoritarian regimes.

AI projects should be transparent, subject to human oversight, built on secure and reliable algorithms, and compliant with privacy and data protection rules, the Commission said, among other recommendations. The EU initiative taps into a global debate about when, or whether, companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without risking killing off innovation.

"The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies," Commission digital chief Andrus Ansip said in a statement. AI can help detect fraud and cybersecurity threats, improve healthcare and financial risk management and tackle climate change.

But it can also be used to support unscrupulous business practices and authoritarian governments. The EU executive last year enlisted the help of 52 experts from academia, industry bodies and companies including Google, SAP, Santander and Bayer to help it draft the principles.

Companies and organisations can sign up to a pilot phase in June, after which the experts will review the results and the Commission will decide on the next steps. IBM Europe Chairman Martin Jetter said the guidelines "set a global standard for efforts to advance AI that is ethical and responsible."

(With inputs from agencies.)
