Europe Is Setting Up Base Guidelines For Future Safer AI Systems

The European Commission wants to make sure artificial intelligence has a set of rules to help avoid the crisis of trust social media is facing

The European Commission announced this week the launch of a pilot project that will test ethics guidelines for developing and applying artificial intelligence. The EC says this is necessary to ensure that future AI systems are safe and reliable across their entire life cycle, prioritize data protection, and allow users to control their own information. It also stresses the need for companies using AI to be transparent and non-discriminatory.

Liam Benham, the vice president for regulatory affairs in Europe at IBM, who was involved in drafting the AI guidelines, stated: “It’s like putting the foundations in before you build a house… now is the time to do it.”

With this intervention, the EC intends to break the pattern of regulators playing catch-up with emerging technologies, a pattern that can lead to unanticipated negative consequences. Because AI has drawn dire warnings about its potential for misuse, the bloc decided it needed a regulatory framework in place early.

The European Commission has set out seven major guidelines for AI development, and even though the guidelines are not binding, they will form the basis of further action in the future. The guidelines place responsibility squarely on those who build and deploy AI systems. EU commissioner Mariya Gabriel said that companies using AI systems should be transparent with the public:

“People need to be informed when they are in contact with an algorithm and not another human being. Any decision made by an algorithm must be verifiable and explained.”

The pilot program will start in June 2019, and the EU invites all stakeholders and individuals to test the guidelines, provide feedback, and work together on improving them.