APD News

Lagging behind in AI? Here's one of the EU's fightback plans

Science

2019-04-09 20:18

An industry report published by the McKinsey Global Institute last month revealed the European Union's emerging weakness in the digital era, signaling to top European officials the urgency of bridging the gap.

The report shows that without faster and more comprehensive engagement in AI, Europe risks falling further behind the world's AI leaders, the United States and China.

Moreover, the field of competitors in the AI race is expanding, with countries including Canada, Japan and South Korea making strides.

At a time when new digital technologies such as artificial intelligence (AI) are increasingly being adopted, the EU urgently needs to roll out its fightback plans for a successful digital transformation.

Pilot ethical rules to boost AI development

Over the years, AI has been used across a wide range of sectors, from healthcare, energy consumption, car safety and climate change to financial risk management and cybersecurity threat detection.

But the frontier technology brings not only benefits but also new challenges and concerns.

In the medical industry, for example, AI is becoming increasingly sophisticated at doing what humans do, but more efficiently, accurately and cheaply.

According to the American Cancer Society, a high proportion of mammograms yield false results, leading to one in two healthy women being told they have cancer. The use of AI enables the review and translation of mammograms 30 times faster, with 99 percent accuracy, reducing the need for unnecessary biopsies.

But what if the remaining one percent happens? What if a misdiagnosis leads to a medical accident? Who is then accountable for the outcome: the technology provider, the hospital or the physician?

The wave of AI-powered autonomous driving has become irresistible. By 2025, the market for partially self-driving vehicles is expected to reach 36 billion U.S. dollars, according to industry data.

A similar dilemma faces the auto industry. Consider the fatal Tesla crash in 2016: the driver was reported to have died while the car was running in Autopilot mode. Who should be blamed, the driver or the manufacturer?

These are just two cases in point when it comes to AI's ethical issues; the list goes on and on. Such questions force industry practitioners and academics, as well as policymakers across the world, to think hard about AI ethics.

The EU's top officials have been caught between supporters and opponents of the cutting-edge technology for several years. But they undoubtedly believe AI is the future. The problem is how to build human-centric AI that people can trust.

The EU gave Google 90 days to end "illegal" practices surrounding its Android operating system or face further fines, after slapping a record 4.34-billion-euro (5 billion U.S. dollars) antitrust penalty on the US tech giant. /VCG Photo

EU's plan 

On Monday, the EU unveiled ethics guidelines under its AI strategy of April 2018, aiming to boost trust in AI while trying to clear up some of these concerns.

The guidelines can be boiled down to "seven key requirements" for trustworthy AI:

- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

- Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

- Transparency: The traceability of AI systems should be ensured.

- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

- Societal and environmental well-being: AI systems should be used to enhance positive social change and improve sustainability and ecological responsibility.

- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

This summer, the Commission will launch a pilot phase involving stakeholders from different sectors to gather feedback, then evaluate the outcome and decide on next steps in early 2020.

Photo from Google website

Last week, Google's farcical dissolution of its week-old AI ethics council once again drew public attention to the troublesome issue.

The Advanced Technology External Advisory Council, which was supposed to supervise Google's AI development and tackle some of the company's most complex challenges arising under its AI Principles, such as facial recognition and fairness in machine learning, was declared dead after dissension over the controversial identities of its members.

According to Engadget, the foundation headed by one appointee, James, has a long history of climate change denial and anti-immigrant sentiment, and James herself has espoused those views and been vocally anti-trans and anti-equality.

(CGTN)