Technologies like artificial intelligence and machine learning 'desperately need' oversight using an institutional framework and system of values, a speaker told delegates at the EuroScience Open Forum (ESOF) 2018 in France.
Jeroen van den Hoven, a professor of ethics and technology at Delft University of Technology in the Netherlands, said that people are waking up to the fact that the technologies shaping their lives are not value-free:
"People are becoming aware that this digital age is not neutral," he said. "It is presented to us mainly by big corporations who want to make some profit."
Van den Hoven, a member of the European Group on Ethics in Science and New Technologies (EGE), said: "We need to think about governance, inspection, monitoring, testing, certification, classification, standardisation, education, all of these things. They are not there. We need to desperately, and very quickly, help ourselves to it."
He also spoke about the need for a cross-Europe network of institutions that could provide a set of values, based on the EU's Charter of Fundamental Rights, which the technology industry could use to inform future work on AI.
The EGE published a statement on AI, robotics and autonomous systems earlier this year, which highlighted the moral issues around the topic and encouraged the development of a structured framework.
Following the release of the document, the European Commission announced in June that it had formed a group of 52 people from academia, science and industry, with the aim of developing guidelines for the EU's AI policy. The group will present its conclusions early next year.
Ethics in robotics
Ethical issues were a hot topic at the ESOF conference, Phys.org reports. One of the major points discussed was the lack of transparency in AI and machine learning, especially around neural networks. In such systems, humans can see the data going in and the answers coming out, but not how those conclusions are reached.
Maaike Harbers, of Rotterdam University, pointed out the ramifications of such a system in military drones:
"In the military domain, a very important concept is meaningful human control," she said. "We can only control or direct autonomous machines if we understand what is going on."
Other topics of concern included the effect of companion robots on children, which may shape their social relationships, and autonomous cars, discussed through the well-known moral quandary of which is the greater evil: a self-driving car that hits a group of people, or one that swerves to avoid the group and hits a single person.
Ebru Burco Dogan, of France's Vedecom Institute, said that while research has shown that most people favour the pragmatic solution, that is, hitting fewer people, they wouldn't want to buy or ride in a car programmed to behave that way.