Event Report – European Societal Challenges: The Future of Human-Robot Interaction

John Procter, MEP for Yorkshire and the Humber, hosted the event. He emphasised the growing impact AI will have on society and the need for a collaborative, proactive approach to the challenges it raises.

The event’s chair, Professor Hai-Sui Yu, Deputy Vice-Chancellor: International at the University of Leeds, highlighted the research excellence of each of the three universities and their continued aim to work closely together, both as a consortium and with other international partners.

The first speaker was Dr Amanda Sharkey from the University of Sheffield. She discussed her research on the ethics of robot care and companionship, stressing that the purported benefits of such robots should be scrutinised closely and over the long term. Because people differ on what is acceptable, there is rarely a single clear ethical answer; she therefore argued for a framework for ethical assessment.

Professor Pietro Valdastri, Director of Robotics at Leeds, discussed his research on medical robotics. Colorectal cancer is one of the most common cancers in both men and women, yet the colonoscope’s design has not improved since the 1960s: the instrument is expensive (around €90,000), very difficult to operate and often extremely painful for patients. Innovative research at Leeds using an intelligent magnetic system could bring the instrument cost down to just €2.

Professor Robert Richardson asked how we could make the city self-repairing. Small robots could enter difficult access points such as underground spaces, bridges or sewage pipes, removing the risks to the human workers who currently perform these tasks. These robots could carry out repairs automatically at night and shy away from human contact to make them as unobtrusive as possible. By 2035, Leeds could be the world’s first self-repairing city, with 3D-printed drones able to repair potholes or street lights automatically. Key issues raised, however, were the need to study how thousands of drones carrying out autonomous tasks might affect existing wildlife such as bees and birds, and the legalities of regulating these drones.

Director and co-founder of Sheffield Robotics, Professor Tony Prescott showcased Sheffield’s research strengths, including its Advanced Manufacturing Research Centre; biomimetics and brain-based robots; cognitive robotics; swarm and collective robotics; and assistive robotics and smart homes. He saw no reason why robots should not one day be socially or even self-aware, even if they are not currently. At the networking drinks he displayed the MiRo, a brain-based social robot designed to provide emotional engagement and entertainment, and attendees could freely interact with it.

Dr James Law argued that designing robots responsibly means engaging stakeholders. He emphasised that if the right balance is not struck, we risk losing the benefits these technologies could bring. People are concerned not only that robots will take their jobs, but that they will take over the world. He described a project with a company in a deprived area of Yorkshire. The company had a low-skilled manual workforce and wanted to innovate, deploying robots to improve health and safety. In a series of co-design workshops, employees who had initially arrived believing their jobs were at risk changed their attitude through real interactions with robots. This clearly showed the benefit of investing in research and feedback. Dr Law noted that current Horizon 2020 calls seem to include ethical and legal issues as merely desirable ‘add-ons’ while prioritising technical research. He recommended that future calls in Horizon Europe place more emphasis on integrating ethical approaches and responsible innovation practices, and support more interdisciplinary ways of working.

Professor John McDermid, Director of the Assuring Autonomy Programme at the University of York, described the £12 million partnership, which runs across eight departments to ensure the safety of robotics and autonomous systems. Robots can relieve people of very dull jobs, so it is important that they are reliable in use. Bio-inspired robotics offers self-healing systems that give machines the ability to detect, diagnose and repair failures themselves, keeping humans out of challenging environments. New approaches are needed: to meet these challenges we must give robots the knowledge to make decisions for themselves. This may give rise to unexpected behaviours and presents new legal challenges – how does autonomy affect general legal principles as well as the implementation of legal frameworks? The same problem arises over where responsibility lies with self-driving cars. We must ask ourselves whether we are willing to accept that robots may not be the perfect entities we desire: they might not do exactly what they are programmed to do, but may instead behave in unexpected ways.

Jim Dratwa, Head of the European Group on Ethics in Science and New Technologies (DG RTD), closed with a keynote. He asked the crucial question: “What world do we want to live in? And who is included in this ‘we’?” There are two ways of thinking about the rise of robots and autonomous systems: the first paints a gloomy picture in which intelligent robotic systems signify the end of humanity as we know it, while the second sees AI as a force for good, widening public participation for the people and with the people. As AI will affect everyone, creating an institutional framework for it will require wide-ranging public deliberation. An active decision has been made to put societal challenges at the heart of EU strategy, and the European Commission’s communication on AI shows how seriously this is being taken. It stands on three pillars: first, supporting rapid innovation by increasing annual investment in AI by 70% under Horizon 2020; second, preparing for the socio-economic changes brought about by AI; and third, ensuring an appropriate ethical and legal framework. A co-ordinated approach has been advocated, since it is only by all European countries working together that we can make the most of the opportunities offered by AI and become a world leader in a technology that will play such a huge part in the future of our societies.

The future of human-robot interaction seems closer to our daily lives than ever. It is therefore imperative that we create and implement a standardised framework to address the ethical and safety issues that will arise. The first draft ethics guidelines for trustworthy AI, published in December 2018, indicate how high a priority this is on the EU political agenda.

Outcomes and next steps:

  • The White Rose academics forged strong links with a variety of EU stakeholders from robotics networks, other universities and industry professionals.
  • The White Rose academics will continue to work closely together to tackle key societal issues, including on joint projects such as the White Rose Collaboration on ‘AI Law and Ethics: The Challenge of AI Wrongdoing’ which raises serious issues about the adequacy of existing frameworks to manage the rise of AI, in particular, autonomous and semi-autonomous systems.

For further information, please contact: k.r.norris@whiterose.ac.uk
