As technology companies and product designers begin to focus on developing robots for social care, they will also need to consider the research challenges. Professor Tony Prescott recently co-authored the whitepaper Robotics in Social Care: A Connected Care EcoSystem for Independent Living for the UK Robotics and Autonomous Systems (RAS) Network. Here we highlight twelve key research challenges identified in the whitepaper.
For robots to operate effectively and flexibly in people's homes, advances are needed in AI capabilities for mapping and understanding human home environments. GPS does not work inside buildings; options include integration with smart-home localisation systems, tagging rooms with machine-readable beacons, and developing indoor localisation and mapping algorithms based on vision or active sensing (e.g. pulsed laser light). These systems must be able to cope with changing environments, including people and objects that move. Advances are also needed in technologies to recognise everyday objects and understand what they are used for.
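The beacon-based option can be illustrated with a minimal sketch, assuming the robot can measure its range to a few fixed, machine-readable beacons at known positions (all names and numbers here are illustrative, not from the whitepaper). Subtracting one range equation from the others linearises the problem, and a small least-squares solve gives a 2D position estimate:

```python
import math

def trilaterate(beacons, dists):
    """Estimate a 2D position from beacon positions and measured ranges.

    Linearises the range equations by subtracting the first one, then
    solves the resulting system by least squares (normal equations).
    Needs at least three non-collinear beacons.
    """
    (x1, y1), d1 = beacons[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        rows.append([2 * (xi - x1), 2 * (yi - y1)])
        rhs.append(d1**2 - di**2 - x1**2 + xi**2 - y1**2 + yi**2)
    # Normal equations (A^T A) p = A^T b, solved with Cramer's rule (2x2).
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x, y

# Three beacons at known corners of a room; ranges to a point at (1, 2):
pos = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [math.sqrt(5), math.sqrt(13), math.sqrt(5)])
```

Real systems must additionally handle noisy ranges, moving obstacles, and beacons going out of sight, which is where the harder research lies.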
To be able to help people, robots will first need to understand them better. This will involve recognising who people are and what their intentions are in a given situation, and then understanding their physical and emotional state at that time, in order to make good judgements about how and when to intervene. These systems will need some understanding of how people interact with each other and with objects in their environment in order to operate safely and usefully. In a real-world situation, the robot might also have to deal with unpredictable interventions, such as a pet or young child doing something unexpected.
Current robots fall well short of the dexterity and safety needed to physically interact with people in tasks that involve close physical contact, including many aspects of dressing, feeding, and toileting. The human ability to grasp and manipulate complex objects with our hands is unique in nature, particularly as humans draw on a range of senses and years of learnt knowledge to do this; emulating it with artificial systems will be extremely challenging.
Whilst technologies for automatic speech recognition (ASR) are improving, these systems need to be adapted for use in noisy environments and for a wider range of voices, accents and dialects. Some people who are in need of care have impaired speech and are not able to benefit from current ASR technology. Research is also needed to understand verbal intonation and forms of non-verbal communication such as expression and gesture.
The ability to understand the meaning of what people are saying (as opposed to simply recognising the sequence of words in human speech) is still at a relatively primitive level within AI. This means there can be a disconnect between the ability of existing RAS technologies to talk fluently and their capacity to engage in meaningful dialogue. This can be confusing and disappointing to users; indeed, until dialogue systems improve it might be better for such systems to talk in simpler language, more commensurate with their actual level of understanding.
Learning from data, social interaction and knowledge sharing.
RAS will benefit from learning offline and on the job, based on the latest advances in machine learning and including the ability for users to train robots in specific tasks. Acquired knowledge should be transferable between robot platforms and tasks given appropriate safeguards around data privacy.
In addition to acquiring skills, RAS technologies will need to have some memory of events and to be able to relate these to people's routines and preferences. Whilst current robots can store everything that happens as raw sensory streams, this does not mean that they are able to retrieve useful information when needed, or even understand the significance of past events. Data needs to be classified, tagged, and stored in such a way that event memories can be accessed using contextual and user-provided cues. Memory systems need to operate in a transparent way that respects user privacy; in particular, users should be able to choose when the system is storing data, and should be able to selectively delete stored data.
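The shape of such a memory system can be sketched as a tagged event store with cue-based recall, a user-controlled recording switch, and selective deletion. This is a minimal illustration, not a design from the whitepaper; all class and method names are hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float
    tags: frozenset
    description: str

class EventMemory:
    """Tagged event store: recall by contextual cues, selective deletion,
    and a user-controlled switch that stops recording entirely."""

    def __init__(self):
        self.recording = True
        self._events = []

    def record(self, tags, description, timestamp=None):
        if not self.recording:  # user has paused data collection
            return False
        when = timestamp if timestamp is not None else time.time()
        self._events.append(Event(when, frozenset(tags), description))
        return True

    def recall(self, *cues):
        """Return events whose tags include every given cue."""
        wanted = set(cues)
        return [e for e in self._events if wanted <= e.tags]

    def forget(self, *cues):
        """Selectively delete matching events; returns how many were removed."""
        wanted = set(cues)
        kept = [e for e in self._events if not wanted <= e.tags]
        removed = len(self._events) - len(kept)
        self._events = kept
        return removed
```

The hard research problems sit behind the `tags` argument: deciding, from raw sensor streams, which events are significant and what labels they deserve.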
Long-term autonomy and safe failure.
To provide effective support in the home, RAS technologies will need to be able to operate safely 24/7. This requires improvements in the physical design of robots to make them more robust, and strategies for self-monitoring and diagnosis of failure. To address the fragmentation we observe in current solutions, there is a need to promote component re-use and to develop system architectures that are validated for operation across multiple platforms. For critical support systems, strategies to manage down-time during charging or maintenance cycles will need to be considered. When systems fail, it is also important that they do so without compromising user safety, and that they are able to recover autonomously where possible. To demonstrate and evaluate how assistive RAS technologies can benefit their users and adapt to their evolving requirements, it will be necessary to set up and run long-term user trials.
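One common self-monitoring pattern, sketched here under illustrative names, is a heartbeat watchdog: each subsystem reports in periodically, and the robot drops to a safe state as soon as any component goes quiet for too long:

```python
class HealthMonitor:
    """Tracks component heartbeats; a component whose last heartbeat is
    older than `timeout` seconds is flagged as stale, and the robot
    should fall back to a safe state rather than keep operating blind."""

    def __init__(self, timeout):
        self.timeout = timeout
        self._last_seen = {}  # component name -> time of last heartbeat

    def heartbeat(self, component, now):
        self._last_seen[component] = now

    def stale(self, now):
        """Components that have missed their heartbeat deadline."""
        return sorted(c for c, t in self._last_seen.items()
                      if now - t > self.timeout)

    def safe_to_operate(self, now):
        # Safe only while every registered component is reporting in time.
        return bool(self._last_seen) and not self.stale(now)
```

A real deployment would layer recovery strategies on top (restart the component, degrade gracefully, alert a carer), but the failure-detection core is this simple.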
Dynamic autonomy and responsibility.
Human care providers need to be able to override or indirectly control (teleoperate) robotic care systems; this means that such systems should be designed to support varying levels of autonomy. There is a range of stages between full autonomy and direct human control, and the foundations and implementation of variable autonomy across these stages are only just beginning to be developed. Relatedly, tracking the locus of responsibility in a human-robot system is also important.
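A minimal sketch of what variable autonomy with a responsibility trail might look like (the level names and arbitration rules are illustrative assumptions, not the whitepaper's design): a command arbiter lets a human override always win, blocks unsupervised action below full autonomy, and logs who was responsible for each decision:

```python
from enum import Enum

class Autonomy(Enum):
    TELEOPERATED = 0   # human drives directly
    SUPERVISED = 1     # robot proposes, human must approve
    AUTONOMOUS = 2     # robot acts, human may override

class CommandArbiter:
    """Chooses between robot-generated and human commands, and logs who
    was responsible for each action (the locus of responsibility)."""

    def __init__(self, level):
        self.level = level
        self.log = []  # (actor, command) pairs, oldest first

    def decide(self, robot_cmd, human_cmd=None):
        if human_cmd is not None:                  # explicit override always wins
            self.log.append(("human", human_cmd))
            return human_cmd
        if self.level is not Autonomy.AUTONOMOUS:  # robot may not act alone
            self.log.append(("human", None))
            return None                            # wait for operator input
        self.log.append(("robot", robot_cmd))
        return robot_cmd
```

The log is what makes responsibility traceable after the fact: every executed command is attributed to either the robot or the human operator.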
Verification and validation.
We need to be able to verify that RAS technologies will perform as intended, and that their behaviour will match requirements and emerging regulations on service robot safety. Since RAS technologies will need to be tested rigorously under a range of different conditions to ensure they behave in a predictable manner, suitable benchmarking and testing frameworks will need to be defined.
Assistive technologies must be designed to optimise energy efficiency, with a clear strategy for reuse and recycling that minimises environmental impact. Mobile systems should be able to recharge autonomously and frequently, so as to reduce the need to carry heavy and expensive batteries.
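The "recharge frequently to carry a smaller battery" idea reduces to a simple docking policy. A sketch, with made-up threshold values purely for illustration: always dock below a hard reserve, and opportunistically top up whenever the robot is idle:

```python
def should_dock(battery_pct, is_idle, reserve_pct=20.0, top_up_pct=60.0):
    """Recharge policy sketch: always dock below a hard reserve level;
    when idle, also dock below a higher top-up level so the robot
    charges often and can get by with a smaller, lighter battery."""
    if battery_pct <= reserve_pct:      # hard floor: never risk stranding
        return True
    return is_idle and battery_pct <= top_up_pct
```

A production policy would also weigh pending tasks, distance to the dock, and predicted demand, but the two-threshold structure is the core trade-off.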
Integration, networking and security.
To operate as part of the wider care ecosystem, control system architectures will need to be rethought for an environment where communication and interaction with other smart systems is the norm. For instance, an assistive robot could sense the environment through ceiling sensors in another room, and decide to terminate its current task due to a more urgent need elsewhere. Given the importance of protecting privacy, good cyber-security is essential. The use of data must be transparent, and user consent must be obtained for any transfer of data outside the home.
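The preemption behaviour in that example can be sketched as priority-based task arbitration (an illustrative design, not taken from the whitepaper): the robot works on the highest-priority task, and an urgent event arriving from elsewhere in the network suspends and requeues whatever it was doing:

```python
import heapq

class TaskScheduler:
    """Priority-based task arbitration: the robot works on the highest-
    priority task; a more urgent event (e.g. reported by a ceiling
    sensor in another room) preempts the current task, which is
    suspended and requeued for later."""

    def __init__(self):
        self._queue = []      # min-heap of (-priority, name)
        self.current = None   # (name, priority) or None

    def submit(self, name, priority):
        heapq.heappush(self._queue, (-priority, name))
        self._dispatch()

    def _dispatch(self):
        if not self._queue:
            return
        top_priority = -self._queue[0][0]
        if self.current is None or top_priority > self.current[1]:
            if self.current is not None:  # suspend and requeue current task
                heapq.heappush(self._queue, (-self.current[1], self.current[0]))
            neg, name = heapq.heappop(self._queue)
            self.current = (name, -neg)
```

Only a strictly higher priority preempts, so routine tasks are not endlessly interrupted by peers of equal urgency.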