Author: Conor McGinn, Department of Mechanical and Manufacturing Engineering, Trinity College Dublin

In 1940, Isaac Asimov published the short story Robbie, the first of his many robot stories. Asimov, who is today credited with coining the term ‘robotics’, was fascinated by automata and the psychological and practical effects they would one day have on mankind. Robbie centres on a young girl called Gloria, who develops a caring relationship with her robotic nursemaid, whom she names ‘Robbie’. It goes on to detail the fallout between Gloria and her parents when, for selfish and misguided reasons, they decide to sell ‘Robbie’. Compared with similar fiction of the day, it is particularly interesting that in this case the robot is portrayed as a force for good; through their ignorance and misplaced fear, it is the human parents who emerge as the ‘bad guys’. Fast forward 72 years to 2012. Despite mankind's incredible technological progress (for example: jet engines, space shuttles, modern computers, automated manufacturing, the internet, digital music), ground-breaking progress in mobile robotics remained somewhat elusive. While several impressive robots had been built and demonstrated in laboratories (e.g. Honda’s Asimo series, Boston Dynamics’ BigDog and the Massachusetts Institute of Technology’s Kismet), few were at product-development stage or even capable of intermediate- to long-term use. Some advanced robotic solutions were commercially available, but these machines were often highly specialised (such as robotic vacuum cleaners and automated golf caddies), extremely expensive (often more than $200,000) and/or impractical to customise or upgrade. Therefore, when Joanne O’Riordan, the Cork teenager with Total Amelia, posed the challenge at the United Nations conference for Women in Technology to “build me a robot”, the challenge was truly a difficult one.
Of course, people with Total Amelia are not the only group who could benefit from assistive mobile robots. Indeed, personal robots that can perform everyday tasks have the potential to empower the wider disabled community, including groups such as the elderly and injured war veterans, not to mention the many roles they could perform in everyday civilian life. Prior to issuing her challenge, Joanne was well known within our research group in Trinity College. Prof Kevin Kelly, who had for several years run an annual engineering summer school for transition-year girls and is active in the promotion of engineering, had seen Joanne in the news. She had championed engineering research and how it could improve living standards for people with disabilities. Our group was actively engaged in robotics research and engineering design, so when Joanne posed her challenge, we felt both motivated and capable of accepting it.

DESIGN CHALLENGE

The first stage in any design is understanding what the design should achieve, then knowing the constraints and defining what the measurable outcomes should be. A key part of our design philosophy is that good design must be user-centred and respond to the real needs of the user, rather than just what they may be able, or willing, to initially articulate. After many meetings with Joanne, it was clear that she wanted a robot that was distinctly separate from her existing wheelchair – in other words, she did not want a robotic wheelchair. Clearly, a good solution would be reliable, capable of operating wherever Joanne would go and possess a battery life in line with that of her electric wheelchair. Due to the severity of Joanne’s disability, the robot had to be capable of performing multiple tasks while requiring little physical modification to do so. Since Joanne was not a qualified engineer or technician, its interface had to be simple to use and require no formal training.
Finally, Joanne explained that the single biggest practical requirement for the robot was to pick up objects that she often dropped on the ground – typically cutlery, a stylus for a tablet and her mobile phone. Of the many constraints that were identified, three in particular were regarded as the most significant. Firstly, nobody had yet succeeded in building such a versatile, multi-purpose machine and this was clearly a major obstacle. While a great deal of interesting research had been done by the robotics community and could be leveraged in our solution, it was apparent from the start that no roadmap to success existed; whatever we built would carry a significant risk of failure. Secondly, the initial funding for the project (generously put forward by the International Telecommunication Union) stipulated that the first full-scale prototype solution should be ready in time for the premiere of ‘No Limbs, No Limits’, a documentary being made on Joanne’s life. This deadline gave us fewer than five months to deliver a working prototype, and the increased media attention would place additional pressure on us to succeed. Thirdly, despite the €50,000 funding we received for the project, we knew that, relative to other projects of comparable scale and ambition, we were working with a very small budget. With the overall objective of the project and the constraints at hand, in late May 2013 we began work on Robbie the robot for Joanne. The robot would be a prototype for bigger things to come; it would be a flavour of what we were capable of doing. It would demonstrate basic functionality in locomotion, grasping and manipulation, control and social interaction – put together, an array of functionality that had rarely been incorporated on a single robot platform.
We hoped that by demonstrating several key performance requirements, it would spur further funding that would, in turn, allow us to refine and improve the machine until we succeeded in constructing the all-round solution Joanne requires. The following seven functional requirements of the prototype were defined:

  1. Easily and intuitively remotely operated by tablet/smartphone;
  2. Rely exclusively on on-board computers (does not rely on wireless links to desktop PCs);
  3. Spatial footprint comparable with wheelchair;
  4. Similar shape to a human (in order to exploit human ergonomics);
  5. Able to pick up objects from ground;
  6. Battery life comparable to wheelchair;
  7. Social interface to provide useful information to the user and people in the vicinity of the robot.
MORPHOLOGY, LOCOMOTION AND CONTROL

[caption id="attachment_15230" align="alignright" width="941"] Fig 1: (a) a graphical representation of the robot’s primary features and joints (b) a visualisation of how the stabilising mechanism is used to enable the robot to maintain static stability at different postural orientations (click to enlarge)[/caption]

The proposed design can be described as a nine-degree-of-freedom humanoid robot that possesses a torso, arms, head and legs (Fig 1a). Instead of using a walking gait for locomotion (as is the case with bipedal robots), the left and right legs are rigidly fixed to each other and utilise a pair of differentially driven wheels at the ankle joint. To ensure static stability (1), an actuated stabilising link located in the proximity of the knee joint was implemented. Actuation of this joint causes a proportional change in the robot’s wheelbase. Therefore, through co-ordinated movement of the hip, knee and stabiliser joints, the robot is able to adjust its height and consequently its centre of mass, all while remaining statically stable (Fig 1b). It is noted that while static stability is maintained, the degree of static stability, formally known as the static stability margin, is not. Therefore, for positions where the robot is less stable (i.e. in standing orientations), a self-balancing controller is employed. Using inertial data from on-board accelerometers and gyroscopes, this controller adjusts the wheel velocities so as to maintain a desired pose (much like how a Segway remains upright). Ensuring static stability is maintained results in an inherently energy-efficient solution, as energy is only expended when performing tasks explicitly required by the user.

[caption id="attachment_15231" align="alignright" width="372"] Fig 2: A computer rendering of Robbie the Robot v1.0.
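As a rough illustration of how such a self-balancing controller can work, the sketch below fuses accelerometer and gyroscope readings with a complementary filter and applies a simple PD law to produce a wheel velocity command. All gains, the filter constant and the sign convention are assumptions made for illustration; they are not the robot's actual control parameters.

```python
class BalanceController:
    """Toy Segway-style balancer: fused pitch estimate -> wheel velocity command.

    The gains and complementary-filter constant below are illustrative
    assumptions, not values taken from the actual robot.
    """

    def __init__(self, kp=12.0, kd=1.5, alpha=0.98):
        self.kp = kp          # proportional gain on pitch error (assumed)
        self.kd = kd          # derivative gain on pitch rate (assumed)
        self.alpha = alpha    # gyro/accelerometer blend factor (assumed)
        self.pitch = 0.0      # fused pitch estimate, radians

    def update(self, accel_pitch, gyro_rate, dt, target_pitch=0.0):
        # Complementary filter: integrate the gyro for short-term accuracy,
        # and pull slowly toward the accelerometer's gravity-derived pitch
        # to cancel gyro drift.
        self.pitch = (self.alpha * (self.pitch + gyro_rate * dt)
                      + (1.0 - self.alpha) * accel_pitch)
        # PD law: drive the wheels under the lean to restore the target
        # pose (positive pitch = leaning forward in this convention).
        error = target_pitch - self.pitch
        return self.kp * error - self.kd * gyro_rate
```

In a real controller, this command would be blended with the user's drive input and sent to the differential wheel motors at a fixed loop rate.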
It is noted that all the electronics and on-board computers are located within the exposed chest cavity[/caption]

A differentially driven wheelbase was deemed optimal for this robot as it is robust, energy efficient and simple in operation, possesses high manoeuvrability and facilitates accurate, reliable odometry. While legged solutions may possess certain advantages, the increased energy consumption, complexity and mass they require made them unsuitable. Additionally, since Joanne relies on a wheelchair for transport, it was observed that a wheeled solution would be appropriate for use in all the places it would be needed. A graphical rendering of the final prototype is shown in Fig 2. The robot is presently controlled by a laptop through a wireless interface or through an iPad app developed especially for Joanne (Fig 3). The former interface is used primarily for testing and debugging purposes, while the latter has been designed for ergonomic everyday use.

Manipulation

A central requirement of this design was the robot's ability to pick up items from the ground. Through elongation of the arm at a prismatic joint in the elbow, a universal particle jamming gripper (2) (effectively a rubber balloon filled with small granular material) deforms around the object to be picked up (Fig 4). By creating a vacuum in the gripper, the gripper contracts and thus tightens around the object, which can then be safely lifted.

[caption id="attachment_15236" align="alignright" width="336"] Fig 3: An early prototype of the control app[/caption]

This type of gripper was favoured over more conventional manipulators due to its mechanical and control simplicity. As particle jamming grippers passively adapt to the shape of the object they pick up, the complex coordination between sensing, planning and actuation typically required to perform a ‘pick-up’ is substantially reduced.
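The pick-up procedure lends itself to a simple sequential state machine. The sketch below is a hypothetical illustration of that sequence; the robot API (method names such as `drive_to` or `vacuum_on`) is invented here for clarity and does not correspond to the real control software.

```python
from enum import Enum, auto

class PickupState(Enum):
    APPROACH = auto()   # drive to the object
    POSITION = auto()   # tilt torso, align gripper above the object
    GRASP = auto()      # elongate arm, jam the gripper with vacuum
    RETRACT = auto()    # shorten arm while holding the vacuum
    DONE = auto()

def pick_up(robot, target):
    """Run the jamming-gripper pick-up sequence on a hypothetical robot API."""
    state = PickupState.APPROACH
    visited = []
    while state is not PickupState.DONE:
        visited.append(state.name)
        if state is PickupState.APPROACH:
            robot.drive_to(target)                 # approach the object
            state = PickupState.POSITION
        elif state is PickupState.POSITION:
            robot.tilt_torso_forward()             # arm perpendicular to ground
            robot.align_gripper_above(target)
            state = PickupState.GRASP
        elif state is PickupState.GRASP:
            robot.extend_elbow()                   # elongate prismatic joint
            robot.vacuum_on()                      # jam the grains, grasp object
            state = PickupState.RETRACT
        elif state is PickupState.RETRACT:
            robot.retract_elbow()                  # vacuum held throughout
            state = PickupState.DONE
    return visited
```

Releasing the object would be the reverse step described in Fig 4: applying positive pressure at the gripper to un-jam the granular material.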
Social Interface

The social interface on the robot consists of a custom 3D-printed head mounted on a 1 DOF neck. A graphical monitor is mounted in the head and is, in turn, connected to a computer in the robot’s torso. This renders a dynamically responsive animated face. Both the face and the head can be easily customised to the user’s taste and/or requirements (e.g. a user with poor vision may require the robot to have larger eyes with a high contrast to the face). Furthermore, since this interface is connected directly to the computer controlling the robot, it can double as both a programming interface and a medium to directly communicate the robot’s internal states, such as sensor readings, belief states and joint trajectory plans. While these two features may not be deemed critical in the everyday usage of the robot, in practice they have proved extremely useful, as they eliminate the need to plug Robbie the Robot into external monitors while tuning and performing maintenance operations.

[caption id="attachment_15240" align="alignright" width="940"] Fig 4: The sequence of picking up an object from the ground: (i) approach the object (ii) tilt the torso forward, position the arm perpendicular to the ground and locate the gripper directly above the object (iii) elongate the arm until the gripper deforms around the object, then turn on the vacuum pump to tension the gripper and grasp the object (iv) while maintaining the vacuum in the gripper, shorten the arm at the elbow and move to the desired position. To release the object, apply positive pressure at the gripper (click to enlarge)[/caption]

Having conducted a range of tests with human volunteers, we found that by reducing the complexity of the facial expressions to five discrete states (neutral, happy, sad, angry, surprise), people could easily understand and distinguish each state. Additionally, the robot was made to blink so that people would not fear that the screen had frozen or the program had stopped working.
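By way of illustration, the five discrete expression states and the blink behaviour could be modelled as below. The state names follow the text, while the class interface and blink timing constants are purely assumed for this sketch.

```python
# Five discrete expression states, as described in the text.
EXPRESSIONS = ("neutral", "happy", "sad", "angry", "surprise")

class Face:
    """Minimal sketch of a discrete-state animated face with periodic blinks."""

    BLINK_PERIOD = 4.0   # seconds between blinks (assumed value)
    BLINK_TIME = 0.15    # how long the eyes stay closed (assumed value)

    def __init__(self):
        self.expression = "neutral"

    def set_expression(self, name):
        # Restricting the face to a small, distinguishable set of states
        # keeps each expression easy for people to recognise.
        if name not in EXPRESSIONS:
            raise ValueError(f"unknown expression: {name}")
        self.expression = name

    def eyes_open(self, t):
        # Close the eyes briefly once per period, reassuring onlookers
        # that the display has not frozen.
        return (t % self.BLINK_PERIOD) >= self.BLINK_TIME
```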
Figure 5(a) provides a sample of these expressions on the developed prototype, with 5(b/c) showing how the same display can be used for debugging/maintenance and for displaying other information, such as a camera feed from a remote location.

FUTURE WORK

[caption id="attachment_15244" align="alignright" width="941"] Fig 5: Robbie’s head, showing: (a) ‘neutral’ expression, (b) debugging/maintenance, (c) video feed and localisation operation[/caption]

On 21st March 2014, we officially unveiled our first prototype robot to Dr Hamadoun Touré, Secretary-General of the International Telecommunication Union (ITU), and a delegation of journalists at a special event in Trinity College. We gave a live showcase of the robot’s functionality, including how it can pick up objects from the ground – a feat very few robots of its size and morphology have ever demonstrated. Through a matching donation of €50,000 from President Paul Kagame and the Rwandan Government, it has been possible to continue our work on the project. We have since commenced work on a second, substantially more sophisticated robot that encompasses several significant design improvements. These include a state-of-the-art gripper that integrates next-generation tactile sensing, custom-made compliant actuators and a novel distributed electronic architecture capable of undertaking much of the low-level filtering and control operations – thus freeing up resources on the main PC control boards. The team is pleased with the progress we have made and the success we have achieved since starting the project. We are extremely grateful to the companies that have sponsored and supported our work to date, in particular the ITU, DID Electrical, AndyMark, MaxAmps and Flexitech, and we thank the journalists and media outlets who helped publicise and promote our efforts in addressing Joanne’s challenge. However, it is important that people realise the project is not complete and that further support and help is needed.
A substantial amount of work remains before Joanne’s dream of having a robot helper can become a practical reality. Based on what we have already delivered, we believe we are up to the task of pioneering such work and hope that people continue to support our efforts. We are confident that someday soon, Asimov’s prophecy will indeed become a reality – we will indeed build ‘Robbie’.

Conor McGinn is currently completing a PhD in robotics in the Department of Mechanical and Manufacturing Engineering in Trinity College Dublin. His primary research concerns the design and control of autonomous robotic systems. He has previously worked as a design consultant on the GROVER project at NASA and, among other things, has been involved in the development of an autonomous robotic wheelchair, a biped humanoid robot, a quadrotor and several types of robotic grippers. McGinn is the founder of the Trinity Robotics Club and is Secretary of Robotics Ireland, a national organisation established to promote robotics in Ireland.

http://youtu.be/iS9eUBmpRWI

(1) Statically unstable mechanisms require actuation to remain balanced (e.g. an inverted pendulum). On the other hand, statically stable structures do not require actuation to remain in a desired pose (e.g. a chair).
(2) Amend, J. R. Jr; Brown, E.; Rodenberg, N.; Jaeger, H.; Lipson, H., ‘A Positive Pressure Universal Gripper Based on the Jamming of Granular Material’, IEEE Transactions on Robotics, vol. 28, pp. 341-350, Apr. 2012.