thresholds 2

Below are some examples of what is possible with highly trained bodies.

A next step for me is thinking through how to design appropriate embodied movement tasks to train humans working with cobots or industrial robots, enhancing kinaesthetic awareness and empathy (for human and non-human collaborators). The goal is to help humans perceive the movement of their robot collaborator and potentially move towards more improvisational work that does not require programmed, repeatable movement. Can we create approaches that lead to greater flexibility at work, and to working across different tasks more efficiently?

A caveat of the videos below is that these are choreographed movements that have been thoroughly rehearsed. However, the way expert performers are trained, and the way they practise, does offer a way to consider the types of embodied methods that could be enlisted to calibrate any human for their work.

thresholds

I have been thinking a lot about thresholds since we started the project. Thresholds for me relate to safety, comfort, barriers, awareness, reading intention and perception. At the moment we are using the robot arm to throw boxes to me while I stand behind a barrier. The barrier has moved in closer recently but remains at a distance beyond the reach of the robot. My question is: can we remove the barrier and share space? Move more intimately around one another?

The industrial ABB robot I have been working with at the ARM Hub has lasers that provide information about movement within the robot's workspace and act as a safety switch. This has led to the approach of throwing the box and developing game structures with me behind the barrier.

The challenge of working behind the barrier is that I miss the tactility and more direct interaction that become possible when we work closely together.

It was suggested that working with a model of the robot in the HoloLens might be a solution. However, this becomes another vision-based tool, with a strong frontal relationship that doesn't allow my whole body to be involved at once in sensing and responding to the robot. I'm wondering how we can move beyond relying on vision as a feedback mechanism and engage our senses holistically to perceive, interpret and respond appropriately. Can I attune myself to reading the robot's next move without seeing it?

The next big step for me is to have a conversation with a safety expert. I'd like to ask a series of curly questions to see how much closer I can get to the robot and what negotiations might need to occur.