(This is my first blog post. I hope to write more as I attend interesting events or when a nice idea sparks in my mind.)
I recently participated in a three-day workshop, “Robots! Work, care, performance”, organized by the Dutch graduate programme WTMC with about 20 PhD candidates from various fields of STS and philosophy. The aim was to become more familiar with developments in robotics, consider the ethical implications, and discuss the relation between humans and technology. Great lectures by Peter-Paul Verbeek, Rinie van Est and Elisabeth Jochum kickstarted us right into the current debates on the state of robotics. Discussion about the relation between modern technology and humans in general, and ICT and robotics in particular, is often framed as a debate of for versus against. Peter-Paul Verbeek defused this framing by arguing that the opposition is not actually tenable, because both positions rest on the assumption that technology and humans can be distinguished and separated. In the sense that technology is (and always has been) intimately involved in our lives, we are all cyborgs, our perception and understanding of the world increasingly mediated by technology.
A review essay by Yoni van den Eede (2015), which I read in preparation, paints a similar picture of the relationship between humans and robot technology by reviewing how human nature is claimed by both transhumanist enhancement advocates and bioconservatives. This got me thinking about the place of the human, or rather the place of the robot, in a social practice theoretical approach. In this perspective, in which I have been immersed for some time, all human activity is approached as an integration of materials, meanings and skills into social practices. It is not difficult to conceive of robotics as the material element in an existing or emerging practice, taking over physical labor and forming an extension of the human practitioner. But what happens when we start automating more processes that are now considered ‘human jobs’: in the services sector, in decision making, in administration? When we become more successful at creating artificial intelligence, mimicking human behavior and roboticizing social interaction, could robots break out of their material category and become practitioners? And where does that leave the human; where can we make the distinction? Below I make an attempt to find the human in practice theory, in a future of convincing artificial intelligence and socially successful robotics.
Let’s consider Reckwitz’ (2002) formulation of a practice as the interconnectedness of many elements: forms of bodily activities, mental activities, things and their use, background knowledge in the form of understanding, know-how and notions of competence, and states of emotion and motivational thinking. Taking these elements in turn, I don’t see much that could not be ascribed to a fully capable, artificially intelligent robot. Mental processes and background knowledge can be inscribed in programming. Arguably, interconnected artificial intelligence has access to endless background information captured in our digital networks. Think of medical diagnostics and drug prescription, where the vast and complex system of diseases and medical treatments is impossible for any one individual to keep up with. The last ingredients, emotions and motivational thinking, are perhaps the most human aspects of a practice: being able to consciously consider why exactly one is engaged in a practice, gaining emotional energy (Collins 2004) from successful performances, striving to improve our performance. Reflexivity might be what sets humans apart from robots in practice theory. But then again, we are usually not so very reflexive about our practices, and robots are not impeded by emotional constraints or lack of motivation in their performances.
Another consideration is that as long as we humans are in charge of creation, any robot will remain an extension of us, the practitioners. However, this extension of the human creator can be stretched to the point where recognizing the human as the practitioner becomes difficult. Think of robot lifters and welders in car factory lines: no cars are being assembled by humans. The robots are created by humans, but the practice of assembling a car is no longer recognizable as a human-carried practice. Another example is lights-out manufacturing: factories where, for days on end, robots work (and make more robots) without human involvement. Still, the eventual goal of these processes is one decided on by humans; it is human meanings of a good car or robot that are ingrained in these processes. In this sense the notion of ownership and meaning-giving is a human aspect of practices, and it maintains robots as a material element of practice.
Perhaps the human nature of practices lies in being a knowledgeable practitioner with the ability to reconfigure practices as circumstances and available elements change. Robots as we know them are very circumstantial, fit to a specific set of tasks in a specific environment. That is precisely the challenge roboticists see: creating machines that can deal with more uncertainty and have more capabilities. Humans reconfigure a practice in times of crisis, or within a new context where different elements are available or introduced. This can be done on the basis of previous experience, going back to older ways of doing for which some skills may have been trained. Reconfiguration can also be done on intuition, tinkering and making the best of a new set of elements. I suppose intuition is a very human capacity, not easily taught or programmed, but I wonder how far this capacity can be brought into robotics.
Finally, social practice theory holds that all practices are social because in every performance we draw upon elements that are outside of us, and in return we shape this social environment by reproducing practices. ‘Social’ usually has a human connotation; does it exclude robots from being practitioners? Social (in social practice) does not necessarily mean direct human interaction, and in that sense a robot performing any practice is acting socially.
All in all, it seems that aspects of social practice theory are open to robotic intrusion beyond robots being merely instrumental elements of a practice. But the question of whether robots can be practitioners remains.
Collins, R. (2004). Interaction Ritual Chains. Princeton University Press.
Van den Eede, Y. (2015). "Where is the human? Beyond the enhancement debate." Science, Technology, & Human Values 40(1): 149-162.
Reckwitz, A. (2002). "Toward a Theory of Social Practices: A Development in Culturalist Theorizing." European Journal of Social Theory 5(2): 243-263.