Our Trust in Machines: How to Measure and Maintain It in Turbulent Times?
Nowadays, with the advent of the Internet of Things and ubiquitous computing, the upcoming trend is to detect and recognize emotional information with passive sensors that capture data about the user's physical state or behavior without interpreting the input. The current state of technological development does not clarify what the next stage of affairs will be, or what sort of use we will make of those technologies that are either replacing people or opening up a new, radically deeper level of machine-human interaction and interdependency. We are no longer sure if we are indeed following the path of humanizing technology, or rather moving towards adapting humans to technologies. Recent scandals involving major data leaks at Facebook, Grindr, and others do not make our understanding of this complex process any easier.
I have been working in the field of artificial intelligence and its societal implications for a decade now. The reliability of various devices, systems, and platforms arises as an important problem when one considers the level of trust that is placed in them. In a social context, trust has several connotations. It is characterized by the following aspects: one party (the trustor) is willing to rely on the actions of another party (the trustee), and the situation is oriented toward the future.
In addition, the trustor (voluntarily or forcedly) abandons control over the actions performed by the trustee. As a consequence, the trustor is uncertain about the outcome of the other's actions; they can only develop and evaluate expectations. Thus, trust can generally be attributed to relationships between people. It can be demonstrated that humans have a natural disposition to trust and to judge trustworthiness, one that can be traced to the neurobiological structure and activity of the human brain. When it comes to the relationship between people and technology, the attribution of trust is a matter of dispute. The intentional stance demonstrates that trust can be validly attributed to human relationships with complex technologies, and machine-learning-based trackers and sensors can be considered complex technologies. Thus, one of the key current challenges in the social sciences is to rethink how the rapid progress of technology has impacted constructs such as trust. This is especially true for information technology that dramatically alters causation in social systems: AI, wearable tech, bots, virtual assistants, and data. All of this requires new definitions of trust.
The Field of Bots Is Booming
In the spirit of understanding human-machine relations, and trust in particular, we have decided to devote a large chunk of our work to researching its complexity. We are an interdisciplinary and transatlantic team located on the East Coast of the US and in Warsaw, Poland. Our research has so far been devoted to the social interactions of chatbots that act as employees or members of the organizations that implement them. Chatbots (also called bots or conversational agents) are a perfect example of the implementation of the postulates of artificial intelligence by simulating human behavior based on formal models. Currently, the field of bots and natural language processing is booming. Bots are increasingly used in call centers, account management, and tele- and online marketing. We are all experiencing the ongoing process of introducing artificial intelligence into the area of social interaction with people, with particular emphasis on interactions in the professional sphere and in business.
One could think of our research as a specifically understood, reverse Turing test for humanoid and social robots. Humanoid robots, like bots, perform certain activities as substitutes for humans, and their function is often to imitate human behavior. The Turing test is an experiment that was conceived as a way of determining a machine's ability to use natural language and, indirectly, to prove its ability to think in a way similar to humans.
Humanoids vs. Social Robots
A typical humanoid has artificial intelligence, visual data processing, and a facial recognition system. Similarly, a social robot possesses the same features but without physical resemblance to a human. It imitates human gestures and facial expressions, is able to answer certain questions, and can conduct simple conversations on predefined topics, for example about the weather. Sophia, the humanoid created by Hanson Robotics that became famous for being granted citizenship of Saudi Arabia this year, uses Alphabet's voice recognition technology and is designed to become smarter over time.
Sophia is conceptually similar to the ELIZA bot program, one of the first attempts to simulate human conversation. Designed by Joseph Weizenbaum in 1966, ELIZA, the first bot capable of talking to people, conducted several "therapeutic" conversations with patients, acting as a Rogerian psychotherapist.
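Weizenbaum's program had no understanding of what was said to it; it matched keyword patterns in the user's input and reflected the user's own words back, in the non-directive manner of a Rogerian therapist. A minimal sketch of this technique (the rules and pronoun table below are illustrative, not Weizenbaum's originals) might look like this:

```python
import re

# Ordered keyword rules: the first pattern that matches wins.
# The final catch-all mimics ELIZA's non-committal fallback.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

# Pronoun swaps so reflections read naturally ("my boss" -> "your boss").
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    # Swap first/second-person words in the captured fragment.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    # Try each rule in order; fill the template with the reflected match.
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

The illusion of understanding comes entirely from the reflection step: `respond("I need a break")` yields "Why do you need a break?" without any model of what a break is.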
This project has been an inspiration for previous studies carried out by the project manager and her team regarding affective computing for various variants of bots, which are used increasingly in organizations and companies.
We Need to Start Asking New Questions
We feel that the Turing test par excellence is still a great philosophical inspiration, but in the era of machine learning and deep learning, one needs to start asking new questions that help us understand the human-robot relationship better.
In our work, the two following research questions are most relevant:
a) In what way and to what extent are features related to social intelligence developed in these programs? How do they manifest themselves in interactions with human co-workers?
b) What effect does the socialization of AI in general (and chatbots in particular) have on the organization of work? Taking into consideration consequences of interactions with chatbots, how are professional and social roles re-negotiated? How does the introduction of conversational agents affect organizational culture?
In the first stage of our work, we used qualitative methodology. The approach taken in this part of the research was virtual ethnography. We analyzed the behaviors of individuals on the Internet using online marketing research techniques to provide useful insights concerning the usage of bots, their place in the organizations that implement them, and customers' approach to them. After collecting qualitative data concerning the roles ascribed to chatbots in organizations, we shall proceed to the second stage. In this experiment, which is to study the interactions of individuals and chatbots, we will use various sensors (electromyography (EMG), electrodermal activity (EDA), electrocardiography (ECG), etc.). Our purpose here is to examine the differences between the human-human and human-non-human interaction process.
Too Little Attention to the Interaction between Man and Technology
This research is original and definitely exploratory in character. Its significance is also reinforced by the success of the Eugene Goostman chatbot in a 2014 Turing test, where 33% of the judges assessed that the chatbot they had been interacting with was a human being, and by further developments in bots such as Alexa, Siri, Cortana, Google Now, and others.
We hope that our work will fill a gap in HCI (Human-Computer Interaction) research, where only little attention has so far been paid to the socio-cognitive nature of the professional interaction between man and technology in general, and chatbots in particular. And there are more and more of them coming to interact with us. Moreover, the research will underline the important dimension of social cognition in all interactions. This dimension is extremely important, as it contributes to the formation of a new organizational culture, to the setting of new professional and social roles, and to non-human associates, here called the co-workers of the future.