HFI Connect

User Experience News, Blogs, and Videos

Social Response to Communication Technology (Mobile Phones)

This article appeared originally in the HFI UX Design Newsletter, October, 2012

Mobile phones are often considered powerful tools for persuading customers. The array of built-in hardware, such as the camera, accelerometer, magnetic sensor, GPS, light sensor, and compass, presents a unique opportunity to provide context-specific experiences.

The fact that customers tend to carry their mobile phones continuously is also unique. Unlike any other device or accessory, mobile phones are even taken to washrooms! This provides a great opportunity to engage customers wherever they are. Last but not least is the emotional relationship that customers share with their devices.

Mobile phone users frown at, kiss, shake, throw, and even hurl profanities at their devices. This behavior suggests that mobile phones are perceived not as mindless machines but as social actors that play important roles in our lives.

Clifford Nass, Jonathan Steuer, and Ellen R. Tauber set out to understand this behavior through a series of five experiments. The experiments were designed to answer some key questions:

“What can we learn about human-computer interaction if we show that the human-computer relationship is fundamentally social? What can we predict and test if we assume that individuals are biased toward a social orientation; that when people sit down at a computer, they interact socially?”

The experiments were conducted with 180 computer-literate college students who volunteered to participate in various experiments involving a computer tutor. Each experiment lasted approximately 45 minutes. The participants were required to go through the following sessions:

  1. Practice session introducing the interface controls
  2. Tutoring session where participants were introduced to 25-30 facts
  3. Testing session where participants were required to answer 15 multiple choice questions
  4. Evaluation session where a computer evaluated the performance of the computerized tutoring session
  5. Assessment session where participants were asked to answer a questionnaire assessing the tutoring, testing and evaluation sessions

Will users apply politeness norms to computers?

This experiment was conducted with 33 participants. They used a single computer for the tutoring and testing sessions. The evaluation session was administered in one of three ways: on the same computer they were tutored on, on a different computer, or with pen and paper. Participants who evaluated the tutoring on the same computer were more polite: they rated the tutoring as more “friendly” and “competent.” There were no significant differences between evaluations collected on a different computer and those collected with pen and paper. This suggests that participants related to the “tutor computer” socially, and it rules out the medium of evaluation as the explanation.

Will users apply the notions of “self” and “other” to computers?

This experiment was conducted with 44 participants. Participants used 2 or 3 computers.

The first situation provided participants with the “same voice and box” condition. This meant that the evaluation session was conducted on the same computer and in the same voice as the tutoring session.

In the second situation participants were provided with the “different voice and box” condition. This meant that the evaluation session was conducted on a different computer and by a different voice than the tutoring session.

In each situation the computer either praised or criticized the tutoring session, describing performance on 12 of the 15 questions positively or negatively. The results showed that participants treated a distinct computer with a distinct voice as a different social actor.

When the tutoring session was praised, participants felt that praise from the other computer/voice was more accurate and friendly than praise of self. When it was criticized, participants rated the other computer/voice as less friendly but more intelligent than one that praised. Participants also rated a praised tutoring session more highly than a criticized one.

On what basis do users distinguish computers as “self” or “other” — the voice or the box?

This experiment was conducted with 66 participants and was identical to the previous one, except that there were eight possible situations. These were formed by crossing praise/criticism, same/different voice, and same/different computer (2 × 2 × 2).
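The eight situations follow directly from crossing the three binary factors. As a minimal sketch (the factor names here are illustrative; the paper does not publish an assignment script), the full set of conditions can be enumerated with a Cartesian product:

```python
from itertools import product

# Three binary factors crossed: 2 x 2 x 2 = 8 experimental conditions.
# Factor and level names are illustrative, not taken from the study's code.
FACTORS = {
    "feedback": ["praise", "criticism"],
    "voice": ["same", "different"],
    "computer": ["same", "different"],
}

# Enumerate every combination of factor levels.
conditions = [
    dict(zip(FACTORS, combo)) for combo in product(*FACTORS.values())
]

print(len(conditions))  # 8
```

Each participant would then be assigned one of the eight resulting condition dictionaries.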

The results of the experiment showed that participants perceived different voices as different social actors. They also perceived the same voice across computers as the same social actor.

Will users apply gender stereotypes to computers?

This experiment was conducted with 48 participants, 24 male and 24 female. The testing session used no voice, while the tutoring and evaluation sessions used either a male or a female voice. The topics for tutoring and testing were love and relationships, mass media, and computers.

Participants perceived praise from a male voice as more likable than praise from a female voice. Evaluators with a male voice were also considered more dominant, assertive, forceful, sympathetic, and warmer than evaluators with a female voice.

Participants also perceived tutors with a female voice talking about love and relationships as more sophisticated and having chosen better, broader, and less-known facts than male tutors.

In situations where a different voice type was used for tutoring and evaluation, facts about love and relationships were seen as more informative, better-chosen, and broader than in the same-gender conditions.

If people do respond socially to computers, is it because they feel that they are interacting with the computer or with some other agent, such as the programmer?

This experiment was conducted with 33 participants, each using 2 computers. The protocol for this experiment required participants to experience the full cycle of tutoring/testing/evaluating twice. In the first cycle, the evaluations of tutoring were generally positive; in the second, they were generally negative. Tutoring and evaluation were conducted on one computer while testing was conducted on another.

Three conditions were created. In the first, both the experimenter and the computer referred to the computer as “this computer” or “the computer.” In the second, the computer referred to itself as “I,” but the experimenter referred to it as “the computer.” In the third, the computer referred to itself as “I,” but the experimenter referred to it as “the programmer.”

The results of this experiment showed that in the first and second conditions participants found the computer to be generally more capable, more likable, and easier to use than in the third condition.


In conclusion, across the five experiments, the following principles were derived:

  • Social norms are applied to computers.
  • Notions of “self” and “other” are applied to computers.
  • Voices are social actors.
  • Notions of “self” and “other” are applied to voices.
  • Computers are gendered social actors.
  • Gender is an extremely powerful cue.
  • Computer users respond socially to the computer itself.
  • Computer users do not see the computer as a medium for social interaction with the programmer.

Though the research summarized above was conducted with desktop computers, the principles apply to mobile phones as well.

Another study, more focused on mobile phones, is “Intimate Self-Disclosure via Mobile Messaging.” It explores social responses to communication technologies (SRCT) in the context of mobile phones. The study tested the following hypotheses:

  1. Participants will self-disclose more via mobile messaging in response to intimate questions coupled with the flattery and social norms strategies than via direct requests.
  2. Participants will self-disclose more via mobile messaging in response to intimate questions ostensibly from a human than from a computer.
  3. Participants’ self-disclosure via mobile messaging in response to intimate questions will be differentially affected by a human or computer sender that flatters them as compared to one that does not flatter them.

The study was conducted with 71 university students, who received course credit or a $20 Amazon.com gift certificate as an incentive and were told that the study was testing a new questionnaire system. All participants used their own mobile phones and service plans. The experiment crossed two factors: the sender, which could be human or computer, and the strategy, which could be “direct request,” “flattery,” or “social norms.”

Participants were asked to choose two participation periods, each about two days long and spaced one week apart. On each day, participants chose an hour-long time slot in which to receive six to seven questions via text message. The first two questions in each period were low intimacy, and the remaining ten in each period were high intimacy. An example of a high-intimacy question: “What has been the biggest disappointment in your life?”

In some conditions the sender was identified as a “research assistant,” in others as a “research computer.” This framing was reinforced in two reminder emails and four welcome messages. In reality, all messages were sent by a computer.

In the “direct request” strategy, participants were only sent the question. In the “flattery” strategy, each question was accompanied by a compliment (e.g. “Nice reply!” or “You are better at texting than most.”). In the “social norms” strategy, the question was accompanied by a sentence stating the percentage (85-100%) of participants who had fully answered the question.
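The three framing strategies can be sketched as a small function. This is an illustrative reconstruction under the description above, not the study’s actual messaging code; the function name and structure are assumptions, while the compliment texts and the 85–100% range come from the study description.

```python
import random

# Compliments quoted in the study description.
COMPLIMENTS = ["Nice reply!", "You are better at texting than most."]

def frame_question(question: str, strategy: str) -> str:
    """Wrap a survey question according to the influence strategy."""
    if strategy == "direct_request":
        # Send the question with no extra framing.
        return question
    if strategy == "flattery":
        # Prepend a compliment to the question.
        return f"{random.choice(COMPLIMENTS)} {question}"
    if strategy == "social_norms":
        # Prepend a (reported) compliance percentage between 85 and 100.
        pct = random.randint(85, 100)
        return f"{pct}% of participants fully answered this question. {question}"
    raise ValueError(f"unknown strategy: {strategy}")

print(frame_question(
    "What has been the biggest disappointment in your life?",
    "social_norms"))
```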

Breadth, or quantity, of disclosure was measured by word count; responses of “no comment” were counted as zero words.
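The breadth measure is simple enough to state as code. This is a minimal sketch of the rule described above (the function name is illustrative, and the exact handling of “no comment” variants in the study is assumed):

```python
def disclosure_breadth(response: str) -> int:
    """Score disclosure breadth as a word count; 'no comment' scores 0."""
    text = response.strip()
    if text.lower() == "no comment":
        return 0
    return len(text.split())

print(disclosure_breadth("no comment"))                   # 0
print(disclosure_breadth("It was losing my first job."))  # 6
```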

The results of the experiment are summarized in the table and figure below.

The study showed that participants disclosed significantly more when flattery came from the “human” sender than from the “computer.” When participants believed they were interacting with a computer, the different strategies did not elicit significantly different levels of disclosure.

It can be concluded that although people do reciprocate self-disclosure with computers, flattery is more effective as a strategy for humans than for computers. These results can be seen as inconsistent with the predictions of SRCT.


Across the two contrasting studies discussed in this article, it is evident that people respond to computers and mobile phones as if they were social actors. This presents a tremendous opportunity to leverage mobile phones to persuade and induce behavior change.

The point to keep in mind is that though there is a social response, it is not as strong as in human-to-human interaction. Technology, mobile phones included, has not reached the point where it could substitute for human interaction. For now, phones can be crude and rudimentary companions that humbly suggest and advise.

But that day may not be far off. As Ray Kurzweil argues in The Age of Spiritual Machines, humanity will reach a turning point when machines begin to surpass human intelligence and become a more than satisfactory substitute for human connection.


References

Nass, C., Steuer, J., & Tauber, E. R. “Computers are Social Actors.” Department of Communication, Stanford University.

Fogg, B.J., & Eckles, D. Mobile Persuasion: 20 Perspectives on the Future of Behavior Change.

Kurzweil, R. The Age of Spiritual Machines.

Eckles, D., Wightman, D., Carlson, C., Thamrongrattanarit, A., Bastea-Forte, M., & Fogg, B.J. “Intimate Self-Disclosure via Mobile Messaging: Influence Strategies and Social Responses to Communication Technologies.” Persuasive Technology Lab, Stanford University; Nokia Research Center, Palo Alto, CA.
