A system’s response time and its psychological consequences

In our day-to-day lives, it is increasingly possible (if not common) to receive prediction advice generated by some algorithmic system. These systems range from simple evidence-based rules to statistical models to decision-support software developed for specific tasks. Whether we trust the advice we receive from such systems can have wide-ranging consequences.

In a recent paper, published in the journal Organizational Behavior and Human Decision Processes, my colleagues from Eindhoven University of Technology and Tilburg University (Dr. Philippe van de Calseyde and Dr. Anthony Evans) and I found that trust in an algorithmic system depends on how quickly or slowly it makes its predictions. In short, people mistrust algorithmic systems that take time to make their predictions.

Algorithmic vs. human prediction

But first, some background. Research comparing the effectiveness of algorithmic and human predictions has consistently shown that algorithmic systems outperform humans. Astonishingly, this is not a recent phenomenon. In the 1950s, Dr. Paul Meehl reviewed results from studies across a wide range of domains and found that algorithms simply outperform humans. For instance, a simple linear model (a basic linear regression, the kind of statistical tool taught to undergraduates) outperformed experts at clinical diagnosis and at predicting graduate students’ success.
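To make concrete just how simple such a model can be, here is a minimal sketch in Python. The applicant features and the numbers are entirely hypothetical and purely illustrative; Meehl’s models were, of course, fit to real clinical and academic data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: each row is an applicant, with (GPA, test score,
# interview rating); the target is a later measure of graduate success.
X = np.array([
    [3.6, 680, 4.0],
    [3.1, 590, 3.5],
    [3.9, 720, 4.5],
    [2.8, 550, 3.0],
    [3.4, 640, 4.2],
])
y = np.array([0.82, 0.55, 0.91, 0.40, 0.68])  # hypothetical success scores

# A basic linear regression: the kind of "simple linear model" Meehl discussed.
model = LinearRegression().fit(X, y)

# Predict success for a new, hypothetical applicant.
new_applicant = np.array([[3.5, 660, 4.1]])
print(model.predict(new_applicant))
```

The point is that a handful of weighted predictors, combined mechanically, is all it takes to rival expert judgment.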

And yet, even though receiving prediction advice generated by an algorithmic system is almost ubiquitous in modern workflows, people are often reluctant to trust predictions made by these systems. Some have even dubbed this behavior “algorithm aversion”.

As interest in algorithmic systems grew and researchers started to explore how people interact with them, a more complicated picture emerged. While people are often “averse” to algorithmic advice, there is some evidence that this tendency is modulated by confidence in one’s own knowledge and by the type of decision: people tend to trust algorithmic systems more for objective decisions (e.g., forecasts of future events) than for subjective decisions (e.g., movie recommendations).

Understanding how cues guide our decision-making

So, algorithmic predictions are superior to human predictions, and people are increasingly interacting with them. Understanding and probing what modulates this interaction is thus both timely and important. As researchers with an interest in judgment and decision-making, we decided to treat this interaction like an interaction between two humans. That is, perhaps there are subtle cues that can impact trust in algorithmic systems, similar to how such cues impact trust in humans. We set out to observe how the time it took an algorithmic system (compared to a human) to arrive at a prediction affected how people judged the quality of that prediction.

Response time is an important cue in social interactions. It has been shown that people can infer doubt, effort, or even confidence from others’ response times. People thus use response time as information. Does the same thing occur for non-human algorithmic systems? One might instinctively say no. Algorithmic systems are considered rigid and structured, so why would people infer anything from their response time? But people often use simple cues as guides for decisions. On top of that, algorithmic systems can vary in their processing ability, and they can even be purposefully manipulated to appear either fast or slow. Consider, for example, an airline’s web page that takes longer to find you “suitable” flights.

Across 9 studies with 1,928 participants completing a total of 14,184 judgments, we presented our participants with either an algorithmic system or a human that made a prediction either quickly or slowly. We looked across various domains, such as sports predictions, sales predictions, employee absence predictions, and student success.

We consistently found that people judged predictions made slowly (vs. quickly) by an algorithm as being of lower quality. People were also less willing to rely on predictions that an algorithm made slowly. This was not the case for humans: people judged predictions made slowly (vs. quickly) by a human as being of higher quality.

Different expectations for algorithmic systems

It turns out that people have different expectations of how difficult certain tasks are for algorithmic systems versus how difficult those same tasks are for humans. This difference in expectations, combined with the fact that people inferred effort from the response time it took the system (or human) to make a prediction, explains our findings. For algorithms, slower responses were incongruent with expectations: the prediction task was presumably easy, so the extra effort implied by a slow response did not signal higher quality; instead, it made people doubt the prediction. For humans, slower responses were congruent with expectations: the prediction task was presumably difficult, so slower responses, and the effort they implied, led people to conclude that the predictions were of high quality.

This hints at a nuanced effect of response time on how people evaluate the quality of predictions made by algorithmic systems. It seems that people’s judgment of the quality of a prediction made by an algorithm can be meaningfully impacted by how quickly or slowly it is provided.

What can we do now with this information? Response time is a feature that is easily adaptable in existing algorithmic systems. As managers, we can implement these findings in our organizations. Since algorithmic predictions tend to be more accurate, why not calibrate the response time so that the people relying on these systems are more trusting of their predictions? As users of algorithmic systems, we can be aware of how simple cues (that are often outside of our control) can impact our judgment, even of non-human systems.
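To illustrate how easily response time can be treated as a design parameter, here is a minimal sketch in Python. The `predict` function and the target latency are hypothetical placeholders, not part of our studies.

```python
import time

TARGET_RESPONSE_SECONDS = 0.5  # hypothetical target; tune to your context

def predict(features):
    # Placeholder for an actual prediction model.
    return sum(features) / len(features)

def predict_with_calibrated_timing(features):
    """Return a prediction, padding unusually fast responses up to the
    target time so that latency is a deliberate design choice rather
    than an accident of server load or hardware."""
    start = time.monotonic()
    prediction = predict(features)
    elapsed = time.monotonic() - start
    if elapsed < TARGET_RESPONSE_SECONDS:
        time.sleep(TARGET_RESPONSE_SECONDS - elapsed)
    return prediction

print(predict_with_calibrated_timing([0.2, 0.8, 0.5]))
```

A wrapper like this makes response time consistent and explicit, and it also makes obvious where an artificial delay could be introduced, or deliberately avoided.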

We hope our findings stimulate future research on this fascinating topic. Identifying cues that can change how we interact with algorithmic systems is a promising and valuable avenue for understanding human-algorithm interaction.

Article written by Dr. Emir Efendić
