The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study | Xue Zhirong's knowledge base

The content is made up of:

The researchers conducted two experiments ("Predict the speed-dating outcomes and get up to $6 (takes less than 20 min)" and a similar experiment run on Prolific) in which participants worked with an AI system on a speed-dating outcome prediction task, in order to examine how model explainability and outcome feedback affect users' trust in the AI and their prediction accuracy. The results show that although explainability (e.g., global and local explanations) did not significantly improve trust, outcome feedback improved behavioral trust most consistently and significantly. However, the increase in trust did not translate into performance gains of the same magnitude: a "trust-performance paradox". Exploratory analyses reveal the mechanisms behind this phenomenon.
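For reference on the terms above: a global explanation describes how the model weighs its inputs overall, while a local explanation attributes a single prediction to its inputs. The sketch below only illustrates that distinction for a hypothetical linear scoring model with made-up feature names; it is not the system the participants actually saw.

```python
import numpy as np

feature_names = ["attractiveness", "shared_interests", "sincerity"]  # hypothetical features
weights = np.array([0.45, 0.35, 0.20])                               # hypothetical learned weights
bias = 0.05

def predict(x):
    """Score one date (higher = more likely to be a match)."""
    return float(bias + weights @ x)

def global_explanation():
    """Global explanation: the same weights describe every prediction the model makes."""
    return dict(zip(feature_names, weights))

def local_explanation(x):
    """Local explanation: each feature's contribution to this particular prediction."""
    return dict(zip(feature_names, weights * x))

x = np.array([0.8, 0.4, 0.9])  # one hypothetical date's normalized features
print("prediction:", predict(x))
print("global:", global_explanation())
print("local:", local_explanation(x))
```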
Q3: How do outcome feedback and model interpretability affect users' task performance?
A3: The study found that outcome feedback improves the accuracy of users' predictions (reducing the absolute error) and thereby improves the performance of working with the AI. Interpretability, however, does not affect users' task performance as much as it affects trust. This suggests that more attention should be paid to using feedback mechanisms effectively to improve the usefulness and effectiveness of AI-assisted decision-making.
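As a concrete reading of "reducing the absolute error": task performance can be scored as the average absolute gap between a participant's predictions and the true outcomes. The sketch below assumes predictions and outcomes share the same numeric scale; the numbers are illustrative only, not the study's data.

```python
import numpy as np

def mean_absolute_error(predictions, outcomes):
    """Average absolute gap between a participant's predictions and the true
    outcomes; lower values mean better task performance."""
    return float(np.mean(np.abs(np.asarray(predictions) - np.asarray(outcomes))))

# Illustrative numbers only: the same participant before and after seeing feedback.
truth           = [1.0, 0.0, 0.0, 0.0]
before_feedback = [0.9, 0.2, 0.7, 0.6]
after_feedback  = [0.8, 0.1, 0.5, 0.4]

print(mean_absolute_error(before_feedback, truth))  # 0.40
print(mean_absolute_error(after_feedback,  truth))  # 0.30
```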
Personal insights
The researchers found that, although model interpretability is generally believed to help improve users' trust in AI systems, in the actual experiments neither global nor local explanations led to a stable, significant increase in trust. By contrast, outcome feedback (i.e., revealing the actual results of earlier predictions) had a more pronounced effect on increasing users' trust in the AI. However, this increased trust did not translate directly into an equivalent improvement in performance.
Xue Zhirong is a designer, engineer, and the author of several books; founder of the Design Open Source Community and co-founder of MiX Copilot; committed to making the world a better place through design and technology. This knowledge base is continuously updated with AI, HCI, and related content, including news, papers, presentations, and shared notes.
To assess trust more precisely, the researchers used a behavioral trust measure, Weight of Advice (WoA), which is based on the difference between the user's predictions and the AI's recommendations and is independent of the model's accuracy. By comparing WoA across conditions, the researchers could analyze the relationship between trust and performance.
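The paper's exact operationalization of WoA is not reproduced here; the sketch below assumes the conventional judge-advisor definition, in which WoA is the fraction of the distance from the user's initial estimate to the AI's recommendation that the user covers in their final estimate.

```python
def weight_of_advice(initial: float, final: float, advice: float):
    """Weight of Advice (WoA): 0 means the AI's recommendation was ignored,
    1 means it was fully adopted, values in between mean partial adoption."""
    if advice == initial:
        return None  # the advice adds nothing to move toward; WoA is undefined
    return (final - initial) / (advice - initial)

# Hypothetical trial: the user first predicts 0.3, the AI recommends 0.8,
# and the user revises to 0.6 -- adopting about 60% of the AI's adjustment.
print(weight_of_advice(initial=0.3, final=0.6, advice=0.8))  # ~0.6
```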
Q1: How does feedback affect users' trust in AI?
A1: Outcome feedback was the factor that most consistently and significantly increased users' behavioral trust in the AI, whereas global and local explanations did not produce a comparably stable improvement.