List of questionnaires
Selection: questionnaires in English.
Showing 16 of 16 questionnaires.
Scientific
Public
Trust perception scale-HRI [Short version]
Modified on 18/12/2025 at 09:00 by PEER project
This 14-item scale is the short version of the 40-item trust perception scale-HRI.

While use of the 40-item scale is recommended, a 14-item subscale can be used to provide a rapid trust measurement, suited to measuring changes in trust over time or to assessments with multiple trials or time restrictions. This subscale is specific to the functional capabilities of the robot and therefore may not account for changes in trust due to the robot's feature-based antecedents.
The trust score is calculated by first reverse-coding the "have errors", "unresponsive", and "malfunction" items, then summing the 14 item scores and dividing by 14.
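As a worked illustration, here is a minimal scoring sketch in Python, assuming the scale's original 0-100% response format and a hypothetical ratings dict mapping item labels to ratings:

# Minimal sketch of the 14-item trust score, assuming each item is
# rated on the scale's original 0-100% response format.
REVERSED = {"have errors", "unresponsive", "malfunction"}

def trust_score_14(ratings: dict[str, float]) -> float:
    """ratings maps each of the 14 item labels to its 0-100 rating."""
    assert len(ratings) == 14
    total = sum(100 - r if item in REVERSED else r
                for item, r in ratings.items())
    return total / 14  # overall trust score, in percent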


Reference:  Schaefer, K. E. (2016). Measuring trust in human robot interactions: Development of the “trust perception scale-HRI”. In Robust intelligence and trust in autonomous systems (pp. 191-218). Boston, MA: Springer US. https://d1wqtxts1xzle7.cloudfront.net/114178129/978-1-4899-7668-020240505-1-agu5dz-libre.pdf?1714934405=&response-content-disposition=inline%3B+filename%3DRobust_Intelligence_and_Trust_in_Autonom.pdf&Expires=1720525011&Signature=GiwTFX8RVqIoZ0hbY~O4fmCLoHrC4Zk6y-yviwJvsGZKm2pg7HiR3BNPjcyV4ROsD7TmigLEFsXIXf8UppjDyCRJWrbqyAFgpogdMr21TAWd9JakETZoju5qsSh8qgpmCQdR19PUJbtnb~DgcEdW7JpjjAYoY5A7h7aNXz97kUS0iHpRZaG-~1~ez4K82~5arEkL016b1QQUaaqk9Kk4A~j4qKbHg3fUST60QOKxwtzju1MQOscVJLX882NQmG03rhZ1jqAzb6VG4OnTjQL2hQP1eegcQk4j6TF1fTp0Q9idZo9LdQ7eq6yPro-8nCluVQ6w3bVUBc-45HPs43IjLg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA#page=198
1 page · 14 questions · 2 minutes
Scientific
Public
AIDUA (Artificial Intelligence Device Use Acceptance)
Modified on 18/12/2025 at 09:00 by PEER project
This 34-item scale aims to explain customers’ willingness to accept AI device use in service encounters.
It takes into account social influence, hedonic motivation, anthropomorphism, performance expectancy, effort expectancy, emotion, willingness to accept the use of AI devices, and objection to the use of AI devices.



Reference: Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157‑169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008 https://www.sciencedirect.com/science/article/pii/S0268401219301690?casa_token=JI-l_R98WhQAAAAA:UVHvXzORFhGyY19NIgIdPZgXp6jn8o8pL4pR1UgB1hvIMsTUWqTPGLRvzUtq-lJirD6-N9mn_HrTqA
8 pages · 34 questions · 7 minutes
Public
Different facets of trust
Modified on 18/12/2025 at 09:00 by PEER project
This 14-item scale evaluates different facets of trust, helping to understand how different factors about a decision-making process, and about an AI model that supports that process, influence people’s perceptions of the trustworthiness of that process.

The evaluation of trust focused on several dimensions:
  • Overall trustworthiness: the process ought to be trusted
  • Reliability: the process results in consistent outcomes
  • Technical competence: AI is used appropriately and correctly
  • Understandability**: participants understood how the process works
  • Personal attachment: participants liked the process

** Due to poor reliability (α = 0.11), we recommend excluding Understandability from the analysis.



Reference: Ashoori, M., & Weisz, J. D. (2019). In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv preprint arXiv:1912.02675. https://arxiv.org/pdf/1912.02675
5 pages · 14 questions · 4 minutes
Scientific
Public
Users’ perception of transparency in recommender systems
Modified on 18/12/2025 at 08:59 by PEER project
This 13-item scale is a measurement tool for assessing transparency in recommender systems. It covers:
- Input: what data does the system use?
- Output: why and how well does an item fit one’s preferences?
- Functionality: how and why is an item recommended?
- Interaction: what needs to be changed for a different prediction?



Reference:  Hellmann, M., Bocanegra, D. C. H., & Ziegler, J. (2022). Development of an Instrument for Measuring Users' Perception of Transparency in Recommender Systems. Universität Duisburg-Essen. https://duepublico2.uni-due.de/servlets/MCRFileNodeServlet/duepublico_derivate_00075641/CEUR_WS_3124_Paper17.pdf
4 pages · 13 questions · 4 minutes
Scientific
Public
Measures of Trust, Trustworthiness, and Performance Appraisal Perceptions
Modified on 18/12/2025 at 08:59 by PEER project
Scientific
Public
Human-Computer Trust Scale (HCTS)
Modified on 18/12/2025 at 08:59 by PEER project
This 16-item scale takes into account 5 dimensions: risk perception, competency, reciprocity, benevolence, and general trust. All of the items are positively worded except for the risk perception scale, which was adapted as a negatively worded statement and reverse-coded before analyzing the data.

 

The Human-Computer Trust Scale (HCTS) was developed so that researchers can use it in their own research. To make this easy and less time-consuming, the scale is presented using placeholders [the brackets with dashes shown above (—) are the placeholders].
These placeholders need to be filled in by the user of the scale.

 
In items 1, 2, 3, 4, 5, 9, 10, 11, 12, 13, 14 & 16, the placeholder needs to be replaced with the artefact for which the user of the scale intends to measure trust. For example, if the artefact is Amazon "Alexa", the placeholder can simply be replaced by "Alexa".


In items 6, 7, 8 & 15, the first placeholder needs to be replaced by the artefact (just like in the previous items), and the second placeholder by a functionality of the artefact, i.e. something the artefact is capable of doing. Using Alexa as an example again, for item 7 the first placeholder would be replaced by "Alexa" and the second by whichever functionality of Alexa the user of the scale wants to use to gauge trust perception, e.g. "providing personal assistance". The final sentence for item 7 would then read: "I think that Alexa is competent and effective in providing personal assistance".

 
To reiterate, it doesn’t have to be "providing personal assistance": what goes in the placeholder depends on the context, the study objectives, and the motives of the researcher. Essentially, the placeholders are there for the name of the artefact and its context of use.
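For instance, here is a minimal sketch in Python of filling the placeholders programmatically; the item 7 wording is taken from the worked example above, and the helper name is hypothetical:

# Hypothetical helper for filling the HCTS placeholders; the item 7
# template follows the worked example above.
ITEM_7 = "I think that {artefact} is competent and effective in {functionality}."

def fill_item(template: str, artefact: str, functionality: str = "") -> str:
    # Substitute the artefact (and, where present, its functionality).
    return template.format(artefact=artefact, functionality=functionality)

print(fill_item(ITEM_7, "Alexa", "providing personal assistance"))
# -> I think that Alexa is competent and effective in providing personal assistance.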

1 page · 16 questions · 4 minutes
Scientific
Public
Trust perception scale-HRI [Long version]
Modified on 18/12/2025 at 08:59 by PEER project
This 40-item scale was developed to provide a means of subjectively measuring trust perceptions over time and across robotic domains.

When the scale is used as a pre-interaction measure, the participants should first be shown a picture of the robot they will be interacting with or provided a description of the task prior to completing the pre-interaction scale. This accounts for any mental model effects of robots and allows for comparison specific to the robot at hand.
For post-interaction measurement, the scale should be administered directly following the interaction.
To create the overall trust score, 5 items must first be reverse-coded (incompetent, unresponsive, malfunction, require frequent maintenance, have errors). All items are then summed and divided by the total number of items (40). This yields an overall trust score as a percentage.
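As an illustration, here is a minimal sketch in Python of this scoring procedure, assuming the scale's original 0-100% response format and a hypothetical ratings dict mapping item labels to ratings:

# Minimal sketch of the overall trust score for the 40-item scale,
# assuming each item is rated on the 0-100% response format.
REVERSED = {"incompetent", "unresponsive", "malfunction",
            "require frequent maintenance", "have errors"}

def trust_score_40(ratings: dict[str, float]) -> float:
    """ratings maps each of the 40 item labels to its 0-100 rating."""
    assert len(ratings) == 40
    total = sum(100 - r if item in REVERSED else r
                for item, r in ratings.items())
    return total / len(ratings)  # overall percentage trust score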

While use of the 40-item scale is recommended, a 14-item subscale can be used to provide a rapid trust measurement, suited to measuring changes in trust over time or to assessments with multiple trials or time restrictions. This subscale is specific to the functional capabilities of the robot and therefore may not account for changes in trust due to the robot's feature-based antecedents.



Reference:  Schaefer, K. E. (2016). Measuring trust in human robot interactions: Development of the “trust perception scale-HRI”. In Robust intelligence and trust in autonomous systems (pp. 191-218). Boston, MA: Springer US. https://d1wqtxts1xzle7.cloudfront.net/114178129/978-1-4899-7668-020240505-1-agu5dz-libre.pdf?1714934405=&response-content-disposition=inline%3B+filename%3DRobust_Intelligence_and_Trust_in_Autonom.pdf&Expires=1720525011&Signature=GiwTFX8RVqIoZ0hbY~O4fmCLoHrC4Zk6y-yviwJvsGZKm2pg7HiR3BNPjcyV4ROsD7TmigLEFsXIXf8UppjDyCRJWrbqyAFgpogdMr21TAWd9JakETZoju5qsSh8qgpmCQdR19PUJbtnb~DgcEdW7JpjjAYoY5A7h7aNXz97kUS0iHpRZaG-~1~ez4K82~5arEkL016b1QQUaaqk9Kk4A~j4qKbHg3fUST60QOKxwtzju1MQOscVJLX882NQmG03rhZ1jqAzb6VG4OnTjQL2hQP1eegcQk4j6TF1fTp0Q9idZo9LdQ7eq6yPro-8nCluVQ6w3bVUBc-45HPs43IjLg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA#page=198
2 pages · 40 questions · 5 minutes
Public
Agent and system evaluation
Modified on 18/12/2025 at 08:59 by PEER project
This questionnaire combines 7-point Likert scales, open-form questions to collect qualitative and quantitative user feedback, and the checklist for trust between people and automation from Jian et al. (2000).
[Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71; https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=ed6a076ab7d43c27085d412108b98b93edbb1b00]

It was developed to evaluate trust in a virtual agent combined with a speech recognition system, and is coupled with Jian et al.'s (2000) scale, whose 12 items measure trust and distrust in automation.

In the first part of the questionnaire, you can replace the term [the virtual agent] with your own system.



Reference: Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2021). “Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design. Journal on Multimodal User Interfaces, 15(2), 87-98.
https://link.springer.com/content/pdf/10.1007/s12193-020-00332-0.pdf
3 pages · 22 questions · 5 minutes
Public
The effect of anthropomorphism on investment decision-making with robo-advisor chatbots
Modified on 18/12/2025 at 08:58 by PEER project
This 24-item scale evaluates the effect of anthropomorphism on investment decision-making with robo-advisor chatbots. It covers:
- Anthropomorphism
- Social presence
- Trusting beliefs
- Disposition to trust in technology


You can replace the term [robo-advisor chatbots] with your own system and the term [financial advice] with the type of advice the system gives.



Reference:  Morana, S., Gnewuch, U., Jung, D., & Granig, C. (2020). The Effect of Anthropomorphism on Investment Decision-Making with Robo-Advisor Chatbots. In ECIS. https://www.researchgate.net/profile/Stefan-Morana/publication/341277570_The_Effect_of_Anthropomorphism_on_Investment_Decision-Making_with_Robo-Advisor_Chatbots/links/5eb7c5ba4585152169c14505/The-Effect-of-Anthropomorphism-on-Investment-Decision-Making-with-Robo-Advisor-Chatbots.pdf
4 pages · 24 questions · 7 minutes
Scientific
Public
AI Literacy Scale (AILS)
Modified on 18/12/2025 at 08:58 by PEER project
The concept of AI literacy is used to determine user competence in using AI technology. This quantitative 12-item scale makes it possible to obtain accurate data regarding the AI literacy of ordinary users. It takes into account the primary core constructs of AI literacy: awareness, usage, evaluation, and ethics.



Reference: Wang, B., Rau, P. L. P., & Yuan, T. (2023). Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324-1337. https://www.researchgate.net/profile/Bingcheng-Wang/publication/360519116_Measuring_user_competence_in_using_artificial_intelligence_validity_and_reliability_of_artificial_intelligence_literacy_scale/links/6408741fb1704f343fb47955/Measuring-user-competence-in-using-artificial-intelligence-validity-and-reliability-of-artificial-intelligence-literacy-scale.pdf
1 page · 12 questions · 2 minutes
Scientific
Public
XAI (eXplainable Artificial Intelligence) Trust Scale
Modified on 18/12/2025 at 08:58 by PEER project
This 8-question scale assesses trust in automation.
The XAI Trust Scale asks users directly whether they are confident in the XAI system and whether the XAI system is predictable, reliable, efficient, and believable. Most of the items are adapted from the Cahour-Fourzy scale, which has been shown to be reliable; the XAI Trust Scale also incorporates items from other scales.

Replace the placeholder [tool] with the tool you want to evaluate.



Reference:  Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science, 5, 1096257. https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1096257/full
1 page · 8 questions · 2 minutes
Public
Trust in Automation (TiA)
Modified on 18/12/2025 at 08:58 by PEER project
This 19-item questionnaire measures the user's trust in an automation system. It takes into account 6 dimensions:
- Reliability/Competence
- Understandability/Predictability
- Propensity to Trust
- Intention of Developers
- Familiarity
- Trust in Automation



Reference: Körber, M. (2019). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018) Volume VI: Transport Ergonomics and Human Factors (TEHF), Aerospace Human Factors and Ergonomics 20 (pp. 13-30). Springer International Publishing. https://link.springer.com/chapter/10.1007/978-3-319-96074-6_2
1 page · 19 questions · 4 minutes
Scientific
Public
User Experience Questionnaire (UEQ) [EN]
Modified on 18/12/2025 at 08:58 by PEER project
The questionnaire's 26 items, grouped into 6 scales, cover a comprehensive impression of user experience. Both classical usability aspects (efficiency, perspicuity, dependability) and user experience aspects (originality, stimulation) are measured.


Attractiveness
Overall impression of the product. Do users like or dislike it?

Perspicuity
Is it easy to get familiar with the product and to learn how to use it?

Efficiency
Can users solve their tasks without unnecessary effort? Does it react fast?

Dependability
Does the user feel in control of the interaction? Is it secure and predictable?

Stimulation
Is it exciting and motivating to use the product? Is it fun to use?

Novelty
Is the design of the product creative? Does it catch the interest of users?



Reference:   Laugwitz, B., Schrepp, M. & Held, T. (2008). Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (Ed.): USAB 2008, LNCS 5298, pp. 63-76. https://www.ueq-online.org/
1 page · 26 questions · 4 minutes
Public
Questionnaire socio-démographique - EN
Modified on 24/11/2023 at 09:30 by Jeremy Laviole
This questionnaire allows you to obtain essential information about your participants (age, gender, etc.).

The various criteria assessed through this questionnaire are traditional criteria for population segmentation.

In the case where you are targeting a specific population, these indicators will help you determine if your participants are a good fit for your target sample.
These pieces of information also enable you to verify that your sample is representative of the overall population.

If you wish to study different groups of participants, socio-demographic data allows you to create your various conditions.
2 pages · 7 questions · 2 minutes
Scientific
Public
Multidimensional cognitive load scale for virtual environments
Modified on 19/07/2024 at 12:25 by SCH CATIE
This scale is an extension of the cognitive load scale (CLS) of Leppink et al. (2013).
It measures the three types of cognitive load (intrinsic, extraneous, and germane) in a virtual-environment learning context.

More specifically, the "extraneous load" dimension of the CLS is divided into three subscales:

- Extraneous load related to instruction
- Extraneous load related to interaction
- Extraneous load related to the environment

Reference: Andersen, M. S., & Makransky, G. (2021). The validation and further development of a multidimensional cognitive load scale for virtual environments. Journal of Computer Assisted Learning, 37(1), 183-196.
https://onlinelibrary.wiley.com/doi/epdf/10.1111/jcal.12478
1 page · 18 questions · 10 minutes
Virtual reality
Learning
Cognition
Mental workload
Cognitive load
Scientific
Public
CSUQ - English version
Modified on 02/09/2025 at 10:04 by SCH CATIE
This questionnaire provides feedback from participants on the system they have used. This tool helps to understand which aspects of the system are problematic and which aspects meet the user's needs.

For the rating, a 7-point Likert scale is used, from 1 = Strongly agree to 7 = Strongly disagree.

Scoring: the CSUQ scores are grouped into five categories:
- an overall score, which combines all items (1 to 16)
- a system usefulness score (SysUse), which combines items 1 to 6
- an information quality score (InfoQual), which combines items 7 to 12
- an interface quality score (IntQual), which combines items 13 to 15
- a general satisfaction score (item 16)
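As an illustration, here is a minimal Python sketch of this scoring, assuming responses holds the 16 ratings in item order and that each category score is the mean of its items:

# Minimal sketch of CSUQ scoring; responses[i] is the 1-7 rating of
# item i+1 (1 = Strongly agree ... 7 = Strongly disagree).
def csuq_scores(responses: list[int]) -> dict[str, float]:
    assert len(responses) == 16
    mean = lambda xs: sum(xs) / len(xs)
    return {
        "Overall":      mean(responses),         # all items, 1-16
        "SysUse":       mean(responses[0:6]),    # items 1-6
        "InfoQual":     mean(responses[6:12]),   # items 7-12
        "IntQual":      mean(responses[12:15]),  # items 13-15
        "Satisfaction": float(responses[15]),    # item 16
    }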

Reference: Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
https://doi.org/10.1080/10447319509526110
1 page · 16 questions · 10 minutes
User journey
CSUQ