The Repository @ St. Cloud State

Open Access Knowledge and Scholarship

Date of Award

5-2025

Culminating Project Type

Thesis

Styleguide

APA

Degree Name

Information Assurance: M.S.

Department

Information Assurance and Information Systems

College

Herberger Business School

First Advisor

Jieyu Wang

Second Advisor

Jim Q. Chen

Third Advisor

Susantha Herath

Fourth Advisor

Abdullah Abu Hussein

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Keywords and Subject Headings

ChatGPT, Trust, Credibility, Privacy, AI Ethics, User Perception

Abstract

Chatbots have rapidly transformed domains such as customer service, education, and digital assistance. However, trust and credibility in generative AI systems like ChatGPT remain underexplored, particularly in relation to task context. Existing studies often focus on traditional rule-based chatbots or specific domains, overlooking how user perceptions may vary depending on the nature of interaction. This study addresses that gap by examining task-dependent perceptions of trust, credibility, privacy, and security in interactions with ChatGPT. A mixed-methods approach was employed, combining a two-week diary study with follow-up interviews. Thirty participants (aged 20–26, primarily students in computer-related majors) engaged with ChatGPT in two task contexts: structured (event planning) and preference-based (e.g., recommending books or restaurants). Quantitative ratings and qualitative reflections were analyzed using independent t-tests and grounded theory, respectively. Findings revealed that while trust and security remained stable across tasks, credibility perceptions differed significantly, with higher ratings in preference-based tasks. Participants expressed strong trust when ChatGPT’s suggestions aligned with prior knowledge but questioned credibility in tasks requiring real-time data or source verification. Privacy concerns were minimal, though some users hesitated to share personal details due to unclear data handling practices. Theoretically, the study draws on trust and credibility frameworks from human-computer interaction research, focusing on how task type and the transparency of information influence users’ evaluations of ChatGPT. Design implications include the need for real-time data integration, visible source citations, and personalized privacy settings to improve user confidence. Limitations include the homogeneity of the sample and lack of real-world testing environments. Future research should explore similar dynamics in high-stakes domains like healthcare, finance, or legal services, and examine user behavior over longer-term AI interactions. This study contributes to a nuanced understanding of human-AI trust formation, offering actionable insights for developers and researchers building transparent, context-aware chatbot systems.
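The quantitative analysis described above, an independent t-test comparing ratings between the two task contexts, can be sketched as follows. This is a minimal illustration only: the rating values, sample sizes, and the `welch_t` helper are invented for demonstration and are not the study's actual data or code.

```python
import math
import statistics

# Placeholder 1–5 credibility ratings for each task context
# (invented values, not the study's data).
structured = [3.8, 4.0, 3.5, 3.9, 3.6, 3.7, 4.1, 3.4]   # event planning
preference = [4.4, 4.6, 4.2, 4.5, 4.3, 4.7, 4.1, 4.4]   # recommendations

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(structured, preference)
print(round(t, 2))  # negative t: preference-based tasks rated higher
```

In practice one would compute the p-value as well (e.g., via `scipy.stats.ttest_ind` with `equal_var=False`), mirroring the abstract's finding that credibility ratings were significantly higher for preference-based tasks.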
