AI Hallucinations in Information Security: A Bibliometric and Grounded Study Perspective
Abstract
The ubiquitous use of artificial intelligence (AI) and generative models across sectors such as healthcare, finance, education, and cybersecurity has given rise to what is now commonly termed 'AI hallucinations': as these models grow more sophisticated, they remain prone to producing outputs that are factually incorrect, nonsensical, or misleading, despite their seemingly authoritative tone. AI hallucinations pose significant risks to information security by undermining data integrity, eroding trust, and providing fertile ground for malicious exploitation. This paper adopts a mixed-methods approach that captures both macro-level trends, via bibliometrics, and micro-level contextual understanding, via qualitative inquiry, into how AI hallucinations affect information security. A total of 322 peer-reviewed articles, conference papers, and book chapters retrieved from the Scopus database formed the corpus for the bibliometric study, while four information security practitioners provided data for the qualitative inquiry and theory formulation. Synthesizing insights from interdisciplinary studies in computer science, cognitive psychology, and ethics, we outline how practitioners perceive AI hallucinations in practice and the contextual challenges they face. Through a grounded theory method (GTM) approach, key categories were identified that enable a better understanding of AI hallucinations: AI Usage Patterns, Confidence & Familiarity, Verification Strategies, Trust & Hallucination Triggers, and Tone & Believability. These categories point to how AI hallucinations are understood and interpreted by information security practitioners.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.