Understanding Insider Threat Attacks using Natural Language Processing: Automatically Mapping Organic Narrative Reports to Existing Insider Threat Frameworks

Paxton-Fear, Katie, Hodges, Duncan and Buckley, Oliver (2020) Understanding Insider Threat Attacks using Natural Language Processing: Automatically Mapping Organic Narrative Reports to Existing Insider Threat Frameworks. In: HCI for Cybersecurity, Privacy and Trust. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) . Springer, pp. 619-636. ISBN 978-3-030-50308-6


Abstract

Traditionally, cyber security has focused on defending against external threats, but over the last decade there has been increasing awareness of the threat posed by internal actors. Current approaches to reducing this risk have been based on technical controls, on psychological understanding of the insider's decision-making processes, or on sociological approaches that ensure constructive workplace behaviour. However, these controls alone are clearly not enough to mitigate the threat: a 2019 report suggested that 34% of breaches involved internal actors. A number of insider threat frameworks bridge the gap between these views, creating a holistic view of insider threat. These models can, however, be difficult to contextualise within an organisation, and hence developing actionable insight is challenging. An important task in understanding an insider attack is to gather a 360-degree understanding of the incident across multiple business areas: co-workers, HR, IT and others can all be key to understanding the attack. We propose a new approach that gathers organic narratives of an insider threat incident and then uses a computational method to map these narratives to an existing insider threat framework. Leveraging Natural Language Processing (NLP), we exploit a large collection of insider threat reporting to create an understanding of insider threat. This understanding is then applied to a set of reports of a single attack to generate a computational representation of the attack, which is then successfully mapped to an existing, manually derived insider threat framework.
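The abstract does not specify the exact NLP pipeline, but the core idea — scoring free-text narrative fragments against the stages of an existing insider threat framework — can be illustrated with a minimal sketch. The sketch below uses simple TF-IDF weighting and cosine similarity; the phase labels and their keyword descriptions are purely hypothetical placeholders, not the framework used in the paper.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def tf_idf_vectors(docs):
    """Compute smoothed TF-IDF weight dicts for a list of token lists."""
    n = len(docs)
    df = Counter()
    for toks in docs:
        df.update(set(toks))
    vecs = []
    for toks in docs:
        tf = Counter(toks)
        vecs.append({t: tf[t] * (math.log((1 + n) / (1 + df[t])) + 1.0)
                     for t in tf})
    return vecs


def cosine(a, b):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical framework phases: labels and keyword glosses are
# illustrative only, not taken from the paper's framework.
phases = {
    "tipping point": "grievance dispute passed over promotion resentment",
    "attack planning": "copied files searched network access credentials planning",
    "attack execution": "exfiltrated deleted data leaked sold stole execution",
}


def map_sentence(sentence, phases):
    """Map one narrative sentence to the most similar framework phase."""
    labels = list(phases)
    docs = [tokenize(sentence)] + [tokenize(phases[l]) for l in labels]
    vecs = tf_idf_vectors(docs)
    scores = {l: cosine(vecs[0], vecs[i + 1]) for i, l in enumerate(labels)}
    return max(scores, key=scores.get)
```

For example, `map_sentence("The data was exfiltrated and sold", phases)` would be assigned to the hypothetical "attack execution" phase. A production pipeline would replace the keyword glosses with representations learned from a large corpus of insider threat reports, as the paper describes.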

Item Type: Book Section
Uncontrolled Keywords: insider threat, natural language processing, organic narratives, theoretical computer science, computer science (all)
Faculty \ School: Faculty of Science > School of Computing Sciences
Related URLs:
Depositing User: LivePure Connector
Date Deposited: 11 Aug 2020 00:04
Last Modified: 18 Sep 2020 00:43
URI: https://ueaeprints.uea.ac.uk/id/eprint/76393
DOI: 10.1007/978-3-030-50309-3_42
