Can we use and trust AI in Cyber Security?

Here at Brace168 we have been having endless discussions about how we could use AI tools such as ChatGPT to improve our security operations. Although machine learning and other AI tools have been in use for quite some time, we see enormous potential for the latest generation of AI to further enrich and speed up the process of analysing security threats. However, we also see this coming with risks and downsides. How can we use it? How much can we trust what AI tells us?

We thought an interesting way to approach this question was to ask ChatGPT how we should use it and compare its answer to our own – this blog covers what we found.

We asked: “How can ChatGPT help my security operations team?”

ChatGPT replied:

  1. Incident Triage Support: ChatGPT can help security analysts review and prioritize alerts by providing relevant information and context, enabling them to identify critical incidents and reduce false positives.
  2. Knowledge Base Assistance: By functioning as an interactive knowledge base, ChatGPT can quickly answer queries about security concepts, protocols, tools, and best practices, saving time and promoting continuous learning among team members.
  3. Threat Intelligence Analysis: ChatGPT can assist in analyzing threat intelligence feeds, summarizing key findings, and extracting relevant information to help security teams stay informed of the latest threats and vulnerabilities.
  4. Policy and Compliance Support: ChatGPT can help with the interpretation of complex security policies and regulations, providing guidance on maintaining compliance and implementing necessary controls.
  5. Automation of Routine Tasks: ChatGPT can help automate repetitive tasks, such as generating reports, tracking incidents, and managing documentation, allowing your security team to focus on more strategic and complex issues.
  6. Collaborative Problem Solving: ChatGPT can be a valuable resource for brainstorming ideas, discussing potential solutions, and providing fresh perspectives on complex security challenges faced by your team.

Please note that while ChatGPT can provide valuable support, it should not be relied upon as the sole source of information or decision-making authority. Always validate critical information and consult with experienced security professionals when making decisions.

We then asked: “Anything else?”

Certainly! Here are a few more ways ChatGPT can help support your security operations team:

  1. Threat Intelligence:
     a. Assisting in the collection and analysis of threat intelligence from various sources.
     b. Identifying patterns and correlations to help your team prioritize threats and allocate resources more effectively.
  2. Risk Assessment:
     a. Helping your team identify and evaluate potential risks to your organization.
     b. Suggesting risk mitigation strategies and controls, based on industry standards and best practices.
  3. Compliance Assistance:
     a. Providing guidance on various regulatory and industry-specific cybersecurity standards, such as GDPR, HIPAA, PCI-DSS, ISO 27001, and NIST.
     b. Helping your team understand and meet compliance requirements through appropriate documentation and implementation of controls.
  4. Vendor Evaluation:
     a. Supporting your team in evaluating cybersecurity products and services by providing comparative analysis and feature breakdowns.
     b. Assisting with the development of vendor-specific security assessment questionnaires.
  5. Automating Routine Tasks:
     a. Helping automate repetitive or time-consuming tasks, such as log analysis, data entry, or alert management, to free up your team’s resources for more strategic activities.

While ChatGPT can be a valuable tool for your security operations team, it’s important to remember that it shouldn’t replace human expertise or be relied upon as the sole source of information. Always validate ChatGPT’s suggestions and recommendations with your security team’s knowledge and expertise, and maintain a healthy balance between AI-driven assistance and human intervention.

This seems fantastic, and we agree with most of the points. They align with the ideas we have been discussing and testing internally, and they are in line with what is becoming common practice for security teams. There is no doubt that AI tools such as ChatGPT can speed up the process of finding and analysing the information required to detect and respond to security incidents.
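As a concrete illustration of the incident-triage use case above, here is a minimal sketch of the kind of thing we have been experimenting with. It assumes the OpenAI Python client library and an API key in the environment; the model name, prompt wording and alert fields are our own illustrative assumptions, not a production design.

```python
# Minimal sketch: ask an LLM to summarise a security alert for triage.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the alert fields are illustrative.
from openai import OpenAI

client = OpenAI()

alert = {
    "source": "EDR",
    "rule": "Suspicious PowerShell encoded command",
    "host": "WS-0142",
    "user": "j.smith",
    "details": "powershell.exe -enc <base64 payload>",
}

prompt = (
    "You are assisting a SOC analyst. Summarise the alert below in two "
    "sentences, suggest a triage priority (P1-P4), and list what the "
    "analyst should verify before acting.\n\n"
    f"Alert: {alert}"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as a draft for the analyst, never as the decision.
print(response.choices[0].message.content)
```

The value here is drafting speed: the analyst still owns the triage decision and must validate anything the model suggests.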

Microsoft obviously agree, as midway through the preparation of this article their Security Copilot appeared. Given it is based on the latest ChatGPT engine (GPT-4), it is not surprising that at face value it enables much of the above, with a focus on enabling better responses to information provided by Microsoft security products such as Defender, Sentinel and Intune. This is great, but it also emphasises the reality that using these tools properly requires specialised skills and knowledge that most organisations cannot access. It will also be interesting to see how Microsoft charge for it once it is through the preview phase; hopefully it will be accessible to those that need it. Microsoft also recognise the potential pitfalls of using their tool, as it comes with a warning (much like the ChatGPT responses): “Security Copilot doesn’t always get everything right. AI-generated content can contain mistakes” (https://blogs.microsoft.com/blog/2023/03/28/introducing-microsoft-security-copilot-empowering-defenders-at-the-speed-of-ai/).

So what is the reason for this seemingly counterintuitive warning? The Brace168 security team noted a few drivers of concern:

  • Source data – Most will know that the outputs of most AI tools depend heavily on the source data: garbage in, garbage out. This includes:
    • currency (e.g., the free version of ChatGPT has only been trained on vulnerability data from before September 2021); and
    • validity (how can you be sure that someone has not compromised the training data with skewed or incorrect data?). Providing your own training data, or checking what has been used by a third party, is sensible, and every reputable AI tool provider should allow their customers to check.
  • Helping the bad guys – We must also remember that these tools are available to the attackers, the bad guys out there on the internet. They will quite likely enable less sophisticated users to bring together the tools and methods to mount more complex and harder-to-defend attacks. We all know that the volume of attacks is as much of an issue as their sophistication. AI will drive both!
  • Transparency of sources – We have all been taught to question what we see and validate conclusions. This links back to the comment above on validity, but the response from ChatGPT does not tell you where its answers came from. How can you trust outputs without knowing the basis of what they say? Those with scientific, legal or journalism backgrounds will attest to the need for traceability and transparency when relying on “intelligence” to make significant decisions.
  • Security of data you upload – ChatGPT itself has been the subject of a security breach (https://australiancybersecuritymagazine.com.au/personal-data-of-over-one-million-subscribers-compromised-in-chatgpt-breach/?utm_source=rss&utm_medium=rss&utm_campaign=personal-data-of-over-one-million-subscribers-compromised-in-chatgpt-breach). As always, we need to choose carefully who we share our data with and follow the fundamentals of securely managing our supply chains and the custodians of our data (see the redaction sketch after this list).
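On that last point, one practical safeguard is to strip obviously sensitive values from anything you paste or pipe into a third-party AI service. The sketch below is a minimal Python example; the patterns and placeholder labels are our own illustrative choices, not an exhaustive or production-grade redaction scheme.

```python
# Minimal sketch: redact obvious identifiers from log text before
# sending it to a third-party AI service. The patterns below are
# illustrative only, not an exhaustive redaction scheme.
import re

REDACTIONS = [
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b[0-9a-fA-F]{32,64}\b"), "<HASH_OR_TOKEN>"),
]

def redact(text: str) -> str:
    """Replace likely-sensitive values with placeholder labels."""
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

log_line = "Failed login for admin@example.com from 203.0.113.45"
print(redact(log_line))
# -> "Failed login for <EMAIL> from <IP>"
```

Redaction does not remove the need to vet the provider; it just reduces what is at stake if something goes wrong.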

This is already quite a long article, so we are going to have to leave it there. Some of our leading thinkers have called for more time to allow AI to mature and controls to be developed. Holding up “progress” is unlikely to happen, but users can influence the outcomes by questioning sources, checking on safeguards and ensuring trust is justified. For the Brace168 SOC, we are not there yet. We already use AI and expect to extend it to further complement our security analysts by cross-checking our thinking and ensuring we ask and answer the right questions. In this way it is making our analysis and response faster and more effective – but we will not be trusting it entirely just yet.
