

In Discussion: AI in Cyber by Riki Blok


‘In Discussion’ brings together the most valuable insights from live conversations at our events, shaped by our team and the industry experts in the room.

AI in cyber – what’s really happening

We wondered what CISOs and Heads of Security were saying about AI, so we ran an event focused on just that.

On 22nd April, we hosted an invite-only event for Heads of Security and CISOs in Sydney. The Chatham House Rule was in effect, which means I can share some of what was discussed, but not who said it.

The insights in this blog are shaped by the panellists at the event and our audience: senior security leaders talking about what's happening in their organisations. Here's my summary.

AI is already in the building… ready or not

The first thing that struck me was the breadth of what's already being deployed. Use cases ranged from the relatively simple, like automating personal expense claims or drafting email responses, through to the genuinely complex.

I’ll have to leave some of the more interesting specifics in the room, but the message was clear: this is not a future problem - it's a right now problem.

A few leaders talked about AI dramatically compressing research timeframes: work that used to take days is being turned around in hours.

The critical qualifier in every case was that humans remain responsible for the decisions; AI does the groundwork, or often the grunt work. The human makes the call, and in effect takes the fall if the AI has made an error.

That distinction matters, and the best-prepared teams are building it into how they govern their AI tools from day one.

The organisations that are struggling aren't the ones experimenting with AI. They're the ones where experimentation is happening without anyone in security knowing about it.

Governance is the gap, and it might be widening

This was the theme that ran through almost every conversation across the evening. In many cases, security tooling has not kept pace with the speed of AI adoption, not even close (although the recent RSA Conference might suggest otherwise).

The consensus in the room was that governance frameworks need to be built before experimentation begins at scale, not retrofitted after the fact.

One framing I thought was particularly useful: enable AI tools for your people proactively, so you avoid shadow AI, or at least limit it greatly. Give people something sanctioned to use, with guardrails in place, and you maintain visibility. Try to stop it altogether, and you'll lose visibility without stopping anything.

Several leaders talked about using existing tools and processes, with IAM being the obvious example, to put solid guardrails in place relatively quickly. The infrastructure is often already there. The challenge is whether security teams are being brought in early enough to use it.
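To make that idea concrete, here's a rough sketch of what IAM-based guardrails for an agent might look like. This is my illustration, not something shown at the event: it assumes AWS and the boto3 library, a dedicated service role for the agent, and every name in it (policy, role, buckets) is hypothetical.

```python
# Hypothetical sketch: least-privilege IAM guardrails for an AI agent's
# service identity. All names (policy, role, buckets) are illustrative.
import json
import boto3

iam = boto3.client("iam")

# The agent can read one approved data prefix and write its own audit
# trail. Everything else is implicitly denied.
agent_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadApprovedData",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-corp-data/approved/*",
        },
        {
            "Sid": "WriteAuditTrail",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-corp-audit/ai-agent/*",
        },
    ],
}

resp = iam.create_policy(
    PolicyName="ai-agent-least-privilege",
    PolicyDocument=json.dumps(agent_policy),
)

# Attach the policy to the dedicated role the agent runs under.
iam.attach_role_policy(
    RoleName="ai-agent-role",
    PolicyArn=resp["Policy"]["Arn"],
)
```

The point is that this is the same identity plumbing most teams already use for service accounts. Treating an agent as just another tightly scoped identity is something you can do today.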

The phrase that stuck with me most came from an enterprise CISO: "We are in this together, let us work with you and learn with you." This is the kind of forward-thinking posture that lands well with business stakeholders.

Security should not be seen as the house of no, but as the team that builds the conditions for safe innovation.

What I thought was a great idea was an AI governance council with genuine executive buy-in. This could be viewed as a best-practice model: cross-functional membership and early engagement from security, legal, HR and the business units driving adoption.

It won't work if security is the only function in the room.

Identity is getting complicated – is anyone really ready?

I found this discussion fascinating, and it's one I expect to be having a lot more of over the next 12 months.

What is the identity of an AI agent? How do you govern it? How do you audit what it's done?

This one generated genuine discussion: for example, do you take disciplinary action against an AI agent that is misbehaving or hallucinating?

The room landed on a clear position on accountability: it sits with the human user the agent is representing, full stop.

If an AI agent does something it shouldn't, the person whose credentials it's acting under is responsible.

For me, that clarity matters, and it's the right answer.

But the mechanics of getting there are genuinely new territory.

Best practices discussed included working closely with HR and people teams to maintain visibility of who is and isn't using AI agents. This lets you ensure users understand the risks they're carrying.

Some organisations are already using AI agents to monitor other AI agents as an additional audit layer – it has a bit of an AI Inception vibe to me.

One principle that came up and I think is worth repeating:

If a human can't explain how an AI agent arrived at a conclusion, that AI shouldn't be used to make that decision.

Seems simple, but it's super important when you're using AI to streamline processes.

Data protection and exfiltration risks were flagged as more significant in an agentic world than perhaps ever before. Put simply, the attack surface has expanded.

Software engineering teams are experimenting with deploying agents within containers as a layer of security. Other novel techniques are emerging fast, but the landscape is evolving faster than most teams can track.
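For illustration, here's a minimal sketch of that container pattern, assuming Docker and the docker-py SDK. The image name and command are placeholders I've made up, not anything referenced at the event.

```python
# Hypothetical sketch: running an AI agent in a locked-down container
# as an extra security layer. Image and command names are illustrative.
import docker

client = docker.from_env()

container = client.containers.run(
    image="example/agent-runtime:latest",
    command="python run_agent.py",
    detach=True,
    network_disabled=True,               # no network: a blunt exfiltration control
    read_only=True,                      # immutable root filesystem
    cap_drop=["ALL"],                    # drop all Linux capabilities
    security_opt=["no-new-privileges"],  # block privilege escalation
    mem_limit="512m",                    # cap resource usage
    pids_limit=64,
)

container.wait()  # let the agent finish

# Keep the agent's output for the audit trail before cleaning up.
print(container.logs().decode())
container.remove(force=True)
```

Disabling the network entirely is usually too blunt for agents that need to call sanctioned tools, so in practice teams tend to swap it for an egress allow-list, but the underlying idea is the same: constrain the blast radius before the agent runs.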

The talent and team angle: what this means for hiring

Most organisations represented were going slow on headcount replacement. The uncertainty around AI, along with the current economic environment, was cited as creating a go-slow across the industry. That's consistent with what we're seeing in the broader market data from our Cyber Wrap, which will be released in the next few months.

The functions seeing the clearest demand signals continue to be IAM, SOC and GRC.

The rationale for IAM in particular is obvious given everything discussed above: if identity is the new edge, organisations need the people to build and govern it.

A final thought

Conversations like this one remind me why I genuinely love working in the cyber industry. It's such a unique community, where information sharing is valued and there's a genuine desire to improve security posture more broadly.

If you'd like to be considered for future invite-only events, or if anything here sparked a conversation you'd like to continue, please reach out. Always happy to talk.

Riki Blok

Cyber Security Recruitment Specialist, Talenza