The Cyber Insider

The Role of AI in Cybersecurity: Advantages, Risks, and Future Trends, with Ian Paterson

Emsisoft


This month we welcome Ian L. Paterson on the Cyber Insider podcast. Ian is an entrepreneur with 10+ years of experience in leading and commercializing technology companies. Paterson has raised millions of dollars in private and public financing, completed international M&A transactions, and is co-inventor of 3 patents on digital identity and data analytics. As CEO of Plurilock, Paterson built and grew the company, leading to its public listing on the TSX Venture Exchange. 
Previously Paterson served as founder and CEO of data monetization platform Exapik (acquired), and as Director of Insights for Terapeak (acquired), a venture-backed analytics firm. Paterson is a regular speaker, media commentator, and active angel investor. 
 
Hosts Brett Callow and Luke Connolly discuss the role of artificial intelligence (AI) in cybersecurity with our expert guest. Ian explains that while AI has its strengths in processing large amounts of data and making determinations based on patterns, it also has its limitations in areas such as context sensitivity, creativity, and innovation. However, he notes that AI is evolving rapidly and becoming more capable in areas like creativity, as seen with tools like ChatGPT and OpenAI's image creation tools. Ian emphasizes that AI is a valuable tool for processing large amounts of data in cybersecurity, particularly in areas like threat detection and response.  

Regarding the ethical implications of AI in cybersecurity, our guest discusses the importance of data ownership and rights. He highlights the need for organizations to be cautious about the data they feed into AI systems and to ensure they are not accidentally leaking sensitive information or unintentionally granting rights to it. He also mentions the use of data loss prevention tools to mitigate these risks.  

"AI is an equal opportunity tool. It's not just going to be used by the good guys, it's going to be used by the bad guys as well." 

In terms of future trends, Ian predicts that there will be multiple AI systems in use, both public and private, within organizations. He believes that each team, individual, and domain will have their own AI system, and organizations will have more control over the models and data used. He also anticipates the emergence of new applications and use cases for AI in cybersecurity that we may not have thought of yet. 
  
All this and much more is discussed in this episode of The Cyber Insider podcast by Emsisoft, the award-winning cybersecurity company delivering top-notch security solutions for over 20 years.   

Be sure to tune in and subscribe to The Cyber Insider to get your monthly inside scoop on cybersecurity. 
 
Hosts:  
Luke Connolly – partner manager at Emsisoft  
Brett Callow – threat analyst at Emsisoft 

0:00:15

Luke Connolly

Welcome to the Cyber Insider, Emsisoft's podcast all about cybersecurity. Your hosts today are Brett Callow, threat analyst here at Emsisoft, and I'm Luke Connolly, partner manager. We're excited to have Ian Paterson with us today. Ian's an entrepreneur with over ten years of experience in leading and commercializing technology companies. He's raised millions of dollars in private and public financing, completed international M&A transactions, and is co-inventor of three patents on digital identity and data analytics. As CEO, Paterson has built and grown Plurilock, leading to its successful public listing on the TSX Venture Exchange.

0:00:54

Luke Connolly

Ian's a regular speaker, media commentator and active angel investor. Welcome, Ian, and thanks for taking the time to chat with us today.

0:01:02

Ian Paterson

Great to be here, Luke. Excited for the conversation.

0:01:06

Luke Connolly

So let's start off. Plurilock is a cybersecurity company that leverages artificial intelligence, AI, for its products. As I understand it, AI is very good at processing large amounts of data, as in data analytics, pattern recognition, and predictive analysis. Where it's not so strong is in areas like context sensitivity, creativity, and innovation. And specifically, when we consider cybersecurity, AI's strengths seem to be a really good fit, but its weaknesses appear to be a bad fit. How do you reconcile this discrepancy?

0:01:45

Ian Paterson

Well, it's interesting to have the conversation this year as opposed to last year. I think last year, if you had said AI is not great at creativity, I would have agreed with you. Now, I think this year what we're seeing with ChatGPT and OpenAI, with their image creation tools, DALL-E, et cetera, actually, it looks like AI is pretty good at creatively coming up with ideas with very little input. So I think that the industry is changing over time.

0:02:16

Ian Paterson

You are correct, though, that there are some problems to which AI is a good solution, and there are other problems to which AI is not really a good solution. Now those things are changing over time. I think with cybersecurity, what we've historically seen is that any type of job where you have a large amount of raw data or signal coming at you, which is just too much for a human to be able to process, that's usually a really good fit for AI.

0:02:44

Ian Paterson

What we have done at Plurilock is that we actually look at human behavior as a raw input, and then our AI is making a determination of whether you're the right human on the device or not. So again, we're taking a huge amount of data, we're condensing it down, and then we're giving a very simple answer to the security team or to the SOC analyst or whoever it is who's consuming that information to say, either we think Luke is Luke on his device, or we think Luke is not Luke on his device, and therefore we should take some sort of action.
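To make the "lots of behavioral signal in, one simple answer out" idea concrete, here is a toy sketch of a behavioral check that compares a session's keystroke timings to a user's historical profile and emits a single same-user or not-same-user decision. It is an illustration only, not Plurilock's actual model; the timings, threshold, and scoring are invented for the example.

```python
# Toy illustration: condense a stream of behavioral signal (keystroke timings)
# into one simple answer for the SOC. Not Plurilock's actual model; the
# z-score threshold and sample timings are invented for this sketch.
from statistics import mean, stdev

def is_same_user(profile_intervals: list[float],
                 session_intervals: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag the session if its average inter-key interval sits far from the profile."""
    mu, sigma = mean(profile_intervals), stdev(profile_intervals)
    z = abs(mean(session_intervals) - mu) / sigma
    return z < z_threshold

profile = [0.11, 0.13, 0.12, 0.14, 0.12, 0.13]   # historical seconds between keystrokes
session = [0.25, 0.28, 0.24, 0.27]               # current session types much more slowly
print("Luke is Luke" if is_same_user(profile, session)
      else "Luke is not Luke: take some sort of action")
```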

0:03:12

Ian Paterson

So it's a fairly discrete problem. But what we're also seeing is that there are bad guys out there who are using AI tools as well. So I think the more important consideration when we're talking about AI or machine learning or neural nets or what have you, is that we can talk about a static point in time, which is now, but I think we have to be open about how that's changing and evolving, both for the good guys who are using it for defensive purposes, as well as the bad guys who are using it for offensive purposes.

0:03:48

Brett Callow

Yeah, there's been a huge amount of attention focused on AI since ChatGPT came onto the market. Is that hype, or is this as radical as some seem to think?

0:04:04

Ian Paterson

I think that the answer is, it depends if you're using AI internally to your business. It's been interesting. So we at Plurilock have a great cross-section of customers. We predominantly serve mid-market and enterprise organizations across North America, but that runs the gamut. We have mid-sized companies who have a thousand employees, and we also serve much larger enterprises in the tens of thousands of employees, and a good amount of government agencies as well.

0:04:32

Ian Paterson

When ChatGPT first came on the scene, we were hosting a lot of conversations with security teams who were interested in the benefits that GPT-like tools could provide, both for themselves as well as for other people and other organizations in the business. But we were also having conversations around what risks those tools actually present. So there were kind of two sides to the equation. The first was, where in the business could these tools be helpful? And so, again, like we were saying, a large amount of data you have to process, or a regular amount of data that you have to process kind of repeatedly, is a great use of AI. So, to give you a practical example, I recall speaking to one kind of mid-level government civil servant, and he was saying, listen, I can't live without ChatGPT. My day is spent meeting with companies, summarizing them, and then passing that summarization off to another system.

0:05:37

Ian Paterson

ChatGPT is great because it does most of my work for me. I was also talking to a partner at a top four auditing firm. And his nickname internally was basically ChatGPT, because all of his communications now looked great. They were much better written grammatically, et cetera, because he was using ChatGPT for most of his communication. So we are seeing some implementations or some use of these tools which are picking up.

0:06:08

Ian Paterson

Now, the challenge with that, though, is that you're potentially opening yourselves up to some data leakage concerns. So we saw this with larger companies that were well publicized, like Samsung, et cetera, who actually banned the use of ChatGPT internally. So, Brett, to your question, is this hype or is this not hype? I think the answer is, it depends if you're using it or not. If you're not using it, then sure, you could consider that it's hypey.

0:06:31

Ian Paterson

I think, though, that if you're an organization and you are seeing usage of it, you're probably seeing some real benefits and some real improvements, either on an ad hoc basis, meaning one or two early adopters are using this a lot, or one or two departments are slowly starting to deploy this technology internally. So it is still varied in the sense that different companies are adopting these tools at different speeds, but it's definitely out there being used in the wild.

0:07:05

Luke Connolly

You talked early on about risks, and I asked you about creativity with artificial intelligence. So how can AI be used to create new cyberattacks? That would sort of speak to its creativity. And how do you see the landscape of adversarial AI evolving?

0:07:23

Ian Paterson

Well, I think that there are really interesting use cases that are popping up, and how you use those use cases or those tools will dictate whether you're a good guy or a bad guy. As an example, I just saw an article that indicated that Facebook was now leveraging AI to do automated bug fixes. So they would go through, they would look at code that results in either a segfault or a crash or something.

0:07:49

Ian Paterson

They would then analyze that through some AI. They would fuzz it a little bit. They would try and come up with some patches. They would run those patches in a build environment, see if they pass the tests, the unit tests, and then ultimately go to an engineer to approve or not approve. Now that's a great use case. That's a really interesting application of the technology. I think the same workflow could be adopted by a bad guy who is looking for vulnerabilities, vulnerabilities that might elicit a crash, and then asking: of those crashes, are there ways that you could leverage them into some sort of exploit, and could you do that at scale using AI? So it's functionally the same thing being done, just with two different outcomes in mind.
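As an illustration of the workflow Ian describes, here is a minimal sketch of that crash-triage-and-patch loop. Every helper in it (fuzz_target, suggest_patch, run_unit_tests) is a hypothetical stub, not Facebook's or any vendor's actual tooling; a real pipeline would call an LLM and a build system where these stubs just return canned values.

```python
# Minimal sketch of the crash -> fuzz -> patch -> test -> human-approval loop.
# All helpers are hypothetical stubs standing in for an LLM call and a CI run.
from dataclasses import dataclass

@dataclass
class CrashReport:
    target: str        # file or module where the crash occurred
    stack_trace: str   # captured segfault / crash output

def fuzz_target(report: CrashReport, iterations: int = 10) -> list[str]:
    """Mutate inputs around the crash to map out the failure (stub)."""
    return [f"{report.target}-case-{i}" for i in range(iterations)]

def suggest_patch(report: CrashReport, cases: list[str]) -> str:
    """Stand-in for asking a model to propose a candidate diff (stub)."""
    return f"--- a/{report.target}\n+++ b/{report.target}\n+    /* add null check */\n"

def run_unit_tests(patch: str) -> bool:
    """Stand-in for applying the patch in a throwaway build and running the suite."""
    return bool(patch.strip())

def triage(report: CrashReport) -> None:
    cases = fuzz_target(report)
    patch = suggest_patch(report, cases)
    if run_unit_tests(patch):
        print("Patch passed tests; routing to an engineer for approval:\n" + patch)
    else:
        print("Patch rejected; crash queued for manual review.")

triage(CrashReport(target="parser.c", stack_trace="SIGSEGV in parse_header()"))
```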

0:08:38

Ian Paterson

And so I definitely think that AI is an equal opportunity tool. It's not just going to be used by the good guys, it's going to be used by the bad guys as well. One of the first areas, which we actually predicted about a year ago, was that we were going to see adoption of AI by bad guys for creating more lifelike phishing campaigns. And my personal prediction was that we were actually going to see multimodal phishing, meaning not just phishing with text, but also phishing with voice and with video, what we would have maybe called deepfake video or voice.

0:09:22

Ian Paterson

We are seeing some examples of that in the wild. I've actually been a bit surprised. I haven't seen as much of it yet, but I think that's only a matter of time. So I think that we're going to see evolution on both sides of the table.

0:09:39

Luke Connolly

I have a follow-up just quickly, if I can, Brett. And this sort of touches on Brett's earlier question, which is, have expectations been set unrealistically high? And I'll reference something that I saw posted on social media a couple of weeks ago. Someone sort of said, well, can't we just program AI to protect us? So I guess the follow-up is, can AI be used to defend against AI-generated threats?

0:10:08

Ian Paterson

Yes, absolutely. And I think that there's probably a dozen, if not more startups who are trying to build that right now. I mean, look, AI is a tool in the toolbox. It's not a panacea. The thought experiment that I like to use is if you had an unlimited number of interns, how would that change what you're doing? And AI right now can sort of be thought along that way. It's not super intelligent, depending on the task. It's not necessarily going to replace somebody who's really experienced and really good, but it is a good augmentation.

0:10:47

Ian Paterson

And so if you had kind of a low-level task that doesn't require a ton of thought or creativity, how could you do more with effectively an unlimited number of people to do that task? That's a good way of conceptualizing how AI could apply to the problem that you're trying to solve. I think, at a big-picture level, asking the question, could AI defend against cyber threats? Sure. Absolutely. But then you get into, well, which cyber threats, and exactly how are you going to do that? And so there's still some implementation that you need to do.

0:11:28

Ian Paterson

Even if you take recommendations from ChatGPT, for instance, you still have to implement them yourself. Even if you take the code suggestions, for instance, from Copilot, from GitHub Copilot, you still have to go implement those yourself. You could absolutely create some tooling to automate that, but there's still some human work that has to be done before you get to a fully autonomous, fully self-thinking apparatus.

0:12:00

Ian Paterson

So I think we're a little ways away.

0:12:02

Brett Callow

So, flipping that last question around somewhat: in what ways could AI be used to defeat our existing defensive mechanisms?

0:12:14

Ian Paterson

Well, I think going back to the multimodal phishing campaign, we see a lot of just everyday SMS phishing attacks. We see a lot of gift card attacks. It's pretty common, in what I've found working with our commercial clients. It's pretty common for me to talk to an executive or a business owner and say, hey, when's the last time that somebody scammed you for gift card money? Almost universally they'll say, oh, it happened last week, or it happened last month or last quarter. Everybody has a story, and generally it's about a new employee to the organization.

0:12:53

Ian Paterson

They get a text message or potentially a DM on some social media platform saying, hey, it's the boss. I can't talk right now. I'm with one of our top customers. I need you to go out and buy ten gift cards right now. We'll reimburse you, don't worry about it, et cetera. And it preys upon the fact that, as the new employee to the business, you don't know what the communication norms are, and so you want to please.

0:13:17

Ian Paterson

So you might just go out and do that. I think that those types of attacks are dangerous today in text message format. I think they're going to get even more dangerous when that becomes a voice memo that actually sounds like the boss. So now you have this thing that sounds like the person that you're supposed to be working for or working with, et cetera. So I would expect those to be really hard to disambiguate, again, for a new employee in the organization, and I think they are the logical extension of attacks that we're already seeing today with this new technology.

0:13:51

Ian Paterson

What can you do? You can do a deepfake voice memo today, but can you do that at scale? Well, actually, now you can. And so again, it's using that thought process around how you would use an unlimited number of interns. So I think that those types of things, which would just be evolutions of existing successful attacks, are what I expect will be coming.

0:14:16

Luke Connolly

We saw earlier this year that Hollywood writers were on strike, and one of the main negotiation points was relating to the use of AI in filmmaking. Are there any ethical implications of using AI in cybersecurity?

0:14:32

Ian Paterson

I think there definitely are. I think that there's a really good conversation happening right now across a number of industries around data and data ownership and data rights. So with the use of AI to either do a function or create content, what is the underlying data that that was trained on? Do you actually have the rights to use that data for the purpose? And then, of the content that gets created, what rights are then imbued? I did see some commentary that content coming out of large language models is not necessarily copyrightable in the same way that content created by a person is.

0:15:15

Ian Paterson

And so I think that there's definitely going to be some second-order and third-order consequences of the distinction between how content was created, whether it was by a computer, by a person, or potentially by a person aided by a computer. So I think that there's probably some corners that we don't see around right now where those nuances will actually play a really big role. I think, though, that for some other more mature uses of AI within security teams, I don't think it'll be as much of an issue. So as an example, working inside a SOC and trying to manage the deluge of data coming at you, and using AI to kind of sort and parse and prioritize, those feel to me like they're low-risk operations for AI, just from a data ownership, data privacy perspective, with the exception that if you're passing data into an AI system, you do need to make sure that you're not accidentally leaking information or unintentionally granting rights to that data to the AI system.

0:16:26

Ian Paterson

So from that perspective, we actually are seeing a lot of concern from corporations and government agencies around, hey, if we use AI for this function, are we accidentally leaking data or are we accidentally granting permission to the use of this data that we're sending up? Zoom did something interesting a couple of months ago where they changed their terms of service, and they put in their terms of service that by using the Zoom feature set, you are granting Zoom a license to use that data for training.

0:17:02

Ian Paterson

And that was a tweak to their terms of service, which had very broad repercussions. I saw a lot of blowback as a result of that little tweak, because companies were concerned, they didn't want to give that right. And so that makes it a little bit challenging as well, because you don't necessarily think about Zoom as an AI platform, and so you're not looking for those gotchas when you're analyzing that contract or those terms of service. So it is something that is top of mind for CIOs and CISOs today, that concept of data ownership. There are some good guidelines. We've actually published a bunch.

0:17:42

Ian Paterson

So if you go to Plurilock.com, Safety 4 AI, and four is the number four, so Safety4AI, we've got a couple of AI usage policies, which are templates which you can use for your own organization. We've also produced just some general guidance around what to think about as you're evaluating and making procurement decisions as it pertains to data ownership.

0:18:09

Brett Callow

There's been lots of discussion around the use of AI to create malicious code for use in phishing campaigns. What are we not thinking about here? What other risks could it potentially pose?

0:18:23

Ian Paterson

Well, I think that there are known unknowns, so we know that the bad guys are probably going to do some stuff. But then I think there's also a good chunk of unknown unknowns. I think it was former SecDef Rumsfeld who had this known unknowns and unknown unknowns paradigm. I mean, I think that we're still very early in the adoption curve of AI tools generally. And so functionally, what that means is that there are definitely going to be use cases or applications of the technology that we have not thought about today, which will percolate up, and they might be really concerning from a security perspective, or only somewhat concerning from a security perspective.

0:19:06

Ian Paterson

So what I would say is definitely count on there being things that we have not thought of today. And then how can you structure your cybersecurity program, or how can you structure your risk management framework to accommodate the fact that there's going to be things that we haven't thought about for sure. And therefore, what do you need to do now to be prepared for those? It could be as simple as consuming content, much like probably you're doing right now if you're listening to this podcast.

0:19:36

Ian Paterson

By hearing from different perspectives. One thing that I have found very helpful is that our customer base is both in financial services, but we also have government clients as well as other industries. It's really interesting to jump from one customer conversation to another customer conversation and hear about the concerns and threats that might exist for a large chemical manufacturer compared to a pharma company, compared to a financial institution or a bank or insurance company, et cetera.

0:20:07

Ian Paterson

There are definitely some trends of things that are similar and consistent amongst all those clients. But there are also some things that are specific to certain customers. As an example, within critical infrastructure, I'm always really impressed at how much physical security plays into the conversation around risk when you're talking to a cyber person, because when you have a physical power plant, you can't move it very easily, right?

0:20:33

Ian Paterson

It's not like you have a bunch of remote workers who you can just move to a different office building. And so physical threats actually play a large part when you're talking about risk for those types of people. So being exposed to different industries and different environments can be really helpful just to get a lay of the land and figure out, well, what have other people already taken for granted that might be new to my own organization as I'm thinking about risk management.

0:21:03

Luke Connolly

You touched on one of the earliest cases of ChatGPT, one of the risks of using ChatGPT, which was actually Samsung, which had some confidential data leaked as a result of some of their employees using it. And their response, as you mentioned, was to ban the use of ChatGPT, which maybe is a bit extreme because it can be very helpful. As you talked about, some of your customers are using it, and it can be very helpful in making them more effective and efficient in their jobs. So how do we ensure that AI is not used to violate people's privacy or compromise a company's data?

0:21:47

Ian Paterson

I think there are different approaches you can take. If I were to bucket or segment the approaches that I'm seeing customers take, companies take right now, the first bucket is simply: do not use any AI. Period. End of story. Case closed. Block it at the firewall and terminate any employee who violates the policy. There is a portion of the companies that I've spoken to who are taking that approach. It's a very iron-fisted approach.

0:22:17

Ian Paterson

The challenge, though, and the concern that I hear from those companies, is that we might be missing out. We might actually be missing out on innovation or velocity that our competitors are going to get who are adopting these tools. So that's the first approach. The second approach is disallow use and block access, except for a small portion of the business. So this could be an innovation team. It could just be Bill who works in the corner office, who just likes to tinker with stuff.

0:22:49

Ian Paterson

I've actually spoken to a number of nonprofit associations who are taking that approach, where there's one guy or one person who just likes to play with technology, and they've become the innovation team for that nonprofit. Then what I'm seeing most commonly is a process of use case identification. So given these tools that exist, whether it's ChatGPT or DALL-E or some of the newer ones like Moon Valley, et cetera, how can we apply those tools to our existing business?

0:23:22

Ian Paterson

And then how can we apply those tools safely? Meaning, can we do this in a way that doesn't expose data or potentially leak data? And then I think the third approach is for companies to say, you can use these tools, no problem, but when you are using these tools, make sure you're not sending sensitive data or exposing any of the data that we would consider regulated or proprietary or sensitive.

0:23:47

Ian Paterson

Now, there's a subsection there, which is organizations simply telling their employees, please don't do this, and another section telling their employees, don't do this, and here are some tools to make sure you don't accidentally do it. So that would be deploying data loss prevention tools. We've actually launched at Plurilock a capability called Prompt Guard, which basically sits in between the organization and ChatGPT, and can also be used for other large language models, and it does data loss prevention detection in line. So as you're interacting with ChatGPT, if you're accidentally sending a credit card number or Social Security number or personally identifiable information, it'll identify it in real time, and then it'll block it, and then it has some additional anonymization capabilities as well.
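As a rough illustration of the in-line data loss prevention idea (not the actual Prompt Guard product logic), the sketch below scans an outbound prompt for a few obvious PII patterns and either redacts it or lets it through before anything reaches a public model. The patterns and placeholder tokens are simplified assumptions.

```python
# Simplified sketch of in-line prompt DLP: scan for obvious PII patterns and
# redact before the prompt leaves the organization. A real product detects far
# more classes of data; these regexes are illustrative only.
import re

PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any PII classes detected in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def redact(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before forwarding to the LLM."""
    for name, pat in PII_PATTERNS.items():
        prompt = pat.sub(f"[{name.upper()}_REDACTED]", prompt)
    return prompt

user_prompt = "Summarize this case: SSN 123-45-6789, card 4111 1111 1111 1111."
if scan_prompt(user_prompt):
    print("Sensitive data detected; sending redacted prompt:", redact(user_prompt))
else:
    print("Prompt forwarded unchanged.")
```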

0:24:38

Ian Paterson

So that's really applicable for those organizations who want to get the benefits or the value of these large language models, but they don't want to accidentally lose access or lose sight of the data that they have an obligation to protect. The very last category, and this is what we're seeing more on the enterprise, is organizations who are simply building their own large language models internally.

0:25:03

Ian Paterson

My expectation is that we're going to see more of this as it becomes easier to stand up LLMs. They're coming pre-trained now, and there are a lot of good open source projects, like Llama, for instance, where you can just roll your own LLM; it's becoming easier and easier. And so those organizations are saying, hey, we're going to get all the benefits from large language models, but we're going to control the models, meaning the data is not going to go to ChatGPT, it's not going to go to Azure, it's not going to go anywhere else, AWS, GCP, et cetera. We're just going to control the full stack.
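As a sketch of what "roll your own LLM" can look like in practice, the snippet below loads an open-weights Llama-family model through the Hugging Face transformers library and answers a prompt entirely on local hardware. The model name is only an example; licensing terms, gated downloads, and GPU requirements vary by model.

```python
# Minimal sketch of querying a self-hosted, open-weights model so prompts never
# leave the organization's own infrastructure. The model name is an example;
# substitute whatever your licence and hardware allow.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # example open-weights model (gated download)
    device_map="auto",                       # use local GPU(s) if available
)

prompt = "Summarize the key risks of sending customer data to a public chatbot."
result = generator(prompt, max_new_tokens=150, do_sample=False)
print(result[0]["generated_text"])
```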

0:25:36

Ian Paterson

And that way, even if we do accidentally leak data to the LLM, we control the LLM, and therefore it's not as big of an issue. That approach is only really applicable to organizations who have the resources, and what I'm seeing right now is that that tends to be the larger organizations. There's a handful of folks that I've spoken to in the financial services space who are taking that approach as well, but from a sophistication standpoint, they sort of act and behave much like larger organizations. So those are kind of the four different approaches that companies can take right now, and it really just becomes a risk trade-off decision.

0:26:15

Ian Paterson

What do you want to do? Do you want to try a new technology, potentially get some benefits, whether it's 10% productivity improvement, 20%, 30%, whatever it is, which comes with a little bit of risk, or do you want to try and take that, mitigate the risk and kind of have your cake and eat it too?

0:26:31

Brett Callow

What trends do you think we'll see in the months and years ahead in terms of how AI is used by defensive teams?

0:26:42

Ian Paterson

So my prediction with AI more broadly is that we're going to see multiple AI systems. I don't actually know if this is contentious anymore. It was a little bit more contentious kind of earlier on in 2023, but we have the large public AI models like ChatGPT and Copilot. My expectation and my prediction is that we are going to see that in a business capacity, each team will have their own AI system.

0:27:13

Ian Paterson

Probably the company will have its own corporate AI system. Each individual person might have their own AI system, and there might be domain specific AI systems as well. I am waiting for the day that Bloomberg has an AI system that I can use to query and ask financial information to, which would be trained on financial data. And similarly, Pfizer might have their own medical AI system that is trained on proprietary data that I could query around healthcare or pharma questions to.

0:27:46

Ian Paterson

So my expectation is that we're going to end up with multiple AI systems, a mix of public AI systems as well as private AI systems, whether those private AI systems are applications that we might just run on our iPhone or kind of more traditional on-prem servers. That's what I expect more broadly across the business, security teams included. I think with regards to security, they're going to follow the same trend.

0:28:19

Ian Paterson

I would expect that there are going to be multiple public AI systems that security teams are going to interact with. We're already seeing that, of the companies who are allowed to use AI systems, they are already using a mix of ChatGPT and Copilot, and in most cases they're experimenting with Alpaca or Llama or other open source AIs as well. So I think that we're already seeing where the puck is going.

0:28:48

Ian Paterson

The question then is who's going to control those AI systems? Is it going to be a small number of the large incumbents like Microsoft, et cetera, or are we going to see a very distributed base where we're going to see a ton of small, highly specific AI systems? My suspicion is that we'll see a mix of both, but we'll have to wait and see.

0:29:14

Luke Connolly

We talked a little bit about how AI can assist with providing defensive positions, but can you talk specifically about the role that AI can play in threat detection and response? What advantages can it offer over traditional methods for threat detection, for example?

0:29:34

Ian Paterson

Well, I think that if I come back to my two central positions on AI, the first is that AI does really well looking at a lot of data and prioritizing or triangulating on the signal, to then do something with it, either pass it to a human or do something else with it. And kind of the other approach, which is: if you had an unlimited number of interns, what could you do with that? I think I would just take those two ideas and then apply them to some of the areas within cybersecurity and infosec that already exist today. So definitely I would be looking at areas around malicious email or spam as areas to potentially see some improvement. I think the trick there is that you do have to be cautious around the cost per transaction that AI takes.

0:30:31

Ian Paterson

So if you're dealing with billions of emails and each transaction of reviewing an email is a high-cost transaction, potentially you can't use AI on all of it in the same way. So how do you then filter and parse and separate which emails you feed into a large language model, et cetera? I think also log analysis generally, whether that's within the context of a SOC or not, I would expect to see some applications and improvements, and I think that that will help from a threat detection perspective.
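One way to picture the cost-per-transaction point is a "cheap filter first, LLM second" triage: inexpensive heuristics dispose of the obvious cases, and only the ambiguous remainder incurs a per-message model call. The sketch below is illustrative; the phrase list, thresholds, and review_with_llm placeholder are assumptions, not a recommended detection ruleset.

```python
# Sketch of "cheap filter first, LLM second" email triage. Heuristics handle the
# bulk of mail; only ambiguous messages pay the per-message cost of an LLM review.
# The phrase list, thresholds, and LLM stub are illustrative assumptions.
SUSPICIOUS_PHRASES = ("gift card", "wire transfer", "urgent", "verify your account")

def cheap_score(email_body: str) -> int:
    """Count suspicious phrases; higher means more phishing-like."""
    body = email_body.lower()
    return sum(phrase in body for phrase in SUSPICIOUS_PHRASES)

def review_with_llm(email_body: str) -> str:
    """Placeholder for an expensive large language model classification call."""
    return "needs-analyst-review"

def triage_email(email_body: str) -> str:
    score = cheap_score(email_body)
    if score == 0:
        return "deliver"                      # clearly benign: never touches the LLM
    if score >= 3:
        return "quarantine"                   # clearly bad: no LLM cost either
    return review_with_llm(email_body)        # ambiguous middle: pay for the model

print(triage_email("Hi, it's the boss. I need you to buy ten gift cards urgently."))
```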

0:31:06

Ian Paterson

And so I'm expecting more incremental improvements compared to our current technology. I think that that's the first thing that we're going to see. I also suspect though that there's probably some use cases that we don't see at all, which in hindsight will appear obvious to us. I think large language models are a new tool in the toolbox and we have very creative problem solvers out there who will probably find applications for large language models we haven't seen yet and which will become the norm.

0:31:41

Ian Paterson

I don't have a good prediction of where those will occur, but again, it goes back to that sort of unknown unknown that probably exists out there. We're just waiting to see where they turn up.

0:31:54

Brett Callow

What's the most interesting use or misuse of AI that you've actually seen to date?

0:32:03

Ian Paterson

That's a good question. I think that there was a news headline event with WormGPT, probably a couple of months ago now, where it was some offensive use of the GPT technology, kind of purpose-built for bad guys. I think that pen testers or red teams out there have probably already done a lot of work, whether they're using WormGPT or they've just cobbled together some of their own stuff. But if you take something like a Metasploit toolkit and you combine that with a large language model that's trained on a data set of vulnerabilities, you could probably do a lot of really interesting stuff.

0:33:00

Ian Paterson

While it might be a little bit simple as opposed to sophisticated, I think it could probably wreak a lot of havoc. And so I think that there are some things just on the edge of the periphery that we're seeing little glimpses of. We haven't necessarily seen the full kit and caboodle, but there are some really interesting applications from a red team perspective that are out there today.

0:33:28

Luke Connolly

We've talked about specifically the implementation of AI as it relates to cybersecurity, both offensive and defensive. But what are some of the emerging technologies that are likely to play a significant role in the future of AI driven cybersecurity? So the core functionality of AI that's going to make a difference at some point in the next five years.

0:33:50

Ian Paterson

I think five years is a long time, specifically in this field. If you think about where we were in January and, gosh, we're in October right now, and we've already seen so much. I think five years is a long time. I've typically looked on kind of a twelve to 24 month basis because I can plot some trend lines and kind of know with a reasonable degree of certainty where things end up. I will share an anecdote from somebody who is in, I'll just say broadly, the military industrial complex.

0:34:22

Ian Paterson

And I was talking to him about, where does this all go? Because this is somebody who is very much a forward thinker. And what he was saying is, if you take the current rate of progress, we are likely to see the advent of artificial general intelligence, the singularity, probably resulting in bad guys trying to train their AI systems to be self-sufficient. 

0:34:53

Ian Paterson

And so that was kind of on a similar timeline, on the one-to-five-year basis. I don't know if I necessarily agree with that, although it's an eye-opening idea to consider that that's maybe where we get AGI coming from, if we get it at all. But I'd be curious, Luke, what are your thoughts as well? Where do you see this going in the next five years?

0:35:17

Luke Connolly

You know, when I think about what's happening in five years and the underlying technology, I'm not a technologist, so I'm not deep into it. But the unspoken dark secret is the massive amount of computing power that lies behind a product like ChatGPT. So there aren't that many companies today that have pockets deep enough to pull together the sort of computing resources that are necessary to really be functional and show what's happening.

0:35:43

Luke Connolly

I think if I was to make a prediction, I would say that the capability that we're seeing with ChatGPT today and Bard today, they're going to go down-market. So, you know, Moore's Law is continuing, against all odds, more or less, to progress, and computing power is continuing to get faster at a predictable rate. We're going to be able to have the sort of supercomputer capabilities that ChatGPT and OpenAI are able to deliver on a much smaller scale. And when we have that, we're going to get even.

0:36:21

Ian Paterson

Today.

0:36:22

Luke Connolly

The innovation that we're seeing with AI is incredible. We're seeing new tools coming out, dozens or hundreds a day. But as the capabilities go down-market, so that you don't have to rely on OpenAI or Google or Amazon or Apple, then it's going to really start to accelerate in terms of what's possible, and I have no idea what that's going to result in.

0:36:52

Brett Callow

Finally, and to totally change direction, ransomware is probably our biggest cybersecurity problem at the moment. How do we solve it? Or if not solve it, at least make it much less of a problem?

0:37:06

Ian Paterson

So ransomware has been interesting to observe over the last couple of years. Certainly, as an industry, we saw ransomware skyrocket in 2021. Insurance companies, particularly in Canada, were actually upside down on their cybersecurity insurance policies. They paid out more in damages, I think it was in the first half of 2021, than they collected in premiums, just as a result of the ransomware payouts that they had to make. So that caused a lot of, I think, emphasis and focus.

0:37:42

Ian Paterson

We saw a lot of takedowns and coordinated efforts by authorities in multiple jurisdictions to try and go after the actual threat actors themselves. It seemed like that was effective in 2022, and then it also seemed like things were back, just as bad in 2023 as they were in 2022. So I think that it's a bit of a cat and mouse game. It feels like the upper hand right now is with the bad guys, unfortunately. I think that the great work that Emsisoft is doing to help identify, defend, and recover from ransomware attacks is appreciated.

0:38:26

Ian Paterson

And I think it's also just a good reminder to organizations that you don't have to be somebody special to be hit or to be victimized. Really, if you're running a vulnerable system out there that has a remote code exploit that is Internet accessible, you can potentially get hit by ransomware pretty easily. And it doesn't matter if you are a nonprofit and you don't have any data that you would consider sensitive, that these guys are potentially going to go after you just because they can make some money. So I think that to address the problem requires a number of things. First is for organizations, whether it's companies, government agencies, nonprofits, et cetera, to really focus on the fundamentals.

0:39:11

Ian Paterson

So having good cyber hygiene can certainly limit the damage and limit the likelihood that you're going to get hit. Obviously, things like backups, et cetera, limit the blast radius. If you do get hit but you're able to recover your data, that's going to help. I think also coordinated efforts by...