The Cyber Insider

Bug Bounties, the Uber Breach, and Ransom Demands, with Katie Moussouris

Emsisoft

In this episode we’re excited to host Katie Moussouris, the founder and CEO of Luta Security, a company that helps organizations implement and manage bug bounty programs. Prior to starting Luta Security, Katie worked with companies including @stake, Symantec, and HackerOne. She’s a hacker, an advocate for gender and economic equality, a cybersecurity fellow at New America and the National Security Institute, and an advisor to the US government.

With extensive experience in bug bounty programs, our guest shares her perspective on common mistakes in bug bounty and vulnerability disclosure programs:  
“You want to be able to hire and recruit people who will be able to prevent and also spot and fix those bugs while the software is being developed. If you weigh too heavily on the reward side of things and reward only the bugs that remain, after all of those secure development processes, you've actually set yourself up for a perverse incentive and you're going to gut your own hiring practices”.  

The discussion goes on to explore solutions to combat ransomware and what organizations should do in case of an attack: “I don't think putting that much of a burden on the victims is really going to result in what you want, which is to shine more of a light on who needs help and who needs to warn their users that there was a material breach like that. So I would say it's about requiring notification upon payment of ransomware that we should focus, at least on the victim’s side”.

All this and much more is discussed in this episode of The Cyber Insider podcast by Emsisoft, the award-winning cybersecurity company delivering top-notch security solutions for over 20 years.  

Be sure to tune in and subscribe to The Cyber Insider to get your monthly inside scoop on cybersecurity. 
 
Hosts:  
Luke Connolly – partner manager at Emsisoft  
Brett Callow – threat analyst at Emsisoft  
 
Intro/outro music: “Intro funk” by Lowtone.  

[0:00:08] Luke Connolly: Welcome to The Cyber Insider, Emsisoft's podcast all about cybersecurity. Your hosts today are Brett Callow, Threat Analyst here at Emsisoft, and I'm Luke Connolly, partner manager, and we're very excited to have Katie Moussouris with us today. In case anyone isn't familiar with Katie, she's the founder and CEO of Luta Security, a company that helps organizations implement and manage bug bounty programs. Prior to starting Luta Security, she worked at companies including @stake, Symantec and HackerOne. She's a hacker, an advocate for gender and economic equality, a cybersecurity fellow at New America and the National Security Institute, an advisor to the US Government, and once made a very unusual entrance to a cybersecurity conference in New Zealand. Welcome Katie, and thanks for joining us today. 

[0:01:02] Katie Moussouris: Thanks so much for having me. 

[0:01:04] Luke Connolly: So right off the bat, I have to ask what was the unusual entrance or what was unusual about the conference? 

[0:01:09] Katie Moussouris: I think you're thinking of the one where a karaoke number was used. My entrance was a live karaoke session with backup dancers and pyro, and the song was Chandelier by Sia, but I had changed the lyrics to Cyberlier. As soon as the lights went off after that performance, they came back on, and then I talked about export control of cyber weapons, which was a problem I was working on at the time. So it was entertaining and informative. 

[0:01:41] Brett Callow: Just some background, what sparked your interest in computers and the hacking scene? 

[0:01:45] Katie Moussouris: Oh, my mom bought me a computer when I was eight years old, and I didn't have any friends, so it was a match made in heaven, teaching myself how to program on a Commodore 64 as a little kid. And then fast forward a few years, I was a teenager growing up in the Boston area, and there were a lot of hackers nearby, and I happened to stumble across a bulletin board system, which was a very early prototype of chat rooms that you would see in modern systems. And I joined this bulletin board system, it was flooded with hackers, and my hacking career began at that point. Before it was a career, it was a hobby. 

[0:02:22] Luke Connolly: Remember the Commodore 64 and its smaller brethren, the VIC-20? So how did that morph into a career in IT? 

[0:02:28] Katie Moussouris: Well, when I was a teenager, that was actually the late 80s and early 90s, so just to place that on the timeline of the internet itself, the internet wasn't very well developed, and I actually went to school for molecular biology, biochemistry and mathematics. I kind of wandered back into cybersecurity because I was working on the bioinformatics team at the Genome Center at MIT and transitioned to become a systems administrator there, taking care of the networks and planning them. And we were being attacked a lot. So I had to dust off my cyber skills and learn how to scan my own networks and ideally prevent bad guys from getting in before I could find and fix the flaws myself. Then fast forward, I was a Linux developer and I started writing security tools for systems administrators. And right around that time was the turn of the millennium and the dot-com bust, which was one of the first big collapses in the IT, computing and internet industry. And I became an independent hacker for hire. I joined up with old friends whose hacking groups I had been part of as a teenager, and they had professionalized into the company that you mentioned at the beginning, @stake, which was one of the earliest companies to do what today is well known as application security penetration testing, or looking for flaws in application code as opposed to looking for network-level flaws. And my career just kind of went from there. 

[0:04:02] Brett Callow: For anyone who may not be familiar with the concept, what is a bug bounty program and what's the history behind them? 

[0:04:09] Katie Moussouris: Well, bug bounties are exactly what they sound like. If you find a security bug, you get paid a bounty. So it's a cash reward in exchange for security vulnerability information. And the history of those programs actually dates back to the mid-90s, when the Netscape browser, which became the Mozilla browser, offered $500 if you found a security hole in their browser. That was pretty much the going rate, and it was just that one browser company; not really too many other companies offering cash rewards like that. Then fast forward to 2010: Google started offering cash rewards for security holes in the very early Chrome browser. At the time, I think Chrome was maybe only two years old, so the code base was really new and they felt fairly confident they had eliminated a lot of security issues, and they thought to crowdsource and look for talent in addition to looking for the bugs that way. And then I started Microsoft's first bug bounty program in 2013, which led to me being invited to the Pentagon, and we launched Hack the Pentagon in 2016. So things have kind of snowballed since then, but bug bounties have become a lot more popular and a lot more well known. And the efforts of friendly hackers are more recognized today than they were a dozen years ago. 

[0:05:28] Luke Connolly: I imagine bug bounty programs are most successful when software vendors are able to structure their incentives to align with the motivations of hackers, whether it's money or recognition or competition, what have you. So is it easy or hard to achieve this alignment? 

[0:05:45] Katie Moussouris: Well, honestly, hackers will do things for a lot of different reasons. Microsoft, as I mentioned, we started the first bug bounties there a decade ago, and Microsoft was already receiving bug reports for free from researchers around the world. Close to 400,000 non-spam email messages a year were coming into secure@microsoft.com, the email address to report security holes. It's often a misconception that cash is the way to motivate hackers and align their motivations with the organization. You can use cash as a tool to focus the energy that's already there and the eyeballs that already might be looking. But if, for example, you try to outbid the offense market, like if there are people who are buying bugs in your software and they plan to use them, to exploit them, et cetera, a lot of organizations think, well, we should try and pay more than that so that we motivate hackers to come to us instead. That's actually a losing strategy in the long term, because what you actually want is to build security in from the ground up. You want to be able to hire and recruit people who will be able to prevent and also spot and fix those bugs while the software is being developed. If you weigh too heavily on the reward side of things and reward only the bugs that remain after all of those secure development processes, you've actually set yourself up for a perverse incentive and you're going to gut your own hiring practices. I can tell you myself, as a teenager, I would never have sat through a corporate job interview or one-on-ones with my manager if I knew I could make as much or more money just doing bug bounties as opposed to getting a day job in prevention.  

[0:07:32] Brett Callow: In terms of the time it takes companies to fix bugs, how long is too long? 

[0:07:38] Katie Moussouris: Well, that's a complicated question. For web vulnerabilities, where you control the code base and don't have a lot of third-party supply chain dependencies, you should be able to turn those around fairly quickly. For code that runs on your customers' or users' machines, you probably have to do a lot more testing, so it'll take closer to 90 days, which has sort of been settled on as an industry norm for a reasonable amount of time to fix things. But if you have something that involves hardware or tricky supply chain issues, that might take you quite some time. You may be able to produce a fix, but getting it out there and getting it deployed in the places where the actual vulnerabilities live out in the field, in the world, that may take a lot longer. So it's a little bit of a complicated answer. But software is not straightforward these days. 

[0:08:32] Luke Connolly: For software companies' product programs, they typically consider things like what features to release. Sometimes they'll have releases focused on performance, sometimes releases focused on general bugs, and I would imagine security should be one of the things they consider as part of their core development program. But software companies also have a huge interest and investment in their intellectual property. So how do you address potential concerns that they might lose control over their products if they launch a bug bounty program? 

[0:09:04] Katie Moussouris: Well, I think the idea there is that they should be building security in from the ground up. A lot of organizations, even publicly traded companies, have no real incentive to do that. We don't have software liability laws, so there's nothing really out there to punish companies that are negligent when it comes to software security. But in terms of putting out a bug bounty and worrying about their intellectual property, I think most organizations that put out bug bounties understand that if a security researcher can figure out how to break it, probably bad folks are also figuring out how to break it. And you're better off working with the security researcher community to find out where those weaknesses are as opposed to, let's say, waxing litigious with them and accusing them of illegally reversing your software. I don't know how the laws are going in Canada, but in the United States at least, the Department of Justice has put in prosecutorial guidelines that essentially help to protect researchers, given that the DOJ can't change the law and suddenly make every friendly hacker an authorized hacker. But what they can do is provide prosecutorial guidance that essentially says if someone has done this in good faith and tried to tell you about a security flaw, it's just not good to try and go after them legally. 

[0:10:31] Brett Callow: On a similar theme, what's the difference between a bug bounty claim and a ransom demand? 

[0:10:39] Katie Moussouris: Well, that one is interesting, because a lot of bug bounty hunters don't speak English as a native language, and so sometimes they are simply asking, do you have a bug bounty, or is this eligible for a bug bounty? But it comes across as more of an extortion attempt of saying, I would like to be paid for this bug, et cetera. So I would say that you do have to have a bit of conversation with a researcher, if they are asking about a cash reward, before you can determine whether or not it is an unreasonable request. Now, that being said, there are definite attempts to extort, and folks that just come right out and say, I have bugs in your system, I've exploited them, I know all of this data about you, and I'm going to release it to the public unless you pay me. That is very clear cut: that's not looking for a bug bounty, that's looking for an extortion payment. And the company Uber actually got in trouble for covering up a breach of 57 million records by trying to launder it through their bug bounty program. Their bug bounty service provider knew what was going on and was complicit with this whole idea of a cover up: paying through the bug bounty program in exchange for a nondisclosure agreement about the breach that the hackers signed. And that was essentially them covering up what was an elevated extortion attempt by these researchers. They didn't actually know about the bug bounty program when they reached out to Uber. They just said, we found some things and we'd like to get paid. 

[0:12:16] Brett Callow: Why should a company use a service provider like Luta Security for its bug bounty program? What's the worst that can happen if they go it alone? 

[0:12:25] Katie Moussouris: Well, what a lot of people do, and actually a lot of governments do, is this: the concept of a bug bounty is very easy to explain, and they make the mistake of thinking that since the concept is easy to explain and understand, the execution must also be very simple. And the problem is, it's just not in a lot of cases. Organizations may start out strong with some dedicated individuals who carry the weight of these programs programmatically and make sure no bugs end up unfixed for a long period of time, et cetera. But then eventually those people burn out. And so what Luta Security usually does with organizations is we do a maturity assessment first. And if an organization is not running a bug bounty program yet and they're interested in starting one, we look at their vulnerability management skills. So it's almost like testing their muscle memory for how they catch bugs that they already know about or that they find through their own scanning. How healthy is that process? Because if that process lags behind significantly, can you imagine what it's going to be like burdened with a whole bunch of bug reports coming from the outside? It ends up being a distraction from other higher-ROI, higher return on investment, security activities, and it ends up burning out your staff. So what we do is we do a maturity assessment. We offer two pathways. We either say this is what it will take to run a sustainable program, and how many people you need to hire in what roles, or we can handle it for you. And we know how to hire and train for these roles, and we place them in governments and large organizations around the world. 

[0:14:01] Luke Connolly: The world with pretty much every company having a presence on the Internet. Now I say pretty much because we actually came across a company in Germany recently that had no internet presence and we thought it was very odd. 

[0:14:14] Katie Moussouris: Did you find them on the street? Like, how did you reach out to them? 

[0:14:18] Luke Connolly: We didn't find them, they reached out to us. And then we did a lookup of the company in public records to make sure they weren't something untoward. It was quite unusual, though. So, are vulnerability disclosure programs limited to software vendors, or are they applicable to a construction company, a healthcare company, companies that don't specifically develop software? 

[0:14:39] Katie Moussouris: Yeah, I mean, every organization that uses computers and the internet can benefit from a vulnerability disclosure program if they have the mechanisms in place to handle vulnerability management. So if they have sort of ignored that bit and thought to themselves, now is a good time to start a vulnerability disclosure program and we'll just kind of build it all as we go, that tends to be a recipe for disaster. But for organizations that don't write software themselves, that simply use other people's software, that third-party software still has to be maintained, it still has to be configured correctly and it still has to be updated periodically. So vulnerability disclosure programs work for infrastructure issues like that, and they work whether or not you make software yourself. 

[0:15:26] Brett Callow: When should a company start thinking about having a bug bounty program? 

[0:15:30] Katie Moussouris: Well, I mean, they can always think about it from the beginning, but it really does matter how much preparation they do and where they've decided to weight their security investments. We actually turn away a lot of companies that are too early in their security journey because they were misinformed and told that the first thing they have to do is start a bug bounty program. They're building out their security program and they've been told by somebody that this is a thing that they need, and it's absolutely not the right time for them. You will see, like I said, issues like staff burnout but also distraction. If you have a limited staff that can look at security issues as a whole, you don't want them dealing with a bunch of bug reports. Even if the bug report feed has been curated for you by, let's say, a bug bounty platform that will triage the issues and only send you the valid ones, that still represents a potential flooding of the organization. So I would say they can think about it from the beginning, but they need to think about it as part of an overall investment strategy that has its place once they have done some security prerequisite work on their own. 

[0:16:41] Luke Connolly: So Katie, you've been involved in bug bounty programs for a while now. Since they've become more and more common, have we seen an improvement in security overall? 

[0:16:50] Katie Moussouris: With the prevalence of these programs, for certain mature programs I would say the answer is yes. But these organizations, I call them the Security 1%, and they sort of already had those earlier investments in proactive security set up and in place, and they actually had pretty good reactive security as well. But most bug bounty programs and most vulnerability disclosure programs lack some of the essential closing of the loops, right? The bugs will come in, they may fix the majority of them, but they don't actually update their processes to prevent those types of bugs from happening again. And without closing that loop, not just bringing it back into your security development lifecycle but actually elevating your security practices so that you can wipe out entire classes of bugs, I think that piece has been missing from the Security 99%, which is everybody else who hasn't learned those maturity lessons yet. It just eventually becomes less of an efficiency and more of an operational burden in those cases. There is some research that I'm doing as part of a National Science Foundation funded research project that's going to be looking at exactly that: how effective have bug bounty programs and vuln disclosure programs been at reducing not just the number of vulnerabilities left in any given system, but reducing the severity of them and making them harder to discover. Ideally, you should be left with more and more difficult-to-find, difficult-to-exploit vulnerabilities. If you've been running a bug bounty program for a few years and you're still getting reports for things that can be found with a free or relatively inexpensive scanning tool, then you have definitely missed some of those loop-closing steps, because you clearly aren't running those tools yourself and you're relying on the crowd to run them for you, which is very inefficient. So it does depend: it depends on how well you're closing those loops and whether or not you have learned some lessons and aren't just playing whack-a-bug. 

[0:18:56] Luke Connolly: So, based on what you said then, has vulnerability disclosure itself improved in recent years? 

[0:19:01] Katie Moussouris: I think the awareness that you should have some way for people, if they are friendly, to tell you about a security vulnerability, I think that awareness has definitely improved in the last decade. Like I said, a lot of people understand the concept very easily at this point, whereas a decade ago not as many organizations were even considering it. And I know I made a lot of big companies very unhappy when I had Microsoft join the bug bounty train, because they were looking at it as an understood pact among the older software companies. They knew they couldn't bounty everything, they knew they weren't going to be able to keep up with the volume, considering they were already getting quite a bit of volume of cases. And I think that over time, resistance to the concept of vulnerability disclosure and bug bounties has gone down, but so, almost, has that original understanding of how we even fix the bugs and prevent the bugs that we know about or should know about already. 

[0:20:04] Brett Callow: Moving beyond the bug bounty programs for a moment, what should organizations really be paying attention to when it comes to security now? What are some common shortcomings you see? 

[0:20:15] Katie Moussouris: Well, honestly, I look at organizations as, it doesn't matter what their size is, it matters what their security and privacy responsibility is. A couple of years ago there was an audio social media app called Clubhouse. It got popular during the pandemic as people wanted more than just texting each other and sending memes, et cetera. They wanted to be able to talk to each other. And I got on that platform and, as accidents happen, I happened to stumble across some security holes. I reached out to the company. They had a bug bounty program, but it was incredibly difficult to get them to engage, and they tried to basically make me sign a non-disclosure agreement by reporting the bug through one of the bug bounty platforms. And it was clear that they had heard about this vulnerability before, but they hadn't chosen to fix it. So I held them to it, saying, look, I'm going to disclose this publicly; if you don't feel like fixing it, that's fine, but I think the public should know that they are at risk. And so back to your original question. 

[0:21:15] Katie Moussouris: I think, honestly, organizations, no matter what their size, need to look at how many users they are affecting, what kind of information they are in charge of protecting for those users, and kind of match their security and privacy efforts accordingly. That company, we haven't heard much from them in a while. I think the excitement around them died down. But at the time they were valued at over a billion dollars and they had just received $100 million in the bank. They had fewer employees at their company than I have at mine. And so they absolutely had not invested in security at all, except for starting a bug bounty program, and that was their sole security investment at that stage. They were at probably 10 million or more users at that point, and they weren't living up to their responsibility to those users in terms of security and privacy. So I would say, if an organization is thinking about where should we invest, where should we start? Start by looking at what it is that you are trying to protect. What kind of information are you stewarding for your users and how many of them are there? And try and grow your security and privacy practices according to the population you serve, as opposed to leaving it until you gain a certain overall market share, et cetera. 

[0:22:34] Luke Connolly: I think you've sort of spoken to this with that answer, but what are some of the mistakes that you've seen companies make when they introduce bug bounty programs, aside from systemically not having security as part of their development process? Have there been mistakes that have really been kind of "what was I thinking?" moments? 

[0:22:51] Katie Moussouris: Well, I mean, a lot of organizations think that paying out higher amounts of money will be the ticket to better bug reports and better security. But often all that does is raise the bug bounty awards for mediocre bugs, and still for some of those bugs that you could have found yourself or could have hired penetration testers to find. I think another one is being overambitious about the scope and then having to shrink back the scope of the program, and then falsely thinking that slapping a nondisclosure agreement around your bug bounty program protects you and protects your users from exploitation and/or the bugs being disclosed to the public before you fix them. All it really does is annoy the good samaritan researchers who would like to tell you about security issues, but they don't really necessarily owe you that for free if you are offering bug bounty rewards for some of your bugs but a lot of your serious bugs are not rewarded. So there's the over-scoping and being too ambitious, and then there's the under-scoping and trying to clamp down on researchers with nondisclosure agreements, which really don't belong in bug bounty or vuln disclosure programs. They do belong in penetration tests, but you also shouldn't sit on the results of a pen test forever and not fix them either. 

[0:24:11] Brett Callow: You mentioned the Uber case before. How common are things like that, actually? Was Uber an outlier, or is that type of thing more common than we realize?  

[0:24:23] Katie Moussouris: Well, I think it's remarkable that Uber thought this was a good idea. And honestly, I think the fatal flaw, and why it may actually be very prevalent, is that these bug bounty platforms sell the illusion of control. They don't really sell you a process; that's up to you to do. All they sell you is, essentially, we will eliminate the bugs that are duplicates, we will eliminate the ones that can't be reproduced, and we'll have all of the researchers effectively signing nondisclosure agreements when they join the platform. So I think the fatal flaw in that system is the illusion of control that is being sold by the bug bounty platforms. If nondisclosure was not on the table, I don't think Uber would have had the idea to use the bounty platform to pay off their extortionists and essentially pay for silence in their 57 million record breach. I think that probably a lot of organizations are saying to themselves, well, even though customer records were breached, we don't have to tell the customers in this case. We can just pay the bounty and rely on the nondisclosure of the platforms to keep this quiet. I think it probably is happening a lot more often. And unless regulators get very serious about telling these intermediary service providers, like bug bounty platform companies, that you cannot be a party to essentially hiding a breach that otherwise would have been disclosed, I think that this is going to go on for quite some time.  

[0:26:04] Brett Callow: We've watched the ransomware problem get worse year after year. Do you think governments, and the US government in particular, have done enough to combat it and cybercrime generally? 

[0:26:16] Katie Moussouris: Ransomware is an interesting question, because the vulnerabilities that the ransomware actors are taking advantage of have always been there, right? There have always been possibilities of having these organizations hacked in that way. I think that part of the growing ransomware problem is the rise in cryptocurrencies and the ability for attackers to monetize their attacks directly with victims in that way. So I think if governments were to impose stricter criteria for cryptocurrency operators in knowing who their customers are, that would cut off a lot of the attractiveness of the anonymity of payoff and getting away with it that ransomware actors really rely on. Without those kinds of restrictions, however, they can always go outside the United States to use different cryptocurrency exchanges. So I think it would have to be multiple governments around the world, similar to how the banking industry is regulated around the world, to try and make sure that criminal activity can be traced and stopped via the financial sector. On the causality, on the vulnerability side, I think that has always been present, and we're just seeing a monetization of it by cybercriminals of the ransomware variety because of the rise in the value and the anonymity and ease of use of cryptocurrency. 

[0:27:49] Luke Connolly: So to that point, do you think that blocking access to the money should really be the focus of policymakers, or is there anything else they can be focusing on to help protect the average user? 

[0:27:59] Katie Moussouris: Well, honestly, the biggest problems are critical infrastructure such as utility providers, power plants, oil and gas pipelines. They are privately owned, and typically the problem is that they do not have any cybersecurity professionals at all, or they have one and they are basically there keeping the IT networks up and running, but not necessarily focused on cybersecurity. Now, critical infrastructure is not all of the victims of ransomware; we see a lot of softer targets like school districts, city governments and folks like that. But it's really a lack of cybersecurity personnel at these organizations that could even respond to or try to prevent some of these issues from happening. So I would say government support for at least critical infrastructure and major cities, getting more budget and more help and guidance in hiring cybersecurity professionals to batten down the hatches when the storms come, I think that would be a big step. But the professional part of the cybersecurity industry is brand new, less than 20 years old, and we see a lot of job requisitions that are looking for unicorn rock stars with a decade of experience. That not only limits who you can hire, right? Because there are very few people who have that much experience in the workforce itself. But it also cuts off the ability for you to hire and train folks and improve the cybersecurity workforce pipeline itself. So I think if governments were to focus on cybersecurity workforce pipelining and getting more people who have an interest in cybersecurity hands-on paid internships, to start growing that workforce that can then be distributed around not just critical infrastructure but more state, local and city governments, I think that would help a lot as well. 

[0:30:03] Brett Callow: Do you think we can actually beat ransomware or is that just something we have to learn to live with and minimize its effects as far as we can? 

[0:30:11] Katie Moussouris: That's like saying, can we ever beat the cleverest hackers? And I would say, no, we can't. But we can improve our time to detect. We can improve our resilience in the face of an attack, what happens when you actually find out that you've been attacked, and increasing the isolation of affected systems. So I do think we can get better at it. It's kind of like looking at turn-of-the-century medicine versus the medical industry today. Certainly we've made a lot of advancements in the science of it, but we've also grown the workforce, we've grown skill sets, we've differentiated different skill sets. So I think a lot of it has to do with the fact that we do not have a very old profession, and we unfortunately built our society's dependence on these computer systems and the internet before we were actually equipped to deal with securing it. So will we ever beat ransomware? Probably not, but we can definitely improve, just like antibiotics improved the health outcomes of many people in the civilized world. 

[0:31:20] Luke Connolly: So just to wrap things up, I think this is something we ask everybody. Katie, do you think that there should be any prohibition or limitations on the circumstances in which ransom demands are paid? 

[0:31:32] Katie Moussouris: So I think that's a difficult question, right? I think that a decent deterrent to paying ransomware would be a requirement that you report that you paid. Because I think a lot of organizations look at it like, look, if we just pay this quietly, we can get back to work, get back to business. We'll just say we had an outage and that we're better now. And I think that really does the public a disservice. So I think if we wanted to deter the payment part of it from victims, we should probably make it a requirement that they publicly disclose that they paid and how much they paid, et cetera. I know the ransomware task force that assembled in the United States was considering making a recommendation to make ransomware payments illegal, right, to try and deter victim organizations from actually making those payments. However, if a victim organization is staring down an existential threat and they cannot recover their operations, they honestly have no choice but to do that illegal thing, if it were illegal, or else they have no business. So I don't think putting that much of a burden on the victims is really going to result in what you want, which is to shine more of a light on who needs help and who needs to warn their users that there was a material breach like that. So I would say it's requiring notification upon payment of ransomware that we should focus on, at least on the victim's side. 

[0:33:14] Luke Connolly: That's a great perspective. And with that, I'd like to thank you, Katie, for joining us today. It's been a really interesting and helpful conversation. And we'd also like to thank our listeners for tuning in. To stay up to date on the latest in cybersecurity, be sure to subscribe to our podcast. Thanks again, Katie. Take care.