Evolving Industry:

A no-BS podcast about business leaders who are successfully weaving technology into their company's DNA to forge a better path forward

Cybersecurity in the Age of AI: Why Data Defenses Need to Get Smarter

George Jagodzinski (00:01):

Today, we talk about cybersecurity, how AI is impacting it, how to get buy-in on cyber efforts, and how to delicately balance policy and culture improvements. But most importantly, we talk about Clippy, the famously unhelpful office assistant from 1990s Microsoft Office, shaped like a paperclip. We'll learn a little bit about that. I'm joined by Andrew Wilder, a cybersecurity leader with more than 20 years of experience, working at organizations such as Nestlé and Hillenbrand. In addition to being a CISO, Andrew is a board member and an adjunct professor at Washington University. Please welcome Andrew.

 

Welcome to Evolving Industry, a no-BS podcast about business leaders who are successfully weaving technology into their company's DNA to forge a better path forward. If you're looking to actually move the ball forward rather than spinning around in a tornado of buzzwords, you're in the right place. I'm your host, George Jagodzinski. 

 

Andrew, thanks so much for being here.

Andrew Wilder (01:10):

Hey, George. Thanks for having me today.

George Jagodzinski (01:12):

So today, we're going to talk cybersecurity, AI, people, but I figured a good kicking-off point would be: how do you justify to the executive team, to the board, why you would invest in some of these initiatives? I ask that selfishly because I'm often in a position where I'm trying to convince executives, and I literally heard a CEO say one time, "Everyone leaks data. I don't worry about it. It doesn't even get the press anymore. It's just going to happen. What can we do? Let's not bother." And I'm curious, with your experience in the trenches, how do you get it done?

Andrew Wilder (01:45):

Yeah. So I recently pitched an executive team to spend money on cybersecurity controls, and I got very similar questions, specifically from the CFO, like, "What's the risk return on this?" I really see a CISO's job as being the subject matter expert who comes in and says, "We have risk X, and there are three different ways we can address it: we can ignore the risk completely, we can spend a small amount of money and bring it down a little bit, or we can spend a reasonable amount of money and really reduce it." It's really about, "What is the risk appetite of the business, and what do they want to do?" A lot of times, they'll ask, "How would other organizations of our size do this?

(02:26):

Is this kind of table stakes, or gold standard?" That kind of stuff. But in the end, I think the business is able to see the value in saying, "We're a more cybersecurity-mature organization." Imagine, as an investor, you have two companies that are, for all intents and purposes, equal, but one of them is really good at cybersecurity. Well, you can feel safer knowing that you're investing in that company. Or if you're doing business with two different companies, one really good at cybersecurity and the other not, maybe you're more interested in doing business with the one where your data's not going to get leaked, or where there's a much better chance things are going to be secure. So that's where a business starts to see the value: when they can use it as a tangible advantage in terms of getting sales, or investors, or things like that.

George Jagodzinski (03:13):

Interesting. Curveball question for you. I know you've done a lot of M&A, and the way you were talking about value made me curious: does cybersecurity maturity factor into valuation at all? Do you know?

Andrew Wilder (03:24):

So I would say, in the past, definitely not. I think now, it's starting to, more and more. I've been doing M&A on the cybersecurity side for more than 20 years, and the first half of that was really like, "Well, we've already bought this company, just go do something." Then, after a while, they started to say, "Hey, we're going to buy this company.

(03:44):

Your cybersecurity assessment probably won't have an impact on the deal, but we'd like to know how good or bad they are ahead of time." Then we started doing this thing where we bring them in slowly, in a secure way, right? You do an external pen test on them from a third party. You start to deploy your security tools. Once you get a good handle on things and you've addressed the biggest rocks, then you can say, "Okay. Now we can start to connect to them and do all the stuff the business is dying to do." Cybersecurity is in a much better place in the M&A world than it used to be.

George Jagodzinski (04:17):

So you guys used to be out in the cold shadows while the M&A deal was going on, brought in afterward to clean up the mess, and now you're getting into the room. That's nice.

Andrew Wilder (04:25):

You're like, "Guys, did you really want to buy this company? I mean, I understand from a business perspective, but from a cybersecurity perspective, it's a nightmare."

George Jagodzinski (04:33):

Yeah. I love it. We get involved in quite a few M&A deals. I'm always fascinated by which parts get integrated, and how quickly, and sales teams always seem to get integrated super quickly. It's funny how people who are incented by just getting results seem to all of a sudden get their act together really quickly.

(04:51):

Do you find, in the M&A world, that the cybersecurity playbooks can serve as a bit of connective tissue, a glue to bring the organizations together?

Andrew Wilder (05:00):

Yeah, it does, and a lot of times, what I got was a lot of pressure from the business saying, "Hey, how can we speed up this cybersecurity stuff?" That's when it's important to remind them, "Look, if we open the floodgates now and they have bad stuff, it could come right into our systems and infect us. So it's important to let us do our job, do the due diligence, get things signed off, and then we can safely say, okay, now sales can connect, and finance can connect, and all those people can connect." So yeah, there's a little back and forth. You have to be a good business influencer at that point, and not just say, "Okay, let's go ahead and do it."

George Jagodzinski (05:36):

They want their synergies, and they want them now. So on the speed front: last time we chatted, we talked a little bit about AI in the cybersecurity world, and it comes up on the attack side and on the defense side; it's a new vector. I'm curious what you're seeing out there right now from an AI and cybersecurity perspective.

Andrew Wilder (05:56):

Yeah. So, boy, that's a loaded question, George. Let's first talk about the risks. If you ask cybersecurity professionals today, "What is your number one attack vector? How do people most often get in?", the number one answer is phishing emails.

(06:13):

You get them. I get them. We get them all the time. Now, one thing that's fortunate about phishing emails right now is that a lot of times they have misspellings, clear clues that tell you, "This is probably something fishy." The problem now is that fraudsters are using Generative AI to generate not only well-crafted phishing emails but unique phishing emails. Because if you think about it from a defense perspective, if I get a phishing email in my organization, I'm going to search by the sender, by the subject line, by the attachment, by the link. Now, if I'm an attacker and I know the defenders are going to do that, and I can craft every message individually, I'm going to be that much more successful, and it's going to be that much more difficult for the defense side.
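To make that concrete, here is a minimal sketch of the indicator-based sweep Andrew describes: once one phishing email is reported, defenders search the rest of the mail store for anything matching the same sender, subject, or attachment hash. The message structure and indicator values here are illustrative assumptions, not any particular email platform's API.

```python
import hashlib

# Indicators of compromise (IOCs) pulled from one reported phishing email.
# These values are illustrative stand-ins.
BAD_SENDER = "billing@paymts-example.com"
BAD_SUBJECT = "Overdue invoice #4821"
BAD_ATTACHMENT_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def matches_known_phish(message: dict) -> bool:
    """Flag a message that reuses any indicator from the reported phish."""
    if message["sender"] == BAD_SENDER:
        return True
    if message["subject"] == BAD_SUBJECT:
        return True
    for attachment_bytes in message.get("attachments", []):
        if hashlib.sha256(attachment_bytes).hexdigest() == BAD_ATTACHMENT_SHA256:
            return True
    return False
```

Every check is an exact match, which is exactly why per-recipient generation hurts: if the attacker varies the sender, subject, and payload for each message, none of these comparisons ever fire.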

(07:02):

So that's one way they're using it. You can think about a very similar thing with malware, ransomware, and stuff like that. A lot of the detections are based on behavior, and heuristics, and hash values. Well, if you can make every piece of ransomware or malware completely unique, then you can get past a lot of the controls we have today. So from a defense perspective, we need to be a lot smarter. But you also asked, "What are the positive benefits of it?"
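The same brittleness applies to hash-based malware detection. A minimal sketch, assuming a locally maintained set of known-bad SHA-256 digests; real endpoint products layer behavioral and heuristic detection on top of this for exactly the reason Andrew gives:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad file digests (placeholder value).
KNOWN_BAD_SHA256 = {"0" * 64}

def file_is_known_bad(path: Path) -> bool:
    """Exact-match lookup of a file's SHA-256 digest against the blocklist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

Changing a single byte of the binary produces a completely different digest, so uniquely generated malware never appears in the blocklist, and the defense has to lean harder on behavior.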

(07:31):

There are a lot of positive benefits, right? Everybody's using it today. I think it was IBM that said they're going to pause hiring for any jobs that AI can do, which is an interesting statement; they're trying to be forward-looking, tech-wise. Google also talked about the ways they're using it with their Mandiant solution, around threats, and toil, and talent, using it for different things. So there are a lot of use cases we could talk about on the positive side, for sure.

George Jagodzinski (08:01):

Yeah, and I want to put on my grumpy old man hat for a second: when I hear "AI," I get so frustrated, because it's applied as a marketing term to everything imaginable. You and I have been around the block long enough to know most of what's happening is just rule-based automation, or machine learning, or whatever it is. And it's not just Skynet battling Skynet out there; there are more practical, pragmatic things it can help people do. Maybe it'd be interesting to explore, because our audience doesn't all know cybersecurity: on a cybersecurity team, how is AI making their lives easier? Because I find with AI in general, if you can free up time for your existing team anywhere in your organization, you start to see some great benefits. It doesn't need to be some crazy Skynet thing. So I'm curious, what's happening?

Andrew Wilder (08:49):

So one of the things I would implement with my teams in the past, actually as an annual objective that people got bonused on, was this thing called SSAOE, an acronym for "simplify, standardize, automate, offshore, and eliminate." As much as possible, you want to always be pushing down low-value tasks so that the team can do higher-value tasks. There are always going to be new things. Next year, there's going to be something else.

(09:16):

Two, three, five years from now, everything's going to continue to change. So as much as you can, you want to push down those low-value tasks. Think about using Generative AI to automate routine tasks. People who work in a SOC, or Security Operations Center, doing an investigation may have to get into three or four different systems. They may have to put that all into a spreadsheet and massage the data to get down to the details they really want.

(09:41):

Well, if you're smart enough, you can get Generative AI to do that stuff for you, and do it regularly, so you get, "Okay, here's exactly what I wanted," instead of having to sort through all this data to reach the goal you're looking for. So that's one, I think, very good use case for it.
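Here is a sketch of the glue work he means. An analyst today might export from three consoles and merge the results by hand; a small script, whether hand-written or drafted by a GenAI assistant, does the joining automatically. The file names and column names are hypothetical:

```python
import csv

# Hypothetical CSV exports from three systems an analyst would otherwise open by hand.
SOURCES = ["edr_alerts.csv", "proxy_logs.csv", "email_gateway.csv"]

def build_timeline(username: str) -> list[dict]:
    """Merge every event for one user across all sources into one sorted timeline."""
    events = []
    for source in SOURCES:
        with open(source, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("user") == username:
                    events.append({"source": source, **row})
    return sorted(events, key=lambda event: event.get("timestamp", ""))

# Usage: one chronological view instead of three separate exports.
for event in build_timeline("jdoe"):
    print(event.get("timestamp"), event["source"], event.get("action"))
```

The point is not the fifteen lines themselves but that the analyst gets "exactly what I wanted" on every investigation instead of re-massaging spreadsheets each time.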

George Jagodzinski (09:59):

I love that framework. I want to poke into that a little bit, because we always talk in our organization about "elevate and delegate" your role, and I threw out a challenge for our team to look for as many opportunities as possible to leverage AI to free up their time. And to be honest, it's been a little bit crickets, because everyone's just so darn busy with their day jobs. But if you start incenting people on it, I could see where that would move the needle a little bit. Can you help me understand that incentive framework? I understand the pillars you have, but how do you structure it?

Andrew Wilder (10:30):

So what you have to do is, at a mid-year review, or a quarterly review, or an end-of-year review, you talk about, "What are the things I've done that fit into that SSAOE framework?" Right? "I've eliminated this low-value task that we used to have to do," or, "I've automated this task that used to take three or four people a couple of hours a week, and now it takes one person 15 minutes to run the automation." We were always trying to find ways to eliminate things and reduce things so that we'd have more time to learn and more time to take on additional tasks.

(11:03):

You're right: if you just tell people, "Yeah, go do this," the response is, "Sure, that sounds great, but I've got a lot of other stuff to do." If you tie money to things, though, people are generally very interested in doing them. Part of your end-of-year bonus, or your salary increase for next year, could be based on how successful you were at doing these things. So if you did one, well, that's probably not very good, but if you did five or six of them, hey, that's really doing a good job, and you're not only helping yourself but helping the rest of your team and, hopefully, the entire organization you work for, right?

George Jagodzinski (11:33):

Yeah. Yeah. For all my grumpy-old-man take on the marketing label of AI, I have found that there are real dollars at the board level and the executive level, and they're sitting in an AI bucket. If you can tie what you're doing to that, you can unlock those. Have you had those experiences in your line of work?

Andrew Wilder (11:53):

Yeah, we definitely have, and it's important to be able to tie back to those buckets, those kinds of initiatives, like you talked about, and say, "Here are ways that we are doing that. Here are ways that we are leveraging it." I wouldn't want to put on your grumpy old man hat and claim we're doing AI for things that we're not, but it's also a great way to incentivize people: "Look, if you want to do a project, here's a way we can do it leveraging AI, and we can tap into these additional funds or incentives to move those things forward."

George Jagodzinski (12:25):

Talking about mapping back to organizational goals: I find most executives I talk to have challenges doing that, whatever the initiative, whether it's unlocking new data capabilities, or e-commerce, or modernization. Being able to map that back to the corporate goals is challenging for some reason, and I'm curious if you run into that with cybersecurity efforts, and how you align those.

Andrew Wilder (12:50):

Yeah. So I worked for Nestlé for about 18 years, and one of the things we did, starting in 2012, was a Lean initiative. Lean, you've probably heard of it, is like Six Sigma: practices developed at Toyota, very factory-related, always about making continuous, small, incremental improvements. But one of the other things about Lean, if you think about it from a leadership or organizational perspective, is the cascading of goals. You start with the CEO's goals, or the executive team's goals, or the board's goals, and at the next level down, you have a similar set of goals, but tailored or flavored toward your line of the business and how you're supporting those things, and that continues to cascade throughout the organization.

(13:40):

It's really a powerful thing when someone on the factory line is doing something that supports the factory manager's goal, which supports the country manager's goal, which supports the regional manager's or the head of the organization's goal; you can see it cascading up. So I agree with you that there's some difficulty there. Sometimes you need to be pretty creative about how you support those goals, but I think there's a good way to do it if you run that cascading process well.

George Jagodzinski (14:09):

Makes sense. I've seen plenty of CEOs' heads explode when we go in to evaluate part of their organization and you can't draw a line from the initiatives to any of the five corporate goals. It's quite a fun moment. So then, moving on to people. People are a big part of this.

(14:26):

We talk about incenting them. There's obviously the risk; there are a lot of stories out there about people pasting stuff into ChatGPT that they shouldn't. But how do we improve the people aspect of cybersecurity?

Andrew Wilder (14:39):

So let's first start with the Generative AI piece, because I think there's a good use case there, and then we can talk about the people aspect of cybersecurity, which is a huge piece of it, for sure. If I talk to my peers about this, there are a couple of different ways of looking at it. One way is trying to block it. Now, I don't think that's reasonable. I think we've opened Pandora's box here, and even if you try to block the big ones, people are still going to find other ways to get the Generative AI they want.

(15:12):

So to me, it's more about teaching people how to use it safely than trying to block them. Maybe you have really good reasons to block it: maybe you're in government, or you have people in R&D working with confidential stuff all the time. Maybe there are reasons for some businesses to do it, but it seems like a difficult road to go down. The second part is looking at policies, right?

(15:33):

You probably already have some kind of cloud policy, or SaaS policy, or something like that, so you say, "Look, this is another SaaS solution. This is another cloud solution. You wouldn't go to Pastebin and put our corporate secrets out there. Don't do the same thing in Generative AI just because it might be able to give you more information." One of the interesting things I've seen is these kinds of security teachable moments.

(15:57):

So you go into whatever Generative AI tool, and some kind of browser plugin pops up and says, "Hey, George, I noticed that you're on this Gen AI site. Just want to remind you of the corporate policies about that. We're able to track what you're doing here, so make sure you don't put anything confidential in there, because that's not allowed, and we don't do that." To me, that's a very powerful link between people and the policy, and it gets them to see it. One thing I've done before with a DLP, or data loss prevention, tool is a popup message that said, "Hey, I noticed that you're trying to copy 10 gigs of files to a thumb drive. What's your business reason for doing this?"
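A toy version of that teachable-moment prompt: watch the volume headed for removable media, stay silent below a threshold, and ask for a justification above it. The 10 GB figure and the wording mirror Andrew's example; the hook itself is a hypothetical stand-in, not a real DLP product's API:

```python
THRESHOLD_BYTES = 10 * 1024**3  # 10 GB, the volume from Andrew's example

def on_copy_to_removable_media(user: str, total_bytes: int) -> None:
    """Hypothetical agent hook fired when files are copied to a thumb drive."""
    if total_bytes < THRESHOLD_BYTES:
        return  # routine copy; stay out of the user's way
    # The teachable moment: don't block, just ask, and make clear it's logged.
    prompt = (
        f"Hey {user}, I noticed you're copying "
        f"{total_bytes / 1024**3:.1f} GB of files to a thumb drive. "
        "What's your business reason for doing this? (This transfer is logged.)"
    )
    reason = input(prompt + "\n> ")
    print(f"AUDIT user={user} bytes={total_bytes} reason={reason!r}")
```

The design choice matches what Andrew describes next: the deterrent is not the technical control itself but the user's realization that someone can see what they are doing.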

(16:40):

When people see that, they start to think, "Hmm, maybe I shouldn't do this, because they know I'm doing it," especially if they're doing it for the wrong reasons, like an insider threat. So those kinds of messages can really help people understand, and it's a powerful learn-in-the-moment thing. Think about the cybersecurity awareness training that's common today: it's something you get once a year, and you click through it as fast as you possibly can to say, "Yay, I'm done. I did my cybersecurity training." Maybe you take the quiz at the end, and hopefully you pass. But this learning-in-the-moment thing of, "Hey, you're doing a risky behavior, and here are the policies or the reasons not to do that," I think is a pretty powerful way, and it goes back to your original question about how we help people.

(17:24):

It's not about a stick; it's about a carrot, right? I think the best way for people to understand cybersecurity and take it home with them is to say, "Look, you're a professional. You're intelligent. You know what you're doing. But what about your children? What about your friends who aren't in technology? What about your parents, or your grandparents, or your aunts and uncles? Go take this information home and give it to them, because if they get a phishing link, maybe they don't know all the same stuff you do." When people see it in terms of protecting their family and friends, that emotional connection becomes a very powerful tool as well.

George Jagodzinski (18:00):

Yeah. That taps into the good human stuff. I'm curious, though: how do you keep that from feeling like a Clippy Overlord that's watching everything you do?

Andrew Wilder (18:10):

Well, there are positive and negative parts of Clippy. I love Clippy, and I like the Clippy-esque web plugin that says, "Hey, I see you're trying to do this." There is a negative aspect to it, because as the Gen AI stuff started coming out, and with COVID and work from home and all of that, there was a lot of talk about employers tracking people and watching what they do. I think if you have a strong policy that says, "Look, your corporate computer, and the work you do on it, can always be monitored and logged, so you should be aware of that," that's fair. It's not like there are people in a secret dark room reading all your emails, but in the case of a forensics investigation for an incident, they're probably going to have to go back and look at what happened on that computer, or that device, or that server, so we need to have that kind of logging and monitoring in place. I don't think we should be monitoring people and what they do.

(19:07):

I think that's more of a leadership problem than a technology problem to solve, but different organizations have chosen to do different things, and that's up to them, I guess.

George Jagodzinski (19:16):

Yeah. For the youngsters out there, Clippy was a little animated paperclip in Microsoft Office that, anytime you were trying to do something, would ask if it could help. It wasn't always that useful back in the day, but maybe it was ahead of its time. Maybe you get around the Overlord fears if it's truly helpful. I mean, knowledge management within organizations is ripe for disruption with something like AI, and everything's changing; ChatGPT is one of a million flavors on their way. If your enterprise Clippy-style helper can abstract some of that away into a black box, while also making sure you don't poke yourself in the eye, maybe it becomes a bit more accepted, right?

Andrew Wilder (19:58):

Exactly, and it's about making sure those things are truly helpful, like you said, making sure people really want to use it and it really adds value. There's a use case I think about for Generative AI every single day: somebody emails me and says, "What's your availability for the next two weeks?" So I go into my calendar, and then I type out an email that lists it. Now, you can use things like Calendly, but maybe you have parts of your calendar that are blocked off that, for a conversation with George, you'd be willing to open up. So boy, it would be cool if Clippy popped up and said, "Hey, I can respond to this for you and give them your availability for the next two weeks. That would be easy for me to do."
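That availability reply is tractable automation even before the GenAI part, since the core is just interval math over the busy blocks; the language model's job would be wrapping the result in a polite email. A minimal sketch, assuming busy times arrive as (start, end) pairs already pulled from the calendar:

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, min_length=timedelta(minutes=30)):
    """Return the gaps between busy (start, end) blocks long enough to offer."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start - cursor >= min_length:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_length:
        slots.append((cursor, day_end))
    return slots

day = datetime(2024, 1, 15)
busy = [(day.replace(hour=9), day.replace(hour=10)),    # stand-up
        (day.replace(hour=13), day.replace(hour=15))]   # blocked focus time
for start, end in free_slots(busy, day.replace(hour=8), day.replace(hour=17)):
    print(f"free {start:%H:%M}-{end:%H:%M}")
```

Andrew's nuance about selectively opening blocked time would just be a policy layer on top: tag certain busy blocks as "movable for George" before computing the gaps.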

George Jagodzinski (20:39):

Oh, executive assistants across the globe are about to be out of a job. As long as it kept me from booking something over school drop-off or pick-up, that would be nice. Navigating personal and work calendars is always a challenge.

(20:57):

We're big believers that, while you need policy for cybersecurity, it's much more of a cultural solve, but you need both. How do you find the balance between policy, and people, and culture?

Andrew Wilder (21:12):

Yeah. So listen, you've got to have a good policy. It's got to be up to date. You've got to refresh it regularly. You've got to communicate it to people regularly.

(21:19):

It can't be a 60-page document full of legal jargon. It's got to be something people can understand, something that can be translated, all that kind of stuff. There was a quote the CEO of Nestlé UK gave us one time: "The sense of urgency is very hard to cascade." Think about that in a large organization with a topic like cybersecurity. Imagine a CEO says to his executive team, "Hey, this cybersecurity stuff is really important. Make sure that your teams do it."

(21:54):

Okay, so that message gets to those people, and they're like, "Yeah, cybersecurity is really important." Now, what are the chances that all of those executive leaders go back to their teams and say, "Hey, by the way, when I was talking to the CEO, he reminded me that this cybersecurity thing is really important"? Okay, maybe that happens. But then what are the chances, if we're playing the telephone game here, that the next level goes to their directors or managers and says, "Hey, by the way, remember that cybersecurity is really important"? That message gets lost at some point. So what you have is, at the top of the organization, people understanding, "Hey, there are these SEC rules out there.

(22:31):

This is really important stuff. Let's make sure we're doing this," and maybe on the IT team or the cybersecurity team, people know it's important. But when you get out to the general population of the rest of the organization, a lot of people are not hip to it. So I think the best way to address this is creating strategic relationships in each of those business units: having cybersecurity champions in those areas, people who understand, people who know the policy. "Hey, there's somebody in your area, your location, or your business unit that you can talk to if you want to know about cybersecurity, somebody you can call first or ask questions." That sort of implanting of agents throughout the organization, I think, is a really good way to do it.

George Jagodzinski (23:19):

Man, you're speaking right to my soul and raising my blood pressure, because, as someone who leads a company, I can never cascade my sense of urgency through the whole company. We're not even that large, and the game of telephone is real. We always say you can't pay someone to give a... insert four-letter word here. It's a challenge, but I think getting everyone educated on what's at stake can help too. We were talking about this earlier, and not everyone might know it, but some executives are being held personally liable for data leaks, and that's something that follows you to your next organization as well, depending on your role. I'm curious what you see evolving in that realm of responsibility.

Andrew Wilder (24:06):

So executives and boards are responsible for addressing risk within their organizations. With the new SEC rule that came out recently, the SEC is saying, "Hey, by the way, you're supposed to address risk in your organization, and cybersecurity is a very important risk that you really need to pay attention to." So I think what's happening now is that instead of being an IT problem, or a tech problem, or a CISO problem, it's starting to be a company problem, an everybody problem, and people are starting to be held accountable for it. If you look at some of these recent cases, who is being held accountable? Is it always the CISO who takes the blame and becomes the scapegoat for these problems?

(24:49):

So as a CISO, you've got to make sure you're covered with the right type of insurance in your organization, like D&O (directors and officers) insurance. You've got to make sure the decisions that are made are well-documented, so that everybody takes responsibility for them. That goes back to what we talked about before, about the subject matter expert and risk appetite: "Here are the different things we can do; here's my recommendation." Then let the business make the decision, but make sure it's well-documented, because that way, all the people involved in the decision are held accountable for it, instead of one person being the fall guy, if you will, for a bad cybersecurity decision.

George Jagodzinski (25:27):

Yeah. We find ourselves in an interesting position, in that one of our capabilities is cybersecurity advising, but another is essentially helping our clients get more out of their data. I just hosted a round table with some well-known brands, and pretty much all of them, on average, rated themselves a letter grade of C on their ability to get value out of their data. So I'm sitting there thinking, "Okay, everyone says data is the new oil, and everyone's collecting more and more of it, but you're not even getting the value out of it, while at the same time it's becoming more and more of a liability inside your organization." I was trying to figure out if there are any interesting frameworks or rules of thumb for, "Hey, if you have X amount of data across this many different systems, it's this much more of a liability."

Andrew Wilder (26:17):

If you use quantitative risk assessment methods and you look at the fines under GDPR or CCPA, or some of these other data privacy regulations, it's very easy to translate the amount of data and the number of records into financial risk. So you can say, "Look, we've got a million customer records. If $1,000 per record is the maximum fine, we could have up to a billion-dollar fine if these records get breached, and here's the amount of cybersecurity spend we'd need to protect them, or protect them better than we do right now." That's actually a very nice way to do it. And there's a whole conversation about qualitative versus quantitative risk assessment methods.

(27:02):

You can go to your board and show them the little pictures with the red, and yellow, and orange blocks, but it's much more powerful to say, "We have this much dollar risk exposure. Implementing this control, which costs this much money, can lower it by this dollar amount." It's still not an exact science. It's based on actuarial tables and estimates, but listen, that's how we do insurance today, right? Insurance is not an exact science either. It's all based on estimates and Monte Carlo simulations and things like that, so it's using that same methodology to think about cyber risk.
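Andrew's million-records example, worked through. The record count and per-record fine are his illustrative figures; the breach probabilities and control cost below are made-up stand-ins for what would really come from actuarial data and vendor quotes:

```python
RECORDS = 1_000_000
MAX_FINE_PER_RECORD = 1_000                    # dollars, illustrative regulatory maximum
MAX_EXPOSURE = RECORDS * MAX_FINE_PER_RECORD   # $1,000,000,000, as in the example

# Annualized loss expectancy = probability of a breach x loss if breached.
P_BREACH_TODAY = 0.05          # stand-in estimate, not actuarial data
P_BREACH_WITH_CONTROL = 0.02   # stand-in estimate after the proposed control
CONTROL_COST = 2_000_000       # stand-in annual cost of the control

ale_before = P_BREACH_TODAY * MAX_EXPOSURE
ale_after = P_BREACH_WITH_CONTROL * MAX_EXPOSURE

print(f"Exposure if breached:  ${MAX_EXPOSURE:,}")
print(f"Expected annual loss:  ${ale_before:,.0f} -> ${ale_after:,.0f}")
print(f"Risk bought down:      ${ale_before - ale_after:,.0f} for a ${CONTROL_COST:,} control")
```

That last line is the sentence a CFO actually wants to hear: spend $2 million, reduce expected annual loss by $30 million, under clearly stated assumptions.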

George Jagodzinski (27:37):

That makes a lot of sense, because we're always working with our clients as they try to find the budget to create more data capabilities. Maybe we can light an extra fire under everyone's butts and say, "Hey, there's not just the ROI on this new data capability; there's this much risk tied to the data, so you'd better be doing something with it. Cut out the nonsense and get it done, right?" Interesting. Well, Andrew, I've really enjoyed this conversation.

(28:03):

I always like to finish with a fun question, which is: throughout your career, across work and your personal life, what's the best advice you've ever received?

Andrew Wilder (28:13):

So I think the best advice I ever received is that no one is going to care more about your career than you do. So if you're not doing a development plan for yourself, if you're not educating yourself, continuously learning, reading books, getting certifications, going back to school, if you're not doing that stuff, nobody's going to do it for you. Now, you might have a manager or a leader or whatever who's nice and helps you in that area, but the number one person who's going to care about your career is you.

George Jagodzinski (28:46):

That's great advice. I love that. I luckily got that early on, as well. That's fantastic. Well, Andrew, thank you so much. I really enjoyed this.

Andrew Wilder (28:55):

Thank you, George. Have a good one. Cheers.

George Jagodzinski (28:59):

Thanks for listening to Evolving Industry. For more, subscribe and follow us on your favorite podcast platform, and pretty please, drop us a review. We'd really appreciate it. If you're watching or listening on YouTube, hit that subscribe button and smash the bell for notifications. If you know someone who's pushing the limits to evolve their business, reach out to the show at evolvingindustry@intevity.com, or reach out to me, George Jagodzinski, on LinkedIn. I love speaking with people who are getting the hard work done. The business environment's always changing, and you're either keeping up or going extinct. We'll catch you next time, and until then, keep evolving.