Evolving Industry:

A no BS podcast about business leaders who are successfully weaving technology into their company DNA to forge a better path forward

Why Data Governance is Your Accelerant in AI Risk Management

George Jagodzinski (00:00):

Today we talk about AI, risk, and why governance doesn't have to slow innovation down. We get into what impact regulations in the future may bring, why AI is a brand new attack surface, and how companies can scale AI without turning it into a compliance or security nightmare.

(00:15):

My guest today is Alec Crawford. Alec has spent decades in risk management across finance and technology, including successful leadership through the 2008 financial crisis, which is about as deep of an end you can jump into. He's held senior risk roles at major financial institutions, has a background in computer science, and has been thinking about AI long before it became cool.

(00:35):

Today, Alec is the founder and CEO of AI Risk, where he works with regulated organizations to help them adopt AI safely, securely, and at scale. We talk about why having an AI policy isn't enough anymore, how permission aware data governance changes everything, and why doing this right can make companies move faster, not slower. If you're a leader trying to figure out how AI fits into your organization without a massive whoops-a-daisy, this one's for you. Please welcome Alec Crawford.

(01:01):

Welcome to Evolving Industry, a no BS podcast about business leaders who are successfully weaving technology into their company's DNA to forge a better path forward. If you're looking to actually move the ball forward, rather than spinning around in a tornado of buzzwords, you're in the right place. I'm your host, George Jagodzinski.

(01:39):

Alec, thanks so much for being here.

Alec Crawford (01:41):

Hey, great to be here, George. Great to see you, man.

George Jagodzinski (01:43):

So today, we're going to be talking about AI and risk, and I figured a fun place to start would be, what's your why to risk? I mean this in a very positive way. I feel like you're the hipster of risk. You've been all about risk before it was cool, before AI wasn't even on the scene. So what's the origin story there?

Alec Crawford (02:00):

Yeah, so kind of funny. I was obviously working on Wall Street, did that for a long time, and then I got a phone call to go work for Ziff Brothers Investments and run risk management for them. And of course, I happened to start my starting day was January 1st, 2008, so the start of the crisis. I used to tell the team, "We got 10 years of education in a single year. Anything that could happen, happen."

(02:28):

But once you go through that successfully, you get pretty excited about risk and managing risk and all those kinds of things. And once you see behind the curtain, you're like, "Whoa, this is awesome." It's a little bit of modeling, it's a little bit of quant, it's a little bit of common sense. And it ends up being, for me, I love coaching. I've been a baseball coach and fun stuff like that for kids. You're coaching portfolio managers, right? Because they may know a lot about a credit or a stock or a company, but really not understand that well portfolio construction and risk management and factors and kind of all this kind of quantity, cool stuff that I love and also know how to translate for them. So that's really where it all started.

George Jagodzinski (03:14):

What were the one or two things that helped you get through that successfully? Because that could have gone really wrong really quickly.

Alec Crawford (03:19):

You've got to have a great team. I had a couple of members of the team who were incredibly good. We hired PhDs or experts in certain areas. So basically think about what you're good at and what you're bad at and hire the people for stuff you're bad at. If you're not the world's most quantitative person, have a quantitative person working for you. If you absolutely don't understand a certain part of the equity market or non-domestic equities, hire someone that knows that well. So you got to kind of fill in those gaps. And of course, the classic hire people smarter than you. I do that all day. A lot of people are kind of threatened by that. That means your company's going to lose.

George Jagodzinski (03:58):

Yeah. You make it sound so easy, but yeah, really you need to have the self-awareness and the humbleness to be able to do it right.

Alec Crawford (04:05):

Yeah. And also, look, I'm a book learner. I'll pick up a book and learn about a new field or whatever. And that's very helpful because risk is not just put your finger when the wind and hope things go well. It's math and quantitative analytics. And if you don't understand those things and the models and the theory, you don't understand risk, right?

George Jagodzinski (04:29):

Yeah. Yeah. And a little bit of a curveball. We didn't talk about this before, but I'm curious then going from that world to entrepreneur, it's a very different personal risk. How does that translate? How does that shift from your old world to where you are now?

Alec Crawford (04:43):

Yeah, great question. I still think a lot about risk management in terms of operating a company, but I think one of the things that I like about risk is you get to do lots of different things, especially if you're a CRO, right? You're looking at regulatory risk, you're looking at investment risk, you're looking at operational risk, you're learning about a lot of different things. You're coaching a lot of people and they're just a lot of moving parts. It's complicated. You have to be good at a lot of different things from investing the communication.

(05:11):

It's the same when you're running a company. I came in here knowing nothing about marketing. I know more about marketing now than I ever want to know in my life because you just have to learn it. You have to learn everything. Coming in, I knew I knew tech. I knew I knew finance. I knew I know how to manage and hire people. I've run huge budgets at different companies. So none of that was really an issue. It was I'd literally never done marketing. And so again, what do you do? You fill that gap with someone who knows what they're doing.

George Jagodzinski (05:42):

Love that. Yeah. I think it was at Stanford Design School. They did this exercise where they'd have people go to different spots of the room based off of their risk levels, but it changed per context. It wasn't just a general risk profile. It was, what are your risk aversions when it comes to finances versus relationships versus maybe dangerous activities? And it seems like it's just different personality profile depending on the context.

Alec Crawford (06:09):

Yeah, no, that's totally true. Yeah.

George Jagodzinski (06:10):

And it probably applies to businesses, right? What have you seen just going into all these different organizations? How much of the company's personality comes through into their risk management approach?

Alec Crawford (06:20):

Wow, that's such a great question because I think it's similar to people's personalities. They may be risk-takers and dating, but risk averse in finance or whatever. And we see the same thing in companies, right? So they could be very risk averse in terms of hiring someone from outside their industry, but big risk-takers in terms of, "Yeah, we're just going to let everybody use whatever AI they want at the company. We don't really care. And until the regulators slap a fine on us, we're just going to run as fast as we can." Very unusual obviously, but I've seen it. So true story, a guy I used to work with was running a company like that. We're going to use AI for everything. I know what the rules are. Let's pretend there aren't any rules. And then they got examined by the SEC.

George Jagodzinski (07:09):

Yikes.

Alec Crawford (07:10):

They had to answer 150 questions about, "What are you doing with AI?" SEC exam. The SEC sat down with them for four straight hours and the outcome of that was what is called the deficiency letter. This is obviously a regulated financial entity, and I won't say anything else other than that. And they were given three things they had to fix, three big things they had to fix, or face pretty significant fines, meaning seven figure fines.

(07:39):

So they got lucky in that the SEC was like, "Look, this AI stuff is new enough. We're not just going to fine you right now, but you got to fix this stuff. We're going to fine you." Now, by the way, this was more than a year ago. They're not being that leaning anymore. If you have an AI policy and almost every financial institution does, the first thing a regulator will do is say, "Show me your AI policy." The second thing they do is say, "Show us how you're accomplishing your AI policy." And if you're not doing what you say you're doing, you're going to have a problem.

George Jagodzinski (08:12):

Yeah. Oh, man. I mean, I'm curious though, in that case, did the ends justify the means there? Did they get enough of the results in that kind of chaotic break the rules period that it made sense, or it didn't make sense?

Alec Crawford (08:26):

I think so. I mean, AI was new. They're trying to figure out what they could do with it. It was also obviously right around the time of the administration change, right? So it was kind of like, "Well, maybe this is a free for all for a little while." Times are different now. We're seeing companies go from nudge, nudge, wink, wink. We know people are using ChatGPT on their phones to, we got to lock this down, because we're starting to see fines and deficiency letters and real problems with AI, with places that are kind of not focused on safety and security. And I know because I get the phone calls. They're like, "Hey, we got hacked or the regulator came in and so we got to change these three things. Can you help?" And most of the time the answer is yes.

George Jagodzinski (09:14):

Yeah, because there's always that reality of balancing the tension of the business and trying to innovate with what's really needed. And man, we've been inside so many organizations, I don't want to name any names, but there's been plenty of CEOs where we're talking about cybersecurity, data breaches. This is before AI. And sometimes they would answer, "Hey, everyone has data breaches. You pay a little fine, you move on, you get ..." And that mentality, I don't know how much of that you're seeing when it comes to AI, but it's the fines and the consequences are just getting so much worse that I think you can't sign stuff.

Alec Crawford (09:49):

Yeah, you can't play that game anymore. At this point, it's 10x to 100x cheaper to prepare rather than get fines. Europe is a great example where they got smart. There are fines for breaking the AI roles and breaking consumer data roles, GDPRs is the acronym everybody uses. They're not dollar fines. They're not euro fines. They're percentage of revenue fines. So that's how Amazon a couple years ago got fined around a billion dollars for GDPR violations, which they, by the way, they never admitted. They're like, "We're not sure this is right, but we're just going to pay this fine to make it go away." But we're not talking like, "Oh, we got fined 10 grand." No, this is real money.

George Jagodzinski (10:34):

Yeah. I don't think you could pull a whoops-a-daisy card on that one. That's a real consequence. So now I'd love to hear a little bit about where did this spark for the intersection of risk and AI come for you?

Alec Crawford (10:46):

Oh, yeah. Yeah, that's fun. So I was a computer scientist. I went to Harvard undergrad. I decided to write an undergraduate thesis. My thesis advisor guy named Bill Woods, who was one of the early natural languaging processing guys. He embedded something called Lunar, which was one of the first NLP systems you could ask anything about the moon rocks. Know the answer. How many rocks are there? How many are igneous? How heavy are they? Blah-blah-blah. So pretty cool.

(11:13):

And I was basically teaching computers to play poker and bet and bluff and things like that using machine learning. So all this kind of stuff that today is like, "Oh yeah, everybody does that." Not then, not in 1987. So that's where I started with AI. I've always been involved somehow with computer science. AI is super cool. I worked on that a little bit in some of my previous roles also before I started AIR.

(11:41):

Basically, I had retired from Lord Abbett. I'm still limited partner there to be clear. So I still have a little part of the company. And Apple hit me on the head like, "Oh my God, look at these giant companies onboarding AI with no risk management. They have to have it." And that was really me writing the proof of concept, just say, "Hey, can I actually get this to work?" I got it to work. I called up the other founder, Frank Fitzgerald, and showed it to him and he goes, "I'm in."

George Jagodzinski (12:10):

Nice.

Alec Crawford (12:11):

So that's the start of the company. Then obviously Joe McMahon, other people have joined us since then. And we're probably onboarding a client every week or two at this point. So super, super exciting. But that's the original thing. And truly as a risk and compliance person, watching companies not doing that was just so scary. Because again, it's bad for the company.

(12:36):

As you say, they might get a fine, but terrible for their customers. We're focused on keeping eyes safe, keeping it secure, but also making it useful. That's the most important thing. We've developed at this point over 100 AI agents, for example, in the financial space. Meaning if you're a bank or a credit union or an asset manager or whatever, that's super cool stuff, right?

(12:59):

Where one of the biggest failures with AI is what I call a failure of imagination. So if I just give someone ChatGPT at their work and go, "Hey, you can use this for work." They go, "Well, what do I do?" If I say, "Hey, I'm giving you access to something that will answer any question you have about the company." And you're in the call center and a customer calls up and says, "Hey, what different home equity loan offerings do you have?" You just go, "Oh, here they are. You don't have to search the internet. You have to turn around and ask someone. You don't have to look through a manual. You can just get the answer." So it's immediately useful as opposed to, "I'm not sure what to do."

George Jagodzinski (13:36):

Yeah. I mean, the hardest part with these things is always what are the questions that you want to ask it, right? It's not the tools. And I'm curious what you're seeing now from an adoption of AI risk because when the buzz first started hitting, you saw boards, or at least I saw boards and executive teams, they'd come up with these silly goals like, "We want to do three things with AI this year." And then they learned how silly that was. And then they started aligning on, "All right, here's what we want to accomplish as a company. How will AI enable us to do that?" But I wasn't seeing a top five goals of organizations being, "Hey, this year we really need to implement AI risk compliance and governance." It's very much an eat your Brussels sprouts kind of moment, right? And so how are you seeing companies adapt to that?

Alec Crawford (14:18):

Yeah. I mean, I think we talk about AI, GRCC, governance, risk, compliance, and cybersecurity. If I go shake a CEO's hand and go, "Hey, do you want some AI GRCC?" They go, "What are you talking about?" So we actually turn it around and go, "What are your priorities this year?" "Well, I'm trying to grow a membership and do this and do that." And I'm like, "And where are you in your AI journey?" "Oh, we haven't really started." I'm like, "Oh, well, we can show you how to use AI to accomplish those goals." They go, "Great." I go, "And by the way, we're doing that safely and securely." "Oh, that's really good. I like that too."

(14:52):

So you got to lead with we got some really cool stuff. We can help you, everyone do their job better. By the way, you really should do it safely and securely. And by the way, if you got a regulator, you got to do what they say too, right? And I think that's really our secret sauce. We've basically automated all that. So for example, many large banks have signed up to use the NIST AI risk management framework or their regulators have said, "We got to use something." And have chosen to use that, but now they've agreed to have to follow it. That's totally automated in our system. Otherwise, you literally have a guy whose only job is to make sure you're following it or you could do a software and somebody looks at it once a year and goes, "Yeah, we're doing all these things. Check, done." Huge difference.

George Jagodzinski (15:38):

Well, that's a perfect example of people think when they think governance, their first gut reaction is, "This is going to slow me down. It's going to be a bunch of bureaucracy. It's going to be a pain in the butt." So I'd love to maybe hear more examples of where governance is actually an accelerant, not so much a drag.

Alec Crawford (15:56):

Yeah, great question. So one of the things we think really hard about is data governance. Let's say you just gave everybody AI at your company that access to your Salesforce, like everything in your Salesforce, which is pretty easy to do or everything in your SharePoint, which is also pretty easy to do. All of a sudden you discover, and this actually happened at a company that came to us to chat about it, literally the first day someone says, "How much money does my boss make?"

(16:24):

Finds out the correct numbers, not only his boss, but the entire management team, "Hey, here's how much money they make." Within seven hours, the legal department had shut down that AI effort. And that is a failure of governance, meaning data governance, right? Instead of saying, "Well, when Alec logs in, Alec can only see the data he's supposed to in SharePoint, they just open it up wide." So what we do is we maintain what's called permission awareness.

(16:52):

So it doesn't matter if I'm using Salesforce through AI, through my own AI or looking at SharePoint or checking my emails or whatever, I can't see the CEO's emails. I'm not going to get an answer to the question that says, "How much money does my boss make?" So that actually speeds things up because otherwise, AI is terrible on its own with cybersecurity and governance. You need to have a layer around it to control that, which is what we do.

George Jagodzinski (17:20):

That's great. I mean, some people might not understand how all these things fit together. Salesforce will probably convince someone that the Einstein layer is going to take care of all this and they don't need any other tool in there. And Microsoft will say something similar about what they have. And so I'm curious what common language or frameworks you have around how does everything in an enterprise ecosystem fit together and what are the various parts?

Alec Crawford (17:44):

Yeah, there's two super important things there. The first thing is a lot of people are like, "Oh, my data's got to be perfect before I get AI." Or, "I've got to ever have everything in a data lake or a data warehouse or whatever." Wrong. You don't because actually AI can actually plug into all these things and collate them.

(18:01):

So if you have a client, let's say you're a financial advisor, right? I got a client and I got data in Salesforce. I got data in eMoney. I got data in Tamarac. I got my own notes. I got transcripts from the last meeting. Like today in our system, you can just say, "Tell me about this client. Give me a one-page summary." Bam, you get the one-page summary. And it's pulling data from all those places.

(18:24):

Now, look, if some of the data's bad, like it's got the wrong data at the last meeting, whatever, that's going to be reflected, but it doesn't need to be perfect. And also remember, a human is looking at this and will know like, "Oh, that's not the data of the last meeting. I guess I forgot to put the notes in or the dates wrong or whatever." So it's going to do the best it can, but the data doesn't need to be perfect for AI to do a really, really good job.

(18:45):

The other really important, and this is more philosophy, right? So think about this. I can choose to give my data to tons of different providers, Salesforce, eMoney, whatever. They're eventually all going to have AI because everybody has to have AI, otherwise they're out of business. And they're going to take my data, which they now own, and they're going to do stuff with it, and they're going to send me back analysis using AI, which may or may not be useful for me, or may may not be correct.

(19:12):

But the problem is they own my data and now effectively they're going to try to sell it back to me. You should actually keep all your own data, use your own AI to do the analysis, which is the analysis that you want. That's going to be the differentiator between winners in AI and losers in AI over the next few years. Do you have your own proprietary data that other people do not have? Because if other people have your data, then it's not that valuable anymore.

George Jagodzinski (19:37):

Yeah. And especially once you pile on top of that, the insight and the wisdom that you can get out of that data, right? You want to own that yourself, 100%.

Alec Crawford (19:44):

Absolutely. And then look, every business is different. Einstein's may give you the same answer whether you're selling yoga mats or cars or financial services. Every business is different. You want your own take on what you want to see with that data.

George Jagodzinski (20:00):

Well, and then now you're reminding me that would also then unlock more ensemble techniques, right? Where you can kind of compare and contrast the various insights that you're getting from the different systems and that can get really exciting, I would imagine.

Alec Crawford (20:12):

Yeah. It's interesting. I think there are a lot of people that come to us and say, "Look, I need access to OpenAI and Cloud and Gemini." We're like, "Great, no problem." Because we do that on our system. It literally takes about, I don't know, two minutes to set up each platform. And for each agent that is built, literally in a dropdown, you're like, "Well, do I use Gemini, or OpenAI, or Llama, or whatever?" So you can literally do A/B testing of agents next to each other with different models and see what they do, which is super cool.

(20:42):

The catch is that they are starting to converge. You will see very similar answers from the latest models. There's still strengths and weaknesses, right? OpenAI, great at connecting to lots of other data sources like Claude, better with spreadsheets. Nano Banana, better at cool graphics, right? But that's not going to last forever because the differences are material today, but may not be material a year from now.

(21:12):

Now that being said, I do think that the thinking or deep research part, the companies that spend more money on that are just going to get better and better. So that you may see some separation because I just think that research and the level of development cost there is just tremendous. It's hard to see how GenAI model number five is going to keep up with that. So that's one kind of different area. Everything else, I expect at least two or three systems to be basically in parallel.

George Jagodzinski (21:43):

Do you think that, that timeline of normalization is about a year? Because I'll tell you personally, it is exhausting right now. It almost makes me yearn for the days of just saying, "Hey, I'm an SAP shop or I'm an Oracle shop." Because I'm just finding we have to pop around every week. Every week in all the different platforms and different tools to just see which one's winning.

Alec Crawford (22:05):

It's wild. Well, what I did is I wrote my blog article on Substack. It's called AI Risk Reward. And my first article of the year was, "Hey, five surprises for an AI for 2026 partly." Because I was so sick of seeing predictions. So prediction is something that people think is going to happen. And basically everybody agrees more AI adoption, better AI, blah-blah-blah whatever. As a surprise, everybody's like, "Oh, there's no way OpenAI is going to go public this year. That's going to be next year."

(22:33):

So as an example, one of my surprises is OpenAI actually does announce their IPO this year, right? Partly because they're going to figure out they need more money to keep up with this arms' race. I do think you're going to see the jockeying back and forth for a while and a lot of it depends on money in the markets. If the markets continue to reward companies that are spending zillions of dollars on building new AIs, they will keep doing it. As soon as the stocks tank when they do that, then that'll slow down or stop. There'll obviously be some momentum, but that would be my prediction about that.

George Jagodzinski (23:14):

While we still have our ... You have your crystal ball out. We were talking about GDPR earlier. What do you see things resolving from a compliance and regulatory perspective? Because a lot of people were actually caught flatfooted with GDPR and it was really complicated for them. And I saw that as just being a lack of just general data governance, just have clean data and clean governance and it wouldn't be that hard. What should people look forward to from a regulatory perspective and what can they be doing today to best be set up for that?

Alec Crawford (23:43):

I start by agreeing with you. Data governance, super important. If you haven't built your own data warehouse, whatever, you should start. I'm not saying you need that to be perfect, but you should at least try to make it decent. As long as you're doing the right thing, generally you're going to be okay. Because I think about things like if you're a financial institution, you can't do things that are fraudulent. You have to disclose if you're using AI, otherwise kind of bad things could happen, right? You've got to protect and encrypt consumer data via GLBA, all good, right?

(24:26):

But I think as in cybersecurity, the weak link is actually the human. So part of it's about training and making sure that you don't have a situation where people are doing dumb stuff like, "Hey, I uploaded a bunch of our customer data to Perplexity. Oh, does Perplexity's license agreements say they own all the data I send them and all the data I send back?" Oops, GLBA violation. So those kinds of things too.

(24:54):

I mean, I think turning back to the kind of surprises, I mean, I do think we're going to have a front page AI cyber attack. I mean, literally large company, everybody knows their name and they got a serious, serious problem. It's just a matter of time. And whether that's AI being used to hack or someone hacks their AI and then exfiltrates other data, I'm not sure exactly, but you just got to be ready.

(25:19):

If you don't have cybersecurity specifically for GenAI, you're going to have a problem this year. And that's something that we do at AIR. Super, super important because it's not just the prompt line. It could be ingesting data from a poison database. It could be a resume pulled into HR with invisible texts that is telling AI what to do. Crazy stuff, right? So that's important.

(25:44):

I think the other thing that may happen this year that's really going to throw people for a loop from a regulatory standpoint, like I have no idea what's going to happen, is someone might claim artificial general intelligence and maybe open AI. Okay. Well, okay, what do we do about that? Does that person, does that entity need oversight? How does that work? If it's put in a role that would normally need a license, like a stockbroker, would it have to get a license? There's all kinds of questions about that beyond the like, oh my God moment. Right?

George Jagodzinski (26:23):

Yeah. There'll be HR questions about the entity.

Alec Crawford (26:25):

Yeah. There will be after people get over the whole like, "Oh my God, I can't believe this just happened." And then back to the whole global thing. Obviously, in the US, we don't use a whole lot of Chinese telecom equipment. Why? Because we're concerned they might be spying on us. One of my surprises is a Chinese company announces a competitive AI chip to NVIDIA. Okay, what happens there? Companies buy them? Does the US ban them because they're worried that we could be getting spied on? I don't know, but it's only a matter of time before they do that. And I think it's sooner rather than later.

George Jagodzinski (27:04):

I feel like you just gave a lot of executives listening like a sleepless night tonight.

Alec Crawford (27:08):

Yeah. Yeah.

George Jagodzinski (27:10):

So for those executives that are listening, if there's one thing that they should leave this conversation thinking about, what do you think it should be?

Alec Crawford (27:17):

Look, it's super important to get going with AI. If you wait until the end of this year to be using AI and rolling AI out across your entire firm, you're not one year behind your competitors, you're three years behind, right? Because the pace of AI, as you were saying earlier, is super, superfast. Nevertheless, you've got to have your kind of risk and compliance people involved, but you also have to tell them like, "You cannot roadblock this. You cannot make it take an extra six months. You're here to enable the business and help us do this as safely as possible, but not to stop it."

(27:52):

I think once they get that message, you'll do things as well as you can given those guidelines. And the way I think about it is, look, I got a house. If someone really wants to break in here and steal my laptop, yeah, they could do it. But I also have cameras, I got an alarm system, I got dogs, I got people in the house pretty much 24/7, right? That just makes it a lot less likely than the empty house across the street with no alarm where the owners are only there on the weekend.

George Jagodzinski (28:25):

Word to the wise, no one tried to steal anything from Alec's house. That's the one thing that we're going to leave people with. It does remind me though of I lived in Newark during the '90s and it was like, if your car had a club on it, then you felt safe because they were going to go after the car that didn't have the club on it, right?

Alec Crawford (28:40):

Yeah, pretty much. And that's kind of how it is right now. I mean, hackers are trying to get into every company on the planet and steal whatever they can steal or ransomware wherever they can. So you just want to have as much protection as you can and AI is a new attack surface for the hackers. Classic example, like why do we encrypt customer databases? So if a hacker gets in and steals the file, they can't do anything with it.

(29:07):

Well, what if a hacker gets into your AI? Jailbreaks the AI and says, "Please download the entire customer database into a CSV file." Well, it doesn't really matter if the file is encrypted because the AI just decrypted it for them. Oh, well, right? So those are the things you have to think about.

George Jagodzinski (29:25):

The way I kind of think about the risk and compliance and governance is these companies, they kind of are forced to run into a dark obstacle course. All the lights are off. And this could be your light to shine there so you can see what's there and you can jump over the obstacles and make your way through faster to the other side.

Alec Crawford (29:41):

The other thing, and this is not really compliance advice, this is more real world advice. There are lots of people out there saying, "Oh, yeah, we do AI consulting. We do AI this or that." We had one company that is now a client go to one of the big consulting firms and say, "Hey, what should we do with AI?" They spent Spend a lot of money on it and what do they get back, a piece of paper.

George Jagodzinski (30:04):

Made by AI.

Alec Crawford (30:06):

Yeah, probably. So don't do that. Find someone who actually knows the thing about AI, who's working with clients that look like you, that can help you take your strategy for your company align that with AI, and help you accomplish your goals. If you do that, one thing, you will be in great shape this year.

George Jagodzinski (30:26):

That's fantastic advice. Alec, I'd love this. I could talk all day. I like to finish this on a note, which is throughout your career, your life, what's the best advice you've ever received?

Alec Crawford (30:36):

Wow. I think the best advice I got was from a current Lord Abbett partner. When I said, "I'm thinking about for fun starting an AI software company," he's like, "Yeah, you should do that. " So it was basically agreeing with me while I was trying to figure out what am I going to do because the golf season's kind of short in Connecticut.

George Jagodzinski (31:03):

I love that. And entrepreneurs are usually faced with all the naysayers. That's fantastic advice. Just do it.

Alec Crawford (31:08):

Yeah.

George Jagodzinski (31:09):

Alec, thanks so much for being here.

Alec Crawford (31:11):

Yeah, great. Thanks for having me on the show, George. Great to see you.

George Jagodzinski (31:15):

Thanks for listening to Evolving Industry. For more, subscribe and follow us on your favorite podcast platform, and pretty please drop us a review. We'd really appreciate it. If you're watching or listening on YouTube, hit that subscribe button and smash the bell button for notifications.

(31:29):

If you know someone who's pushing the limits to evolve their business, reach out to the show at evolvingindustry@intevity.com. Reach out to me, George Jagodzinski on LinkedIn. I love speaking with people getting the hard work done. The business environment's always changing and you're either keeping up or going extinct. We'll catch you next time, and until then, keep evolving.