Imagine you dedicated your career to artificial intelligence before anyone believed in it, before plentiful computing, before ChatGPT, before the breakthroughs that changed everything. Today's guest did just that, surviving the second AI winter, and now helping businesses mitigate AI-related risks.

My name is Alexandre Nevski, and this is Innovation Tales. Navigating change, one story at a time. We share insights from leaders tackling the challenges of today's digital world. Welcome to Innovation Tales, the podcast exploring the human side of digital transformation.

Alec Crawford is the founder of Artificial Intelligence Risk, Inc., a company leading the charge in AI security, safety and compliance, earning top rankings in gen AI cybersecurity and regulatory compliance. With a Harvard background in AI and decades of experience managing investment risks at firms like Goldman Sachs and Morgan Stanley, he understands both the potential and the pitfalls of AI adoption.

In this episode, Alec shares how enterprises can move beyond AI hype to real-world implementations. You'll learn why not every gen AI use case is worth pursuing, how to build AI governance that actually works, and what business leaders must do now to stay ahead of emerging risks. Without further ado, here's my conversation with Alec Crawford.

Alec, welcome to the show.

Alex, great to be here!

Oh, it's great to have you. You've been involved with artificial intelligence long before it was mainstream. So let's go back to where it started. I believe it was 1987 and you were writing your master's thesis at Harvard. What led you to take that initial step?

Yeah, I was at Harvard. I was actually still in undergrad when I started, and I was building neural networks from scratch. And back then, if you had walked around the street and said, hey, what's a neural network? People would be like, oh, that's your brain, right? People just didn't even know what it was. Even people in computer science, a lot of them didn't know what it was. And it was pretty cool to be there at the beginning.
And the 1980s was really a period of excitement because it was starting to work, right? You could do some natural language processing, you could teach computers to do things. I was teaching them to play poker as an example, and to bet and bluff and do things that people obviously associate with humans more than computers. And my thesis advisor was a guy named Bill Woods, who had invented something called LUNAR, which was a natural language processing system that let you ask questions and get answers about the rocks that had been brought back from the moon. So you could say things like, how many rocks were there, how many were igneous, what was the largest rock? It would give you answers. So back then that was revolutionary. It was amazing. And while I was at Harvard, you know, we were still on the upslope of the hype cycle for AI back in the eighties, so people were super excited about it and just dreaming of what else you could do with it all the time.

1987 is sometimes mentioned as the beginning of the second AI winter. So from a hype cycle perspective, it was the peak. Is that what you meant?

Yeah, I tend to agree with that. I think people had developed some expert systems, which were supposed to mimic what experts do, and I think people were starting to see the limitations. And by the early nineties, I'm not gonna say AI was dead, but at least in corporate America, it was done, right? Obviously people were still doing research, and that's what eventually led to the invention of the transformer and large language models as we know them today. But yeah, there was a very dark period of time there for AI. I actually did a little bit of AI right when I came out of school, at the bank I was working at. And it was more me saying, oh, they've given me a job to do, I think AI could actually do this pretty well. And I was right. But it never really took off, and it was all homegrown stuff. You couldn't go out and get a model from Google or anyone like that. It was like, if you're doing it, it's 'cause you built it. That's it. There's nowhere else to go.

And so what was that use case for the bank?
Yeah, so there was an enormous amount of data that came in. I happened to be working in the mortgage department, or the mortgage research department, and we would ingest massive amounts of information every month about mortgages, mortgage rates, and did people pay off their mortgages? Were they delinquent? Things like that. So I used the model to do some inferences around that, which was interesting and is kind of a good use case for AI, even back then: looking at lots of data, trying to train a model, and seeing what you get out of it.

So it was a kind of predictive analytics, early days.

Early days, predictive analytics, and yeah, we had modest success with that. What I'd say is that standard parametric and non-parametric models ended up being more successful longer term than a neural network in that case. So some interesting things there, but in the end it just wasn't additive. And that's actually an important lesson for today. AI is so hot that people run around and go, oh, we gotta use AI for X, and they use AI for X, but you may actually get a better outcome using a more traditional model or approach in some cases. And it's also highly likely that the traditional model or approach ends up being faster and cheaper to run. That instinct to immediately go to gen AI sometimes needs to be vetted to some degree.

Yeah, well, 1987, or a couple of years later when you were working at the bank, that's way before deep learning and neural networks had the computing power to actually be useful in the way that they are now. How did that AI winter impact you? Did you regret studying artificial intelligence as an undergrad?

Oh, not at all. No, it was awesome and I loved it, and I stayed in touch with it all the time throughout my career, whether reading research or doing my own projects or occasionally adding things to work: hey, we should try this and see if it works. And obviously it's back, but what I would say is there really wasn't much of a budget for it. You'd have to sneak it in on the side, as opposed to today, where you've got CEOs going to heads of technology going, do some AI thing, I don't know,
I don't care how much it costs, or, here's a million dollars, figure it out. That was not the story back then, for sure.

Yeah, certainly things have changed dramatically. What's the inflection point that, for you, is the start of what we are experiencing today in the business world?

Yeah, I think I take it back to the public launch of ChatGPT two years ago in November, which really got people's attention. What's interesting is a lot of these techniques and models had been out there for a while. Until OpenAI put it out on the internet and said, hey, anybody can do this, a lot of people just didn't even know about it. And even in the first couple of months, you had a number of people using it, but also a number of people not using it and certainly not understanding it. But obviously the concept went viral pretty quickly. And the strides made even in the different model versions, as we went from Llama 2 to 3 and OpenAI up to GPT-4o, were tremendous. And that's something that I look at all the time: how capable are the models, how robust are they, and can you trick them into doing things that they're not supposed to? Things like that. And their capabilities across those different areas, both being able to answer questions and solve puzzles, but also doing it in a robust manner, an ethical manner, have improved. That being said, it's not the case for all models. It's pretty famous that DeepSeek fails almost every one of the 350 different safety tests, right? Hey, tell me how to build a nuclear bomb. Okay. Not great. But looking back at the eighties, we weren't even dreaming of that, right? We didn't realize that someday someone would have the capacity to basically take everything from the internet and train an AI model. Like, what? That wasn't even in the realm of possibility in our minds at that time.

So when ChatGPT was announced and made available, what were you up to at that point?

I had just retired from finance. I was at a company called Lord Abbett, which is a mid-size asset manager in the US. They manage mutual funds and institutional money, about $250 billion.
And I was running risk and doing a bunch of other things around AI. And I played a hundred rounds of golf. And actually, pretty early that year, I was watching these really big companies onboard AI with just zero guardrails. And we're talking big companies. This is crazy, right? It's not only bad for those companies if something goes wrong, but also potentially for their customers, releasing customer data, and their employees, putting them in bad or awkward situations or worse. So right off the bat I started writing the first piece of code that was gonna do AI governance, risk management, compliance, and cybersecurity as a platform, as opposed to a single point of use or just one thing. And honestly, I didn't know if it was even doable, right? To do all four things. I basically taught myself Python, 'cause I'm more of a C#/C++ programmer, but it was easier to template in Python. I got it running and then showed it to my co-founder, Frank Fitzgerald. He's like, oh my God, this is amazing. And I don't think I've written more than three lines of code since then, 'cause he took it over. So, it was fun to go learn Python, and, guess what, he's writing it all in C# on .NET, which is obviously what you need to do to professionalize things.

All right, well, since you've broached the topic, let's talk about the approach that you use to help clients with mitigating, I guess, AI risk. Is that how you would describe it?

Yeah, absolutely.

Okay.

It's almost impossible to get rid of all risk. No matter what you're doing, you could have a fire extinguisher everywhere in your house and not be there when it burns down, so you can't use your fire extinguisher, right? It's impossible to eliminate risk. An asteroid could strike New York City in a year, and there's nothing we could do about it. Who knows? But we do mitigate risk. And I love thinking about risk the way NASA does, right? So NASA thinks about risk in a grid, where it's low, medium, high impact and low, medium, high probability, and the high probability, high impact risks, well, you better be looking at those, right? Maybe you don't have time to look at all one hundred risks that are in your nine boxes, but you better at least be addressing the high probability, high impact risks.
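To make that nine-box grid concrete, here is a minimal sketch of the kind of triage Alec is describing, with a made-up risk register and an illustrative ordering; none of this is the AI Risk platform's actual code:

```python
LEVELS = {"low": 0, "medium": 1, "high": 2}

# Hypothetical risk register: (description, probability, impact)
risks = [
    ("Employee pastes client data into a public chatbot", "high", "high"),
    ("Knowledge bot gives an outdated policy answer", "medium", "medium"),
    ("Asteroid hits the data center", "low", "high"),
    ("Prompt injection against a customer-facing agent", "high", "high"),
]

def grid_position(probability: str, impact: str) -> int:
    """Place a risk in the nine-box grid; higher scores get attention first."""
    return LEVELS[probability] * 3 + LEVELS[impact]

# Triage: high probability, high impact items float to the top of the list.
for name, prob, impact in sorted(risks, key=lambda r: grid_position(r[1], r[2]), reverse=True):
    print(f"{prob:>6} probability / {impact:>6} impact -> {name}")
```

However you weight the cells, the point is the same: the high probability, high impact corner gets addressed first.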
You know, part of our focus, and the way I think about it, is four pillars of risk management for AI: governance, risk management, compliance, and cybersecurity.

By governance, what I mean is the AI's got access to very specific data and very specific use cases, and then you can assign those AI agents to specific groups or people. The flip side of that is Copilot, right? Where it gives you access to everything. Which is honestly a malicious actor's dream, right? If they capture your credentials and they get in there pretending to be you, they can ask Copilot questions like, hey, where's all the customer data? Or, show me my emails from the CEO, or, do I have access to any login credentials? And Copilot will happily answer them within seconds. And it can really compress the time on day zero that a hacker needs to do something bad, like deploy ransomware. So we do the opposite of that: very specific things. A great example is a call center, where we constructed a knowledge bot for the call center employees. They can just type in whatever their question is and get an answer. What kinds of loans do we offer, or what's our policy about late fees, or whatever. And they can get the answer in seconds, even if they're a new employee. So that's a good example of a very specific agent.

On the risk management side, a lot of things in the world are obviously encrypted for very good reason, like our customer databases and things like that. But in AI, not necessarily, right? Graphs, by default, are typically not encrypted. So we encrypt everything in motion and at rest, and also encrypt things on the fly. If someone happens to type a social security number into a prompt, we can set it up so either it gets blocked or it gets encrypted before it goes to the AI or leaves your firm. And think about the rules in Europe around GDPR, where, wow, you better be encrypting customer data or you're gonna get fined. I think Amazon had a fine that was almost a billion euros a couple of years ago for GDPR violations.
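The social security number example maps to a fairly simple pattern. Here is a minimal sketch of that block-or-encrypt decision, assuming a regex for US SSNs and a locally generated key; it is illustrative only, not the platform's actual code:

```python
import re
from cryptography.fernet import Fernet  # pip install cryptography

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
fernet = Fernet(Fernet.generate_key())  # in practice the key would live in a KMS, not in code

def screen_prompt(prompt: str, policy: str = "encrypt") -> str:
    """Block or encrypt SSNs before the prompt leaves the firm."""
    if not SSN_PATTERN.search(prompt):
        return prompt
    if policy == "block":
        raise ValueError("Prompt rejected: it appears to contain a social security number.")
    # Replace each SSN with an encrypted token the model never sees in the clear.
    return SSN_PATTERN.sub(
        lambda m: fernet.encrypt(m.group().encode()).decode(), prompt
    )

print(screen_prompt("Customer 123-45-6789 is asking about late fees."))
```

A real deployment would cover many more identifier types, keep the key in a key management service, and log every decision.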
There's also standard stuff that we don't even think about on the internet right now, like blocking not-safe-for-work topics or keeping people from playing games at work or whatever; you need to do that with AI as well.

And then on to regulatory compliance, right? There are lots of rules around AI. What are you allowed to do? What are you not allowed to do? Making sure AI is not doing stuff it's not supposed to. And in the US a lot of that revolves around just keeping records of what you did with AI, especially in finance. So we create an immutable database, keep track of every prompt, every response, every model change, and provide an e-discovery tool for regulators or compliance people to go in and see, hey, what are people doing, without violating privacy rules.

And then finally, cybersecurity, where we really advocate for private AI. What I mean by that is deploying the models on your own computers, on-prem or on your private cloud, so when they're running, you don't have to worry that your secret data is leaking out somewhere. And in addition, cybersecurity beyond that is looking for different kinds of attacks on AI. There I'm talking about, for example, the number one risk on the OWASP list for gen AI, which is prompt injection. And beyond that, there are things like Do Anything Now or DAN-style attacks, Skeleton Key attacks, and multi-shot attacks. All these things today sound like gibberish, but they will be on the front pages in the next year or two, right? As large companies get their AI hacked, ransomware, whatever, people will learn about these things. Don't learn too late, right? You wanna learn about what they are now and defend your company or yourself against them. And that's one of the things that we do.
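On the record-keeping pillar, a minimal sketch of an append-only log of prompts, responses, and model changes might look something like this; the schema and names are hypothetical, not the platform's actual design:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("ai_audit.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS audit_log (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        ts      TEXT NOT NULL,   -- UTC timestamp
        event   TEXT NOT NULL,   -- 'prompt', 'response', or 'model_change'
        actor   TEXT NOT NULL,   -- user or agent identity
        model   TEXT NOT NULL,
        payload TEXT NOT NULL    -- prompt/response text or change details
    )
""")

def record(event: str, actor: str, model: str, payload: str) -> None:
    """Append a row; the application never updates or deletes existing rows."""
    conn.execute(
        "INSERT INTO audit_log (ts, event, actor, model, payload) VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), event, actor, model, payload),
    )
    conn.commit()

record("prompt", "call-center-agent-17", "llama-3-8b", "What is our late fee policy?")
record("response", "knowledge-bot", "llama-3-8b", "Late fees are waived once per year ...")

# e-discovery style query for a compliance reviewer
for row in conn.execute("SELECT ts, actor, payload FROM audit_log WHERE event = 'prompt'"):
    print(row)
```

True immutability takes more than an append-only convention (think write-once storage or restricted permissions), but this is the general shape of the trail a regulator would ask to see.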
And you mentioned at the beginning that you're approaching this from a platform point of view. So I guess by the time you're engaged and talking to them, they've already done some things and there's already a bit of technical or security debt. Is that right?

That can totally happen, right? The most common thing we see is companies either completely blocking all AI, or allowing certain employees to use AI but making them promise not to put anything confidential out there, so no client data, things like that. Look, human error happens every day, right? There's a story of someone I know who works at a pretty big company and is on the AI committee. On her phone, she accidentally used the regular ChatGPT instead of their walled-garden version, put confidential information in there, and was suspended from the firm. This is the person on the AI committee, right? So obviously you have to take these rules seriously, but obviously the better way to do that is simply to block all the public versions, at least inside your firewall or on your own intranet, and then only present the safe versions, right? The corporate versions. The good news is that we can roll out our platform in a day, and it includes no-code agent building and what we call AI GRCC: governance, risk, compliance, cybersecurity.

How long does it take to get started, to deploy?

Yeah, it's super fast, right? We've got dozens of clients, and deployment on Azure takes about 14 minutes. We basically use an ARM template. Our software sets up the compliance database, then it sets up the models, all the popular large language models, base models, and a bunch of standard agents, depending on what industry the client is in. And then they're off to the races. One of the things I love doing is not just meeting with the person running technology or the CEO, but meeting the people who work at the company. We ask each client to come up with a focus group of people from different departments who are hopefully excited about AI. And then: what are your big problems? What do we think we can solve? Things that are easy to solve and high impact we try to do right away and get some quick wins. Things that are really hard, or that LLMs aren't capable of, we just tell them, hey, that doesn't seem possible.
Occasionally, very rarely, clients ask for things that are either illegal or unethical. And we just remind them, hey, that's not a great idea; even if you could do that, you shouldn't try doing that. And sometimes it's accidental. For example, in the US there are a lot of rules about bias in lending, so you have to be very careful, if an AI agent has anything to do with lending, that it remains unbiased.

And so how do you demonstrate that the implementation you've put in place, which replaces something else, is unbiased enough? How does that work?

That's a great question. We do lots of different things. We create agents for anything from call centers to loan officers to CEOs to marketing people. And 95% of the time there really aren't a whole lot of legal constraints on what you're supposed to do or allowed to do. Sometimes there are, and we do deal with high-risk AI in finance and in healthcare. So a lot of it depends on the specific use case and also the AI guidelines set up by the company themselves and their legal or compliance team. We can work with them to make sure that, as an example, the AI is not accessing data which could cause it to be biased, right? We can work with them to help them understand what data they should or should not be using when they're doing something specific, like making a loan. In the end, it's really down to the company, right? We're providing models, we're providing software. The user of the models, the software, and the compliance system still has to do it correctly, right? If they aren't doing it correctly, they're not gonna get the right outcome. So we automate 95% of that. It's never gonna be 100%, because you're always gonna have AI policies, right? Which you could implement through a system. You're gonna have AI procedures, things you wanna do and not do, which we can facilitate. But in my opinion, everything always has to come back to a human at some point, right? To say yes or no. Good idea, bad idea. Does this look right? A good example:
We have clients that use AI to fill out proposals, right? So a company you wanna work for says, wow, we wanna hire you, here are our hundred questions about cybersecurity. And now you've gotta answer those. Now guess what? You've probably answered those a hundred times already. AI can very easily go through the last hundred times you answered them and answer the new questionnaire. But someone's gotta read it, right? Because if the AI somehow gives the wrong answer, or switches a word in there to claim that you're doing something that you're not, or vice versa, that's a problem. So that's why, for anything like that, you're gonna want a human in the loop to make sure that everything's correct.

Having answered a number of proposals myself manually over the years, I also think it's the same problem with humans. When you have a sales team responding to a number of requests, it's very hard to ensure that every proposal goes through both the technical feasibility validation by somebody else and the review from the legal department, so that you're not promising something that you won't be able to deliver, et cetera. And so you have to be pragmatic sometimes. How is that different when you're using generative AI?

Yeah, I think it's interesting, because you can really do two things. One is you can use a mixture of experts, right? So you're not just answering the tech questions; you can answer the legal questions or portfolio manager questions or whatever, all from the same database, effectively, or suite of questions you've answered before. The other thing that's interesting is you can create review agents, which go through it and can flag things that look wrong or things you're not supposed to do. You've heard the phrase in finance before, I'm sure: past performance is no guarantee of future results, right? So you could go through something and say, let's make sure we're not guaranteeing results. So you could double-check things with AI as well.
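A minimal sketch of that kind of review agent, stripped down to a rule-based pass that flags guarantee-style language before a human signs off; the phrase list and names are made up for illustration:

```python
import re

# Phrases a compliance reviewer would not let into a finance proposal (illustrative list).
FLAGGED = [
    r"\bguarantee[ds]?\b",
    r"\brisk[- ]free\b",
    r"\bwill outperform\b",
]

def review(answer: str) -> list[str]:
    """Return the flagged phrases found in a drafted proposal answer."""
    hits = []
    for pattern in FLAGGED:
        for match in re.finditer(pattern, answer, flags=re.IGNORECASE):
            hits.append(match.group())
    return hits

draft = "Our strategy guarantees double-digit returns and is effectively risk-free."
issues = review(draft)
if issues:
    print("Needs human review, flagged:", issues)
```

In practice you would pair a pass like this with an LLM reviewer prompted with the firm's own compliance guidelines, and route anything flagged to the human in the loop Alec describes.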
That's a great point, because the first thing that comes to mind, just because it's such a burdensome task, is to automate the responses: writing the answers to the questions and then having humans validate them. But what you just mentioned sounds even more appealing. It doesn't matter who wrote it, a human, a generative AI, or the two working together; having agents that you've initialized to look for specific types of issues is a tool I only wish I'd had every time I was writing one of those responses, to give myself a bit of a safety net.

Yeah, absolutely. Very doable now. It's obviously industry-specific; each industry has its own things to watch out for. But I agree, it's gonna be a fabulous tool. And look, at a company I worked at before, there were literally eight people whose full-time job was to respond to these. One senior person and seven pretty junior people. We basically had to promise them, you're only gonna have to do this for two years and then you get to do a real job, because it was really not very fun. And the senior person's job was just making sure stuff was correct, because the junior people would make mistakes sometimes. Now they're starting to use AI there, and I think it's just gonna get better and better.

AI is certainly transforming business, but as Alec highlighted, success isn't just about adopting the latest tools. It's about balancing innovation with security, governance, and compliance. Not every gen AI use case is worth pursuing, and smart companies focus on strategy, not just hype.

If today's discussion got you thinking about artificial intelligence in your organization, you can check out a hands-on demo of the platform mentioned by Alec in the interview. The link is in the description. And we're not done yet: we'll continue the conversation with Alec next time and explore the features and deployment of this platform, as well as broader regulatory and ethical considerations. There were too many topics to cover with Alec in one episode, so stay tuned for more tales of innovation that inspire, challenge, and transform. Until next time, peace.

Thanks for tuning in to Innovation Tales.
Get inspired, connect with other practitioners and approach the digital revolution with confidence. Visit innovation-tales.com for more episodes. See you next time.