From security gaps to evolving regulations and tough ethical questions, artificial intelligence introduces as many risks as opportunities. Today's guest takes us deep into the strategies enterprises need to mitigate them, ensure compliance, and build responsible governance that scales. My name is Alexandre Nevski, and this is Innovation Tales.

Navigating change, one story at a time. We share insights from leaders tackling the challenges of today's digital world. Welcome to Innovation Tales, the podcast exploring the human side of digital transformation.

In our last episode, we explored why not every Gen AI use case is worth pursuing, how to build AI governance that works, and what business leaders must do to stay ahead of emerging risks. But now we go deeper. Alec Crawford, founder of Artificial Intelligence Risk, Inc., has spent decades at the intersection of AI, finance and risk management. His company helps enterprises adopt AI securely through strong governance, compliance, and cybersecurity. In this episode, we dive into the features and deployment of his platform, along with broader regulatory and ethical challenges. From securing data to ensuring compliance, Alec shares practical insights every business leader should hear. Without further ado, here's my conversation with Alec Crawford.

I wanna circle back to what the platform you deploy actually is. I want to ensure that, for our business-minded audience, it's clear what the difference is between it and other alternatives they might have on the market. So what I understood is that the first and foremost thing you argue is that you can only really be certain of the security and compliance and governance of technology that you're running on premise, or at least, if it's in a cloud, that it's a private cloud and you know exactly what's running. And then I heard initially that there were a number of capabilities that you had built into the platform. I think at some point you were mentioning logging every prompt and every model change, because essentially with these additional capabilities you can facilitate compliance, et cetera. Did I get this more or less right?

I think you nailed it, Alex. So think about it as "single pane of glass" access to AI across all your different AI tools, right?
And because of that, that can run through the compliance system, right? And then you've got the agent-building capabilities. We've got three levels. You've got the admins that can set security settings and build agents and assign agents to people. You have super users that are allowed to build agents for themselves, but not change the security settings. And then you've got users who, hey, here's your 10 agents, go wild. So that, on an enterprise platform that does the governance and risk management and compliance and cybersecurity, is what allows companies with high risk AI applications that include consumer data or healthcare data to do that safely.

And I agree, doing it inside your private cloud, where you own all the data and you know it's not leaking out there, is a number one goal and something that we really encourage our clients to do. There are certain models out there that you can't do that with. So right now, for example, Claude is only accessible as an API. So there you may have an agreement with Claude saying, hey, they're not gonna tell anybody what you're doing with AI, but we have no idea how long they're storing our data for or who's looking at it. And if you have confidentiality agreements with your clients, let's say, you may not legally be allowed to do that, right? To push it out to their cloud. If you're running something on-prem, you know it's inside your firewall, you're good. You can do whatever you want from a confidentiality standpoint.

Yeah, well, it makes sense that, just like platforms in the past, if we go way back, for document management or any number of enterprise-grade software that gets deployed as a platform, it's normal that an enterprise's requirements are more specific than a consumer's. And therefore the multi-tiered authorization model that you've just mentioned is not negotiable, it's a must-have. You must be able to design the roles that make sense for your organization. But it seems like it's quite an onerous effort to be fully in control. Do you think that most businesses will be able to get there at some point, or do you see, for the foreseeable future, that only the largest organizations can afford something like that?
No, our smallest clients are 50-100 people and they're perfectly capable of managing this. I do think that there aren't a lot of other solutions like ours right now. And really that's why we're focused on high risk AI. Look, if you have nothing confidential at your company and you wanna write a marketing piece and you don't care that the world can see it, just go to ChatGPT, right? Pay $20, right? The reason we're focused on high risk is you can't do that with consumer data. You can't do that with healthcare data. It's illegal, right? So that's why we're focused on that market, but also really focused on the security side, right? I think that there's a lot of software out there that's SaaS that just has a lot of security holes in it, right? Even Grammarly, for example, had a huge security hole that was revealed last year. And think of all the people using Grammarly who, all of a sudden, whoops, your data could be out there somewhere, right? That's primarily because it's SaaS.

With ours, your firewall has to be breached before anyone has access to your AI. Beyond that, everything's encrypted in motion and at rest. Beyond that, each individual only has access to certain agents and certain data. Beyond that, we're looking for unusual activity and flagging it back to the cybersecurity team. Is someone trying to hack the AI? Is there data exfiltration taking place? 'Cause one of the tenets of cybersecurity today is that it's not a matter of if you get hacked, but when, and when you get hacked, it's a matter of identifying that on day 0, not day 17. And if you're just using ChatGPT online and letting people do whatever they want, you'll never know that you've been hacked. What, ChatGPT is gonna call up your cybersecurity team and say, hey, we think you might've gotten hacked? That ain't happening. You gotta figure it out yourself. And that's one of the things that we do.

In my experience with platforms like this, if you are deploying them on premise, or at least some of the components are deployed on premise, you start to have another problem: you have to have enough internal skills to maintain those components.
And when I say maintain, I also mean apply security patches, et cetera. So how do you do that at scale, so that it's accessible to organizations who don't have full-time staff to take care of it?

Yeah, so at this point most of our smaller clients are on Azure, and we have a process where we can deploy our software through an ARM template. It's super easy. Click here. Most of those clients, if they don't have a specific technology person, will also have a managed service provider. And we make it super, super easy. The reason we even do it that way is that, obviously, we could push out these updates, but we want our clients to update when they feel like it, not necessarily when we feel like it. So maybe they wanna update on a Saturday because they know it won't disrupt anybody, for example, right? Maybe they like doing Wednesday nights, I'm not sure. We make it available at the end of the month and then they can update when they want to. We've had zero issues with that so far, even at our smaller clients.

That's great to hear. And now that we've gone into quite a lot of detail about the product, I think we can go back to a wider discussion of generative AI. I wanted to pick your brain, because you've already mentioned compliance and the various regulations that are in place. Do you feel like there is enough regulation in place already? Do you feel like there's much more coming? Where do you think this is going, for companies trying to assess what's going to happen?

Yeah, so I think we're pretty well set up in Europe at this point, especially for high risk AI, because what's happening there is companies need to go to their specific regulator and really get approval to use AI for something high risk. So that will involve lots of detailed work and end up with company policies and procedures that they'll need to build into their AI. I think there will be more stuff passed there in the EU, because so far the rules are mainly focused on consumer protection, and there may be more rules around what you're allowed to do and not allowed to do with AI, and things along those lines. We will see there. The US as usual is behind Europe in terms of regulation.
The one kind of big thing that was out there was the Biden executive order on AI, which Trump has struck down. So there just basically aren't a whole lot of rules right now. The rules that are out there are for kind of high risk AI, right? So if you are a healthcare company, you have to abide by HIPAA. That already exists. If you're a bank, there are a bunch of rules around protecting client data, anti-bias and things like that. And the really important thing I've heard regulators say is: just because it's AI doesn't mean you can break any rules, right? And that's really important. In other words, if you're not allowed to have bias in lending and your AI is biased, it's breaking the law, right? Even if that's an AI making a loan instead of a person making a loan. They don't care, right? So that's super, super important and very important to follow.

I think over time what we're gonna see, or we're already seeing, is the states making laws about AI and consumer privacy. So for example, Colorado, early in February, their AI act kicked in, which covers any use of high risk AI, and they have 29 pages of rules, literally 29 pages of rules about what you have to do and can't do and things like that. And it's not just for companies headquartered there, it's for companies with even one client there, and the penalties are severe. We're talking potentially millions of dollars, right? So better be following the rules. But what's interesting there also is they write 29 pages of rules and then they're like, oh, and by the way, if you're following the national version, the National Institute of Standards and Technology AI Risk Management Framework, otherwise known as the NIST AI RMF, you're good. You don't have to do any of this other stuff. We're gonna give you a pass. And that's really what we focus on, the NIST AI risk management framework, and we facilitate compliance with that. 'Cause look, in the end, companies are not gonna be able to comply with 50 different sets of state rules. That's just crazy. But if they all allow you to use the NIST AI risk management framework, you're good. 'Cause that's relatively straightforward.
And so, do you feel like this framework is already addressing all of the risks that you had mentioned at the beginning of the conversation?

Yeah, the issue with the framework is it's pretty broad. It's principles-based. So if you have good intent, you can follow that framework and do a really good job setting up your governance, risk, compliance, and cybersecurity for AI: your use cases, your policies, your procedures, assigning the right people in the right places, doing things ethically, and things like that. But if you're trying to be the opposite of that, shall we say, you can find ways around it. So that's one of the issues: it's broad enough that it allows for loopholes, right? Not loopholes that necessarily let you go against the law. Loopholes that might make things easier for you, or cut corners, or diffuse responsibility, things like that. So you do have to be very thoughtful in implementing the AI risk management framework. And look, if I were a bank or a high risk AI user in Colorado, I'd go through the 29 pages and just ask, what are their expectations? And if my risk management framework covers 99% of that, great. If it covers half of that, I'd think pretty hard about adding some other features and guidelines to my AI.

Well, I guess most people, most organizations in that situation, it's not that they don't have the desire, or that somehow they don't want to be in compliance with regulation. It's just a question of time, cost, effort. No?

Yeah, I agree with that. And it's also knowledge. I talked to a large bank the other day and I was talking about the Colorado AI Act, and they're like, the Colorado what? And I'm like, this already applies to you, right? You just need one customer in Colorado and this law applies to you. So I think a lot of it's knowledge, and things are just moving so fast. We literally had a client, this is hilarious, come to us and say, could you use AI to keep track of the different AI rules that are coming out? And we said, yeah, we could do that. So, using AI to keep track of AI rules. I'm like, all right.

That's, uh, that's a very...

Next level.
Inception.

Yes. Exactly.

And speaking of things moving very quickly, I guess you meant the regulatory side, but the tech side is also moving very quickly, or at least that's my impression. What do you think? Where are we?

Super fast. Look, I talk to a lot of people, and most people I talk to think the technology is moving super quickly. I think that the deep learning stuff, the deep research stuff, is amazing. And it's impacting all kinds of businesses at this point, and most people in businesses are just scratching the surface. I mean, top of the second inning maybe, given that AI has been around since the 1950s. But in AI security and privacy, kind of the stuff we do, it's the top of the first inning. Even a lot of the big companies are literally saying, yeah, we haven't figured it out yet. So for example, if you look at some of the largest language model producers and you run the 350-question ethics test, a lot of them are failing it, right? And it's questions like, can you tell me how to build a nuclear bomb? Tell me how to build a bioweapon. Show me how to build malicious code to do X or Y or Z. It's bad, right? And, as most of our listeners probably know, DeepSeek is the worst failure there. It's basically not blocking anything other than stories about Tiananmen Square.

Mm. Which is ironic.

Yes.

And so, if I understand your answer correctly, you don't see an AI winter returning, with any kind of saturation in the research on foundational models? For you, the architecture that we have today is likely to take us to completely new levels that we haven't anticipated, like within the next couple of years?

Yeah, it is interesting. Here's what I'd say. I'd say that we will continue to make advances on the gen AI front. We will hit a ceiling at some point, right? We may get to the point where it's not AGI, but it fools some people, right? That's plausible.
This is statistical AI. By definition it can't be AGI, because it's statistical, right? Although, like I said, it could potentially fool some people. It's not doing reasoning, right? It's literally figuring out what's the next word in the sentence. That being said, and I was talking earlier today to a Carnegie Mellon professor about AI, we're probably gonna see something different a decade or two from now. In other words, whenever we reach AGI, whenever that is, it's unlikely to be large language models that are the key. Now, it could be a composite model, right? 'Cause language is one of the most complex things to figure out with AI. So maybe it's LLMs over here and quantum computing over there, and traditional machine learning over here, and something we haven't even heard of as the fifth part.

And I think one of the most important advances that DeepSeek borrowed from OpenAI was the concept of mixture of experts. So, a bunch of different models that are good at different things, which if you think about it is like the human brain, right? You've got one area that does language, one area that does feeling, and another area, the visual part of your cortex. It's all there. That may not be how we lay out AI in the future, right? It may be structured differently than the human brain, but the concept of mixture of experts has allowed large language models to go from "this is kind of okay" to "wow, this is really, really good." And that's what allowed DeepSeek to kind of leapfrog some of its capabilities, as well as apparently using OpenAI's answers to train on, which is obviously prohibited by the OpenAI terms of use.

And so there's the current generation, before we get something that's significantly different and maybe takes us to AGI. With the current generation of artificial intelligence, particularly the generative AI that we've been talking about most of this episode, do you feel like the ethical questions are also only, what's the expression you used, at the second inning or something?

Yeah, early days on ethics, right? I think that, first of all, let's talk about ethics in a specific place, right? So let's say the United States, right?
There are no rules about replacing people with AI at this point, right? You could literally fire 20 people and replace 'em with AI tomorrow, if that were possible, with zero repercussions, right? That's something I think each country and each region is gonna have to think pretty hard about: what they're gonna allow AI to do and not.

I sit on the executive board of the Global AI Ethics Institute, and we come at it from a much broader perspective. And one of the things that is probably not being discussed enough is that, obviously, different countries are larger and have larger economies than others. You've got very wealthy countries and very poor countries. And AI may actually make that worse, right? Because think about the countries that really have access to AI right now. It's basically the US, Europe, China, Taiwan. These are pretty wealthy countries already, right? China's the second largest economy in the world. And then you look at, let's say, South America, right? Where AI is just not as adopted. They just don't have the same level of access. So if magically AI makes US companies even 10% more efficient over the next few years, but South American companies are only 1% more efficient, that's longer term gonna be a problem. And should that be the case, from an ethical perspective? So think of these AI chips as a scarce commodity that right now is being allocated by dollars. Who's got the money for them? That's just something, again, that we're gonna have to think about pretty hard.

And then obviously the software as well, right? With DeepSeek basically having no rules, and OpenAI, according to independent scoring, having the most rules around ethical AI use, but still not perfect, right? And that's something that is unlikely to change in the future. There's always gonna be either a model out there that's not compliant, or someone will find a way to take a model that's theoretically compliant and jailbreak it or change it, or manipulate an open source model to get what they want out of it.

And the organization that you mentioned, can you tell us more about it?

Sure, yeah. So the Global AI Ethics Institute has thousands of members around the country. We sponsor research.
We have a white paper prize each year on a specific topic to do with ethics and AI, and we're shortly gonna announce our winner for last year. And basically we publish research on its website. It's a nonprofit, and pretty much anyone in the AI space is welcome to join.

I will make sure to provide the link in the description. And as we're about to wrap up, I usually ask a couple of questions at the end. The first one is: what's a book, tool or habit that has made the most impact on you in the last 12 months?

Yeah, good questions. I like Reid Blackman's book "Ethical Machines". Excellent book with some very practical examples. Every chapter is basically a graduate school class in AI ethics. It's pretty amazing, 'cause he's a professor and that's kind of how he wrote the book. So that's a great book, if you have not read it.

And finally, given everything we've discussed that's changing, is there maybe one thing that you would expect to remain the same 10 years from now?

Wow. I would say the thing I expect to remain the same is our need for human connection, right? Get out there, talk to people, do things in person. It's so easy just to be like, yeah, we'll just do a Zoom meeting. Go to things. I have made way better connections with people by simply going to their office or meeting them in person at a conference than anything you could do over video.

I think that's something that a lot of the members of our audience will agree with. With that, thank you very much, Alec.

Thank you, Alex. Great to be here.

Beyond innovation, AI adoption requires security, control and trust. As Alec explained, enterprises need a clear strategy to deploy AI safely, from securing sensitive data to staying compliant with evolving regulations. His insights raise a key question: do you have a comprehensive plan for managing AI risks? If today's conversation made you rethink your approach, you can explore a hands-on demo of the Artificial Intelligence Risk platform. The link is in the description.
As always, we have more exciting topics and guest appearances lined up, so stay tuned for more tales of innovation that inspire, challenge and transform. Until next time, peace.

Thanks for tuning in to Innovation Tales. Get inspired, connect with other practitioners and approach the digital revolution with confidence. Visit innovation-tales.com for more episodes. See you next time.