DELIVERED

The AI honeymoon is over with Josh Harguess

Infinum Season 1 Episode 10

In this episode of Delivered, you can learn what’s beyond the hype and where AI will take us next.

We sat down with Dr. Josh Harguess, the AI Security Chief at Cranium, a leading AI security and trust platform that spun out of KPMG. With 60+ publications, three best paper awards, and five issued patents, Josh has been on the frontlines of AI safety and effectiveness for over a decade.

Key learnings:

  • Discover how to tackle the biggest challenges in AI adoption today
  • Learn about securing AI systems and the practice of AI red-teaming
  • Find out how to assess AI risks and set realistic expectations
  • Explore the future of AI and its most promising applications



Thanks for tuning in! If you enjoyed this episode, hit that follow button so you don’t miss our next delivery.

Delivered newsletter
Delivered episodes are recorded live. Subscribe to our newsletter to stay informed about upcoming live events. When you join a live event, you can ask our guests questions and get instant answers, access exclusive content, and participate in giveaways.

About Infinum
Delivered is brought to you by a leading digital product agency, Infinum. We've been consulting, workshopping, and delivering advanced digital solutions since 2005.

Let's stay connected!
LinkedIn
YouTube

Okay, Josh, welcome to Delivered. It's wonderful to have you.

Thank you. That introduction as well.

Yeah, I had to really, really list that out. Your credentials are such a vast list of things. I was like, man, am I going to get all these in? But yeah, honestly, it's super great to have you on the show. And like I say, the topic we're talking about today, I mean, I'm a big fan of AI in general, but the fact that you have all these credentials is a great thing. I try to distill this in a list for the audience, but I suppose we should probably just start talking about it from the start. Right? Let's talk about you, the career path you've had to get to where you are today. I'd love to hear more just to set the background really of your origin story.

Yeah, absolutely. So I think I'll start with the AI path. So I was at University of Texas at Austin, this is in the early 2000s. Ended up meeting a professor there doing computer vision work, J.K. Aggarwal, one of the pillars of this area, computer vision really got into face recognition. And interestingly enough, that was really my first path down. This AI security area, biometrics from face, from iris, from fingerprint, all these sorts of things obviously has a lot of implications in the security realm. So ended up doing a PhD there, kind of leaning on my applied math background. So came up with some formulas for multi-camera face recognition. So published several papers there, ended up at the Navy. So I had my last year for my PhD, I was actually under a scholarship with the Department of Defense and that's how I ended up here in San Diego working for the Navy. And during those seven years, this was in the 2010s, early 2010s, and folks were using the words machine learning. Nobody was seeing AI, deep learning wasn't around yet in the context that we know it now. And data science was really what people were talking about. They were talking data science, various analytics, these sorts of things, support vector machines where all the rage and machine learning.
And really my goal as a researcher at that Naval lab, as most researchers know, is to get money to do the work that I wanted to do. And that took a lot of effort because people were not very interested in learning about machine learning. They had their own way of viewing the world, but I was there to try to tell everybody the good news machine learning is on the way it's going to be here to stay. So that's really how I got into the DoD space. Continued some of that work into intelligence, surveillance, reconnaissance mission, so a lot of obviously computer vision work.

And then really about 2016, 2017 is when I started getting into AI security itself. And that kind of led me down the path to where I'm today.

Yeah, it's such a great and varied background. And like I say, it is funny because I feel like back in the day it would've been like, it's coming, it's happening. But I guess people don't contextualize it then as to do now you have all this AI tooling everywhere, people understand it because kind of using it a bit more. So yeah, it's also I guess at the government level as well, that was also interesting when I looked at your CV, the fact it's helping apply that into government problems. I mean, we don't have to get too deep into those. I appreciate for the nature of those. But yeah, just that kind of resonance of trying to bring it to various governments must be fairly difficult as much as its businesses today.

Yeah, no, it's a really good point. So at the time, sort of 2011, 2012, 2013, we started to see some deep learning papers come out and really deep learning and most of machine learning back then you needed a ton of data and labeled data at that. And that's one thing, the DoD had a ton of data but not a lot of labeled data. And so we were coming up with all sorts of methods, zero shot, few shot kind of learning methods where you only have a few examples of something. So a ship that you may want to track or detect somewhere in the wide open ocean. You may only have a couple of examples. How do you build models when you only have a few images or information to go on? And so a lot of that early work was really around that. And then later on started joining efforts such as Maven where really the goal there was let's label as much data as we possibly can so that we can use these tools that industry is using because they have so much rich labeled data.

And I suppose the topic of the show today was about the honeymoon period being over in AI. And I think there's so much hype that's being put out there at the minute. I suppose the allure of AI is to just adopt it, just use it to compete with it and get it into business. And I guess in your case, and sometimes at the government level, but I suppose securing AI is different to traditional software. I'd love to maybe just lean into that a bit more because I guess in the absence of I guess security and consciousness of making sure this thing is done properly, the allure is let's us get into AI and use it and start to deploy it to compete with various other competitors in the market. What is that difference between AI and traditional software applications being secured?

Yeah, it's a really good question. So some of the differences really come down to the way these models behave. So some examples, some of the early work, you might have an image that if you send into some sort of an image recognition algorithm, it would say, this is a cat, this is a dog. And if you add just a little bit of adversarial noise, so noise that is catered in such a way, not just random noise, but it really is imperceptible to a human. Now all of a sudden that might say given 99%, I'm absolutely sure that this is a given. So that's the famous kind of panda gibbon paper. And actually we have this joke where if we hear somebody mention panda gibbon, that's like drink.

Love that, love that.

It's such a common reference for this area.

Yeah, absolutely. Yeah, it's a different beast, isn't it? I suppose as well. Traditional software I feel like has quite simpler vectors. That's quite binary where I guess with AI models it's constantly changing, evolving and sometimes black box. So yeah, I guess trickier to manage security wise.

It's true. And I think at this point we have a pretty good handle on what are the vulnerabilities of a software, what are the ways to get back doors, what are the security holes? I think with the AI, what we've done is obviously introduce a bunch of new capabilities. We're all excited about that, but these new capabilities come with very strange back doors, very strange software vulnerabilities. Prompt injection is another example that obviously we knew we were going to deal with on some level, but I think some of the attacks have been pretty different from what we were expecting.

Yeah, and I suppose, I mean to add some context without getting too deep into NDA, breaking anything, an attack through AI, I guess that is things like prompt injection, that kind of thing, I'd assume from an intermediate level of this, is that the kind of thing we're talking about?

Yeah, exactly. So it very much depends on the kind of AI I mentioned, deep learning, computer vision, those are different set of attacks, even traditional machine learning before deep learning has these same kinds of attacks that you can perform on them. They look different. And then obviously we have with LLMs and with multimodal LLMs, we have a very new class of attacks and the prompt injection is exactly that really fits the bill for the majority of those attacks.

Yeah, absolutely. I suppose what was really interesting to talk about as well, I was doing the research here is this is for the tech nerds like me out there that when you hear things like red teaming, it's almost like born identity or almost the cool kind of secret agent part of this thing. So in terms of red teaming, I suppose, I guess first of all, we should probably talk about what they are just for context for the audience. Yeah, yeah,

Certainly. So I'll give a little bit of a background of, because I was new to red teaming until about 2017, 18, hadn't really been in the cybersecurity world. I've been in obviously the AI world for the majority of my career. So I had to learn myself, what is this? And I think at the time people were even trying to figure out what does red teaming mean for ai?
But really red teaming as a practice kind of break it down to these three things. So it's understanding vulnerability assessments, so what are the vulnerabilities in the system? The second thing is doing pen testing. When you do understand these vulnerabilities, am I actually affected by these? I'm going to pen test my own systems, my own models. And then the third one is really the one that I spent the majority of my time on at MITRE which is doing red team campaigns. So essentially can I find a new vulnerability that we're not already aware of and put on my adversarial hat and really try to break the system?

Yeah, I love that. At Infinum, we have a SecOps team, AI team, et cetera. And I think you're right, you have to sometimes wear the black hat, white hat in Ballard to kind play the role of the bad guy to then inform the white hat how to then solve that. So yeah, I also think with the red team side of things, I mean it is quite broad concept, but I suppose it's needed especially at government level or corporate level, that's going to be something that needs to be in place as a backup at all points.

It's true, and I think we will probably get into this in multiple of the questions, but there's a lack of talent obviously in the AI space. We need tons of people working on ai. There's also a lack of people in cybersecurity that's well documented. And then when you intersect those into AI security, I mean it's a handful of people around the world, so we're really going to have to build tools to help scale those teams and we're going to have to educate folks to get them into this area.

Yeah, I totally agree. Mean we work a lot with software digital, which again feels it's relevant, it is everywhere, but I do see the age now where that is now becoming almost traditionally older and it is now about AI seeping in. But I think, like I said before, the allure of using it comes with the threat of misusing it. Even if it's unintentionally you are just deploying these things. I suppose it'd be a good time for the audience who probably are probably not quite well versed as AI as your good self, but in terms of the types of AI attacks that you see in real world, maybe we should just discuss that and how to maybe prevent that or what that threat could maybe give to maybe clients and businesses. Yeah, let's maybe talk about that a little bit more.

Yeah, I'll give two examples. I know everybody wants to hear the prompt injection ones, but I'll start with a more traditional attack.

Please.

And this is back from my MITRE days. And so we had a customer in the DoD, they had an autonomous platform and their AI engine could run the autonomy, it could run the camera system, it could basically run the whole platform and it could do so without any human intervention human could take over if they had the communication. So essentially this was a platform for intelligence surveillance reconnaissance. So go out there, take imagery, come back, is kind of the idea. So really our goal there was can we, and this is that red teaming campaign, can we actually break this system? And so we did things like let's do a tabletop exercise where we have everybody in the room that's part of the design team that kind of knows the intricacies of this. Let's have cybersecurity people in the room, let's have AI people in the room, AI security people, and then see how far we can get from an exercise point of view. So simulation kind of point of view. So we did that. We kind of came up with, okay, now we know a little bit about the camera system. It's still pretty black box to us as security people. They don't divulge too much. And then we went on and designed attacks using only simulated data in simulated environments. So we had no real data
and we were able to show effects. So essentially from a pretty far range, dozens of miles, we could show an effect on this particular camera system and the AI that was being used to drive that camera system. And so that was more of a traditional attack, but it really showed that these research papers that say that we can do X, Y, Z, we were able to pull that into the real world. And as an example, in order to have this effect, we had a billboard sized image to show it not something very scalable or easy to do in field, but it showed kind of the art of the possible there.

Yeah, I mean all I can think about as well, and we talked about this a little bit before we went live, is this red teaming situation is such a pivotal part of this new age that the autonomous age is coming if not already here. And I can't help but think about the fact that literally today as well, the fact that we have meta opened up Llama to government military application now, and I feel like once that's being utilized again, the red team in there will be significant. So I guess trying to compare that to the corporate world, I guess you could say it is almost like you'd need that regardless of where you are in the two sides.

Right. So yeah, now I'll come into where things are right now. There's a ton of AI out there. Obviously computer vision, deep learning is out there being used, so people are trying to secure those systems. But I think on the top of everybody's minds are definitely these LLM systems that have these unique prompt injections that can manipulate, whether you're building a chatbot or something else, you could end up leaking data, you could end up opening yourself to a denial of service attack, financial harm, reputational harm, these sorts of things. And red teams, I think a year ago red teams were basically grasping at straws, what can I do to these systems? So let's try everything we can possibly think of kitchen sink throw at it. Now it's a bit more mature. There's a couple of open source libraries out there. Microsoft has one called Pirate, there's another one called ROC. So there are more tools out there to help red teams and security teams to figure out where these vulnerabilities are. But it still takes an army of researchers to push the boundaries and figure out what are these new attacks, especially as new models come out, we're starting to see brand new attacks on multimodal models, for example, and models come out with new, for example, system prompts that haven't been tested before. So these sorts of things.

Yeah, I mean you mentioned there about the research component here. I suppose the scary part is when you have a bunch of agents working with you from an AI point of view, essentially as your researchers then trying to work on these problems to try and attack said systems, that could be an interesting day if not already close to that day. Exactly. So I guess we're Delivered, we're obviously here to talk about building great products and businesses at the core of this. So I suppose leaning into the topic here is what are the kind of most obvious or common security oversight you could imagine the business building into AI products that they are releasing into their ecosystem?

Yeah, definitely. So I think the number one thing is what are you using to train these systems? What data are you using? What access do people have to both the data and the systems? What are you doing to inspect any reasonable testing on the prompts themselves? So have you used some of these tools that I was mentioning to do some PIN testing? But I think number one is really that data piece. And so I think a year ago, two years ago, people were taking these models off the shelf, doing training, fine tuning on their own data sets and then trying these models out. And I think what we saw there was pretty commonly you were able to leak data that was used in that training set so quickly people started to realize that. And there's some other ways of approaching that. So retrieval augmented generation is one way. There's a bunch of papers in this space, there's some other architectures that are similar to that, but essentially it takes that variable out. So you're no longer using your data to actually fine tune a model. You're using the model with sort of an in-between sort of a retrieval system, a search system to find the information that you want out of a set of documents or set of training data that you otherwise would've used to train the model.

Yeah, absolutely. And I suppose in most companies I spoke to about AI strategy and how we're going to implement this going forward, they sometimes have an AI committee, which usually at the moment it feels like it's the people that are responsible for departments brought together. I'm trying to figure out how to use AI across the different departments. And in the business itself, where do you see the responsibility for AI security sitting with a business?

Yeah, this is a great question and I also want to address what you just said too, which is what sorts of things do you need to do within your business? We're seeing, we try to do them ourselves and then we try to teach folks how we see it as practices wise. So that steering committee you just mentioned, that's exactly right. We have an AI council that has a representative from each of the departments within Cranium. We recommend that same approach for the organizations, and we're seeing a lot of adoption in that direction, which is great. Things like incident response plan, do you have that for AI? So having a specific IRP for AI, do you have an AI use policy? How you as an organization should be using AI internally and how do you educate your team about that? So these kinds of things. And so that's really the setting up that whole beforehand, what do I need to do to prepare for any sort of adversarial event or just do good security practices. And to your question, so where does this kind of lie? We're definitely seeing this more land in the CISO's office. It certainly lands with security these days. We're seeing some of it land with the data science or AI teams themselves, depending on what element of security we're kind of talking about.
We're also seeing IT land, for example, in the CFO's office, since they're ultimately responsible for the budget, they may be the ones with purchasing power and they may be the ones thinking, okay, if I'm going to buy this AI and bring that in, spend all this money on AI, what are we doing to secure it? And so that also sometimes lands with that team. And then finally, it's the compliance officer. So whoever is in charge of compliance in the organization, that is also where we're seeing this land purely, especially in the EU with the EU AI Act, which is already starting to show some teeth. And we will certainly see the executive orders start to play out with various things coming from NIST. And folks are definitely worried about what do I do to comply to these regulations that are coming?

Yeah, it's a really good point. I think from what I can understand when I talk to clients about, first of all with the strategy part, at least from my side being biased in the strategy game, it's like what does it need to do? Who does it affect? How does it work together into play between departments? That's something we always want to figure out at Infinum for our clients, but then it by proxy leaks into things like ethics and regulation and compliance, and actually who's qualified to actually do that in the team itself. And I suppose you're right, I suppose you've touched on some really good there about the EU AI Act, all the abbreviations. It's like where does that start and end for people? And I guess beyond that, you have your own compliance. We work with a lot of financial institutions, infinite, it's like how does that get regulated around this ai? I suppose we should probably just lead into that a little bit. As you were saying there that the EU AI Act is now showing teeth. What does that mean for companies who might not be aware of it or people on the show that are tuning in there? What does that mean for companies? Sure. 

Yeah. So their approach, and I think a lot regulators are looking at this from a risk-based perspective. So essentially what are you building and how much risk is there to the general public or particular institutions based on what you're building? So if you go to their website, you can actually kind of use their calculator to figure out if it is something you should be worried about in the near term or if it's something that's coming a year from now. But essentially how sensitive is the data you're using? How sensitive is the application that you're trying to fulfill? And really just overall, what is the risk that sort of poses? So most people in the next couple months will not really be affected by this just because of their risk profile. But as this continues and as people use more and more sensitive data to train these models, we'll certainly see the effects of that come into play.

Yeah, absolutely. And I guess from your experience at Cranium as well, where you're literally AI trust platform for these clients, I guess you're constantly having to deal with the balance of regulation meets technology meets, like you were saying at the start, education for each of these companies to know this. And I suppose to kind of just lean a little bit away from kind of risk horror and slight scare distill of AI problems, I suppose with companies as well. I guess there's a risk of disappointment around ROI as well, which again, I know it's more abbreviations, more business terms, but I think it's a bit like that genie analogy, isn't it? I think if people sometimes believe AI can be turned on and used, it's just going to stop making things happen. I mean, do you feel like companies are starting to maybe see now the ROI of AI isn't quite a thing yet? It's still early,

So it very much depends. So there are some organizations that are mature way on the right hand side. They're using AI every day to do all sorts of functions that people thought were impossible a year ago. So we know it's real. For some organizations, they've done the hard work of implementing it, bringing it in,

Doing the research, doing the testing. And so it's definitely out there, I think where some of the folks are not seeing the ROI really, in my opinion, it's usually kind of misaligned strategies, unrealistic expectations. They really thought they could just kind of plug in this chatbot to what they were doing, and then it would just magically solve it. So it still comes down to hard engineering problems, clearly defining your problem, decomposing that, figuring out where AI fits into that solution. So you still need to do those good practices, and I think really I, I've heard this a couple of times, how do you actually say yes to AI? It's three things. It's one, is there a use case? Give me a proposal of what we would do with this AI within some sort of a system description design. Two, can you show me a working prototype, actually show me this in a sandbox with some proxy data, have to see that. And then three, how are we securing it? How are we making sure that nobody can get access to it and use it inappropriately?

Yeah, absolutely. And I'm so pleased you said that as much about engineering as it's about definition strategy, and I guess all of that baits into something that happens continuously because it starts, it's a circular approach. It's not this classic product life cycle. It starts and ends. It's going, it's ongoing. That's exactly right. Continuous lifecycle.

Absolutely. Yeah, love that. So yeah, I mean, I try to lean into the strategy team, definitely my vibe. But yeah, I love the fact it's been validated here because in my mind, I'd always see the future business, an autonomous business would have almost like an AI committee lab and probably something at the core of it that is that continuous just in the business baked in, always doing that. But that's a different conversation for a different day. I suppose. One of the big things that I would love to know from you who is really obviously baked into this industry and has a lot of accolade in it, do you think that AI hype is dying down now or do you think it's just accelerated?

So it's interesting because I've been in this field for a couple decades now. So it's been interesting to watch the hype kind of go up and die down a bit. I think the peak that we have just experienced with LLMs is kind of unheard of. I think even across other technologies, it's pretty crazy. So I think there's a natural kind of plateauing effect kind of happening where people are realizing what it's capable of versus what it's not capable of. However, this is only the current state of ai. This is the current deployed state. There are a ton of researchers around the world pushing the envelope on new architectures, new ways of training. What we're dealing with right now is a very static system, even though they get updates. So people are looking at neuroplasticity, for example, things that can learn online. So there's plenty of exciting things still coming with AI probably right now. We're just feeling that effect of, oh, well, I thought this was going to solve everything,

If only that would be amazing. But I guess around this, we've touched a bit on governmental style, AI business side of things, but what do you think it means for the general public? The general public obviously are embracing this now because it's more B2C toolings here. They can just, like you say, they can jump on, do a few things. They can see it emerge in front of their eyes almost like magic at times I guess. But the other side is the fears of it in the public, in the general public. Where do you see that taking place?

Yeah, I think it's died down a little bit, but obviously six months, a year ago, people were really worried about these things, developing consciousness. How are they going to just become our rulers overnight? I'm glad that that sort of fear has died down. I think the fears are a little bit different now. They're thinking more deeply about what does this mean for my job in the future

And what does it mean for my children's careers and students in a classroom, for example. So I have a good friend that's a professor in creative writing, and you're trying to teach folks how to think creatively. And if they just go to a chatbot and get what they need to turn in for that homework assignment

And don't actually learn how to write and don't learn how to think critically, then that's a bit of a problem. So I think it's understanding, educating yourself about AI, definitely using it. There's tasks day-to-day tasks that I use it for now I can rely on it. There's other things that I have to be very careful about how I use it and how I check responses from it and things like that. So some of the fears definitely I think are justified from a perspective of you need to understand what's coming. So this comes up a lot, but will AI take my job in the future? I think that's a question mark. We're not sure, will someone using AI take my job if I don't use AI also? I think that's a yes.

That's so true. Yeah, it's like the old adage slash new adage. I've heard this statement a lot, and I believe it to be true. I feel like we're heading towards the autonomous age, and one day we may have agents and fully autonomous companies, but right now we are literally in the augmented present where it's like if you are not at least exploring the idea of AI in some way even today. I mean, I look at workflows today. I've been working on where I'm talking to a client, I'm trying to build a strategy, but now with the tooling at my fingertips, I can visualize that user flow, it even do mockup designs, even do coding where I've basically built an MVP upfront just from tooling. And yes, it's not perfect, and yes, it won't be as good as human beings, but the point is it's taking me significantly less time. So you're right, I think it's about can you implement it as a tooling kit with humans, as a partnership, as a dance, really to make it
Properly do something useful. And I think one feeds to the other, right? Is like you need the guidance of a human to maybe look at what's being produced from AI and go, is that right? That with tweet and the AI is there to be efficient for the human being. So yeah, I can see a bright future with it. It's that dance. And I dunno, mind blowing implementations of AI. I've seen a lot. You've seen a lot. I'm sure the world has. Is there any right now that stand out to you with your history in this? You must have seen most if not all of them come through recently.

I think if you look at the hype curves for AI and other technologies, obviously where we are with agents, I mean, we're just starting to see the hype on agents with LLMs. We're starting to see that kind of crossover, the hype curve a bit, but things that are on the tail end, I think I mentioned computer vision early on. That's really where I'm seeing solid systems being built that as soon as they kind of ingest these systems into their workflows, these are actually being used every day. If you've been to the airport recently and you've experienced the TSA change where they take your ID and take an image, I mean that interaction is just seconds. And that's all being done with. Obviously there's a human to help check that kind of stuff, but there's computer vision also being used there for that validation verification piece. And that's a huge system. That's massive number of people going through those gates across the United States. And so that's one example, but there's tons of others. Autonomous driving systems, we wouldn't be anywhere near where we are without the sensor suites and AI and technology that's being used to pull all of that together. I know we're not quiet at autonomous driving, but if you've been in a Waymo and San Francisco, we're very close.

Yeah, I find not that been those exact cars, but when you've been in a Tesla, even just the fact it helps you drive safer, easier, less tiring. It's already doing that what we talked about, right? It's making your journey and life easier with the kind of a augmented synergy between the two worlds, right? The technology and the human being. So yeah, I mean, I'm a big fan of all that for sure. And I suppose to try and move into more positive realms, the future of AI development. Do you think that AI has a potential to solve pressing problems and challenges at the global scale, which I think is the utopian world we're all hoping for? What do you think from your professional experience?

Absolutely. I think some of the biggest challenges, for example, protein folding, we've seen some massive advancements in AI to help with that. And that's a huge combinatorial problem that takes a serious amount of time, but AI is able to assist in problems like that. So I think we're going to see new vaccines being developed. I think we're going to see cures being developed at a pace that we've never seen before. So I think in the healthcare space, we're going to see a massive amount of improvement in our development there. I think we're already starting to see the benefits of that from any sort of design work that people do. Doing simulations, for example, in wind tunnels is extremely expensive, but can we do this on a very large scale for simulated vehicles and simulated effects in environments? And these are the kinds of things that obviously places like SpaceX is doing. These are the kinds of things that autonomous vehicle companies are doing. So just being able this digital twin idea of creating the world in a digital form so that you can test it there first before you bring it into the real world. Absolutely.

Yeah, totally. It made me laugh. Obviously, we were talking about digital twins prior about the idea of me trying to use my digital twin Chris Botcher to try and talk about the Llama incident, but because it's military based, it wouldn't do it. But digital twins are away, right? They are coming. They're real. So yeah, I just wanted to make sure we, can you turn it around a bit from dystopian problems to utopian happiness. Now we do have some questions for you. Would you believe we have quite a lot of questions based on this topic, and so we'll jump straight into them and dive in because quite a few here. And actually we actually ran a poll from our marketing team at Infinum just to get a sense before this live show about what are people's biggest AI fears and what the majority of them said was that essentially AI fall into the wrong hands. What does that mean in terms of what is our worst case scenario? What could be utilized in that sense? So yeah, quite a big question, but yeah.

Yeah, that's a big question for sure. And there's some examples of this already happening. I'll give kind of two examples. One is on the pure kind of cybersecurity side. So Cranium is focused on how do we secure AI systems? But there's kind of another piece of that Venn diagram, which is how do I use AI to do cybersecurity? And obviously there's people with their white hats on thinking about how do I defend against these sorts of things. But there's obviously adversaries that are out there using AI to build chatbots, to do nefarious things, and we're seeing definitely an increase in this. So we're seeing folks that are able to quickly ramp up and build attacks based on open literature and things like that, and they can try really advanced phishing campaigns, for example. And so you're starting to see phishing campaigns look a lot more human over the past six months to a year that's going to continue.
So that's definitely something that we have to be aware of and defend against. Another example in this space is folks that are building deepfakes. So we've seen some examples there where even being able to do video and audio and kind of replicate someone and bring them onto a video screen and then convincing you that they're real and then you hand over money or passwords or whatever. So we're definitely seeing an increase in the deep fake side, and there's a bunch of research, a bunch of folks out there trying to solve that problem. How do we, do we watermark models and imagery and audio and things like that, or how do we handle those sorts of things?

Yeah, yeah, you're right. I mean, I guess the imagination of what you can utilize AI for could be vast for good and bad. I guess that's technology in general, isn't it? It's like the human condition of how do you manage this thing? And I suppose also, I'm glad you mentioned fish in there actually, because my next question actually was about the environment and I guess how do you think we should address environmental impacts around AI in the future? Which I think that's something that maybe is never considered too much, but it's a prevalent problem that should be looked at now as well.

Absolutely. Yeah. So I think there's certainly implications on the environmental side training these models. It takes a massive amount of energy. There's definitely papers written about this that you can go see the scale of what we're talking about there, power to actually power a city, that kind of energy. And I think we're at the stage where that was sort of what was needed to get past certain hurdles. We have the entire dataset from the internet, all of human knowledge essentially being used to train these models. What we're probably going to see is some changes to how we do that. So one is some more focused models that are smaller, easier to train, but still have a lot of the general intelligence borrowed from these foundational models that have been trained on massive amounts of data. So I think we'll see edge devices as well. A model on your phone obviously can't be the size of some of these models. That has to be much smaller and more dedicated to what it's doing. We're going to see a trend in that direction. And then I think we're also going to see on the energy side, we're going to see people get creative there. Can we build enough solar farms to power an AI training centre or something like that? So I think we're going to see people on both sides struggle with this problem.

Yeah, yeah. It's mental, isn't it? That whole citywide power just deliver this stuff. It's mad. And I guess I'll put this as the last question to bring it back to human beings, I guess. So do you think businesses are serious about ethical ai? Do you think customers are informed enough to push for responsible AI for companies and how can companies increase that transparency to really raise the game with the public concern? So it's quite a few questions that are baked into one, but yeah, how do you see responsible ethical AI being leveraged?

Definitely. So I think there are people that are certainly concerned. I think each very large organization that's building these foundational models, they typically have a group embedded that is solely concerned with responsible use of AI trust and safety. And so you're seeing a lot of that Mitre that I came from. We had a framework for this. NIST is thinking very deeply about this. Obviously this is a piece of the EU AI act as well. So I think there's enough kind of folks, enough energy kind of thinking about this problem that it's not being ignored. So that's very good news. But at the same time, it comes back to that education piece. So somebody in an organization that's actually training these models or going to use this for something, they might not be thinking about the ethical consequences that might come from these models. If it's not supposed to let you build a recipe for how to make a bomb, and it does somehow, then now you're kind of crossed into this grey area of, well, this model is being misused. It's on your dime, it's your account. And so now you're responsible for that. Building policies around this, having the education, understanding the compliance side, the governance side that's coming into this space and bringing as many folks together as you can to talk about these problems. I think that's really where that AI council comes in, where
You have representatives, diverse backgrounds, diverse points of view, and that's really how you solve these problems. Because if you only live in your data science world or only in your security world, you're not going to solve these sorts of problems.

Yeah, that's a really good, I guess, wholesome way as well to wrap this in terms of it is about education, it is about transparency. It is also about human beings just collectively working together, like you say in these committees to make sure we responsibly use this as a species, I suppose, and that's the only way it's going to stay and be our kind of augmented friends going forward. So hey, look, I mean, once again, Josh, thank you for all the information on this today. It is mind-blowingly exciting and somewhat scary I think talking about this. But thank you again for being on delivers. That's been great. And yes, hopefully I'll see you soon.

Great. Well, thank you so much. I had a great time.