AI Governance - Balancing Innovation and Security
“If Microsoft, Google, and Tesla have faced challenges with AI, what does that mean for your organisation’s risk exposure?”
Grassroots IT CEO David Mitchell hosts ISO expert Jason Maricchiolo, Managing Director of ISO365. In this eye-opening webinar, Jason unpacks the fundamental principles of AI governance that every business needs to know, regardless of size or AI maturity.

Access This Webinar
Join Grassroots IT and ISO365 expert Jason Maricchiolo for this strategic discussion that delivers practical guidance on balancing AI innovation with essential security measures.
The question isn’t if AI governance will become standard—it’s whether you’ll be ahead of the curve or playing catch-up. Is your organisation prepared to harness AI’s potential while managing its unique risks?
Organisations that implement strategic AI Management aren’t just mitigating risks—they’re positioning themselves for sustainable growth and competitive advantage.
In this Webinar
- Essential Security Foundations
Understand how the core principles of information security (confidentiality, integrity, and availability) apply directly to AI management and why they’re critical for responsible AI implementation.
- Real-World Risk Management
Explore practical strategies to identify and mitigate common AI risks, including data breaches, information manipulation, and over-reliance on AI systems.
- Implementation Roadmap
Gain actionable insights on developing AI policies, establishing governance structures, and creating a clear path toward responsible AI usage in your organisation.
David holds an MBA and various other qualifications, including in project management. With extensive consulting experience across a wide range of industries, he is well placed to lead Grassroots IT as its Chief Executive Officer. When he's not running after his four children, David enjoys trail running.
Jason Maricchiolo has spent the past 15 years helping organisations enhance their operational functions, focusing on governance, risk, and ISO compliance. With extensive experience in achieving ISO certifications and a deep understanding of data security, AI governance, and regulatory requirements, he provides practical, tailored guidance to help organisations strengthen their security and compliance frameworks.
David Mitchell [00:00:03]:
Welcome everyone to our webinar. This morning's webinar is on AI governance and balancing innovation and security. Now, as you can tell, this morning I am somewhat vocally challenged, so I'm going to be leaning heavily on our special guest, our subject matter expert, Jason Maricchiolo from ISO365. So Jason, without me straining myself too much more, can I please hand over to you.
Jason Maricchiolo [00:00:34]:
Absolutely.
David Mitchell [00:00:35]:
Can I please ask everybody participating today to do some of my job for me and ask Jason questions through the chat, and I will do my very best to chime in when my voice allows. Thank you, Jason.
Jason Maricchiolo [00:00:51]:
Thank you, David. All good, and welcome everybody, and thanks for joining. As David said, please use the chat if you have any questions today. AI is a brand new topic and there's a lot of interest around it, so if you've got any questions that I can help with, today's the day to ask them. Just a quick intro from me: I'm Jason Maricchiolo. I run a governance, risk and compliance company called ISO365.
Jason Maricchiolo [00:01:19]:
I've been working in information security for the past 15 or so years, working with companies all across Australia and New Zealand on their GRC (governance, risk and compliance), and more recently I've become certified in the new AI management system standard, ISO 42001, which is what we're going to talk about today. It's something that is obviously just getting started. I wanted to do a quick refresher for those that haven't joined any of our sessions before, just breaking down what this term ISO actually means. It refers to the International Organization for Standardization, and this organisation writes and publishes these things called standards. They've written over 24,000 standards in their history, ranging across many different industries and governance frameworks.
Jason Maricchiolo [00:02:31]:
A standard is a formula that describes the best way to do something. That's the best way to think of what a standard is. As I was saying before, it could be a quality standard, it could be an environmental standard. A lot of you may have heard about an information security standard in the last two or three years: that is ISO 27001, which is probably the most popular ISO at the moment, besides the quality standard, ISO 9001, which has been very popular for many years. A quote from the ISO that I love is that when things don't work as they should, it often means that standards are absent. I wanted to demonstrate that very quickly today, so if you can, in the chat, I just want some of you to try and guess what these symbols are.
Jason Maricchiolo [00:03:22]:
Don't be shy. If you recognise the symbol, put it in the chat. Here we go. We've got one engine warning. All right, check. Engine, engine, engine. Demister. Oh, there we go.
Jason Maricchiolo [00:03:38]:
Ian's gone for the trifecta. Fantastic. All right, so a lot of these guesses are correct. The first one indicates that the engine is experiencing a failure or a malfunction. The second is that you're demisting and defrosting. And that third one, which hopefully we all haven't seen too often, is that tyre pressure is outside of normal operating parameters.
Jason Maricchiolo [00:04:15]:
The reason why I show this particular slide is that without any kind of leading from me, without any written language, without any colours, without anything, we were all able to come to a conclusion about what these symbols actually are. And that's because it comes from one of those 24,000 ISOs: ISO 2575, road vehicle symbols. This illustrates that when we standardise things, no matter what country we're in, we know what these things mean, rather than relying on written language. Some common ISO management systems that you may be familiar with, which we touched on before: ISO 9001, which is around quality management; ISO 27001, which is information security and keeping all of your information protected; and ISO 45001, which is the OH&S (occupational health and safety) management system. Those are probably the three most popular ones going around for businesses of all sizes. And as I was alluding to before, we're now talking about AI management.
Jason Maricchiolo [00:05:14]:
And that is another number: ISO 42001. The thing is, though, that we can't actually talk about AI management without understanding the basic concepts of information security management, so I'm going to teach that to you today, very quickly, in this one slide. The three things we're trying to protect at any one time when it comes to our corporate information are confidentiality, integrity and availability. The first one, confidentiality, is about ensuring that the sensitive information we have is only accessible to those who are authorised to view it. That's why we have to log into things. It's why, even at work, you might be able to see something in a platform that another colleague can't.
Jason Maricchiolo [00:06:01]:
That's because of user permissions. It's about only seeing what we should be able to see. The second thing we're protecting is the integrity of information. This is about guaranteeing that any information we're seeing is accurate, complete and trustworthy, and can only be modified by authorised personnel. So we need to make sure we protect the integrity of anything we hold. Imagine a scenario where you had a contract with somebody and the terms of that contract could be manipulated or changed without your knowledge. That is an integrity issue. It's another thing we're protecting with information.
Jason Maricchiolo [00:06:39]:
Then the third thing is availability. This is about ensuring that authorised personnel can access information whenever it's needed. I've got a little graphic there of an ATM. We've all been there, where we've walked up to an ATM and it's offline and we're not able to get our cash out. That is an example of an availability issue. A lot of people kind of forget about availability, but you can think of how important it would be in the healthcare system, making sure that something is always available or online when we're talking about doctors and nurses and things like that.
Jason Maricchiolo [00:07:14]:
So when we are talking about information security, we need to understand those three things, and that will ultimately lead us into what today is about, which is our AI management. How do these three things affect our AI management? It's about making sure that we understand the risks and impacts of using AI within our businesses, and then ultimately setting some guardrails around those risks and impacts. To give you an example of some things AI can affect, going back to that previous graphic about information security: we talk about accidental data breaches through AI chatbots or AI agents. I've got "confidentiality" in brackets there. In the example I was giving you before, where ordinarily, day to day, you need to log into something to see it, if we are now feeding information into AI and anybody in the organisation can ask this agent a question, then, if they're being a bit sneaky, that question could be about payroll information.
Jason Maricchiolo [00:08:23]:
If things aren't controlled on the back end, that agent is just going to display that information. AI chatbots and agents can accidentally cause a confidentiality breach, either internally or externally, so we really need to make sure that we control that risk.
Jason Maricchiolo [00:09:04]:
The second part, around that integrity piece, is what we call hallucination or manipulation of data. For anyone that has used AI, especially in its early days, it's subject to this thing we call hallucination: when it doesn't know the answer to something, it's going to make something up. So, feeding back to that integrity piece from before, where someone could change the terms of a contract: if a staff member asks an AI agent that you've built for some information about your business, and that information isn't actually in there for it to gather and display back, it's going to make something up. And that can be quite damaging, so we need to make sure that we control the integrity of that information as well. Then the final piece is probably more of a future piece, but it's the availability piece, and it's how we are starting to rely on AI more than ever. One day there may be an over-reliance on AI.
Jason Maricchiolo [00:09:43]:
I do feel like that's something that will actually happen. Right now everyone's either at the entry point or just using it as a bit of a guide during their day. But there's a point in time where, if AI becomes so ingrained in the business, there may be an over-reliance, and then if it goes offline, just like my ATM scenario before, that might actually affect the business in more ways than you think. These are some high-level risks that we talk about when we're dabbling in AI. And then we have the flip side, which is what we call impacts. This is something new in the AI management system that doesn't exist in any other ISO: alongside those three things we're trying to protect, we also consider impacts to individuals, impacts to groups of individuals, and impacts to societies.
Jason Maricchiolo [00:10:40]:
So you might think, okay, what does that really mean? Imagine you're building a chatbot for your organisation that is going to help you hire. If the data feeding that AI agent is biased, say you work in an industry where it's a majority of men that apply for those kinds of roles, your AI agent is going to be biased towards men when it looks at the candidates for your hiring. So if your data is biased, you need to make sure you have some guardrails around that, so that if someone who's not male does apply for a job, they can still be considered. That's what we're talking about with impacts to individuals. Groups of individuals is obviously a wider impact: it's a group, and it could even be a department within your organisation as well.
Jason Maricchiolo [00:11:45]:
So think of it as multiple people. This could be, if you're in healthcare and you're building an AI agent to try to diagnose patients, that if the data is incomplete, you may get a misdiagnosis for a group of individuals, which can lead to some serious problems. Those are impacts to groups. And then finally, societies, which is that wider scope again. An example I've got here is around some of the facial recognition technology that's being brought in. I believe it was Bunnings at some point that rolled out a bit of facial recognition technology in some stores, and immediately there were some privacy concerns. The Privacy Act came into play: are we actually allowed to do that? And so we are at the very beginning of a lot of these new technologies.
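The hiring example above lends itself to a very small guardrail check. Here is a minimal sketch, assuming hypothetical screening outcomes from an AI agent; the four-fifths threshold is a common fairness heuristic from employment-selection guidance, not a requirement stated in the webinar or in ISO 42001.

```python
# Minimal sketch: compare shortlist rates across groups before trusting
# an AI screening agent. The outcome data below is made up.

def selection_rates(outcomes):
    """outcomes: (group, was_shortlisted) pairs from the AI screener."""
    totals = {}
    for group, shortlisted in outcomes:
        picked, seen = totals.get(group, (0, 0))
        totals[group] = (picked + int(shortlisted), seen + 1)
    return {g: picked / seen for g, (picked, seen) in totals.items()}

outcomes = ([("men", True)] * 40 + [("men", False)] * 60
            + [("women", True)] * 10 + [("women", False)] * 40)

rates = selection_rates(outcomes)
if min(rates.values()) / max(rates.values()) < 0.8:  # four-fifths heuristic
    print(f"Possible bias, rates by group: {rates}. Add human review.")
```

A failed check would not prove discrimination; it is a trigger for the human oversight described next.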
Jason Maricchiolo [00:12:38]:
And so these impacts are starting to rear their heads quite quickly. So here are some guardrails I want you to think about today, put very simply, because a lot of this can be quite complex. Human oversight is a must. If we're building AI agents, we always want a human over the top who can verify the answers coming out of the AI system. I always say AI feels like magic, but it shouldn't be magic. There's a thing called transparency in AI: we need to know how the AI gave its answer. If we can't figure out how the AI gave its answer, that is a transparency problem. Again, human oversight is a must.
Jason Maricchiolo [00:13:23]:
If you're building an agent in your business, even a basic one looking at some financial data, make sure someone who knows the numbers is confirming those numbers when you get your output or your answer. Always have a subject matter expert, as I said, to govern the AI. If you're working in finance, get a finance person over the top of it. If you're working in HR, get an HR person. Quoting, estimating, all of these things: make sure your human subject matter expert is looking at the answers and the output. Which brings me to the final point there: your data quality must be high. A very simple concept for you all today, and I won't swear, is rubbish in, rubbish out. When it comes to your data, just make sure the quality is high, because if you're building agents to help efficiencies in your business, you need the quality of the answers they give to be as high as possible. Next, some quick AI failures over the years, and there's going to be a point to the ones I display for you here today.
Jason Maricchiolo [00:14:36]:
When Microsoft first released its Bing Copilot, as it's called now, we had some errors coming through on the Microsoft end. The reason I raise this is because if one of the world's biggest companies can fail in AI, the odds are that all of us are going to fail in AI, and hopefully it doesn't cause such a problem to our business that it ruins some of us. We just need to make sure that when we are dabbling in the use of AI, we are really thinking about the risks and impacts. Because as you can see there with Microsoft, when they first released Bing Copilot, it was hallucinating, going back to that hallucination from before: it was giving the wrong answers, the wrong outputs and things like that. We just need to make sure that we monitor that. Another massive company is Tesla, which we've all heard about, and here we're getting into some pretty dangerous territory: who's responsible when lives are lost when it comes to AI? So we need to make sure that we understand what we're doing with AI.
Jason Maricchiolo [00:15:48]:
Hopefully none of us on this call today are at the level where we're building AI agents that could ultimately cause someone to lose their life, but we are still going to be building agents that could affect individuals and groups of individuals. So Tesla is another major company that's battling with AI failures. Google, too: they released an image model that was misrepresenting historical figures in its image creation, which caused a lot of drama out there, because when we're talking about historical figures and the way they're represented, it must be accurate. Going back to integrity, this is a failure that happened to Google, again one of the world's largest businesses. And then finally, one that will pretty much always be around: this particular story was, I believe, the first loss of life from AI, and it's something I think we can all agree on the call that we wish had never happened. If this company had raised some of these concerns about the risks and impacts of how their AI agent could affect their customers, I'd like to think that maybe something like this could have been avoided. So again, we are dabbling in some pretty dangerous territory out there in the world; hopefully in our businesses, for everyone on the call, we are just looking for efficiencies and things like that.
Jason Maricchiolo [00:17:27]:
But AI failures are happening to the biggest companies in the world, so we really need to take the governance side of things seriously. Now, some benefits of AI management: what does it actually look like within our organisation? Going back to what we were talking about before, it's about controlling these risks, so we need to understand what these risks are and then we need to control them. Many of you perhaps have an existing risk matrix or risk register in your businesses that you could leverage to help control these risks. What I would encourage you all to do is this: if there is a mandate, or you are starting to dabble in AI, then before you go and create your bot or your agent or do whatever you're trying to do in AI, raise a risk on your risk register if you have one, or just grab a pen and write down what you're trying to build and figure out what could go wrong with it. Figure out who it can affect, whether it's your organisation itself, a person in your organisation, a whole department or a whole company. As I said, figure out what the risks are and then look at controlling those impacts.
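As a sketch of what that pen-and-paper exercise might capture, here is a hypothetical AI risk register entry in code. The 1-to-5 likelihood-by-consequence scoring is a common risk matrix convention, not something mandated by the webinar or by ISO 42001.

```python
# Minimal sketch of one entry in an AI risk register. Every field and
# value below is illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class AIRisk:
    description: str     # what could go wrong
    affects: str         # a person, a department, the company, wider society
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    consequence: int     # 1 (minor) .. 5 (severe)
    controls: List[str]  # guardrails that bring the risk down

    @property
    def rating(self) -> int:
        return self.likelihood * self.consequence

risk = AIRisk(
    description="Internal chatbot surfaces payroll data to unauthorised staff",
    affects="whole organisation (confidentiality breach)",
    likelihood=3,
    consequence=5,
    controls=["role-based access on source data", "human review of agent scope"],
)
print(f"Risk rating: {risk.rating}/25")  # 15/25: treat before deploying
```

Writing the entry down before building anything is the point; the scoring just makes risks easy to compare.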
Jason Maricchiolo [00:18:44]:
How do we make sure that whatever we're building harms no humans when we actually deploy whatever agent it is we're using? Like I was saying before, we're all looking for efficiency when it comes to AI, there's no doubt about that. So let's ensure that we're building AI to create efficiencies while really taking away those impacts to individuals and groups of individuals. And some really high-level things, like I was talking about before: I know people are looking for efficiencies in the HR and finance areas. Just make sure the data is not so biased that it causes issues for your business, whether under Fair Work or under the Privacy Act; you need to make sure you control it. I do see some questions coming through as well.
Jason Maricchiolo [00:19:35]:
I am going to go through those towards the end, so thank you, and keep raising them. The final part, when it comes to the benefits of AI management, is around building trust: showing your key stakeholders, or your interested parties as we call them in the ISO world, that you're leading the way. If you can understand AI risks and AI impacts when you're talking about deploying agents and bots within your business, and if you can start talking about some of these things around oversight and governance, it's going to build trust among your team and among your key stakeholders as well. This next slide is just around what certification looks like in the ISO world. I don't expect anybody on the call to put their hand up and say, hey, I'm going to go and get ISO 42001 certified. It is a brand new standard, released at the end of 2023, and it's more for organisations that are heavily using AI, developing their own AI, or consulting in the AI space.
Jason Maricchiolo [00:20:52]:
So if we're just trying to get started with some basic AI governance, I'm not suggesting that everybody go out and get certified. But for those that are interested in a general ISO certification journey, this is what it looks like. You go through an implementation program, which is about understanding the requirements of the full AI standard, or whatever standard you're looking at at that point; it could be ISO 27001 information security, it could be quality, but for now we'll keep it to AI. That's about implementing policies within your organisation about AI: having an AI policy, an AI security policy, even an AI privacy policy. We're all very used to having our generic policies listed on our websites, but now we're talking about AI privacy as well. And then you create an AI risks and impacts methodology.
Jason Maricchiolo [00:21:52]:
Like I was saying before, with that image of the risk matrix, that's having a way to actually assess what these risks and impacts might be. You then sit what we call a stage 1 audit. That is almost like a preparation audit, or a gap analysis audit if you will: your external party will come in, have a look at your organisational policies and registers and things like that, and then recommend you for the final stage 2, which is your full certification audit. That is where you'll show evidence of how you are managing your AI risks and impacts, which can be done by showing your policies, acceptable use policies and things like that, and then you'll ultimately be recommended for certification. So that's what a certification journey looks like. It generally takes six to 12 months, depending on how much AI you're actually using or how big your development is.
Jason Maricchiolo [00:22:54]:
Okay, so what can you do from here? Some tangible things for you to take away from the session today. Find your AI champions within your organisation. That might be you, or it might be someone who's been knocking on your door saying, hey, can we use AI? Find that person and let them help establish these rules by asking the right questions. If we have someone that's really keen on AI within the organisation, it's a really good way to start by asking: okay, what do we want to do? What are the risks? What are the impacts? And how do we even get started with some of this stuff? What I will say about that second box, set values, what do we stand for: I've had a lot of chats with organisations that completely ban the use of AI, and that's fine. If you're going to ban it, though, you need to communicate that to your staff. You can't just say at the top management or leadership level, we're banning AI, but then never let the staff know. Because I can guarantee you, and if there's one thing for you all to take away from this webinar it's this: you all have staff that are using a free version of an AI tool to help them with their jobs. A lot of companies will probably refuse to agree with that, but I can almost guarantee it's happening: shadow IT, or shadow AI as we're coining it now.
Jason Maricchiolo [00:24:28]:
The smartest of people will be using AI, and you do not want them putting your company or confidential information into a free version of OpenAI's ChatGPT or some of the other agents that are out there. Free effectively means that you are the product: the data being put in is training the model itself. We need to make sure that we set those values. If we are going to ban it, then let's ban it. But if we're not going to ban it, then we need to give staff the ability to use AI the way we want it to be used. Get a paid version, set your guardrails, only give it to the people that are allowed to use it, and then really make sure you're communicating to your team about what acceptable use of AI looks like for your business.
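One way to make those guardrails checkable, rather than a document nobody reads, is to capture the acceptable use rules as data. This is a minimal sketch; the tool names and banned data classes are placeholders, not recommendations from the webinar.

```python
# Minimal sketch: an AI acceptable-use policy expressed as data that
# can actually be checked. All names below are hypothetical.
APPROVED_TOOLS = {"company-paid-copilot"}                  # paid, tenant-controlled
BANNED_DATA = {"client records", "payroll", "source code"}

def check_usage(tool: str, data_class: str) -> str:
    if tool not in APPROVED_TOOLS:
        return f"Blocked: '{tool}' is shadow AI. Use an approved, paid tool."
    if data_class in BANNED_DATA:
        return f"Blocked: '{data_class}' must not be fed to any AI tool."
    return "Allowed under the acceptable use policy."

print(check_usage("free-chatgpt", "meeting notes"))    # shadow AI: blocked
print(check_usage("company-paid-copilot", "payroll"))  # sensitive data: blocked
```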
Jason Maricchiolo [00:25:23]:
And then finally, speak to your technology provider. There are experts out there that can help you on your AI journey, and you may not even know what the right questions are; they'll be able to assist you to make sure you're asking the right questions when it comes to the AI you're building and using within your organisation. That brings me to the end of the presentation. David, I know I went on for a while, but I thought I'd give your voice a little bit of a rest. For anyone on the call that is interested, you can connect with me on LinkedIn. I'm open to having a conversation with you, obligation-free of course, just to see what AI looks like within your business and what next steps could look like.
Jason Maricchiolo [00:26:15]:
You're more than welcome to connect with me. Reach out, and let's have a half-hour or 45-minute chat; that's open to everyone on the call.
David Mitchell [00:26:23]:
Hey Jason, appreciate the rest. Can I just ask two questions? The first is a multi-part one. Can you summarise the top three things that our audience can do, whether they go through certification or not (and it's probably not), to govern for or prepare for AI? There have got to be three key things that everyone can do.
Jason Maricchiolo [00:26:47]:
Yeah, I'll keep it very high level, so I'll go back to the slide from before. First, finding that AI champion. Who in your organisation has the most interest in AI? Because if we don't talk about it early, they're likely to be the one running AI in the background, trying to do the right thing but probably not doing it the right way. So find that AI champion so that you can create a little bit of an AI function within your business. It doesn't have to be massive; it could just be one or two people.
Jason Maricchiolo [00:27:15]:
But that's something I would definitely establish early on. The second piece is definitely around acceptable use. You will all likely have an acceptable use policy around your IT, as usually exists within a lot of organisations: not going to malicious websites or the wrong types of websites and things like that. I would update those acceptable use policies to set our values on what we can and can't do with AI, and get the staff to reread them as a bit of a reminder. That way, all new staff coming into the organisation for the first time will also be introduced to acceptable use of AI from the get-go. And then that third piece is to engage with experts. We're at the very beginning of the AI rush, if you want to call it that. It's not going away. It's something that we definitely need to embrace, but we need to get it under control in our organisations early.
Jason Maricchiolo [00:28:21]:
So engaging with experts would be the third one.
David Mitchell [00:28:27]:
Thanks for that. I would definitely include an AI policy in those top three, and you alluded to that with the use guidelines. If an organisation has a policy, they have thought it through and it's clear. Another thing I would add is that you're right, people are going to use it anyway, so the concept of shadow IT versus authorised IT comes into play here. So, pardon my voice, it is better to have an AI policy and a plan that is safe rather than nothing, letting people do unsafe things.
Jason Maricchiolo [00:29:12]:
Absolutely.
David Mitchell [00:29:13]:
Another quick question: how do you think that businesses like ours, like everyone that's here, should go about identifying the AI risks?
Jason Maricchiolo [00:29:25]:
Yeah, the best way to identify the risks is figuring out what you've already started to build, if you have started, or what you're wanting to build. Again, start with a pen and paper. Don't start by trying to develop the thing and then seeing what happens. I would get a pen and paper, do a basic risk assessment or risk analysis on what it is you're trying to build, and think of the things that can go wrong. Think about the consequences of whatever you're building not being configured correctly, and all of a sudden you're in the middle of a data breach. Because let's face it, it's becoming so easy for people to put a public-facing AI agent or chatbot on their websites, querying data in the back end, and you could be sitting in the middle of an accidental privacy breach within hours of deploying it. You just really need to get your mind thinking this way.
Jason Maricchiolo [00:30:28]:
This is what I want to build. What's the risk, what's the impact, and what are the consequences if it goes wrong? Could I lose my entire business because I tried to make my website a little bit more interactive?
David Mitchell [00:30:40]:
100%. And I think the point to continue with there is that basic, standard cybersecurity awareness and practices should be overlaid onto AI very carefully. I'm going to stop there because my voice is going to get worse. Jason, if you wouldn't mind having a quick look at the questions in the meeting chat.
Jason Maricchiolo [00:31:07]:
Yeah, I've tried to do my best at listening to you, David, and reading the chat at the same time, so I think I've got a general feel for the questions in here. One of them was around AI still being a concept or a goal: "yet to see any actual intelligence in it; it's really a money-making venture for Big Corp." It's a good observation, and it's a sentiment that a lot of people out there have as well. What I will say is, again, we are right at the very beginning of this journey, so the fact that we're even having webinars and podcasts about it is good; we're talking about it. Although a lot of organisations probably aren't using AI yet, or don't understand how it's going to revolutionise their business, within 12 months the conversation is going to be very different. So that's all I'll say on that one. Another question, from Rohan, was around which industries I see as needing the AI cert. I think that's a good question.
Jason Maricchiolo [00:32:15]:
It's anyone, really. In my opinion, the initial AI certs, the ones that go through that full certification, are going to be tech companies: IT companies that are trying to consult in this area, because it's going to build trust. We need to be able to show that those talking about AI are actually equipped to talk about AI. For my business, we're only a small organisation as well, but we are currently going through our AI cert, because I believe strongly that in order for us to talk on webinars and things like this, we need to understand what best in class looks like. So Rohan, it's definitely going to be anyone consulting in AI, and then it'll filter through to those that are heavily using AI, even to the point where they potentially aren't hiring staff anymore for certain functions because they're using AI so much. I think that will be the next one. But like with anything, as the years tick on it will start to flow down the supply chain.
Jason Maricchiolo [00:33:23]:
Jonathan, I haven't had a chance to go through yours to see if there is a question in there, but I'm more than happy to take that offline if you're keen, mate. I'm sure anyone else that has had a read of it has seen it, but I'm not sure if there's a question in there. How difficult would certification of a locally hosted AI system be to achieve? Let's take it offline, I reckon, because there's some deep detail in there and I'd be more than happy to chat to you, mate. Not a problem. And that's open to everybody on the call that could be dabbling in AI as well.
David Mitchell [00:34:07]:
Thank you, Jason, very much. Appreciate that. And without further ado, I'll just invite anyone to reach out; again, give me a couple of days for my voice to improve. But there's lots of conversations to be had here. I think one of the ways of identifying risks is to talk to a third party, any third party who's involved in this space, and Grassroots IT is certainly there to try to help you out as well. But thank you. Thank you, everyone, for coming.
David Mitchell [00:34:33]:
Thank you, Jason. And we’ll see you on the next one.
Jason Maricchiolo [00:34:36]:
Thanks for having us, David.