Practical considerations for AI adoption: Part 2

In the second episode of our three-part Ten Minutes with Ten10 series, Ash Gawthorp and Coincidencity Founder Miriam Gilbert discuss how to turn the abstract idea of ‘doing AI’ into practical reality, and which small projects can prove value and build confidence in AI before investing heavily in the technology.

Miriam: Many people talk about how they want to “do AI” – and they have some ideas about how it could really support their business, or their team, or their region. But when you dig deeper, you find very quickly that a lot of these are ethereal concepts rather than clear ideas of what they actually want AI to be doing for them.

What would you suggest are good ways to move from that ethereal concept into something that’s actually tangible and delivers business outcomes?

Ash: I think step one is: define a pilot. Identify a suitable pilot – whether that’s a workflow or an activity that a team does – with clearly defined boundaries, in terms of what’s expected of it, what the inputs are, what that looks like. And then once you have that, you’ll be able to move forward. Now, in terms of selecting that, you don’t want to boil the ocean. You want to go for something which is quite small in its scope, which isn’t mission-critical, and which at the same time is reasonably self-contained – something that doesn’t touch many different parts of the organisation, or need data from all the different areas. I think once you have that, the next step is putting some structure around it in terms of a project: decomposing it into exactly what steps are required and what its requirements are.

Miriam: And I guess it’s crucially important to – on the one hand – yes, have that self-contained pilot and have clear boundaries, but also to bring more of the business community with them. Bring more people with you along that journey. Because unfortunately, what we’re seeing quite a lot in organisations is like, “Yeah, there is an AI team that’s doing something over here, and the rest of the organisation doesn’t really know what’s going on”. And sometimes they don’t even know who’s involved and what it’s trying to solve.

I think there is a real risk, first up, that you’re missing vital input. Say you’re mapping certain processes: if you’re missing the input from frontline users, for example – who currently have lots of little workarounds going – you will be missing some key input. Secondly, the big risk is adoption afterwards, because we all know that it’s harder for people to change the way they’ve been doing things if they feel it’s not been invented here – that somebody else is telling them this is the new way to get things done.

Ash: It needs to be self-sustaining as well, or you need some mechanism in place to make it so. I mean, I guess one of the classic challenges with that consultancy model is that you have a consultancy come in to advise you on what’s required, and they can sometimes miss a number of those workarounds because they haven’t properly engaged with the people on the ground using it day in, day out. And then when they ride off into the sunset, what are you left with, essentially?

How do you have the teams maintaining it? So that’s something that we really focus on: after we’ve built these things, leaving people behind who can run with it as permanent employees of the organisation. But as well as the tech, that hearts-and-minds piece with the individuals who are doing it is so important.

You’re absolutely right, you need to get that buy-in. And if they feel it’s being forced on them, then they will revert to old ways of working. You have to have an easy path to maintain it and to upgrade it with the input from those users as they use it and identify changes to it.

You mentioned a really interesting point there about the AI team. I think one of the challenges – and we touched on this in the last one – is that there isn’t a clear definition of what AI is. Often in organisations you have large data teams that maybe have some data science and machine learning capability, and they become the “AI team”.

But if AI is also Agentic AI and automating workflows, that team is not the team that you want going in there speaking to the business users to identify these use cases, decompose them and drive that change through an organisation. So I think there’s often a discontinuity there as well.

Miriam: Very much. We’re seeing that live all the time, from clients who say to us: “Well, we’ve had the consultancy in, we’ve had the tech developed, we’ve done the skills training, we’ve done the hackathons, and we’re still not seeing a lot of movement – or we still have a lot of pilots just sitting there, not going anywhere, not scaling, not delivering”.

There are multiple reasons, but they almost always come back down to the humans involved. Almost invariably, you find that they haven’t involved the wider business community from the start. They haven’t helped their people understand how they can contribute to shaping what the future of work with AI will look like – whether it’s machine learning, whether it’s Agentic, whether it’s individual productivity. They haven’t spent that time and they haven’t listened to those frontline users sufficiently.

Having the odd town hall or hackathon where everybody can create – yes, these are nice – but that can only ever be part of the behaviour change that you really need to instil. Behaviour change really starts with an identity change. So people have to consider and reconsider how they feel – it is linked to emotions, not just logic – how they feel they deliver value with their work, how they bring value to their role and to their organisation. And when you tap into that, it’s quite amazing how quickly you can create change. You don’t need six months of behaviour change programmes; you can actually switch people’s attitudes very quickly. And with AI, it’s perfect, because the element of – I want to call it “play” – is almost built into the success.

The more they play, the more they experiment, the faster they get results. Rather than waiting to be trained in a formal way and going through formal steps.

Ash: There’s a balance there around encouraging people to play, which I think is essential. For me, anyway, what that playing means is actually trying something, but without being on the hook to have this thing delivered by a certain date – because there are so many unknowns in it that that will just instantly make people fearful. So allowing them the freedom to explore these tools, see what they can do with them, learn from them and get a level of comfort with them is, I think, really important.

Ash: So, in terms of small-scale projects that organisations can adopt, one of the key things with Agentic AI is being able to use LLMs (Large Language Models) within your organisation to make simple decisions for you as part of workflows.
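To make that idea concrete, here is a minimal sketch of an LLM acting as a single decision step in a workflow – routing a support ticket to the right queue. The llm_decide function is a hypothetical stand-in for a real model call; it is mocked with keyword rules so the snippet is self-contained and runnable.

```python
# Minimal sketch of using an LLM as a simple decision step inside a workflow.
# llm_decide is a hypothetical stand-in for a real model call; it is mocked
# with keyword rules here so the example runs on its own.

from dataclasses import dataclass


@dataclass
class Ticket:
    sender: str
    body: str


def llm_decide(instruction: str, text: str) -> str:
    """Stand-in for an LLM call that must return one label from a fixed set."""
    lowered = text.lower()
    if "invoice" in lowered or "refund" in lowered:
        return "finance"
    if "password" in lowered or "log in" in lowered or "login" in lowered:
        return "it_support"
    return "general"


def route_ticket(ticket: Ticket) -> str:
    """One workflow step: ask the model which queue should handle the ticket."""
    return llm_decide(
        "Classify this ticket as finance, it_support or general.", ticket.body
    )


if __name__ == "__main__":
    print(route_ticket(Ticket("a@example.com", "I cannot log in to the portal")))
    # -> it_support
```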

But I think as well, one of the key areas that many organisations struggle with is the idea of knowledge – being able to access that knowledge and being able to use it. And so there’s that concept of RAG, or Retrieval-Augmented Generation: the ability to ask an LLM questions and have it derive answers based not on what it was trained on, but on your own data, from a variety of different areas – and those answers sound very believable. So, say you’ve grounded it on a whole load of HR documentation, for example, and you ask it: “How many days’ holiday am I allowed at this point in time?” It could come up with a wrong answer, but the answer would be entirely plausible.

That’s dangerous enough internally; it’s even more dangerous if that’s being offered externally. So there’s the understanding of how to prompt and how to configure those LLMs so they don’t hallucinate – you can constrain them to answer only from the data you provide – and being able to put that structure and rigour around it.
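Here is a minimal sketch of that grounding idea, assuming a small in-memory document store and a toy keyword retriever; the retrieval and prompt-building steps are illustrative stand-ins for whichever vector store and LLM client you actually use.

```python
# Minimal sketch of "answer only from the provided documents" grounding.
# The retriever and the final model call are hypothetical stand-ins for a
# real vector search and LLM client.

def retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(question: str, context: list[str]) -> str:
    """Constrain the model to the retrieved context instead of its training data."""
    joined = "\n\n".join(context)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    hr_docs = [
        "Holiday policy: all employees receive 25 days of annual leave.",
        "Expenses policy: claims must be submitted within 30 days.",
    ]
    question = "How many days holiday am I allowed?"
    prompt = build_grounded_prompt(question, retrieve(question, hr_docs))
    print(prompt)  # pass this prompt to your LLM client of choice
```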

And then being able to make these things scalable. You know, you can have a small-scale project like that – a knowledge base with maybe a few documents in it – and it will work really well. But how does that scale when you have 100,000 documents in it? How does that scale when you have a thousand users accessing it at the same time? If you get that wrong, what you end up with is a pilot which works functionally on a small scale, but when you actually come to scale it up, you’ve got to almost throw that pilot away and start again. I guess there’s some value in that, in that you’ve demonstrated it. You’ve shown it to the business and they’ve said: “There’s value in this. Let’s take it forward”. But what we really focus on is building these things right from the start. There are so many frameworks out there, so many tools, and new ones come along on a virtually daily basis. By the time this podcast goes out, there will probably be three new identical frameworks that have appeared.

But the point is that businesses need to build something which is sustainable. You can’t just keep iterating whenever something new comes along. That again comes back to basic engineering principles: it needs to be built right from the start, it needs to be scalable, it needs to be secure, and you need to have an understanding of what your cost looks like. That’s one of the big problems with LLMs – if you’re just firing lots of tokens into this thing, the bill is going to go up very quickly. So, having some engineering rigour around that. And a lot of this has been solved already: you’ve got the big hyperscalers, the likes of AWS – full disclosure, our brands Ten10 and The Scale Factory partner with AWS, and a lot of what we’ve built is on AWS – as well as GCP and some of the others. Essentially, they have a lot of that infrastructure in place to be able to build this stuff with the rigour it needs to be practical in an enterprise rollout and an enterprise application.
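As a rough illustration of how token volume drives that bill – using entirely made-up placeholder prices, not any provider’s real rates – a back-of-the-envelope estimate might look like this:

```python
# Back-of-the-envelope LLM cost estimate. The per-token prices are illustrative
# placeholders only, not real rates for any provider or model.
PRICE_PER_1K_INPUT_TOKENS = 0.003   # hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # hypothetical


def monthly_cost(queries_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate a 30-day bill for a given query volume and token footprint."""
    per_query = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return per_query * queries_per_day * 30


if __name__ == "__main__":
    # A RAG query that stuffs 4,000 tokens of retrieved context into each prompt
    # costs more than double a bare 200-token question at the same volume.
    print(monthly_cost(queries_per_day=1000, input_tokens=4000, output_tokens=500))  # ~585
    print(monthly_cost(queries_per_day=1000, input_tokens=200, output_tokens=500))   # ~243
```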

So it’s almost a case of building on those at the start, with the foundations in the right place, because then, irrespective of whatever comes down the line – and lots of new things inevitably will – you’re going to be able to build it into the framework you’ve already got, and you’re not going to need to throw it away. So it’s a question of having that pilot, but at the same time starting off on the right footing, so you don’t need to throw any of it away.

Miriam: You’re making some very good points there, in that the pace of change with AI is incredible, and I see some organisations saying, “Oh, we will just sit back and wait to see what lands.” Well, they will be sitting back a long time – so long that they fall over at the back! That is not a position you can stay in if you want to stay competitive. Yet at the same time, what you describe in terms of having a robust platform is absolutely crucial. I’m likening it a little bit in my head to building a new car: I’m not necessarily going to completely reinvent the differential that drives the wheels. There might be improvements in the mechanics, but the principles will be the same. So that’s a good platform on which to build a car, and then all the rest – whether it’s a combustion engine, whether it’s electric or whatever – can be changed over time. So you’re building on that robust platform.

Now, one thing that I believe we need to add to that – and we’ve seen great results with organisations that do this – is that alongside the engineering rigour you have to have the “people rigour”, so that people develop what we call AI emotional intelligence. That allows them to almost stand up to the AI: to be critical of the answers they get and not be swayed by the very, very persuasive answers that AI can give. If you ask it, “How many days’ holiday have I got left?” and it tells you “500”, you can go, “Okay, this is probably not right”. But also when it says, “You have 25 days left”, you go, “Hmm, is that reasonable?” That’s a very simple example of AI emotional intelligence, but you need to dig deeper, because there are a lot of scenarios where the answer can be very plausible.

You are not necessarily the expert on the topic, so you will need to find ways to ascertain whether the answer is right – and whether the answer is acceptable as well. I had a conversation with a law firm just recently who are very concerned about AI giving highly biased advice, for example, which could open employers up to all sorts of risks, particularly in the HR space.

So you, as the user, need to develop that emotional intelligence. In the same way that I can look at you, Ash, and see you are smiling, nodding, agreeing – so we’ve established an understanding – versus somebody who might be very standoffish, where I could read that maybe they’re not understanding my message and work with that, you have to do similar things with AI to assess it. Is it correct? Is it ethically right in what it says? Should it give this answer in the first place, or should this maybe be something that’s reserved for human judgment? The one test that I always like to put to my clients is: “If you were to use AI and that use was going to end up on the front page of various tabloids, would you be happy to stand there?” That really sharpens the focus on the AI emotional intelligence that you need to develop.

Ash: I think that’s a really key point, and it brings in both the people side and the engineering side, particularly around the rigour of testing and making sure that it’s accurate. To take the people side first: people need the skills – and I think that’s a very easy thing to say – but they also need that emotional intelligence to be able to know whether an answer is plausible. And so much of that, I think, comes from understanding.

We have a phrase we use all the time: “It’s not magic, it’s just maths”. It’s very complex maths, but nonetheless it is still maths. I think if you’re not aware at some level of how these things work and their limitations, then you are prone to believing it’s just magic. So that’s the people side: people understanding the limitations. The engineering side is the quality engineering and testing, to make sure that it does work. Take that point about the 25 days of holiday, for example: it might be that the organisation adds more days with time served, so when somebody asks, the system also needs to look at how long that individual has been with the business to determine how many days’ holiday they have. Or if the company suddenly says, “Hey, we’re going to give everybody 30 days’ holiday” and the document is updated to reflect that – how do you make sure all of that is updated, so that when somebody asks how many days’ holiday they have, it now says 30 and stops giving the old answer? That’s relatively straightforward to test, as you said. But that point around bias is much harder. There are frameworks that exist that allow you to do that, which are essential, but we’re moving away from “does it work functionally?” towards questions of security and how the answer is formed.

Are we at risk, as you mentioned with that law firm, of having something which is biased? How do you measure that objectively? How do you say, “Yes, we’re happy with this” – and also keep monitoring it? It may be very unbiased when you roll this thing into production, but over time it may drift. So you need to monitor that and be able to take action when it happens.
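The holiday example above translates naturally into a simple regression check: when the policy document changes, the assistant’s answer must change with it. A minimal sketch, with ask_assistant as a hypothetical stand-in for the real retrieval-and-generation pipeline:

```python
# Sketch of the regression check described above: when the policy document
# changes, the answer should change with it. ask_assistant is a hypothetical
# stand-in for a real RAG pipeline; here it simply reads the allowance out of
# the supplied document so the example runs on its own.

def ask_assistant(question: str, policy_document: str) -> str:
    for token in policy_document.split():
        if token.isdigit():
            return f"You have {token} days of annual leave."
    return "I don't know."


def test_answer_tracks_policy_update() -> None:
    old_policy = "All employees receive 25 days of annual leave."
    new_policy = "All employees receive 30 days of annual leave."
    question = "How many days holiday do I have?"
    assert "25" in ask_assistant(question, old_policy)
    assert "30" in ask_assistant(question, new_policy)  # must stop saying 25


if __name__ == "__main__":
    test_answer_tracks_policy_update()
    print("policy-update regression check passed")
```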

Miriam: So these are the gaps that we address when moving from pilots into large scale, and also the points that need to be addressed to stop organisations just being stuck in what people call “pilot-itis”.

Ash: That’s right, or maybe even to your point, not even getting started, because they think something new is going to come along and, you know, “when do we actually get going?”

Our Presenters

Ash Gawthorp, Chief Technology Officer and Co-Founder of Ten10

Miriam Gilbert, Founder of Coincidencity

Work with AI experts

Let Ten10 show you how AI can transform your organisational operations. By automating complex workflows, your team is freed up to focus on strategic decisions and driving business growth.

Together, we’ll unlock the potential of AI, enabling your workforce to concentrate on what they do best.

Contact us to learn more