What does ‘speed’ mean in software delivery?


Tech consultants are constantly under pressure to accelerate – but what do we actually mean when we talk about ‘speed’ from a software delivery perspective? How should it be measured in different circumstances and how can trying to accelerate affect business operations?

Here’s what Ten10’s expert panel had to say:

This article is an extract from Ten10’s ‘Speed vs Quality: Finding the right balance’ panel discussion hosted on 27th June 2024. Our panellists were:

  • Emma Hargreaves, Ten10 Managing Principal Consultant
  • Stuart Day, Head of Quality Engineering at Capital One
  • Robbie Falck, Senior QA Lead at Moneybox
  • Mala Benn, Engineering Manager at Glean
  • Vernon Richards, Senior Expert Quality Engineer at Ada Health

Vernon Richards

Before I worked for my current company, I worked for a company in the financial space, similar to Robbie, and the same thing applies there. You’ve got regulations that you’re obliged to adhere to, and I find that there’s always this clash between meeting your regulatory obligations and trying to increase the rate at which you’re building and releasing technology. There’s a bit of tension there. I think they often get posed as mutually exclusive things, but I don’t think they are. I think the challenge is more of a human one, actually. There are people in the business who are not used to developing products that quickly, so you need to figure out a way to help them understand that going quickly, from their point of view, does not mean we’re just throwing products out to customers. I think there’s a constant conversation to be had between the sets of people who are used to doing things in different ways, and you have to make sure those people are constantly communicating.

Robbie Falck

I’ve worked for start-ups that have moved into scale-ups, and for non-profitable start-ups that have become profitable, and that has a lot of impact on the speed that people want to deliver at. A good example: a year ago, Moneybox was not profitable, and we were getting a lot of pressure to increase the speed of things, because the speed to market of our products is potentially the difference between becoming profitable and not. Then there’s the seasonality of things. Tax year end is a massive thing for us – it’s when we make almost all of our money. If we don’t have the right products delivered before then, we might have to wait another year.

I’ve always kind of taken the approach: I don’t really mind delivering bugs to production as long as we’ve thought about them, tried to communicate them, and have an idea of ‘this is the risk area – are you willing to accept it or not?’ For example, we might only test the core scenarios of a feature. The P2s, P3s, and P4s? We’re not going to test them. So we’re able to say ‘you might have some bugs in this area, but we’re going to deliver it a lot faster’. That’s a very fine line, but the only thing we can really do is communicate it and, at the end of the day, it’s up to someone else to make the decision as to whether we go live or not.

Emma Hargreaves

Every project I’ve ever worked on, if you spoke to the project managers and the people in charge of the budgets, they want you to do the work faster, more cheaply, and retain the quality (at least, if not have higher quality). So they want everything. In order to speed up genuinely and reduce costs, you have to get the quality right throughout the process. That’s not just testing thoroughly at the end and making sure you know what the bugs are and where the risk areas are, but it’s building quality in so that everybody’s life is so much easier throughout the whole process. I think speed and quality, for me, are intrinsically linked because without quality, everything just takes longer. Everything is just much harder work. You find way more defects, you’ve got way more fixes to do, way more testing to do. It slows everything down. It costs a lot. So for me, they’re intrinsically linked.

Mala Benn

[At] my last role at Sky, we were working on quite a complex code estate, and there was a lot of interlinking and integration into massive back-end services. So it was quite difficult to always get features and releases out to customers. There was always this tension between our stakeholders and engineering as to whether we could get stuff out quicker and develop it while maintaining quality. One thing that we started to get better at is experimentation and releasing smaller pieces of functionality so that we could get it out to customers and be able to test it and gain some insights off the back of it, then decide whether we want to invest time and spend however long we’d need to be able to then ‘productionise’ our experiments. We found that was a good way of appeasing product and stakeholders while protecting our code base and ensuring that we weren’t risking quality from a technical side. But you know, again, as many of the people on the panel have said, it was down to conversations and always getting people to discuss openly between technology and product about what changes mean and how it could impact quality. Just being really open about issues and impacts of anything that we’re going to do so that we could negotiate time and estimates for how long things were going to take. So again, I think they’re intrinsically linked, and really it’s all about making people aware of what the impact of any ask would be and how you’d be able to buy the time to be able to do it right.

Stuart Day

I think it’s really interesting that when we start talking about speed of delivery, often we start going into the more technical side of things. So, we need to do continuous deployments, and we need to have all this automation etc. Obviously, all that stuff helps, but what I’m hearing here is a lot more sort of conversational, risk-based, understanding how you evaluate that risk versus the value you’re ultimately going to get if you were to get this out quicker. Mala, in terms of experimentation, I’m a massive fan of experimenting to the right. Learning from what we’re doing in production and then feeding that back in to help us speed up. So again, when we say ‘speed’, what do we mean?

Is it time from ideation to production? Is it time from writing that first bit of code to production? Is it speed for feedback loops? All these things play into how fast we can go, but the question always becomes, how fast do we really need to go? There’s always a want. It’s like:

  • When do you want it? Now.
  • When do you really want it? Now.
  • When do you need it? Next week.

We need to start thinking about those elements as well. So obviously, when you’re then getting into that conversation around the risks or the experiments [you] want to run, what are the conversations you’re then trying to [have] and how are you mitigating the concerns?

As an example, in a previous company, we did performance testing in production. There was a lot of discussion about how ‘this will actually help us speed up in terms of knowing what we need to work on afterwards, and it will help us improve our monitoring in production, because there’s only so much we can do in the lower environments’ and things like that. But you have to get through those conversations of ‘what are we doing to mitigate the impact should something happen to the production site?’ So maybe you start small, iterate, and experiment, right?