Getting to Grips with Digital Service Design

Amy Richards leading Y Lab’s workshop in Llanrwst

How might public services begin to design their services digitally? Jess Hoare and Amy Richards from Y Lab look at the key things to think about when you’re starting off on your digital service design journey.

A few weeks ago the Wales Audit Office invited Y Lab to run a workshop at their Digital Seminars. These seminars were lively events with some really great questions coming up during the panel session. Here’s one of the meaty ones:

What are the key areas of focus for any organisation looking to redesign services?

We thought it might be useful to reflect on the discussion that followed this question and offer some practical advice. Through the work of the Digital Innovation Fund, we’ve concentrated on addressing three main areas: skills, culture, and tools.

To keep this succinct, we’ve summarised some of the key points raised in our workshops under each of those headings:

Skills

  • Get to grips with the basics of service design. Always, always start with user needs. A lot has been written on how to go about this. As a starting point, I would recommend taking a look at some of the brilliant resources shared by Government Digital Service.
  • If you want to enable digital service design, be brutally honest about who is best qualified within your organisation to lead it. Who’s good at UX? Who’s interested in doing more user research? Who has most recently mapped the services your organisation offers? Get them in a room together.
  • You need to be able to build agile, interdisciplinary teams that can work iteratively. That doesn’t happen overnight, but it is important to start with a team that knows what they are working towards.

Culture

  • Don’t just recruit talented people; develop those already with you.
  • Be clear about career advancement, company culture, and training and development opportunities.
  • Allow ideas to be challenged and championed.
  • Ensure your leadership is committed to cultural change and supports risk-taking.

Tools

Jess Hoare taking part in the panel discussion in Cardiff

Y Lab’s Innovation Process has been created to help organisations solve challenges using design methods. The process is split into three steps: Explore, Generate and Evaluate. The basis of our process is that the better you understand the problem, the better you understand user needs, the lower your risk of failure, and the more efficient and effective your solution will be.

Explore consists of questions that help you fully understand the problem, get a clearer picture of what it is you need to solve, and ask yourselves some crucial questions about the resources you need and how you might measure the project’s success.

It is at this point in the process that assumptions about user needs tend to be made, and this is where user research steps in. It’s much better to admit you don’t know everything than to start making assumptions about what the user needs and get it wrong. Our user research tools will enable you to add further detail before you begin to think about solutions. Journey mapping and user personas can add valuable insight.

Generating Ideas…

You’ve now got a better understanding of the problem and user needs, and you’ve written a brief (without realising it), so it’s on to the fun part. Our Generate section is exactly what it sounds like: we encourage you to sit down as a team and start coming up with ideas, constantly reflecting on your findings from ‘Explore’ to ensure that your solutions are relevant and to decide which ones you should take to the next stage and start prototyping.

Evaluate (through prototyping and testing)

Prototyping seems to be the part most people are scared of; it’s the part of our process where ideas are really put to the test and where flaws can be uncovered. Service blueprints, storyboarding and paper prototyping are invaluable, and can be put in front of users, tested and refined to reduce the risk of failure in the long run. It’s much better to fail now than to fail after you’ve made that big ‘investment’. Evaluate your ideas and solutions against your findings from the ‘Explore’ section: is this really the best possible solution? If not, throw it away and start again; you can’t make a bad idea good.

Final thoughts…

The business of innovation can be messy and tricky, and is often fraught with challenges to overcome. The work put in by those we worked with through the Digital Innovation Fund was considerable. There was a great appetite and enthusiasm for responding to challenges practically, through a structured innovation method and cross-sector collaboration. In the most successful cases, we can see how involvement with the Digital Innovation Fund has had a wider impact across the organisation, bringing in new ways of working and opening up conversations about the potential for digital forms of innovation.

The relevance of working in this way has run through the ideas, workshops, and conversations that have taken place since we began our work on the Digital Innovation Fund. This appetite and enthusiasm for new ways of approaching challenges was certainly echoed at the workshops we ran with the Wales Audit Office, and we’re looking forward to the next seminars in the series.

Learning from failure in complex environments

In his second blog post on the Learning from Failure workshop, Dyfrig Williams looks at failure in a complex environment.

It’s now been a few weeks since the Learning from Failure workshop, and my subsequent admission that I haven’t been very good at learning from my own failure. The event took place at the Wales Audit Office, so it was perhaps inevitable that we discussed the role of audit in learning from failure.

Systematic failure

James Reason’s Swiss Cheese Model. Source: BMJ 2000 Mar 18;320(7237):768–770

Chris Bolton’s presentation was on James Reason’s Swiss Cheese model of failure, which compares human systems to layers of Swiss cheese. Reason chose Swiss cheese for a reason (see what I did there): each layer is a defence against mistakes and errors, and things go badly wrong when the holes line up. There’s an interesting critique of the model in the comments by Matt Wyatt of Complex Wales.
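To make the idea of the holes lining up concrete, here is a minimal sketch (not from Chris’ presentation; the probabilities and function names are illustrative assumptions). Each layer of defence is treated as an independent barrier that lets an error through with a small probability, and an incident only reaches the service user when every layer fails at once.

```python
import random

def incident_reaches_user(hole_probabilities):
    """Return True if an error slips through every layer of defence."""
    return all(random.random() < p for p in hole_probabilities)

def failure_rate(hole_probabilities, trials=100_000):
    """Estimate how often the holes line up across all the layers."""
    hits = sum(incident_reaches_user(hole_probabilities) for _ in range(trials))
    return hits / trials

# Illustrative numbers only: four layers, each stopping 90% of errors.
print(failure_rate([0.1, 0.1, 0.1, 0.1]))   # roughly 0.0001
# Weaken one layer (widen its holes) and errors get through far more often.
print(failure_rate([0.1, 0.1, 0.5, 0.1]))   # roughly 0.0005
```

The exact numbers don’t matter. The point the model makes is that no single layer has to be perfect, but every layer you remove or weaken sharply increases the chance that an error travels all the way through.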

After a good Twitter conversation on the merits of different types of cheese as defence (I went patriotic and chose Caerphilly – ‘I crumble in the face of failure’), I looked at a model that Matt has developed, called the ‘Timeline of Inevitable Failure.’

Whereas the Swiss Cheese Model is a reflective model (you look back and check out the failure after it’s occurred), Matt’s model is interesting as it offers opportunities to reflect on failure and its consequences at different stages, which fits in with a systematic approach to failure and chimes with some of the thinking in my last post on examining failure rigorously.

To be able to rectify failures at the early stage of the timeline, we have to be open and frank about failure, or issues will escalate and become bigger problems. By being comfortable with minor instances of failure, we’ll also be better prepared for when things go drastically wrong. As Matt says in another comment, ‘complex living systems will always fail, so instead of trying to make them failsafe, it’s much more useful to make them safe to fail.’ It’s well worth reading Chris’ post on Trojan Mice, which are safe-to-fail pilots, before delving into a video of Dave Snowden discussing them as part of the Cynefin Framework.

You can see this approach in action through the work of the Bromford Lab and Dublin City Council’s Beta Projects. In terms of the latter, it’s worth checking out how their painting of traffic signal boxes led to less tagging and graffiti.

What does this mean for audit and audited bodies?

Aside from the recommendation in the workshop to take your auditor out for lunch to better understand their approach to failure (which I’m completely on board with, by the way!), this all relates to the complex environment in which public services are delivered and audited.

In Wales, this environment is about to change fundamentally with the Well-being of Future Generations (Wales) Act. It’ll require a shift in thinking for organisations, as they’ll have to improve people’s wellbeing without compromising the ability of future generations to meet their needs. It’ll also be a challenge for us at the Wales Audit Office – it’s difficult to measure success when you don’t know what the future will look like. There’s a great post by Ann Webster, Assistant Auditor-General of New Zealand, on the Wales Audit Office blog that outlines these challenges.

We’ve already shared steps that organisations can take to report effectively, including integrated reporting, at a seminar we held with the Sustainable Futures Commissioner. But in terms of this event, I was struck by some simple steps that organisations can take to evidence improvement. Jonathan Flowers gave a great example of how a Neighbourhood Network Scheme Manager asked for two instances a month of how the service had improved people’s lives. These narratives show that the service is moving in the right direction and can be used at the project evaluation stage.

Where now?

When it comes to evaluating our project, we’ve been gathering examples of how our work has led to organisations adopting good practice. These often aren’t measures in themselves, but complex case studies of how services have changed.

And in terms of our work, it’s important that we continue to have these conversations about failure, so that it’s normalised and people can be honest about it. And if we can do that, we’re in a better place to help organisations take further steps to improve their services.

Failing to learn from failure

How can public services make use of learning and information that result from failure? Dyfrig Williams blogs on learning from failure.

Last week I attended the Learning from Failure workshop in Cardiff. Before I go any further, I should clarify that these reflections are very much on my own process of working, rather than the work of the Good Practice Exchange.

The event was a bit of an eye-opener, as it gave all of us as participants the scope to look at aspects of our work that aren’t traditionally discussed. But why not?

In her paper on ‘Strategies for learning from failure,’ Amy C. Edmondson shares the Spectrum of Failure, which shows that blameworthy failure rarely results from the actions of any one individual. So why do we still tend to think that an effective and productive workplace culture is one that shuns failure and casts blame at all costs?

Order, order!

In the group exercise, each table designed an enabling environment for innovation, and each one was an environment where failure was accepted. Which makes sense, because by cracking down on failure we’re not encouraging innovation; we’re cracking down on ideas for new ways of working.

I was given the task of feeding back our table’s thoughts, which were based around the point that an enabling environment is complex and messy. What I personally meant by this is that a traditional approach, and the way that I’ve tended to approach innovation, is messy. My own innovation has often been a by-product, rather than an outcome or focus I’ve chased. It’s not been something that’s been identified in my appraisals, and it’s not been something that I’ve had to report back on to measure success.

This messiness is complicated even further by the environment that we’re working in. There is no one-size-fits-all solution for issues around public service access and delivery. What works in one community may not work in another.

However, there are approaches to innovation in the wider world that are much more rigorous. It was interesting to look at the work of the Bromford Lab, who have a much more structured approach to innovation through testing ideas. One of their founding principles is to ‘fail fast’, which means that they uncover lots of information and learning from their tests.

This planned approach makes it much easier to measure the success of innovation. Whilst there is undoubtedly innovation taking place at Bromford outside the realms of the Lab, taking issues and ideas to the Lab gives scope to evaluate them in a more formal way. This is at its most obvious on their Trello board, where evaluation is planned and built into their testing process.

Making space to evaluate

Making the time to evaluate impact has been the main learning point for me. I’ve tended to treat evaluation as something I do when I get the chance, rather than a process to embed into my work. And if you don’t take the time to properly look into why something’s failed, then there’s little chance to learn from it.

And so my more rigorous process of evaluation begins. The dates are now in the diary – I’ll be taking some time out from my day-to-day work and getting on with my evaluation. I’ll be leaving the physical space of my desk, which will help me get away from the usual distractions like email and piles of paper, and get me in the right headspace to approach a different aspect of our work. Wish me luck!