As part of each topic workshop during our Transformation process, we’ve been creating assumptions. These let us hypothesise what users of our services might think or feel. In turn, this helps us to create some very basic personas, which give us a feel for how some of our user journeys flow.
Who makes these assumptions?
These events usually have around 10 to 15 people in attendance, drawn from as many stakeholders and areas of the business as possible. A typical one might include researchers, UX, data analysts, content, subject matter experts (we’re trying to use subject matter experts where possible, to ensure our outputs aren’t too fanciful and that we have as much information as possible), transformation team members, NHS England and anyone else we can rope in.
The benefits of having people involved from the start are numerous: we get lots of opinions, loads of information, viewpoints we’d otherwise probably not consider and, perhaps most importantly, people from different backgrounds in a room together trying to improve what we do.
How do we track and prioritise these assumptions?
The basic principle of these sessions is to create a board on which we plot our assumptions. This has a vertical axis describing our level of certainty or uncertainty about the assumption, and a horizontal axis describing urgency. Assumptions are then created, usually in smaller groups on post-it notes (a handy supply of which is vital to any transformation project), and added to the board, where their urgency and uncertainty are agreed.
This allows us to do three things:
- get a backlog of things we can test, through user testing and analysis of data and evidence sources
- get a rough idea of how our users might be using our service, or how it might look ideally (as well as establishing where potential issues might arise)
- give the user research team an early idea of the sort of people we might need to speak to
A point to note here – these assumptions are just that. They’re not set in stone, and while they come from people with expertise in the relevant areas, they may not reflect the real world. Because we’re working quickly, we can add new assumptions if we need to, and move existing ones up or down the axes as we get more information, allowing us to refine our areas of focus. If we’re completely wrong, at least we haven’t spent six months creating giant documents, and we can hopefully use what we’ve learned to make the next iteration better. As Roger said, we might get some things wrong, but we’ll get them wrong fast, learn, and move on.
What do we do with these assumptions next?
Another tool we’ve created to collate evidence from different sources is an assumptions grid. This maps all the assumptions we’ve made against sources of information (data logs, qualitative and quantitative research, and so on). We review the grid each week, summarising the evidence, to keep us on track and make any changes to priority; it also feeds into our user research, helping us to fill gaps in our knowledge. This then feeds into everything else we create, such as personas, user journeys and visions/objectives, which we’ll look at in detail in future posts.