Vicki James, our Head of Design Operations, recently wrote about our AI hack day. We wanted to share our experience of the day and some of the lessons learned while exploring the idea our group focused on.
We ran three discovery sessions, each building on the last, to explore:
- How might AI tools help us to understand new problem spaces, and to frame problems in ways that foster good design?
- How might AI tools help us to analyse and understand the data and insights that support a design process?
- How might AI tools help us to ideate throughout a design process?
We knew we had to be mindful of data protection and shouldn’t add any personal, private, or confidential information into the tools we used. But we didn’t want to limit our initial exploration. So, we got to work with an imaginary organisation, in an imaginary place, with an imaginary problem around their library spaces. A problem that is just like the type of design challenges we experience all the time at Essex County Council.
Our group was made up of a mixture of folks across different User-Centred Design (UCD) disciplines and beyond. This included Service Designers and an Interaction Designer, a Delivery and Product Manager, a Design Operations colleague and someone from our Technology Services team.
How can AI tools help us to understand new problem spaces, and to frame problems in ways that foster good design?
At the start of any piece of work, we spend time exploring the problem space to build our knowledge and better define the problem we are looking to solve. During the first session of the day we wanted to explore how AI tools might help us to work more quickly and speed up the time it takes for us to become a valuable strategic partner to the colleagues we work with.
We started with a brief that contained the following challenge:
The traditional role of library services is under threat due to budget cuts and a decline in library usage. You have been asked to assist the council in reimagining the future relationship between libraries and community services.
Humans vs. Machines
We thought it would be interesting to run a human versus machine experiment in two teams. Both teams had the shared goal of creating a punchy, action-oriented problem statement based on our 2-page brief. One team (team human) did the work manually, the other experimented doing the same task using ChatGPT (team machine). We gave ourselves 30 minutes and got to work.
Team human approached the challenge in our usual way. We got round a table with a printed version of the brief and asked ourselves the questions we always start with.
- Who are our users and what outcomes are they looking for?
- Why are we doing the work and what outcomes are we looking for?
- What are our key metrics and what will success look like?
We talked it through as a team, highlighted the bits we felt were important and scribbled on post-it notes. We worked through the brief until we felt confident that we understood the problem, the goal, and the gap between them. We created a sea of notes but didn’t quite get to a fully formed problem statement within the 30 minutes.
The story was quite different on team machine. They fed the full brief into ChatGPT and then started to interrogate it, asking the tool to summarise the most important information. The team soon discovered that there was an art to asking ChatGPT the right questions to produce a better standard of problem statement. Results were more refined when they gave ChatGPT a framework of questions to work through, rather than asking it to produce a problem statement outright. But there were still flaws: ChatGPT kept wanting to stray away from the problem statement and jump right into solutions mode!
They also had time to pull together some high-level comms and an elevator pitch to talk about the work in a quick, easy-to-understand way. They even created a virtual human avatar, who presented their elevator pitch just like a real human. They seemed to be very productive, whilst having a LOT more fun than those poring over the brief on team human.
Comparing our progress
We came back together and shared our progress. Team human wearily dumped a big stack of scribbled post-it notes on the table and had to admit that they hadn’t been able to finish within the time allowed. Team machine tried not to look too smug as they scrolled through their multiple problem statements, various elevator pitches and even their shiny virtual human presentation. It looked like team machine had got this one in the bag. Should we all be worried?
But then it got interesting. We started to look in more detail at the problem statements that had been generated by ChatGPT. As we dug into the detail, we realised that most of the questions – and most of the answers – were coming from team human. Team machine had been much more productive, but their levels of understanding around the problem space were much lower than those that had taken a manual approach. They had lots of information to work with, but had spent their time creatively splicing information, without really thinking deeply about the problem space. It’s pretty obvious why, but a useful insight into some of the pitfalls of moving to answers or solutions too quickly.
One area where we saw good results from the AI tools was around how we might measure success once we had defined our goals more clearly. ChatGPT quickly generated a range of measures that we could consider, as well as ideas for ways to capture and collate this information. It was a useful shortcut that we’ll probably use again.
What we learned
As we reflected on our first session, we drew a few conclusions:
- Using AI tools isn’t the same as understanding the problem. It’s easy to lose the meaning when relying on the technology alone. Team human had a good understanding of the problem space and felt confident talking and thinking critically about it. Team machine felt less able to interpret what ChatGPT had produced, because they didn’t have the same depth of understanding. It appeared that those relying on the AI tools were the least able to judge the quality of the results. This is a problem!
- The polished results from the AI tools look good, so we need to be wary of the ‘halo effect’ when using them. We know how easy it is to be persuaded by a snappy presentation or catchy video, but what if these versions are more believable, but wrong? Combined with the previous point, this could create a dangerous level of risk.
- Collaboration is vital to the work, especially around human-centred services and experiences. AI tools can help distil and organise information. But when we do this work together, we create a common understanding of the problem, the constraints, and the context. We immediately build a sense of trust and see how each other’s skills can help us to move forward. This is vital when most of our work is focused on making changes to the way we work.
- Analysis that needed little context worked well and could be a real time-saver. We could quickly distil a list of primary users and their goals, or a set of measures to consider, from a long-form brief. This felt a bit like magic. Whilst the results weren’t right every time, we felt it could help us move forward at speed, and it even threw up some options that we hadn’t considered. We could see merit in using AI to create a quick first draft, which could then be pulled apart in a workshop.
- But any form of contextual or nuanced analysis felt risky. Maybe with the right prompts and more time we could get better results. But is this worth the effort, and do we still need to do the hard work first to know if we are on the right track?
It's fair to say that after this strong start, our brains were buzzing from all the possibilities. This test was a great way to explore and highlight some of the opportunities and pitfalls that lie ahead when using AI tools to support our early knowledge building and problem framing.
And that was only the first session of the day! Keep your eyes peeled for a future blog post where we’ll share what we learnt during the other sessions.