This article is part of the Snook spotlight series – where we bring you in-depth conversations with the best and brightest research and design thinkers. 

In this edition, Kat Dixon interviews Liam Hinshelwood, the new Head of Service Design at Snook. They talk about the emergence of ChatGPT, its uses in research and service design, and wider seismic change.

Liam has been at Snook for 5 years and recently moved into his new Head of Service Design role. Kat is Snook’s Third Sector Lead, with a background in digital services.

This article was written using AI transcription, some ChatGPT prompts to inspire the conversation and human editing.  

Opening the conversation – what are language models? 

Kat: First Liam, I’m going to put you on the spot. If you were going to explain ChatGPT to a colleague and you just wanted them to get a grasp of what a language model is, what would you say?

Liam: In the broader sense, it’s an amazing tool that could help us significantly reduce the time we spend on some tasks, because it takes some of the mental processing out of the picture for us. However, there are limitations and consequences that are ill-defined and that we need to understand. Unfortunately, some of those we’re only going to find through using it.

Kat: Yes, and not that I am any kind of tech expert, but for me the fundamental difference between this and a Google search is that Google is seeking out information: you ask it questions and it finds you sources of information. Whereas ChatGPT is predicting the most likely next word in a sequence of words, based on a huge amount of data.

“And that means that the most likely next word, the most statistically probable next word in this sequence isn’t the same as a fact”
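To make that concrete, here is a deliberately toy next-word predictor in Python. It is only a sketch of the idea – real models like ChatGPT use large neural networks over sub-word tokens, not a word-frequency table, and the tiny corpus here is invented for illustration – but it shows why the most statistically probable continuation is a guess, not a fact.

```python
from collections import Counter, defaultdict

# Invented toy corpus – real language models train on vast datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = next_words.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" – the most probable continuation, not a fact
```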

Liam: Absolutely. And this is where hallucinations start to become a big issue… we don’t entirely know what sources of information these language models are drawing on. They could draw on fiction as much as on a news article or an academic paper, which leaves the truthfulness of these things quite questionable.

Kat: And they appear factual as well. My favourite example is that if you ask for an academic reference, ChatGPT will give you one that looks like a real reference, because it’s good at imitating and producing statistically likely strings of words – but it’s nonsense.

Liam: And you can ask it to create a whole academic paper for you, with references, and it’ll fake the whole thing – there’s no validity to many of the references that fall out of it.

Keeping up with the pace of change 

Kat: Absolutely. So we’ve been talking about ChatGPT a lot at Snook and Liam, you recently ran an internal Learning Session on this for the Snook team, right?   

Liam: Yes. Early doors, we saw this as an opportunity to open up a conversation more broadly: what is this new technology and what does it mean to us? How do we use it? What questions do we need to ask ourselves to help understand our ethical position? What are the implications for our practice and for our clients?

“I think it’s one of these things where if you’re not involved in the conversation, you can’t take a seat at the table”

Liam: So I think the more we can socialise what it is, how it works, whether we should use it, and if so what we could do with it, the more informed our team are and the more able they are to respond to it. It’s still early days, and it’s terrifying how quickly it’s moving. There are some amazing memes floating around about people who consider themselves to be experts on ChatGPT within 30 seconds of having picked it up. I definitely don’t consider myself an expert in AI.

Kat: That’s what’s quite exciting about this moment in history, right? We’re all picking up this tool and using it, and it’s going to bring a fundamental shift in how we interact with technology, but we’re also just figuring it out as we go.

Liam: People are much, much more aware of its capabilities and are really starting to consider the consequences, which has led some people to be quite nervous about it. Other people have embraced it massively and have been doing some really fascinating things with it too. I think about that team from Portugal who recently made ChatGPT a CEO and built an apparel company on a thousand dollars of investment and one hour of human input per day.

“It just goes to show how interesting the opportunities around these new technologies are. We’re very much just at the tip of the iceberg”

Kat: Yes, I read about the Portuguese team and thought, that’s a gimmick but also it’s genius. I feel like there’s a lot like this going on at the moment; things that feel like gimmicks but also have an undercurrent of something transformative.  

ChatGPT and user-centred research and design 

Kat: How might ChatGPT be used to enhance user-centred research and design?  

Liam: I think it’s good as an assistant to help us crunch through large volumes of data, spot certain types of patterns, and so on. You can see there’s real benefit there.

It also has the possible benefit of reducing bias, which is interesting too. So I suppose there is a primary research aspect to it.

There’s also a research synthesis and analysis aspect. We can use it to generate ideas as well – using it as a prompt to expand your thinking and break any particular mental patterns you might have been stuck in. A rough sketch of what that assistant role could look like follows below.
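This sketch assumes the OpenAI Python library in its pre-v1 style (current when this article was written); the model name, prompt and research notes are illustrative placeholders, not a description of how Snook works.

```python
import openai  # assumes the pre-v1 OpenAI Python library

openai.api_key = "YOUR_API_KEY"  # placeholder – use your own key

# Illustrative, invented research notes to synthesise.
notes = [
    "Participant 1 struggled to find the eligibility criteria on the page.",
    "Participant 2 said the application form felt 'endless'.",
    "Participant 3 gave up at the identity-verification step.",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": (
            "You are a research assistant. Identify recurring themes "
            "across these usability notes and list them briefly."
        )},
        {"role": "user", "content": "\n".join(notes)},
    ],
)
print(response.choices[0].message.content)
```

Given the hallucination issues discussed above, any themes a sketch like this surfaces would still need checking against the raw notes by a human researcher – and identifiable participant data should never be pasted into a third-party service.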

There was a great podcast I was listening to where two academics were talking about the impact of ChatGPT, and how they are going to have to stop asking students to write essays based on a question, simply because ChatGPT and other AI tools are going to be so effective at writing essays.

And of course there’ll be a bit of an arms race around different tools that come out to spot where an AI has been used. One of the things they pointed out, which is really interesting, was: how can we get students to ask better questions about what they’ve been asked to explore? And I think that’s one of the things it can really help us with – asking better questions.

I wonder if there’s something here about taking some of the more procedural, mechanistic aspects of our work away – does that help us to be more human?

“Can we look at how we can lean into those spaces of compassion, of understanding, of better quality engagement and deeper listening, deeper understanding? Maybe that’s a really great opportunity”

Kat: Your point that this technology might allow us to be more human – that’s been the argument from time immemorial: that automation would put us out of certain jobs but enable us to focus on the things that matter, freeing us up for more relational jobs, more jobs focused on compassion.

Optimism vs pessimism   

Kat: Obviously job losses are a really big topic around this, and I’m interested in whether you have a lot of hope and optimism around the fact that it will give us space to be productive and to automate the things that maybe feel like hard work. Will this help us focus on things like asking better questions or engaging with bias? How optimistic do you feel about this?  

Liam: History tells us that yes, technology has brought about huge labour-saving opportunities and freed more people to enter the workforce.

“I think in some respects you have to be optimistic”

Liam: That said, I feel that there are people in demeaning jobs all over the world who are propping up our society as a whole, and that ChatGPT isn’t going to solve anything directly for them – it could even make things worse.

“We know that often what happens is these techno utopias don’t unfold”

Liam: They create more complexity, more challenges. Smartphones are a great example: they’ve increased access to the internet, but at real cost to our mental and social health. And whilst I think that in the small world of UCD we can be quite optimistic about the agency it might give us, the challenges beyond our discipline are quite problematic.

Bias and power structures  

Kat: You mentioned earlier about usability testing and being able to remove some bias. That seems like a very obvious benefit: there are a lot of power structures that influence how we conduct research and how we do design work, and there’s an opportunity to mitigate some of that.

Having said that, it’s drawing on language models and data sets that are inherently full of bias. Do you have any reflections on how bias might manifest in this?  

Liam: There is an obvious answer to flag here early doors, which is that you can train those language models, right? In theory you could train them to be non-biased, or as least biased as possible. But I’m not convinced it’s limited to how these models are trained; it’s often down to how the products were designed, and who designed them, too.

We know again from history that bias is inherently encoded into the products we create. Both the hardware and software for digital photography are biased toward people with lighter skin tones. That leaves a huge portion of the population excluded through the way a product was designed. So yeah, I don’t have a clear answer for you.

Kat: I’ve seen fragments of it pop up, such as the way that female bodies are indexed by Instagram’s algorithms, for example. It comes back to design decisions across the last century, such as how seat belts weren’t designed with female bodies in mind.

And I think we’re at a really interesting crossroads where how these models are trained is going to have a wide-reaching impact on how society adopts them and how these models transform our day-to-day lives.  

Liam: That’s fascinating, right? Because you wonder then how are we regulating how these models are trained? Because in one respect we’ve got a high degree of ambiguity and in fact a lack of visibility of how these models work in the first place.  

The black box idea – we throw something in and something magical comes back out to us – is a terrifying concept in itself. How are we then understanding how those models have been trained? What’s the regulation around showing us what’s actually in those models? How can we build trust and confidence in those models, to help us understand where those biases might sit and how they’ve been accounted for?

It’s a wild west. Some of these things we are going to be scrabbling to fix over the next few years. This is crazy disruptive. Is a regulation framework appropriate? Does that even work in this scenario, when it would break so quickly – particularly as you’d need international agreement to limit escalation?

“It’s no surprise that we’ve had such a huge outpouring from academics and entrepreneurs and tech specialists to say, we need to pause this”

Liam: You can understand why because we might not be ready for it. And the market is driving an AI arms race. 

Authoring the future 

Liam: The thing that’s really scary here is the agency we have as individuals. We are in a scenario where even if we try to avoid using it in some way, shape or form, we are letting others define how those language models will be shaped. Every single entry we make into one of those things is helping to build those language models.

“We are authoring a future that we have very little agency over”

Liam: We are doing it quite willingly and with some degree of glee. “Look how exciting this thing is” – I don’t know how intentional that is for the creators of these AIs.

Because when you start getting into their motivations, it’s not that they’re necessarily looking to build great new technology for the benefit of humanity. It’s probably a bit more commercial than that in practice. And that’s really challenging.

Kat: I want to end on a question from our friend ChatGPT, our robot voice in the room. Do you have a sense of how this might be measured in the future – looking at what happened previously and how we live now? Do you have an inkling of how we might measure the shift in our lives?

Liam: I think that we as a business have quite a few things to overcome yet to understand how we can really make the best and most ethical use of ChatGPT and other AIs. And there’s something we haven’t touched on yet, which I think is really important, which is the environmental impact of these AIs too.

“The cost of cooling and storing data, not just in terms of the financial cost, but the environmental cost is vast and we are not talking about that enough yet”

Liam: And I know there are people like Gerry McGovern who have stark evidence of the impact of these technologies.

We can probably measure that environmental impact quite quickly. That then becomes an interesting trade-off, right? As a business, yes, we can save time, but should we be looking at the carbon impact of that as well?

We need to question the trade-off against the environmental, ethical and social impact that will come as a result of this technology.

Kat: Thanks Liam, this has been great.

If you’d like to get involved in the evolving conversation with us on this topic, Snook are running a series of free webinars on ChatGPT. Register for free on Eventbrite or sign up to our newsletter to learn more.

25 May 2023
