After a year when many people’s lives were lived online more than ever before, and with that trend only set to continue, the issue of online safety has never been more important.

This is an introduction to the issues surrounding Safety Tech and how user-centred design is crucial in helping this growing sector to safeguard people online.

In 2020, Ofcom reported that 22 million adult internet users had personally experienced online content or conduct that was potentially harmful.

What is ‘Safety Tech’?

Perhaps you’re more used to hearing about ‘cybersecurity’, which is focused on protecting data. ‘Cyber safety’ – or what’s increasingly being referred to as Safety Tech – is focused on protecting people.

Put simply, the Safety Tech industry produces products and services designed to block harmful content and prevent online harassment, abuse and misinformation.

The sector has grown at an estimated 35% a year since 2016, and the government estimates that UK revenues could exceed £1bn by the mid-2020s.

We believe it’s fundamental that the design community brings user-centred design to the heart of Safety Tech.

Of course, cybersecurity is part of a broader spectrum of security issues. Data breaches can lead to you becoming a victim of fraud or having your online viewing data revealed. Even chastity tools have recently been hacked by dark web gangsters. Who knew Bluetooth could be so dangerous?!

But in this article, we’re focusing on ‘online harms’ and safety by design.

“Your data will not suffer from low self-esteem.” Mary Aiken, Professor of Forensic Cyberpsychology and author of The Cyber Effect

What do we mean by online harms?

In 2019 the UK Government released its Online Harms White Paper. The harms it covers include:

– Illegal and harmful activity – such as terrorism and child abuse.

– Bullying and harassment on online platforms.

– Misinformation that threatens proper debate and democracy.

– Algorithms which lead to users being presented with only one type of content, instead of a range of opinions.

– Targeting people who are vulnerable and at risk.

– Gang culture such as drug dealing and distributing weapons.

– Children seeing inappropriate or harmful information, for example, about self-harm or suicide.

There’s a particular focus on ensuring children are only able to access appropriate content, due in part to the UN Convention on the Rights of the Child and the great work done by 5Rights.

Additionally, part of the broader spectrum of harm is persuasive design that leads to addictive behaviours, which are well known to affect not only children but adults too. We’ll cover that in more detail in a future post.

A complex global challenge

Given that in recent times we’ve seen some very high-profile figures (ahem!) disseminating widely disputed claims that some take to be fact, and large payouts made to social media moderators who developed PTSD in response to the disturbing content they were exposed to, it’s clear why using tech to address online harm is such a rapidly growing sector.

An example of claims from Donald Trump on Twitter with the Twitter warning on it.

An example of claims which are both widely disputed and widely disseminated.

So how can we ensure society gets the best from this technology? How do we shape an internet that is open and vibrant, but that also protects its users from harm? This is a systemic challenge, which, by its very nature, is global and complex. The kind of ‘wicked problem’ that design is well suited to address.

Prevention and treatment: how Safety Tech works

Safety Tech can exist in the background, using technology ranging from simple ad-blockers to advanced computational methods which identify and remove harmful material at scale.

The Alan Turing Institute has been leading worldwide research and development to address online harms through its ‘Hate Speech: Measures and Counter-Measures’ project. This uses natural language processing techniques and qualitative analysis to create tools which identify and categorise different types of online hate speech.

In 2020, the institute led researchers from Oxford, Surrey, Sheffield and George Washington universities in developing a tool that uses deep learning to detect East Asian prejudice on social media. The tool is available as open source, along with the training dataset and annotation codebook.
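
To make this concrete, here’s a minimal sketch of how an NLP classifier of this kind might sit inside a moderation workflow. It uses the open-source Hugging Face transformers library with a publicly available toxicity model as a stand-in – this is not the Turing Institute’s tool, and the model name, labels and threshold are illustrative assumptions:

```python
# Minimal sketch: flagging potentially hateful posts with an off-the-shelf
# NLP classifier. Not the Turing Institute's tool - the model name and
# threshold below are illustrative stand-ins.
from transformers import pipeline

# A publicly available toxicity classifier, assumed here for illustration.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag_for_review(posts, threshold=0.8):
    """Return posts whose top predicted label scores above the threshold."""
    flagged = []
    for post in posts:
        prediction = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        if prediction["score"] >= threshold:
            flagged.append((post, prediction["label"], prediction["score"]))
    return flagged

if __name__ == "__main__":
    sample = [
        "Hope you're having a lovely day!",
        "People like you make me sick.",
    ]
    for post, label, score in flag_for_review(sample):
        print(f"Flagged ({label}, {score:.2f}): {post!r}")
```

In a real service, a classifier like this would typically route flagged posts to human moderators rather than removing content automatically.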

Two smartphones showing texts with additional questions from an AI interpreting the person’s emotions.

Safety tech can also be part of the front-facing user experience.

The BBC’s Own It app is aimed at 8 to 13-year-olds getting their first smartphone. It combines machine learning that monitors their activity on the device with the ability to self-report their emotional state. This information is used to deliver tailored content and interventions that help children stay happy and healthy online, offering friendly and supportive nudges when their behaviour strays outside the norm.
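
As a purely illustrative sketch of the underlying pattern (not the BBC’s actual implementation – the thresholds, mood labels and wording below are invented), combining an automated signal with self-reported mood might look like this:

```python
# Toy sketch of the Own It-style pattern: combine an automated sentiment
# signal with self-reported mood to decide whether to offer a friendly nudge.
# Illustrative only - thresholds, mood labels and wording are invented.

LOW_MOODS = {"sad", "angry", "worried"}

def should_nudge(sentiment_score: float, self_reported_mood: str) -> bool:
    """sentiment_score runs from -1.0 (very negative) to +1.0 (very positive)."""
    if sentiment_score < -0.6:
        return True  # strongly negative message: always check in
    # Mildly negative message plus a low self-reported mood: gentle check-in.
    return sentiment_score < -0.3 and self_reported_mood in LOW_MOODS

def nudge_message(self_reported_mood: str) -> str:
    if self_reported_mood in LOW_MOODS:
        return "It looks like things feel tough right now. Want to talk to someone you trust?"
    return "That message sounds quite strong - want to take a moment before sending?"

if __name__ == "__main__":
    if should_nudge(sentiment_score=-0.7, self_reported_mood="sad"):
        print(nudge_message("sad"))
```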

Of course, this sort of automated technology silently whirring away in the background might generate unease about surveillance amongst users, and may not respect their privacy rights. We’ll talk more about this in a future post.

Where does design come in?

But beyond bolting on technology and apps to make existing user experiences safer and reduce harm, how do we bake safety by design into the products and services we use every day?

As Tina Turner might say, ‘What’s design got to do with it?’ 

Well, quite a bit actually. As Cordelia McGee-Tubb writes in her article about accessibility, “It’s like a blueberry muffin – you can’t push the berries in afterwards.”

1. You have to consider safety from the start and across the end-to-end service

Thankfully, the government’s online harms paper recognises this. It points out that decisions about online safety need to be made throughout the product development cycle. It also stresses the need for clear standards and the sharing of good practice.

But there are a myriad of ways that services get designed and delivered online.

Some services are consciously designed for users:

– There may be an in-house team focused on developing them

– Organisations may employ external consultants (like us!) to work with them periodically

– Sometimes a leader who’s responsible for a service will champion a user-centred approach and make the time and space to embed this into their organisation’s DNA.

However, many are still unconsciously designed:

– Some services sort of… happen – for example, in response to a global pandemic that means something has to be delivered online fast

– A new regulation results in a piece of technology being procured, and a service comes into being without much thought for the end-to-end user experience.

How services get made varies greatly. 

Budgets, time and context all affect how much research and user testing can be done. For example, services run by smaller entities might be knitted together using lots of off-the-shelf (OTS) technology, supplemented by manual processing behind the scenes.

2. Where’s the guidance to put this into practice?

In truth, there’s a lot of guidance out there, including the ICO’s Age appropriate design: a code of practice for online services and guidance on privacy by design.

Much of safety by design ends up in the prevention through design (PtD) space. This focuses on occupational hazards, risk management for medical procedures, and International Organization for Standardization (ISO) standards on data.

But not much of it is written simply enough for people to understand the basics of what they should be covering. It’d also help to provide examples of what good implementation of guidance looks like – and what to avoid, like the terrible user interface (UI) decisions made around cookie laws, which rendered many websites frustrating to deal with. Asking users for consent every time they visit a website wasn’t the best implementation of new regulations.

We’re talking some real basics here. If users can sign up for accounts and interact with others, has the organisation considered the right kind of moderation to protect them? What kind of adverts might pop up? Should children really be targeted by gambling ads?

And beyond prevention, what happens if someone is harmed?

Research we’ve conducted with young people on behalf of the Scottish Government shows a distinct lack of support for those who experience harm.

Do we have restorative ways of resolving issues, learning from them and recovering to return confidently to an online life?

We created a set of principles to help the Scottish Government create safe and usable services. This list includes:

– Unite those already working in the field.

– Make Internet Safety a whole business priority.

– Provide high level guidance to those with high digital capacity.

– Enable those with experience of harm to contribute their knowledge.

– Make impactful cultural change through education.

– Provide practical guidance for those with low digital capacity.

Visual of set of principles.

Principles for uniting a user-centred safety by design network

3. Organisations need help interpreting the guidelines

Snook undertook research on the back of the online harms paper to find out who makes design decisions on services. We wanted to find out whether they consider safety a priority (or consider it at all), and what they might need to help implement the guidance.

We spoke to a mix of design consultancies, charities, small and medium-sized enterprises (SMEs), startups and larger tech-orientated businesses.

Larger organisations with technical expertise did have privacy and safety by design capability in-house, and were adopting a number of commercial products to help them in areas such as moderation or safe ads.

We found that larger charities specifically supporting vulnerable people, or those whose users need protection, generally had safety policies in place. This led to good design and technology decisions, such as Barnardo’s creating safe spaces for their users.

“It depends on the app or the creators – has this been created quickly to make money or is it something that’s been created with safety at its core? Many things for children have been made very quickly, very cheaply and aren’t at all appropriate.” Service Provider

However, in smaller studios, SMEs and charities with low digital literacy, we found that many organisations have poor knowledge of how to implement safety by design, or treat it as a low priority due to lack of resources.

When you look at the facts from the Charity Digital Skills report, there is cause for concern:

– 51% of charities still don’t have a strategy for digital

– 83% say they are fair to poor at digital service delivery, yet due to the coronavirus, two-thirds of charities are delivering all of their work remotely.

And among agencies supporting organisations with their service design, some reported that their clients weren’t interested in safety.

“Sometimes we just have to do what the client asks for. It’s generally a responsibility on the client-side for the person in charge of scoping. We can advise, but we have to pick our battles – is internet safety something we’re going to argue on?” Digital Agency

4. Learning by example: How patterns can help

Design patterns are repeatable, adaptable solutions to common problems. Sharing them means we avoid duplicating work that’s already been done and ensure best practice is repeated.

– They should be informed by research, which leads to a set of principles about what does and doesn’t work for users, based on evidence

– They can take an instructional format, presented in a way that’s easy to understand and apply to a service or product

– They should show examples of how you might implement them

– And they should come with evidence that they work.

By showing people what good – and in this context, safe – design might look like, we can help build better products and services.

We mapped online interactions and the safety issues that arise at different stages of a user journey – from sign-up to leaving; from setting up passwords and understanding terms and conditions, through peer-to-peer communication and moderation, to resolving any issues. All of these steps can be informed by design patterns that have a track record of working.

An image showing the stages of a user journey (sign-up, active use, inactive use and exit/pause) with the interactions that involve safety mapped to them

There are many different interactions that people have with services from beginning to end.
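
As a sketch of how that mapping might be made reusable, the stages and their safety questions could be captured as a simple checklist that teams work through. The stages follow the journey above; the questions are illustrative assumptions, not an official or exhaustive list:

```python
# Sketch: the journey-to-safety mapping captured as a reusable checklist.
# Stages follow the mapping above; the questions are illustrative only.
SAFETY_CHECKLIST = {
    "sign-up": [
        "Are passwords, terms and conditions explained in plain language?",
        "Is content gated appropriately for younger users?",
    ],
    "active use": [
        "Is peer-to-peer communication moderated, and how?",
        "Can users easily report or block harmful content and contacts?",
    ],
    "inactive use": [
        "Are re-engagement prompts free of manipulative, addictive patterns?",
    ],
    "exit/pause": [
        "Can users pause, leave or delete their account and data easily?",
        "Is support signposted for anyone leaving because of harm?",
    ],
}

def unanswered(stage: str, answers: dict) -> list:
    """Return the questions for a stage that a team has not yet addressed."""
    return [q for q in SAFETY_CHECKLIST[stage] if not answers.get(q)]

# Example: a team that has only sorted moderation still has reporting to cover.
done = {"Is peer-to-peer communication moderated, and how?": True}
print(unanswered("active use", done))
```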

There are hundreds of examples of designs successfully implemented by organisations – the BBC and the Lego Group for children, or the major banks focusing on older people at risk of fraud. There’s much we could learn from them. But much of this great information is hidden in lengthy PDFs – not easy to find, or accessible to many.

A wise tutor once told me, “Most good design is mostly stealing, all of the time.” That’s not to say you should copy someone else’s work and pass it off as your own (not OK!). But there is an opportunity for a culture of reuse and sharing – with permission, and credit where it’s due – when someone else has got it right.

So what if we provided examples of best practice in easy-to-implement formats? Like how Mumsnet manages moderation? Or how Lego does onboarding? Could we make something useful that helps organisations, at a very basic level, think through the things they must do to work out whether their services are safe?

‘Baking in’ safety by design 

‘Baking in’ safety by design means making sure safety is properly considered at all stages of design. The challenges I’ve presented don’t have simple answers.

The larger problems created by world politics, media giants’ business models, and other commercial forces can all contribute to an end-user being harmed in one giant tangle of cause and effect. Terrorism, fraud and abuse are way beyond the control of apps or platforms on their own, however well designed.

Patterns are not a panacea. A bigger challenge – alluded to earlier – is that we may not all agree on what ‘good’ design looks like. For instance, monitoring children’s internet use is a pattern that works to protect their rights, but doesn’t meet their personal needs around privacy and trust.

But patterns could galvanise energy around these discussions on how we best protect people online. We can debate the best ways to design for both prevention and treatment, and share easy-to-scale designs that achieve them.

We are ‘baking’ methods like consequence scanning into our own design process, and building a pattern library for ensuring mental health products and services are safe and designed around user needs.

We’re open to conversations about how we ensure safety by design is included in the design process and in the products and services around us. If you’d like to chat or ask questions about making sure your products and services are safe for users, give us a shout at curious@wearesnook.com.

Written by Sarah Drummond