Anni Feng
Associate Director, Hoare Lea
From tea to GPT.
PEOPLE
Fresh perspectives
New voices of the built environment
It’s your first time visiting an office kitchen. You are given a tall cup by a colleague you have just met. You want tea, so they kindly show you where the hot water and tea bags are. So far, so good. You take a tea bag out of the jar and realise it has no string attached. What do you do next?
- Option A: Drop it in the cup and leave it in the hot water, making a seriously strong cup of tea
- Option B: Dunk it into the hot water several times while trying not to scald your fingers
- Option C: Add cold milk, then hold and submerge the tea bag in the milk/water mix for a few minutes
These options went through my mind when I experienced this scenario recently. After a moment of risk assessment, I sought advice from my colleague. Apparently, the expected behaviour is option D: to drop the tea bag in and take it out using a spoon when the time is right.
After reassuring my colleague that I had, in my life, made tea before, I did not want to surprise them further with my version of option D: tearing the bag open and releasing the loose tea leaves. In my family, we always made tea using loose leaves, so I thought the tea bag itself could simply be packaging.

Design assumptions: don’t get burned.
Thinking about how users will react to and interact with technologies, systems and environments has been a constant across my career – from how railway station staff will use monitoring and control systems to run a busy station safely, to how patients would use digital wayfinding technologies to navigate to where they need to be in a cancer centre.
The tea-making experience made me reflect on my design assumptions: how we expect users to behave, and what baseline knowledge we assume they have about a technology or a system. You might think using a spoon to take a tea bag out of a tall cup is common sense, but it wasn’t to me. Designing based on our own lived experiences, education and past projects may be an efficient shortcut, but it can lead to unmet needs, awkward moments among colleagues, or even scalded fingers.
We all have biases, and maybe it’s inevitable that we make design assumptions at some point, but is there anything which might help us make more informed assumptions and improve design?
Vibrating loo seats, and the drive to survive.
Working on healthcare projects has been a continuous learning journey – sometimes, an unlearning journey. It has involved humbling, raw, personal conversations with building users, and with communities who are marginalised and under-served. When I met with the deaf community to discuss their access to healthcare services, they expressed concern about not being notified when a fire alarm sounds and they are alone – for example, in the toilet.
They suggested connecting the toilet seat to the fire alarm system, so it would vibrate to notify them if the alarm was activated. Moments of ingenuity such as this light up the room, spark further ideas, and usually come after sharing examples of struggle. When the environment is not designed for you, you have to be creative to survive in it.
This resourcefulness got me thinking about ways to enable underrepresented stories to be part of the design narrative, especially for projects which might not have any scope, budget or a team to conduct user engagement. Being a digital consultant, I turned to technology for some help.
I experimented with ChatGPT on three main tasks.
I was able to compare ChatGPT outputs with the ones from manual research and real-life engagement, and while the initial ChatGPT outputs for the first two tasks needed refinement and validation, they could serve as a useful starting point for resource-limited project teams.
It was the output for the third task that really caught my attention. The first round was quite generic but did provide an overview to use as a checklist for general inclusive practices. The output after further prompting captured the nuances and most of the key points gained through in-person conversations. It did not quite manage the vibrating toilet seat idea, but it did offer viewpoints not raised in the real-life sessions, which we can now validate with users.
ChatGPT is not meant to replace direct engagement with users. The emotional connection and trust developed through personal conversations are still required for quality design. It is through conversation that we address biases and uninformed assumptions, as well as come up with brilliant solutions. I do, however, see the benefit of using generative AI to prompt inclusive practice and produce many alternative viewpoints in a short time. All we need now is thoughtful implementation to make this AI-enabled approach more meaningful.