Human Food Interaction Workshop: What I Built and What I Learned
By Tamil Selvan Gunasekaran
I attended a Human Food Interaction (HFI) workshop run by researchers from the Inclusive Reality Lab at the University of Auckland. I did not organise it — I was a participant, bringing my HCI background to a room full of designers, engineers, and food researchers thinking about how humans and food interact at a much deeper level than taste alone.
The workshop was framed around multisensory design, biofeedback, and inclusive food experiences. By the end of it, I had more ideas than I could sketch on the whiteboards in front of me.
So I built them.

The People Behind the Workshop
Dr. Yun Suen Pai
Senior Lecturer at the School of Computer Science, University of Auckland, and director of the Inclusive Reality Lab. His research emphasises multisensory and bioresponsive interactions — exactly the territory where HFI sits.
Zikun (Ivy) Chen
Co-founder and Designer of the Inclusive Reality Lab. Her role reflects what HFI needs at its core: food interaction systems require both technical depth and careful interaction design. You cannot separate the two — and Zikun is proof that one person can hold both.
Dr. Weijen Chen
HFI expert, researcher, and designer who completed her PhD at Keio Media Design (KMD) — making her officially and rightfully Dr. Weijen Chen. Super cool. Her CHI 2025 Honorable Mention project, Living Bento: Heartbeat-Driven Noodles for Enriched Dining Dynamics, is exactly the kind of work that makes the rest of us realise how far behind we are — food as a social communication medium, driven by heartbeat. Wild in the best way.
The Workshop: From Ideas to Sketches
The session was hands-on. We had craft materials, paper prototypes, and a brief to think beyond screens. What happens when the table itself responds to how stressed you are? What if the way food looks could be adapted for people with eating disorders? What if your biofeedback changed the food's texture?

The conversation kept circling back to one idea: food is not just a nutrition delivery system. It is a social, emotional, and deeply embodied experience. And yet technology barely touches it.
That gap became my brief.
What I Built After
The workshop gave me the conceptual framing. The three prototypes I built afterward are my attempt to make those ideas tangible and interactive.
1. The Digital Sensory Table

This prototype explores an interactive table that reads participant state — stress, engagement, satiety — and adjusts the dining environment in real time.
The concept: a table surface that combines bio-signal sensing (wrist sensors, eye tracking) with adaptive outputs across four layers:
- Visual layer — projection overlays on the plate and table surface change based on satiety confidence
- Tactile layer — vibro-haptic utensils shift their feedback profile when stress is high
- Thermal layer — localised warming or cooling near the plate modulates comfort
- Affective layer — ambient lighting and sound adapt to arousal levels inferred from biosignals
The interactive demo lets you slide virtual participant signals (stress, engagement, satiety) and watch the closed-loop policy decide what to do — stabilise, intervene, or reduce stimulation. It is a concept prototype, but the feedback loop logic is real.
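To make the loop concrete, here is a minimal sketch of the kind of policy the demo runs. The signal names, thresholds, and layer responses are my own illustrative choices, not the prototype's exact values.

```typescript
// Minimal sketch of the sensory table's closed-loop policy.
// Thresholds and layer responses are illustrative, not the prototype's values.

type ParticipantState = {
  stress: number;     // 0..1, from wrist sensors
  engagement: number; // 0..1, from eye tracking
  satiety: number;    // 0..1, confidence the diner is approaching fullness
};

type TableAction = "stabilise" | "intervene" | "reduce-stimulation";

function decide(state: ParticipantState): TableAction {
  // High stress overrides everything: calm the environment first.
  if (state.stress > 0.7) return "reduce-stimulation";
  // Disengaged but not yet sated: nudge attention back to the meal.
  if (state.engagement < 0.3 && state.satiety < 0.5) return "intervene";
  // Otherwise hold the current environment steady.
  return "stabilise";
}

// Fan the decision out across the four output layers.
function applyAction(action: TableAction): Record<string, string> {
  switch (action) {
    case "reduce-stimulation":
      return { visual: "dim overlays", tactile: "soften haptics", thermal: "neutral", affective: "slow, low ambient" };
    case "intervene":
      return { visual: "highlight plate", tactile: "gentle pulse", thermal: "warm near plate", affective: "brighten" };
    case "stabilise":
      return { visual: "hold", tactile: "hold", thermal: "hold", affective: "hold" };
  }
}
```

The point of the sketch is the shape of the loop: read state, pick one of three strategies, and fan that decision out across the four output layers.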
Why this matters: Most dining technology is about efficiency. This one is about experience — specifically, supporting neurodiverse or clinical populations where the relationship with food is complicated.
2. Bio-Adaptive Gastronomy: The Closed-Loop Dining System

This was the most technically ambitious of the three. The idea: your physiological state — measured through EEG, fNIRS, EDA, pupillometry, electrogastrography (EGG), and facial EMG — directly influences the physical properties of the food you are eating.
The prototype maps six biological input signals to three categories of food physics:
| Bio-Signal | What It Measures | Food Physics Response |
|---|---|---|
| EEG (frontal asymmetry) | Approach vs. withdrawal | Viscosity adjustment (phantom fullness) |
| fNIRS (PFC oxygenation) | Hedonic valuation | Lubrication / oral coating |
| EDA / skin conductance | Emotional intensity | Thermal conductivity modulation |
| Pupillometry | Cognitive load | Texture adaptation |
| EGG (gastric rhythm) | Satiety / anticipation | Food density signals |
| Facial EMG | Positive / negative valence | Sensory augmentation |
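As a rough sketch, the mapping above could be encoded like this. The normalisation ranges and the winner-takes-all selection are simplifying assumptions; the actual prototype weighs signals differently.

```typescript
// Sketch of the bio-signal to food-physics mapping from the table above.
// Ranges and the winner-takes-all pick are simplifying assumptions.

type BioSignals = {
  eegFrontalAsymmetry: number; // approach (+) vs. withdrawal (-), -1..1
  fnirsPfcOxygenation: number; // hedonic valuation, 0..1
  eda: number;                 // emotional intensity, 0..1
  pupilDilation: number;       // cognitive load, 0..1
  eggGastricRhythm: number;    // satiety / anticipation, 0..1
  facialEmgValence: number;    // positive (+) vs. negative (-), -1..1
};

type Modulation =
  | "viscosity"            // phantom fullness (rheology)
  | "lubrication"          // oral coating (tribology)
  | "thermal-conductivity" // (thermodynamics)
  | "texture"
  | "density-signal"
  | "sensory-augmentation";

// Score each candidate modulation by how strongly its driving signal
// deviates from baseline, then recommend the strongest one.
function recommend(s: BioSignals): Modulation {
  const scores: [Modulation, number][] = [
    ["viscosity", Math.abs(s.eegFrontalAsymmetry)],
    ["lubrication", s.fnirsPfcOxygenation],
    ["thermal-conductivity", s.eda],
    ["texture", s.pupilDilation],
    ["density-signal", s.eggGastricRhythm],
    ["sensory-augmentation", Math.abs(s.facialEmgValence)],
  ];
  scores.sort((a, b) => b[1] - a[1]);
  return scores[0][0];
}
```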
The simulation loop lets you adjust bio-signal sliders and see what food physics modulation the system recommends. It also has an AI-powered Experimental Protocol Generator (Gemini 2.0 Flash) that designs new research protocols — pick a sensing modality, pick a physical food variable, describe a research goal, and it outputs a study design.
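For the protocol generator, a minimal sketch using the @google/generative-ai SDK might look like the following. The prompt wording and function shape are my own; only the model name comes from the prototype.

```typescript
// Sketch of the protocol generator call, assuming the @google/generative-ai SDK.
// The prompt wording and requested output structure here are hypothetical.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

async function generateProtocol(
  modality: string,     // e.g. "EDA / skin conductance"
  foodVariable: string, // e.g. "viscosity"
  goal: string          // free-text research goal
): Promise<string> {
  const prompt = [
    "Design a study protocol for human-food interaction research.",
    `Sensing modality: ${modality}`,
    `Physical food variable: ${foodVariable}`,
    `Research goal: ${goal}`,
    "Include: hypothesis, participants, procedure, measures, analysis plan.",
  ].join("\n");
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```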
The framing comes from a genuinely interesting scientific premise: food physics (rheology, tribology, thermodynamics) and human physiology are bidirectionally coupled. This prototype makes that loop interactive and visual.
3. Food Perception AR App

This one is the most directly applied to a clinical context. The app uses the device camera to overlay perception-correcting visual effects on food, designed to support people with eating disorders during therapy.
Three profiles:
- Anorexia Nervosa — portions appear visually smaller and calmer, reducing the anxiety response to normal-sized servings
- Bulimia Nervosa — portions appear slightly larger with enhanced visual contrast, supporting portion awareness
- ARFID — restrictive or anxiety-inducing foods are visually transformed to resemble preferred foods, supporting gradual exposure
This is not a replacement for clinical therapy — the app itself says so clearly. But it explores how AR perception manipulation could become a tool within a therapeutic context, rather than a standalone intervention.
The technical challenge here is real-time image segmentation on device, with Gemini AI handling the recommendation layer. Building it required thinking carefully about the ethics of perception manipulation, not just the engineering.
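A plausible shape for the profile layer is sketched below. The parameter names and magnitudes are hypothetical; real values belong with a treating clinician, not hard-coded in an app.

```typescript
// Sketch of per-profile visual transforms for the AR overlay.
// Parameter names and magnitudes are illustrative only.

type Profile = "anorexia-nervosa" | "bulimia-nervosa" | "arfid";

type OverlayParams = {
  portionScale: number;  // visual scaling applied to segmented food regions
  contrastBoost: number; // 1.0 = unchanged
  colorCalming: boolean; // shift palette toward low-arousal tones
  substitution: boolean; // re-texture toward a preferred food (ARFID exposure)
};

const profiles: Record<Profile, OverlayParams> = {
  "anorexia-nervosa": { portionScale: 0.85, contrastBoost: 1.0, colorCalming: true,  substitution: false },
  "bulimia-nervosa":  { portionScale: 1.1,  contrastBoost: 1.2, colorCalming: false, substitution: false },
  "arfid":            { portionScale: 1.0,  contrastBoost: 1.0, colorCalming: false, substitution: true  },
};

// The render loop applies these parameters to each segmented food region
// every frame; segmentation itself runs on-device.
function paramsFor(profile: Profile): OverlayParams {
  return profiles[profile];
}
```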
Presenting in the Session

One of the workshop moments that stuck with me: I was explaining the sensory table idea using a literal paper plate as a prop — holding it up, gesturing to show how the plate surface could carry information back to the diner.
It felt absurd and obvious at the same time.
And look, I am aware of the optics. A man holding a plain paper plate, treating it as a profound interactive artefact, in an academic setting. Right around the same time, Maurizio Cattelan's Comedian — a banana duct-taped to a wall — sold for $6.2 million. A banana. Duct tape. Six million dollars.
My paper plate did not sell for six million dollars. But the idea it represented — the plate as an active interface, not just a passive surface — is, I would argue, at least as interesting as the banana. And significantly more edible.
That moment crystallised what the three prototypes are really about: not adding technology to food, but rethinking food itself as an interaction surface.
Why Human Food Interaction Matters
The workshop and the prototypes sit at the intersection of three things I care about:
- Inclusive design — food is one of the most exclusionary experiences for people with sensory processing differences, eating disorders, or chronic illness. Technology can help, if applied thoughtfully.
- Multisensory HCI — my research at the Empathic Computing Lab studies how affect and cognition intersect in human-computer interaction. Food is a direct path to both.
- Closing the loop — the best HFI systems do not just output to the user; they read the user and adapt. All three prototypes are built around feedback loops, not one-way presentations.
Together they span:
- Multimodal sensing and closed-loop adaptation
- Inclusive interaction design for clinical populations
- Practical AR interfaces for real-world food experiences
The workshop gave me permission to take these ideas seriously. The prototypes gave them a form I could actually show someone.
Sources
- Inclusive Reality Lab: inclusiverealitylab.org
- Dr. Yun Suen Pai: yunsuenpai.com / UoA profile
