The question nobody asks
Every app on your phone is designed to capture your attention. Notifications, streaks, red badges, infinite scroll. The whole architecture of modern technology is built around one metric: engagement. How long did you look? How often did you come back?
But here's what nobody in tech seems to ask: how did you feel afterward?
Not "were you satisfied with the product experience." Not "would you recommend this to a friend." But genuinely: did this interaction leave you more aware, more grounded, more capable of navigating your life? Or did it leave you slightly more depleted?
For most digital interactions, the honest answer is uncomfortable.
A quick history of technology trying to be less terrible
This problem isn't new. Researchers have been wrestling with it for decades.
In 1996, Mark Weiser and John Seely Brown at Xerox PARC proposed calm technology (often called calm computing). The idea was that technology should move to the "periphery" of attention, engaging us only when necessary and reducing cognitive overload. It was a beautiful idea that anticipated ambient interfaces decades before they existed.
A year later, Rosalind Picard at MIT asked: what if machines could recognize human emotions? This launched affective computing, a whole field of facial expression analysis, voice sentiment detection, and physiological signal processing. The insight was genuine: machines should understand what we're feeling.
Then in 2018, Tristan Harris and the Center for Humane Technology focused on the damage: addictive design patterns, attention extraction, surveillance capitalism. The prescription was mostly defensive. Reduce screen time. Limit notifications. Design with less manipulation.
Each of these was necessary. Each was incomplete.

Calm computing reduced interruption but didn't ask whether the content of the interaction served the human. Affective computing recognized emotion but mostly used that recognition to optimize for engagement: targeted ads, personalized content, retention. Humane technology identified the problem but primarily offered subtraction: less screen time, fewer notifications, fewer interactions.
None of them asked the fundamental question: what if technology's purpose was to actively cultivate human awareness, agency, and wellbeing?
So what is conscious computing?
It's our attempt to answer that question.
The term comes from a simple premise: information should appear when humans need it, in a form that serves not just their task but their entire being. Their emotional state. Their physiological regulation. Their capacity for attention and self-awareness.
This is not the same as personalization. Netflix personalizes your queue to keep you watching. Spotify personalizes your playlist to keep you listening. That's optimization for consumption.
Conscious computing is different. It asks:
- Are you in a state where this information will be useful, or will it add to your cognitive load?
- Does this interaction promote your awareness of your own state, or does it pull you further from it?
- Does this interface support your agency, your ability to make deliberate choices, or does it exploit automaticity?
The shift is from serving the user's intent to serving the user's being.
What this looks like in practice
At Prana Labs, we're building Vayu as a working example of these principles.
The breath is the interface. Respiratory rate, depth, and rhythm are the most accessible physiological signals of autonomic state. They encode stress, relaxation, focus, and emotional regulation in real time. And unlike heart rate or skin conductance, breathing is both automatic and voluntarily controllable. It's the bridge between the conscious and the unconscious.
Vayu doesn't just tell you to breathe. It senses your current state through HRV monitoring, breathing pattern analysis, and contextual awareness. It adapts the intervention to your physiological needs rather than running a one-size-fits-all protocol. And it helps you recognize your own patterns over time, building what researchers call interoceptive intelligence.
This is the core loop: sense, interpret, respond, cultivate awareness. The system serves your state. Over time, you get better at recognizing that state yourself. Eventually, you don't need the app anymore. That's the goal.
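To make the loop concrete, here is a minimal sketch of what "sense, interpret, respond" might look like in code. Everything here is a hypothetical illustration: the `Reading` fields, the thresholds, and the intervention strings are assumptions for the sake of the example, not Vayu's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One snapshot of sensed physiology (hypothetical fields)."""
    breaths_per_minute: float
    hrv_ms: float  # e.g. RMSSD, in milliseconds

def interpret(reading: Reading) -> str:
    # Crude, illustrative thresholds; a real system would personalize
    # these per user and adapt them over time.
    if reading.breaths_per_minute > 18 and reading.hrv_ms < 30:
        return "stressed"
    if reading.breaths_per_minute < 10:
        return "calm"
    return "neutral"

def respond(state: str) -> str:
    # The "respond" step adapts the intervention to the interpreted
    # state instead of running a one-size-fits-all protocol.
    interventions = {
        "stressed": "guide a slow, exhale-weighted breath",
        "calm": "stay in the periphery; no prompt",
        "neutral": "offer a brief awareness check-in",
    }
    return interventions[state]
```

The fourth step, cultivating awareness, is the part no code snippet captures: surfacing these interpretations back to the person so they learn to notice the state themselves.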
Why now?
Three things are converging that make this both possible and urgent.
First, the sensing infrastructure is ready. Wearables with HRV, respiratory rate, and EDA sensors are mainstream. Apple Watch, Oura, Whoop. Consumer-grade mmWave radar can detect breathing without contact. The hardware layer exists and it's getting cheaper every year.
Second, AI can now interpret physiological signals in context. Machine learning models can classify respiratory patterns, predict autonomic state shifts, and adapt interventions in real time. Five years ago this required a lab. Now it runs on a phone.
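As a toy illustration of how lightweight the first layer of that interpretation can be, here is a naive respiratory-rate estimator that counts upward zero crossings of a mean-centered breathing waveform. The function and parameter names are made up for this sketch; it is a stand-in for, not an example of, the learned models described above.

```python
def breaths_per_minute(signal: list[float], fs: float) -> float:
    """Estimate respiratory rate from a breathing waveform by counting
    upward zero crossings of the mean-centered signal.

    signal: raw samples of chest movement or airflow (illustrative)
    fs: sampling rate in samples per second
    """
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    # Each breath cycle produces one upward crossing through zero.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_min = len(signal) / fs / 60.0
    return crossings / duration_min
```

Real pattern classification adds noise filtering, motion-artifact rejection, and context, but the point stands: the raw computation fits comfortably on a phone.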
Third, the mental health crisis demands it. Anxiety, burnout, and attention fragmentation are at epidemic levels. The technology that contributed to this crisis has a responsibility and an opportunity to be part of the solution. Not by subtracting itself, but by reorienting what it serves.
Where we're going
Conscious computing is not a product category. It's a design philosophy, a set of principles for building technology that serves the whole person.
We're formalizing these principles through academic research at SFU's Metacreation Lab, alongside the practical implementation in Vayu. The goal is a framework that any developer, designer, or team building human-facing technology can adopt.
The question isn't whether technology will be pervasive. It already is. The question is whether it will be conscious.
Dhruv Adhia is the co-founder of Prana Labs and a PhD researcher at Simon Fraser University's Metacreation Lab, where he studies adaptive interfaces, biosignal processing, and what he calls "technology that breathes." Vayu is available on iOS and Android.