How Attune measures emotional compatibility
Attune uses EchoDepth — Cavefish's real-time facial Action Unit analysis engine — to generate an emotional profile for each user. This page explains the methodology, the model, and what the closed beta data showed.
The FACS framework
The Facial Action Coding System (FACS) is a comprehensive anatomical taxonomy of human facial expressions, originally developed by psychologists Paul Ekman and Wallace V. Friesen in 1978 and updated in 2002. FACS codes facial movements in terms of the underlying muscle groups that produce them, called Action Units (AUs).
FACS is the established standard for facial expression research across psychology, neuroscience, and human-computer interaction. It is peer-reviewed, cross-culturally validated, and widely used in academic and clinical settings. EchoDepth's emotion analysis is built on this foundation — not proprietary pseudoscience.
EchoDepth and Action Units
EchoDepth analyses 44 facial Action Units in real time using the device's front-facing camera. Each AU corresponds to a specific facial muscle movement — for example, AU1 (inner brow raise), AU6 (cheek raiser), AU12 (lip corner puller, commonly associated with genuine smiling).
Detection latency is under 200 milliseconds — fast enough to capture both voluntary expressions and involuntary micro-expressions, the brief, automatic facial movements lasting under 200 ms that are extremely difficult to control consciously. This is what makes emotional profiling resistant to deliberate manipulation: you can perform happiness, but the underlying micro-expression pattern of genuine joy is distinct and measurable.
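To make the distinction between sustained expressions and micro-expressions concrete, here is a minimal sketch of how per-frame AU activations might be represented and filtered by duration. The AU numbers and the 200 ms threshold come from this page; the class and function names are illustrative assumptions, not EchoDepth's actual API.

```python
from dataclasses import dataclass

@dataclass
class AUEvent:
    au: int             # FACS Action Unit number, e.g. 12 = lip corner puller
    intensity: float    # normalised activation strength, 0.0-1.0
    duration_ms: float  # how long the activation was sustained

def is_micro_expression(event: AUEvent, threshold_ms: float = 200.0) -> bool:
    """Micro-expressions are brief, involuntary activations under ~200 ms."""
    return event.duration_ms < threshold_ms

events = [
    AUEvent(au=12, intensity=0.8, duration_ms=950.0),  # sustained smile
    AUEvent(au=1, intensity=0.4, duration_ms=120.0),   # fleeting inner brow raise
]

# Separate deliberate expressions from involuntary micro-expressions
micro = [e for e in events if is_micro_expression(e)]
```

In this sketch only the 120 ms inner brow raise is classified as a micro-expression; the 950 ms smile is treated as a voluntary expression.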
All processing occurs on the user's device. No raw images, video frames, or biometric data are transmitted or stored at any point.
The VAD emotional model
EchoDepth maps AU activations to a three-dimensional emotional space using the Valence-Arousal-Dominance (VAD) model, the most widely used dimensional model of emotion in affective computing research.
- Valence — the positive-to-negative quality of an emotional response (pleasure vs. displeasure)
- Arousal — the intensity or activation level of an emotional response (excited vs. calm)
- Dominance — the sense of control or submissiveness in an emotional response (in control vs. overwhelmed)
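The mapping from AU activations to the VAD space can be sketched as a weighted combination, where each Action Unit contributes to each of the three dimensions. The weights below are invented for illustration — the actual EchoDepth mapping is not published.

```python
# Activation strengths for a handful of AUs (AU number -> 0..1 intensity)
au_activations = {6: 0.7, 12: 0.9, 1: 0.1, 4: 0.0}

# Hypothetical per-AU contributions to (valence, arousal, dominance)
AU_TO_VAD = {
    6:  (0.6, 0.3, 0.1),    # cheek raiser: strongly positive valence
    12: (0.8, 0.2, 0.2),    # lip corner puller: genuine-smile marker
    1:  (-0.2, 0.4, -0.3),  # inner brow raiser: mild surprise/distress
    4:  (-0.5, 0.5, 0.1),   # brow lowerer: negative valence, higher arousal
}

def vad_vector(activations: dict[int, float]) -> tuple[float, float, float]:
    """Combine AU activations into a single (valence, arousal, dominance) vector."""
    v = a = d = 0.0
    for au, intensity in activations.items():
        wv, wa, wd = AU_TO_VAD[au]
        v += wv * intensity
        a += wa * intensity
        d += wd * intensity
    return (v, a, d)
```

Given the smile-dominated activations above, this sketch produces a vector with clearly positive valence and moderate arousal — the kind of three-dimensional summary the page describes.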
A user's responses to the 90-second onboarding video are mapped across all three dimensions, producing an emotional vector — a mathematical fingerprint of how that person emotionally responds to a given range of stimuli. This vector is anonymised and cannot be reverse-engineered to reconstruct the original video or identify the user's face.
How matching works
Attune compares emotional vectors across the user pool using a similarity model. The goal is not to find identical emotional profiles, but complementary ones — people whose emotional responses to the world resonate with yours in meaningful ways.
Matches are introduced based on VAD vector proximity, weighted by user-set preferences (location, age range, relationship intent, identity). Photos are revealed only after a compatibility score is generated — not before. This is a deliberate design choice: it removes the visual bias that drives superficial swiping on conventional apps and surfaces people you are likely to connect with emotionally before physical appearance influences the decision.
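The matching step above can be sketched as a filter-then-rank pipeline: apply the user's hard preferences, then order the remaining candidates by VAD proximity. Plain cosine similarity is used here as a stand-in — Attune's actual similarity model and weighting scheme are not published.

```python
import math

def cosine_similarity(a, b):
    """Proximity of two VAD vectors: 1.0 = same direction, -1.0 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def candidate_matches(me, pool, preferences):
    """Filter by user-set preferences, then rank by VAD vector proximity."""
    eligible = [c for c in pool if preferences(c)]
    return sorted(eligible,
                  key=lambda c: cosine_similarity(me["vad"], c["vad"]),
                  reverse=True)

me = {"vad": (0.9, 0.4, 0.2)}
pool = [
    {"id": "a", "vad": (0.8, 0.5, 0.1), "age": 29},
    {"id": "b", "vad": (-0.7, 0.2, 0.6), "age": 31},
    {"id": "c", "vad": (0.85, 0.45, 0.15), "age": 45},
]

# Hypothetical preference filter: age range 25-35
matches = candidate_matches(me, pool, preferences=lambda c: 25 <= c["age"] <= 35)
```

Here candidate "c" is excluded by the age filter before similarity is ever computed, and "a" ranks above "b" because its emotional vector points in a similar direction.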
Closed beta results
In Attune's closed beta, participants were asked to rate their first conversation with each match on a simple scale: not interesting, somewhat interesting, genuinely interesting, or better than expected. 94% of matched users selected "genuinely interesting" or above after their first conversation.
This data is from Cavefish's internal beta programme. For questions about the methodology or to request further data, contact hello@attunechemistry.com.
Data and privacy
Emotional data is the most personal data Attune handles. The architecture is built to minimise what is stored and maximise user control:
- Raw video and images are processed on-device and never transmitted
- What is stored is an anonymised emotional vector — a mathematical representation with no biometric identifiers
- Emotional vectors are encrypted at rest
- Users can permanently delete their entire profile, emotional vector, and all associated data in one tap
- Attune is fully compliant with UK GDPR, including the right to erasure
For full details, read our Privacy Policy.
Frequently asked questions
What is the Facial Action Coding System?
The Facial Action Coding System (FACS) is a comprehensive anatomical taxonomy of human facial expressions developed by psychologists Paul Ekman and Wallace V. Friesen in 1978. It is the established standard for facial expression research across psychology, neuroscience, and human-computer interaction.
How does EchoDepth work?
EchoDepth analyses 44 facial Action Units in real time using the device's front-facing camera. It captures both voluntary expressions and involuntary micro-expressions lasting under 200 milliseconds, then maps them to emotional states using the Valence-Arousal-Dominance model. All processing occurs on-device.
What is the Valence-Arousal-Dominance model?
The Valence-Arousal-Dominance model is the most widely used dimensional model of emotion in affective computing. Valence measures positive-to-negative feeling, arousal measures intensity, and dominance measures sense of control. Together they produce a three-dimensional emotional fingerprint.
How well does Attune's matching work?
In Attune's closed beta, 94% of matched users rated their first conversation as genuinely interesting or better. The matching system compares emotional vectors using a similarity model that finds complementary profiles — not identical ones.