Layer One
Your profile & preferences
The first thing Attune does is not analyse your face. It asks you what you actually want.
Onboarding starts with a structured preferences layer: relationship intent (serious, casual, open, undecided), lifestyle compatibility (how you socialise, exercise, work, spend weekends), and personality self-assessment. These answers are used as the primary filter. Emotional response profiling only runs within a compatible pool — not across everyone.
This matters because emotional resonance in isolation is not sufficient for lasting compatibility. Two people can have strikingly similar emotional fingerprints and want entirely different things from a relationship. Profile compatibility is the gate. Emotional profiling is the ranking within that gate.
Consent checkpoint
Before any camera access is requested, you see a plain-language explanation of what will be captured, how it will be processed, and what is stored. Facial and behavioural analysis requires explicit opt-in. You can complete a profile and browse without it — but emotional profiling and video dates are unavailable until you consent.
Layer Two
Emotional response profiling
Step 1: Calibration baseline
Before you watch anything, Attune captures 30 seconds of your neutral resting face. This is your personal baseline — the emotional starting point against which all subsequent reactions are measured. Without a baseline, there is no meaningful signal; one person's neutral is another person's smile. The calibration phase makes individual profiling possible.
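As a rough illustration of how such a baseline can work, the sketch below z-scores later AU readings against the statistics of the neutral capture. The array shapes, frame rate, and function names are assumptions made for the example, not Attune's implementation:

```python
# Illustrative sketch only: names like au_frames and AU_COUNT are
# assumptions, not Attune's actual code.
import numpy as np

AU_COUNT = 44            # action units tracked per frame
FPS = 30                 # assumed camera frame rate
CALIBRATION_SECONDS = 30

def compute_baseline(au_frames: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """au_frames: (frames, AU_COUNT) AU intensities from the 30s neutral capture.
    Returns the per-AU mean and standard deviation of the resting face."""
    assert au_frames.shape == (FPS * CALIBRATION_SECONDS, AU_COUNT)
    return au_frames.mean(axis=0), au_frames.std(axis=0) + 1e-6

def normalise(reaction: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Express a reaction relative to this user's own neutral baseline,
    so 'activation' means deviation from *their* resting face."""
    return (reaction - mean) / std
```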
Step 2: Labelled stimulus testing
You're shown a series of short standardised video clips, carefully selected to elicit genuine responses across a range of emotional dimensions. After each clip, you give a simple explicit rating: like, neutral, or dislike.
The label is not decoration; it is what makes the emotional data meaningful. Without it, an AU response pattern is ambiguous: the same micro-expression of surprise can occur with delight or with disgust. The user's label maps the facial reaction to the actual felt preference. The AU data and the rating are stored together as a labelled pair.
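A labelled pair might be represented like the hypothetical record below; the field names and types are illustrative only:

```python
# Hypothetical record shape for one labelled pair; this is a sketch,
# not Attune's schema.
from dataclasses import dataclass
from enum import Enum
import numpy as np

class Rating(Enum):
    LIKE = 1
    NEUTRAL = 0
    DISLIKE = -1

@dataclass
class LabelledStimulusResponse:
    stimulus_id: str        # which standardised clip was shown
    au_series: np.ndarray   # (frames, 44) baseline-normalised AU intensities
    rating: Rating          # the user's explicit like/neutral/dislike label
```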
What EchoDepth actually captures
EchoDepth analyses 44 facial Action Units, the individual muscle movements that combine to form every human facial expression, as defined by the Facial Action Coding System (FACS), the established standard in affective science research. But the unit of analysis is not a snapshot. The system captures:
Reaction timing — how quickly an AU activates after stimulus onset. Fast activation is associated with genuine involuntary response; delayed activation is more often deliberate.
Intensity — the magnitude of activation. A brief faint smile and a full Duchenne smile carry different meanings even when the same AU is involved.
Duration — how long the expression is held. Longer-held expressions reflect sustained emotional engagement; brief flickers point to transient automatic responses.
Recovery — how quickly the face returns to baseline after a reaction. Emotional persistence — the tail of a response — is a distinct signal from the peak.
This temporal data is stored as a time-series per stimulus, not as a single averaged value. An emotional profile is a shape over time, not a number.
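To make the four temporal signals concrete, here is a minimal sketch that extracts them from a single AU's baseline-normalised intensity series. The activation threshold and frame rate are assumed values, not EchoDepth's:

```python
# Minimal sketch of the four temporal features described above. The z-score
# threshold (ACTIVE) and frame rate are assumptions for illustration.
import numpy as np

FPS = 30
ACTIVE = 1.5  # z-score above which the AU counts as activated (assumption)

def temporal_features(au: np.ndarray) -> dict[str, float]:
    """au: 1-D z-scored intensity series for a single AU, stimulus-aligned."""
    active = au > ACTIVE
    if not active.any():
        return {"onset_s": np.inf, "peak": 0.0, "duration_s": 0.0, "recovery_s": 0.0}
    onset = int(np.argmax(active))             # reaction timing: first activation
    peak = float(au.max())                     # intensity: magnitude of activation
    duration = int(active.sum())               # duration: frames held above threshold
    last_active = len(au) - int(np.argmax(active[::-1])) - 1
    # Recovery: frames from the last active frame back down near baseline
    # (series end if the face never fully settles within the clip).
    below = np.nonzero(au[last_active:] <= 0.5)[0]
    recovery = int(below[0]) if below.size else len(au) - last_active
    return {
        "onset_s": onset / FPS,
        "peak": peak,
        "duration_s": duration / FPS,
        "recovery_s": recovery / FPS,
    }
```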
The VAD model
AU activations across time are mapped to a three-dimensional emotional space: Valence (positive vs. negative), Arousal (intensity of engagement), and Dominance (sense of control vs. overwhelm). This produces a trajectory — not a static point — for each stimulus. The complete profile is an anonymised time-series vector, processed on-device. No raw video or images leave the user's phone at any point.
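The sketch below shows the shape of such a mapping. The weight matrix is deliberately a placeholder; a real AU-to-VAD mapping would be learned or drawn from the affective-science literature, not hard-coded:

```python
# Hedged illustration: W is a made-up linear map, standing in for whatever
# AU-to-VAD mapping the real system uses.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(44, 3))  # placeholder weights: 44 AUs -> (V, A, D)

def vad_trajectory(au_series: np.ndarray) -> np.ndarray:
    """au_series: (frames, 44) normalised AU intensities.
    Returns (frames, 3): a valence/arousal/dominance point per frame,
    i.e. a trajectory through emotional space rather than a single score."""
    return au_series @ W
```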
Profile-first matching
Attune's matching engine runs in sequence:
1. Filter by profile compatibility — relationship intent, lifestyle, identity preferences. Hard filters. Non-negotiable.
2. Rank by emotional response similarity — VAD vector proximity within the compatible pool. Complementary emotional patterns, not identical ones.
3. Validate and improve through interaction outcomes — every date, every mutual yes or no, feeds back into the ranking model. The system learns what actually works.
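In code, the first two stages might look like the sketch below. The profile fields and the distance metric are stand-ins; in particular, plain Euclidean proximity is used here where the real engine would score complementarity as well as similarity:

```python
# Compressed sketch of the two-stage engine: hard profile filters first,
# then ranking by emotional-profile distance inside the surviving pool.
# All field names and the metric are assumptions for illustration.
import numpy as np

def compatible(a: dict, b: dict) -> bool:
    """Stage 1: hard filters. No emotional data is consulted here."""
    return (a["intent"] == b["intent"]
            and bool(a["lifestyle"] & b["lifestyle"])   # any shared lifestyle tags
            and b["identity"] in a["seeking"])

def rank_matches(user: dict, pool: list[dict]) -> list[dict]:
    """Stage 2: rank the compatible pool by VAD-trajectory proximity.
    Flattened Euclidean distance stands in for a real trajectory metric."""
    candidates = [c for c in pool if compatible(user, c)]
    return sorted(
        candidates,
        key=lambda c: float(np.linalg.norm(user["vad"] - c["vad"])),
    )
```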
Layer Three
Interaction analysis
The video date
Attune doesn't rely on external tools for your first date. The video date is built inside the platform — a private, two-person video environment where both users explicitly consent to recording and real-time analysis before the session begins. This is not just a product convenience. It is required for data quality and compliance.
The in-platform environment means Attune can capture interaction data that external tools cannot provide: how emotional expression changes when talking to this specific person, not to a video clip. That is a meaningfully different signal. Full details on how video dates work →
Post-date feedback: double-blind
After every video date, each person is privately asked one question: second date, yes or no? The response is sealed. Neither person sees the other's answer unless both say yes. If only one person selects yes, neither is told which way the other voted. This eliminates the social pressure that distorts self-reporting and produces clean, honest outcome data.
This mechanism is the core of Attune's training dataset. Every mutual yes or no, combined with the emotional data from the date itself, is a labelled outcome that the system uses to improve future match quality.
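The double-blind rule itself is simple enough to state in a few lines. This is an illustrative sketch of the information release, not Attune's code:

```python
# Sketch of the double-blind rule: answers are sealed, and the only
# information ever released is a mutual yes.
from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"

def reveal(a: Answer, b: Answer) -> tuple[str, str]:
    """Each person learns 'matched' only when both said yes; in every other
    case both see the same neutral outcome, so a lone yes is never exposed."""
    if a is Answer.YES and b is Answer.YES:
        return ("matched", "matched")
    return ("no match", "no match")
```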
What the post-date analysis captures
After the session, the recorded interaction is processed to extract:
AU patterns
Smiles, tension indicators, negative affect markers — tracked across the conversation arc, not just at peak moments.
Emotional synchrony
How closely the two participants' expressions align over time. High synchrony is a strong predictor of felt connection; a computation sketch follows this list.
Conversation balance
Turn-taking patterns, listen-to-speak ratio, interruptions. Imbalanced conversations reliably predict poor outcomes.
Emotional trajectory
Did energy increase or decrease as the conversation went on? The arc matters as much as any individual moment.
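As referenced above, here is one way the synchrony feature could be computed: the mean of windowed Pearson correlations between the two participants' valence series. The window length is an assumption:

```python
# Hedged sketch of emotional synchrony as mean windowed correlation between
# the two participants' aligned valence series. Window size is assumed.
import numpy as np

def synchrony(valence_a: np.ndarray, valence_b: np.ndarray, window: int = 150) -> float:
    """Slide non-overlapping windows over the two series and average the
    per-window Pearson correlations. Near 1.0 means the faces move together;
    near 0 means independent; negative means opposed."""
    scores = []
    for start in range(0, len(valence_a) - window + 1, window):
        a = valence_a[start:start + window]
        b = valence_b[start:start + window]
        if a.std() > 0 and b.std() > 0:          # skip flat windows
            scores.append(float(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(scores)) if scores else 0.0
```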
These interaction features, alongside the mutual yes/no outcome label, become training data. The goal of Attune's matching model is not to determine compatibility directly — it is to improve the probability of a mutual yes over time, through real-world data, not algorithmic assumption.
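Conceptually, the loop reduces to fitting a model that maps interaction features to the probability of a mutual yes. The sketch below uses logistic regression and invented feature values purely to show the shape of the problem; the real model and features are unspecified:

```python
# Sketch of the feedback loop as stated: interaction features in, mutual-yes
# probability out. Logistic regression is a stand-in; the data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [synchrony, listen_to_speak_ratio, interruptions, trajectory_slope]
X = np.array([
    [0.71, 0.95, 2, +0.3],
    [0.22, 0.40, 9, -0.5],
    [0.63, 1.10, 3, +0.1],
    [0.15, 0.30, 12, -0.2],
])
y = np.array([1, 0, 1, 0])  # label: 1 = mutual yes, 0 = anything else

model = LogisticRegression().fit(X, y)
# Ranking future candidates by predicted P(mutual yes) is the stated goal.
p_mutual_yes = model.predict_proba(X)[:, 1]
```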
A note on the timeline
The first version of Attune will prioritise three things: collecting clean labelled data, delivering a smooth video date experience, and building the feedback loop. Predictive accuracy comes from training the model on real outcomes over time, not from the initial release. We are building a system that gets better with use, not one that claims to be finished.
Be part of building it.
Early users aren't just getting early access — they're providing the data that makes the system work. Attune launches in the UK in Q3 2026.
Join the waitlist →
Read the methodology →