
AI understands what humans say. Not what humans mean.

OUNG builds the cognitive architecture that captures human intent through the body — not just through language.

The Alignment Gap

"RLHF captures what humans say they prefer. Constitutional AI captures what humans write as principles. Neither captures how humans physiologically respond before conscious thought begins."

Every major approach to AI alignment today — RLHF, Constitutional AI, DPO — operates through a single channel: language. Humans express preferences through text labels, rankings, and written principles. But behavioral economics and neuroscience have established that human values are substantially guided by pre-conscious physiological responses — bodily signals that encode learned associations between situations and outcomes before deliberate reasoning occurs. The gap between stated and revealed preferences is not a minor discrepancy. It is a structural feature of human cognition. OUNG exists to bridge it.

From Prediction to Intention

The OVIS Cognitive Architecture for Extracting Human Internal States via the Principle of Least Action

Jae-Woong Kim — Oung, Singapore / Seoul — February 2026

Contemporary Large Language Models optimize next-token probability without grounding in physical causality or multimodal internal states. While effective for language generation, autoregressive models cannot distinguish between habitual behavioral patterns and active causal intent. We present OVIS, a cognitive architecture that operationalizes Active Intent as a quantifiable divergence from the Principle of Least Action within a learned latent manifold.
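This framing can be sketched in one line of notation. The symbols here are our own illustrative assumptions (a latent trajectory q(t) and a learned Cognitive Lagrangian L), not definitions taken from the paper: habitual behavior follows the stationary path of the action, and Active Intent shows up as a non-conservative residual that pushes the trajectory off that path.

```latex
S[q] = \int_{t_0}^{t_1} L\bigl(q(t), \dot q(t)\bigr)\, dt,
\qquad \delta S = 0 \ \text{(habitual path)},
\qquad
F_{\text{intent}} = \frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q} \neq 0 .
```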

  • Quantifies intent through real-time HRV and GSR biosignals
  • Integrates minimal action principles into cognitive computation
  • Establishes non-linguistic AGI benchmarks using embodied data
  • Active Intent defined as a non-conservative force against habitual trajectories in latent space
  • P-Link Biometric Cross-Attention: physiological signals modulate environmental representation
  • Intent Density metric (Dc): Mahalanobis divergence isolating deliberate decisions from habit
  • Proposed experimental protocol with ground-truth validation design
Figure 1. OVIS Architecture Overview
Figure 3. Intent as Physical Divergence in Latent Energy Landscape

OVIS Cognitive Architecture

Three Layers of Embodied Intelligence

OVIS (Omni-Visual Intent System) formalizes human intent as a measurable divergence from habitual behavior within a learned latent space — grounding AI cognition in physical law and human physiology.

Intent as Physics

Active intent is formalized as a non-conservative force within a latent state space governed by a Cognitive Lagrangian. The Intent Density metric (Dc) measures not what a user does, but the cognitive cost of choosing to deviate from habitual behavior.
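As a concrete illustration of how a Mahalanobis-style intent metric could behave, here is a minimal sketch. The Gaussian model of habit, the variable names, and the toy data are our assumptions for illustration, not OVIS internals:

```python
import numpy as np

def intent_density(z, habit_mean, habit_cov):
    """Mahalanobis distance of a latent state z from the distribution
    of habitual latent states. Large values suggest a deliberate
    deviation from habit rather than a routine action."""
    diff = z - habit_mean
    cov_inv = np.linalg.inv(habit_cov)
    return float(np.sqrt(diff @ cov_inv @ diff))

# Model "habit" as a Gaussian fit to past latent states (toy data).
habit_states = np.random.default_rng(0).normal(0.0, 1.0, size=(500, 4))
mu = habit_states.mean(axis=0)
cov = np.cov(habit_states, rowvar=False)

d_habitual = intent_density(mu, mu, cov)          # a typical state
d_deliberate = intent_density(mu + 5.0, mu, cov)  # far from habit
assert d_habitual < d_deliberate
```

The covariance normalization is what makes this a cost rather than a raw distance: a deviation along a direction where the user's behavior already varies widely is cheap, while the same deviation along a tightly constrained direction scores high.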

P-Link Protocol

Biometric Cross-Attention uses the observer's physiological state as the query to selectively attend to environmental features. Heart rate and skin conductance modulate how the world model perceives the environment — the body shapes the representation.
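The body-as-query idea can be sketched with plain attention arithmetic. This is a toy numpy illustration; the dimensions, projection matrices, and feature names are ours, not the P-Link specification:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8  # shared embedding dimension (assumed)

# Observer's physiological state (e.g. HRV, GSR features) -> one query.
bio_features = rng.normal(size=3)
W_q = rng.normal(size=(3, d))
query = bio_features @ W_q                    # (d,)

# Environmental features (e.g. detected scene elements) -> keys, values.
env_features = rng.normal(size=(5, d))        # 5 scene elements
W_k = rng.normal(size=(d, d))
W_v = rng.normal(size=(d, d))
keys, values = env_features @ W_k, env_features @ W_v

# Cross-attention: the body selects which parts of the scene matter.
weights = softmax(keys @ query / np.sqrt(d))  # (5,) attention over scene
attended = weights @ values                   # (d,) body-conditioned view
```

The design point is the direction of the query: instead of the environment attending to itself, the physiological channel decides where representational capacity goes, so the same scene is encoded differently depending on the observer's bodily state.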

Embodied Alignment

Where current methods capture stated preferences through language, OVIS captures revealed preferences — the pre-conscious somatic signals that guide human decisions before words form. A new data channel for AI value alignment.

The Full Stack: From Play to Alignment

OUNG is not a research lab. It is a company building every layer required to align superintelligent AI with embodied human experience — from the device in your hand to the intelligence it trains.

  1. OV1 — Capture

    A handheld AR device that transforms the physical world into an adventure. Scan real objects to generate items, quests, and avatars. Explore the real world as a living game — while OV1 quietly captures the physiological and behavioral data that matters: how your body responds to every decision you make.

  2. Lila — Connect

    A shared digital world where everything discovered through OV1 converges. Collect, trade, communicate, and build. Lila is the social and economic layer — a living ecosystem that gives meaning to exploration and generates rich, continuous interaction data at scale.

  3. OVIS — Understand

    The cognitive architecture that processes the data Lila and OV1 generate. P-Link encodes physiological signals into a JEPA world model, extracting the moments where humans act with genuine intent — not habit. This is where raw interaction becomes structured understanding of human values.

  4. Aligned Superintelligence — The Goal

    Everything converges here. An artificial superintelligence trained not on what humans say, but on how humans experience the world — grounded in millions of embodied interactions, physiological responses, and intentional decisions. Alignment built from the body up, not from language down.

Each layer feeds the next. The game funds the data. The data trains the architecture. The architecture aligns the intelligence.

OV1: Sensing Intent in the Real World

OV1 is a handheld consumer device that captures heart rate variability, skin conductance, and gaze during natural interaction. Engineered for real-world deployment, not the laboratory — peripheral physiological sensors integrated into a portable form factor for everyday use.

When deployed at scale, every user interaction generates behavioral and physiological data as a natural byproduct of engagement. This creates a scalable data pipeline where the volume and diversity of training data grow with the user base. Controlled experiments validate the science. Consumer deployment scales it.

About OUNG

OUNG is building the full stack of embodied AI: the device that senses human intent, the cognitive architecture that formalizes it, and the aligned intelligence that learns from it. Based in Singapore and Seoul.

Founded by Jae-Woong Kim, a brand strategist and project leader whose strategic repositioning work earned two companies CES Innovation Awards, including the CES 2025 Best of Innovation and a CES 2026 Innovation Award. His background spans brand strategy, business vision, and product design across AI, robotics, and consumer hardware. He is now building at the intersection of embodied cognition, AI safety, and consumer technology.