Human Vocality Primitives

Whisper & Aspiration

Whisper phonation with varying pressure, turbulence, and vowel shaping.

Structured at the articulation level using documented production workflows and secured under the Proteus Standard™.

This is a preview release provided via Hugging Face for evaluation purposes. Initial recording passes are available to assess capture quality, labeling structure, and dataset relevance. The full dataset will be delivered privately with complete Proteus provenance, integrity, and documentation upon licensing.

This dataset is complete and available for licensing. A public preview subset will be released via Hugging Face on a rolling basis.

Whisper and aspiration are airflow-driven vocal behaviors produced without sustained vocal fold vibration, relying instead on controlled breath, glottal configuration, and vocal tract shaping to generate sound. In these techniques, acoustic energy is created primarily through turbulent airflow rather than harmonic excitation, resulting in noise-shaped vocal output with no stable fundamental pitch. Expressive control emerges through subtle modulation of airflow intensity, mouth shape, articulatory configuration, and transitions into and out of silence, rather than through changes in loudness or pitch.

Acoustically, whisper phonation and aspiration occupy a broadband, noise-dominant timbral space characterized by diffuse spectral energy, formant-shaped filtering, and fine-grained temporal variation. Expressivity is conveyed through airflow texture, articulatory gesture timing, and the interaction between breath noise and the resonant properties of the vocal tract. These characteristics make whisper and aspiration particularly well suited for modeling unvoiced vocal behavior, breath-aware speech systems, and airflow-centric sound analysis, where precise control of onset behavior, noise structure, and repeatable articulatory gestures is essential.
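
As a quick illustration of these properties, the sketch below estimates spectral flatness and the proportion of voiced frames for a preview recording; whisper phonation should show high flatness and few confidently voiced frames. This is a minimal example assuming the librosa library, and "whisper_sustain.wav" is a placeholder file name rather than an actual file from the dataset.

```python
# Sketch: quantify the noise-dominant, unpitched character of a whisper take.
# Assumes librosa >= 0.10; "whisper_sustain.wav" is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("whisper_sustain.wav", sr=None, mono=True)

# Spectral flatness approaches 1.0 for noise-like frames and 0 for harmonic ones.
flatness = librosa.feature.spectral_flatness(y=y)
print(f"mean spectral flatness: {flatness.mean():.3f}")

# Fraction of frames with a confident f0 estimate; whisper should yield very few.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
print(f"voiced frames: {np.mean(voiced_flag):.1%}")
```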

Content and Recording Details

An overview of what’s included in this dataset — from articulations and performance styles to session context and recording notes.

What's in this dataset

This dataset contains a comprehensive collection of recordings capturing whisper phonation and aspiration-based vocal gestures, recorded in isolation to emphasize airflow-driven, unvoiced vocal behavior. The material is designed to document the structural organization, capture quality, and articulatory taxonomy of whisper and aspiration as foundational vocal primitives, rather than convey semantic speech content or linguistic meaning.

Included recordings emphasize sustained whisper phonation, fricative-based airflow gestures, variations in mouth shape and articulatory configuration, and controlled transitions into and out of silence. All material is presented in a neutral, non-performative context to support expressive voice modeling, speech research, and airflow-centric audio analysis workflows. Together, the dataset provides a structured, repeatable corpus suitable for studying and modeling unvoiced vocal behavior across a range of airflow and articulatory conditions.

Recording & Session Notes

All audio was recorded in a controlled studio environment using standardized capture, editing, and QC protocols applied consistently across the Harmonic Frontier Audio catalog.

Source material was captured at 32-bit float to preserve full dynamic headroom and minimize quantization artifacts during editing and processing. Final dataset files are delivered as 24-bit PCM for consistency and downstream compatibility.
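
For downstream use, the delivered files can be read back as floating-point arrays without any special handling. A minimal sketch, assuming the soundfile library and a placeholder file name; the subtype check simply confirms the 24-bit PCM delivery format described above.

```python
# Sketch: confirm the delivery format and load a dataset file for analysis.
# Assumes the soundfile library; "example_take.wav" is a placeholder file name.
import soundfile as sf

info = sf.info("example_take.wav")
print(info.samplerate, info.channels, info.subtype)  # subtype expected: "PCM_24"

# Reading as float32 rescales the 24-bit integer samples into [-1.0, 1.0].
audio, sr = sf.read("example_take.wav", dtype="float32")
print(audio.shape, audio.dtype, sr)
```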

A single performer and vocal source were used consistently across all sessions to maintain physiological continuity, articulatory stability, and repeatable airflow behavior.

Vocal source details:
Human voice — whisper phonation and aspiration techniques

Additional processing was limited to trimming, fade handling, and integrity checks. No creative processing, normalization, compression, or dynamic shaping was applied beyond what was necessary for clean, faithful delivery of the recorded material.
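
The exact integrity checks are not detailed here; as one illustration of file-level verification, the sketch below hashes each delivered file with SHA-256 and compares it against a checksum manifest. The manifest name and layout are assumptions made for the example, not part of the dataset documentation.

```python
# Sketch: verify delivered files against a SHA-256 checksum manifest.
# "checksums.txt" and its "<sha256>  <relative path>" layout are assumptions.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large audio files are not held in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

dataset_dir = Path("dataset")
for line in (dataset_dir / "checksums.txt").read_text().splitlines():
    expected, name = line.split(maxsplit=1)
    status = "OK" if sha256_of(dataset_dir / name) == expected else "MISMATCH"
    print(f"{status}  {name}")
```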

Proteus Standard Compliance

This dataset was recorded, documented, and released under The Proteus Standard™, Harmonic Frontier Audio’s framework for rights-cleared, provenance-audited audio data.

The Proteus Standard ensures:

• Performer-owned, contract-clean source material
• Transparent recording methodology and metadata
• Consistent capture, QC, and documentation practices across the catalog

Learn more about The Proteus Standard

• Layer I — Source Provenance
• Layer II — Cryptographic Integrity
• Layer III — Acoustic Fingerprinting

Audio Demonstrations

A three-part listening benchmark: a mixed musical demo built from this dataset, the raw source clip, and an AI model’s attempt to reproduce the same prompt.

PRODUCED REFERENCE

A musical demonstration created by replacing a state-of-the-art AI-generated lead instrument with original source recordings from this dataset; the result was then arranged and mastered to preserve musical context. This approach allows direct comparison between current-generation model output and real, rights-cleared acoustic source material.

RAW DATASET CLIP

Directly from the dataset: an isolated, unprocessed example of the source recording.

AI MODEL BENCHMARK (Suno v5 Pro Beta)

An unmodified output from a current-gen AI model given the same musical prompt. Included to illustrate where today’s systems still differ from real, recorded sources.

AI model approximations generated using publicly available state-of-the-art music generation systems.