Extended Vocal Techniques Spectrum (Music)

Overtone Singing

Structured resonances, controlled transitions, glissandi, and dynamic variations for expressive vocal synthesis.

Structured at the articulation level using documented production workflows and secured under the Proteus Standard™.

PREVIEW


This is a preview release of a dataset currently in production. Initial recording passes are available for evaluation; additional preview audio and full specifications will be added as they become available, with the complete dataset scheduled for upcoming delivery.

Overtone singing is a vocal technique in which a performer sustains a single fundamental pitch while selectively amplifying upper harmonics through precise shaping of the vocal tract. Rather than generating multiple pitches from independent sound sources, the technique exploits the natural harmonic structure of the human voice, allowing individual overtones to emerge as distinct perceptual elements above the sustained fundamental.

Acoustically, overtone singing provides a rare and highly controlled window into vocal tract resonance behavior, spectral shaping, and harmonic emphasis. Expressivity arises through fine-grained adjustments of tongue position, mouth shape, and breath control rather than changes in pitch or amplitude. These characteristics make overtone singing particularly well suited for articulation-level analysis and modeling, where disentangling source excitation from resonant filtering and gesture-driven spectral movement is essential.
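The source-filter behavior described above can be sketched numerically: a harmonic-rich source signal is passed through a narrow resonance centered on one harmonic, and that overtone comes to dominate the spectrum, much as vocal-tract shaping does in overtone singing. The sample rate, fundamental, harmonic count, and resonance bandwidth below are illustrative choices, not parameters of this dataset.

```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 16000                     # illustrative sample rate (Hz)
f0 = 150.0                     # illustrative fundamental (Hz)
t = np.arange(int(sr * 0.5)) / sr

# "Source": the first 20 harmonics at equal amplitude, a crude
# stand-in for harmonic-rich glottal excitation.
source = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 21))

# "Filter": a narrow band-pass resonance centered on the 8th harmonic,
# mimicking how vocal-tract shaping boosts a single overtone.
target = 8 * f0                # 1200 Hz
b, a = butter(2, [target - 60, target + 60], btype="band", fs=sr)
filtered = lfilter(b, a, source)

# After filtering, the boosted harmonic dominates the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(filtered))
peak_hz = np.fft.rfftfreq(len(filtered), 1 / sr)[np.argmax(spectrum)]
print(peak_hz)  # peak lands on the emphasized harmonic, ~1200 Hz
```

This is the disentangling the paragraph refers to: the source fixes the harmonic grid, while the resonance alone decides which overtone is heard.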

Content and Recording Details

An overview of what’s included in this dataset — from articulations and performance styles to session context and recording notes.

What's in this dataset

This preview dataset contains a curated subset of articulation-focused recordings demonstrating overtone singing techniques performed with controlled vocal production.
The material is intended to illustrate the dataset’s structural approach, capture quality, and technique taxonomy, rather than represent the full scope of the final release.

Included recordings focus on stable fundamental tone production, controlled manipulation of vocal tract resonances, and isolated overtone emphasis, captured without musical accompaniment to support expressive audio modeling, evaluation, and analysis workflows.

The full dataset will expand substantially on this foundation, with broader pitch coverage, extended technique variations, and a significantly larger corpus of recorded material exploring the full expressive and acoustic range of overtone singing.

Recording & Session Notes

All audio was recorded in a controlled studio environment using standardized capture, editing, and QC protocols consistent across the Harmonic Frontier Audio catalog.

Source material was captured at 32-bit float to preserve dynamic headroom and fine-grained spectral detail during recording and editing.
Final preview files are delivered as 24-bit PCM for consistency and downstream compatibility.
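The 32-bit float to 24-bit PCM delivery step mentioned above amounts to a clip, scale, and repack of each sample. The helper below (`float_to_pcm24`) is a hypothetical illustration of that conversion, not the tool used in production:

```python
import numpy as np

def float_to_pcm24(x: np.ndarray) -> bytes:
    """Clip float samples to [-1, 1], scale to the signed 24-bit range,
    and emit packed little-endian 3-byte frames (illustrative helper)."""
    ints = np.round(np.clip(x, -1.0, 1.0) * (2**23 - 1)).astype("<i4")
    # For values that fit in 24 bits, the low three bytes of each
    # little-endian int32 are the correct 24-bit two's-complement sample.
    return np.frombuffer(ints.tobytes(), dtype=np.uint8).reshape(-1, 4)[:, :3].tobytes()

print(float_to_pcm24(np.array([1.0])).hex())  # ffff7f: positive full scale
```

The symmetric scale factor (2^23 − 1 for both polarities) is one common convention; some tools instead map −1.0 to −2^23.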

Recordings were performed by a single vocalist to maintain consistency in vocal anatomy, resonance behavior, and articulation technique across all sessions.

Post-processing was limited to trimming, fade handling, and integrity checks. No creative processing, pitch correction, normalization, or spectral enhancement was applied beyond what was necessary for clean delivery.

Proteus Integrity Layers

A three-layer provenance and integrity framework ensuring verifiable chain-of-custody, tamper-evident delivery, and spectral fingerprinting for enterprise deployment. These layers are versioned and maintained to support long-term auditability, continuity, and enterprise compliance.

Layer I — Source Provenance

Layer II — Cryptographic Integrity

Layer III — Acoustic Fingerprinting

All full datasets from HFA include provenance metadata, session identifiers, and spectral integrity markers as part of The Proteus Standard™ for compliant enterprise deployment.
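The internals of The Proteus Standard™ are not specified here, but the general shape of Layers I and II, per-file digests bound to a session identifier, with a top-level digest making the manifest itself tamper-evident, can be sketched generically. All names below (`build_manifest`, `verify`, the session ID) are illustrative assumptions, not the actual Proteus implementation:

```python
import hashlib
import json

def build_manifest(files: dict, session_id: str) -> dict:
    """Hypothetical integrity manifest: SHA-256 per file, plus a digest
    over the sorted entries so edits to files or manifest are detectable."""
    entries = {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}
    payload = json.dumps({"session": session_id, "files": entries}, sort_keys=True)
    return {
        "session": session_id,
        "files": entries,
        "manifest_sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify(manifest: dict, files: dict) -> bool:
    """Re-hash delivered files and compare against the manifest."""
    return all(hashlib.sha256(files[name]).hexdigest() == digest
               for name, digest in manifest["files"].items())

m = build_manifest({"take_01.wav": b"...audio bytes...", "take_02.wav": b"...more bytes..."}, "SESSION-001")
print(verify(m, {"take_01.wav": b"...audio bytes...", "take_02.wav": b"...more bytes..."}))  # True
```

Layer III (acoustic fingerprinting) would add content-derived spectral signatures on top of this byte-level check, so that a re-encoded copy can still be matched to its source.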

Audio Demonstrations

A three-part listening benchmark: a mixed musical demo built from this dataset, the raw source clip, and an AI model’s attempt to reproduce the same prompt.

PRODUCED REFERENCE

A musical demonstration created by replacing a state-of-the-art AI-generated lead instrument with original source recordings from this dataset, then arranging and mastering the result to preserve musical context. This approach allows direct comparison between current-generation model output and real, rights-cleared acoustic source material.

RAW DATASET CLIP

Directly from the dataset: an isolated, unprocessed example of the source recording.

AI MODEL BENCHMARK (Suno v5 Pro Beta)


An unmodified output from a current-generation AI music model given the same melodic prompt. Because current generative music models do not support overtone-based spectral melody, this example substitutes a conventional high-pitched melodic instrument as a proxy. The result illustrates a structural limitation of today’s systems when attempting to approximate overtone singing techniques.

AI model approximations generated using publicly available state-of-the-art music generation systems.