Extended Vocal Techniques Spectrum (Music)

Subharmonic Phonation/Vocal Fry (Musical)

Low-register non-lexical textures and controlled expressive gestures for hybrid timbre modeling.

Structured at the articulation level using documented production workflows and secured under the Proteus Standard™.

PREVIEW


This dataset is currently in production. Initial recording passes are available now for evaluation as a preview release; preview audio, full specifications, and the complete dataset are scheduled for upcoming delivery.

Subharmonic phonation and vocal fry are vocal production techniques characterized by irregular vocal fold vibration and the emergence of low-frequency components below the primary fundamental pitch. Rather than operating within stable modal phonation, these techniques involve altered vibratory regimes in which vocal fold closure patterns, airflow, and tension produce complex spectral structures and perceptual pitch lowering.

Acoustically, subharmonic and fry-based phonation provide a valuable window into non-linear vocal behavior, phonatory regime transitions, and low-frequency spectral dynamics. Expressivity arises through controlled manipulation of airflow, vocal fold engagement, and phonatory stability rather than melodic pitch movement or amplitude modulation. These characteristics make subharmonic phonation particularly well suited for articulation-level analysis and modeling, where understanding regime boundaries, instability, and source-filter interaction is essential.
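For readers who want to probe these behaviors directly, the sketch below shows one simple way to screen a clip for subharmonic energy: estimate the modal fundamental from the strongest spectral peak, then compare the magnitude near f0/2 against that peak. This is a minimal illustration using numpy and soundfile; the file name, frame length, and search band are hypothetical, and rigorous analysis would use a dedicated f0 tracker.

```python
# Minimal sketch: screening one frame of a recording for subharmonic
# energy below the modal fundamental. File name, frame length, and
# thresholds are illustrative assumptions, not dataset conventions.
import numpy as np
import soundfile as sf

audio, sr = sf.read("fry_example.wav")  # hypothetical preview clip
if audio.ndim > 1:                      # fold multichannel to mono
    audio = audio.mean(axis=1)

frame = audio[:8192] * np.hanning(min(len(audio), 8192))
spectrum = np.abs(np.fft.rfft(frame))
freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

# Take the strongest peak above 60 Hz as a rough modal-f0 estimate.
band = freqs > 60
f0 = freqs[band][np.argmax(spectrum[band])]

def mag_near(f, width=5.0):
    """Peak spectral magnitude within +/- width Hz of frequency f."""
    sel = np.abs(freqs - f) < width
    return spectrum[sel].max() if sel.any() else 0.0

# Subharmonic phonation adds energy at integer submultiples of f0
# (f0/2, f0/3, ...); a high ratio here suggests a subharmonic regime.
ratio = mag_near(f0 / 2) / mag_near(f0)
print(f"f0 ~ {f0:.1f} Hz, subharmonic-to-f0 magnitude ratio: {ratio:.2f}")
```

In clean modal phonation the ratio stays near zero; a pronounced f0/2 component is one signature of the regime transitions described above.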

Content and Recording Details

An overview of what’s included in this dataset — from articulations and performance styles to session context and recording notes.

What's in this dataset

This preview dataset contains a curated subset of articulation-focused recordings demonstrating subharmonic phonation and fry-based vocal behaviors. The material is intended to illustrate the dataset’s structural approach, capture quality, and technique taxonomy rather than represent the full scope of the final release.

Included recordings focus on controlled production of low-frequency phonation, irregular vocal fold vibration, and subharmonic resonance behavior, captured in isolation to support expressive audio modeling, evaluation, and analysis workflows.

The full dataset will expand substantially on this foundation, with broader pitch coverage, extended technique variations, and a significantly larger corpus of recorded material exploring the full acoustic range of subharmonic vocal production.

Recording & Session Notes

All audio was recorded in a controlled studio environment using standardized capture, editing, and QC protocols consistent across the Harmonic Frontier Audio catalog.

Source material was captured in 32-bit float to preserve dynamic headroom and fine-grained temporal and spectral detail during recording and editing. Final preview files are delivered as 24-bit PCM for consistency and downstream compatibility.
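As a rough illustration of that delivery step (not the catalog’s actual mastering pipeline), the conversion from a 32-bit float capture to 24-bit PCM can be sketched with the soundfile library; file names here are placeholders.

```python
# Illustrative float-to-PCM delivery step; file names are placeholders.
import soundfile as sf

audio, sr = sf.read("capture_32f.wav", dtype="float32")  # float master

# soundfile scales floats in [-1.0, 1.0] to integer PCM on write;
# the float master's headroom exists so nothing clips at this stage.
sf.write("delivery_24bit.wav", audio, sr, subtype="PCM_24")
```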

Recordings were performed by a single vocalist to maintain consistency in vocal anatomy, phonatory behavior, and technique execution across all sessions.

Post-processing was limited to trimming, fade handling, and integrity checks. No creative processing, pitch correction, normalization, or dynamic shaping was applied beyond what was necessary for clean delivery.
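For context, a conservative trim-and-fade pass of the kind described above might look like the following sketch, assuming numpy and soundfile; the silence threshold and fade length are illustrative values, not production settings.

```python
# Illustrative trim-and-fade pass; the threshold and fade length are
# example values only. Assumes a non-silent take.
import numpy as np
import soundfile as sf

audio, sr = sf.read("raw_take.wav")  # hypothetical session file
if audio.ndim > 1:
    audio = audio.mean(axis=1)

# Trim leading/trailing samples below a simple amplitude threshold.
active = np.flatnonzero(np.abs(audio) > 1e-4)
audio = audio[active[0]:active[-1] + 1]

# Short linear fades avoid clicks at the cut points.
n_fade = int(0.01 * sr)  # 10 ms
ramp = np.linspace(0.0, 1.0, n_fade)
audio[:n_fade] *= ramp
audio[-n_fade:] *= ramp[::-1]

sf.write("trimmed_take.wav", audio, sr)
```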

Proteus Integrity Layers

A three-layer provenance and integrity framework ensuring verifiable chain-of-custody, tamper-evident delivery, and spectral fingerprinting for enterprise deployment. These layers are versioned and maintained to support long-term auditability, continuity, and enterprise compliance.

Layer I — Source Provenance: session identifiers and provenance metadata establishing a verifiable chain-of-custody for every recording.

Layer II — Cryptographic Integrity: tamper-evident delivery backed by cryptographic verification of all dataset files.

Layer III — Acoustic Fingerprinting: spectral integrity markers supporting identification and long-term auditability of dataset material.

All full datasets from HFA include provenance metadata, session identifiers, and spectral integrity markers as part of The Proteus Standard™ for compliant enterprise deployment.
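By way of example, a Layer II-style check could be as simple as verifying delivered files against a published SHA-256 manifest, as sketched below. The manifest format and file names are assumptions made for illustration; the actual Proteus Standard™ mechanisms are not specified in this preview.

```python
# Illustrative integrity check: verify files against a SHA-256
# manifest. The "<hexdigest>  <filename>" line format is an
# assumption for this example, not the documented Proteus format.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

for line in Path("manifest.sha256").read_text().splitlines():
    digest, name = line.split(maxsplit=1)
    status = "OK " if sha256_of(Path(name)) == digest else "FAIL"
    print(f"{status} {name}")
```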

Audio Demonstrations

A three-part listening benchmark: a mixed musical demo built from this dataset, the raw source clip, and an AI model’s attempt to reproduce the same prompt.

PRODUCED REFERENCE

A musical demonstration created by replacing a state-of-the-art AI-generated lead instrument with original source recordings from this dataset, then arranged and mastered to preserve musical context. This approach allows direct comparison between current-generation model output and real, rights-cleared acoustic source material.

RAW DATASET CLIP

Directly from the dataset: an isolated, unprocessed example of the source recording.

AI MODEL BENCHMARK (Suno v5 Pro Beta)

An unmodified output from a current-generation AI model given the same musical prompt. Included to illustrate where today’s systems still differ from real, recorded sources.

AI model approximations generated using publicly available state-of-the-art music generation systems.