World Resonance Gallery

Kalimba (Mbira)

High-fidelity kalimba notes, resonant plucks, overtones, and dynamic articulations captured for tonal percussion modeling and hybrid ambient textures.

Structured at the articulation level using documented production workflows and secured under the Proteus Standard™.

PREVIEW

Category: Percussion and Resonant Struck · Metal

This dataset is currently in production and released here as a preview: initial recording passes are available for evaluation, and the full dataset, along with complete specifications, is scheduled for upcoming delivery.

The kalimba, also known as the mbira, is a plucked idiophone consisting of tuned metal tines mounted to a resonant body. Sound is produced when individual tines are displaced and released, generating a clear attack followed by a naturally decaying tone shaped by both tine vibration and body resonance. Expressive control is achieved through variations in pluck force, angle, and timing rather than sustained airflow or continuous excitation.

Acoustically, the kalimba occupies a focused and harmonically rich timbral space characterized by pronounced transients and distinct decay profiles. Expressivity emerges through articulation timing, gesture-level interaction with the tines, and subtle resonance behavior rather than wide dynamic or pitch modulation. Used in a variety of musical and cultural contexts worldwide, the instrument’s straightforward mechanical design and well-defined excitation–decay behavior make it well suited for articulation-level analysis and modeling, where onset clarity, resonance decay, and repeatable gesture behavior are important.
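To make the excitation and decay behavior described above concrete, the sketch below synthesizes a single tine-like note as an exponentially decaying sinusoid with one inharmonic overtone. It is an illustrative approximation only; the frequency, decay time, and overtone ratio are assumed placeholder values, not measurements taken from this dataset.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz

def tine_note(freq_hz: float = 440.0, decay_s: float = 1.2,
              duration_s: float = 2.0) -> np.ndarray:
    """Approximate one plucked-tine articulation: sharp onset, natural decay."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    # Fundamental with an exponential resonance decay.
    fundamental = np.sin(2 * np.pi * freq_hz * t) * np.exp(-t / decay_s)
    # One inharmonic overtone, decaying faster than the fundamental.
    # The 4.2 ratio and 0.3 level are placeholders, not dataset measurements.
    overtone = 0.3 * np.sin(2 * np.pi * 4.2 * freq_hz * t) * np.exp(-t / (0.4 * decay_s))
    return fundamental + overtone
```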

Content and Recording Details

An overview of what’s included in this dataset — from articulations and performance styles to session context and recording notes.

What's in this dataset

This preview dataset contains a curated subset of articulation-focused recordings from a kalimba (also known as the mbira), chosen to illustrate the dataset's structural approach, capture quality, and articulation taxonomy rather than to represent the full scope of the final release.

Included recordings emphasize clean note attacks, controlled decay behavior, and representative melodic gestures characteristic of plucked idiophone performance, recorded in isolation to support expressive audio modeling, evaluation, and analysis workflows.

The full dataset will expand substantially on this foundation, with broader pitch coverage, extended articulation variations, and a larger corpus of recorded material reflecting the instrument’s full expressive and resonant range.

Recording & Session Notes

All audio was recorded in a controlled studio environment using standardized capture, editing, and QC protocols consistent across the Harmonic Frontier Audio catalog.

Source material was captured at 32-bit float to preserve transient detail, resonance decay, and dynamic headroom during recording and editing.
Final preview files are delivered as 24-bit PCM for consistency and downstream compatibility.
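As an illustration of that delivery step, the snippet below converts a 32-bit float capture to a 24-bit PCM file using the Python soundfile library; the file names are placeholders, and this is a sketch of the conversion rather than the exact tooling used in production.

```python
import soundfile as sf

# Placeholder file names; any 32-bit float WAV capture works the same way.
audio, sample_rate = sf.read("source_take_float32.wav", dtype="float32")

# Write the delivery copy as 24-bit PCM, preserving the sample rate.
sf.write("delivery_take_pcm24.wav", audio, sample_rate, subtype="PCM_24")
```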

A single instrument was used consistently across all sessions to maintain timbral continuity and articulation stability.

Instrument details:
Kalimba (Mbira) — Newlam Wood Kalimba

Post-processing was limited to trimming, fade handling, and integrity checks. No creative processing, normalization, or dynamic shaping was applied beyond what was necessary for clean delivery.
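For reference, a minimal sketch of that kind of non-creative post-processing is shown below: silence trimming plus short linear fades to avoid clicks at the cut points. It assumes a mono NumPy buffer and illustrates the general approach, not the actual editing chain used for these files.

```python
import numpy as np

def trim_and_fade(audio: np.ndarray, sample_rate: int,
                  threshold: float = 1e-4, fade_ms: float = 5.0) -> np.ndarray:
    """Trim leading/trailing silence and apply short fades at the cut points."""
    above = np.nonzero(np.abs(audio) > threshold)[0]
    if above.size == 0:
        return audio  # nothing above the threshold; leave the buffer untouched
    trimmed = audio[above[0]:above[-1] + 1].copy()
    # Short linear fade-in / fade-out so the trims never introduce clicks.
    n_fade = min(int(sample_rate * fade_ms / 1000), trimmed.size // 2)
    if n_fade > 0:
        ramp = np.linspace(0.0, 1.0, n_fade)
        trimmed[:n_fade] *= ramp
        trimmed[-n_fade:] *= ramp[::-1]
    return trimmed
```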

Proteus Integrity Layers

A three-layer provenance and integrity framework ensuring verifiable chain-of-custody, tamper-evident delivery, and spectral fingerprinting for enterprise deployment. These layers are versioned and maintained to support long-term auditability, continuity, and enterprise compliance.

Layer I — Source Provenance

Layer II — Cryptographic Integrity

Layer III — Acoustic Fingerprinting

All full datasets from HFA include provenance metadata, session identifiers, and spectral integrity markers as part of The Proteus Standard™ for compliant enterprise deployment.
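The cryptographic integrity layer is described only at a high level here, so the example below is purely hypothetical: it shows one common way a tamper-evident delivery could be verified from a SHA-256 checksum manifest. The manifest schema and helper name are invented for illustration and do not reflect the actual Proteus Standard™ format.

```python
import hashlib
import json
from pathlib import Path

def verify_checksums(manifest_path: str) -> bool:
    """Compare each delivered file against a SHA-256 manifest (hypothetical schema)."""
    # Assumed manifest layout: {"files": {"relative/path.wav": "<sha256 hex digest>"}}
    root = Path(manifest_path).parent
    manifest = json.loads(Path(manifest_path).read_text())
    all_match = True
    for rel_path, expected in manifest["files"].items():
        digest = hashlib.sha256((root / rel_path).read_bytes()).hexdigest()
        if digest != expected:
            print(f"MISMATCH: {rel_path}")
            all_match = False
    return all_match
```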

Audio Demonstrations

A three-part listening benchmark: a mixed musical demo built from this dataset, the raw source clip, and an AI model’s attempt to reproduce the same prompt.

PRODUCED REFERENCE

A musical demonstration created by replacing a state-of-the-art AI-generated lead instrument with original source recordings from this dataset, then arranged and mastered to preserve musical context. This approach allows direct comparison between current-generation model output and real, rights-cleared acoustic source material.

RAW DATASET CLIP

Directly from the dataset: an isolated, unprocessed example of the source recording.

AI MODEL BENCHMARK (Suno v5 Pro Beta)

An unmodified output from a current-gen AI model given the same musical prompt. Included to illustrate where today’s systems still differ from real, recorded sources.

AI model approximations generated using publicly available state-of-the-art music generation systems.