Case Study IV: Algorithmic Piano Quartet No. 2#

Algorithmic Piano Quartet No. 2 exists because the repository needed a place to continue experimenting without destabilizing No. 1. From a software point of view, this is a forked package. From a musical point of view, it is a controlled branch for testing new behavior. The decision to fork rather than rewrite No. 1 in place is important. It keeps the first proof-of-concept score intact while opening a second path for larger technical and musical changes.

The first large difference is the piano. No. 1 mainly treats piano lines as single-note event streams. No. 2 allows the piano to generate chords. Those chords are still derived from the configured pitch-class material and the configured hand ranges, but they are shaped with more detail. The left and right hands can have different chord sizes, different preferred spacing, and different span limits.

The second large difference is the occupancy model. In No. 1, the piano competes with the strings under the same density cap. In No. 2, the piano has its own occupancy budget. That gives the keyboard more room to act like a harmonic instrument rather than like two more melodic voices squeezed into the same global constraint.
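
The split budget can be sketched as a pair of independent caps, one consulted for string events and one for piano events. The function and parameter names here are illustrative, not the repository's actual API:

```python
def within_budget(
    active_string_tones: int,
    active_piano_events: int,
    is_piano: bool,
    max_simultaneous_tones: int,
    piano_max_simultaneous_events: int,
) -> bool:
    """Check the relevant occupancy cap for a proposed new event.

    Strings share one density ceiling; the piano is measured against
    its own, so a dense keyboard texture no longer crowds out strings.
    """
    if is_piano:
        return active_piano_events < piano_max_simultaneous_events
    return active_string_tones < max_simultaneous_tones
```

The point of the split is visible in the last case below: the string budget can be exhausted while the piano still has headroom, and vice versa.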

The third difference is the way left-hand spacing is treated. No. 2 now looks at whole chord shapes instead of simply growing a chord one pitch at a time from a seed. This makes it possible to prefer a wider overall left-hand span and to move away from narrow triadic shapes when the configuration asks for something more open.
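
The difference between growing a chord note by note and evaluating whole shapes can be illustrated with a standalone penalty helper. It mirrors the span-penalty idea used by the generator's chord builder, though this simplified version and its name are illustrative:

```python
def span_penalty(chord: tuple[int, ...], minimum_total_span: int) -> int:
    """Penalize chords narrower than the requested overall span.

    A greedy, note-by-note builder never sees this number because it
    only considers one interval at a time; scoring the finished shape
    lets the generator prefer open left-hand voicings.

    `chord` is assumed sorted ascending (MIDI pitches).
    """
    total_span = chord[-1] - chord[0]
    return max(0, minimum_total_span - total_span)
```

A close triad such as C-E-G spans only 7 semitones and is penalized when the configuration asks for a 10-semitone minimum, while an open voicing spanning a tenth passes with no penalty.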

The second quartet extends the same basic internal model as No. 1, but it expands the generation settings. The added fields define separate piano occupancy, separate left-hand and right-hand chord ranges, and separate preferred spacing rules. In other words, No. 2 does not invent a new architecture. It stretches the old one in a more piano-aware direction.

Score Preview#

First page preview of Algorithmic Piano Quartet No. 2

The config classes make that change easy to see. PartConfig still carries one instrument definition, including name, role, staff type, and range. RenderConfig still carries the SoundFont choices and sample rate. The main difference is GenerationConfig. In No. 2, this class carries the extra piano controls: separate occupancy for piano, separate chord-size limits for each hand, separate span limits, and separate preferred interval lists. Those fields are the link between the TOML file and the generator.

Core No. 2 configuration data classes.#
from dataclasses import dataclass

@dataclass(frozen=True)
class PartConfig:
    id: str
    name: str
    short_name: str
    family: str
    clef: str
    midi_channel: int
    midi_program: int
    midi_instrument: str
    range_low: int
    range_high: int
    staff_type: str = "single"
    role: str = "melodic"


@dataclass(frozen=True)
class GenerationConfig:
    measures: int
    time_signature: tuple[int, int]
    min_note_quanta: int
    max_note_quanta: int
    min_rest_quanta: int
    max_rest_quanta: int
    max_simultaneous_tones_per_quantum: int
    piano_max_simultaneous_events: int
    max_pitch_leap: int
    seed: int
    tempo_bpm: int
    measure_quanta: int
    piano_chord_probability: float
    piano_min_chord_tones: int
    piano_max_chord_tones: int
    piano_rh_min_chord_tones: int
    piano_rh_max_chord_tones: int
    piano_lh_min_chord_tones: int
    piano_lh_max_chord_tones: int
    piano_chord_span: int
    piano_rh_chord_span: int
    piano_lh_chord_span: int
    piano_rh_min_total_span: int
    piano_lh_min_total_span: int
    piano_preferred_chord_steps: tuple[int, ...]
    piano_rh_preferred_chord_steps: tuple[int, ...]
    piano_lh_preferred_chord_steps: tuple[int, ...]
    piano_min_chord_separation: int


@dataclass(frozen=True)
class RenderConfig:
    soundfont: str | None
    piano_soundfont: str | None
    strings_soundfont: str | None
    sample_rate: int


@dataclass(frozen=True)
class OutputConfig:
    basename: str
    label: str | None
    include_measures: bool
    include_tempo: bool
    include_seed: bool
    include_timestamp: bool
    timestamp_format: str


@dataclass(frozen=True)
class ProjectConfig:
    title: str
    composer: str
    output: OutputConfig
    pitch_classes: tuple[int, ...]
    generation: GenerationConfig
    render: RenderConfig
    parts: tuple[PartConfig, ...]

The chord builder is a good example of that evolution. This function expects a seed pitch, hand range, pitch-class pool, requested chord size, span limits, preferred interval steps, and a minimum separation between adjacent notes. Given those inputs, it searches candidate chord shapes and keeps the result inside the hand range:

Chord construction in Quartet No. 2.#
import random
from itertools import combinations

def _build_piano_chord(
    seed_pitch: int,
    low: int,
    high: int,
    pitch_classes: tuple[int, ...],
    minimum_tones: int,
    maximum_tones: int,
    max_span: int,
    minimum_total_span: int,
    preferred_steps: tuple[int, ...],
    minimum_separation: int,
    rng: random.Random,
) -> tuple[int, ...]:
    # Limit candidates to the span window around the seed, clipped to the hand range.
    chord_low = max(low, seed_pitch - max_span)
    chord_high = min(high, seed_pitch + max_span)
    candidates = [
        pitch
        for pitch in range(chord_low, chord_high + 1)
        if pitch % 12 in pitch_classes and pitch != seed_pitch
    ]
    if not candidates:
        return (seed_pitch,)

    def interval_quality(pitch: int) -> tuple[int, int]:
        interval = abs(pitch - seed_pitch)
        nearest_preferred = min(abs(interval - step) for step in preferred_steps)
        return (nearest_preferred, interval)

    candidates.sort(key=lambda pitch: (*interval_quality(pitch), pitch))
    candidate_pool = [seed_pitch, *candidates[: min(len(candidates), 12)]]
    available_size = min(maximum_tones, len(candidate_pool))
    if available_size <= 1:
        return (seed_pitch,)

    # Clamp the lower bound so a configured minimum larger than the available
    # pool cannot make randint raise.
    target_size = rng.randint(min(minimum_tones, available_size), available_size)
    valid_chords: list[tuple[tuple[int, ...], tuple[int, int, int]]] = []

    # Evaluate complete chord shapes: the seed plus each combination of candidates.
    for combo in combinations(candidate_pool[1:], target_size - 1):
        chord = tuple(sorted((seed_pitch, *combo)))
        if any((upper - lower) < minimum_separation for lower, upper in zip(chord, chord[1:])):
            continue
        total_span = chord[-1] - chord[0]
        if total_span > max_span:
            continue
        span_penalty = max(0, minimum_total_span - total_span)
        adjacent_penalty = sum(
            min(abs((upper - lower) - step) for step in preferred_steps)
            for lower, upper in zip(chord, chord[1:])
        )
        seed_penalty = sum(
            min(abs(abs(pitch - seed_pitch) - step) for step in preferred_steps)
            for pitch in chord
            if pitch != seed_pitch
        )
        valid_chords.append((chord, (span_penalty, adjacent_penalty + seed_penalty, -total_span)))

    if valid_chords:
        valid_chords.sort(key=lambda item: item[1])
        best_score = valid_chords[0][1]
        top_chords = [chord for chord, score in valid_chords if score == best_score][:4]
        return rng.choice(top_chords)

    return (seed_pitch,)
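
The candidate ordering inside the builder can be demonstrated in isolation: each candidate is ranked first by its distance to the nearest preferred interval, then by raw interval size. A minimal standalone version of that key function, with simplified names:

```python
def rank_candidates(
    seed_pitch: int,
    candidates: list[int],
    preferred_steps: tuple[int, ...],
) -> list[int]:
    """Sort candidates so pitches forming preferred intervals come first."""

    def key(pitch: int) -> tuple[int, int, int]:
        interval = abs(pitch - seed_pitch)
        nearest = min(abs(interval - step) for step in preferred_steps)
        # Tie-break on interval size, then on pitch for determinism.
        return (nearest, interval, pitch)

    return sorted(candidates, key=key)
```

With a seed of middle C (60) and preferred steps of a minor and major third, the major third (64) sorts first, the semitone neighbor next, and the octave last, which is exactly the bias the builder uses before trimming the candidate pool.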

No. 2 also has a larger configuration surface than No. 1 because it needs to expose these experiments clearly. That is a feature, not a flaw. The second quartet is the branch where new musical controls are tested. If a change proves useful and stable, it can later inform a more general future system.