gptme-attention-tracker

Attention tracking and routing plugin for gptme. Combines two complementary tools for context optimization:

  1. attention_history: Queryable record of what was in context during each session
  2. attention_router: Dynamic context loading with HOT/WARM/COLD tiers
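The HOT/WARM/COLD routing idea can be sketched as a score-threshold mapping. This is an illustrative sketch only: the thresholds, the per-turn decay factor, and the keyword boost are assumed values for demonstration, not the plugin's actual internals.

```python
# Illustrative sketch of HOT/WARM/COLD tier routing (not the plugin's actual logic).
# DECAY, the thresholds, and the +0.5 keyword boost are assumed values.

DECAY = 0.8          # assumed per-turn score decay
HOT_THRESHOLD = 0.6  # assumed: include full content above this
WARM_THRESHOLD = 0.2 # assumed: include header only above this

def update_scores(scores, turn_text, keywords_by_file):
    """Decay all scores, then boost files whose keywords appear in the turn."""
    words = set(turn_text.lower().split())
    for path in scores:
        scores[path] *= DECAY
        if words & set(keywords_by_file.get(path, [])):
            scores[path] = min(1.0, scores[path] + 0.5)
    return scores

def tier(score, pinned=False):
    if pinned or score >= HOT_THRESHOLD:
        return "HOT"    # include full content
    if score >= WARM_THRESHOLD:
        return "WARM"   # include header only
    return "COLD"       # exclude from context

scores = update_scores(
    {"lessons/workflow/git-workflow.md": 0.0},
    "How do I commit changes with git?",
    {"lessons/workflow/git-workflow.md": ["git", "commit", "branch"]},
)
print(tier(scores["lessons/workflow/git-workflow.md"]))  # -> WARM
```

Pinned files bypass the thresholds entirely, matching the pinned=True option shown in the usage example below.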

Installation

uv pip install -e plugins/gptme-attention-tracker

Configuration

Add to your gptme.toml:

[plugins]
paths = ["plugins/gptme-attention-tracker/src"]
enabled = ["gptme_attention_tracker"]

Tools Overview

Attention History

Tracks which files and lessons were in context during each session, enabling queries over past attention patterns and co-activation analysis.

Key Functions:

  - record_turn: record which files were HOT/WARM and which keywords activated during a turn
  - query_coactivation: find pairs of files that frequently appear in context together

Attention Router

Manages dynamic context loading based on attention scores, assigning each registered file to a HOT (full content), WARM (header only), or COLD (excluded) tier.

Features:

  - HOT/WARM/COLD context tiers
  - Keyword-based activation
  - Pinning to keep a file in context regardless of score

Key Functions:

  - register_file: register a file with activation keywords and optional pinning
  - process_turn: update attention scores from the text of a turn
  - get_context_recommendation: get the current HOT ("include_full") and WARM ("include_header") file lists

Usage Example

from gptme_attention_tracker.tools.attention_router import (
    register_file, process_turn, get_context_recommendation
)
from gptme_attention_tracker.tools.attention_history import (
    record_turn, query_coactivation
)

# Register files for attention tracking
register_file(
    "lessons/workflow/git-workflow.md",
    keywords=["git", "commit", "branch"],
    pinned=True
)

# Process each turn
result = process_turn("How do I commit changes with git?")

# Get context recommendations
rec = get_context_recommendation()
print("HOT files:", rec["include_full"])
print("WARM files:", rec["include_header"])

# Record turn for history
record_turn(
    turn_number=1,
    hot_files=rec["include_full"],
    warm_files=rec["include_header"],
    activated_keywords=["git", "commit"]
)

# Analyze co-activation patterns
pairs = query_coactivation()
for p in pairs[:5]:
    print(f"{p['file1']} <-> {p['file2']}: {p['count']} times")
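The pair counts that query_coactivation returns can be approximated with a self-contained sketch: count how often each pair of files appears in context during the same turn. The in-memory representation below is an assumption for illustration; the plugin persists its history elsewhere.

```python
# Illustrative sketch of co-activation counting over recorded turns.
# The real plugin persists history in .gptme/; here we count in memory.
from collections import Counter
from itertools import combinations

def count_coactivations(turns):
    """turns: list of sets of file paths that were in context together."""
    counts = Counter()
    for files in turns:
        for pair in combinations(sorted(files), 2):
            counts[pair] += 1
    return [
        {"file1": a, "file2": b, "count": n}
        for (a, b), n in counts.most_common()
    ]

turns = [
    {"a.md", "b.md", "c.md"},
    {"a.md", "b.md"},
    {"b.md", "c.md"},
]
pairs = count_coactivations(turns)
# a.md and b.md co-occur twice, so they rank first
```

Sorting by count descending mirrors the usage above, where the top five pairs are printed.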

State Files

The plugin stores its state files in the .gptme/ directory.

Token Savings

Attention routing cuts prompt size by including full content only for HOT files and a header-only summary for WARM files; COLD files are dropped from context entirely.
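A back-of-the-envelope estimate shows where the savings come from. The file names and token counts below are assumed numbers for illustration, not measured figures from the plugin.

```python
# Illustrative arithmetic for token savings (assumed numbers, not measurements).
# Baseline: without routing, every registered file is included in full each turn.
full_tokens = {"git-workflow.md": 1200, "testing.md": 900, "deploy.md": 1500}
header_tokens = {"git-workflow.md": 80, "testing.md": 60, "deploy.md": 90}

hot = ["git-workflow.md"]   # full content included
warm = ["testing.md"]       # header only
# deploy.md is COLD: excluded entirely

baseline = sum(full_tokens.values())
routed = sum(full_tokens[f] for f in hot) + sum(header_tokens[f] for f in warm)
savings = 1 - routed / baseline
print(f"{baseline} -> {routed} tokens ({savings:.0%} saved)")  # 3600 -> 1260 tokens (65% saved)
```

Actual savings depend on how many files stay HOT per turn and on the header-to-body size ratio of the registered files.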

Related