10x Easier than JSON

TAI feels native in every language. No boilerplate. No schema headaches. Just import and log.


01 — The SDK

The "Magical" SDK

Write crash-safe, tamper-evident logs with a fluent API that feels like natural language.

Python


    from tai import TAI

    with TAI("mission.tai") as log:
        log.system("You are a helpful assistant.")
        log.user("Launch the rocket.")
        log.assistant("Launching in 3... 2... 1...")

        # Even if the process is killed HERE, 'mission.tai' is perfectly valid.

Rust


    use tai_rs::TaiWriter;

    let mut log = TaiWriter::new("mission.tai")?;
    log.system("You are a helpful assistant.")?;
    log.user("Launch the rocket.")?;
    log.assistant("Launching in 3... 2... 1...")?;

    // Even if the process is killed HERE, 'mission.tai' is perfectly valid.

Node.js / TypeScript


    import { TAI } from 'tai-ts';

    const log = new TAI('mission.tai');
    log.system("You are a helpful assistant.");
    log.user("Launch the rocket.");
    log.assistant("Launching in 3... 2... 1...");

    // Even if the process is killed HERE, 'mission.tai' is perfectly valid.
    log.close();

02 — Integration Modes

Engine Mode (Production)

100% reliable. Your code writes TAI programmatically via the SDK. The LLM can output anything (JSON, text, gibberish); your code converts it into valid TAI frames.

Use cases:

  • Production agent runs
  • Regulated industries (legal, medical, finance)
  • Long-running agent loops (100+ steps)
  • Audit trails for compliance
  • Any scenario where corruption is unacceptable
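As a sketch of the Engine Mode flow: the model's raw reply is normalized by your code before logging, so the log never depends on the model behaving. The `normalize_llm_output` helper below is hypothetical (not part of the SDK); the SDK write call is indicated in comments.

```python
import json

def normalize_llm_output(raw: str) -> str:
    """Engine Mode: accept whatever the model produced and reduce it
    to plain text that your code can log as a TAI frame."""
    try:
        # If the model happened to emit JSON, pull out a content field.
        data = json.loads(raw)
        if isinstance(data, dict) and "content" in data:
            return str(data["content"])
        return json.dumps(data)
    except json.JSONDecodeError:
        # Free text or gibberish: log it verbatim; the frame stays valid.
        return raw

# In real use, the normalized text is then written via the SDK:
# with TAI("run.tai") as log:
#     log.assistant(normalize_llm_output(raw_model_reply))
```

Because the SDK, not the model, produces the frames, reliability does not degrade over long agent loops.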

Direct Mode (Prototyping)

95-99% reliable. A one-line system prompt instructs the LLM to output valid TAI directly. Great for quick experiments and demos.

Use cases:

  • Quick experiments and prototyping
  • Single-response tasks
  • Educational examples
  • Interactive debugging sessions

Note: Not recommended for production systems requiring audit trails.
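A minimal sketch of the Direct Mode flow, for illustration only: `call_llm` is a hypothetical stand-in for your model client, and `TAI_DIRECT_PROMPT` is a placeholder for the one-line system prompt (its actual text is not shown here).

```python
# Direct Mode: the model itself emits TAI frames; your code just saves
# the reply. No SDK writer is involved, which is why reliability is
# 95-99% rather than 100%.

TAI_DIRECT_PROMPT = "..."  # the one-line TAI system prompt (placeholder)

def call_llm(system_prompt: str, user_message: str) -> str:
    # Stand-in for a real model call.
    return "(model reply, already formatted as TAI frames)"

reply = call_llm(TAI_DIRECT_PROMPT, "Launch the rocket.")
with open("demo.tai", "w") as f:
    f.write(reply)  # usually already valid TAI, but not guaranteed
```

For anything beyond a prototype, switch to Engine Mode so validity does not depend on model behavior.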


03 — Export Workflow

Extract insights from immutable logs into editable documents while maintaining cryptographic provenance.


    from tai import tai_export

    # Extract insights from an agent conversation into an editable document
    tai_export(
        source_filename="research_discussion.tai",
        selector="log.last.content",  # Extract the final answer
        output_filename="research_report.taimd",
        title="AI Research Findings"
    )

    # Result: Editable .taimd file with cryptographic provenance

The .tai log remains immutable. The .taimd document is editable and human-friendly, but maintains a cryptographic link back to the source.


04 — Secure Buffers

Store secrets, API keys, and configuration with crash-safe, tamper-evident security.

Creating a Secure Buffer


    from tai import TAI
    import os

    # Generate a signing key for tamper-evidence
    signing_key = os.urandom(32).hex()

    with TAI("secrets.tai", signer=signing_key) as buffer:
        # Store API keys
        buffer._write_block(
            "frame",
            "content",
            "sk-1234567890abcdef",
            type="buffer",
            key_name="openai_api_key",
            created_at="2025-11-19T00:00:00Z"
        )

        # Store database credentials
        buffer._write_block(
            "frame",
            "content",
            "postgresql://user:pass@localhost:5432/db",
            type="buffer",
            key_name="database_url",
            created_at="2025-11-19T00:00:00Z"
        )

        # Automatically signs every 100 frames (or call buffer.sign() manually)

The signer parameter enables automatic cryptographic signing. Any tampering with the buffer will be detected during verification.

Verifying Buffer Integrity


    from tai import TAI, tai_select, verify_tai_file

    # Extract the public key from your signing key
    public_key = "your_public_key_hex"

    try:
        # Verify the entire buffer hasn't been tampered with
        verify_tai_file("secrets.tai", public_key)
        print("Buffer integrity verified")

        # Safe to read secrets
        graph = TAI.parse("secrets.tai")
        api_key = tai_select(graph, "log[0].content")

    except ValueError as e:
        print(f"Tampering detected: {e}")
        # Do NOT use the secrets

If even a single character is modified, the signature verification fails. This provides tamper-evident security without requiring a database.
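The principle can be illustrated with a small standalone sketch. It uses HMAC-SHA256 from the Python standard library in place of the SDK's Ed25519 signatures (a deliberate simplification), but the property is the same: changing a single character breaks verification.

```python
import hmac
import hashlib

def sign(key: bytes, content: str) -> str:
    # Signature over the frame content; any change to the content
    # produces a completely different signature.
    return hmac.new(key, content.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, content: str, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, content), signature)

key = b"demo-signing-key"
frame = 'key_name="openai_api_key" content="sk-1234567890abcdef"'
sig = sign(key, frame)

assert verify(key, frame, sig)            # untouched frame: verifies
tampered = frame.replace("sk-1234", "sk-9999")
assert not verify(key, tampered, sig)     # single edit: detected
```

Ed25519 adds the asymmetric property on top of this: anyone holding the public key can verify the buffer, but only the holder of the signing key can produce valid signatures.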

Use Cases

  • API key storage for CI/CD pipelines
  • Database credentials in containerized apps
  • Secrets management in air-gapped systems
  • Configuration files requiring audit trails
  • Cryptographic key material storage

Security Properties

  • Crash-safe: Auto-heal ensures partial writes are valid
  • Tamper-evident: Ed25519 signatures detect any modification
  • Human-readable: Plain text for security audits
  • No database required: File-based, portable
  • Version control friendly: Track secret rotations in Git

05 — Editor Support

VS Code Extension

First-class support for TAI files in your favorite editor.

Syntax Highlighting

Read logs with crystal-clear syntax highlighting. Distinguish between control headers, content blocks, and cryptographic signatures at a glance.

  • Custom grammar for TAI format
  • Visual distinction for type="buffer"
  • Error highlighting for invalid frames

Auto-Heal Command

Fix truncated files instantly. The extension detects unclosed strings and missing footers caused by crashes and offers to repair them with one click.


    Cmd+Shift+P > TAI: Auto-Heal File
  

06 — Install

Python


    pip install tai-py
  

View on GitHub

Rust


    cargo add tai-rs
  

View on GitHub

Node.js


    npm install tai-ts
  

View on GitHub