nigiri

A Dynamic LLM Logic Tree with Human-in-the-Loop Editing

A proof-of-concept that explores the intersection of automated LLM reasoning and human interaction. While language models are powerful tools for generating and transforming content from a given input, knowledge and creation are usually not that simple.

New information usually builds on prior thoughts and information. Organizing it – from abstract meta-thoughts to concrete outputs – is a complex process. Visualizing the data and making it editable helps maintain sanity.

The Nature of the Problem

Working with LLMs often feels like orchestrating a cascade of thoughts. One prompt leads to another, each building upon previous outputs, forming a tree of interconnected reasoning. But what happens when we want to nudge this process in a different direction? How do we maintain insight and control?

[Figure: data tree]

The Approach

These prompt chains are visualized as a dynamic tree structure. Each node represents data generated by an LLM, connected to others through prompts. The tree structure is interactive - allowing human intervention at any point while maintaining the integrity of the reasoning chain.
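The tree described above can be sketched as a simple data model. Note this is an illustrative sketch, not nigiri's actual schema: the type and field names are assumptions.

```typescript
// Illustrative data model: each node stores the prompt that produced it,
// the LLM-generated (or human-edited) output, and links to its children.
interface LogicNode {
  id: string;
  prompt: string;       // the prompt applied to the parent's output
  output: string;       // data generated by the LLM, editable by a human
  children: LogicNode[];
}

// Walk the tree depth-first, e.g. to render the reasoning chain.
function walk(
  node: LogicNode,
  visit: (n: LogicNode, depth: number) => void,
  depth = 0
): void {
  visit(node, depth);
  for (const child of node.children) walk(child, visit, depth + 1);
}

const root: LogicNode = {
  id: "root",
  prompt: "Summarize the topic",
  output: "A summary...",
  children: [
    { id: "a", prompt: "List key points", output: "1. ...", children: [] },
  ],
};

walk(root, (n, d) => console.log("  ".repeat(d) + n.id));
```

Because every node records the prompt that produced its output, the chain stays reproducible: any subtree can be re-derived from its parent's data.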

How It Works

  1. Prompts are organized in parent/child relationships
  2. The system executes these prompts based on those relationships and its current state
  3. Users can edit the data at any node
  4. The system automatically regenerates dependent nodes, taking the new context into account
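The regeneration step above can be sketched as follows. This is a minimal, assumed implementation: `runPrompt` is a hypothetical stand-in for a real LLM call, and the recursive strategy (re-run the entire subtree below an edited node) is one plausible reading of how dependents get refreshed.

```typescript
interface Node {
  id: string;
  prompt: string;
  output: string;
  children: Node[];
}

// Hypothetical LLM call: combines the parent's output (context) with
// the node's own prompt. A real version would call an LLM API here.
async function runPrompt(prompt: string, context: string): Promise<string> {
  return `[${prompt} <- ${context}]`; // stub result for illustration
}

// After a human edits `node.output`, re-run every dependent node in the
// subtree so each one reflects the new context from its parent.
async function regenerate(node: Node): Promise<void> {
  for (const child of node.children) {
    child.output = await runPrompt(child.prompt, node.output);
    await regenerate(child); // propagate the change down the chain
  }
}
```

Usage: after editing `root.output`, calling `await regenerate(root)` walks the subtree top-down, so each child is re-derived before its own children are.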

"Nigiri" is an attempt to make complex subjects and realationshipts accessible. To give understand the structure of AI-assisted reasoning while keeping human judgment in the loop.

Tech Stack

ReactJS

NextJS

TailwindCSS

Prisma