This demo is openly accessible for evaluation and exploration. Usage is intended for short sessions and architectural review, not prolonged production use.
Agentic AI Workflow Engine · Cognitive Conflict Simulator
⚠️ PROJECT STATUS: Advanced Architectural Showcase. Engineered strictly as a live technical sandbox for engineering teams and CTOs to evaluate my multi-agent orchestration skills.
This is a purpose-built demonstration of LLM reliability, streaming UX, and complex cognitive protocols. Deployed specifically for technical review and stress-testing under the hood, not as a commercial SaaS.
The system addresses a structural weakness of single-assistant AI tools: confirmation bias and shallow validation. Instead of reinforcing user assumptions, it intentionally introduces structured intellectual conflict.
The core idea is not 'better answers', but better thinking under pressure. It simulates multiple conflicting expert perspectives, enforces cognitive discipline through protocolized reasoning, and orchestrates debates as repeatable computational processes.
Intentionally avoids microservices to keep iteration speed high and agent logic co-located. Business logic lives inside API routes for maximum velocity at the current iteration stage.
An Agent is not a running process but a data-driven contract defined by 'ConfiguredExpert' (Identity, Cognitive Structure, Domain Bias). It exists only for one HTTP request.
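A minimal sketch of such a contract in TypeScript. The field names and validation rules below are assumptions for illustration; only the three facets (Identity, Cognitive Structure, Domain Bias) and the 'ConfiguredExpert' name come from the design above.

```typescript
// Hypothetical shape of the data-driven agent contract. An "agent" is pure
// data like this, instantiated per HTTP request and discarded afterwards.
interface ConfiguredExpert {
  // Identity
  name: string;
  role: string;
  // Cognitive Structure: archetype mix, assumed to sum to 100
  archetypeMix: { analyst: number; synthesizer: number; resonator: number };
  // Domain Bias: named traits on a 1-10 scale
  traits: Record<string, number>;
}

// Illustrative sanity check before a request is dispatched.
function validateExpert(e: ConfiguredExpert): boolean {
  const { analyst, synthesizer, resonator } = e.archetypeMix;
  const mixOk = analyst + synthesizer + resonator === 100;
  const traitsOk = Object.values(e.traits).every((v) => v >= 1 && v <= 10);
  return mixOk && traitsOk;
}

const skeptic: ConfiguredExpert = {
  name: "Skeptic",
  role: "Risk analysis",
  archetypeMix: { analyst: 70, synthesizer: 20, resonator: 10 },
  traits: { conformism: 2, openness: 8 },
};
```

Because the contract is plain JSON-serializable data, it can be stored, diffed, and regenerated by tools without any long-lived process.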
The prompting philosophy is anti-fluff, adversarial critique, and structured output. Models are trusted for content, not for structure: a custom state machine parses their streams in real time.
Context is reconstructed every turn. History is truncated (MAX_MSGS=30) to prevent runaway token costs and hallucination amplification.
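The truncation step can be sketched in a few lines. MAX_MSGS = 30 comes from the text; the message shape is an assumption, and the policy shown (drop oldest, no summarization) is the simplest reading of "truncated".

```typescript
// Hard cap from the design notes: only the most recent 30 messages survive.
const MAX_MSGS = 30;

interface Msg {
  role: "user" | "assistant";
  content: string;
}

// Keep the newest MAX_MSGS messages; older context is dropped outright,
// which bounds token cost deterministically on every reconstructed turn.
function truncateHistory(history: Msg[]): Msg[] {
  return history.slice(-MAX_MSGS);
}
```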
Instead of static system prompts, the backend compiles a unique persona for every turn based on the expert's archetype (Analyst/Synthesizer/Resonator) and current debate state.
A separate /api/judge meta-agent evaluates the full discussion thread to produce structured analysis, remaining independent of the debate participants.
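A possible request/response contract for that endpoint. The real schema is not published; every name below is an assumption, and only the endpoint path and the "full thread in, structured analysis out" behavior come from the text.

```typescript
// Hypothetical I/O contract for the /api/judge meta-agent.
interface JudgeRequest {
  brief: string; // the debate's goal anchor
  transcript: { speaker: string; text: string }[];
}

interface JudgeVerdict {
  strongestArgument: string;
  unresolvedConflicts: string[];
  recommendation: string;
}

// The judge sees the whole thread but is prompted independently of the
// debaters, so it cannot inherit their personas or biases.
function buildJudgePrompt(req: JudgeRequest): string {
  const thread = req.transcript
    .map((t) => `${t.speaker}: ${t.text}`)
    .join("\n");
  return `Brief: ${req.brief}\n\nThread:\n${thread}\n\nReturn structured analysis.`;
}
```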
Custom stream parsing across backend and UI separates the reasoning trace ([THOUGHTS]) from public speech, letting the user watch the process without waiting for full completion.
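A minimal sketch of that kind of streaming state machine. The marker grammar here is an assumption: text between [THOUGHTS] and [/THOUGHTS] routes to the reasoning channel, everything else to speech. The fiddly part, shown below, is that a marker can be split across incoming chunks.

```typescript
// Two output channels, toggled by markers in the token stream.
type Channel = "thoughts" | "speech";

class StreamParser {
  private state: Channel = "speech";
  private buffer = "";

  constructor(private emit: (channel: Channel, text: string) => void) {}

  // Feed raw chunks as they arrive from the model stream.
  push(chunk: string): void {
    this.buffer += chunk;
    for (;;) {
      const marker = this.state === "speech" ? "[THOUGHTS]" : "[/THOUGHTS]";
      const idx = this.buffer.indexOf(marker);
      if (idx === -1) break;
      if (idx > 0) this.emit(this.state, this.buffer.slice(0, idx));
      this.buffer = this.buffer.slice(idx + marker.length);
      this.state = this.state === "speech" ? "thoughts" : "speech";
    }
    // Emit everything except a possible partial marker at the tail.
    const safe = this.safeLength();
    if (safe > 0) {
      this.emit(this.state, this.buffer.slice(0, safe));
      this.buffer = this.buffer.slice(safe);
    }
  }

  // Call once the stream ends to release any held-back tail.
  flush(): void {
    if (this.buffer) this.emit(this.state, this.buffer);
    this.buffer = "";
  }

  // Hold back the longest buffer suffix that could still grow into a marker.
  private safeLength(): number {
    const marker = this.state === "speech" ? "[THOUGHTS]" : "[/THOUGHTS]";
    for (let k = Math.min(marker.length - 1, this.buffer.length); k > 0; k--) {
      if (this.buffer.endsWith(marker.slice(0, k))) return this.buffer.length - k;
    }
    return this.buffer.length;
  }
}
```

Emitting per chunk rather than per completion is what lets the UI render the thought trace live while the agent is still speaking.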
Users do not write prompts. They manipulate graded scales (Archetype Mix 0-100%, Traits 1-10 like 'Conformism' or 'Openness'). These parameters are stored as structured JSON, not text.
A runtime compiler translates numerical values into nuanced behavioral instructions using non-linear heuristic buckets. (e.g., 'Conformism: 2' maps to a specific 'Rebel' instruction block).
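The bucket idea can be sketched as a lookup over value ranges. The bucket boundaries and instruction strings below are illustrative assumptions; only the "Conformism: 2 maps to a 'Rebel' block" example comes from the text.

```typescript
// A bucket covers all values up to and including `max`.
type Bucket = { max: number; instruction: string };

// Deliberately non-linear: the low extreme gets its own behavior block
// instead of a gradual blend.
const CONFORMISM_BUCKETS: Bucket[] = [
  { max: 3, instruction: "Rebel: actively challenge the group consensus." },
  { max: 7, instruction: "Pragmatist: weigh consensus against evidence." },
  { max: 10, instruction: "Loyalist: defend the prevailing position." },
];

// Translate a numeric trait value into its behavioral instruction block.
function compileTrait(value: number, buckets: Bucket[]): string {
  const bucket = buckets.find((b) => value <= b.max);
  if (!bucket || value < 1) throw new Error(`trait value out of range: ${value}`);
  return bucket.instruction;
}
```

Because the mapping is data, adding a trait or retuning a boundary changes agent behavior without touching any prompt text by hand.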
The persona is not a persistent session. It is recompiled fresh for every cognitive turn. Changing a parameter mid-debate instantly alters the agent's next thought process.
User configuration (Personality) is always wrapped in an immutable System Kernel (Protocol). Users design the agent's mind, but the System enforces the 'Dirty Realism' format and cognitive discipline.
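One way to picture that wrapping: the compiled personality is always framed by a fixed kernel that the user cannot edit. The kernel text below is a placeholder, not the real protocol; only the precedence (System over Personality) and the 'Dirty Realism' name come from the text.

```typescript
// Immutable System Kernel: enforced on every turn, never user-editable.
const SYSTEM_KERNEL = [
  "You follow the Dirty Realism output format.",
  "You separate internal reasoning from public speech.",
].join("\n");

// Users shape only the personality section; the kernel frames it, so format
// and cognitive discipline always win over persona quirks.
function assembleSystemPrompt(compiledPersonality: string): string {
  return `${SYSTEM_KERNEL}\n\n--- PERSONALITY (user-configured) ---\n${compiledPersonality}`;
}
```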
Assistant accepts abstract tasks and proposes 1-N experts with reasoning. Upon confirmation, it uses internal tools to generate precise JSON configurations (archetypes, weights, traits).
Generator is available both pre-debate and mid-debate. The assistant accesses the active Brief to generate experts that specifically fill missing perspectives in the current discussion.
A dedicated dialog layer processes user intent first. It actively asks refining questions to disambiguate the goal before any debating agent is spawned.
Dialogue is synthesized into a structured Brief — a semantic anchor injected into every agent turn. This prevents drift and isolates the 'Goal' from the 'Conversation'.
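A sketch of a structured Brief and its injection into a turn. The field names are assumptions; what the text specifies is only that the Brief is structured, separate from the conversation, and injected into every agent turn as an anchor.

```typescript
// Hypothetical Brief shape distilled from the intent dialogue.
interface Brief {
  goal: string;
  constraints: string[];
  openQuestions: string[];
}

// The Brief leads every turn prompt, so agents anchor on the distilled goal
// rather than on whatever the raw conversation drifted toward.
function injectBrief(brief: Brief, turnPrompt: string): string {
  const header = [
    `GOAL: ${brief.goal}`,
    `CONSTRAINTS: ${brief.constraints.join("; ")}`,
    `OPEN QUESTIONS: ${brief.openQuestions.join("; ")}`,
  ].join("\n");
  return `${header}\n\n${turnPrompt}`;
}
```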