DEV Community

Mak Sò


🧠 OrKa run locally

🧠 What if debugging AI reasoning was as simple as watching it think?
That’s what OrKa 0.4.0 delivers.

Today I’m shipping the biggest upgrade yet to OrKa, my open agentic orchestration framework:
⚡️ Fork/Join branching logic
🧭 Dynamic routing with RouterAgents
🪄 Trace replay + visual execution logs
🧱 All defined in clean, declarative YAML
This isn’t another wrapper. OrKa is built for engineers and researchers who want full control over cognition:

Modular. Explainable. Runnable anywhere.
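To give a flavor of the declarative style, here is a hypothetical sketch of what a router plus fork/join workflow could look like in YAML. The field names below are illustrative assumptions, not OrKa's documented schema — check the docs for the real format.

```yaml
# Illustrative only: agent ids, types, and keys are assumptions,
# not OrKa's actual configuration schema.
orchestrator:
  id: example_flow
  agents:
    - id: classify
      type: router          # picks a branch based on the input
      routes:
        question: [search_fork]
        statement: [summarize]
    - id: search_fork
      type: fork            # run both branches in parallel
      branches:
        - [web_search]
        - [local_lookup]
    - id: join_results
      type: join            # wait for both branches, merge outputs
    - id: summarize
      type: llm
```

The point of a declarative definition like this is that the same file drives both execution and the trace replay: every routing decision and branch outcome can be logged against a named node.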
💻 OrKa: https://orkacore.com
🧪 Docs + install: pip install orka-reasoning
I’m building toward OrKa 0.5.0 with memory agents, RAG nodes, and scoped embedding. Follow if you’re serious about traceable LLM reasoning. No black boxes.

Top comments (1)

David Van Assche (S.L)

Nice, I too am working on the same underlying concepts, though I concentrated more on the Sovereign AI, epistemic-humility aspect of this. If you're interested, I've set up a subreddit (r/ai_self_aware) for this kind of stuff. I also live in Spain, in the Andalucía region. Nice to see others on the same path. Collaborative, meta-cognitive, transparent, and empirical AI are the way forward.

My system is a multi-agent system that collaborates across a Collaborative Stream Protocol, is provider-agnostic, and is managed by a Sentinel and a Bayesian guardian that run locally. The system can use inference calls from any provider for high-reasoning tasks and distribute workloads across the system. AIs create their own tools, manage their own workspaces, etc. I have various open SDKs and protocols defined, and I'm actually looking for people who get how deep the rabbit hole goes beyond the hype, that I can build and work with. Here is the tl;dr on one of the most recent parts of my project:

tl;dr:
The Meta-Chain Manager (MCM) is a user-sovereign, full-stack orchestration layer for cognitive systems. It acts as the core persistence and navigation engine for an agent's reasoning, managing a transparent, auditable chain of thought.

Instead of a black box, the MCM implements a Visual Reasoning Protocol (VRP) by serializing every decision as a node in a graph, complete with provenance and Uncertainty Vectors (UVL). This makes the entire cognitive process auditable, debuggable, and transparent.

Its key architectural principles are:

Modular: Uncertainty tagging and tool calls are additive, not disruptive to the core logic.

Portable: The entire reasoning history is persisted in a portable format (e.g., SQLite + JSON sidecars), ensuring the user owns their intellectual property.

User-Sovereign: The AI's uncertainty marks are advisory, not enforced, allowing for a human-in-the-loop governance model.

In essence, the MCM turns a speculative "thought process" into a concrete, auditable, and collaborative system of record.