- Pending: Auto-advance threshold — when can tasks skip human approval?
- Pending: Priority calculation for incoming tasks
# User Tasks

## Summary
The glue between task arrival and autonomous execution — classifies incoming tasks, decides human vs autonomous handling, and dispatches to the orchestrator.
## Problem / Motivation
Tasks arrive from multiple channels: /idea skill, quick-notes, GitHub Issues (FR-065), Telegram (FR-069), email (FR-012) — but nothing connects them to the orchestrator.
FR-025 (Inbox Zero) processes raw input into FRs. FR-056 (Orchestrator) polls for planned FRs. But the journey new → planned → dispatched has no owner.
Without a dispatcher, every task requires manual /approve → /implement — defeating the purpose of autonomy.
Different tasks need different routing: a typo fix can go straight to autonomous execution, a new feature needs design review first, a vague idea needs clarification.
The system needs a single entry point that all channels feed into, with consistent classification and routing logic.
## Proposed Solution
A Task Dispatcher that:

- Watches all intake channels (vault inbox, GitHub Issues, Telegram, email)
- Classifies each task (complexity, risk, type, completeness)
- Routes based on classification:
  - Simple + low-risk → auto-advance to planned, dispatch to orchestrator
  - Medium complexity → create FR, queue for human review
  - Complex / high-risk → create FR + flag for design review (FR-008)
  - Incomplete / vague → request clarification from source channel
- Dispatches approved tasks to the orchestrator (FR-056)
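The routing rules above can be sketched as a pure decision function. This is an illustrative sketch, not the actual dispatcher: the `Classification` fields and `Route` names are assumptions about what the classifier would produce.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_ADVANCE = "auto-advance"    # dispatch straight to orchestrator
    HUMAN_REVIEW = "human-review"    # create FR, queue for approval
    DESIGN_REVIEW = "design-review"  # create FR + flag for FR-008
    CLARIFY = "clarify"              # ask the source channel for details

@dataclass
class Classification:
    complexity: str   # "simple" | "medium" | "complex"
    risk: str         # "low" | "medium" | "high"
    complete: bool    # enough detail to act on?

def route(c: Classification) -> Route:
    # Incompleteness trumps everything: we can't act on a vague task.
    if not c.complete:
        return Route.CLARIFY
    # High complexity or risk always goes through design review.
    if c.complexity == "complex" or c.risk == "high":
        return Route.DESIGN_REVIEW
    # Only the clearly safe case skips the human.
    if c.complexity == "simple" and c.risk == "low":
        return Route.AUTO_ADVANCE
    return Route.HUMAN_REVIEW
```

Keeping routing as a pure function of the classification makes the policy easy to audit and unit-test, independent of how tasks arrive.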
## Open Questions

### 1. Intake Model

**Question:** How does the dispatcher discover new tasks?
| Option | Description |
| --- | --- |
| A) Polling + webhooks | Poll vault/inbox periodically, receive webhooks from GitHub/Telegram |
| B) Polling only | Simple, works everywhere, but slower |
| C) Event-driven only | Real-time but requires infrastructure for every channel |
**Recommendation:** Option A — polling for vault (low-tech), webhooks for external channels (real-time).

**Decision:**
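A minimal sketch of the hybrid intake, assuming a single queue fed by both paths. The function names (`poll_vault`, `on_webhook`) and the task dict shape are illustrative, not a defined interface; a real deployment would run the poller on a timer and wire `on_webhook` to an HTTP endpoint.

```python
import glob
import os
import queue

# Single entry point all channels feed into.
tasks: "queue.Queue[dict]" = queue.Queue()
_seen: set = set()  # paths already enqueued, to avoid duplicates

def poll_vault(inbox_dir: str) -> int:
    """Scan the vault inbox for new .md files; enqueue unseen ones.

    Returns the number of newly discovered tasks.
    """
    found = 0
    for path in sorted(glob.glob(os.path.join(inbox_dir, "*.md"))):
        if path not in _seen:
            _seen.add(path)
            tasks.put({"source": "vault", "path": path})
            found += 1
    return found

def on_webhook(channel: str, payload: dict) -> None:
    """Called by the HTTP layer when GitHub/Telegram delivers an event."""
    tasks.put({"source": channel, **payload})
```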
### 2. Auto-Advance Criteria

**Question:** When can a task skip human approval and go straight to the orchestrator?
| Option | Description |
| --- | --- |
| A) Rule-based + escalation policy | Combine task classification with FR-059 escalation rules |
| B) Never auto-advance | Human always approves (safe but slow) |
| C) Always auto-advance | Full autonomy (fast but risky) |
**Recommendation:** Option A — leverage FR-059's risk framework. Example: bug fix in registered project with tests → auto-advance. New feature → human review.

**Decision:**
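The recommended rule can be expressed as a small predicate. The task fields and the never-auto-advance set are assumptions standing in for FR-059's actual escalation rules.

```python
# Task types that FR-059-style escalation would always send to a human.
# This set is a placeholder, not the real FR-059 rule list.
ESCALATION_NEVER_AUTO = {"new-feature", "schema-change", "security"}

def can_auto_advance(task: dict) -> bool:
    """Rule-based auto-advance check (Option A).

    Example from the recommendation: a bug fix in a registered
    project with tests may skip human approval.
    """
    if task.get("type") in ESCALATION_NEVER_AUTO:
        return False
    return (
        task.get("type") == "bug-fix"
        and task.get("project_registered", False)
        and task.get("has_tests", False)
    )
```

Defaulting missing fields to `False` keeps the rule conservative: anything the classifier can't vouch for falls back to human review.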
### 3. Clarification Flow

**Question:** How does the dispatcher request clarification for vague tasks?
| Option | Description |
| --- | --- |
| A) Reply via source channel | Telegram task → reply on Telegram; GitHub Issue → comment on issue |
| B) Always create FR with questions | Create FR in new/ with open questions, wait for human |
| C) AI-assisted completion | LLM attempts to fill gaps, flags low-confidence assumptions |
**Recommendation:** Option A for external channels, Option B as fallback.

**Decision:**
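The recommended flow (Option A with Option B as fallback) could look like the sketch below. The channel adapters are stubs that only return markers; real ones would call the Telegram Bot API and the GitHub Issues API, and every function name here is hypothetical.

```python
# Stub channel adapters -- real implementations would hit external APIs.
def send_telegram_reply(chat_id: int, questions: list) -> str:
    return f"telegram:{chat_id}"

def comment_on_issue(issue_number: int, questions: list) -> str:
    return f"github:#{issue_number}"

def create_fr_with_questions(task: dict, questions: list) -> str:
    # Option B fallback: an FR in new/ carrying the open questions.
    return f"fr:new/{task.get('title', 'untitled')}"

def request_clarification(task: dict, questions: list) -> str:
    """Route clarification back to the task's source channel (Option A),
    falling back to creating an FR with open questions (Option B)."""
    source = task.get("source")
    if source == "telegram":
        return send_telegram_reply(task["chat_id"], questions)
    if source == "github":
        return comment_on_issue(task["issue_number"], questions)
    return create_fr_with_questions(task, questions)
```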