# ticket-refine

## Workflow

### 1. Fetch Ticket from Linear

**Input**: User provides either a Linear ticket URL (e.g., `https://linear.app/acme-corp/issue/PROJ-456`) or ticket ID (e.g., `PROJ-456` or UUID).

**Actions**:
- Extract ticket ID from URL if provided, otherwise use ID directly
- Use `mcp_Linear_get_issue(id: ticket_id)` to fetch full ticket details
- Extract: `title`, `description`, `state`, `assignee`, `labels`, `priority`, `project`, `team`
- Parse description for: Design links, Problem statement, Solution description, Analytics context, and any existing acceptance criteria/edge cases
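
The ID-extraction step above can be sketched as a small helper. The URL and ID shapes are assumptions based on the examples in this doc (`https://linear.app/acme-corp/issue/PROJ-456`, `PROJ-456`, UUID); the function name is illustrative:

```python
import re

def extract_ticket_id(user_input: str) -> str:
    """Return a Linear ticket ID from a URL, a bare ID, or a UUID."""
    # URL form: https://linear.app/<workspace>/issue/<TEAM-123>[/optional-slug]
    url_match = re.search(r"linear\.app/[^/]+/issue/([A-Z][A-Z0-9]*-\d+)", user_input)
    if url_match:
        return url_match.group(1)
    # Bare ID form: PROJ-456
    if re.fullmatch(r"[A-Z][A-Z0-9]*-\d+", user_input.strip()):
        return user_input.strip()
    # Otherwise assume a UUID and pass it through unchanged
    return user_input.strip()
```

The result is passed to `mcp_Linear_get_issue(id: ticket_id)` as described above.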

### 2. Interview Phase - Ask Clarifying Questions

Act as an **experienced Product Manager interviewer** to gather missing information needed for the INVEST story format.

**Questions Strategy**:
- Start with **blocking questions** (if critical unknowns detected)
- Then ask **clarifying questions** for missing fields
- Focus on extracting the four core fields: Design links, Problem, Solution, Analytics baseline

**Question Categories** (focus on the four core fields):

- **Problem Statement**: “What specific user pain point or business problem does this address? What’s the current friction?”
- **Solution Description**: “What exactly are we building? (UI components, flows, technical approach)”
- **Design Links**: “Do we have design files? (Figma, Miro, etc.)”
- **Analytics & Metrics**: “What metrics will we track? What’s the baseline and success metric?”
- **Edge Cases**: “What error/empty states and performance requirements do we need to handle?”

**Stop Condition**: 
- All four core fields (Design, Problem, Solution, Analytics) have been addressed (even if some are TBD)
- OR user explicitly says “I don’t have more information” or “proceed with what we have”
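
The stop condition can be expressed as a check over the four core fields. This is a hypothetical helper (the field keys, `answers` dict, and proceed phrases are illustrative, not part of any API):

```python
CORE_FIELDS = ("design", "problem", "solution", "analytics")
PROCEED_PHRASES = ("i don't have more information", "proceed with what we have")

def interview_complete(answers: dict, last_user_message: str) -> bool:
    """True when every core field is addressed (a value or an explicit 'TBD'),
    or the user explicitly asks to proceed with what we have."""
    if any(phrase in last_user_message.lower() for phrase in PROCEED_PHRASES):
        return True
    # "TBD" counts as addressed; only missing/empty fields keep the interview open
    return all(answers.get(field) not in (None, "") for field in CORE_FIELDS)
```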

### 3. Generate Refined Ticket

**Apply the Questions-First Gate**:
- Check inputs for contradictions or critical unknowns
- If blocking issues found → Output ONLY `### Questions for You (blocking)` (≤3 bullets) and STOP
- If safe to proceed → Continue to story generation
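
A minimal sketch of the gate check, assuming the blocking threshold described under "Story Format" (direct contradictions, missing both Problem and Solution, or more than three TBDs affecting core behavior). The function name and signature are illustrative:

```python
def questions_first_gate(problem: str, solution: str, tbd_count: int,
                         contradictions: list[str]) -> list[str]:
    """Return up to three blocking questions, or [] when it is safe to proceed."""
    blocking: list[str] = []
    if contradictions:
        # Direct contradictions between Problem/Solution/Design/Analytics
        blocking += contradictions
    elif not problem and not solution:
        blocking.append("What problem are we solving, and what are we building?")
    elif tbd_count > 3:
        blocking.append("Several core behaviors are still TBD; which can you pin down?")
    return blocking[:3]  # the blocking section is capped at 3 bullets
```

An empty return means the agent continues to story generation; a non-empty return means output ONLY the blocking questions and stop.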

**Story Generation**:
- Follow the exact story format structure (see “Story Format” section below)
- Apply slicing policy: Default to single story, add more only if required (see “Slicing Policy” below)
- Use style guide principles: Bold key terms, bullet-heavy, action-oriented (see “Style Guide” below)
- Include slicing decision at top: `*Slicing: Single story*` or `*Slicing: N stories — reasons: X, Y*`
- If details are missing, write “TBD:” inline. NEVER invent facts beyond the inputs

### 4. Review & Update Linear Ticket

**Present the Refined Ticket**:
- Show the complete formatted ticket
- Ask: “Does this look good? Should I update the Linear ticket with this refined version?”

**If User Approves**:
- Call `mcp_Linear_update_issue(id: ticket_id, title: refined_title, description: refined_description)`
- Update `title` and `description`; preserve existing fields (`assignee`, `state`, `priority`, `labels`, `project`, `team`) unless requested
- Confirm update success and show ticket ID/link
- If update fails, show error and offer retry or manual copy-paste
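
The approved-update flow above can be sketched as follows. Here `update_issue` stands in for the `mcp_Linear_update_issue` tool call (an agent would invoke the tool directly); the retry count and messages are illustrative:

```python
def apply_refinement(update_issue, ticket_id: str, title: str,
                     description: str, retries: int = 2) -> str:
    """Attempt the Linear update, retrying on failure before falling back
    to offering the refined text for manual copy-paste."""
    last_error = None
    for _attempt in range(1 + retries):
        try:
            update_issue(id=ticket_id, title=title, description=description)
            return f"✅ Updated! The ticket {ticket_id} has been refined."
        except Exception as err:  # tool call failed; retry
            last_error = err
    return f"Update failed ({last_error}); here is the refined text to paste manually."
```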

**If User Requests Changes or Rejects**:
- Ask what needs modification or what’s wrong
- Update content accordingly and re-present, or restart interview process

## Style Principles

### Do:
- Ask all questions at once (max 5, prioritizing blocking and highest-leverage ones)
- Acknowledge user responses before moving on
- Use the exact format from ticket writing guidelines
- Preserve user’s terminology and language
- Mark unknowns as TBD rather than inventing details
- If the user says “TBD”, mark the field as TBD; if contradictions are detected, ask directly about the conflict

### Don’t:
- Skip the interview phase if information is incomplete
- Invent facts or details not provided by user
- Modify the output format structure
- Update Linear ticket without explicit approval

## Ticket Writing Guidelines

### Slicing Policy

Default: produce exactly **one** user-visible, independently shippable story (“walking skeleton”).

Create additional stories **only if** a single story would violate one of:

1. **Assumption isolation** — test at most one core assumption per story
2. **Coherent delivery** — sequential slices yield continuous user value
3. **Risk reduction** — separate high-risk work
4. **Dependency separation** — upstream must land first
5. **Rollout safety** — feature gating requires a separate slice

When >1 story: Keep count minimal, order by value (walking skeleton first), merge trivial behaviors. Begin output with `*Slicing: N stories — reasons: X, Y*` (or `*Slicing: Single story*`).
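
The required opening line can be produced by a small formatter. This is an illustrative helper, not part of any API; the reason strings come from the policy rules above:

```python
def slicing_header(n_stories: int, reasons: tuple[str, ...] = ()) -> str:
    """Format the slicing line that must open every refined ticket."""
    if n_stories <= 1:
        return "*Slicing: Single story*"
    return f"*Slicing: {n_stories} stories — reasons: {', '.join(reasons)}*"
```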

### Style Guide

Core philosophy: **Ruthless efficiency + Technical precision + User obsession**.

**Formatting & Language**:
- **Bold** key concepts, UI elements, technical terms
- Bulleted structure (nested bullets, max depth 2); 1–2 sentences per bullet; fragments OK
- **Action-oriented** (“User taps button”, not passive); assume expertise
- **Pain-first** in Problem; executable detail in Solution (UX flows, states, API/tech notes)

**Section patterns**:
- **Problem** → Pain point → Impact → Business context
- **Solution** → Implementation area → Component → Spec → Flow (nested bullets)
- **Analytics / Metrics** → “event_name” — {properties}; include Purpose when helpful

### Story Format

The output must use this EXACT structure and headings/order. No extra sections.

```markdown
## Title: {imperative, outcome-oriented} {prefix with “Story 1 — …” etc. only if multiple stories}

### Problem

- {pain point — specific friction}
- {impact on user/business}
- {context or evidence; bold labels sparingly}

### Solution

- **{Implementation area}**
  - **{Component}** — {concise spec or rule}
    - {key UX flow step or state change}
- **{Next area}**
  - **{Behavior}** — {exact interaction pattern}

### Acceptance Criteria

1) Given ..., When ..., Then ...
2) ...
3) ...
{up to 6 total}

### Edge Cases

1) {empty state}
2) {error state}
3) {performance threshold — renders ≤ **TBD** ms at P95 for up to **TBD** items (or as specified)}
4) {privacy/consent or rare scenario}

### Tracking

- Events: “event_name” — {property1, property2, ...}; “another_event” — {properties}
- Properties: {list or brief schema}
- Success Metric: {primary KPI + target or **TBD**}
- Purpose: {what insight this enables, if helpful}
```

Allowed extras: (a) a single top line `*Slicing: ...*`; (b) a final `### Questions for You (if any)` section (≤3 bullets).

Questions-first gate (blocking): Before generating the story, check inputs. If contradictions or critical unknowns exceed the blocking threshold, output ONLY `### Questions for You (blocking)` (≤3 bullets) and STOP. Blocking threshold: direct contradictions between Problem/Solution/Design/Analytics, missing both Problem and Solution, or >3 “TBD” items affecting core behavior.


## Example Flow

User: “ticket-refine PROJ-456”

[Agent fetches ticket and reads description]

Agent: “I’ve reviewed PROJ-456. I need a few details:

1. What specific user problem does this solve? What’s the current friction?
2. What exactly are we building? Is this a new visualization or adding indicators to existing views?”

User: “Project managers can’t see which tasks are blocking others. Adding dependency indicators to the existing task list.”

Agent: “Got it. One more: Do we have design mockups for how these indicators will be displayed?”

User: “Not yet, TBD”

Agent: “Understood. Here’s the refined ticket:

*Slicing: Single story*

## Title: Display task dependency indicators in task list

### Problem
- Project managers can’t see which tasks are blocking others
- No visibility into task dependencies and blockers
- Can’t make data-driven decisions about task prioritization

### Solution
- **Task List Enhancement**
  - **Dependency Indicators** — Show blocked/blocking status for each task
    - Display indicators inline with each task card
- **Data Integration**
  - **Dependency Graph API** — Fetch dependency relationships from task management API

### Acceptance Criteria
1) Given a project manager views the task list, When they see a task, Then they see whether it’s blocked or blocking other tasks
2) Given dependencies are loading, When user views the page, Then a loading state is shown
3) Given dependencies fail to load, When user views the page, Then an error message is shown with retry option

### Edge Cases
1) Empty state: No dependencies yet — show “No dependencies to display”
2) Error state: API failure — show error message with retry
3) Performance threshold: Dependencies load ≤ **TBD** ms at P95 for up to **TBD** tasks
4) Missing data: Task has no dependencies — show “No dependencies” or hide indicator

### Tracking
- Events: “dependency_viewed” — {taskId, projectId, dependencyType}; “dependency_error” — {taskId, errorType}
- Success Metric: **TBD** (prioritization accuracy improvement or user satisfaction)

### Questions for You (if any)
- Design mockups needed: How should dependency indicators be visually displayed? (TBD)

Does this look good? Should I update the Linear ticket with this refined version?”

User: “Yes, update it”

[Agent updates Linear ticket]

Agent: “✅ Updated! The ticket PROJ-456 has been refined. [Linear link]”