# AI Analysis Features
slack-ticket’s AI engine is its core differentiator: it doesn’t just copy text from Slack; it understands the conversation and structures it into a proper bug report or feature request.
## How It Works

When you run `slack-ticket create`, the AI processes your Slack thread in two stages:

1. Thread Collection: The CLI fetches messages from the Slack thread (configurable depth)
2. AI Processing: The AI analyzes the conversation and generates structured output
## Output Schema

The AI generates issues following this structure:

```json
{
  "title": "Short, specific issue title (max 80 chars)",
  "summary": "1-3 sentence description of the issue",
  "steps_to_reproduce": "Numbered steps if inferable, otherwise null",
  "expected_behavior": "What should happen, if inferable",
  "actual_behavior": "What actually happens, if inferable"
}
```
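For reference, the schema can be modeled as a typed dictionary (a Python sketch; `IssueOutput` is an illustrative name, not a type slack-ticket exports):

```python
from typing import Optional, TypedDict

class IssueOutput(TypedDict):
    """Mirror of the AI output schema; only title and summary are guaranteed non-null."""
    title: str                         # max 80 chars
    summary: str                       # 1-3 sentences
    steps_to_reproduce: Optional[str]  # None when not inferable
    expected_behavior: Optional[str]
    actual_behavior: Optional[str]

example: IssueOutput = {
    "title": "Login fails with 'Invalid credentials' for valid users",
    "summary": "Valid users are rejected at login with an 'Invalid credentials' error.",
    "steps_to_reproduce": None,
    "expected_behavior": "Login succeeds for valid users",
    "actual_behavior": "Login fails with 'Invalid credentials'",
}
```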
### Title Generation
The AI creates concise, searchable titles:
- Bad: “There’s a problem with the login”
- Good: “Login fails with ‘Invalid credentials’ for valid users”
Titles are truncated to 80 characters maximum to fit GitHub’s requirements.
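The 80-character cap can be sketched as a simple truncation helper (illustrative only; whether slack-ticket appends an ellipsis, and where exactly it cuts, is an assumption):

```python
MAX_TITLE_LEN = 80  # the cap the docs describe for generated titles

def truncate_title(title: str, limit: int = MAX_TITLE_LEN) -> str:
    """Hard-truncate over-long titles; the trailing ellipsis is an assumption,
    only the 80-character limit comes from the docs."""
    if len(title) <= limit:
        return title
    return title[: limit - 1] + "…"
```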
### Summary Extraction
The summary captures the essence of the issue in 1-3 sentences, suitable for:
- GitHub issue description
- Email notifications
- Search results
### Context Extraction
When the conversation contains enough context, the AI extracts:
| Field | When Populated |
|---|---|
| `steps_to_reproduce` | When users describe how to trigger the issue |
| `expected_behavior` | When someone mentions what should happen |
| `actual_behavior` | When the actual problem is described |
If these cannot be reasonably inferred, they are set to `null`.
## Supported AI Providers
slack-ticket is provider-agnostic—you can switch between providers without changing your workflow.
### OpenAI

Recommended models:

- `gpt-4o` — Best quality for complex threads
- `gpt-4o-mini` — Faster and more cost-effective

Base URL: `https://api.openai.com/v1`
### Anthropic

Recommended models:

- `claude-sonnet-4-20250514` — Latest Sonnet
- `claude-3-5-sonnet-20241022` — Stable Sonnet 3.5
- `claude-3-haiku-20240307` — Fast and lightweight

Base URL: `https://api.anthropic.com`
Note: Anthropic uses a different API format. slack-ticket handles this automatically.
### Google Gemini

Recommended models:

- `gemini-2.0-flash` — Fast and capable
- `gemini-1.5-pro` — Higher quality

Base URL: `https://generativelanguage.googleapis.com/v1`
### Ollama (Local)

For teams wanting self-hosted AI:

- Base URL: `http://localhost:11434`
- Model: Any model you’ve pulled (e.g., `llama3`, `mistral`, `qwen`)
This option keeps all data local—no external API calls.
## Validation & Safety
slack-ticket includes multiple layers of validation to ensure quality and security:
### Output Validation
The AI output is validated before creating issues:
- JSON Parsing: Ensures valid JSON structure
- Required Fields: Title and summary must be present and non-empty
- Field Stripping: Removes any disallowed fields the AI might suggest
- Content Safety: Strips forbidden content patterns
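The parsing, required-field, and field-stripping layers could be sketched roughly like this (an illustrative Python sketch, not slack-ticket's actual code; the field names come from the output schema):

```python
import json

# Whitelist from the output schema; anything else the model suggests
# (labels, assignees, severity, milestones, projects, ...) is dropped.
ALLOWED_FIELDS = {"title", "summary", "steps_to_reproduce",
                  "expected_behavior", "actual_behavior"}

def validate_output(raw: str) -> dict:
    issue = json.loads(raw)                  # 1. JSON parsing (raises on invalid JSON)
    for field in ("title", "summary"):       # 2. Required fields, non-empty
        if not issue.get(field):
            raise ValueError(f"missing required field: {field}")
    return {k: v for k, v in issue.items()   # 3. Strip any disallowed fields
            if k in ALLOWED_FIELDS}
```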
### Forbidden Content
The AI is instructed NOT to suggest:
- Labels or assignees
- Severity levels
- Milestones
- Projects
This keeps issue creation clean and consistent with your team’s conventions.
### Retry Logic

If initial parsing fails, slack-ticket:

1. Strips markdown fences (`` ```json ... ``` ``)
2. Retries parsing once
3. Exits with code 4 if the failure persists
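A minimal version of that fence-stripping retry might look as follows (a sketch; the regex and structure are assumptions, and only the behavior itself, strip fences, retry once, exit with code 4, comes from the docs):

```python
import json
import re
import sys

# Matches a leading ``` or ```json fence and a trailing ``` fence.
FENCE = re.compile(r"^\s*```(?:json)?\s*|\s*```\s*$")

def parse_with_retry(raw: str) -> dict:
    try:
        return json.loads(raw)             # first attempt on the raw output
    except json.JSONDecodeError:
        stripped = FENCE.sub("", raw)      # strip markdown fences
        try:
            return json.loads(stripped)    # single retry
        except json.JSONDecodeError:
            sys.exit(4)                    # persistent failure -> exit code 4
```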
## Update Command AI

When updating existing issues (`slack-ticket update`), the AI generates:

```json
{
  "update_summary": "One-line summary of what is new",
  "new_information": "Markdown details, avoiding repetition"
}
```
The AI compares against the existing issue body to avoid redundancy.
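The redundancy check happens inside the model (it sees the existing issue body in its prompt), but the idea can be illustrated with a naive line-level filter; `extract_new_lines` is a hypothetical helper, not part of slack-ticket:

```python
def extract_new_lines(existing_body: str, candidate: str) -> str:
    """Keep only candidate lines not already present verbatim in the issue body."""
    existing = {line.strip() for line in existing_body.splitlines() if line.strip()}
    fresh = [line for line in candidate.splitlines()
             if line.strip() and line.strip() not in existing]
    return "\n".join(fresh)
```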
## Customization

### Timeout Configuration

Adjust the AI timeout in your config:

```json
{
  "ai": {
    "timeoutMs": 60000
  }
}
```

Increase `timeoutMs` for complex threads.
### Model Selection
Choose models based on your priorities:
| Priority | Recommended |
|---|---|
| Quality | `gpt-4o` or `claude-sonnet-4` |
| Speed | `gpt-4o-mini` or `claude-3-haiku` |
| Cost | Ollama with `llama3` |
| Privacy | Ollama (local) |
## Troubleshooting AI Issues

### "AI provider returned non-JSON response"

- Check that your API key is valid
- Verify the model name is correct

### "AI output validation failed"

- The model may have returned malformed JSON
- Try a different model or increase `timeoutMs`

### "AI request timed out"

- Increase `ai.timeoutMs` in config
- Try a faster model

### "Rate limited (HTTP 429)"

- Wait and retry
- Consider switching to a different provider
## Best Practices
- Use descriptive Slack threads: The more context in the conversation, the better the issue
- Include error messages: Paste error logs directly in Slack
- Mention steps: “I tried X, then Y, and Z happened” helps the AI extract reproduction steps
- Test with `--dry-run`: Verify AI output before creating issues
For CLI usage, see Commands Reference.