|
- New Omni/Agent/Tools/Hledger.hs with 5 tools:
- hledger_balance: query account balances
- hledger_register: show transaction history
- hledger_add: create new transactions
- hledger_income_statement: income vs expenses
- hledger_balance_sheet: net worth view
- All tools support currency parameter (default: USD)
- Balance, register, income_statement support period parameter
- Period uses hledger syntax (thismonth, 2024, from X to Y)
- Shell escaping fixed for multi-word period strings
- Authorization: only Ben and Kate get hledger tools
- Max iterations increased from 5 to 10
- Transactions written to ~/fund/telegram-transactions.journal
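The escaping fix above can be sketched as single-quote wrapping of the period argument before it reaches the shell. These names (`escapeShellArg`, `hledgerCmd`) are illustrative, not the actual code in Omni/Agent/Tools/Hledger.hs:

```haskell
-- Minimal sketch of escaping multi-word hledger period strings for the
-- shell. escapeShellArg/hledgerCmd are hypothetical names.
escapeShellArg :: String -> String
escapeShellArg s = "'" <> concatMap esc s <> "'"
  where
    esc '\'' = "'\\''" -- close quote, literal quote, reopen quote
    esc c    = [c]

-- Build an hledger command line with an optional period expression,
-- e.g. hledgerCmd "balance" (Just "from 2024-01-01 to 2024-06-30").
hledgerCmd :: String -> Maybe String -> String
hledgerCmd subcmd mPeriod =
  unwords
    (["hledger", subcmd]
       <> maybe [] (\p -> ["-p", escapeShellArg p]) mPeriod)
```

Without the quoting, a period like "from X to Y" would be split into separate shell words and hledger would misparse the flag.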
|
|
|
|
Memory changes:
- Add thread_id column to conversation_messages for topic support
- Add saveGroupMessage/getGroupConversationContext for shared history
- Add storeGroupMemory/recallGroupMemories with 'group:<chat_id>' user
- Fix SQLite busy error: set busy_timeout before journal_mode
Telegram changes:
- Group chats now use shared conversation context (chat_id, thread_id)
- Personal memories stay with user, group memories shared across group
- Memory context shows [Personal] and [Group] prefixes
- Add withTypingIndicator: refreshes typing every 4s while agent thinks
- Fix typing UX: indicator now shows continuously until response sent
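The personal/group memory split above can be sketched with a scope type: group memories are stored under a synthetic 'group:<chat_id>' user, and recall results are prefixed by scope. Names here are illustrative, not the actual Memory.hs API:

```haskell
-- Sketch of the group-memory keying described above; hypothetical names.
data Scope = Personal | Group Integer -- Group carries the chat_id

-- Group memories share one synthetic user id per chat.
memoryUser :: Integer -> Scope -> String
memoryUser userId Personal       = show userId
memoryUser _      (Group chatId) = "group:" <> show chatId

-- Recall results are labelled so the agent can tell the scopes apart.
prefixMemory :: Scope -> String -> String
prefixMemory Personal  m = "[Personal] " <> m
prefixMemory (Group _) m = "[Group] " <> m
```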
|
|
- Parse message_thread_id from incoming messages
- Include thread_id in sendMessage API calls
- Pass thread_id through message queue system
- Replies now go to the correct topic in supergroups
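The thread_id plumbing can be sketched as an optional sendMessage parameter that is only included when the incoming message carried one. `sendMessageParams` is a hypothetical helper, not the actual Telegram client code:

```haskell
-- Sketch: add message_thread_id to sendMessage parameters only when the
-- incoming update had one, so replies land in the right supergroup topic.
sendMessageParams :: Integer -> Maybe Integer -> String -> [(String, String)]
sendMessageParams chatId mThreadId text =
  [("chat_id", show chatId), ("text", text)]
    <> maybe [] (\tid -> [("message_thread_id", show tid)]) mThreadId
```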
|
|
OpenRouter's chat completion API doesn't properly pass audio to models.
Switched to calling OpenAI's /v1/audio/transcriptions endpoint directly
with the whisper-1 model.
Requires OPENAI_API_KEY environment variable.
|
|
Fixes 'database is locked' errors when multiple threads access the
memory database simultaneously (incoming batcher, message dispatch,
reminder loop, main handler).
|
|
Batches incoming messages by chat_id with a 3-second sliding window
before processing. This prevents confusion when messages arrive
simultaneously from different chats.
- New IncomingQueue module with STM-based in-memory queue
- Messages enqueued immediately, offset acked on enqueue
- 200ms tick loop flushes batches past deadline
- Batch formatting: numbered messages, sender attribution for groups,
media stubs, reply context
- Media from first message in batch still gets full processing
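The sliding-window flush decision above can be sketched as pure deadline arithmetic; a real implementation would keep the per-chat deadline inside the STM queue state. These names are assumptions:

```haskell
import Data.Time.Calendar (fromGregorian)
import Data.Time.Clock (NominalDiffTime, UTCTime (..), addUTCTime)

-- Sketch of the 3-second sliding-window flush decision.
slidingWindow :: NominalDiffTime
slidingWindow = 3

-- Each arriving message pushes the batch deadline out by the window.
newDeadline :: UTCTime -> UTCTime
newDeadline arrival = addUTCTime slidingWindow arrival

-- The 200ms tick loop flushes a batch once now has passed its deadline.
shouldFlush :: UTCTime -> UTCTime -> Bool
shouldFlush now deadline = now >= deadline

-- Example-only helper: midnight UTC on a given date.
midnightUTC :: Integer -> Int -> Int -> UTCTime
midnightUTC y m d = UTCTime (fromGregorian y m d) 0
```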
|
|
- Switch from gemini-2.0-flash-001 to gemini-2.5-flash
- Put audio content before text prompt (model was ignoring audio)
- Strengthen prompt to return only transcription
|
|
- Add Messages.hs with scheduled_messages table and dispatcher loop
- All outbound messages now go through the queue (1s polling)
- Disable streaming responses, use runAgentWithProvider instead
- Add send_message tool for delayed messages (up to 30 days)
- Add list_pending_messages and cancel_message tools
- Reminders now queue messages instead of sending directly
- Exponential backoff retry (max 5 attempts) for failed sends
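The retry schedule can be sketched as a doubling delay capped at 5 attempts. The base delay and exact formula are assumptions, not the committed values:

```haskell
-- Sketch of exponential backoff: delay doubles per attempt, give up
-- after maxAttempts. Base delay of 1s is an illustrative assumption.
maxAttempts :: Int
maxAttempts = 5

-- Delay in seconds before retry n (1-based): 1, 2, 4, 8, 16.
backoffSeconds :: Int -> Maybe Int
backoffSeconds n
  | n < 1 || n > maxAttempts = Nothing -- out of attempts: give up
  | otherwise                = Just (2 ^ (n - 1))
```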
|
|
Amp-Thread-ID: https://ampcode.com/threads/T-019b1894-b431-777d-aba3-65a51e720ef2
Co-authored-by: Amp <amp@ampcode.com>
|
|
|
|
- Add RelationType with 6 relation types
- Add MemoryLink type and memory_links table
- Add graph functions: linkMemories, getMemoryLinks, queryGraph
- Add link_memories and query_graph agent tools
- Wire up graph tools to Telegram bot
- Include memory ID in recall results for linking
- Fix streaming usage parsing for cost tracking
Closes t-255
Amp-Thread-ID: https://ampcode.com/threads/T-019b181f-d6cd-70de-8857-c445baef7508
Co-authored-by: Amp <amp@ampcode.com>
|
|
When the bot is added to a group, check if the user who added it is
in the whitelist. If not, send a message explaining and leave the group
immediately. This prevents unauthorized users from bypassing DM access
controls by adding the bot to a group.
|
|
|
|
Add parse_mode=Markdown to sendMessage and editMessage API calls
|
|
OpenAI's SSE streaming sends tool calls incrementally - the first chunk
has the id and function name, subsequent chunks contain argument fragments.
Previously each chunk was treated as a complete tool call, causing invalid
JSON arguments.
- Add ToolCallDelta type with index for partial tool call data
- Add StreamToolCallDelta chunk type
- Track tool calls by index in IntMap accumulator
- Merge argument fragments across chunks via mergeToolCallDelta
- Build final ToolCall objects from accumulator when stream ends
- Handle new StreamToolCallDelta in Engine.hs pattern match
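The accumulator logic can be sketched as follows; the types mirror the bullets above, but field names and the exact merge are illustrative, not the committed implementation:

```haskell
import Data.IntMap.Strict (IntMap)
import qualified Data.IntMap.Strict as IntMap

-- Sketch of accumulating streamed tool-call deltas by index.
data ToolCallDelta = ToolCallDelta
  { tcdIndex :: Int
  , tcdId    :: Maybe String -- present only in the first chunk
  , tcdName  :: Maybe String
  , tcdArgs  :: String       -- argument fragment, may be empty
  }

data ToolCall = ToolCall
  { tcId, tcName, tcArgs :: String
  }

-- Merge one delta into the accumulator: the first chunk seeds id/name,
-- later chunks only append their argument fragments.
mergeToolCallDelta :: ToolCallDelta -> IntMap ToolCall -> IntMap ToolCall
mergeToolCallDelta d = IntMap.insertWith merge (tcdIndex d) fresh
  where
    fresh = ToolCall (maybe "" id (tcdId d)) (maybe "" id (tcdName d)) (tcdArgs d)
    merge _new old = old { tcArgs = tcArgs old <> tcdArgs d }
```

When the stream ends, the accumulator's elements are the complete ToolCall objects with fully concatenated JSON arguments.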
|
|
Pre-filter now sees the last 5 messages so it can detect when the user
is continuing a conversation with Ava, even without an explicit mention.

- Fetch recent messages before shouldEngageInGroup
- Update classifier prompt to understand Ava context
- Handle follow-up messages to bot's previous responses
|
|
- Update to Dec 2024 OpenRouter pricing
- Use blended input/output rates
- Add gemini-flash, claude-sonnet-4.5 specific rates
- Fix math: was off by ~30x for Claude models
|
|
Use Gemini Flash to classify group messages before running the
full Sonnet agent. Skips casual banter to save tokens/cost.
- shouldEngageInGroup: yes/no classifier using gemini-2.0-flash
- Only runs for group chats, private chats skip the filter
- On classifier failure, defaults to engaging (fail-open)
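The fail-open decision can be sketched as: only a clear "no" from the classifier suppresses the full agent; an error or any unexpected answer engages. Names are illustrative:

```haskell
import Data.Char (isSpace, toLower)

-- Sketch of the fail-open engagement decision. The Left case models a
-- classifier failure (timeout, parse error, etc.).
shouldEngage :: Either String String -> Bool
shouldEngage (Left _err)    = True -- classifier failed: fail open
shouldEngage (Right answer) =
  case map toLower (filter (not . isSpace) answer) of
    "no" -> False
    _    -> True -- "yes", or anything unexpected, engages
```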
|
|
- Remove mention-based filtering, bot sees all group messages
- Add response rules to system prompt for group chats:
- tool invocation = always respond
- direct question = respond
- factual correction = maybe respond
- casual banter = stay silent
- Empty response in group = intentional silence (no fallback msg)
- Add chat type context to system prompt
|
|
- Only respond in groups when @mentioned or replied to
- Add ChatType to TelegramMessage (private/group/supergroup/channel)
- Add getMe API call to fetch bot username on startup
- Add shouldRespondInGroup helper function
|
|
- Fix Provider.hs to strip leading whitespace from OpenRouter responses
- Fix FunctionCall parser to handle missing 'arguments' field
- Use eitherDecode for better error messages on parse failures
- Switch to claude-sonnet-4.5 for main agent
- Use gemini-2.0-flash for conversation summarization (cheaper)
- Add read_webpage tool for fetching and summarizing URLs
- Add tagsoup to Haskell deps (unused for now, kept for future use)
|
|
Refactor Telegram.hs into submodules to reduce file size:
- Types.hs: data types, JSON parsing
- Media.hs: file downloads, image/voice analysis
- Reminders.hs: reminder loop, user chat persistence
Multimedia improvements:
- Vision uses third-person to avoid LLM confusion
- Better message framing for embedded descriptions
- Size validation (10MB images, 20MB voice)
- MIME type validation for voice messages
New features:
- Reply support: bot sees context when users reply
- Web search: default 5->10, max 10->20 results
- Guardrails: duplicate tool limit 3->10 for research
- Timezone: todos parse/display in Eastern time (ET)
|
|
- Add TelegramPhoto and TelegramVoice types
- Parse photo and voice fields from Telegram updates
- Download photos/voice via Telegram API
- Analyze images using Claude vision via OpenRouter
- Transcribe voice messages using Gemini audio via OpenRouter
- Wire multimedia processing into handleAuthorizedMessage
Photos are analyzed with user's caption as context.
Voice messages are transcribed and treated as text input.
|
|
Adds a background reminder loop that checks every 5 minutes for overdue
todos and sends Telegram notifications.
Changes:
- Add last_reminded_at column to todos table with auto-migration
- Add listTodosDueForReminder to find overdue, unreminded todos
- Add markReminderSent to update reminder timestamp
- Add user_chats table to map user_id -> chat_id for notifications
- Add recordUserChat called on each message to track chat IDs
- Add reminderLoop forked in runTelegramBot
- 24-hour anti-spam interval between reminders per todo
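The reminder predicate can be sketched in pure code: a todo fires when it is overdue and was either never reminded or last reminded more than 24 hours ago. Names are illustrative; the real logic lives in the listTodosDueForReminder query:

```haskell
import Data.Time.Calendar (fromGregorian)
import Data.Time.Clock (NominalDiffTime, UTCTime (..), diffUTCTime)

-- Sketch of the reminder-loop decision with the 24h anti-spam interval.
antiSpamInterval :: NominalDiffTime
antiSpamInterval = 24 * 60 * 60

dueForReminder :: UTCTime -> UTCTime -> Maybe UTCTime -> Bool
dueForReminder now dueAt mLastReminded =
  now >= dueAt
    && maybe True (\t -> diffUTCTime now t >= antiSpamInterval) mLastReminded

-- Example-only helper: midnight UTC on a given date.
midnightUTC :: Integer -> Int -> Int -> UTCTime
midnightUTC y m d = UTCTime (fromGregorian y m d) 0
```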
|
|
When the LLM returned empty content after executing tools, the agent
would complete with an empty message. Now both agent loops (LLM-based
and Provider-based) detect this case and inject a prompt asking the
LLM to provide a response to the user.
|
|
- Omni/Agent/Tools/Todos.hs: todo_add, todo_list, todo_complete, todo_delete
- Supports optional due dates in YYYY-MM-DD or YYYY-MM-DD HH:MM format
- Lists can filter by pending, all, or overdue
- Add todos table to Memory.hs schema
- Wire into Telegram bot
|
|
- Add sender_name column to conversation_messages table
- Migrate existing messages to set sender_name='bensima'
- Show sender names in conversation context (e.g., 'bensima: hello')
- Pass userName when saving user messages in Telegram bot
|
|
|
|
|
|
|
|
|
|
|
|
- Omni/Agent/Tools/Calendar.hs: calendar_list, calendar_add, calendar_search
- Wire into Telegram bot alongside other tools
- Integrates with local CalDAV via khal
|
|
- Omni/Agent/Tools/Pdf.hs: Extract text from PDFs using pdftotext
- Omni/Agent/Tools/Notes.hs: Quick notes CRUD with topics
- Add notes table schema to Memory.hs initMemoryDb
- Wire both tools into Telegram bot with logging callbacks
|
|
- Add Omni/Agent/Tools/WebSearch.hs with Kagi Search API integration
- webSearchTool for agents to search the web
- kagiSearch function for direct API access
- Load KAGI_API_KEY from environment
- Wire web search into Telegram bot tools
- Results formatted with title, URL, and snippet
Closes t-252
|
|
- Add tgAllowedUserIds field to TelegramConfig
- Load ALLOWED_TELEGRAM_USER_IDS from environment (comma-separated)
- Check isUserAllowed before processing messages
- Reject unauthorized users with friendly message
- Empty whitelist or '*' allows all users
- Add tests for whitelist behavior
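The whitelist semantics above ("empty or '*' allows all") can be sketched as follows; parsing and names are illustrative, not the actual TelegramConfig code:

```haskell
-- Sketch of ALLOWED_TELEGRAM_USER_IDS handling: split the env value on
-- commas, then allow everyone when the list is empty or contains "*".
parseAllowed :: String -> [String]
parseAllowed = foldr step [""]
  where
    step ',' acc        = "" : acc
    step c (cur : rest) = (c : cur) : rest
    step _ []           = [] -- unreachable: accumulator is never empty

isUserAllowed :: String -> String -> Bool
isUserAllowed envVal userId =
  case filter (not . null) (parseAllowed envVal) of
    []      -> True -- empty whitelist: allow everyone
    allowed -> "*" `elem` allowed || userId `elem` allowed
```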
|
|
- Add sendTypingAction to show typing indicator when processing
- Add conversation_messages and conversation_summaries tables
- Implement conversation history with token counting
- Auto-summarize when context exceeds threshold (3000 tokens)
- Save user/assistant messages for multi-turn context
- Add ConversationMessage, ConversationSummary, MessageRole types
Tasks created: t-252 (web search), t-253 (calendar), t-254 (PDF),
t-255 (knowledge graph), t-256 (notes)
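The auto-summarize trigger can be sketched as a rough token estimate checked against the 3000-token threshold. The ~4-characters-per-token estimator is an assumption for illustration; the committed code may count tokens differently:

```haskell
-- Sketch of the summarization trigger from the commit above.
summarizeThreshold :: Int
summarizeThreshold = 3000

-- Crude estimate: ~4 characters per token (an assumption).
estimateTokens :: [String] -> Int
estimateTokens msgs = sum (map length msgs) `div` 4

shouldSummarize :: [String] -> Bool
shouldSummarize msgs = estimateTokens msgs > summarizeThreshold
```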
|
|
- Set response timeout to polling timeout + 10s for long polling
- Remove Markdown parse_mode to avoid 400 errors on special chars
|
|
|
|
- Omni/Agent/Telegram.hs: Telegram API client with getUpdates/sendMessage
- Omni/Bot.hs: Standalone CLI for running the bot
- User identification via Memory.getOrCreateUserByTelegramId
- Memory-enhanced agent with remember/recall tools
- Run with: bot --token=XXX or TELEGRAM_BOT_TOKEN env var
|
|
- User management with Telegram ID identification
- Memory storage with Ollama embeddings (nomic-embed-text)
- Semantic similarity search via cosine similarity
- remember/recall tools for agents
- runAgentWithMemory wrapper for memory-enhanced agents
- Separate memory.db database for user privacy
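The similarity search above reduces to cosine similarity over embedding vectors; this is a standard formulation, though the actual Memory.hs function name and vector representation may differ:

```haskell
-- Cosine similarity between two embedding vectors: dot product divided
-- by the product of magnitudes, with a guard for zero-length vectors.
cosineSimilarity :: [Double] -> [Double] -> Double
cosineSimilarity a b
  | na == 0 || nb == 0 = 0
  | otherwise          = dot / (na * nb)
  where
    dot = sum (zipWith (*) a b)
    na  = sqrt (sum [x * x | x <- a])
    nb  = sqrt (sum [x * x | x <- b])
```

Recall then ranks stored memories by this score against the query embedding.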
|
|
- Create Omni/Agent/Provider.hs with unified Provider interface
- Support OpenRouter (cloud), Ollama (local), Amp (subprocess stub)
- Add runAgentWithProvider to Engine.hs for Provider-based execution
- Add EngineType to Core.hs (EngineOpenRouter, EngineOllama, EngineAmp)
- Add --engine flag to 'jr work' command
- Worker.hs dispatches to appropriate provider based on engine type
Usage:
jr work <task-id> # OpenRouter (default)
jr work <task-id> --engine=ollama # Local Ollama
jr work <task-id> --engine=amp # Amp CLI (stub)
|
|
Defines architecture for multi-agent system with:
- Provider abstraction (OpenRouter, Ollama, Amp backends)
- Shared memory system (sqlite-vss, multi-user, cross-agent)
- Tool registry for pluggable tool sets
- Evals framework for regression testing
- Telegram bot as first concrete agent
Tasks: t-247 through t-251
|
|
|
|
- Change 'from Biz.X import Y' to 'import Biz.X as X' style
- bild only recognizes 'import X as Y' for local dep detection
- Add setuptools to Python deps (required by newer nixpkgs)
Amp-Thread-ID: https://ampcode.com/threads/T-fe8328a9-7709-4544-9d31-b099f04aa120
Co-authored-by: Amp <amp@ampcode.com>
|
|
Add explicit guidance on:
- Reading files with large ranges (500+ lines) instead of many small chunks
- Using read_file directly when target file is known vs search_and_read
- Cost awareness: planning refactors, avoiding redundant reads
- Tool call limits for complex tasks
|
|
The task was being added to the prompt twice, once in the base prompt and once
in the user prompt.
|
|
Worked with Gemini and Opus to improve the system prompt with learnings from the
Amp prompt. Removed reference to Omni/Task/README.md because it is deprecated in
favor of `jr task`.
|
|
jr prompt <task-id> constructs and prints the full system prompt
that would be sent to the agent, including:
- Agent configuration (model, cost budget)
- Base instructions
- AGENTS.md content
- Relevant facts from knowledge base
- Retry/progress context if applicable
Useful for debugging agent behavior and token usage.
|