| author | Ben Sima <ben@bensima.com> | 2025-12-01 04:34:12 -0500 |
|---|---|---|
| committer | Ben Sima <ben@bensima.com> | 2025-12-01 04:34:12 -0500 |
| commit | e2d6402464dda573c7401b34fc93557a786efc35 (patch) | |
| tree | 29fe7bf56e387b0c70f44e3f277ee1ffd3c398b2 /Omni/Jr.hs | |
| parent | 3945b6fad4f1620612beb259e8601d165b9f4f12 (diff) | |
Replace llm CLI with Engine.chat in Jr.hs
All tests pass. I've replaced the `llm` CLI calls with `Engine.chat` in Jr. Summary of the changes:
1. **`addCompletionSummary` function (lines 604-624)**:
   - Removed the `Process.readProcessWithExitCode "llm" []` call
   - Added an OPENROUTER_API_KEY environment variable check
   - Replaced the call with `Engine.chat`, following the same pattern as `generateEpic`
   - Proper error handling for a missing API key and for API failures
2. **`extractFacts` function (lines 658-680)**:
   - Removed the `Process.readProcessWithExitCode "llm" ["-s", ...]` call
   - Added an OPENROUTER_API_KEY environment variable check
   - Replaced the call with `Engine.chat`, following the same pattern as `generateEpic`
   - Proper error handling for a missing API key and for API failures
3. **Dependency cleanup**:
   - Removed `-- : run llm` from the header (line 12) since we no longer shell out to the `llm` CLI
- Both functions now use the OpenRouter API via `Engine.chat` (see the sketch after this list)
- Graceful degradation when OPENROUTER_API_KEY is not set (a warning message instead of a failure)
- Consistent error handling matching the existing `generateEpic` code
- All tests pass successfully
- No hlint or ormolu issues
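
Both call sites share the same lookup-then-chat shape. Below is a minimal sketch of that shared pattern, assuming the `Engine` API exactly as it appears in the diff further down (`defaultLLM`, `llmApiKey`, `Message`, `User`, `chat`, `msgContent`) and a `Text` error type; the `chatOnce` name and the `Omni.Engine` module path are hypothetical, and the `[]` second argument to `Engine.chat` is passed through exactly as at both call sites.

```haskell
module ChatOnce (chatOnce) where

import Data.Text (Text)
import qualified Data.Text as Text
import qualified Omni.Engine as Engine -- assumed module path for Engine
import qualified System.Environment as Env

-- | One-shot chat shared by both call sites: look up the key, build a
-- single user message, and return the stripped reply. 'Nothing' means
-- the key was unset (graceful degradation); 'Left' is an API failure.
chatOnce :: Text -> IO (Maybe (Either Text Text))
chatOnce prompt = do
  maybeApiKey <- Env.lookupEnv "OPENROUTER_API_KEY"
  case maybeApiKey of
    Nothing -> pure Nothing
    Just apiKey -> do
      let llm = Engine.defaultLLM {Engine.llmApiKey = Text.pack apiKey}
          messages = [Engine.Message Engine.User prompt Nothing Nothing]
      result <- Engine.chat llm [] messages
      -- Strip the reply on success; propagate the error text on failure.
      pure (Just (fmap (Text.strip . Engine.msgContent) result))
```

With a helper like this, each call site would reduce to a single case on `Maybe (Either Text Text)`.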
The implementation follows the exact pattern shown in the task description.
Task-Id: t-198
Diffstat (limited to 'Omni/Jr.hs')
| -rwxr-xr-x | Omni/Jr.hs | 53 |
1 file changed, 36 insertions(+), 17 deletions(-)
```diff
@@ -9,7 +9,6 @@
 -- : dep servant-server
 -- : dep lucid
 -- : dep servant-lucid
--- : run llm
 module Omni.Jr where

 import Alpha
@@ -604,17 +603,25 @@ addCompletionSummary tid commitSha = do

       -- Build prompt for llm
       let prompt = buildCompletionPrompt tid commitMessage diffSummary files
-      -- Call llm CLI to generate summary
-      (llmCode, llmOut, llmErr) <- Process.readProcessWithExitCode "llm" [] (Text.unpack prompt)
-
-      case llmCode of
-        Exit.ExitSuccess -> do
-          let summary = Text.strip (Text.pack llmOut)
-          unless (Text.null summary) <| do
-            _ <- TaskCore.addComment tid ("## Completion Summary\n\n" <> summary)
-            putText "[review] Added completion summary comment"
-        Exit.ExitFailure _ -> do
-          putText ("[review] Failed to generate completion summary: " <> Text.pack llmErr)
+      -- Try to get API key
+      maybeApiKey <- Env.lookupEnv "OPENROUTER_API_KEY"
+      case maybeApiKey of
+        Nothing -> do
+          putText "[review] Warning: OPENROUTER_API_KEY not set, skipping completion summary"
+        Just apiKey -> do
+          -- Call LLM via Engine.chat
+          let llm = Engine.defaultLLM {Engine.llmApiKey = Text.pack apiKey}
+              messages = [Engine.Message Engine.User prompt Nothing Nothing]
+
+          result <- Engine.chat llm [] messages
+          case result of
+            Left err -> do
+              putText ("[review] Failed to generate completion summary: " <> err)
+            Right msg -> do
+              let summary = Text.strip (Engine.msgContent msg)
+              unless (Text.null summary) <| do
+                _ <- TaskCore.addComment tid ("## Completion Summary\n\n" <> summary)
+                putText "[review] Added completion summary comment"

 -- | Build prompt for LLM to generate completion summary
 buildCompletionPrompt :: Text -> Text -> Text -> [String] -> Text
@@ -654,11 +661,23 @@ extractFacts tid commitSha = do
         Nothing -> pure ()
         Just task -> do
           let prompt = buildFactExtractionPrompt task diffOut
-          -- Call llm CLI
-          (code, llmOut, _) <- Process.readProcessWithExitCode "llm" ["-s", Text.unpack prompt] ""
-          case code of
-            Exit.ExitSuccess -> parseFacts tid llmOut
-            _ -> putText "[facts] Failed to extract facts"
+
+          -- Try to get API key
+          maybeApiKey <- Env.lookupEnv "OPENROUTER_API_KEY"
+          case maybeApiKey of
+            Nothing -> do
+              putText "[facts] Warning: OPENROUTER_API_KEY not set, skipping fact extraction"
+            Just apiKey -> do
+              -- Call LLM via Engine.chat
+              let llm = Engine.defaultLLM {Engine.llmApiKey = Text.pack apiKey}
+                  messages = [Engine.Message Engine.User prompt Nothing Nothing]
+
+              result <- Engine.chat llm [] messages
+              case result of
+                Left err -> do
+                  putText ("[facts] Failed to extract facts: " <> err)
+                Right msg -> do
+                  parseFacts tid (Text.unpack (Engine.msgContent msg))

 -- | Build prompt for LLM to extract facts from completed task
 buildFactExtractionPrompt :: TaskCore.Task -> String -> Text
```
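
As a quick check of the degradation path, a snippet like the following should print only the warning. It is hypothetical: it assumes `addCompletionSummary :: Text -> Text -> IO ()` is exported from `Omni.Jr`, which the diff suggests but does not show; the task id and sha are this commit's own, used as placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Omni.Jr (addCompletionSummary) -- assumed export
import qualified System.Environment as Env

main :: IO ()
main = do
  Env.unsetEnv "OPENROUTER_API_KEY"
  -- With the key absent, no API call is made and no comment is added;
  -- the function should only print:
  --   [review] Warning: OPENROUTER_API_KEY not set, skipping completion summary
  addCompletionSummary "t-198" "e2d6402464dda573c7401b34fc93557a786efc35"
```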
