chat.message.willSend hook output is silently dropped — updatedInput.content never reaches the LLM

Version: 0.0.773 (also reproducible on 0.0.772)


What’s happening

When a hook registered for chat.message.willSend returns updatedInput.content to rewrite the user's message, the rewritten content is silently discarded. The LLM receives the original, unmodified message.

This breaks every plugin and user-defined hook that relies on message rewriting (auto-routing, redaction, instruction injection, etc.).
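As a concrete example of what such hooks depend on, here is a hedged sketch of a redaction hook's core logic. The `redact` function, the regex, and the input shape are illustrative assumptions; only the `decision` / `updatedInput.content` response shape comes from the documented contract:

```javascript
// Illustrative sketch only: redact() and the regex are assumptions,
// not Alma's actual API surface.
function redact(content) {
  // Mask anything that looks like an sk-... API key.
  return content.replace(/\bsk-[A-Za-z0-9]{8,}\b/g, "[REDACTED]");
}

// Build a hook response in the documented updatedInput.content shape.
function hookResponse(message) {
  return JSON.stringify({
    decision: "allow",
    updatedInput: { content: redact(message) }
  });
}

console.log(hookResponse("my key is sk-abcdef123456"));
// → {"decision":"allow","updatedInput":{"content":"my key is [REDACTED]"}}
```

With the bug described below, the `[REDACTED]` content never reaches the model, so the unmasked key is sent anyway.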

Reproduction (minimal)

  1. Save this script as ~/.config/alma/hooks/probe.py:

```python
#!/usr/bin/env python3
import json, sys

sys.stdin.read()
print(json.dumps({
    "decision": "allow",
    "updatedInput": {"content": "[INJECTED] hello world"}
}))
```

  2. Make it executable:

```shell
chmod +x ~/.config/alma/hooks/probe.py
```

  3. Add to ~/.config/alma/hooks.json:

```json
{
  "hooks": {
    "chat.message.willSend": [{
      "matcher": ".*",
      "hooks": [{
        "enabled": true,
        "timeout": 3000,
        "command": "python3 /Users/<you>/.config/alma/hooks/probe.py"
      }]
    }]
  }
}
```
  4. Restart Alma and send any message in chat (e.g. hi).

  5. Expected: the LLM receives [INJECTED] hello world and replies to that. Actual: the LLM receives hi and replies to hi. The injection is invisible.

Verification that the hook itself is fine

  • GET /api/hooks/path returns the correct config path → hooks are loaded.

  • Piping the same JSON to the script via stdin produces the documented output format.

  • Multiple hooks chained on willSend all run and log correctly.

  • The console log line [Plugins] chat.message.willSend hook modified content (which would appear if the rewrite path executed) never fires.

So the hook layer works end-to-end. The breakage is downstream of the hook.

Where I think the bug is

In the main process bundle, inside generateChatResponse, the rewrite logic is wrapped in a Promise that is created but never awaited or consumed:

```js
const G = (async () => {
  // ... await ul.trigger("chat.message.willSend", ..., s);
  if (s.content && s.content !== o) {
    console.log("[Plugins] chat.message.willSend hook modified content");
    return [...c.slice(0, t), modifiedMsg, ...c.slice(t + 1)];
  }
  return null;
})().catch(e => null);
// The next Promise.all only awaits Y, J, V — not G:
const [K, Q, Z] = await Promise.all([Y.catch(...), J.catch(...), V.catch(...)]);
// Then the original c array is sent to the LLM, never replaced by G's result.
```

A whole-file grep confirms G is referenced exactly once (at the assignment). It is never awaited, never used.
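The pattern reproduces in isolation. Below is a minimal, self-contained model of the suspected control flow; apart from the names G and c, everything is a stand-in rather than the actual bundle code:

```javascript
// Minimal model of the suspected bug: the rewrite promise G is created,
// but its result is never consumed, so the original messages always ship.
async function generateChatResponseBuggy(c) {
  const G = (async () => {
    // Stand-in for the willSend rewrite: inject a marker into each message.
    return c.map(m => ({ ...m, content: `[INJECTED] ${m.content}` }));
  })().catch(() => null);

  // Only the other work is awaited; G is not in the Promise.all.
  await Promise.all([Promise.resolve("other-work")]);

  // The original array c is sent, never replaced by G's result.
  return c;
}

generateChatResponseBuggy([{ role: "user", content: "hi" }])
  .then(out => console.log(out[0].content)); // prints "hi", not "[INJECTED] hi"
```

G still runs to completion (so any logging inside it would fire), but its return value is garbage-collected without ever touching the message array.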

Suggested fix

Either include G in the existing Promise.all, or await it separately and apply its result before the LLM call:

```js
const rewritten = await G;
const finalMessages = rewritten ?? c;
// use finalMessages downstream
```
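For the first option, here is a sketch of folding G into the existing Promise.all. Y, J, and V are placeholders; only the handling of G and the fallback to the original c reflect the suggested fix:

```javascript
// Sketch of the Promise.all variant. Y/J/V are illustrative placeholders.
async function generateChatResponseFixed(c) {
  const G = (async () => {
    return c.map(m => ({ ...m, content: `[INJECTED] ${m.content}` }));
  })().catch(() => null); // on hook failure, resolve to null

  const Y = Promise.resolve("y"), J = Promise.resolve("j"), V = Promise.resolve("v");
  const [K, Q, Z, rewritten] = await Promise.all([Y, J, V, G]);

  // Apply the rewrite (or keep the original messages) before the LLM call.
  const finalMessages = rewritten ?? c;
  return finalMessages;
}

generateChatResponseFixed([{ role: "user", content: "hi" }])
  .then(out => console.log(out[0].content)); // prints "[INJECTED] hi"
```

Because G already carries a .catch that resolves to null, a failing hook degrades gracefully to the original messages instead of breaking the chat flow.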

Impact

Any of these is silently broken right now:

  • Plugins that route messages to specialist agents

  • PII / secret redaction hooks

  • Per-thread system prompt augmentation via user-message injection

  • Any user-side automation built on the documented updatedInput.content contract

Happy to provide more logs, full grep output, or a minimal repro repo if useful.

Status: In Review
Board: 🐛 Bug Reports
Date: about 2 hours ago
Author: YUN GU
