Misleading "Invalid message format" error when sending chat messages; local logs show missing Copilot auth and a secondary ReferenceError

Summary

Alma shows "Invalid message format" when the user sends a normal chat message, even when the selected chat model is not a Copilot model.

On this machine, the visible UI error appears to be misleading. Local diagnostic logs show that the actual failure during generation is:

  • CopilotServiceError: No access token found for account "<COPILOT_ACCOUNT_ID>". Please authenticate first.

A second local error then occurs:

  • ReferenceError: messageStorageId is not defined

The second error appears to replace or mask the original, actionable error, resulting in the user-facing "Invalid message format" message.

This matters because it blocks normal chat usage and makes the failure hard to diagnose.

Environment

  • Product: Alma

  • Version: 0.0.743

  • OS: Windows 10.0.26200

  • Architecture: x64

  • Runtime info observed in local diagnostics:

    • Electron 38.7.2

    • Node 22.21.1

  • Hardware:

    • CPU: AMD EPYC 7763 64-Core Processor

  • Relevant local setup:

    • Alma was configured with multiple providers

    • Observed local provider configuration included:

      • openai with base URL http://localhost:4141/v1

      • anthropic with base URL http://localhost:4141

      • copilot with base URL https://api.githubcopilot.com

  • Network/setup context:

    • A local proxy was in use for some providers

    • Whether the proxy is required to reproduce this exact Alma-side failure: Needs confirmation

Preconditions

Observed on this machine:

  • Alma had a configured Copilot provider with:

    • provider id: copilot

    • account id: <COPILOT_ACCOUNT_ID>

  • Alma settings showed:

    • toolModel.model = "copilot:gpt-5.4-mini"

    • chat.defaultModel = ""

  • The local Copilot account storage directory existed but was empty:

    • C:\Users\<WINDOWS_USER>\AppData\Roaming\alma\.copilot_accounts

  • User reported the visible issue when using:

    • gpt-5.4

    • gpt-5.3-codex

    • across anthropic, openai, and github copilot

What is strictly required to reproduce beyond the above: Needs confirmation

Steps to Reproduce

Minimal user-level reproduction observed/reported:

  1. Open Alma.

  2. Configure or keep multiple providers enabled, including a Copilot provider.

  3. Ensure Alma is in the state where the Copilot account referenced by the provider is missing or unauthenticated.

  4. Select a chat model such as openai:gpt-5.4.

  5. Open a chat thread.

  6. Send a simple message such as test.

Optional lower-level reproduction used during investigation:

  1. Connect to Alma’s local WebSocket endpoint: ws://127.0.0.1:23001/ws/threads

  2. Send a generate_response payload with:

    • threadId: existing thread id

    • model: openai:gpt-5.4

    • userMessage: simple text message

  3. Observe that Alma first reports memory retrieval progress, then returns:

    • {"type":"error","data":{"error":"Invalid message format"}}

Whether the lower-level WebSocket repro is stable across environments: Needs confirmation
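The lower-level repro above can be sketched as a short Node script. Only the endpoint and the threadId/model/userMessage fields come from this report; the message type name and the { type, data } envelope are assumptions and may not match Alma's real wire protocol.

```javascript
// Hypothetical sketch of the lower-level WebSocket repro. The endpoint and
// the threadId/model/userMessage fields come from this report; the message
// type name and the { type, data } envelope are assumptions.
function buildGenerateResponsePayload(threadId, model, userMessage) {
  return JSON.stringify({
    type: 'generate_response', // assumed message type name
    data: { threadId, model, userMessage },
  });
}

// Usage (Node 22+ ships a global WebSocket client):
function sendRepro(threadId) {
  const ws = new WebSocket('ws://127.0.0.1:23001/ws/threads');
  ws.addEventListener('open', () => {
    ws.send(buildGenerateResponsePayload(threadId, 'openai:gpt-5.4', 'test'));
  });
  ws.addEventListener('message', (event) => {
    // Observed locally: memory_retrieval_progress events, then
    // {"type":"error","data":{"error":"Invalid message format"}}
    console.log(event.data);
  });
}
```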

Expected Behavior

When sending a normal chat message:

  • Alma should either generate a response successfully, or

  • fail with a clear, actionable error that reflects the real cause

If a background dependency is missing, the UI should report that dependency directly, for example an authentication error, rather than "Invalid message format".

A second internal error should not overwrite or hide the original failure.

Actual Behavior

Visible user-facing behavior:

  • Alma shows: "Invalid message format"

Observed local diagnostic behavior on this machine:

  • Generation starts

  • Memory retrieval progresses normally

  • Then local logs show:

    • Chat generation error: CopilotServiceError: No access token found for account "<COPILOT_ACCOUNT_ID>". Please authenticate first.

    • WebSocket message error: ReferenceError: messageStorageId is not defined

So the visible "Invalid message format" error does not match the actual logged failure.

Frequency / Reproducibility

  • User-reported frequency: repeated / persistent

  • Observed on this machine: reproducible multiple times on 2026-04-01

  • Under the observed local state, it appears to happen consistently

Whether it always requires a missing Copilot account/token state: Needs confirmation

Impact

User-facing impact:

  • Chat becomes unusable

  • The error message is misleading

  • Troubleshooting is much harder than necessary

Severity:

  • High for affected users, because normal chat requests fail

Known workaround:

  • Likely workaround: restore/re-authenticate the missing Copilot account, or change the configured tool/background model away from the missing Copilot-backed model

  • Whether this fully resolves the issue in all cases: Needs confirmation

Evidence

Exact visible error

  • Invalid message format

Exact logged errors

Observed in local diagnostics around 2026-04-01T07:04:12Z:

  • Chat generation error: CopilotServiceError: No access token found for account "<COPILOT_ACCOUNT_ID>". Please authenticate first.

  • WebSocket message error: ReferenceError: messageStorageId is not defined

Relevant local artifact locations

  • Sentry/local diagnostic scope:

    • C:\Users\<WINDOWS_USER>\AppData\Roaming\alma\sentry\scope_v3.json

  • Chat database:

    • C:\Users\<WINDOWS_USER>\AppData\Roaming\alma\chat_threads.db

  • Copilot account storage:

    • C:\Users\<WINDOWS_USER>\AppData\Roaming\alma\.copilot_accounts

Timestamps observed

User-visible failure history in the affected thread included failures around:

  • 2026-04-01T03:41:56Z

  • 2026-04-01T03:42:00Z

  • 2026-04-01T03:44:40Z

  • 2026-04-01T06:49:15Z

  • 2026-04-01T06:51:55Z

  • 2026-04-01T06:53:16Z

  • 2026-04-01T06:55:25Z

Detailed diagnostic reproduction captured around:

  • 2026-04-01T07:04:12Z

Additional observed facts

  • The affected thread contained only user messages and no assistant replies

  • The Copilot account directory was empty at the time of investigation

  • Alma settings on this machine included:

    • toolModel.model = "copilot:gpt-5.4-mini"

Maintainer-facing repro signal

In the WebSocket-based reproduction, Alma emitted:

  • memory_retrieval_progress events

  • then:

    • {"type":"error","data":{"error":"Invalid message format"}}

This suggests the request passed initial handling and failed later in generation/error handling.

Scope

Seems affected:

  • Sending simple chat messages

  • At least gpt-5.4

  • User also reported gpt-5.3-codex

  • User reported the visible failure across:

    • openai

    • anthropic

    • github copilot

Observed directly on this machine:

  • Reproduced with selected model openai:gpt-5.4

Seems not affected / less likely to be the direct root cause:

  • The visible error does not appear to be a literal message-format/schema validation failure at the UI layer

  • The failure occurs after generation has already started enough to emit memory retrieval progress

Needs confirmation:

  • Whether this affects all non-Copilot chat providers when the configured Copilot-backed tool/background model is unauthenticated

  • Whether this affects all models or only some models

  • Whether this reproduces outside the local proxy setup

Hypotheses (Optional)

Hypothesis 1

Alma may depend on a background/tool model during generation even when the selected chat model belongs to another provider. If that background model depends on Copilot auth and the Copilot token/account is missing, generation fails before the selected provider can complete the request.

Grounding:

  • Local settings showed toolModel.model = "copilot:gpt-5.4-mini"

  • Local logs showed a Copilot auth error while the selected model in reproduction was openai:gpt-5.4

Hypothesis 2

A second error in Alma’s error-handling path is masking the original failure:

  • original error: missing Copilot auth

  • secondary error: ReferenceError: messageStorageId is not defined

  • user-visible fallback: Invalid message format

Grounding:

  • This exact sequence was observed in local logs
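The masking pattern in Hypothesis 2 can be illustrated with a minimal sketch. This is not Alma's actual code; the function names are invented, and only the variable name messageStorageId and the logged strings are taken from this report.

```javascript
// Hypothetical illustration only; not Alma's code. The variable name
// messageStorageId comes from the logged error, everything else is invented.
function cleanup(id) { /* no-op stand-in for error-path cleanup */ }

async function handleGenerate() {
  try {
    // Stand-in for the original failure observed in the logs.
    throw new Error('CopilotServiceError: No access token found');
  } catch (err) {
    console.error('Chat generation error:', err.message);
    // Bug pattern: messageStorageId was never declared on this path, so the
    // next line throws a ReferenceError that replaces the original error.
    cleanup(messageStorageId);
  }
}

handleGenerate().catch((err) => {
  // The outer handler only sees the ReferenceError, so a generic fallback
  // is what reaches the user.
  console.error('WebSocket message error:', err.message);
  console.log(JSON.stringify({ type: 'error', data: { error: 'Invalid message format' } }));
});
```

Under this sketch, the actionable auth error is logged once and then lost; only the secondary ReferenceError propagates, matching the observed log sequence.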

Suggested Fix Direction (Optional)

High-level, practical suggestions only:

  • Preserve and surface the original generation error to the user when possible

  • Do not collapse unrelated internal failures into a generic "Invalid message format" message

  • If a background/tool model is required, validate its auth state explicitly and return a clear message

  • Prevent the secondary ReferenceError from executing in the error path

  • Ensure that non-Copilot chat requests are not blocked by stale/missing Copilot auth unless Copilot is actually required for that request
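A minimal sketch of the "preserve the original error" direction, assuming a generic run/cleanup split (both names hypothetical, not Alma's API): if the error path itself fails, chain that failure as `cause` instead of letting it replace the original error.

```javascript
// Sketch of the suggested direction, not Alma's actual code: run the real
// work and, if the error-path cleanup itself fails, chain that failure as
// `cause` rather than letting it replace the original error.
async function generateWithPreservedError(run, onErrorCleanup) {
  try {
    return await run();
  } catch (originalError) {
    try {
      await onErrorCleanup();
    } catch (cleanupError) {
      // Keep the actionable error on top; attach the secondary one.
      originalError.cause = cleanupError;
    }
    throw originalError; // the user sees the auth error, not the ReferenceError
  }
}
```

With this shape, the CopilotServiceError would surface to the UI while a failure like the messageStorageId ReferenceError stays visible to maintainers via `cause`.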

Acceptance Criteria

This issue can be considered resolved when all of the following are true:

  1. Sending a simple message in Alma no longer produces "Invalid message format" under this failure mode.

  2. If Copilot authentication is missing, Alma shows a clear and actionable auth-related error instead of a generic message-format error.

  3. No secondary error like ReferenceError: messageStorageId is not defined occurs during the same failure path.

  4. A selected non-Copilot chat model can complete normally, or fail with a correct dependency error, even when a Copilot account is stale or missing.

  5. The maintainer can reproduce the issue before the fix and verify it no longer occurs after the fix.

Open Questions / Missing Information

  • Is a missing/stale Copilot account the required trigger, or only one trigger?

  • Is the configured toolModel.model the reason Alma touches Copilot during non-Copilot chat generation?

  • Does the same issue reproduce on macOS or Linux?

  • Does it reproduce without the local proxy setup?

  • Does it affect all models, or only gpt-5.4 / gpt-5.3-codex?

  • Is there a simpler end-user repro path the maintainer prefers over the local WebSocket repro?

  • Screenshot/video of the UI failure: Unknown

  • Full crash dump beyond local scope/log evidence: Unknown

Status: In Review
Board: 💡 Feature Request
Date: About 5 hours ago
Author: wxxb789
