
GitHub Copilot Provider (Enterprise subscription) fails to auto-discover some available models; manual injection required

## Problem

In Alma v0.0.760, after clicking Fetch on the Copilot provider, some models that are actually available and have `model_picker_enabled=true` are not discovered, including:

- claude-opus-4.6
- claude-opus-4.7
- claude-sonnet-4.6

Requesting https://api.githubcopilot.com/models directly with the same OAuth token returns all of these models normally: `capabilities.type` is `"chat"` and `model_picker_enabled` is `true`, so in principle they should pass Alma's filter conditions.

Currently the only workaround is manual injection via the API:

```bash
curl -X PUT http://localhost:23001/api/providers/ \
  -H "Content-Type: application/json" \
  -d '{"availableModels": [..., {"id":"claude-opus-4.6","name":"Claude Opus 4.6","isManual":true}]}'
```

However, clicking Fetch again overwrites the manually added models.

## Suggestions

- Open a manual "add model" entry for the Copilot provider — the manual-add UI is currently only available for Azure/ACP; it should be available for Copilot as well.
- Preserve models with `isManual: true` during Fetch — so manually added models are not wiped out.
- Investigate why Fetch drops these models — possibly related to the dedupe logic (keeping the highest version per name) or the cache (5-minute TTL).

## Environment

Alma v0.0.760 / Windows 10
Provider: GitHub Copilot
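To illustrate the suspected dedupe cause: if the provider's filter keeps only the highest version per model family, sibling versions silently disappear from the fetched list. A minimal sketch — the `dedupe_keep_highest` name and the version-parsing rule are assumptions for illustration, not Alma's actual code:

```python
import re

def dedupe_keep_highest(models):
    """Keep only the highest-versioned model per family, e.g. of
    claude-opus-4.6 and claude-opus-4.7 only 4.7 survives."""
    def parse(model_id):
        # Split "claude-opus-4.6" into family "claude-opus" and version (4, 6).
        m = re.match(r"^(.*)-(\d+(?:\.\d+)*)$", model_id)
        if m:
            return m.group(1), tuple(int(x) for x in m.group(2).split("."))
        return model_id, ()
    best = {}
    for model in models:
        family, version = parse(model["id"])
        if family not in best or version > parse(best[family]["id"])[1]:
            best[family] = model
    return list(best.values())

models = [
    {"id": "claude-opus-4.6"},
    {"id": "claude-opus-4.7"},
    {"id": "claude-sonnet-4.6"},
]
print(sorted(m["id"] for m in dedupe_keep_highest(models)))
# → ['claude-opus-4.7', 'claude-sonnet-4.6']
```

Under this hypothesis claude-opus-4.6 would always be shadowed by 4.7, though it would not explain 4.7 itself going missing — the cache TTL may be a separate factor.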

测试账号 3 days ago

🐛

Bug Reports

RTK Savings Data Not Displaying + Need for RTK Bypass Option

## Environment

- App version: alma v0.0.756 (latest)
- OS: macOS (Apple Silicon)
- Model: kimi-k2p5-turbo (via Fireworks)

## Bug 1: RTK Savings Stats Always Show "暂无节省数据"

### Description

The RTK (Real-time Kompression) panel in the Alma UI consistently shows "暂无节省数据" (No savings data yet) / "When Alma executes commands, RTK will automatically compress tool output. Start a conversation and savings data will appear here." However, RTK is clearly functioning — tool outputs show `[alma compacted XXXXX chars]` markers indicating compression is happening successfully. The statistics display just doesn't reflect this.

### Evidence

- Tool outputs are being compressed: e.g., `alma compacted 11292 chars`, `alma compacted 1183 chars`
- This has been observed across multiple conversations and commands
- The UI stats panel never updates, even after extended sessions with many compressed outputs

### Expected Behavior

The RTK savings panel should display:

- Total bytes/chars saved
- Compression ratio or percentage
- Number of compressions performed

### Actual Behavior

Panel always shows "暂无节省数据" regardless of how many compressions have occurred.

### Possible Cause

The RTK compression engine runs correctly but the statistics/metrics are not being persisted or emitted to the UI layer. The UI component reads from a stats store that is never populated.

## Feature Request: Per-Tool or Per-Skill RTK Bypass

### Motivation

During a workflow involving ChromeRelay to organize Blackboard course files (thread: mnsr31rnasjxdxoey2), RTK's auto-compaction caused significant issues:

- Lost HTML structure: ChromeRelay returns full page HTML/DOM data. RTK compaction strips or truncates this, making it impossible to parse download links, form tokens, or navigation elements.
- Lost file metadata: When listing downloaded files, the compacted output dropped file sizes and modification dates that were needed for deduplication logic.
- Context loss in multi-step browser operations: A sequence of ChromeRelay commands (navigate → scrape → download) relies on the full output of each step being preserved. Compaction breaks this chain.

### Concrete Example

In the Blackboard file organization session:

1. ChromeRelay returned HTML content from vuws.westernsydney.edu.au containing CDN download URLs for PDFs
2. RTK compacted the output, losing the URL patterns needed to extract direct download links
3. The workaround was to use eval to extract cookies/headers from the browser and construct curl commands manually — which shouldn't be necessary if the raw ChromeRelay output were preserved

### Proposed Solution

Add a config option to bypass RTK for specific tools or skills:

```json
{
  "rtk": {
    "bypassTools": ["ChromeRelay", "ChromeRelayEval"],
    "bypassThreshold": 0
  }
}
```

Or a per-invocation flag:

```bash
alma config set rtk.bypassTools '["ChromeRelay", "ChromeRelayEval"]'
```

Alternative approaches:

- Per-tool threshold: Only compress if output exceeds a much larger threshold (e.g., 50KB) for browser-related tools
- Smart bypass: Don't compress outputs that contain URLs, HTML, or structured data formats
- Per-skill bypass: Allow skills to declare `rtk: false` in their SKILL.md

Also discussed in https://discord.com/channels/1454390052359503986/1454390459328761857/1484830135365537924

### Why This Matters

RTK is great for saving tokens on verbose command output (e.g., `ls -la`, `npm install`). But for tools that return structured browser data, the compression destroys the very information the AI needs to act on. A bypass option would let users get the best of both worlds.

### Related Config Context

Current RTK-related settings:

```json
{
  "chat.autoCompact": {
    "enabled": true,
    "threshold": 75,
    "keepRecentMessages": 4
  }
}
```

No RTK-specific bypass or tool exclusion settings currently exist.
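The proposed `bypassTools` / `bypassThreshold` settings could gate compaction with a check like the following. This is only a sketch of the requested behavior — neither setting exists in Alma today, the `should_compress` name is invented, and a nonzero threshold is used purely for illustration:

```python
def should_compress(tool_name, output, config):
    """Gate RTK compaction: skip any tool listed in bypassTools,
    and only compress output longer than bypassThreshold chars."""
    rtk = config.get("rtk", {})
    if tool_name in rtk.get("bypassTools", []):
        return False
    return len(output) > rtk.get("bypassThreshold", 0)

config = {"rtk": {"bypassTools": ["ChromeRelay", "ChromeRelayEval"],
                  "bypassThreshold": 1024}}

print(should_compress("ChromeRelay", "<html>" + "x" * 5000, config))  # False
print(should_compress("Bash", "y" * 5000, config))                    # True
print(should_compress("Bash", "short output", config))                # False
```

A per-skill `rtk: false` declaration could feed the same check by merging the skill's tools into `bypassTools` at load time.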

Bill ZHANG 11 days ago

1
💡

Feature Request

Bug: `WebSearch` can trigger a native macOS “Save As” dialog on PDF/download result URLs, and the saved file is not read back by AI

That indicates WebSearch tried to access a PDF/download-style URL while enriching results.

## Why this points to WebSearch

The dialog appeared immediately after the WebSearch call above. The next calls in the thread were WebFetch calls against ordinary HTML article pages, for example:

- https://example-journal-site.invalid/article/.../full
- https://example-repository.invalid/articles/ /

Those returned contentType: text/html, so they were not the source of the save dialog.

## Expected behavior

When WebSearch encounters result URLs that are PDF-like or download-like, it should:

- skip preview/snippet enrichment for those URLs, or
- route them through a controlled PDF pipeline

It should not trigger an OS-level save dialog during search result processing.

## Actual behavior

- WebSearch touches a result URL with `download=true`
- macOS shows a native "Save As" dialog
- the user saves the file manually
- Alma does not read the saved file or attach it to the workflow

## Likely cause

WebSearch result enrichment is treating PDF/download targets like normal web pages. URLs with patterns like these need special handling:

- .pdf
- content/pdf
- pdfCoverPage
- pdfdirect
- download=true

## Recommended fix

1. Add PDF/download URL detection before result enrichment.
2. Do not open those URLs in the normal preview/snippet path.
3. Either skip enrichment or download programmatically to a temp path and parse the file.
4. If a file is downloaded, pass the saved path or extracted text back into the tool result.

## Impact

This breaks research workflows in two ways:

- unexpected native OS dialog interrupts the user
- the downloaded file becomes orphaned because Alma never reads it
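The detection step in the recommended fix could be a simple pre-enrichment filter over the URL patterns listed under "Likely cause". A sketch under stated assumptions — `looks_like_download` is a hypothetical helper, not part of Alma, and the marker list is taken directly from the patterns above:

```python
from urllib.parse import urlparse, parse_qs

# Path substrings that suggest a PDF/download target rather than an HTML page.
DOWNLOAD_MARKERS = (".pdf", "content/pdf", "pdfcoverpage", "pdfdirect")

def looks_like_download(url):
    """Return True if a search-result URL should skip snippet enrichment."""
    parsed = urlparse(url)
    path = parsed.path.lower()
    if any(marker in path for marker in DOWNLOAD_MARKERS):
        return True
    # Query flags like ?download=true also indicate a forced download.
    query = parse_qs(parsed.query)
    return query.get("download", ["false"])[0].lower() == "true"

print(looks_like_download("https://journal.example/paper.pdf"))        # True
print(looks_like_download("https://repo.example/file?download=true"))  # True
print(looks_like_download("https://journal.example/article/1/full"))   # False
```

URLs that match would then be either skipped or fetched programmatically to a temp path, keeping the OS save dialog out of the loop.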

Bill ZHANG 11 days ago

1
🐛

Bug Reports