I think the current memory mechanism in Alma could still be improved.

The current memory system is quite useful for executing a single task, but for a chatbot, this mechanism feels like forcing a single individual's entire memory into every conversational context.

My suggestion is to leverage context as much as possible: first understand the current query, and only then retrieve memories. In other words, shift memory retrieval from a default upfront process to an on-demand step performed after the query is understood.
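A rough sketch of this on-demand flow. Everything here is a hypothetical placeholder: the `Memory` class, the `needs_memory` trigger-word heuristic, and the word-overlap scoring; a real system would let the model itself decide when recall is needed.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    weight: float = 1.0

def needs_memory(query: str) -> bool:
    # Heuristic placeholder: only recall when the query refers to the past
    # or to the user's own preferences (assumed trigger phrases).
    triggers = ("remember", "last time", "again", "my favorite")
    return any(t in query.lower() for t in triggers)

def answer(query: str, store: list[Memory]) -> list[Memory]:
    # Retrieval happens after the query is understood, not by default.
    if not needs_memory(query):
        return []  # rely on the current context alone
    words = set(query.lower().split())
    scored = [(len(words & set(m.text.lower().split())) * m.weight, m)
              for m in store]
    return [m for score, m in sorted(scored, key=lambda p: -p[0]) if score > 0]
```

The point of the sketch is the control flow: the store is untouched unless the query itself signals that memory is relevant.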
Memories inherently carry weight, so a forgetting mechanism is essential, for example one modeled on the Ebbinghaus forgetting curve.
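A minimal sketch of Ebbinghaus-style decay, assuming each memory stores a base weight, a last-access time, and a per-memory half-life (the one-week default is an arbitrary assumption):

```python
def retention(elapsed_s: float, half_life_s: float) -> float:
    # Exponential decay: retention halves after every half-life.
    return 0.5 ** (elapsed_s / half_life_s)

def effective_weight(base_weight: float, last_access_s: float,
                     now_s: float, half_life_s: float = 7 * 86400.0) -> float:
    # The weight used at retrieval time is the stored weight
    # discounted by how long ago the memory was last touched.
    return base_weight * retention(now_s - last_access_s, half_life_s)
```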

Regarding memory recall, I believe that after recalling something, the recalled content should be assigned a higher weight.
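A sketch of that reinforcement rule; the `boost` and `cap` parameters are invented for illustration. Stretching the half-life as well as the weight makes frequently recalled memories decay more slowly, in the spirit of spaced repetition.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    weight: float = 1.0
    half_life_s: float = 7 * 86400.0  # assumed initial half-life: one week

def reinforce(m: MemoryEntry, boost: float = 1.2, cap: float = 5.0) -> None:
    # Each successful recall raises the weight (up to a cap) and
    # stretches the half-life, so the memory fades more slowly next time.
    m.weight = min(m.weight * boost, cap)
    m.half_life_s *= boost
```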

The program could randomly trigger recall during idle periods—for instance, by monitoring system performance and activating recall when CPU utilization is low.
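One way that could look, using the Unix-only `os.getloadavg` as the idleness signal; the threshold and polling interval are assumptions, and a production system might watch richer metrics.

```python
import os
import time

def system_is_idle(threshold: float = 0.5) -> bool:
    # 1-minute load average divided across CPUs; os.getloadavg is Unix-only.
    load_per_cpu = os.getloadavg()[0] / (os.cpu_count() or 1)
    return load_per_cpu < threshold

def idle_loop(run_recall, poll_s: float = 60.0) -> None:
    # Background maintenance loop: spontaneously replay/consolidate
    # memories only when the machine has spare capacity.
    while True:
        if system_is_idle():
            run_recall()
        time.sleep(poll_s)
```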

Memory Organization

Deduplication: merge similar memories into cleaner, consolidated entries.
Abstraction: transform multiple specific events into stable preferences.
Weight reassessment: update half-life/weight based on recent usage patterns.
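The first of these operations could start from something as simple as word-overlap deduplication; this sketch merges entries whose Jaccard similarity exceeds an arbitrarily chosen 0.6 threshold, with the survivor keeping the summed weight.

```python
def jaccard(a: str, b: str) -> float:
    # Word-set similarity: |A ∩ B| / |A ∪ B|.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def deduplicate(entries: list[tuple[str, float]],
                threshold: float = 0.6) -> list[tuple[str, float]]:
    # Merge near-duplicate memories; the kept entry absorbs the weight
    # of every duplicate folded into it.
    merged: list[tuple[str, float]] = []
    for text, weight in entries:
        for i, (kept, w) in enumerate(merged):
            if jaccard(text, kept) >= threshold:
                merged[i] = (kept, w + weight)  # consolidate weight
                break
        else:
            merged.append((text, weight))
    return merged
```

A real implementation would use embedding similarity rather than word overlap, but the merge-and-sum structure would be the same.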

Add reflection

Reflection answers three questions:
1. Why did I respond that way? (basis: context vs. memory)
2. Was recalling this memory helpful? (contribution assessment)
3. What should I update? (add/modify/decrease memory weight)
Reflection need not be displayed to the user, but it drives weight updates and abstract organization.
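A sketch of how such a reflection record might be structured; the enum values mirror the three questions, and every name here is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Basis(Enum):
    CONTEXT = "context"
    MEMORY = "memory"

class Action(Enum):
    ADD = "add"
    MODIFY = "modify"
    DECREASE_WEIGHT = "decrease_weight"

@dataclass
class Reflection:
    basis: Basis          # Q1: why did I respond that way?
    recall_helped: bool   # Q2: was recalling this memory helpful?
    action: Action        # Q3: what should I update?
    weight_delta: float = 0.0

def apply_reflection(weight: float, r: Reflection) -> float:
    # Reflection is never shown to the user; it only drives weight updates.
    if r.action is Action.DECREASE_WEIGHT:
        return max(0.0, weight - abs(r.weight_delta))
    return weight + r.weight_delta
```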

Adopting Locke's approach to human understanding: cognitive mechanisms and how ideas enter the mind

Ideas aren't generated in one go. Common pathways include:
Association: A frequently co-occurs with B → A triggers B ("exams" evoke "anxiety").
Abstraction and generalization: multiple concrete examples → deriving rules (from "those people are unreliable" to "people are unreliable").
Analogy: applying known concepts to understand the unknown (e.g., viewing "society" as "family," or "company" as "battlefield").
Integration: weaving fragmented experiences into causal narratives ("I'm always unlucky because I'm not good enough").
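The association pathway in particular is easy to sketch as a co-occurrence counter; once two concepts have appeared together often enough (the threshold of 3 is an arbitrary assumption), either one triggers the other.

```python
from collections import Counter
from itertools import combinations

class AssociationGraph:
    # Association: A frequently co-occurs with B, so A triggers B.
    def __init__(self, threshold: int = 3):
        self.counts: Counter[tuple[str, str]] = Counter()
        self.threshold = threshold

    def observe(self, concepts: set[str]) -> None:
        # Count every unordered pair of concepts seen together.
        for a, b in combinations(sorted(concepts), 2):
            self.counts[(a, b)] += 1

    def triggered_by(self, concept: str) -> set[str]:
        # Concepts whose co-occurrence count with `concept` passed threshold.
        out = set()
        for (a, b), n in self.counts.items():
            if n >= self.threshold:
                if a == concept:
                    out.add(b)
                elif b == concept:
                    out.add(a)
        return out
```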

Innate vs. experience: innate factors certainly carry significant weight, and humans can configure them. The open question is whether innate traits should be adjusted directly based on experience or updated in a Bayesian fashion. Dialogue boxes could let humans annotate this information.
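If the Bayesian route were taken, one illustrative option is to express a human-configured innate trait as a Beta prior that experience then updates; this is only a sketch of that idea, not anything the post specifies.

```python
def beta_update(prior_a: float, prior_b: float,
                successes: int, failures: int) -> tuple[float, float]:
    # The configured innate disposition is a Beta(prior_a, prior_b) prior;
    # observed experience updates it by simple conjugate counting.
    return prior_a + successes, prior_b + failures

def trait_strength(a: float, b: float) -> float:
    # Posterior mean of the trait: strong priors resist new evidence,
    # weak priors are quickly reshaped by it.
    return a / (a + b)
```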

Employ the Comprehensible Input Hypothesis: Humans acquire language by gradually exploring and expanding from existing knowledge. If AI responses exceed user comprehension, it fails. But how can AI determine user comprehension levels?

First, through the current context; second, through memory.
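A crude sketch of such an estimate: treat comprehension as the fraction of a candidate reply's vocabulary the user has already encountered in the current context or in remembered dialogue. This is purely illustrative; a real system would need semantic rather than lexical matching.

```python
def comprehension_level(context_words: set[str], memory_vocab: set[str],
                        reply_words: set[str]) -> float:
    # Fraction of the reply's vocabulary the user has already seen,
    # either in the current context or in remembered past dialogue.
    known = context_words | memory_vocab
    return len(reply_words & known) / len(reply_words) if reply_words else 1.0
```

A reply could then be kept only if its level stays above some floor, keeping output just slightly beyond what the user already knows, in the spirit of comprehensible input.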

How should the affective filter be implemented?


Status: In Review
Board: 💡 Feature Request
Date: About 1 month ago
Author: Midas Penn
