Memories.ai is pioneering the development of a “visual memory layer” for artificial intelligence, focusing on enabling AI systems to remember and recall visual data – a capability currently lacking in most physical-world applications. The company, founded by Shawn Shen and Ben Zhou, addresses a critical gap in AI development: the ability for machines to learn from, and act upon, past visual experiences.
The Need for Visual Memory
AI currently excels in the digital realm but struggles to apply learned experience to real-world scenarios. This is because most AI advances prioritize text-based memory, which is easier to structure and index than visual data. The physical world operates on sight, and AI operating in this domain needs a way to retain and recall visual information. This is where Memories.ai steps in.
The founders recognized this need while working on Meta’s Ray-Ban smart glasses. They observed that if users couldn’t reliably recall recorded visual data, the glasses’ utility was limited. This led them to leave Meta and establish Memories.ai in 2024, raising $16 million in seed funding to date.
Partnership with Nvidia
Memories.ai is collaborating with Nvidia, leveraging tools like Cosmos-Reason 2 (a vision language model) and Nvidia Metropolis (a video search application) to accelerate its visual memory technology. The partnership highlights the industry's growing interest in AI that can "see" and remember, and pairing its work with Nvidia's infrastructure suggests the company believes the future of AI will rely heavily on high-performance visual processing.
Data Collection and Model Development
A key challenge in building visual memory is effectively embedding and indexing video data for storage and recall. Memories.ai developed its own Large Visual Memory Model (LVMM) in July 2025, comparable to Google's Gemini Embedding 2 but tailored for visual information. To train this model, the company created LUCI, a proprietary hardware device worn by data collectors to capture training footage. The decision to build custom hardware underscores the limitations of off-the-shelf video recording technology for AI training.
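The article does not describe how the LVMM actually works, and the model itself is proprietary. But the general pattern it gestures at (embed each video frame into a fixed-size vector, store the vectors in an index, and recall the closest memory by similarity search) can be sketched as follows. Everything here is hypothetical: the `embed` function is a random-projection stand-in for a learned visual encoder, and `VisualMemoryIndex` is a toy in-memory store, not Memories.ai's system.

```python
import numpy as np

DIM = 64  # embedding dimensionality (arbitrary for this sketch)

def embed(frame: np.ndarray) -> np.ndarray:
    """Stand-in encoder: project a flattened frame into DIM dimensions.

    A real system would use a trained visual model here; a fixed random
    projection is enough to illustrate the store-and-recall pattern.
    """
    rng = np.random.default_rng(0)  # fixed seed -> same projection every call
    proj = rng.standard_normal((frame.size, DIM))
    v = frame.ravel().astype(np.float64) @ proj
    return v / (np.linalg.norm(v) + 1e-12)  # unit-normalize for cosine similarity

class VisualMemoryIndex:
    """Minimal visual memory: store frame embeddings, recall by similarity."""

    def __init__(self):
        self.vectors = []  # unit-norm embeddings
        self.meta = []     # e.g. (clip_id, timestamp)

    def add(self, frame: np.ndarray, meta: tuple) -> None:
        self.vectors.append(embed(frame))
        self.meta.append(meta)

    def recall(self, query_frame: np.ndarray, k: int = 1):
        """Return the k stored memories most similar to the query frame."""
        q = embed(query_frame)
        sims = np.stack(self.vectors) @ q  # cosine similarity (unit vectors)
        top = np.argsort(sims)[::-1][:k]
        return [(self.meta[i], float(sims[i])) for i in top]

# Usage: index two tiny "frames", then recall with a near-duplicate query.
idx = VisualMemoryIndex()
frame_a = np.ones((8, 8))
frame_b = np.zeros((8, 8)); frame_b[0, 0] = 1.0
idx.add(frame_a, ("clip_a", 0.0))
idx.add(frame_b, ("clip_b", 0.0))
hit, score = idx.recall(frame_a + 0.01)[0]
print(hit)  # ('clip_a', 0.0) — the nearest stored memory
```

Production systems would replace the list scan with an approximate-nearest-neighbor index so recall stays fast across millions of hours of footage, but the embed-store-query loop is the same.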
Future Outlook
Memories.ai is already working with major wearable companies (whose identities remain undisclosed) and has secured a partnership with Qualcomm to run its models on Snapdragon processors. The company remains focused on the underlying model and infrastructure rather than becoming a hardware manufacturer.
“We are more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it’s probably just not now,” says Shen.
This suggests a long-term vision where visual memory becomes a foundational layer for broader AI applications in robotics and augmented reality. The company’s approach is less about immediate consumer products and more about building the core technology that will power the next generation of intelligent devices.
The development of AI visual memory is still in its early stages, but Memories.ai’s work marks a critical step towards machines that can truly “see” and learn from the physical world.