Artificial intelligence has made extraordinary strides in understanding text, generating images, and holding conversations. But one critical capability remains largely undeveloped: the ability for AI to remember what it sees in the physical world. That is exactly the problem Memories ai is setting out to solve, and its latest collaboration with Nvidia signals that the solution may be closer than we think.
The Problem: AI Can See, But It Cannot Remember
Today's AI systems are remarkably good at processing visual information in real time. Smart glasses can identify objects, robots can navigate environments, and cameras can detect faces. But once that moment passes, the visual data is essentially gone. There is no persistent memory layer that allows these systems to recall what they previously observed.
Shawn Shen, the founder of Memories ai, believes that AI will need to remember what it sees in order to succeed in the physical world. Memory for text has advanced significantly in recent years, with tools like ChatGPT, Gemini, and Grok all gaining the ability to remember past conversations. But text is far more structured and easier to index, and that kind of memory is of limited use to physical AI systems that interact with the world primarily through sight.
This gap is what makes Memories ai so compelling. The company is building the infrastructure that enables wearables, robots, and other physical AI devices to store, index, and recall visual memories much like how humans naturally remember places, faces, and events they have seen before.
The Nvidia Partnership
Memories ai announced a collaboration with semiconductor giant Nvidia at Nvidia's GTC conference on Monday. Through the partnership, Memories ai uses Nvidia's Cosmos Reason 2, a reasoning vision-language model, and Nvidia Metropolis, an application for video search and summarization, to continue developing its visual memory technology.
This collaboration with Nvidia is significant because it provides Memories ai with access to some of the most powerful AI infrastructure available today. Nvidia's tools are already widely used in autonomous vehicles, smart cities, and industrial automation, so integrating visual memory capabilities into this ecosystem could unlock entirely new use cases across multiple industries.
From Meta's Ray-Ban Glasses to a Standalone Company
The origin story of Memories ai is rooted in firsthand experience with one of the most talked-about consumer AI products. Shen and his co-founder and CTO Ben Zhou got the idea for the company while building the AI system behind Meta's Ray-Ban glasses. Working on the glasses got them thinking about how useful the technology could really be in daily life if users had no way to recall the video data their devices were recording.
They looked around to see if anyone was already building that type of visual memory solution for AI. When they could not find one, they decided to spin out of Meta and build it themselves.
The Technology Behind the Vision
Building a visual memory layer is no small task. Shen said the technology required two things: the infrastructure to embed and index video into a data format that can be stored and recalled, and the data needed to train the model.
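To make the embed-store-recall idea concrete, here is a deliberately simplified sketch of how such a pipeline can work in principle: clips are embedded into fixed-size vectors, indexed, and later recalled by similarity search. Everything here, including the class name and the averaging "embedder," is an illustrative assumption; Memories ai's actual LVMM architecture is not public.

```python
import numpy as np

class VisualMemory:
    """Toy visual memory layer: embed clips, index them, recall by similarity.

    Purely illustrative; a real system would use a learned vision encoder
    and an approximate nearest-neighbor index instead.
    """

    def __init__(self):
        self.vectors = []  # embedded memories
        self.labels = []   # metadata attached to each memory

    def embed(self, frames):
        # Stand-in embedder: average per-frame feature vectors and normalize.
        v = np.asarray(frames, dtype=float).mean(axis=0)
        return v / (np.linalg.norm(v) + 1e-9)

    def store(self, frames, label):
        # Index a new clip so it can be recalled later.
        self.vectors.append(self.embed(frames))
        self.labels.append(label)

    def recall(self, frames):
        # Nearest-neighbor search by cosine similarity over stored memories.
        q = self.embed(frames)
        sims = [float(q @ m) for m in self.vectors]
        return self.labels[int(np.argmax(sims))]


mem = VisualMemory()
mem.store([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]], "kitchen")
mem.store([[0.0, 1.0, 0.0]], "hallway")
print(mem.recall([[0.95, 0.05, 0.0]]))  # closest stored memory: "kitchen"
```

The hard parts in practice are exactly the two things Shen names: a compact embedding format that scales to continuous video, and enough real-world footage to train the encoder.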
The company launched its large visual memory model, or LVMM, in July 2025. For data collection, it created LUCI, a hardware device worn by data collectors that records video used to train the model. The team built its own device because off-the-shelf recorders prioritized high-definition, battery-hungry video formats that were poorly suited to the task.
The company has also released the second generation of its LVMM and signed a partnership with Qualcomm to run its models on Qualcomm's processors starting later this year.
Funding and Future Outlook
Memories ai was launched in 2024 and has raised $16 million thus far, through an $8 million seed round in July 2025 and an $8 million extension. The round was led by Susa Ventures and included Seedcamp, Fusion Fund, and Crane Venture Partners.
The company is also working with some of the large wearable companies already, though Shen declined to disclose which ones. While there is demand today, Shen sees even bigger opportunities ahead.
Shen said the company is focused on the model and the infrastructure for now; the wearables and robotics markets will come, he believes, just not yet.
Why This Matters for the Future of AI
As wearables become more mainstream and robotics expands into homes, warehouses, and hospitals, the need for persistent visual memory will become critical. A robot that cannot remember the layout of your home every time it reboots, or smart glasses that forget everything they recorded yesterday, will never deliver on the promise of truly intelligent physical AI.
Memories ai is betting that visual memory is the missing piece. With Nvidia's tools, Qualcomm's processors, and $16 million in funding, it is building the foundation for a future where AI does not just see the world but remembers it.