In-Person Event
Unifying & scaling AI infra with Nebius & Tensormesh.
Models are getting more powerful, and GPU infrastructure is moving fast. So the question is: can your stack keep up?
In production, the differentiator is how well you can:
- Deliver the right context at the right time
- Maintain low-latency AI experiences users trust
- Reduce redundant LLM calls and unnecessary infra cost
- Scale from prototype to production without breaking your stack
We see firsthand what it takes to power fast, accurate, and scalable AI apps — from context to caching, agent memory, and the real-time data coordination that happens behind the scenes.
So why not skip the conference food and join us for lunch? It's a chance to exchange lessons learned with other leaders navigating the same challenges — and to compare what's working, what's brittle, and what actually scales.
What to expect
- Informal conversation with leaders
- No slides, no product demos
- A trusted, peer-level environment
- Fully catered, sit-down lunch
- Great swag (quarter zip & personalized Stanley water bottle)
Speakers

Redis
Simba Khadder
Engineering, Context surfaces

Redis
Tyler Hutcherson
Head of Applied AI Engineering

Tensormesh
Nick Barcet
Head of GTM

Nebius
Raz Bacher
Senior Director, Global Lead, ISV
Request to join
Join us at NVIDIA GTC for a great sit-down lunch and even better conversations.
Get started with Redis today
Speak to a Redis expert and learn more about enterprise-grade Redis.