We built an Agent Skill so AI writes Redis code the way Redis experts would
It takes one command. Run it once, and every AI coding agent you use (Claude Code, Cursor, Codex, Copilot, Augment Code, whatever) gets opinionated, up-to-date Redis knowledge injected into its context whenever it's relevant.
Here's why we built it.
Everyone is becoming a coder
At the start of 2026, we published our predictions for the year. The third one was blunt: everyone will become a coder. The knowledge gap for building software is disappearing. If you can dream it, you can build it.
That prediction is already playing out, and faster than we expected.
Fortune reported that the head of Claude Code hasn't written a single line of code by hand in over two months. He shipped 22 PRs in one day, 27 the next, all 100% AI-generated. Anthropic says that, company-wide, the figure is between 70% and 90%. Meanwhile, 25% of Y Combinator's Winter 2025 founders relied heavily on AI to write their code, with 95% of it generated by LLMs. A study in Science found that roughly 29% of Python functions on GitHub in the US are now AI-written.
And the trajectory keeps steepening. METR's research shows that the length of tasks frontier models can complete with 50% reliability has been doubling roughly every seven months for the past six years, with recent data suggesting that pace may have accelerated to around four months. Opus 4.5 now clears 80% on SWE-bench Verified. A year ago, the best model hit 49%.
But what's really striking isn't the benchmarks. It's who's building now.
At a recent Redis hackathon, a lawyer competed solo. No engineering background, no team of developers behind him. He built a working tool that solved a real problem from his day-to-day practice. I've been involved in hackathons for over a decade, and I'd never seen that before. People who historically couldn't build products because they didn't know how to write code are now shipping them.
New devs have AI coding assistants that combine the energy of an eager intern with the knowledge of an experienced engineer. "Consumer devs," people who've never touched a codebase, are building real things because LLMs handle the implementation. You're really only limited by your imagination.
So the question becomes: when agents are writing this much code, how do you make sure they're building systems correctly?
What we're seeing at Redis
We see a lot of agent-generated code that integrates with Redis. The patterns are instructive.
Agents reach for patterns from 2021. LLMs are trained on historical data. Redis has evolved significantly: vector sets, JSON document support, the query engine, LangCache, Agent Memory Server. But the agent's mental model is often frozen somewhere around Redis 6. It will happily generate code using older command styles, miss newer data structures entirely, and skip performance optimizations that exist specifically because Redis shipped them.
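Here's what that gap looks like in practice. The sketch below uses redis-py against a server with the JSON data type (Redis 8 or Redis Stack); the keys and data are made up for illustration:

```python
import json
import redis

r = redis.Redis(decode_responses=True)

# The Redis-6-era habit: serialize the whole document into a string,
# then round-trip all of it for every partial update.
r.set("user:1", json.dumps({"name": "Ada", "plan": "pro", "visits": 41}))
user = json.loads(r.get("user:1"))
user["visits"] += 1
r.set("user:1", json.dumps(user))

# The current approach: the JSON data type supports path-level reads and
# atomic in-place updates, so there's no read-modify-write cycle.
r.json().set("user:2", "$", {"name": "Ada", "plan": "pro", "visits": 41})
r.json().numincrby("user:2", "$.visits", 1)  # atomic partial update
print(r.json().get("user:2", "$.plan"))      # read one field, not the blob
```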
Agents improvise architectures when proven ones exist. Ask an agent to build a rate limiter and it'll write something that works on localhost. It probably won't use a sliding window with sorted sets. It won't think about cache stampede protection. It won't know about pipelining commands for performance. You can slow the agent down, make it read through docs, and steer it toward better patterns. But that assumes you already know what the better patterns are. The whole point of agents is that they're supposed to bring that expertise to you.
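To illustrate, here's a minimal sketch of the pattern the skill steers toward: a sliding-window rate limiter built on a sorted set, with the commands pipelined into a single round trip. It uses redis-py; the function name, limits, and key scheme are illustrative, not the skill's literal output:

```python
import time
import uuid
import redis

r = redis.Redis()

def allow_request(client_id: str, limit: int = 100, window_s: int = 60) -> bool:
    """Sliding-window rate limiter: a sorted set scored by request time."""
    key = f"ratelimit:{client_id}"
    now = time.time()
    pipe = r.pipeline()  # four commands, one round trip
    pipe.zremrangebyscore(key, 0, now - window_s)  # evict entries outside the window
    pipe.zadd(key, {uuid.uuid4().hex: now})        # unique member per request
    pipe.zcard(key)                                # count requests left in the window
    pipe.expire(key, window_s)                     # idle keys clean themselves up
    _, _, in_window, _ = pipe.execute()
    return in_window <= limit
```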
Agents don't surface what they don't know. The agent won't flag hot key risks. It'll use KEYS * in a code path that eventually runs against a production instance with millions of keys (that's a blocking call). It'll store large JSON blobs in string values when hash fields would be both faster and more memory-efficient. These aren't obscure edge cases. They're the kinds of things that show up in production incidents.
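Both of those fixes are small once you know them. A brief redis-py sketch (key names illustrative): SCAN in place of KEYS, and hash fields in place of a large serialized string.

```python
import redis

r = redis.Redis(decode_responses=True)

# KEYS blocks the server while it walks the entire keyspace. SCAN iterates
# incrementally; redis-py wraps the cursor loop as scan_iter.
for key in r.scan_iter(match="session:*", count=500):
    r.expire(key, 3600)

# A flat document as hash fields rather than one big serialized string:
# each read or write touches only the fields involved.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})
plan = r.hget("user:42", "plan")  # fetch one field, not the whole value
```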
The core challenge is that agents are getting better at stringing together longer sequences of actions, but they're often doing it with stale knowledge of the libraries and services they're calling. One developer described the experience well: they asked their AI assistant to set up an authentication flow, and got back a mess of deprecated APIs and outdated patterns. The AI generated based on its training data and had no way of knowing things had changed.
Agent Skills are how you close that gap.
What an Agent Skill actually does
Agent Skills are a straightforward concept. They're markdown files that encode procedural knowledge: the kind of domain expertise that sits in a senior engineer's head but doesn't make it into the model's training data. When the agent encounters a relevant task, it loads the skill and applies its patterns.
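In practice, a skill is little more than a SKILL.md file whose frontmatter tells the agent when to load it and whose body carries the patterns. The skeleton below follows that convention; the name and body here are an abridged illustration, not the actual Redis skill:

```markdown
---
name: redis-patterns
description: Use when writing or reviewing code that talks to Redis.
---

# Redis patterns

- Prefer SCAN over KEYS in any code path that can reach production.
- Use sorted sets with a pipeline for sliding-window rate limiting.
- Store structured documents as JSON or hash fields, not large serialized strings.
```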
The Redis Agent Skill gives agents:
- Current, correct patterns for common Redis use cases: caching, rate limiting, session management, vector search, semantic caching, agent memory, pub/sub, streams
- The right data structures for the job: when to use hashes vs. JSON vs. sorted sets vs. vector sets, and why
- Anti-pattern guardrails: no KEYS in loops, no unbounded key growth, no large values that amplify every operation
- Production-aware defaults: connection pooling, pipelining, cluster compatibility, error handling patterns that don't silently swallow failures (sketched below)
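For a rough idea of what those defaults look like in code, here's a redis-py sketch; the host, pool size, and fallback behavior are illustrative choices, not prescriptions:

```python
import logging
import redis

log = logging.getLogger(__name__)

# One shared pool per process: connections get reused instead of opened
# per request, and max_connections caps the load on the server.
pool = redis.ConnectionPool(
    host="localhost", port=6379, max_connections=50, decode_responses=True
)
r = redis.Redis(connection_pool=pool)

def get_cached(key: str) -> str | None:
    try:
        return r.get(key)
    except redis.ConnectionError:
        # Degrade to a cache miss, but never swallow the failure silently.
        log.warning("Redis unavailable; treating %s as a cache miss", key)
        return None
```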
Anthropic's own team put it well: "MCP provides the tools; Skills teach how to use them." An MCP server can give an agent the ability to talk to Redis. A Skill teaches it how Redis should be used.
Why this matters now
Agents are fundamentally limited by context. They can only act on what they know, and what they know is bound by their training data and whatever you put in front of them. Agent Skills are a way to install context onto agents: current, structured, domain-specific knowledge that loads exactly when it's needed.
Our 2026 predictions called out that AI apps will backfire without context engines, and that the frameworks with the most robust ecosystems will win. Agent Skills sit right at that intersection. As agents proliferate across every IDE, CLI, and CI pipeline, the ones that best integrate into open ecosystems like the Agent Skills spec will be the ones that win out. Skills follow an open standard. They load on demand, keeping context windows lean. They're version-controlled and shareable across teams. And they compose: install the Redis skill alongside your framework skill and your infrastructure skill, and the agent draws on all of them.
Install it once. It works across agents. The next time you ask your AI to add Redis to your project, it'll use Redis the way we'd use Redis.
Get started with Redis today
Speak to a Redis expert and learn more about enterprise-grade Redis.


