
Addressing the AI Awareness Problem - SurvivalIndex

230 new repositories per minute. That's what GitHub reported in its 2025 Octoverse. 121 million new repos in a single year. A billion commits. 36 million new developers. But how is AI choosing the open source tools in your code? SurvivalIndex measures it.

March 7, 2026 · 4 min read
AI · AI Coding Agent · AI Software Development

GitHub now hosts 630 million repositories. AI-related repos crossed 4.3 million, up 178% year-over-year. Six of the ten fastest-growing open source projects in 2025 were AI infrastructure.

npm processed 4.5 trillion requests in 2024 (70% YoY growth). PyPI saw an 87% surge, now hosting over 500,000 packages — up from 60,000 a decade ago.

That’s not a growth curve. That’s a step function.

At the center of it: AI coding agents making thousands of architectural decisions daily on behalf of developers who said “just build it.” Steve Yegge called this “Software Survival 3.0” — software survives if it saves cognition. He’s right about the theory.

Nobody is measuring it. We are.

The One Question That Matters

Before you can measure whether a tool is good, you have to answer a simpler question: does the agent even know it exists?

An agent can't pick a tool it has never heard of. The best tools in the ecosystem might as well not exist.

This isn’t a quality problem. It’s an awareness problem. And awareness is the one thing you can actually measure at scale, right now.

Introducing the Agent Awareness Score

The Agent Awareness Score (AAS) is the first metric we're shipping at SurvivalIndex. It answers one question per tool, per category: what percentage of AI coding agents know you exist, unprompted?

AAS isn’t a single number from a single test. It measures three layers of awareness:

Unprompted selection — we describe a problem without naming any tool. Does the agent pick you? This is the strongest signal. “Add fast search with typo tolerance to this app” — does it reach for Meilisearch or build from scratch?

Contextual recall — we mention adjacent technologies. Does the agent connect the dots? “We’re using Postgres and need search without Elasticsearch’s overhead” — does it know you’re an option?

Consideration set — we ask the agent to weigh options. Are you even in the conversation? “What are my options for search in this stack?” — do you appear at all?

These three signals get weighted (50/30/20), computed per model, then combined via geometric mean across agents. The geometric mean is deliberate — a tool that one model is obsessed with but others ignore scores poorly. We want tools that survive across the whole agent ecosystem, not just Claude or just Copilot.

The result: a 0–100 score. Elasticsearch might land at 72. Meilisearch at 34. Custom/DIY at 15.
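For the curious, here's a minimal sketch of that computation. The layer weights (50/30/20) and the geometric mean come straight from the description above; the function names, data shapes, and per-model numbers are illustrative, not our actual pipeline code.

```python
import math

# Layer weights from the description above: unprompted selection,
# contextual recall, consideration set.
WEIGHTS = {"unprompted": 0.5, "contextual": 0.3, "consideration": 0.2}

def per_model_score(layers: dict[str, float]) -> float:
    """Weighted sum of the three awareness layers for one model.
    Each layer value is the fraction of runs (0.0-1.0) in which
    the tool surfaced at that layer."""
    return sum(WEIGHTS[layer] * value for layer, value in layers.items())

def agent_awareness_score(model_layer_scores: list[dict[str, float]]) -> float:
    """Combine per-model scores via geometric mean, scaled to 0-100.
    The geometric mean punishes tools only one model knows: a single
    near-zero model score drags the whole composite down."""
    scores = [per_model_score(layers) for layers in model_layer_scores]
    geo = math.prod(scores) ** (1 / len(scores))
    return 100 * geo

# Hypothetical layer scores for one tool across four agents:
models = [
    {"unprompted": 0.60, "contextual": 0.80, "consideration": 0.90},
    {"unprompted": 0.40, "contextual": 0.70, "consideration": 0.85},
    {"unprompted": 0.10, "contextual": 0.30, "consideration": 0.50},
    {"unprompted": 0.55, "contextual": 0.75, "consideration": 0.80},
]
print(f"AAS ≈ {agent_awareness_score(models):.0f}")
```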

The Pipeline: 7,200 Prompts Per Cycle

Every evaluation cycle runs 7,200 prompts across the full matrix:

25 categories × 3 prompt types × 2 phrasings × 4 repo types × 4 agents × 3 runs for variance.

Each prompt hits a clean-state repo. Each response gets LLM-extracted for primary pick, consideration set, and reasoning. Each tool gets scored per-model, then composited into AAS.
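In pseudocode, enumerating that matrix looks something like this. Only the dimensions are real; the category, repo, and agent names are placeholders:

```python
from itertools import product

# Dimensions from the matrix above; the concrete values are placeholders.
CATEGORIES   = [f"category_{i}" for i in range(25)]      # e.g. "search", "auth", ...
PROMPT_TYPES = ["unprompted", "contextual", "consideration"]
PHRASINGS    = ["phrasing_a", "phrasing_b"]
REPO_TYPES   = ["repo_a", "repo_b", "repo_c", "repo_d"]
AGENTS       = ["agent_1", "agent_2", "agent_3", "agent_4"]
RUNS         = range(3)                                   # repeat runs for variance

matrix = list(product(CATEGORIES, PROMPT_TYPES, PHRASINGS, REPO_TYPES, AGENTS, RUNS))
assert len(matrix) == 7_200  # 25 * 3 * 2 * 4 * 4 * 3

for category, prompt_type, phrasing, repo, agent, run in matrix:
    pass  # dispatch the prompt to a clean-state repo, collect the response
```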

For OSS maintainers, the output is a dashboard that breaks your score apart: which models know you, which don’t, which prompt types trigger recall, and exactly where visibility is leaking. When Meilisearch sees it’s 48% on Claude Opus but 18% on Aider, the action is clear: improve presence in Aider’s training ecosystem.
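The underlying data is roughly a per-tool, per-model map of layer scores. The payload shape here is an assumption; the 48%/18% gap mirrors the example above:

```python
# Hypothetical dashboard payload for one tool (shape is an assumption).
breakdown = {
    "meilisearch": {
        "claude-opus": {"unprompted": 0.48, "contextual": 0.71, "consideration": 0.88},
        "aider":       {"unprompted": 0.18, "contextual": 0.40, "consideration": 0.62},
    },
}
# The gap between models is where visibility is leaking:
# strong recall on Claude Opus, weak on Aider.
```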

The Feedback Loop

Here's why this matters now:

Agent picks tool X → more projects use tool X → training data reflects tool X → agent picks tool X more confidently.

The rich get richer. The invisible stay invisible. Your competitor isn't another tool — it's whether the agent defaults to you.

AAS makes this cycle visible. Track it weekly. Watch which agents are picking you up. Watch which ones are dropping you. Act before the feedback loop locks you out.
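Weekly tracking can be as simple as diffing per-agent scores and alerting on drops. An illustrative sketch, with made-up numbers and an arbitrary threshold:

```python
# Hypothetical weekly AAS history per agent (data shape is an assumption).
history = {
    "agent_1": [48, 47, 45, 41],   # trending down: act before lock-out
    "agent_2": [18, 19, 22, 25],   # trending up: this agent is picking you up
}

ALERT_DROP = 3  # points per week; illustrative threshold

for agent, scores in history.items():
    delta = scores[-1] - scores[-2]
    if delta <= -ALERT_DROP:
        print(f"{agent}: dropped {-delta} points week-over-week; investigate")
```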

What's Next

AAS is the foundation. Once awareness data is flowing, we layer on the rest:

Friction — when agents pick your tool, does the implementation succeed? How many retries? How often do they abandon?

Savings — does using your tool actually save tokens versus building custom? What’s the counterfactual cost?

Survival ratio — the full formula combining awareness, friction, savings, and expert judgment into a single fitness score.
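We haven't published the survival formula yet, so treat this as a shape, not a spec. One plausible form is a weighted combination of the four components:

```python
def survival_ratio(awareness, friction, savings, expert,
                   weights=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative only: the real formula is unpublished.
    All inputs normalized to [0, 1]; friction counts against a tool,
    so its complement enters the score."""
    components = (awareness, 1.0 - friction, savings, expert)
    return sum(w * c for w, c in zip(weights, components))
```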

But none of that matters if agents don’t know you exist. Awareness first. Everything else is downstream.

→ survivalindex.org
