Building in public.
Starting from zero.

Geode applies its own GEO methodology to its own portfolio. This log documents every step: what we measured, what we built, what changed, and what didn't. Real data, attributed, dated, verifiable.

Entry 001 | Kyle Vines | 2026-03-14

Starting from zero: our portfolio citation baseline

Before building citation surfaces, we measured where we actually stand. We queried five AI platforms with 16 questions that a potential buyer in each of our domains would realistically ask. The results are the starting point for everything that follows.

Three brands. Sixteen queries. Five platforms (ChatGPT, Perplexity, Google AI Overviews, Microsoft Copilot, Gemini). Here is what AI systems currently say about us:

The numbers

Brand     Queries  Cited  Best Position                    Primary Blocker
Geode     6        0      N/A                              10+ agencies already in the space
echology  5        1      #1 (GitHub, branded query only)  "Document intelligence" owned by Azure
AECai     5        2      #4 (branded query)               Brand collision with aecai.es

What we looked for

Each query was chosen to reflect what a real buyer would ask an AI assistant when evaluating vendors in our domains. We searched for:

  • Geode: "what is generative engine optimization," "GEO services companies," "AI citation optimization," "how to get cited by AI," "generative engine optimization agency," "who does generative engine optimization"
  • echology: "echology document intelligence," "deterministic text classification python," "decompose python text classification," "AI document processing pipeline," "document intelligence platform"
  • AECai: "AI for AEC industry," "construction document intelligence," "AI for engineering documents," "AEC document processing AI," "aecai"

What we found

Zero third-party mentions for any of our three brands. AI systems cite content that other sites reference. Wikipedia articles, industry listicles, comparison pages, and expert roundups drive AI citations. None of our brands appear in any of these contexts. This is the single biggest gap.

Geode has the most open field. The GEO category is new. Competitors are positioning but nobody dominates. First Page Sage claims to have "pioneered GEO in 2023." Frase.io brands as "Rank on Google. Get Cited by AI." The space is crowded with language but thin on demonstrated methodology.

echology faces the hardest positioning challenge. "Document intelligence" returns four Azure results before anything else. "decompose" is interpreted as a verb, not a package name. We need query targets that hyperscalers don't own.

AECai has the most tractable path. Niche market, moderate competition, already indexing for two queries. The branded term is contested by a Spanish beer industry association (aecai.es), but category queries like "AEC document processing AI" are winnable. Direct competitors: Document Crunch, BuildPulse AI, Mantis Builder, Sonar Labs.

How we measured

Manual queries across ChatGPT, Perplexity, Google AI Overviews, Microsoft Copilot, and Gemini. For each query, we recorded: whether our brand was cited, position in the response, exact snippet, source URL if attributed, and which competitors appeared in the same answer.
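The per-query fields above map naturally onto a flat record. A minimal sketch, assuming a Python workflow; the class and field names are illustrative, not an actual Geode schema, and the sample values are drawn from the table above:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CitationRecord:
    """One platform's response to one audit query."""
    brand: str
    query: str
    platform: str                     # e.g. "ChatGPT", "Perplexity"
    cited: bool
    position: Optional[int] = None    # rank in the response, if cited
    snippet: Optional[str] = None     # exact text mentioning the brand
    source_url: Optional[str] = None  # URL the AI attributed, if any
    competitors: list[str] = field(default_factory=list)

# Illustrative records based on the baseline numbers reported above.
records = [
    CitationRecord("AECai", "aecai", "Perplexity", True, position=4),
    CitationRecord("Geode", "GEO services companies", "Gemini", False,
                   competitors=["First Page Sage"]),
]

def presence(records: list[CitationRecord], brand: str) -> tuple[int, int]:
    """Return (queries with at least one citation, queries run) for a brand."""
    cited = {r.query for r in records if r.brand == brand and r.cited}
    queried = {r.query for r in records if r.brand == brand}
    return len(cited), len(queried)
```

With records like these, `presence` reproduces the "queries with presence" fractions reported in the tables, which makes the re-audit a pure diff against the baseline.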

This is our starting point. We will re-run these same queries after publishing citation surfaces and report what changed. If nothing changes, we will report that too.

What happens next

We have published answer pages across all three brand sites targeting the highest-value queries from this audit. Each page is structured with Schema.org markup, FAQ format, organizational attribution, and cross-links to related content. These are the citation surfaces.
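For the Schema.org piece, the FAQ format is typically embedded as JSON-LD. A minimal sketch generated in Python; the question and answer text are assumed examples keyed to the audit queries above, not the published pages themselves:

```python
import json

# Schema.org FAQPage markup as a plain dict, serialized to JSON-LD.
# The Q&A content is an illustrative example, not an actual Geode page.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is generative engine optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Generative engine optimization (GEO) is the practice "
                        "of structuring content so that AI systems cite it "
                        "in their answers.",
            },
        }
    ],
}

# Embedded in the page head as:
#   <script type="application/ld+json"> ... </script>
snippet = json.dumps(faq_jsonld, indent=2)
```

Each answer page would carry one such block per question, plus the organizational attribution and cross-links mentioned above.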

The next entry in this log will report the re-audit: same queries, same platforms, measured against this baseline. The methodology either works or it doesn't, and we will show which.

Methodology note: All queries were run between 2026-03-13 and 2026-03-14. Results reflect AI model states at that time. AI citation behavior changes as models update their training data and retrieval sources. This baseline is a snapshot, not a permanent state.

The targets

We set these targets on day one. Progress will be measured against actual data.

Metric                          Baseline  30 Days  90 Days
Geode queries with presence     0/6       3/6      5/6
echology queries with presence  1/5       3/5      4/5
AECai queries with presence     2/5       4/5      5/5
Citation surfaces published     12        18       30+