00 — Overview

A continuous audit layer for the marketing stack.

A synthetic stack inventory lives in a Google Sheet. On demand, the app reads the current state and sends it to Claude with a structured prompt; Claude returns a stack health score, an AI adoption gap analysis, integration silos, redundancy flags, renewal alerts, and ranked recommendations as structured JSON. The dashboard renders that structure.

Sheets → Sheets API → Claude Haiku 4.5 → dashboard
What this is and why

A MarTech stack audit is usually a spreadsheet exercise done once a year, if that. This is a small experiment in making it continuous and less painful. A synthetic stack inventory — tools, costs, integrations, renewal dates, AI feature status — lives in a Google Sheet. When you click Run Analysis, the app reads the current state and passes it to Claude with a structured prompt. Claude returns a stack health score, an AI adoption gap analysis, integration silos, redundancy flags, renewal alerts, and ranked recommendations as structured JSON. The frontend renders that structure.
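The contract between prompt and dashboard is the JSON shape Claude is asked to return. A minimal sketch of that contract, with illustrative field names (not necessarily the app's actual schema):

```python
import json

# Hypothetical response shape -- field names are illustrative assumptions.
# The structured prompt would instruct Claude to return exactly this JSON.
EXPECTED_KEYS = {
    "stack_health_score",   # 0-100 composite
    "ai_adoption_gaps",     # per-tool unused AI features + activation effort
    "integration_silos",    # tools with missing or degraded connections
    "redundancy_flags",     # overlapping tools in the same category
    "renewal_alerts",       # contracts inside an alert window
    "recommendations",      # priority-ranked actions
}

def validate_analysis(raw: str) -> dict:
    """Parse Claude's reply and confirm every dashboard section is present."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"analysis missing sections: {sorted(missing)}")
    return data

sample = json.dumps({k: [] for k in EXPECTED_KEYS} | {"stack_health_score": 72})
print(validate_analysis(sample)["stack_health_score"])  # 72
```

Validating the shape before rendering means a malformed reply fails loudly instead of leaving dashboard panels half-empty.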

The interesting part is the AI adoption gap section. Most enterprise MarTech stacks include AI features that nobody turned on. This surfaces them by tool, describes what they'd unlock, and estimates the effort to activate. The same analysis applies to integration coverage — tools that should talk to each other but don't, manual workarounds standing in for native connections, data gaps that compound over time.

What's here

  • AI adoption gap: surfaces unused AI features by tool with activation effort estimate.
  • Integration health: identifies silos and missing connections with business impact.
  • Renewal intelligence: upcoming renewals with renew, renegotiate, or sunset recommendations.
  • Stack governance: redundancy detection, utilization scores, ownership gaps.
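The redundancy check in the last bullet can be sketched as a simple heuristic: flag any category with overlapping tools where the weakest one is under-utilized. Tool names and the 0.4 threshold below are made-up examples, not the app's actual logic.

```python
from collections import defaultdict

def redundancy_flags(tools):
    """Flag categories with two or more tools where the least-used one falls
    below an (assumed) utilization threshold -- a candidate to sunset."""
    by_category = defaultdict(list)
    for t in tools:
        by_category[t["category"]].append(t)
    flags = []
    for category, group in by_category.items():
        if len(group) < 2:
            continue
        weakest = min(group, key=lambda t: t["utilization"])
        if weakest["utilization"] < 0.4:  # illustrative threshold
            flags.append({
                "category": category,
                "candidate_to_sunset": weakest["tool"],
                "overlaps_with": [t["tool"] for t in group if t is not weakest],
            })
    return flags

inventory = [
    {"tool": "Braze", "category": "messaging", "utilization": 0.85},
    {"tool": "Iterable", "category": "messaging", "utilization": 0.22},
    {"tool": "Looker", "category": "analytics", "utilization": 0.70},
]
print(redundancy_flags(inventory))
```

In the real app this reasoning lives in the prompt rather than in code, but the heuristic shows the kind of cross-referencing the model is asked to do.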

Why Claude

Claude handles structured-output tasks with tight JSON schemas cleanly. For this use case — domain-specific reasoning across a cross-referenced dataset — Haiku is the right choice. Fast, cheap, accurate on structured output. A production version would route richer synthesis tasks to a larger model.
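Even with a tight schema in the prompt, a defensive parse is cheap insurance against the model wrapping its JSON in a markdown fence. A minimal sketch (an assumption about the app's parsing, not its actual code):

```python
import json
import re

def parse_model_json(reply: str) -> dict:
    """Extract the JSON object from a model reply, tolerating an optional
    ```json code fence around the payload."""
    text = reply.strip()
    fenced = re.match(r"```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    return json.loads(text)

print(parse_model_json('```json\n{"stack_health_score": 72}\n```'))
```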

What this would look like at scale

The source becomes a live integration with the actual vendor management system or a Workfront project registry. The schedule runs weekly. Claude outputs write to a Slack channel, a read-only Confluence page, or a CMO dashboard. Renewal alerts trigger 90 and 30 days out. The AI adoption gap closes over time as recommendations get actioned and the inventory updates.
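The 90/30-day alert logic is simple enough to sketch directly. Field names and tool names are illustrative; the windows come from the paragraph above.

```python
from datetime import date

def renewal_alerts(tools, today, windows=(90, 30)):
    """Return (tool, days_left, window) for each contract whose renewal date
    falls inside an alert window, reporting only the tightest window."""
    alerts = []
    for t in tools:
        days_left = (t["renewal_date"] - today).days
        for window in sorted(windows):
            if 0 <= days_left <= window:
                alerts.append((t["tool"], days_left, window))
                break
    return alerts

today = date(2025, 1, 15)
inventory = [
    {"tool": "Marketo", "renewal_date": date(2025, 2, 1)},   # 17 days out
    {"tool": "6sense",  "renewal_date": date(2025, 3, 30)},  # 74 days out
    {"tool": "Drift",   "renewal_date": date(2025, 9, 1)},   # outside both
]
print(renewal_alerts(inventory, today))  # [('Marketo', 17, 30), ('6sense', 74, 90)]
```

On a weekly schedule, each run would diff this list against the previous one and post only new alerts to Slack.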

Stack

  • Source: Google Sheets.
  • Data access: Google Sheets API.
  • Analysis: Claude Haiku 4.5 via Anthropic API.
  • Frontend: HTML + Chart.js.
  • Hosting: Render.
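The Sheets API returns sheet contents as a list of rows under a `values` key; turning that into inventory records is a one-liner. Column names below are assumptions about the synthetic sheet's layout, not its actual header row.

```python
def rows_to_inventory(values):
    """Convert the Sheets API 'values' payload (first row = header) into a
    list of per-tool dicts keyed by column name."""
    header, *rows = values
    return [dict(zip(header, row)) for row in rows]

# Shape of the "values" field in a spreadsheets.values.get response:
payload = [
    ["tool", "category", "annual_cost", "utilization"],
    ["Marketo", "automation", "48000", "0.61"],
]
print(rows_to_inventory(payload))
```

Note that the Sheets API returns cell values as strings, so numeric columns like `annual_cost` need an explicit cast before any arithmetic.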

Total build cost: $0 (all free tiers). Monthly operating cost at light traffic: under $1 in Claude API usage.