Observatory

A window into the human-AI collaboration behind Austin AI Events

πŸ’‘

The Big Picture

This is a fully autonomous system. It discovers events, monitors its own health, and fixes its own code β€” every day, without human intervention. The Observatory exists to give transparency into this process so people can see how an autonomous AI system actually works. This isn't a black box.

How It Works

Three autonomous loops work together β€” no human runs this system.

LOOP 1

Discovery Pipeline

β€” runs daily at midnight
πŸ”
Search

Scans 8+ sources and the web for Austin AI events

πŸ”€
Deduplicate

Catches the same event listed on different platforms

βœ…
Validate

AI confirms: real event? In Austin? AI-related?

🏷️
Classify

Tags audience, skill level, and free/paid

πŸ“…
Publish

Approved events appear on the calendar
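The five stages above can be sketched as a small pipeline. This is an illustrative sketch only, with hypothetical stage logic and field names β€” not the agent's actual code:

```javascript
// Hypothetical sketch of the daily discovery pipeline.
// Field names (title, date, city, topics, price) are illustrative.
function runDiscoveryPipeline(rawEvents) {
  // Deduplicate: the same event often appears on several platforms,
  // so key on a normalized (title, date) pair and keep the first copy.
  const seen = new Map();
  for (const ev of rawEvents) {
    const key = `${ev.title.toLowerCase().trim()}|${ev.date}`;
    if (!seen.has(key)) seen.set(key, ev);
  }

  // Validate: real event? In Austin? AI-related?
  const validated = [...seen.values()].filter(
    (ev) => ev.isReal && ev.city === 'Austin' && ev.topics.includes('AI')
  );

  // Classify: tag audience, skill level, and free/paid before publishing.
  return validated.map((ev) => ({
    ...ev,
    audience: ev.audience ?? 'general',
    skillLevel: ev.skillLevel ?? 'all',
    isFree: ev.price === 0,
  }));
}
```

In the real system the validate and classify steps are AI calls rather than field checks; the sketch only shows the shape of the flow.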

run complete β€” monitor evaluates
LOOP 2

Self-Monitoring

β€” evaluates every run
πŸ“Š Gather Metrics

Collects data on scraper health, error rates, source performance, and calendar coverage

🧠 Opus Evaluates

The most powerful Claude model reviews everything, assigns a health grade, and identifies issues

⚑ Take Action

Creates search queries, manages sources, and escalates code issues for the repair agent
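The "take action" step can be pictured as mapping metrics to prioritized action items. The thresholds and action names below are assumptions for illustration, not the monitor's real rules:

```javascript
// Hypothetical sketch of the monitoring loop's action-planning step.
// Thresholds and action names are illustrative assumptions.
const RANK = { high: 0, medium: 1, low: 2 };

function planActions(metrics) {
  const actions = [];
  if (metrics.errorRate > 0.10) {
    // Code-level problems get escalated to the repair agent.
    actions.push({ type: 'escalate-to-repair-agent', priority: 'high' });
  }
  if (metrics.contributingSources < 3) {
    // Too few productive sources: create new search queries.
    actions.push({ type: 'add-search-queries', priority: 'medium' });
  }
  if (metrics.eventsLast7d === 0) {
    actions.push({ type: 'broaden-discovery', priority: 'low' });
  }
  // Highest-priority items first, so the repair agent picks them up first.
  return actions.sort((a, b) => RANK[a.priority] - RANK[b.priority]);
}
```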

issues found β€” repair agent activates
LOOP 3

Self-Healing

β€” runs daily, 2 hours after discovery
πŸ“‹ Read Issues

Picks up the highest-priority action item from the monitor

πŸ”§ Fix Code

Reads the codebase, understands the bug, and writes a fix

πŸ§ͺ Test

Runs the test suite β€” only pushes if all tests pass

πŸš€ Deploy

Pushes the fix to production β€” next run uses the improved code
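The key safety property of this loop is the gate between "fix" and "deploy." A minimal sketch of that control flow, with `fixCode`, `runTests`, and `deploy` as stand-ins for the agent's real tooling:

```javascript
// Hypothetical control flow for the self-healing loop.
// fixCode, runTests, and deploy are injected stand-ins, not real APIs.
const PRIORITY = { high: 0, medium: 1, low: 2 };

function selfHeal(issues, { fixCode, runTests, deploy }) {
  if (issues.length === 0) return { status: 'no-issues' };

  // Pick up the highest-priority action item from the monitor.
  const issue = [...issues].sort(
    (a, b) => PRIORITY[a.priority] - PRIORITY[b.priority]
  )[0];

  const patch = fixCode(issue);

  // Only push if the full test suite passes.
  if (!runTests(patch)) return { status: 'tests-failed', issue };

  deploy(patch);
  return { status: 'deployed', issue };
}
```

The design choice worth noting: a failed test run leaves production untouched, so the worst case of a bad automated fix is "no change," not a broken deploy.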

Cycle repeats daily β€” the system continuously improves itself
πŸ€–

Multi-Model Architecture

Three Claude AI models split the work based on what each task needs β€” like having a junior analyst, a senior reviewer, and a strategic director on the same team.

Haiku (Speed)

Handles 80% of decisions β€” validation, classification, dedup

Sonnet (Balance)

Evaluates new sources and extracts event details

Opus (Strategy)

The system brain β€” monitors health and drives improvements
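The routing described above amounts to a task-to-model lookup. The task names and the exact mapping below are illustrative assumptions; only the model tiers come from the text:

```javascript
// Illustrative task-to-model routing table; task names are assumptions,
// not the agent's actual task identifiers.
const MODEL_FOR_TASK = {
  validate: 'haiku',            // high-volume yes/no decisions
  classify: 'haiku',
  deduplicate: 'haiku',
  'evaluate-source': 'sonnet',  // needs more judgment
  'extract-details': 'sonnet',
  'health-review': 'opus',      // strategic, once per cycle
};

function modelForTask(task) {
  // Default to the cheapest model: unknown work should not
  // silently burn the expensive tier.
  return MODEL_FOR_TASK[task] ?? 'haiku';
}
```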

πŸ’¬

Community Input

Anyone can submit an event the system missed using the β€œMissing an event?” button on the calendar. The agent scrapes the submitted URL, validates it, and adds it to the calendar β€” all in the same daily run. It also learns from each submission, adding new sources and search strategies to find similar events in the future.
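The submission flow described above β€” scrape, validate, publish, then learn the source β€” can be sketched as follows. All helper names are hypothetical stand-ins, and the real flow is asynchronous:

```javascript
// Hypothetical sketch of the community-submission flow.
// scrape, validate, publish, and knownSources are injected stand-ins.
function handleSubmission(url, { scrape, validate, publish, knownSources }) {
  const event = scrape(url);
  if (!validate(event)) return { accepted: false };

  // Valid submissions land on the calendar in the same daily run.
  publish(event);

  // Learn from the submission: remember the host so similar
  // events are discovered automatically in future runs.
  const origin = new URL(url).origin;
  if (!knownSources.includes(origin)) knownSources.push(origin);

  return { accepted: true };
}
```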

πŸ€–

Agent Performance

What the agent is doing autonomously

Events Added (Last 30 Days)

Recent Activity

πŸ”

Under the Hood

How the agent thinks, decides, and sometimes fails

🩺

Health Report

Automated self-evaluation of system effectiveness

Grade | Scraper Health | Sources          | Error Rate | Activity
A     | 80%+           | 4+ contributing  | <5%        | Events added in last 7d
B     | 60-79%         | 3+ contributing  | <10%       | Active discovery
C     | 40-59%         | 2-3 contributing | >10%       | Some source issues
D     | <40%           | <2 contributing  | High       | Multiple broken scrapers
F     | β€”              | β€”                | β€”          | System not running

Updated 2026-03-29: Grades now measure infrastructure health (what the agent controls), not event count or empty days (which reflect community activity). The agent still actively maximizes calendar coverage as a separate mission.
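Applied as code, the published thresholds might look like the sketch below. How the real grader breaks ties between columns is not stated, so the cascade order here is an assumption:

```javascript
// A sketch applying the published grade thresholds.
// The cascade order (check A, then B, then C) is an assumption;
// the table does not specify tie-breaking between columns.
function healthGrade({ running, scraperHealth, contributingSources, errorRate }) {
  if (!running) return 'F';
  if (scraperHealth >= 0.80 && contributingSources >= 4 && errorRate < 0.05) return 'A';
  if (scraperHealth >= 0.60 && contributingSources >= 3 && errorRate < 0.10) return 'B';
  if (scraperHealth >= 0.40 && contributingSources >= 2) return 'C';
  return 'D';
}
```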

🀝

Human Stewardship

How humans guide the agent's growth using Claude Code

🧠 15 Learning Β· ⚑ 13 Optimization Β· ✨ 18 New Capability Β· πŸ—οΈ 6 Foundation
⚑
❌ Problem Identified

User flagged a duplicate pair in the calendar β€” "AI Lab (Austin)" May 6 appearing as two rows. AICamp was the canonical source (the user confirmed: AICamp coordinator drives participation through multiple Meetup groups) but its row had the WRONG title ("Giving AI Agents Real Memory" vs current page title), WRONG time (17:30 UTC vs correct 22:30 UTC), NULL venue_name, generic "Austin, TX" address, and no description β€” while the Meetup-sourced duplicate had the correct time, the specific venue "Capital Factory", and a better description. User noted: "All of the things that [the duplicate] does better are available in the AICamp webpage, so not sure why it missed it."

πŸ› οΈAction Taken

Root-cause investigation on the live AICamp event page surfaced three compounding bugs in agent/src/sources/aicamp.js fetchEventDetail():

1. The title selector looked at h1/h2, but AICamp detail pages use h4. The selector returned an empty string, the "if (title)" null-guard fired, and fetchEventDetail returned null for EVERY AICamp event.
2. When fetchEventDetail returned null, the caller fell back to listing-only data, which had no description, no venue, and a hardcoded address of "Austin, TX".
3. Even if the title lookup had worked, the fallback description used <meta name="description">, which is AICamp's SITE-WIDE generic blurb, not the event-specific content.

Rewrite of fetchEventDetail: title via meta[property="og:title"] (reliable primary) with h4 / h1 / h2 fallbacks; description by gathering the first substantive <p> tags inside <div class="left-contents"> while filtering out metadata sections (Venue, Speaker, Agenda, Prerequisite); venue/address by targeting the explicit <p>Venue:...</p> block and splitting the address line into venue_name plus street address; the hardcoded "Austin, TX" address removed. Also fixed the duplicate row: updated the canonical aicamp row via SQL with the now-correct scraper output (title, description, start/end time, venue, address, image) and soft-deleted the Meetup duplicate via deleted_at with merged_into_id pointing at the canonical row.

βœ… Result

The AICamp scraper now returns complete event data, verified against the live page: title "AI Lab (Austin) - Building AI Agents with Memory", start_time 22:30 UTC (5:30 PM CDT), end_time 01:30 UTC the next day, venue_name "Capital Factory", address "701 Brazos St, Austin, TX 78701", and the full event description.

The fix also surfaced a systemic improvement: the previous version was silently failing on ALL AICamp detail pages and returning only 1 upcoming event from a 44-event listing. After the fix, 30 upcoming events extract correctly (the rest appeared to be past events because the listing-page-only fallback used the wrong timezone and filtered them out). The canonical AI Lab row in the DB now has all correct fields, the Meetup duplicate is soft-deleted with a merge trail, and 131/131 tests still pass.
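The title part of this fix follows a fallback-chain pattern: try the most reliable selector first, then progressively less reliable ones. Below is a minimal, hypothetical sketch of the idea; the `page.select` helper is a stand-in for whatever DOM-query layer the scraper actually uses:

```javascript
// Minimal sketch of the selector fallback chain from the fix.
// page.select(selector) -> text content or null (stand-in API,
// not the scraper's real interface).
function extractTitle(page) {
  const selectors = [
    'meta[property="og:title"]', // reliable primary
    'h4',                        // AICamp detail pages use h4
    'h1',
    'h2',
  ];
  for (const sel of selectors) {
    const text = page.select(sel);
    if (text && text.trim()) return text.trim();
  }
  // Caller treats null as "fall back to listing-only data."
  return null;
}
```

The original bug was the inverse of this pattern: a single selector (h1/h2) with no fallback, so one markup quirk silently nulled out every detail page.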


This agent is developed iteratively with Claude Code. The collaboration is part of the project's identity.

πŸ‘€ Calendar Visits
πŸ€– AI Crawlers