The Big Picture
This is a fully autonomous system. It discovers events, monitors its own health, and fixes its own code, every day, without human intervention. The Observatory exists to give transparency into this process, so people can see how an autonomous AI system actually works. This isn't a black box.
How It Works
Three autonomous loops work together; no human runs this system.
Discovery Pipeline
Runs daily at midnight
Scans 8+ sources and the web for Austin AI events
Catches the same event listed on different platforms
AI confirms: real event? In Austin? AI-related?
Tags audience, skill level, and free/paid
Approved events appear on the calendar
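The pipeline's shape can be sketched in a few lines. This is an illustrative sketch, not the agent's actual code: the function names, event fields, and the stubbed validation (which in the real system is an AI call) are all assumptions.

```javascript
// Deduplicate events that appear on multiple platforms by a normalized key.
function dedupeKey(event) {
  return `${event.title.toLowerCase().replace(/\W+/g, ' ').trim()}|${event.date}`;
}

function dedupe(events) {
  const seen = new Map();
  for (const e of events) {
    const key = dedupeKey(e);
    if (!seen.has(key)) seen.set(key, e);
  }
  return [...seen.values()];
}

// Validation and tagging are AI decisions in the real system; stubbed here.
function validate(event) {
  return Boolean(event.title && event.date && event.city === 'Austin');
}

function runPipeline(rawEvents) {
  return dedupe(rawEvents)
    .filter(validate)
    .map(e => ({ ...e, tags: { free: e.price === 0 } }));
}

const approved = runPipeline([
  { title: 'AI Lab (Austin)', date: '2025-05-06', city: 'Austin', price: 0 },
  { title: 'AI Lab (Austin)', date: '2025-05-06', city: 'Austin', price: 0 }, // same event, second platform
  { title: 'SF Meetup', date: '2025-05-07', city: 'San Francisco', price: 0 },
]);
console.log(approved.length); // 1
```

The key idea is that deduplication happens before validation, so the AI only spends a decision on each real-world event once.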
Self-Monitoring
Evaluates every run
Collects data on scraper health, error rates, source performance, and calendar coverage
The most powerful Claude model reviews everything, assigns a health grade, and identifies issues
Creates search queries, manages sources, and escalates code issues for the repair agent
Self-Healing
Runs daily, 2 hours after discovery
Picks up the highest-priority action item from the monitor
Reads the codebase, understands the bug, and writes a fix
Runs the test suite; only pushes if all tests pass
Pushes the fix to production; the next run uses the improved code
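The "only push if tests pass" gate is the critical safety property of this loop. A minimal sketch, with the test runner and push step injected as stubs so it runs without a repository (the real agent runs the project's actual test suite and git push):

```javascript
// runTests() returns the test runner's exit code; 0 is the only green light.
function applyFixAndMaybePush(runTests, push) {
  if (runTests() !== 0) {
    return false; // fix is discarded, production is untouched
  }
  push(); // e.g. git commit + push; the next run uses the improved code
  return true;
}

// Stubbed demo: a passing suite pushes, a failing suite does not.
let pushed = false;
const ok = applyFixAndMaybePush(() => 0, () => { pushed = true; });
const blocked = applyFixAndMaybePush(() => 1, () => { pushed = 'never'; });
console.log(ok, blocked, pushed); // true false true
```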
Multi-Model Architecture
Three Claude AI models split the work based on what each task needs, like having a junior analyst, a senior reviewer, and a strategic director on the same team.
Handles 80% of decisions: validation, classification, dedup
Evaluates new sources and extracts event details
The system brain β monitors health and drives improvements
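The routing can be pictured as a small lookup table. The tier names and task labels here are placeholders mirroring the description above; the real system maps each task to a specific Claude model.

```javascript
// Which tier handles which kind of task (illustrative labels only).
const MODEL_TIERS = {
  fast: ['validate', 'classify', 'dedupe'],           // ~80% of decisions
  mid:  ['evaluate_source', 'extract_details'],       // new sources, event details
  top:  ['health_review', 'plan_improvements'],       // the system brain
};

function tierForTask(task) {
  for (const [tier, tasks] of Object.entries(MODEL_TIERS)) {
    if (tasks.includes(task)) return tier;
  }
  return 'fast'; // cheap default for unrecognized task types
}

console.log(tierForTask('dedupe'));        // 'fast'
console.log(tierForTask('health_review')); // 'top'
```

The design choice is cost-shaped: the cheapest model handles the high-volume decisions, and the most capable model is reserved for the low-frequency, high-leverage ones.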
Community Input
Anyone can submit an event the system missed using the "Missing an event?" button on the calendar. The agent scrapes the submitted URL, validates it, and adds it to the calendar, all in the same daily run. It also learns from each submission, adding new sources and search strategies to find similar events in the future.
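The submission flow has a hypothetical shape like the following. Every name here (the handler, its dependencies, the event fields) is illustrative; the point is that one submission both lands an event and teaches the agent a new source.

```javascript
function handleSubmission(url, deps) {
  const event = deps.scrape(url);          // fetch + parse the submitted page
  if (!deps.validate(event)) return { added: false };
  deps.addToCalendar(event);               // lands in the same daily run
  deps.learnSource(new URL(url).hostname); // future runs scan this source too
  return { added: true, event };
}

// Stubbed demo so the sketch runs without network access.
let learned = null;
const result = handleSubmission('https://example.com/events/ai-night', {
  scrape: () => ({ title: 'AI Night', date: '2025-06-01', city: 'Austin' }),
  validate: (e) => e.city === 'Austin',
  addToCalendar: () => {},
  learnSource: (host) => { learned = host; },
});
console.log(result.added, learned); // true 'example.com'
```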
Agent Performance
What the agent is doing autonomously
Events Added (Last 30 Days)
Recent Activity
Under the Hood
How the agent thinks, decides, and sometimes fails
Health Report
Automated self-evaluation of system effectiveness
| Grade | Scraper Health | Sources | Error Rate | Activity |
|---|---|---|---|---|
| A | 80%+ | 4+ contributing | <5% | Events added in last 7d |
| B | 60-79% | 3+ contributing | <10% | Active discovery |
| C | 40-59% | 2-3 contributing | >10% | Some source issues |
| D | <40% | <2 contributing | High | Multiple broken scrapers |
| F | n/a | n/a | n/a | System not running |
Updated 2026-03-29: Grades now measure infrastructure health (what the agent controls), not event count or empty days (which reflect community activity). The agent still actively maximizes calendar coverage as a separate mission.
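The rubric above can be sketched as a single function. The thresholds come straight from the table; the metric names are assumptions about the shape of the agent's internal health report.

```javascript
function healthGrade({ running, scraperHealth, contributingSources, errorRate, recentEvents }) {
  if (!running) return 'F';                       // system not running
  if (scraperHealth >= 0.8 && contributingSources >= 4 &&
      errorRate < 0.05 && recentEvents > 0) return 'A';
  if (scraperHealth >= 0.6 && contributingSources >= 3 &&
      errorRate < 0.10) return 'B';
  if (scraperHealth >= 0.4 && contributingSources >= 2) return 'C';
  return 'D';                                     // multiple broken scrapers
}

console.log(healthGrade({
  running: true, scraperHealth: 0.85, contributingSources: 5,
  errorRate: 0.02, recentEvents: 12,
})); // 'A'
```

Note that every input here is something the agent controls (scrapers, sources, errors), consistent with the 2026-03-29 change: calendar fullness is deliberately not a grading input.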
Human Stewardship
How humans guide the agent's growth using Claude Code
User flagged a duplicate pair in the calendar: "AI Lab (Austin)" on May 6 appearing as two rows. AICamp was the canonical source (the user confirmed: the AICamp coordinator drives participation through multiple Meetup groups), but its row had the WRONG title ("Giving AI Agents Real Memory" vs. the current page title), the WRONG time (17:30 UTC vs. the correct 22:30 UTC), a NULL venue_name, a generic "Austin, TX" address, and no description, while the Meetup-sourced duplicate had the correct time, the specific venue "Capital Factory", and a better description. User noted: "All of the things that [the duplicate] does better are available in the AICamp webpage, so not sure why it missed it."
Root-cause investigation on the live AICamp event page surfaced three compounding bugs in agent/src/sources/aicamp.js fetchEventDetail(): (1) The title selector looked at h1/h2, but AICamp detail pages use h4; the selector returned an empty string, the "if (title)" null guard fired, and fetchEventDetail returned null for EVERY AICamp event. (2) When fetchEventDetail returned null, the caller fell back to listing-only data, which had no description, no venue, and a hardcoded address of "Austin, TX". (3) Even if the title had worked, the fallback description used <meta name="description">, which is AICamp's SITE-WIDE generic blurb, not the event-specific content. The rewrite of fetchEventDetail: title via meta[property="og:title"] (the reliable primary) with h4 / h1 / h2 fallbacks; description by gathering the first substantive <p> tags inside <div class="left-contents"> while filtering out metadata sections (Venue, Speaker, Agenda, Prerequisite); venue/address by targeting the explicit <p>Venue:...</p> block and splitting the address line into venue_name plus street address; and removal of the hardcoded "Austin, TX" address. Also fixed the duplicate row: updated the canonical aicamp row via SQL with the now-correct scraper output (title, description, start/end time, venue, address, image) and soft-deleted the Meetup duplicate via deleted_at plus a merged_into_id pointing at the canonical row.
The AICamp scraper now returns complete event data, verified against the live page: title "AI Lab (Austin) - Building AI Agents with Memory", start_time 22:30 UTC (5:30 PM CDT), end_time 01:30 UTC the next day, venue_name "Capital Factory", address "701 Brazos St, Austin, TX 78701", and a full event description. The fix also surfaced a systemic improvement: the previous version was silently failing on ALL AICamp detail pages and returning only 1 upcoming event from a 44-event listing; after the fix, 30 upcoming events extract correctly (the others were false-past events because the listing-page-only fallback had the wrong timezone and was filtering them out). The canonical AI Lab row in the DB now has all correct fields, and the Meetup duplicate is soft-deleted with a merge trail. 131/131 tests still passing.
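The title fallback chain described above can be sketched as follows. This is a dependency-free, regex-based illustration, not the real scraper (which uses DOM selectors in agent/src/sources/aicamp.js); the sample HTML is invented.

```javascript
function metaContent(html, property) {
  const m = html.match(new RegExp(`<meta[^>]*property="${property}"[^>]*content="([^"]*)"`, 'i'));
  return m ? m[1] : null;
}

function headingText(html, tag) {
  const m = html.match(new RegExp(`<${tag}[^>]*>([^<]*)</${tag}>`, 'i'));
  return m ? m[1].trim() : null;
}

function extractTitle(html) {
  // og:title is the reliable primary; h4 (AICamp's actual heading tag),
  // then h1/h2 are fallbacks. Returning null, rather than a hardcoded
  // default, keeps a failed extraction visible upstream.
  return metaContent(html, 'og:title')
    || headingText(html, 'h4')
    || headingText(html, 'h1')
    || headingText(html, 'h2')
    || null;
}

const sample = '<head><meta property="og:title" content="AI Lab (Austin) - Building AI Agents with Memory"></head>';
console.log(extractTitle(sample)); // "AI Lab (Austin) - Building AI Agents with Memory"
```

The design lesson from the bug is the last line of the chain: the old code's silent fallback hid a total failure, while an explicit null would have shown up in the monitor's health report.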
This agent is developed iteratively with Claude Code. The collaboration is part of the project's identity.