Experiment-Based Learning Generator


A small tool for turning a vague learning goal into a concrete plan you can act on. You give it a topic; it helps you narrow that topic down, then produces a sheet with five hands-on experiments, one keystone project that ties them together, and five follow-up directions to chase next.

How a session works

The flow is the same in the CLI and the web app:

  1. Type a topic. Anything you're curious about — fermentation, pendulums, color theory. Broad is fine.
  2. Refine it. The model proposes 5–7 more specific angles within that topic. Pick one to drill into further (you'll get another round of suggestions), type your own refinement, or accept the current topic.
  3. Generate the sheet. Once you commit, you get a markdown document with:
     - Introduction — what the topic is, why it matters, and the key concepts to hold in mind.
     - 5 experiments, ordered so each builds on the last. Each one lists a goal, materials (cheap or on-hand where possible), a procedure, and what to observe.
     - Keystone project — a larger build that synthesizes what the experiments taught, with success criteria and stretch variations.
     - Follow-up ideas — five branches outward into adjacent fields, deeper specializations, or surprising applications.
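
The refinement loop in step 2 can be sketched as a small function (hypothetical names; the real code may be organized differently):

```python
def refine(topic, suggest, choose, max_rounds=5):
    """Narrow a topic interactively. `suggest` returns 5-7 more specific
    angles for a topic; `choose` either picks one (or types a custom
    refinement) or returns None to accept the current topic."""
    for _ in range(max_rounds):
        angles = suggest(topic)
        picked = choose(topic, angles)
        if picked is None:   # user accepts the current topic
            return topic
        topic = picked       # drill into the chosen angle and loop
    return topic
```

The callback shape is just for illustration: it lets the same loop back both the CLI's numbered prompt and the web app's buttons.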

Web app

python webapp.py starts a local server (default http://localhost:8000/). The home page takes a topic and walks you through refinement step by step — each suggestion is a clickable button that posts back the new topic. When you commit, the generated sheet is rendered as HTML with the raw markdown tucked into a collapsed View raw markdown section below it, plus a download button for the .md file.
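
The clickable-suggestion step could be rendered roughly like this (a sketch only; the `/refine` endpoint, form markup, and helper name are assumptions, not the actual webapp.py):

```python
import html

def refinement_buttons(suggestions):
    """Render each suggested angle as a button that posts the refined
    topic back to the server (hypothetical /refine endpoint)."""
    parts = []
    for s in suggestions:
        parts.append(
            '<form method="post" action="/refine">'
            f'<button name="topic" value="{html.escape(s)}">'
            f"{html.escape(s)}</button></form>"
        )
    return "\n".join(parts)
```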

Every generation is saved to ./sheets/ (overridable with $SHEETS_DIR) along with a JSON index. A View saved sheets link on the home page lists past topics newest-first; clicking one shows it again without re-generating.
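
The save step might work along these lines (a sketch under assumptions: the per-sheet filename, index filename, and field names are guesses, not the real implementation):

```python
import json
import os
import time
from pathlib import Path

def save_sheet(slug, markdown, sheets_dir=None):
    """Write a generated sheet into the sheets directory and prepend an
    entry to a JSON index so listings come out newest-first.
    (Hypothetical layout: <dir>/<slug>.md plus <dir>/index.json.)"""
    root = Path(sheets_dir or os.environ.get("SHEETS_DIR", "./sheets"))
    root.mkdir(parents=True, exist_ok=True)
    path = root / f"{slug}.md"
    path.write_text(markdown, encoding="utf-8")
    index_path = root / "index.json"
    index = json.loads(index_path.read_text()) if index_path.exists() else []
    index.insert(0, {"slug": slug, "saved_at": time.time()})  # newest first
    index_path.write_text(json.dumps(index, indent=2))
    return path
```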

A FastCGI mode is also available — python webapp.py --mode fastcgi — for deployment behind nginx or lighttpd.
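
For example, an nginx server block might hand requests to the FastCGI process like this (illustrative only; the address the app listens on depends on how you run it):

```nginx
# Hypothetical nginx location; adjust the address to match
# wherever `python webapp.py --mode fastcgi` is listening.
location / {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}
```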

CLI

python learn.py "your topic" runs the same flow in the terminal. Refinement is interactive (numbered choices), and the final sheet streams to stdout while being written to <slug>-<date>.md in the current directory (override with -o).
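
The default output name could be built along these lines (a sketch; the actual slug rules in learn.py may differ):

```python
import re
from datetime import date

def output_filename(topic, today=None):
    """Build the default <slug>-<date>.md name: lowercase the topic,
    collapse non-alphanumeric runs to hyphens, append an ISO date."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    d = (today or date.today()).isoformat()
    return f"{slug}-{d}.md"
```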

Under the hood

Both interfaces share a core.py module that wraps the Anthropic API.
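
A core.py-style call might look roughly like this (a sketch: the function name, prompt wording, and model id are assumptions; in practice `client` would be an `anthropic.Anthropic()` instance):

```python
def generate_sheet(topic, client, model="claude-sonnet-4-5"):
    """Ask the model for the full sheet in one shot. Taking `client`
    as a parameter keeps the sketch testable; the real module would
    construct an anthropic.Anthropic() client itself."""
    msg = client.messages.create(
        model=model,
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"Create a hands-on learning sheet for: {topic}. "
                       "Return plain markdown without a code fence.",
        }],
    )
    return msg.content[0].text
```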

If the model wraps its output in a ```markdown fence (it occasionally does despite being told not to), the web app strips the outer fence before rendering — otherwise the whole document would render as one giant code block.
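
That stripping step could be implemented roughly like this (a sketch, not the actual webapp.py code):

```python
import re

def strip_outer_fence(markdown_text):
    """If the whole document is wrapped in a single ``` or ```markdown
    fence, return the contents; otherwise return the text unchanged."""
    m = re.match(r"^```(?:markdown)?\s*\n(.*)\n```\s*$",
                 markdown_text.strip(), re.DOTALL)
    return m.group(1) if m else markdown_text
```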

Code

Here is the code if you want to play with it.