# Thoosie Calendar
A week-by-week calendar showing operating hours for all Six Flags Entertainment Group theme parks — including the former Cedar Fair parks. Data is scraped from the Six Flags internal API and stored locally in SQLite. Click any park to see its full month calendar and live ride status with current wait times.
## Parks
24 theme parks across the US, Canada, and Mexico, grouped by region:

| Region | Parks |
|--------|-------|
| **Northeast** | Great Adventure (NJ), New England (MA), Great Escape (NY), Darien Lake (NY), Dorney Park (PA), Canada's Wonderland (ON) |
| **Southeast** | Over Georgia, Carowinds (NC), Kings Dominion (VA) |
| **Midwest** | Great America (IL), St. Louis (MO), Cedar Point (OH), Kings Island (OH), Valleyfair (MN), Worlds of Fun (MO), Michigan's Adventure (MI) |
| **Texas & South** | Over Texas, Fiesta Texas (TX), Frontier City (OK) |
| **West & International** | Magic Mountain (CA), Discovery Kingdom (CA), Knott's Berry Farm (CA), California's Great America (CA), Mexico |
## Tech Stack
- **Next.js 15** — App Router, Server Components, standalone output
- **Tailwind CSS v4** — `@theme {}` CSS variables, no config file
- **SQLite** via `better-sqlite3` — persisted in `/app/data/parks.db`
- **Playwright** — one-time headless browser run to discover each park's internal API ID
- **Six Flags CloudFront API** — `https://d18car1k0ff81h.cloudfront.net/operating-hours/park/{id}?date=YYYYMM`
- **Queue-Times.com API** — live ride open/closed status and wait times, updated every 5 minutes
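The CloudFront endpoint above can be queried directly once a park's internal id is known. A minimal sketch of building the URL, where the park id is a placeholder (real ids come from `npm run discover`):

```shell
# Build the operating-hours URL for one park and month.
# PARK_ID is a placeholder -- real internal ids are discovered by `npm run discover`.
PARK_ID="example-park-id"
MONTH="202606"   # YYYYMM
URL="https://d18car1k0ff81h.cloudfront.net/operating-hours/park/${PARK_ID}?date=${MONTH}"
echo "$URL"
# Fetch with: curl -s "$URL"
```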
## Ride Status
The park detail page shows ride open/closed status using a two-tier approach:
1. **Live data (Queue-Times.com)** — when a park is operating, ride status and wait times are fetched from the [Queue-Times.com API](https://queue-times.com/en-US/pages/api) and cached for 5 minutes. All 24 parks are mapped. Displays a **Live** badge with per-ride wait times.
2. **Schedule fallback (Six Flags API)** — the Six Flags operating-hours API drops the current day from its response once a park opens. When Queue-Times data is unavailable, the app falls back to the nearest upcoming date from the Six Flags schedule API as an approximation.
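For the live tier, Queue-Times publishes per-park JSON at `https://queue-times.com/parks/{park_id}/queue_times.json`. A sketch of pulling ride status; the numeric id is illustrative, and the app maintains its own park-id mapping:

```shell
# Fetch live ride status for one Queue-Times park (id 50 is illustrative).
# Rides are grouped under "lands"; each ride has name, is_open, and wait_time.
curl -s "https://queue-times.com/parks/50/queue_times.json" \
  | jq '[.lands[].rides[] | {name, is_open, wait_time}]'
```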
### Roller Coaster Filter
When live data is shown, a **Coasters only** toggle appears if roller coaster data has been populated for that park. Coaster lists are sourced from [RCDB](https://rcdb.com) and stored in `data/park-meta.json`. To populate them:
1. Open `data/park-meta.json` and set `rcdb_id` for each park to the numeric RCDB park ID (visible in the URL: `https://rcdb.com/4529.htm` → `4529`).
2. Run `npm run scrape` — coaster lists are fetched from RCDB and stored in the JSON file. They refresh automatically every 30 days on subsequent scrapes.
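The shape of `data/park-meta.json` isn't documented here, so this is only a sketch of step 1: assuming the file is an object keyed by park id, `jq` can set the field non-interactively.

```shell
# Assumption: park-meta.json is an object keyed by park id -- check the real file.
# "kingsisland" is a park key used by this repo's debug command; 4529 is the id
# from the example RCDB URL above.
jq '.kingsisland.rcdb_id = 4529' data/park-meta.json > tmp.json \
  && mv tmp.json data/park-meta.json
```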
---
## Local Development
**Prerequisites:** Node.js 22+, npm
```bash
npm install
npx playwright install chromium
```
### Seed the database
Run once to discover each park's internal API ID (opens a headless browser per park):
```bash
npm run discover
```
Scrape operating hours for the full year:
```bash
npm run scrape
```
Force a full re-scrape (ignores the staleness window):
```bash
npm run scrape:force
```
### Debug a specific park + date
Inspect raw API data and parsed output for any park and date:
```bash
npm run debug -- --park kingsisland --date 2026-06-15
```
Output is printed to the terminal and saved to `debug/{parkId}_{date}.txt`.
### Run tests
```bash
npm test
```
### Run the dev server
```bash
npm run dev
```
Open [http://localhost:3000](http://localhost:3000). Navigate weeks with the `←` / `→` buttons, or pass `?week=YYYY-MM-DD` directly. Click any park name to open its detail page.
---
## Deployment
The app ships as two separate Docker images that share a named volume for the SQLite database:

| Image | Tag | Purpose |
|-------|-----|---------|
| Next.js web server | `:web` | Reads DB, serves content. No scraping tools. |
| Scraper + scheduler | `:scraper` | Nightly data refresh. No web server. |
Images are built and pushed automatically by CI on every push to `main`.
### First-time setup
**1. Pull the images**
```bash
docker pull gitea.thewrightserver.net/josh/sixflagssupercalendar:web
docker pull gitea.thewrightserver.net/josh/sixflagssupercalendar:scraper
```
**2. Discover park API IDs**
This one-time step opens a headless browser for each park to find its internal Six Flags API ID. Run it against the scraper image so Playwright is available:
```bash
docker run --rm -v root_park_data:/app/data \
  gitea.thewrightserver.net/josh/sixflagssupercalendar:scraper \
  npm run discover
```
**3. Set RCDB IDs for the coaster filter**
Open `data/park-meta.json` in the Docker volume and set `rcdb_id` for each park to the numeric ID from the RCDB URL (e.g. `https://rcdb.com/4529.htm` → `4529`). You can curl it directly from the repo:
```bash
curl -o /var/lib/docker/volumes/root_park_data/_data/park-meta.json \
  https://gitea.thewrightserver.net/josh/SixFlagsSuperCalendar/raw/branch/main/data/park-meta.json
```
**4. Run the initial scrape**
```bash
docker run --rm -v root_park_data:/app/data \
  gitea.thewrightserver.net/josh/sixflagssupercalendar:scraper \
  npm run scrape
```
**5. Start services**
```bash
docker compose up -d
```
Both services start. The scraper runs nightly at 3 AM (container timezone, set via `TZ`).
### Updating
```bash
docker compose pull && docker compose up -d
```
### Scraper environment variables
Set these in `docker-compose.yml` under the `scraper` service to override defaults:

| Variable | Default | Description |
|----------|---------|-------------|
| `TZ` | `UTC` | Timezone for the nightly 3 AM run (e.g. `America/New_York`) |
| `PARK_HOURS_STALENESS_HOURS` | `72` | Hours before park schedule data is re-fetched |
| `COASTER_STALENESS_HOURS` | `720` | Hours before RCDB coaster lists are re-fetched (720 = 30 days) |
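
In `docker-compose.yml` these settings go under the scraper service's `environment` key; a sketch with illustrative values:

```yaml
# Sketch only -- merge into the existing docker-compose.yml; values are examples.
services:
  scraper:
    environment:
      TZ: America/New_York             # shift the nightly 3 AM run to Eastern time
      PARK_HOURS_STALENESS_HOURS: "24" # refresh park schedules daily
```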
### Manual scrape
To trigger a scrape outside the nightly schedule:
```bash
docker compose exec scraper npm run scrape
```
Force re-scrape of all data (ignores staleness):
```bash
docker compose exec scraper npm run scrape:force
```
---
## Data Refresh
The scraper skips any park + month already scraped within the staleness window (`PARK_HOURS_STALENESS_HOURS`, default 72h). Past dates are never overwritten — once a day has passed, the API stops returning data for it, so the record written while it was still a future date is preserved indefinitely. The nightly scraper handles refreshes automatically.
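
The skip decision reduces to a simple age check; a sketch with hypothetical variable names, not the app's actual code:

```shell
# Re-scrape a park+month only when its record is older than the staleness window.
STALENESS_HOURS="${PARK_HOURS_STALENESS_HOURS:-72}"
NOW=$(date +%s)
LAST_SCRAPED=$(( NOW - 100 * 3600 ))   # pretend the last scrape was 100h ago
AGE_HOURS=$(( (NOW - LAST_SCRAPED) / 3600 ))
if [ "$AGE_HOURS" -ge "$STALENESS_HOURS" ]; then
  echo "stale: re-scrape"    # 100h >= 72h, so this branch runs
else
  echo "fresh: skip"
fi
```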
Roller coaster lists (from RCDB) are refreshed per `COASTER_STALENESS_HOURS` (default 720h = 30 days) for parks with a configured `rcdb_id`.