
Six Flags Super Calendar

A week-by-week calendar showing operating hours for all Six Flags Entertainment Group theme parks — including the former Cedar Fair parks. Data is scraped from the Six Flags internal API and stored locally in SQLite.

Parks

24 theme parks across the US, Canada, and Mexico:

Six Flags branded — Great Adventure (NJ), Magic Mountain (CA), Great America (IL), Over Georgia, Over Texas, St. Louis, Fiesta Texas (TX), New England (MA), Discovery Kingdom (CA), Mexico, Great Escape (NY), Darien Lake (NY), Frontier City (OK)

Former Cedar Fair — Cedar Point (OH), Knott's Berry Farm (CA), Canada's Wonderland (ON), Carowinds (NC), Kings Dominion (VA), Kings Island (OH), Valleyfair (MN), Worlds of Fun (MO), Michigan's Adventure (MI), Dorney Park (PA), California's Great America (CA)

Tech Stack

  • Next.js 15 (App Router, Server Components, standalone output)
  • Tailwind CSS v4 (@theme {} CSS variables, no config file)
  • SQLite via better-sqlite3 — persisted in /app/data/parks.db
  • Playwright — one-time headless browser run to discover each park's internal API ID
  • Six Flags CloudFront API — https://d18car1k0ff81h.cloudfront.net/operating-hours/park/{id}?date=YYYYMM
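Given the endpoint format above, a request URL is just the park's discovered API ID plus a YYYYMM month. A minimal sketch of building one — the park ID here is a placeholder, since real IDs come from the discovery step:

```shell
# PARK_ID is a hypothetical value; real IDs are found by `npm run discover`
PARK_ID=42
MONTH=$(date +%Y%m)   # the API expects the month as YYYYMM
URL="https://d18car1k0ff81h.cloudfront.net/operating-hours/park/${PARK_ID}?date=${MONTH}"
echo "$URL"
```

From there, the scraper can fetch each park+month combination and store the response rows in SQLite.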

Local Development

Prerequisites

  • Node.js 22+
  • npm

Setup

npm install
npx playwright install chromium

Seed the database

Run once to discover each park's internal API ID (opens a headless browser per park):

npm run discover

Then scrape operating hours for the full year:

npm run scrape

To force a full re-scrape (ignores the 7-day staleness window):

npm run scrape:force

Run the dev server

npm run dev

Open http://localhost:3000. Navigate weeks with the previous/next buttons or pass ?week=YYYY-MM-DD directly.
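The ?week= value is a date within the week to display. A small sketch for computing the current week's Monday in the expected YYYY-MM-DD form (assumes GNU date; whether the app requires the date to be Monday-aligned is an assumption here):

```shell
# %u gives the ISO weekday (Monday = 1), so subtracting (u - 1) days
# from today lands on this week's Monday (GNU date syntax).
WEEK=$(date -d "-$(( $(date +%u) - 1 )) days" +%F)
echo "http://localhost:3000/?week=${WEEK}"
```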


Deployment

Docker (standalone)

The app uses Next.js standalone output. The SQLite database is stored in a Docker volume at /app/data.
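A minimal compose sketch consistent with the setup described here — the service name `web` and volume name `park_data` match the commands below, but the rest (ports, image tag) is illustrative, not copied from the repo:

```yaml
services:
  web:
    build: .
    image: ${REGISTRY_URL:-localhost}/josh/sixflagssupercalendar:latest
    ports:
      - "3000:3000"
    volumes:
      - park_data:/app/data   # SQLite database lives here
volumes:
  park_data:
```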

Build and run locally

docker compose up --build

Or pull from the registry:

REGISTRY_URL=your.registry.host docker compose up

Seed the database inside the container

The production image includes Playwright and Chromium, so discovery and scraping can be run directly against the running container's volume.

# Discover API IDs for all parks (one-time, opens headless browser per park)
docker compose exec web npm run discover

# Scrape operating hours for the full year
docker compose exec web npm run scrape

Or as one-off containers against the named volume:

docker run --rm -v sixflagssupercalendar_park_data:/app/data \
  your.registry.host/josh/sixflagssupercalendar:latest \
  npm run discover

docker run --rm -v sixflagssupercalendar_park_data:/app/data \
  your.registry.host/josh/sixflagssupercalendar:latest \
  npm run scrape

CI/CD (Gitea Actions)

The pipeline is defined at .gitea/workflows/deploy.yml.

Trigger: Push to main

Steps:

  1. Checkout code
  2. Log in to the Gitea container registry
  3. Build and tag the image as :latest and :<short-sha>
  4. Push both tags

Required configuration in Gitea

Type Name Value
Variable REGISTRY_URL Your registry host, e.g. gitea.example.com
Secret REGISTRY_TOKEN A Gitea access token with package:write scope

Set these under Repository → Settings → Actions → Variables / Secrets.
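A sketch of what a workflow implementing the four steps above could look like, assuming standard checkout and docker CLI usage; action versions, the registry username, and step names here are illustrative, not copied from the actual deploy.yml:

```yaml
name: Build and Deploy
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the Gitea registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login "${{ vars.REGISTRY_URL }}" -u josh --password-stdin
      - name: Build and push :latest and :<short-sha>
        run: |
          TAG=$(git rev-parse --short HEAD)
          IMAGE="${{ vars.REGISTRY_URL }}/josh/sixflagssupercalendar"
          docker build -t "${IMAGE}:latest" -t "${IMAGE}:${TAG}" .
          docker push "${IMAGE}:latest"
          docker push "${IMAGE}:${TAG}"
```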

Upstream remote

git remote add origin https://gitea.thewrightserver.net/josh/SixFlagsSuperCalendar.git
git push -u origin main

Data Refresh

The scrape job skips any park+month combination scraped within the last 7 days. To keep data current, run npm run scrape (or scrape:force) on a schedule — weekly is sufficient for a season calendar.
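For the Docker deployment, a weekly refresh could be a host crontab entry; the project path below is illustrative:

```shell
# Run the scrape every Monday at 04:00 against the running container
# (host crontab entry; adjust the compose project path for your host)
0 4 * * 1 cd /srv/sixflagssupercalendar && docker compose exec -T web npm run scrape
```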

Parks and months not yet in the database show a placeholder in the UI. Parks with no hours data on a given day show "Closed".
