How We Serve 414,000 Cameras From a 23MB File
Sentinel maps 414,000 surveillance cameras across 22 countries. The entire dataset ships as a single 23MB file. No backend. No database. No API calls. Everything renders in your browser.
Here’s how.
The data pipeline
We aggregate camera locations from government open data APIs, community mapping platforms, and public records:
- Government sources: Caltrans (California, 3,434 cameras), TfL London (883), DriveBC (1,058), Ontario/Alberta/Quebec 511 networks, Toronto, Vancouver, Calgary, Austin, Leicester City Council
- OpenStreetMap: 380,000+ cameras tagged man_made=surveillance by community mappers worldwide
- Speed camera networks: 50,000+ across France, Germany, Spain, Poland, Austria, and 10 more European countries
- EFF Atlas of Surveillance: 4,000+ US jurisdiction-level surveillance technology deployments
- FLOCK dataset: 336,000+ cameras from US public records
Each source has its own Python ingest script that normalizes data into a common GeoJSON format: longitude, latitude, camera type, operator, source, jurisdiction, confidence, and country.
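A normalized record looks roughly like this (the coordinates, operator, and exact property names below are illustrative, not lifted from the real schema):

```javascript
// Illustrative example of the normalized GeoJSON schema; all values
// here are made up for demonstration.
const feature = {
  type: 'Feature',
  geometry: { type: 'Point', coordinates: [-118.2437, 34.0522] }, // [lon, lat]
  properties: {
    type: 'cctv',              // camera type: cctv, alpr, speed, ...
    operator: 'Caltrans',
    source: 'caltrans',
    jurisdiction: 'California',
    confidence: 'high',        // government feeds rank above community mapping
    country: 'US'
  }
};
```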
A deduplication pass removes cameras within 50 meters of each other, preferring higher-confidence sources (government data over community mapping).
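The real pass lives in the Python pipeline, but the idea fits in a few lines. Here is a sketch in JavaScript, using an equirectangular distance approximation (plenty accurate at 50-meter scales) and a numeric confidence score:

```javascript
// Sketch of the dedup pass (the production version is Python).
// Higher-confidence cameras are kept first; any camera within
// 50 meters of one already kept is dropped.
const METERS_PER_DEG = 111320; // rough meters per degree of latitude

function distMeters(a, b) {
  // Equirectangular approximation: fine for 50m-scale comparisons
  const dLat = (a.lat - b.lat) * METERS_PER_DEG;
  const dLon = (a.lon - b.lon) * METERS_PER_DEG * Math.cos((a.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLon);
}

function dedupe(cameras) {
  // Sort by confidence descending so government data wins ties
  const sorted = [...cameras].sort((x, y) => y.confidence - x.confidence);
  const kept = [];
  for (const cam of sorted) {
    if (!kept.some((k) => distMeters(k, cam) < 50)) kept.push(cam);
  }
  return kept;
}
```

This is O(n²); at 400k+ points a real implementation would bucket points into a spatial grid or index first, but the keep/drop rule is the same.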
From GeoJSON to PMTiles
The unified GeoJSON file is 95MB: far too large to ship to a browser in one piece.
We use tippecanoe to convert it into a PMTiles file: a single-file archive of Mapbox Vector Tiles organized for random access. Tippecanoe handles the hard parts: clustering at low zoom levels, dropping features at high density, and encoding everything as compact protobuf-based vector tiles.
The result: 23MB. Every tile is addressable by byte offset within the file.
The Cloudflare Workers problem
PMTiles files are designed for HTTP Range requests. The client asks for bytes=12345-12456 and gets exactly those bytes, which contain the tile it needs. This is efficient: you only download the tiles you’re viewing.
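A range read is just a header on an ordinary fetch. A hedged sketch (fetchTileBytes is our illustrative name, not a pmtiles API):

```javascript
// Hypothetical helper showing what a ranged tile read looks like.
// A server that honors Range replies 206 with exactly those bytes;
// a server that ignores it replies 200 with the whole file.
function rangeHeader(offset, length) {
  // Range is inclusive on both ends: offset .. offset+length-1
  return `bytes=${offset}-${offset + length - 1}`;
}

async function fetchTileBytes(url, offset, length) {
  const res = await fetch(url, { headers: { Range: rangeHeader(offset, length) } });
  if (res.status !== 206) throw new Error(`expected 206 partial, got ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}
```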
Cloudflare Workers serves static assets without Range header support. Every request returns the full file with a 200 response, not a 206 partial response.
Our workaround: download the entire 23MB file into an ArrayBuffer on page load, then serve tiles from memory.
```javascript
// Minimal in-memory Source for pmtiles.js: the whole archive lives
// in one ArrayBuffer, so getBytes is just a slice.
class ArrayBufferSource {
  constructor(buf, key) { this.data = buf; this.key = key; }
  getKey() { return this.key; } // identifies this archive to the protocol
  async getBytes(offset, length) {
    return { data: this.data.slice(offset, offset + length) };
  }
}

const buf = await fetch('/data/sentinel.pmtiles').then(r => r.arrayBuffer());
const source = new ArrayBufferSource(buf, '/data/sentinel.pmtiles');
const pmtiles = new PMTiles(source);
// protocol is a pmtiles.Protocol already registered with MapLibre;
// the style references tiles as pmtiles:///data/sentinel.pmtiles
protocol.add(pmtiles);
```
Is this optimal? No. The ideal setup uses Cloudflare R2, which supports Range requests natively. But 23MB downloads in a few seconds on most connections, and once in memory, tile rendering is instant.
Client-side rendering
MapLibre GL JS renders vector tiles on a WebGL canvas. No server-side rendering. No tile server. The map loads a dark basemap from CARTO and overlays camera points as circles, colored by type:
- Cyan: CCTV
- Pink: ALPR
- Red: Facial recognition
- Yellow: Speed cameras
- Orange: Red light cameras
- Purple: Drones
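In MapLibre that color mapping is a single match expression on the circle layer. The property name and hex values below are assumptions for illustration, not the production style:

```javascript
// Sketch of the circle-color mapping; the 'type' values and colors
// are illustrative stand-ins for the real style definition.
const circleColor = [
  'match', ['get', 'type'],
  'cctv', '#00ffff',               // cyan
  'alpr', '#ff69b4',               // pink
  'facial_recognition', '#ff0000', // red
  'speed', '#ffff00',              // yellow
  'red_light', '#ffa500',          // orange
  'drone', '#800080',              // purple
  '#888888'                        // fallback for unknown types
];
```

The expression goes under paint.circle-color in the layer definition; MapLibre evaluates it per feature on the GPU-backed canvas, so restyling costs nothing at render time.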
At low zoom, tippecanoe clusters cameras into aggregated points. At high zoom, individual cameras appear with popups showing type, operator, source, and jurisdiction.
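The popup content is built straight from the feature's properties. A minimal sketch (popupHTML is our illustrative name):

```javascript
// Hypothetical popup builder: renders the four fields we show
// (type, operator, source, jurisdiction) as escaped HTML.
function escapeHTML(s) {
  return String(s).replace(/[&<>"]/g, (c) =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
}

function popupHTML(props) {
  return ['type', 'operator', 'source', 'jurisdiction']
    .filter((k) => props[k])
    .map((k) => `<div><b>${k}</b>: ${escapeHTML(props[k])}</div>`)
    .join('');
}
```

Wired up with a click handler on the camera layer, something like new maplibregl.Popup().setHTML(popupHTML(e.features[0].properties)) does the rest.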
Zero telemetry
The map makes exactly three types of network requests:
- CARTO basemap tiles (public CDN)
- One 23MB PMTiles download (our static file)
- Nominatim geocoding when you search an address (OSM’s free geocoding API)
We never know where you panned, what you searched, or how long you stayed. There’s no analytics endpoint to call because we didn’t build one.
What we’d do differently
If we were starting over:
- Cloudflare R2 from day one for Range support
- Multiple PMTiles files per region to reduce initial download
- Service Worker caching for offline access
But shipping beats optimizing. The map works. 414,000 cameras across 22 countries, served from a single static file, rendered entirely in your browser.