facadi

Computer-vision analysis of 120 head-on building facades — saliency models (DeepGaze IIE + III) and hand-rolled features (symmetry, rhythm, fractality, greenery, texture).
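Each hand-rolled feature reduces to one scalar per image. As an illustration, a left-right symmetry score could be a mirror difference; this is a hedged sketch of the idea, not the project's exact formula:

```python
import numpy as np

def symmetry_score(gray: np.ndarray) -> float:
    """Left-right mirror symmetry of a grayscale image with values in [0, 1].

    1.0 = perfectly symmetric. Illustrative sketch only; the project's
    actual symmetry feature may be defined differently.
    """
    mirrored = gray[:, ::-1]
    # Mean absolute difference between the image and its horizontal mirror
    return float(1.0 - np.abs(gray - mirrored).mean())
```

In practice the grayscale array would come from something like `np.asarray(Image.open(path).convert("L")) / 255.0`.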

📱 Mobile gallery — start here on phone

One vertical card per image. Tap to expand for the full scalar values plus map thumbnails. Includes sort, filter, and a "view all as" picker that swaps every thumbnail to a chosen feature map (saliency, symmetry diff, edge density, etc.).

🖥 Interactive gallery — best on desktop

Big sortable table. Click any column header to sort. Each cell shows the scalar value plus a thumbnail of the per-pixel map (when available). Heavy — designed for laptop/desktop.

📊 Findings summary

One-page distillation: per-bucket stats, correlations, and the features that correlate most strongly with hand-assigned face-likeness ratings.
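A feature-vs-rating correlation like this is typically a rank correlation. A minimal sketch (plain numpy, assuming no rank ties; `scipy.stats.spearmanr` is the usual choice in practice and also gives a p-value):

```python
import numpy as np

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation, no-ties sketch."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

def rank_features(features: dict, ratings: np.ndarray) -> list:
    """Rank features by |rho| against ratings.

    Feature names and data layout here are illustrative, not the project's.
    """
    rows = [(name, spearman(vals, ratings)) for name, vals in features.items()]
    return sorted(rows, key=lambda row: -abs(row[1]))
```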

🏆 Top & bottom by every feature

For each metric: top-8 and bottom-8 images. Useful sanity check — does the "most symmetric" image actually look symmetric?
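The page itself is just a ranking per metric; a sketch of the selection, assuming a hypothetical `{image_name: scalar}` layout:

```python
def top_bottom(scores: dict, k: int = 8):
    """Return the (top-k, bottom-k) entries for one metric.

    `scores` maps image name -> scalar feature value (hypothetical layout).
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k], ranked[-k:]
```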

📋 Original per-image gallery

A larger row-by-row table grouped by source directory. Best for scanning all images together.

Built with PyTorch (RTX 4090, CUDA 12.4), DeepGaze IIE/III (matthias-k/DeepGaze), OpenAI CLIP for filtering, scipy/numpy/PIL for hand-rolled features. 120 images filtered from a corpus of ~1,400 facade photos via CLIP zero-shot content classification + Hough gradient-orientation head-on detection + k-means diversity sampling on CLIP embeddings.
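The diversity-sampling step can be sketched as k-means over the CLIP embeddings, keeping the image nearest each centroid. This is a plain-numpy sketch under that assumption; the actual pipeline may use a library k-means (e.g. scikit-learn) and different hyperparameters:

```python
import numpy as np

def diversity_sample(embeds: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Pick k diverse items: k-means on embeddings, then the nearest
    item to each centroid. Lloyd's algorithm, illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random items
    centroids = embeds[rng.choice(len(embeds), k, replace=False)]
    for _ in range(iters):
        # Pairwise distances (n, k), then assign each item to its nearest centroid
        dists = np.linalg.norm(embeds[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = embeds[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Indices of the item closest to each final centroid (deduplicated)
    dists = np.linalg.norm(embeds[:, None] - centroids[None], axis=2)
    return sorted({int(i) for i in dists.argmin(axis=0)})
```

With ~1,400 embeddings and k=120 this selects one representative per cluster, which is what "diversity sampling" buys over random sampling.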