I started this website thinking it would be a straightforward one-page profile with a few sections, a contact form, and maybe a small gallery. In practice, it became a full engineering exercise: front-end UX decisions, deployment strategy, image delivery architecture, and operational debugging under pressure.
The core stack was Next.js running from a src app root, with a custom design layer and progressively more route structure. Early on, I focused on getting visual consistency right: clear typography, dark-theme contrast, a timeline-style experience section, and a contact flow that felt simple but was strict enough to reject junk input.
The first major feature expansion was the Track Days section. At the beginning, it used placeholder images. As real media landed, the requirements quickly evolved: auto-detect gallery folders, group images by session, build folder-first navigation, and generate meaningful URLs instead of brittle encoded paths. That pushed me from static page content into dynamic route design.
Route modeling became one of the most interesting parts. I wanted URLs that were human and meaningful, not random folder strings. So I moved to patterns like /track-days/&lt;location&gt;/&lt;month&gt;/&lt;year&gt;, /track-days/&lt;location&gt;/&lt;year&gt;, and /track-days/CSS/&lt;level&gt;. This required parsing folder names, normalizing slugs, mapping month names to numbers (zero-padded to two digits), and robust reverse resolution from a URL back to the actual media source.
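The folder-to-route mapping can be sketched as a pair of pure functions. The folder naming convention here (location-month-year) is an assumption for illustration, not the exact scheme the site uses:

```typescript
// Month names mapped to zero-padded two-digit numbers for the URL scheme.
const MONTHS: Record<string, string> = {
  january: "01", february: "02", march: "03", april: "04",
  may: "05", june: "06", july: "07", august: "08",
  september: "09", october: "10", november: "11", december: "12",
};

// Normalize a folder segment into a URL-safe slug.
function slugify(segment: string): string {
  return segment
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

// Parse an assumed "<location>-<month>-<year>" folder name into a route,
// or return null if the folder does not match the convention.
function folderToRoute(folder: string): string | null {
  const parts = folder.split("-");
  if (parts.length < 3) return null;
  const year = parts[parts.length - 1];
  const month = MONTHS[parts[parts.length - 2].toLowerCase()];
  if (!month || !/^\d{4}$/.test(year)) return null;
  const location = slugify(parts.slice(0, -2).join("-"));
  return `/track-days/${location}/${month}/${year}`;
}
```

Keeping this mapping pure makes reverse resolution a mirror-image function over the same convention, and both directions become trivially unit-testable.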
I also had to preserve backward compatibility while changing routes repeatedly. Redirects were essential: old links should not die when information architecture improves. That sounds minor, but it matters in real usage: bookmarks, shared links, and user habits break trust quickly when routes disappear.
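In Next.js, this kind of backward compatibility lives naturally in the config-level redirects hook. A minimal sketch, with the old path pattern invented for illustration:

```typescript
// next.config.ts sketch: permanent redirects keep old bookmarked paths alive
// after a route reshuffle. The source pattern below is a hypothetical old path.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async redirects() {
    return [
      {
        source: "/gallery/:folder",        // hypothetical pre-redesign path
        destination: "/track-days/:folder", // new human-readable route
        permanent: true, // 308: tells browsers and crawlers to update links
      },
    ];
  },
};

export default nextConfig;
```

Marking redirects permanent matters: it lets search engines and clients converge on the new information architecture instead of indexing both.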
The gallery UX then moved beyond just listing images. I introduced folder cover cards, drill-down pages, and then a fullscreen lightbox. On desktop, it needed keyboard navigation and directional controls. On mobile, the critical issue was close behavior: once an image fills the viewport, tap-to-dismiss can become unreliable. A dedicated close button became mandatory for usability, not optional polish.
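The lightbox's keyboard and close behavior can be modeled as a small pure reducer, which keeps the navigation logic testable separately from rendering. The state shape here is an assumption:

```typescript
// Lightbox navigation state: a nullable open index over a fixed image count.
type LightboxState = { index: number | null; total: number };

// Pure key handler: arrows cycle through images, Escape closes.
function lightboxReducer(state: LightboxState, key: string): LightboxState {
  if (state.index === null) return state; // lightbox closed: ignore keys
  switch (key) {
    case "ArrowRight":
      return { ...state, index: (state.index + 1) % state.total };
    case "ArrowLeft":
      return { ...state, index: (state.index - 1 + state.total) % state.total };
    case "Escape":
      // Escape mirrors the explicit close button. On mobile, that button is
      // the only reliable dismiss path once the image fills the viewport.
      return { ...state, index: null };
    default:
      return state;
  }
}
```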
Performance and repository health came next. Keeping thousands of large images inside public/ made git history heavy and deploys painful. The right long-term move was CDN-backed media. I migrated delivery to CloudFront and removed local gallery assets from the app bundle. That decoupled code deployments from media volume and made future updates operationally cleaner.
That migration surfaced a classic cloud reality: DNS being correct does not mean content is reachable. The domain resolved, but CloudFront returned 403 until origin permissions and path assumptions were aligned. Distinguishing app-level bugs from infrastructure-level access control was key. Once URL construction and origin access matched the actual object layout, media delivery stabilized.
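One thing that helped the debugging was funneling all media URL construction through a single function, so a 403 could be attributed to origin permissions rather than a stray encoding bug. A sketch, with the CDN host and key prefix as placeholders:

```typescript
// Placeholder CloudFront domain and assumed object-key layout.
const CDN_BASE = "https://media.example-cdn.net";
const KEY_PREFIX = "track-days";

// Build a CDN URL from a gallery folder and file name. Encoding per segment
// keeps spaces and unicode safe without double-encoding the key separators.
function mediaUrl(folder: string, file: string): string {
  const key = [KEY_PREFIX, folder, file].map(encodeURIComponent).join("/");
  return `${CDN_BASE}/${key}`;
}
```

When every image src comes from one place, "does this URL match the actual object key" becomes a single comparison instead of a hunt across components.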
On the forms side, email delivery was another journey. SMTP with consumer mailbox constraints was unreliable in this context, so I switched to Resend for transactional sending. Integrating it in Next route handlers was straightforward, but deployment environment behavior was not: variables that looked configured in the console still failed at runtime until build/runtime configuration was made explicit and consistent.
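A route handler along these lines is a reasonable sketch; it calls Resend's REST endpoint directly rather than the SDK, and the addresses and env var name are placeholders:

```typescript
type Contact = { name: string; email: string; message: string };

// Strict-enough validation without brittle regex: only the checks that matter.
function validateContact(body: Partial<Contact>): Contact | null {
  const { name, email, message } = body;
  if (!name?.trim() || !message?.trim()) return null;
  if (!email || !email.includes("@") || email.length > 254) return null;
  return { name: name.trim(), email, message: message.trim() };
}

// Next.js route handler sketch (app/api/contact/route.ts is an assumed path).
async function POST(req: Request): Promise<Response> {
  const contact = validateContact(await req.json());
  if (!contact) return Response.json({ error: "invalid input" }, { status: 400 });

  const res = await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.RESEND_API_KEY}`, // assumed var name
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "site@example.com", // placeholder verified sender
      to: "me@example.com",     // placeholder inbox
      subject: `Contact form: ${contact.name}`,
      text: contact.message,
    }),
  });
  return res.ok
    ? Response.json({ ok: true })
    : Response.json({ error: "send failed" }, { status: 502 });
}
```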
This led to tightening build configuration in Amplify. I introduced deterministic env injection for production builds and explicit checks so failures happen early with clear messages. In deployment pipelines, ambiguity is expensive. A build should fail loudly when required configuration is missing, rather than shipping a broken contact flow and discovering it through user reports.
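An explicit check like the following captures the idea; the variable names are assumptions, and the point is that it runs at build or startup so a gap fails loudly with a complete list:

```typescript
// Assumed required variables for this site's build.
const REQUIRED_ENV = ["RESEND_API_KEY", "NEXT_PUBLIC_MEDIA_BASE_URL"] as const;

// Throw one clear error naming every missing variable, instead of letting the
// first consumer fail vaguely at runtime.
function assertEnv(env: Record<string, string | undefined> = process.env): void {
  const missing = REQUIRED_ENV.filter((name) => !env[name]?.trim());
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```

Calling this from the build script (or a config module imported at startup) turns "looked configured in the console" into a verified precondition.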
I also hit ecosystem friction around optional native dependencies (lightningcss, Tailwind oxide, SWC variants) in cloud CI. Locally everything can feel fine, then build images fail on missing platform bindings. The fix is not glamorous, but it is practical: install the required Linux binaries explicitly in the build phases, and keep framework configuration aligned so tools don't make conflicting assumptions about the project root.
Another lesson was branch discipline. Once git-flow was introduced, it became clear how easy it is to commit to the wrong branch when iterating quickly. A feature appearing locally but not in production can be a branch promotion issue, not an implementation issue. I now treat branch context as part of every deployment checklist.
On the content side, I built out a Blog route to document this exact process. The value is not just in publishing final outcomes, but in recording decisions: why I moved media to CDN, why I changed URL semantics, why I dropped brittle regex patterns in browser validation, why mobile interaction details matter more than they seem in desktop testing.
If I summarize the project in one sentence: this site became a miniature production system, not a static portfolio. It includes routing strategy, media architecture, CI/CD hardening, integration reliability, and UX iteration loops driven by real feedback.
What I would do from day zero next time: define route conventions early, externalize heavy media immediately, set deployment env checks before first release, and create a tiny operational runbook for build/deploy issues. Those four decisions alone remove most late-stage friction.
The final takeaway is simple: a personal website can be a serious engineering artifact when treated as one. It is small enough to iterate fast, but real enough to expose all the habits that matter in larger systems — clarity, repeatability, observability, and respect for the user experience on the devices people actually use.