ABOUT
PROFILE
I'm a full-stack engineer with 3+ years of production experience building secure SaaS platforms for banking and government sectors. I specialize in the full delivery cycle, taking ideas from requirements through to shipped product, and I thrive in small teams where ownership and craft both matter.
I care deeply about design. Not just how things look, but how they work, how they're maintained, and how they feel to build. Being a self-sufficient engineer means I can contribute meaningfully at every layer of a product.
Outside of work, I write on Dev.to, go running to clear my head, and regularly bounce business ideas around with friends. I'm drawn to the intersection of engineering and entrepreneurship.
When I'm not at a keyboard, I'm usually reading, playing board games, or running a D&D campaign with friends. The Phoenix Project and The Unicorn Project are among my favourites, which probably says everything about how I think about software teams and delivery.

SKILLS
EXPERIENCE
DEC 2022 — JAN 2026
Full-Stack Developer
National Digital ID Co., Ltd.
Designed, built, and maintained a secure SaaS platform integrating enterprise banking systems with government identity services, supporting ~100 active corporate users.
Created a proof of concept for a verifiable credential wallet application (Ionic).
Implemented secure data exchange workflows, including validation, transformation, and compliance controls for regulated digital transactions.
Built integration tooling and internal utilities to support customer onboarding, testing, and verification of external data connections.
Developed backend services using Node.js and NestJS, focused on reliability, traceability, and fault handling across system boundaries.
Deployed and operated services using Azure, Docker, and CI/CD pipelines, improving release reliability and reducing operational risk.
Contributed to system design decisions around data integrity, authentication, and auditability in high-trust environments.
RESEARCH
Master's Dissertation · 2025
Comparative Analysis of Iterative Prompt Refinement and Multi-dimensional Quality in LLM-Generated Code
University of Otago
VIEW PDF →
KEY FINDINGS
Iterative prompt refinement does not reliably improve code quality. Across the evaluated models, performance does not consistently improve with additional prompt iterations. Instead, different models exhibit different trajectories:
Claude 4 Opus shows a steady decline in correctness over iterations.
GPT-4.1 demonstrates diminishing returns after the first iteration, with later iterations sometimes degrading results.
Gemini 2.5 Pro follows a dip–recovery pattern, where performance initially worsens before partially improving.
Correctness is the dominant factor determining overall code quality. Maintainability and security scores remain consistently high (near ceiling levels) across tasks and models. Because these dimensions show little variation, the overall quality score is largely driven by correctness.
Cross-dimensional relationships between evaluation metrics are weak. Changes in one quality dimension rarely correlate with changes in others. Iteration-to-iteration correlations between correctness, maintainability, and security are small or statistically insignificant, suggesting that improvements or declines in one metric do not reliably predict changes in another.
Most actionable improvement occurs in early iterations. The most meaningful changes in correctness typically occur between iteration 0 → 1, and occasionally 1 → 2. Beyond this point, additional iterations tend to yield minimal or negative returns.
Task type influences performance for some models but not all.
GPT-4.1 shows a moderate task-category effect, performing better on data structures and algorithms tasks than on OOP tasks.
Claude 4 Opus shows a weaker trend of task dependence.
Gemini 2.5 Pro shows no statistically reliable task-category effect.
Structural quality metrics exhibit ceiling effects. Maintainability and security scores remain consistently high regardless of task category or iteration, limiting their usefulness as discriminative evaluation metrics under the current setup.
Model-specific refinement strategies are necessary. The results suggest that prompt iteration strategies should be model-aware and bounded:
Keep Claude’s iteration loops short due to declining correctness.
Limit GPT-4.1 to one or two targeted refinements.
Apply incremental monitoring and rollback strategies for Gemini.
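The bounded, model-aware strategy above can be sketched as a short refinement loop that keeps the best candidate seen so far and rolls back when an iteration degrades correctness. This is an illustrative sketch, not code from the dissertation; `generate` and `score` are hypothetical stand-ins for an LLM call and a correctness evaluator.

```python
# Illustrative sketch (assumptions: `generate` wraps an LLM call that takes a
# prompt and optional previous code; `score` returns correctness in [0, 1]).
from typing import Callable, Optional, Tuple

def bounded_refinement(
    generate: Callable[[str, Optional[str]], str],  # (prompt, prev_code) -> code
    score: Callable[[str], float],                  # code -> correctness score
    prompt: str,
    max_iterations: int = 2,  # keep the loop short, per the findings above
) -> Tuple[str, float]:
    # Iteration 0: initial generation.
    best_code = generate(prompt, None)
    best_score = score(best_code)
    code, current = best_code, best_score

    for _ in range(max_iterations):
        candidate = generate(prompt, code)
        candidate_score = score(candidate)
        if candidate_score < current:
            # Rollback: discard a refinement that made things worse.
            candidate, candidate_score = code, current
        if candidate_score > best_score:
            # Track the best candidate seen across all iterations.
            best_code, best_score = candidate, candidate_score
        code, current = candidate, candidate_score

    return best_code, best_score
```

The `max_iterations` cap reflects the finding that gains concentrate in iterations 0 → 1 and occasionally 1 → 2, while the rollback check addresses the dip-and-decline trajectories observed for Gemini and Claude.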
PROJECTS
Delivered a production restaurant website for a Switzerland-based client, enabling non-technical staff to manage multilingual content through Contentful CMS. Implemented dual-layer localization combining next-intl for UI strings and Contentful locales for dynamic content. Built a branded email notification system using Resend and React Email.
CONTACT
SEND MESSAGE
DIRECT CONTACT
RESPONSE TIME
Typically within 24–48 hours. Open to full-time, contract, and remote roles.

