
LifeHub - AI Challenge for Cross-Platform Apps - Hot Design Showcase

2025-11-21 05:42:38

This is a submission for the AI Challenge for Cross-Platform Apps - Hot Design Showcase

What I Built

I built LifeHub, a cross-platform productivity and wellbeing assistant, using the Uno Platform.

It combines daily planning, habit tracking, mood logging, reminders, and quick-action widgets into a clean and responsive UI.

The entire interface was refined using the Hot Design Agent, which helped speed up UI experimentation and real-time adjustments.

Original Design Reference

I used a minimal card-based dashboard layout as the inspiration.

Key design elements I referenced:

  • Soft neumorphic shadow style
  • Rounded cards and panels
  • Neutral and modern color palette
  • Clean typography and spacing hierarchy

(Insert original UI screenshots here.)

Demo

Hot Design Agent in Action

Hot Design Agent helped with:

  • Generating alternate layouts for the dashboard
  • Adjusting spacing, margins, and corner radius in real-time
  • Choosing a cohesive color palette
  • Refining iconography and typography scale
  • Fixing UI alignment issues through quick suggestions

(Insert Hot Design Agent interaction screenshots or recordings.)

Design Process

  • Started with a base Uno template and structured the main layout with Grid and StackPanel.
  • Used Hot Reload extensively to adjust padding, spacing, and color themes without rebuilding.
  • Set up MVVM bindings for tasks, habits, and mood entries.
  • Implemented animations using Implicit Animations to add smooth transitions.
  • Ensured adaptive behavior using AdaptiveTriggers for mobile, tablet, and desktop screens.
  • Final UI refinements were done with feedback from the Hot Design Agent.

Key Takeaways

  • AI-assisted UI design significantly speeds up iteration cycles.
  • Hot Reload drastically reduces development time—changes feel instant.
  • Uno Platform enables true cross-platform development with a single codebase.
  • Hot Design Agent acts as a real-time co-designer, improving layout, structure, and visual clarity.
  • This approach felt more fluid and creative compared to traditional UI development.

Run the Windows desktop version

dotnet run --project LifeHub/LifeHub.csproj --framework net10.0-desktop

Run the WebAssembly version

dotnet run --project LifeHub/LifeHub.csproj --framework net10.0-browserwasm

Introduction to the Confluent REST Proxy

2025-11-21 05:39:04

1. Why Do We Need the REST Proxy?

Kafka works best with Java and other JVM-based languages.
Kafka also has clients for other languages:

  • Node.js
  • Go
  • Python
  • C#
  • Rust
  • PHP

But these clients do not always support Avro well.
Some of them:

  • cannot register schemas
  • cannot handle binary Avro easily
  • do not support schema evolution
  • require many libraries

The solution:

✔️ Confluent REST Proxy

It allows any application to use simple HTTP requests (POST, GET) to send or read Kafka messages without writing Kafka code.

Every language can make HTTP requests, so every language can talk to Kafka through the REST Proxy.

2. Where Does the REST Proxy Fit?

Here is your normal system:

[ Java Producer ] → Kafka Cluster + Schema Registry → [ Java Consumer ]

Now we introduce REST Proxy:

[ Any Language Producer ] → HTTP → REST Proxy → Kafka → Schema Registry

And for consumers:

[ Any Language Consumer ] ← HTTP ← REST Proxy ← Kafka

The REST Proxy communicates with:

  • Kafka brokers
  • Schema Registry

It does the “Kafka work” for you.

3. What Does REST Proxy Do?

The REST Proxy acts as a translator.

It allows you to:

✔ Produce messages to Kafka via HTTP POST

✔ Consume messages from Kafka via HTTP GET

✔ Use Avro, JSON, or raw bytes

✔ Automatically register schemas

✔ Automatically validate schemas

✔ Easily integrate with languages that have poor Kafka libraries

It simplifies everything.
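To make this concrete, here is a minimal Python sketch using the requests library. It assumes the REST Proxy is running on the default Docker Compose port (http://localhost:8082) and that a topic named demo-topic already exists; the consumer group and instance names are made up for illustration.

import requests

REST_PROXY = "http://localhost:8082"   # assumption: default Docker Compose port
TOPIC = "demo-topic"                   # hypothetical, pre-created topic

# --- Produce: one HTTP POST with a JSON-embedded message ---
resp = requests.post(
    f"{REST_PROXY}/topics/{TOPIC}",
    headers={
        "Content-Type": "application/vnd.kafka.json.v2+json",
        "Accept": "application/vnd.kafka.v2+json",
    },
    json={"records": [{"key": "user-1", "value": {"action": "login"}}]},
)
resp.raise_for_status()
print(resp.json())                     # per-record partition/offset info

# --- Consume: create a consumer instance, subscribe, then GET records ---
group = "demo-group"                   # hypothetical consumer group
base_uri = requests.post(
    f"{REST_PROXY}/consumers/{group}",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"name": "demo-instance", "format": "json", "auto.offset.reset": "earliest"},
).json()["base_uri"]                   # note: may use the proxy's advertised hostname

requests.post(
    f"{base_uri}/subscription",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"topics": [TOPIC]},
)

# The first poll can come back empty while the consumer joins the group.
records = requests.get(
    f"{base_uri}/records",
    headers={"Accept": "application/vnd.kafka.json.v2+json"},
).json()
print(records)

# Clean up the consumer instance when done.
requests.delete(base_uri, headers={"Content-Type": "application/vnd.kafka.v2+json"})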

4. Why REST Proxy Is Useful

✔ Works with any programming language

All languages support HTTP calls.

✔ Easy to produce Avro

Even if the language has no Avro serializer.

✔ Automatically works with Schema Registry

Schema Registry is fully integrated.

✔ Easy debugging

HTTP requests are easier to test than Kafka clients.

5. What Are the Downsides?

❌ It is slower than direct Kafka clients

Why?

Because:

  • HTTP is slower than the Kafka wire protocol
  • REST Proxy does extra work
  • Messages are serialized and validated through REST Proxy first

Performance cost:
3x–4x slower than normal Kafka clients

For most use cases this is okay, but high-performance systems should use Kafka clients directly.

❌ Requires batching on the application side

The REST Proxy will not batch automatically.
If you want high throughput, your application must batch multiple messages together before sending.
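A rough sketch of what that client-side batching could look like, reusing the assumed endpoint and hypothetical topic from the earlier sketch (the batch size of 100 is arbitrary):

import requests

REST_PROXY = "http://localhost:8082"   # assumption: default Docker Compose port
TOPIC = "demo-topic"                   # hypothetical topic
HEADERS = {
    "Content-Type": "application/vnd.kafka.json.v2+json",
    "Accept": "application/vnd.kafka.v2+json",
}

buffer = []

def send(value):
    # Buffer the record; only flush once a full batch has accumulated.
    buffer.append({"value": value})
    if len(buffer) >= 100:
        flush()

def flush():
    if not buffer:
        return
    resp = requests.post(
        f"{REST_PROXY}/topics/{TOPIC}",
        headers=HEADERS,
        json={"records": buffer},      # many records, one HTTP round trip
    )
    resp.raise_for_status()
    buffer.clear()

for i in range(250):
    send({"event_id": i})
flush()                                # flush the remainder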

6. REST Proxy Is Already in Your Docker Setup

Good news:

If you're using Confluent’s Docker Compose, the REST Proxy is already available at a port like:

http://localhost:8082

So you can start using it immediately.
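A quick way to verify it is reachable, assuming that default port, is to list the topics the proxy can see:

import requests

resp = requests.get(
    "http://localhost:8082/topics",
    headers={"Accept": "application/vnd.kafka.v2+json"},
)
print(resp.json())   # e.g. ["_schemas", "demo-topic", ...]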

7. What We Will Learn in This Section

In this REST Proxy module you will learn:

🔹 How REST Proxy works internally

  • Supported endpoints
  • Supported API versions
  • How messages are formatted

🔹 How to create topics via REST Proxy

🔹 How to produce messages

  • In binary
  • In JSON
  • In Avro

🔹 How to consume messages

  • With different formats
  • With schemas
  • With offsets

🔹 How to deploy and scale the REST Proxy

  • Production tips
  • High availability
  • Load balancing

1. V1 vs V2 – Which REST Proxy API Should We Use?

1.1 Background: Old Kafka APIs vs New Kafka APIs

A long time ago, Kafka had:

  • an old producer API
  • an old consumer API

These were used in early versions of Kafka.

  • The old consumer stored offsets in Zookeeper.
  • Then Kafka introduced a new consumer and a new producer that store offsets in Kafka itself, not in Zookeeper.

All modern code and all modern courses (including this one) use the new producer/consumer APIs.

1.2 How REST Proxy fits into this history

The REST Proxy has existed for a long time too.

  • Originally, it was built on top of the old Kafka consumer/producer APIs. That version was called V1 in the REST Proxy.
  • Later, the REST Proxy was updated to use the new Kafka consumer/producer APIs. That new version is called V2 in the REST Proxy.

So:

  • V1 REST Proxy API → old Kafka APIs
  • V2 REST Proxy API → new Kafka APIs (what we use today)

1.3 Which one should you use?

This part is very simple:

Always use V2.
Ignore V1.
If you see .../v1/... in examples – that is legacy.

  • V1 is old and will eventually disappear.
  • V2 has better performance.
  • V2 matches everything you already know about Kafka.

So the rule is simple:

“If the URL says /v1/ – don’t use it.
In this module and in real projects, we always use the /v2/ REST Proxy APIs.”

2. REST Proxy Content-Type Header – How to Build It

When you send an HTTP request to the REST Proxy, you must set a Content-Type header that tells the proxy:

  • what kind of Kafka data you’re sending (Avro, JSON, binary)
  • which API version you’re using (V2)
  • how the HTTP body itself is encoded (JSON)

There is a pattern for the Content-Type.

Let’s break it down piece by piece.

2.1 The Parts of the Content-Type

General pattern:

application/vnd.kafka.<embedded-format>.v2+json

Let’s understand each piece:

  1. application
    – the normal MIME type prefix.

  2. vnd.kafka
    – means “Kafka-specific vendor type”.

  3. <embedded-format>
    – tells the REST Proxy which Kafka message format you are using: json, binary, or avro.

  4. .v2
    – the REST Proxy API version. We use v2 only.

  5. +json
    – tells the REST Proxy that the HTTP body is encoded as JSON text.

2.2 Some concrete examples

Example 1: JSON messages

You want to send JSON messages to Kafka via V2:

Content-Type: application/vnd.kafka.json.v2+json
  • Kafka message format: json
  • REST Proxy API version: v2
  • HTTP body format: JSON (+json)

Example 2: Avro messages

You want to send Avro messages to Kafka via V2:

Content-Type: application/vnd.kafka.avro.v2+json
  • Kafka message format: avro
  • REST Proxy version: v2
  • HTTP body format: JSON (+json)

The payload is described in JSON, but REST Proxy converts it to Avro using Schema Registry.
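As a sketch of what that looks like from the client side (hypothetical topic avro-demo, trivial one-field schema), the Avro schema travels as a JSON string inside the request body and the REST Proxy registers it with Schema Registry for you:

import json
import requests

REST_PROXY = "http://localhost:8082"   # assumption: default Docker Compose port

value_schema = json.dumps({
    "type": "record",
    "name": "User",
    "fields": [{"name": "name", "type": "string"}],
})

resp = requests.post(
    f"{REST_PROXY}/topics/avro-demo",
    headers={
        # embedded format = avro, API version = v2, HTTP body = JSON
        "Content-Type": "application/vnd.kafka.avro.v2+json",
        "Accept": "application/vnd.kafka.v2+json",
    },
    json={
        "value_schema": value_schema,
        "records": [{"value": {"name": "alice"}}],
    },
)
resp.raise_for_status()
print(resp.json())   # includes the value_schema_id assigned by Schema Registry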

Example 3: Binary messages

You want to send raw binary data:

Content-Type: application/vnd.kafka.binary.v2+json

2.3 Easy way to remember

Just memorize this template:

application/vnd.kafka.<json|avro|binary>.v2+json

And remember:

  • Always use .v2 → because we use V2 API.
  • Always end with +json → because the HTTP request body is JSON.
  • Change only the <embedded-format> to: json, avro, or binary.

3. Quick Summary

  • Kafka used to have old APIs; REST Proxy V1 was built on top of those.
  • New Kafka APIs → REST Proxy V2 → this is what we use today.
  • Always use V2 REST Proxy endpoints.
  • REST Proxy Content-Type follows a pattern:
  application/vnd.kafka.<embedded-format>.v2+json

where <embedded-format> = json, avro, or binary.

From 20 to 24 LTS: Safe Way to Upgrade Ubuntu on DigitalOcean

2025-11-21 05:36:58

Upgrading an old production server is not a popular topic today. Most teams run Docker or Kubernetes, or at least keep the OS fresh. But sometimes you inherit a legacy VM full of configs, and you just need to push it to a modern LTS without breaking anything.

I had exactly this case on DigitalOcean. Here's a quick, real-world rundown of what I actually did.

The Problem

The project was running on Ubuntu 20.04 with an outdated Nginx. A direct upgrade from 20.04 to 24.04 LTS is not possible. I needed a way to bring everything up to date quickly and safely while keeping downtime to a minimum.

Why It Matters

The interim releases on this path (kinetic 22.10, lunar 23.04, mantic 23.10) are all EOL and no longer receive security updates. Running outdated releases exposes your server to vulnerabilities and leaves you without support. The jump to 24.04 LTS is significant: you get five years of support, better performance, and peace of mind.

How I Did It

To avoid downtime, I leaned on DigitalOcean snapshots + temp droplet. Then I upgraded the system in steps:

20.04 → 22.10 → 23.04 → 23.10 → 24.04 LTS + Nginx upgrade

It's a bit long, but it gets the job done if you're stuck on an old upgrade path.

1. Prep the Server (No Downtime Yet)

First things first:

  • Take a snapshot of your existing droplet.
  • Use that snapshot to create a temporary droplet.
  • Set up a Floating IP so you can point your domain to the temp droplet when it's ready.

This gives you a safe rollback if something goes wrong.

2. Do the Upgrades

Upgrade 22.10 → 23.04

sudo sed -i 's|http://mirrors.digitalocean.com/ubuntu/|http://old-releases.ubuntu.com/ubuntu/|g' /etc/apt/sources.list && \
sudo sed -i 's|http://security.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt autoremove --purge -y && \
sudo apt install -y ubuntu-release-upgrader-core && \
sudo sed -i 's/kinetic/lunar/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot

Upgrade 23.04 → 23.10

sudo sed -i 's/lunar/mantic/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot

Upgrade 23.10 → 24.04 LTS

sudo sed -i 's|http://old-releases.ubuntu.com/ubuntu|http://archive.ubuntu.com/ubuntu|g' /etc/apt/sources.list && \
sudo sed -i 's/mantic/noble/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot

Now you're on Ubuntu 24.04 LTS. Double-check with:

lsb_release -a

3. Upgrade to the Latest Nginx

Don't skip this backup step:

cp -r /etc/nginx /root/nginx-backup

Then install the latest Nginx package:

curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu noble nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update
sudo apt install nginx
nginx -v

You should see version 1.28.x or newer. Test your configuration to make sure it still parses, then restart:

sudo nginx -t
sudo systemctl restart nginx

4. Point the Floating IP Back

Wait a few minutes. Check your app. Run your health checks, monitoring, whatever you've got. If you have automated tests, don't forget to run them.
Look at the logs and metrics to make sure everything is in order. Everything working? Cool. Move on to the next step.

Once you're sure everything's stable, switch the Floating IP back from the temp droplet to your freshly upgraded one.

Hang on to the temp droplet and the snapshot for a few days. If nothing catches fire, you can remove them.

Timeline

Total time: about one hour. Most of that is just waiting for reboots and packages to download. You can grab coffee and check on progress every few minutes.

Final Thoughts

This isn't a workflow I want to repeat often. If you can move to Docker or rebuild your server from scratch — do it.

But sometimes legacy systems don't give you that choice.
And when you must upgrade an old Ubuntu machine step by step, this method works. Fast, safe, and almost zero downtime.

Author's Note

Thanks for sticking around!
Find me on dev.to and LinkedIn, or check out my work on GitHub.

Notes from real-world Laravel.

5 APIs Every Indie Hacker Needs for Their MVP

2025-11-21 05:31:56

Stop building infrastructure. Start shipping features.

The golden rule of Indie Hacking: If it's not your core value prop, outsource it.

In 2025, you can build a million-dollar SaaS by just gluing together 5 powerful APIs. Here is the stack I recommend for speed and scale.

1. Auth & Database: Supabase

Stop writing login forms. Stop configuring Postgres.
Supabase gives you a full backend-as-a-service.

  • Why: Row Level Security (RLS) means you can query your DB directly from the frontend securely.
  • Free Tier: Generous.

2. Payments: Lemon Squeezy (or Stripe)

Stripe is the king, but Lemon Squeezy handles the "Merchant of Record" headache (taxes, VAT) for you.

  • Why: If you sell globally, calculating VAT for 50 countries is a nightmare. Lemon Squeezy does it for you.

3. AI/LLM: OpenAI (via Vercel AI SDK)

If you aren't adding AI to your app, your competitor is.

  • Why: The Vercel AI SDK makes streaming responses (that cool typing effect) trivial in Next.js.

4. Emails: Resend

AWS SES is cheap but painful to set up. SendGrid has bad deliverability.

  • Why: Resend has the best DX (Developer Experience) of any email provider. You can write your emails in React.

5. Social Data: SociaVault

If your app needs to interact with social media (Influencer search, Trend tracking, Content analysis), do NOT build your own scraper.

  • Why: Scraping is a maintenance black hole. SociaVault gives you clean JSON data from TikTok, Instagram, and YouTube without the headaches of proxies or captchas.
  • Use Case: "Monitor my competitor's TikTok growth" or "Find influencers with >5% engagement".

The "No-Backend" Stack

With these 5, you barely need a backend server.

  • Frontend: Next.js (hosted on Vercel)
  • DB/Auth: Supabase
  • Payments: Lemon Squeezy
  • Logic: OpenAI + SociaVault
  • Comms: Resend

You can ship an MVP in a weekend.

What's in your stack? Let me know in the comments. 👇

Building a Monte Carlo Risk Engine in PyTorch: Pricing, xVA, Exposure Simulation & Wrong-Way Risk

2025-11-21 05:23:35

Quantitative finance relies heavily on Monte Carlo simulation engines to value derivatives, measure exposures, and compute xVA metrics like CVA, DVA, or MVA.

While many libraries exist for basic option pricing, very few open-source tools provide a full multi-model, multi-factor risk engine that can simulate:

  • interest rates
  • equity prices
  • stochastic default intensities
  • hybrid correlated factors
  • exposure profiles (EE, EPE, ENE, EEPE, PFE)
  • xVA metrics (CVA implemented, extendable to FVA/MVA/KVA)
  • adjoint algorithmic differentiation (AAD)
  • regression for American/Bermudan options

This article introduces the Monte Carlo Risk Engine, a PyTorch-powered framework for financial modeling, exposure simulation, and xVA.

All code is open source:
👉 https://github.com/konstantineder/montecarlo-risk-engine

Why Another Monte Carlo Engine?

Banks rely on complex xVA engines. Academia uses theory-heavy models. Python has many pricing libraries — but they:

  • rarely support multi-factor correlated risk
  • almost never include credit models (CIR++)
  • don’t compute exposures or xVA
  • don’t allow AAD
  • don’t offer American option valuation
  • lack wrong-way risk (WWR) analysis

I wanted a framework that:

  • is modular
  • supports any number of risk models
  • allows correlation between factors
  • runs fast thanks to PyTorch vectorization
  • supports AAD, regression, and early exercise
  • works as a true risk engine, not just a toy pricer

So I built one.

Architecture Overview

The engine consists of several core components:

SimulationController

Runs Monte Carlo paths, orchestrates model evolution, handles time stepping.

ModelConfig

Combines multiple stochastic models into a joint hybrid model with a correlation matrix (e.g., Vasicek interest rates + CIR++ intensity to simulate CVA).

Metrics

A pluggable API that includes:

  • PV
  • EE, EPE, ENE
  • EEPE
  • CE
  • PFE
  • CVA

Each metric returns both the estimate and its Monte Carlo standard error.

Products

Supports a wide range of instruments:

  • European, Bermudan, American options
  • Bond options
  • Bermudan swaptions
  • Interest rate swaps
  • Zero, coupon, and floating-rate bonds
  • Basket options
  • Barrier and binary options

Regression-based continuation value estimation (Longstaff–Schwartz) is built in.

Stochastic Models

  • Black–Scholes (single-asset & multi-asset)
  • Vasicek
  • Hull–White
  • CIR++
  • Bootstrapped hazard rates from CDS data

All models are implemented in vectorized PyTorch.

Example: Exposure Profile of a Bermudan Swaption

This figure shows Expected Exposure (EE) and Potential Future Exposure (PFE) for a Bermudan swaption computed across 100,000 paths.

The pull-to-par behavior, early exercise shaping, and long-tail exposure are all captured correctly.

Wrong-Way Risk Example: CVA vs Correlation for a Zero-Coupon Bond

One of the most powerful features of a hybrid engine is analyzing xVA and WWR (wrong-way risk).

Consider a 10Y payer swap. CVA depends on:

  • exposure (which, for a payer swap, increases when interest rates rise)
  • default intensity (CIR++)
  • correlation between short rate and intensity

If interest rates and intensity are positively correlated, we get wrong-way risk → CVA increases. If they are negatively correlated → right-way risk → CVA decreases.

Result:

This reflects exactly the expected economic behavior!

Example: CVA under Wrong-Way Risk for a 10Y Payer IRS

The following code computes the CVA of a 10-year payer interest rate swap using:

  • Vasicek for stochastic interest rates
  • CIR++ for stochastic credit intensities
  • strong positive correlation (high WWR)
  • A baseline CVA from an uncorrelated simulation
  • A statistical significance test comparing the two (3-sigma rule)
import numpy as np

from common.packages import *
from common.enums import SimulationScheme
from controller.controller import SimulationController

from models.vasicek import VasicekModel
from models.cirpp import CIRPPModel
from models.model_config import ModelConfig

from products.bond import Bond
from products.swap import InterestRateSwap, IRSType

from metrics.cva_metric import CVAMetric
from metrics.risk_metrics import RiskMetrics


# ----------------------------
# Interest Rate Model (Vasicek)
# ----------------------------
interest_rate_model = VasicekModel(
    calibration_date=0.0,
    rate=0.03,
    mean=0.05,
    mean_reversion_speed=0.02,
    volatility=0.2,
    asset_id="irs"
)


# ----------------------------
# Bootstrapped Hazard Rates
# ----------------------------
hazard_rates: dict[float, float] = {
    0.5: 0.006402303360855854,
    1.0: 0.01553038972325307,
    2.0: 0.009729741230773657,
    3.0: 0.015552544648116201,
    4.0: 0.021196186202801115,
    5.0: 0.02284319986706472,
    7.0: 0.010111423894480876,
    10.0: 0.00613267811172937,
    15.0: 0.0036969930706003337,
    20.0: 0.003791311459217732
}

counterparty_id = "General Motors Co"

intensity_model = CIRPPModel(
    calibration_date=0.0,
    y0=0.0001,
    theta=0.01,
    kappa=0.1,
    volatility=0.02,
    hazard_rates=hazard_rates,
    asset_id=counterparty_id
)


# -------------------------------------------
# Hybrid Model Configuration (with WWR: ρ≈1)
# -------------------------------------------
inter_correlation_matrix = np.array([0.99999])

model_config = ModelConfig(
    models=[interest_rate_model, intensity_model],
    inter_asset_correlation_matrix=inter_correlation_matrix,
)


# ----------------------------
# Payer Interest Rate Swap
# ----------------------------
maturity = 10.0
irs = InterestRateSwap(
    startdate=0.0,
    enddate=maturity,
    notional=1.0,
    fixed_rate=0.03,
    tenor_fixed=0.25,
    tenor_float=0.25,
    irs_type=IRSType.PAYER,
    asset_id="irs"
)
portfolio = [irs]


# ----------------------------
# CVA Metric Setup
# ----------------------------
exposure_timeline = np.linspace(0.0, maturity, 100)
cva_metric = CVAMetric(counterparty_id=counterparty_id, recovery_rate=0.4)

risk_metrics = RiskMetrics(
    metrics=[cva_metric],
    exposure_timeline=exposure_timeline
)


# ----------------------------
# Monte Carlo Simulation
# ----------------------------
num_paths_mainsim = 100000
num_paths_presim = 100000
num_steps = 10

sc = SimulationController(
    portfolio=portfolio,
    model=model_config,
    risk_metrics=risk_metrics,
    num_paths_mainsim=num_paths_mainsim,
    num_paths_presim=num_paths_presim,
    num_steps=num_steps,
    simulation_scheme=SimulationScheme.EULER,
    differentiate=False
)

sim_results = sc.run_simulation()


# ----------------------------
# CVA Estimate + MC Error
# ----------------------------
cva_irs = sim_results.get_results(0, 0)[0]
cva_irs_error = sim_results.get_mc_error(0, 0)[0]


# --------------------------------------------------------------
# Baseline CVA (uncorrelated case), from reference simulation
# --------------------------------------------------------------
cva_uncorr = 1.114576156484541
cva_uncorr_error = 0.0024446898428056294


# --------------------------------------------------------------
# Statistical Test: Is WWR CVA > Uncorrelated CVA?
# Uses 3-sigma significance test on the difference.
# --------------------------------------------------------------
diff = cva_irs - cva_uncorr
se_diff = (cva_irs_error**2 + cva_uncorr_error**2) ** 0.5

assert diff > 3 * se_diff, "WWR CVA is not significantly higher than baseline"

AAD: Sensitivities With PyTorch Autodiff

The engine supports adjoint algorithmic differentiation, made possible by PyTorch’s computation graph.
You can compute Greeks (sensitivities) by enabling differentiation:


sc = SimulationController(..., differentiate=True)

This allows:

  • pathwise sensitivities
  • efficient Reverse-Mode differentiation
  • differentiable regression layers for Bermudan options
  • differentiable risk metrics

These are advanced capabilities typical of bank-level xVA systems.
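To illustrate the principle outside the engine, here is a tiny standalone PyTorch sketch (not the engine's API; the Black–Scholes parameters are made up) that prices a European call by Monte Carlo and obtains pathwise delta and vega in a single backward pass:

import math
import torch

torch.manual_seed(0)

# Made-up Black-Scholes parameters, for illustration only
S0    = torch.tensor(100.0, requires_grad=True)   # spot
sigma = torch.tensor(0.20,  requires_grad=True)   # volatility
r, T, K, n_paths = 0.03, 1.0, 100.0, 200_000

# Terminal prices under GBM (a single step is exact for GBM)
z  = torch.randn(n_paths)
ST = S0 * torch.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)

# Discounted Monte Carlo price of the call
payoff = torch.clamp(ST - K, min=0.0)
price  = math.exp(-r * T) * payoff.mean()

# One reverse-mode pass accumulates gradients w.r.t. every leaf parameter
price.backward()
print(f"price ~ {price.item():.4f}, delta ~ {S0.grad.item():.4f}, vega ~ {sigma.grad.item():.4f}")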

Example:

As an example, the following plots demonstrate pathwise sensitivities (Greeks) computed via AAD.

Each line shows the derivative of the option value with respect to a chosen model parameter (e.g., spot, volatility, interest rate). These sensitivities were obtained in a single backward pass thanks to reverse-mode autodiff — something that would require multiple full re-valuations in traditional bump-and-revalue Monte Carlo.

This highlights three important points:

  1. Efficiency: Reverse-mode AAD computes all Greeks at roughly the cost of a single simulation.
  2. Stability: Sensitivities are smooth and differentiable, avoiding the noise of bump-and-revalue methods.
  3. Generality: The same mechanism works for:
    • Bermudan/early exercise products (via differentiable regression),
    • CVA and exposure metrics,
    • Multi-factor hybrid SDE models,
    • Discontinuous payoffs when smoothing is enabled.

Below is an example sensitivity plot produced by the engine:

🐳 Docker Support

A full environment is available via Docker:

docker build -t mcengine .
docker run -it mcengine python tests/pv_tests/pv_european_option.py

No dependencies, no headaches — runs out of the box.

Source Code

👉 https://github.com/konstantineder/montecarlo-risk-engine
If you like the engine, consider ⭐ starring the repo or opening issues/suggestions!

Final Thoughts

This project aims to provide a research-grade, production-style Monte Carlo engine that:

  • is transparent
  • is extendable
  • supports hybrid multi-factor models
  • runs on PyTorch for speed & autodiff
  • handles real-world xVA phenomena like WWR
  • produces correct, validated numerical results

If you're a quant, quant dev, fintech engineer, or researcher, I hope you'll find it useful.

If you'd like a deep dive into:

  • American Option pricing (American Monte Carlo via Longstaff-Schwartz; new approaches using Machine Learning techniques (reinforcement learning))
  • AAD functionalities and implementation details
  • Exposure and/or CVA analytics
  • or any other fascinating topic covered by the engine

let me know in the comments!

🤯 Doubts, overthinking, and a cold coffee

2025-11-21 05:21:39

Here I am again: turning things over and over in my head, searching for bootcamps, asking myself whether a career change is really worth attempting. Is it truly possible to start from scratch and dedicate myself to something else? Does this pivot make sense, or should I just quiet my mind and stay in accounting, trying to improve within "my" field?

🤹‍♀️ The dilemma of the two Lauras

One part of me says to stop overthinking, to just work and be done with it. That what matters is having a stable job, good conditions, and time for my children. Nothing more.

But the other part, the one shouting louder lately, knows that this is almost a utopia at this point in my life. A good job, good conditions, and stability, starting from zero? Sounds great... but with my new circumstances, being a present mother, it looks complicated.

⏳ Impossible workdays and rigid cultures

I feel that to achieve that I would have to work more than my body and my current life can sustain. Companies with rigid structures are rarely prepared for people like me, with different schedules and different rhythms.

I've thought that maybe my place is at a startup, with more flexible mindsets and more hybrid roles. I'm very hands-on, I adapt, I get things done. But of course... how do I get there?

🚪 Closed interviews, closed doors

Yes, I tried. I went to interviews. But the schedules, the commutes, the logistics of being a mother and a professional at the same time... it all becomes an uphill battle. I got tired of feeling rejected.

I don't blame them. They don't know me. But it hurts. The job-hunting process has always frustrated me, because I never knew how to "sell myself". I know what I'm worth (when my mind isn't sabotaging me), but I don't want to prove it through empty speeches or LinkedIn profiles in English that don't represent me.

💭 The constant mental struggle

And so I go back and forth between what I should do and what I really want. One more internal struggle... another mental spiral along my path.

💻 Learn on my own or look for guidance?

I decided to look for bootcamps because I thought: can I study on my own and learn? Of course I can. But I'm completely new to this world and I need clear guidance.

I watched hundreds of videos and tutorials and realized there are countless programming languages, frameworks, and paths like front-end, back-end, full stack... and honestly, without structure, without a roadmap, it's very easy to get lost.

📁 And after studying, then what?

Another big reason to choose a bootcamp: once I "finish" studying, how do I get a job? How do I build a portfolio if I don't even know where to start? How do I transform my CV, my professional profile, if I don't yet have anything to back it up?

Those questions weighed on me. So I made the decision: I'm going to study at a bootcamp.

🔍 Choosing wasn't so easy...

The search for the right bootcamp took longer than I expected. There is a huge amount on offer and it was hard to decide. My two criteria were clear: an affordable price and the option to follow the classes on demand.

Every single one I looked at offered live-streamed classes, from 18:30 or 19:00 until 22:00 or even later. For me, that is simply impossible. I have three kids I pick up at 16:30, and from then on the marathon begins: games, bath, dinner, bedtime stories... and then tidying up, doing laundry, packing backpacks and snacks for the next day.

🧩 My solution: real flexibility

Luckily, at Upgrade Hub I found an option: I could watch the classes the day after they were broadcast. That changed everything.

I work in the mornings. In the afternoon I study until it's time to pick up my kids. Then, once they're asleep, if I have any energy left, I go back to studying. That's my plan. That's my time. And that's what I'm holding on to.