2025-11-21 05:42:38
This is a submission for the AI Challenge for Cross-Platform Apps - Hot Design Showcase
I built LifeHub, a cross-platform productivity and wellbeing assistant created using the Uno Platform.
It combines daily planning, habit tracking, mood logging, reminders, and quick-action widgets into a clean and responsive UI.
The entire interface was refined using the Hot Design Agent, which helped speed up UI experimentation and real-time adjustments.
I used a minimal card-based dashboard layout as the inspiration.
Key design elements I referenced:
(Insert original UI screenshots here.)
Hot Design Agent helped with:
(Insert Hot Design Agent interaction screenshots or recordings.)
To run LifeHub locally:
dotnet run --project LifeHub/LifeHub.csproj --framework net10.0-desktop
dotnet run --project LifeHub/LifeHub.csproj --framework net10.0-browserwasm
2025-11-21 05:39:04
Kafka's native clients work best with Java and other JVM-based languages. Clients exist for other languages too (Python, Go, C#, and more), but they do not always support Avro and the Schema Registry well.
This is where the Confluent REST Proxy comes in. It allows any application to use simple HTTP requests (POST, GET) to send or read Kafka messages without writing any Kafka client code.
Every language can make HTTP requests, so every language can talk to Kafka through the REST Proxy.
Here is your normal system:
[ Java Producer ] → Kafka Cluster + Schema Registry → [ Java Consumer ]
Now we introduce REST Proxy:
[ Any Language Producer ] → HTTP → REST Proxy → Kafka → Schema Registry
And for consumers:
[ Any Language Consumer ] ← HTTP ← REST Proxy ← Kafka
The REST Proxy communicates with the Kafka brokers and the Schema Registry on your behalf. It does the “Kafka work” for you.
The REST Proxy acts as a translator: it lets you produce and consume Kafka messages using nothing but HTTP. This simplifies everything:
- All languages support HTTP calls.
- Avro works even if the language has no Avro serializer, because the Schema Registry is fully integrated.
- HTTP requests are easier to test than Kafka clients.
The trade-off is performance. Because every message goes through an extra HTTP hop and JSON (de)serialization, the REST Proxy is roughly 3x–4x slower than native Kafka clients. For most use cases this is okay, but high-performance systems should use Kafka clients directly.
Also note that the REST Proxy will not batch automatically: if you want high throughput, your application must batch multiple messages together before sending.
Good news: if you're using Confluent's Docker Compose, the REST Proxy is already available at a port like http://localhost:8082, so you can start using it immediately.
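As a concrete sketch (the topic, group, and consumer names below are made up), here is what producing and consuming through the REST Proxy's V2 API can look like from Python, using only the standard library:

```python
import json
import urllib.request

REST_PROXY = "http://localhost:8082"  # default port in Confluent's Docker Compose

# Produce: POST records to a topic (hypothetical topic name "demo-topic")
produce_req = urllib.request.Request(
    f"{REST_PROXY}/topics/demo-topic",
    data=json.dumps({"records": [{"value": {"user": "alice"}}]}).encode(),
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    method="POST",
)

# Consume, step 1: create a consumer instance in a consumer group
create_req = urllib.request.Request(
    f"{REST_PROXY}/consumers/demo-group",
    data=json.dumps({"name": "demo-consumer", "format": "json",
                     "auto.offset.reset": "earliest"}).encode(),
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    method="POST",
)

# Consume, step 2: subscribe the instance to the topic
subscribe_req = urllib.request.Request(
    f"{REST_PROXY}/consumers/demo-group/instances/demo-consumer/subscription",
    data=json.dumps({"topics": ["demo-topic"]}).encode(),
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    method="POST",
)

# Consume, step 3: poll for records
fetch_req = urllib.request.Request(
    f"{REST_PROXY}/consumers/demo-group/instances/demo-consumer/records",
    headers={"Accept": "application/vnd.kafka.json.v2+json"},
)

# With a REST Proxy actually running, each request would be sent with:
# urllib.request.urlopen(produce_req)
```

Notice that no Kafka client library appears anywhere: it is all plain HTTP.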
In this REST Proxy module you will learn about the API versions (V1 vs. V2), the Content-Type pattern, and how to produce and consume messages over HTTP.
A bit of history: before version 0.8, Kafka shipped legacy producer and consumer APIs. All modern code and all modern courses (including yours) use the new producer/consumer APIs. The REST Proxy has existed for a long time too, so it likewise has two API versions: V1 (legacy) and V2 (current).
This part is very simple: always use V2 and ignore V1. If you see .../v1/... in examples, that is legacy.
So the rule for your students: “If the URL says /v1/, don't use it. In this course and in real projects, we always use the /v2/ REST Proxy APIs.”
When you send an HTTP request to the REST Proxy, you must set a Content-Type header that tells the proxy what format your messages are in. There is a pattern for this Content-Type.
Let's break it into its parts.
General pattern:
application/vnd.kafka.<embedded-format>.v2+json
- application: the normal MIME type prefix.
- vnd.kafka: means “Kafka-specific vendor type”.
- <embedded-format>: tells the REST Proxy what Kafka message format you are using: json, binary, or avro.
- .v2: the REST Proxy API version. We use v2 only.
- +json: tells the REST Proxy that the HTTP body itself is encoded as JSON text.
You want to send JSON messages to Kafka via V2:
Content-Type: application/vnd.kafka.json.v2+json
(embedded format json, API version v2, body encoded as +json)
You want to send Avro messages to Kafka via V2:
Content-Type: application/vnd.kafka.avro.v2+json
(embedded format avro, API version v2, body encoded as +json)
The payload is described in JSON, but the REST Proxy converts it to Avro using the Schema Registry.
You want to send raw binary data:
Content-Type: application/vnd.kafka.binary.v2+json
(the binary payload is base64-encoded inside the JSON body)
Just memorize this template:
application/vnd.kafka.<embedded-format>.v2+json
where <embedded-format> = json, avro, or binary; .v2 because we use the V2 API; and +json because the HTTP request body is JSON.
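For Avro, the V2 produce body carries the schema alongside the records. Here is a minimal sketch of building such a request body (the topic and field names are made up); the REST Proxy registers the schema with the Schema Registry and serializes each record to Avro:

```python
import json

# The Avro schema travels as a JSON *string* inside the JSON request body
value_schema = json.dumps({
    "type": "record",
    "name": "User",
    "fields": [{"name": "name", "type": "string"}],
})

body = json.dumps({
    "value_schema": value_schema,
    "records": [{"value": {"name": "alice"}}],
})

headers = {"Content-Type": "application/vnd.kafka.avro.v2+json"}
# POST `body` with these headers to http://localhost:8082/topics/<your-topic>
```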
2025-11-21 05:36:58
Upgrading an old production server is not a popular topic today. Most run Docker, Kubernetes, or at least keep the OS fresh. But sometimes you inherit a legacy VM full of configs, and you just need to push it to a modern LTS without breaking anything.
I had exactly this case on DigitalOcean. Here's a quick, real-world rundown of what I actually did.
The project was running on Ubuntu 20.04 with an outdated Nginx. A direct upgrade to 24.04 LTS is not possible; you have to step through intermediate releases. I needed a way to bring everything up to date quickly and safely and keep downtime to a minimum.
The intermediate releases kinetic (22.10) and lunar (23.04) are EOL: they no longer receive security updates. Running outdated versions exposes your server to vulnerabilities and leaves you without support. The jump to 24.04 LTS is significant: you get five years of support, better performance, and peace of mind.
To avoid downtime, I leaned on DigitalOcean snapshots + temp droplet. Then I upgraded the system in steps:
20.04 → 22.10 → 23.04 → 23.10 → 24.04 LTS, plus an Nginx upgrade
It's a bit long, but it gets the job done if you're stuck on an old upgrade path.
First things first: take a snapshot of your droplet and spin up a temporary droplet from it to serve traffic while you work. This gives you a safe rollback if something goes wrong.
sudo sed -i 's|http://mirrors.digitalocean.com/ubuntu/|http://old-releases.ubuntu.com/ubuntu/|g' /etc/apt/sources.list && \
sudo sed -i 's|http://security.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt autoremove --purge -y && \
sudo apt install -y ubuntu-release-upgrader-core && \
sudo sed -i 's/kinetic/lunar/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot
sudo sed -i 's/lunar/mantic/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot
sudo sed -i 's|http://old-releases.ubuntu.com/ubuntu|http://archive.ubuntu.com/ubuntu|g' /etc/apt/sources.list && \
sudo sed -i 's/mantic/noble/g' /etc/apt/sources.list && \
sudo apt update && \
sudo apt upgrade -y && \
sudo apt dist-upgrade -y && \
sudo apt full-upgrade -y && \
sudo apt autoremove --purge -y && \
echo -e "\nUpgrade complete! Rebooting now...\n" && \
sudo reboot
Now you're on Ubuntu 24.04 LTS. Double-check with:
lsb_release -a
Don't skip this backup step:
sudo cp -r /etc/nginx /root/nginx-backup
Then install the latest Nginx package:
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu noble nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update
sudo apt install nginx
nginx -v
You should see version 1.28.x or newer. Test your configuration to make sure it still parses, then restart:
sudo nginx -t
sudo systemctl restart nginx
Wait a few minutes and check your app. Run your health checks, monitoring, whatever you've got. If you have automated tests, don't forget to run them. Look at logs and metrics to make sure it's all good. Everything working? Cool. Move on.
Once you're sure everything's stable, switch the Floating IP back from the temp droplet to your freshly upgraded one.
Hang on to the temp droplet and snapshot for a few days. If nothing catches fire, you can remove them.
Total time: about one hour. Most of that is just waiting for reboots and packages to download. You can grab coffee and check on progress every few minutes.
This isn't a workflow I want to repeat often. If you can move to Docker or rebuild your server from scratch — do it.
But sometimes legacy systems don't give you that choice.
And when you must upgrade an old Ubuntu machine step by step, this method works. Fast, safe, and almost zero downtime.
Thanks for sticking around!
Find me on dev.to, LinkedIn, or check out my work on GitHub.
Notes from real-world Laravel.
2025-11-21 05:31:56
Stop building infrastructure. Start shipping features.
The golden rule of Indie Hacking: If it's not your core value prop, outsource it.
In 2025, you can build a million-dollar SaaS by just gluing together 5 powerful APIs. Here is the stack I recommend for speed and scale.
Stop writing login forms. Stop configuring Postgres.
Supabase gives you a full backend-as-a-service.
Stripe is the king, but Lemon Squeezy handles the "Merchant of Record" headache (taxes, VAT) for you.
If you aren't adding AI to your app, your competitor is.
AWS SES is cheap but painful to set up, and SendGrid's deliverability can be hit-or-miss.
If your app needs to interact with social media (Influencer search, Trend tracking, Content analysis), do NOT build your own scraper.
With these 5, you barely need a backend server.
You can ship an MVP in a weekend.
What's in your stack? Let me know in the comments. 👇
2025-11-21 05:23:35
Quantitative finance relies heavily on Monte Carlo simulation engines to value derivatives, measure exposures, and compute xVA metrics like CVA, DVA, or MVA.
While many libraries exist for basic option pricing, very few open-source tools provide a full multi-model, multi-factor risk engine that can simulate:
This article introduces the Monte Carlo Risk Engine, a PyTorch-powered framework for financial modeling, exposure simulation, and xVA.
All code is open source:
👉 https://github.com/konstantineder/montecarlo-risk-engine
Banks rely on complex xVA engines; academia leans on theory-heavy models. Python has many pricing libraries, but they rarely cover full exposure simulation and xVA. I wanted a framework that does, so I built one.
The engine consists of several core components:
Runs Monte Carlo paths, orchestrates model evolution, handles time stepping.
Combines multiple stochastic models into a joint hybrid model with a correlation matrix (e.g., Vasicek interest rates + CIR++ intensity to simulate CVA).
A pluggable API that includes:
Each metric returns **both** the estimate and its Monte Carlo standard error.
Supports a wide range of instruments:
Regression-based continuation value estimation (Longstaff–Schwartz) is built in.
All models are implemented in vectorized PyTorch.
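As a flavor of what "vectorized" means here, below is a standalone sketch (plain numpy rather than the engine's PyTorch models, with made-up parameters) of an Euler scheme for the Vasicek short rate dr = κ(θ − r) dt + σ dW:

```python
import numpy as np

# Illustrative parameters, not taken from the engine
r0, kappa, theta, sigma = 0.03, 0.5, 0.05, 0.01
T, n_steps, n_paths = 10.0, 200, 100_000
dt = T / n_steps

rng = np.random.default_rng(0)
r = np.full(n_paths, r0)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)  # Brownian increments, all paths at once
    r += kappa * (theta - r) * dt + sigma * dw       # Euler step, vectorized over paths

# Sanity check: the sample mean should approach theta + (r0 - theta) * exp(-kappa * T)
mean_rT = r.mean()
```

The loop is over time steps only; every arithmetic operation acts on all 100,000 paths at once, which is exactly the structure that ports to PyTorch tensors (and hence to autodiff and GPUs).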
This figure shows Expected Exposure (EE) and Potential Future Exposure (PFE) for a Bermudan swaption computed across 100,000 paths.
The pull-to-par behavior, early exercise shaping, and long-tail exposure are all captured correctly.
One of the most powerful features of a hybrid engine is analyzing xVA and WWR (wrong-way risk).
Consider a 10Y payer swap. Its CVA depends on the swap's exposure profile, the counterparty's default intensity, and the dependence between the two.
If interest rates and default intensity are positively correlated, we get wrong-way risk → CVA increases. If they are negatively correlated, we get right-way risk → CVA decreases.
This reflects exactly the expected economic behavior!
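For reference, the unilateral CVA that such engines estimate is (with R the recovery rate, D(t) the stochastic discount factor, E(t)⁺ the positive exposure, and τ the counterparty default time):

```latex
\mathrm{CVA} = (1 - R)\,\mathbb{E}\!\left[ D(\tau)\, E(\tau)^{+}\, \mathbf{1}_{\{\tau \le T\}} \right]
\;\approx\; (1 - R) \sum_{i=1}^{n} \mathbb{E}\!\left[ D(t_i)\, E(t_i)^{+} \right] \Delta PD(t_i)
```

The discretized sum on the right assumes exposure and default are independent; correlating rates and intensity breaks that factorization, and that is precisely where wrong-way risk enters.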
The following code computes the CVA of a 10-year payer interest rate swap using a Vasicek interest rate model and a CIR++ default intensity model, correlated at ρ ≈ 1 to induce wrong-way risk:
import numpy as np

from common.packages import *
from common.enums import SimulationScheme
from controller.controller import SimulationController
from models.vasicek import VasicekModel
from models.cirpp import CIRPPModel
from models.model_config import ModelConfig
from products.bond import Bond
from products.swap import InterestRateSwap, IRSType
from metrics.cva_metric import CVAMetric
from metrics.risk_metrics import RiskMetrics

# ----------------------------
# Interest Rate Model (Vasicek)
# ----------------------------
interest_rate_model = VasicekModel(
    calibration_date=0.0,
    rate=0.03,
    mean=0.05,
    mean_reversion_speed=0.02,
    volatility=0.2,
    asset_id="irs",
)

# ----------------------------
# Bootstrapped Hazard Rates
# ----------------------------
hazard_rates: dict[float, float] = {
    0.5: 0.006402303360855854,
    1.0: 0.01553038972325307,
    2.0: 0.009729741230773657,
    3.0: 0.015552544648116201,
    4.0: 0.021196186202801115,
    5.0: 0.02284319986706472,
    7.0: 0.010111423894480876,
    10.0: 0.00613267811172937,
    15.0: 0.0036969930706003337,
    20.0: 0.003791311459217732,
}

counterparty_id = "General Motors Co"

intensity_model = CIRPPModel(
    calibration_date=0.0,
    y0=0.0001,
    theta=0.01,
    kappa=0.1,
    volatility=0.02,
    hazard_rates=hazard_rates,
    asset_id=counterparty_id,
)

# -------------------------------------------
# Hybrid Model Configuration (with WWR: ρ≈1)
# -------------------------------------------
inter_correlation_matrix = np.array([0.99999])

model_config = ModelConfig(
    models=[interest_rate_model, intensity_model],
    inter_asset_correlation_matrix=inter_correlation_matrix,
)

# ----------------------------
# Payer Interest Rate Swap
# ----------------------------
maturity = 10.0
irs = InterestRateSwap(
    startdate=0.0,
    enddate=maturity,
    notional=1.0,
    fixed_rate=0.03,
    tenor_fixed=0.25,
    tenor_float=0.25,
    irs_type=IRSType.PAYER,
    asset_id="irs",
)
portfolio = [irs]

# ----------------------------
# CVA Metric Setup
# ----------------------------
exposure_timeline = np.linspace(0.0, maturity, 100)
cva_metric = CVAMetric(counterparty_id=counterparty_id, recovery_rate=0.4)
risk_metrics = RiskMetrics(
    metrics=[cva_metric],
    exposure_timeline=exposure_timeline,
)

# ----------------------------
# Monte Carlo Simulation
# ----------------------------
num_paths_mainsim = 100000
num_paths_presim = 100000
num_steps = 10

sc = SimulationController(
    portfolio=portfolio,
    model=model_config,
    risk_metrics=risk_metrics,
    num_paths_mainsim=num_paths_mainsim,
    num_paths_presim=num_paths_presim,
    num_steps=num_steps,
    simulation_scheme=SimulationScheme.EULER,
    differentiate=False,
)

sim_results = sc.run_simulation()

# ----------------------------
# CVA Estimate + MC Error
# ----------------------------
cva_irs = sim_results.get_results(0, 0)[0]
cva_irs_error = sim_results.get_mc_error(0, 0)[0]

# --------------------------------------------------------------
# Baseline CVA (uncorrelated case), from reference simulation
# --------------------------------------------------------------
cva_uncorr = 1.114576156484541
cva_uncorr_error = 0.0024446898428056294

# --------------------------------------------------------------
# Statistical Test: Is WWR CVA > Uncorrelated CVA?
# Uses 3-sigma significance test on the difference.
# --------------------------------------------------------------
diff = cva_irs - cva_uncorr
se_diff = (cva_irs_error**2 + cva_uncorr_error**2) ** 0.5
assert diff > 3 * se_diff, "WWR CVA is not significantly higher than baseline"
The engine supports adjoint algorithmic differentiation, made possible by PyTorch’s computation graph.
You can compute Greeks (sensitivities) by enabling differentiation:
sc = SimulationController(..., differentiate=True)
This allows pathwise Greeks and parameter sensitivities without bump-and-revalue re-runs.
These are advanced capabilities typical of bank-level xVA systems.
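To see the principle outside the engine, here is a minimal numpy sketch (with made-up Black–Scholes parameters) of the pathwise-derivative delta of a European call, the same quantity reverse-mode autodiff produces automatically:

```python
import numpy as np

# Made-up Black-Scholes parameters for illustration
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

rng = np.random.default_rng(42)
z = rng.standard_normal(200_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Pathwise derivative: d/dS0 of e^{-rT} (ST - K)^+ is e^{-rT} 1{ST > K} * ST / S0,
# because ST is linear in S0 under geometric Brownian motion
delta = np.exp(-r * T) * np.mean((ST > K) * ST / S0)
# delta should land close to the Black-Scholes closed-form N(d1)
```

AAD generalizes exactly this chain-rule computation to arbitrary payoffs and model parameters, which is why a single backward pass yields all sensitivities at once.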
As an example, the following plots demonstrate pathwise sensitivities (Greeks) computed via AAD.
Each line shows the derivative of the option value with respect to a chosen model parameter (e.g., spot, volatility, interest rate). These sensitivities were obtained in a single backward pass thanks to reverse-mode autodiff — something that would require multiple full re-valuations in traditional bump-and-revalue Monte Carlo.
This highlights three important points:
Below is an example sensitivity plot produced by the engine:
A full environment is available via Docker:
docker build -t mcengine .
docker run -it mcengine python tests/pv_tests/pv_european_option.py
No dependencies, no headaches — runs out of the box.
👉 https://github.com/konstantineder/montecarlo-risk-engine
If you like the engine, consider ⭐ starring the repo or opening issues/suggestions!
This project aims to provide a research-grade, production-style Monte Carlo engine that:
If you're a quant, quant dev, fintech engineer, or researcher, I hope you'll find it useful.
If you'd like a deep dive into:
let me know in the comments!
2025-11-21 05:21:39
Here I am again: turning things over and over in my head, searching for bootcamps, wondering whether a career change is really worth attempting. Is it truly possible to start from scratch and do something else? Does this pivot make sense, or should I just quiet my mind and stay in accounting, trying to grow within "my" field?
Part of me says to stop overthinking and just work. That what matters is a stable job, good conditions, and time for my kids. Nothing more.
But the other part, the one shouting loudest lately, knows that at this point in my life that is almost a utopia. A good job, good conditions, and stability, starting from zero? Sounds nice... but with my new circumstances (being a present mother) it looks complicated.
I feel that to achieve that I would have to work more than my body and my current life can sustain. Companies with rigid structures are rarely prepared for people like me, with different schedules and different rhythms.
I've thought that maybe my place is at a startup, where mindsets are more flexible and roles more hybrid. I'm very hands-on, I adapt, I get things done. But then... how do I get there?
Yes, I tried. I went to interviews. But the schedules, the commutes, the logistics of being a mother and a professional at the same time... everything becomes an uphill climb. I got tired of feeling rejected.
I don't blame them. They don't know me. But it hurts. Job hunting has always frustrated me, because I never knew how to "sell myself". I know what I'm worth (when my mind isn't sabotaging me), but I don't want to prove it through empty speeches or English-language LinkedIn profiles that don't represent me.
And so I go back and forth between what I should do and what I really want. One more internal struggle... another mental tangle along my path.
I decided to look for bootcamps because I thought: can I study on my own and learn? Of course I can. But I'm completely new to this world and I need clear guidance.
I watched hundreds of videos and tutorials, and I learned that there are countless programming languages, frameworks, and paths like front-end, back-end, full stack... and honestly, without structure, without a roadmap, it's very easy to get lost.
Another big reason to choose a bootcamp: once I "finish" studying, how do I get a job? How do I build a portfolio if I don't even know where to start? How do I transform my CV and my professional profile if I don't yet have anything to back them up?
Those questions weighed on me. So I made the decision: I'm going to study at a bootcamp.
Finding the right bootcamp took longer than I expected. There is a huge amount on offer and it was hard to decide. My two criteria were clear: an affordable price and the option to follow the classes on demand.
Every single one I looked at offered live-streamed classes from 18:30 or 19:00 until 22:00 or even later. For me, that is simply impossible. I have three kids I pick up at 16:30, and from then on the marathon begins: games, bath, dinner, bedtime stories... and then tidying up, laundry, packing backpacks and snacks for the next day.
Luckily, at Upgrade Hub I found an option: I could watch the classes the day after they aired. That changed everything.
I work in the mornings. In the afternoon I study until it's time to pick up my kids. Then, once they're asleep, if I have any energy left, I study again. That's my plan. That's my time. And I'm holding on to it.