2026-04-14 11:09:44
The ECB deposit rate is 2%. Eurostat publishes inflation for all 27 EU countries every month. All of it is free. Getting it into Claude without writing a parser takes 30 seconds now.
ECB and Eurostat APIs are free and official. But their responses look like this:
<message:GenericData>
  <message:DataSet>
    <generic:Series>
      <generic:SeriesKey>
        <generic:Value id="FREQ" value="M"/>
        <generic:Value id="REF_AREA" value="U2"/>
        <generic:Value id="INDICATOR" value="MRR_FR"/>
      </generic:SeriesKey>
      <generic:Obs>
        <generic:ObsDimension value="2024-04"/>
        <generic:ObsValue value="4.5"/>
      </generic:Obs>
    </generic:Series>
  </message:DataSet>
</message:GenericData>
SDMX-XML with dataset codes that require reading a 40-page spec to interpret. Not something you want inside a Claude prompt.
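For a sense of what "normalizing" means here, this is a rough sketch, in Python rather than the server's actual TypeScript, of flattening that generic SDMX-ML into flat rows (namespaces are shortened for readability; real responses use full SDMX namespace URIs):

```python
import json
import xml.etree.ElementTree as ET

# Minimal SDMX-ML "generic" fragment like the one above,
# with namespace prefixes stripped for the sketch.
SDMX = """<GenericData>
  <DataSet>
    <Series>
      <SeriesKey>
        <Value id="FREQ" value="M"/>
        <Value id="REF_AREA" value="U2"/>
        <Value id="INDICATOR" value="MRR_FR"/>
      </SeriesKey>
      <Obs>
        <ObsDimension value="2024-04"/>
        <ObsValue value="4.5"/>
      </Obs>
    </Series>
  </DataSet>
</GenericData>"""

def flatten(xml_text):
    """Turn each observation into one flat dict: key dimensions + period + value."""
    root = ET.fromstring(xml_text)
    rows = []
    for series in root.iter("Series"):
        key = {v.get("id"): v.get("value") for v in series.iter("Value")}
        for obs in series.iter("Obs"):
            rows.append({
                **key,
                "period": obs.find("ObsDimension").get("value"),
                "value": float(obs.find("ObsValue").get("value")),
            })
    return rows

print(json.dumps(flatten(SDMX)))
# one flat row: {"FREQ": "M", "REF_AREA": "U2", "INDICATOR": "MRR_FR", "period": "2024-04", "value": 4.5}
```

Flat rows like this are trivial for a model to read; the raw SDMX-ML is not.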
6 tools, all returning flat JSON:
| Tool | What you get | Source |
|---|---|---|
| get_ecb_rates | Deposit facility, main refi, marginal lending rates | ECB SDW |
| get_euro_exchange | EUR vs 30+ currencies, latest or by date | ECB / Frankfurter |
| get_eu_inflation | HICP inflation by EU country, monthly/annual | Eurostat |
| get_eu_gdp | GDP by country, quarterly, growth or absolute | Eurostat |
| get_eu_unemployment | Unemployment by country and age group | Eurostat |
| compare_eu_economies | Inflation + GDP + unemployment side by side | All sources |
No API key. No account.
Claude Code:
claude mcp add eu-finance -- npx -y @nexusforgetools/eu-finance
Claude Desktop / Cursor / Windsurf / Cline:
{
  "mcpServers": {
    "eu-finance": {
      "command": "npx",
      "args": ["-y", "@nexusforgetools/eu-finance"]
    }
  }
}
Ask in plain language — eu-finance handles the API calls.
Calls ECB SDW and Eurostat SDMX REST APIs directly — both free, no auth required. Every response is normalized into typed flat JSON before leaving the tool. Redis cache in HTTP mode (1h for rates, 6h for inflation, 24h for GDP). Dual transport: stdio for local clients, HTTP/SSE for server deployments. TypeScript/ESM, Zod validation.
If there's a specific ECB or Eurostat dataset you need, open an issue.
2026-04-14 11:04:00
I recently launched Qvert — a free unit conversion website with 2,600+ pages, each targeting a specific conversion pair like "centimeters to inches" or "USD to EUR."
Every unit converter I used fell short in one way or another. I wanted something fast, clean, and comprehensive.
Instead of building one converter page with dropdowns, I generated a dedicated page for every conversion pair, each with its own clean URL (e.g. /length/cm-to-inch). With 15 unit categories and hundreds of units, this generates 2,600+ unique pages — each one an entry point for organic search.
Users can type naturally: "175 cm in inches" or "how many cups in a gallon." The parser uses scored matching to find the right conversion. This prevents common mistakes like "meters" accidentally matching "millimeters."
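Qvert's parser isn't public, so the following is only a plausible sketch of scored matching, in which an exact alias match outranks any prefix match:

```python
# Hypothetical sketch of scored unit matching (not Qvert's actual code).
UNITS = {
    "meter": ["m", "meter", "meters", "metre", "metres"],
    "millimeter": ["mm", "millimeter", "millimeters"],
    "centimeter": ["cm", "centimeter", "centimeters"],
    "inch": ["in", "inch", "inches"],
}

def match_unit(token):
    """Score candidates: an exact alias match beats prefix matches outright."""
    token = token.lower().strip()
    best, best_score = None, 0
    for unit, aliases in UNITS.items():
        for alias in aliases:
            if token == alias:
                score = 100                             # exact match wins
            elif alias.startswith(token) and len(token) >= 2:
                score = 50 - (len(alias) - len(token))  # shorter completion = better
            else:
                continue
            if score > best_score:
                best, best_score = unit, score
    return best

print(match_unit("meters"))  # meter, not millimeter
print(match_unit("cm"))      # centimeter
```

Because "meters" is an exact alias of meter, it scores higher than any fuzzy overlap with "millimeters", which is the failure mode the article describes.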
Instead of just showing numbers, Qvert includes interactive visualizations for many conversions.
29 world currencies with rates updated daily. The rates are fetched at build time, so there's zero runtime API cost — the conversion factors are baked directly into the pages.
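The build-time baking works roughly like this: a build step fetches rates once and writes them into a static module that the generated pages import, so no page ever calls a rates API at runtime. A sketch with placeholder rates (not Qvert's actual code):

```python
# Sketch of the "bake at build time" idea. In the real pipeline a build
# script would fetch live rates and emit this dict into a generated module;
# the rates below are placeholders, not real market data.
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}  # written by the build step

def convert(amount, src, dst):
    """Convert via the USD pivot baked in at build time."""
    return amount * RATES_TO_USD[src] / RATES_TO_USD[dst]

print(round(convert(100, "EUR", "USD"), 2))  # 108.0
```

The trade-off is staleness: rates are only as fresh as the last build, which is why the site rebuilds daily.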
Any conversion can be embedded on other sites with a single line of code:
<iframe src="https://qvert.ai/embed/length/cm-to-inch"
width="450" height="230" frameborder="0">
</iframe>
This drives traffic back to Qvert while providing value to other site owners.
Each page needs to be unique for SEO, so I used AI to generate per-page descriptive content.
This content is generated once at build time, so there's no per-request API cost.
Check out Qvert — convert anything instantly. The embeddable widget is free for anyone to use.
What would you add? Let me know in the comments.
2026-04-14 11:03:19
If you migrate a WordPress site that used Nginx as its SSL terminator to a setup with an AWS Application Load Balancer (ALB) in front of it, there is one classic problem that is almost guaranteed to show up: ERR_TOO_MANY_REDIRECTS.
This article is a troubleshooting log straight from production, covering the failing health check, the redirect loop, and the fixes at the Nginx and WordPress levels.
Before:
Client → HTTPS → Nginx (SSL termination) → WordPress HTTP
After adding the ALB:
Client → HTTPS → ALB (SSL termination) → HTTP 80 → Nginx → WordPress
In the new setup, the ALB handles SSL. Nginx and WordPress don't need to know anything about SSL; they simply receive plain HTTP traffic from the ALB.
After registering the EC2 instance with the ALB target group, the health check immediately went Unhealthy with the message:
Health checks failed with these codes: [301]
By default, the ALB expects a 200 response from the health check path. But WordPress performs a redirect (301), for example to /en/ or to HTTPS, so the check is treated as failed.
Go to EC2 > Target Groups > Health checks, then edit:
Health check path : /
Success codes : 200,301
Alternatively, change the path to an endpoint that returns 200 directly. No need to touch WordPress at all.
After pointing the domain at the ALB, the browser immediately showed ERR_TOO_MANY_REDIRECTS. This happens because of a redirect loop:
ALB sends HTTP to Nginx
→ Nginx redirects to HTTPS
→ ALB receives the HTTPS request and forwards it to Nginx as HTTP again
→ The loop never ends
Check where the HTTPS redirect lives:
sudo grep -rn "return 301" /etc/nginx/
Output:
/etc/nginx/sites-available/wordpress:5: return 301 https://$host$request_uri;
Since the ALB already handles SSL termination, Nginx no longer needs to redirect to HTTPS. Back up first, then edit the config:
sudo cp /etc/nginx/sites-available/wordpress /etc/nginx/sites-available/wordpress.bak
sudo nano /etc/nginx/sites-available/wordpress
Replace the entire config with a version that only listens on port 80, with no redirect:
server {
    listen 80;
    server_name example.com www.example.com;

    client_max_body_size 50M;

    root /var/www/html/wordpress;
    index index.php index.html index.htm;

    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
        include fastcgi_params;
    }
}
Key points:
- No return 301 https://..., since that line was the root cause of the redirect loop
- fastcgi_param HTTPS on, so PHP/WordPress knows the request is actually HTTPS

Test and reload:
sudo nginx -t
sudo systemctl reload nginx
With Nginx fixed, a redirect loop can still come from WordPress itself. This happens because WordPress sees the request arriving over HTTP (from the ALB to Nginx), while WordPress is configured to run on HTTPS.
WordPress needs to be told that the original request was HTTPS, via the X-Forwarded-Proto header that the ALB sends.
Open wp-config.php:
sudo nano /var/www/html/wordpress/wp-config.php
Add this snippet before the line require_once ABSPATH . 'wp-settings.php';:
/* Fix HTTPS detection behind ALB/reverse proxy */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
After saving, refresh the browser and the redirect loop is gone.
| Layer | Problem | Fix |
|---|---|---|
| Nginx | Redirects HTTP to HTTPS even though the ALB already handles SSL | Remove the return 301, listen on HTTP only |
| WordPress | Doesn't know the original request was HTTPS | Set $_SERVER['HTTPS'] from the X-Forwarded-Proto header |

The two fixes go hand in hand. The Nginx fix stops the redirect at the server level, but WordPress can still generate HTTP URLs and issue its own redirects if it doesn't know the request context is HTTPS.
This whole problem could have been prevented much earlier with full knowledge of the server setup the vendor had installed.
In this case, the previous vendor had installed Nginx as an SSL terminator directly on the EC2 instance. No documentation was handed over, and there were no notes on which configs were active. When the infrastructure changed (the ALB came in), the assumption was that the server could simply be plugged in. It turned out old configuration was still there, conflicting with the new setup.
A few things worth documenting, or asking the vendor about, before migrating:
About the web server:
- Which site configs are active in sites-enabled?
About WordPress:
- Are there custom entries in wp-config.php beyond the defaults?
- Do siteurl and home in the database point to HTTP or HTTPS?
About the server in general:
- Which services are running (systemctl list-units --type=service)?
Ask for a handover document, or at minimum SSH access, before starting the integration. Troubleshooting someone else's config without documentation can take far longer than it should.
Notes:
- If there are multiple config files (e.g. mysite and mysite-old.conf), check the symlinks in /etc/nginx/sites-enabled/ and make sure only one is active
- X-Forwarded-Proto is sent automatically by the ALB; no extra configuration is needed on the ALB side
Tested on: Ubuntu 22.04, Nginx 1.24, WordPress 6.x, AWS ALB
2026-04-14 10:54:43
You install the official Kimi desktop .deb, fire sudo dpkg -i, and boom:
dpkg: dependency problems prevent configuration of kimi:
kimi depends on libwebkit2gtk-4.0-37; however:
Package libwebkit2gtk-4.0-37 is not installed.
That library doesn't exist on Ubuntu 24.04, let alone 26.04. It was removed from the repos over a year ago. The official Kimi desktop package is built on Tauri v1, which hard-depends on libwebkit2gtk-4.0.so.37 — a library that shipped with webkit2gtk 4.0, superseded by 4.1 and then dropped entirely.
So the app is just... broken on any modern Ubuntu. Here's how I fixed it.
Tauri v1 → libwebkit2gtk-4.0 → removed from Ubuntu 24.04+ → dpkg fails.
Tauri v2 links against libwebkit2gtk-4.1, which is the version shipped in Ubuntu 24.04 and 26.04. So the fix is straightforward: rebuild the app with Tauri v2 instead of v1.
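Before rebuilding, you can verify which WebKitGTK ABI your system actually ships. This small check uses only the Python standard library and is just a convenience, not part of the build; on Ubuntu 24.04+ you'd expect 4.1 to resolve and 4.0 not to:

```python
from ctypes.util import find_library

def webkit_abis():
    """Map each WebKitGTK ABI name to its resolved library path (or None)."""
    return {abi: find_library(abi) for abi in ("webkit2gtk-4.0", "webkit2gtk-4.1")}

for abi, path in webkit_abis().items():
    print(f"{abi}: {path or 'not found'}")
```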
I used Pake v3, which wraps any web app into a native desktop app using Tauri under the hood. One build script, one config file, and you get a .deb that actually installs.
| Feature | Detail |
|---|---|
| Tauri v2 runtime | Links against libwebkit2gtk-4.1 — the one Ubuntu actually ships |
| OAuth / SSO | --new-window flag means Google sign-in works in-app instead of being blocked |
| System tray | Desktop integration that works |
| 1200x780 window | Matches the original Kimi desktop dimensions |
Prerequisites — Rust, Node, and the usual GTK/webkit dev packages:
# Rust >= 1.85
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Node.js >= 22 — use nvm, brew, whatever you prefer
# Build deps
sudo apt install libwebkit2gtk-4.1-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev
# Pake CLI
npm install -g pake-cli
Then it's one command:
./build.sh
The .deb lands in dist/. Install it:
sudo dpkg -i dist/kimi_1.0.0_amd64.deb
Done. Kimi runs natively on Ubuntu 26.04 with no missing libraries.
Everything lives in config/pake.json. The important bits:
{
  "windows": [{
    "url": "https://kimi.moonshot.cn",
    "new_window": true,
    "width": 1200,
    "height": 780
  }],
  "user_agent": {
    "linux": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36"
  }
}
The two things that matter:
- new_window: true — Without this, OAuth popups (Google sign-in, etc.) get blocked by the webview's navigation policy. This flag tells Pake/Tauri to open them in a new window instead.
- user_agent.linux — Spoofs a Chrome UA, because some OAuth providers reject webview user agents.

Why not just use the website in a browser? Fair question. A native desktop app gives you a dedicated window, system tray integration, and OAuth sign-in that works in-app.
Clone it, build it, install it. If you're on Ubuntu 24.04+ and want Kimi as a desktop app, this is currently the only way that works.
If you need to remove it:
sudo dpkg -r kimi
Kimi is a product of Moonshot AI. This project uses the open-source Pake tool (MIT license) to wrap the Kimi web interface as a native desktop application.
2026-04-14 10:54:30
If you have built search for an e-commerce product and then moved to a travel project, you know the feeling. Everything you assumed about search breaks within the first week. The data model is different. The freshness requirements are different. The query patterns are different. Even the definition of "in stock" is different.
This is not a difficulty ranking. E-commerce search has its own hard problems. But the two domains diverge in ways that catch experienced developers off guard, and understanding those differences before you start building saves weeks of rework.
In e-commerce, a product either exists in the warehouse or it does not. A pair of shoes in size 42 is available until someone buys the last pair. The inventory state changes when a purchase happens. Between purchases, the state is stable. You can cache product availability for minutes or even hours without causing problems.
In travel, inventory expires whether anyone buys it or not. A hotel room on April 15th ceases to exist on April 16th. A flight seat for the 9am departure is worthless at 9:01am. This means every search result has a built-in expiration timestamp that has nothing to do with purchase activity.
The technical consequence: your caching strategy needs to account for time-based expiration, not just event-based invalidation. A hotel room that was available 30 minutes ago might still be available (nobody booked it) or might be gone (someone booked it on another channel). You cannot know without checking the source system. E-commerce developers used to comfortable cache TTLs discover that travel search requires either very short TTLs or real-time availability checks on every result, both of which have cost and latency implications.
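To make that two-expiry requirement concrete, a cache entry can carry both a short soft TTL and a hard expiry tied to the inventory itself. This is a hypothetical sketch, not any particular platform's design:

```python
from datetime import datetime, timedelta

# Sketch: a cached travel-availability result carries two expirations —
# a short soft TTL (after which the result may be stale and must be
# re-confirmed) and a hard expiry (after check-in, the inventory no
# longer exists regardless of bookings).
class AvailabilityEntry:
    def __init__(self, payload, fetched_at, checkin, soft_ttl=timedelta(minutes=2)):
        self.payload = payload
        self.soft_expiry = fetched_at + soft_ttl
        self.hard_expiry = checkin  # past check-in, the result is meaningless

    def state(self, now):
        if now >= self.hard_expiry:
            return "gone"      # inventory expired on its own, no event needed
        if now >= self.soft_expiry:
            return "recheck"   # might be booked on another channel — confirm
        return "fresh"

entry = AvailabilityEntry({"hotel": "X"}, datetime(2026, 4, 14, 12, 0), datetime(2026, 4, 15))
print(entry.state(datetime(2026, 4, 14, 12, 1)))   # fresh
print(entry.state(datetime(2026, 4, 14, 12, 30)))  # recheck
print(entry.state(datetime(2026, 4, 16)))          # gone
```

Note that "gone" happens purely by the clock advancing; no purchase event ever invalidated the entry, which is exactly what event-based e-commerce caching misses.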
In e-commerce, your product catalog is your source of truth. You control it. When you update a price or mark something out of stock, the change propagates through your system. There is one database, one version of the truth.
In travel, the same hotel room is sold simultaneously through the hotel's own website, Booking.com, Expedia, Agoda, and three other OTAs. Each channel has its own cached version of availability. The hotel's Property Management System (PMS) is the theoretical source of truth, but updates to the PMS propagate to each channel at different speeds through different APIs.
When a developer queries availability for "hotels in Da Nang, July 15-18," the response depends on which system they queried, when they queried it, and how recently that system synced with the PMS. Two queries made 30 seconds apart can return different results, not because availability changed, but because the cache refresh cycle hit between them.
In e-commerce, if your search returns a product, the customer can almost certainly buy it. In travel, if your search returns a room, the customer might click through and discover it was booked on another channel 45 seconds ago. This is why travel platforms need a confirmation step that re-checks availability at the moment of booking, a pattern that e-commerce checkout rarely requires.
In e-commerce, a product has a price. It might change during a sale or promotion, but at any given moment, the price is a stored value in a database column. You query it, you display it.
In travel, price is computed at query time based on a combination of factors: the dates selected, the number of guests, the room type, the cancellation policy chosen, the customer's loyalty tier, the time of day, demand levels, competitor pricing, and sometimes even the customer's country of origin (due to regional pricing agreements).
A single hotel room does not have "a price." It has a pricing function that returns different values depending on the parameters. This means travel search cannot simply index prices in Elasticsearch the way e-commerce search indexes product prices. You either pre-compute prices for common date and occupancy combinations (expensive in storage, complex to keep fresh) or you compute prices at search time by calling the supplier's pricing API (expensive in latency, subject to rate limits).
Most travel search systems use a hybrid: pre-computed base rates for display in search results, with real-time pricing API calls when the user clicks through to a specific property. The mismatch between these two numbers is a constant source of user frustration ("the price changed when I clicked") and engineering headaches.
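To make the "pricing function" idea concrete, here is a deliberately simplified sketch. The factors and multipliers are invented for illustration and do not come from any real supplier API:

```python
# Sketch: a hotel room has a pricing function, not a price column.
# All parameters and rates below are illustrative assumptions.
def quote(base_rate, nights, guests, refundable, loyalty_tier):
    price = base_rate * nights
    price *= 1.10 if refundable else 1.0       # flexible rate costs more
    price *= 1.0 + 0.15 * max(0, guests - 2)   # extra-guest surcharge
    price *= {"none": 1.0, "silver": 0.97, "gold": 0.95}[loyalty_tier]
    return round(price, 2)

print(quote(120.0, 3, 2, refundable=False, loyalty_tier="none"))  # 360.0
```

The same room yields a different number for every combination of dates, party size, rate plan, and tier, which is why it cannot be indexed as a single stored price.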
An e-commerce search query is typically a keyword with optional filters. "Running shoes, size 42, black, under $100." The search engine matches keywords against product attributes and applies filters. The core operation is text matching plus filtering.
A travel search query always involves at least three dimensions that interact with each other: location, dates, and occupancy. "Hotels in Hoi An, July 15-18, 2 adults 1 child." These three dimensions are not independent filters. The location determines which properties exist. The dates determine which of those properties have availability. The occupancy determines which room types within available properties can accommodate the guests.
In e-commerce, you can filter sequentially: find all shoes, then filter by size, then by color. In travel, you need to evaluate all three dimensions simultaneously because a property might have availability for 2 adults on July 15-17 but not July 17-18, or it might have a room for 2 adults but not 2 adults plus 1 child on those specific dates.
This multi-dimensional constraint satisfaction is why travel search is computationally heavier than e-commerce search at the query level, and why GDS (Global Distribution System) APIs are notoriously slow. They are doing constraint satisfaction across millions of inventory records, not keyword matching.
E-commerce search has well-understood sorting defaults: relevance, price low to high, price high to low, bestselling, newest. These are straightforward to implement because the sorting criteria are attributes stored on the product.
Travel search sorting is more ambiguous. "Best" for a business traveler means close to the meeting location, with fast WiFi and a desk. "Best" for a family means close to the beach, with a pool and a kids' club. "Best" for a budget backpacker means cheapest per night with decent reviews. The same inventory, the same dates, completely different optimal orderings.
This is why personalization has a larger impact on conversion in travel search than in e-commerce search. In e-commerce, sorting by "bestselling" is a reasonable default that serves most users. In travel, there is no universal default that works. The platform either invests in personalized ranking or accepts that search results will feel generic to most users.
If you are moving from e-commerce to travel development, these are the adjustments that will save you the most time:
Rethink your caching strategy. E-commerce caching patterns (cache for 5-15 minutes, invalidate on purchase events) do not transfer. Travel search needs either much shorter TTLs or a two-phase approach: cached results for browsing, real-time confirmation at booking time. Budget for higher infrastructure costs.
Separate "display price" from "booking price." Accept that the price shown in search results will sometimes differ from the price at checkout. Build the UX to handle this gracefully (price change notifications, rate locks) rather than trying to eliminate the mismatch entirely. Eliminating it is prohibitively expensive at scale.
Index availability windows, not availability states. Instead of a boolean "available: true/false," store date ranges with room type and occupancy constraints. Your search index needs to answer "is this property available for these specific dates and this specific guest configuration?" not just "is this property available?"
Plan for multi-source data reconciliation from day one. If you are aggregating inventory from multiple suppliers or OTAs, build a normalization layer that maps different data formats into a unified schema before it hits your search index. Do not let supplier-specific data structures leak into your search logic.
Build the confirmation step into your booking flow. Unlike e-commerce where "add to cart" is low-stakes, travel search results go stale fast. The availability check at booking time is not optional. Design your UX and your API around the assumption that some results will be unavailable by the time the user clicks "book."
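The "availability windows" advice above can be sketched as a minimal index; the dates and occupancy constraints here are made up for illustration:

```python
from datetime import date, timedelta

# Sketch: index availability as date windows with occupancy constraints,
# so the index can answer "these specific dates, this guest configuration"
# rather than a boolean "available". Windows are illustrative.
WINDOWS = [
    # (start, end_exclusive, max_adults, max_children)
    (date(2026, 7, 10), date(2026, 7, 17), 2, 1),
    (date(2026, 7, 18), date(2026, 7, 25), 2, 0),
]

def available(checkin, checkout, adults, children):
    """Every night of the stay must fall inside one window that fits the party."""
    nights = [checkin + timedelta(days=n) for n in range((checkout - checkin).days)]
    def night_ok(night):
        return any(s <= night < e and adults <= a and children <= c
                   for s, e, a, c in WINDOWS)
    return all(night_ok(n) for n in nights)

print(available(date(2026, 7, 15), date(2026, 7, 17), 2, 1))  # True
print(available(date(2026, 7, 15), date(2026, 7, 18), 2, 1))  # False: the July 17 night isn't covered
```

This is the shape of query a boolean "available: true" flag can never answer: the second search fails on one uncovered night even though the property is "available" in general.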
Built by the engineering team at Adamo Software. We build custom platforms for travel, healthcare, and enterprise applications.
2026-04-14 10:52:58
When working on systems with user accounts, every language and its associated frameworks have their own way of handling user sessions. Beyond that, you can also display the remaining session time in an interactive, eye-catching way. Here I put together a small exercise to understand two things: how Flask manages session lifetime, and how to visualize it counting down with JavaScript.
In Flask, session support is built into the framework. We import it at the top of our route files, for example:
from flask import Blueprint, request, session, redirect, url_for, flash, render_template
In the authentication handler, we can use logic like the following.
if data:
    session.permanent = True
    session['usuario'] = data
    DBUtils.set_history(session['usuario'][0], 'login', 'Inicio de Sesion Correcto')
    usuarioModel.set_ultimo_acceso(session['usuario'][0], 'login')
    return redirect(url_for('auth.home'))
Here's what happens: after extracting the data, we verify it, and if it exists, the session is created. Setting session.permanent to True keeps the session active, while the session['usuario'] entry holds the session data. This is the basic way to handle user sessions, in Flask as well as in other frameworks such as Django.
Now, what if we only want the session to last a few minutes, for security? We can do that as follows. Since session.permanent = True keeps the session alive even after the browser is closed, we'll impose a limit using another Flask core feature, configured in the main application file.
First, import current_app from Flask core, which gives us a valid application reference in any route file that uses sessions.
from flask import current_app
Then set the following Flask property (timedelta comes from the standard library's datetime module):
from datetime import timedelta

app.permanent_session_lifetime = timedelta(minutes=10)
This property controls the session lifetime; for this example we'll cap it at 10 minutes. Now, to expose the value to every template, as mentioned earlier, we'll define a context processor in the main file that uses current_app to apply it site-wide.
@app.context_processor
def traer_restante():
    try:
        restante = int(current_app.permanent_session_lifetime.total_seconds())
    except Exception:
        restante = 600  # Default value (10 minutes)
    return {"restante": restante}
Now that the session lifetime is defined, we'll take the value from the context function and use it on the front end to build a countdown. For that, we'll define a JavaScript function that produces the effect.
function contadorTiempo() {
    if (restante >= 0) {
        var minutos = Math.floor(restante / 60)
        var segundos = restante % 60
        document.getElementById('contador').innerText = `Session time remaining: ${minutos}m ${segundos}s`
        restante--
        setTimeout(contadorTiempo, 1000)
        if (restante === 60) {
            // alerta() is the app's own notification helper
            alerta(2, 'Your session will expire in one minute')
        }
    } else {
        jconfirm({
            title: 'Session expired',
            content: 'Your session has expired. Please log in again.',
            type: 'blue',
            buttons: {
                cerrar: {
                    btnClass: 'btn-info ripple',
                    action: function() {
                        window.location.href = '/login'
                    }
                }
            }
        })
    }
}
This function reads the value of the variable "restante", which we'll define later in the view/template. Since I'm not using moment.js (a useful library for this kind of thing, but unnecessary for this example), we apply Math.floor() to split the time into minutes and seconds and write them into a DOM element with id "contador", decrementing the value one unit at a time, with setTimeout() advancing the countdown once per second.
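The minutes-and-seconds arithmetic is the same in any language; in Python, divmod does in one step what the JavaScript does with Math.floor and the % operator. An illustrative sketch:

```python
# divmod(seconds, 60) returns (quotient, remainder) in one call,
# i.e. the minutes and the leftover seconds of the countdown.
def format_remaining(seconds_left):
    minutes, seconds = divmod(seconds_left, 60)
    return f"Session time remaining: {minutes}m {seconds}s"

print(format_remaining(599))  # Session time remaining: 9m 59s
```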
Finally, we create two alerts: a warning when less than a minute remains, and a final one indicating that the session has expired.
Now we need to display the result. For that I'll use a div element with id "contador" (in this case, in the navbar), and the function call goes at the bottom of the HTML file.
<div id="contador" class="ms-2"></div>
<!-- Below </body> and the scripts in use, place the call to the function -->
<script>
    // Since we're reading a value from the backend, we use Jinja syntax to pull it in.
    var restante = {{ restante|default(600)|int }}
    contadorTiempo()
</script>
The end result is a minutes-and-seconds counter that will look something like this in the navigation bar.
As I mentioned before, there are libraries like moment.js that simplify all of this, but building it by hand is a good way to better understand sessions and their lifetimes. Something simple but "unbreakable" (or almost).
Here's a video of the process.