2026-03-12 18:27:09
Managing CSS at scale is one of the hardest challenges in front-end development. As projects grow, CSS can easily become brittle, hard to maintain, and prone to unintended side effects due to excessive inheritance and uncontrolled specificity.
This article introduces some popular methodologies: BEM, SMACSS, and OOCSS, and explains how they help create scalable, modular CSS architectures with clear naming conventions, low specificity, and better separation of concerns.
CSS was designed to cascade, but uncontrolled cascading often leads to specificity conflicts, unintended overrides, and styles that nobody dares to touch for fear of breaking something elsewhere.
When stylesheets become difficult to predict or extend, developers start fearing changes. The key is to avoid deeply nested selectors and manage specificity explicitly.
A modular CSS approach aims to keep specificity low, make class names predictable, and separate structural concerns from visual ones.
Let’s explore some well-known methodologies that encourage this.
BEM stands for Block Element Modifier, and it works like this:
Block: a standalone component (card)
Element: a part of a block, joined with two underscores (card__header)
Modifier: a variation of a block or element, joined with two hyphens (card--highlighted)
BEM creates predictable class names with low specificity and no dependence on DOM structure or nesting. Because specificity stays low, these class names are easy to override and compose.
<div class="card card--highlighted">
<div class="card__header">Title</div>
<div class="card__content">Content here</div>
</div>
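A minimal, flat stylesheet for the markup above might look like this (the specific declarations are illustrative, not prescribed by BEM):

```css
/* Block */
.card { border: 1px solid #ddd; border-radius: 4px; }

/* Elements: always a single class, no nesting required */
.card__header { font-weight: bold; padding: 0.5rem 1rem; }
.card__content { padding: 1rem; }

/* Modifier: same specificity as the block, so it composes cleanly */
.card--highlighted { border-color: gold; background: #fffbe6; }
```

Every selector here has a specificity of exactly one class, so any of them can be overridden by another single-class rule later in the stylesheet.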
Scalable and Modular Architecture for CSS (SMACSS) categorises CSS rules into five types: Base, Layout, Module, State, and Theme.
SMACSS encourages separation of concerns, promoting reusability and scalability, and it helps organise large codebases logically.
/* layout/_header.scss */
.header { ... }
/* module/_card.scss */
.card { ... }
/* state/_visibility.scss */
.is-hidden { display: none; }
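One way to pull these partials together is a single entry point per SMACSS category. This is a hedged sketch using Sass's `@use`; the `base/reset` partial is an assumed example, while the other paths follow the files above:

```scss
// main.scss — one @use per SMACSS category (illustrative layout)
@use 'base/reset';       // assumed base partial, not shown above
@use 'layout/header';
@use 'module/card';
@use 'state/visibility';
```

Keeping the entry point ordered by category makes it obvious where a new rule belongs.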
Object-Oriented CSS (OOCSS) promotes splitting styles into structure (sizing and layout) and skin (colors and visual appearance).
OOCSS encourages thinking of UI as reusable "objects" that can be extended visually through skins. It creates highly reusable elements, separating layout and appearance.
/* structure */
.box { display: block; padding: 1rem; }
/* skin */
.box--primary { background-color: blue; color: white; }
.box--secondary { background-color: grey; color: black; }
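In the markup, structure and skin compose by stacking classes, for example:

```html
<div class="box box--primary">Primary call-out</div>
<div class="box box--secondary">Secondary call-out</div>
```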
All these methodologies share common goals: flat, class-based selectors with low specificity, clear separation of concerns, and predictable naming for states such as .is-hidden or .is-disabled.
BEM, SMACSS, and OOCSS are not mutually exclusive: they share complementary principles that help you write predictable, maintainable, and scalable CSS. By embracing flat class-based selectors, separating concerns, and managing specificity, your front-end code will remain robust even as your project grows.
A clear modular architecture with naming conventions not only improves maintainability but also fosters team collaboration and confidence in styling changes.
2026-03-12 18:20:19
I recently launched the beta version of Monavo, a Telegram-first service for token swaps on the Solana network. The project is currently running in public testing, and the main goal of this stage is to observe how the architecture behaves in a real production environment. In this article, I want to explain the Monavo architecture, why the system is split between an edge API and a private backend service, and what engineering decisions help protect the system from duplicate requests, network issues, and unreliable external APIs.
When working with cryptocurrency transactions, infrastructure becomes just as important as the user interface. A duplicated request or network retry can potentially trigger the same operation twice. Because of that, the main focus during the development of Monavo was reliability. The system needed to remain predictable even when dealing with unstable networks, webhook retries, and third-party API limits.
Monavo was designed as a Telegram-first product. Telegram is responsible for authentication, notifications, and action triggers. The web application (/app) is used to confirm operations and display transaction details before they are executed.
Monavo is implemented as a monorepo with clearly separated modules. This structure keeps business logic isolated from infrastructure and makes the project easier to evolve over time.
The system has several core components. A public API runs on Cloudflare Workers and receives all incoming requests. A separate backend service performs heavier operations and interacts with the Solana ecosystem. The web application acts as the user interface where transactions are reviewed and confirmed.
Shared data contracts and types live in a dedicated package, while the database schema and migrations are maintained in a separate module. This structure allows infrastructure or UI layers to evolve independently without affecting the domain logic.
One of the most important architectural decisions in Monavo was to adopt an edge-first design.
The public API runs entirely on Cloudflare Workers. Workers process Telegram webhooks, handle requests from the web app, manage user sessions, and apply rate limiting.
Workers effectively act as a gateway between users and the internal backend service. This approach ensures that the public interface remains fast while critical logic stays isolated from direct internet access.
Since Workers run on Cloudflare’s edge network across multiple regions, API responses remain fast regardless of where users are located.
The heavy business logic is handled by a separate internal service, often referred to as the swap engine. This service prepares swap transactions and interacts with Solana infrastructure.
The backend server is not publicly accessible. It runs behind a Cloudflare Tunnel and accepts requests only from Workers.
This separation means the internal service is never exposed directly to the internet. Even if the public API is targeted by attacks, the core swap engine remains unreachable from outside the system.
Workers communicate with the backend using a service authorization key that is validated by the server. This creates a clear security boundary between the edge layer and the internal infrastructure.
All data exchange within the system is built around DTO contracts. Every incoming and outgoing payload is validated at the API boundary.
These contracts are defined using Zod and serve as a shared source of types across all parts of the system. This ensures that every service interprets the same data structures consistently.
This approach significantly reduces the risk of runtime errors and makes the API easier to maintain as the project evolves.
Monavo follows an approach similar to functional languages such as Elixir or Rust. Instead of throwing exceptions, functions return structured result objects.
Each response includes an isFault flag that indicates whether an error occurred. If an error happens, the response also includes an error code and a message.
This approach prevents unexpected exceptions from interrupting execution flows. Clients always receive a predictable response structure and can safely handle errors.
Example responses look like this:
{
"isFault": false,
"data": {}
}
or
{
"isFault": true,
"code": "RATE_LIMIT",
"message": "Too many requests"
}
One of the most common issues in distributed systems is duplicate requests. These may appear due to network retries, repeated HTTP requests, or duplicated webhook deliveries.
To address this, Monavo implements idempotency for command operations. Each request receives a unique key, and the system stores both the request hash and its result.
If the same request is received again, the previously stored response is returned. If the payload differs, the server returns a conflict error.
This mechanism ensures that operations cannot be accidentally executed multiple times.
User authentication begins in Telegram. After authentication, the web application receives a one-time token used to establish a session.
This token has a limited lifetime and is stored in the database only as a hash. Once consumed, it becomes invalid.
The session itself is stored in a secure cookie with the standard hardening attributes set: HttpOnly, Secure, and SameSite.
Session tokens are also periodically rotated, which reduces the risk of token compromise.
Monavo interacts with several external services, including Solana infrastructure providers.
To prevent API overload and accidental blocking, these integrations are wrapped in request rate limiters. Additional caching and periodic updates are used to reduce external load and improve system responsiveness.
This combination ensures that the system remains stable even when external services behave unpredictably.
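A request rate limiter of the kind mentioned above is commonly built as a token bucket. This is a minimal sketch under assumed parameters, not Monavo's actual limiter:

```javascript
// Token-bucket limiter sketch for wrapping calls to external APIs.
// capacity: burst size; refillPerSecond: sustained request rate.
// The injectable clock (now) makes the limiter testable.
function createLimiter({ capacity, refillPerSecond }, now = () => Date.now()) {
  let tokens = capacity;
  let last = now();
  return function tryAcquire() {
    const current = now();
    // Refill proportionally to elapsed time, capped at capacity.
    tokens = Math.min(capacity, tokens + ((current - last) / 1000) * refillPerSecond);
    last = current;
    if (tokens >= 1) {
      tokens -= 1;
      return true;
    }
    return false; // caller should queue, retry later, or serve from cache
  };
}
```

Each outbound integration gets its own bucket, so a burst against one provider cannot exhaust the budget for another.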
Workers are deployed using GitHub Actions. Whenever code changes are pushed, the deployment pipeline automatically updates the Cloudflare Workers environment.
Deployment uses configuration options that prevent accidental overwriting of runtime secrets and environment variables.
The internal backend service runs on a VPS and is managed with PM2. This setup allows fast updates and provides a simple mechanism for monitoring and restarting the service if necessary.
Despite the planned architecture, several issues appeared during early testing of Monavo.
The first problem involved Telegram webhooks. Initially the system did not use idempotency. In practice, Telegram sometimes delivers the same event multiple times, especially during network delays or webhook retries.
This occasionally caused the same operation to run twice. The solution was to introduce idempotency keys and payload hashing so duplicate requests return the original result.
Another issue appeared after moving the entire API to Cloudflare Workers. Workers are excellent for fast API operations, but they are not ideal for long-running or computationally heavy tasks.
Some swap flows required longer execution times than edge functions comfortably support. As a result, the architecture was adjusted so Workers act as a gateway while heavy operations run on a dedicated backend service.
A third challenge appeared when interacting with Solana RPC nodes. Everything seemed stable during testing, but higher request volumes led to rate limit errors and inconsistent responses.
This required adding request limiters, caching layers, and controlled request queues for RPC calls.
Another important lesson came from error handling. Early versions of the system relied heavily on exceptions. Over time this created complex try/catch chains and unpredictable API responses.
Eventually the system was refactored to the result-based approach described earlier. This change made error handling much more predictable and simplified client-side logic.
The core idea behind the Monavo architecture is to separate the system into two layers.
The edge layer handles user interaction and must remain extremely fast. The core layer executes critical business logic and stays isolated from direct public access.
This approach reduces the attack surface, simplifies scaling, and makes the system more resilient to network instability.
Monavo is currently running in beta. The primary goal of this stage is to observe how the architecture performs under real-world conditions and identify potential bottlenecks as the user base grows.
If the system proves stable under production traffic, the next step will be expanding the platform with additional tools for interacting with the Solana ecosystem.
The beta version is already live and gradually opening to more users, so the coming months will provide a good test of how well the current architecture performs in practice.
2026-03-12 18:18:09
This article combines real business scenarios and result data to review how we optimized several slow queries that frequently timed out, achieving responses within seconds.
Backend and monitoring developers have likely experienced this anxiety: when the log data volume reaches a certain size, previously smooth queries start to fail or become unresponsive. The monitoring service sends a flood of alarms, or stakeholders need data urgently, but the log API you invoke stalls, eventually returning a request timeout.
Recently, we collaborated with a power user (a large business team) to implement Simple Log Service (SLS) materialized views in their core log scenarios. We compared the performance before and after the feature was enabled in the production environment. Whether measured by hard performance data or actual user experience, the difference is significant.
Case 1: No more timeouts under high-concurrency SDK load
This is a very typical automated monitoring scenario. The user's monitoring service invokes the log API at high frequency via the SDK to pull invocation latency data between services.
The difficulty lies in "high concurrency + dynamic conditions." The monitoring program sends a large number of requests in a short time, and the query conditions for each request change. For example, it queries columnx:"abc" in one second, and columnx:"abd" in the next. This usage puts significant pressure on the backend. Before optimization, the average query took 4100 ms. This creates a vicious loop: slow queries -> thread pool backlog -> concurrent processes competing for resources -> eventually widespread timeouts.
SQL after removing business semantics:
query | select
  column1, column2, column3,
  (timestamp - timestamp % 3600) as time_slot,
  count(*) as cnt,
  avg(metric_val) as avg_lat
from log
group by column1, column2, column3, time_slot
After using materialized views, the query duration plummeted to 46 ms, an 89-fold performance improvement. More importantly, no matter how high the SDK concurrency is or how the query conditions change, only the pre-computed result needs to be read, so the response time is very stable, completely eliminating timeouts under high concurrency.
Case 2: Taming the performance killer: distinct count operations
Anyone who has worked with data knows that count(distinct) is a notoriously resource-intensive operation, especially with large data volumes.
User SQL:
query | select
  project_id,
  count(1) as event_cnt,
  count(distinct hash_val) as issue_cnt
from log
group by project_id
To count distinct issue signatures (represented by the hash value), this SQL struggles when the data volume is large.
Before optimization: this query took an average of 16.8 s. If the time range is slightly extended (such as viewing the trend over the past month), or peak traffic is slightly higher, the query often fails.
After optimization: accelerated by materialized views, the query time dropped to 2.2 s, an 8-fold performance improvement, transforming this feature from "frequently unusable" to "reliably available."
Case 3: Comparative Analysis, from "54 s timeout" to "second-level response"
This is the scenario with the largest performance improvement in this optimization. The user needs to view the comparative change of operation-log read latency (comparing data from 1 day ago, 3 days ago, and 7 days ago).
User SQL:
type:read |
select
  time,
  diff[1] as day1,
  diff[2] as day2,
  diff[3] as day3,
  diff[4] as day7
from (
  select
    time,
    ts_compare(avg_latency, 86400, 172800, 604800) as diff
  from (
    select
      avg(latency) as avg_latency,
      date_trunc('hour', __time__) as time
    from log
    group by time
  )
  group by time
  order by time
)
This SQL involves ts_compare and multiple layers of subquery nesting. When the query time range is large, the computational load is very high.
Before optimization: duration was 54.3 s. If the backend service jitters slightly, the user's request times out, rendering the feature essentially unusable.
After optimization: duration is 958 ms. From a wait of nearly one minute, it dropped to under one second, a 56-fold performance improvement. This change from "unqueryable" to "near-instantaneous results" is the most tangible for O&M personnel waiting on the data.
A cost-benefit analysis
The ROI (Return on Investment) of this optimization is very high:
● High utilization: in one day, these views served a cumulative total of 10,223 queries.
● Extremely low cost: you may worry that storing a copy of the results is expensive. In fact, the added storage cost is less than 0.1% of the raw log storage fee, which is negligible.
Summary
Based on this practical experience, we also summarized three scenarios most suitable for SLS materialized views. If your business also fits the following situations, enable materialized views directly:
Tackling intractable slow queries: SQL that contains heavy deduplication statistics (count distinct), high-precision percentile calculations (approx_percentile), or analytics over long time ranges, as in Case 3. When the raw data volume is large, these operations are difficult to complete within a few seconds regardless of optimization, and may even time out. Materialized views pre-process these computationally expensive tasks, turning "timeouts" into "second-level responses."
Scenarios requiring highly responsive user interfaces: it is not enough just to avoid timeouts. For data products directly facing users, or core dashboards that executives view every day, 10 seconds and 1 second provide completely different experiences. If your target is to make dashboard operations as smooth as working with a local spreadsheet, pre-computation is essential.
A safeguard against high-concurrency failures: this is the most easily overlooked point. Often a single slow query is tolerable, but once a failure occurs, dozens of people refresh the dashboard at the same time, hundreds of concurrent requests arrive from automated inspection scripts (SDKs), and resource bottlenecks on the server are easily triggered. The essence of a materialized view is to turn expensive "on-the-fly computation" into low-latency table lookups. At critical moments, this is the cornerstone that prevents the system from being overwhelmed.
A picture is worth a thousand words. We have condensed the core performance metrics and best-fit scenarios of this practice into the following infographic, hoping to provide a reference for your performance optimization.
2026-03-12 18:16:43
Many businesses assume Electronic Data Interchange (EDI) is outdated technology. In reality, EDI still powers a huge portion of global B2B commerce. Retailers, logistics providers, and manufacturers rely on it every day to exchange purchase orders, invoices, shipment notices, and inventory data.
What has changed is how companies implement EDI. Instead of relying on rigid legacy infrastructure, modern businesses are adopting API-driven EDI platforms that connect systems faster and provide greater visibility.
I saw this transition while helping a wholesale distributor expand into large retail partnerships. The company had strong demand, but every retailer required EDI integration before orders could begin. Their older system required weeks of mapping and testing for each new connection. Once they moved to a more modern EDI approach, onboarding new partners became noticeably faster and far less stressful for the operations team.
The experience made it clear that modern EDI technology is less about replacing EDI and more about modernizing how it works.
What EDI Actually Does in B2B Operations
Electronic Data Interchange allows businesses to exchange structured documents directly between computer systems. Instead of manually entering order details or invoice information, data moves automatically between partners.
Common EDI document types include purchase orders (X12 850), invoices (810), advance shipment notices (856), and inventory advice (846).
By automating document exchange, EDI eliminates repetitive data entry and helps ensure consistency across trading partners.
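As a rough illustration of what "structured documents" means in practice, here is a fragment of an X12 purchase order; the segment values (PO number, quantities, item code) are invented:

```text
ST*850*0001~                          transaction set header (850 = purchase order)
BEG*00*SA*PO-12345**20260301~         order number and date
PO1*1*10*EA*9.99**UP*012345678905~    line item: 10 each at 9.99, identified by UPC
CTT*1~                                total line items
SE*5*0001~                            transaction set trailer
```

Because every segment and element position has an agreed meaning, the receiving system can parse the order without any human interpretation.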
Why Traditional EDI Systems Struggle to Keep Up
Many companies implemented EDI years ago using on-premise infrastructure and private networks. While these systems still function, they can create bottlenecks when organizations try to grow.
Some common issues include slow partner onboarding, weeks of manual mapping and testing for each new connection, limited visibility into document flow, and dependence on scarce legacy expertise.
As supply chains become more digital, businesses need faster and more flexible ways to manage partner integrations.
6 Advantages of Modern API-Driven EDI Platforms
Modern EDI platforms combine traditional document standards with cloud infrastructure and APIs. This approach offers several benefits for growing organizations.
1. Faster Trading Partner Integrations
Legacy EDI onboarding could take months depending on document mapping complexity. API-enabled platforms streamline this process and allow partners to connect much faster.
This helps businesses begin transacting sooner.
2. Real-Time Transaction Visibility
Many modern platforms include dashboards that allow teams to monitor document flow in real time. Instead of waiting for error reports, teams can identify issues immediately.
This improves operational reliability.
3. Simplified Infrastructure
Cloud-based EDI environments remove the need for dedicated servers and complex internal systems. Infrastructure is managed externally, reducing operational overhead.
IT teams can focus on strategic projects rather than maintaining legacy integrations.
4. Easier System Integrations
Modern EDI platforms integrate more easily with ERP systems, warehouse software, and ecommerce tools. This ensures operational data flows smoothly across the business.
Connected systems reduce manual reconciliation work.
5. Improved Data Accuracy
Manual entry of orders or invoices increases the chance of errors. Automated document exchange ensures information moves directly between systems without retyping.
This leads to fewer disputes and faster order processing.
6. Scalability for Expanding Partner Networks
As businesses grow, they work with more suppliers, distributors, and retailers. Modern EDI infrastructure allows organizations to add partners without dramatically increasing technical complexity.
This scalability supports long-term growth.
The Emergence of EDI Networks
Another important change in the EDI landscape is the rise of network-based connectivity. Instead of building individual integrations with each trading partner, companies can connect through centralized networks.
These networks simplify the process of exchanging standardized documents across large partner ecosystems. Providers like Orderful are helping enable this model by offering platforms that allow businesses to connect with trading partners through a unified EDI infrastructure.
This approach reduces onboarding friction while improving visibility into transaction flows.
Final Thoughts
Electronic Data Interchange remains a cornerstone of B2B commerce, even as technology evolves. What is changing is how businesses implement and manage their EDI systems.
Modern platforms combine the reliability of traditional EDI with the flexibility of cloud infrastructure and APIs. Businesses gain faster integrations, improved transaction visibility, and the ability to scale their partner networks more efficiently.
For companies operating in complex supply chains, modern EDI infrastructure is becoming a critical foundation for reliable and scalable B2B communication.
2026-03-12 18:16:07
In the modern web era, passwords are no longer sufficient. They are the root cause of over 80% of data breaches, subject to phishing, reuse, and onerous complexity rules. The industry has spoken: passkeys are the future.
Passkeys, built on the Web Authentication (WebAuthn) and FIDO2 standards, replace traditional passwords with cryptographic key pairs. Your device (iPhone, Android, Windows Hello, YubiKey) stores a private key, while the server only ever sees the public key. No hashes to steal, no passwords to reset and inherently phishing-resistant.
In this comprehensive guide, we will build a 100% passwordless authentication system using Symfony and the official web-auth/webauthn-symfony-bundle. We will eliminate the concept of a password entirely from our application. No fallback, no “reset password” links. Just pure, secure, biometric-backed passkeys.
Passkeys work by replacing a shared secret (password) with a public/private key pair. The private key never leaves the user’s Apple device (iPhone, Mac, iPad) and the public key is stored on your Symfony server.
Run the following command to install the necessary dependencies:
composer require web-auth/webauthn-symfony-bundle:^5.2 \
web-auth/webauthn-stimulus:^5.2 \
symfony/uid:^7.4
We use @simplewebauthn/browser via AssetMapper because it provides excellent wrapper functions for the native browser WebAuthn APIs. Apple Passkeys require a frontend interaction, which in a modern Symfony environment is best handled via a Stimulus controller (React or Vue modules work just as well).
This is where our application dramatically diverges from a traditional Symfony app. We are going to strip passwords entirely from the system.
Standard Symfony User entities aren’t equipped to store Passkey metadata (like AAGUIDs or public key Cose algorithms). We need a dedicated entity to store the credentials.
Our User entity implements Symfony\Component\Security\Core\User\UserInterface. Noticeably absent is the PasswordAuthenticatedUserInterface.
namespace App\Entity;
use App\Repository\UserRepository;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Uid\Uuid;
use Symfony\Component\Validator\Constraints as Assert;
#[ORM\Entity(repositoryClass: UserRepository::class)]
#[ORM\Table(name: '`user`')]
class User implements UserInterface
{
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column]
private ?int $id = null;
#[ORM\Column(length: 255, unique: true)]
private ?string $userHandle = null;
#[ORM\Column(length: 180, unique: true)]
#[Assert\NotBlank]
#[Assert\Email]
private ?string $email = null;
public function __construct()
{
$this->userHandle = Uuid::v4()->toRfc4122();
}
...
}
A single user can have multiple passkeys (e.g., Face ID on their phone, Touch ID on their Mac, a YubiKey on their keychain). We need an entity to store these public keys and their associated metadata.
Create src/Entity/PublicKeyCredentialSource.php. This entity must be capable of translating to and from the bundle’s native Webauthn\PublicKeyCredentialSource object.
Crucially, we must preserve the TrustPath. Failing to do so destroys the attestation data needed if you ever require high-security enterprise hardware keys.
namespace App\Entity;
use App\Repository\PublicKeyCredentialSourceRepository;
use Doctrine\ORM\Mapping as ORM;
use Webauthn\PublicKeyCredentialSource as WebauthnSource;
#[ORM\Entity(repositoryClass: PublicKeyCredentialSourceRepository::class)]
#[ORM\Table(name: 'webauthn_credentials')]
class PublicKeyCredentialSource extends WebauthnSource
{
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column]
private ?int $id = null;
public function getId(): ?int
{
return $this->id;
}
}
You must also implement a CredentialSourceRepository that implements Webauthn\Bundle\Repository\PublicKeyCredentialSourceRepository.
namespace App\Repository;
use App\Entity\PublicKeyCredentialSource;
use Doctrine\Bundle\DoctrineBundle\Repository\ServiceEntityRepository;
use Doctrine\Persistence\ManagerRegistry;
use Symfony\Component\ObjectMapper\ObjectMapperInterface;
use Webauthn\Bundle\Repository\PublicKeyCredentialSourceRepositoryInterface;
use Webauthn\Bundle\Repository\CanSaveCredentialSource;
use Webauthn\PublicKeyCredentialSource as WebauthnSource;
use Webauthn\PublicKeyCredentialUserEntity;
class PublicKeyCredentialSourceRepository extends ServiceEntityRepository implements PublicKeyCredentialSourceRepositoryInterface, CanSaveCredentialSource
{
public function __construct(ManagerRegistry $registry, private readonly ObjectMapperInterface $objectMapper)
{
parent::__construct($registry, PublicKeyCredentialSource::class);
}
public function findOneByCredentialId(string $publicKeyCredentialId): ?WebauthnSource
{
return $this->findOneBy(['publicKeyCredentialId' => $publicKeyCredentialId]);
}
public function findAllForUserEntity(PublicKeyCredentialUserEntity $publicKeyCredentialUserEntity): array
{
return $this->findBy(['userHandle' => $publicKeyCredentialUserEntity->id]);
}
public function saveCredentialSource(WebauthnSource $publicKeyCredentialSource): void
{
$entity = $this->findOneBy(['publicKeyCredentialId' => base64_encode($publicKeyCredentialSource->publicKeyCredentialId)])
?? $this->objectMapper->map($publicKeyCredentialSource, PublicKeyCredentialSource::class);
$this->getEntityManager()->persist($entity);
$this->getEntityManager()->flush();
}
}
The WebAuthn bundle relies on abstract interfaces to find and persist users and credentials. Our repositories must implement these interfaces.
The UserRepository implements PublicKeyCredentialUserEntityRepositoryInterface. Because we want the bundle to handle user creation automatically during a passkey registration, we also implement CanRegisterUserEntity and CanGenerateUserEntity.
namespace App\Repository;
use App\Entity\User;
use Doctrine\Bundle\DoctrineBundle\Repository\ServiceEntityRepository;
use Doctrine\Persistence\ManagerRegistry;
use Symfony\Component\Uid\Uuid;
use Webauthn\Bundle\Repository\CanGenerateUserEntity;
use Webauthn\Bundle\Repository\CanRegisterUserEntity;
use Webauthn\Bundle\Repository\PublicKeyCredentialUserEntityRepositoryInterface;
use Webauthn\Exception\InvalidDataException;
use Webauthn\PublicKeyCredentialUserEntity;
class UserRepository extends ServiceEntityRepository implements PublicKeyCredentialUserEntityRepositoryInterface, CanRegisterUserEntity, CanGenerateUserEntity
{
public function __construct(ManagerRegistry $registry)
{
parent::__construct($registry, User::class);
}
public function saveUserEntity(PublicKeyCredentialUserEntity $userEntity): void
{
$user = new User();
$user->setEmail($userEntity->name);
$user->setUserHandle($userEntity->id);
$this->getEntityManager()->persist($user);
$this->getEntityManager()->flush();
}
public function generateUserEntity(?string $username, ?string $displayName): PublicKeyCredentialUserEntity
{
return new PublicKeyCredentialUserEntity(
$username ?? '',
Uuid::v4()->toRfc4122(),
$displayName ?? $username ?? ''
);
}
...
Apple requires specific “Relying Party” (RP) information. This identifies your application to the user’s iCloud Keychain.
Create or update config/packages/webauthn.yaml:
webauthn:
allowed_origins: ['%env(WEBAUTHN_ALLOWED_ORIGINS)%']
credential_repository: 'App\Repository\PublicKeyCredentialSourceRepository'
user_repository: 'App\Repository\UserRepository'
creation_profiles:
default:
rp:
name: '%env(RELYING_PARTY_NAME)%'
id: '%env(RELYING_PARTY_ID)%'
request_profiles:
default:
rp_id: '%env(RELYING_PARTY_ID)%'
WebAuthn is incredibly strict about domains. A passkey created for example.com cannot be used on phishing-example.com. To ensure our application is portable across environments, we define our Relying Party (RP) settings in the .env file.
Open .env or .env.local and add:
###> web-auth/webauthn-symfony-bundle ###
RELYING_PARTY_ID=localhost
RELYING_PARTY_NAME="My Application"
WEBAUTHN_ALLOWED_ORIGINS=localhost
###< web-auth/webauthn-symfony-bundle ###
In production, RELYING_PARTY_ID must be your exact root domain (e.g., example.com), and WebAuthn requires a secure HTTPS context; browsers only exempt localhost for development.
Passkey registration is a two-step handshake: the browser first fetches creation options from the server, then sends the newly created credential back to the server for verification.
Security is paramount. Even though WebAuthn is inherently phishing-resistant, your endpoints are still vulnerable to traditional Cross-Site Request Forgery (CSRF) if left unprotected. We will pass Symfony’s built-in CSRF tokens via headers in our fetch() calls.
Assuming you have a standard CSRF helper (like csrf_protection_controller.js that extracts the token from a meta tag or hidden input) we inject it into our Passkey controller.
import { Controller } from '@hotwired/stimulus';
import { startRegistration, startAuthentication } from '@simplewebauthn/browser';
import { generateCsrfHeaders } from './csrf_protection_controller.js';

export default class extends Controller {
    static values = {
        optionsUrl: String,
        resultUrl: String,
        isLogin: Boolean
    }

    connect() {
        console.log('Passkey controller connected! 🔑');
    }

    async submit(event) {
        event.preventDefault();

        const username = this.element.querySelector('[name="username"]')?.value;
        if (!this.isLoginValue && !username) {
            alert('Please provide a username/email');
            return;
        }

        const csrfHeaders = generateCsrfHeaders(this.element);

        try {
            // 1. Fetch options
            const response = await fetch(this.optionsUrlValue, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json', ...csrfHeaders },
                body: username ? JSON.stringify({ username: username, displayName: username }) : '{}'
            });

            if (!response.ok) {
                const errorData = await response.json().catch(() => ({}));
                throw new Error(errorData.errorMessage || 'Failed to fetch WebAuthn options from server');
            }

            const options = await response.json();

            // 2. Trigger Apple's Passkey UI (Create or Get)
            let credential;
            if (this.isLoginValue) {
                credential = await startAuthentication({ optionsJSON: options });
            } else {
                credential = await startRegistration({ optionsJSON: options });
            }

            // 3. Send result back to verify
            const result = await fetch(this.resultUrlValue, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json', ...csrfHeaders },
                body: JSON.stringify(credential)
            });

            if (result.ok) {
                window.location.reload();
            } else {
                const errorText = await result.text();
                alert('Authentication failed: ' + errorText);
            }
        } catch (e) {
            console.error(e);
            alert('WebAuthn process failed: ' + e.message);
        }
    }
}
Next, register the bundle's routes using the webauthn routing type. Create config/routes/webauthn_routes.yaml:
webauthn_routes:
    resource: .
    type: webauthn
To allow users to log in with their Passkey, we need to configure Symfony's security layer (the modern Authenticator system, which replaced the old Guard component).
In config/packages/security.yaml:
security:
    providers:
        app_user_provider:
            entity:
                class: App\Entity\User
                property: email
    firewalls:
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false
        main:
            lazy: true
            provider: app_user_provider
            webauthn:
                authentication:
                    routes:
                        options_path: /login/passkey/options
                        result_path: /login/passkey/result
                registration:
                    enabled: true
                    routes:
                        options_path: /register/passkey/options
                        result_path: /register/passkey/result
                success_handler: App\Security\AuthenticationSuccessHandler
                failure_handler: App\Security\AuthenticationFailureHandler
            logout:
                path: app_logout
    access_control:
        - { path: ^/dashboard, roles: ROLE_USER }
Because WebAuthn ceremonies involve AJAX fetch() requests from the frontend, a standard Symfony redirect on failure (e.g., trying to register an email that already exists) will be silently swallowed by the browser, resulting in a frustrating user experience.
We implement a custom AuthenticationFailureHandler that returns a clean 401 Unauthorized JSON response when the request is AJAX.
Create src/Security/AuthenticationFailureHandler.php:
<?php

namespace App\Security;

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Http\Authentication\AuthenticationFailureHandlerInterface;
use Symfony\Component\Security\Http\SecurityRequestAttributes;

readonly class AuthenticationFailureHandler implements AuthenticationFailureHandlerInterface
{
    public function __construct(private UrlGeneratorInterface $urlGenerator) {}

    public function onAuthenticationFailure(Request $request, AuthenticationException $exception): RedirectResponse|JsonResponse
    {
        if ($request->getContentTypeFormat() === 'json' || $request->isXmlHttpRequest()) {
            return new JsonResponse([
                'status' => 'error',
                'errorMessage' => $exception->getMessageKey(),
            ], Response::HTTP_UNAUTHORIZED);
        }

        // Store the error in the session so the login form can display it
        $request->getSession()->set(SecurityRequestAttributes::AUTHENTICATION_ERROR, $exception);

        return new RedirectResponse($this->urlGenerator->generate('app_login'));
    }
}
Since Passkeys often bypass the traditional login form, you need to define where the user goes after a successful handshake. Create src/Security/AuthenticationSuccessHandler.php:
<?php

namespace App\Security;

use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Http\Authentication\AuthenticationSuccessHandlerInterface;

readonly class AuthenticationSuccessHandler implements AuthenticationSuccessHandlerInterface
{
    public function __construct(private UrlGeneratorInterface $urlGenerator) {}

    public function onAuthenticationSuccess(Request $request, TokenInterface $token): RedirectResponse
    {
        return new RedirectResponse($this->urlGenerator->generate('app_dashboard'));
    }
}
Transitioning to Apple Passkeys with Symfony 7.4 isn't just a security upgrade; it's a significant improvement to your user experience. By removing the friction of passwords, "forgot password" emails, and complex character requirements, you increase conversion and user retention.
As a senior developer or lead, your priority is ensuring that this implementation remains maintainable. By sticking to the WebAuthn-Symfony-Bundle and PHP 8.x attributes, you ensure that your codebase remains idiomatic and ready for future Symfony LTS releases.
Source Code: You can find the full implementation and follow the project’s progress on GitHub: [https://github.com/mattleads/PasskeysAuth]
If you found this helpful or have questions about the implementation, I’d love to hear from you. Let’s stay in touch and keep the conversation going across these platforms:
LinkedIn: [https://www.linkedin.com/in/matthew-mochalkin/]
X (Twitter): [https://x.com/MattLeads]
Telegram: [https://t.me/MattLeads]
GitHub: [https://github.com/mattleads]
2026-03-12 18:14:39
If you have spent a decade building large-scale backend systems, you know that integrating modern, slow-running workloads—like LLM prompts or complex AI tasks—into legacy synchronous architectures is a massive headache.
Standard HTTP REST calls are inherently brittle for this. If an AI model takes 45 seconds to generate a response, your traditional API gateway or HTTP client will likely time out at the 30-second mark. The connection drops, the user gets a 504 Gateway Timeout, and the backend CPU cycles are completely wasted.
The textbook architectural answer is to introduce a message broker to act as a shock absorber. But what if your client-facing frontend requires a synchronous, Request-Reply experience?
You have to build a "Sync-over-Async" bridge. And if you are using Azure Service Bus, doing this at a massive scale exposes a critical bottleneck.
The Problem with Service Bus Sessions
When implementing a Request-Reply pattern on Azure Service Bus, the default recommendation is to use Sessions. You send a message with a specific SessionId, and your consumer locks onto that session to receive the reply.
It works beautifully in small systems, but it fails spectacularly at scale for two reasons:
The "Sticky" Bottleneck: Sessions create exclusive locks. If one session has 1,000 messages and another has 10, a consumer gets stuck on the heavy session while other pods sit idle.
Hard Limits: On the Standard tier, you are limited to 1,500 concurrent sessions. If you are scaling to hundreds or thousands of Spring Boot replicas during a massive traffic spike, you will hit a wall.
If you try to bypass sessions by having thousands of replicas listen to a single shared reply queue, you create a "competing consumer" disaster, wasting CPU cycles and thrashing the broker.
The Enterprise Solution: The Filtered Topic Pattern
To build a highly scalable, session-less Request-Reply architecture, we need to shift from Queues to Topics with SQL Filters.
Here is how the architecture flows:
The Request: The Spring Boot application generates a unique InstanceId on startup. It sends the request to a standard queue, attaching a custom property: ReplyToInstance = 'Instance-123'.
The Dynamic Subscription: When the pod boots up, it dynamically provisions a lightweight Subscription to a global reply-topic.
The Magic (SQL Filter): We apply a SqlRuleFilter to that subscription: ReplyToInstance = 'Instance-123'.
By leveraging the broker's data plane to evaluate the SQL filter, Azure Service Bus does the heavy lifting. Pod #123 only receives messages destined for Pod #123. There is zero thrashing, no session limits, and you get pure horizontal elasticity.
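To make the routing guarantee concrete, here is a hypothetical, in-memory sketch of what the broker-side filter achieves. In reality Azure Service Bus evaluates the SQL rule on its data plane; the class and method names below are purely illustrative, modeling each subscription as a predicate over message application properties:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Minimal in-memory model of a topic with filtered subscriptions.
// Illustrates why a pod only ever receives its own replies.
class FilteredTopic {
    record Message(Map<String, String> properties, String body) {}

    private final Map<String, Predicate<Message>> filters = new HashMap<>();
    private final Map<String, List<Message>> inboxes = new HashMap<>();

    // Conceptual equivalent of creating a subscription with
    // SqlRuleFilter("ReplyToInstance = '<instanceId>'").
    void subscribe(String subscriptionName, String instanceId) {
        filters.put(subscriptionName,
                m -> instanceId.equals(m.properties().get("ReplyToInstance")));
        inboxes.put(subscriptionName, new ArrayList<>());
    }

    // The "data plane": evaluate every subscription's filter per message
    // and deliver only to subscriptions whose filter matches.
    void publish(Message message) {
        filters.forEach((name, filter) -> {
            if (filter.test(message)) {
                inboxes.get(name).add(message);
            }
        });
    }

    List<Message> received(String subscriptionName) {
        return inboxes.get(subscriptionName);
    }
}
```

A message stamped with `ReplyToInstance = 'Instance-123'` lands only in the matching subscription's inbox; every other pod's subscription filter rejects it without the pod ever seeing it.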
Introducing the Sentinel Service Bus Starter
Wiring up the Azure Administration Client to dynamically provision and clean up these filtered subscriptions—while managing reactive CompletableFuture mappings—is a lot of boilerplate.
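The correlation side of that boilerplate looks roughly like this (a simplified sketch, not the actual Sentinel internals): every outgoing request parks a CompletableFuture under a correlation ID, and the reply listener completes the matching future when a message arrives on the filtered subscription.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of sync-over-async correlation: each request registers
// a CompletableFuture keyed by a correlation ID; the subscription listener
// completes it when the reply comes back from the broker.
class ReplyCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when sending a request; the correlation ID travels with the
    // message (e.g., as a broker application property).
    CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the reply listener when a matching message arrives.
    void complete(String correlationId, String payload) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(payload);
        }
    }
}
```

A production version also needs timeouts (completing stragglers exceptionally) and cleanup of entries whose replies never arrive, which is exactly the boilerplate the starter hides.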
To solve this, I built the Sentinel Service Bus Starter, a plug-and-play Spring Boot library that abstracts this entire pattern into a single dependency. It acts as the core engine for an AI-Native Gateway concept designed to modernize legacy software systems without rewriting the clients.
How it works:
Just drop the dependency into your build.gradle, provide your connection string in application.yml, and inject the SentinelTemplate:
@RestController
@RequestMapping("/api/v1/gateway")
public class GatewayController {

    private final SentinelTemplate sentinelTemplate;

    public GatewayController(SentinelTemplate sentinelTemplate) {
        this.sentinelTemplate = sentinelTemplate;
    }

    @PostMapping("/process")
    public CompletableFuture<ResponseEntity<String>> processRequest(@RequestBody String payload) {
        // Sends to the ASB Queue, waits on the dynamic Topic Subscription
        return sentinelTemplate.sendAndReceive(payload)
                .thenApply(ResponseEntity::ok)
                .exceptionally(ex -> ResponseEntity.internalServerError().build());
    }
}
Because it leverages Java 21's Virtual Threads (Project Loom) under the hood, Tomcat HTTP threads are never blocked while waiting for the Service Bus round-trip, allowing incredible throughput even when waiting 60 seconds for an AI workload to finish.
Bridging the Legacy Gap
We don't always have the luxury of migrating our entire ecosystem to Event-Driven Architecture overnight. Sometimes, you just need a bulletproof, highly scalable Gateway to protect your modern backends from synchronous legacy clients.
I’d love to hear how other teams are tackling the Sync-over-Async problem in the comments!