The Practical Developer

Transactions in Spring Boot: What `@Transactional` Really Does (and Why It Matters)

2026-02-02 05:47:54

1. Introduction: Why Transactions Matter More Than You Think

If you’ve been working with Spring Boot for a while, chances are you’ve already used @Transactional.

Maybe you added it because “that’s what everyone does”, or because a tutorial told you so.

And most of the time… things seem to work.

Until one day:

  • a payment is charged but the order is not created
  • a user is created, but their profile is missing
  • a retry suddenly creates duplicate data
  • or worse: production data ends up in an inconsistent state

That’s usually the moment when you realize that transactions are not just a database feature — they are a core business concept.

In backend development, especially in stateful systems, partial success is often worse than total failure.

A transaction is what allows you to say:

“Either everything succeeds, or nothing does.”

Spring Boot makes transactions easy to use, but also easy to misunderstand.

And misunderstanding them can lead to subtle bugs that are extremely hard to debug.

In this article, we’ll start from the basics:

  • what a transaction really is
  • why it is fundamental in backend systems
  • and how this concept maps to Spring Boot and @Transactional

Before touching any annotation, we need to get the mental model right.

2. What Is a Transaction? (The Basics, Done Right)

At its core, a transaction is a logical unit of work.

It groups multiple operations into a single, indivisible action.

Think about a very common backend use case:

  • create an order
  • decrease product stock
  • save a payment record

If one of these steps fails, the system should not be left in a half-completed state.

Without transactions, your system might end up like this:

✔ Order created
✔ Payment saved
✘ Stock update failed

That’s not a technical issue.

That’s a business problem.

The ACID Properties

Transactions are usually described using the ACID acronym. You’ve probably heard of it, but let’s translate it into practical backend terms.

Atomicity

“All or nothing.”

Either all operations inside the transaction succeed, or all of them are rolled back.

No partial updates, no “we’ll fix it later”.

Consistency

The system moves from one valid state to another valid state.

Constraints, invariants, and business rules must always hold — before and after the transaction.

Isolation

Concurrent transactions should not step on each other’s toes.

What one transaction sees (or doesn’t see) from another transaction is controlled and predictable.

This is where things start getting tricky — and interesting.

Durability

Once a transaction is committed, it stays committed.

Even if the application crashes right after, the data is there.
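The "all or nothing" idea is easiest to internalize with a toy sketch in plain Java: snapshot the state, apply every step, and restore the snapshot if any step throws. This is an illustration of atomicity only, not how a real database implements it (databases use write-ahead logs, not in-memory copies), and the `Ledger`-style names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class ToyAtomicity {
    // A toy in-memory "database"; real atomicity lives in the database engine.
    static Map<String, Integer> stock = new HashMap<>(Map.of("widget", 10));
    static Map<String, Integer> orders = new HashMap<>();

    static void placeOrder(String product, int qty, boolean failPayment) {
        // Snapshot the state before the "transaction" starts
        Map<String, Integer> stockBackup = new HashMap<>(stock);
        Map<String, Integer> ordersBackup = new HashMap<>(orders);
        try {
            orders.merge(product, qty, Integer::sum);   // step 1: create order
            stock.merge(product, -qty, Integer::sum);   // step 2: decrease stock
            if (failPayment) {                          // step 3: payment
                throw new RuntimeException("Payment failed");
            }
        } catch (RuntimeException ex) {
            stock = stockBackup;    // rollback: restore the snapshot,
            orders = ordersBackup;  // so no partial update survives
        }
    }

    public static void main(String[] args) {
        placeOrder("widget", 2, true);                    // payment fails
        System.out.println(stock.get("widget"));          // 10 (nothing persisted)
        System.out.println(orders.containsKey("widget")); // false
        placeOrder("widget", 2, false);                   // happy path
        System.out.println(stock.get("widget"));          // 8 (both steps persisted)
    }
}
```

Either both the order and the stock change survive, or neither does. That is the whole contract.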

Transactions Are Not Just About Databases

This is an important point that often gets overlooked.

Transactions:

  • are implemented by the database
  • but defined by your business logic

The database doesn’t know what an “order” or a “payment” is.

It only knows rows, tables, and constraints.

It’s the backend application that decides:

  • what belongs together
  • where the transactional boundary starts
  • and where it ends

This is exactly why frameworks like Spring exist:

to help you define transactional boundaries in your application code, not in SQL scripts.

Why Transactions Are Fundamental in Backend Systems

Modern backend systems are:

  • concurrent
  • stateful
  • failure-prone by nature
    • Networks fail.
    • Databases time out.
    • External services go down.

Transactions are your last line of defense against data corruption.

They don’t make failures disappear — but they make failures safe.

And that’s the key idea we’ll carry into Spring Boot and @Transactional.

3. Transactions in Spring Boot: The Big Picture

Before diving deeper into @Transactional, it’s important to understand how Spring manages transactions at a high level.

Not the full internal implementation — just enough to avoid the most common (and painful) mistakes.

Because with transactions in Spring, the “how” matters almost as much as the “what”.

Declarative vs Programmatic Transactions

Spring supports two ways of managing transactions:

1. Programmatic transactions

You explicitly start, commit, and rollback a transaction in code.

TransactionDefinition definition = new DefaultTransactionDefinition();
TransactionStatus status = transactionManager.getTransaction(definition);
try {
    // business logic
    transactionManager.commit(status);
} catch (Exception ex) {
    transactionManager.rollback(status);
    throw ex;
}

This works, but:

  • it’s verbose
  • it mixes infrastructure concerns with business logic
  • it doesn’t scale well in complex services

You can use it, but in most Spring Boot applications, you shouldn’t.

2. Declarative transactions (the Spring way)

This is where @Transactional comes in.

You declare what should be transactional, not how to manage the transaction.

@Transactional
public void placeOrder() {
    // business logic
}

Spring takes care of:

  • opening the transaction
  • committing it if everything goes well
  • rolling it back if something goes wrong

This separation of concerns is one of the reasons why Spring-based backends are so readable and maintainable.

The Role of PlatformTransactionManager

At runtime, Spring delegates all transaction operations to a PlatformTransactionManager.

Think of it as an abstraction layer between:

  • your application
  • and the actual transaction implementation

Depending on what you use, Spring will plug in a different implementation:

  • DataSourceTransactionManager → JDBC
  • JpaTransactionManager → JPA / Hibernate
  • ReactiveTransactionManager → reactive stacks

This abstraction is what allows you to write framework-agnostic transactional code, while still being tightly integrated with your persistence technology.

You almost never interact with it directly — but it’s always there.

4. What Actually Happens When You Use @Transactional

At first glance, @Transactional looks deceptively simple.

You put it on a method, and magically:

  • a transaction starts
  • your logic runs
  • everything is committed or rolled back

And in happy-path demos, that’s exactly what happens.

In real-world applications, however, where and how you use @Transactional makes a huge difference.

Let’s break it down.

The Simplest (and Most Common) Use Case

The most basic usage looks like this:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    paymentRepository.save(payment);
}

If an exception is thrown during the method execution:

  • Spring marks the transaction for rollback
  • all database changes are reverted

If the method completes successfully:

  • the transaction is committed

So far, so good.

But this simplicity hides a lot of assumptions.

One of the biggest misconceptions is thinking of @Transactional as something that adds behavior.
It doesn’t. It defines a transactional boundary.

In other words, when you annotate a method with @Transactional, Spring does NOT modify your method.

Instead, Spring:

  1. creates a proxy around your bean
  2. intercepts calls to transactional methods
  3. starts a transaction before the method execution
  4. commits or rolls back after the method returns or throws an exception

This is done using Spring AOP (Aspect-Oriented Programming).

In practice, the flow looks like this:

Client → Spring Proxy → Transaction Interceptor
        → Your Method → Transaction Commit / Rollback

Why is this important?

Because only method calls that go through the proxy are transactional.

This single sentence explains:

  • why self-invocation doesn’t work
  • why @Transactional on private methods is ignored
  • why calling a transactional method from the same class can silently break everything

We’ll get back to this later in the pitfalls section, but keep this mental model in mind.
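You can see this mechanism in miniature with a plain JDK dynamic proxy (no Spring involved). The "interceptor" only runs for calls that enter through the proxy reference; a method calling another method on `this` bypasses it entirely, which is exactly the self-invocation problem. This is a simplified sketch, not Spring's actual proxy code:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface OrderService {
        void placeOrder();
        void saveOrder();
    }

    static class OrderServiceImpl implements OrderService {
        public void placeOrder() {
            saveOrder(); // self-invocation: a direct call on 'this', the proxy never sees it
        }
        public void saveOrder() { /* persistence logic would live here */ }
    }

    static int intercepted = 0; // how many calls passed through the "transaction interceptor"

    static OrderService createProxy(OrderService target) {
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class },
                (proxy, method, methodArgs) -> {
                    intercepted++;                               // "begin transaction"
                    Object result = method.invoke(target, methodArgs);
                    /* "commit transaction" */
                    return result;
                });
    }

    public static void main(String[] args) {
        OrderService service = createProxy(new OrderServiceImpl());
        service.placeOrder();            // enters through the proxy: intercepted once
        System.out.println(intercepted); // 1, not 2: the inner saveOrder() bypassed the proxy
    }
}
```

One external call, one interception, no matter how many internal calls happen inside it. Spring AOP behaves the same way.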

Where @Transactional Should Live (and Where It Shouldn’t)

A common question is: where do I put @Transactional?

The short answer:

On service-layer methods that define a business operation.

Typically:

  • ❌ Controllers → no business logic, no transactions
  • ⚠️ Repositories → usually too low-level
  • ✅ Services → perfect place for transactional boundaries

Example:

@Service
public class OrderService {

    @Transactional
    public void placeOrder(CreateOrderCommand command) {
        // validate input
        // persist order
        // update stock
        // trigger side effects
    }
}

This makes your transactional boundary:

  • explicit
  • easy to reason about
  • aligned with business use cases

Class-Level vs Method-Level @Transactional

Spring allows you to annotate:

  • a single method
  • or the entire class

@Transactional
@Service
public class OrderService {
    ...
}

This means:

  • every public method is transactional by default
  • unless overridden at method level

This can be useful, but it’s also dangerous if overused.

From experience:

  • class-level works well for simple services
  • method-level is safer for complex ones

Explicit is better than implicit — especially with transactions.

At this point, we have a solid foundation:

  • what a transaction is
  • why it matters
  • how Spring manages it under the hood

5. Understanding @Transactional Attributes (The Real Ones)

@Transactional is not just an on/off switch.

Behind that single annotation there are rules that control how transactions behave, especially when:

  • multiple methods interact
  • exceptions are thrown
  • concurrent operations happen

Most bugs related to transactions come from default assumptions that turn out to be wrong.

Let’s go through the attributes that actually matter in real projects.

5.1 Propagation: How Transactions Interact with Each Other

Propagation defines what happens when a transactional method is called from another transactional method.

This is by far the most important attribute.

REQUIRED (Default)

@Transactional(propagation = Propagation.REQUIRED)

Meaning:

  • join the existing transaction if there is one
  • otherwise, create a new transaction

This is what you want most of the time.

Example:

@Transactional
public void placeOrder() {
    orderService.saveOrder();
    paymentService.charge();
}

If charge() fails:

  • the entire transaction rolls back
  • nothing is persisted

This is usually correct and desirable.

REQUIRES_NEW

@Transactional(propagation = Propagation.REQUIRES_NEW)

Meaning:

  • suspend the existing transaction
  • start a completely new one

Classic use case:

  • audit logs
  • technical events
  • actions that must persist even if the main transaction fails

Example:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    auditService.logOrderAttempt(order); // REQUIRES_NEW
    throw new RuntimeException("Payment failed");
}

Result:

  • order → rolled back
  • audit log → committed

This is powerful — and dangerous if misused.
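The REQUIRED vs REQUIRES_NEW decision logic can be sketched with a toy in-memory "transaction manager" in plain Java. This only illustrates the join-or-create decision and the independent commit of REQUIRES_NEW; real Spring suspends and resumes physical transactions through the PlatformTransactionManager, and the names txStack, required, and requiresNew are invented for this sketch:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PropagationDemo {
    static final Deque<String> txStack = new ArrayDeque<>(); // open physical transactions
    static final List<String> committed = new ArrayList<>(); // transactions that committed

    // REQUIRED: join the current transaction if one exists, else create one
    static void required(String name, Runnable body) {
        if (txStack.isEmpty()) {
            runInNew(name, body);
        } else {
            body.run(); // join: success or failure is decided by the outer transaction
        }
    }

    // REQUIRES_NEW: always a fresh, independent transaction
    static void requiresNew(String name, Runnable body) {
        runInNew(name, body);
    }

    static void runInNew(String name, Runnable body) {
        txStack.push(name);
        try {
            body.run();
            committed.add(name);      // commit this physical transaction
        } catch (RuntimeException ex) {
            /* rollback: nothing recorded */
        } finally {
            txStack.pop();
        }
    }

    public static void main(String[] args) {
        required("order-tx", () -> {
            requiresNew("audit-tx", () -> {}); // independent tx, commits immediately
            throw new RuntimeException("Payment failed");
        });
        System.out.println(committed); // [audit-tx]: order rolled back, audit survived
    }
}
```

This reproduces the audit-log scenario above: the outer transaction rolls back, but the REQUIRES_NEW transaction has already committed on its own.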

Other Propagation Types (Quick but Honest)

  • SUPPORTS → join if exists, otherwise run non-transactionally
  • MANDATORY → fail if no transaction exists
  • NOT_SUPPORTED → suspend any existing transaction
  • NEVER → fail if a transaction exists
  • NESTED → savepoints (DB-dependent)

👉 In most applications:

  • REQUIRED and REQUIRES_NEW cover 95% of use cases
  • the others are niche and should be used intentionally

5.2 Isolation: How Much You See of Other Transactions

Isolation defines how concurrent transactions affect each other.

Default:

@Transactional(isolation = Isolation.DEFAULT)

This delegates to the database default (often READ_COMMITTED).

The Practical Levels

  • READ_COMMITTED: you only see committed data. A good balance between consistency and performance.
  • REPEATABLE_READ: data read once won’t change during the transaction. Prevents non-repeatable reads.
  • SERIALIZABLE: full isolation. Transactions behave as if executed sequentially. Very safe. Very expensive.

Higher isolation = fewer anomalies = lower throughput.

👉 Rule of thumb:

  • trust your DB defaults
  • increase isolation only when you have a proven concurrency problem

5.3 Rollback Rules: The Most Common Source of Bugs

This is where many Spring developers get burned.

By default:

Spring rolls back only on unchecked exceptions (RuntimeException) and Error.

That means this will NOT rollback:

@Transactional
public void placeOrder() throws Exception {
    orderRepository.save(order);
    throw new Exception("Checked exception");
}

From a business perspective, this operation clearly failed.

From Spring’s perspective, however, this is a checked exception — and the transaction is committed.

No rollback. No warning. Just inconsistent data.
Yes, really.
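The default rule is easier to trust once you see it spelled out. The toy method below mimics the decision Spring's transaction interceptor makes around your method. It is a deliberate simplification of the real logic (which lives in Spring's TransactionAspectSupport and also consults rollbackFor rules), not the actual implementation:

```java
import java.util.concurrent.Callable;

public class RollbackRuleDemo {
    // A toy version of Spring's default decision: rollback only for
    // RuntimeException and Error; checked exceptions still commit.
    static String invokeTransactional(Callable<Object> businessLogic) {
        try {
            businessLogic.call();
            return "COMMIT";
        } catch (RuntimeException | Error unchecked) {
            return "ROLLBACK";
        } catch (Exception checked) {
            return "COMMIT"; // the surprising default: checked exception, but commit anyway
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeTransactional(() -> { throw new RuntimeException("boom"); })); // ROLLBACK
        System.out.println(invokeTransactional(() -> { throw new Exception("checked"); }));     // COMMIT
        System.out.println(invokeTransactional(() -> "ok"));                                    // COMMIT
    }
}
```

The middle case is the one that burns people: the operation failed from a business perspective, yet the data is committed.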

Explicit Rollback Rules

You can override this:

@Transactional(rollbackFor = Exception.class)

Or the opposite:

@Transactional(noRollbackFor = BusinessException.class)

This is essential when:

  • using checked exceptions
  • modeling business failures explicitly

A Better Approach: Business Exceptions as Runtime Exceptions

In most real-world Spring Boot applications, business failures should invalidate the transaction.

A clean and effective way to model this is by using custom unchecked exceptions:

public class PaymentFailedException extends RuntimeException {
}

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    throw new PaymentFailedException();
}

This approach has several advantages:

  • rollback happens automatically
  • transactional behavior is explicit
  • no need for extra configuration
  • business intent is clear

If the operation fails, the transaction fails. No ambiguity.

👉 Always be explicit if you rely on checked exceptions.

5.4 readOnly: Small Flag, Big Impact

@Transactional(readOnly = true)

This:

  • hints the persistence provider
  • may optimize flushing and dirty checking
  • documents intent clearly

Perfect for:

  • query-only service methods
  • read-heavy paths

Not a silver bullet — but a good habit.

5.5 timeout: A Safety Net

@Transactional(timeout = 5)

If the transaction runs longer than 5 seconds:

  • it’s rolled back

Useful for:

  • protecting DB resources
  • preventing stuck transactions

Especially relevant under load.

A Hard-Earned Lesson

Most transactional bugs are not caused by:

  • wrong SQL
  • broken databases

They come from:

  • wrong assumptions about propagation
  • unexpected rollback behavior
  • hidden transactional boundaries

Understanding these attributes turns @Transactional from a “magic annotation” into a precise tool.

6. Common Transactional Pitfalls in Spring Boot

At this point, we understand how transactions should work.

Unfortunately, many transactional bugs don’t come from a lack of knowledge —

they come from small details that are easy to miss and hard to debug.

Let’s go through the most common pitfalls you’ll encounter in real Spring Boot applications.

6.1 Self-Invocation: The Silent Transaction Killer

This is probably the most famous Spring transactional pitfall.

@Service
public class OrderService {

    public void placeOrder() {
        saveOrder(); // ❌ no transaction
    }

    @Transactional
    public void saveOrder() {
        orderRepository.save(order);
    }
}

At first glance, this looks fine.

It’s not.

What’s the problem?

The call to saveOrder() happens inside the same class.

It never goes through the Spring proxy.

Result:

  • @Transactional is completely ignored
  • no transaction is started
  • no error, no warning

This is one of the reasons transactional bugs feel “random”.

How to fix it

  • move the transactional method to another bean
  • or make sure it’s called from outside the class

@Service
public class OrderPersistenceService {

    @Transactional
    public void saveOrder(Order order) {
        orderRepository.save(order);
    }
}

6.2 @Transactional on private (or non-public) Methods

Another classic.

@Transactional
private void saveOrder() {
    ...
}

This will never work.

Spring proxies intercept public method calls only (by default).

Private, protected, or package-private methods are ignored.

Again:

  • no exception
  • no warning
  • just no transaction

👉 Transactional methods must be public. Always.

6.3 Catching Exceptions and Accidentally Preventing Rollback

This one is subtle — and extremely common.

@Transactional
public void placeOrder() {
    try {
        paymentService.charge();
    } catch (PaymentFailedException ex) {
        log.error("Payment failed", ex);
    }
}

Looks harmless, right?

But now:

  • the exception is swallowed
  • the method completes normally
  • the transaction is committed

Spring rolls back only if the exception escapes the transactional boundary.

Correct approaches

Either:

  • rethrow the exception
  • or mark the transaction for rollback manually

TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();

But in most cases, rethrowing is the cleanest solution.

6.4 Mixing Transactional and Non-Transactional Logic

Sometimes transactional methods grow too much:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    emailService.sendConfirmationEmail(); // ❌
}

Problems:

  • emails are slow
  • emails can fail
  • emails should not be part of a DB transaction

If the email fails:

  • do you really want to rollback the order?

Probably not.

Better approach

  • keep transactions short
  • isolate side effects
  • use events or async processing

Transactional boundaries should protect data consistency, not external systems.

6.5 Unexpected Rollbacks (UnexpectedRollbackException)

Sooner or later, you’ll see this:

UnexpectedRollbackException: Transaction silently rolled back

This usually happens when:

  • an inner transactional method marks the transaction as rollback-only
  • but the outer method tries to commit

Common causes:

  • caught exceptions inside nested transactional calls
  • mixed propagation settings
  • overuse of REQUIRES_NEW

When you see this exception:

  • don’t look at the commit
  • look at what marked the transaction as rollback-only earlier

6.6 Long-Running Transactions

Technically correct. Practically dangerous.

Long transactions:

  • lock rows for too long
  • reduce throughput
  • increase deadlock probability

Common causes:

  • doing I/O inside transactions
  • calling remote services
  • waiting for user input (yes, it happens…)

Rule:

Transactions should be as short as possible, but as long as necessary.

A Pattern You’ll Start to Recognize

Most transactional problems share a common theme:

  • the code looks correct
  • the behavior is implicit
  • Spring does exactly what you told it to do — not what you meant

That’s why:

  • understanding proxies
  • defining clear boundaries
  • modeling failures explicitly

…is more important than memorizing annotations.

7. @Transactional and @Async: A Dangerous Combination

At some point, almost every Spring Boot developer tries to combine:

  • @Transactional → consistency
  • @Async → performance

On paper, it sounds like a great idea.

In practice, it’s one of the most misunderstood and dangerous combinations in Spring.

Let’s clear things up.

The Core Problem: Different Threads, Different Transactions

@Transactional is thread-bound.

A transaction:

  • is associated with the current thread
  • lives and dies inside that thread

@Async, on the other hand:

  • executes the method in a different thread
  • outside the original call stack

So this code:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    asyncService.sendConfirmationEmail(order);
}

@Async
public void sendConfirmationEmail(Order order) {
    // ...
}

does not mean:

“send the email in the same transaction, but asynchronously”

It means:

“start a completely separate execution, with no transaction at all (unless explicitly defined)”

The Most Common Wrong Assumption

Many developers assume:

“If the async method is called from a transactional one, it participates in the same transaction.”

It doesn’t.

Ever.

Different thread = different transactional context.
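This is because Spring stores the current transaction in thread-local state (via TransactionSynchronizationManager). A minimal sketch with a plain ThreadLocal shows why a new thread simply cannot see it:

```java
public class ThreadBoundDemo {
    // Spring keeps transaction state in ThreadLocals; a new thread starts empty.
    static final ThreadLocal<String> currentTx = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        currentTx.set("tx-1"); // "transaction" bound to the calling thread
        System.out.println(currentTx.get()); // tx-1

        Thread async = new Thread(() ->
            // same ThreadLocal, different thread: no transaction visible
            System.out.println(currentTx.get())); // null
        async.start();
        async.join();
    }
}
```

Nothing about @Async changes this: the async thread starts with an empty transactional context, always.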

What Happens in Practice

Let’s look at a slightly more subtle example:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    asyncService.notifyWarehouse(order);
    throw new RuntimeException("Payment failed");
}

Possible outcome:

  • transaction rolls back
  • order is NOT persisted
  • async method still runs
  • warehouse is notified about an order that doesn’t exist

This is how distributed inconsistencies are born.

Making @Async Transactional (Yes, But…)

You can put @Transactional on an async method:

@Async
@Transactional
public void notifyWarehouse(Order order) {
    ...
}

This creates:

  • a new transaction
  • completely independent from the original one

This might be fine — or disastrous — depending on intent.

Again: there is no shared transaction.

Better Patterns Than @Transactional + @Async

1. Transactional Events

Spring provides a much safer mechanism:

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    applicationEventPublisher.publishEvent(new OrderPlacedEvent(order));
}

@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onOrderPlaced(OrderPlacedEvent event) {
    sendEmail(event);
}

Now:

  • the event is processed only if the transaction commits
  • no ghost side effects
  • no inconsistent state

This pattern is gold.
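The AFTER_COMMIT guarantee can be sketched with a toy event buffer: events published during the "transaction" are held back, delivered only on commit, and dropped on rollback. This is a simplification of Spring's transaction synchronization machinery, not its actual code, and the method names are invented for the sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class AfterCommitDemo {
    static final List<String> pending = new ArrayList<>();   // published inside the tx
    static final List<String> delivered = new ArrayList<>(); // handed to listeners

    static void publishEvent(String event) {
        pending.add(event); // buffered, NOT delivered yet
    }

    static void commit() {
        delivered.addAll(pending); // AFTER_COMMIT: now, and only now, listeners run
        pending.clear();
    }

    static void rollback() {
        pending.clear(); // transaction failed: events are silently dropped
    }

    public static void main(String[] args) {
        publishEvent("OrderPlaced(1)");
        rollback();                    // failed transaction → no ghost side effects
        publishEvent("OrderPlaced(2)");
        commit();                      // committed transaction → listener runs
        System.out.println(delivered); // [OrderPlaced(2)]
    }
}
```

No listener ever observes an order that was rolled back, which is exactly the property the @Async version lacks.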

2. Messaging / Event-Driven Architecture

For more complex systems:

  • Kafka
  • RabbitMQ
  • cloud queues

Persist state first, then publish events.

Transactions protect your database, not the world.

A Simple Rule That Saves a Lot of Pain

Never assume an async operation is part of your transaction.

It never is.

If consistency matters:

  • finish the transaction
  • then trigger async behavior

Always in that order.

Final Thought on @Async

@Async is not dangerous by itself.

What’s dangerous is:

  • mixing it with transactions
  • without understanding thread boundaries

Once you internalize this model, the behavior becomes predictable — and safe.

8. Transaction Logging and Debugging

One of the most frustrating things about transactional bugs is that everything looks fine until it’s not.

No errors.

No stack traces.

Just data in the wrong state.

When that happens, logging is often the only way to understand what Spring is actually doing.

Let’s see how to make transactions visible.

Why Transactional Bugs Are Hard to Debug

Transactional behavior is:

  • implicit
  • proxy-based
  • spread across multiple layers

So when a transaction:

  • starts
  • commits
  • rolls back
  • or is marked as rollback-only

…it usually happens outside your business code.

Without proper logs, you’re debugging blind.

Enabling Transaction Logs in Spring Boot

Spring exposes very useful logs — you just need to turn them on.

In application.yml (or application.properties):

logging:
  level:
    org.springframework.transaction: DEBUG

This alone already shows:

  • when a transaction is created
  • when it’s committed
  • when it’s rolled back

Logging Transaction Boundaries

With transaction logging enabled, you’ll start seeing logs like:

Creating new transaction with name [OrderService.placeOrder]
Participating in existing transaction
Committing JDBC transaction
Rolling back JDBC transaction

This tells you:

  • where the transaction starts
  • which method owns it
  • how nested calls behave

When debugging propagation issues, this is invaluable.

Hibernate / JPA Logs: Seeing What Actually Hits the DB

Transactions are about when changes are flushed.

To see what is executed, enable SQL logs:

logging:
  level:
    org.hibernate.SQL: DEBUG

Optionally, enable parameter binding logs. Note that the logger name depends on your Hibernate version: org.hibernate.type.descriptor.sql for Hibernate 5, org.hibernate.orm.jdbc.bind for Hibernate 6.

logging:
  level:
    org.hibernate.type.descriptor.sql: TRACE

Now you can correlate:

  • transaction boundaries
  • SQL statements
  • commit / rollback events

This is often where inconsistencies finally make sense.

Debugging Rollbacks That “Come from Nowhere”

If you’ve ever seen this:

UnexpectedRollbackException: Transaction silently rolled back

it means:

  • the transaction was marked as rollback-only earlier
  • but the outer layer tried to commit it anyway

To debug this:

  1. enable transaction logs
  2. look for a setRollbackOnly event
  3. check for swallowed exceptions
  4. inspect nested transactional methods

The rollback never comes from nowhere — it’s just hidden.

Business Logging vs Transaction Logging

A common mistake is relying only on business logs:

log.info("Order placed successfully");

This log may appear:

  • before the transaction commits
  • even if the transaction later rolls back

If you need certainty, log:

  • after commit
  • or via transactional events

@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onOrderCommitted(OrderPlacedEvent event) {
    log.info("Order committed: {}", event.getOrderId());
}

Now your logs reflect reality, not intent.

A Debugging Workflow That Actually Works

When dealing with transactional issues:

  1. Enable Spring transaction logs
  2. Enable SQL logs
  3. Identify transaction boundaries
  4. Track exception flow
  5. Verify commit / rollback timing

This approach turns “random behavior” into deterministic behavior.

Final Takeaway

Transactions don’t fail silently.

They fail quietly.

Logging is what gives them a voice.

Once you get used to reading transaction logs, you’ll start spotting problems before they reach production.

9. Conclusion & Takeaways

Transactions are one of the most powerful tools in Spring Boot — but they are also one of the most misunderstood.

Here’s what you should remember:

  1. Transactions are about business consistency, not just database operations. Define your transactional boundaries around business operations, not technical details.
  2. @Transactional is declarative, but precise. Understand propagation, isolation, rollback rules, and method visibility.
  3. Exceptions drive rollbacks.

    • Unchecked exceptions → rollback by default
    • Checked exceptions → explicit rollback required

    Model your business exceptions carefully.
  4. Avoid common pitfalls:

    • Self-invocation
    • Private methods
    • Catching exceptions and swallowing them
    • Long-running transactions
    • Mixing async and transactional logic without awareness
  5. Logging is your friend.

    Enable transaction and SQL logging to debug propagation, rollback, and commit behavior. Use @TransactionalEventListener for post-commit business logging.

  6. Async and transactions are tricky.

    Transactions are thread-bound. Async methods run in a different thread and have a separate transactional context. Prefer events or queues for safe decoupling.

Final Thought

Spring Boot gives you powerful tools, but with great power comes great responsibility.

Transactions can protect your data, but only if you understand how they work — not just how they look.

💬 I’d love to hear from you:

  • What are the trickiest transactional issues you’ve faced?
  • Do you have any favorite patterns for async + transactional operations?
  • Any hidden pitfalls you’ve learned the hard way?

Share your experiences in the comments — let’s learn from each other.

App Center alternatives for mobile beta distribution in 2026

2026-02-02 05:42:40

Alternative options for Android and iOS ad-hoc distribution to testers, now that Visual Studio App Center has shut down

Microsoft's App Center was a widely used mobile DevOps platform offering mobile CI/CD, beta distribution, and analytics.

With Microsoft discontinuing the service in 2025, teams who previously depended on it are forced to explore alternative options. Let's explore some of the alternatives out there for facilitating ad-hoc distribution of pre-release apps to testers.

Buildstash

Buildstash offers a simple replacement for App Center's beta distribution functionality for iOS and Android, while additionally offering support for managing and sharing binaries across any software platform. Rather than focusing only on mobile, Buildstash also supports desktop platforms, XR and game teams, embedded systems, and so on.

Buildstash has extensive CI/CD integrations, supports uploading via a simple API, and offers an "App Center style" web uploader.

For mobile apps, it allows you to upload Android APK/AAB and iOS IPA files and share them with testers via multiple methods: distribution groups, simple share links, and even branded portals you can host on your website. This makes it the most flexible in terms of distribution methods. Unlike some other options, testers don’t need accounts, which makes it ideal for sharing builds with external stakeholders such as clients or QA vendors.

Beyond all this, Buildstash offers more comprehensive management of software binaries, including archival, and QA approval workflows.

Firebase App Distribution

Firebase App Distribution is part of Google’s broader Firebase ecosystem and provides a solution for distributing pre-release Android and iOS apps to trusted testers. It supports managed tester groups, email-based invites, release notes, and crash reporting when paired with Firebase Crashlytics. For Android teams especially, it offers a smooth experience thanks to its tight integration with Gradle and the Android toolchain.

On the iOS side, Firebase App Distribution supports both ad-hoc and enterprise builds, though provisioning and certificate management remain the developer’s responsibility. Teams already invested in Firebase for analytics, authentication, or backend services often find this option convenient, as it consolidates multiple aspects of the development workflow into a single platform, and offers a free plan.

Expo Application Services (EAS)

Expo Application Services (EAS) provides build and distribution tooling specifically for React Native and Expo-based applications. EAS includes a CI/CD tool for Expo apps, and allows developers to easily share the resulting builds internally with testers.

If you're developing an Expo or React Native app, and especially if you're already within the EAS ecosystem, this may be a simple and effective choice for sharing beta builds.

Applivery

Applivery is a more enterprise-focused mobile platform, especially suited to internal distribution with Mobile Device Management (MDM). It now additionally offers beta testing with unmanaged devices, but it may be an expensive option, starting from €49 for only 1 user / 3 apps / 300 downloads. Applivery also provides over-the-air updates and integrates with popular CI/CD tools, making it suitable for structured testing environments.

Applivery's enterprise and MDM focus may make it particularly attractive for larger teams or organizations that need more governance and traceability in their beta testing process.

Appcircle

Appcircle is positioned as a complete build platform targeting enterprise, including CI/CD. With build automation, testing, and distribution combined, it may be an attractive option to replace App Center's feature set for larger teams with an enterprise budget. Its distribution module supports ad-hoc sharing of Android and iOS builds, tester groups, and version history, all accessible through a web dashboard.

Securing Test Environments: Mitigating PII Leakage Through API-Driven Data Masking

2026-02-02 05:40:15

Securing Test Environments: Mitigating PII Leakage Through API-Driven Data Masking

In enterprise software development, testing environments often pose significant security risks, especially regarding the inadvertent exposure of Personally Identifiable Information (PII). As a Senior Architect, addressing this challenge involves implementing robust, scalable solutions that seamlessly integrate into existing workflows. One effective strategy is leveraging API development to enforce data masking and access controls dynamically.

The Challenge of PII Leakage in Test Environments

Test environments are typically replicas of production systems used for testing new features, integrations, and performance. However, they frequently use real production data for authenticity, which can inadvertently lead to PII exposure—raising compliance issues and risking security breaches.

Traditional approaches, like static data anonymization or hardcoded filters, can be insufficient and inflexible, especially as data schemas evolve. This calls for a dynamic, API-centric solution that centralizes control, reduces redundancy, and enhances security.

Architecting an API-Driven Data Masking Layer

The core idea is to develop a Data Masking API that acts as an intermediary between test clients and data sources. This API intercepts data requests and applies masking or redaction based on configurable policies.

Key Principles:

  • Centralized Control: Manage masking policies in a single service.
  • On-the-fly Masking: Mask data dynamically during API responses.
  • Auditability: Log access and masking activities for compliance.
  • Scalability: Handle high request volumes without performance degradation.

Implementation Overview:

from flask import Flask, jsonify

app = Flask(__name__)

# Example masking policy
masking_policy = {
    'email': True,
    'phone': True,
    'ssn': True
}

# Mock database data
user_data = {
    'id': 123,
    'name': 'John Doe',
    'email': '[email protected]',
    'phone': '555-1234',
    'ssn': '123-45-6789'
}

# Masking functions
def mask_email(email):
    return email.split('@')[0] + '@***.com'

def mask_phone(phone):
    return '***-****'

def mask_ssn(ssn):
    return '***-**-****'

@app.route('/user/<int:user_id>', methods=['GET'])
def get_user(user_id):
    # In a real implementation, fetch the record from the database.
    # Work on a copy so the source data is never mutated in place.
    data = dict(user_data)
    # Apply masking based on policy
    if masking_policy.get('email'):
        data['email'] = mask_email(data['email'])
    if masking_policy.get('phone'):
        data['phone'] = mask_phone(data['phone'])
    if masking_policy.get('ssn'):
        data['ssn'] = mask_ssn(data['ssn'])
    return jsonify(data)

if __name__ == '__main__':
    app.run(port=5000)

This API acts as a gatekeeper, ensuring that sensitive data is masked when accessed in test environments.

Deployment and Integration Considerations

  • Policy Management: Use a configuration service or database to dynamically update masking rules without redeploying the API.
  • Authentication & Authorization: Secure the API with OAuth2 or API keys to restrict access.
  • Logging & Auditing: Record each request and masking action for compliance and troubleshooting.
  • Performance: Implement caching strategies where appropriate to reduce latency.

Benefits of API-Centric Data Masking

  • Consistency: Enforces uniform PII handling across all test clients.
  • Flexibility: Easily modify policies independently of the data sources.
  • Auditability: Provides an audit trail for regulatory compliance.
  • Reduced Risk: Limits PII exposure by centralizing data processing.

Conclusion

By developing an API-driven data masking layer, senior architects can significantly reduce PII leakage risks in test environments. This approach ensures compliance, enhances security, and provides the flexibility needed for evolving enterprise needs. Leveraging APIs as a control point enables a scalable, manageable, and auditable solution, aligning with enterprise security standards and best practices.

For organizations operating at scale, integrating such an API into their CI/CD pipelines and data governance frameworks can dramatically improve their security posture without sacrificing development agility.

🛠️ QA Tip

To test this safely without using real user data, I use TempoMail USA.

My Portfolio Challenge 2026

2026-02-02 05:39:08

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

I am an aspiring software developer passionate about building clean, functional web experiences. For this challenge, I created a personal portfolio that represents my growth and journey in the tech world.

Portfolio

Link to my Live Portfolio

https://uaithegreat001.github.io/PortpolioCallenge2026/

Link to my Source Code & Dockerfile (GitHub)

https://github.com/uaithegreat001/PortpolioCallenge2026.git

Technical Note: I fully containerized this application using a Dockerfile for Google Cloud Run. However, due to billing verification issues that persisted on the final deadline day, I have hosted the live version on GitHub Pages while providing the full Docker configuration in my repository to demonstrate the technical implementation.

How I Built It

This portfolio was created using pure HTML, CSS, and vanilla JavaScript to ensure high performance and simplicity. To prepare for the Google Cloud Run requirement, I created a custom Dockerfile based on nginx:alpine to serve my static files on port 8080.

Google AI & Tools used:

  • Gemini AI: Assisted in architecting the Docker container and troubleshooting Google Cloud SDK commands.
  • Google Cloud Shell: Used as the primary development environment to manage files and attempt deployment.
  • Antigravity: Provided core implementation assistance, writing the code and turning the idea into a real project.

What I'm Most Proud Of

I am most proud of the animation physics applied during the implementation stage, and of gaining new experience packaging the app in a Docker container for Cloud Run. As developers and problem solvers, we always welcome new challenges: solve them, gain experience, and move on.
Thanks, all.

Automating React App deployments to AWS with GitHub Actions and OIDC

2026-02-02 05:26:37

After deploying my personal website to AWS, I realized the need to automate the deployment process to streamline the delivery of new features. For that, you can’t go wrong with a CI/CD pipeline powered by your Git provider — in this case, GitHub and GitHub Actions.

In this article, I’ll walk through the architecture and CI/CD pipeline I use to deploy a React application to S3 + CloudFront, authenticated via GitHub Actions OIDC (no long-lived AWS credentials).

This setup is suitable for real-world projects and follows AWS and GitHub best practices.

Architecture Overview

Before diving into CI/CD, let’s clarify the infrastructure that was already in place.

Initial AWS Setup (Pre-requisites)

The following resources were already created before the pipeline was written.

1 - S3 bucket

  • Static website hosting enabled
  • Public access configured appropriately
  • Hosts the built React assets (index.html, /assets/*, etc.)

2 - CloudFront distribution

  • Origin pointing to the S3 website endpoint
  • Default root object: index.html
  • Caching enabled
  • Custom domain configured

3 - Name.com

  • Custom domain pointing to the CloudFront distribution

Once this setup is complete, the website can already be accessed via the custom domain.
The CI/CD pipeline’s job is to automate updates safely and consistently.

AWS: Configuring OIDC Authentication

1 - Create an OIDC Identity Provider

In AWS IAM:

  • Provider URL:
https://token.actions.githubusercontent.com
  • Audience:
sts.amazonaws.com

This allows AWS to trust GitHub as an identity provider.
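
As a sketch, the same provider can be created with the AWS CLI instead of the console. This is a hedged example: whether a thumbprint is still required (and its current value) depends on your AWS account's behavior, so verify it against the AWS IAM documentation before running.

```shell
# Create the GitHub OIDC identity provider (run once per AWS account).
# Note: newer AWS behavior may ignore the thumbprint for this provider;
# confirm the current requirement and value in the AWS IAM documentation.
aws iam create-open-id-connect-provider \
  --url "https://token.actions.githubusercontent.com" \
  --client-id-list "sts.amazonaws.com" \
  --thumbprint-list "6938fd4d98bab03faadb97b34396831e3780aea1"
```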

2 - Create an IAM Role for GitHub Actions

This role will be assumed by the GitHub Actions runner.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<ORG>/<REPO>:*"
        }
      }
    }
  ]
}

3 - Attach Required IAM Permissions

The role needs permissions to:

  • Sync files to S3
  • Create CloudFront invalidations

Example policy (simplified):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}

GitHub Actions Pipeline Overview

The pipeline is split into three jobs:

  1. install – install dependencies
  2. build – build the React app and upload artifacts
  3. deploy – authenticate to AWS, deploy to S3, invalidate CloudFront

This separation improves clarity, caching, and debuggability.

Job 1: Installing Dependencies

install:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install dependencies
      uses: actions/setup-node@v4
      with:
        node-version: "22.x"
        cache: 'npm'
    - run: npm ci

What this job does

  1. Checks out the repository
  2. Sets up Node.js 22
  3. Uses npm ci for deterministic installs
  4. Enables dependency caching for faster builds

This job ensures the dependency tree is valid before moving forward.

Job 2: Building the React Application

build:
  runs-on: ubuntu-latest
  needs: install
  steps:
    - uses: actions/checkout@v4
    - name: Use Node.js 22
      uses: actions/setup-node@v4
      with:
        node-version: "22.x"
        cache: 'npm'
    - run: npm ci
    - run: npm run build
    - name: Upload dist
      uses: actions/upload-artifact@v4
      with:
        name: dist
        path: ./dist

What this job does

  • Rebuilds the application in a clean environment
  • Produces a static dist/ folder
  • Uploads the build output as a pipeline artifact

Artifacts allow the deploy job to be fully decoupled from the build process.

Job 3: Deploying to AWS (S3 + CloudFront)

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: aws
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Download dist artifact
        uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ARN }}
          aws-region: ${{ vars.AWS_REGION }}
          role-session-name: ${{ vars.ROLE_SESSION_NAME }}

      - name: Deploy to S3
        run: |
          aws s3 sync dist s3://${{ vars.S3_BUCKET_NAME }} \
            --delete \
            --exact-timestamps

      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ vars.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/*"

What this job does

  • Downloads the build artifact
  • Authenticates to AWS
  • Uploads the static files to the bucket
  • Invalidates CloudFront's cache

Invalidating /* ensures CloudFront fetches the new version immediately.

Final Result

After every push to main:

  • The app is built
  • Assets are uploaded to S3
  • CloudFront cache is invalidated
  • The custom domain serves the latest version reliably

All without storing a single AWS secret in GitHub.

References

- GitHub Actions OIDC

- AWS IAM OIDC

- CloudFront invalidations

- aws-actions/configure-aws-credentials

Mastering Spam Trap Avoidance in React Without Spending a Dime

2026-02-02 05:12:54

Introduction

In the fiercely competitive world of email marketing, landing in spam traps can significantly damage your sender reputation, leading to delivery issues and decreased engagement. As a Senior Developer, addressing this challenge with a zero-budget approach requires a strategic blend of technical ingenuity and best practices. This blog discusses how to minimize the risk of spam traps while using React to manage your email validation and sending logic, all without incurring extra costs.

Understanding Spam Traps

Spam traps are email addresses used by ISPs and anti-spam organizations to identify invalid or malicious senders. These addresses are not for communication; their purpose is solely to catch bad actors. Sending emails to spam traps can flag your domain as a spammer, which affects deliverability.

The Zero-Budget Strategy

Since budget constraints limit paid solutions, our focus shifts to leveraging open-source tools, meticulous validation, and intelligent frontend practices within a React app.

Step 1: Implement Robust Client-side Validation

Before any server interaction, your React app should validate email address syntax immediately. Use regex patterns for quick initial filtering:

const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(email) {
  return emailRegex.test(email);
}

// Usage
if (validateEmail(userEmail)) {
  // Proceed to next validation or API call
} else {
  // Show user error
}

This step prevents obviously invalid emails from reaching servers.

Step 2: Use Free Open-Source Email Validation APIs

There are several free or open-source verification services like Hunter.io's free tier or Mailcheck.ai, which provide basic MX and domain checks. Since paid tools aren't an option, integrate these into your form submission flow. For example:

async function checkEmailValidity(email) {
  // Encode the address so special characters survive the query string
  const response = await fetch(
    `https://api.mailcheck.ai/v1/verify?email=${encodeURIComponent(email)}`
  );
  if (!response.ok) {
    throw new Error(`Verification request failed: ${response.status}`);
  }
  const data = await response.json();
  // Field names depend on the provider's response schema
  return data.is_valid && data.is_mx_ok;
}

// Usage
checkEmailValidity(userEmail).then(isValid => {
  if (isValid) {
    // Proceed with sending
  } else {
    // Notify user or reject
  }
});

Regularly updating validation logic helps filter out disposable or invalid domains.

Step 3: Maintain & Optimize Your Email List

Your React application should facilitate list hygiene by encouraging users to confirm their email addresses via double opt-in and providing easy-to-use unsubscribe options. Also, implementing a re-engagement strategy helps clean your list over time.

Step 4: Encourage Best Practices & Use User Behavior

Track bounce rates and engagement through your frontend. React's state management can help flag and suppress emails exhibiting suspicious behavior—such as high bounce rates—reducing chances of hitting spam traps.

// Pseudocode for bounce handling
const [bounces, setBounces] = useState({});

function reportBounce(email) {
  setBounces(prev => ({ ...prev, [email]: true }));
}

// Use bounce info before sending
if (!bounces[email]) {
  // Send email
}

Final Thoughts

While tackling spam traps without a budget is challenging, combining thorough validation, list hygiene, and user feedback within your React app significantly reduces trap risks. Prioritize continuous monitoring and list quality over aggressive broad-based strategies. Remember, respecting user privacy and consent is foundational for sustainable email practices.

In summary, these low-cost measures empower you to uphold high deliverability standards and protect your domain reputation without spending on expensive solutions.
