2026-02-02 05:47:54
If you’ve been working with Spring Boot for a while, chances are you’ve already used @Transactional.
Maybe you added it because “that’s what everyone does”, or because a tutorial told you so.
And most of the time… things seem to work.
Until one day:
That’s usually the moment when you realize that transactions are not just a database feature — they are a core business concept.
In backend development, especially in stateful systems, partial success is often worse than total failure.
A transaction is what allows you to say:
“Either everything succeeds, or nothing does.”
Spring Boot makes transactions easy to use, but also easy to misunderstand.
And misunderstanding them can lead to subtle bugs that are extremely hard to debug.
In this article, we’ll start from the basics of @Transactional.
Before touching any annotation, we need to get the mental model right.
At its core, a transaction is a logical unit of work.
It groups multiple operations into a single, indivisible action.
Think about a very common backend use case:
If one of these steps fails, the system should not be left in a half-completed state.
Without transactions, your system might end up like this:
✔ Order created
✔ Payment saved
✘ Stock update failed
That’s not a technical issue.
That’s a business problem.
Transactions are usually described using the ACID acronym. You’ve probably heard of it, but let’s translate it into practical backend terms.
“All or nothing.”
Either all operations inside the transaction succeed, or all of them are rolled back.
No partial updates, no “we’ll fix it later”.
The system moves from one valid state to another valid state.
Constraints, invariants, and business rules must always hold — before and after the transaction.
Concurrent transactions should not step on each other’s toes.
What one transaction sees (or doesn’t see) from another transaction is controlled and predictable.
This is where things start getting tricky — and interesting.
Once a transaction is committed, it stays committed.
Even if the application crashes right after, the data is there.
This is an important point that often gets overlooked.
Transactions:
The database doesn’t know what an “order” or a “payment” is.
It only knows rows, tables, and constraints.
It’s the backend application that decides:
This is exactly why frameworks like Spring exist:
to help you define transactional boundaries in your application code, not in SQL scripts.
Modern backend systems are:
Transactions are your last line of defense against data corruption.
They don’t make failures disappear — but they make failures safe.
And that’s the key idea we’ll carry into Spring Boot and @Transactional.
Before diving deeper into @Transactional, it’s important to understand how Spring manages transactions at a high level.
Not the full internal implementation — just enough to avoid the most common (and painful) mistakes.
Because with transactions in Spring, the “how” matters almost as much as the “what”.
Spring supports two ways of managing transactions:
You explicitly start, commit, and rollback a transaction in code.
TransactionDefinition definition = new DefaultTransactionDefinition();
TransactionStatus status = transactionManager.getTransaction(definition);
try {
    // business logic
    transactionManager.commit(status);
} catch (Exception ex) {
    transactionManager.rollback(status);
    throw ex;
}
This works, but:
You can use it, but in most Spring Boot applications, you shouldn’t.
This is where @Transactional comes in.
You declare what should be transactional, not how to manage the transaction.
@Transactional
public void placeOrder() {
    // business logic
}
Spring takes care of:
This separation of concerns is one of the reasons why Spring-based backends are so readable and maintainable.
PlatformTransactionManager
At runtime, Spring delegates all transaction operations to a PlatformTransactionManager.
Think of it as an abstraction layer between:
Depending on what you use, Spring will plug in a different implementation:
DataSourceTransactionManager → JDBC
JpaTransactionManager → JPA / Hibernate
ReactiveTransactionManager → reactive stacks
This abstraction is what allows you to write framework-agnostic transactional code, while still being tightly integrated with your persistence technology.
You almost never interact with it directly — but it’s always there.
@Transactional
At first glance, @Transactional looks deceptively simple.
You put it on a method, and magically:
And in happy-path demos, that’s exactly what happens.
In real-world applications, however, where and how you use @Transactional makes a huge difference.
Let’s break it down.
The most basic usage looks like this:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    paymentRepository.save(payment);
}
If an exception is thrown during the method execution:
If the method completes successfully:
So far, so good.
But this simplicity hides a lot of assumptions.
One of the biggest misconceptions is thinking of @Transactional as something that adds behavior.
It doesn’t. It defines a transactional boundary:
In other words, when you annotate a method with @Transactional, Spring does NOT modify your method.
Instead, Spring:
This is done using Spring AOP (Aspect-Oriented Programming).
In practice, the flow looks like this:
Client → Spring Proxy → Transaction Interceptor
→ Your Method → Transaction Commit / Rollback
Why is this important?
Because only method calls that go through the proxy are transactional.
This single sentence explains:
why self-invocation (calling a method from within the same class) is not transactional
why @Transactional on private methods is ignored
We’ll get back to this later in the pitfalls section, but keep this mental model in mind.
Where @Transactional Should Live (and Where It Shouldn’t)
A common question is: where do I put @Transactional?
The short answer:
On service-layer methods that define a business operation.
Typically:
Example:
@Service
public class OrderService {

    @Transactional
    public void placeOrder(CreateOrderCommand command) {
        // validate input
        // persist order
        // update stock
        // trigger side effects
    }
}
This makes your transactional boundary:
Class-Level @Transactional
Spring allows you to annotate:
@Transactional
@Service
public class OrderService {
    ...
}
This means:
This can be useful, but it’s also dangerous if overused.
From experience:
Explicit is better than implicit — especially with transactions.
At this point, we have a solid foundation:
@Transactional Attributes (The Real Ones)
@Transactional is not just an on/off switch.
Behind that single annotation there are rules that control how transactions behave, especially when:
Most bugs related to transactions come from default assumptions that turn out to be wrong.
Let’s go through the attributes that actually matter in real projects.
Propagation defines what happens when a transactional method is called from another transactional method.
This is by far the most important attribute.
REQUIRED (Default)
@Transactional(propagation = Propagation.REQUIRED)
Meaning:
This is what you want most of the time.
Example:
@Transactional
public void placeOrder() {
    orderService.saveOrder();
    paymentService.charge();
}
If charge() fails:
This is usually correct and desirable.
REQUIRES_NEW
@Transactional(propagation = Propagation.REQUIRES_NEW)
Meaning:
Classic use case:
Example:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    auditService.logOrderAttempt(order); // REQUIRES_NEW
    throw new RuntimeException("Payment failed");
}
Result:
This is powerful — and dangerous if misused.
The other propagation types:
SUPPORTS → join if exists, otherwise run non-transactionally
MANDATORY → fail if no transaction exists
NOT_SUPPORTED → suspend any existing transaction
NEVER → fail if a transaction exists
NESTED → savepoints (DB-dependent)
👉 In most applications:
REQUIRED and REQUIRES_NEW cover 95% of use cases.
Isolation
Isolation defines how concurrent transactions affect each other.
Default:
@Transactional(isolation = Isolation.DEFAULT)
This delegates to the database default (often READ_COMMITTED).
Higher isolation = fewer anomalies = lower throughput.
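If a particular operation needs stronger guarantees than the database default, you can override the level per method. A minimal sketch (the method, the repository calls, and the choice of SERIALIZABLE are illustrative assumptions, not a recommendation):

@Transactional(isolation = Isolation.SERIALIZABLE)
public void transferFunds(long fromAccountId, long toAccountId, BigDecimal amount) {
    // Strongest isolation: concurrent transfers behave as if executed one at a time,
    // at the cost of throughput and a higher chance of serialization failures.
    accountRepository.debit(fromAccountId, amount);
    accountRepository.credit(toAccountId, amount);
}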
👉 Rule of thumb:
This is where many Spring developers get burned.
By default:
Spring rolls back only on unchecked exceptions (RuntimeException) and Error.
That means this will NOT rollback:
@Transactional
public void placeOrder() throws Exception {
    orderRepository.save(order);
    throw new Exception("Checked exception");
}
From a business perspective, this operation clearly failed.
From Spring’s perspective, however, this is a checked exception — and the transaction is committed.
No rollback. No warning. Just inconsistent data.
Yes, really.
You can override this:
@Transactional(rollbackFor = Exception.class)
Or the opposite:
@Transactional(noRollbackFor = BusinessException.class)
This is essential when:
In most real-world Spring Boot applications, business failures should invalidate the transaction.
A clean and effective way to model this is by using custom unchecked exceptions:
public class PaymentFailedException extends RuntimeException {
}

@Transactional
public void placeOrder() {
    orderRepository.save(order);
    throw new PaymentFailedException();
}
This approach has several advantages:
If the operation fails, the transaction fails. No ambiguity.
👉 Always be explicit if you rely on checked exceptions.
readOnly: Small Flag, Big Impact
@Transactional(readOnly = true)
This:
Perfect for:
Not a silver bullet — but a good habit.
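As a sketch, a typical read-only service method might look like this (the repository method is a hypothetical name):

@Transactional(readOnly = true)
public List<Order> findRecentOrders(Long customerId) {
    // With readOnly = true, Hibernate can skip dirty checking and flushing,
    // and some drivers can route the query to a read replica.
    return orderRepository.findRecentByCustomerId(customerId);
}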
timeout: A Safety Net
@Transactional(timeout = 5)
If the transaction runs longer than 5 seconds:
Useful for:
Especially relevant under load.
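For example (a sketch; the batch method and service are hypothetical):

@Transactional(timeout = 5)
public void reconcileDailyOrders() {
    // If this unit of work exceeds 5 seconds, Spring marks the transaction
    // for rollback and a TransactionTimedOutException is thrown.
    reconciliationService.run();
}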
Most transactional bugs are not caused by:
They come from:
Understanding these attributes turns @Transactional from a “magic annotation” into a precise tool.
At this point, we understand how transactions should work.
Unfortunately, many transactional bugs don’t come from a lack of knowledge —
they come from small details that are easy to miss and hard to debug.
Let’s go through the most common pitfalls you’ll encounter in real Spring Boot applications.
This is probably the most famous Spring transactional pitfall.
@Service
public class OrderService {

    public void placeOrder() {
        saveOrder(); // ❌ no transaction
    }

    @Transactional
    public void saveOrder() {
        orderRepository.save(order);
    }
}
At first glance, this looks fine.
It’s not.
The call to saveOrder() happens inside the same class.
It never goes through the Spring proxy.
Result:
@Transactional is completely ignored
This is one of the reasons transactional bugs feel “random”.
The fix: move the transactional method into a separate bean, so the call goes through the proxy:
@Service
public class OrderPersistenceService {

    @Transactional
    public void saveOrder(Order order) {
        orderRepository.save(order);
    }
}
@Transactional on private (or non-public) Methods
Another classic.
@Transactional
private void saveOrder() {
    ...
}
This will never work.
Spring proxies intercept public method calls only (by default).
Private, protected, or package-private methods are ignored.
Again:
👉 Transactional methods must be public. Always.
This one is subtle — and extremely common.
@Transactional
public void placeOrder() {
    try {
        paymentService.charge();
    } catch (PaymentFailedException ex) {
        log.error("Payment failed", ex);
    }
}
Looks harmless, right?
But now:
Spring rolls back only if the exception escapes the transactional boundary.
Either rethrow the exception, or mark the transaction for rollback manually:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
But in most cases, rethrowing is the cleanest solution.
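Putting both options together, a hedged sketch:

@Transactional
public void placeOrder() {
    try {
        paymentService.charge();
    } catch (PaymentFailedException ex) {
        log.error("Payment failed", ex);
        // Option 1: rethrow so the exception crosses the transactional boundary
        throw ex;
        // Option 2 (instead of rethrowing): keep handling the exception,
        // but explicitly mark the transaction for rollback:
        // TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}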
Sometimes transactional methods grow too much:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    emailService.sendConfirmationEmail(); // ❌
}
Problems:
If the email fails, should the whole order roll back?
Probably not.
Transactional boundaries should protect data consistency, not external systems.
Silent Rollbacks (UnexpectedRollbackException)
Sooner or later, you’ll see this:
UnexpectedRollbackException: Transaction silently rolled back
This usually happens when:
Common causes include an inner method marking the shared transaction rollback-only, or misplaced REQUIRES_NEW boundaries.
When you see this exception:
Technically correct. Practically dangerous.
Long transactions:
Common causes:
Rule:
Transactions should be as short as possible, but as long as necessary.
Most transactional problems share a common theme:
That’s why:
…is more important than memorizing annotations.
@Transactional and @Async: A Dangerous Combination
At some point, almost every Spring Boot developer tries to combine:
@Transactional → consistency
@Async → performance
On paper, it sounds like a great idea.
In practice, it’s one of the most misunderstood and dangerous combinations in Spring.
Let’s clear things up.
@Transactional is thread-bound.
A transaction:
@Async, on the other hand:
So this code:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    asyncService.sendConfirmationEmail(order);
}

@Async
public void sendConfirmationEmail(Order order) {
    // ...
}
does not mean:
“send the email in the same transaction, but asynchronously”
It means:
“start a completely separate execution, with no transaction at all (unless explicitly defined)”
Many developers assume:
“If the async method is called from a transactional one, it participates in the same transaction.”
It doesn’t.
Ever.
Different thread = different transactional context.
Let’s look at a slightly more subtle example:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    asyncService.notifyWarehouse(order);
    throw new RuntimeException("Payment failed");
}
Possible outcome:
This is how distributed inconsistencies are born.
Making an @Async Method Transactional (Yes, But…)
You can put @Transactional on an async method:
@Async
@Transactional
public void notifyWarehouse(Order order) {
    ...
}
This creates:
This might be fine — or disastrous — depending on intent.
Again: there is no shared transaction.
A Safer Pattern Than @Transactional + @Async
Spring provides a much safer mechanism:
@Transactional
public void placeOrder() {
    orderRepository.save(order);
    applicationEventPublisher.publishEvent(new OrderPlacedEvent(order));
}

@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onOrderPlaced(OrderPlacedEvent event) {
    sendEmail(event);
}
Now:
This pattern is gold.
For more complex systems:
Persist state first, then publish events.
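A minimal outbox-style sketch of that idea (OutboxEvent and its repository are hypothetical names; a separate job would publish pending events to the broker and mark them as sent):

@Transactional
public void placeOrder(Order order) {
    orderRepository.save(order);
    // Persisted in the SAME transaction as the order:
    // the event exists if and only if the order was committed.
    outboxEventRepository.save(new OutboxEvent("ORDER_PLACED", order.getId()));
}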
Transactions protect your database, not the world.
Never assume an async operation is part of your transaction.
It never is.
If consistency matters:
Always in that order.
The Bottom Line on @Async
@Async is not dangerous by itself.
What’s dangerous is:
Once you internalize this model, the behavior becomes predictable — and safe.
One of the most frustrating things about transactional bugs is that everything looks fine until it’s not.
No errors.
No stack traces.
Just data in the wrong state.
When that happens, logging is often the only way to understand what Spring is actually doing.
Let’s see how to make transactions visible.
Transactional behavior is:
So when a transaction:
…it usually happens outside your business code.
Without proper logs, you’re debugging blind.
Spring exposes very useful logs — you just need to turn them on.
In application.yml (or application.properties):
logging:
  level:
    org.springframework.transaction: DEBUG
This alone already shows:
With transaction logging enabled, you’ll start seeing logs like:
Creating new transaction with name [OrderService.placeOrder]
Participating in existing transaction
Committing JDBC transaction
Rolling back JDBC transaction
This tells you:
When debugging propagation issues, this is invaluable.
Transaction logs show when changes are flushed and committed.
To see what is actually executed, enable SQL logs:
logging:
  level:
    org.hibernate.SQL: DEBUG
Optionally, parameter binding:
logging:
  level:
    org.hibernate.type.descriptor.sql: TRACE
Now you can correlate:
This is often where inconsistencies finally make sense.
If you’ve ever seen this:
UnexpectedRollbackException: Transaction silently rolled back
it means:
To debug this:
look for the setRollbackOnly event in the transaction logs
The rollback never comes from nowhere — it’s just hidden.
A common mistake is relying only on business logs:
log.info("Order placed successfully");
This log may appear before the commit, even if the transaction is rolled back a moment later.
If you need certainty, log:
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onOrderCommitted(OrderPlacedEvent event) {
    log.info("Order committed: {}", event.getOrderId());
}
Now your logs reflect reality, not intent.
When dealing with transactional issues:
This approach turns “random behavior” into deterministic behavior.
Transactions don’t fail silently.
They fail quietly.
Logging is what gives them a voice.
Once you get used to reading transaction logs, you’ll start spotting problems before they reach production.
Transactions are one of the most powerful tools in Spring Boot — but they are also one of the most misunderstood.
Here’s what you should remember:
@Transactional is declarative, but precise.
Understand propagation, isolation, rollback rules, and method visibility.
Exceptions drive rollbacks.
Avoid common pitfalls:
Logging is your friend.
Enable transaction and SQL logging to debug propagation, rollback, and commit behavior. Use @TransactionalEventListener for post-commit business logging.
Async and transactions are tricky.
Transactions are thread-bound. Async methods run in a different thread and have a separate transactional context. Prefer events or queues for safe decoupling.
Spring Boot gives you powerful tools, but with great power comes great responsibility.
Transactions can protect your data, but only if you understand how they work — not just how they look.
💬 I’d love to hear from you:
Share your experiences in the comments — let’s learn from each other.
2026-02-02 05:42:40
Microsoft's App Center was a widely used mobile DevOps platform offering mobile CI/CD, beta distribution, and analytics.
With Microsoft discontinuing the service in 2025, teams who previously depended on it are forced to explore alternative options. Let's explore some of the alternatives out there for facilitating ad-hoc distribution of pre-release apps to testers.
Buildstash offers a simple replacement for App Center's beta distribution functionality for iOS and Android, while additionally offering support for managing and sharing binaries across any software platform. So rather than a mobile-specific focus, Buildstash also supports desktop platforms, XR and game teams, embedded systems, and so on.
Buildstash has extensive CI/CD integrations, uploading via a simple API, and a simple "App Center style" web uploader.
For mobile apps, it allows you to upload Android APK/AAB and iOS IPA files and share them with testers via multiple methods: distribution groups, simple share links, and even branded portals you can host on your website. This makes it the most flexible in terms of distribution methods. Unlike some other options, testers don’t need accounts, which makes it ideal for sharing builds with external stakeholders such as clients or QA vendors.
Beyond all this, Buildstash offers more comprehensive management of software binaries, including archival, and QA approval workflows.
Firebase App Distribution is part of Google’s broader Firebase ecosystem and provides a solution for distributing pre-release Android and iOS apps to trusted testers. It supports managed tester groups, email-based invites, release notes, and crash reporting when paired with Firebase Crashlytics. For Android teams especially, it offers a smooth experience thanks to its tight integration with Gradle and the Android toolchain.
On the iOS side, Firebase App Distribution supports both ad-hoc and enterprise builds, though provisioning and certificate management remain the developer’s responsibility. Teams already invested in Firebase for analytics, authentication, or backend services often find this option convenient, as it consolidates multiple aspects of the development workflow into a single platform, and offers a free plan.
Expo Application Services (EAS) provides build and distribution tooling specifically for React Native and Expo-based applications. EAS includes a CI/CD tool for Expo apps, and allows developers to easily share the resulting builds internally with testers.
If you're developing an Expo or React Native app, and especially if you're already within the EAS ecosystem, this may be a simple and effective choice for sharing beta builds.
Applivery is a more enterprise-focused mobile platform, especially suited to internal distribution with Mobile Device Management (MDM). It now also offers beta testing with unmanaged devices, but it may be an expensive option, starting from €49 for only 1 user / 3 apps / 300 downloads. Applivery also provides over-the-air updates and integrates with popular CI/CD tools, making it suitable for structured testing environments.
Applivery's enterprise and MDM focus may make it particularly attractive for larger teams or organizations that need more governance and traceability in their beta testing process.
Appcircle is positioned as a complete enterprise build platform, including CI/CD. With build automation, testing, and distribution combined, it may be an attractive option to replace App Center's full feature set for larger teams with an enterprise budget. Its distribution module supports ad-hoc sharing of Android and iOS builds, tester groups, and version history, all accessible through a web dashboard.
2026-02-02 05:40:15
In enterprise software development, testing environments often pose significant security risks, especially regarding the inadvertent exposure of Personally Identifiable Information (PII). As a Senior Architect, addressing this challenge involves implementing robust, scalable solutions that seamlessly integrate into existing workflows. One effective strategy is leveraging API development to enforce data masking and access controls dynamically.
Test environments are typically replicas of production systems used for testing new features, integrations, and performance. However, they frequently use real production data for authenticity, which can inadvertently lead to PII exposure—raising compliance issues and risking security breaches.
Traditional approaches, like static data anonymization or hardcoded filters, can be insufficient and inflexible, especially as data schemas evolve. This calls for a dynamic, API-centric solution that centralizes control, reduces redundancy, and enhances security.
The core idea is to develop a Data Masking API that acts as an intermediary between test clients and data sources. This API intercepts data requests and applies masking or redaction based on configurable policies.
from flask import Flask, jsonify

app = Flask(__name__)

# Example masking policy
masking_policy = {
    'email': True,
    'phone': True,
    'ssn': True
}

# Mock database data
user_data = {
    'id': 123,
    'name': 'John Doe',
    'email': 'john.doe@example.com',
    'phone': '555-1234',
    'ssn': '123-45-6789'
}

# Masking functions
def mask_email(email):
    return email.split('@')[0] + '@***.com'

def mask_phone(phone):
    return '***-****'

def mask_ssn(ssn):
    return '***-**-****'

@app.route('/user/<int:user_id>', methods=['GET'])
def get_user(user_id):
    # In a real implementation, fetch from the database.
    # Copy the record so masking never mutates the stored data.
    data = dict(user_data)

    # Apply masking based on policy
    if masking_policy.get('email'):
        data['email'] = mask_email(data['email'])
    if masking_policy.get('phone'):
        data['phone'] = mask_phone(data['phone'])
    if masking_policy.get('ssn'):
        data['ssn'] = mask_ssn(data['ssn'])

    return jsonify(data)

if __name__ == '__main__':
    app.run(port=5000)
This API acts as a gatekeeper, ensuring that sensitive data is masked when accessed in test environments.
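For example, requesting the mock user returns only masked values (the output below is derived from the code above):

curl http://localhost:5000/user/123
# {"email": "john.doe@***.com", "id": 123, "name": "John Doe", "phone": "***-****", "ssn": "***-**-****"}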
By developing an API-driven data masking layer, senior architects can significantly reduce PII leakage risks in test environments. This approach ensures compliance, enhances security, and provides the flexibility needed for evolving enterprise needs. Leveraging APIs as a control point enables a scalable, manageable, and auditable solution, aligning with enterprise security standards and best practices.
For organizations operating at scale, integrating such an API into their CI/CD pipelines and data governance frameworks can dramatically improve their security posture without sacrificing development agility.
2026-02-02 05:39:08
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
I am an aspiring software developer passionate about building clean, functional web experiences. For this challenge, I created a personal portfolio that represents my growth and journey in the tech world.
https://uaithegreat001.github.io/PortpolioCallenge2026/
https://github.com/uaithegreat001/PortpolioCallenge2026.git
Technical Note: I fully containerized this application using a Dockerfile for Google Cloud Run. However, due to billing verification issues that persisted on the final deadline day, I have hosted the live version on GitHub Pages while providing the full Docker configuration in my repository to demonstrate the technical implementation.
This portfolio was created using pure HTML, CSS, and vanilla JavaScript to ensure high performance and simplicity. To prepare for the Google Cloud Run requirement, I created a custom Dockerfile based on nginx:alpine to serve my static files on port 8080.
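A minimal sketch of what such a Dockerfile could look like (this is my reconstruction based on the nginx:alpine and port 8080 setup described above, not the repository's exact file):

FROM nginx:alpine
# Copy the static site into nginx's default web root
COPY . /usr/share/nginx/html
# Cloud Run expects the container to listen on port 8080, not nginx's default 80
RUN sed -i 's/listen[[:space:]]*80;/listen 8080;/' /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]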
Google AI & Tools used:
I am most proud of the animation physics applied during the implementation stage, and of gaining new experience packaging the app as a Docker container for Cloud Run. As a developer and problem solver, I always welcome new challenges: solve them, gain experience, and move on.
Thanks All.
2026-02-02 05:26:37
After deploying my personal website to AWS, I realized the need to automate the deployment process to streamline the delivery of new features. For that, you can’t go wrong with a CI/CD pipeline powered by your Git provider — in this case, GitHub and GitHub Actions.
In this article, I’ll walk through the architecture and CI/CD pipeline I use to deploy a React application to S3 + CloudFront, authenticated via GitHub Actions OIDC (no long-lived AWS credentials).
This setup is suitable for real-world projects and follows AWS and GitHub best practices.
Before diving into CI/CD, let’s clarify the infrastructure that was already in place.
The following resources were already created before the pipeline was written.
1 - S3 bucket
2 - CloudFront distribution
3 - Name.com
Once this setup is complete, the website can already be accessed via the custom domain.
The CI/CD pipeline’s job is to automate updates safely and consistently.
1 - Create an OIDC Identity Provider
In AWS IAM:
Provider URL: https://token.actions.githubusercontent.com
Audience: sts.amazonaws.com
This allows AWS to trust GitHub as an identity provider.
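If you prefer the CLI over the console, the provider can be created like this (a sketch; older AWS CLI versions also require a --thumbprint-list argument):

aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com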
2 - Create an IAM Role for GitHub Actions
This role will be assumed by the GitHub Actions runner.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<ORG>/<REPO>:*"
        }
      }
    }
  ]
}
3 - Attach Required IAM Permissions
The role needs permissions to:
Example policy (simplified):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}
The pipeline is split into three jobs:
This separation improves clarity, caching, and debuggability.
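All three jobs live in a single workflow file. The article doesn't show its trigger section, so here is a minimal sketch consistent with the "every push to main" behavior described at the end:

name: Deploy to S3 + CloudFront
on:
  push:
    branches: [main]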
Job 1: Installing Dependencies
install:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install dependencies
      uses: actions/setup-node@v4
      with:
        node-version: "22.x"
        cache: 'npm'
    - run: npm ci
What this job does
This job ensures the dependency tree is valid before moving forward.
Job 2: Building the React Application
build:
  runs-on: ubuntu-latest
  needs: install
  steps:
    - uses: actions/checkout@v4
    - name: Use Node.js 22
      uses: actions/setup-node@v4
      with:
        node-version: "22.x"
        cache: 'npm'
    - run: npm ci
    - run: npm run build
    - name: Upload dist
      uses: actions/upload-artifact@v4
      with:
        name: dist
        path: ./dist
What this job does
Artifacts allow the deploy job to be fully decoupled from the build process.
Job 3: Deploying to AWS (S3 + CloudFront)
deploy:
  needs: build
  runs-on: ubuntu-latest
  environment: aws
  permissions:
    id-token: write
    contents: read
  steps:
    - name: Download dist artifact
      uses: actions/download-artifact@v4
      with:
        name: dist
        path: dist
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: ${{ secrets.AWS_ARN }}
        aws-region: ${{ vars.AWS_REGION }}
        role-session-name: ${{ vars.ROLE_SESSION_NAME }}
    - name: Deploy to S3
      run: |
        aws s3 sync dist s3://${{ vars.S3_BUCKET_NAME }} \
          --delete \
          --exact-timestamps
    - name: Invalidate CloudFront
      run: |
        aws cloudfront create-invalidation \
          --distribution-id ${{ vars.CLOUDFRONT_DISTRIBUTION_ID }} \
          --paths "/*"
What this job does
Invalidating /* ensures CloudFront fetches the new version immediately.
After every push to main:
All without storing a single AWS secret in GitHub.
2026-02-02 05:12:54
In the fiercely competitive world of email marketing, landing in spam traps can significantly damage your sender reputation, leading to delivery issues and decreased engagement. As a Senior Developer, addressing this challenge with a zero-budget approach requires a strategic blend of technical ingenuity and best practices. This blog discusses how to minimize the risk of spam traps while using React to manage your email validation and sending logic, all without incurring extra costs.
Spam traps are email addresses used by ISPs and anti-spam organizations to identify invalid or malicious senders. These addresses are not for communication; their purpose is solely to catch bad actors. Sending emails to spam traps can flag your domain as a spammer, which affects deliverability.
Since budget constraints limit paid solutions, our focus shifts to leveraging open-source tools, meticulous validation, and intelligent frontend practices within a React app.
Before any server interaction, your React app should validate email address syntax immediately. Use regex patterns for quick initial filtering:
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(email) {
  return emailRegex.test(email);
}

// Usage
if (validateEmail(userEmail)) {
  // Proceed to next validation or API call
} else {
  // Show user error
}
This step prevents obviously invalid emails from reaching servers.
There are several free or open-source verification services like Hunter.io's free tier or Mailcheck.ai, which provide basic MX and domain checks. Since paid tools aren't an option, integrate these into your form submission flow. For example:
async function checkEmailValidity(email) {
  // Encode the address so special characters don't break the query string
  const response = await fetch(`https://api.mailcheck.ai/v1/verify?email=${encodeURIComponent(email)}`);
  const data = await response.json();
  return data.is_valid && data.is_mx_ok;
}

// Usage
checkEmailValidity(userEmail).then(isValid => {
  if (isValid) {
    // Proceed with sending
  } else {
    // Notify user or reject
  }
});
Regularly updating validation logic helps filter out disposable or invalid domains.
Your React application should facilitate list hygiene by encouraging users to confirm their email addresses via double opt-in and providing easy-to-use unsubscribe options. Also, implementing a re-engagement strategy helps clean your list over time.
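As a sketch, the confirmation step of a double opt-in flow might look like this on the frontend (the /api/confirm endpoint is hypothetical):

// Exchange the token from the confirmation email for a verified subscription
async function confirmSubscription(token) {
  const response = await fetch(`/api/confirm?token=${encodeURIComponent(token)}`, {
    method: 'POST',
  });
  if (!response.ok) {
    throw new Error('Confirmation failed');
  }
  return response.json();
}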
Track bounce rates and engagement through your frontend. React's state management can help flag and suppress emails exhibiting suspicious behavior—such as high bounce rates—reducing chances of hitting spam traps.
import { useState, useCallback } from 'react';

// Bounce handling as a reusable hook
function useBounceTracking() {
  const [bounces, setBounces] = useState({});

  const reportBounce = useCallback((email) => {
    setBounces(prev => ({ ...prev, [email]: true }));
  }, []);

  // Check bounce info before sending
  const canSend = (email) => !bounces[email];

  return { reportBounce, canSend };
}
While tackling spam traps without a budget is challenging, combining thorough validation, list hygiene, and user feedback within your React app significantly reduces trap risks. Prioritize continuous monitoring and list quality over aggressive broad-based strategies. Remember, respecting user privacy and consent is foundational for sustainable email practices.
In summary, these low-cost measures empower you to uphold high deliverability standards and protect your domain reputation without spending on expensive solutions.