2025-11-30 00:44:23
What is AWS EKS?
Amazon Elastic Kubernetes Service (AWS EKS) is a managed Kubernetes service offered by Amazon Web Services (AWS).
AWS EKS manages the Kubernetes control plane for you, ensuring its availability, scalability, and security.
The Kubernetes Control Plane components include the API Server, etcd, Scheduler, Controller Manager, etc.
To know more about AWS EKS, check AWS Documentation
What is AWS Fargate:
AWS Fargate is a serverless technology that provides on-demand and right-sized compute capacity that allows you to run containers without managing the underlying virtual machines.
To know more about AWS EKS with Fargate, check AWS Documentation
The Setup:
Deployed the 2048 game on AWS EKS with Fargate, using Helm to install the AWS Load Balancer Controller, and integrated an OIDC identity provider with EKS for secure service account authentication.
Result:
A production-grade, auto-scaling, highly available, secure application that demonstrates enterprise-level architecture patterns. 🚀
Key Highlights:
✅ Serverless Containers: EKS Fargate eliminated the overhead of managing worker nodes - no patching, no scaling headaches, just pure application focus
✅ Intelligent Load Balancing: Ingress - AWS LB Controller automatically provisions and configures Application Load Balancers, enabling path-based routing
✅ OIDC Integration: Configured an IAM OIDC identity provider to enable secure, native AWS IAM authentication for Kubernetes service accounts
✅ Helm-Powered Deployment: Leveraged Helm charts to deploy the AWS Load Balancer Controller
✅ Networking: VPC with private/public subnet isolation
Real Business Benefits:
💰 Cost Efficiency: Pay only for actual usage—no idle servers burning budget
📈 Handles Growth Automatically: Scale seamlessly without manual intervention or additional overhead
🛡️ Security Built-In: Compliance and security policies enforced automatically
🔄 Always Available: Multi-AZ deployment ensures your business stays online, even during failures (can also be implemented in Multi-Region, by deploying separate EKS clusters in different AWS Regions)
🚀 Team Productivity: Infrastructure management is automated — teams can focus on building applications, not maintaining servers
📊 Visibility & Control: Real-time monitoring catches issues before customers are affected
Total Cost for the demo setup: $0.73 (~2.5 Hrs)
Deploying this application reinforced critical lessons about EKS, Fargate, and building resilient, scalable cloud-native systems.
Prerequisites:
Before proceeding, you will need an AWS account and the following tools installed and configured: the AWS CLI, eksctl, kubectl, and Helm.
AWS Account: You need an AWS account: https://aws.amazon.com/
AWS CLI: The AWS Command Line Interface (AWS CLI) is an open-source tool by Amazon Web Services that lets users manage AWS services from their terminal. It offers a consistent interface for services like Amazon EC2, S3, IAM, and more.
To setup and configure AWS CLI, please refer to AWS CLI
eksctl: eksctl is the official command-line interface (CLI) for Amazon EKS that automates and simplifies the creation and management of Amazon Elastic Kubernetes Service clusters
To install the eksctl tool, please refer to the AWS documentation: install eksctl
Kubectl: kubectl is the command-line interface (CLI) tool for interacting with and managing Kubernetes clusters. It acts as a client for the Kubernetes API, allowing users to send commands to the cluster's control plane
To install kubectl, please refer to install kubectl
To know more about eksctl and kubectl check AWS documentation
Helm: Helm is a Kubernetes package manager that simplifies application deployment and management. It uses "charts," pre-configured packages with all necessary resources. Helm allows users to define, install, and upgrade applications with one command, ensuring consistency and reducing errors.
To install Helm, refer to install helm
Deployment:
1. Creating EKS Cluster “game-2048-demo” using eksctl
Run-command:
eksctl create cluster --name game-2048-demo --region us-east-1 --fargate
Note: “eksctl” takes care of the resource provisioning for AWS EKS using CloudFormation templates. It creates a new VPC with both public and private subnets across multiple Availability Zones, along with a default Fargate profile.
⚠️ WARNING!
Please note: the final step of this demo is deleting all the resources created. Once you have successfully completed the project, make sure you perform the cleanup step at the end to avoid ongoing charges.
Review the screenshots below showing eksctl creating the AWS EKS cluster “game-2048-demo”
Review and confirm that the cluster “game-2048-demo” was created from the AWS Management Console as well
2. Update the kubeconfig for kubectl:
Run-command:
aws eks update-kubeconfig --name game-2048-demo --region us-east-1
3. Create a new fargate profile “fgp-game-2048” under a new namespace “game-2048”
Run-command:
eksctl create fargateprofile --cluster game-2048-demo --region us-east-1 --name fgp-game-2048 --namespace game-2048
Creating a new Fargate profile “fgp-game-2048” to deploy the application pods in a new namespace “game-2048.”
4. Deploy the game-2048 application:
Run-command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml
The file 2048_full.yaml (from the link) has all the required resources (deployment, service, and ingress) configured.
The link is provided by official AWS EKS documentation as an example, to get started with AWS EKS click here
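For reference, the ingress in that manifest looks roughly like the following. This is a sketch based on the upstream example (resource names and annotation values may differ slightly between controller versions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # "ip" targets are required for Fargate pods
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
```

The `ingressClassName: alb` is what tells the AWS Load Balancer Controller (installed in a later step) to provision an Application Load Balancer for this ingress.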
Review the screenshots below, confirming the required resources are created and deployed
To review and confirm the application pods, deployments, service, and ingress are up and running under namespace “game-2048”
Run commands below to verify resources (pods, deployments, service, ingress):
kubectl get pods -n game-2048
kubectl get deployments -n game-2048
kubectl get service -n game-2048
kubectl get ingress -n game-2048
Note: The ingress resource exists, but no load balancer has been provisioned for it yet at this point; we will install the AWS Load Balancer Controller using Helm in the upcoming steps
5. Configure an IAM OpenID Connect (OIDC) provider for the cluster “game-2048-demo”
What is the need for IAM OIDC Provider?
IAM OIDC (OpenID Connect) with AWS EKS refers to the method of granting AWS IAM permissions to Kubernetes service accounts within an Amazon EKS cluster. This allows applications running in EKS pods to securely access AWS resources without requiring the direct storage of long-lived AWS credentials in the pod.
Run-command:
eksctl utils associate-iam-oidc-provider --cluster game-2048-demo --approve
The above command creates and associates an OpenID Connect (OIDC) identity provider with your EKS cluster. This is a foundational step for enabling IAM Roles for Service Accounts (IRSA), which allows Kubernetes pods running in your cluster to securely access AWS services without hardcoded credentials.
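Concretely, IRSA works by annotating a Kubernetes service account with an IAM role ARN. A sketch of what such a service account looks like (the account ID is a placeholder; the role name matches the one created in step 8 below):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<your-aws-account-id>:role/AmazonEKSLoadBalancerControllerRole
```

Pods running under this service account can then obtain temporary AWS credentials for that role, with no long-lived secrets stored in the cluster.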
6. Create an IAM role with the necessary permissions for the AWS Load Balancer Controller to provision an Application Load Balancer for the “game-2048” application.
Download the ALB Controller IAM-Policy
Run-command:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
7. Create the IAM Policy for AWSLoadBalancerController using the IAM policy that we downloaded:
Run-command:
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
8. We will create an IAM service account that binds an IAM role to a Kubernetes service account, enabling the AWS Load Balancer Controller pod to access AWS ALB resources with least-privilege permissions.
Run-command:
eksctl create iamserviceaccount \
--cluster=game-2048-demo \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
What it does: the command above creates an IAM role (AmazonEKSLoadBalancerControllerRole) with the downloaded policy attached, creates the Kubernetes service account aws-load-balancer-controller in the kube-system namespace, and annotates it with the role's ARN so the controller pods can assume the role via IRSA.
9. Deploy the ALB Controller using Helm Charts
You need to add the AWS EKS Helm chart repository to your local Helm installation. This is a prerequisite before you can search for or install charts from it.
Run-command:
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
Install Ingress ALB-Controller using helm:
Run-command:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=game-2048-demo --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller --set region=us-east-1 --set vpcId=vpc-xxxxxxxxxxxxx
Now, you can verify the ALBController (aws-load-balancer-controller) Pods are up and running under the system namespace “kube-system”
The ALB Controller provisions and configures AWS Application Load Balancers (ALB) in response to Kubernetes resources.
From the below screenshot, you can review that the AWS ALB is provisioned, and the same is reflected under the ingress resource with the ALB DNS name
Now, to test the “game-2048” application, copy the Application Load Balancer's DNS name (from the AWS Management Console or the CLI, as shown above) and paste it into a browser (as shown below)
What's your highest score?😉 Let me know in the comments!
Take a look at what you have built!
Verify the resources created under the system namespace, i.e. kube-system
Run-commands below:
kubectl get pods –n kube-system
kubectl get deployments –n kube-system
kubectl get service –n kube-system
Verify the resources created under the namespace "game-2048"
Run-commands below:
kubectl get pods –n game-2048
kubectl get deployments –n game-2048
kubectl get service –n game-2048
kubectl get ingress –n game-2048
💡Insights!
Note the difference: the ALB Controller pods run under the system namespace "kube-system" (this is by design, as packaged by AWS), while the Application Load Balancer is provisioned for the ingress under the namespace "game-2048".
10. Clean up: delete the cluster and all the resources created for this demo
Run-command:
eksctl delete cluster --name game-2048-demo --region us-east-1
Review the screenshot below. EKS cluster resources are being deleted or cleaned up
Before you finish, review to ensure that no other resources are missed for cleanup.
That's all for this project demo! HAPPY LEARNING...!
Please share your thoughts and suggestions to improve further.
Grateful to @Abhishek Veeramalla for providing the detailed project demonstration on his YouTube Channel.
2025-11-30 00:41:46
In my previous posts, I've been creating a text editor based on the Kilo text editor.
The changes explained in this post can be found in the Kilo-go GitHub repository, in the config branch
Now we are going to improve it, and since we've finished the guide, I'm not going to continue with the series.
In this article we will make the editor cross-platform and add a configuration file.
The first thing we need to address is being able to run the text editor on any operating system. After finishing the series, I found a library that will help us enter raw mode regardless of the operating system we are working on.
Note: I was able to test this feature on both macOS and Linux; I do not have a Windows machine to test on.
The library that will help us with this is golang.org/x/term so let's proceed to install it
go get golang.org/x/term
Now that we have the library on our project, let's use it to go to raw mode
File: linux/raw.go
package linux
import (
"fmt"
"os"
"github.com/alcb1310/kilo-go/utils"
"golang.org/x/term"
)
func EnableRawMode() (func(), error) {
oldState, err := term.MakeRaw(int(os.Stdin.Fd()))
return func() {
if err = term.Restore(int(os.Stdin.Fd()), oldState); err != nil {
utils.SafeExit(nil, fmt.Errorf("EnableRawMode: error restoring terminal flags: %w", err))
}
}, err
}
File: utils/window.go
package utils
import (
"os"
"golang.org/x/term"
)
func GetWindowSize() (int, int, error) {
return term.GetSize(int(os.Stdout.Fd()))
}
File: main.go
func init() {
...
restoreFunc, err = linux.EnableRawMode()
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %s\r\n", err)
os.Exit(1)
}
}
File: editor/editor.go
func NewEditor(f func()) *EditorConfig {
cols, rows, err := utils.GetWindowSize()
if err != nil {
utils.SafeExit(f, err)
}
...
}
This refactor not only allows us to run the text editor on different platforms, it also simplifies the code considerably
Now that we've changed the library we use to enable raw mode, we need to remove the libraries that are no longer necessary
go mod tidy
The next step in our process is to make the text editor configurable through a configuration file; that way we can change its behavior without modifying the application
We will use variables instead of constants so we can override them with the values from the configuration file. The values we assign at declaration will be the defaults the application falls back on if there is no config file, or if a given option is not set in it.
File: utils/constants.go
var (
KILO_TAB_STOP int = 8
KILO_QUIT_TIMES int = 3
KILO_DEFAULT_COLOR [3]uint8 = [3]uint8{255, 255, 255}
KILO_NUMBER_COLOR [3]uint8 = [3]uint8{255, 0, 0}
KILO_MATCH_COLOR [3]uint8 = [3]uint8{51, 255, 0}
KILO_STRING_COLOR [3]uint8 = [3]uint8{255, 39, 155}
KILO_COMMENT_COLOR [3]uint8 = [3]uint8{0, 255, 255}
KILO_KEYWORD_COLOR [3]uint8 = [3]uint8{255, 239, 0}
KILO_TYPE_COLOR [3]uint8 = [3]uint8{126, 239, 55}
)
File: editor/syntax.go
func editorSyntaxToColor(hl utils.EditorHighlight) (r uint8, g uint8, b uint8) {
switch hl {
case utils.HL_NUMBER:
r = utils.KILO_NUMBER_COLOR[0]
g = utils.KILO_NUMBER_COLOR[1]
b = utils.KILO_NUMBER_COLOR[2]
case utils.HL_MATCH:
r = utils.KILO_MATCH_COLOR[0]
g = utils.KILO_MATCH_COLOR[1]
b = utils.KILO_MATCH_COLOR[2]
case utils.HL_STRING:
r = utils.KILO_STRING_COLOR[0]
g = utils.KILO_STRING_COLOR[1]
b = utils.KILO_STRING_COLOR[2]
case utils.HL_COMMENT, utils.HL_MLCOMMENT:
r = utils.KILO_COMMENT_COLOR[0]
g = utils.KILO_COMMENT_COLOR[1]
b = utils.KILO_COMMENT_COLOR[2]
case utils.HL_KEYWORD:
r = utils.KILO_KEYWORD_COLOR[0]
g = utils.KILO_KEYWORD_COLOR[1]
b = utils.KILO_KEYWORD_COLOR[2]
case utils.HL_TYPE_KEYWORD:
r = utils.KILO_TYPE_COLOR[0]
g = utils.KILO_TYPE_COLOR[1]
b = utils.KILO_TYPE_COLOR[2]
default:
r = utils.KILO_DEFAULT_COLOR[0]
g = utils.KILO_DEFAULT_COLOR[1]
b = utils.KILO_DEFAULT_COLOR[2]
}
return
}
TOML file
We now have to be able to read a .toml file, in which we will save any configuration variables we create. To do so we will use the github.com/BurntSushi/toml library, so let's install it
go get github.com/BurntSushi/toml
File: utils/toml.go
package utils
import (
"fmt"
"os"
"path"
"github.com/BurntSushi/toml"
)
type Settings struct {
QuitTimes int `toml:"quit_times"`
TabStop int `toml:"tab_stop"`
}
type TomlConfig struct {
Settings Settings
Theme map[string][3]uint8
}
func LoadTOML() error {
var config TomlConfig = TomlConfig{}
dir, err := os.UserConfigDir()
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %s\r\n", err)
return err
}
if err = os.MkdirAll(path.Join(dir, "kilo"), 0o755); err != nil {
fmt.Fprintf(os.Stderr, "Error: %s\r\n", err)
return err
}
filepath := path.Join(dir, "kilo", "config.toml")
if _, err = os.Stat(filepath); os.IsNotExist(err) {
return nil
}
_, err = toml.DecodeFile(filepath, &config)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %s\r\n", err)
return err
}
return nil
}
os.UserConfigDir() returns the user configuration directory; on Linux-like systems this is the .config folder inside the user's home directory
os.MkdirAll() creates a directory if it doesn't exist; if it already exists, it does nothing and returns nil as the error
File: main.go
func init() {
...
utils.LoadTOML()
}
File: ${XDG_CONFIG}/kilo/config.toml
[settings]
tab_stop = 2
quit_times = 2
[theme]
comment=[0,25,255]
default=[240,240, 240]
keyword=[255,239,0]
number=[255,0,0]
search=[51,255,0]
string=[255,39,155]
type=[12,239,55]
The final step is to assign these values to their respective variables within the application. Note that we initialize the settings with sentinel values (-1) so we can tell whether the config file actually set them.
File: utils/toml.go
func LoadTOML() error {
var config TomlConfig = TomlConfig{
Settings: Settings{
QuitTimes: -1,
TabStop: -1,
},
}
...
if config.Settings.QuitTimes >= 0 {
KILO_QUIT_TIMES = config.Settings.QuitTimes
}
if config.Settings.TabStop >= 0 {
KILO_TAB_STOP = config.Settings.TabStop
}
var val [3]uint8
var ok bool
if val, ok = config.Theme["default"]; ok {
KILO_DEFAULT_COLOR = val
}
if val, ok = config.Theme["number"]; ok {
KILO_NUMBER_COLOR = val
}
if val, ok = config.Theme["match"]; ok {
KILO_MATCH_COLOR = val
}
if val, ok = config.Theme["string"]; ok {
KILO_STRING_COLOR = val
}
if val, ok = config.Theme["comment"]; ok {
KILO_COMMENT_COLOR = val
}
if val, ok = config.Theme["keyword"]; ok {
KILO_KEYWORD_COLOR = val
}
if val, ok = config.Theme["type"]; ok {
KILO_TYPE_COLOR = val
}
...
}
2025-11-30 00:37:08
Hey fellow developers, welcome back to our DSA learning series. We hope
you are doing great and enjoying this journey with us. This series is
all about exploring problems the way a beginner genuinely sees them. We
are students ourselves, so we know exactly how confusing a problem can
look at first glance, and how a simple explanation at the right moment
can completely change the way you understand it. Today, we are going to
look at a very popular and very interesting problem --- Trapping Rain
Water.
This is a problem that almost every interviewer loves to ask. It is
marked as "Hard" on most platforms, but in reality, once you truly
understand what the question is trying to communicate, the whole logic
feels surprisingly intuitive. So let us take our time and walk through
the thought process slowly, step by step.
We are given an array of non-negative integers that represent the
heights of vertical bars placed next to each other. Each bar has a width
of 1 unit. Now imagine rainwater falling on top of these bars. Depending
on the relative heights of the bars on the left and right, some amount
of water might get trapped between them. Our task is to determine how
much water gets accumulated in total.
For example:
Input: [0,1,0,2,1,0,1,3,2,1,2,1]
Output: 6
Looking at the structure visually, you can almost see small pits and
valleys forming, and the water naturally fills up in those gaps.
Let us start with the most straightforward idea --- the one that comes
to mind when you simply think about the situation as a person, not as a
coder.
Imagine you are standing at a particular bar of height h. To figure
out how much water can stay on top of this bar, you do not look only at
the bar itself; you quickly glance to your left and right. Water can
only stay if there is a taller boundary on both sides. So you check the
tallest bar to your left and the tallest bar to your right.
Then, the maximum water level above the current bar is determined by the
shorter of these two heights, because water will spill over the smaller
boundary.
So the water trapped on index i becomes:
min(max height on left, max height on right) − current height
This idea is clean and logical. Now, to convert this into code, the
simplest approach is to run two loops for each index --- one scanning
leftward, and another scanning rightward. That gives us a clear
brute-force solution.
public int trap(int[] height) {
int n = height.length;
int totalWater = 0;
for (int i = 0; i < n; i++) {
int maxLeft = 0;
int maxRight = 0;
// Find max on the left
for (int j = i; j >= 0; j--) {
maxLeft = Math.max(maxLeft, height[j]);
}
// Find max on the right
for (int j = i; j < n; j++) {
maxRight = Math.max(maxRight, height[j]);
}
int waterLevel = Math.min(maxLeft, maxRight) - height[i];
totalWater += waterLevel;
}
return totalWater;
}
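You can sanity-check the brute force against the sample input from the problem statement. The wrapper class and main method below are just for this demo:

```java
public class TrapBruteForceDemo {
    static int trap(int[] height) {
        int n = height.length;
        int totalWater = 0;
        for (int i = 0; i < n; i++) {
            int maxLeft = 0, maxRight = 0;
            // tallest bar to the left of i (inclusive)
            for (int j = i; j >= 0; j--) maxLeft = Math.max(maxLeft, height[j]);
            // tallest bar to the right of i (inclusive)
            for (int j = i; j < n; j++) maxRight = Math.max(maxRight, height[j]);
            totalWater += Math.min(maxLeft, maxRight) - height[i];
        }
        return totalWater;
    }

    public static void main(String[] args) {
        int[] sample = {0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1};
        System.out.println(trap(sample)); // 6, matching the expected output
    }
}
```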
This solution works perfectly, and it is a good starting point because it forces us to understand the meaning behind the formula. However, it takes O(n²) time since for every index we scan the array twice. The space usage is constant, but the speed becomes an issue as input size grows.
Now that we understand the brute-force idea, let us refine it. Notice
that in the brute force solution, we repeatedly calculate the same
values --- the left maximum and right maximum for many positions. Why
not compute them once and reuse?
This leads to the prefix-max and suffix-max approach.
We create an array left[] where left[i] stores the maximum height
from index 0 to i.
Similarly, we create a right[] array where right[i] stores the
maximum height from index i to the last index.
Once these two arrays are ready, we already have all the information needed to compute the trapped water at every index in O(1) time. The final loop just applies the same formula as before.
Here is the code:
class Solution {
public int trap(int[] height) {
int n = height.length;
if(n == 0) return 0;
int[] left = new int[n];
int[] right = new int[n];
// Build prefix max array
left[0] = height[0];
for(int i = 1; i < n; ++i) {
left[i] = Math.max(left[i - 1], height[i]);
}
// Build suffix max array
right[n - 1] = height[n - 1];
for(int i = n - 2; i >= 0; --i) {
right[i] = Math.max(right[i + 1], height[i]);
}
int trapped = 0;
for(int i = 0; i < n; ++i) {
trapped += Math.min(left[i], right[i]) - height[i];
}
return trapped;
}
}
The time complexity now becomes O(n) and the logic becomes cleaner. The only trade-off is that we use two extra arrays, giving us O(n) space.
Finally, once we understand how prefix and suffix maxima help us, we can push the optimization even further. The question becomes: do we really need two whole arrays? The surprising answer is no. We can keep track of everything we need using just two pointers (one starting from the left end and one from the right) and two variables (leftMax and rightMax) that update as we move inward.
Here is the key observation: water trapping at any point is limited by the smaller boundary between left and right. So, if height[left] is smaller, we focus on the left side. If height[right] is smaller, we shift our attention to the right side. This allows us to calculate trapped water on the fly while moving the pointers.
Here is the final optimized code:
class Solution {
public int trap(int[] height) {
int left = 0, right = height.length - 1;
int leftMax = 0, rightMax = 0;
int trapped = 0;
while (left < right) {
if (height[left] < height[right]) {
if (height[left] >= leftMax) {
leftMax = height[left];
} else {
trapped += leftMax - height[left];
}
left++;
} else {
if (height[right] >= rightMax) {
rightMax = height[right];
} else {
trapped += rightMax - height[right];
}
right--;
}
}
return trapped;
}
}
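As a quick check, the two-pointer version gives the same answer on the sample and also handles edge cases the brute force covers implicitly (empty input, monotonic heights). The demo class name is just for illustration:

```java
public class TwoPointerTrapDemo {
    static int trap(int[] height) {
        int left = 0, right = height.length - 1;
        int leftMax = 0, rightMax = 0, trapped = 0;
        while (left < right) {
            if (height[left] < height[right]) {
                // left boundary is the limiting one
                if (height[left] >= leftMax) leftMax = height[left];
                else trapped += leftMax - height[left];
                left++;
            } else {
                // right boundary is the limiting one
                if (height[right] >= rightMax) rightMax = height[right];
                else trapped += rightMax - height[right];
                right--;
            }
        }
        return trapped;
    }

    public static void main(String[] args) {
        System.out.println(trap(new int[]{0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1})); // 6
        System.out.println(trap(new int[]{}));           // 0: no bars, no water
        System.out.println(trap(new int[]{1, 2, 3, 4})); // 0: monotonic, nothing trapped
    }
}
```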
This version gives us both O(n) time and O(1) space — the optimal combination.
By the time you reach this point, you have seen how the thought process evolves naturally: starting from a basic observation, moving to a brute-force method, then improving it with precomputed arrays, and finally refining it into the elegant two-pointer solution. And this journey is extremely important, because in interviews, most companies are not just looking for the final optimized answer — they want to hear how you build your logic step by step.
This problem is a great example of how understanding the idea is more valuable than memorizing the code. Once the concept becomes clear, all three approaches become easy to derive on your own.
2025-11-30 00:37:00
In this guide I’ll show you how to run fast, isolated, high-quality database integration tests in legacy or framework-less PHP projects. All you need is Doctrine or PDO, plus a small but incredibly powerful trick used by many battle-tested frameworks across different programming language ecosystems.
One reason this is a very solid approach is that it provides the guarantees of real database integration tests — transactions, persisted data, and SQL queries hitting a real database — while keeping execution times extremely low. This makes it ideal for large test suites, continuous refactoring, and yes, even TDD, because it preserves your development flow through a fast feedback loop.
Also, this approach works exceptionally well in legacy projects. Most legacy codebases lack a Testing Foundation. With this technique, you can introduce high-level database integration tests even into very old or badly coupled systems.
Please note that before going all-in on this approach, I tried different alternatives, none of which provides a truly real integration with the database. Here are two of them:
SQLite can run entirely in memory, meaning you can have a fully isolated database instance that lives only in RAM.
For example, you could run your Database Integration Tests across 16 parallel processes, each with its own in-memory database.
This is EXTREMELY fast, and a perfectly valid approach if SQLite is your primary database, but there is a significant gap between SQLite and, for example, PostgreSQL, in behaviour, data types, and SQL semantics. MySQL is somewhat closer to SQLite, but still not equivalent.
If your main database is not SQLite and you choose this approach, you’ll need to limit your queries to the subset of features SQLite supports. And even then, there will always be a non-negligible mismatch, which may keep your confidence from reaching 100%.
In PHP, you can set up SQLite's in-memory Database like this:
$pdo = new PDO('sqlite::memory:');
Just be aware that with any in-memory database testing setup, you’ll need to recreate the schema for every run, as nothing is persisted.
I suggest that you take a look at the official documentation for more details:
I was genuinely surprised when I came across this project. And the good part is that I can speak from experience, having used it for a couple of months. Vimeo describes it as:
A MySQL engine written in pure PHP.
To quickly illustrate the main idea:
// use a class specific to your current PHP version (APIs changed in major versions)
$pdo = new \Vimeo\MysqlEngine\Php8\FakePdo($dsn, $user, $password);
// currently supported attributes
$pdo->setAttribute(\PDO::ATTR_CASE, \PDO::CASE_LOWER);
$pdo->setAttribute(\PDO::ATTR_EMULATE_PREPARES, false);
The library provides its own PDO implementation, which effectively acts as the interface to Vimeo’s MySQL engine under the hood. In theory, it works the same way as a regular PDO instance, and you can generally use it anywhere you would use native PDO.
Some issues I've noticed, though:
For example, inserting NULL into a non-nullable column produces a library-generated exception rather than the usual MySQL error message, and these can be confusing at first.
On the other hand:
I recommend taking a deeper look to see their real motivation:
This is the chosen approach of this guide.
As you may have noticed, all the previous options come with significant limitations. Unless you choose the SQLite in-memory approach and SQLite is your primary database, none of them provides a 100% trustworthy integration test.
The only way to guarantee fully reliable tests is to interact with your database — the same your application uses. This approach does exactly that and works with any database system that supports transactions.
The idea behind this technique is surprisingly simple. At its core, it looks like this:
public function test_user_registration(): void
{
$pdo = new PDO(...);
// start transaction
$pdo->beginTransaction();
// interact with the database performing real operations
$stmt = $pdo->prepare('INSERT INTO users (name) VALUES (:name)');
$stmt->execute(['name' => 'John']);
// the user was inserted, we can do some assertions
$this->assertCount(1, $pdo->query('SELECT * FROM users')->fetchAll());
// our test has finished, we roll back everything
$pdo->rollBack();
}
In other words: open a transaction, perform real operations against the database, make your assertions, and roll everything back at the end.
This is real database interaction with almost zero side effects, and, most importantly, it enables a fast and reliable feedback loop that keeps your development flow smooth. The only noticeable side effect I've found is that auto-incremented IDs keep increasing across tests.
Modern testing setups wrap beginTransaction() and rollBack() inside hooks such as setUp() and tearDown(), which are specific to the testing framework, in this case PHPUnit. But the underlying mechanism is exactly the same.
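A minimal sketch of that wrapping in PHPUnit. The base-class name and the createConnection() helper are assumptions for illustration, not part of any library:

```php
<?php

use PHPUnit\Framework\TestCase;

abstract class DatabaseTestCase extends TestCase
{
    protected \PDO $pdo;

    protected function setUp(): void
    {
        $this->pdo = $this->createConnection();
        // every query the test runs happens inside this transaction
        $this->pdo->beginTransaction();
    }

    protected function tearDown(): void
    {
        // discard everything the test did
        if ($this->pdo->inTransaction()) {
            $this->pdo->rollBack();
        }
    }

    /** Build the PDO connection to your *testing* database. */
    abstract protected function createConnection(): \PDO;
}
```

Concrete test cases then extend this class and write assertions as usual, with no transaction boilerplate in each test.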
Also, you’ll probably want to separate your testing and development databases. If you mix them (i.e., use your development database for tests), your tests won’t start from a clean state, and you’ll eventually end up with incorrect assumptions and unreliable results.
This technique is not new. In fact, it’s well-established and widely used across many battle-tested frameworks and tools in both PHP and non-PHP ecosystems. Frameworks like Ruby on Rails, Django (Python) and Spring Boot (Java) rely on the same idea: run each test inside a database transaction and roll it back at the end.
Over the years, this pattern has proven to be one of the fastest, cleanest, and most reliable ways to write real database integration tests.
Here are some well-known examples:
Since the early versions of Rails (2005–2006, around its initial release), this mechanism has been supported. It has been part of Rails’ DNA from the very beginning.
This approach allowed Rails applications to scale their test suites without suffering the performance penalties of repeatedly creating or truncating tables, and it helped popularize transactional testing patterns in many other frameworks.
By default, Rails automatically wraps tests in a database transaction that is rolled back once completed. This makes tests independent of each other and means that changes to the database are only visible within a single test.
Reference: Ruby on Rails - Transactional Database Tests
Since 2017, WordPress’ PHPUnit test suite has adopted this transactional approach: each test starts a MySQL transaction and rolls it back after execution. This ensures real SQL behavior while keeping the database clean between tests.
Database modifications made during a test, on the other hand, are not persistent. Before each test, the suite opens a MySQL transaction (START TRANSACTION) with autocommit disabled, and at the end of each test the transaction is rolled back (ROLLBACK). This means that database operations performed from within a test, such as the creation of test fixtures, are discarded after each test.
Reference: WordPress Handbook - Testing with PHPUnit
The Illuminate\Foundation\Testing\RefreshDatabase Trait in Laravel also does exactly what we described. It wraps the test within a Database transaction, and rolls back everything at the end of the test.
The Illuminate\Foundation\Testing\RefreshDatabase trait does not migrate your database if your schema is up to date. Instead, it will only execute the test within a database transaction. Therefore, any records added to the database by test cases that do not use this trait may still exist in the database.
Reference: Laravel - Resetting the Database after each test
In the Symfony ecosystem, this approach is commonly implemented through the dama/doctrine-test-bundle, a bundle that — as of today — has more than 33 million downloads.
It is also one of the most decoupled, enterprise-grade solutions available. In practice, this means you can use the 'non-Symfony' part of this library in virtually any project, benefiting from the level of robustness and reliability that it has gained over the years.
You might hesitate about the Doctrine requirement — but there’s an important reason for it. This whole approach relies on database transactions, and that raises an immediate question: what happens if your application performs nested transactions?
This is exactly where the library shines. It handles transactional tests, even when your code opens its own transactions internally. Thanks to Doctrine’s DBAL middleware and its savepoint support, nested transactions work seamlessly on drivers such as PostgreSQL and MySQL.
Using dmaicher/doctrine-test-bundle in your framework-agnostic project
In this section, we’ll focus on how to configure this library for any framework-agnostic project — whether it’s a legacy codebase or a modern project where you intentionally chose to keep things minimal.
As a matter of fact, I posted a question in the library's GitHub repository asking about this exact use case, and David Maicher, the official maintainer, was kind enough to help me through the details. What follows is essentially the result of that exchange.
This assumes you already have a working Doctrine connection in place.
Install the composer package:
composer require dama/doctrine-test-bundle:^8.4 --dev
phpunit.xml
This is required so that PHPUnit automatically rolls back the database transaction after each test.
Add the following <extensions> block to your phpunit.xml file (or whichever configuration file you use); if you already have an <extensions> section, simply add the <bootstrap> class to it:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
    <!-- ... other stuff ... -->
    <extensions>
        <bootstrap class="DAMA\DoctrineTestBundle\PHPUnit\PHPUnitExtension"/>
    </extensions>
</phpunit>
The following code shows what you would normally want to have in your “Doctrine connection” setup.
The key parts are:
- the dama.connection_key parameter (it can be set to anything, but it must be present)
- the setKeepStaticConnections(true) call
If you miss any of these, the integration test won't work.
use Doctrine\ORM\Configuration;
use Doctrine\ORM\ORMSetup;
use Doctrine\DBAL\Connection;
use Doctrine\DBAL\DriverManager;
function getDoctrineConnection(Environment $environment): Connection
{
    $parameters = [
        'driver' => 'pdo_pgsql',
        'host' => '127.0.0.1',
        'user' => 'postgresql',
        'password' => 'postgresql',
        'dbname' => 'app_prod',
    ];

    // This part is not relevant for this example, but if you use Doctrine,
    // you probably use the ORM too; it is included to make the setup
    // more similar to your context.
    $config = ORMSetup::createAttributeMetadataConfiguration(
        paths: [$domainEntitiesPath],
        isDevMode: $environment->not(Environment::production),
    );

    // You will probably want a check similar to this
    if ($environment->is(Environment::testing)) {
        // You probably also want to switch to a separate, empty database for testing!
        $parameters['dbname'] = 'app_tests';

        // Set a connection key
        $parameters['dama.connection_key'] = 'anything-is-ok';

        // Add the DBAL middleware
        $config->setMiddlewares([
            new \DAMA\DoctrineTestBundle\Doctrine\DBAL\Middleware(),
        ]);

        // Keep static connections across tests
        \DAMA\DoctrineTestBundle\Doctrine\DBAL\StaticDriver::setKeepStaticConnections(true);
    }

    return DriverManager::getConnection($parameters, $config);
}
This is it. You don’t need anything else.
This is a complete example of how a Database Integration Test now looks:
use PHPUnit\Framework\TestCase;

class SomeTest extends TestCase
{
    public function test_doctrine_connection(): void
    {
        $connection = getDoctrineConnection(Environment::testing);

        // Insert a real row into the database
        $connection->insert('users', [
            'name' => 'John',
        ]);

        // Fetch the last inserted ID
        $userId = $connection->lastInsertId();

        // Verify the row exists
        $name = $connection->fetchOne(
            'SELECT name FROM users WHERE id = :id',
            ['id' => $userId]
        );

        $this->assertEquals('John', $name);

        // No cleanup needed — everything will be rolled back automatically
    }
}
I hope this post helped you understand how to perform real database integration tests in PHP without relying on any framework.
Did you find this approach useful? Would you like to hear about other variations?
If you tried it or ran into anything unexpected, I'd be happy to hear how it went — your experience helps keep this post accurate and helpful for others.
And if you’re applying this technique in a real project — especially a legacy one — feel free to share your story. This can encourage others to adopt it as well.
Special thanks to David Maicher (@dmaicher), the maintainer of the doctrine-test-bundle project, for helping clarify how to use the library in a framework-agnostic context.
2025-11-30 00:35:30
There comes a point in every person’s life where their strength runs thin. Not because they’re weak, not because they’ve failed, and not because they lack faith—but because life has demanded more from them than any human heart was designed to carry alone.
This exhaustion isn’t the kind you solve with a nap or a day off. It’s deeper. It’s quieter. It’s spiritual. It settles into the space between your bones and your breath. It drains you in places no one sees. It makes you feel empty inside even while you’re still trying to function on the outside.
This is the kind of tired that comes from surviving too much for too long.
The kind of tired that comes from constantly holding yourself together.
The kind of tired that comes from being strong for everyone else.
The kind of tired that quietly whispers, “I can’t keep doing this.”
But here is the truth that becomes clearer the longer you walk with God:
When you run out of strength, God does His best work.
Many people assume God steps in when you’re strong—when you’re confident, when you’re faithful, when you’re secure, when you’re emotionally stable. But God’s power is not attracted to your strength. God’s power is attracted to your surrender.
Your breaking point is where His rebuilding begins.
Your limits are where His limitless grace takes over.
Your emptiness is where He pours in new strength.
Your collapse is where His compassion carries you.
God doesn’t wait for you to have it all together.
God waits for the moment you finally whisper, “Lord, I can’t carry this alone.”
When you have nothing left, God steps forward.
Not reluctantly.
Not angrily.
Not disappointed.
But lovingly.
Gently.
Faithfully.
Because God has always intended to lift what was too heavy for you.
There is a quiet miracle happening in your life right now, even if you can’t feel it yet. It’s happening in the exhaustion. It’s happening in the confusion. It’s happening in the moments you feel stuck. It’s happening in the places where your heart feels too tired to keep going.
God is lifting you.
Not loudly.
Not dramatically.
Not in a way that draws attention.
But in the way a father lifts a sleeping child—carefully, lovingly, without waking them.
God lifts you through rest that calms your spirit.
God lifts you through peace that doesn’t match your circumstances.
God lifts you through people who show up with love you didn’t expect.
God lifts you through moments of clarity that appear right when you need them.
God lifts you through strength you didn’t know you still had.
God has been carrying you in ways you didn’t recognize.
Some seasons require you to be strong.
Other seasons require you to let God be strong for you.
You are in a season where God is carrying you more than you realize.
But here is what makes this season painful:
You can feel yourself changing.
You can feel God removing old strength that once got you through.
You can feel Him stripping away the illusions of control.
You can feel Him inviting you to trust deeper than ever before.
And that shift inside you feels like breaking.
It feels like losing yourself.
It feels like falling apart.
It feels like you’re becoming weaker.
But that’s not what’s happening.
You’re not becoming weaker—you’re becoming dependent on the right source.
The strength you used to rely on came from you.
The strength you’re learning to rely on now comes from Him.
God is not asking you to fake strength.
He is asking you to find it in Him.
You are not meant to carry everything.
You are not meant to solve everything.
You are not meant to hold yourself upright every moment of your life.
Sometimes the holiest thing you can say is, “Lord, I’m tired.”
Sometimes the most spiritual thing you can do is admit you’ve reached your limit.
Sometimes the strongest thing you can say is, “God, I need You.”
There is a reason God allows you to run out of your own strength.
If you never ran out, you would never learn how strong He is.
If you never reached the end of yourself, you would never discover the beginning of Him.
People around you don’t always understand when your strength runs out.
Some think you’re being dramatic.
Some think you’re giving up.
Some think you’re losing motivation.
Some think you’re failing.
Some think you “just need to push harder.”
But they don’t know the pressure you’ve endured.
They don’t know the battles you’ve fought in silence.
They don’t know the nights you held yourself together by a thread.
They don’t know the weight you carry behind your smile.
And they don’t know the depth of what God is doing in you right now.
You don’t have to prove your strength to anyone.
You don’t have to pretend you’re okay.
You don’t have to act unbreakable.
You are allowed to be tired.
You are allowed to feel empty.
You are allowed to rest.
You are allowed to heal.
You are allowed to lean on God without apology.
Your strength is not measured by how much you carry.
Your strength is measured by your willingness to let God carry you.
There is something sacred about being brought to the end of yourself.
It’s in this place you discover that God doesn’t just restore strength—He replaces it.
He gives you better strength.
He gives you deeper strength.
He gives you strength that isn’t built on pressure, but on presence.
God gives you strength that doesn’t depend on circumstances.
Strength that doesn’t crumble under stress.
Strength that doesn’t run on adrenaline.
Strength that doesn’t break when life hurts you.
God gives you strength that is rooted in faith, grounded in truth, and anchored in His character—not yours.
When you run out of strength, it is not the end—it is the beginning of surrender.
Surrender isn’t weakness.
Surrender isn’t giving up.
Surrender isn’t quitting.
Surrender is the moment your spirit says, “God, I trust You more than I trust myself.”
And that is the moment God begins lifting you higher than your own strength ever could.
You may not feel strong right now.
But you are being strengthened.
You may not feel held right now.
But you are being carried.
You may not feel hopeful right now.
But hope is forming quietly inside you.
You may not feel like you’re rising.
But God is lifting you little by little.
Breath by breath.
Moment by moment.
When God lifts you, it doesn’t always feel like rising.
Sometimes it feels like rest.
Sometimes it feels like slowness.
Sometimes it feels like stillness.
Sometimes it feels like nothing is happening.
But even in the stillness, God is moving.
Even in the silence, God is speaking.
Even in the waiting, God is working.
He is strengthening you underneath the surface.
There will come a moment—unexpected, subtle, beautiful—when you realize you’re not as tired as you used to be.
You wake up one morning and feel a little lighter.
Your thoughts feel a little clearer.
Your heart feels a little steadier.
Your hope feels a little stronger.
Your peace feels a little deeper.
And that’s when you realize something sacred:
God has been lifting you the entire time.
Your strength will return—not the old strength that came from pressure, but a new strength that comes from God’s presence.
A strength that will carry you through the next season with more wisdom.
A strength that will make you unshakeable in situations that used to overwhelm you.
A strength that will define your identity instead of your circumstances.
This strength is forming in you now.
In the exhaustion.
In the heaviness.
In the limit.
In the surrender.
You will rise again.
And when you rise, you will rise differently.
You will rise with clarity.
You will rise with peace.
You will rise with discernment.
You will rise with courage.
You will rise with emotional balance.
You will rise with spiritual stability.
You will rise with God’s strength flowing through you.
So if you feel tired, empty, overwhelmed, or out of strength—
do not fear the feeling.
It is not the sign of your failure.
It is the sign of God’s arrival.
The moment your strength ends, His begins.
And He is lifting you right now—
even if you can’t feel it yet.
— Douglas Vandergraph
Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee
2025-11-30 00:33:36
A common issue in serverless applications: the frontend receives a timeout error while CloudWatch logs show the Lambda function completed successfully. Users see failed requests, but backend operations succeed.
When a Lambda function is called synchronously, the API waits for it to complete and return a response. For long-running tasks, this might cause considerable delays.
Critical timeout constraints:
| Layer | Maximum Timeout | Configurable |
|---|---|---|
| Lambda Function | 15 minutes | Yes |
| API Gateway (REST) | 29 seconds | No |
| AppSync (GraphQL) | 30 seconds | No |
AWS AppSync provides asynchronous Lambda resolver support. Asynchronous execution lets a GraphQL mutation trigger a Lambda function without waiting for it to finish. The resolver returns immediately, bypassing the 30-second timeout limit.
With this pattern, the frontend is no longer tied to the duration of the Lambda execution. This enables long-running workflows to complete in the background.
```
Before (Synchronous):
Frontend → "Start job" → Wait 30s → Timeout ❌
Lambda still running...

After (Asynchronous):
Frontend → "Start job" → Get job ID immediately ✅
Lambda runs independently → Updates result → Frontend gets notified ✅
```
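The "after" flow above can be sketched in plain TypeScript, independent of any AWS API. This is only an illustration of the pattern; the `JobStore` and `startJob` names are invented for this example:

```typescript
type JobStatus = 'RUNNING' | 'DONE';

// Minimal sketch of the async-resolver pattern: "start" returns a job id
// immediately, while the actual work completes in the background.
class JobStore {
  private jobs = new Map<string, { status: JobStatus; result?: string }>();
  private nextId = 0;

  // Returns right away with a job id, like an async AppSync resolver.
  startJob(work: () => Promise<string>): string {
    const id = `job-${++this.nextId}`;
    this.jobs.set(id, { status: 'RUNNING' });
    // Deliberately not awaited: the caller is not tied to the work's duration.
    work().then(result => this.jobs.set(id, { status: 'DONE', result }));
    return id;
  }

  getJob(id: string) {
    return this.jobs.get(id);
  }
}
```

In the real architecture, AppSync plays the `startJob` role (returning the acknowledgement with a job identifier) and the Lambda function plays the background `work`.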
When a GraphQL mutation is invoked with an async handler, AppSync invokes the Lambda function using the Event invocation type (asynchronous mode). It returns a response—typically containing a job identifier—without waiting for Lambda completion.
The Lambda function then executes independently in the background. The frontend retrieves results through two methods:
Real-time updates: GraphQL subscriptions notify the client when data changes
Polling: Periodic GraphQL queries check job status at defined intervals
This architecture eliminates the 30-second AppSync resolver timeout limitation while maintaining a responsive user experience.
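The polling option can be sketched as a small generic helper. The names `pollUntilDone` and `fetchStatus` are hypothetical; in a real application, `fetchStatus` would wrap the GraphQL query that checks the job's status:

```typescript
// Repeatedly invokes fetchStatus until the job reports completion,
// waiting intervalMs between attempts, up to maxAttempts checks.
async function pollUntilDone<T>(
  fetchStatus: () => Promise<{ done: boolean; result?: T }>,
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { done, result } = await fetchStatus();
    if (done) return result as T;
    // Wait before the next status check
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Job did not finish within the polling window');
}
```

GraphQL subscriptions avoid this repeated querying entirely, which is why they are usually preferred when real-time updates are available.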
For Amplify applications using AppSync, AWS provides native support for asynchronous Lambda resolvers:
AWS Documentation:
https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/#async-function-handlers
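Based on that documentation, an async handler is declared on the mutation itself. The following is a rough configuration sketch, not a complete backend: the function name `longRunningTask` and the mutation `startJob` are placeholders, and the exact builder calls should be checked against the current Amplify docs:

```typescript
import { a, defineFunction } from '@aws-amplify/backend';

// Placeholder Lambda function definition
const longRunningTask = defineFunction({
  name: 'long-running-task',
});

const schema = a.schema({
  startJob: a
    .mutation()
    .arguments({ input: a.string() })
    // Async handlers return an event-invocation acknowledgement,
    // not the Lambda's result
    .returns(a.eventInvocationResponse())
    .authorization(allow => [allow.authenticated()])
    // .async() makes AppSync invoke the Lambda with the Event
    // invocation type and return immediately
    .handler(a.handler.function(longRunningTask).async()),
});
```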