
How Analysts Turn Messy Data, DAX, and Dashboards into Action with Power BI

2026-02-09 02:58:55

Data is everywhere. Insight is not.
Most organizations don’t struggle with having data—they struggle with turning scattered, messy, and often contradictory data into decisions people actually trust and act on. This is where analysts earn their keep, and where Power BI quietly becomes one of the most powerful tools in the modern analytics stack.
This article walks through how analysts use Power BI to translate raw data, complex DAX, and dashboards into real business action.

Messy Data Is the Starting Point, Not the Problem
Let’s be honest: clean data is the exception.
Analysts usually inherit:
• Excel files with inconsistent columns
• Databases designed for transactions, not analytics
• Multiple systems that disagree on basic definitions
• Missing values, duplicates, and broken dates
Power BI doesn’t magically fix this—but it embraces it.

Power Query: Where the Real Work Begins

Power Query is often where analysts spend the most time. This is where chaos turns into structure.
Common steps include:
• Standardizing column names and data types
• Cleaning nulls, duplicates, and formatting issues
• Merging data from multiple sources
• Creating derived fields like fiscal periods or status flags
Every transformation is recorded, repeatable, and refreshable. That alone is a massive upgrade from one-off Excel cleanups.
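
To make that concrete, here is a minimal Power Query M sketch of what such a recorded script can look like. The file path, column names, and threshold are assumptions for illustration, not a specific dataset.

let
    // Load a hypothetical sales export (path and columns are assumptions)
    Source = Csv.Document(File.Contents("C:\exports\sales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),

    // Standardize column names and data types
    Renamed = Table.RenameColumns(PromotedHeaders, {{"cust_id", "Customer ID"}, {"order_dt", "Order Date"}}),
    Typed = Table.TransformColumnTypes(Renamed, {{"Order Date", type date}, {"Amount", type number}}),

    // Clean nulls and duplicates
    NoNullAmounts = Table.ReplaceValue(Typed, null, 0, Replacer.ReplaceValue, {"Amount"}),
    Deduped = Table.Distinct(NoNullAmounts, {"Order ID"}),

    // Derive a simple status flag
    Flagged = Table.AddColumn(Deduped, "High Value", each [Amount] >= 1000, type logical)
in
    Flagged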

The Data Model Is the Real Product
Dashboards get the attention, but the data model does the heavy lifting.
Great analysts think less about charts and more about how the business actually works:
• What is a customer?
• How should revenue be aggregated?
• Which dates matter: order date, ship date, or invoice date?

Modeling for Humans, Not Just Machines

Power BI models are typically built using:
• Fact tables for transactions
• Dimension tables for context (dates, products, customers)
• Clear relationships with predictable filtering behavior
A strong model reduces DAX complexity, improves performance, and—most importantly—ensures everyone is answering the same question with the same logic.
This model becomes the organization’s analytical language.

DAX: Where Questions Become Answers
DAX is often described as “hard,” but in reality, it’s just precise.
Executives don’t ask for sums and averages. They ask things like:
• “Are we performing better than last quarter?”
• “Which regions are underperforming right now?”
• “What happens if we exclude one-time events?”

Why DAX Matters

DAX allows analysts to encode business logic once and reuse it everywhere:
• Time intelligence (YTD, rolling 12 months, comparisons)
• Ratios and KPIs
• Conditional logic for thresholds and alerts
The key skill isn’t memorizing functions—it’s understanding evaluation context. Knowing how filters flow through a model is what makes measures accurate, fast, and reliable.
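
As a quick illustration of filters flowing through a model, here is a small DAX sketch (the table and column names are assumptions) that compares each region’s sales to the total across all regions:

Total Sales = SUM ( Sales[Amount] )

-- Remove any Region filter so the denominator is always the all-region total
All Region Sales = CALCULATE ( [Total Sales], ALL ( 'Region' ) )

-- In a visual sliced by Region, each row shows that region's share of the total
Region Share % = DIVIDE ( [Total Sales], [All Region Sales] )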

Dashboards Are Interfaces for Decisions
A dashboard is not a data dump. It’s a decision interface.
The best Power BI dashboards answer three questions:

  1. What’s happening?
  2. Why is it happening?
  3. What should I do next?

Designing for Action

Effective dashboards:
• Surface KPIs first, details second
• Use trends and comparisons instead of static totals
• Highlight exceptions, not just averages
Interactivity matters. Drill-downs, slicers, and tooltips let users explore data without calling the analyst every time they have a follow-up question. That’s how analytics scales.

Turning Insight into Action

Insight without action is just interesting trivia.
Analysts intentionally design reports to support:
• Operational decisions (daily, tactical)
• Management reviews (monthly, performance-focused)
• Strategic planning (longer-term trends and scenarios)
Clear targets, variance indicators, and contextual benchmarks help users quickly see where attention is needed.

Trust Is the Final Step

Publishing through Power BI Service, applying row-level security, and certifying datasets builds trust. When users trust the numbers, they stop debating data and start debating decisions. That’s the real win.

Final Thoughts

Power BI is not “just a visualization tool.” It’s where data preparation, modeling, analytics, and storytelling come together. Analysts sit at the intersection of all four:
• They clean messy data
• Model the business correctly
• Translate questions into DAX
• Design dashboards that lead to action
When done well, Power BI doesn’t just report on the business—it changes how the business operates.

From Raw Data to Real Action: The Analyst's Journey as a Data Translator in Power BI

2026-02-09 02:54:55

Power BI

In today's organizations, a critical gap persists. On one side, executives and managers demand clear, immediate answers to urgent questions: "Are we on track to hit our quarterly targets?" "Which product line is underperforming and why?" On the other side lies the reality of modern data: a sprawling, chaotic landscape of spreadsheets, databases, and legacy systems—each with its own inconsistencies, errors, and obscure logic.

Bridging this gap is the fundamental role of the data analyst. But to call them mere "number crunchers" is a profound understatement. A more apt description is that of a translator. An analyst's core skill is not just proficiency with tools, but the ability to interpret the raw, technical "language" of disparate systems and translate it into the clear, actionable "language" of business decisions.

This translation is a disciplined, three-act process. It begins with taming chaos into a trusted foundation, moves to encoding complex business logic into dynamic calculations, and culminates in designing a compelling narrative that drives action.

Deciphering the Chaos – Translating Raw Data into a Trusted Foundation

The analyst's first task is to confront the "source text": the raw data. This is rarely clean. It's more likely a collection of CSV files with different date formats, a Salesforce report with merged header cells, and a SQL table where the "Region" column suddenly changed from "EMEA" to "Europe & Middle East."

Data Cleaning

The Translator's First Tool: Power Query

This is where Power Query, Power BI's data transformation engine, moves from being a feature to being a philosophy. Its purpose is not to apply a one-time fix, but to build a single, reproducible source of truth. Every step you record—removing a column, splitting a field, merging a table—is saved as a recipe. The next time data refreshes, the recipe runs automatically, ensuring consistency and freeing you from manual, error-prone cleaning.

Power Query Interface

Here’s how a translator thinks within Power Query:

Filter at the Source, Not the End: A common rookie mistake is to load 10 years of historical data only to analyze the last quarter. A skilled translator uses Power Query's "Filter Rows" step early in the process to load only the necessary data. This dramatically improves performance and model refresh times.

Filtering Pivot

Pivot and Unpivot Thoughtfully: Data often arrives in a "wide" format convenient for human reading but terrible for analysis. A sales report might have columns for Jan_Sales, Feb_Sales, Mar_Sales. A translator "unpivots" these into two columns: Month and Sales. This long format is what Power BI's relationships and calculations need to work efficiently.

Pivot & unpivoting
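
A hedged Power Query M sketch of that unpivot step (the step names and column names are illustrative assumptions):

// Assume Source is a wide table with columns: Product, Jan_Sales, Feb_Sales, Mar_Sales
Unpivoted = Table.UnpivotOtherColumns(Source, {"Product"}, "Month", "Sales")
// Result: a long table with columns Product, Month, Sales (one row per product/month pair)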

Leverage Custom Columns for Logic: Need to categorize customers based on purchase frequency or flag orders that exceed a certain threshold? Instead of doing this later in DAX (which can hurt performance), create a Conditional Column in Power Query during the data prep phase. This logic becomes part of your stable data foundation.

Custom Column
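
For example, a simple conditional column added during the prep phase might look like this (the threshold, step names, and column names are assumptions):

// Flag large orders while the data is still being shaped in Power Query
AddedFlag = Table.AddColumn(Cleaned, "Order Size", each if [OrderAmount] >= 10000 then "Large" else "Standard", type text)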

The output is no longer just "data." It is a structured, reliable, and analysis-ready dataset. The chaos has been translated into order, setting the stage for the next phase: adding intelligence.

Embedding Intelligence – Translating Business Questions into DAX

Data Decision

With clean tables related in a star schema, the analyst now faces the core translation challenge: turning stakeholder questions into calculated answers. This is the realm of Data Analysis Expressions (DAX), the formula language of Power BI.

DAX is more than a collection of functions; it is the syntax for expressing business rules. A question like "What were our sales this month compared to the same month last year, but only for our premium product segment?" requires a precise translation.

Dax

Moving Beyond Basic Aggregation: The Art of Context
The power of DAX lies in its understanding of context. A simple measure Total Sales = SUM(Sales[Amount]) behaves differently depending on where it's used. Put it in a card visual, it shows the grand total. Put it in a table sliced by Region, it automatically shows the total per region. This is filter context in action.

Sum Function

The translator uses advanced functions to manipulate this context and answer complex questions:

Time Intelligence for Trend Translation: Questions about growth and trends are fundamental. DAX provides dedicated time intelligence functions to translate them.

Sales PY = CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))
Sales Growth % = DIVIDE([Total Sales] - [Sales PY], [Sales PY])

This code seamlessly calculates prior year sales and the percentage growth, regardless of whether the user is looking at a day, month, or quarter.

Sum Function

The CALCULATE Function: The Master Translator: CALCULATE is the most important function in DAX. It modifies the context of a calculation. It's how you answer "what if" questions.

Sales for Premium Products = CALCULATE([Total Sales], 'Product'[Segment] = "Premium")

This measure translates the question "What are sales, but only for premium products?" into a dynamic calculation that respects all other filters on the report.

Formulas

Writing for Readability: The VAR Keyword: Good translators make complex logic understandable. In DAX, the VAR (variable) statement is essential for this.

Profit Margin % =
VAR TotalProfit = SUM(Sales[Profit])
VAR TotalRevenue = SUM(Sales[Revenue])
RETURN DIVIDE(TotalProfit, TotalRevenue, 0)

This breaks the calculation into logical steps, making it easier to debug, modify, and explain to others.

Var Table

The output is a suite of dynamic measures. The dataset is now imbued with business logic, capable of answering nuanced questions interactively. The data is intelligent, but it is not yet a story.

Narrating for Action – Translating Insights into Compelling Dashboards

Dashboard

The final and most critical translation is from insight to action. A dashboard is not a data dump; it is a visual argument and a guidance system. Its success is measured not by how many charts it contains, but by how quickly it leads a user to a confident decision.

Design Principles for the Decision-Maker

A translator designs with empathy for the audience:

The Five-Second Rule: The primary objective of the entire page should be understood within five seconds. This is achieved through a clear visual hierarchy: a prominent KPI header at the top, supporting trend charts in the middle, and detailed breakdowns at the bottom.

Guided Interactivity, Not Just Features: Slicers, cross-filtering, and drill-throughs are powerful, but they must serve the narrative. A well-designed dashboard uses bookmarks to create "guided analytical stories"—clicking a button might reset filters, highlight a key trend, and bring a specific detail page to the forefront, leading the user down a pre-defined analytical path.

Leverage the Full Ecosystem: Power BI is more than a canvas. The translator uses Data Alerts to proactively notify stakeholders when a KPI crosses a threshold, turning a passive report into an active monitoring tool. They enable the Q&A feature, allowing users to ask questions in natural language ("show me sales by region last quarter"), fostering a conversational relationship with the data.

The Virtuous Cycle of Informed Action

The journey of the data translator in Power BI is a continuous, virtuous cycle: Chaos → Structure → Logic → Narrative → Action.

Each decision made from a well-crafted dashboard generates new data and new questions, which flow back to the analyst. This starts the translation process anew, creating a resilient loop of increasingly informed decision-making.

The true power of an analyst, therefore, lies not in memorizing every DAX function or mastering every visualization, but in architecting and sustaining this cycle. It is the deep understanding that their role is to be the essential, human link between the raw potential of data and the tangible progress of the business. By embracing the discipline of translation, they move from being reporters of the past to becoming indispensable guides to the future.

Data Analysis Step by Step

1st Read: Git & Github Beginner's guide

If you’re also learning version control with Git, you can read my Git & GitHub beginner’s guide here:
https://dev.to/charles_ndungu/git-for-data-scientists-data-engineers-my-very-first-beginner-guide-git-bash-github-3952

2nd Read: Mastering Excel

After mastering Git basics, you can learn how to analyze data using Microsoft Excel here:
https://dev.to/charles_ndungu/ms-excel-for-data-analytics-a-friendly-practical-guide-for-beginners-hjn

3rd Read: Data Modelling & Schemas

This article dives into data modelling in Power BI, covering star and snowflake schemas, fact and dimension tables, relationships, and why good modelling is essential for accurate insights and fast reports.
https://dev.to/charles_ndungu/the-backbone-of-power-bi-a-deep-dive-into-data-modeling-schemas-1o1l

4th Read: Data Analysis Steps in Power BI

This article reveals how Power BI analysts act as data translators, bridging the gap between messy data and clear business action. We break down their essential three-step process: cleaning raw information, encoding logic with DAX, and designing dashboards that drive real decisions.
https://dev.to/charles_ndungu/from-raw-data-to-real-action-the-analysts-journey-as-a-data-translator-in-power-bi-2gl6

Repo

https://github.com/Charles-Ndungu/excel-for-data-analytics

How I would learn programming in 2026 if I had to start from zero

2026-02-09 02:51:45

If I had to start over today — in 2026 — with no experience in software development, I wouldn’t do it the same way I did years ago.

Not because the fundamentals have changed.
But because the environment has.

There’s more content, more tools, more AI, more noise, and more pressure to learn fast.
And paradoxically, that makes it easier to get lost.

So this is the exact approach I would follow if I were beginning again from scratch, knowing what I know now.

No hype. No shortcuts. Just what actually works.

Step 1: I would stop trying to learn everything

One of the biggest mistakes beginners make is trying to learn too much at once:

  • Multiple languages
  • Multiple frameworks
  • Frontend + backend + cloud + AI
  • Several courses at the same time

It feels productive.
But it creates confusion and shallow understanding.

If I were starting today, I would pick:

  • One language
  • One clear path
  • One main resource

And stay there long enough to build real depth.

Not forever.
But long enough to understand how programming actually works.

Step 2: I would focus on fundamentals first

This is the part most people try to skip.

Frameworks are exciting.
Libraries are powerful.
But fundamentals are what give you independence.

I would focus on:

  • Logic and problem-solving
  • Variables, conditions, loops
  • Functions
  • Basic data structures
  • Reading and understanding code

Not because it’s glamorous.
But because this is what allows you to learn anything else later without starting over every time.

Step 3: I would build small projects early

Tutorials are helpful in the beginning, but they create a false sense of progress.

You feel like you’re learning because everything works while you follow along.

The real learning starts when you try to build something on your own and get stuck.

So I would start creating small projects as early as possible:

  • A simple calculator
  • A to-do list
  • A basic API
  • A small automation script

Nothing impressive.
Just real.

Because projects force you to:

  • Make decisions
  • Face errors
  • Search for answers
  • Think

And thinking is the real skill you’re building.

Step 4: I would use AI — but carefully

This is the biggest difference between learning years ago and learning today.

If I were starting in 2026, AI would be part of my daily learning process.

But not as a shortcut.

I wouldn’t use it to generate entire solutions and move on.
I’d use it to understand.

For example:

  • Asking why something works a certain way
  • Requesting simpler explanations
  • Debugging errors step by step
  • Breaking problems into smaller parts

AI can act like a patient mentor that never gets tired of your questions.

But it only helps if you stay mentally involved in the process.

If you just copy and paste solutions, you’re training dependency — not skill.

Step 5: I would accept confusion as part of the journey

This is the part nobody tells you clearly enough.

You will feel lost sometimes.
You will feel slow.
You will forget things.
You will compare yourself to others.

And that’s normal.

The early phase of learning programming is not about clarity.
It’s about building tolerance for not knowing yet.

Every developer goes through this stage.
The difference is that the ones who grow are the ones who keep going even when progress feels invisible.

Step 6: I would measure progress differently

Instead of asking:

  • “How many courses have I finished?”
  • “How many languages do I know?”

I would ask:

  • Can I solve small problems alone?
  • Can I read code and understand what’s happening?
  • Can I debug simple errors without panic?

That’s real progress.

And it compounds over time.

Step 7: I would stay consistent, not intense

You don’t need 8 hours a day to become a developer.

What you need is consistency.

Even 1–2 focused hours a day, done regularly, builds more skill than occasional bursts of motivation followed by long breaks.

Programming is less about talent and more about repetition with awareness.

The biggest mindset shift

If I could summarize everything in one idea, it would be this:

Learning programming is not about memorizing syntax.
It’s about becoming someone who knows how to figure things out.

Languages change.
Tools change.
AI evolves.

But the ability to think, break problems down, and keep learning stays valuable forever.

Next article

One of the most common doubts I’ve seen from people starting this journey is:

“Do I need to be good at math to become a developer?”

In the next post, I’ll talk honestly about that — because this question stops a lot of people before they even begin.

Turning Data into Insight: An Analyst’s Guide to Power BI

2026-02-09 02:44:08

Introduction: The reality of messy business data

In most organizations, data rarely arrives in a clean, analysis-ready format. Analysts typically receive information from multiple sources: spreadsheets maintained by business teams, exports from transactional systems, cloud applications, and enterprise platforms such as ERPs or CRMs. These datasets often contain inconsistent formats, missing values, duplicate records, and unclear naming conventions.

Working directly with such data leads to unreliable metrics, incorrect aggregations, and ultimately poor business decisions. This is where Power BI plays a critical role. Power BI is not just a visualization tool; it is an analytical platform that allows analysts to clean, model, and interpret data before presenting it in a form that decision-makers can trust.

From raw data to business action: The analyst workflow

A typical analytical workflow in Power BI follows a logical sequence:

  1. Load raw data from multiple sources, e.g., imports from Excel, databases, or online services.
  2. Clean and transform the data using Power Query.
  3. Model the data into a meaningful structure.
  4. Create business logic using DAX.
  5. Design dashboards that communicate insight.
  6. Enable decisions and actions by stakeholders.

Each step builds on the previous one. If any stage is poorly executed, the final insight becomes misleading, regardless of how attractive the dashboard looks.

Cleaning and transforming data with Power Query

Data cleaning is the foundation of all reliable analytics. Common data quality issues include:

  • Columns stored in the wrong data type.
  • Missing or null values.
  • Duplicate customer or transaction records.
  • Inconsistent naming and coding systems.

These issues directly affect calculations. For example, a null freight value treated as blank instead of zero will distort average shipping costs. Duplicate customer records inflate revenue totals. Incorrect data types prevent time-based analysis entirely.

Power Query provides a transformation layer where analysts can reshape data without altering the original source. This ensures reproducibility and auditability.

Key Transformation Principles

There are several key principles that should guide an analyst in their approach to data transformation:

1. Remove what is not needed

Unnecessary columns increase model size, memory usage, and cognitive complexity. Every column should justify its existence in a business question.

2. Standardize naming

Column and table names should reflect business language, not system codes.
For example:

  • Cust_ID → Customer ID
  • vSalesTbl → Sales

This improves both usability and long-term maintainability.

3. Handle missing and invalid values

Nulls, errors, and placeholders must be explicitly addressed. Analysts must decide whether missing values represent:

  • Zero
  • Unknown
  • Not applicable

Each choice has analytical consequences.

4. Remove duplicates strategically

Duplicates should be removed only when they represent the same real-world entity. Otherwise, analysts risk deleting legitimate records.

Building meaningful data models

Most analytical errors in Power BI do not come from DAX formulas or charts. They come from poor data models.

A strong model reflects how the business actually operates. This typically follows a star schema:

  • Fact tables: transactions (Sales, Orders, Payments)
  • Dimension tables: descriptive attributes (Date, Product, Customer, Region)

This structure ensures:

  • Correct aggregations.
  • Predictable filter behavior.
  • High performance.

Without proper modeling, even simple metrics like “Total Sales by Region” can produce incorrect results due to ambiguous relationships or double counting.

Creating business logic with DAX

DAX (Data Analysis Expressions) is a library of functions and operators that can be combined to build formulas and expressions in Power BI, Analysis Services, and Power Pivot in Excel data models. It enables dynamic, context-aware analysis that goes beyond traditional spreadsheet formulas.

Examples of business logic encoded in DAX:

  • What counts as “Revenue”?
  • How is “Customer Retention” defined?
  • What is the official “Profit Margin” formula?

These definitions must be centralized and reusable. Measures become the organization’s single source of analytical truth.

DAX uses a formula syntax similar to Excel but extends it with advanced functions designed specifically for tabular data models in Power BI. It allows users to create measures, calculated columns and calculated tables to perform dynamic and context-aware calculations.

Measures vs Calculated Columns

  • Calculated columns: columns you add to an existing table (in the model designer) whose values are defined by a DAX formula. They are evaluated row by row and stored in memory.
  • Measures: evaluated dynamically at query time, so results change with the report context.

Creating Measures for Advanced Calculations

  • Measures are a core component of DAX used for calculations on aggregated data.
  • They are evaluated at query time not stored in the data model
  • Measures respond dynamically to filters, slicers and report context
  • Common aggregation functions used in measures include SUM, AVERAGE, and COUNT
  • DAX supports both implicit and explicit measures
  • Using correct data types is essential for accurate measure calculations

For most analytical metrics, measures are preferred, because they respond to filters, slicers, and user interactions.
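
To make the distinction concrete, here is a small DAX sketch (the table and column names are assumptions): the calculated column is computed and stored when the model refreshes, while the measures are evaluated at query time against whatever filters the report applies.

-- Calculated column: evaluated in row context and stored in the Sales table
Line Profit = Sales[Revenue] - Sales[Cost]

-- Measures: evaluated in filter context at query time, so they respond to slicers and visuals
Total Profit = SUM ( Sales[Line Profit] )
Profit Margin % = DIVIDE ( [Total Profit], SUM ( Sales[Revenue] ) )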

Understanding Context: The Core of Correct Analytics

Context is one of the most important concepts in DAX because it determines how and where a formula is evaluated. It is what makes DAX calculations dynamic: the same formula can return different results depending on the row, cell, or filters applied in a report.

Without understanding context, it becomes difficult to build accurate measures, optimize performance, or troubleshoot unexpected results.

There are three main types of context in DAX:

Row Context

Refers to the current row being evaluated. It is most commonly seen in calculated columns, where the formula is applied row by row.

Filter Context

It is the set of filters applied to the data. These filters can come from slicers and visuals in the report, or they can be explicitly defined inside a DAX formula.

Query Context

Created by the layout of the report itself.

If analysts misunderstand context, they produce:

  • Wrong totals.
  • Misleading KPIs.
  • Inconsistent executive reports.

In summary, context is the foundation of how DAX works. It controls what data a formula can “see” and therefore directly affects the result of every calculation. Mastering row, query, and filter context is essential for building reliable, high-performing, and truly dynamic analytical models in Power BI and other tabular environments.

Designing dashboards that communicate insight

Designing interactive dashboards helps businesses make data-driven decisions. A dashboard is not a collection of charts. It is a decision interface.

It is essential to design professional reports that focus on optimizing layouts for different audiences, and leveraging Power BI’s interactive features.

Good dashboards:

  • Highlight trends and deviations.
  • Compare performance against targets.
  • Expose anomalies and risks.
  • Support follow-up questions.

Bad dashboards:

  • Show too many metrics.
  • Focus on visuals over meaning.
  • Require explanation to interpret.

Sample Dashboard Data

Turning Dashboards into Business Decisions

This is the most important step, and the most neglected.

Dashboards should answer questions like:

  • Which regions are underperforming?
  • Which products drive the most margin?
  • Where is customer churn increasing?
  • What happens if we change pricing?

Real business actions include:

  • Reallocating marketing budgets.
  • Optimizing inventory levels.
  • Identifying operational bottlenecks.
  • Redesigning sales strategies.

If no decision changes because of a dashboard, the analysis has failed to capture the key business indicators.

Common pitfalls that undermine analytical value

Even experienced analysts fall into these traps:

  • Treating Power BI as a visualization tool instead of a modeling tool.
  • Writing complex DAX on top of poor data models.
  • Using calculated columns instead of measures.
  • Ignoring filter propagation and relationship direction.
  • Optimizing visuals before validating metrics.

These issues lead to highly polished dashboards built on fundamentally wrong numbers, an outcome no analytics team wants.

Conclusion

Power BI provides an integrated analytical environment where data preparation, semantic modeling, calculation logic, and visualization are combined into a single workflow.

The analytical value of the platform does not emerge from individual components such as Power Query, DAX, or reports in isolation, but from how these components are systematically designed and aligned with business requirements.

Effective use of Power BI requires analysts to impose structure on raw data, define consistent relationships, implement reusable calculation logic through measures and ensure that visual outputs reflect correct filter and evaluation contexts.

When these layers are properly engineered, Power BI supports reliable aggregation, scalable analytical models, and consistent interpretation of metrics across the organization, enabling stakeholders to base operational and strategic decisions on a shared and technically sound analytical foundation.

I Spent 48 Hours Building a Next.js Boilerplate So You Can Ship in 30 Minutes

2026-02-09 02:42:31

The Problem That Kept Me Up at Night

You know that feeling when you start a new Next.js project and spend the first week just setting things up? Authentication, internationalization, role management, SEO configuration... By the time you're done with the boilerplate, you've lost all that initial excitement.

What you get in one line: Type-safe i18n with RTL → NextAuth + Google OAuth → RBAC with parallel routes → SEO (sitemap, robots, manifest) → Dark mode → ESLint + Prettier → Vitest + Playwright → shadcn/ui → One config file. Production-ready.

I've been there. Too many times.

After launching my third SaaS project this year, I realized I was copy-pasting the same setup code over and over. So I decided to do something about it.

What I Built

Meet my production-ready Next.js boilerplate - not just another "hello world" starter template, but a fully-featured foundation that handles all the boring stuff so you can focus on building your actual product.

🔗 Live Demo | 📦 GitHub Repo | 🚀 Use this template | Deploy on Vercel

Boilerplate Screenshot

Why This Boilerplate is Different

🌍 Real Internationalization (Not Just a Dictionary)

I'm talking about type-safe translations that catch errors at compile time. No more broken translations in production because you typo'd a key.

Here's what makes it special:

  • Three languages out of the box: English, বাংলা (Bengali), and العربية (Arabic)
  • RTL support that actually works: Arabic layouts automatically flip to right-to-left
  • Dead-simple language switching: One click, zero page reload
  • Type-safe with TypeScript: Your IDE will tell you when a translation is missing

// TypeScript knows all your translation keys
t('navigation.home') // ✅ Works
t('navigation.homer') // ❌ Compile error - typo caught!
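
I'm not reproducing the boilerplate's exact implementation here, but as a rough sketch, dotted-path keys can be derived from the English JSON with a recursive template-literal type, which is what powers a compile-time check like the one above (assumes resolveJsonModule is enabled):

import en from '@/locales/en.json'

// Build "section.key" style paths from the JSON shape (simplified sketch)
type Paths<T> = T extends object
  ? { [K in keyof T & string]: T[K] extends object ? `${K}.${Paths<T[K]>}` : K }[keyof T & string]
  : never

type TranslationKey = Paths<typeof en> // e.g. "navigation.home"

declare function t(key: TranslationKey): string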

🔐 Role-Based Access Control That Scales

Most tutorials show you basic auth and call it a day. But what about when you need different dashboards for users and admins? Or when you want to add a "Moderator" role later?

I used Next.js 15's parallel routes to make this painless:

app/
  (protected)/
    @admin/      # Admin-only views
      dashboard/
    @user/       # User views
      dashboard/
    layout.tsx   # Smart routing logic

The layout automatically shows the right dashboard based on the user's role. No messy conditionals scattered everywhere. Want to add a new role? Just create a new parallel route folder. That's it.
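
A hedged sketch of what that layout logic can look like with parallel route slots; the session helper is an assumption for illustration, not the repo's exact export:

// app/(protected)/layout.tsx: each @folder becomes a prop ("slot") on the layout
import type { ReactNode } from 'react'
import { getCurrentUser } from '@/lib/auth/session' // assumed helper

export default async function ProtectedLayout({
  admin,
  user,
}: {
  admin: ReactNode
  user: ReactNode
}) {
  const currentUser = await getCurrentUser()
  // Render the slot that matches the signed-in user's role
  return currentUser?.role === 'admin' ? admin : user
}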

🔑 Authentication (NextAuth.js + Google OAuth)

Auth is built in with NextAuth.js. You get:

  • Google OAuth – enable by setting GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, and NEXT_PUBLIC_GOOGLE_AUTH_ENABLED=true in .env
  • Custom login page at /auth/login with optional "Sign in with Google"
  • Admin role by email – set [email protected] (comma-separated); those Google accounts get the admin role automatically
  • JWT session with role and user id; redirect to /dashboard after sign-in

Copy .env.example to .env, add your secrets, and you're done.

🎨 A Design System That Doesn't Suck

I'm using shadcn/ui because:

  • Components are copy-paste ready (no bloated dependencies)
  • Full TypeScript support
  • Accessible by default (WCAG compliant)
  • Easy to customize without fighting CSS

Plus next-themes for light/dark mode with system preference detection and a manual toggle.
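
Under the hood that is roughly a ThemeProvider wrapper around the app; a minimal sketch (file name and placement are assumptions, not the boilerplate's exact code):

// app/providers.tsx: next-themes toggles a class on <html> for Tailwind's dark: variants
'use client'

import { ThemeProvider } from 'next-themes'
import type { ReactNode } from 'react'

export function Providers({ children }: { children: ReactNode }) {
  return (
    <ThemeProvider attribute="class" defaultTheme="system" enableSystem>
      {children}
    </ThemeProvider>
  )
}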

🔧 ESLint That Actually Helps (Not Annoys)

Let's be honest - most ESLint configs are either too strict or too loose. I spent time configuring rules that:

  • Catch real bugs (unused variables, missing dependencies, potential null references)
  • Enforce consistency (import order, naming conventions, formatting)
  • Don't get in your way (no annoying warnings for things that don't matter)
  • Work with Next.js 15 (proper App Router support, server component rules)

The config includes:

  • eslint-config-next - Official Next.js rules
  • TypeScript-specific linting
  • Import sorting and organization
  • Best practices for React hooks
  • Accessibility checks (a11y)

Prettier is wired up too (Tailwind plugin, format on save in .vscode/settings.json). Run npm run lint and npm run prettier:fix for consistent, clean code.

📊 SEO Configuration That's Actually Usable

Instead of hardcoding metadata everywhere, I created a single JSON configuration file that handles:

  • Open Graph tags
  • Twitter cards
  • Structured data (JSON-LD)
  • Multi-language meta tags
  • Canonical URLs
  • Dynamic sitemap generation

Just edit one file:

{
  "appName": "Your App",
  "title": "Your Title",
  "description": "Your Description",
  "domain": "https://yoursite.com",
  "keywords": ["your", "keywords"],
  "social": {
    "twitter": "@yourhandle"
  }
}

Done. SEO handled. The same config drives sitemap (sitemap.ts), robots.txt (robots.ts), and manifest (manifest.ts).

🧪 Testing & CI

  • Unit and component tests: Vitest + React Testing Library. Run npm run test or npm run test:watch; npm run test:coverage for coverage.
  • E2E: Playwright in e2e/. Run npm run e2e (dev server starts automatically); npm run e2e:ui for the UI. Use npm run e2e:webkit for WebKit-only (e.g. to save disk).
  • CI: GitHub Actions – .github/workflows/check.yml runs lint, Prettier, tests, and build on push/PR; .github/workflows/playwright.yml runs E2E.

🏥 Health Check

GET /api/health returns { status: "ok" } for load balancers and Kubernetes probes.
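
In the App Router that is a one-file route handler; a minimal sketch of such a handler:

// app/api/health/route.ts
export async function GET() {
  return Response.json({ status: 'ok' })
}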

How to Get Started (The Real Way)

Step 1: Clone and Install

# Grab the code
git clone https://github.com/salmanshahriar/nextjs-boilerplate-production-ready.git
cd nextjs-boilerplate-production-ready

# Install dependencies (use whatever you prefer)
npm install
# or bun install / yarn install / pnpm install

Step 2: Configure Your Project

This is where most boilerplates leave you hanging. Not this one.

Edit lib/config/app-main-meta-data.json:

{
  "appName": "My Awesome SaaS",
  "title": "Revolutionary Product That Does X",
  "description": "We help Y achieve Z",
  "domain": "https://myawesomesaas.com",

  "organization": {
    "name": "My Company",
    "email": "[email protected]"
  },

  "social": {
    "twitter": "@myhandle",
    "github": "https://github.com/myhandle"
  }
}

That's your entire brand configuration. One file.

Step 3: Customize Your Languages (Optional)

Want to add Spanish? Here's how:

  1. Create locales/es.json with the same structure as locales/en.json:
{
  "common": { "appName": "Mi App", ... },
  "navigation": {
    "home": "Inicio",
    "about": "Acerca de"
  }
}
  2. Update lib/config/app-main-meta-data.json:
{
  "languages": {
    "supported": ["en", "bn", "ar", "es"],
    "locales": {
      "es": {
        "code": "es",
        "name": "Spanish",
        "nativeName": "Español",
        "locale": "es_ES",
        "direction": "ltr"
      }
    }
  }
}
  3. In lib/i18n/get-translations.ts, import es.json and add es to the translations object. If you use strict translation keys, add the new locale to the TranslationKeys union in lib/i18n/types.ts.

Done. Your app now speaks Spanish.

Step 4: Set Up Your Roles

The boilerplate comes with User and Admin roles. To add more:

  1. Create a new parallel route folder:
mkdir -p app/(protected)/@moderator/dashboard
  2. Add your pages inside that folder

  3. Update app/(protected)/layout.tsx to handle the new role:

if (currentUser?.role === 'moderator') return moderator

That's genuinely all you need to do.

Step 5: Run It

npm run dev

Open http://localhost:3000 and see your fully-configured app running.

Available Scripts

Command – Description

npm run dev – Start development server
npm run build – Production build
npm run start – Start production server
npm run lint – Run ESLint
npm run lint:fix – Fix ESLint errors
npm run test – Unit tests (Vitest)
npm run test:watch – Unit tests in watch mode
npm run test:coverage – Tests with coverage
npm run e2e – Playwright E2E tests
npm run e2e:ui – Playwright with UI
npm run e2e:webkit – E2E in WebKit only
npm run prettier – Check formatting
npm run prettier:fix – Fix formatting

Step 6: Environment (First-Time Setup)

Copy .env.example to .env. Set NEXT_PUBLIC_APP_URL if you need to override the site URL (e.g. in production). For Google sign-in: set NEXTAUTH_URL, NEXTAUTH_SECRET, GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, then NEXT_PUBLIC_GOOGLE_AUTH_ENABLED=true. Optionally set [email protected] so those emails get the admin role.

Prerequisites

  • Node.js 18.17 or later
  • npm, yarn, pnpm, or bun

The Project Structure (Explained for Humans)

app/
  (protected)/           # Routes behind auth
    @admin/             # Admin-only pages
    @user/              # User pages
    layout.tsx          # Role-based routing

  api/                  # API routes
    auth/[...nextauth]/ # NextAuth handler
    health/             # GET /api/health → { status: "ok" }
  auth/login/           # Login page
  unauthorized/         # 403-style page

  layout.tsx            # Root layout (theme, i18n)
  page.tsx              # Landing page
  not-found.tsx         # 404
  manifest.ts           # PWA manifest from config
  robots.ts             # Robots.txt from config
  sitemap.ts            # Dynamic sitemap from config

components/
  ui/                   # shadcn/ui
  layout/               # Header, sidebar, theme toggle
  language-switcher.tsx

locales/
  en.json, bn.json, ar.json

lib/
  auth/                 # NextAuth options, session, auth context
  config/
    app-main-meta-data.json
    site.ts             # baseUrl, supportedLocales
  i18n/
    get-translations.ts
    language-context.tsx
    use-translations.ts
    types.ts
  utils.ts

e2e/                    # Playwright E2E tests
.github/workflows/      # CI: check.yml, playwright.yml

What You Get Out of the Box

Next.js 15 with App Router and Server Components

TypeScript (strict mode)

Tailwind CSS

ESLint and Prettier (Next.js, TypeScript, a11y, format on save in .vscode)

NextAuth.js with optional Google OAuth and admin-by-email

i18n with type safety and RTL (en, bn, ar)

RBAC with parallel routes (User / Admin)

SEO from one JSON config (metadata, sitemap, robots, manifest)

next-themes for dark mode (system + manual toggle)

shadcn/ui (accessible, customizable)

Vitest + React Testing Library for unit/component tests

Playwright for E2E in e2e/

GitHub Actions for lint, format, test, build and E2E

Health check at GET /api/health

Vercel-ready (one-click deploy from the repo)

Real Talk: When Should You Use This?

Perfect for:

  • SaaS products with multiple user types
  • International applications
  • MVPs that need to look professional
  • Projects where you want to ship fast

Maybe not ideal for:

  • Simple landing pages (too much infrastructure)
  • Projects with very specific auth requirements (you'll need to customize heavily)
  • Apps that don't need i18n or role management

What I Learned Building This

  1. Parallel routes are underrated: They make role-based routing so much cleaner than conditional rendering
  2. Type-safe i18n is worth the setup: Catching translation bugs at compile time saves hours of debugging
  3. JSON configuration > hardcoded values: When you can change your entire SEO strategy by editing one file, you move faster
  4. Boilerplates should be opinionated: Too many options = decision fatigue. I made the tough choices so you don't have to

Contributing & Support

Found a bug? Want to add a feature? The repo is fully open source:

🐛 Report issues

Star on GitHub

🤝 Submit a PR

Final Thoughts

I built this because I got tired of setting up the same infrastructure for every project. If you're launching a product and don't want to spend two weeks on boilerplate, give it a try.

It's saved me probably 30+ hours across my last three projects. Maybe it'll help you too.

What's your biggest pain point when starting a new Next.js project? Drop a comment below - I'm always looking to improve this.

Happy building! 🚀

How to Test Stripe Webhooks Without Deploying to Production

2026-02-09 02:37:35

You're building a checkout flow. Payments work. Now Stripe needs to tell your app what happened — payment succeeded, subscription renewed, charge disputed. That happens through webhooks: Stripe sends an HTTP POST to your server.

Problem is, your app runs on localhost:3000. Stripe can't reach that.
So how do you test webhooks during development? Let me walk through the three approaches I've used, what annoys me about each, and the workflow I've settled on.

Approach 1: Stripe CLI

The official way. Install it, log in, forward events to your local server:

stripe login
stripe listen --forward-to localhost:3000/api/webhooks/stripe

Then trigger mock events:

stripe trigger payment_intent.succeeded

It works, but the friction adds up. The mock events have fake data — generic customer IDs, placeholder amounts. If your handler does anything real with the payload (update a database, send a confirmation), mock data doesn't test that logic.

The tunnel dies when you close the terminal. Every restart gives you a new signing secret, so your signature verification breaks until you remember to update .env. And if your handler returns a 500, the CLI output doesn't show you exactly what went wrong.

Approach 2: ngrok

Skip the CLI, expose your server directly:

ngrok http 3000

You get a public URL, paste it into Stripe's webhook settings, and now you're receiving real events from actual test-mode payments. That's better than mock data.

But the free URL changes every session. You restart ngrok, you update the Stripe dashboard, you restart ngrok again, you update again. If your handler crashes mid-request, the webhook is gone — you either wait hours for Stripe's exponential backoff retry or manually reconstruct the payload. And once you close ngrok, the request history vanishes.

Approach 3: Capture first, process later

This is what I actually use now. Instead of pointing Stripe at my local server, I point it at a persistent endpoint that captures and stores every webhook. Then I inspect payloads on my own time and replay them to localhost when I'm ready.

I built a tool called HookLab for this. Here's the workflow:

Create an endpoint:

curl -X POST https://hooklab-webhook-testing-and-debugging.p.rapidapi.com/api/v1/endpoints \
-H "Content-Type: application/json" \
-H "X-RapidAPI-Key: YOUR_KEY" \
-H "X-RapidAPI-Host: hooklab-webhook-testing-and-debugging.p.rapidapi.com" \
-d '{"name": "stripe-test"}'

You get back a public URL like https://hooklab-webhook-testing-and-debugging.p.rapidapi.com/hook/ep_V1StGXR8_Z5j. Paste that into Stripe's webhook settings.

Make a test payment in the Stripe Dashboard using card 4242 4242 4242 4242. Stripe sends the webhook, HookLab captures it.

Inspect it:

curl https://hooklab-webhook-testing-and-debugging.p.rapidapi.com/api/v1/endpoints/ep_V1StGXR8_Z5j/requests \
-H "X-RapidAPI-Key: YOUR_KEY" \
-H "X-RapidAPI-Host: hooklab-webhook-testing-and-debugging.p.rapidapi.com"

Full headers, full body, Stripe-Signature included, timestamp, everything. No console.log archaeology.

Replay it to your local server:

curl -X POST https://hooklab-webhook-testing-and-debugging.p.rapidapi.com/api/v1/replay \
-H "Content-Type: application/json" \
-H "X-RapidAPI-Key: YOUR_KEY" \
-H "X-RapidAPI-Host: hooklab-webhook-testing-and-debugging.p.rapidapi.com" \
-d '{"request_id": "req_abc123", "target_url": "http://localhost:3000/api/webhooks/stripe"}'

Same headers, same body, same method — sent straight to your handler. It crashes? Fix the bug, replay again. No waiting for Stripe's retry. Same real webhook, as many times as you need.

Why capture-and-replay is worth it

The three most common Stripe webhook bugs all get easier to find:

Signature verification failures. Something between Stripe and your code is modifying the body — middleware parsing JSON, a proxy re-encoding, a framework adding whitespace. With captured webhooks you can see the exact raw body Stripe sent and compare it to what your handler receives.
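
For reference, here is a minimal Next.js-style handler sketch that verifies the signature against the raw body; the env variable names and route path are assumptions. The key point is to read the body as text before any JSON parsing touches it:

// app/api/webhooks/stripe/route.ts: a sketch, not a drop-in handler
import Stripe from 'stripe'

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!) // assumed env var name

export async function POST(req: Request) {
  const rawBody = await req.text() // raw body, untouched by JSON middleware
  const signature = req.headers.get('stripe-signature')!

  let event: Stripe.Event
  try {
    event = stripe.webhooks.constructEvent(rawBody, signature, process.env.STRIPE_WEBHOOK_SECRET!)
  } catch (err) {
    return new Response('Invalid signature', { status: 400 })
  }

  // Handle event.type here (e.g. payment_intent.succeeded)
  return Response.json({ received: true })
}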

Wrong event types. You're handling charge.succeeded but Stripe sends payment_intent.succeeded for your integration. Capture one checkout flow and you'll see every event Stripe fires, in order.

Data structure surprises. You wrote event.data.object.customer.email but customer is a string ID, not an expanded object. Real captured payloads show you the actual structure before you write the handler.

Gotchas that catch everyone

Regardless of which approach you use:

Return 200 immediately. Stripe expects a response within 5-10 seconds. Do your heavy processing async. Otherwise Stripe thinks delivery failed and retries, causing duplicates.

Handle duplicates. Stripe uses at-least-once delivery. Use event.id as an idempotency key — check if you already processed it before doing anything.
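
A hedged sketch of both gotchas together: acknowledge fast and dedupe on event.id. The in-memory Set and the async helper are placeholders; in production you'd use a database table or cache.

// Sketch: dedupe on event.id, then process asynchronously so the response returns quickly
const processedEventIds = new Set<string>() // placeholder for a persistent store

export async function handleStripeEvent(event: { id: string; type: string }) {
  if (processedEventIds.has(event.id)) {
    return { received: true } // duplicate delivery from Stripe's at-least-once retries
  }
  processedEventIds.add(event.id) // record before doing heavy work

  void processEventAsync(event) // don't block the HTTP response on this

  return { received: true }
}

async function processEventAsync(event: { id: string; type: string }) {
  // e.g. update the database, send a confirmation email
}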

Test with real test-mode transactions. stripe trigger is fine for checking your endpoint is reachable. But don't consider your integration tested until you've processed webhooks from actual test checkouts. The payloads are different in ways that matter.

If you want to try the capture-and-replay workflow, HookLab has a free tier on RapidAPI — 100 calls/day and 3 endpoints, which is plenty for testing a Stripe integration.

What's your webhook testing setup? I'm curious what other people use — drop a comment