
Payment System Design at Scale

2026-02-21 14:52:40

What really happens when Maria taps “Confirm Ride”?

Maria has an important meeting in 15 minutes.

She doesn’t have cash.

She opens Uber. Requests a ride. Gets dropped off.

The payment? Invisible. Instant. Effortless.

But behind that single tap is one of the most complex distributed systems in modern software.

Today, we’re breaking it down.

Not just “how to charge a card.”

But how to build a secure, reliable, scalable payment system that can process millions of rides per day.

The Illusion of Simplicity

From the user’s perspective:

Trip ends → $20 charged → Done.

From the backend’s perspective:

  • Securely collect payment details
  • Avoid storing sensitive card data
  • Prevent fraud
  • Handle bank outages
  • Split money across multiple parties
  • Maintain financial correctness
  • Reconcile mismatches
  • Survive retries and timeouts
  • Support global scale

This is not a feature.

This is infrastructure.

1️⃣ The First Problem: You Can’t Store Card Data

When a user enters:

  • Card number
  • CVV
  • Expiry

Storing it directly means:

  • Heavy PCI DSS compliance
  • Massive breach risk
  • Legal exposure

So what do modern systems do?

Tokenization.

The mobile app integrates a payment provider SDK (Stripe, Adyen, etc.).

Instead of sending card details to your servers:

  1. The SDK sends card data directly to the provider.
  2. The provider returns a token.
  3. You store only that token.

The token acts as a reusable, scoped permission to charge the card.

If someone steals it?

It’s useless outside your merchant account.

Security solved. (Mostly.)
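
As a rough sketch of what the backend keeps after tokenization (the names and fields below are illustrative, not any specific provider's API), only the token and safe display metadata are ever persisted:

from dataclasses import dataclass, asdict

PAYMENT_METHODS: dict[str, dict] = {}  # stand-in for a real datastore

@dataclass
class StoredPaymentMethod:
    user_id: str
    provider_token: str  # opaque token issued by the provider SDK, e.g. "tok_abc123"
    brand: str           # safe-to-store display metadata, e.g. "visa"
    last4: str           # last four digits only -- never the PAN, never the CVV

def save_payment_method(method: StoredPaymentMethod) -> None:
    # Only the token and display metadata ever touch our storage.
    PAYMENT_METHODS[method.user_id] = asdict(method)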

2️⃣ Authorization vs Capture (Where Things Get Subtle)

When the ride ends, you don’t just “charge.”

You typically:

Step 1: Authorize

Check if the card has funds and lock the amount.

Step 2: Capture

Actually move the money.

Why split it?

Because:

  • The ride price may change.
  • You may need to adjust the final fare.
  • You don’t want unpaid rides.

Large systems often authorize early (estimated fare) and capture later (final fare).

Small detail.

Massive architectural impact.
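
A hedged sketch of that two-step flow, assuming a hypothetical provider client with authorize and capture calls (the method names are illustrative):

def start_ride(provider, card_token: str, estimated_fare_cents: int, ride_id: str) -> str:
    # Step 1: place a hold for the estimated fare -- no money moves yet.
    auth = provider.authorize(amount=estimated_fare_cents, source=card_token,
                              idempotency_key=f"auth-{ride_id}")
    return auth["id"]

def end_ride(provider, authorization_id: str, final_fare_cents: int, ride_id: str) -> None:
    # Step 2: capture the final fare once the trip is priced.
    provider.capture(authorization_id=authorization_id, amount=final_fare_cents,
                     idempotency_key=f"capture-{ride_id}")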

3️⃣ The Money Doesn’t Go Rider → Driver

This is critical.

The rider does NOT pay the driver directly.

Instead:

Rider → Uber Merchant Account → Split →
    → Driver
    → Uber Commission
    → Taxes
    → Fees

Why?

Because:

  • You need commission control.
  • You must handle taxes.
  • You need dispute handling.
  • You need fraud protection.

Direct peer-to-peer payments would break accounting.
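
A toy split calculation makes the flow concrete. The rates below are placeholders; real commission, tax, and fee logic varies by market:

def split_fare(total_cents: int, commission_rate: float = 0.25, tax_rate: float = 0.10) -> dict:
    taxes = round(total_cents * tax_rate)
    commission = round(total_cents * commission_rate)
    driver_share = total_cents - taxes - commission   # driver gets the remainder
    return {"driver": driver_share, "platform_commission": commission, "taxes": taxes}

split_fare(2000)  # {'driver': 1300, 'platform_commission': 500, 'taxes': 200}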

4️⃣ The Hidden Hero: Internal Ledger System

Here’s what most engineers underestimate:

You cannot rely on your payment provider as your source of truth.

You must build your own ledger service.

A simplified double-entry example:

| Account  | Debit | Credit |
|----------|-------|--------|
| Rider    | $20   |        |
| Driver   |       | $15    |
| Platform |       | $5     |

Every movement is recorded.

Why double-entry?

Because money cannot disappear.

If debits ≠ credits → something is broken.

At scale, this is the difference between:

  • “Works fine”
  • “Lost $3M silently”
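
A minimal double-entry sketch (amounts in cents, account names invented) that enforces the debits-equal-credits invariant on every posting:

LEDGER: list[dict] = []

def post_transaction(entries: list[dict]) -> None:
    debits = sum(e["debit"] for e in entries)
    credits = sum(e["credit"] for e in entries)
    if debits != credits:
        raise ValueError(f"Unbalanced transaction: debits={debits}, credits={credits}")
    LEDGER.extend(entries)

# A $20 ride: the rider account is debited, driver and platform are credited.
post_transaction([
    {"account": "rider:maria",         "debit": 2000, "credit": 0},
    {"account": "driver:payable",      "debit": 0,    "credit": 1500},
    {"account": "platform:commission", "debit": 0,    "credit": 500},
])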

5️⃣ Reliability: External Systems Will Fail

Your payment system depends on:

  • Banks
  • Card networks
  • Payment providers
  • Network calls

All of them fail.

Common nightmare scenario:

  • Authorization succeeds.
  • Capture request times out.
  • You retry.
  • Customer gets double-charged.

Solution?

Idempotency keys.

Each payment attempt includes a unique key (e.g., ride_id).

If retried, provider recognizes the key and avoids duplicate processing.

Without idempotency?

You will double-charge users.

And you will lose trust.
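
The key is checked on both sides. A sketch of the client side, with a hypothetical provider object and an in-memory store standing in for a durable one:

PROCESSED: dict[str, dict] = {}  # idempotency_key -> result (stand-in for a durable store)

def charge_once(provider, card_token: str, amount_cents: int, ride_id: str) -> dict:
    key = f"charge-{ride_id}"
    if key in PROCESSED:
        return PROCESSED[key]          # retry path: reuse the original outcome
    result = provider.charge(amount=amount_cents, source=card_token, idempotency_key=key)
    PROCESSED[key] = result            # the provider also dedupes on the same key
    return result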

6️⃣ Smart Retries (Not Blind Retries)

Not all failures are equal.

| Error              | Retry? |
|--------------------|--------|
| Network timeout    | Yes    |
| Rate limit         | Yes    |
| Insufficient funds | No     |
| Fraud blocked      | No     |

Blind retries create chaos.

Intelligent retries create resilience.
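
A sketch of that classification plus exponential backoff with jitter; the error codes here are illustrative, not a real provider's:

import random

RETRYABLE = {"network_timeout", "rate_limited", "provider_unavailable"}
NON_RETRYABLE = {"insufficient_funds", "fraud_blocked", "card_expired"}

def should_retry(error_code: str, attempt: int, max_attempts: int = 3) -> bool:
    return error_code in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int) -> float:
    # Exponential backoff with jitter so retries don't stampede a recovering provider.
    return min(30.0, 2 ** attempt) + random.random()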

7️⃣ Fraud Layer (Before Money Moves)

Before charging:

  • Velocity checks
  • Device fingerprinting
  • Location mismatch
  • Behavioral anomaly detection

Some cases trigger:

  • 3D Secure
  • OTP verification
  • Manual review

Payment systems are also fraud systems.

Ignore this, and chargebacks will destroy margins.
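
As one illustrative piece of that layer, a simple velocity check might cap how often a payment method can be charged inside a sliding window (the thresholds are made up):

import time
from collections import defaultdict, deque

ATTEMPTS: dict[str, deque] = defaultdict(deque)

def within_velocity_limit(card_token: str, limit: int = 5, window_seconds: int = 600) -> bool:
    now = time.time()
    attempts = ATTEMPTS[card_token]
    while attempts and now - attempts[0] > window_seconds:
        attempts.popleft()             # drop attempts outside the window
    attempts.append(now)
    return len(attempts) <= limit      # False -> route to 3D Secure / manual review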

8️⃣ Refunds Aren’t Simple

Refunding isn’t “reverse transaction.”

It requires:

  1. Updating internal ledger
  2. Issuing refund request
  3. Adjusting driver balance
  4. Handling payout already completed

Sometimes, the platform absorbs temporary loss.

Complexity compounds over time.
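
A hedged sketch of that sequence; every object here (ledger, provider, ride) is hypothetical and stands in for real services:

def refund_ride(provider, ledger, ride, amount_cents: int) -> None:
    # 1. Record the reversal internally first, so the ledger stays the source of truth.
    ledger.post_refund(ride.id, amount_cents)
    # 2. Ask the provider to return the money to the rider.
    provider.refund(charge_id=ride.charge_id, amount=amount_cents,
                    idempotency_key=f"refund-{ride.id}")
    # 3. Adjust the driver side: claw back if not yet paid out, otherwise absorb the loss.
    if ride.driver_paid_out:
        ledger.record_platform_loss(ride.id, ride.driver_share)
    else:
        ledger.adjust_driver_balance(ride.driver_id, -ride.driver_share)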

9️⃣ Driver Payouts: A Different System

Charging cards is one system.

Paying drivers is another.

Most platforms:

  • Aggregate earnings daily
  • Settle weekly
  • Offer instant payout (for a fee)

This uses bank rails like ACH/SEPA.

Completely different from card networks.

Two financial systems under one product.

🔟 Reconciliation (Where Adults Work)

Every night:

  • Pull reports from payment provider
  • Compare with internal ledger
  • Identify mismatches

If mismatch:

  • Flag for review
  • Trigger investigation

Without reconciliation?

Small inconsistencies compound into millions.
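
A minimal sketch of that nightly comparison, assuming both sides can be reduced to (charge_id, amount) rows:

def reconcile(provider_rows: list[dict], ledger_rows: list[dict]) -> list[dict]:
    provider_by_id = {r["charge_id"]: r["amount"] for r in provider_rows}
    ledger_by_id = {r["charge_id"]: r["amount"] for r in ledger_rows}
    mismatches = []
    for charge_id in provider_by_id.keys() | ledger_by_id.keys():
        provider_amount = provider_by_id.get(charge_id)
        ledger_amount = ledger_by_id.get(charge_id)
        if provider_amount != ledger_amount:     # missing on one side also counts
            mismatches.append({"charge_id": charge_id,
                               "provider": provider_amount,
                               "ledger": ledger_amount})
    return mismatches                            # flagged for review and investigation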

1️⃣1️⃣ Scaling to Millions of Rides

At high scale:

  • 1M+ rides/day
  • 1000+ transactions per second at peak

You need:

  • Stateless payment services
  • Event-driven architecture
  • Message queues (Kafka/PubSub)
  • Horizontal scaling

Instead of:

Ride → Immediate Charge

You use:

RideCompleted Event → Payment Queue → Worker → Provider

Decoupling prevents cascading failures.
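
A toy version of that decoupling, using an in-process queue as a stand-in for Kafka/PubSub and a hypothetical provider client:

import json
import queue

payment_queue: "queue.Queue[str]" = queue.Queue()

def on_ride_completed(ride_id: str, final_fare_cents: int) -> None:
    # The ride service only publishes an event -- it never calls the provider directly.
    payment_queue.put(json.dumps({"ride_id": ride_id, "amount": final_fare_cents}))

def payment_worker(provider) -> None:
    while True:
        event = json.loads(payment_queue.get())
        provider.charge(amount=event["amount"],
                        idempotency_key=f"charge-{event['ride_id']}")
        payment_queue.task_done()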

1️⃣2️⃣ Multi-Provider Strategy

Never depend on one payment provider.

You implement:

  • Primary provider
  • Secondary fallback

With an abstraction layer:

charge(amount, token)

Underneath, routing logic decides where to send it.

Because outages are not “if.”

They are “when.”
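
One way to sketch that abstraction layer; the provider clients and the error type are hypothetical:

class ProviderUnavailableError(Exception):
    """Raised by a provider client when its API is down or timing out."""

class PaymentRouter:
    def __init__(self, primary, secondary):
        self.providers = [primary, secondary]

    def charge(self, amount_cents: int, card_token: str, idempotency_key: str):
        last_error = None
        for provider in self.providers:
            try:
                return provider.charge(amount=amount_cents, source=card_token,
                                       idempotency_key=idempotency_key)
            except ProviderUnavailableError as exc:
                last_error = exc          # primary is down -- fall through to the fallback
        raise last_error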

What Looks Simple Is Actually Distributed Finance

A ride payment system is not:

  • Just API calls
  • Just token storage
  • Just Stripe integration

It is:

  • Distributed systems
  • Financial accounting
  • Legal compliance
  • Fault tolerance
  • Fraud modeling
  • Bank integrations
  • Event-driven infrastructure

That’s why payment infrastructure is one of the hardest backend domains in the world.

Final Thought

When Maria stepped out of that taxi in Prague, she didn’t think about:

  • Idempotency keys
  • Double-entry accounting
  • Multi-provider failover
  • Fraud scoring
  • Reconciliation pipelines

She just walked into her meeting.

That’s the goal.

Great engineering makes complexity invisible.

If you’re building systems:

Don’t just design features.

Design for:

  • Failure
  • Scale
  • Auditability
  • Correctness

Because money systems don’t forgive mistakes.

Vibe Coding Is Rewriting the Rules of Software Development

2026-02-21 14:43:00

AI agents don't just autocomplete anymore — they architect, debug, and ship. Here's what every developer needs to know.

Something irreversible happened quietly over the past twelve months. Developers stopped writing most of their code — and started directing it. The rise of "vibe coding", a term coined by Andrej Karpathy in early 2025, describes a workflow where engineers describe intent in natural language and AI agents translate it into working software. It sounds like science fiction. It's now Monday morning for millions of developers.

| Stat                                          | Value |
|-----------------------------------------------|-------|
| Devs using AI daily (Stack Overflow 2025)     | 73%   |
| Faster prototyping reported by early adopters |       |
| AI coding tools market by 2027                | $28B  |

What "Vibe Coding" Actually Means

The term is deliberately casual — and that's the point. Traditional development demanded precision: exact syntax, correct API calls, proper imports. Vibe coding flips this. You describe the feeling of what you want built, and an AI agent — Claude, GPT-4o, Gemini — iterates until it matches your intent.

"Just describe what you want and the AI figures out the how. The bottleneck shifts from syntax to clarity of thought."
— Andrej Karpathy, Feb 2025

This isn't autocomplete. Modern AI coding agents maintain context across entire codebases, run tests autonomously, read error logs, and self-correct — sometimes across dozens of iterations without human intervention.

The Technical Reality Behind the Hype

Under the hood, the magic is a combination of retrieval-augmented generation (RAG) over your codebase, tool-use APIs that let models execute shell commands, and long-context windows now exceeding 1 million tokens. Here's what a typical agent loop looks like:

# Simplified agent loop (pseudo-code)
while task_not_complete:
    plan    = llm.think(goal, codebase_context)
    action  = llm.select_tool(plan)   # write_file | run_tests | search
    result  = execute(action)
    codebase_context.update(result)
    if tests_pass(result): break

Tools like Claude Code, GitHub Copilot Workspace, and Cursor's Composer run exactly this loop — autonomously writing, running, failing, and fixing code until the task is done. Developers act as product managers: defining acceptance criteria, reviewing outputs, and steering direction.

What Changes for Developers

The skill premium is shifting fast. Low-level syntax knowledge matters less; systems thinking, prompt engineering, and architectural judgment matter more. Developers who thrive are those who can break complex problems into well-specified sub-tasks that agents can execute reliably.

Practically, this means:

  • Writing clearer specs before touching any tool
  • Investing in good test coverage so agents can self-validate
  • Learning to recognize when AI output is confidently wrong — a subtle and dangerous failure mode

The floor for what one developer can ship alone has risen dramatically. The ceiling for what poor judgment can break has too.

Real-World Example: Building a Full Feature in Minutes

A developer at a Series B startup recently described building a complete CSV import pipeline — parsing, validation, error reporting, database writes, and a UI progress bar — in under 40 minutes using an AI agent. A task that previously took two days.

  • Prompt: three sentences
  • Agent clarifying questions: four
  • Tests passing: first try

This isn't an outlier. It's becoming the norm for well-scoped, clearly-specified features. The hard parts — distributed systems, novel algorithms, nuanced UX decisions — still require deep human expertise. But the "boring" 60% of a sprint? Increasingly autonomous.

The Backlash (and Why It Partially Misses the Point)

Critics argue that vibe coding produces brittle, unreviewed code that accumulates technical debt at scale. They're not wrong — but they're describing a misuse, not an inherent flaw.

The developers seeing the worst outcomes treat AI output as ground truth. Those seeing the best outcomes treat every generated file as code they're responsible for owning and understanding.

The analogy is a junior engineer: exceptional output when well-directed and reviewed. A liability when left unsupervised on critical paths.

Conclusion

Vibe coding isn't the end of software engineering — it's its next phase. The developers who will define the next decade aren't those who resist AI agents, nor those who blindly trust them. They're the ones who learn to collaborate with them: setting clear goals, maintaining rigorous standards, and understanding what's happening beneath the surface.

The code still matters. The judgment about what to build, and whether the build is correct, matters more than ever. That's not a demotion for developers. It's a promotion.

💬 Are you vibe coding yet? Share your experience — what's working, what's broken, and what you wish you'd known sooner.

From 10-Minute Blocking APIs to Async Pipelines: A Production Backend Redesign

2026-02-21 14:35:31

When I took over a transaction ingestion system running in production, one of the first things I noticed was the time it took to process CSV uploads.

Users regularly uploaded files averaging around 50,000 rows, with some reaching over 600,000. The upload API handled everything synchronously: parsing the file, inserting rows into Cassandra, fetching historical prices from an external service per transaction, performing reconciliation, calculating balances and tax metrics, and only then returning a response.

In practice, this meant requests could take anywhere from five to ten minutes to complete. Frontend and Nginx timeouts had been extended to accommodate this behavior. The system technically worked, but only by allowing long blocking requests.

It became clear that this wasn’t just a performance issue. The architecture itself was tightly coupled to the request lifecycle. Heavy compute and IO operations were happening directly inside the API path.

Synchronous designs often work well at small scale. But as data size increases, the bottlenecks become harder to ignore.

What the Original Flow Looked Like

At a high level, the upload endpoint was responsible for doing everything in a single request lifecycle.

Once a user uploaded a CSV file, the API would:

  1. Parse the file row by row
  2. Insert each transaction into Cassandra
  3. Fetch historical pricing data for every transaction
  4. Perform reconciliation
  5. Calculate balances and tax metrics
  6. Return a response only after all processing was complete

Conceptually, it looked like this:

@app.post("/upload")
def upload_csv(file):
    rows = parse_csv(file)

    for row in rows:
        insert_into_cassandra(row)

        price = fetch_historical_price(row)
        reconciled = reconcile_transaction(row, price)
        processed = calculate_metrics(reconciled)

        update_transaction(processed)

    return {"status": "completed"}

Every upload request had to wait for:

  • Large file parsing
  • Tens of thousands of database writes
  • External API calls per transaction
  • Reconciliation and tax calculations

There was also no automated retry mechanism. If processing failed midway, it required manual intervention.

At the same time, another API responsible for returning transaction data to the frontend fetched raw records from Cassandra and performed calculations inside the request path. That endpoint routinely took 30–40 seconds to complete.

The ingestion flow looked like this:

User
  │
  ▼
Upload API
  │
  ▼
Parse CSV
  │
  ▼
Insert into Cassandra
  │
  ▼
Fetch Historical Price (External API)
  │
  ▼
Reconciliation
  │
  ▼
Tax / Metrics Calculation
  │
  ▼
HTTP Response

Each stage blocked the next. The system accumulated CPU-bound work (parsing, calculations) and IO-bound work (database writes, external API calls) inside a single HTTP request.

Rethinking the Request Lifecycle

The first architectural decision was simple:

Move heavy work out of the request path.

Instead of completing ingestion synchronously, the upload API was redesigned to become status-based:

@app.post("/upload")
def upload_csv(file):
    job_id = create_job_record(status="queued")
    queue.publish({"job_id": job_id})
    return {"job_id": job_id, "status": "processing"}

The API now initiated a pipeline rather than executing it.
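
The natural companion is a status endpoint the frontend can poll. This one is purely illustrative (the lookup helper and fields are assumptions, not from the original design):

@app.get("/upload/{job_id}/status")
def upload_status(job_id: str):
    job = get_job_record(job_id)   # hypothetical lookup: queued | processing | completed | failed
    return {"job_id": job_id, "status": job.status, "processed_rows": job.processed_rows}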

Building a Staged Async Pipeline

Processing was decomposed into multiple independent consumers:

Upload API
   │
   ▼
Queue A  →  Insert Consumer
   │
   ▼
Queue B  →  Historical Price Workers
   │
   ▼
Queue C  →  Reconciliation Consumer
   │
   ▼
Queue D  →  Tax / Metrics Consumer
   │
   ▼
Read-Optimized Table

Each stage became independently scalable and observable.

Handling Large CSV Files

Files were processed in dynamic chunks (typically 500–2000 rows depending on structure). Chunk-level parallelism was introduced using threads, configurable via environment variables (defaulting to 8).

This reduced memory spikes and improved CPU utilization during parsing and metric computation.
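
A rough sketch of that chunking plus thread-level parallelism; chunk size and worker count are the knobs described above, and process_chunk stands in for the per-chunk insert-and-compute step:

from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def chunked(rows, size: int = 1000):
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def process_file(rows, process_chunk, workers: int = 8) -> None:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is parsed and written independently, bounding memory per task.
        list(pool.map(process_chunk, chunked(rows)))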

Optimizing External API Calls with a Master–Worker Pattern

Originally, historical pricing APIs were called sequentially per transaction.

This was redesigned using a master–worker model:

  • The master grouped transactions into batches
  • Workers processed batches in parallel
  • Workers wrote results directly to Cassandra
  • The master coordinated using asyncio.gather

import asyncio

async def master(batch_groups):
    # `worker` is the shared price-fetching client; each call processes one batch
    # and writes its results directly to Cassandra.
    tasks = [worker.process(batch) for batch in batch_groups]
    results = await asyncio.gather(*tasks)  # run all batches concurrently
    return results

This allowed controlled concurrency, better rate-limit handling, and parallel processing instead of sequential IO.

Optimizing the Read Path

The read API was slow because it was calculating metrics at request time.

Instead of using a Materialized View, a new Cassandra table was introduced:

  • Contained only frontend-required fields
  • Included computed metrics
  • Excluded ingestion-only data

A new consumer transformed processed records into this optimized table.

The read API now simply applied filters and limits on precomputed data.
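
With the precomputed table in place, the read endpoint becomes a thin query layer. A sketch, with a hypothetical query helper over the read-optimized table:

@app.get("/transactions")
def list_transactions(user_id: str, limit: int = 100):
    # No per-request computation: just filter and limit over precomputed rows.
    return query_read_table(user_id=user_id, limit=limit)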

Measured Impact

Upload API

Before: 5–10 minutes
After: ~0.5 seconds

End-to-End Processing

Before: 5–20 minutes
After: 2–4 minutes

Read API

Before: 30–40 seconds
After: ~0.5–1 second

CPU Utilization

Before: ~5–8%
After: ~70–80%

The improvement was not just about latency reduction. It was about reshaping how work flowed through the system.

Engineering Takeaways

  • Heavy work does not belong in the request lifecycle.
  • Separate ingestion from presentation.
  • Parallelism should be intentional and controlled.
  • External rate limits are architectural constraints.
  • Pipelines scale better than monolithic request handlers.

This redesign changed how I approach ingestion systems.
When processing grows, the architecture must evolve with it.

If you’ve worked on ingestion pipelines or faced similar architectural bottlenecks, I’d be interested to hear how you approached it.

Happy to discuss trade-offs or alternative designs.

9+ Best Free Shadcn Date Picker Components for React and Next.js in 2026

2026-02-21 14:20:33

Most modern apps require date pickers - from SaaS dashboards and booking systems to analytics filters and admin panels.

We tested and reviewed 9 free Shadcn date picker components from real repositories and component libraries. This list focuses on real developer needs, such as timezone handling, date ranges, form integration, and production readiness.

This guide is based on actual component code, GitHub activity, TypeScript support, and integration with React and Next.js.

How We Tested These Components

We installed and tested each react date picker in a modern Next.js App Router project to verify real-world compatibility.

We validated every component for:

  • Installation inside Next.js App Router

  • Strict TypeScript mode compatibility

  • Controlled and uncontrolled usage patterns

  • Integration with react-hook-form

  • Date range and datetime behavior

  • Timezone handling (where supported)

  • SSR and hydration safety

  • Dependency footprint (react-day-picker, date-fns, etc.)

  • GitHub activity and maintenance status

We only included components that are actively maintained, reusable, and production-ready.

All components listed here are 100% free and open source.

Across the list, you’ll find support for three primary selection modes:

  • Date Picker - Select a single calendar date

  • Date & Time Picker - Allows selection of both date and time

  • Date Range Picker - Select a start and end date

When you should use a Shadcn date picker

Shadcn date pickers are ideal for:

  • SaaS analytics dashboards for filtering data by date

  • Booking and scheduling systems - for single or range date selection

  • Admin panels with reporting filters

  • Financial tools that analyze date-based metrics

  • CRM systems that track activity history

  • Any application already using shadcn/ui and Tailwind CSS

How to Choose the Right Date Picker

| Criteria                | What to Check                                                 |
|-------------------------|---------------------------------------------------------------|
| Selection Type          | Single date, range, or datetime support                       |
| Form Handling           | Works with controlled inputs and form libraries               |
| Styling                 | Compatible with Tailwind CSS                                  |
| Timezone / Localization | Needed for global or regional apps                            |
| Customization           | Supports custom trigger, popover, or layout                   |
| Dependencies            | Uses modern libraries like react-day-picker or date utilities |

Quick Comparison Table

If you prefer a quick overview before diving into implementation details, here’s a side-by-side comparison:

| Component                        | Picker Type      | Timezone Support | Best For             |
|----------------------------------|------------------|------------------|----------------------|
| Shadcn Space                     | Datetime + Range |                  | SaaS dashboards      |
| Tailwindadmin                    | Date + Range     |                  | Admin panels         |
| Datetime Picker (huybuidac)      | Datetime         | Global           | SaaS apps            |
| Date Range Picker (johnpolacek)  | Date Range       |                  | Analytics filtering  |
| Shadcn Date Picker (flixlix)     | Date + Range     |                  | General applications |
| Shadcn Calendar (sersavan)       | Date + Range     | Partial          | Custom dashboards    |
| Date Time Picker (Rudrodip)      | Datetime + Range |                  | Booking systems      |
| Datetime Picker (Maliksidk19)    | Datetime         |                  | Internal tools       |
| Persian Calendar (MehhdiMarzban) | Date + Range     | Locale-based     | Regional apps        |

Best Free Shadcn Date Picker Components

Below is a curated list of free, production-ready Shadcn date picker components. Each component has been thoroughly tested for integration with React, Next.js, TypeScript, and Tailwind CSS.

Shadcn Space Date Picker

This collection provides multiple ready-to-use date picker components built specifically for shadcn/ui projects. It includes standard date pickers, calendar popovers, and form-integrated pickers. All components follow shadcn component architecture, making them easy to integrate into existing projects.

Tech stack: ShadcnUI v3.5, Base UI v1, React v19, TypeScript v5, Tailwind CSS v4

Last Updated: Feb 2026

Key features:

  • Includes calendar, popover, and input-based picker patterns
  • Uses composable shadcn component structure
  • Clean TypeScript component implementation
  • Supports form integration with controlled inputs
  • Compatible with Next.js server and client components

Best for: SaaS dashboards, admin panels, and internal tools

Visit Component

Tailwindadmin Shadcn Date Picker

This component provides production-ready date picker examples used in real dashboard interfaces. It includes calendar dropdown picker and input-based picker implementations. The code follows modular patterns suitable for scalable dashboard systems.

Tech stack: ShadcnUI v3.5, Next.js v16, React v19, TypeScript v5, Tailwind CSS v4

Last Updated: Feb 2026

Key features:

  • Dashboard-focused picker UI patterns
  • Modular component separation
  • Clean Tailwind utility usage
  • Designed for analytics and reporting filters
  • Works well inside complex form systems

Best for: Admin dashboards and analytics interfaces

Visit Component

Shadcn Datetime Picker by huybuidac

This is a powerful and fully customizable component that simplifies date and time selection in React applications built with the Shadcn UI framework. With advanced features designed to enhance the user experience, this datetime picker provides seamless integration and a responsive, user-friendly interface. Whether you need a robust datetime, date, or time picker, this provides the flexibility and functionality needed for modern applications.

Tech stack: ShadcnUI v2, Next.js v14, React v18, Radix UI v1, Tailwind CSS v3

GitHub Stars: 202

Last Updated: 2024

Key features:

  • Combined date and time picker support
  • Timezone support for global apps
  • Min and max date validation
  • Custom trigger rendering support
  • Works with React state and form libraries

Best for: SaaS apps with timezone and datetime requirements

Visit Component

Date Range Picker for Shadcn by johnpolacek

This is a reusable component built for Shadcn using beautifully designed components from Radix UI and Tailwind CSS. It provides a dropdown interface to allow users to select or enter a range of dates and includes additional options such as preset date ranges and an optional date comparison feature.

Tech stack: Radix UI v1, Mocha.js v10, React v18, Jest v29.5, Tailwind CSS v3

GitHub Stars: 1K+

Last Updated: 2024

Key features:

  • Native date range selection support
  • Optimized for analytics filtering
  • Clean range selection state logic
  • Works with controlled components
  • Designed for dashboard usage

Best for: Analytics dashboards and reporting systems

Visit Component

Shadcn Date Picker by flixlix

This custom Shadcn component aims to provide a more advanced alternative to the default date picker component. It is built on top of the react-day-picker library, which provides a wide range of customization options.

Tech stack: ShadcnUI v2.6, Next.js v15, Radix UI v1, React v19, Tailwind CSS v3

GitHub Stars: 363

Last Updated: Dec 2025

Key features:

  • Single date selection
  • Date range selection
  • Month and year navigation
  • Easy integration into existing UI systems
  • Supports Light & Dark Mode

Best for: General application date selection

Visit Component

Shadcn Calendar Component by sersavan

This is a reusable calendar and date range picker built for shadcn/ui projects. It is designed for React and Next.js apps using TypeScript and Tailwind CSS. The component focuses on clean UI, easy customization, and smooth date selection. It helps developers quickly add flexible calendar functionality to modern web applications.

Tech stack: Next.js v14, Radix UI v1, Zod v3, React v18, Tailwind CSS v3

GitHub Stars: 327

Last Updated: Dec 2025

Key features:

  • Single date and date range selection support
  • Easy state management
  • Timezone-aware date handling
  • Predefined date ranges like Today, Last 7 Days, This Month
  • Minimal setup required

Best for: Custom calendar integrations

Visit Component

Shadcn Date Time Picker by Rudrodip

This project features a range of Date and Time picker components built with ShadCN. These examples demonstrate the versatility and functionality of the component across various use cases.

Tech stack: Next.js v14, Radix UI v1, Zod v3, React v18, Tailwind CSS v3

GitHub Stars: 283

Last Updated: May 2025

Key features:

  • Supports combined date and time selection
  • Date range & 12h formats available
  • Integrates with react-hook-form and Zod for form handling & validation
  • Clean TypeScript implementation
  • Live examples with copy/view code UI for quick implementation

Best for: Booking systems and scheduling apps

Visit Component

Shadcn Datetime Picker by Maliksidk19

This project provides a beautifully crafted datetime picker component built using the Shadcn UI. It offers an intuitive interface for selecting dates and times in React applications.

Tech stack: Next.js v15, Radix UI v1, React v19, Tailwind CSS v3

GitHub Stars: 266

Last Updated: March 2025

Key features:

  • Supports combined datetime selection
  • Works with controlled input components
  • Customizable Layout
  • Easy integration into dashboards
  • Lightweight implementation

Best for: Internal tools and admin apps

Visit Component

Shadcn Persian Calendar by MehhdiMarzban

This is a beautiful, accessible, and customizable Persian (Jalali) date picker component for React applications built with Shadcn UI components.

Tech stack: Next.js v15, Radix UI v1, React v19, Tailwind CSS v3

GitHub Stars: 27

Last Updated: Feb 2025

Key features:

  • Persian calendar support
  • Single date, range, and multiple date selection modes
  • Accessible (WAI-ARIA compliant)
  • Year switcher
  • Supports Dark mode

Best for: Persian and regional applications

Visit Component

Frequently Asked Questions

1. Which is the best Shadcn date picker for SaaS dashboards?

Date pickers from Shadcn Space and Tailwindadmin are strong choices because their components are regularly updated and well-maintained. They offer support for analytics filtering and are built with a scalable component architecture, making them reliable for growing applications.

2. Which Shadcn date picker supports timezone?

The datetime picker by huybuidac supports timezone selection, min date, and max date validation. This is useful for global SaaS applications.

3. Can I use these date pickers in Next.js projects?

Yes, all components are built with React, TypeScript, and Tailwind CSS, and work directly in Next.js apps.

Final Thoughts

These 9 free shadcn date picker components provide production-ready solutions for modern applications. They support core needs like date selection, datetime input, analytics filtering, and scheduling.

For most SaaS and dashboard applications, the datetime picker by Shadcn Space and the date range picker by johnpolacek provide the best flexibility and scalability.

Terminal Operations in Java Streams

2026-02-21 14:18:19

Hi Everyone reading,

In the previous article, we explored Intermediate Operations in Java Streams — how they transform and prepare data.

Now it’s time to understand the final and most important step of stream processing:

Without a terminal operation, a stream does nothing.

Let’s understand why.

What are Terminal Operations?

Terminal operations:

  • Produce a result.
  • Trigger execution of intermediate operations.
  • Close the stream after execution.
  • Cannot be reused once executed.

Example

List<Integer> list = List.of(1,2,3,4,5);

list.stream()
    .filter(n -> n % 2 == 0)
    .forEach(System.out::println);   // Terminal operation

Here:

filter() → Intermediate

forEach() → Terminal (executes the stream)

Why Terminal Operations Are Important

Streams are lazy.

Intermediate operations build a pipeline, but execution starts only when a terminal operation is called.

No terminal operation → No processing.

Types of Terminal Operations

Terminal operations can be grouped into:

  • Iteration operations
  • Reduction operations
  • Collection operations
  • Matching operations
  • Finding operations

Let’s explore each.

1 forEach()

Used to iterate over elements.

List<String> names = List.of("Ram", "Amit", "Shyam");

names.stream()
     .forEach(System.out::println);

Output

Ram
Amit
Shyam

Note: Prefer forEachOrdered() when working with parallel streams and order matters.

2 collect()

Most powerful terminal operation.

Used to collect stream elements into:

  • List
  • Set
  • Map
  • String
  • Custom collections

Collect to List

List<Integer> evenNumbers = 
    List.of(1,2,3,4,5).stream()
        .filter(n -> n % 2 == 0)
        .collect(Collectors.toList()); // [2,4]

Collect to Map

Map<String, Integer> map =
    List.of("Ram", "Amit", "Shyam").stream()
        .collect(Collectors.toMap(
            name -> name,
            name -> name.length()
        )); // {Ram=3, Amit=4, Shyam=5}

3 reduce()

Used to combine elements into a single result.

Example: Sum of numbers

int sum = List.of(1,2,3,4)
              .stream()
              .reduce(0, (a,b) -> a + b);

System.out.println(sum);

Output

10

Used for:

  • Sum
  • Product
  • Custom aggregation

4 count()

Returns number of elements.

long count = List.of(1,2,3,4,5)
                 .stream()
                 .filter(n -> n > 2)
                 .count();

System.out.println(count);

Output

3

In the above example, we only count the numbers that are greater than 2.

5 anyMatch(), allMatch(), noneMatch()

Used for condition checking.

boolean anyEven =
    List.of(1,3,5,6).stream()
        .anyMatch(n -> n % 2 == 0); // true

boolean allPositive =
    List.of(1,2,3).stream()
        .allMatch(n -> n > 0); // true

boolean result = List.of(5, 10, 15, 20).stream()
                     .noneMatch(n -> n < 0); // true

These return boolean.

6 findFirst() and findAny()

Return an Optional.

Optional<Integer> first =
    List.of(10,20,30)
        .stream()
        .findFirst(); // Optional[10]

findAny() is useful with parallel streams.

Optional<Integer> result =
                List.of(10, 20, 30, 40, 50)
                    .parallelStream()
                    .findAny(); // Optional[30]

Note: findAny() can return any element of the source. Here it happened to return Optional[30], but it could just as well have been 10, 20, 40, or 50.

7 min() and max()

Find minimum or maximum element.

Optional<Integer> max =
    List.of(1,5,2,9,3)
        .stream()
        .max(Integer::compareTo); // 9

Optional<Integer> min =
    List.of(1,5,2,9,3)
        .stream()
        .min(Integer::compareTo); // 1

Important Characteristics of Terminal Operations

| Feature            | Description                          |
|--------------------|--------------------------------------|
| Triggers Execution | Executes all intermediate operations |
| Produces Result    | Returns value or side effect         |
| Closes Stream      | Stream cannot be reused              |
| Non-lazy           | Actually performs computation        |

Stream Cannot Be Reused

Stream<Integer> stream = Stream.of(1,2,3);

stream.forEach(System.out::println);

// This will throw IllegalStateException
stream.count();

Once a terminal operation is called, the stream is closed.

Real-World Example (Employee Processing)

Problem statement: For a given list of employees, find the average salary.

class Employee {
    String name;
    int salary;

    Employee(String name, int salary) {
        this.name = name;
        this.salary = salary;
    }

    public String toString() {
        return name + " : " + salary;
    }
}

public class Main {
    public static void main(String[] args) {

        List<Employee> employees = List.of(
                new Employee("Ram", 50000),
                new Employee("Amit", 70000),
                new Employee("Shyam", 40000),
                new Employee("Ankit", 90000)
        );

        // Get average salary
        double avgSalary = employees.stream()
                .mapToInt(e -> e.salary)
                .average()
                .orElse(0);

        System.out.println("Average Salary: " + avgSalary);
    }
}

Output

Average Salary: 62500.0

Intermediate vs Terminal Operations

| Intermediate    | Terminal                     |
|-----------------|------------------------------|
| Returns Stream  | Returns result               |
| Lazy            | Executes pipeline            |
| Can chain       | Ends stream                  |
| filter(), map() | collect(), reduce(), count() |

Conclusion

Terminal operations are the final step in stream processing.

They:

  • Execute the pipeline
  • Produce meaningful results
  • Close the stream

Mastering terminal operations like collect(), reduce(), count(), and findFirst() makes you confident with Java Streams in real-world applications.

What’s Next?

In the next article, we will explore:
Collectors in depth

Devlog: 2026-02-04

2026-02-21 14:14:20


The Hook
I shipped a focused devlog pipeline update that turns my reading queue into concrete build decisions for the week.

Why I Built It
My days were getting noisy: too many good posts, not enough synthesis. I wanted a lightweight path from "interesting idea" to "actionable build," and a way to record it so I can see patterns over time.

The Solution
I wired a simple flow that separates signal capture, decision pressure, and actual build notes.

graph TD;
  A[Collect reading signals] --> B{Is it actionable?};
  B -- Yes --> C[Extract constraints + risks];
  B -- No --> D[Save for later];
  C --> E[Map to builds];
  E --> F[Devlog write-up];

capture -> filter -> extract -> map -> ship

signals = read_queue()
insights = [s for s in signals if s.actionable()]
notes = summarize(insights)
plan = map_to_builds(notes)
write_devlog(plan, notes)

Raw logs: Read queue normalized, 10 sources tagged, 4 insights promoted to build notes.

Note: Small, repeatable loops beat big, fragile systems.

Tip: If an insight can't change a build plan, it's just trivia.

Warning: Don't let "nice to know" overwhelm "need to ship."

Danger: Automating without guardrails turns your roadmap into a blender.

What I Learned

  • The WordPress AI Leaders pilot is a paid micro-credential that starts with a March 2026 cohort, prioritizes UIC students, and ties learning to real WordPress contributions.
  • The WordPress MCP Adapter supports STDIO and HTTP transports, with STDIO + WP-CLI as the simplest path for local development.
  • MCP adapter safety hinges on least-privilege capabilities and avoiding destructive abilities for public endpoints.
  • Drupal Commerce can support B2B portals inside a single Drupal install without a separate platform, using built-in capabilities and modules.
  • The Mail Composer module provides an OOP + Twig-based email API in Drupal, and has stable releases with Drupal 11 compatibility.
  • The old IE stylesheet tag limit (around 31 link/style tags) remains a real reminder of why CSS aggregation strategy matters.
  • WPTavern's #203 podcast features Miriam Schwab on Elementor's growth and AI direction, which frames how product teams talk about cautious rollouts and AI experimentation.

I still have a few items queued to dig into deeper (Pantheon's new dashboard traffic metrics, Gutenberg 22.5 notes, and a Drupal community values post), but the themes above were enough to shape next week's build priorities.

Originally published at VictorStack AI Blog