2026-02-21 14:52:40
Maria has an important meeting in 15 minutes.
She doesn’t have cash.
She opens Uber. Requests a ride. Gets dropped off.
The payment? Invisible. Instant. Effortless.
But behind that single tap is one of the most complex distributed systems in modern software.
Today, we’re breaking it down.
Not just “how to charge a card.”
But how to build a secure, reliable, scalable payment system that can process millions of rides per day.
From the user’s perspective:
Trip ends → $20 charged → Done.
From the backend’s perspective: tokenize the card, authorize and capture the charge, record it in a ledger, split the payout, and reconcile everything.
This is not a feature.
This is infrastructure.
When a user enters card details, storing them directly on your servers means taking on PCI-compliance scope and breach liability.
So what do modern systems do?
The mobile app integrates a payment provider SDK (Stripe, Adyen, etc.).
Instead of sending card details to your servers, the SDK sends them straight to the provider, which returns a token.
The token acts as a reusable, scoped permission to charge the card.
If someone steals it?
It’s useless outside your merchant account.
Security solved. (Mostly.)
When the ride ends, you don’t just “charge.”
You typically do it in two steps:
Authorize: check if the card has funds and lock the amount.
Capture: actually move the money.
Why split it? Because large systems often authorize early (estimated fare) and capture later (final fare).
Small detail.
Massive architectural impact.
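A minimal sketch of the authorize/capture split in Python. `FakeProvider`, its method names, and the amounts are illustrative stand-ins, not any real payment SDK:

```python
from dataclasses import dataclass

@dataclass
class Authorization:
    auth_id: str
    amount_cents: int

class FakeProvider:
    """Stand-in for a payment provider client (hypothetical API)."""

    def authorize(self, token: str, amount_cents: int) -> Authorization:
        # Places a hold on the card without moving money.
        return Authorization(auth_id=f"auth_{token}", amount_cents=amount_cents)

    def capture(self, auth: Authorization, final_cents: int) -> int:
        # Moves at most the held amount; any remainder of the hold is released.
        return min(final_cents, auth.amount_cents)

provider = FakeProvider()
hold = provider.authorize("tok_123", 2200)   # estimated fare: $22.00
captured = provider.capture(hold, 2000)      # final fare: $20.00
```

The hold-then-settle shape is what lets the final fare differ from the estimate without a second card prompt.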
This is critical.
The rider does NOT pay the driver directly.
Instead:
Rider → Uber Merchant Account → Split →
→ Driver
→ Uber Commission
→ Taxes
→ Fees
Why? Because direct peer-to-peer payments would break accounting.
Here’s what most engineers underestimate:
You cannot rely on your payment provider as your source of truth.
You must build your own ledger service.
A simplified double-entry example:
| Account | Debit | Credit |
|---|---|---|
| Rider | $20 | |
| Driver | | $15 |
| Platform | | $5 |
Every movement is recorded.
Why double-entry?
Because money cannot disappear.
If debits ≠ credits → something is broken.
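The invariant above can be checked mechanically. A minimal sketch, using the $20 ride example in cents (account names are illustrative):

```python
# One ledger entry per account movement for a single $20 ride.
ledger = [
    {"account": "rider",    "debit": 2000, "credit": 0},
    {"account": "driver",   "debit": 0,    "credit": 1500},
    {"account": "platform", "debit": 0,    "credit": 500},
]

def is_balanced(entries) -> bool:
    # The core double-entry invariant: total debits must equal total credits.
    return sum(e["debit"] for e in entries) == sum(e["credit"] for e in entries)
```

Running `is_balanced` over every transaction group is the cheapest alarm you can build: any `False` means money was created or destroyed somewhere.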
At scale, this is the difference between books that balance and money you cannot explain.
Your payment system depends on payment providers, card networks, and banks.
All of them fail.
Common nightmare scenario: the provider charges the card, but your request times out, so your system retries and charges again.
Solution? Idempotency keys.
Each payment attempt includes a unique key (e.g., ride_id).
If retried, provider recognizes the key and avoids duplicate processing.
Without idempotency?
You will double-charge users.
And you will lose trust.
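A minimal sketch of idempotent charging, with an in-memory dict standing in for the persistent store a real system would use:

```python
processed: dict[str, dict] = {}  # in production: a persistent store

def charge(ride_id: str, amount_cents: int) -> dict:
    # ride_id doubles as the idempotency key: a retry returns the
    # original result instead of charging the card a second time.
    if ride_id in processed:
        return processed[ride_id]
    result = {"ride_id": ride_id, "amount": amount_cents, "status": "charged"}
    processed[ride_id] = result
    return result

first = charge("ride_42", 2000)
retry = charge("ride_42", 2000)  # safe: same result, no double charge
```

Real providers expose this natively (an idempotency key sent with the request); the point is that the key is derived from the business event, not generated per attempt.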
Not all failures are equal.
| Error | Retry? |
|---|---|
| Network timeout | Yes |
| Rate limit | Yes |
| Insufficient funds | No |
| Fraud blocked | No |
Blind retries create chaos.
Intelligent retries create resilience.
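That classification can be encoded as a tiny policy table. A sketch, with illustrative error codes:

```python
RETRYABLE = {"network_timeout", "rate_limited"}
PERMANENT = {"insufficient_funds", "fraud_blocked"}

def should_retry(error_code: str) -> bool:
    # Transient failures are retried; permanent declines are surfaced
    # to the user. Unknown errors fail safe: no retry.
    if error_code in RETRYABLE:
        return True
    if error_code in PERMANENT:
        return False
    return False
```

The fail-safe default matters: blindly retrying an unclassified error is how you re-run a fraud-blocked charge.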
Before charging, the platform runs fraud and risk checks.
Suspicious cases trigger extra verification or manual review.
Payment systems are also fraud systems.
Ignore this, and chargebacks will destroy margins.
Refunding isn’t “reverse transaction.”
It requires reversing ledger entries, adjusting the driver’s payout, and accounting for non-refundable provider fees.
Sometimes, the platform absorbs temporary loss.
Complexity compounds over time.
Charging cards is one system.
Paying drivers is another.
Most platforms batch driver payouts on a daily or weekly schedule rather than paying out per ride.
This uses bank rails like ACH/SEPA.
Completely different from card networks.
Two financial systems under one product.
Every night, the internal ledger is reconciled against provider and bank reports.
Any mismatch triggers an alert and an investigation.
Without reconciliation?
Small inconsistencies compound into millions.
At high scale, you need asynchronous, event-driven processing.
Instead of:
Ride → Immediate Charge
You use:
RideCompleted Event → Payment Queue → Worker → Provider
Decoupling prevents cascading failures.
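A minimal sketch of that decoupling, using Python’s in-process queue as a stand-in for a real message broker:

```python
import queue

payment_queue = queue.Queue()

def on_ride_completed(ride_id: str, amount_cents: int) -> None:
    # The ride service only enqueues an event; it never calls the
    # payment provider directly.
    payment_queue.put({"ride_id": ride_id, "amount": amount_cents})

def payment_worker(charge_fn) -> list:
    # A worker drains the queue and talks to the provider. A provider
    # outage backs up the queue instead of failing ride completion.
    results = []
    while not payment_queue.empty():
        results.append(charge_fn(payment_queue.get()))
    return results

on_ride_completed("ride_1", 2000)
on_ride_completed("ride_2", 1500)
charged = payment_worker(lambda e: {**e, "status": "charged"})
```

The key property: ride completion and charging now fail independently, so a payment outage never blocks the ride flow.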
Never depend on one payment provider.
You implement multiple providers behind an abstraction layer:
charge(amount, token)
Underneath, routing logic decides where to send it.
Because outages are not “if.”
They are “when.”
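A sketch of what that routing layer might look like. Provider names and the `ProviderDown` signal are illustrative, not any real SDK:

```python
class ProviderDown(Exception):
    """Raised by a provider adapter when the provider is unavailable."""

def make_router(providers):
    # providers: ordered list of (name, charge_fn) pairs, highest priority first.
    def charge(amount_cents: int, token: str) -> dict:
        for name, fn in providers:
            try:
                return {"provider": name, **fn(amount_cents, token)}
            except ProviderDown:
                continue  # fall through to the next provider
        raise RuntimeError("all providers unavailable")
    return charge

def flaky_primary(amount, token):
    raise ProviderDown("primary outage")

def healthy_backup(amount, token):
    return {"status": "charged", "amount": amount}

charge = make_router([("primary", flaky_primary), ("backup", healthy_backup)])
result = charge(2000, "tok_123")  # silently routed to the backup
```

In practice the routing decision also weighs fees, currency, and success rates per region, not just availability.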
A ride payment system is not a single “charge the card” call.
It is tokenization, ledgers, idempotency, retries, fraud checks, payouts, reconciliation, and failover working together.
That’s why payment infrastructure is one of the hardest backend domains in the world.
When Maria stepped out of that ride in Prague, she didn’t think about tokens, ledgers, retries, or reconciliation.
She just walked into her meeting.
That’s the goal.
Great engineering makes complexity invisible.
If you’re building systems:
Don’t just design features.
Design for failure, idempotency, and auditability.
Because money systems don’t forgive mistakes.
2026-02-21 14:43:00
AI agents don't just autocomplete anymore — they architect, debug, and ship. Here's what every developer needs to know.
Something irreversible happened quietly over the past twelve months. Developers stopped writing most of their code — and started directing it. The rise of "vibe coding", a term coined by Andrej Karpathy in early 2025, describes a workflow where engineers describe intent in natural language and AI agents translate it into working software. It sounds like science fiction. It's now Monday morning for millions of developers.
| Stat | Value |
|---|---|
| Devs using AI daily (Stack Overflow 2025) | 73% |
| Faster prototyping reported by early adopters | 4× |
| AI coding tools market by 2027 | $28B |
The term is deliberately casual — and that's the point. Traditional development demanded precision: exact syntax, correct API calls, proper imports. Vibe coding flips this. You describe the feeling of what you want built, and an AI agent — Claude, GPT-4o, Gemini — iterates until it matches your intent.
"Just describe what you want and the AI figures out the how. The bottleneck shifts from syntax to clarity of thought."
— Andrej Karpathy, Feb 2025
This isn't autocomplete. Modern AI coding agents maintain context across entire codebases, run tests autonomously, read error logs, and self-correct — sometimes across dozens of iterations without human intervention.
Under the hood, the magic is a combination of retrieval-augmented generation (RAG) over your codebase, tool-use APIs that let models execute shell commands, and long-context windows now exceeding 1 million tokens. Here's what a typical agent loop looks like:
# Simplified agent loop (pseudo-code)
while task_not_complete:
    plan = llm.think(goal, codebase_context)
    action = llm.select_tool(plan)  # write_file | run_tests | search
    result = execute(action)
    codebase_context.update(result)
    if tests_pass(result):
        break
Tools like Claude Code, GitHub Copilot Workspace, and Cursor's Composer run exactly this loop — autonomously writing, running, failing, and fixing code until the task is done. Developers act as product managers: defining acceptance criteria, reviewing outputs, and steering direction.
The skill premium is shifting fast. Low-level syntax knowledge matters less; systems thinking, prompt engineering, and architectural judgment matter more. Developers who thrive are those who can break complex problems into well-specified sub-tasks that agents can execute reliably.
Practically, this means the floor for what one developer can ship alone has risen dramatically. So has the ceiling for what poor judgment can break.
A developer at a Series B startup recently described building a complete CSV import pipeline — parsing, validation, error reporting, database writes, and a UI progress bar — in under 40 minutes using an AI agent. A task that previously took two days.
This isn't an outlier. It's becoming the norm for well-scoped, clearly-specified features. The hard parts — distributed systems, novel algorithms, nuanced UX decisions — still require deep human expertise. But the "boring" 60% of a sprint? Increasingly autonomous.
Critics argue that vibe coding produces brittle, unreviewed code that accumulates technical debt at scale. They're not wrong — but they're describing a misuse, not an inherent flaw.
The developers seeing the worst outcomes treat AI output as ground truth. Those seeing the best outcomes treat every generated file as code they're responsible for owning and understanding.
The analogy is a junior engineer: exceptional output when well-directed and reviewed. A liability when left unsupervised on critical paths.
Vibe coding isn't the end of software engineering — it's its next phase. The developers who will define the next decade aren't those who resist AI agents, nor those who blindly trust them. They're the ones who learn to collaborate with them: setting clear goals, maintaining rigorous standards, and understanding what's happening beneath the surface.
The code still matters. The judgment about what to build, and whether the build is correct, matters more than ever. That's not a demotion for developers. It's a promotion.
💬 Are you vibe coding yet? Share your experience — what's working, what's broken, and what you wish you'd known sooner.
2026-02-21 14:35:31
When I took over a transaction ingestion system running in production, one of the first things I noticed was the time it took to process CSV uploads.
Users regularly uploaded files averaging around 50,000 rows, with some reaching over 600,000. The upload API handled everything synchronously: parsing the file, inserting rows into Cassandra, fetching historical prices from an external service per transaction, performing reconciliation, calculating balances and tax metrics, and only then returning a response.
In practice, this meant requests could take anywhere from five to ten minutes to complete. Frontend and Nginx timeouts had been extended to accommodate this behavior. The system technically worked, but only by allowing long blocking requests.
It became clear that this wasn’t just a performance issue. The architecture itself was tightly coupled to the request lifecycle. Heavy compute and IO operations were happening directly inside the API path.
Synchronous designs often work well at small scale. But as data size increases, the bottlenecks become harder to ignore.
At a high level, the upload endpoint was responsible for doing everything in a single request lifecycle.
Once a user uploaded a CSV file, the API would parse it, insert rows into Cassandra, fetch historical prices, reconcile each transaction, and compute metrics, all before responding.
Conceptually, it looked like this:
@app.post("/upload")
def upload_csv(file):
    rows = parse_csv(file)
    for row in rows:
        insert_into_cassandra(row)
        price = fetch_historical_price(row)
        reconciled = reconcile_transaction(row, price)
        processed = calculate_metrics(reconciled)
        update_transaction(processed)
    return {"status": "completed"}
Every upload request had to wait for parsing, Cassandra writes, per-transaction external API calls, and metric computation to finish.
There was also no automated retry mechanism. If processing failed midway, it required manual intervention.
At the same time, another API responsible for returning transaction data to the frontend fetched raw records from Cassandra and performed calculations inside the request path. That endpoint routinely took 30–40 seconds to complete.
The ingestion flow looked like this:
User
│
▼
Upload API
│
▼
Parse CSV
│
▼
Insert into Cassandra
│
▼
Fetch Historical Price (External API)
│
▼
Reconciliation
│
▼
Tax / Metrics Calculation
│
▼
HTTP Response
Each stage blocked the next. The system accumulated CPU-bound work (parsing, calculations) and IO-bound work (database writes, external API calls) inside a single HTTP request.
The first architectural decision was simple:
Move heavy work out of the request path.
Instead of completing ingestion synchronously, the upload API was redesigned to become status-based:
@app.post("/upload")
def upload_csv(file):
    job_id = create_job_record(status="queued")
    queue.publish({"job_id": job_id})
    return {"job_id": job_id, "status": "processing"}
The API now initiated a pipeline rather than executing it.
Processing was decomposed into multiple independent consumers:
Upload API
│
▼
Queue A → Insert Consumer
│
▼
Queue B → Historical Price Workers
│
▼
Queue C → Reconciliation Consumer
│
▼
Queue D → Tax / Metrics Consumer
│
▼
Read-Optimized Table
Each stage became independently scalable and observable.
Files were processed in dynamic chunks (typically 500–2000 rows depending on structure). Chunk-level parallelism was introduced using threads, configurable via environment variables (defaulted to 8).
This reduced memory spikes and improved CPU utilization during parsing and metric computation.
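A sketch of that chunking pattern. The chunk size and worker count here are illustrative; in the real system chunks were sized dynamically (500–2000 rows) and the thread count came from an environment variable:

```python
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 500
MAX_WORKERS = int(os.getenv("PARSE_WORKERS", "8"))  # env-configurable, default 8

def chunked(rows, size):
    # Yield fixed-size slices of the input rows.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def process_chunk(chunk):
    # Stand-in for per-chunk parsing + metric computation.
    return len(chunk)

def process_file(rows):
    # Process chunks in parallel threads instead of row by row.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        return sum(pool.map(process_chunk, chunked(rows, CHUNK_SIZE)))

total = process_file(list(range(1200)))  # 500 + 500 + 200 rows
```

Chunk-level units also make failure handling cheaper: a failed chunk can be retried without reprocessing the whole file.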
Originally, historical pricing APIs were called sequentially per transaction.
This was redesigned using a master–worker model built on asyncio.gather:

async def master(batch_groups):
    tasks = [worker.process(batch) for batch in batch_groups]
    results = await asyncio.gather(*tasks)
    return results
This allowed controlled concurrency, better rate-limit handling, and parallel processing instead of sequential IO.
The read API was slow because it was calculating metrics at request time.
Instead of using a Materialized View, a new read-optimized Cassandra table was introduced.
A new consumer transformed processed records into this optimized table.
The read API now simply applied filters and limits on precomputed data.
| Metric | Before | After |
|---|---|---|
| Upload API response time | 5–10 minutes | ~0.5 seconds |
| End-to-end processing time | 5–20 minutes | 2–4 minutes |
| Read API latency | 30–40 seconds | ~0.5–1 second |
| CPU utilization | ~5–8% | ~70–80% |
The improvement was not just about latency reduction. It was about reshaping how work flowed through the system.
This redesign changed how I approach ingestion systems.
When processing grows, the architecture must evolve with it.
If you’ve worked on ingestion pipelines or faced similar architectural bottlenecks, I’d be interested to hear how you approached it.
Happy to discuss trade-offs or alternative designs.
2026-02-21 14:20:33
Most modern apps require date pickers - from SaaS dashboards and booking systems to analytics filters and admin panels.
We tested and reviewed 9 free Shadcn date picker components from real repositories and component libraries. This list focuses on real developer needs, such as timezone handling, date ranges, form integration, and production readiness.
This guide is based on actual component code, GitHub activity, TypeScript support, and integration with React and Next.js.
We installed and tested each react date picker in a modern Next.js App Router project to verify real-world compatibility.
We validated every component for:
Installation inside Next.js App Router
Tested with strict TypeScript mode enabled
Controlled and uncontrolled usage patterns
Integration with react-hook-form
Date range and datetime behavior
Timezone handling (where supported)
SSR and hydration safety
Dependency footprint (react-day-picker, date-fns, etc.)
GitHub activity and maintenance status
We only included components that are actively maintained, reusable, and production-ready.
All components listed here are 100% free and open source.
Across the list, you’ll find support for three primary selection modes:
Date Picker - Select a single calendar date
Date & Time Picker - Allows selection of both date and time
Date Range Picker - Select a start and end date
When you should use a Shadcn date picker
Shadcn date pickers are ideal for:
SaaS analytics dashboards for filtering data by date
Booking and scheduling systems - for single or range date selection
Admin panels with reporting filters
Financial tools that analyze date-based metrics
CRM systems that track activity history
Any application already using shadcn/ui and Tailwind CSS
| Criteria | What to Check |
|---|---|
| Selection Type | Single date, range, or datetime support |
| Form Handling | Works with controlled inputs and form libraries |
| Styling | Compatible with Tailwind CSS |
| Timezone / Localization | Needed for global or regional apps |
| Customization | Supports custom trigger, popover, or layout |
| Dependencies | Uses modern libraries like react-day-picker or date utilities |
If you prefer a quick overview before diving into implementation details, here’s a side-by-side comparison:
| Component | Picker Type | Range Support | Timezone Support | Form Friendly | Best For |
|---|---|---|---|---|---|
| Shadcn Space | Datetime + Range | ✅ | ❌ | ✅ | SaaS dashboards |
| Tailwindadmin | Date + Range | ✅ | ❌ | ✅ | Admin panels |
| Datetime Picker (huybuidac) | Datetime | ❌ | ✅ | ✅ | Global SaaS apps |
| Date Range Picker (johnpolacek) | Date Range | ✅ | ❌ | ✅ | Analytics filtering |
| Shadcn Date Picker (flixlix) | Date + Range | ✅ | ❌ | ✅ | General applications |
| Shadcn Calendar (sersavan) | Date + Range | ✅ | Partial | ✅ | Custom dashboards |
| Date Time Picker (Rudrodip) | Datetime + Range | ✅ | ❌ | ✅ | Booking systems |
| Datetime Picker (Maliksidk19) | Datetime | ❌ | ❌ | ✅ | Internal tools |
| Persian Calendar (MehhdiMarzban) | Date + Range | ✅ | Locale-based | ✅ | Regional apps |
Below is a curated list of free, production-ready Shadcn date picker components. Each component has been thoroughly tested for integration with React, Next.js, TypeScript, and Tailwind CSS.
This collection provides multiple ready-to-use date picker components built specifically for shadcn/ui projects. It includes standard date pickers, calendar popovers, and form-integrated pickers. All components follow shadcn component architecture, making them easy to integrate into existing projects.
Tech stack: ShadcnUI v3.5, Base UI v1, React v19, TypeScript v5, Tailwind CSS v4
Last Updated: Feb 2026
Key features:
Best for: SaaS dashboards, admin panels, and internal tools
This component provides production-ready date picker examples used in real dashboard interfaces. It includes calendar dropdown picker and input-based picker implementations. The code follows modular patterns suitable for scalable dashboard systems.
Tech stack: ShadcnUI v3.5, Next.js v16, React v19, TypeScript v5, Tailwind CSS v4
Last Updated: Feb 2026
Key features:
Best for: Admin dashboards and analytics interfaces
This is a fully customizable component that simplifies date and time selection in React applications built with Shadcn UI. Whether you need a datetime, date, or time picker, it provides a responsive, user-friendly interface and seamless integration for modern applications.
Tech stack: ShadcnUI v2, Next.js v14, React v18, Radix UI v1, Tailwind CSS v3
GitHub Stars: 202
Last Updated: 2024
Key features:
Best for: SaaS apps with timezone and datetime requirements
This is a reusable component built for Shadcn using beautifully designed components from Radix UI and Tailwind CSS. It provides a dropdown interface to allow users to select or enter a range of dates and includes additional options such as preset date ranges and an optional date comparison feature.
Tech stack: Radix UI v1, Mocha.js v10, React v18, Jest v29.5, Tailwind CSS v3
GitHub Stars: 1K+
Last Updated: 2024
Key features:
Best for: Analytics dashboards and reporting systems
This custom Shadcn component aims to provide a more advanced alternative to the default date picker component. It is built on top of the react-day-picker library, which provides a wide range of customization options.
Tech stack: ShadcnUI v2.6, Next.js v15, Radix UI v1, React v19, Tailwind CSS v3
GitHub Stars: 363
Last Updated: Dec 2025
Key features:
Best for: General application date selection
This is a reusable calendar and date range picker built for shadcn/ui projects. It is designed for React and Next.js apps using TypeScript and Tailwind CSS. The component focuses on clean UI, easy customization, and smooth date selection. It helps developers quickly add flexible calendar functionality to modern web applications.
Tech stack: Next.js v14, Radix UI v1, Zod v3, React v18, Tailwind CSS v3
GitHub Stars: 327
Last Updated: Dec 2025
Key features:
Best for: Custom calendar integrations
This project features a range of Date and Time picker components built with ShadCN. These examples demonstrate the versatility and functionality of the component across various use cases.
Tech stack: Next.js v14, Radix UI v1, Zod v3, React v18, Tailwind CSS v3
GitHub Stars: 283
Last Updated: May 2025
Key features:
Best for: Booking systems and scheduling apps
This project provides a beautifully crafted datetime picker component built using the Shadcn UI. It offers an intuitive interface for selecting dates and times in React applications.
Tech stack: Next.js v15, Radix UI v1, React v19, Tailwind CSS v3
GitHub Stars: 266
Last Updated: March 2025
Key features:
Best for: Internal tools and admin apps
This is a beautiful, accessible, and customizable Persian (Jalali) date picker component for React applications built with Shadcn UI components.
Tech stack: Next.js v15, Radix UI v1, React v19, Tailwind CSS v3
GitHub Stars: 27
Last Updated: Feb 2025
Key features:
Best for: Persian and regional applications
Date pickers from Shadcn Space and Tailwindadmin are strong choices because their components are regularly updated and well-maintained. They offer support for analytics filtering and are built with a scalable component architecture, making them reliable for growing applications.
The datetime picker by huybuidac supports timezone selection, min date, and max date validation. This is useful for global SaaS applications.
All components listed are built with React, TypeScript, and Tailwind CSS, and work directly in Next.js apps.
These 9 free shadcn date picker components provide production-ready solutions for modern applications. They support core needs like date selection, datetime input, analytics filtering, and scheduling.
For most SaaS and dashboard applications, the datetime picker by Shadcn Space and the date range picker by johnpolacek provide the best flexibility and scalability.
2026-02-21 14:18:19
Hi Everyone reading,
In the previous article, we explored Intermediate Operations in Java Streams — how they transform and prepare data.
Now it’s time to understand the final and most important step of stream processing: terminal operations.
Without a terminal operation, a stream does nothing.
Let’s understand why.
Terminal operations trigger execution of the pipeline, produce a result or side effect, and close the stream.
Example
List<Integer> list = List.of(1,2,3,4,5);

list.stream()
    .filter(n -> n % 2 == 0)
    .forEach(System.out::println); // Terminal operation
Here:
filter() → Intermediate
forEach() → Terminal (executes the stream)
Streams are lazy.
Intermediate operations build a pipeline, but execution starts only when a terminal operation is called.
No terminal operation → No processing.
Terminal operations can be grouped into iteration, collection, reduction, and matching/searching operations.
Let’s explore each.
1 forEach()
Used to iterate over elements.
List<String> names = List.of("Ram", "Amit", "Shyam");
names.stream()
.forEach(System.out::println);
Output
Ram
Amit
Shyam
Note: Prefer forEachOrdered() when working with parallel streams and order matters.
2 collect()
Most powerful terminal operation.
Used to collect stream elements into a List, Set, Map, or other container.
Collect to List
List<Integer> evenNumbers =
    List.of(1,2,3,4,5).stream()
        .filter(n -> n % 2 == 0)
        .collect(Collectors.toList()); // [2,4]

Collect to Map

Map<String, Integer> map =
    List.of("Ram", "Amit", "Shyam").stream()
        .collect(Collectors.toMap(
            name -> name,
            name -> name.length()
        )); // {Ram=3, Amit=4, Shyam=5}
3 reduce()
Used to combine elements into a single result.
Example: Sum of numbers
int sum = List.of(1,2,3,4)
    .stream()
    .reduce(0, (a,b) -> a + b);

System.out.println(sum);
Output
10
Used for sums, products, string concatenation, and similar aggregations.
4 count()
Returns number of elements.
long count = List.of(1,2,3,4,5)
    .stream()
    .filter(n -> n > 2)
    .count();

System.out.println(count);
Output
3
In the above example, we only count the numbers that are greater than 2.
5 anyMatch(), allMatch(), noneMatch()
Used for condition checking.
boolean anyEven =
    List.of(1,3,5,6).stream()
        .anyMatch(n -> n % 2 == 0); // true

boolean allPositive =
    List.of(1,2,3).stream()
        .allMatch(n -> n > 0); // true

boolean result =
    List.of(5, 10, 15, 20).stream()
        .noneMatch(n -> n < 0); // true
These return boolean.
6 findFirst() and findAny()
Return an Optional.
Optional<Integer> first =
    List.of(10,20,30)
        .stream()
        .findFirst(); // Optional[10]

findAny() is useful with parallel streams.

Optional<Integer> result =
    List.of(10, 20, 30, 40, 50)
        .parallelStream()
        .findAny(); // Optional[30]
Note: findAny() may return any element of the list. Here it returned Optional[30], but it could just as easily have been 10, 20, 40, or 50, especially with parallel streams.
7 min() and max()
Find minimum or maximum element.
Optional<Integer> max =
    List.of(1,5,2,9,3)
        .stream()
        .max(Integer::compareTo); // 9

Optional<Integer> min =
    List.of(1,5,2,9,3)
        .stream()
        .min(Integer::compareTo); // 1
| Feature | Description |
|---|---|
| Triggers Execution | Executes all intermediate operations |
| Produces Result | Returns value or side effect |
| Closes Stream | Stream cannot be reused |
| Non-lazy | Actually performs computation |
Stream<Integer> stream = Stream.of(1,2,3);
stream.forEach(System.out::println);
// This will throw IllegalStateException
stream.count();
Once a terminal operation is called, the stream is closed.
Problem statement: For a given list of employees, find the average salary.
class Employee {
    String name;
    int salary;

    Employee(String name, int salary) {
        this.name = name;
        this.salary = salary;
    }

    public String toString() {
        return name + " : " + salary;
    }
}

public class Main {
    public static void main(String[] args) {
        List<Employee> employees = List.of(
            new Employee("Ram", 50000),
            new Employee("Amit", 70000),
            new Employee("Shyam", 40000),
            new Employee("Ankit", 90000)
        );

        // Get average salary
        double avgSalary = employees.stream()
            .mapToInt(e -> e.salary)
            .average()
            .orElse(0);

        System.out.println("Average Salary: " + avgSalary);
    }
}
Output
Average Salary: 62500.0
| Intermediate | Terminal |
|---|---|
| Returns Stream | Returns result |
| Lazy | Executes pipeline |
| Can chain | Ends stream |
| filter(), map() | collect(), reduce(), count() |
Terminal operations are the final step in stream processing.
They trigger execution, produce the final result or side effect, and close the stream.
Mastering terminal operations like collect(), reduce(), count(), and findFirst() makes you confident with Java Streams in real-world applications.
In the next article, we will explore Collectors in depth.
2026-02-21 14:14:20
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
The Hook
I shipped a focused devlog pipeline update that turns my reading queue into concrete build decisions for the week.
Why I Built It
My days were getting noisy: too many good posts, not enough synthesis. I wanted a lightweight path from "interesting idea" to "actionable build," and a way to record it so I can see patterns over time.
The Solution
I wired a simple flow that separates signal capture, decision pressure, and actual build notes.
graph TD;
A[Collect reading signals] --> B{Is it actionable?};
B -- Yes --> C[Extract constraints + risks];
B -- No --> D[Save for later];
C --> E[Map to builds];
E --> F[Devlog write-up];
capture -> filter -> extract -> map -> ship
signals = read_queue()
insights = [s for s in signals if s.actionable()]
notes = summarize(insights)
plan = map_to_builds(notes)
write_devlog(plan, notes)
Raw logs:
Read queue normalized, 10 sources tagged, 4 insights promoted to build notes.
:::note
Small, repeatable loops beat big, fragile systems.
:::
:::tip
If an insight can't change a build plan, it's just trivia.
:::
:::warning
Don't let "nice to know" overwhelm "need to ship."
:::
:::danger
Automating without guardrails turns your roadmap into a blender.
:::
What I Learned
I still have a few items queued to dig into deeper (Pantheon's new dashboard traffic metrics, Gutenberg 22.5 notes, and a Drupal community values post), but the themes above were enough to shape next week's build priorities.
Originally published at VictorStack AI Blog