2026-02-01 14:56:07
I recently stumbled upon a website called Talkrush – Stranger Chatting, and it genuinely messed with my understanding of what a “static site” can do.
At first glance, it looks like a simple GitHub Pages project. No backend, no login system, no obvious APIs.
👉 Homepage:
Talkrush – Stranger Chatting
But once you start clicking around, it turns into something else entirely.
You suddenly get real-time features: stranger matching, group chat rooms, peer-to-peer video calls.
All of this works smoothly — and that’s where the confusion begins.
There’s a dedicated rooms page, and a separate group chat page.
What really surprised me is that the same group chat HTML file behaves like multiple different rooms just by changing a query parameter.
For example, append a different room identifier as a query parameter, and that single group.html file becomes a completely separate chat room.
No server-side routing.
No backend-generated pages.
Just a static file reacting to the URL.
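Mechanically, the room-from-URL part is easy to picture: a static page can read its own query string. This is only a sketch of the idea, not Talkrush’s actual code, and the parameter name "room" is my assumption:
// Derive a room identity purely on the client, from the page URL.
const params = new URLSearchParams(window.location.search);
const roomId = params.get("room") ?? "lobby"; // "room" is a guessed name

// Everything room-specific (signaling channel, message scope, UI title)
// can then be keyed off this single client-side value.
document.title = `Room: ${roomId}`;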
This raises a big question:
How are room identity and isolation handled purely on the client side?
The peer-to-peer part raises even more questions.
Users can apparently find strangers, join rooms, and start video calls directly from the browser. Traditionally, you’d expect a visible signaling server, WebSocket endpoints, or backend APIs making that possible. But here, none of that is visible.
So what’s actually happening?
Some likely possibilities: a hosted signaling service, a serverless endpoint the static frontend calls, or a third-party WebSocket provider handling the matchmaking.
Even if the site looks static, signaling still has to happen somewhere — browsers can’t magically discover each other without exchanging offers and ICE candidates.
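For context, this is roughly what that exchange looks like on the browser side. A minimal sketch using the standard WebRTC API; sendToPeer stands in for whatever signaling transport the site actually uses:
// sendToPeer is a placeholder for the signaling transport (WebSocket,
// hosted service, etc.), which must exist somewhere even if hidden.
declare function sendToPeer(msg: unknown): void;

async function startCall() {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // ICE candidates have to reach the other peer via the signaling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeer({ type: "candidate", candidate: event.candidate });
  };

  // So does the offer/answer handshake.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ type: "offer", sdp: offer.sdp });
}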
What makes this especially impressive is how clean the experience feels. That suggests careful client-side state handling rather than luck. All of it is driven by the URL, browser APIs, and JavaScript, with no traditional backend routing required.
From what I can tell, this project is built by Aman Kumar, and pulling off stranger chats, group rooms, and video calls in what appears to be a minimal static setup is genuinely impressive.
It seriously challenges the assumption that:
“Real-time apps must always be backend-heavy.”
Modern browser APIs — especially WebRTC, WebSockets, and smart client-side architecture — can take you much further than most people expect.
At this point, I’m not sure if I’m missing something obvious, or if this is simply a great example of how far frontend-only architecture can be pushed in 2026.
If you’ve built or reverse-engineered something similar, I’d love to hear how you would approach building something like this.
Project Link (for reference):
Talkrush – Stranger Chatting by Aman Kumar
2026-02-01 14:49:25
In modern development practices, protecting Personally Identifiable Information (PII) within test environments is paramount. Legacy codebases, often riddled with monolithic and unrefined code, pose significant challenges in implementing security measures. As a senior architect, my focus has been on creating a robust, maintainable, and non-intrusive approach to prevent PII leaks when using TypeScript on aged systems.
One critical problem is ensuring that sensitive data doesn't inadvertently flow into logs, test reports, or frontend outputs during testing phases. My solution hinges on integrating static type safety, operational interceptors, and controlled data sanitization—all within a gradually adoptable strategy.
The first step is to enforce strict data models by leveraging TypeScript's type system. Instead of using generic any or loosely typed objects, I define explicit interfaces for data containing PII:
interface UserData {
id: string;
name: string;
email: string;
ssn?: string; // Sensitive info
}
To prevent accidental leaking, I create a utility function that sanitizes or masks PII data:
function sanitizeUserData(user: UserData): UserData {
return {
...user,
email: 'redacted@example.com', // mask the real address
ssn: user.ssn ? 'REDACTED' : undefined
};
}
This approach ensures any data passed through this function is compliant, minimizing human errors.
In legacy systems, direct data flows are pervasive, making it hard to control all outputs. I introduce interceptors at API boundary points or before serialization. For example, wrapping API response functions:
async function fetchUser(id: string): Promise<UserData> {
const user = await legacyFetchUser(id); // Legacy fetch
return sanitizeUserData(user); // Sanitized before returning
}
Similarly, for logging or test output, I ensure all sensitive data is sanitized:
function logTestData(data: UserData) {
const safeData = sanitizeUserData(data);
console.log('Test Output:', JSON.stringify(safeData));
}
Since rewriting the entire legacy codebase isn't feasible immediately, I adopt a wrapper pattern that adds safety layers without invasive changes:
function safeLegacyFetchUser(id: string): Promise<UserData> {
return legacyFetchUser(id).then(sanitizeUserData);
}
This pattern allows me to incrementally retrofit the system with PII safeguards, verifying each step in staging environments.
I also enforce policies through strict compiler settings (noImplicitAny, strictNullChecks) and custom ESLint rules. This ensures developers are alerted early when trying to handle data improperly.
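Beyond compiler flags, the type system itself can enforce sanitization at boundaries. Here is a minimal sketch of a branded type, my own extension of the approach above rather than a prescribed step:
// A branded type: structurally identical to UserData, but tagged so the
// compiler can tell sanitized values apart from raw ones.
type Sanitized<T> = T & { readonly __sanitized: true };

function sanitizeUserDataStrict(user: UserData): Sanitized<UserData> {
  return {
    ...user,
    email: 'redacted@example.com', // masked placeholder
    ssn: user.ssn ? 'REDACTED' : undefined,
  } as Sanitized<UserData>;
}

// Logging now rejects raw UserData at compile time.
function logTestDataStrict(data: Sanitized<UserData>) {
  console.log('Test Output:', JSON.stringify(data));
}
Any call site that tries to pass a raw UserData to logTestDataStrict now fails to compile until the data has gone through the sanitizer.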
Beyond static measures, I set up audit logs that record data access, ensuring compliance. I use code reviews and static analysis tools to flag potential leaks.
Securing PII in legacy TypeScript applications calls for a layered strategy combining strict type models, data sanitization functions, interceptor patterns, gradual refactoring, and ongoing monitoring. This approach reduces risk exposure while respecting the constraints of existing systems, paving the way for a more secure development lifecycle.
Adopting these best practices ensures that even in complex, old systems, sensitive data remains protected without sacrificing agility or project timelines.
To test this safely without using real user data, I use TempoMail USA.
2026-02-01 14:37:28
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
Hi everyone! I'm a Software Engineer based in Bangladesh with over 3 years of experience specializing in Java and Spring Boot.
Currently, I work at Technonext (a sister concern of US-Bangla Airlines), where I lead a small team building complex backend architectures for a ride-sharing application like Uber. While my "home base" is backend engineering with some Angular on the side, I believe a modern developer should never stop learning.
This challenge pushed me to dive into Google Cloud Platform for the first time — and I'm glad it did. Deploying to Cloud Run, configuring containers, and integrating Gemini AI opened up a whole new world.
But beyond the tech, I wanted my portfolio to have personality. Not just another skills grid and project list. Something that talks back.
Enter DS-7 — an AI assistant that knows my entire career and adapts its tone based on who's asking. Recruiters get professional answers. Developers get witty terminal responses.
This portfolio represents that journey — combining my backend roots, frontend skills, and the power of Google's AI tools.
💡 Pro tip: Click the pulsing terminal icon in the bottom-right corner. Try typing ls, sudo, or just ask "Why should I hire him?"
(Note: The embedded site above is live on Google Cloud Run. Click the chat icon to ask the AI about my skills!)
I didn't want just a static HTML page. As a backend engineer, I wanted to showcase a full-stack architecture that is robust, scalable, and "smart."
This project was born in the cloud.
It runs fully on Google Cloud, deployed to Cloud Run (asia-south1 region). Beyond development, AI is at the core of the user experience: the DS-7 assistant described above is powered by Gemini.
I also wrote the gcloud deployment scripts and set up the CI/CD pipeline, ensuring a smooth path to production. I am most proud of the seamless integration of new AI tools into a professional DevOps workflow.
Taking a project generated by Antigravity, polishing it with Gemini, and having it deploy automatically to Cloud Run via a CI/CD pipeline felt like the future of software engineering. Seeing the "Hello World" turn into a fully functional, AI-powered application running on Google's infrastructure was the perfect start to 2026!
2026-02-01 14:25:57
Loss functions are the quiet engine behind every machine learning model. They serve as the critical feedback loop, translating the abstract concept of error into a value that a computer can minimize. By quantifying the difference between a model’s prediction and the ground truth, the loss function provides the gradient signal that the optimizer uses to update the network's weights.
In essence, if the model architecture is the body of an AI, the data is its fuel, and the loss function is its central nervous system, constantly measuring pain (error) and instructing the model how to move to avoid it. Understanding which loss function to use is often the difference between a model that converges in minutes and one that never learns.
This guide introduces loss functions from first principles, explains the most common ones, and shows how to use them effectively in PyTorch.
The Two Pillars of Machine Learning: Regression vs. Classification
At the base of every machine learning problem, the objectives generally converge into two main classes: Regression and Classification. Once this split is understood, it becomes clear that the choice of a loss function is not arbitrary; it is a direct consequence of the mathematical nature of your output.
Once we understand whether our task is predicting continuous values (regression) or discrete categories (classification), the landscape of loss functions becomes far easier to navigate. Every loss function in PyTorch is essentially a specialized tool built on top of these two pillars.
With that foundation in place, we can now explore how this split shapes the design of loss functions in PyTorch and how different tasks extend these two core ideas into more advanced forms, such as multi‑label classification, segmentation, and detection.
Regression Losses (Continuous Outputs)
Regression problems involve predicting continuous numerical values such as house prices, a person’s age, tomorrow’s temperature, or even pixel intensities in an image. In these tasks, the “error” is simply the distance between two points on a number line: the true value and the predicted value.
As a result, regression loss functions are fundamentally distance‑based. They quantify how far predictions deviate from targets and penalize larger deviations more heavily (or more gently), depending on the specific loss function.
Common PyTorch Regression Losses
All these losses share one goal: They measure the distance between predicted and true values.
Mean Squared Error (MSE)
Mean Squared Error (MSE) is the most widely used loss function for regression. It measures the average of the squared differences between predicted and actual values.
By squaring the error, MSE ensures two important properties: the error is always non-negative, and large deviations are penalized far more heavily than small ones.
Minimizing MSE is equivalent to maximizing the likelihood of the data under a Gaussian (Normal) noise model. This makes MSE particularly effective when you want the model to strongly avoid large deviations.
PyTorch Implementation
import torch.nn as nn
criterion = nn.MSELoss()
You would typically use it inside a training loop like:
loss = criterion(predictions, targets)
import torch
import torch.nn as nn
# 1. Initialize the Loss
criterion = nn.MSELoss()
# 2. Example Data (Batch size of 2)
predictions = torch.tensor([2.5, 0.0], requires_grad=True)
targets = torch.tensor([3.0, -0.5])
# 3. Calculate Loss
loss = criterion(predictions, targets)
print(f"MSE Loss: {loss.item()}")
# Manual Calculation:
# ((2.5 - 3.0)**2 + (0.0 - (-0.5))**2) / 2
# = (0.25 + 0.25) / 2
# = 0.25
L1 Loss (MAE)
L1 Loss, also known as Mean Absolute Error (MAE), measures the average absolute difference between predicted and true values. Unlike MSE, which squares the error, L1 applies a linear penalty. This makes it more robust to outliers, since large errors do not explode quadratically. If your dataset contains corrupted data or extreme anomalies, MSE tends to overfit to them (skewing the model), whereas MAE treats them with less urgency.
Where MSE aggressively punishes large deviations, L1 treats all errors proportionally. This often leads to models that learn the median of the target distribution rather than the mean. As always in engineering, there is a trade-off: the gradient is constant (either 1 or -1), meaning it doesn't decrease as you get closer to the target. This can make it harder for the model to make fine-tuned adjustments at the very end of training compared to MSE.
L1 Loss is useful when your data contains outliers or corrupted labels, or when median-like behavior is preferable to chasing the mean.
Optimization can be slower and less smooth than MSE. However, the trade‑off is improved stability in noisy environments.
import torch.nn as nn
criterion = nn.L1Loss()
Usage inside a training loop:
loss = criterion(predictions, targets)
import torch
import torch.nn as nn
# 1. Initialize the Loss
criterion = nn.L1Loss()
# 2. Example Data (Batch size of 2)
predictions = torch.tensor([2.5, 0.0], requires_grad=True)
targets = torch.tensor([3.0, -0.5])
# 3. Calculate Loss
loss = criterion(predictions, targets)
print(f"L1 Loss (MAE): {loss.item()}")
# Manual Calculation:
# (|2.5 - 3.0| + |0.0 - (-0.5)|) / 2
# = (0.5 + 0.5) / 2
# = 0.5
Classification Loss Functions
Classification problems deal with discrete categories, not continuous values. Instead of predicting a single numeric output, the model produces a probability distribution over possible classes. The goal is not to minimize distance on a number line, but to assign high probability to the correct class and low probability to all others.
Because of this, classification loss functions measure how well the predicted probability distribution aligns with the true distribution. They quantify the uncertainty, surprise, or information mismatch between what the model believes and what is actually correct.
At their core, classification losses answer one fundamental question:
“How wrong is the model’s predicted probability for the correct class, and how confidently wrong is it?”
This matters because a model that is confidently wrong should be penalized more heavily than one that is uncertain; the short check below makes this concrete. PyTorch covers the different flavors of classification with the losses described in the following sections.
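A quick numeric sanity check of that intuition (my own addition): the loss contribution of the true class is essentially -log(p), so being confidently right is cheap and being confidently wrong is very expensive.
import torch
# Probability the model assigns to the TRUE class in three scenarios.
p = torch.tensor([0.9, 0.5, 0.1])  # confidently right, uncertain, confidently wrong
print(-torch.log(p))
# tensor([0.1054, 0.6931, 2.3026]) -> tiny, moderate, large penalty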
BCELoss (Binary Classification)
BCE is used when the task has two classes and the model outputs a single probability (after the sigmoid activation). It measures how close the predicted probability is to the true binary label. To use this function, the input must be probabilities (values between 0 and 1), and the Sigmoid activation function must be applied to your model's last layer before passing the output to this loss.
Note that if the model outputs exactly 0 or 1, the log term becomes −∞. Because of this sensitivity to numerical instability, BCEWithLogitsLoss is generally preferred.
import torch
import torch.nn as nn
criterion = nn.BCELoss()
preds = torch.tensor([0.8, 0.2], requires_grad=True) # probabilities
targets = torch.tensor([1.0, 0.0])
loss = criterion(preds, targets)
print(f"BCE Loss: {loss.item()}")
CrossEntropyLoss (Multi‑Class Classification)
It is the standard loss function for multi‑class, single‑label classification, where each input belongs to exactly one class (e.g., MNIST digits 0-9 or ImageNet). It combines nn.LogSoftmax() and nn.NLLLoss() in a single class, computing the cross‑entropy that quantifies the information lost when the model’s predicted distribution replaces the true distribution. High probability for the correct class leads to low loss, while confident wrong predictions lead to very high loss.
import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 1.0, 0.1]]) # raw scores
targets = torch.tensor([0]) # correct class index
loss = criterion(logits, targets)
print(f"CrossEntropy Loss: {loss.item()}")
NLLLoss (Negative Log‑Likelihood Loss)
NLLLoss computes the negative log‑likelihood of the correct class. It is used when the model outputs log‑probabilities, typically via nn.LogSoftmax. It is essentially Cross‑Entropy without the softmax step: it doesn’t compute logs or likelihoods itself; it simply selects the log‑probability of the correct class from your model’s output (e.g., picking −0.5 from [-1.2, -0.5, -2.3] when the target index is 1) and returns its negative as the loss.
You must apply log_softmax manually before passing values to NLLLoss.
import torch
import torch.nn as nn
# 1. The Model Output (Must be Log-Probabilities!)
# Imagine we have 3 classes.
# We MUST use LogSoftmax first.
m = nn.LogSoftmax(dim=1)
logits = torch.tensor([[0.1, 2.0, -1.0]]) # Raw scores
log_probs = m(logits)
# log_probs is now approx [-2.1, -0.2, -3.2]
# 2. The Target
target = torch.tensor([1]) # The correct class is index 1
# 3. The Loss
criterion = nn.NLLLoss()
loss = criterion(log_probs, target)
print(f"Calculated Loss: {loss.item()}")
# It simply grabbed the value at index 1 (-0.2),
# and flipped the sign to 0.2.
BCEWithLogitsLoss (Binary or Multi‑Label)
BCEWithLogitsLoss is simply Binary Cross‑Entropy applied directly to raw logits, with a built‑in sigmoid activation. Instead of asking you to apply sigmoid() yourself and then compute BCE, PyTorch wraps both steps into one stable operation.
This matters because manually applying a sigmoid can cause numerical instability: extremely large or small logits can overflow or underflow when converted to probabilities. By combining the sigmoid and BCE into a single optimized function, PyTorch avoids these issues and produces more reliable gradients.
This makes BCEWithLogitsLoss the recommended choice for both binary classification and multi‑label classification, where each class is treated as an independent yes/no prediction.
It accepts raw logits, applies sigmoid internally, and then computes BCE safely and efficiently.
import torch
import torch.nn as nn
# 1. Initialize the Loss
criterion = nn.BCEWithLogitsLoss()
# 2. Example Data (Binary or Multi‑Label)
logits = torch.tensor([1.2, -0.8], requires_grad=True) # raw model outputs
targets = torch.tensor([1.0, 0.0]) # true labels
# 3. Calculate Loss
loss = criterion(logits, targets)
print(f"BCEWithLogits Loss: {loss.item()}")
# Internally:
# - Applies sigmoid to logits
# - Computes Binary Cross‑Entropy on the resulting probabilities
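As a quick check (my own addition, not part of the snippet above), you can verify that the fused loss matches applying sigmoid and BCELoss by hand; the fused version simply has better numerical behavior:
import torch
import torch.nn as nn

logits = torch.tensor([1.2, -0.8])
targets = torch.tensor([1.0, 0.0])

fused = nn.BCEWithLogitsLoss()(logits, targets)
manual = nn.BCELoss()(torch.sigmoid(logits), targets)

print(fused.item(), manual.item())  # equal up to floating-point error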
How to Choose the Right Loss Function
Choosing the right loss function is one of the most important decisions in any machine learning project. The loss determines what the model learns, how it learns, and how stable training will be. A model can have the perfect architecture and optimizer, but with the wrong loss function, it will fail to converge or learn the wrong objective entirely.
The key is to match the loss function to three things:
The type of prediction you are making: The type of prediction you are making matters because every loss function is designed for a specific output structure. Continuous values require distance‑based losses like MSE or MAE, single‑class predictions require softmax‑based losses like CrossEntropyLoss, and multi‑label or binary predictions require sigmoid‑based losses like BCEWithLogitsLoss.
The distribution of your data: It matters because losses behave differently when classes are imbalanced, noisy, or skewed. Imbalanced datasets require class weights to prevent the model from collapsing onto majority classes, while noisy or heavy‑tailed data may need more robust choices, such as MAE for regression or label smoothing with CrossEntropyLoss for classification, to ensure stable learning.
The structure of your outputs: Every loss function expects predictions in a specific shape: single logits for binary tasks, a vector of class logits for multi‑class tasks, or multi‑hot vectors for multi‑label tasks. If your model’s output format doesn’t match what the loss is designed for, the gradients become meaningless and training breaks down.
Once you understand these three dimensions, choosing a loss becomes systematic rather than a matter of guesswork.
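As a compact reference, here is the mapping in code form (a sketch of the pairings discussed above, not an exhaustive list):
import torch.nn as nn

# Task -> typical loss. All of these expect raw logits except MSE/L1,
# which expect raw continuous predictions.
loss_for_task = {
    "regression":  nn.MSELoss(),            # or nn.L1Loss() when outliers dominate
    "binary":      nn.BCEWithLogitsLoss(),  # one logit per sample, float targets
    "multi_class": nn.CrossEntropyLoss(),   # class-index targets, not one-hot
    "multi_label": nn.BCEWithLogitsLoss(),  # multi-hot float targets
}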
Common Mistakes When Using Loss Functions in PyTorch
Using softmax or sigmoid before the loss: CrossEntropyLoss and BCEWithLogitsLoss are designed to take raw logits; adding these activations manually distorts the gradients, causes numerical instability, and leads to slower or failed training.
Choosing the wrong loss for the task: Each loss is designed for a specific prediction structure. Using CrossEntropyLoss for multi‑label data or BCE for multi‑class problems produces incorrect gradients and prevents the model from learning the intended objective.
Incorrect target format: Loss functions expect labels in a very specific structure. CrossEntropyLoss requires class indices (not one‑hot vectors), while BCEWithLogitsLoss requires float labels for each class, so giving the wrong format leads to shape mismatches, silent errors, or completely incorrect gradients.
Ignoring class imbalance: This is a common mistake because models naturally favor majority classes. Without class weights or pos_weight, the loss becomes misleadingly low and the model learns to ignore rare but important classes (see the sketch after this list).
Misunderstanding logits: Logits are raw, unbounded scores, not probabilities, and treating them as probabilities leads to incorrect preprocessing and broken training.
Shape mismatches: They are equally common because loss functions expect predictions and targets to have compatible dimensions, and even a missing or extra batch or class dimension can cause cryptic runtime errors or silently incorrect learning.
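To make the target-format and imbalance points concrete, here is a small sketch (the 9:1 imbalance ratio is made up for illustration):
import torch
import torch.nn as nn

# Correct target format for CrossEntropyLoss: class indices, not one-hot.
ce = nn.CrossEntropyLoss()
logits = torch.randn(4, 3)            # batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 2])  # integer class indices
print(ce(logits, targets))

# Handling imbalance in a binary task: if negatives outnumber positives
# roughly 9:1, up-weight the positive class with pos_weight.
bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([9.0]))
bin_logits = torch.randn(4)
bin_targets = torch.tensor([1.0, 0.0, 0.0, 0.0])  # float labels
print(bce(bin_logits, bin_targets))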
2026-02-01 14:25:49
Disclaimer: This post is a mixture of ranting about the complexity of the contemporary frontend and my thoughts on how to solve it. I have been away from the frontend/React world for a while, so this post might contain some out-of-date ideas. Please let me know if you find any while reading. Also, although I only discuss React/Next.js here, I think the argument extends to the entire frontend ecosystem as of now, regardless of framework, so I will use the words "frontend" and "React" interchangeably throughout the post.
Recently, I was involved in a task to develop an application whose frontend was to be built in Next.js. I had done a bunch of tasks involving React and Next.js before, but back then I never had to deal with the design part; all I was doing was tossing and turning data from the backend and writing React components to present it. This time, however, I had to seriously consider the visual design as well, from the layout down to the font color of a small text under a <div> tag. And man, it was super difficult!
But why? Why was it that difficult? Sure, the sheer volume of CSS documentation on MDN was overwhelming, but that was a pretty minor issue. The harder part was that it was very, very easy to write messy React components (here, "messy" means it is difficult to recognize the purpose or responsibility of a component, and very hard to track its behavior and state updates).
But before you blame this on my skill issues, calm down and think back to the last React/Next.js code you encountered. If you can't recall any, visit a few pages about advanced topics in the official React documentation. For example, the following is an excerpt from the page about managing input with state:
import { useState } from 'react';

// (submitForm is defined elsewhere in the docs example)
export default function Form() {
const [answer, setAnswer] = useState('');
const [error, setError] = useState(null);
const [status, setStatus] = useState('typing');
if (status === 'success') {
return <h1>That's right!</h1>
}
async function handleSubmit(e) {
e.preventDefault();
setStatus('submitting');
try {
await submitForm(answer);
setStatus('success');
} catch (err) {
setStatus('typing');
setError(err);
}
}
function handleTextareaChange(e) {
setAnswer(e.target.value);
}
return (
<>
<h2>City quiz</h2>
<p>
In which city is there a billboard that turns air into drinkable water?
</p>
<form onSubmit={handleSubmit}>
<textarea
value={answer}
onChange={handleTextareaChange}
disabled={status === 'submitting'}
/>
<br />
<button disabled={
answer.length === 0 ||
status === 'submitting'
}>
Submit
</button>
{error !== null &&
<p className="Error">
{error.message}
</p>
}
</form>
</>
);
}
Of course, this is a relatively simple and straightforward example, but there are still lots of things to consume. There are already three states, nested HTML elements, two event handlers, and even a conditionally rendered component, even in a fairly simple and decent example. Why is this so complex?
Well, I believe that there is an innate issue which is hard to overcome in React (or frontend in general). That is, a React component needs to represent information from a managed state in the form of JSX, which is simply a "nicer" version of HTML. But doesn't this sound way too natural? How could it be an essential problem of React and frontend?
I believe any frontend technologies including mobile ones are essentially about how to manage the current state (either on the frontend or on the backend) and render it in hierarchical views. Being hierarchical implies several issues, but to name a few:
Where should the state shared by the children of a <form> live? How should the layout adapt for responsive design?
Because state management and layout are essentially coupled, while the HTML elements must stay hierarchical at the same time, things are very likely to become complex (or, as I expressed in the title, messed). Think about the above excerpt as an example. Suppose you want to switch the <textarea> element to a <select> so that users choose the answer rather than typing it manually, but the answer candidates are dynamic, which means you have to fetch the list of answers from the backend. Then you might "naturally" think of adding useEffect in the same Form component:
export default function Form() {
  // [...]
  const [countries, setCountries] = useState([]);
  useEffect(() => {
    // The effect callback itself must not be async, so wrap the await logic.
    async function fetchCountries() {
      try {
        const response = await fetch("GET_COUNTRY_LIST_API");
        const data = await response.json();
        setCountries(data.countries || []);
      } catch (error) {
        setError(error);
      }
    }
    fetchCountries();
  }, []);
  // [...]
}
Do you think this is a good solution? Some of you might say yes, others no. Is it simpler to fetch countries in the same Form where the data is rendered and submitted, or somewhere else, so that Form is purely responsible for submitting data? There is no absolutely correct or incorrect answer; it is totally up to the developer to decide. However, whereas there are well-known "best practices" and "design patterns" for the backend, for the frontend there seem to be no such widely accepted patterns, to the best of my knowledge.
Dan Abramov's famous "Presentational-Container" pattern provides a useful insight for organizing this mess (for an easier introduction, I recommend reading this post on pattern.dev). From my understanding, you can have the following two patterns of writing React components: stateful (or non-functional) and stateless (or purely functional).
A component that manages data via useState, useEffect, or fetch calls is stateful. A stateless component, by contrast, uses no useState or useEffect; it is only responsible for how to visualize the given data.
Let's get back to the above excerpt. The Form component is obviously stateful: it manages several states using useState. If we ever add useEffect here for fetching the list of candidate countries, then the component also becomes responsible for handling data fetched from the backend.
This separation of concerns is especially useful for maintenance. If you want to add any additional data submission, you can tweak this Form component. If you have a problem in submitting country text, then there must be something wrong inside this Form.
Furthermore, if we want to refactor this component according to the Presentational-Container pattern, then we separate the HTML components in the return statement and pass the states and callbacks for state updates like this:
export const FormBox = ({
title,
description,
answer,
status,
error,
handleSubmit,
handleTextareaChange,
}: Props) => {
return (
<>
<h2>{title}</h2>
<p>
{description}
</p>
<form onSubmit={handleSubmit}>
<textarea
value={answer}
onChange={handleTextareaChange}
disabled={status === 'submitting'}
/>
<br />
<button disabled={
answer.length === 0 ||
status === 'submitting'
}>
Submit
</button>
{error !== null &&
<p className="Error">
{error.message}
</p>
}
</form>
</>
);
};
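To complete the Container-Presentational split, the stateful half owns the states and hands everything down. Here is a minimal sketch (the FormContainer name and the submitForm placeholder are mine, not from the original excerpt):
import React, { useState } from 'react';
import { FormBox } from './FormBox';

// Placeholder for the actual submission call.
declare function submitForm(answer: string): Promise<void>;

export default function FormContainer() {
  const [answer, setAnswer] = useState('');
  const [error, setError] = useState<Error | null>(null);
  const [status, setStatus] = useState<'typing' | 'submitting' | 'success'>('typing');

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    setStatus('submitting');
    try {
      await submitForm(answer);
      setStatus('success');
    } catch (err) {
      setStatus('typing');
      setError(err as Error);
    }
  }

  // All state lives here; FormBox only renders what it is given.
  return (
    <FormBox
      title="City quiz"
      description="In which city is there a billboard that turns air into drinkable water?"
      answer={answer}
      status={status}
      error={error}
      handleSubmit={handleSubmit}
      handleTextareaChange={(e: React.ChangeEvent<HTMLTextAreaElement>) =>
        setAnswer(e.target.value)
      }
    />
  );
}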
However, this logical separation itself may not be enough. Since frontend elements are hierarchical, there is nothing stopping you from putting a stateful element inside a stateless one. If this is the case, is that stateless element purely stateless? Even if it doesn't manage any state at all, you may still have to look into it, because it is the parent of the component that manages the state you want to track.
export function FormLayout() {
return (
<div>
{/* some other components*/}
<Form />
</div>
);
}
In the above code example, we have Form inside FormLayout. Now, although FormLayout has nothing to do with any form submission logic, you are still very likely to visit this component while searching in your IDE or browser developer tool as long as the FormLayout component is conceptually tied to the form submission. Yes, we need a more comprehensive mental model for further organization of our frontend code.
Brad Frost's Atomic Design suggests another great insight for organizing our React project. Although he introduces five levels of component designs analogous to chemistry, my takeaway is that you can think of an entire frontend page in two aspects.
Now we can see that the naming FormLayout is somewhat misleading, in that it can contain not only the form submission page but other features such as the navigation bar or a Google Ads banner. If this is the case, then we might as well use another name such as "QuizPageLayout" instead.
So far so good, now we have our mental model for separating concerns. There is a hierarchical structure of features for the entire project, and each individual feature should be assigned its own space as a page by the layout logic at its tree-level. Each feature fetches and updates its own feature data. I would like to refer to this mental model as Layout - Page. Have you noticed any familiar names? You're right. This model works naturally with Next.js.
Let's discuss the Layout-Page model in more details in conjunction with the Container-Presentational pattern we discussed previously.
First, Layout corresponds to organisms, templates, and pages in Atomic Design. It is responsible only for how to arrange several components on the entire screen. It decides the position, display, and size of each component. It may contain some visual components such as dividers, but those are rare. Layout never deals with how to render each individual component (Page), not even its margin or padding properties.
Next, each Page represents a single feature in the product, with the Single Responsibility Principle in mind.
Hence the structure here is recursive. You have a tree of pages, and each page is logically separated into layouts and sub-pages. For example, we can organize the Form element and possibly related components as follows:
QuizPage
├── @AdsBanner
│ ├── @Page
│ └── Layout
├── @QuizSubmitPage
│ ├── @Page # <Form> will be in this page
│ └── Layout
└── Layout
Note that this tree structure is very similar to the structure of the Next.js App Router. However, Next.js itself doesn't really enforce any design principles on developers, so you won't find the idea argued here in the Next.js documentation. It is totally up to you, the developer, to decide how to organize your project. That said, the mechanism of the App Router fits perfectly with the Layout-Page model, and in fact part of the inspiration for this model comes from the App Router itself.
If we translate the structure above into a Next.js file routing system, then it would be like this:
quiz
├── @adsbanner
│ ├── page.tsx
│ └── layout.tsx
├── submit
│ ├── page.tsx # <Form> will be in this page
│ └── layout.tsx
├── layout.tsx
└── page.tsx
Here, note that we use a parallel route for the ads banner component, since we don't want the banner to be exposed as a route of its own that users navigate to. Also, it is not under submit/ but under quiz/, which means the banner will show up in the other sub-routes of quiz/ as well, not only on the submission page /quiz/submit. In general, it is essential to utilize parallel routes in the Next.js App Router for the Layout-Page model, as there is no guarantee that only one sub-feature exists inside a product feature. A sketch of the corresponding layout follows.
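For reference, here is a minimal sketch of what quiz/layout.tsx could look like, assuming the slot is named @adsbanner as above (Next.js passes each parallel-route slot to the layout as a prop named after the slot folder):
import type { ReactNode } from 'react';

// quiz/layout.tsx (sketch)
export default function QuizLayout({
  children,   // the matched page.tsx under quiz/
  adsbanner,  // the @adsbanner parallel-route slot
}: {
  children: ReactNode;
  adsbanner: ReactNode;
}) {
  // The layout only arranges its slots; it knows nothing about their contents.
  return (
    <div>
      {adsbanner}
      {children}
    </div>
  );
}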
To recap, the entire recursive tree structure of a project according to the Layout-Page model is like this:
Project
├── Layout
├── Page0
│ ├── Layout
│ ├── Page00
│ │ ├── Layout
│ │ ├── Page000
│ │ ...
│ ├── Page01
│ ├── Page02
│ ...
├── Page1
├── Page2
├── Page3
...
I would like to mention that this might not be an original idea; someone somewhere may well have thought of this mental model already and published it under a name I haven't heard of yet. Still, it was not easy for me to arrive at it after a long and mostly unsuccessful search for useful ways to organize frontend messes. It would be great to see any comments on this. Thank you.