
I Built a Free, Self-Hosted SNMP Toolkit — Now With Real-Time WebSocket Push

2026-02-22 21:01:01

If you've ever worked with SNMP — testing NMS integrations, debugging trap handlers, or validating MIB structures — you know the pain. You end up juggling Net-SNMP CLI commands you can never remember, snmptrapd configs scattered everywhere, and a $500 MIB browser that still looks like it was designed in 2003.

I built Trishul SNMP to fix that. One Docker container. Browser-based. Free.

This post covers what's new in v1.2.4 — the biggest update yet — and why I built it this way.

What Is Trishul SNMP?

It's a self-hosted SNMP dev toolkit that combines five tools into one clean web UI:

| What | Instead of |
| --- | --- |
| SNMP Simulator (UDP agent) | snmpsim + CLI config |
| Walk & Parse → JSON | snmpwalk + manual OID lookup |
| Trap Sender + Receiver | snmptrap + snmptrapd |
| MIB Browser (tree view) | iReasoning MIB Browser ($500+) |
| MIB Manager (upload/validate) | Text editor + manual dependency hell |

Stack: Python 3.11, FastAPI, pysnmp, pysmi, Bootstrap 5, Docker.

# Install in one command
curl -fsSL https://raw.githubusercontent.com/tosumitdhaka/trishul-snmp/main/install-trishul-snmp.sh | bash

Then open http://localhost:8080. Default login: admin / admin123 — change it immediately in Settings.

What's New in v1.2.4

⚡ Real-Time WebSocket Push — Polling Is Dead

The biggest architectural change: the entire frontend is now event-driven.

Before: Every page polled the backend on a setInterval — dashboard every 30s, simulator every 5s. Stale data was common. Switching pages caused spinners.

After: A single persistent WebSocket connection at /api/ws. The backend broadcasts state changes the moment they happen. Dashboard, Simulator status, and Trap Receiver all update instantly — no refresh needed.

Browser ←──── WS push ────── FastAPI /api/ws
                                    │
                              broadcasts on:
                              - trap received
                              - simulator start/stop
                              - MIB uploaded
                              - stats change

The navbar shows a live green dot when the WebSocket is healthy. On reconnect, the backend sends a full_state message to re-seed everything — so you never see stale data after a network hiccup.

Why this matters for devs: If you're building a trap handler and sending test traps from the UI, you now see the counter increment on the dashboard in real time. No F5 needed.
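
For anyone curious what an event-driven hub like this can look like, here is a minimal FastAPI sketch. The names (ConnectionManager, build_full_state) are illustrative assumptions, not Trishul's actual internals:

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

def build_full_state() -> dict:
    # Placeholder: gather simulator status, trap counters, stats, etc.
    return {"traps_received": 0, "simulator_running": False}

class ConnectionManager:
    def __init__(self):
        self.clients: list[WebSocket] = []

    async def connect(self, ws: WebSocket):
        await ws.accept()
        self.clients.append(ws)
        # Re-seed a new or reconnecting client with the full current state.
        await ws.send_json({"type": "full_state", "data": build_full_state()})

    def disconnect(self, ws: WebSocket):
        self.clients.remove(ws)

    async def broadcast(self, message: dict):
        # Push a state change to every connected browser the moment it happens.
        for ws in list(self.clients):
            await ws.send_json(message)

manager = ConnectionManager()

@app.websocket("/api/ws")
async def ws_endpoint(ws: WebSocket):
    await manager.connect(ws)
    try:
        while True:
            await ws.receive_text()  # keep the connection open; client pings land here
    except WebSocketDisconnect:
        manager.disconnect(ws)

# Elsewhere in the backend, event handlers would call something like:
# await manager.broadcast({"type": "trap_received", "data": {...}})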

📊 Live Activity Stats Dashboard

A new 8-counter row on the dashboard shows everything happening in your session:

  • SNMP Requests — Total polls served by the simulator
  • OIDs Loaded — OIDs currently in memory across all loaded MIBs
  • Traps Received / Sent — Live trap counters
  • Walks Executed — How many SNMP walks you've run
  • OIDs Returned — Total OIDs returned across all walks
  • MIBs Uploaded / Times Reloaded — MIB management activity

All counters update via WebSocket push. They're also file-backed — they survive container restarts so you don't lose your session history.

⚙️ Settings — Three New Cards

App Behaviour

Two toggles: auto-start Simulator and auto-start Trap Receiver on container boot. Plus a configurable session timeout (60s–86400s). Settings persist to app_settings.json and take effect on next restart.

Stats Management

Export all counters as JSON (useful for logging test session results) or reset to zero for a clean baseline before a test run.

About

Version, author, and description pulled live from the backend — always in sync with what's actually deployed.

🐛 Notable Fixes

Timestamp bug: The Trap Receiver was showing 1970 for Last Received when the backend returned a Unix epoch 0. Now guarded and shown as --.

Traps Sent undercount: Stats were only incremented in the frontend — if you sent traps via the API directly, the counter never moved. Fixed by broadcasting stats from the backend after every successful trap send.

Dashboard spinners on page switch: The WS full_state event fired before DashboardModule.init() had registered its listener on page switch, so status tiles stayed as spinners until the next reconnect. Fixed by always seeding via REST on init, regardless of WS state.

MIB Browser state conflict: Navigating to the Browser from Walk & Parse (with a pre-filled OID) would sometimes log Could not find node because the previous session's pending tree state conflicted. Fixed by clearing pendingSelectedOid and pendingExpandedNodes when a programmatic OID search is present.

Who Is This For?

NMS / backend developers who need a real SNMP agent to poll without actual hardware, or need to fire specific trap OIDs to validate their handler code.

DevOps / integration engineers who need to test SNMP monitoring integrations in CI/staging environments.

Network engineers who want to explore MIB structures interactively — search by OID, by name, or by description — and understand what traps a device can fire before it's on the floor.

What It's NOT For

  • Production 24/7 monitoring → use Zabbix, LibreNMS, PRTG
  • Enterprise NMS → use SolarWinds, Cisco Prime
  • High-availability trap collection at scale → use dedicated platforms

Trishul is a developer and testing tool, not a production monitoring system.

Quick Architecture

Browser (8080)
    │ HTTP + WebSocket
    ▼
Nginx (static files + reverse proxy)
    │ REST + WS
    ▼
FastAPI Backend (8000)
    ├── MIB Service (parse, validate, search)
    ├── SNMP Simulator (UDP 1061)
    ├── Trap Manager (send + receive on UDP 1162)
    ├── Walker (SNMP walk client)
    └── WebSocket hub (/api/ws)

Everything runs in Docker with host network mode so the SNMP UDP ports (1061, 1162) are accessible directly from your local machine or test devices.

Try It

# One-command install
curl -fsSL https://raw.githubusercontent.com/tosumitdhaka/trishul-snmp/main/install-trishul-snmp.sh | bash

# Access at
http://localhost:8080

If it's useful, a ⭐ on GitHub goes a long way — it helps other devs find the project.

Happy to answer questions in the comments about architecture decisions, the pysnmp/pysmi integration, or the WebSocket implementation. 🔱

I’m from a country you’ve probably never heard of: How I got a remote job in the US 🙌🏻

2026-02-22 21:01:00

Hello! My name is Nico. I'm a software engineer from Paraguay, and since 2021, I've been working fully remote for companies in LATAM, Europe, and the US.

Why am I writing this? If you are from a country like mine and want to start working globally, I want to share some quick advice on how to make it happen.

⚠️ Disclaimer: This is based on my personal experience.
There is no magic formula; read this and draw your own conclusions.

Learn English

First things first: this is your number one priority. If you're a developer and don't know English, stop focusing on React or Rust for a moment. Learn English first. While AI can help you write emails or documentation, real-time communication in meetings is what gets you (and keeps you) the job.

A bit of experience

I recommend having at least a few years of experience before applying to international companies. Why? There is a massive market for junior developers, but companies often prefer local talent for entry-level roles to simplify onboarding and legalities.

Prepare an AI-friendly resume

Keep it simple. Avoid fancy colors or complex layouts. Most companies use ATS (Applicant Tracking Systems), often powered by AI, to screen resumes. If an algorithm can’t parse your data, a human will never see it. Make it readable for both machines and tired recruiters who have already seen 100 resumes before yours.

Logic over "LeetCode"

Is learning algorithms still necessary? Yes, but with a twist. In the AI era, writing the code is the easy part. However, you need to understand complexity (O(n) notation) and logic to verify if the code the AI generated is efficient or even correct. Practice basic algorithms to sharpen your mental models, not just to memorize solutions.

Master the basics

AI frequently "hallucinates" or provides outdated patterns. To debug effectively, you must know the fundamentals. If you use React, master JavaScript (scopes, event loops, closures). If you use Laravel, master PHP.
Scientific evidence in software engineering suggests that strong foundational knowledge is the best predictor of a developer's ability to adapt to new tools, including AI.

The numbers game

Applying for jobs is a marathon. You might send 100 resumes and get 99 rejections. That’s normal. You only need one "yes" to change your career. It can be frustrating, but the reward of a remote international career is worth the effort.

Thanks for reading! Working remotely changed my life, and I hope it does the same for you 💪🏻

The 2026 Roadmap to Becoming a Full-Stack AI Engineer

2026-02-22 21:00:00

The 2026 Masterclass: How to Become a Full-Stack AI Engineer

I remember sitting in front of my monitor in early 2024, watching a simple script I wrote call the OpenAI API for the first time. It felt like magic. But as I’ve learned over the last two years of building production-ready apps, "magic" doesn't scale. In 2026, the industry has moved beyond the hype. We are no longer impressed by a chatbot that says "Hello." We want systems that think, reason, and act autonomously.

The term Full-Stack AI Engineer has emerged as the definitive career path for developers who want to remain relevant. It’s a hybrid role: you need the discipline of a software engineer and the intuition of a data scientist. You aren't just building a website; you are building an engine of intelligence. If you are just starting your journey, I highly recommend checking out my guide on the web development roadmap for students (how to start learning web development in 2025) to ensure your foundations are rock solid before diving into AI.

The Reality Shift: Why "Traditional" Full-Stack is Dying

Let's be brutally honest: if your primary skill is building a basic React frontend with a Node.js backend to perform CRUD operations on a database, you are competing with everyone—including the AI itself. My journey in AI-driven development taught me that the "middle" is disappearing. You either become the AI's architect, or you are replaced by its output. In 2026, a "Full-Stack" dev must handle far more than just buttons and tables.

Modern engineering now requires a deep understanding of:

Data Ingestion: Converting unstructured PDFs, videos, and logs into machine-readable formats that an LLM can actually use.
Reasoning Logic: Designing multi-step agentic workflows where the AI can "think" before it executes a task.
Client-Side Intelligence: Running smaller models directly in the browser using WebGPU to save on server costs and improve privacy.

💡 My Personal Experience:
I spent months perfecting my SvelteKit skills, only to realize that the most expensive "bugs" in my freelance projects weren't UI glitches—they were AI hallucinations. That was the moment I stopped being a "web dev" and started being an "AI engineer." If you find your current skills aren't paying the bills, you might be falling into the traps I mentioned in why your skills aren't making you money in freelancing.

Phase 1: The Modern Foundation (The "Hard" Skills)

You cannot build a skyscraper on a swamp. Before you touch a Large Language Model (LLM), you need to master the basics of the 2026 tech stack. This isn't just about syntax; it's about understanding how data flows through a system of intelligence.

  1. Python & JavaScript (The Dual-Citizenship)
    In the past, you could pick a side. In 2026, you must be a polyglot. Python is the language of AI (PyTorch, LangChain, FastAPI), while JavaScript/TypeScript is the language of the user. Most of my successful projects involve a Python backend talking to a SvelteKit frontend. Python handles the "heavy thinking," while JavaScript handles the "elegant presentation."

  2. Vector Databases: The New SQL
    Forget just knowing PostgreSQL. You need to understand Vector Embeddings. When a user asks a question, how does the AI "find" the answer? It doesn't look for keywords; it looks for "mathematical similarity." Tools like Pinecone, Weaviate, or Supabase’s pgvector are now mandatory. Understanding how to store and retrieve these embeddings is what separates a junior dev from a senior AI architect.

Phase 2: Mastering the AI Stack (RAG & Beyond)

If you want to earn the "big bucks" in remote engineering roles, you must move beyond simple prompts. The most in-demand skill right now is Retrieval-Augmented Generation (RAG). RAG allows you to give an AI a "brain" consisting of your private data, ensuring it provides facts rather than fiction.

The RAG Pipeline Explained:
Chunking: Breaking large documents into meaningful pieces without losing context. This is an art form—too small and you lose meaning; too large and you confuse the model.
Embedding: Turning those pieces into numbers (vectors) using models like OpenAI's text-embedding-3-small.
Retrieval: Finding the most relevant pieces based on a user’s query using cosine similarity.
Generation: Passing that context to an LLM (Claude, Gemini, or GPT-5) to get an accurate, grounded answer.
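
To make the Retrieval step concrete, here is a minimal Python sketch of ranking chunks by cosine similarity. The embeddings themselves would come from whatever model you use (e.g. text-embedding-3-small); this only shows the scoring logic:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # How closely two embedding vectors point in the same direction (1.0 = identical).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, chunk_vecs: list, chunks: list, top_k: int = 3) -> list:
    # Score every chunk against the query and return the top_k most similar ones.
    scored = [(cosine_similarity(query_vec, vec), chunk) for vec, chunk in zip(chunk_vecs, chunks)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

# The returned chunks are then pasted into the LLM prompt as grounding context
# for the Generation step.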

Phase 3: The Frontend of 2026 (Intelligent UIs)

Users don't want to just "chat" with a bot anymore. They want Generative UI—interfaces that change based on what the AI is doing. If an AI is generating a travel itinerary, the UI should automatically render a map. This is where SvelteKit shines. Its ability to handle streaming data natively makes it the perfect partner for AI. If you're building for scale, don't miss our complete SvelteKit tutorial for production apps.

Performance in 2026 isn't just about load times; it's about latency management. You need to learn how to show "partial results" to the user while the AI is still "thinking." This keeps the user engaged and prevents the "dead screen" effect that kills retention. Slow apps are the number one reason clients leave; learn more in why your website is slow and how to fix it.

Phase 4: Monetization and Career Strategy

Why are some developers making $200k+ while others struggle to find clients? It usually comes down to Product Awareness. You have to solve business problems, not just coding problems. Companies in 2026 aren't looking for "coders"; they are looking for "efficiency experts."

Three Ways to Profit in 2026:
The Specialist Freelancer: Don't be a "Web Developer." Be a "Custom AI Agent Architect for Law Firms." The more specific you are, the higher your rate.
The Solopreneur: Build "Micro-SaaS" tools. A simple tool that summarizes Zoom meetings for recruiters can generate $5k/month in passive income if marketed correctly.
The Enterprise Engineer: Large companies are desperate to integrate local LLMs (like Llama 3) for privacy. If you can deploy an AI on-premise, you are indispensable.

Phase 5: Building Your AI Portfolio

To get hired as an AI Engineer, you need projects that prove you can handle real-world messiness. Stop building Todo lists and start building "Agents." An agent is an AI that doesn't just talk—it acts. It can call APIs, search the web, and update databases autonomously.

The Knowledge Base: A RAG system that answers questions about 1,000+ technical documents with 95% accuracy using advanced re-ranking.
The Autonomous Agent: An AI that can browse the web, find a flight, and draft an itinerary without human help using tool-calling.
The Real-Time Translator: A SvelteKit app that uses WebGPU to translate voice-to-text locally on the device, showcasing your edge-computing skills.

The Final Verdict

The transition from a Full-Stack Developer to a Full-Stack AI Engineer is the single best investment you can make in 2026. It requires grit, a willingness to fail at prompt engineering, and the patience to understand high-dimensional vectors. But on the other side of that struggle is a career that is both lucrative and future-proof. Don't let your site suffer from the 5 SEO issues killing traffic while you focus on the tech—balance is key.

Start today. Pick one framework, one vector DB, and one LLM. Build something small, break it, and fix it. That is the only way to truly learn in this fast-paced era.

Here is the step-by-step guide:

Understanding Garbage Collection (GC) in .NET — How It Works and When It Matters

2026-02-22 21:00:00

Garbage Collection (GC) in .NET

Memory management is one of the most important concepts in .NET. Developers often say “the Garbage Collector handles memory for you,” but few can clearly explain how it works, why it exists, and what you can do to work with it instead of against it.

This guide breaks down .NET’s Garbage Collection (GC) in a simple, practical way — with definitions, diagrams, examples, and real-world scenarios.

What Is Garbage Collection?

Garbage Collection (GC) is an automatic memory management system in .NET.

Its job is to:

  • Allocate memory for new objects
  • Track which objects are still in use
  • Free memory for objects that are no longer needed
  • Prevent memory leaks
  • Reduce developer errors like dangling pointers

In short: GC keeps your application healthy by cleaning up unused objects automatically.

How GC Works in .NET

The .NET GC uses a generational, mark‑and‑compact algorithm.

Let’s break that down.

1. Generations (Gen 0, Gen 1, Gen 2)

Objects are grouped into generations based on their lifetime.

Generation 0

  • Newly created objects
  • Collected frequently
  • Fastest to clean

Generation 1

  • Surviving objects from Gen 0
  • Medium‑lived objects

Generation 2

  • Long‑lived objects
  • Large objects
  • Cleaned infrequently

Why generations?

Most objects die young.

So GC optimizes for that.
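
You can watch promotion happen with the built-in GC APIs. A small console sketch (exact generation numbers can vary with runtime settings):

var data = new byte[1024];
Console.WriteLine(GC.GetGeneration(data)); // 0: new objects start in Gen 0

GC.Collect(0);                             // force a Gen 0 collection (demo only)
Console.WriteLine(GC.GetGeneration(data)); // 1: it survived, so it was promoted

GC.Collect(1);
Console.WriteLine(GC.GetGeneration(data)); // 2: survived again, now treated as long-lived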

2. Mark Phase

GC pauses the application briefly and marks all objects that are still reachable.

Reachable means:

  • Local variables
  • Static references
  • Objects referenced by other objects

Everything else is considered garbage.

3. Compact Phase

After marking, GC compacts memory by moving surviving objects together.

This reduces fragmentation and improves performance.

4. Large Object Heap (LOH)

Objects of 85,000 bytes (roughly 85 KB) or larger go to the Large Object Heap.

  • Not compacted often (expensive to move large objects)
  • Can cause fragmentation
  • Avoid unnecessary large allocations

When Does GC Run?

GC runs automatically when:

  • Memory is low
  • Gen 0 fills up
  • The system is under pressure
  • You explicitly call GC.Collect() (not recommended)
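
You can observe this from inside your app with the built-in counters:

Console.WriteLine($"Gen 0 collections: {GC.CollectionCount(0)}");
Console.WriteLine($"Gen 1 collections: {GC.CollectionCount(1)}");
Console.WriteLine($"Gen 2 collections: {GC.CollectionCount(2)}");

// Approximate managed heap size in bytes (false = don't force a collection first)
Console.WriteLine($"Heap size: {GC.GetTotalMemory(false)} bytes");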

Should You Call GC.Collect()?

Almost never.

Calling it manually:

  • Forces a full collection
  • Pauses your app
  • Hurts performance
  • Breaks GC’s optimization logic

Use it only in rare scenarios:

  • After a massive one‑time memory release
  • In controlled environments (e.g., game engines)

How to Work With GC (Best Practices)

✔️ Use using statements for disposable objects

using (var stream = new FileStream("data.txt", FileMode.Open))
{
    // work with stream
}

This ensures deterministic cleanup.

✔️ Avoid unnecessary large object allocations

var bigArray = new byte[100_000]; // Goes to LOH

Reuse buffers when possible.
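
One common way to reuse buffers is ArrayPool<T> from System.Buffers; a minimal sketch:

using System.Buffers;

// Rent a buffer from the shared pool instead of allocating a fresh large array
// (which would land on the LOH) for every operation.
byte[] buffer = ArrayPool<byte>.Shared.Rent(100_000);
try
{
    // fill and process the buffer (note: it may be larger than requested)
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}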

✔️ Prefer Span<T> and Memory<T> for high-performance scenarios

These avoid heap allocations.
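
For example, small scratch buffers can live on the stack, and slices of existing arrays can be passed around without copying:

// Stack-allocated scratch space: no heap allocation, nothing for the GC to track
Span<byte> header = stackalloc byte[64];

// Slicing an existing array as a span avoids copying it into a new array
byte[] payload = new byte[1024];
ReadOnlySpan<byte> body = payload.AsSpan(16, 32);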

✔️ Avoid long-lived references

If you keep references alive, GC cannot collect them.
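
A classic example is a static collection that only ever grows:

// Everything added here stays reachable for the lifetime of the app,
// so the GC can never reclaim it: a managed "memory leak".
static readonly List<byte[]> _cache = new();

void HandleRequest()
{
    _cache.Add(new byte[1_000_000]); // grows forever; nothing is ever removed
}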

✔️ Use dependency injection wisely

Singletons live for the entire app lifetime — avoid storing large objects inside them.

Real‑World Scenarios

Scenario 1: Web API handling thousands of requests

Short-lived objects → Gen 0

GC handles them efficiently.

Scenario 2: Image processing

Large byte arrays → LOH

Avoid repeated allocations; use pooling.

Scenario 3: Background services

Long-lived services → Gen 2

Be careful with memory leaks.

Interview‑Ready Summary

  • GC automatically manages memory in .NET
  • Uses generational collection (Gen 0, 1, 2)
  • Uses mark and compact algorithm
  • LOH stores large objects
  • Avoid calling GC.Collect() manually
  • Use using, pooling, and DI best practices

This explanation shows you understand both the theory and the practical implications.

Bonus thought

Is IDisposable Related to Garbage Collection? Clearing the Confusion

Many developers — especially those new to .NET — assume that IDisposable is part of the Garbage Collector. It’s a common misunderstanding, but the two concepts solve different problems.

Garbage Collection handles managed memory

GC automatically frees memory for objects stored on the managed heap.

Examples:

  • Classes
  • Strings
  • Arrays
  • Delegates

GC decides when to clean them up.

IDisposable handles unmanaged resources

IDisposable exists because GC cannot clean up unmanaged resources, such as:

  • File handles
  • Database connections
  • Network sockets
  • OS handles
  • Streams
  • Native memory

These require deterministic cleanup, which is why we use:

using (var stream = new FileStream("data.txt", FileMode.Open))
{
    // work with stream
}

How they relate

  • GC cleans up managed memory
  • IDisposable cleans up unmanaged resources
  • Finalizers (~ClassName) act as a safety net, but they are expensive
  • The using statement ensures cleanup happens immediately, not “whenever GC runs”
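
A minimal sketch of the standard pattern, where Dispose does the deterministic cleanup and the finalizer is only a fallback (the native handle here is purely illustrative):

public sealed class NativeResourceHolder : IDisposable
{
    private IntPtr _handle;   // illustrative unmanaged handle
    private bool _disposed;

    public void Dispose()
    {
        Cleanup();
        GC.SuppressFinalize(this); // Dispose already ran, so the finalizer is not needed
    }

    ~NativeResourceHolder() => Cleanup(); // safety net if Dispose was never called

    private void Cleanup()
    {
        if (_disposed) return;
        // Release the unmanaged resource here (e.g. close the native handle)
        _disposed = true;
    }
}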

The rule of thumb

GC ≠ Dispose

GC frees memory

Dispose frees resources

This distinction is crucial in interviews and real-world systems.

From Zero to Full-Stack: Getting Started with Your First MERN (MongoDB, Express.js, React.js, Node.js) Architecture Project

2026-02-22 20:59:23

This article is co-authored by @feevol_into and @kwiinnn

I. Introduction to Tech Stack

Have you ever wondered how websites are built from the ground up? A web application is like a house. You need a solid foundation, a frame, plumbing, and a beautiful interior. In the software world, the combination of tools used to build this house is called a tech stack. It is a collection of programming languages, software, and tools that work together to create a fully functioning application.

What is MERN?

One of the most popular combinations for building modern web applications is the MERN stack. MERN is an acronym that stands for four different technologies that seamlessly connect to handle everything from the user interface to the database. Developers love it because it uses JavaScript for everything. You only need to know one primary language to build the entire project.

Let us break down the four key pieces of this puzzle.

  • MongoDB

    MongoDB is the database of the operation. Whenever a user creates an account or saves a post, that information needs a place to live securely. Instead of storing data in rigid tables with rows and columns, MongoDB stores it in flexible documents that look a lot like standard JavaScript objects. This makes it incredibly easy for developers to save and retrieve complex data without having to translate it back and forth.

  • Express.js

    Next is Express.js. Think of Express as the traffic cop of your server. It is a framework that runs on the backend and listens for requests from the user. When someone clicks a button to load their profile, Express catches that request, talks to the database to get the right information, and sends the data back to the user. It simplifies the process of building the server and organizing how the application responds to different actions.

  • React.js

    React.js is what the user actually sees and interacts with. It is a popular library built for creating user interfaces. React allows developers to build reusable pieces of a website called components. A button, a navigation bar, and a search form can all be individual components. When data changes, React efficiently updates only the parts of the screen that need to change, making the website feel incredibly fast and smooth.

  • Node.js

    Finally, there is Node.js. Traditionally, JavaScript could only run inside a web browser. Node.js changed the game by providing an environment that allows JavaScript to run directly on a computer or server. It acts as the engine that powers Express.js and connects the whole backend together. Because of Node, developers can build fast and scalable network applications using the exact same language they use for the front of the website.

Where MERN is used

The MERN stack is incredibly versatile and powers everything from simple single page applications to massive enterprise systems. It is particularly popular for building interactive user interfaces and real time data processing platforms. Because it relies heavily on reusable components and quick rendering, it is the go to choice for modern web applications.

  • Macro Stakeholders

    At the macro level, large enterprises and entire industries rely on MERN for its scalability and performance under heavy traffic.

    • Tech Giants - Massive companies like Meta and Netflix use components of the MERN stack to deliver seamless experiences to millions of users globally.
    • ECommerce Platforms - Online stores use MERN to handle real time inventory updates, secure transactions, and dynamic product searches.
    • Healthcare and Logistics - Industries requiring fast data retrieval use MongoDB and Node to process massive datasets, such as tracking patient records or managing global supply chains.
  • Micro Stakeholders

    At the micro level, the stack is favored by smaller teams, startups, and individual developers who need to move quickly.

    • Startups and Entrepreneurs - The stack is open source and free, allowing new businesses to build a Minimum Viable Product very quickly without massive upfront software costs.
    • Small Development Teams - Because the entire stack uses JavaScript, a small team can collaborate seamlessly across both the server and the user interface without needing separate experts for different programming languages.
    • Independent Developers - For someone already familiar with building user interfaces using React and managing code repositories on GitHub, learning the rest of the MERN stack is the fastest route to becoming a full stack developer. It allows a single person to architect complete software as a service platforms, such as a localized supply management system, entirely on their own.

Benefits of MERN

  • Single Language Environment - The biggest advantage is that JavaScript is used from top to bottom. This eliminates the need to switch mentally between different languages for the server and the browser, making development significantly faster.

  • Rapid Development - With a massive open source community, developers have access to countless pre-built tools. You do not have to reinvent the wheel for standard features like user authentication or database routing.

  • High Performance - Node uses a non blocking architecture that handles multiple network requests simultaneously without slowing down. Coupled with React updating only the necessary parts of the screen, applications feel lightning fast.

  • Easy Scalability - MongoDB stores data flexibly, meaning you can easily adjust your database structure as your application grows and new features are added over time.

II. Creating your first project

Creating a project always starts with ideation and defining the features.

STEP 1 — IDEATE (What do I want to build?)

For this beginner’s guide, let’s make a To-Do List Website.

A to-do list full-stack website is one of the best starter projects because it teaches you:

  • how frontend talks to backend

  • how backend talks to database

  • how data persists

  • how CRUD works in real life applications

You are not just building a list — you are building a complete system.

STEP 2 — FEATURES (What can my to do list website do?)

Since we're building a full-stack website, we must implement the four CRUD features.

| CRUD feature | Meaning | In our to-do list website |
| --- | --- | --- |
| CREATE | Add data | Add a new task |
| READ | View data | Show all tasks |
| UPDATE | Modify data | Mark task as completed / edit text |
| DELETE | Remove data | Delete a task |

So the website we’ll make will be able to:

  • Add a task
  • View all tasks
  • Edit a task
  • Mark task as completed
  • Delete a task

Now that we know what to build — we move to the environment setup.

STEP 3 — INSTALLATIONS

1. Installing Node.js

Node.js is the engine that will run your JavaScript code outside of a web browser. Installing it is very straightforward.

  • Step 1 | Go to the official Node.js website (https://nodejs.org) and open the downloads page.

  • Step 2 | You will see two download buttons. Always choose the LTS version. LTS stands for Long Term Support and it is the most stable version for building applications.

  • Step 3 | Once the file downloads, open it to launch the installer.

  • Step 4 | Click next through the installation wizard. You can leave all the default settings exactly as they are. Make sure you accept the license agreement.

  • Step 5 | After installing, open your terminal and verify:
node -v
npm -v

2. Setting up MongoDB Atlas (Recommended for beginners)

A FREE Cloud database, no local setup problems

  • Step 3 | Create New Project → name it “todo_website”

  • Step 4 | Create a Cluster. Choose the FREE tier, Provider: AWS, and Region: Singapore (ap-southeast-1), the closest region to the Philippines and therefore the lowest latency.

  • Step 5 | After creating your cluster, the Database Access screen will pop up. Create a new database user with a username ("todo-user") and password. Save these credentials somewhere safe — you will need them later.
  • Step 6 | Go back to your cluster and click Connect. Choose Connect to your application.

  • Step 7 | You will see a connection string that looks something like this:
mongodb+srv://<username>:<password>@cluster0.mongodb.net/todo_db?retryWrites=true&w=majority

Copy this string. Replace <username> and <password> with the credentials you just created. This is your MongoDB URI — think of it as the address your backend will use to talk to your database.

Make sure the driver is set to Node.js, since that is what we are using for the backend, and select the latest version.

STEP 4 — SETTING UP YOUR PROJECT FOLDERS

When you build a full stack application, you are essentially building two separate projects that talk to each other. Because of this, it is best practice to create one main project folder that contains two separate subfolders. One folder is for your React frontend and the other is for your Node backend.

  • The Server Folder (Node + Express + MongoDB) This is where you will write all your Node and Express code. It handles the database connections and the secret backend logic.

  • The Client Folder (React) This is your frontend folder where your React application lives. It holds all the visual components the user will see and interact with.

Keeping these completely separate prevents your code from getting messy and makes it much easier to host your website later on.

  • Step 1 | Open VSCode
  • Step 2 | Create the ROOT folder. This folder will serve as your project folder.
mkdir todo-website

mkdir creates a new folder.

  • Step 3 | Open your root folder, todo-website, in VSCode. You are now inside your project's home.
  • Step 4 | Open your terminal in VSCode and enter this to create your backend folder, server:
mkdir server

STEP 5 — SETTING UP THE BACKEND

Inside your todo-website folder, move into your newly created server folder:

cd server

While mkdir creates a folder, cd moves you into it.

Now initialize it as a Node.js project by running:

npm init -y

The -y flag automatically answers "yes" to every setup question, which is perfectly fine for our purposes. This command creates a package.json file — think of it as your project's birth certificate. It records the project name, version, and most importantly, every package your backend depends on.

  • Running NPM Install

    When you start a new project, you need tools to help you build it. NPM stands for Node Package Manager. It is a massive online library of free code written by other developers.

  • When you type the command npm install followed by a package name in your computer terminal, your system reaches out to this library, downloads the specific code you requested, and adds it directly to your project. This is how you bring Express, React, and database connection tools into your application.

  • Setting Up Pre Installed Files

  • Whenever you initialize a new project or run an install command, your system automatically creates a very important file called package.json. Think of this file as the recipe book or the master list for your project.

It does not contain the actual heavy code for the tools you downloaded. Instead, it simply lists the names and version numbers of every single package your project needs to run properly. If you share your code with another developer, they do not need to download all your heavy files. They just need your package.json file, and their computer will know exactly what tools to download to make the code work.

1. Installing Backend Dependencies

Your backend needs several packages to function. Install them all at once with this command:

npm install express mongodb mongoose dotenv cors

Here is what each one does and why you need it:

  1. express is the web framework that lets you create routes and handle HTTP requests. Without it, you would have to write hundreds of lines of low-level Node.js code just to accept a simple request from your frontend.

  2. mongodb is the official Node.js driver for MongoDB. It is what actually handles the low-level communication between your application and your MongoDB Atlas cluster. Mongoose is built on top of it — meaning Mongoose uses the mongodb package under the hood to do its work. While Mongoose alone can sometimes pull it in as a dependency automatically, explicitly installing it yourself ensures you have the correct version, avoids version mismatch issues, and is simply the proper practice when working with MongoDB Atlas.

  3. mongoose is the tool that connects your backend to MongoDB. It also lets you define the shape of your data using something called a Schema, which you will see shortly.

  4. dotenv lets you store sensitive information (e.g. API, Passwords, etc.) like your MongoDB URI in a separate .env file instead of hardcoding it into your source code. This is a critical habit — you never want to push your database credentials to GitHub or share them publicly.

  5. cors stands for Cross-Origin Resource Sharing. By default, browsers block requests that come from a different address than the server. Since your React frontend will run on port 5173 and your backend will run on port 3000, they are technically on different "origins." The cors package tells your backend to allow these cross-origin requests.
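
If you later want to be stricter than allowing every origin, cors also accepts an options object; for example, to allow only the Vite dev server:

// Allow requests only from the React dev server instead of from any origin
app.use(cors({ origin: 'http://localhost:5173' }));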

After installation, you will notice a new folder called node_modules has appeared. This folder contains all the actual code for every package you installed. It will be very large — this is normal. You should never manually edit anything inside it and you should never push it to GitHub. We will cover how to handle that in a moment.

2. Creating the .env File

Inside your server folder, create a file named .env:

MONGO_URI=your_mongodb_connection_string_here
PORT=3000

Replace your_mongodb_connection_string_here with the connection string you copied from MongoDB Atlas earlier. This file is your backend's private configuration. It never gets shared, uploaded, or committed to version control.

To make sure it stays private, create a .gitignore file in your server folder and add these:

node_modules
.env

This tells Git to ignore those two entries completely.

3. Creating the Server File

Now create the most important file in your backend: server.js. This is the entry point, the file that starts everything up when you run your backend.

Create the file:

touch server.js

The touch command creates an empty file from the terminal.

Open server.js in your code editor and paste the following:

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');
require('dotenv').config();

const app = express();

// Middleware
app.use(cors());
app.use(express.json());

// Routes
const taskRoutes = require('./routes/tasks');
app.use('/api/tasks', taskRoutes);

// Connect to MongoDB and start server
const PORT = process.env.PORT || 3000;

mongoose
  .connect(process.env.MONGO_URI)
  .then(() => {
    console.log('Connected to MongoDB');
    app.listen(PORT, () => {
      console.log(`Server is running on port ${PORT}`);
    });
  })
  .catch((err) => {
    console.error('Failed to connect to MongoDB:', err.message);
  });

Let us break this down line by line so you understand exactly what is happening.

  • require('dotenv').config() loads your .env file so that process.env.MONGO_URI and process.env.PORT become available throughout your code.
  • app.use(cors()) tells Express to allow requests from your React frontend.

  • app.use(express.json()) tells Express to automatically parse incoming request bodies that are in JSON format. Without this, when your frontend sends task data to your backend, the backend would not be able to read it.

  • app.use('/api/tasks', taskRoutes) connects your routes file to the server and says that any URL beginning with /api/tasks should be handled by that file.

  • mongoose.connect(process.env.MONGO_URI) establishes the connection to your MongoDB Atlas cluster. Only after this connection is confirmed does the server begin listening for requests.

STEP 6 — DEFINING YOUR DATA MODEL

In MongoDB, data is stored as documents — they look a lot like JavaScript objects. But before we start storing tasks, we need to tell Mongoose what a task looks like. We do this by creating a Model. In layman’s terms, we’re trying to create a blueprint or template for our data that will be stored in the database.

Create the models folder and the Task model:

# Inside the server folder (your terminal should be in server/)

mkdir models
touch models/Task.js

Open models/Task.js and paste this:

const mongoose = require('mongoose');

const taskSchema = new mongoose.Schema(
  {
    title: {
      type: String,
      required: true,
      trim: true,
    },
    completed: {
      type: Boolean,
      default: false,
    },
  },
  {
    timestamps: true,
  }
);

module.exports = mongoose.model('Task', taskSchema);

This schema tells Mongoose that every task document in your database will have exactly three pieces of information: a title that is a required string, a completed flag that defaults to false when a new task is created, and timestamps — createdAt and updatedAt — which Mongoose adds automatically when you set timestamps: true.

module.exports at the bottom makes this model available to other files in your project, specifically your routes file, which needs it to perform database operations.

STEP 7 — BUILDING YOUR API ROUTES

Routes are where you define what happens when your frontend sends a specific type of request to a specific URL. This is where all four CRUD operations live.

Create the routes folder and the tasks route file:

mkdir routes
touch routes/tasks.js

Open routes/tasks.js and write the following:

const express = require('express');
const router = express.Router();
const Task = require('../models/Task');

// CREATE — POST /api/tasks
router.post('/', async (req, res) => {
  try {
    const task = new Task({ title: req.body.title });
    const savedTask = await task.save();
    res.status(201).json(savedTask);
  } catch (err) {
    res.status(400).json({ message: err.message });
  }
});

// READ — GET /api/tasks
router.get('/', async (req, res) => {
  try {
    const tasks = await Task.find().sort({ createdAt: -1 });
    res.status(200).json(tasks);
  } catch (err) {
    res.status(500).json({ message: err.message });
  }
});

// UPDATE — PUT /api/tasks/:id
router.put('/:id', async (req, res) => {
  try {
    const updatedTask = await Task.findByIdAndUpdate(
      req.params.id,
      { title: req.body.title, completed: req.body.completed },
      { new: true }
    );
    res.status(200).json(updatedTask);
  } catch (err) {
    res.status(400).json({ message: err.message });
  }
});

// DELETE — DELETE /api/tasks/:id
router.delete('/:id', async (req, res) => {
  try {
    await Task.findByIdAndDelete(req.params.id);
    res.status(200).json({ message: 'Task deleted successfully' });
  } catch (err) {
    res.status(500).json({ message: err.message });
  }
});

module.exports = router;

Each block here corresponds directly to one of your CRUD features. The POST route takes the title from the request body, creates a new Task document, saves it to MongoDB, and returns the saved task back to the frontend. The GET route fetches all tasks from the database, sorted from newest to oldest. The PUT route finds a specific task by its ID — which MongoDB assigns automatically — and updates its title, its completed status, or both. The DELETE route finds a task by its ID and removes it from the database entirely.

Notice that every route is wrapped in a try/catch block. This is because all database operations are asynchronous — they take time. The async/await syntax lets you write this asynchronous code in a way that reads almost like regular sequential code, and the catch block handles any errors gracefully by sending a readable error message back to the frontend instead of crashing the server.

Testing Your Backend

Before building the frontend, it is a good habit to confirm your backend works correctly on its own. Run the backend server:

node server.js

You should see two messages in your terminal:

Connected to MongoDB
Server is running on port 3000

If you see both of these, your backend is fully functional.
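
Optionally, you can exercise the API directly from a second terminal before writing any frontend code (assuming the server is running on port 3000):

# Create a task
curl -X POST http://localhost:3000/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"title":"My first task"}'

# List all tasks
curl http://localhost:3000/api/tasks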

STEP 8 — SETTING UP THE FRONTEND WITH REACT

Now we build the part that your users will actually see and interact with. Open a new terminal window, navigate back to your root todo-website folder, and run:

npm create vite@latest client -- --template react
cd client
npm install

We are using Vite here because it is significantly faster, lighter, and what most professional React developers use today. The --template react flag tells Vite to set up a React project specifically.

After running npm install, Vite will have created a frontend folder with its own node_modules, its own package.json, and a basic React application ready to run.

Now install one additional package for the frontend — axios. Axios is an HTTP client that makes it easy to send requests from your React app to your Express backend:

npm install axios

1. Cleaning Up the Vite Defaults

Vite gives you some starter files you do not need. Open src/App.jsx and replace everything inside it with a clean slate:

import { useState, useEffect } from 'react';
import axios from 'axios';
import TaskForm from './components/TaskForm';
import TaskList from './components/TaskList';
import './App.css';

const API_URL = 'http://localhost:3000/api/tasks';

function App() {
  const [tasks, setTasks] = useState([]);

  const fetchTasks = async () => {
    const response = await axios.get(API_URL);
    setTasks(response.data);
  };

  useEffect(() => {
    fetchTasks();
  }, []);

  const addTask = async (title) => {
    await axios.post(API_URL, { title });
    fetchTasks();
  };

  const updateTask = async (id, updates) => {
    await axios.put(`${API_URL}/${id}`, updates);
    fetchTasks();
  };

  const deleteTask = async (id) => {
    await axios.delete(`${API_URL}/${id}`);
    fetchTasks();
  };

  return (
    <div className="app">
      <h1>My To-Do List</h1>
      <TaskForm onAdd={addTask} />
      <TaskList tasks={tasks} onUpdate={updateTask} onDelete={deleteTask} />
    </div>
  );
}

export default App;

App.jsx is the brain of your frontend. It holds your tasks in state using useState, fetches them from your backend when the page first loads using useEffect, and passes the four CRUD functions down to the child components that need them. Notice that every CRUD operation calls fetchTasks() afterward — this is what keeps your displayed list in sync with what is actually stored in your database.

2. Creating the Components

Create the components folder:

mkdir src/components

Create three files: TaskForm.jsx, TaskList.jsx, and TaskItem.jsx then paste the code below to their respective files.

src/components/TaskForm.jsx — handles adding new tasks:


import { useState } from 'react';

function TaskForm({ onAdd }) {
  const [title, setTitle] = useState('');

  const handleSubmit = (e) => {
    e.preventDefault();
    if (!title.trim()) return;
    onAdd(title);
    setTitle('');
  };

  return (
    <form onSubmit={handleSubmit} className="task-form">
      <input
        type="text"
        placeholder="Add a new task..."
        value={title}
        onChange={(e) => setTitle(e.target.value)}
      />
      <button type="submit">Add Task</button>
    </form>
  );
}

export default TaskForm;

src/components/TaskList.jsx — renders the full list of tasks:

import TaskItem from './TaskItem';

function TaskList({ tasks, onUpdate, onDelete }) {
  if (tasks.length === 0) {
    return <p className="empty">No tasks yet. Add one above!</p>;
  }

  return (
    <ul className="task-list">
      {tasks.map((task) => (
        <TaskItem
          key={task._id}
          task={task}
          onUpdate={onUpdate}
          onDelete={onDelete}
        />
      ))}
    </ul>
  );
}

export default TaskList;

src/components/TaskItem.jsx — handles editing, completing, and deleting individual tasks:

import { useState } from 'react';

function TaskItem({ task, onUpdate, onDelete }) {
  const [isEditing, setIsEditing] = useState(false);
  const [editTitle, setEditTitle] = useState(task.title);

  const handleEdit = () => {
    if (isEditing && editTitle.trim()) {
      onUpdate(task._id, { title: editTitle });
    }
    setIsEditing(!isEditing);
  };

  return (
    <li className={`task-item ${task.completed ? 'completed' : ''}`}>
      <input
        type="checkbox"
        checked={task.completed}
        onChange={() => onUpdate(task._id, { completed: !task.completed })}
      />
      {isEditing ? (
        <input
          type="text"
          value={editTitle}
          onChange={(e) => setEditTitle(e.target.value)}
          className="edit-input"
        />
      ) : (
        <span className="task-title">{task.title}</span>
      )}
      <button onClick={handleEdit} className="btn-edit">
        {isEditing ? 'Save' : 'Edit'}
      </button>
      <button onClick={() => onDelete(task._id)} className="btn-delete">
        Delete
      </button>
    </li>
  );
}

export default TaskItem;

TaskItem has its own local state for the editing mode. When a user clicks Edit, the task title becomes an editable input field. When they click Save, it sends the updated title to your backend through the onUpdate function passed down from App.jsx. The checkbox triggers a completion toggle — every time it changes, it flips the completed boolean in your database.

3. Adding Basic Styles

  • Step 1 | Open src/App.css and replace its contents with the following to make your app presentable:
#root {
  max-width: 1280px;
  margin: 0 auto;
  padding: 2rem;
  text-align: center;
}

* {
  box-sizing: border-box;
  margin: 0;
  padding: 0;
}

body {
  font-family: 'Poppins', sans-serif;
  background: #f3f3f3;
  color: #333;
}

.app {
  width: 1000px;
  max-width: 600px;
  margin: 60px auto;
  padding: 30px;
  background: white;
  border: 1px solid #FF6200;
  border-radius: 20px;
  box-shadow: 0 4px 20px rgba(0, 0, 0, 0.08);
}

h2 {
  text-align: center;
  font-family: 'Poppins', sans-serif;
  font-size: 20px;
  font-weight: 800;
  margin-bottom: -5px;
  color: #FF6200;
}

h1 {
  text-align: center;
  font-family: 'Poppins', sans-serif;
  font-size: 50px;
  font-weight: 800;
  margin-bottom: 24px;
  color: #040042;
}

.task-form {
  display: flex;
  gap: 10px;
  margin-bottom: 24px;
}

.task-form input {
  flex: 1;
  padding: 10px 14px;
  border: 1px solid #ebebeb;
  border-radius: 10px;
  font-size: 14px;
}

.task-form button {
  padding: 10px 18px;
  background: #FF6200;
  color: white;
  border: none;
  border-radius: 8px;
  cursor: pointer;
  font-family: 'Poppins', sans-serif;
  font-weight: 700;
  font-size: 14px;
}

.task-form button:hover {
  padding: 10px 18px;
  background: #040042;
  color: white;
  border: none;
  border-radius: 8px;
  cursor: pointer;
}

.task-list {
  list-style: none;
}

.task-item {
  display: flex;
  align-items: center;
  gap: 10px;
  padding: 12px 0;
  border-bottom: 1px solid #f0f0f0;
}

.task-title {
  flex: 1;
  font-size: 15px;
}

.task-item.completed .task-title {
  text-decoration: line-through;
  color: #FF6200;
}

.edit-input {
  flex: 1;
  padding: 6px 10px;
  border: 1px solid #ccc;
  border-radius: 6px;
  font-size: 14px;
}

.btn-edit {
  padding: 4px 12px;
  border-radius: 4px;
  border: 2px solid #00B087;
  background: rgb(5, 137, 106);
  border-radius: 6px;
  cursor: pointer;
  color: white;
  font-family: 'Poppins', sans-serif;
  font-weight: 600;
  font-size: 13px;
}

.btn-delete {
  padding: 6px 12px;
  background: #cf1500;
  border: 2px solid #ff0707;
  border-radius: 6px;
  cursor: pointer;
  color: white;
  font-family: 'Poppins', sans-serif;
  font-weight: 600;
  font-size: 13px;
}

.empty {
  text-align: center;
  color: #aaa;
  margin-top: 20px;
}

  • Step 2 | Open src/index.css and replace its contents with the following to make your app further presentable:
@import url('https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700;800;900&display=swap');

:root {
  font-family: 'Poppins', -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
    'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
    sans-serif;

  color-scheme: light dark;
  color: rgba(255, 255, 255, 0.87);
  background-color: #242424;

  font-synthesis: none;
  text-rendering: optimizeLegibility;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

body {
  margin: 0;
  display: flex;
  place-items: center;
  min-width: 320px;
  min-height: 100vh;
}

h1 {
  font-size: 3.2em;
  line-height: 1.1;
}

STEP 9 — RUNNING YOUR FULL-STACK APP

You need two terminal windows open simultaneously — one for the backend, one for the frontend.

Terminal 1 — Start the backend:

cd todo-website/server
node server.js

Terminal 2 — Start the frontend:


cd todo-website/client
npm run dev

Vite will give you a local URL, typically http://localhost:5173. Open that in your browser. You should see your To-Do List app fully running. Type a task into the input field and click Add Task — it appears in the list instantly. Refresh the page — it is still there, because it is saved in MongoDB. Check the checkbox — it gets crossed out. Click Edit — the title becomes editable. Click Delete — it disappears from both the screen and the database.

Every single one of these actions travels the full length of your stack: from your React component, through Axios to your Express server, through Mongoose to MongoDB Atlas, and back — all in a fraction of a second. That is a full-stack MERN application working exactly as it should.

STEP 10 — HOW IT ALL CONNECTS: THE FULL PICTURE

Now that your application is running, it is worth pausing to understand the journey that data takes every time a user interacts with your website. This mental model is the most valuable thing you can take away from this project.

When a user types a task and presses Add Task, the following happens in sequence. React captures the input through a controlled state and calls addTask() in App.jsx. Axios sends a POST request with the task title in the request body to http://localhost:3000/api/tasks. Express receives that request on your backend server and routes it to the POST handler in routes/tasks.js. Mongoose creates a new Task document using your schema and calls .save() to write it to your MongoDB Atlas cluster in the cloud. MongoDB confirms the save and returns the newly created document, including its auto-generated _id. That document travels back through Mongoose, through Express, through your HTTP response, back to Axios on the frontend, where fetchTasks() is triggered and React re-renders the updated list on screen.

This cycle — React talks to Express, Express talks to MongoDB, MongoDB responds, React updates — is the heartbeat of every MERN application, no matter how large or complex it grows.

Yey! You have now finished your FIRST FULL STACK PROJECT!

Github Repository

https://github.com/feevolinto/todo-website

She had a PhD from MIT. She quit after 6 months because nobody knew what sls_txn_f47 meant.

2026-02-22 20:59:00

Dr. Jennifer Park walked into my office on her last day.

"I'm sorry," she said. "I really wanted this to work."

I'd recruited her personally. PhD in Machine Learning from MIT. Five years at Spotify building recommendation engines. Above-market salary. Equity. The works.

She lasted six months and four days.

"What happened?"

"I spent six months trying to do one thing: build a recommendation engine. At Spotify, I built similar systems in six weeks."

"And here?"

"Here, I spent six months just trying to understand the data."

She opened our Snowflake warehouse. 847 tables.

sls_txn_f47

usr_bhv_ag_01

car_lst_vw_2

bid_hist_tmp

"Nobody knows what these mean," she said. "The engineer who built them left two years ago. I spent three months reverse-engineering the schema. Then I discovered we have seven different definitions of user_id across tables. Seven."

"I'm not a bad data scientist," she said. "Your data is just impossible to work with."

Four months later

We hired Alex.

Same challenge: "Build a recommendation engine."

He understood the data model in 15 minutes.

Had a working prototype by end of day.

Shipped an upgraded version the next week. Clickthrough rate up 18%.

What changed?

We rebuilt the foundation.

Killed 535 zombie tables nobody was querying.

Renamed everything:

  • sls_txn_f47 → auction_transactions
  • usr_bhv_ag_01 → user_behavior_daily
  • car_lst_vw_2 → car_listings_current

Created one source of truth for every entity.

Documented everything.

Asked "Is this stupidly simple yet?" until the answer was yes.

The test:

Old model: 30 minutes to find last month's revenue

New model: 30 seconds

Alex understood the structure in 15 minutes because the naming was self-explanatory. Actually building the recommendation engine took the rest of the day.

But he wasn't stuck for weeks reverse-engineering cryptic schemas like Jennifer was.

The lesson

You can't build on top of chaos.

Jennifer was brilliant. The data was just impossible to work with.

How many great engineers have you lost because your schema looked like tbl_usr_tmp_20220304?

This is a scene from The Auction Block — a business fable I wrote about what data/analytics teams get wrong (and how to fix it). Think The Phoenix Project but for data & AI teams. I promise you will become a better version of yourself if you thumb through it!

If you've ever inherited a data graveyard and had to rebuild it, you might find it useful.

Available on Kindle & paperback - https://www.amazon.com/Auction-Block-Novel-About-Teams-ebook/dp/B0GM8BRVWC