
Express Setup: Simple & Scalable

2026-01-18 04:42:48

There are a few things conventional wisdom has taught us to do when making an Express microservice, from the layout of our source files to how we document the code we're writing. I'll be honest: I haven't always adhered to these conventions. Sometimes, you just need to get code out the door, and all that boilerplate can really slow you down.

In this post, I'll take a quick look at how we're traditionally told to create an Express microservice, explain what I do differently, and walk you through a very simple project to see it all in action.

An Introduction to Express

Express is one of those tools that feels like magic when you first use it. It's a minimal and flexible Node.js framework that makes building web applications and APIs a breeze. Whether you're spinning up a quick prototype or building a production-ready microservice, Express has you covered.

At its core, Express is all about simplicity. It doesn’t force you into a specific structure or way of doing things, which is both its greatest strength and, sometimes, its biggest challenge. With so much freedom, it’s easy to get overwhelmed by all the "best practices" floating around.

Routes

Routes are the backbone of any Express app. They define how your application responds to different HTTP requests. For example, if someone sends a GET request to /users, you might want to return a list of users. Express makes this super simple:

app.get('/users', (req, res) => {
  res.send('Here’s a list of users!');
});

The conventional wisdom is to separate your routes into their own files to keep things organised.
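
Following that convention, a standalone route file might look something like this (a minimal sketch; the file name and mount path are just examples):

// routes/users.js
import { Router } from 'express';

const usersRouter = Router();

// GET /users (relative to wherever this router is mounted)
usersRouter.get('/', (req, res) => {
  res.send('Here’s a list of users!');
});

export default usersRouter;

In your main file, you'd then mount it with something like app.use('/users', usersRouter).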

Controllers

Controllers are where the logic lives. They take the requests from your routes, do the heavy lifting (like talking to a database), and send back a response. A typical setup might look like this:

// usersController.js
export const getUsers = (req, res) => {
  res.send('Here’s a list of users!');
};

Then, in your route file, you’d import the controller:

import * as usersController from './usersController';
app.get('/users', usersController.getUsers);

Middleware

Middleware is one of the coolest parts of Express. Think of it as a series of steps your request goes through before it gets to your route. Middleware can do things like log requests, check authentication or parse JSON bodies.

Here’s a simple example of middleware that logs every request:

app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next(); // Pass control to the next middleware or route
});

OpenAPI (Swagger)

Documenting your API is one of those things we all know we should do, but it often gets pushed to the bottom of the to-do list. OpenAPI (often referred to as Swagger) is the de facto method for documenting your APIs.

Whilst this is great in theory, OpenAPI documents are complex and time-consuming to write, and they often drift out of sync with the actual API, which makes debugging a nightmare.

Here’s how you would serve your swagger file from Express:

import swaggerUi from 'swagger-ui-express';
import swaggerDocument from './swagger.json';

app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
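
To give a sense of the overhead, even a single endpoint needs a fair chunk of OpenAPI. A hand-written swagger.json covering just our GET /users route might look something like this (a sketch, not generated from a real project):

{
  "openapi": "3.0.0",
  "info": { "title": "Users service", "version": "1.0.0" },
  "paths": {
    "/users": {
      "get": {
        "summary": "Gets a list of users.",
        "responses": {
          "200": { "description": "A list of users." }
        }
      }
    }
  }
}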

There are packages that attempt to avoid the desync issue by discerning meaning directly from the code and its comments, but I find these make the code messy and aren't always correct.

A Brief Look at JSDoc

If you’ve ever worked on a project for more than a week, you’ve probably had that moment where you look at a function and think, “What does this even do?” That’s where JSDoc comes in. It’s a simple way to add comments to your code that describe what your functions, classes and variables are supposed to do.

JSDoc is great because it doesn’t just help you remember what your code does, it also helps others (or future you) understand it. Plus, tools like VSCode can use JSDoc comments to provide inline documentation and autocomplete suggestions, which is a huge productivity boost.

Here’s a quick example of what JSDoc looks like:

/**
 * Gets a list of users.
 * @param {Request} req - The request object.
 * @param {Response} res - The response object.
 * @returns {void}
 */
function getUsers(req, res) {
  res.send('Here’s a list of users!');
}

The @param tags describe the parameters the function takes, and the @returns tag explains what the function returns. It’s simple, but it makes a big difference when you’re working on a team or revisiting old code.

I love JSDoc and use it as much as possible. It also has a handy little feature that will generate HTML pages for you based on your comments.

My structure

OK, let's talk about how I go about setting up a project. You'll have noticed my aversion to excessive boilerplate, but it's not entirely avoidable. We're going to set up a little bit of code to start with; then, hopefully, everything else in the project will get easier.

Setting up a project

First things first, we need to actually initialise our project: create a directory and run the init command.

npm init

We’ll be asked a few questions (name, description, etc). Fill these out and, when you're finished, a package.json file will be created for you. This is step one.

Next, we're going to want to install our dependencies. I'll just run down the list for now but I'll explain what each is for as and when we require them.

npm i express
npm i --save-dev @eslint/js clean-jsdoc-theme eslint eslint-config-prettier eslint-plugin-prettier jsdoc prettier

That might feel like we've just installed a lot, but only one of the dependencies is actually required. The rest are all just to make our lives easier.

Linting and formatting your code

When we write code, we're prone to mistakes. We also all write code a little differently, even when writing with the help of an AI, and what's clear to one dev might be really hard to read for another. To help mitigate these issues, we can use ESLint to look through our code for obvious mistakes and Prettier to format our code, making all of our coding styles read similarly even if not exactly the same.

ESLint

ESLint statically analyses your code to quickly find problems. It can catch common errors, enforce style guidelines, and help prevent bugs before they happen. Depending on the rules you want to enforce, your config can be as complex or as simple as you like. As a starting point, I tend to just include a few recommended rules; @eslint/js has js.configs.recommended.rules, for instance. I also link my Prettier config here so ESLint will help keep everything tidy.
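
As a rough sketch (assuming ESLint 9's flat eslint.config.js format, an ES-module project like the snippets below, and eslint-plugin-prettier's bundled recommended config), that setup might look like this:

// eslint.config.js
import js from '@eslint/js';
import eslintPluginPrettierRecommended from 'eslint-plugin-prettier/recommended';

export default [
  js.configs.recommended,          // the recommended core rules
  eslintPluginPrettierRecommended, // runs Prettier as an ESLint rule and disables conflicting rules
];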

Prettier

Prettier is a code formatter that just handles all the boring stuff for you. Instead of arguing about tabs vs spaces or lining everything up perfectly by hand, you just write your code and Prettier makes it look clean and consistent. It's opinionated (on purpose), so you don't waste time picking styles; everything is just styled. Again, the rules used can vary depending on where your code is going or who is working on it, but I have some simple rules I like to use everywhere.
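
As an example of the sort of simple rules I mean (these particular values are illustrative, not a recommendation), an ES-module project can keep them in a prettier.config.js:

// prettier.config.js
export default {
  semi: true,           // always end statements with semicolons
  singleQuote: true,    // prefer 'single' over "double" quotes
  trailingComma: 'all', // trailing commas make for smaller diffs
  printWidth: 100,      // wrap lines at 100 characters
};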

BaseRoute Class

Earlier I mentioned Routes and Controllers and how it's generally advised to split them up into separate files or even directories. Yeah, I don't do that. I keep my Routes and Controllers together in a single Route Class, which contains all controllers and also attaches itself to the Express app. Let's look at how that works.

export default class BaseRoute {
  constructor(path, router) {
    if (!path) {
      throw new Error('Path is required');
    }

    if (!router) {
      throw new Error('Router is required');
    }

    this.path = path;
    this.router = router;
    this.init();
  }

  /**
   * Initialises the route.
   */
  init() {
    // This method should be overridden in subclasses
    throw new Error('init() method must be implemented in subclass');
  }

  all(path, ...props) {
    this.router.all(`${this.path}${path}`, ...props);
  }

  get(path, ...props) {
    this.router.get(`${this.path}${path}`, ...props);
  }

  post(path, ...props) {
    this.router.post(`${this.path}${path}`, ...props);
  }

  put(path, ...props) {
    this.router.put(`${this.path}${path}`, ...props);
  }

  delete(path, ...props) {
    this.router.delete(`${this.path}${path}`, ...props);
  }

  patch(path, ...props) {
    this.router.patch(`${this.path}${path}`, ...props);
  }

  options(path, ...props) {
    this.router.options(`${this.path}${path}`, ...props);
  }

  head(path, ...props) {
    this.router.head(`${this.path}${path}`, ...props);
  }
}

First we have a constructor that takes a path and a router. The path is the start of the URL, /some/url for instance, that will be used for all endpoints exposed by this class. The router is the Express router; we'll pass that in from outside so we don't have to declare it multiple times.

You'll notice the constructor also calls the init function, but all that does is throw an error telling us we need to implement it.

Extending the base class

When we actually want to add a new Route, we simply have to extend our base class and away we go.

/**
 * Example route for returning simple responses.
 */
class ExampleRoute extends BaseRoute {
  /**
   * Creates an instance of the ExampleRoute class.
   *
   * @param {object} router - The Express router instance.
   */
  constructor(router) {
    super('/example', router);
  }

  init() {
    this.get('', this.simpleResponse);
    this.get('/error', this.errorResponse);
  }

  /**
   * @category API
   * @summary GET ./example
   * @desc Returns a simple JSON response.
   *
   * @returns 200 - Success - Returns an object with 'data' and 'error' properties.
   * @returns 500 - Internal Server Error - An error occurred while processing the request.
   *
   * @example Request:
   * GET http://localhost:5000/example
   *
   * @example Success response:
   * {
   *  "data": "Example",
   *  "error": null
   * }
   */
  simpleResponse(req, res) {
    res.json({ data: 'Example', error: null });
  }

  /**
   * @category API
   * @summary GET ./example/error
   * @desc Returns an error response.
   *
   * @returns 500 - Internal Server Error - An error occurred while processing the request.
   *
   * @example Request:
   * GET http://localhost:5000/example/error
   *
   * @example Error response:
   * {
   *  "data": null,
   *  "error": "This is an example error."
   * }
   */
  errorResponse() {
    throw new Error('This is an example error.', { cause: { status: 500 } });
  }
}

export default ExampleRoute;

You will notice I am passing the methods directly, such as this.get('', this.simpleResponse). In these specific examples, it works fine because the controllers don't rely on any internal class state. However, if you want to access other properties or methods within your class using this, you will need to bind the method in the init() function: this.simpleResponse.bind(this).
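
As a quick illustration (GreetingRoute and its greeting property are made up purely for this example), a route that keeps state on the instance would need that bind:

class GreetingRoute extends BaseRoute {
  constructor(router) {
    super('/greeting', router);
    this.greeting = 'Hello!'; // instance state, just to show why binding matters
  }

  init() {
    // Without .bind(this), `this.greeting` would be undefined when Express invokes the handler
    this.get('', this.sayHello.bind(this));
  }

  sayHello(req, res) {
    res.json({ data: this.greeting, error: null });
  }
}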

The majority of this code is JSDoc, which you might think is overkill, but it's going to write our documentation for us and because it's right there next to our routes, we'll probably remember to update it when we update the route.

In order to add this new collection of endpoints to our router, we simply instantiate the ExampleRoute class in the same file where we define our router, pass that router in, and that's it.

const app = express();
const router = express.Router();

new ExampleRoute(router);
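
For completeness, the rest of the wiring is just the usual Express boilerplate; something along these lines (port 5000 to match the JSDoc examples above):

app.use(express.json()); // parse JSON request bodies (optional, but most services want it)
app.use(router);         // attach the router, and with it every route class we've registered

app.listen(5000, () => {
  console.log('Listening on http://localhost:5000');
});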

Defining our Docs

In our JSDoc comments, we describe our endpoints using a set of tags. You can look at the different tags and what they do over on JSDoc's website, but here's what we've used and why.

  • @category: This is brilliant for grouping related routes. If your microservice has multiple classes (e.g., UserRoute, PaymentRoute), setting the category to "API" ensures they all appear together in the generated sidebar.

  • @summary: A short, one-line description of what the route does. This often appears in the table of contents or headers.

  • @desc: A more detailed explanation of the logic or requirements for that specific endpoint.

  • @returns: In a standard function, this describes a JavaScript return value. In our API context, we use it to document HTTP status codes and the expected JSON structure.

  • @example: This is perhaps the most useful tag. It allows you to provide a "copy-paste" snippet of a request and its expected response, which is invaluable for front-end developers.

Serving our docs

If we had to read through the source code to understand the endpoints, JSDoc would still be useful for code readability, but we would be missing its best feature. We can build a professional webpage from our comments using the jsdoc package and the theme we installed earlier.

Make a config file called jsdoc.config.json

{
  "source": {
    "include": ["."],
    "exclude": ["node_modules", "docs"]
  },
  "opts": {
    "readme": "./README.md",
    "template": "node_modules/clean-jsdoc-theme",
    "destination": "./docs",
    "recurse": true
  }
}

and then modify your package.json file so it has a script called jsdoc (you can name it anything, but I've named mine jsdoc). This script will call jsdoc -c jsdoc.config.json and that's it. Simply run the npm command and it will generate a webpage for you, with your docs in human-readable form, ready to be hosted.

npm run jsdoc
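
For reference, the relevant part of package.json ends up looking like this:

"scripts": {
  "jsdoc": "jsdoc -c jsdoc.config.json"
}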

Simple Example Project

In order to show this layout in a slightly more real world scenario (though still incredibly simplified) I've made a quick project you can look at and the documentation to go alongside it.

Fin

There we have it, the way I tend to write Express services. I don't think this way is any 'better' than any other way, but it makes sense in my brain. Feel free to share any ways you prefer to write these services in the comments, or even tell me ways in which you think this way is worse.

Thanks for reading! If you'd like to connect, here are my BlueSky and LinkedIn profiles. Come say hi 😊

Git and GitHub for Beginners: A Friendly Guide

2026-01-18 04:37:08

Installing Git Bash

  1. Google Git Bash.

  2. Download the Windows installer.

  3. Run the installer with these recommended settings:

  • Select "Use Git from Git Bash only"
  • Choose "Use the OpenSSL library"
  • Select "Checkout Windows-style, commit Unix-style line endings"
  • Choose "Use MinTTY"
  • Leave other options as default

Connecting Git to Your GitHub Account

  1. Check your Git version:

git --version

  2. Configure your identity:

git config --global user.name "name"
git config --global user.email "email"

  3. Generate a new SSH key:

ssh-keygen -t ed25519 -C "email"

Your public key will be saved to:

C:\Users\YOUR_USERNAME\.ssh\id_ed25519.pub

Add SSH Key to GitHub

  1. Go to your GitHub account.
  2. Go to Settings, then click "SSH and GPG keys".
  3. Click "New SSH key".
  4. Paste your key and give it a descriptive name.
  5. Click "Add SSH key".

Test Your Connection

ssh -T git@github.com

WHAT IS GIT AND WHY IS VERSION CONTROL IMPORTANT

Git is a free, open-source distributed version control system (DVCS) used by developers to track changes in source code and other files during software development. It allows multiple people to collaborate on the same project without overwriting each other's work and enables users to revert to previous versions if needed.

IMPORTANCE OF VERSION CONTROL
• Prevents losing work when mistakes happen.
• Makes collaboration smooth and organized.
• Helps track who made which changes and when.
• Encourages experimentation—you can try new ideas without fear of breaking the main project.

HOW TO PUSH CODE TO GITHUB
GitHub is a platform that hosts your Git repositories online. Pushing code means sending your local changes to GitHub so others (or future you) can access them.
Steps to push code:
  1. Initialize Git in your project (if not already):

git init

  2. Add your files to Git's staging area:

git add .

  3. Commit your changes with a message:

git commit -m "Initial commit"

  4. Connect your project to GitHub:

git remote add origin https://github.com/username/repository.git

  5. Push your code:

git push -u origin main
HOW TO PULL CODE FROM GitHub

Pulling means downloading the latest changes from GitHub to your local machine.
Steps to pull code:
  1. Make sure you're inside your project folder.
  2. Run:

git pull origin main

This updates your local project with any new commits from GitHub.

HOW TO TRACK CHANGES USING GIT
Git makes it easy to see what’s happening in your project.
  • Check the status of your files:

git status

This shows which files are new, modified, or staged.

  • View commit history:

git log

This displays a list of commits with messages, authors, and timestamps.

  • Compare changes before committing:

git diff

This shows the exact lines that were added or removed.

Clean up your Controllers: Mastering File Uploads

2026-01-18 04:32:29

Handling file uploads is a daily task for ASP.NET Core developers. However, writing the validation logic and saving process inside every single Controller Action violates the DRY (Don't Repeat Yourself) principle.

In this guide, we will create a reusable Extension class to handle file validation (Type & Size) and the saving process cleanly.

Step 1: Define the Size Enums

First, to make our size validation readable, let's define a simple Enum. This avoids "magic numbers" in our code.

namespace CoreApp.Utilities.Enums
{
    public enum FileSize
    {
        KB,
        MB,
        GB
    }
}
Step 2: File Size Validation

Next, we implement the logic to check if a file exceeds a specific limit. Using our FileSize enum makes the call site very readable.

// These extension methods live inside a static extensions class (the class declaration isn't shown here)
// Returns TRUE if the file is LARGER than the limit
public static bool ValidateSize(this IFormFile formFile, FileSize fileSize, int size)
{
    switch (fileSize)
    {
        case FileSize.KB:
            return formFile.Length > size * 1024L;
        case FileSize.MB:
            return formFile.Length > size * 1024L * 1024;
        case FileSize.GB:
            // long arithmetic so limits of 2 GB or more don't overflow int
            return formFile.Length > size * 1024L * 1024 * 1024;
    }
    return false;
}

Step 3: The Save Logic (Async)

This is the most critical part. We need to:

Generate a unique filename (using Guid) to prevent overwriting existing files.

Combine paths safely using Path.Combine (works on Windows/Linux).

Save the file asynchronously to avoid blocking the thread.

// Handles the unique naming and saving process
public async static Task<string> CreateFileAsync(this IFormFile formFile, params string[] roots)
{
    // 1. Generate unique name (Guid + Original Name)
    string fileName = string.Concat(Guid.NewGuid().ToString(), formFile.FileName);

    string path = string.Empty;

    // 2. Combine path parts dynamically
    for (int i = 0; i < roots.Length; i++)
    {
        path = Path.Combine(path, roots[i]);
    }

    // Append the filename to the final path
    path = Path.Combine(path, fileName);

    // 3. Create the stream and copy the file content
    using (FileStream fileStream = new(path, FileMode.Create))
    {
        await formFile.CopyToAsync(fileStream);
    }

    // Return the name to save in the Database
    return fileName;
}

Why did we use Async/Await?
You might notice we used CopyToAsync instead of CopyTo. Here is why asynchronous programming is critical for File Uploads.

  1. Non-Blocking I/O (Scalability)

Saving a file is an I/O bound operation. If we use synchronous code, the thread waits until the file is fully saved. With async, the thread is freed up to handle other user requests while the disk writes the file.

// ❌ BAD: Blocks the thread (Server freezes for other users)
formFile.CopyTo(stream); 

// ✅ GOOD: Thread is free while file saves
await formFile.CopyToAsync(stream);
  2. Parallel Execution (Performance)

If a user uploads multiple product images (e.g., a gallery), we don't need to save them one by one. Using Task.WhenAll, we can save 5 images simultaneously!


public async Task<List<string>> UploadMultipleAsync(List<IFormFile> files)
{
    var tasks = new List<Task<string>>();

    foreach (var file in files)
    {
        // Starts the task but doesn't wait yet
        tasks.Add(file.CreateFileAsync("wwwroot", "uploads")); 
    }

    // Runs all uploads at the exact same time
    string[] results = await Task.WhenAll(tasks); 

    return results.ToList();
}

How to Use It in a Controller?

Now, look at how clean our Controller becomes. No heavy logic, just simple checks!

[HttpPost]
public async Task<IActionResult> Create(CreateProductVM model)
{
    // 1. Check if it's an image
    if (!model.Photo.ValidateType("image/"))
    {
        ModelState.AddModelError("Photo", "Please upload a valid image file.");
        return View(model);
    }

    // 2. Check if it's smaller than 2MB
    if (model.Photo.ValidateSize(FileSize.MB, 2))
    {
        ModelState.AddModelError("Photo", "File size must be less than 2MB.");
        return View(model);
    }

    // 3. Save the file and get the name
    // Saves to: wwwroot/assets/images/products/
    string fileName = await model.Photo.CreateFileAsync(_env.WebRootPath, "assets", "images", "products");

    // ... Save to DB logic here ...

    return RedirectToAction(nameof(Index));
}

Conclusion
By using Extension Methods, we moved the infrastructure logic out of the Controller. This makes our code testable, reusable, and strictly follows the Single Responsibility Principle.

I Needed an APP to Track My Learning Journey, AI built it in less than half an hour with a single prompt

2026-01-18 04:31:26

I have been trying to build a Learning Hub app for a good few months using various tools. I've had many iterations back and forth trying to get it to work, and ended up going round in circles, fixing one thing and breaking another.

Today I gave it a shot using the goose coding agent and Claude Opus 4.5 as my model. I used a simple prompt. No .prd file. Just clear instructions of what I wanted, and I asked it to come up with a plan.

And it did. The plan was great. It started off with: What a great idea! Let me put together a comprehensive plan for this app. I'll call it "AI Learning Hub" - your personal knowledge management system for AI learning resources.

I got a Core Features Overview, Tech Stack Recommendation, Data Structure, UI Layout Concept, Build phases and some extra nice to have stuff for future improvements.

layout of a site design in goose

I asked it to change two things and then off it went. In less than half an hour it was not just built but tested before my eyes using the Playwright MCP. I watched as it added links, tested out the filters, and used the site like a user would, ensuring everything worked.

goose using Playwright MCP to verify its work

AI Learning Hub - Complete!

My agent told me: I've built your AI Learning Hub app! Here's a summary of what's been created:

Features Implemented

  • Add URLs with metadata fetching: ✅ Auto-fetches title, description, detects category
  • Categories: ✅ Blog, Video, Podcast, Course, Paper, Other
  • Learning workflow: ✅ Queue → Learning → Completed
  • Search: ✅ Full-text search across resources
  • Category filter: ✅ Filter dropdown
  • Notes: ✅ Rich notes modal per resource
  • Drag & Drop: ✅ dnd-kit installed, drag handles ready
  • Content Ideas: ✅ Sidebar with types (Blog, Video, Tutorial, Thread)
  • Password protection: ✅ Environment variable based
  • Netlify ready: ✅ Config file created

ai learning hub site

And how do I add the URLs of the sites I want to track? Easy. I just paste them into goose, and it uses the Playwright MCP (which I'd already configured) to populate them all for me, so I don't have to.

Conclusion

And that's it: finished and working in less than half an hour, while I drank a beer on a Saturday night, watching in amazement as something that should have taken me weeks to build came together before my very eyes in minutes.

Now here is the thing. I didn't open an editor once. I haven't looked at the code. It is working as it should and that is really all I care about for this particular project. I studied coding. I care about the quality of code but right now I am ok with not caring. I am ok with trusting the agent and LLM to ensure the code is good and meets the standards it should.

I will add tests and check performance, and out of curiosity I might just look at the code when doing the PR. But I am seriously blown away by how easy it is to do this.

Try it yourself

Want to give it a try yourself? Here is the prompt I used:

I would like you to build me an app so that I can easily manage urls for blog posts, podcasts, videos and other things that I would like to learn when it comes to AI. It would be great to be able to easily add the URL and then have a title and description field which can be populated when adding it. search by category would be great. I would be cool to have some sort of system like a todo list so when it is done it goes to a different place but is still findable should i want to share it with someone. maybe even notes so i could add some notes on it for later findings or note taking. should be able to prioritize things so that i learn things based on a particualar order maybe drag or drop so i can change it. it should be a fun app that i can easily deploy, nice and easy on the eye. it would also great to have a section where i can put ideas on content creation based off of the stuff I have learnt. these could be create blog posts, videos etc. just an idea and not sure if this will look great but we could try it out. can you come up with some sort of a plan for this.

🌽 *orn (Porn Quitter Conversational AI Agent) — A Private Recovery Companion in a Week

2026-01-18 04:18:30

This is a submission for the Algolia Agent Studio Challenge: Consumer-Facing Conversational Experiences

What I Built

I built corn, a private, consumer-facing conversational AI designed to support people who are trying to quit porn and regain control over compulsive habits.

corn is not a general chatbot and not a therapist.
It’s a calm, judgment-free recovery companion that focuses on:

  • Managing urges in the moment
  • Handling relapse without shame
  • Staying motivated during difficult phases
  • Following a structured 90-day recovery program
  • Anonymous journaling and self-reflection

The core problem corn addresses is isolation. Many people struggle silently with this habit and don’t want lectures, guilt, or explicit discussions. corn provides a safe space where users can simply talk — especially during moments when willpower is weakest.

The conversational experience is intentionally simple:

  • Short, supportive responses
  • No explicit content
  • No medical claims
  • Focused on “get through this moment” rather than perfection

Demo

🔗 Live Demo:
👉 https://corn-quitter.vercel.app/

📸 Screenshots:

Note: the rate limit on the Google Gemini 2.5 Flash free tier sometimes stops a request from getting a response. Overall, the app works correctly in free testing using the Algolia Sandbox OpenAI setup.

Testing Video G-Drive Link:

https://drive.google.com/file/d/17dZtFUAm1q4cPE5yExNzTKs60kbANQee/view?usp=sharing

How I Used Algolia Agent Studio

Algolia Agent Studio is the core engine behind corn’s conversational experience.

Instead of putting everything into one index, I designed the agent using multiple purpose-driven indexes, each with a clear responsibility:

Indexed Data Structure

corn_core_intents
Handles real-time conversations like urges, relapse support, motivation, and fallback handling.

corn_90_day_program
Contains structured recovery logic mapped to days and phases (Day 1–90).

corn_journaling_prompts
Stores anonymous journaling prompts that help users process emotions through writing.

Why this mattered

This separation allowed me to:

  • Route user queries to the right knowledge source
  • Avoid mixing emotional support with structured program data
  • Keep responses predictable, safe, and context-aware

Prompt & Instruction Design

I used strict system instructions to ensure the agent:

  • Never produces explicit or triggering content
  • Uses a supportive, non-judgmental tone
  • Stays strictly within recovery scope
  • Uses emojis sparingly to maintain warmth 🌱

Retrieval from Algolia indexes ensures the agent responds based on indexed intent-specific data, not generic LLM guessing.

Why Fast Retrieval Matters

For this use case, speed and relevance are critical.

When someone types:

“I have an urge right now”

They don’t want:

  • A long explanation
  • A generic motivational speech
  • A delayed response

They need:

  • The right response
  • Immediately
  • In the right emotional tone

Algolia’s fast, contextual retrieval ensures:

  • The correct intent is matched instantly
  • The agent responds with focused, calming guidance
  • No unnecessary or off-topic content is introduced

This makes the experience feel present and reliable, which is especially important for sensitive, time-critical moments.

DEV Team Member Id : https://dev.to/abbas7120

How to DoS A server

2026-01-18 04:17:13

Disclaimer
This experiment was done in a controlled home lab on systems I own. No real-world systems were harmed. Oh, and it's for educational purposes only.

How It Started

Like many people learning cybersecurity, I used to think Denial of Service (DoS) attacks required massive botnets, insane bandwidth, and Hollywood-level hacking. Turns out, they don't. All you need is a vulnerable server and a way to exploit that vulnerability. No malware. No exploits. Just bad application design and a can-do attitude.

DoS vs DDoS

A DoS (Denial of Service) attack uses a single source to overwhelm a target, making it simpler to execute and ultimately block, while a DDoS (Distributed Denial of Service) attack uses a network of compromised devices, called a botnet, to generate massive, distributed traffic, making it far harder to detect and mitigate, and more impactful. The key difference is distribution: DoS is one-to-one, while DDoS is many-to-one, leveraging a "distributed" network for greater power and stealth. Since I am using one machine to attack a target, this is a DoS.

The Lab I Built

I set up a small, isolated lab using Oracle VirtualBox:

  • Kali Linux → attacker machine
  • Ubuntu Linux → target machine

Both VMs were connected using a Host-only network so they could communicate with each other while nothing was exposed to the internet, for security reasons and, most importantly, so that everything stayed legal and safe. This was the whole "real" environment: two machines, two roles, one goal.

The Vulnerable Server

Let's play a game. Can you spot what is wrong with the Python code below?

from http.server import BaseHTTPRequestHandler, HTTPServer
import time

a = 0

class VulnerableHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global a
        time.sleep(3)
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write(bytes(f"Request processed. Count: {a}", "utf-8"))
        a += 1

def run(server_class=HTTPServer, handler_class=VulnerableHTTPRequestHandler, port=8080):
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    print(f"Starting vulnerable server on port {port}...")
    httpd.serve_forever()

if __name__ == "__main__":
    run()

Did you see it?
time.sleep(3)

This was added to simulate “heavy processing,” and this single line would become the entire vulnerability. Why is this server vulnerable?

  • It is single-threaded
  • Each request blocks for 3 seconds
  • No rate limiting
  • No concurrency handling

This makes it an ideal target for an application-layer DoS attack.

Starting the Server

On Ubuntu, I start the server with:

python3 vulnerable_server1.py

The server is now running on port 8080, which I can verify from Kali by running:

curl http://192.168.56.103:8080

The server should respond correctly, and each request increases the counter.

The Attack

On Kali, I ran this command:
ab -n 50 -c 10 http://192.168.56.103:8080/

At first glance, it doesn't look dangerous: 50 total requests, 10 at a time, which for most servers is nothing. But in this test environment, responses slowed down almost immediately and requests started queuing. The server felt "stuck". I hadn't flooded the network, and I hadn't crashed the OS. I had simply asked the server to do more than it was designed to handle.

What Actually Happened

Here's the problem: since the server is single-threaded, each request blocks for 3 seconds, and only one request can be processed at a time. So when 10 requests arrive simultaneously, one request runs while the other 9 wait, and new requests pile on top. With 50 requests at 3 seconds each and only one handled at a time, the full run takes roughly 150 seconds, so the later requests sit in the queue for minutes. This wasn't a bandwidth attack. It was an application-layer DoS.

Watching the Attack in Wireshark

As a bonus, and to truly understand what was happening, I opened Wireshark on my Kali machine and captured traffic during the attack.

I applied this filter:
ip.addr == 192.168.56.103 && tcp.port == 8080

What I saw:

  • Repeated HTTP GET requests
  • Delayed responses
  • TCP retransmissions
  • Congestion building up

In the real world, this kind of vulnerability exists in:

  • Internal tools
  • APIs
  • Microservices
  • Custom dashboards

Attackers don't always break things. Sometimes they just wait for your code to break itself. Admittedly, a vulnerability like this would rarely make its way to production, and if it did, it would be fixed or patched almost immediately, but it still makes for a good learning exercise.

This could have been prevented by using an async or multi-threaded server, adding rate limiting, and/or placing a reverse proxy (like Nginx) in front of the application.

Conclusion

Breaking my own server was one of the most valuable lessons I’ve learned so far. It showed me how application-layer DoS really works, why design choices matter and what attack traffic actually looks like. If you’re learning cybersecurity, I highly recommend building labs like this. They’re safe, legal, and incredibly eye-opening.

As an add-on: I made this blog post without a plan for how to finish it. I also tested out an HTTP spray DoS attack, but the blog was getting too long for that. If you want to check out the tool, it's on my GitHub. I'm planning on improving it in the coming days if I ever get the time.