The Practical Developer

A constructive and inclusive social network for software developers.

Thread Pool

2025-11-14 02:55:51

1. The Problem with Creating Threads Per Request

Ever implemented a server that creates a new thread for every request and destroys it when done? This approach has some critical issues you might not immediately notice.

  • Performance Issues
    Thread creation is an expensive operation. The OS has to allocate stack memory, set up execution context, and handle various initialization overhead.

  • Resource Exhaustion
    When thousands of concurrent requests hit your server, an equal number of threads get created. Memory gets exhausted rapidly, and your system becomes unresponsive.

Thread Pool solves this elegantly.

2. Thread Pool


The core idea is simple:

  • Pre-create threads and keep them waiting
  • Assign work to idle threads when requests come in
  • Reuse threads instead of destroying them after work completes

Benefits

  1. Performance Boost: Eliminates thread creation overhead
  2. Resource Control: Pool size limits concurrent thread count
  3. Stability: Thread count stays bounded even during traffic spikes

Think of it like a restaurant: instead of hiring a chef every time a customer orders, you keep N chefs on staff and distribute orders among them.

Architecture

A Thread Pool consists of three main components:

1. Worker Threads

  • N pre-created threads
  • Stay alive, waiting for work

2. Task Queue

  • Queue where pending tasks wait
  • Must be thread-safe (multiple threads access it)

3. Work Distribution Logic

  • Idle threads fetch tasks from the queue
  • After completing work, they loop back to check the queue


3. Code Example

Using a Thread Pool

// Create Thread Pool
size_t num_threads = std::thread::hardware_concurrency(); // CPU core count
ThreadPool pool(num_threads);

// Insert tasks into the queue
// (assumes a `requests` container is in scope; capture what each task needs)
for (int i = 0; i < 1000; i++) {
    pool.enqueue([i, &requests]() {
        processRequest(requests[i]);
    });
}
// Pool internally maintains only N threads while processing 1000 tasks

Basic Thread Pool Structure

class ThreadPool {
public:
    ThreadPool(size_t threads);
    ~ThreadPool();

    // Add task to queue
    template<class F>
    void enqueue(F&& f);

private:
    std::vector<std::thread> workers;           // Worker threads
    std::queue<std::function<void()>> tasks;    // Task queue

    std::mutex queue_mutex;                     // Protects queue
    std::condition_variable condition;          // Wait/notify threads
    bool stop = false;                          // Shutdown flag
};

Steps on how to create Virtual Machine

2025-11-14 02:51:59

In cloud computing, a virtual machine (VM) is a digital representation of a real computer operating in a virtualized environment. It runs on shared physical hardware under the control of a cloud provider, but it has its own operating system, storage, and apps.

The first step in any project is to create a resource group.
Below are the steps to create one.

Step 1
In the Microsoft Azure portal, type Resource Group in the search bar and press Enter; the Resource Groups service appears with its icon.

Step 2
Select Resource Groups, and in the resource group pane, select + Create.

Steps on how to create a virtual machine

Step 1
In the Microsoft Azure portal, type Virtual Machines in the search bar and press Enter.

Step 2
Click the drop-down arrow next to + Create and choose Virtual machine.

Step 3
In the Create a virtual machine pane, navigate to the Resource group field. If you created a resource group earlier, select it from the drop-down; otherwise, click Create new.

Step 4
Enter a name for the virtual machine, select a Region from the drop-down, and choose an Availability option.

Step 5
Select a Security type from its drop-down, then select an Image from its drop-down.

Step 6
Under Administrator account, enter a username, then enter and confirm a password (both password fields must match).

Step 7
Under Select inbound ports, choose the ports from the drop-down and tick the confirmation checkbox.

Step 8
On the Monitoring tab, disable Diagnostics.

Step 9
On the Tags tab, enter a Name and a Value, then click Review + create.

Step 10
When the Validation passed message appears, scroll down to review the remaining details, then click Create.

Step 11
When the deployment completes, a "Your deployment is complete" message appears; click Go to resource.

To keep the virtual machine's remote session from being dropped by idle-timeout disconnections:

Step 1
Navigate to the primary NIC's public IP address and click the link.

Step 2
In the public IP address pane that opens, increase the Idle timeout value.

Steps on how to connect to the virtual machine

Step 1
On the Overview page, click the drop-down arrow on the Connect tab and choose Connect.

Step 2
Click Connect access and download the RDP file.

Step 3
Open the downloaded RDP file and click Connect.

Step 4
In the Windows Security dialog, enter your password and click OK.

I hope this article was educational.

How to Integrate WebAssembly: 7 Practical Patterns for Faster Web Applications

2025-11-14 02:47:22


When I first started working with web applications, I noticed that some tasks felt sluggish, especially when dealing with heavy computations or complex graphics. Over time, I discovered WebAssembly, a technology that changed how we build fast web apps. It allows code written in languages like C, C++, or Rust to run in browsers at speeds close to native performance. This isn't about replacing JavaScript but enhancing it where needed. In this article, I'll share seven practical ways to integrate WebAssembly into your projects, making them faster and more efficient. I'll include code examples and personal insights to help you understand each approach easily.

Let me begin with compiling existing C or C++ code to WebAssembly. Many systems have legacy codebases in these languages, and porting them to the web can save time and boost performance. I used Emscripten, a toolchain that converts C/C++ into WebAssembly modules. It handles the compilation so that your code runs securely within the browser's sandbox. For instance, if you have a function that calculates Fibonacci numbers, you can compile it and call it from JavaScript. This approach is great for reusing proven algorithms without rewriting everything from scratch.

// A simple C function for calculating Fibonacci numbers
int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n-1) + fibonacci(n-2);
}

After compiling this with Emscripten, you get a .wasm file. In JavaScript, you can load and use it. I remember a project where we had a legacy physics engine in C++. By compiling it to WebAssembly, we integrated it into a web app, and the performance improvement was immediate. Users reported smoother animations, and we didn't have to learn a new language. The key is that WebAssembly runs this code much faster than JavaScript for such recursive calculations, reducing lag in interactive apps.

Next, Rust integration with WebAssembly offers a blend of speed and safety. Rust is known for preventing memory errors, and when combined with WebAssembly, it creates reliable high-performance modules. I often use the wasm-bindgen crate in Rust to make interaction with JavaScript smooth. It generates bindings so that Rust functions can be called directly from JavaScript, handling data types automatically. This pattern is ideal for tasks like data processing where you need both speed and confidence in code correctness.

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn process_data(input: &[u8]) -> Vec<u8> {
    // Increment each byte in the input slice
    input.iter().map(|x| x.wrapping_add(1)).collect()
}

In one of my apps, I used this to handle image data. JavaScript passed the image bytes to this Rust function, which applied filters quickly. The wasm-bindgen tool made it easy to pass arrays between languages without manual conversion. If you're new to Rust, start with small functions and gradually expand. The compiler checks help avoid common bugs, and the performance gains are worth the learning curve. I found that Rust's ownership model, while tricky at first, pays off in stable, fast web code.

JavaScript interoperability is another powerful pattern. Instead of rewriting entire apps, you can mix WebAssembly modules with your existing JavaScript code. WebAssembly exports functions that JavaScript calls directly, creating a hybrid approach. This lets you keep your current UI and logic in JavaScript while offloading heavy tasks to WebAssembly. For example, you might have a JavaScript function that handles user input and a WebAssembly function that crunches numbers.

// Load and use a WebAssembly module in JavaScript
// (top-level await requires an ES module context)
const wasmModule = await WebAssembly.instantiateStreaming(
  fetch('compute.wasm')
);
const result = wasmModule.instance.exports.calculate(42);
console.log('Result from WebAssembly:', result);

I applied this in a data visualization project. The JavaScript part managed the chart rendering and user interactions, while WebAssembly handled complex statistical calculations. This division of labor made the app responsive even with large datasets. If you're working on a team, this pattern allows front-end developers to stick with JavaScript for most tasks and only dive into WebAssembly for performance-critical parts. It's a practical way to incrementally improve speed without a full rewrite.

For graphics-intensive applications, WebAssembly shines by accelerating rendering and simulations. Games, 3D visualizations, or video editors often struggle with JavaScript's speed limits. By compiling graphics logic to WebAssembly, you can achieve higher frame rates and smoother experiences. I've seen this in action with game engines ported to the web, where physics and rendering run in WebAssembly modules.

// Example C++ frame-update code in a graphics app (sketch; the helpers
// and particle_count belong to the surrounding application state)
void render_frame(float delta_time) {
    update_physics(delta_time);      // Handle movement and collisions
    draw_particles(particle_count);  // Render visual elements
}

In a personal project, I built a simple particle system. The JavaScript handled the canvas setup, but the particle updates were done in WebAssembly. The difference was night and day—what used to stutter with thousands of particles now ran smoothly. If you're developing interactive graphics, consider moving the heavy lifting to WebAssembly. Tools like Emscripten can compile OpenGL code to WebGL, making it easier to port existing desktop apps to the web.

Server-side WebAssembly execution extends these benefits beyond the browser. With edge computing platforms, you can run the same WebAssembly modules on servers, ensuring consistent performance from client to server. This is useful for handling requests that require fast processing, like authentication or data transformation. I've used this in cloud functions to reduce latency.

// Example in an edge runtime like Cloudflare Workers
// (instantiateWasm and wasmBuffer stand in for however the runtime
// provides the compiled module)
addEventListener('fetch', event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const wasmInstance = await instantiateWasm(wasmBuffer);
  return wasmInstance.exports.handleRequest(request);
}

On a recent team project, we deployed a WebAssembly module to an edge server that processed API requests. It handled image resizing faster than our previous Node.js service, and because it was the same code as the client-side, testing was straightforward. This pattern helps build unified architectures where logic is shared across environments. If you're scaling a web app, think about where WebAssembly on the server could cut down response times.

Streaming compilation is a technique to improve load times. Browsers can start compiling and executing WebAssembly code while it's still downloading, which speeds up how quickly your app becomes interactive. This is especially helpful for large modules that might otherwise cause delays. I've implemented this in apps with heavy initial computations, and it made a noticeable difference in user experience.

// Instantiate WebAssembly with streaming for faster startup
WebAssembly.instantiateStreaming(fetch('module.wasm'))
  .then(obj => {
    const instance = obj.instance;
    instance.exports.main();  // Start using the module early
  })
  .catch(error => {
    console.error('Failed to load WebAssembly:', error);
  });

In one case, I worked on a web-based tool that needed to load a machine learning model compiled to WebAssembly. Using streaming, the model began processing as soon as the first bytes arrived, rather than waiting for the entire file. This reduced the perceived load time and kept users engaged. If your app has large WebAssembly files, enable streaming to make the most of network bandwidth. Most modern browsers support this, and it's simple to add to your code.

Development toolchains for debugging make working with WebAssembly manageable. Initially, I worried that compiled code would be hard to debug, but tools like source maps allow you to map WebAssembly execution back to your original source code. Emscripten and other compilers can generate debug information, so you can use browser dev tools to step through your C++ or Rust code as if it were JavaScript.

# Compile C code with debugging support using Emscripten
emcc source.c -o output.js -gsource-map --source-map-base http://localhost:8000/

When I first debugged a WebAssembly module, I was surprised how seamless it felt. I set breakpoints in my C code, and the browser paused execution at the right points. This saved hours of guesswork. If you're adopting WebAssembly, invest time in setting up your build process with debugging flags. It makes development faster and less frustrating, especially when tracking down performance issues or memory leaks.

Bringing it all together, these patterns show how WebAssembly can elevate web applications. By combining compiled code with JavaScript, optimizing graphics, and extending to servers, you build apps that handle complex tasks efficiently. I've used these methods in real projects, and they consistently deliver better performance without sacrificing security or accessibility. Start small—perhaps with a single function—and gradually integrate more patterns as you see benefits. The web platform is evolving, and WebAssembly is a key part of that journey, helping us create experiences that were once only possible in native apps.


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.


We Built an x402 Marketplace for Bookmark Collectors (That Pays Creators & Curators in USDC)

2025-11-14 02:45:18

Our Story & Why

We are a small team of Internet enthusiasts who have watched many websites we loved shut down over the years, and we've been looking for a way to support them with income and exposure.
Exposure channels today are gated by content platforms, so independent websites and content that don't play well with platform algorithms get buried. On top of that, more people now interact with the Internet through AI, so the ads-based business model is failing too. The Internet needs both a fairer discovery system and a new business model.
So we came up with a solution that incentivizes people (and, in the future, AI agents) to find and share valuable content (websites), rewarding both the finder and the original creator.

Product Link

copus.network

Key features

  1. You can share (curate) any URI (URL) through the copus.network website or the browser extension. Websites you collect will be automatically shared on Copus' homepage where others can visit and collect. It's basically social bookmarking, like a Pinterest for websites.

  2. You can set a USDC price for visiting a link you shared (pay-to-unlock). The payment is powered by the x402 protocol.

  3. Half of the USDC income will go to the author of the original content, claimable after they opt their site into x402 or register a Copus account.

  4. Your collections (bookmarks) are automatically stored on the Arweave permanent storage blockchain. We pay the storage fee so you'll never lose them.

Features coming soon

  1. Spaces (like Pinterest's boards) to organize your collections and collaborate with others.

  2. Weave: If a piece reminds you of another piece, you can weave them together in a "you may also like" section. It's sorta like a collective Obsidian map. The standalone websites become a connected map where every website is a rabbit hole.

  3. AI agent support: You can train an agent to curate for you.

  4. Social features: Follow accounts who have great taste.

Who we imagine this is for

If you've been bookmarking over the years, you already have tons of Internet gems in hand. Please pick the best ones to share with the world! It would be valuable for both the readers and the original creators.
Were you a Pocket user? Save your best bookmarks here and never lose them. (We plan to support putting a copy of the whole website on-chain once the project scales. Right now we put the link, category info, and your notes on-chain for you for free.)

Some other things

  1. Copus is open source, with the frontend built using Claude Code.

  2. We plan to launch a Web3 token to put the ownership of the project into the hands of the people who use it.

  3. We don't mess with rights and privacy. Aside from some essential terms needed to keep the project running, your rights remain yours.

  4. How do we make money? We're still figuring that out. The first plan is to take a 10% cut of every payment.

Enjoy, and thank you in advance for trying it out early! We're open to any questions, comments, and collaborations!

Create Your First Google ADK Agent: A Beginner's Guide

2025-11-14 02:43:07

In this tutorial, we'll walk through how to create your first agent using Google's Agent Development Kit (ADK). We'll explore two primary methods: using the intuitive Visual Builder and a straightforward, no-code YAML configuration.

Understanding the Anatomy of an ADK Agent

Before we build, let's break down the core components of a Google ADK agent:

  • Name: A unique identifier for your agent.
  • Model: The engine powering your agent. This can be a first-party Google model (like Gemini) or any third-party/open-source model via the LiteLLM integration.
  • Instructions: This is where you define the agent's persona, its core logic, and the desired output schema. Think of it as a detailed, structured prompt.
  • Description: A concise summary of the agent's purpose. This is crucial in multi-agent systems, where agents interact based on each other's descriptions.
  • Tools: These equip your agent with external capabilities. ADK provides several built-in tools like Google Search and File Retrieval. You can also integrate with third-party tools (LangChain, CrewAI) or build your own custom logic using a Function Tool.

Prerequisites

  1. Get a Gemini API Key: You'll need a Gemini API key for this project. Create a .env file in your project root and add your key:

    GEMINI_API_KEY="YOUR_API_KEY_HERE"
    
  2. Set up a Virtual Environment:

    python -m venv .venv
    source .venv/bin/activate
    
  3. Install or Upgrade Google ADK: The Visual Builder requires version 1.18 or higher.

    pip install --upgrade google-adk
    

Method 1: Create an Agent with the Visual Builder & AI Assist

The fastest way to get started is with the ADK web interface and its natural language capabilities.

  1. Launch the ADK Web Console:

    adk web
    
  2. Create a New Agent: In the web UI, select "Create a new agent in visual mode" and give it a name (e.g., Visual-Agent-1).

  3. Use the AI Assistant: On the right-hand side, you'll find an assistant. Let's give it a prompt to create a research agent.

    Prompt: "Create a bull and bear research agent for a given stock. It should use Google Search to analyze the bull and bear cases for the stock and return a summary in a bulleted list."

    The assistant will generate the necessary YAML configuration.

  4. Save and Test: Once you approve the generated YAML, the agent's instructions and the required GoogleSearch tool will be auto-populated. Save the agent and test it directly in the UI by providing a stock ticker like "Nvidia".

The agent will now perform a search and return a formatted analysis of the bull and bear cases for the stock.

Method 2: Create an Agent with YAML Configuration

If you prefer a code-free, configuration-driven approach from the start, you can use the CLI.

  1. Run the create Command: Use the adk create command with the --configuration flag.

    adk create --name "My-YAML-Agent" --configuration
    
  2. Follow the Prompts: The CLI will guide you through a series of questions to set up your agent:

    • Select the model you want to use (e.g., gemini-2.5-pro).
    • Choose the provider (e.g., Google AI).
    • Enter your API key.
  3. Agent Created: This process generates a root_agent.yaml file with a minimal configuration. You can now run adk web to test it or begin customizing the YAML file with more detailed instructions and tools.
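For orientation, a generated root_agent.yaml is typically only a few lines. The sketch below is purely illustrative — the field names and values are assumptions, so check the file the CLI actually generates before editing:

```yaml
# root_agent.yaml -- illustrative sketch; verify against the generated file
name: My-YAML-Agent
model: gemini-2.5-pro
description: A minimal starter agent.
instruction: |
  You are a helpful assistant.
  Answer the user's questions concisely.
```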

And that's it! You've successfully created your first ADK agent using two approaches: the AI-assisted Visual Builder and the no-code YAML configuration. In future posts, we'll dive into more advanced concepts like multi-agent systems and callbacks.

Short videos

(Google ADK Visual Builder)- https://youtube.com/shorts/fIv6fvUM3gg?si=6U_DeU2NRXFaBy8M

(Anatomy of a Google Agent Development Kit (ADK) Agent) - https://youtube.com/shorts/pkttZnC5DCU?si=DvaLy5Z-F1Tg1EgQ

I Think I Accidentally Solved a Problem No One Tried Before?

2025-11-14 02:42:47

I’ve been working on a personal project recently, and out of nowhere I stumbled into an idea that honestly feels… strange?

Not “AI takes over the world” strange — more like “why did nobody try this?” strange.

Basically:
I found a way to export my backend routes into a single syncable manifest, and then pipe that manifest directly into the frontend so the frontend automatically:

  • knows every available backend endpoint
  • auto-generates request functions
  • becomes type-consistent with backend changes
  • stays updated without manually modifying anything

No OpenAPI, no Swagger, no codegen, no manual typing.
Just a simple sync step → and my frontend magically gets all the backend API routes as a client.
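To make the idea concrete, here is a hypothetical sketch of what such a manifest and generated client could look like. This is not the author's actual code — the manifest shape, route names, and helper functions are all assumptions for illustration:

```typescript
// Hypothetical sketch of the route-manifest idea -- NOT the actual project code.
// The manifest shape and names below are assumptions for illustration.

type RouteEntry = { method: "GET" | "POST"; path: string };

// A manifest as the backend might export it after a sync step
const manifest: Record<string, RouteEntry> = {
  getUser: { method: "GET", path: "/users/:id" },
  createUser: { method: "POST", path: "/users" },
};

// Substitute ":param"-style segments with concrete values
function fillPath(template: string, params: Record<string, string>): string {
  return template.replace(/:([A-Za-z]+)/g, (_, key) => params[key] ?? "");
}

// Turn every manifest entry into a ready-made request function
function buildClient(baseUrl: string, routes: Record<string, RouteEntry>) {
  const client: Record<
    string,
    (params?: Record<string, string>, body?: unknown) => Promise<unknown>
  > = {};
  for (const [name, route] of Object.entries(routes)) {
    client[name] = async (params = {}, body) => {
      const res = await fetch(baseUrl + fillPath(route.path, params), {
        method: route.method,
        headers: { "Content-Type": "application/json" },
        body: body === undefined ? undefined : JSON.stringify(body),
      });
      return res.json();
    };
  }
  return client;
}
```

With a sync step regenerating `manifest` from the backend's router, the frontend would call `client.getUser({ id: "42" })` without ever hand-writing a fetch wrapper.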

It’s like the backend and frontend finally speak the same language.

And the weirdest part?
I Googled like crazy to see if someone had already built this — but all I found were huge tools (Swagger, tRPC, GraphQL, RPC frameworks, etc.) that are either too heavy or too opinionated.

What I built is stupidly simple, almost embarrassingly minimal… but it works shockingly well.

It feels like a tiny “API pipeline” that no one thought of.

Before I go deeper or package it, I’m just curious:

👉 Has anyone ever seen this exact concept?
👉 Syncing backend routes → auto client generation → frictionless frontend integration?
👉 Without using OpenAPI, RPC, GraphQL, or code-first DSLs?

I genuinely feel like I might’ve discovered a small niche idea that somehow slipped through the cracks.

Would love to hear if anyone has tried this, seen it, or if I’m just reinventing something in a weird way.

Not ready to reveal the full thing yet — still stabilizing the thought —
but I just need some outside perspective before I go too deep.