The Practical Developer

A constructive and inclusive social network for software developers.

Network Engineer

2026-01-19 03:47:02

This is The Desi Networker. I've chosen to become a network engineer, and this is the first post of my journey toward that goal.

Certifications are one of the best ways to prove you're skilled in networking, so I'm starting there too.

Step 1: CCNA (where you learn basic networking by practically implementing the main concepts, like switching and routing).
Step 2: CCNP ENCOR (the advanced certificate that goes deeper into the CCNA topics). You can go for this certificate directly, without taking the CCNA first. I've completed my master's and am searching for jobs with 5+ years of experience, so I'm jumping straight to this one. But here we'll start from the basics: CCNA topics first, then CCNP ENCOR. I'm still in the learning stage too, so let's start together.

There are many other certifications you can use to prove your skills; this one is my choice.

I don't know yet which certificate to go for after that, guys 😆. Your suggestions are welcome.

I've decided to take the CCNP ENCOR exam on March 2nd, so let's start as soon as possible.

I've decided to learn from Kevin Wallace's courses on LinkedIn Learning.

  1. CCNA: https://www.linkedin.com/learning/cisco-certified-network-associate-ccna-v1-1-200-301-cert-prep/diving-into-the-course?resume=false (the instructor is an older gentleman but he talks very fast, guys, so treat every word as important).

  2. CCNP ENCOR: https://www.linkedin.com/learning/cisco-ccnp-enterprise-encor-v1-1-350-401-cert-prep/welcome-to-encor-part-1

Once I finish this by March 2nd, I'll start my job hunting, friend 😎.

So let's start with CCNA.

CCNA syllabus (you can check the official one on the Cisco website 🤷).

Here's my CCNA syllabus:

  1. Basic networking (mastering the OSI layers).
  2. Layer 2 - Data Link Layer
    * Switching (MAC address table)
    * VLANs
    * Trunking
    * STP
    * EtherChannel
    * Layer 2 troubleshooting.

  3. Layer 3 - Network Layer
    * Subnetting + gateways
    * Inter-VLAN routing (SVI + router-on-a-stick concepts)
    * Static routing + default route
    * OSPF
    * NAT/PAT
    * ACL basics
    * A few supporting protocols (DHCP, DNS, ...)
    * Layer 3 troubleshooting.

  4. Wireless networks and configuration

  5. QoS (Quality of Service).

  6. Threats and defences

  7. SDN (Software-Defined Networking).

That's the plan, buddy. Let's start ...

So first, download the Cisco Packet Tracer application so you can practice 🏏.

Reinventing a Solved Problem: An Architectural Review of Odoo OWL Frontend Framework

2026-01-19 03:39:25

In this post, I want to focus on Odoo’s OWL framework — the first major layer of frontend complexity in Odoo’s web stack — and question whether building it was truly necessary, or whether it was an avoidable source of long-term complexity justified by the familiar argument: “it’s an ERP, so it must be complex.”

For context, OWL (Odoo Web Library) is the JavaScript framework used to power Odoo’s frontend components, including the dashboard, backend UI, and parts of the website.

According to Odoo, OWL was built from scratch to solve a specific problem: allowing third-party developers to override and extend frontend components defined by other modules without modifying core files or losing changes during upgrades.

On paper, this goal is reasonable — even admirable. However, the conclusion that this required building an entirely new frontend framework is far more questionable.

The same goal is already achievable in all major modern frontend frameworks (React, Vue, Angular) through well-established mechanisms such as component composition, slots, higher-order components, dependency injection, extension APIs, and schema-driven rendering.

What Would a Mature Frontend Framework Have Provided to Odoo?

Adopting a mature frontend framework would have provided several major advantages:

  1. Clear, evolving documentation: Mature frameworks have continuously updated documentation that closely tracks real-world usage and features. In contrast, Odoo’s documentation is often incomplete, outdated, or misleading — a problem significant enough to warrant a dedicated post.
  2. Security responsiveness: Modern frontend ecosystems respond rapidly to security disclosures, issuing patches without requiring a full application upgrade. In Odoo, frontend fixes are tightly coupled to backend releases, making security patching significantly more disruptive.
  3. A vast ecosystem: From form builders and schema validators to accessibility tooling, testing frameworks, and UI component libraries — modern ecosystems provide solutions that Odoo either reimplements partially or lacks entirely.
  4. Developer familiarity: Frontend developers today are already fluent in React, Vue, or Angular. OWL introduces a proprietary mental model that developers must learn on top of Odoo’s already complex backend abstractions.

Instead, Odoo now maintains a hybrid frontend stack where OWL coexists with legacy code — including multiple versions of jQuery (2.x and 3.x) still present in parts of the system. This alone should raise questions about long-term maintainability.

Before comparing OWL directly to React or Vue, it’s important to understand how OWL actually works.

At its core, OWL consumes XML definitions sent from the backend and dynamically builds a component tree on the frontend. These XML views are parsed, interpreted, and translated into JavaScript-rendered UI components.

For example, consider a simple OWL form definition:

<form>
  <field name="my_model_field_name"/>
</form>

The frontend receives this XML, and the OWL runtime translates it into rendered UI components. Specifically, the runtime:

  • Parses the XML into a component tree
  • Issues additional requests to fetch model field metadata
  • Decides which component to render based on field definitions
  • Optionally uses the widget attribute to select a custom renderer

In other words, OWL acts as a runtime XML interpreter that generates UI behavior dynamically.

The Hard Truth About OWL

OWL’s flexibility does not come from a fundamentally new idea — it comes from deferring structure and behavior decisions to runtime. This is not unique, nor does it require a custom framework.

The same level of flexibility can be achieved in React for example by using:

  • Schema-driven rendering
  • Plugin registries
  • Declarative extension points
  • Controlled overrides via dependency injection
  • Permissioned component replacement

Many systems already parse XML, JSON, or DSLs and render them safely within mature frameworks — without reinventing rendering lifecycles, state management, reactivity, or tooling.

By choosing to build OWL, Odoo accepted the burden of:

  • Maintaining a proprietary framework
  • Rebuilding tooling that already exists elsewhere
  • Fragmenting frontend knowledge
  • Coupling frontend evolution to backend architectural constraints

In a follow-up section, I’ll demonstrate how Odoo’s XML-based UI model could be rendered and overridden cleanly using React.js, achieving the same extensibility without introducing a custom framework. The goal isn’t to claim OWL is unusable — but to show that building it was an unnecessary architectural choice that added long-term cost without solving a novel problem.

1. A Simple OWL Component vs. a React Component

To make the discussion concrete, let’s compare a minimal OWL component with an equivalent React component.

OWL Component (Simplified)

/** @odoo-module **/

import { Component, xml } from "@odoo/owl";

export class Hello extends Component {
  static template = xml`
    <div>
      <h1>Hello <t t-esc="props.name"/></h1>
    </div>
  `;
}

Usage is typically tied to XML view definitions sent from the backend, and the component lifecycle, state handling, and rendering behavior are governed by OWL’s custom runtime.

React Component (Equivalent)

function Hello({ name }) {
  return (
    <div>
      <h1>Hello {name}</h1>
    </div>
  );
}

At a surface level, both components are trivial. The key difference isn’t syntax — it’s ecosystem gravity.

In React:

  • Component composition is standard
  • Tooling (linting, testing, profiling) is mature
  • State, effects, and error boundaries are well-defined
  • Integration with schema-driven rendering is commonplace

OWL must reimplement or approximate much of this — while also introducing a proprietary mental model that developers must learn in addition to Odoo’s backend abstractions.

2. A React Pseudo-Implementation That Mirrors OWL Overrides

A common defense of OWL is that it allows runtime UI overrides — the ability for modules to replace or extend UI behavior dynamically without editing core code.

This is not unique to OWL.

Below is a simplified React-based architecture that mirrors the same capability.

Component Registry (Core)

// Global registry mapping a string key (e.g. "field:text") to a component.
const ComponentRegistry = new Map();

// Core and addon modules call this; the last registration for a name wins.
export function registerComponent(name, component) {
  ComponentRegistry.set(name, component);
}

export function getComponent(name) {
  return ComponentRegistry.get(name);
}

Schema-Driven Renderer

function Renderer({ schema }) {
  return schema.map((node) => {
    const Component = getComponent(node.type);
    return <Component key={node.id} {...node.props} />;
  });
}

Default Registration (Core Module)

registerComponent("field:text", TextField);
registerComponent("field:number", NumberField);

Override by Addon / Third-Party Module

registerComponent("field:text", CustomTextField);

No core files edited. No fork. No need to rebuild an entire framework just to justify the idea that “it’s an ERP, so it must be complex” and therefore needs a complex UI framework with no ecosystem, no tooling, and no documentation.

With this simple structure, you can achieve almost everything OWL offers while still being able to leverage the existing ecosystem and its tooling, as well as:

  • Runtime replacement
  • Controlled extension points
  • Clear override ownership
  • Predictable behavior

The same pattern scales to:

  • Permissions
  • Feature flags
  • User-specific overrides
  • Context-based rendering
  • Menus and UI translations (handled with a cleaner implementation).

This is effectively what OWL does — but OWL bundles it with a custom rendering engine, lifecycle model, and tooling stack.

3. “But React Can’t Do Runtime UI Overrides”

This is the most common objection — and it’s based on a misconception.

React absolutely supports runtime UI overrides.
What it does not support is implicit, unstructured mutation — and that’s a feature, not a limitation.

Runtime overrides in React are achieved via:

  • Registries
  • Context providers
  • Dependency injection patterns
  • Plugin systems
  • Schema-driven rendering

All of which are:

  • Explicit
  • Traceable
  • Testable
  • Tooling-friendly

OWL’s approach relies heavily on runtime interpretation of XML combined with implicit behavior resolution. This provides flexibility — but at the cost of:

  • Debuggability
  • Static analysis
  • Predictable failure modes

React’s ecosystem favors controlled extensibility over unrestricted mutation. That makes large systems more maintainable over time, especially when multiple teams and third-party developers are involved.

In other words:

  • OWL optimizes for maximum runtime freedom
  • React optimizes for sustainable extensibility

Closing Clarification

The argument here is not that OWL is unusable, nor that Odoo developers are unaware of existing frameworks.

The argument is that the problem OWL solves was already solved, and rebuilding a framework to solve it again introduced long-term cost without introducing a fundamentally new capability.

Odoo didn’t just choose flexibility — it chose to own the entire frontend stack. And owning the stack means owning every limitation, bug, security issue, and ecosystem gap that comes with it.

That is the real cost of reinventing the web stack. This does not make Odoo unusable — but it does make its long-term evolution far more expensive than it needed to be.

Building a Real-Time "Watchtower": Implementing GPS Activity Monitoring in 2026

2026-01-19 03:33:11

The definition of security services management has shifted from human-centric patrolling to developer-centric automation. In 2026, a "security guard" is often an edge-computing device running a geofence-triggering script.
If you are building the next generation of safety and security tools, here is how to architect a high-performance monitoring system that balances precision with the latest security standards.

  • The Tech Stack: From Pings to Predictions
    In 2026, standard security pings every 5 minutes are obsolete. Modern systems utilize 5G-enabled, ultra-low-latency streams.
    Ingestion: Use WebSockets or gRPC for real-time telemetry. Avoid REST for high-frequency location updates to keep overhead low.
    Processing: Implement Apache Flink or Kafka Streams to handle Complex Event Processing (CEP).
    Storage: Store hot data in Redis for sub-millisecond geofence lookups and move historical breadcrumbs to a time-series database like TimescaleDB.

  • Implementing the "Anomalous Inactivity" Logic
    The core of activity monitoring isn't just seeing movement; it's detecting the lack of it.
    Practical example: if a patrol unit's GPS tracking data shows a velocity of $0 \, \text{m/s}$ for more than 300 seconds within a "High-Risk Zone," your system should trigger an automated "Welfare Check" via an encrypted push notification (a minimal sketch appears at the end of this section).

  • Hardening the Information Security Management System (ISMS)
    As a developer, your biggest threat in 2026 is GPS Spoofing. To maintain a robust Information Security Management System, you must validate location data against cellular tower triangulation or Wi-Fi BSSID "sniffing." If the GPS coordinates say "Main St" but the nearby Wi-Fi networks belong to "Downtown Mall," your system should flag a security breach.
    Best Practices for Devs:
    Privacy by Design: Use k-anonymity for historical data to ensure individual movements can't be reconstructed by unauthorized users.

Battery Optimization: Implement adaptive sampling. Increase GPS frequency to 10 seconds during movement, and drop to "heartbeat" pings every 15 minutes when stationary.

Fail-Safe Offline Mode: Ensure your client-side app caches telemetry locally if 5G drops, syncing immediately upon reconnection to prevent "blind spots" in your audit logs.
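
Here is a minimal sketch of the "anomalous inactivity" check described above, assuming a stream of per-unit GPS samples. The Sample shape, thresholds, and send_welfare_check() notifier are illustrative placeholders, not a real API:

from dataclasses import dataclass

SPEED_EPSILON_MPS = 0.1    # below this we treat the unit as stationary
INACTIVITY_LIMIT_S = 300   # 5 minutes with no movement triggers the check

@dataclass
class Sample:
    unit_id: str
    timestamp: float       # Unix seconds
    speed_mps: float
    zone: str              # e.g. "HIGH_RISK"

_stationary_since: dict[str, float] = {}   # unit_id -> when it stopped moving

def send_welfare_check(unit_id: str) -> None:
    # Placeholder: a real system would send an encrypted push notification here.
    print(f"Welfare check triggered for {unit_id}")

def process_sample(s: Sample) -> None:
    if s.speed_mps >= SPEED_EPSILON_MPS or s.zone != "HIGH_RISK":
        _stationary_since.pop(s.unit_id, None)   # moving again, or outside the zone
        return
    started = _stationary_since.setdefault(s.unit_id, s.timestamp)
    if s.timestamp - started > INACTIVITY_LIMIT_S:
        send_welfare_check(s.unit_id)
        _stationary_since.pop(s.unit_id, None)   # reset so we don't re-alert on every sample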

Linux Internals Everyone *Must* Understand

2026-01-19 03:29:42

Linux Internals Every DevOps Engineer Must Understand

(Let's go Beyond “I know Linux”)

If you claim DevOps, Linux isn’t just an OS: it’s your runtime, debugger, firewall, scheduler, and autopsy report.
This article covers the Linux internals you quietly get tested on.

1. Linux File System Internals: /proc & /sys


Linux exposes its own brain as files.

/proc – Process & Kernel Runtime View

  • Virtual filesystem (no disk I/O)
  • Created at boot
  • Reflects current kernel state

Key paths:

/proc/cpuinfo      # CPU architecture, cores
/proc/meminfo      # Memory stats
/proc/loadavg      # Load average
/proc/<PID>/fd     # Open file descriptors
/proc/<PID>/maps   # Memory mapping

Production insight
If you suspect a Java app is leaking resources (file descriptors, for example):

ls -l /proc/<PID>/fd | wc -l

You’ll instantly know if file descriptors are leaking.

/sys – Hardware & Driver Control Layer

  • Used by udev, drivers, containers
  • Allows controlled kernel interaction

Example:

/sys/class/net/eth0/speed
/sys/block/sda/queue/scheduler

Great takeaway

  • /proc = What is happening now
  • /sys = How hardware & kernel are wired

2. Process Lifecycle: fork → exec → zombie


Understanding processes separates juniors from seniors.

The Lifecycle

  1. fork() → child process created
  2. exec() → program replaced
  3. wait() → parent collects exit status

fork();
exec("/bin/java");
wait();
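
As a runnable sketch of the same lifecycle in C (the child simply runs /bin/sleep here, which is an arbitrary choice):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                                   /* 1. fork(): create a child */
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                                       /* child process */
        execl("/bin/sleep", "sleep", "2", (char *)NULL);  /* 2. exec(): replace the program */
        perror("execl");                                  /* reached only if exec fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);                             /* 3. wait(): reap the child, no zombie */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}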

Zombies (are Not Horror, Just Bad Parenting)

  • Process finished execution
  • Parent did not collect exit code
  • PID still exists

Check:

ps aux | grep Z

Fix:

  • Restart parent
  • Or fix application signal handling
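
To know which parent to restart, list zombies together with their parent PIDs:

ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'   # the PPID column is the parent to fix or restart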

Interview gold line

“Zombies don’t consume memory, but they exhaust PID space.”

3. Memory, CPU & Load Average (The Most Misunderstood Topic)


Load Average ≠ CPU Usage

uptime
# 1.2 0.9 0.7

Means:

  • Avg runnable or waiting processes over 1, 5, 15 minutes

Scenario                 Meaning
Load = 4 on 4 cores      Healthy
Load = 10 on 4 cores     Overloaded
High load, low CPU       I/O bottleneck
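
A quick way to put those numbers in context on a given box:

uptime    # 1-, 5- and 15-minute load averages
nproc     # number of CPU cores to compare them against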

Memory: Why “Free” Lies

free -m

Focus on:

  • available, not free
  • Linux aggressively uses cache
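
If you need that number in a script, MemAvailable in /proc/meminfo is the kernel's estimate of memory usable without swapping:

grep MemAvailable /proc/meminfo   # in kB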

Clear myth:

“High memory usage is bad” ❌
“Unused memory is wasted memory” ✅

Real Debug Workflow

vmstat 1
iostat -x
top / htop

Together, these let you correlate:

  • CPU wait
  • Disk latency
  • Run queue

4. Networking Basics: Ports, Sockets & Reality


Port ≠ Process

A socket = IP + Port + Protocol

ss -tulnp

Example:

LISTEN 0 128 0.0.0.0:8080 java

Connection States You Must Know

State          Meaning
LISTEN         Waiting
ESTABLISHED    Active
TIME_WAIT      Normal close
CLOSE_WAIT     App bug

A signal
If you see many sockets stuck in CLOSE_WAIT → the application is leaking connections.
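
A quick way to count them:

ss -tan | awk 'NR > 1 && $1 == "CLOSE-WAIT"' | wc -l   # sockets the app forgot to close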

5. Permissions & SELinux (Where “It Works on My VM” Dies)


Linux Permissions Refresher

-rwxr-x---

But permissions alone are not enough.

SELinux (Mandatory Access Control)

Modes:

getenforce
# Enforcing | Permissive | Disabled

Why prod apps fail:

  • Correct permissions
  • Wrong SELinux context

Fix properly:

ausearch -m avc -ts recent                                # find the recent AVC denials
semanage fcontext -a -t httpd_sys_content_t "/app(/.*)?"  # example only: type and path are illustrative
restorecon -Rv /app                                       # reapply the labels to the files

Senior rule

Never disable SELinux in production; fix the policies instead.

6. systemd: The Real Init System


systemd Is More Than “service start”

It handles:

  • Process supervision
  • Logging
  • Dependency management
  • Auto-restart

Example unit:

[Service]
ExecStart=/app/start.sh
Restart=always
MemoryMax=2G
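
Assuming the unit is saved as /etc/systemd/system/myapp.service, applying it takes two commands:

systemctl daemon-reload          # pick up the new or edited unit file
systemctl enable --now myapp     # start it now and on every boot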

Check failures:

journalctl -u myapp --since today

Why DevOps Engineers Love systemd

  • Built-in watchdog
  • CGroup resource limits
  • Deterministic startup

How Interviewers Evaluate This Knowledge

They won’t ask:

“Explain /proc”

They’ll ask:

“Why is load high but CPU idle?”

Or:

“App restarted but port still busy”

If you understand internals, answers come naturally.

Linux is:

  • Your observability platform
  • Your runtime security layer
  • Your truth source

Tools change.
Containers evolve.
Linux fundamentals compound forever.

Understanding ReLU Through Visual Python Examples

2026-01-19 03:23:36

In the previous articles, we used backpropagation and plotted graphs to predict values correctly.

In all those examples, we used the Softplus activation function.

Now, let’s use the ReLU (Rectified Linear Unit) activation function, which is one of the most popular activation functions used in deep learning and convolutional neural networks.

ReLU is defined as:

ReLU(x) = max(0, x)

The output range is from 0 to infinity.

Assumed values

w1 = 1.70   b1 = -0.85
w2 = 12.6   b2 = 0.00
w3 = -40.8  b3 = -16
w4 = 2.70

We will use dosage values from 0 to 1.

Step 1: First linear transformation (w1, b1) + ReLU

If we plug in 0 for dosage:

0 × 1.70 + (-0.85) = -0.85
ReLU(-0.85) = 0

So the y-axis value is 0.

If we plug in 0.2:

0.2 × 1.70 + (-0.85) = -0.51
ReLU(-0.51) = 0

Still 0.

If we plug in 0.6:

0.6 × 1.70 + (-0.85) = 0.17
ReLU(0.17) = 0.17

Now the output becomes positive.

As we continue the dosage up to 1, we get a bent blue line.

Demo code

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 100)
w1, b1 = 1.70, -0.85

z1 = w1 * x + b1
relu1 = np.maximum(0, z1)

plt.plot(x, relu1)
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("ReLU(w1 * x + b1)")
plt.show()

Step 2: Multiply the output by w3 = -40.8

Now we multiply the bent blue line by -40.8.

This flips the curve downward and scales it.

Demo code

w3 = -40.8
scaled_blue = relu1 * w3

plt.plot(x, scaled_blue)
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("ReLU Output × w3")
plt.show()

Step 3: Bottom node (w2, b2)

Now we repeat the process for the bottom node using w2 and b2.

Since b2 = 0, this produces a straight orange line.

Demo code

w2, b2 = 12.6, 0.0

z2 = w2 * x + b2

plt.plot(x, z2, color="orange")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("w2 * x + b2")
plt.show()

Step 4: Multiply bottom node by w4 = 2.70

Now multiply this straight line by 2.70.

Demo code

w4 = 2.70
scaled_orange = z2 * w4

plt.plot(x, scaled_orange, color="orange")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("(w2 * x + b2) × w4")
plt.show()

Step 5: Add the two paths together

Now we add the bent blue line and the straight orange line.

This gives us a green wedge-shaped curve.

Demo code

combined = scaled_blue + scaled_orange

plt.plot(x, combined, color="green")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("Combined Signal")
plt.show()

Step 6: Add bias b3 = -16

Now we add the bias b3 = -16 to the combined signal.

Demo code

b3 = -16
combined_bias = combined + b3

plt.plot(x, combined_bias, color="green")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("Combined Signal + b3")
plt.show()

Step 7: Apply ReLU again

Now we apply ReLU over the green wedge.

This converts all negative values to 0 and keeps positive values unchanged.

Demo code

final_output = np.maximum(0, combined_bias)

plt.plot(x, final_output, color="green")
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("Final ReLU Output")
plt.show()

So this is our final result: a curve plotted using ReLU, making it more realistic for real-world situations.
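
For reference, the whole pipeline above collapses into one function (same weights, biases, and final ReLU as in the steps):

import numpy as np
import matplotlib.pyplot as plt

def relu(z):
    return np.maximum(0, z)

def network(x, w1=1.70, b1=-0.85, w2=12.6, b2=0.0, w3=-40.8, w4=2.70, b3=-16):
    blue = relu(w1 * x + b1) * w3      # top path: ReLU, then scale by w3
    orange = (w2 * x + b2) * w4        # bottom path: linear, scaled by w4
    return relu(blue + orange + b3)    # combine, add b3, apply the final ReLU

x = np.linspace(0, 1, 100)
plt.plot(x, network(x), color="green")
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("Full network output")
plt.show()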

We will explore more about neural networks in the coming articles.

You can try the examples out in the Colab notebook.

Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

ipm install repo-name

… and you’re done! 🚀


🔗 Explore Installerpedia here

Hitori

2026-01-19 03:22:56

I wrote an implementation of the game Hitori using Claude Code in agent mode.

It's a puzzle I developed after looking at a game called Hitori on Linux. I studied its architecture and noted down how that game was written.

I thought that I should make an implementation of it.

I tried to make a Django implementation where Django serves as the backend and the frontend is in JavaScript. I set up the foundation for it and then pointed Claude Code to the original implementation of Hitori, which is written in C in Linux. I asked it to develop a similar one for a client-server architecture that I could play in a web browser using a Python backend and JavaScript frontend.

I ran it in agent mode and allowed it to dangerously skip permissions. I asked it to implement the game, answered yes to the prompts it raised, and let it keep going. Then I went to sleep.

When I woke up the next morning, I saw that the complete game was implemented. To my surprise, I didn't have to do anything. The whole game was there and I just played it. I was like, "Oh my goodness." I was amazed for a full day at what had just happened.

But then, to take it further, I thought there should be a login system and game board improvements. That took some time, and then I deployed it on a Kubernetes cluster. Even that was easy - I didn't have to do anything.

This is the first game I developed where I didn't have to open an editor or IDE at all. I did everything completely from the command prompt. This was something new. I thought I should capture this moment, so I'm writing about it in my blog. That's pretty much it.

You can enjoy playing this game at https://hitori.learntosolveit.com