2026-01-19 03:47:02
This is The Desi Networker. I've decided to become a Network Engineer, and this is the start of my journey toward that goal...
Certifications are one of the best ways to prove to someone that you're skilled in networking, so that's where I'm starting.
Step 1: CCNA (where you learn basic networking by practically implementing core concepts like switching and routing).
Step 2: CCNP ENCOR (the advanced certificate that goes in depth on CCNA topics; you can sit for it directly, without taking CCNA first). I've completed my master's and am searching for jobs with 5+ years of experience, so I'm jumping straight to this certificate. But here we'll start from the basics, covering CCNA topics and then CCNP ENCOR. I'm in the learning stage too, so let's start together.
There are plenty of other certifications that prove your skills; this is the path I've chosen.
I don't know yet which certificate to pursue after that, guys 😆. Your suggestions are welcome.
I've decided to take the CCNP ENCOR exam on March 2nd, so let's start as soon as possible.
I've decided to learn from Kevin Wallace's course on LinkedIn Learning.
CCNA: https://www.linkedin.com/learning/cisco-certified-network-associate-ccna-v1-1-200-301-cert-prep/diving-into-the-course?resume=false (he's an older guy but he teaches very fast, so treat each and every word as important).
Once I finish this by March 2nd, I'll start my job hunting, friends 😎
So let's start with CCNA.
CCNA syllabus (you can check it on the Cisco website 🤷).
My CCNA syllabus:
Layer 2 - Data link Layer
* Switching (MAC Address Table)
* Vlans
* trunking
* STP
* EtherChannel
* Layer 2 troubleshooting.
Layer 3 - Network layer
* Subnetting + gateways
* Inter-VLAN routing (SVI + router-on-a-stick concepts)
* Static routing + default route
* OSPF
* NAT/PAT
* ACL basics
* A few supporting protocols (DHCP, DNS, ...)
* Layer 3 troubleshooting.
Wireless networks and configuration
QoS (Quality of Service)
Threats and defenses
SDN (Software-Defined Networking)
That's it, buddy. Let's start...
So first, download the Cisco Packet Tracer application to start playing 🏏.
2026-01-19 03:39:25
In this post, I want to focus on Odoo’s OWL framework — the first major layer of frontend complexity in Odoo’s web stack — and question whether building it was truly necessary, or whether it was an avoidable source of long-term complexity justified by the familiar argument: “it’s an ERP, so it must be complex.”
For context, OWL (Odoo Web Library) is the JavaScript framework used to power Odoo’s frontend components, including the dashboard, backend UI, and parts of the website.
According to Odoo, OWL was built from scratch to solve a specific problem: allowing third-party developers to override and extend frontend components defined by other modules without modifying core files or losing changes during upgrades.
On paper, this goal is reasonable — even admirable. However, the conclusion that this required building an entirely new frontend framework is far more questionable.
The same goal is already achievable in all major modern frontend frameworks (React, Vue, Angular) through well-established mechanisms such as component composition, slots, higher-order components, dependency injection, extension APIs, and schema-driven rendering.
Using a mature frontend framework would have provided several major advantages: an existing ecosystem, established tooling, and extensive documentation.
Instead, Odoo now maintains a hybrid frontend stack where OWL coexists with legacy code — including multiple versions of jQuery (2.x and 3.x) still present in parts of the system. This alone should raise questions about long-term maintainability.
Before comparing OWL directly to React or Vue, it’s important to understand how OWL actually works.
At its core, OWL consumes XML definitions sent from the backend and dynamically builds a component tree on the frontend. These XML views are parsed, interpreted, and translated into JavaScript-rendered UI components.
For example, consider a simple OWL form definition:
<form>
<field name="my_model_field_name"/>
</form>
The frontend receives this XML and uses the OWL runtime to translate it into rendered UI components.
Fields can use the widget attribute to select a custom renderer. In other words, OWL acts as a runtime XML interpreter that generates UI behavior dynamically.
OWL’s flexibility does not come from a fundamentally new idea — it comes from deferring structure and behavior decisions to runtime. This is not unique, nor does it require a custom framework.
The same level of flexibility can be achieved in React, for example, through schema-driven rendering, component registries, composition, and dependency injection.
Many systems already parse XML, JSON, or DSLs and render them safely within mature frameworks — without reinventing rendering lifecycles, state management, reactivity, or tooling.
By choosing to build OWL, Odoo accepted the burden of maintaining its own rendering engine, lifecycle model, state management, reactivity system, and tooling.
In a follow-up section, I’ll demonstrate how Odoo’s XML-based UI model could be rendered and overridden cleanly using React.js, achieving the same extensibility without introducing a custom framework. The goal isn’t to claim OWL is unusable — but to show that building it was an unnecessary architectural choice that added long-term cost without solving a novel problem.
For a side-by-side comparison, here is a minimal OWL component:
/** @odoo-module **/
import { Component, xml } from "@odoo/owl";
export class Hello extends Component {
static template = xml`
<div>
<h1>Hello <t t-esc="props.name"/></h1>
</div>
`;
}
Usage is typically tied to XML view definitions sent from the backend, and the component lifecycle, state handling, and rendering behavior are governed by OWL’s custom runtime.
And the equivalent component in React:
function Hello({ name }) {
return (
<div>
<h1>Hello {name}</h1>
</div>
);
}
At a surface level, both components are trivial. The key difference isn’t syntax — it’s ecosystem gravity.
In React, rendering, component lifecycles, state management, reactivity, and tooling come from a mature, widely documented ecosystem.
OWL must reimplement or approximate much of this, while also introducing a proprietary mental model that developers must learn in addition to Odoo's backend abstractions.
This kind of extensibility is not unique to OWL. Below is a simplified React-based architecture that mirrors the same capability.
const ComponentRegistry = new Map();
export function registerComponent(name, component) {
ComponentRegistry.set(name, component);
}
export function getComponent(name) {
return ComponentRegistry.get(name);
}
function Renderer({ schema }) {
return schema.map((node) => {
const Component = getComponent(node.type);
return <Component key={node.id} {...node.props} />;
});
}
registerComponent("field:text", TextField);
registerComponent("field:number", NumberField);
registerComponent("field:text", CustomTextField);
No core files edited. No fork. No need to rebuild an entire framework just to justify the claim that an ERP is complex and therefore needs a complex UI framework with no ecosystem, no tooling, and no documentation.
Using this simple structure, a developer can achieve almost everything OWL offers, while still being able to leverage the existing ecosystem and its tooling when needed.
The same pattern scales well beyond this toy example to full backend-driven view definitions and custom field widgets.
This is effectively what OWL does — but OWL bundles it with a custom rendering engine, lifecycle model, and tooling stack.
This is the most common objection — and it’s based on a misconception.
React absolutely supports runtime UI overrides.
What it does not support is implicit, unstructured mutation — and that’s a feature, not a limitation.
Runtime overrides in React are achieved via component registries, composition, higher-order components, dependency injection, and schema-driven rendering, all of which are explicit and controlled rather than implicit mutations.
OWL's approach relies heavily on runtime interpretation of XML combined with implicit behavior resolution. This provides flexibility, but at a cost in predictability, debuggability, and long-term maintainability.
React’s ecosystem favors controlled extensibility over unrestricted mutation. That makes large systems more maintainable over time, especially when multiple teams and third-party developers are involved.
In other words: React does not prevent runtime overrides; it prevents uncontrolled ones.
The argument here is not that OWL is unusable, nor that Odoo developers are unaware of existing frameworks.
The argument is that the problem OWL solves was already solved, and rebuilding a framework to solve it again introduced long-term cost without introducing a fundamentally new capability.
Odoo didn’t just choose flexibility — it chose to own the entire frontend stack. And owning the stack means owning every limitation, bug, security issue, and ecosystem gap that comes with it.
That is the real cost of reinventing the web stack. This does not make Odoo unusable — but it does make its long-term evolution far more expensive than it needed to be.
2026-01-19 03:33:11
The definition of security services management has shifted from human-centric patrolling to developer-centric automation. In 2026, a "security guard" is often an edge-computing device running a geofence-triggering script.
If you are building the next generation of safety and security tools, here is how to architect a high-performance monitoring system that balances precision with the latest security standards.
The Tech Stack: From Pings to Predictions
In 2026, standard security pings every 5 minutes are obsolete. Modern systems utilize 5G-enabled ultra-low latency streams.
Ingestion: Use WebSockets or gRPC for real-time telemetry. Avoid REST for high-frequency location updates to keep overhead low.
Processing: Implement Apache Flink or Kafka Streams to handle "Complex Event Processing" (CEP).
Storage: Store hot data in Redis for sub-millisecond geofence lookups and move historical breadcrumbs to a time-series database like TimescaleDB.
Implementing the "Anomalous Inactivity" Logic
The core of activity monitoring isn't just seeing movement; it's detecting the lack of it.
Practical Example: If a patrol unit's GPS tracking data shows a velocity of $0 \, \text{m/s}$ for more than 300 seconds within a "High-Risk Zone," your system should trigger an automated "Welfare Check" via an encrypted push notification.
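Here is a minimal sketch of that rule, assuming in-memory state, hypothetical zone IDs, and a notification callback (in production the hot state would live in Redis, as noted above):
import time

HIGH_RISK_ZONES = {"zone:warehouse-7", "zone:gate-b"}  # hypothetical zone IDs
INACTIVITY_THRESHOLD_S = 300  # 5 minutes at 0 m/s

last_movement = {}  # unit_id -> timestamp of the last non-zero velocity reading

def on_telemetry(unit_id, velocity_mps, zone_id, send_welfare_check):
    """Trigger an automated welfare check when a unit is motionless too long in a high-risk zone."""
    now = time.time()
    if velocity_mps > 0:
        last_movement[unit_id] = now
        return
    idle_for = now - last_movement.get(unit_id, now)
    if zone_id in HIGH_RISK_ZONES and idle_for > INACTIVITY_THRESHOLD_S:
        send_welfare_check(unit_id)   # e.g. an encrypted push notification
        last_movement[unit_id] = now  # reset so we don't re-alert on every update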
Hardening the Information Security Management System (ISMS)
As a developer, your biggest threat in 2026 is GPS Spoofing. To maintain a robust Information Security Management System, you must validate location data against cellular tower triangulation or Wi-Fi BSSID "sniffing." If the GPS coordinates say "Main St" but the nearby Wi-Fi networks belong to "Downtown Mall," your system should flag a security breach.
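A hedged sketch of that cross-check, assuming a hypothetical survey table mapping known Wi-Fi BSSIDs to the locations where they were observed:
from math import radians, sin, cos, asin, sqrt

# Hypothetical survey data: BSSID -> (lat, lon) where that access point lives
KNOWN_BSSIDS = {"aa:bb:cc:dd:ee:01": (40.7128, -74.0060)}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def gps_looks_spoofed(gps_lat, gps_lon, visible_bssids, max_plausible_m=500):
    """Flag a potential breach if GPS disagrees with every recognised nearby Wi-Fi network."""
    anchors = [KNOWN_BSSIDS[b] for b in visible_bssids if b in KNOWN_BSSIDS]
    if not anchors:
        return False  # nothing to corroborate against
    return all(haversine_m(gps_lat, gps_lon, lat, lon) > max_plausible_m for lat, lon in anchors)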
Best Practices for Devs:
Privacy by Design: Use k-anonymity for historical data to ensure individual movements can't be reconstructed by unauthorized users.
Battery Optimization: Implement adaptive sampling. Increase GPS frequency to every 10 seconds during movement, and drop to "heartbeat" pings every 15 minutes when stationary (see the client-side sketch after this list).
Fail-Safe Offline Mode: Ensure your client-side app caches telemetry locally if 5G drops, syncing immediately upon reconnection to prevent "blind spots" in your audit logs.
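A minimal client-side sketch covering the adaptive-sampling and fail-safe offline points above; send is a hypothetical transport callback (WebSocket or gRPC):
from collections import deque

pending = deque()  # local cache for telemetry that could not be delivered

def next_sample_interval_s(velocity_mps, moving_threshold=0.5):
    """Adaptive sampling: 10 s updates while moving, 15-minute heartbeats when stationary."""
    return 10 if velocity_mps > moving_threshold else 15 * 60

def report(sample, send):
    """Queue the sample, then flush the whole backlog; anything unsent stays cached."""
    pending.append(sample)
    try:
        while pending:
            send(pending[0])   # hypothetical transport call
            pending.popleft()  # only drop a sample after a successful send
    except ConnectionError:
        pass  # uplink is down; queued samples sync on the next successful report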
2026-01-19 03:29:42
(Let's go Beyond “I know Linux”)
If you claim DevOps, Linux isn't just an OS; it's your runtime, debugger, firewall, scheduler, and autopsy report.
This article covers the Linux internals you secretly get tested on.
/proc & /sys
Linux exposes its own brain as files.
/proc – Process & Kernel Runtime View
Key paths:
/proc/cpuinfo # CPU architecture, cores
/proc/meminfo # Memory stats
/proc/loadavg # Load average
/proc/<PID>/fd # Open file descriptors
/proc/<PID>/maps # Memory mapping
Production insight
If a Java app appears to be leaking resources (often reported as a memory leak):
ls -l /proc/<PID>/fd | wc -l
You’ll instantly know if file descriptors are leaking.
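The same check as a tiny script that just counts the entries under /proc/<PID>/fd (a sketch; it needs permission to read that process's /proc entry):
import os
import sys

def open_fd_count(pid: int) -> int:
    """Count open file descriptors by listing /proc/<pid>/fd."""
    return len(os.listdir(f"/proc/{pid}/fd"))

if __name__ == "__main__":
    pid = int(sys.argv[1])
    print(f"PID {pid} has {open_fd_count(pid)} open file descriptors")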
/sys – Hardware & Driver Control Layer
Example:
/sys/class/net/eth0/speed
/sys/block/sda/queue/scheduler
Great takeaway
/proc = What is happening now
/sys = How hardware & kernel are wired
Understanding processes separates juniors from seniors.
fork();              // parent creates a child process
exec("/bin/java");   // child replaces its image with the target program (pseudocode for the exec family)
wait();              // parent reaps the child's exit status; skipping this creates zombies
Check:
ps aux | grep Z
Fix: make the parent process call wait() to reap its children (or restart the misbehaving parent).
Interview gold line
“Zombies don’t consume memory, but they exhaust PID space.”
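As an illustration, a small /proc-based sketch that finds zombies directly, without ps:
import os

def zombie_pids():
    """Return PIDs whose state field in /proc/<pid>/stat is 'Z' (zombie)."""
    zombies = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                stat = f.read()
        except OSError:
            continue  # the process exited while we were scanning
        # Layout is: pid (comm) state ...; comm may contain spaces, so split after the last ')'
        state = stat.rsplit(")", 1)[1].split()[0]
        if state == "Z":
            zombies.append(int(entry))
    return zombies

print(zombie_pids())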
uptime
# 1.2 0.9 0.7
Means: the 1-, 5-, and 15-minute load averages.
| Scenario | Meaning |
|---|---|
| Load = 4 on 4 cores | Healthy |
| Load = 10 on 4 cores | Overloaded |
| High load, low CPU | I/O bottleneck |
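To make the table concrete, a sketch that reads /proc/loadavg and compares the 1-minute load against the core count:
import os

def load_status():
    """Compare the 1-minute load average from /proc/loadavg against the CPU core count."""
    with open("/proc/loadavg") as f:
        load_1m = float(f.read().split()[0])
    cores = os.cpu_count() or 1
    if load_1m <= cores:
        return f"load {load_1m} on {cores} cores: healthy"
    return f"load {load_1m} on {cores} cores: overloaded (check I/O wait if CPU is idle)"

print(load_status())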
free -m
Focus on: the "available" column, not "free".
Clear myth:
“High memory usage is bad” ❌
“Unused memory is wasted memory” ✅
vmstat 1
iostat -x
top / htop
Together these let you correlate CPU usage, memory pressure, swap activity, and disk I/O.
A socket = IP + Port + Protocol
ss -tulnp
Example:
LISTEN 0 128 0.0.0.0:8080 java
| State | Meaning |
|---|---|
| LISTEN | Waiting |
| ESTABLISHED | Active |
| TIME_WAIT | Normal close |
| CLOSE_WAIT | App bug |
A warning signal: if you see many sockets stuck in CLOSE_WAIT, the application is leaking connections (it never closes them after the peer does).
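A quick sketch that counts CLOSE_WAIT sockets by parsing ss output (ss prints the state as CLOSE-WAIT):
import subprocess

def close_wait_count():
    """Count TCP sockets stuck in CLOSE-WAIT by parsing `ss -tan`."""
    out = subprocess.run(["ss", "-tan"], capture_output=True, text=True, check=True).stdout
    return sum(1 for line in out.splitlines() if line.startswith("CLOSE-WAIT"))

print(f"{close_wait_count()} sockets in CLOSE-WAIT")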
-rwxr-x---
But permissions alone are not enough.
SELinux adds mandatory access control on top. Its modes:
getenforce
# Enforcing | Permissive | Disabled
Why prod apps fail: SELinux denies access (an AVC denial) to a non-standard port or file context, even though classic Unix permissions look correct.
Fix properly:
ausearch -m avc -ts recent   # find recent SELinux (AVC) denials
semanage fcontext            # define the correct file context rule
restorecon                   # re-apply the context to the affected files
Senior rule
Never disable SELinux in production; fix the policies instead.
systemd handles service supervision, automatic restarts, resource limits, and logging.
Example unit:
[Service]
ExecStart=/app/start.sh
Restart=always
MemoryMax=2G
Check failures:
journalctl -u myapp --since today
Interviewers won't ask:
“Explain /proc”
They’ll ask:
“Why is load high but CPU idle?”
Or:
“App restarted but port still busy”
If you understand internals, answers come naturally.
Linux is your runtime, debugger, firewall, scheduler, and autopsy report.
Tools change.
Containers evolve.
Linux fundamentals compound forever.
2026-01-19 03:23:36
In the previous articles, we used backpropagation and plotted graphs to predict values correctly.
In all those examples, we used the Softplus activation function.
Now, let’s use the ReLU (Rectified Linear Unit) activation function, which is one of the most popular activation functions used in deep learning and convolutional neural networks.
ReLU is defined as:
ReLU(x) = max(0, x)
The output range is from 0 to infinity.
w1 = 1.70 b1 = -0.85
w2 = 12.6 b2 = 0.00
w3 = -40.8 b3 = -16
w4 = 2.70
We will use dosage values from 0 to 1.
If we plug in 0 for dosage:
0 × 1.70 + (-0.85) = -0.85
ReLU(-0.85) = 0
So the y-axis value is 0.
If we plug in 0.2:
0.2 × 1.70 + (-0.85) = -0.51
ReLU(-0.51) = 0
Still 0.
If we plug in 0.6:
0.6 × 1.70 + (-0.85) = 0.17
ReLU(0.17) = 0.17
Now the output becomes positive.
As we continue the dosage up to 1, we get a bent blue line.
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 100)
w1, b1 = 1.70, -0.85
z1 = w1 * x + b1
relu1 = np.maximum(0, z1)
plt.plot(x, relu1)
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("ReLU(w1 * x + b1)")
plt.show()
Now we multiply the bent blue line by -40.8.
This flips the curve downward and scales it.
w3 = -40.8
scaled_blue = relu1 * w3
plt.plot(x, scaled_blue)
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("ReLU Output × w3")
plt.show()
Now we repeat the process for the bottom node using w2 and b2.
Since b2 = 0, this produces a straight orange line.
w2, b2 = 12.6, 0.0
z2 = w2 * x + b2
plt.plot(x, z2, color="orange")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("w2 * x + b2")
plt.show()
Now multiply this straight line by 2.70.
w4 = 2.70
scaled_orange = z2 * w4
plt.plot(x, scaled_orange, color="orange")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("(w2 * x + b2) × w4")
plt.show()
Now we add the bent blue line and the straight orange line.
This gives us a green wedge-shaped curve.
combined = scaled_blue + scaled_orange
plt.plot(x, combined, color="green")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("Combined Signal")
plt.show()
Now we add the bias b3 = -16 to the combined signal.
b3 = -16
combined_bias = combined + b3
plt.plot(x, combined_bias, color="green")
plt.xlabel("Dosage")
plt.ylabel("Value")
plt.title("Combined Signal + b3")
plt.show()
Now we apply ReLU over the green wedge.
This converts all negative values to 0 and keeps positive values unchanged.
final_output = np.maximum(0, combined_bias)
plt.plot(x, final_output, color="green")
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("Final ReLU Output")
plt.show()
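To tie the steps together, here is a consolidated sketch of the full forward pass using the same weights; it reproduces the final green curve in one function:
import numpy as np
import matplotlib.pyplot as plt

# Same parameters as above
x = np.linspace(0, 1, 100)
w1, b1 = 1.70, -0.85
w2, b2 = 12.6, 0.0
w3, b3 = -40.8, -16
w4 = 2.70

def forward(x):
    """Two hidden nodes combined, plus the output bias, followed by a final ReLU."""
    blue = np.maximum(0, w1 * x + b1) * w3    # top node: ReLU, then scaled by w3
    orange = (w2 * x + b2) * w4               # bottom node: linear, scaled by w4
    return np.maximum(0, blue + orange + b3)  # add b3, clip negatives to zero

plt.plot(x, forward(x), color="green")
plt.xlabel("Dosage")
plt.ylabel("Activation")
plt.title("Final ReLU Output (consolidated)")
plt.show()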
So this is our final result: we plotted a curve using ReLU, making it more realistic for real-world situations.
We will explore more on Neural networks in the coming articles.
You can try the examples out in the Colab notebook.
Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.
Just run:
ipm install repo-name
… and you’re done! 🚀
2026-01-19 03:22:56
I wrote an implementation of the game Hitori using Claude Code in agent mode.
It's a puzzle I decided to build after looking at the Hitori game available on Linux. I studied its architecture and noted down how that game was written.
I thought I should make my own implementation of it.
I tried to make a Django implementation where Django serves as the backend and the frontend is in JavaScript. I set up the foundation for it and then pointed Claude Code to the original implementation of Hitori, which is written in C in Linux. I asked it to develop a similar one for a client-server architecture that I could play in a web browser using a Python backend and JavaScript frontend.
I ran it in agent mode and allowed it to dangerously skip permissions. I asked it to implement the game, answered yes to a few of its prompts, and then went to sleep.
When I woke up the next morning, I saw that the complete game was implemented. To my surprise, I didn't have to do anything. The whole game was there and I just played it. I was like, "Oh my goodness." I was amazed for a full day at what had just happened.
But then, to take it further, I thought there should be a login system and game board improvements. That took some time, and then I deployed it on a Kubernetes cluster. Even that was easy - I didn't have to do anything.
This is the first game I developed where I didn't have to open an editor or IDE at all. I did everything completely from the command prompt. This was something new. I thought I should capture this moment, so I'm writing about it in my blog. That's pretty much it.
You can enjoy playing this game at https://hitori.learntosolveit.com