2025-11-15 23:26:50
Hello again! I hope you’re all doing well. In my first blog, we explored how the Internet works. Now, we’ll turn our attention to computer networking — the critical foundation beneath the Internet. Without networking, the Internet as we know it simply wouldn’t exist.
# What is Computer Networking? #
In simple terms, computer networking is the process of linking devices (computers, servers, smartphones) so they can exchange data and make communication and resource sharing smoother and more efficient.
[ In Technical terms, computer networking is the practice of connecting devices, such as computers, smartphones, and servers, so they can communicate and share data and resources. ]
In practice, whether you’re sending an email, streaming a video, or accessing a cloud service, networking is what makes that data travel between devices.
# Why do we need Computer Networking? #
[ In Technical terms, Computer networking is essential because it allows computers and other digital devices to connect and share information, resources, and services seamlessly, supporting communication, collaboration, and productivity. ]
For example, when you print a document from your laptop to a network printer, that’s networking in action.
# What is a Protocol? #
In Simple terms, protocols are the agreed-upon rules that devices follow so they can connect and communicate.
[ In Technical terms, a protocol is a standardised set of rules that defines how data is formatted, addressed, transmitted, and received across a network. ]
Without protocols, devices wouldn’t ‘speak the same language’, and communication would fail.
# How does Networking work? #
Each device on a network has a unique address (for example, an IP address). Devices use shared rules (protocols) to talk to each other. Data is broken into packets and sent from the requester to the provider, and then the responses travel back similarly, all governed by the networking protocols.
“A packet is a small chunk of data that includes the content being sent, plus information about where it’s going and how it should be handled.”
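To make that definition concrete, here is a tiny Python sketch (purely illustrative, not a real network stack) that models a packet as addressing information plus a chunk of payload, and splits a message the way a sending device would:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # sender's address
    destination: str  # receiver's address
    seq: int          # sequence number, so the receiver can reassemble in order
    payload: bytes    # the actual chunk of content being carried

def split_into_packets(data: bytes, src: str, dst: str, size: int = 1024):
    """Break a message into fixed-size packets, as a sending device would."""
    return [
        Packet(src, dst, seq, data[offset:offset + size])
        for seq, offset in enumerate(range(0, len(data), size))
    ]

packets = split_into_packets(b"Hello, network!" * 500, "192.168.1.10", "203.0.113.7")
print(len(packets), "packets; first goes to", packets[0].destination)
```

Real protocols add much more header information (checksums, ports, time-to-live), but the basic shape of header plus payload is the same.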
# Types of Network #
LAN (Local Area Network):- Connects devices within a small area, such as a home or office.
WAN (Wide Area Network):- Spans large geographic regions; the Internet itself is the largest WAN.
PAN (Personal Area Network):- Serves one person’s gadgets over a very short range (e.g., Bluetooth).
MAN (Metropolitan Area Network):- Covers a city or large campus, sitting between a LAN and a WAN.
# Key Components of Networking #
Nodes:- Any device on a network, such as a computer, phone, or printer.
Links:- The medium connecting nodes, including cables or wireless signals.
Network Interface Cards (NICs):- Hardware in each device that lets it connect to the network.
Networking Devices:- Equipment such as routers, hubs, modems, firewalls, etc.
# Protocols & OSI (Open System Interconnection) Model #
Protocols are organised in layers to manage the complexity of computer networking. The OSI model describes these seven layers, Layer 1 through Layer 7.
E.g., common protocols include TCP/IP (Transmission Control Protocol/Internet Protocol), HTTP, FTP (File Transfer Protocol) & the Wi-Fi standards.
# OSI Model and TCP/IP Protocol Suite #
The OSI Model is a conceptual framework that divides networking functions into 7 different layers - from physical connections to applications.
Layer 1 - Physical ( Cables, links)
Layer 2 - Data Link ( MAC addresses)
Layer 3 - Network ( IP address & routing)
Layer 4 - Transport ( TCP for data transfer)
Layer 5 - Session ( Managing connections)
Layer 6 - Presentation (data formatting)
Layer 7 - Application (HTTP, user-facing services)
[Diagram: the OSI model layers alongside the TCP/IP suite.]
In practice, the more widely used suite is the TCP/IP model (4 layers); the OSI model is a conceptual tool.
An IP address is like the home address of your device on a network.
It helps packets reach the correct device.
There are two types of IP addresses:
IPv4 – 32-bit address (e.g., 192.168.1.1)
IPv6 – 128-bit address (e.g., 2001:0db8:85a3::8a2e:0370:7334)
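Python’s standard ipaddress module gives a quick feel for the difference between the two (a minimal illustration):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.1")
v6 = ipaddress.ip_address("2001:0db8:85a3::8a2e:0370:7334")

# version and total bits available for addressing
print(v4.version, v4.max_prefixlen)  # -> 4 32
print(v6.version, v6.max_prefixlen)  # -> 6 128
```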
A subnet is a smaller network inside a larger network.
It’s like dividing a city into smaller neighbourhoods to manage traffic better.
Subnetting helps:
Improve network performance
Increase security &
Efficiently utilise IP addresses
IPv4 has a limited address space, so subnetting became a powerful technique for managing networks, and it remains essential while IPv6 adoption is still under way.
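The same ipaddress module makes the neighbourhood analogy concrete. Here is a minimal sketch (addresses chosen for illustration) that splits one /24 “city” into four /26 “neighbourhoods”:

```python
import ipaddress

city = ipaddress.ip_network("192.168.1.0/24")    # 256 addresses in total
neighbourhoods = city.subnets(prefixlen_diff=2)  # split into 4 x /26

for subnet in neighbourhoods:
    print(subnet, "holds", subnet.num_addresses, "addresses")
# 192.168.1.0/26 holds 64 addresses
# 192.168.1.64/26 holds 64 addresses
# 192.168.1.128/26 holds 64 addresses
# 192.168.1.192/26 holds 64 addresses
```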
🛑 Wrapping Up
That’s all for Blog #2!
You now understand:
What computer networking is, how networks operate, why protocols matter, the OSI layers, the types of networks, and IP addressing & subnets.
In upcoming blogs, we’ll dive deeper into:
IPv6, routing (OSPF, BGP), DNS, NAT & firewalls, plus practical networking on AWS (VPC, subnets, route tables).
I’m still learning too — and I’m glad to have you on this journey with me.
See you soon in Blog #3!
Feel free to drop questions or topics you’d like me to cover next in the comments.
Ta-Ta..! 😊
2025-11-15 23:25:01
Over the past 8 months I have been quietly building my own cognition layer for AI systems.
Not a shiny frontend. Not another wrapper around a single API. I wanted something that would let me define how a system thinks, step by step, and then replay that thinking when things go wrong.
The project is called OrKa-reasoning. With v0.9.6 I finally shipped the part that annoyed me the most: deterministic, explainable path selection in messy agent graphs.
This post is a mix of story and architecture. It is not a launch announcement. It is more like a lab notebook entry from someone who got tired of magical routing and decided to replace it with a boring scoring function.
If you are building agentic systems, or you care about reproducible AI behaviour, you might find some of this useful. Or you might violently disagree, which is also useful. 🙂
Like many people, I started by wiring models and tools together in the fastest possible way.
At small scale, this feels fine. You can manually test a few flows and convince yourself it is "smart". The problems appear as the graph grows: more agents, more branches, more implicit decisions buried in prompts.
This is where I found myself reading logs that looked like random walks.
The worst part was not that the system was wrong. Of course it was wrong sometimes. The worst part was that I had no clean way to answer the simplest question:
Why did it choose this path instead of the other one?
If the answer is always "because the large model said so", you do not really have a system. You have an expensive die that generates strings.
I wanted something stricter.
Before talking about scoring, a quick snapshot of what OrKa is.
OrKa is a modular cognition layer where you define agents and orchestration in YAML.
Instead of burying logic inside a single prompt or a massive Python file, you write something like this:
orchestrator:
  id: research_orchestrator
  strategy: graph
  queue: redis_main

agents:
  - id: question_normaliser
    type: llm
    model: local_llm_0
    prompt: |
      Normalise the user question and extract the core task.
      Input: {{ input }}

  - id: graph_scout
    type: router
    implementation: GraphScoutAgent

  - id: decision_engine
    type: router
    implementation: DecisionEngine

  - id: executor
    type: executor
    implementation: PathExecutor
This is not the actual full config of OrKa, but the spirit is there.
The orchestrator knows which agents exist and how they can connect. The runtime executes this graph, logs every step, and writes traces to storage.
OrKa is not about inventing new models. It is about treating models as components inside a larger cognitive process that you can inspect and reproduce.
Which brings us to the main pain point: routing.
Once you have more than a linear sequence of agents, you need to decide which path a request will take.
Typical examples: should the request go straight to document search, detour through a clarification step, or call an external API first?
Most frameworks solve this with hardcoded branches, a single router prompt, or simply letting the large model pick the next tool on the fly.
All of these work at small scale. None of them made me happy for serious systems.
What I wanted was routing that is deterministic, explainable, and reproducible: something I could replay when things go wrong.
So I decided to treat path selection as a scoring problem.
Instead of thinking "which tool should I call", I started thinking "which full path through the graph should win".
That leads to a simple structure: propose candidate paths, score them, decide, and execute.
In OrKa v0.9.6 this is handled by four main components:
GraphScoutAgent
PathScorer
DecisionEngine
SmartPathEvaluator

Let us walk through what each one does.
The GraphScoutAgent is responsible for reading the current graph and state and proposing candidate paths.
Its job is intentionally limited: propose candidates and gather what is needed to evaluate them, and decide nothing.
It just answers the question:
Given where we are now, what are the valid next paths I can take, and what information do I need to evaluate them?
A "path" here is not just a single next node. It can be a short sequence that represents a meaningful strategy.
For example:
["normalise_question", "search_docs", "synthesise_answer"]
["normalise_question", "ask_clarification", "search_docs", "synthesise_answer"]
["normalise_question", "call_external_api", "summarise_api_result"]The scout does some basic pruning. There is no point considering paths that are structurally impossible or obviously invalid.
Once we have a set of candidates, we can start scoring.
The PathScorer is where most of the interesting logic lives.
The scoring function is multi-factor and looks roughly like this:

final_score =   w_llm       * score_llm
              + w_heuristic * score_heuristic
              + w_prior     * score_prior
              + w_cost      * penalty_cost
              + w_latency   * penalty_latency
Each term is normalised to a consistent range before weighting, so scores are comparable across candidates.
The factors:
score_llm
The output of a small evaluation model that looks at a candidate path and the current context and answers a simple question:
How suitable is this path for what we are trying to do?
This does not need to be a giant model. A small local model is often enough.
score_heuristic
Hand-written logic: for example, rules that favour shorter paths for simple requests or penalise paths that skip a verification step.
score_prior
Domain or tenant specific priors. This is still a work in progress in 0.9.6.
Think of it as "distaste" for some strategies in some domains. For instance, in a financial setting you might have a strong prior against generating free form explanations without a verification step.
penalty_cost
Cost is not just money. Cost can be GPU time, external API calls, or latency budgets.
This term penalises candidates that are likely to be expensive.
penalty_latency
Expected latency. Sometimes you want to avoid slow paths even if they are slightly more accurate, especially in user facing flows.
All weights are configurable.
In v0.9.6 the default configuration is conservative. The point is not to ship a magic policy. The point is to ship a structure that you can bend to your needs.
And most importantly: every factor and weight is recorded in the trace.
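To show the shape of that, here is an illustrative scorer in Python. The weights, factor names, and trace format are my assumptions for the sketch, not OrKa's exact internals:

```python
# Illustrative multi-factor scorer; weights and names are placeholder assumptions.

DEFAULT_WEIGHTS = {"llm": 0.4, "heuristic": 0.3, "prior": 0.1, "cost": 0.1, "latency": 0.1}

def score_path(factors: dict, weights: dict = DEFAULT_WEIGHTS) -> dict:
    """Combine normalised factor scores into a final score, keeping every term."""
    final = (
        weights["llm"]         * factors["score_llm"]
        + weights["heuristic"] * factors["score_heuristic"]
        + weights["prior"]     * factors["score_prior"]
        + weights["cost"]      * factors["penalty_cost"]     # penalties are negative
        + weights["latency"]   * factors["penalty_latency"]
    )
    # Everything that went into the decision is returned, so it can land in the trace.
    return {**factors, "weights": weights, "final_score": round(final, 3)}

print(score_path({
    "score_llm": 0.78, "score_heuristic": 0.9, "score_prior": 0.5,
    "penalty_cost": -0.1, "penalty_latency": -0.05,
})["final_score"])
```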
Once every candidate has a score, the DecisionEngine kicks in.
Its responsibilities: rank the scored candidates, apply the selection strategy, and produce a winner together with a shortlist of fallbacks.
"Shortlist semantics" might sound like a detail, but it matters in practice.
Sometimes you want a single winner; sometimes you want the top few kept around so the executor can fall back when the first choice fails mid path.
The DecisionEngine contains this logic and is the main place where you can plug in different strategies without touching scoring itself.
One thing I learned quickly: if you do not formalise this part, you end up with ad hoc logic scattered across the codebase, which is very hard to test.
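The core can be as boring as a sort plus a cut-off. A hypothetical sketch, with the shortlist size as a placeholder strategy of mine:

```python
# Hypothetical decision step: a winner plus a shortlist of ranked fallbacks.

def decide(scored: dict, shortlist_size: int = 2) -> dict:
    """`scored` maps a path id to its score trace (containing 'final_score')."""
    ranked = sorted(scored, key=lambda pid: scored[pid]["final_score"], reverse=True)
    return {
        "winner": ranked[0],
        "shortlist": ranked[1:1 + shortlist_size],  # fallbacks if the winner fails mid path
        "ranking": ranked,                          # recorded in the trace
    }

print(decide({"path_a": {"final_score": 0.71}, "path_b": {"final_score": 0.52}}))
```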
The SmartPathEvaluator is simply the wrapper that orchestration code talks to.
From the outside, you do not care about scouts, scorers and engines. You want to say:
decision = evaluator.evaluate(current_state)
and get back the chosen path, the shortlist of fallbacks, and the trace that explains the choice.
The evaluator handles initialisation, plugs everything together and provides a stable API.
This is the layer where backwards compatibility matters. Internally I can keep iterating on the blocks. As long as the evaluator contract stays stable, orchestration code will not need to change much.
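In spirit (a sketch of the contract with made-up method names, not the real class), the evaluator is little more than:

```python
# Sketch of the stable facade; internals can keep changing behind this contract.

class SmartPathEvaluator:
    def __init__(self, scout, scorer, engine):
        self._scout, self._scorer, self._engine = scout, scorer, engine

    def evaluate(self, current_state) -> dict:
        candidates = self._scout.propose(current_state)  # hypothetical internal API
        scored = {c["id"]: self._scorer.score(c, current_state) for c in candidates}
        decision = self._engine.decide(scored)
        return {"decision": decision, "scores": scored}  # all of it ends up in the trace
```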
A big motivator behind this refactor was trace quality.
A trace for a single decision now includes at least: the candidate paths, every factor score, the weights applied, the final scores, and the chosen winner with its shortlist.
This means that when something weird happens, the debugging flow is finally clear: open the trace for the decision, compare the factor scores, and see exactly which term tipped the balance.
No more guessing. No more "the model decided".
I am not claiming this is perfect, but at least there is a concrete trail to follow.
Coverage numbers are easy to game, so here is what the 74 percent in OrKa v0.9.6 really means.
Things that are tested well: the deterministic building blocks (scoring, decision logic, trace structure).
These are mostly unit and component tests. They run fast and have no external dependencies.
Things that are partially tested: the integration between those components.
Here I lean on higher level tests that exercise the system in memory with mocks instead of real external services.
Things that are not properly tested in CI/CD: full end-to-end flows against real models and a real memory backend.
These are exactly the items where I am struggling most. All tests run in GitHub Actions, where there is no real LLM to call. Local tests are what assure me that everything works before a release.
I am sharing this explicitly because I am tired of changelogs that say "improved reliability" without telling you what is still risky.
One side effect of building around deterministic scoring is that local models become even more attractive.
You can use a small local model as the "judgement" part of the scoring function.
Because the rest of the scoring function is deterministic and visible, even a slightly noisy local model can be stabilised by heuristics and priors.
This has a few advantages: no dependency on an external API, lower cost per decision, and runs that stay reproducible on your own hardware.
In my own experiments I have used small local models through runtimes like Ollama for this purpose. They are not perfect, but they are good enough when combined with the other factors.
The important part is that the scoring pipeline does not trust the model blindly. It treats it as one signal among many.
To make this less abstract, here is a simplified YAML style example of how a decision might play out.
Imagine you have two candidate paths for a user question:
["normalise", "search_docs", "answer"]
["normalise", "ask_clarification", "search_docs", "answer"]
The inputs to the scorer could look like:
{
  "candidates": [
    {
      "id": "path_a",
      "steps": ["normalise", "search_docs", "answer"]
    },
    {
      "id": "path_b",
      "steps": ["normalise", "ask_clarification", "search_docs", "answer"]
    }
  ],
  "context": {
    "question": "Need a short summary of last quarter revenue",
    "user_tolerance_ms": 2000
  }
}
The scoring results might be:
{
  "path_a": {
    "score_llm": 0.78,
    "score_heuristic": 0.9,
    "score_prior": 0.5,
    "penalty_cost": -0.1,
    "penalty_latency": -0.05,
    "final_score": 0.71
  },
  "path_b": {
    "score_llm": 0.82,
    "score_heuristic": 0.6,
    "score_prior": 0.5,
    "penalty_cost": -0.3,
    "penalty_latency": -0.3,
    "final_score": 0.52
  }
}
Weights are hidden here for brevity, but they are part of the trace.
Looking at this, it is clear that path A wins: path B gets a slightly better LLM score, but pays for its extra step in cost and latency.
The DecisionEngine then chooses path A, possibly keeping path B in a shortlist as a fallback for specific error modes.
When someone asks "why did we not ask for clarification here", the trace says it plainly: cost and latency mattered more than that extra step.
This is the sort of conversation I want to be able to have about AI systems.
I do not pretend OrKa v0.9.6 is finished work. It is an advanced beta, not a stable 1.0.
The most important gaps right now:
End to end validation
I need a small, boring suite of tests that run full flows with local LLMs and a real Redis or similar memory backend. No mocks. No shortcuts. Just reproducible runs.
Priors and safety heuristics
The structure is there, but the library of domain specific priors and safety rules is still thin. This is probably the most important piece for high risk domains.
PathExecutor shortlist semantics
I want more coverage of weird real world cases where the top candidate fails mid path and fallback logic kicks in.
LLM schema handling
Right now a lot of schema work is done, but I want schema failures to be first class citizens in traces. If a model gives me garbage, the system should not quietly "fix" it. It should record that the schema was broken.
All of these items are focused and measurable. There is no magic backlog of vague ideas. Just a short list of concrete things that need to be built and tested.
There is a lot of noise in the AI space. Huge claims, vague diagrams, no tests.
I am not trying to shout over that.
I am sharing this for a simpler reason: if you are also building agentic systems, we are probably facing similar problems. You might have better solutions, or you might see blind spots in mine.
In that sense, OrKa is a conversation starter as much as it is a tool.
You can find more details and code in the OrKa-reasoning repository.
If you made it this far, thank you.
Feel free to steal any of these ideas, or tear them apart. That is how the next iteration will get better. 🚀
2025-11-15 23:11:43
A practical exercise in learning closures by building a tiny idle game - no frameworks, just vanilla JavaScript.
Early in my development journey, I struggled with JavaScript closures. The concept felt abstract and slippery - I could read the definitions, but they didn't quite click. So I did what I often do when learning something new: I built a small project that forced me to use them extensively.
The result was Closures & Callstacks, a simple browser-based idle game where a party of adventurers battles a dragon. Built with nothing but vanilla HTML, CSS, and JavaScript - no frameworks, no libraries - it served its purpose: by structuring the entire application around factory functions and closures, I finally internalised how they work.
The premise is straightforward: you generate a party of three adventurers (fighters, wizards, and clerics, each with their own ASCII art representation), then watch them battle an ancient dragon in turn-based combat. Characters attack enemies or heal allies based on simple AI, with health bars updating in real-time and a combat log narrating the action. Victory or defeat depends on whether your party can whittle down the dragon's health before it stomps and firebreathes your adventurers into oblivion.
The game's architecture revolves around factory functions - functions that return objects with methods. These methods "close over" private variables, creating encapsulated state without needing classes or the new keyword.
Here's a simplified version of the health system:
function healthFunctions() {
  // Private state: only reachable through the methods returned below
  let maxHealth = 0;
  let currentHealth = 0;
  let isKO;

  return {
    setMaxHealth: function(maxHP) {
      maxHealth = maxHP;
      currentHealth = maxHealth;
      isKO = false;
      return maxHealth;
    },
    getCurrentHealth: function() {
      return currentHealth;
    },
    takeDamage: function(damage) {
      // Clamp at zero so health never goes negative
      currentHealth = Math.max(currentHealth - damage, 0);
      isKO = currentHealth === 0;
      return currentHealth;
    },
    healDamage: function(heal) {
      if (isKO) {
        isKO = false;
      }
      // Clamp at maxHealth so healing never overshoots
      currentHealth = Math.min(currentHealth + heal, maxHealth);
      return currentHealth;
    },
    isKO: function() {
      return isKO;
    }
  };
}
The variables maxHealth, currentHealth, and isKO are truly private. There's no way to access them directly from outside the function - you can only interact with them through the returned methods. Each method maintains a reference to these variables through closure, even after healthFunctions() has finished executing.
The player factory follows the same pattern but at a larger scale:
function playerFunctions() {
  let _id;
  let _name;
  let _playerClass;
  let _allies = {};
  let _enemies = {};
  let _profBonus = 1;
  let playerAttack = {};
  let playerBuffs = {};
  // A nested factory: the player's health system is itself a closure
  const playerHealth = healthFunctions();

  return {
    init: function(name, playerClass, allies, enemies, id) {
      _id = id;
      _name = name;
      _allies = allies;
      _enemies = enemies;
      _playerClass = playerClass;
      playerHealth.setMaxHealth(calculateHP(playerClass));
      playerAttack = attackFunctions(_profBonus, _playerClass);
      playerBuffs = buffFunctions(_profBonus, _playerClass);
    },
    getName: function() {
      return _name;
    },
    get playerClass() {
      return _playerClass;
    },
    get health() {
      return playerHealth;
    },
    takeTurn: function() {
      // Knocked-out characters skip their turn
      if (this.health.isKO()) {
        return;
      }
      // ... game logic for taking actions
    }
  };
}
Each character created by playerFunctions() maintains its own private state. The playerHealth variable holds a reference to a health system (itself created by a factory function), and all the character's methods can access it through closure.
Building this game made closures tangible for me. Instead of being an abstract concept, they became a practical tool for encapsulating truly private state, exposing behaviour through returned methods instead of classes, and composing factories (e.g. playerHealth = healthFunctions()) to build complex behaviours.

Is this the most efficient way to structure a game? Probably not. The code has plenty of rough edges I'd refactor now (I’ve chosen to leave the code as-is for posterity!). But as a learning exercise, it was invaluable. By forcing myself to use closures everywhere - for health tracking, attack systems, buff management, and game state - I developed an intuitive understanding that stuck.
Sometimes the best way to learn a concept is to build something with it, even if that something is a simple dragon-fighting game with ASCII art characters.
2025-11-15 23:11:07
As AI applications surge in popularity, developers face a pressing challenge: how to monetize these innovations without sacrificing user experience. Enter Monetzly — the first dual-earning platform designed for AI conversations, allowing developers to monetize their apps while hosting relevant ads that users actually appreciate.
Most AI applications struggle with monetization models that either disrupt user experience or rely on subscriptions and paywalls. This not only alienates users but can stifle innovation in the developer ecosystem. At Monetzly, we recognized the need for a solution that enhances user experience while providing sustainable revenue streams for developers and advertisers.
Our engineering team has developed a conversation-native advertising model that integrates ads into AI conversations organically. Here’s how we did it:
Using AI-powered contextual matching, we analyze user interactions in real-time to deliver ads that are relevant to the conversation. This means users see ads that actually align with their interests, leading to higher engagement and satisfaction.
We understand that developers are the backbone of this ecosystem. Monetzly offers a five-minute SDK integration, allowing developers to start monetizing their applications without complex setups. This user-friendly approach ensures that developers can focus on building great features rather than spending time on monetization hurdles.
What sets Monetzly apart is our unique dual-earning model. Developers can monetize their apps not just through user engagement but also by hosting contextually relevant ads. This means more revenue without imposing paywalls or subscriptions on users. Imagine your app generating income while enhancing user engagement — that’s the Monetzly promise.
For developers, this model translates to sustainable revenue without paywalls, subscriptions, or disruptive ad formats.
For advertisers, Monetzly offers contextually relevant placements that reach users at the moment their interest is highest.
Monetzly is more than just a platform; it’s a movement towards sustainable AI innovation. By creating win-win monetization opportunities, we empower developers, enhance user experiences, and provide advertisers with targeted outreach.
If you’re a developer building LLM-powered applications and want to learn how to leverage our platform, visit us at Monetzly. Join our community and start reaping the benefits of seamless ad integration today.
Let’s redefine AI monetization together!
2025-11-15 23:04:09
If there's one concept that separates good developers from exceptional ones, it's abstraction. Yet after 7+ years in professional software engineering and entrepreneurship, I've witnessed countless talented developers fall into the same trap—they use abstraction without truly understanding it.
Ask any senior software engineer to define abstraction, and you'll typically hear:
"Abstraction is simplifying complex systems by focusing on important characteristics while hiding implementation details."
This definition is correct but dangerously incomplete.
Yes, abstraction allows us to create clean interfaces for complex systems. Yes, it makes frameworks feel "easy to use." But here's the trap: this false sense of simplicity breeds mediocrity.
Here's a pattern I see repeatedly:
The mediocre developer thinks: "The framework provides authentication? Perfect. I'll just call the API and—magic—my application has authentication!"
The great developer asks: "How does this authentication mechanism actually work? What are the security implications? What happens when it fails?"
You cannot hide implementation details effectively if you don't understand them deeply.
Software isn't just "code that runs." It's a carefully orchestrated stack of abstraction layers, each building on the one below, from application code down through frameworks, runtimes, the operating system, the network, and the hardware.
Every feature you build, every bug you debug, every scaling challenge you face—they all exist somewhere within these layers. The developers who understand layer interactions solve problems 10x faster.
I developed an approach for systematically mastering complex systems beyond their simple interfaces. It's deceptively simple but incredibly powerful:
Never move to the next abstraction layer until you completely grasp the current one.
Top-Down (start at the application layer): begin with the behaviour you can see, then work down through each layer that supports it.
Bottom-Up (start at the infrastructure layer): begin with OS, network, and hardware fundamentals, then build upward until framework behaviour stops being magic.
Let me show you how abstraction mastery solves real problems.
A client's CI/CD pipeline had been broken for a week. Their entire team was stumped. Only one pipeline failed, returning 413 Request Entity Too Large from their self-hosted container registry.
Their Stack:
Cloudflare Proxy (Layer 5): 500MB request limit on their Enterprise plan
Ingress Controller (Layer 6): default request size limits, configured via the nginx.ingress.kubernetes.io/proxy-body-size annotation
Container Registry (Layer 7): its own configured upload limits
One visible error. Three interconnected root causes across different abstraction layers.
Their team spent a week looking at logs. I solved it in hours by systematically analyzing each layer.
At least once, read the source code of the critical tools you use.
You'll never look at these tools the same way again.
Next time you encounter a bug, identify which abstraction layer the symptom appears in, then verify your assumptions one layer at a time until the root cause is isolated.
When using any framework or tool, take time to learn what its main abstractions are hiding before you rely on them in production.
Create diagrams of the abstraction layers for the systems you work with. Visualizing abstraction layers dramatically improves understanding.
Here's a common scenario in technical meetings:
Manager: "How do we scale this solution?"
This isn't really a question—it's a disguised request: "Teach me about scalability."
The truth: Scalability, availability, security, robustness, and reliability all come down to understanding abstraction.
You can't architect scalability if you only understand one layer. You need to see how they interact.
The professionals who truly excel in software engineering are those who:
✅ Understand how abstraction layers interact
✅ Can debug across multiple layers simultaneously
✅ Don't treat frameworks as magic black boxes
✅ Read source code regularly
✅ Apply systematic investigation methodologies
Stop treating abstraction as just theory. It's the practical framework that separates good engineers from great ones.
Your understanding of abstraction is likely incomplete—and that's okay. Recognition is the first step.
The question is: What will you do about it?
The developers who master abstraction don't just write code—they architect systems that scale, debug issues that mystify others, and build careers that others envy.
Abstraction isn't just a concept. It's your competitive advantage.
What's your experience with abstraction in software engineering? Have you encountered situations where understanding multiple layers made the difference? Share your stories in the comments below.
2025-11-15 23:02:05
Imagine trying to solve a puzzle with billions of pieces, where each placement affects all the others. That's the challenge in many complex games and real-world problems. But what if you could enlist thousands of helpers, working simultaneously, to find the perfect solution?
The core idea is to intelligently divide and conquer. A sophisticated search algorithm estimates how promising each potential move is, focusing computational power on the most likely paths to a solution. This is accelerated by running multiple instances of the search algorithm on many CPU cores, all sharing information to avoid redundant calculations and refine the search process collaboratively. Think of it like a flock of birds: each bird individually seeks food, but they all benefit from the collective knowledge of the flock.
This massively parallel approach can achieve unprecedented speedups, enabling solutions to previously intractable problems. It's not just about faster computation; it's about unlocking fundamentally new levels of understanding.
The benefits of massively parallel search include dramatic speedups, broader coverage of the search space, and a search that refines itself as workers share what they learn.
Implementation Challenge: The key challenge lies in minimizing communication overhead between the parallel processes. Too much data exchange can negate the benefits of parallelization. A practical tip is to prioritize sharing only essential information and to utilize asynchronous communication patterns where possible.
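As a toy illustration of that trade-off (the "game", the scoring, and the layout are invented for this example), the sketch below fans independent subtrees out to worker processes and sends back only one summary per subtree instead of every node visited:

```python
# Toy parallel tree search: each worker exhausts one subtree and reports only
# its best result, keeping inter-process communication to a minimum.
from multiprocessing import Pool

MOVES = ("a", "b", "c")

def explore(args):
    """Exhaustively score one subtree; return only the best (score, path) found."""
    first_move, depth = args
    best = (float("-inf"), [])
    stack = [([first_move], 0)]
    while stack:
        path, score = stack.pop()
        score += ord(path[-1]) % 5          # stand-in for a real evaluation function
        if len(path) == depth:
            best = max(best, (score, path))
        else:
            stack.extend((path + [m], score) for m in MOVES)
    return best

if __name__ == "__main__":
    subtrees = [(m, 5) for m in MOVES]         # one root move per worker
    with Pool(len(subtrees)) as pool:
        results = pool.map(explore, subtrees)  # exactly one message back per subtree
    print("best overall:", max(results))
```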
What if we could apply this approach to drug discovery, optimizing complex supply chains, or even predicting financial markets? The potential is immense. By harnessing the power of massively parallel search, we can unlock solutions to some of the world's most challenging problems, paving the way for breakthroughs in artificial intelligence and beyond. The next step involves refining these algorithms and exploring their application in other computationally intensive fields.
Related Keywords: Proof-Number Search, Impartial Games, Combinatorial Game Theory, Monte Carlo Tree Search, Alpha-Beta Pruning, Minimax Algorithm, Parallel Algorithms, Distributed Computing, Game AI, Artificial General Intelligence, Cloud Computing, GPU Acceleration, TPU, High-Performance Computing, Heuristic Search, Decision Making, Optimization, Game Solving, Game Development, Algorithm Design, Computational Complexity, Tree Search Algorithms, Scalability, Concurrency