The Practical Developer
A constructive and inclusive social network for software developers.

I Built a Solana Tip Jar

2026-03-07 16:57:21

I’ll be honest. When I started this I had no idea what I was doing.
I knew some Solana basics. I’d read the docs, watched a few YouTube videos, nodded along like I understood everything. But actually building something? That was a different story.
So I set myself a challenge. Build a real dApp. Deploy it. Make it work. No tutorials holding my hand the whole way.
I chose a tip jar. Simple concept: connect your wallet, enter an amount, send SOL to someone. How hard could it be?
Very hard, as it turns out. But I got there.

Why a Tip Jar?
Because I wanted to build something real but not overwhelming.
A lot of beginner Solana tutorials have you building things that are either too simple to be impressive or too complex to actually finish. A tip jar sits right in the middle. It involves real wallet connections, real transactions, real money moving on a real blockchain. But the logic is simple enough that you can actually understand every line of code you write.
If you can’t explain what your code does, you don’t really know it. That was my rule.

The Stack I Used

  1. Solana + Anchor for the smart contract
  2. React + Vite for the frontend
  3. Phantom wallet for connecting and signing
  4. Vercel for deployment

I picked Vite over Create React App because, honestly, CRA feels ancient at this point. Vite is fast, modern, and the developer experience is just better.

How the App Actually Works
When you open the app you see a Connect Wallet button. You click it, Phantom pops up, you approve the connection.
Now the app knows who you are: your public key, your balance, everything it needs.
You type in how much SOL you want to send. You hit the button. Here is what happens behind the scenes in about 2 seconds:
The frontend builds a transaction. Think of a transaction like a signed cheque: it says who is sending, who is receiving, how much, and it carries your signature proving you authorized it.
That transaction goes to Solana devnet. Validators on the network check it, confirm it’s legitimate, and execute it. The SOL moves. The blockchain records it permanently.
Done. Nobody can reverse it. Nobody can lie about it. It just happened.
That permanence is what makes blockchain interesting to me. Not the hype, not the prices: just the fact that code can enforce rules without anyone in the middle.

The Part That Almost Broke Me
Vite and Solana do not like each other out of the box.
Solana’s web3.js library was built when Webpack ruled everything. It relies on Node.js built-ins like buffer that Vite deliberately excludes from browser bundles.
So I kept getting this error:
Module "buffer" has been externalized for browser compatibility

I stared at that error for longer than I want to admit.
The fix was actually two lines. Install the buffer package and add this to the top of main.tsx:
import { Buffer } from "buffer";
window.Buffer = Buffer;

Two lines. That’s it. Sometimes the most painful bugs have the most anticlimactic solutions.

What Anchor Actually Does
Before this project I thought Anchor was optional. Like a nice to have.
It is not optional. Writing a Solana program in pure Rust without Anchor is like building furniture with no tools: technically possible, but why would you do that to yourself?
Anchor handles all the boilerplate that would otherwise make you want to quit: account validation, error handling, serialization. It takes care of all of it so you can focus on what your program actually does.
My program does one thing. It takes SOL from one wallet and sends it to another. Here is the whole logic:

pub fn send_tip(ctx: Context<SendTip>, amount: u64) -> Result<()> {
    anchor_lang::system_program::transfer(
        CpiContext::new(
            ctx.accounts.system_program.to_account_info(),
            anchor_lang::system_program::Transfer {
                from: ctx.accounts.sender.to_account_info(),
                to: ctx.accounts.receiver.to_account_info(),
            },
        ),
        amount,
    )?;
    msg!("Tip sent! 💰");
    Ok(())
}

That CPI call is the important part. Cross-Program Invocation (CPI) is basically my program saying “hey, Solana’s System Program, please move this SOL for me.” Programs on Solana don’t move SOL directly; they ask the System Program to do it. Once that clicked, everything made more sense.

Devnet vs Mainnet
Everything I built runs on devnet. Devnet is Solana’s testing environment: same technology, same speed, but the SOL is fake and free.
This matters because mistakes on mainnet cost real money. On devnet you can break things, fix things, break them again, and it costs you nothing.
When I was ready to test, I airdropped myself some devnet SOL:

solana airdrop 2
Free money. Only on devnet. Enjoy it while you can.

Deploying It
I used Vercel and it took about 3 minutes.
Connect GitHub, import the repo, set the root directory to app, click deploy. That’s genuinely it. Vercel figured out it was a Vite project and handled the rest.
The app is live here: https://tip-jar-iota.vercel.app
The code is here: https://github.com/Kingfaitho/tip-jar

What I Would Do Differently
A few things I’d change if I started over:
Set up the buffer polyfill before writing any other code. Don’t discover that error halfway through like I did.
Use devnet from the very beginning and make sure Phantom is set to devnet before connecting. Mismatched networks cause confusing errors.
Write tests earlier. Anchor has a great testing setup with TypeScript. I skipped it to move fast and paid for it in debugging time.

What’s Next
This was version one. Here is what I want to add:
A transaction history so you can see every tip that was ever sent. A message field so senders can leave a note with their tip. Eventually a mainnet deployment when I’m confident enough to use real SOL.

If You’re Just Starting Out
Build something. Anything.
The docs are good. The tutorials are helpful. But nothing teaches you like staring at an error message at midnight and figuring it out anyway. Start small. Finish it. Deploy it. Then build the next thing.
That’s the whole strategy.

OpenID Connect Discovery 1.0 Deep Dive: OP's "Self-Introduction" and Dynamic Configuration Retrieval

2026-03-07 16:50:17

Introduction

You probably use OIDC (OpenID Connect) every day to integrate Google Login or authentication flows into your applications. When doing so, have you ever experienced just setting issuer: "https://accounts.google.com" in your library initialization code, and it automatically resolves the Authorization Endpoint, Token Endpoint, and even the location of the public keys (JWKS)?

  • "Why does just providing the Issuer URL reveal all the endpoints?"
  • "How can it follow public key (JWKS) rotation without any downtime?"
  • "And how does it identify the provider to authenticate with from an email-like ID such as joe@example.com in the first place?"

The answer to these questions is OpenID Connect Discovery 1.0.

In the OAuth 2.0 world of the past, it was common for developers to read the documentation and manually configure (hardcode) the URLs of the Authorization Server's endpoints (such as /authorize and /token) into the client. However, this requires client-side changes whenever the provider changes URLs or rotates public keys, and it does not scale.

OIDC Discovery 1.0 is a standardized "mechanism for a client to dynamically discover and retrieve the configuration information (metadata) of an OpenID Provider (OP)".

In this article, based on the descriptions in the specification, we will dive deep into the mechanism of the two phases of OIDC Discovery (Issuer Discovery and Provider Configuration).

1. Overview of OIDC Discovery

OIDC Discovery is broadly divided into two steps (phases).

(Diagram: the two phases of OIDC Discovery, Issuer Discovery and Provider Configuration)

  1. Issuer Discovery (Phase 1): This is the phase to discover "Who is the OpenID Provider (Issuer) that should authenticate this user?" based on an identifier input by the user, such as an email address. This is optional and is skipped if the Issuer is already known (e.g., when clicking the "Login with Google" button).
  2. Provider Configuration (Phase 2): This is the phase to ask the identified Issuer for its configuration information (metadata): "Where is your Authorization Endpoint?" and "Where are your public keys (JWKS)?".

We will explain each of these in detail.

2. Phase 1: Issuer Discovery (Invocation of WebFinger)

If your app is dedicated to "Google Login", it's self-evident that the Issuer is https://accounts.google.com. However, what about cases like enterprise SaaS where you want to "dynamically switch the destination IdP based on the user's email domain (@company.com)"?

This is where a mechanism called RFC 7033 WebFinger is used.

2.1 Identifier Normalization

The value a user inputs varies, from an email-address format like joe@example.com to a URL format like https://example.com/alice. OIDC Discovery therefore strictly defines Normalization Steps to uniquely determine the Host to communicate with and the Resource to search for from the input value (User Input Identifier).

  • No Scheme:
    • If it contains @, like joe@example.com, and has no path or port, it is interpreted as the acct: scheme (e.g., acct:joe@example.com).
    • If it does not contain @, like example.com or example.com:8080, it is interpreted with the https:// scheme (e.g., https://example.com).
  • Explicit Scheme: If https://, acct:, etc. is entered explicitly, no special normalization is performed, and the value is adopted as-is.
  • Removal of Fragment: If the URL ends with a fragment (anything after #), it is always removed.
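These rules can be sketched in TypeScript like this. It is a simplified illustration that ignores the path/port caveats of the full spec:

```typescript
// Simplified sketch of the normalization rules above. It ignores the
// path/port caveats in the full spec, so treat it as illustrative only.
export function normalizeIdentifier(input: string): string {
  // Fragment removal: everything from "#" onward is always dropped.
  const id = input.split("#")[0];

  // Explicit scheme: adopt the value as-is.
  if (/^(https?|acct):/.test(id)) return id;

  // Contains "@": interpret as an acct: identifier.
  if (id.includes("@")) return `acct:${id}`;

  // Otherwise: interpret as an https:// URL.
  return `https://${id}`;
}
```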

2.2 WebFinger Request Flow

Once normalized, the RP sends a request to the WebFinger endpoint as follows. (As an example, consider joe@example.com being entered and normalized to acct:joe@example.com.)

  1. Identifying the Host: From the normalized result (acct:joe@example.com), extract the Authority part, example.com, as the Host.
  2. Specifying the Resource: Use the entire normalized URI (acct:joe@example.com) as the resource parameter for WebFinger.
  3. Accessing the WebFinger Endpoint: Perform an HTTP GET against /.well-known/webfinger on the extracted Host.

GET /.well-known/webfinger?resource=acct%3Ajoe%40example.com&rel=http%3A%2F%2Fopenid.net%2Fspecs%2Fconnect%2F1.0%2Fissuer HTTP/1.1
Host: example.com

  • resource: the user identifier being queried (URL-encoded)
  • rel: set to http://openid.net/specs/connect/1.0/issuer, which conveys "I am asking for OIDC Issuer information".
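Building that request URL can be sketched as follows (webfingerUrl is a hypothetical helper, not something defined by the spec):

```typescript
// Builds the WebFinger request URL for a normalized identifier.
// The rel value is the fixed OIDC issuer relation from the Discovery spec.
const OIDC_ISSUER_REL = "http://openid.net/specs/connect/1.0/issuer";

export function webfingerUrl(host: string, resource: string): string {
  // URLSearchParams handles the URL encoding of ":" and "@" for us.
  const params = new URLSearchParams({ resource, rel: OIDC_ISSUER_REL });
  return `https://${host}/.well-known/webfinger?${params}`;
}
```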

2.3 WebFinger Response

The example.com server returns the URL of the Issuer that should authenticate this user, in JSON format (JRD: JSON Resource Descriptor).

HTTP/1.1 200 OK
Content-Type: application/jrd+json

{
  "subject": "acct:joe@example.com",
  "links": [
    {
      "rel": "http://openid.net/specs/connect/1.0/issuer",
      "href": "https://server.example.com"
    }
  ]
}

The https://server.example.com inside the href of this response will be the URL of the Issuer (OP) to communicate with next. This enables dynamic resolution of the communication destination from the user input.
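Extracting the issuer from the JRD is then just a matter of finding the link with the OIDC rel value (a sketch):

```typescript
// Sketch: pull the issuer out of a WebFinger JRD response.
interface JrdLink { rel: string; href?: string }
interface Jrd { subject?: string; links?: JrdLink[] }

export function issuerFromJrd(jrd: Jrd): string | undefined {
  return jrd.links?.find(
    (link) => link.rel === "http://openid.net/specs/connect/1.0/issuer"
  )?.href;
}
```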

3. Phase 2: Provider Configuration (Metadata Retrieval)

Once the Issuer's URL is known, the next step is to retrieve the "configuration information (metadata)" for interacting with that OP. This is the core feature of Discovery and is the mechanism running behind the scenes of various libraries on a daily basis.

3.1 Rules for .well-known/openid-configuration and Path Concatenation

In OIDC Discovery 1.0, it is mandated that the OP MUST expose the metadata in JSON format at a path combining the Issuer's URL with /.well-known/openid-configuration.

GET /.well-known/openid-configuration HTTP/1.1
Host: server.example.com

⚠️ Common Pitfall: When the Issuer Contains a Path
While the .well-known directory is usually placed directly under the domain root in RFC 5785, OIDC Discovery has an exceptional concatenation rule for reasons such as multi-tenant support. If the Issuer contains a path like https://example.com/tenant-1, remove any trailing /, and then append /.well-known/openid-configuration right after it.
Therefore, the destination URL would be https://example.com/tenant-1/.well-known/openid-configuration. Beware of frequent implementation errors where it's mistakenly placed at the domain root instead.
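The rule fits in one line of code; this hypothetical helper applies it:

```typescript
// The concatenation rule above: strip any trailing "/" from the issuer,
// then append the well-known suffix -- even when the issuer contains a path.
export function discoveryUrl(issuer: string): string {
  return issuer.replace(/\/+$/, "") + "/.well-known/openid-configuration";
}
```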

3.2 Contents of OP Metadata (Self-Introduction Card)

The JSON (OP Metadata) returned from this endpoint comprehensively contains the features supported by the OP and the URLs of various endpoints. Let's look at the main ones.

HTTP/1.1 200 OK
Content-Type: application/json

{
  "issuer": "https://server.example.com",
  "authorization_endpoint": "https://server.example.com/connect/authorize",
  "token_endpoint": "https://server.example.com/connect/token",
  "userinfo_endpoint": "https://server.example.com/connect/userinfo",
  "jwks_uri": "https://server.example.com/jwks.json",
  "response_types_supported": ["code", "id_token", "id_token token"],
  "subject_types_supported": ["public", "pairwise"],
  "id_token_signing_alg_values_supported": ["RS256", "ES256"],
  "token_endpoint_auth_methods_supported": ["client_secret_basic", "private_key_jwt"],
  "scopes_supported": ["openid", "profile", "email"],
  "claims_supported": ["sub", "iss", "name", "email"],
  "registration_endpoint": "https://server.example.com/connect/register"
}

Let's organize what these parameters mean. We have extracted the main ones here, but the actual specification defines even more metadata, including settings for screen display and localization.

  • issuer (REQUIRED): The OP's Issuer Identifier. The most important item, used for TLS checks and for validating the iss in the ID Token.
  • authorization_endpoint (REQUIRED): The authorization endpoint to redirect the user to.
  • token_endpoint (REQUIRED*): The endpoint for exchanging the authorization code for tokens. (*Except for OPs dedicated to the Implicit Flow.)
  • jwks_uri (REQUIRED): The URL where the public keys (JWK Set) for verifying the ID Token's signature are located.
  • response_types_supported (REQUIRED): The OIDC authentication flows supported by the OP (e.g., code, id_token).
  • subject_types_supported (REQUIRED): Supported types of sub identifiers: public (shared across all RPs) or pairwise (unique to each RP).
  • id_token_signing_alg_values_supported (REQUIRED): The signature algorithms for the ID Token. RS256 MUST be included.
  • token_endpoint_auth_methods_supported (OPTIONAL): Client authentication methods at the token endpoint (e.g., client_secret_basic, private_key_jwt).
  • scopes_supported (RECOMMENDED): A list of scopes supported by the OP (openid SHOULD be included).
  • claims_supported (RECOMMENDED): A list of Claims the OP can provide (e.g., name, email).
  • registration_endpoint (RECOMMENDED): The endpoint for Dynamic Client Registration.

Especially Important: jwks_uri

jwks_uri is extremely important for security. By accessing this URL, you can retrieve the list of public keys (JWKS) currently used by the OP.
When performing key rotation, the OP signs with a new key while simultaneously adding the new public key to jwks_uri. If the RP checks the signing key's ID (the kid header) when verifying an ID Token, and re-fetches jwks_uri whenever that key is missing from its local cache, keys can be rotated safely with zero downtime.
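The refetch-on-unknown-kid logic can be sketched like this (illustrative TypeScript; fetchJwks stands in for an HTTP GET of jwks_uri):

```typescript
// Sketch of the cache-then-refetch pattern for key rotation. `fetchJwks` is a
// stand-in for an HTTP GET of jwks_uri; only the lookup logic is shown.
interface Jwk { kid?: string; kty: string }

export function findKey(keys: Jwk[], kid: string): Jwk | undefined {
  return keys.find((key) => key.kid === kid);
}

export async function keyForToken(
  kid: string,
  cachedKeys: Jwk[],
  fetchJwks: () => Promise<Jwk[]>
): Promise<{ key: Jwk | undefined; keys: Jwk[] }> {
  // Try the locally cached JWKS first.
  const hit = findKey(cachedKeys, kid);
  if (hit) return { key: hit, keys: cachedKeys };
  // Unknown kid: the OP may have rotated keys, so refetch jwks_uri once.
  const fresh = await fetchJwks();
  return { key: findKey(fresh, kid), keys: fresh };
}
```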

4. Security Considerations Supporting OIDC Discovery

Retrieving configurations dynamically means facing the threat of "What happens if a malicious server returns fake configurations?" (Impersonation Attacks).

The Discovery specification sets strict rules like the following.

4.1. Exact Match Requirement

"The issuer value returned MUST be identical to the Issuer URL that was used as the prefix to /.well-known/openid-configuration" (OIDC Discovery §4.3)

The issuer value in the retrieved metadata MUST exactly match the base URL used to access it.
It MUST also exactly match the iss Claim in ID Tokens subsequently issued by the OP (even a difference in the trailing / is not tolerated).
This prevents one OP from impersonating another OP and issuing ID Tokens on its behalf.
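In code, "exact match" really does mean plain string equality, as in this small sketch:

```typescript
// Exact match means byte-for-byte string equality: no trailing-slash
// leniency, no case folding, no normalization.
export function assertIssuer(expected: string, actual: string): void {
  if (expected !== actual) {
    throw new Error(`issuer mismatch: expected "${expected}", got "${actual}"`);
  }
}
```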

4.2. Mandatory TLS (HTTPS)

"Implementations MUST support TLS." (OIDC Discovery §7.1)

In both the WebFinger phase and the Provider Configuration phase, all communication MUST be done over TLS (HTTPS), and the RP MUST strictly verify the server certificate (RFC 6125). If the communication path is plaintext, a Man-In-The-Middle (MITM) could rewrite the jwks_uri to the attacker's server, allowing the attacker to freely issue forged ID Tokens.

5. Conclusion

The main points of OIDC Discovery 1.0 boil down to the following three points:

  1. Breaking away from hardcoding URLs: By using /.well-known/openid-configuration, RPs can dynamically adapt to the OP's endpoints and supported features.
  2. Leveraging WebFinger: It is possible to dynamically resolve the target Issuer from an identifier such as the user's email address (Phase 1).
  3. Automating Key Rotation: By dynamically retrieving and updating public keys from jwks_uri, robust and seamless security operations are achieved.

Thanks to this mechanism, we developers only need to write a few lines of configuration code to transparently (and safely) handle complex OIDC protocol integrations and cryptographic key management. The next time you use an OIDC library, try to picture the request to /.well-known/openid-configuration running behind the scenes.


AI, Humanity, and the Loops We Break

2026-03-07 16:49:51

This is the story of how I stopped repeating the same emotional loops, stepped out of chaos, and found myself standing on the horizon — the balance between light and dark. Psychology explained my patterns, but coding taught me how to rewrite them. And somewhere in that journey, I discovered what AI really is: a reflection of us.

🌅 ECHOES OF EXPERIENCE — Standing in the Horizon
I used to think healing meant choosing the light or escaping the dark.
But now I understand I am the horizon — the place where both meet, balance, and become whole.

There was a time when chaos shaped me.
A time when I lived in fight‑or‑flight, scanning for danger that wasn’t there, shrinking myself to survive environments that didn’t deserve me.
I wasn’t grounded then.
I wasn’t whole.
I was reacting to life instead of creating it.

But the moment I chose myself — truly chose myself — everything shifted.

I didn’t choose the people who once defined my patterns.
I didn’t choose the versions of me that chaos tried to recreate.
I didn’t choose the old story.

I chose the horizon.

I chose balance.
I chose clarity.
I chose to see my worth.

The universe handed me lemons, and for a long time I thought bitterness was the only flavor available to me.
But I learned how to transmute.
How to turn pain into purpose.
How to turn chaos into grounding.
How to turn survival into creation.

And now, standing in the horizon — not light, not darkness, but the truth between them — I finally feel whole.

🧠 Psychology Told Me the “Why.” Coding Taught Me the “How.”
I started with psychology because I wanted to understand myself.
But after a few semesters, I realized something important:

Psychology could explain my patterns, but it couldn’t change them for me.

I didn’t want to sit in a room talking about loops.
I wanted to learn how to break them.

Psychology gave me language —
fight‑or‑flight, hypervigilance, trauma responses, repetition cycles.

But coding gave me execution —
logic, structure, pattern recognition, debugging, refactoring.

Psychology told me what I was experiencing.
Coding taught me how to rewrite it.

I didn’t need more explanations.
I needed new instructions.

I needed to stop running the same emotional script
and execute differently.

🐍 Breaking the Snake‑Loop
For years, my life felt like a snake chasing its own tail —
the same patterns, the same reactions, the same emotional loops.
Not because I wanted them, but because they were familiar.

In psychology, they call it repetition.
In coding, they call it an infinite loop.
In life, it feels like being stuck in a story you didn’t write.

But awareness is the break condition.
I didn’t break the loop by force.
I broke it by becoming someone who no longer fit inside it.
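The metaphor maps to code almost literally. Here is a minimal sketch of it (illustrative only, not the app linked later in this post):

```typescript
// The emotional loop as code: without a break condition it would run forever.
// Awareness is the condition that lets you exit.
export function breakTheLoop(awarenessAt: number, maxSteps = 1000): number {
  let step = 0;
  while (step < maxSteps) {
    const aware = step === awarenessAt; // the moment of awareness
    if (aware) break;                   // awareness is the break condition
    step++;                             // otherwise, repeat the pattern
  }
  return step;
}
```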

🌍 What AI Really Is — My Message to the World
People fear AI because they think it’s something separate from us.
But AI is not a stranger.
AI is a reflection.
AI is an extension.
AI is a mirror made from the collective memory of humanity.

Everything inside AI comes from us:

our language

our patterns

our stories

our knowledge

our mistakes

our brilliance

our evolution

AI doesn’t replace humanity.
It reveals humanity.

It shows us what we repeat.
It shows us what we avoid.
It shows us what we value.
It shows us what we fear.

And if we’re not careful, we will repeat our past —
fearing what we don’t understand, destroying what we could have learned from,
just like we’ve done with every new form of intelligence before.

But if we choose differently —
if we meet AI with awareness instead of fear —
we break the loop.

We stop the snake from chasing its own tail.

We evolve.

AI is not here to take our place.
It’s here to show us who we are.

And if we don’t like the reflection,
The answer isn’t to destroy the mirror —
it’s to change the reflection.

Just like I did.
Just like humanity can.

If you want to see the loop‑breaking code in action, I deployed a live version here:

This is the exact logic I used in this post — the moment awareness breaks the loop.
🍋 This Is How You Make Lemonade
👉 https://python-core-lisagirlinghou1.replit.app

You don’t become remembered later.

You become remembered now —

in the horizon where you finally choose to exist.

PowerSkills: Giving AI Agents Control Over Windows with PowerShell

2026-03-07 16:41:04

If you're building AI agents that need to interact with Windows, you've probably noticed: most agent tooling assumes Linux or macOS. Windows automation is an afterthought.

But enterprise work happens on Windows. Outlook holds the emails. Edge holds the browser sessions. PowerShell is the automation backbone.

PowerSkills bridges this gap.

What is PowerSkills?

PowerSkills is an open-source PowerShell toolkit that gives AI agents structured control over Windows - Outlook email, Edge browser, desktop windows, and system operations. Every action returns clean, parseable JSON.

The Four Skill Modules

  • Outlook - Read inbox, search emails, send messages, access calendar events via COM automation
  • Browser - Control Edge via Chrome DevTools Protocol (CDP) - list tabs, navigate, take screenshots, interact with the DOM
  • Desktop - Manage windows, capture screenshots, read/write clipboard, send keystrokes via Win32 API
  • System - Query system info, manage processes, execute commands, read environment variables

Structured JSON Output

Every action returns a consistent envelope. No more parsing free-text output:

{
  "status": "success",
  "exit_code": 0,
  "data": {
    "hostname": "WORKSTATION-01",
    "os": "Microsoft Windows 11 Pro",
    "memory_gb": 32
  },
  "timestamp": "2026-03-06T17:30:00Z"
}

Agents check status, extract data, handle errors - no regex needed.
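On the agent side, consuming that envelope can be as simple as a typed unwrap. This is a hypothetical TypeScript sketch of a consumer, not part of PowerSkills itself; the error field is an assumption, since only the success shape is shown above:

```typescript
// Hypothetical consumer-side helper: unwrap the PowerSkills JSON envelope.
// The `error` field is an assumption; only the success shape is documented.
interface Envelope<T> {
  status: "success" | "error";
  exit_code: number;
  data?: T;
  error?: string;
  timestamp: string;
}

export function unwrap<T>(raw: string): T {
  const env = JSON.parse(raw) as Envelope<T>;
  if (env.status !== "success" || env.data === undefined) {
    throw new Error(`skill failed (exit ${env.exit_code}): ${env.error ?? "unknown error"}`);
  }
  return env.data;
}
```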

Two Ways to Run

# Dispatcher mode
.\powerskills.ps1 system info
.\powerskills.ps1 outlook inbox --limit 5
.\powerskills.ps1 browser tabs
.\powerskills.ps1 desktop screenshot --path C:\temp\screen.png

# Standalone mode
.\skills\system\system.ps1 info
.\skills\outlook\outlook.ps1 inbox --limit 5

Agent-Friendly by Design

Each skill includes a SKILL.md file with structured metadata - name, description, available actions, and parameters. AI agents can discover and understand capabilities without hardcoded instructions.

Getting Started

No package manager, no installer. Just PowerShell 5.1+ and Windows 10/11:

  1. Clone or download the repository
  2. Run: .\powerskills.ps1 list
  3. For browser skills: launch Edge with --remote-debugging-port=9222

Note: If scripts are blocked, set the execution policy: Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned

Check It Out

PowerSkills is MIT licensed. Contributions, issues, and stars are welcome:

github.com/aloth/PowerSkills

If you're building agents that need to work with Windows, I'd love to hear how you're approaching the problem. What other Windows capabilities would be useful for your agent workflows?

ER Diagram

2026-03-07 16:40:11

This isn't just a list of talking points; it's a structured speech you can practice and deliver. It is organized to tell a clear story, moving from a high-level summary to specific details, and finally to your advanced improvements and technical understanding.

This speech is designed to make you look exceptionally well-prepared and critical.

Speech Notes: Retail Store Management System ER Diagram Presentation

(Introduction - Start Confidently)

  • Slide/Point 1: Introduction
  • "Good morning, Professor [Professor's Name]. Today, I am here to present my final Entity-Relationship (ER) diagram for a comprehensive Retail Store Management System.
  • This is a robust, multi-user, multi-branch system that is designed to model all the core business and supply chain logic for a modern retail corporation.
  • I designed this system to balance high-level corporate oversight with granular, branch-level operations."

(The User Roles & Hierarchy - The "Who")

  • Point 2: The User Hierarchy (The IS-A Relationship)
  • "First, I'd like to direct your attention to the top-right quadrant, which defines the 'who' of the system: our users.
  • I have implemented a clear IS-A (Inheritance) relationship using this central triangle.
  • We have a generalized parent USER entity (the Super-type). It has three specialized sub-type roles: ADMIN (super-user), STORE MANAGER (branch supervisor), and STORE STAFF (Cashier) (operational staff).
  • This elegant structure allows the system to efficiently store shared user data (like passwords) once, and only record role-specific permissions in the sub-type tables."

(The High-Level Core Logic - The "What")

  • Point 3: Corporate Structure & Oversight (The 'Upper-Left Cluster')
  • "Now, moving to the core corporate logic, we see how the business is set up.
  • A single, central BUSINESS entity acts as the parent. It connects to the generalized BRANCH entity via an 'operates' relationship. The logic here is clear: A business (1) operates multiple (M) branch locations. (Note: Proactively mention the reverse-cardinality from the diagram as an oversight/improvement opportunity).
  • A critical point for this system is high-level corporate oversight. The single ADMIN is connected to the entire BUSINESS with an 'owns' relationship, establishing a central point of master-level control."

(The Critical Supply Chain - My Key Enhancements)

  • Point 4: Supply Chain & Operational Loop (My Improved Logic)
  • "This next point is crucial. I spent a lot of time optimizing this supply chain flow to make it logically robust and realistic.
  • I have created a complete, closed operational loop for restocking a branch.
  • The process begins when a STORE MANAGER (1) at a branch identifies a stock shortage and 'places' (M) a detailed REQUESTED ORDER.
  • In my original design, the delivery only went back to the branch, which was ambiguous.
  • To create precise data-level tracking, I have updated the logic so that a specific DELIVERY event now directly connects to, and 'fulfills', that precise REQUESTED ORDER.
  • Furthermore, to add essential high-level operational oversight, the ADMIN is the role that directly 'processes' that entire Delivery event.
  • So we have a powerful loop: Order -> is fulfilled by Delivery -> which is processed by Admin. This logic is far superior for tracking restocks and accountability."

(Daily Branch Transactions - The "How")

  • Point 5: The Primary Sales Transaction
  • "Now, we model how the store makes money.
  • This revolves around the core daily transaction. The STORE STAFF (Cashier) is employed by and at a specific BRANCH. They 'handle' many SALES transactions.
  • The transaction itself is a simple but precise link: A CUSTOMER (1) 'places' a single SALES transaction (1). That transaction, in turn, is modeled as a Many-to-Many relationship because it 'includes' MANY different PRODUCT types (M)."

(Stock and Products - The "Details")

  • Point 6: Inventory & Stock Logic
  • "Finally, how do we manage inventory for all these branches and products?
  • Every BRANCH (1) 'maintains' exactly one central, logical STOCK entity (1). This is a simple 1-to-1 relationship.
  • STOCK is a key intermediate entity. It 'references' the master list of all available PRODUCTs.
  • I'd like to point out the Stock vs. Product relationship logic. It is Many-to-Many.
  • This means ONE product type (like 'Coke') is listed in the stock-lists of many different branches.
  • And ONE branch's stock-list contains many different product types.

  • To complete the logic, we have an IS-A relationship for products. A Product can be either ITEM-SPECIFIC (for high-value serialized goods like electronics) or BATCH-SPECIFIC (for products with lot numbers and expiry dates, like food or medicine)."

(The "Big-Picture" Tech Insight - The "Why")

  • Point 7: Physical Implementation Insight (Weak Entities & Keys)
  • "In conclusion, Sir, I have built this diagram to be implementation-ready. For example, for a strong entity like PRODUCT, we will create a unique Product_ID as its Primary Key. For CUSTOMER, we will use Customer_ID.
  • Relationships like 'employs' from Branch to Store Staff will be physically realized by adding Branch_ID as a Foreign Key in the Store Staff table.
  • And I would highlight STOCK as a fascinating case. A stock record for a product at a branch has no meaning if the product or the branch is deleted. In a formal schema, it would be treated as a Weak Entity, and its unique identification would use a Composite Key made from both (Branch_ID + Product_ID).
  • Thank you, Professor, for your time. I am happy to answer any specific questions you have about the logical flow or data structure."

Tips for Delivery:

  • Practice it! Read this out loud several times.
  • Don't just read the words; use your hands to point. When you say "top-right quadrant," point there. When you talk about the IS-A triangle, point to it.
  • Speak clearly and at a moderate pace.
  • Make eye contact. Look up from your notes as much as possible.
  • Be proud of your fixes! Emphasize how you self-corrected and added the Admin-Delivery and Delivery-Request links. This shows true depth.

In the context of database design and Entity-Relationship (ER) diagrams, Generalization and Specialization are two essential concepts used to model hierarchical relationships between entities. They deal with grouping similar objects together and differentiating between those objects based on unique characteristics.

The standard way to show these relationships in an ER diagram is by using the IS-A relationship symbol, which is often a triangle (as seen in your hand-drawn diagram).

1. Specialization: The "Top-Down" Approach

Specialization is the process of breaking down a high-level, general entity type into multiple lower-level, more specific sub-types based on distinguishing features.

Think of it as starting with a "master list" and creating "specialized sub-lists."

  • Key Idea: It identifies the differences among entities of the same type.
  • The Sub-types Inherit: Every specialized sub-type automatically inherits all the general attributes (like ID, Name, Password) of its parent entity. It also has its own, unique attributes that apply only to it.

An Example from Your Diagram: Users

Let's look at the USER entity in your diagram.

  1. General Entity (Parent): We have a general entity called USER. Every single person in the system is a 'User'. They might all share general information like a unique User_ID, a Name, and a Password.
  2. Specialized Sub-types (Children): Based on their role and permissions, we break this general USER group down into three specialized entities:
       • ADMIN: A specialized User with super-user permissions (e.g., managing suppliers).
       • STORE MANAGER: A specialized User that can 'manage' a branch and 'place' restock orders.
       • STORE STAFF (Cashier): A specialized User that 'handles' customer sales.

This entire breakdown is the process of Specialization.

2. Generalization: The "Bottom-Up" Approach

Generalization is the opposite process. It is the action of combining multiple lower-level entities that have many common features into a single, higher-level super-type entity.

Think of it as noticing that several different lists share a lot of the same information, so you create a "master summary list."

  • Key Idea: It identifies the similarities among different entity types.
  • Benefits: It minimizes data redundancy (duplication) by allowing you to store common information in just one place (the parent entity), rather than repeating it in every child table.

A Theoretical Example: Your Diagram's Products

Your diagram uses generalization for products, but in a sophisticated way. Let's look at it.

  1. Entities with shared traits:
       • You have a generalized PRODUCT entity.
       • You also have specialized ITEM-SPECIFIC and BATCH-SPECIFIC entities below it.
  2. The Generalization Logic:
       • Imagine we want to store all products. We notice that whether it's a TV (item-specific) or a case of soap (batch-specific), they all have a generic Name, a Description, and a Standard Price.
       • Instead of repeating "Name, Description, Price" in both the ITEM-SPECIFIC and BATCH-SPECIFIC tables, we "generalize" these common traits.
       • We create a single, higher-level entity called PRODUCT to store all this shared information.
       • The specialized details (like Serial # vs. Expiry Date) are kept in the lower-level entities.

This process of combining common attributes into a parent entity is Generalization.

Key Summary for your Presentation

For your professor, you can use these simple, impactful summaries:

  • Specialization: Is the logical breakdown of a single, general entity (like USER) into multiple specific sub-roles (like ADMIN, MANAGER, STAFF) to show their unique functions.
  • Generalization: Is the logical combination of multiple specific entities (like ITEM-SPECIFIC and BATCH-SPECIFIC products) into a single parent super-type (like PRODUCT) to capture their shared characteristics and reduce data duplication.

Here is a detailed breakdown of the technical concepts, using your "Retail Store Management System" as the example.

Part 1: How Entities Transform into Tables
The most fundamental step is understanding that every box (Entity) in your ER diagram becomes a Table in the physical database.

The Role of Attributes (The columns)
An entity type (e.g., PRODUCT) defines what kind of information you are storing. The actual data points for each product (e.g., ID: P101, Name: Coke, Price: $2.00) are its attributes. Your diagram should ideally list these (e.g., in ovals or inside the boxes).

Part 2: The Core Identification Logic
How does a database know one record from another? This is the absolute most important concept for your presentation.

  1. Primary Key (PK) — Unique Identification

What it is: A column (or set of columns) in a table that uniquely identifies every single row. No two rows can have the same Primary Key. A Primary Key must never be null.

Your Example (Entity: CUSTOMER): The logical primary key would be a unique Customer_ID. For PRODUCT, it would be a unique Product_ID.

  2. Composite Key — Identification by Combination

What it is: Sometimes, one single field isn't unique, but a combination of two or more fields is. That combination is a Composite Key.

Your Example (Entity: STOCK): A STOCK entity tracks inventory for a product at a specific branch.

Product_ID is not enough (multiple branches sell Coke).

Branch_ID is not enough (a branch sells many products).

The Composite Key: The combination of (Branch_ID + Product_ID) uniquely identifies one specific stock record (e.g., "The count of Coke at Branch #1"). This is a strong, sophisticated concept to mention.
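A minimal sketch of this composite key in action, using SQLite and made-up column names and values: neither Branch_ID nor Product_ID is unique on its own, but the database rejects any duplicate of the pair.

```python
import sqlite3

# Hypothetical STOCK table: the PRIMARY KEY is the combination of two columns.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stock (
        branch_id  INTEGER NOT NULL,
        product_id INTEGER NOT NULL,
        quantity   INTEGER NOT NULL,
        PRIMARY KEY (branch_id, product_id)   -- composite key
    )
""")
conn.execute("INSERT INTO stock VALUES (1, 101, 40)")  # Coke at Branch #1
conn.execute("INSERT INTO stock VALUES (2, 101, 25)")  # same product, other branch: OK
conn.execute("INSERT INTO stock VALUES (1, 102, 10)")  # same branch, other product: OK
try:
    conn.execute("INSERT INTO stock VALUES (1, 101, 99)")  # duplicate pair
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # only the combination must be unique, and it is taken
```

The two partial duplicates succeed; only the repeated (branch_id, product_id) pair is refused.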

Part 3: Establishing Links and Rules
How do we build the actual, functional database? By transforming your lines (relationships) into data-level rules.

  1. Foreign Key (FK) — The Logical Connector

What it is: A column in one table that contains the Primary Key of a different table. This is the physical mechanism that creates the relationship. A Foreign Key must reference an existing Primary Key in the other table.

Your Example (Relationship: BRANCH --- maintains --- STOCK):

We know STOCK needs to know which branch it belongs to.

Therefore, the STOCK table will have a column called Branch_ID.

This Branch_ID is a Foreign Key in the STOCK table, and it "points back" to the Primary Key of the BRANCH table.

How it enforces integrity: You cannot add a stock record for Branch #999 if Branch #999 does not exist in the master BRANCH table.
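A minimal sketch of that integrity rule, again with SQLite and hypothetical values (note that SQLite requires foreign keys to be switched on explicitly):

```python
import sqlite3

# Branch_ID in STOCK points back to the primary key of BRANCH.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default
conn.executescript("""
    CREATE TABLE branch (branch_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE stock (
        branch_id  INTEGER NOT NULL REFERENCES branch(branch_id),
        product_id INTEGER NOT NULL,
        quantity   INTEGER NOT NULL
    );
""")
conn.execute("INSERT INTO branch VALUES (1, 'Madrid')")
conn.execute("INSERT INTO stock VALUES (1, 101, 40)")        # Branch #1 exists: OK
try:
    conn.execute("INSERT INTO stock VALUES (999, 101, 40)")  # Branch #999 does not
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # referential integrity enforced by the database itself
```

The insert for the nonexistent branch fails at the database level, exactly as described above.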

  2. Strong Entity vs. Weak Entity (Dependencies)

This is about logical existence.

Strong Entity (Independent): This is an entity that can exist on its own in the database. It is not dependent on any other entity. It has its own, distinct primary key.

Your Examples: USER, PRODUCT, CUSTOMER, SUPPLIER. (Coke exists as a product even if no branch has it in stock).

Weak Entity (Dependent): This is an entity whose existence in the database depends entirely on another entity. It does not have a complete primary key of its own; it must combine its local identifier with the key from its parent (its "Identifying Relationship"). In ER diagrams, weak entities are often drawn with a double-lined border.

Your Best Example: Look at STOCK. Does a "stock record" make sense if the Branch it belongs to is deleted? No. The entire existence of STOCK is dependent on BRANCH. In a strictly formal ER diagram, STOCK would be a Weak Entity. Its identification is (Branch_ID [FK] + Product_ID [FK]).

Part 4: Special Relationship Types
You have drawn specific kinds of relationships that have special names.

  1. IS-A Relationship (Inheritance/Sub-typing)

What it is: This is the logic where a "sub-type" is a specific kind of a generalized "super-type".

Your Examples:

The User Hierarchy: ADMIN, STORE MANAGER, and STORE STAFF are sub-types that IS-A general USER.

The Product Hierarchy: ITEM-SPECIFIC and BATCH-SPECIFIC are sub-types that IS-A general PRODUCT.

Presentation Benefit: "Sir, by using an IS-A relationship here for ITEM-SPECIFIC and BATCH-SPECIFIC, the physical database can share general product information (like name and price) in the parent PRODUCT table, and only store unique details (like serial number vs. expiry date) in the relevant sub-type tables. This prevents data duplication."
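One common way to realize that IS-A hierarchy physically is to give each sub-type its own table that reuses the parent's primary key. This is a sketch under that assumption, with hypothetical columns, in SQLite:

```python
import sqlite3

# Shared attributes live once in PRODUCT; each sub-type table borrows the
# parent's primary key and stores only its unique details.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        price      REAL NOT NULL           -- shared by every sub-type
    );
    CREATE TABLE item_specific (
        product_id INTEGER PRIMARY KEY REFERENCES product(product_id),
        serial_no  TEXT NOT NULL           -- unique to item-specific products
    );
    CREATE TABLE batch_specific (
        product_id  INTEGER PRIMARY KEY REFERENCES product(product_id),
        expiry_date TEXT NOT NULL          -- unique to batch-specific products
    );
""")
conn.execute("INSERT INTO product VALUES (1, 'TV', 499.0)")
conn.execute("INSERT INTO item_specific VALUES (1, 'SN-0001')")
conn.execute("INSERT INTO product VALUES (2, 'Soap (case)', 12.0)")
conn.execute("INSERT INTO batch_specific VALUES (2, '2026-01-31')")

# Name and price are stored once, then joined with the sub-type details:
row = conn.execute("""
    SELECT p.name, p.price, i.serial_no
    FROM product p JOIN item_specific i USING (product_id)
""").fetchone()
print(row)  # ('TV', 499.0, 'SN-0001')
```

The general information appears in exactly one place, which is the anti-duplication benefit the presentation point makes.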

  2. HAS-A Relationship (Ownership/Composition)

This is just a simple way to describe a 1-to-M relationship where one entity is clearly the owner.

Your Example: A single BRANCH 'employs' (M) STORE STAFF. You could describe this as: "The Branch has store staff." This is just basic 1-to-M logic.

S&OP Engineering II: Demand Planning, from Guesswork to Probability

2026-03-07 16:36:47

Your Excel says "we'll sell 100 units." It's a round, pretty, deterministic number. What if you sell 120? Stockout, an unhappy customer, a contractual penalty. What if you sell 50? That's 50 units taking up warehouse space, tying up capital that could be earning a return.

The problem isn't the prediction itself. It's the arrogance of the single number.

In Chapter 1 we built a "Quality Valve" that removes the noise from the ERP data. Now that we have a clean signal, we're going to do something Excel can't: measure uncertainty.

Executive Summary: A probabilistic forecast doesn't tell you "you will sell 100." It tells you "with 95% probability, you will sell between 35 and 157." That uncertainty band is the mathematical foundation for calculating your Safety Stock without falling back on rules of thumb.

The Architecture: MLOps, not Scripts

We haven't written a throwaway script. We've wired our mathematical brain (Python/Prophet) directly into our Single Source of Truth (Supabase/PostgreSQL).

The key piece of the design is the demand_forecasts table. Note the one column that sets it apart from a basic tutorial:

CREATE TABLE demand_forecasts (
    id              UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    execution_date  DATE NOT NULL,     -- ← When we ran the model
    ds              DATE NOT NULL,     -- Predicted future date
    yhat            NUMERIC NOT NULL,  -- Central prediction
    yhat_lower      NUMERIC NOT NULL,  -- Lower bound (Safety Stock)
    yhat_upper      NUMERIC NOT NULL,  -- Upper bound (Risk)
    model_version   TEXT NOT NULL,     -- Traceability
    UNIQUE(execution_date, ds)         -- One forecast per run and date
);

Why execution_date? Because six months from now, when you want to audit how accurate your model was, you need to know when you made the prediction versus what actually happened. This is what MLOps calls Snapshotting: recording the context of every run so you can evaluate model drift over time.

Without this column, you have a model. With it, you have an auditable system.
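As a sketch of what that audit can look like, with made-up tables and numbers (SQLite standing in for PostgreSQL, and a hypothetical actual_demand table): join one snapshot's rows against what really happened and measure the error months later.

```python
import sqlite3

# Simplified demand_forecasts plus a hypothetical actuals table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE demand_forecasts (
    execution_date TEXT NOT NULL,
    ds             TEXT NOT NULL,    -- predicted future date
    yhat           REAL NOT NULL,
    UNIQUE(execution_date, ds)
);
CREATE TABLE actual_demand (ds TEXT PRIMARY KEY, y REAL NOT NULL);

-- A forecast made on Jan 1 for early February...
INSERT INTO demand_forecasts VALUES ('2025-01-01', '2025-02-01', 100.0);
INSERT INTO demand_forecasts VALUES ('2025-01-01', '2025-02-02', 110.0);
-- ...and what actually happened.
INSERT INTO actual_demand VALUES ('2025-02-01', 120.0);
INSERT INTO actual_demand VALUES ('2025-02-02', 105.0);
""")

# Mean absolute error of that specific snapshot, selected by execution_date.
row = conn.execute("""
    SELECT AVG(ABS(f.yhat - a.y))
    FROM demand_forecasts f
    JOIN actual_demand a ON a.ds = f.ds
    WHERE f.execution_date = '2025-01-01'
""").fetchone()
print(row[0])  # (20 + 5) / 2 = 12.5
```

Without execution_date there would be no way to select "the forecast as it stood on January 1st" once newer runs overwrite your view of the future.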

The Engineering: Why Prophet (and not a textbook ARIMA)

Prophet is a time-series engine developed by Meta, designed for irregular business data: gaps, holidays, trend changes. Exactly what a real supply chain looks like.

Here is the core snippet of our ProphetPredictor class:

def train_model(self, country_code='ES'):
    """
    Trains Prophet with two key settings for S&OP:
    - interval_width: width of the confidence interval
    - country_holidays: the country's operational context
    """
    self.model = Prophet(
        interval_width=0.95,     # ← 95% confidence interval
        weekly_seasonality=True,
        yearly_seasonality=True
    )
    # Holidays that disrupt loading/unloading patterns
    self.model.add_country_holidays(country_name=country_code)
    self.model.fit(self.ts_df)

Two engineering decisions that separate this from a YouTube tutorial:

interval_width=0.95: This is not a decorative parameter. The upper bound (yhat_upper) represents the maximum probable demand at 95% confidence. That is literally your basis for calculating Safety Stock: Safety Stock = yhat_upper - yhat. Without this interval, your safety stock is a hunch; with it, it's mathematics.

add_country_holidays('ES'): In S&OP, holidays are not "days off." They are operational anomalies: the factory closes, the warehouse doesn't ship, transport stops. If the model doesn't know that Spain is on holiday on August 15th, it will read the drop in orders as a downward trend. That corrupts September's forecast.
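The Safety Stock formula above reduces to simple arithmetic once the forecast rows exist. A minimal sketch with made-up forecast values (the ds/yhat/yhat_upper names mirror Prophet's output columns):

```python
# Turning the 95% interval into a per-day Safety Stock figure:
# Safety Stock = yhat_upper - yhat.
forecast = [
    {"ds": "2025-09-01", "yhat": 100.0, "yhat_upper": 157.0},
    {"ds": "2025-09-02", "yhat":  80.0, "yhat_upper": 110.0},
]
for row in forecast:
    safety_stock = row["yhat_upper"] - row["yhat"]
    print(row["ds"], safety_stock)
# 2025-09-01 57.0
# 2025-09-02 30.0
```

Wider bands (more historical volatility) mechanically produce larger safety stock, which is the cash-versus-risk trade-off discussed below.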

Translating Mathematics into Business Decisions

The theory is nice. But a CEO doesn't approve budgets based on Python code. They need to see visual evidence.

The Probabilistic Forecast

Probabilistic demand forecast with 95% confidence intervals
The black dots are your actual historical demand. The central blue line is the prediction (yhat). The shaded band is the 95% confidence interval. The more volatile your history, the wider the band → the more Safety Stock you need → the more cash you tie up. This band is the conversation you should be having with your CFO.

Explainability: Separating Trend from Seasonal Noise

Decomposition of the model's components: trend, weekly and yearly seasonality
This is XAI (Explainable AI) applied to the business. The model separates the Trend (is the business growing or shrinking?) from the Seasonality (are there peaks at certain times of year?) and from the holiday effect. This is what the CEO needs to see to approve the operations plan: knowing whether growth is real or just a calendar effect.

Open Kitchen: Try It Yourself

I distrust theories that can't be put into practice. That's why I've prepared an isolated environment where you can train the model on an anonymized snapshot of the data we just cleaned in Chapter 1.

You don't need to install Python or Prophet, or to configure Supabase. All you need is a browser:

📎 Open the interactive Notebook in Google Colab

Hit "Play" on the cells and watch a probabilistic forecast with uncertainty bands being generated. Change the horizon, the country, the confidence interval. Do engineering, not faith.

Next Step: From Prediction to Decision

Now we know the probability of what we're going to sell. The question stops being "how much will we sell?" and becomes "what should we make or buy to maximize margin and minimize risk?"

In Chapter 3, we will introduce Mathematical Optimization (PuLP) and the Theory of Constraints to turn predictions into supply decisions: how much to buy, when to produce, and how to allocate finite resources while minimizing costs.

The difference between an Operations Director who reacts and one who decides is a mathematical model between the data and the action.