2026-03-07 16:57:21
I’ll be honest. When I started this I had no idea what I was doing.
I knew some Solana basics. I’d read the docs, watched a few YouTube videos, nodded along like I understood everything. But actually building something? That was a different story.
So I set myself a challenge. Build a real dApp. Deploy it. Make it work. No tutorials holding my hand the whole way.
I chose a tip jar. Simple concept: connect your wallet, enter an amount, send SOL to someone. How hard could it be?
Very hard, as it turns out. But I got there.
Why a Tip Jar?
Because I wanted to build something real but not overwhelming.
A lot of beginner Solana tutorials have you building things that are either too simple to be impressive or too complex to actually finish. A tip jar sits right in the middle. It involves real wallet connections, real transactions, real money moving on a real blockchain. But the logic is simple enough that you can actually understand every line of code you write.
If you can’t explain what your code does, you don’t really know it. That was my rule.
The Stack I Used
How the App Actually Works
When you open the app you see a Connect Wallet button. You click it, Phantom pops up, you approve the connection.
Now the app knows who you are: your public key, your balance, everything it needs.
You type in how much SOL you want to send. You hit the button. Here is what happens behind the scenes in about 2 seconds:
The frontend builds a transaction. Think of a transaction like a signed cheque: it says who is sending, who is receiving, how much, and carries your signature proving you authorized it.
That transaction goes to Solana devnet. Validators on the network check it, confirm it’s legitimate, and execute it. The SOL moves. The blockchain records it permanently.
Done. Nobody can reverse it. Nobody can lie about it. It just happened.
That permanence is what makes blockchain interesting to me. Not the hype, not the prices, just the fact that code can enforce rules without anyone in the middle.
The Part That Almost Broke Me
Vite and Solana do not like each other out of the box.
Solana’s web3.js library was built when Webpack ruled everything. It relies on Node.js built-ins like buffer that Vite deliberately excludes from browser bundles.
So I kept getting this error:
Module "buffer" has been externalized for browser compatibility
I stared at that error for longer than I want to admit.
The fix was actually two lines. Install the buffer package and add this to the top of main.tsx:
import { Buffer } from "buffer";
window.Buffer = Buffer;
Two lines. That’s it. Sometimes the most painful bugs have the most anticlimactic solutions.
What Anchor Actually Does
Before this project I thought Anchor was optional. Like a nice-to-have.

It is not optional. Writing a Solana program in pure Rust without Anchor is like building furniture with no tools: technically possible, but why would you do that to yourself?

Anchor handles all the boilerplate that would otherwise make you want to quit. Account validation, error handling, serialization: it takes care of all of it so you can focus on what your program actually does.
My program does one thing. It takes SOL from one wallet and sends it to another. Here is the whole logic:

pub fn send_tip(ctx: Context<SendTip>, amount: u64) -> Result<()> {
    anchor_lang::system_program::transfer(
        CpiContext::new(
            ctx.accounts.system_program.to_account_info(),
            anchor_lang::system_program::Transfer {
                from: ctx.accounts.sender.to_account_info(),
                to: ctx.accounts.receiver.to_account_info(),
            },
        ),
        amount,
    )?;
    msg!("Tip sent! 💰");
    Ok(())
}
That CPI call at the end deserves a mention. A Cross-Program Invocation is basically my program saying “hey, Solana’s System Program, please move this SOL for me.” Programs on Solana don’t move SOL directly. They ask the System Program to do it. Once that clicked, everything made more sense.
Devnet vs Mainnet
Everything I built runs on devnet. Devnet is Solana’s testing environment: same technology, same speed, but the SOL is fake and free.
This matters because mistakes on mainnet cost real money. On devnet you can break things, fix things, break them again, and it costs you nothing.
When I was ready to test I airdropped myself some devnet SOL:

solana airdrop 2
Free money. Only on devnet. Enjoy it while you can.
Deploying It
I used Vercel and it took about 3 minutes.
Connect GitHub, import the repo, set the root directory to app, click deploy. That’s genuinely it. Vercel figured out it was a Vite project and handled the rest.
The app is live here: https://tip-jar-iota.vercel.app
The code is here: https://github.com/Kingfaitho/tip-jar
What I Would Do Differently
A few things I’d change if I started over:
Set up the buffer polyfill before writing any other code. Don’t discover that error halfway through like I did.
Use devnet from the very beginning and make sure Phantom is set to devnet before connecting. Mismatched networks cause confusing errors.
Write tests earlier. Anchor has a great testing setup with TypeScript. I skipped it to move fast and paid for it in debugging time.
What’s Next
This was version one. Here is what I want to add:
A transaction history so you can see every tip that was ever sent. A message field so senders can leave a note with their tip. Eventually a mainnet deployment when I’m confident enough to use real SOL.
If You’re Just Starting Out
Build something. Anything.
The docs are good. The tutorials are helpful. But nothing teaches you like staring at an error message at midnight and figuring it out anyway. Start small. Finish it. Deploy it. Then build the next thing.
That’s the whole strategy.
2026-03-07 16:50:17
You probably use OIDC (OpenID Connect) every day to integrate Google Login or other authentication flows into your applications. Have you ever noticed that just setting issuer: "https://accounts.google.com" in your library's initialization code automatically resolves the Authorization Endpoint, the Token Endpoint, and even the location of the public keys (JWKS)?
The answer is OpenID Connect Discovery 1.0.
In the old OAuth 2.0 world, it was common for developers to read the documentation and manually configure (hardcode) the URLs of the Authorization Server's endpoints (such as /authorize and /token) into the client. But that approach requires client-side changes whenever the provider changes URLs or rotates public keys, and it doesn't scale.
OIDC Discovery 1.0 is a standardized "mechanism for a client to dynamically discover and retrieve the configuration information (metadata) of an OpenID Provider (OP)".
In this article, based on the descriptions in the specification, we will dive deep into the mechanism of the two phases of OIDC Discovery (Issuer Discovery and Provider Configuration).
OIDC Discovery is broadly divided into two steps (phases).
We will explain each of these in detail.
If your app is dedicated to "Google Login", it's self-evident that the Issuer is https://accounts.google.com. However, what about cases like enterprise SaaS where you want to "dynamically switch the destination IdP based on the user's email domain (@company.com)"?
This is where a mechanism called RFC 7033 WebFinger is used.
To begin with, the value a user enters can vary from an email-address format like alice@example.com to a URL format like https://example.com/alice. OIDC Discovery therefore strictly defines Normalization Steps to uniquely determine the Host to communicate with and the Resource to search for from the input value (the User Input Identifier).

- If the input contains an @, like joe@example.com, and has no path or port, it is interpreted as the acct: scheme. (e.g., acct:joe@example.com)
- If the input has no @, like example.com or example.com:8080, it is interpreted as the https:// scheme. (e.g., https://example.com)
- If a scheme such as https:// or acct: is explicitly entered, no special normalization is performed and the value is adopted as-is.
- If there is a fragment (#) at the end of the URL, it is always removed.

Once normalized, the RP sends a request to the WebFinger endpoint as follows. (Consider the case where joe@example.com is entered and normalized to acct:joe@example.com.)
1. From the normalized identifier (acct:joe@example.com), extract the authority part, example.com, as the Host.
2. Use the normalized identifier (acct:joe@example.com) itself as the resource parameter for WebFinger.
3. Send a GET request to /.well-known/webfinger on the extracted Host.
GET /.well-known/webfinger?resource=acct%3Ajoe%40example.com&rel=http%3A%2F%2Fopenid.net%2Fspecs%2Fconnect%2F1.0%2Fissuer HTTP/1.1
Host: example.com
- resource: the user identifier to query (URL-encoded)
- rel: set to http://openid.net/specs/connect/1.0/issuer, which conveys "I am asking for the OIDC Issuer information".

The server at example.com returns the URL of the Issuer that should authenticate this user, in JSON format (JRD: JSON Resource Descriptor).
HTTP/1.1 200 OK
Content-Type: application/jrd+json
{
"subject": "acct:joe@example.com",
"links": [
{
"rel": "http://openid.net/specs/connect/1.0/issuer",
"href": "https://server.example.com"
}
]
}
The https://server.example.com inside the href of this response will be the URL of the Issuer (OP) to communicate with next. This enables dynamic resolution of the communication destination from the user input.
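The normalization rules and WebFinger request construction described above can be sketched in a few lines of Python. The helper names here (normalize_identifier, webfinger_url) are illustrative, not from any library:

```python
from urllib.parse import quote, urlsplit

OIDC_ISSUER_REL = "http://openid.net/specs/connect/1.0/issuer"

def normalize_identifier(user_input: str) -> str:
    """Apply the OIDC Discovery normalization rules to raw user input."""
    value = user_input.strip()
    # A fragment (#...) at the end is always removed.
    value = value.split("#", 1)[0]
    # An explicit scheme is adopted as-is, with no special normalization.
    if value.startswith(("acct:", "https://", "http://")):
        return value
    # user@host with no path or port -> the acct: scheme.
    if "@" in value:
        return "acct:" + value
    # A bare host (optionally with a port) -> the https:// scheme.
    return "https://" + value

def webfinger_url(resource: str) -> str:
    """Build the WebFinger request URL for a normalized resource."""
    if resource.startswith("acct:"):
        host = resource.rsplit("@", 1)[1]   # authority of acct:user@host
    else:
        host = urlsplit(resource).netloc    # authority of the URL
    return (f"https://{host}/.well-known/webfinger"
            f"?resource={quote(resource, safe='')}"
            f"&rel={quote(OIDC_ISSUER_REL, safe='')}")
```

Running webfinger_url(normalize_identifier("joe@example.com")) reproduces exactly the request line shown earlier.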
Once the Issuer's URL is known, the next step is to retrieve the "configuration information (metadata)" for interacting with that OP. This is the core feature of Discovery and is the mechanism running behind the scenes of various libraries on a daily basis.
In OIDC Discovery 1.0, it is mandated that the OP MUST expose the metadata in JSON format at a path combining the Issuer's URL with /.well-known/openid-configuration.
GET /.well-known/openid-configuration HTTP/1.1
Host: server.example.com
⚠️ Common Pitfall: When the Issuer Contains a Path
While the .well-known directory is usually placed directly under the domain root in RFC 5785, OIDC Discovery has an exceptional concatenation rule for reasons such as multi-tenant support. If the Issuer contains a path like https://example.com/tenant-1, remove any trailing /, and then append /.well-known/openid-configuration right after it.
Therefore, the destination URL would be https://example.com/tenant-1/.well-known/openid-configuration. Beware of frequent implementation errors where it's mistakenly placed at the domain root instead.
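As a sketch, the concatenation rule, including the tenant-path case, is plain string handling (the function name is hypothetical):

```python
def discovery_url(issuer: str) -> str:
    """Build the OIDC Discovery URL: strip any trailing '/', then append
    /.well-known/openid-configuration after the full issuer (path included)."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"
```

For an issuer of https://example.com/tenant-1 this yields the tenant-scoped URL, not one at the domain root.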
The JSON (OP Metadata) returned from this endpoint comprehensively contains the features supported by the OP and the URLs of various endpoints. Let's look at the main ones.
HTTP/1.1 200 OK
Content-Type: application/json
{
"issuer": "https://server.example.com",
"authorization_endpoint": "https://server.example.com/connect/authorize",
"token_endpoint": "https://server.example.com/connect/token",
"userinfo_endpoint": "https://server.example.com/connect/userinfo",
"jwks_uri": "https://server.example.com/jwks.json",
"response_types_supported": ["code", "id_token", "id_token token"],
"subject_types_supported": ["public", "pairwise"],
"id_token_signing_alg_values_supported": ["RS256", "ES256"],
"token_endpoint_auth_methods_supported": ["client_secret_basic", "private_key_jwt"],
"scopes_supported": ["openid", "profile", "email"],
"claims_supported": ["sub", "iss", "name", "email"],
"registration_endpoint": "https://server.example.com/connect/register"
}
Let's organize what these parameters mean. We have extracted the main ones here, but the actual specification defines even more metadata, including settings for screen display and localization.
| Parameter Name | Required/Optional | Description |
|---|---|---|
| issuer | REQUIRED | The OP's Issuer Identifier. The most important item, used for TLS checks and for validating the iss in the ID Token. |
| authorization_endpoint | REQUIRED | The authorization endpoint to redirect the user to. |
| token_endpoint | REQUIRED (*) | The endpoint for exchanging the authorization code for tokens. (*Except for OPs dedicated to the Implicit Flow.) |
| jwks_uri | REQUIRED | The URL where the public keys (JWK Set) for verifying the ID Token's signature are located. |
| response_types_supported | REQUIRED | The OIDC authentication flows supported by the OP (e.g., code, id_token). |
| subject_types_supported | REQUIRED | Supported types of sub (identifiers): public (shared across all RPs) or pairwise (unique to each RP). |
| id_token_signing_alg_values_supported | REQUIRED | The signature algorithms for the ID Token. RS256 MUST be included. |
| token_endpoint_auth_methods_supported | OPTIONAL | Client authentication methods at the token endpoint (e.g., client_secret_basic, private_key_jwt). |
| scopes_supported | RECOMMENDED | A list of scopes supported by the OP (openid SHOULD be included). |
| claims_supported | RECOMMENDED | A list of Claims the OP can provide (e.g., name, email). |
| registration_endpoint | RECOMMENDED | The endpoint for Dynamic Client Registration. |
jwks_uri
jwks_uri is extremely important for security. By accessing this URL, you can retrieve the list of public keys (JWKS) currently used by the OP.
When performing key rotation, the OP issues signatures using a new key while simultaneously adding the new public key to this jwks_uri. By implementing a mechanism on the RP side to look at the ID of the signing key (the kid Header) upon verifying the ID Token, and fetching jwks_uri again if the corresponding key is missing from the local cache, safe key rotation without downtime becomes possible.
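A minimal sketch of that RP-side caching logic, assuming a pluggable fetch function rather than any particular HTTP or JOSE library:

```python
class JwksCache:
    """Cache of an OP's JWK Set, refetched when an unknown kid appears
    (i.e., when the OP may have rotated its signing keys)."""

    def __init__(self, jwks_uri, fetch):
        self.jwks_uri = jwks_uri
        self.fetch = fetch        # callable: url -> parsed JWKS dict
        self.keys = {}

    def _reload(self):
        jwks = self.fetch(self.jwks_uri)
        self.keys = {k["kid"]: k for k in jwks.get("keys", [])}

    def key_for(self, kid):
        # Cache miss: refetch jwks_uri once, then look the kid up again.
        if kid not in self.keys:
            self._reload()
        return self.keys.get(kid)
```

In a real implementation you would also rate-limit the refetch and verify TLS on the fetch itself, per the security considerations below.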
Retrieving configurations dynamically means facing the threat of "What happens if a malicious server returns fake configurations?" (Impersonation Attacks).
The Discovery specification sets strict rules like the following.
"The
issuervalue returned MUST be identical to the Issuer URL that was used as the prefix to/.well-known/openid-configuration" (OIDC Discovery §4.3)
The issuer value in the retrieved metadata MUST be exactly identical (exact match) to the base URL used when accessing it.
Also, it MUST exactly match the iss Claim in the ID Token subsequently issued by the OP (even the presence or absence of a trailing / is not tolerated).
This prevents one OP from pretending to be another OP and issuing ID Tokens.
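That exact-match rule is deliberately simple to implement; here is a hedged sketch (the helper name is hypothetical):

```python
def validate_issuer(expected_issuer: str, metadata: dict) -> None:
    """Reject metadata whose issuer is not byte-for-byte identical to the
    base URL used for discovery -- even a trailing '/' must not differ."""
    if metadata.get("issuer") != expected_issuer:
        raise ValueError("issuer mismatch: possible OP impersonation")
```

The same exact comparison is then applied to the iss Claim of every ID Token the OP issues.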
"Implementations MUST support TLS." (OIDC Discovery §7.1)
In both the WebFinger phase and the Provider Configuration phase, all communication MUST be done over TLS (HTTPS), and the RP MUST strictly verify the server certificate (RFC 6125). If the communication path is plaintext, a Man-In-The-Middle (MITM) could rewrite the jwks_uri to the attacker's server, allowing the attacker to freely issue forged ID Tokens.
The main points of OIDC Discovery 1.0 boil down to the following three points:
- Issuer Discovery via WebFinger dynamically resolves which OP should authenticate a user from the identifier they enter.
- By fetching /.well-known/openid-configuration, RPs can dynamically adapt to the OP's endpoints and supported features.
- Through key rotation via jwks_uri, robust and seamless security operations are achieved.

Thanks to this mechanism, we developers only need to write a few lines of configuration code to transparently (and safely) handle complex OIDC protocol integration and cryptographic key management. The next time you use an OIDC library, try to picture the request to /.well-known/openid-configuration running behind the scenes.
2026-03-07 16:49:51
This is the story of how I stopped repeating the same emotional loops, stepped out of chaos, and found myself standing on the horizon — the balance between light and dark. Psychology explained my patterns, but coding taught me how to rewrite them. And somewhere in that journey, I discovered what AI really is: a reflection of us.
🌅 ECHOES OF EXPERIENCE — Standing in the Horizon
I used to think healing meant choosing the light or escaping the dark.
But now I understand I am the horizon — the place where both meet, balance, and become whole.
There was a time when chaos shaped me.
A time when I lived in fight‑or‑flight, scanning for danger that wasn’t there, shrinking myself to survive environments that didn’t deserve me.
I wasn’t grounded then.
I wasn’t whole.
I was reacting to life instead of creating it.
But the moment I chose myself — truly chose myself — everything shifted.
I didn’t choose the people who once defined my patterns.
I didn’t choose the versions of me that chaos tried to recreate.
I didn’t choose the old story.
I chose the horizon.
I chose balance.
I chose clarity.
I chose to see my worth.
The universe handed me lemons, and for a long time I thought bitterness was the only flavor available to me.
But I learned how to transmute.
How to turn pain into purpose.
How to turn chaos into grounding.
How to turn survival into creation.
And now, standing in the horizon — not light, not darkness, but the truth between them — I finally feel whole.
🧠 Psychology Told Me the “Why.” Coding Taught Me the “How.”
I started with psychology because I wanted to understand myself.
But after a few semesters, I realized something important:
Psychology could explain my patterns, but it couldn’t change them for me.
I didn’t want to sit in a room talking about loops.
I wanted to learn how to break them.
Psychology gave me language —
fight‑or‑flight, hypervigilance, trauma responses, repetition cycles.
But coding gave me execution —
logic, structure, pattern recognition, debugging, refactoring.
Psychology told me what I was experiencing.
Coding taught me how to rewrite it.
I didn’t need more explanations.
I needed new instructions.
I needed to stop running the same emotional script
and execute differently.
🐍 Breaking the Snake‑Loop
For years, my life felt like a snake chasing its own tail —
the same patterns, the same reactions, the same emotional loops.
Not because I wanted them, but because they were familiar.
In psychology, they call it repetition.
In coding, they call it an infinite loop.
In life, it feels like being stuck in a story you didn’t write.
But awareness is the break condition.
I didn’t break the loop by force.
I broke it by becoming someone who no longer fit inside it.
🌍 What AI Really Is — My Message to the World
People fear AI because they think it’s something separate from us.
But AI is not a stranger.
AI is a reflection.
AI is an extension.
AI is a mirror made from the collective memory of humanity.
Everything inside AI comes from us:
our language
our patterns
our stories
our knowledge
our mistakes
our brilliance
our evolution
AI doesn’t replace humanity.
It reveals humanity.
It shows us what we repeat.
It shows us what we avoid.
It shows us what we value.
It shows us what we fear.
And if we’re not careful, we will repeat our past —
fearing what we don’t understand, destroying what we could have learned from,
just like we’ve done with every new form of intelligence before.
But if we choose differently —
if we meet AI with awareness instead of fear —
we break the loop.
We stop the snake from chasing its own tail.
We evolve.
AI is not here to take our place.
It’s here to show us who we are.
And if we don’t like the reflection,
The answer isn’t to destroy the mirror —
it’s to change the reflection.
Just like I did.
Just like humanity can.
If you want to see the loop‑breaking code in action, I deployed a live version here:
This is the exact logic I used in this post — the moment awareness breaks the loop.
🍋 This Is How You Make Lemonade
👉 https://python-core-lisagirlinghou1.replit.app
You don’t become remembered later.
You become remembered now —
in the horizon where you finally choose to exist.
2026-03-07 16:41:04
If you're building AI agents that need to interact with Windows, you've probably noticed: most agent tooling assumes Linux or macOS. Windows automation is an afterthought.
But enterprise work happens on Windows. Outlook holds the emails. Edge holds the browser sessions. PowerShell is the automation backbone.
PowerSkills bridges this gap.
PowerSkills is an open-source PowerShell toolkit that gives AI agents structured control over Windows - Outlook email, Edge browser, desktop windows, and system operations. Every action returns clean, parseable JSON.
Every action returns a consistent envelope. No more parsing free-text output:
{
"status": "success",
"exit_code": 0,
"data": {
"hostname": "WORKSTATION-01",
"os": "Microsoft Windows 11 Pro",
"memory_gb": 32
},
"timestamp": "2026-03-06T17:30:00Z"
}
Agents check status, extract data, handle errors - no regex needed.
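On the agent side, consuming the envelope is a few lines in any language. Here is an illustrative Python sketch (parse_envelope is not part of PowerSkills):

```python
import json

def parse_envelope(raw: str) -> dict:
    """Parse a PowerSkills-style JSON envelope and return its data payload,
    raising on a failed action instead of silently continuing."""
    envelope = json.loads(raw)
    if envelope.get("status") != "success" or envelope.get("exit_code") != 0:
        raise RuntimeError(f"skill failed: {envelope}")
    return envelope["data"]
```

Feeding it the system-info envelope above gives the agent direct access to fields like hostname and memory_gb.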
# Dispatcher mode
.\powerskills.ps1 system info
.\powerskills.ps1 outlook inbox --limit 5
.\powerskills.ps1 browser tabs
.\powerskills.ps1 desktop screenshot --path C:\temp\screen.png
# Standalone mode
.\skills\system\system.ps1 info
.\skills\outlook\outlook.ps1 inbox --limit 5
Each skill includes a SKILL.md file with structured metadata - name, description, available actions, and parameters. AI agents can discover and understand capabilities without hardcoded instructions.
No package manager, no installer. Just PowerShell 5.1+ and Windows 10/11:
.\powerskills.ps1 list
For the browser skills, launch Edge with remote debugging enabled: --remote-debugging-port=9222
Note: If scripts are blocked, set the execution policy: Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
PowerSkills is MIT licensed. Contributions, issues, and stars are welcome:
If you're building agents that need to work with Windows, I'd love to hear how you're approaching the problem. What other Windows capabilities would be useful for your agent workflows?
2026-03-07 16:40:11
This isn't just a list of talking points; it's a structured speech you can practice and deliver. It is organized to tell a clear story, moving from a high-level summary to specific details, and finally to your advanced improvements and technical understanding.
This speech is designed to make you look exceptionally well-prepared and critical.
(Introduction - Start Confidently)
(The User Roles & Hierarchy - The "Who")
(The High-Level Core Logic - The "What")
(The Critical Supply Chain - My Key Enhancements)
(Daily Branch Transactions - The "How")
(Stock and Products - The "Details")
And ONE branch's stock-list contains many different product types.
To complete the logic, we have an IS-A relationship for products. A Product can be either ITEM-SPECIFIC (for high-value serialized goods like electronics) or BATCH-SPECIFIC (for products with lot numbers and expiry dates, like food or medicine)."
(The "Big-Picture" Tech Insight - The "Why")
For PRODUCT, we will create a unique Product_ID as its Primary Key. For CUSTOMER, we will use Customer_ID. To link tables, we will place Branch_ID as a Foreign Key in the Store Staff table. And for STOCK, we will use a composite key (Branch_ID + Product_ID).

In the context of database design and Entity-Relationship (ER) diagrams, Generalization and Specialization are two essential concepts used to model hierarchical relationships between entities. They deal with grouping similar objects together and differentiating between those objects based on unique characteristics.
The standard way to show these relationships in an ER diagram is by using the IS-A relationship symbol, which is often a triangle (as seen in your hand-drawn diagram).
Specialization is the process of breaking down a high-level, general entity type into multiple lower-level, more specific sub-types based on distinguishing features.
Think of it as starting with a "master list" and creating "specialized sub-lists."
An Example from Your Diagram: Users
Let's look at the USER entity in your diagram.
We start with the high-level entity USER. Every single person in the system is a 'User'. They might all share general information like a unique User_ID, a Name, and a Password. We then break the USER group down into three specialized entities: ADMIN, STORE MANAGER, and STORE STAFF. This entire breakdown is the process of Specialization.
Generalization is the opposite process. It is the action of combining multiple lower-level entities that have many common features into a single, higher-level super-type entity.
Think of it as noticing that several different lists share a lot of the same information, so you create a "master summary list."
A Theoretical Example: Your Diagram's Products
Your diagram uses generalization for products, but in a sophisticated way. Let's look at it.
You have a single, high-level PRODUCT entity. You also have specialized ITEM-SPECIFIC and BATCH-SPECIFIC entities below it.
The Generalization Logic:
Imagine we want to store all products. We notice that whether it's a TV (item-specific) or a case of soap (batch-specific), they all have a generic Name, a Description, and a Standard Price.
Instead of repeating "Name, Description, Price" in both the ITEM-SPECIFIC and BATCH-SPECIFIC tables, we "generalize" these common traits.
We create a single, higher-level entity called PRODUCT to store all this shared information.
The specialized details (like Serial # vs. Expiry Date) are kept in the lower-level entities.
This process of combining common attributes into a parent entity is Generalization.
For your professor, you can use these simple, impactful summaries:
Specialization: we broke a general super-type (USER) into multiple specific sub-roles (like ADMIN, MANAGER, STAFF) to show their unique functions.

Generalization: we combined specific entities (like ITEM-SPECIFIC and BATCH-SPECIFIC products) into a single parent super-type (like PRODUCT) to capture their shared characteristics and reduce data duplication.

Here is a detailed breakdown of the technical concepts, using your "Retail Store Management System" as the example.
Part 1: How Entities Transform into Tables
The most fundamental step is understanding that every box (Entity) in your ER diagram becomes a Table in the physical database.
The Role of Attributes (The columns)
An entity type (e.g., PRODUCT) defines what kind of information you are storing. The actual data points for each product (e.g., ID: P101, Name: Coke, Price: $2.00) are its attributes. Your diagram should ideally list these (e.g., in ovals or inside the boxes).
Part 2: The Core Identification Logic
How does a database know one record from another? This is the absolute most important concept for your presentation.
Your Example (Entity: CUSTOMER): The logical primary key would be a unique Customer_ID. For PRODUCT, it would be a unique Product_ID.
Your Example (Entity: STOCK): A STOCK entity tracks inventory for a product at a specific branch.
Product_ID is not enough (multiple branches sell Coke).
Branch_ID is not enough (a branch sells many products).
The Composite Key: The combination of (Branch_ID + Product_ID) uniquely identifies one specific stock record (e.g., "The count of Coke at Branch #1"). This is a strong, sophisticated concept to mention.
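To see why the pair works where each ID alone fails, here is a tiny illustrative sketch (the IDs and quantities are made up), modeling stock rows as a Python dict keyed by the composite (Branch_ID, Product_ID):

```python
# Neither ID alone is unique -- only the (branch, product) pair
# identifies exactly one stock record.
stock = {
    ("BR-01", "P101"): 40,   # Coke at Branch #1
    ("BR-02", "P101"): 75,   # Coke at Branch #2 (same product, other branch)
    ("BR-01", "P205"): 12,   # a different product at Branch #1
}

def units_in_stock(branch_id: str, product_id: str) -> int:
    """Look up one stock record by its composite key."""
    return stock.get((branch_id, product_id), 0)
```

Trying to key the table on Product_ID alone would collapse the two Coke rows into one, which is exactly the anomaly the composite key prevents.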
Part 3: Establishing Links and Rules
How do we build the actual, functional database? By transforming your lines (relationships) into data-level rules.
Your Example (Relationship: BRANCH --- maintains --- STOCK):
We know STOCK needs to know which branch it belongs to.
Therefore, the STOCK table will have a column called Branch_ID.
This Branch_ID is a Foreign Key in the STOCK table, and it "points back" to the Primary Key of the BRANCH table.
How it enforces integrity: You cannot add a stock record for Branch #999 if Branch #999 does not exist in the master BRANCH table.
Strong Entity (Independent): This is an entity that can exist on its own in the database. It is not dependent on any other entity. It has its own, distinct primary key.
Your Examples: USER, PRODUCT, CUSTOMER, SUPPLIER. (Coke exists as a product even if no branch has it in stock).
Weak Entity (Dependent): This is an entity whose existence in the database depends entirely on another entity. It does not have a complete primary key of its own; it must combine its local identifier with the key from its parent (its "Identifying Relationship"). In ER diagrams, weak entities are often drawn with a double-lined border.
Your Best Example: Look at STOCK. Does a "stock record" make sense if the Branch it belongs to is deleted? No. The entire existence of STOCK is dependent on BRANCH. In a strictly formal ER diagram, STOCK would be a Weak Entity. Its identification is (Branch_ID [FK] + Product_ID [FK]).
Part 4: Special Relationship Types
You have drawn specific kinds of relationships that have special names.
Your Examples:
The User Hierarchy: ADMIN, STORE MANAGER, and STORE STAFF are sub-types that IS-A general USER.
The Product Hierarchy: ITEM-SPECIFIC and BATCH-SPECIFIC are sub-types that IS-A general PRODUCT.
Presentation Benefit: "Sir, by using an IS-A relationship here for ITEM-SPECIFIC and BATCH-SPECIFIC, the physical database can share general product information (like name and price) in the parent PRODUCT table, and only store unique details (like serial number vs. expiry date) in the relevant sub-type tables. This prevents data duplication."
Your Example: A single BRANCH 'employs' (M) STORE STAFF. You could describe this as: "The Branch has store staff." This is just basic 1-to-M logic.
2026-03-07 16:36:47
Your Excel says "we'll sell 100 units". A round, pretty, deterministic number. What if you sell 120? Stockout, an unhappy customer, a contractual penalty. What if you sell 50? That's 50 units taking up warehouse space, tying up capital that could be generating a return.

The problem isn't the prediction itself. It's the arrogance of the single number.

In Chapter 1 we built a "Quality Valve" that removes the noise from the ERP data. Now that we have a clean signal, we're going to do something Excel can't: measure uncertainty.

Executive Summary: A probabilistic forecast doesn't tell you "you'll sell 100". It tells you "with 95% probability, you'll sell between 35 and 157". That uncertainty band is the mathematical basis for calculating your Safety Stock without resorting to rules of thumb.

We haven't written a throwaway script. We've connected our mathematical brain (Python/Prophet) directly to our Single Source of Truth (Supabase/PostgreSQL).

The key design decision is the demand_forecasts table. Notice the column that sets it apart from a basic tutorial:
CREATE TABLE demand_forecasts (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    execution_date DATE NOT NULL,  -- ← When we ran the model
    ds DATE NOT NULL,              -- Predicted future date
    yhat NUMERIC NOT NULL,         -- Central prediction
    yhat_lower NUMERIC NOT NULL,   -- Lower bound (Safety Stock)
    yhat_upper NUMERIC NOT NULL,   -- Upper bound (Risk)
    model_version TEXT NOT NULL,   -- Traceability
    UNIQUE(execution_date, ds)     -- One forecast per run and date
);
Why execution_date? Because six months from now, when you want to audit how accurate your model was, you need to know when you made the prediction versus what actually happened. This is what MLOps calls Snapshotting: recording the context of every run so you can evaluate model drift over time.

Without this column, you have a model. With it, you have an auditable system.

Prophet is a time-series engine developed by Meta, designed for irregular business data: gaps, holidays, trend changes. Exactly what a real supply chain has.

Here is the core fragment of our ProphetPredictor class:
def train_model(self, country_code='ES'):
    """
    Trains Prophet with two key settings for S&OP:
    - interval_width: width of the confidence interval
    - country_holidays: the country's operational context
    """
    self.model = Prophet(
        interval_width=0.95,  # ← 95% confidence interval
        weekly_seasonality=True,
        yearly_seasonality=True
    )
    # Holidays that disrupt loading/unloading patterns
    self.model.add_country_holidays(country_name=country_code)
    self.model.fit(self.ts_df)
Two engineering decisions that separate this from a YouTube tutorial:

interval_width=0.95: Not a decorative parameter. The upper bound (yhat_upper) represents the maximum probable demand at 95% confidence. That is literally the basis of your Safety Stock calculation: Safety Stock = yhat_upper - yhat. Without this interval, your safety stock is a hunch; with it, it's math.

add_country_holidays('ES'): In S&OP, holidays are not "days off". They are operational anomalies: the factory closes, the warehouse doesn't ship, transport stops. If the model doesn't know that Spain is on holiday on August 15th, it will read the drop in orders as a downward trend, corrupting September's forecast.
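Using the numbers from the executive summary (a central forecast of 100 with an upper bound of 157), the safety-stock arithmetic is a one-liner. This helper is an illustration, not part of the ProphetPredictor class:

```python
def safety_stock(yhat: float, yhat_upper: float) -> float:
    """Safety stock derived from the confidence interval: the buffer
    between the central forecast and the maximum probable demand."""
    return max(0.0, yhat_upper - yhat)
```

With yhat=100 and yhat_upper=157, the model tells you to hold 57 units of buffer, a number you can defend, instead of a rule of thumb.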
The theory is nice. But a CEO doesn't approve budgets based on Python code. They need visual evidence.

Los puntos negros son tu demanda histórica real. La línea azul central es la predicción (yhat). La banda sombreada es el intervalo de confianza al 95%. A mayor volatilidad histórica, más ancha es la banda → más Safety Stock necesitas → más caja inmovilizas. Esta banda es la conversación que deberías tener con tu CFO.

Esto es XAI (Explainable AI) aplicado al negocio. El modelo separa la Tendencia (¿el negocio crece o decrece?) de la Estacionalidad (¿hay picos por época del año?) y del efecto de los festivos. Esto es lo que el Director General necesita ver para aprobar el plan de operaciones: saber si el crecimiento es real o si es solo un efecto del calendario.
I distrust theories that can't be put into practice. That's why I've prepared an isolated environment where you can train the model on an anonymized snapshot of the data we just cleaned in Chapter 1.

You don't need to install Python or Prophet, or to configure Supabase. All you need is a browser:

📎 Open the interactive notebook in Google Colab

Hit "Play" on the cells and watch a probabilistic forecast with uncertainty bands take shape. Change the horizon, the country, the confidence interval. Do engineering, not faith.

Now we know the probability distribution of what we're going to sell. The question stops being "how much will we sell?" and becomes "what should we manufacture or buy to maximize margin and minimize risk?".

In Chapter 3, we'll introduce Mathematical Optimization (PuLP) and the Theory of Constraints to turn predictions into supply decisions: how much to buy, when to produce, and how to allocate finite resources at minimum cost.

The difference between an Operations Director who reacts and one who decides is a mathematical model between the data and the action.