2026-05-01 07:54:39
| Concept | Traditional Database | Solana Accounts |
|---|---|---|
| The Basic Unit | A row in a table | An account — a fixed chunk of on-chain storage tied to a public key |
| Who Owns It | The company running the database | The keypair holder. Period. No admin override. No "forgot password." |
| How You Prove Identity | Username + password (stored and verified by the server) | Cryptographic signature with your private key — math proves you, not a middleman |
| How Data is Stored | Flexible schemas, columns, data types — ALTER TABLE whenever you want | Raw bytes. You decide the structure. The chain just holds the blob. |
| Storage Cost | Usually a flat subscription or server cost | You pay rent in SOL proportional to how many bytes you store. No free lunch. |
| Who Can Write to It | Anyone the app gives write permission to | Only the account's owner program can modify data. Not even you directly — your keypair just authorizes it. |
| Deleting Data | DELETE FROM table WHERE id = x | Close the account, reclaim the rent. The data is gone, the SOL comes back. |
| Relationships Between Data | Foreign keys, JOINs, relational links | Accounts reference other accounts by their public key — no JOINs, just addresses |
| Who Controls the Schema | The DBA or backend developer | The program (smart contract) that owns the account defines what the bytes mean |
| Availability | Depends on your infra — can go down | Global, always-on, no downtime, no region failover needed |
| Access Control | Role-based (admin, read-only, write, etc.) | Binary — you either have the private key or you don't. No roles, no ACLs. |
| Querying Data | SQL — SELECT * FROM users WHERE ... | Fetch by public key, or use getProgramAccounts to filter by data layout |
| Transaction Model | ACID transactions, rollbacks, savepoints | Atomic transactions — everything succeeds or nothing does. No partial writes. |
| Who Hosts It | You, AWS, GCP, or a database provider | Every validator on the Solana network simultaneously |
| Mutability | Mutable by default — update anytime | Mutable or immutable — programs can be marked non-upgradeable (trustless by design) |
| The "Table" Equivalent | A table with rows of the same shape | Accounts owned by the same program — same data layout, different addresses |
| Cost to Read | Usually free (infra cost aside) | Free — reading accounts costs no SOL |
| Backups | Your problem (snapshots, replication) | The blockchain is the backup — replicated across thousands of validators |
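The "rent proportional to bytes" row can be made concrete. A minimal sketch of the rent-exempt deposit calculation, using the long-standing mainnet default parameters (verify the live values with the `getMinimumBalanceForRentExemption` RPC call before depending on them):

```python
# Sketch of Solana's rent-exempt minimum, assuming the default rent parameters.
LAMPORTS_PER_BYTE_YEAR = 3_480   # default rent rate
EXEMPTION_THRESHOLD_YEARS = 2.0  # a deposit covering 2 years makes the account rent-exempt
ACCOUNT_STORAGE_OVERHEAD = 128   # bytes of account metadata charged on top of your data

def rent_exempt_minimum(data_len: int) -> int:
    """Lamports to deposit so an account storing `data_len` bytes is rent-exempt."""
    total_bytes = data_len + ACCOUNT_STORAGE_OVERHEAD
    return int(total_bytes * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_THRESHOLD_YEARS)

# A 165-byte SPL token account:
print(rent_exempt_minimum(165))  # 2039280 lamports (~0.00204 SOL)
```

Close the account later and this deposit comes back, which is the "reclaim the rent" half of the table above.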
2026-05-01 07:38:52
Day 11 of #100DaysOfSolana flipped my database instincts upside down. I spent years thinking in tables, rows, and SQL queries, but Solana’s account model is a completely different beast. Every single piece of state, whether a user’s wallet or executable program code, lives as a publicly readable account on a global ledger. There are no joins, no server-side filtering, and no admin overrides. Access control is baked into the runtime: only the owning program can modify an account’s data, and only with the right signatures. Storage isn’t a monthly cloud bill; it’s an upfront, refundable deposit in lamports per byte. The biggest shock? Transparency is the default: anyone can read your account data, no login required. This exercise built a mental bridge I know I’ll cross again and again as I dive deeper into Solana development.
2026-05-01 07:28:25
I asked AI to fix a bug. It confidently returned a modified file. I ran it. A different bug appeared.
Sound familiar? It's like asking a confident stranger for directions in an unfamiliar city. The intent is genuine. The accuracy is a separate question.
The Stack Overflow Developer Survey (2025) found that 66% of developers say AI-generated code is "almost right, but not quite," and 45% report that debugging AI-generated code takes more time. AI excels at producing plausible code. It does not excel at asking "under what conditions will this code break?"
So what if we gave AI the thinking patterns that human debuggers use? That's what this article does: 10 debugging techniques, compressed into 5 prompt blocks you can copy into CLAUDE.md or any agent skill definition.
Tell AI "the API returns a 500 error." Most of the time, it adds a try-catch or null check. Sometimes the symptom disappears. But if the real cause was connection pool exhaustion, that try-catch just hid the problem. Hours later, the same failure resurfaces elsewhere.
LLMs predict the most likely next token. "Error handling patterns" exist abundantly in training data. So pattern-matching a fix is easier than investigating a root cause. The human debugger's judgment — "I don't know the cause yet; keep investigating" — doesn't happen unless you explicitly instruct it.
Block 1: Question assumptions → Block 2: Boundary & diff → Block 3: Timeline & control → Block 4: Observe & simplify → Block 5: Stop signal
> Before attempting any fix:
> - Are logs complete? Could there be gaps?
> - Is monitoring data trustworthy?
> - Does the health check verify "working correctly" or just "responding"?
>
> Reproduce the bug first. Show minimal reproduction steps.
> If you cannot reproduce it, report that fact.
> Do not fix based on guesses.
"Do not fix based on guesses" is the quiet MVP. Without it, AI skips reproduction and jumps to "probably this is the cause."
> Identify the boundary where the problem occurs:
> - Which component is still working correctly?
> - Where does the behavior diverge?
> - Check git log for recent changes
"Which component is still working correctly?" forces the AI into a binary search instead of trying to analyze everything at once.
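That binary-search instinct is exactly what `git bisect` mechanizes. A toy sketch of the idea (the commit list and `is_broken` predicate here are hypothetical stand-ins, not a real git integration):

```python
def first_bad_commit(commits, is_broken):
    """Binary search for the first commit where `is_broken(commit)` is True.
    Assumes commits are ordered oldest -> newest and that the breakage,
    once introduced, persists in every later commit (the git-bisect invariant)."""
    lo, hi = 0, len(commits) - 1
    if not is_broken(commits[hi]):
        return None  # nothing is broken at HEAD
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(commits[mid]):
            hi = mid          # breakage is at mid or earlier
        else:
            lo = mid + 1      # still working here; look later
    return commits[lo]

# Toy usage: commits 0..9, bug introduced at commit 6
print(first_bad_commit(list(range(10)), lambda c: c >= 6))  # -> 6
```

Ten commits are narrowed down in about four checks instead of ten, which is the payoff of forcing "which component still works?" over "analyze everything."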
> Organize by timeline:
> - When did this problem start?
> - Sudden change, or gradual degradation?
> - Check retry, cache, and timeout configurations
> - Is there a path where small errors get amplified?
"Sudden or gradual?" is a classification filter. Sudden = event-triggered. Gradual = resource exhaustion. That one question cuts the investigation scope in half.
> If observation points are insufficient, propose adding logs or traces.
> If removing components can simplify the problem, show the steps.
> Consider intentionally breaking something to test a hypothesis.
> If the same test fails 3 times in a row, stop fixing.
> Organize and report to the human:
> - What fixes were attempted and their results
> - Current hypothesis about root cause
> - Possible structural issues (architecture, spec ambiguity)
> - What needs human judgment
In my experience, AI will attempt a 4th fix if you don't stop it. It keeps digging the same hole. An explicit stop signal also saves you token costs.
No fixes without root cause first.
Claude Code's best practices include this as an explicit rule: "NO FIXES WITHOUT ROOT CAUSE FIRST." It enforces a 4-phase sequence:
| Phase | Action | Why AI skips it |
|---|---|---|
| 1. Root Cause Investigation | Logs, traces, code analysis | "I've seen this pattern" — jumps ahead |
| 2. Pattern Analysis | Check if same bug exists elsewhere | Only fixes the one spot |
| 3. Hypothesis Testing | Write test to verify cause | "Fixing is faster than testing" |
| 4. Implementation | Fix the verified cause | Wants to start here |
Prohibiting the jump from Phase 1 to Phase 4 — as an explicit prompt constraint — noticeably changes AI debugging accuracy.
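Encoded as a prompt constraint, the four-phase sequence might look like this as a CLAUDE.md fragment (the wording is mine, not an official snippet):

```markdown
## Debugging rules
NO FIXES WITHOUT ROOT CAUSE FIRST.
1. Investigate: read logs, traces, and the relevant code before proposing anything.
2. Pattern-check: search for the same bug elsewhere in the codebase.
3. Verify: write a failing test that reproduces the suspected cause.
4. Implement: fix only the cause the test confirmed.
Never jump from step 1 to step 4.
```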
The most effective way to have AI debug: make the goal unambiguous.
"Fix this bug" → success criteria are vague.
"Make this test pass" → success criteria are exact.
"I don't have time to write tests." I hear this. But without tests, AI fixes tend to create new bugs. You end up spending more time, not less.
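Here is what "make this test pass" looks like in practice, with a hypothetical `parse_price` helper. The failing test, not a vague bug report, pins down the exact behavior the fix must produce:

```python
# Hypothetical helper: parse_price originally crashed on currency symbols.
def parse_price(text: str) -> float:
    """Parse strings like '$1,299.99' -> 1299.99.
    Strips leading currency symbols and thousands separators."""
    cleaned = text.strip().lstrip("$€£").replace(",", "")
    return float(cleaned)

# The "goal" handed to the AI is this test, not "fix the price bug":
def test_parse_price():
    assert parse_price("$1,299.99") == 1299.99
    assert parse_price("42") == 42.0
    assert parse_price("  €7,000  ") == 7000.0

test_parse_price()
print("test passed")
```

"All three assertions pass" is a success criterion no model can reinterpret; "the price bug is fixed" is.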
When one model fails the same bug three times, it's stuck in the same blind spot. Hand the problem to a different model:
> The previous agent attempted 3 fixes for this bug.
> All failed. Here are the attempts:
> [Failed fix 1, 2, 3]
> Analyze the root cause using a different approach.
> Do not repeat the previous agent's fixes.
Pair debugging works between humans. It works between AIs too.
For the complete set of AI debugging patterns, CLAUDE.md design, and context engineering practices:
📖 Practical Claude Code: Context Engineering for Modern Development
2026-05-01 07:14:16
Modern platforms like YouTube and Netflix no longer rely solely on traditional query-based systems.
Instead, they leverage semantic understanding powered by Vector Databases to deliver highly personalized experiences.
A simple observation illustrates this: the patterns behind those recommendations are not matched by keywords; they are inferred from behavioral and semantic similarity.
Relational and NoSQL databases such as MySQL and MongoDB operate primarily on exact matching or indexed queries.
Example:
SELECT * FROM content WHERE text LIKE '%cats%';
This approach fails when the query is semantic rather than lexical:
"What do cats like?"
A Vector Database stores data as high-dimensional vectors that represent meaning instead of raw text.
This enables semantic search, where similarity is based on meaning rather than exact matches.
Raw data is ingested into the system.
Large data is split into smaller segments (chunks).
Why? Smaller chunks produce more focused embeddings, which makes retrieval more precise.
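The splitting step can be sketched with a simple fixed-size chunker. The chunk size and overlap below are arbitrary illustrative choices; production systems often split on sentence or token boundaries instead:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.
    Overlap keeps a sentence that straddles a boundary visible to both chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "Cats love playing. " * 30  # ~570 characters of toy input
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))
```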
Each chunk is converted into a vector using embedding models.
Example:
"Cats love playing"
→ [0.12, -0.88, 0.47, ...]
These vectors encode semantic meaning, not just words.
Each stored item includes the embedding vector, the original text chunk, and metadata (such as IDs, timestamps, or tags).
"What do cats like?"
The query is converted into a vector using the same embedding model.
Vectors are compared using metrics such as cosine similarity, Euclidean distance, or dot product.
The goal is to find vectors that are closest in meaning.
The system then returns the top-ranked results: the stored items with the highest semantic similarity to the query.
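Putting the comparison and retrieval steps together, here is a toy in-memory index using cosine similarity. The 3-dimensional "embeddings" are made up for illustration; real embedding vectors have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """index: list of (text, vector) pairs. Returns the k texts closest in meaning."""
    ranked = sorted(index, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Made-up toy embeddings:
index = [
    ("Cats love playing",       [0.12, -0.88, 0.47]),
    ("Stock prices fell today", [-0.90, 0.10, 0.05]),
    ("Kittens enjoy toys",      [0.15, -0.80, 0.50]),
]
query = [0.10, -0.85, 0.45]  # pretend embedding of "What do cats like?"
print(top_k(query, index))   # -> ['Cats love playing', 'Kittens enjoy toys']
```

Note that "Kittens enjoy toys" ranks high despite sharing no keywords with the query; that is the semantic match a `LIKE '%cats%'` query can never make.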
Vector databases are foundational for semantic search, recommendation systems, and retrieval-augmented generation (RAG).
Traditional systems:
❌ Match keywords
Modern systems:
✅ Understand meaning
Vector databases redefine how systems interact with data.
This is not just an improvement; it is a fundamental shift in how data is processed and retrieved.
2026-05-01 07:13:53
I’ve been reflecting on something in today’s tech hiring market.
There seems to be a gap between how much companies say they value experienced engineers and how often highly experienced professionals move forward in hiring processes.
In areas like backend development, software architecture, and data systems (ETL pipelines, data platforms), experience often translates into:
• better architectural decisions
• awareness of trade-offs
• ability to anticipate risks
• mentoring and team support
More recently, many of these professionals are also expanding into areas like Generative AI.
It raises an interesting question:
Are we fully leveraging one of the most valuable assets in tech — experience?
Curious to hear how others see this.
2026-05-01 07:12:15
Hi everybody, I hope you are doing great!
In this article, I want to show you how I got my first Microsoft certification, how many days I spent preparing, and which courses and practice exams I used.
So, first things first: this year I had the opportunity to pursue certifications, and I’m really excited and motivated to challenge myself. I chose AZ-900 (Azure Fundamentals) because Azure is part of my daily routine at work, and understanding the fundamentals gives me much more clarity. In this article, I’m going to list the free courses I studied, which helped me pass the exam.
For this exam, I prepared for about 20–30 days using free content on the internet. I don’t know about you, but I learn better when I see the same topic explained by different people. Below are some playlists that helped me. (I’m Brazilian, so the content is in Portuguese)
AZ-900 - Curso Oficial Microsoft - Microsoft Azure Fundamentals
AZ900 | Treinamento Oficial | Microsoft Azure Fundamentals
I also did many practice tests using Microsoft Learn and Udemy. It became part of my routine to study and then test myself.
Avaliação prática do exame AZ-900: Conceitos básicos do Microsoft Azure
Udemy - Microsoft AZ-900 (2026) - Simulados para aprovação no exame
Since it was my first certification exam, I didn’t know what to expect. So I watched videos and read about other people’s experiences to feel more prepared.
I felt a bit anxious, so I studied more than I probably needed. But in the end, it helped me feel more confident, and it was worth it.
I took the exam online, and if I could give one piece of advice, it would be: study, do some practice exams, and just go for it. It’s totally possible.
During the exam, try to stay calm and read each question carefully. That makes a big difference.
I hope this helped you, and good luck on your journey!