2026-03-11 05:58:12
It's not the most powerful one. It's not the free one. Here's why.
If you're using GitHub Copilot with a premium subscription in Visual Studio 2026, you already know the model choice matters. After spending a lot of time coding C# and .NET 10 with Copilot — roughly six hours a day — here's where I've landed.
\ Full video: My Go-To Model for GitHub Copilot
When it comes to code quality, models from Anthropic — like Claude Haiku and Claude Sonnet — are a consistently strong choice. They tend to generate cleaner code with fewer bugs and usually require less manual rework than many alternatives.
\ It genuinely feels like Anthropic has made software development a primary focus when training their models. That said, this isn't about dismissing other models — it's about what's been working well in my day-to-day workflow.
\

\
The free models are technically capable. For occasional use, they're fine.
\ But their biggest drawback is speed.
\ In a separate video comparing model performance in Visual Studio 2026, the free models were roughly two to three times slower than premium alternatives, even for very simple tasks. That might sound tolerable, but compounded across a full coding session, it constantly interrupts your flow.
\ There's no real benefit in using AI if the productivity gain is minimal. The tool should accelerate your work — not make you wait.
With a GitHub Copilot premium subscription, you also get access to agent mode in Visual Studio. This allows you to use multiple models to perform more human-like tasks: creating new solutions and projects, generating files, doing code reviews, and more.
\ A common example is a full codebase rewrite — where you need AI to create a new solution, add multiple projects, install NuGet packages, and wire up project references. On a large codebase, this should happen fast. With free models, it often takes far too long, and sometimes struggles just to generate a basic shell solution with empty projects.
\ Sure, you can do all of this manually for small solutions. But at scale, speed matters.
\

\
If I had to choose only one model for day-to-day coding, it would be Claude Haiku 4.5. For my typical work, it's good enough about 80% of the time.
\ The cost difference is significant: Haiku costs roughly one-third as much as Claude Sonnet 4.5. When I use higher-end premium models for around six hours of coding a day, I typically burn through my entire monthly premium request quota within 10 to 15 days. Haiku stretches that budget considerably further.
\ Using the more powerful premium models for routine tasks — scaffolding, boilerplate, test generation — is often overkill. These tasks don't require deep reasoning. They require speed and accuracy. There are many everyday development tasks where speed matters more than raw intelligence.
\

I also have a separate video on switching between free and premium models as a cost-saving strategy, but the short version is this:
| Task Type | My Go-To Model |
|----|----|
| Boilerplate, scaffolding, tests | Claude Haiku 4.5 |
| Complex logic, architecture | Claude Sonnet 4.5 |
| When Haiku gets stuck | Step up one tier |
Haiku is my default baseline. I only switch to more powerful options when Haiku gets stuck, or the task truly demands it.
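That escalation strategy can be sketched as a simple routing ladder. This is an illustrative sketch, not a real Copilot API; the model identifiers, task categories, and the top "opus" tier are assumptions made for the example:

```javascript
// Illustrative "start cheap, escalate" router. Model IDs and task
// categories are assumptions for this sketch, not real Copilot settings.
const ESCALATION = ['claude-haiku-4.5', 'claude-sonnet-4.5', 'claude-opus-4.5'];

function pickModel(taskType, failedAttempts = 0) {
  // Complex work starts one tier up; everything else starts at Haiku.
  const base = (taskType === 'architecture' || taskType === 'complex-logic')
    ? 'claude-sonnet-4.5'
    : 'claude-haiku-4.5';
  // "Step up one tier" each time the cheaper model gets stuck,
  // capped at the most powerful tier.
  const index = Math.min(ESCALATION.indexOf(base) + failedAttempts, ESCALATION.length - 1);
  return ESCALATION[index];
}
```

Scaffolding and tests land on Haiku by default; a task that fails is retried one tier up, so quota only goes to the expensive models once the cheap one has demonstrably failed.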
Claude Haiku 4.5 hits the right balance between cost and performance for everyday coding. It's fast, affordable, and handles the majority of real-world development tasks well.
\ Start with Haiku. Escalate when you need to. Your workflow — and your monthly quota — will thank you.
https://youtu.be/Dqf1Pt2alY0?embedable=true
Tagged: GitHub Copilot, Visual Studio 2026, Claude AI, Anthropic, C#, .NET 10, AI Developer Tools, Copilot Premium
\
2026-03-11 05:41:39
\
What is something worth if nobody is trading it?
\ That question sounds philosophical. In DeFi, it is an engineering crisis. On October 10, 2025, oracles feeding stressed market prices into automated liquidation systems contributed to $19.13 billion in leveraged positions being forcibly closed in under 24 hours. It was the largest single-day deleveraging in crypto history, affecting more than 1.6 million traders. The mechanics were direct: oracle data said collateral was worth less, automated systems triggered liquidations, and those liquidations pushed prices further down, creating a feedback loop that ran at machine speed.
\ For assets with deep, liquid markets, that is a risk management problem. For assets with no active secondary market at all, it is a structural design flaw. On March 10, 2026, DIA launched DIA Value to address exactly that flaw.
\
An oracle, in blockchain terms, is the system that brings external data onchain. Smart contracts cannot independently access information outside their own network, so when a DeFi lending protocol needs to know the current price of an asset to decide whether to liquidate collateral, it reads from an oracle. The accuracy of that price feed is not a detail. It is the foundation on which billions of dollars in automated decisions rest.
\ Most oracles were designed for assets that trade continuously on liquid markets. The methodology is straightforward: aggregate prices from multiple exchanges, filter outliers, and publish a reliable market price. For Bitcoin, Ether, or any token with active order books across major venues, this works. The market has depth, manipulation is costly, and stale data refreshes quickly.
\ The problem is that a growing segment of onchain capital does not fit that description. Tokenized U.S. Treasuries do not trade continuously on secondary markets. Yield-bearing stablecoins like satUSD+ derive their value from what a staking contract pays out, not from what someone last traded them for on a DEX. Liquid staking tokens like stETH have a redemption rate defined directly in their smart contract. Applying market-based oracle pricing to these assets produces numbers that are either unreliable or outright wrong, and for illiquid assets, thin order books make the problem worse: small trades can move prices significantly, creating openings for manipulation that affect collateral calculations across entire protocols.
\ DIA's Dillon Hanson, Head of BizDev, framed it directly:
Oracles were built to answer one question: how is the market valuing this asset? But when most institutional assets entering DeFi don't trade on secondary markets, you need infrastructure that answers a different question: what is this asset fundamentally worth? That's what Value does.
\
DIA Value takes the pricing question back to first principles. Instead of asking what a market is willing to pay, it asks what the asset can actually be redeemed for, what reserves back it, and what the underlying contract defines as its value. That methodology is not new in finance. NAV calculations, mark-to-model frameworks, and reserve verification are standard practice in traditional finance for exactly the same reason: when markets are inactive, intrinsic valuation is the only credible basis for pricing.
\ The architecture implements this across multiple asset types. For a yield-bearing token like stETH, DIA Value reads the redemption rate directly from the protocol's smart contract, which defines precisely how much ETH each stETH can be exchanged for at any point. That number is not subject to the thin order book problem because it is not derived from trading. It is derived from the protocol's own state.
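The intrinsic-pricing idea reduces to a few lines. This is a minimal NAV-style sketch under invented figures, not DIA's actual implementation; the field names and numbers are assumptions for illustration:

```javascript
// NAV-style fair value: reserves backing the asset divided by tokens
// outstanding. Field names and figures are illustrative assumptions,
// not DIA Value's real data model.
function fairValuePerToken({ totalReserves, totalSupply }) {
  if (totalSupply <= 0) throw new Error('totalSupply must be positive');
  return totalReserves / totalSupply;
}

// A hypothetical yield-bearing token: 105,000 ETH of verified reserves
// backing 100,000 tokens outstanding.
const vault = { totalReserves: 105_000, totalSupply: 100_000 };
const fair = fairValuePerToken(vault); // 1.05, regardless of the last trade
```

A thin order book might last print this token at 0.91 or 1.20; the reserve-based number stays 1.05 until the protocol's own state changes, which is exactly the property that makes it usable for collateral pricing.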
\ For yield-bearing stablecoins, the same logic applies. The River team, whose satUSD+ stablecoin is among the integrated protocols, noted that the value of their asset is defined by what the staking contract pays out, not by secondary market prices. For lending markets and vault strategies accepting satUSD+ as collateral, DIA Value provides that number directly from onchain data, which means the price any protocol sees is the same price the underlying protocol defines as authoritative.
\ Noah Boisserie, CEO of Cooper Labs, described the cross-chain version of the problem and the solution it required:
"When you operate a stablecoin across four chains, pricing fragmentation becomes a real engineering problem. DIA Value solved this for us by computing USDp's fair value directly from onchain redemption data, reading collateral composition and redemption curves from our smart contracts. One verifiable fundamental price, consistent everywhere. That's what lets integrators treat USDp as reliable collateral without building custom pricing logic per chain."
\
The $19 billion liquidation event on October 10, 2025 was triggered by a macro shock, President Trump's announcement of 100% tariffs on Chinese goods, but it was amplified by infrastructure failures. Stablecoin USDe fell to $0.65 on Binance amid oracle misfires and thin liquidity, a 35% discount while other venues held it near $1. That pricing divergence had nothing to do with fundamentals. It reflected how local prices fed into margin engines and oracle systems. Positions that were solvent under cross-venue pricing were liquidated because a single venue's oracle said otherwise.
\ For thinly traded or illiquid assets, this risk is structural. It does not require a geopolitical shock to manifest. Any sustained period of low trading volume in a thin market creates conditions where the price feed diverges from intrinsic value and automated systems respond to that divergence in ways that harm users. Protocols that want to support tokenized treasuries, yield-bearing stablecoins, or Bitcoin-backed tokens as collateral have been forced to either accept that structural risk or exclude the assets entirely.
\ Jeff Garzik, Co-Founder of Hemi Network, stated the stakes plainly in the context of hemiBTC, a Bitcoin-backed DeFi token:
"Bitcoin sitting idle is a trillion-dollar opportunity cost. hemiBTC lets holders deploy BTC productively into DeFi, but that only works if the pricing layer can verify the actual Bitcoin backing each token onchain. DIA Value does exactly that, no secondary market dependency, no centralized attestations. It's the kind of infrastructure that makes Bitcoin-native DeFi viable: fully trustless and verifiable."
\
There is a regulatory layer to this as well, and DIA is positioning for it. Fair value measurement standards in traditional finance, specifically IFRS 13 and ASC 820, explicitly require intrinsic valuation methods when active markets do not exist for an asset. Institutions operating across both traditional and decentralized finance need pricing infrastructure that is compatible with those standards, and market-based oracles cannot provide that compatibility for illiquid assets.
\ Zygis Marazas, Head of Product at DIA, made the connection direct:
Traditional finance solved illiquid asset pricing decades ago with NAV calculations, mark-to-model frameworks, and reserve verification. Blockchain makes it possible to execute those same methodologies with full transparency and 24/7 availability.
\ DIA Value complements DIA's existing Market oracle, which handles pricing for 3,000+ liquid assets with active trading. The two products together cover the full spectrum, which matters as institutional capital continues moving onchain. The tokenized RWA market, excluding stablecoins, has grown to approximately $25 billion and is projected to exceed $10 trillion by 2030. The protocols that can price that capital correctly will capture the infrastructure layer for institutional DeFi. The protocols that cannot will be excluded from it.
\
October 10, 2025 demonstrated that oracle infrastructure is not a background concern in DeFi. It is the mechanism by which abstract price data becomes concrete financial decisions. When that mechanism fails or operates on premises that do not match the nature of the asset, the consequences scale with the leverage sitting on top of it.
\ DIA Value is a direct, practical response to that problem. The methodology is not novel; it borrows from decades of institutional finance practice. What is new is the delivery: verifiable, onchain, and available 24/7 for assets that have never had a reliable pricing layer before. The protocols already using it, Euler, Morpho, Silo, and Hydration, are a credible signal that the demand is real. The $100 billion in illiquid onchain capital waiting for infrastructure that can price it reliably is the market that gets to decide whether it sticks.
\
2026-03-11 05:15:49
For a while, it became fashionable to say Web3 was over.
\ The hype cooled. NFT mania faded. Token prices fell. Venture money became more selective. And for many outside the industry, Web3 started to look like another overpromised tech trend that had already peaked.
\ But that view misses what usually happens after a hype cycle ends.
\ The loudest phase of a technology movement is rarely the most important one. The real work often begins when the headlines disappear, the tourists leave, and the builders keep going.
\ That is where Web3 seems to be now.
During the peak years, Web3 was often defined by speculation. Every week brought a new token, a new NFT collection, a new metaverse pitch, or a new promise that decentralization would change everything overnight.
\ That environment attracted attention, but it also buried the more serious side of the movement.
\ Underneath the speculation, developers were working on something much less flashy and much more difficult: rebuilding parts of the internet around ownership, openness, and programmable trust.
\ That work did not stop just because the excitement slowed down.
\ If anything, it became easier to take seriously once the noise faded.
One of the biggest misunderstandings about Web3 is that people often reduce it to crypto prices.
\ When prices go up, they say Web3 is the future. When prices go down, they say Web3 is dead.
\ But Web3 was always supposed to be bigger than market cycles.
\ At its core, Web3 is an attempt to redesign how digital systems work. Instead of relying entirely on centralized platforms to store data, control identity, move money, and set the rules, Web3 explores whether some of those functions can be handled by open networks.
\ That includes digital ownership, decentralized finance, on-chain identity, creator monetization, community governance, and applications that are not fully dependent on one company’s servers or policies.
\ Not every experiment will succeed. Many have already failed. But the broader question remains valid: can the internet work in a way that gives users more control?
\ That question has not gone away.
Every emerging technology goes through a stage where the visible consumer excitement fades, and the invisible infrastructure starts improving.
\ That phase usually looks boring from the outside.
\ It is not driven by viral moments. It is driven by better wallets, more scalable networks, stronger developer tools, improved user experience, safer smart contracts, and systems that can support real applications instead of demos.
\ That is where much of Web3 is today.
\ The industry has learned, often painfully, that mainstream users will not arrive just because decentralization sounds good in theory. People come when products are useful, simple, and trustworthy.
\ So the focus has slowly shifted from selling the dream to solving the friction.
\ That may not generate the same headlines, but it matters more.
One reason Web3 continues to matter is that the internet still has an ownership problem.
\ People spend years building audiences on platforms they do not control. Gamers buy digital items that can disappear when a publisher changes the rules. Creators depend on algorithms that can cut off reach overnight. Users generate enormous value online but rarely own the systems they help grow.
\ Web3 does not fix all of that automatically, and sometimes, it has exaggerated its own solutions. But it does introduce a powerful idea: digital assets, identities, and communities do not have to exist only at the mercy of centralized platforms.
\ The concept of portable ownership remains compelling.
\ If a user can hold an asset in a wallet, move it across applications, verify identity without handing full control to a platform, or participate directly in a network they use, that changes the relationship between users and the internet.
\ That is not a small shift.
Another reason people misread Web3 is that they assume it has to replace the entire internet to matter.
\ It probably will not.
\ The more realistic future is likely hybrid.
\ Most users will not care whether every part of a product is on-chain. They will care whether it works, whether it is secure, and whether it gives them more freedom than older systems did.
\ That means the next generation of successful Web3 products may not look like “crypto products” at all. They may simply feel like better internet products with ownership, transparency, and interoperability built in behind the scenes.
\ In that sense, Web3 may become more influential as it becomes less visible.
Web3 also needs honest criticism.
\ The space has had real problems: scams, hacks, poor UX, empty promises, speculative excess, and projects that used decentralization more as a marketing term than a meaningful design choice.
\ These failures damaged trust, and in many cases, that criticism was deserved.
\ But flawed execution does not automatically invalidate the underlying ideas.
\ The dot-com era had bubbles and bad companies, too. That did not mean the internet itself was a mistake. It meant the market was early, overheated, and still figuring out what was real.
\ Web3 may be going through a similar correction.
The strongest sign that Web3 is still alive is not a trend on social media. It is the fact that teams are still building when there is less attention to chase.
\ That is usually when an industry starts becoming more real.
\ Builders become more disciplined. Users become more selective. Products have to earn attention instead of buying it with hype. And the conversation slowly shifts from what sounds revolutionary to what actually works.
\ That is healthier than the boom period, even if it looks less exciting from the outside.
Web3 is not dead. It is simply no longer performing for the crowd in the same way it once did.
\ And that may be exactly why it still matters.
\ The internet still struggles with concentration of power, fragile creator economics, platform dependence, and limited user ownership. Those issues remain unsolved, and Web3 is still one of the few serious attempts to rethink them at the structural level.
\ It may not rebuild the internet in one dramatic wave.
\ It may do it quietly, layer by layer, tool by tool, protocol by protocol.
\ And if that happens, many people will only notice once the rebuilding is already well underway.
\
2026-03-11 04:53:07
There is a recent [February 2026] paper in Acta Psychiatrica Scandinavica, Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System, stating that, "Specifically, it seems that interaction with AI chatbots, especially if intense/of long duration, may contribute to onset or worsening of delusions or mania, with severe or even fatal consequences."
\ "Therefore, we aimed to investigate whether there are reports compatible with potentially harmful consequences of AI chatbot use on mental health among patients with mental illness receiving care in a large psychiatric service system."
\ "The result of the consensus assessment was that among the 181 notes containing one of the 22 chatbot/ChatGPT search terms, notes from 38 unique patients (39% females, median age 28 years [25%–75%: 22–39 years]) were compatible with potentially harmful consequences of use of AI chatbots on mental health. Delusions (n = 11), suicidality/self-harm (n = 6), feeding or eating disorder (n = 5), mania/hypomania/mixed state (n < 5), obsessions or compulsions (n < 5), depression (n < 5), anxiety (n < 5), other symptoms/miscellaneous (n < 5), ADHD-related symptoms (n < 5), and unspecific stress (n < 5)."
\ "There were also examples of patients (n = 32) using AI chatbots for seemingly constructive purposes from a mental health perspective—that may have positive consequences, for example, for psychoeducation, psychotherapy ("talk therapy"), companionship against loneliness or for diagnostics (e.g., entering symptoms and requesting an interpretation)."
\ "In conclusion, with the substantial caveats described above in mind, the results of this study support the notion that use of AI chatbots may have a negative impact on the mental health of patients with mental illness, especially regarding delusions. Mental health professionals should be aware of this possibility and guide their patients accordingly, as it seems that some patients would likely benefit from reduced/no use of AI chatbots in their current form."
The availability and accessibility of AI chatbots, as well as their sycophantic alignment, make the possibility that they can result in delusion and then psychosis, for some users, quite high.
\ This means that even with all the basic disclaimers over chatbots, that they make mistakes or that users should be careful, they still carry mind risks, which require better approaches to mind safety.
\ And since mind safety is the objective, then the processes of mind for risks and safety, by AI use, should at least be displayed, so as to gauge the distance from delusion, psychosis, or worse.
\ Simply, to mitigate AI psychosis and delusion, a direct path is to explore dynamic displays of the corresponding processes in the mind, with respect to destinations and relays, so that users can have a better sense of what might be happening within and how to be heedful.
\ This display will be like a flowchart, with shapes and arrows. The shapes will represent destinations in the mind. Destinations include caution, consequences, pleasure, delight, grandeur, fear, hate, care, support, and so forth. Relays include reality lines [meaning the transport path of things in reality], then non-reality lines [for the transport path of imagination, fantasies, and so forth].
\ This display will be dynamic, using the themes and keywords of a chat to place mind movements. A description of this display is like plotting a live graph and seeing how the curve changes as coordinates are added. It is also similar to data visualization, where some parts of [say] a map are lit, and there are adjustments as data changes.
\ This application will be separately hosted, where users can plug in keywords and then have the display and a score, as well as recommendations on how to be heedful in the next session. Some AI chatbot companies may host the API so that their subscribers can have access as well.
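As a rough illustration of that keyword-to-destination mapping, a scorer might look like the following. The destinations, keywords, relay labels, and weights are all assumptions invented for this sketch; the article does not specify them, and this is not a validated clinical instrument:

```javascript
// Hypothetical destination map. Keywords, relays, and weights are
// illustrative assumptions, not a validated clinical instrument.
const DESTINATIONS = {
  grandeur: { keywords: ['chosen one', 'special mission'], relay: 'non-reality', weight: 3 },
  fear:     { keywords: ['being watched', 'conspiracy'],   relay: 'non-reality', weight: 3 },
  care:     { keywords: ['support', 'therapy'],            relay: 'reality',     weight: -1 },
};

// Light up the destinations hit by a session's keywords and total a score;
// a higher score means more movement along non-reality lines.
function scoreSession(chatText) {
  const text = chatText.toLowerCase();
  const hits = [];
  let score = 0;
  for (const [name, dest] of Object.entries(DESTINATIONS)) {
    for (const keyword of dest.keywords) {
      if (text.includes(keyword)) {
        hits.push({ destination: name, relay: dest.relay });
        score += dest.weight;
      }
    }
  }
  return { hits, score };
}
```

The dynamic display described above would re-run something like this on every exchange, moving the lit shapes and the score as the conversation's themes shift.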
\ The robustness of this model to solve AI psychosis is based on Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology.
There is still no AI psychosis startup anywhere on earth, just like there is no AI psychosis research lab.
\ While AI use cases for therapy, companionship, relationships, and other personal uses continue to grow, it will become evident that AI is accessing the human mind with language, just like other humans would, and for some, it might be possible that mind displacements [from reality, or the absence of situational awareness that the thing is a machine] may occur, resulting in unwanted experiences.
\ This makes it an ongoing and urgent problem to solve, where pre-seed capital from a forward-looking venture capital firm may win the market, since the answer is rooted in conceptual brain science, postulating about electrical and chemical signals, from empirically supported neuroscience.
\ It is possible to have the product ready to go on April 10, 2026, if, say, the startup is incorporated this March.
2026-03-11 04:44:19
In the early stages of a startup, communication is a double-edged sword. You need syncs, but every hour spent in a Zoom call is an hour stolen from execution. At CultLab, we hit a wall: the "Founder-to-Engineer" translation gap. We were burning 32 hours a month just documenting decisions.
\ Most people solve this with an AI summarizer. That is a rookie mistake.
\ A summary is passive. It’s noise. As a Growth Hacker, my goal wasn't to "record" meetings; it was to automate the transition from talk to tech-spec.
We didn't just prompt Gemini; we rewired its reasoning using the Shape Up framework. Why? Because LLMs love to hallucinate "big picture" fluff. By forcing the agent to think in terms of Appetite and Rabbit Holes, we turned a chatty bot into a ruthless Product Strategist.
The "Shaper" Architecture:
We built a pipeline that treats a Zoom transcript like raw data to be mined, not a story to be told.
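A minimal version of that pipeline is simply a prompt that refuses to accept a summary. The section names follow the Shape Up framework; the exact wording below is an assumption of what such a prompt could look like, not CultLab's actual prompt:

```javascript
// Force the model into a Shape Up brief instead of a free-form summary.
// Section names follow Shape Up; the wording is an illustrative guess,
// not CultLab's production prompt.
function buildShapingPrompt(transcript) {
  return [
    'You are a product shaper, not a note-taker. From the transcript below, extract ONLY:',
    '1. Problem: the raw idea or pain point, in one sentence.',
    '2. Appetite: how much time this is worth (small batch or big batch).',
    '3. Solution sketch: the elements of the solution, no visual detail.',
    '4. Rabbit holes: details likely to sink the project.',
    '5. No-gos: anything explicitly declared out of scope.',
    'Do not summarize the conversation. Do not add big-picture commentary.',
    '--- TRANSCRIPT ---',
    transcript,
  ].join('\n');
}
```

The "Do not summarize" constraint is the point: the output goes straight to designers and engineers as a brief, which is what turns a chatty bot into the ruthless strategist described above.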
This isn't just about productivity; it’s about resource leverage. By automating the documentation and initial "shaping" of projects:
Founder Liquidity: We effectively "cloned" the founder's strategic thinking, freeing up 4 full working days per month for high-level fundraising and growth hacking.
\
Engineering Velocity: Briefs are now generated in seconds. Designers and front-end teams receive Context, Problem, and Success Metrics instantly, reducing back-and-forth by ~70%.
\
The Cost of Zero: We eliminated the need for junior PMs or technical writers. The system scales with the volume of calls, not the size of the payroll.
The real competitive advantage in 2026 isn't the model you use (we used Gemini 2.5 Flash for its speed and context window). The advantage is the methodology you encode within it.
\ We didn't automate a task; we automated a cognitive workflow. That is how you scale a lean team to compete with incumbents.
2026-03-11 04:32:15
Stop copying code chunks. Start letting AI agents work directly with your files.
TL;DR: Use terminal-based AI tools to give your assistant direct access to your local files and test suites.
You copy code snippets into a web browser chat like ChatGPT, Claude, or Grok.
\ You manually move code back and forth and give small chunks of code, filling up the context window.
\ You lose the context of your folder structure, relations among modules, and the whole architecture.
\ The AI often (wrongly) guesses your project layout and hallucinates.
\ When you do this, you get inconsistent code and outdated logic.
\ You're basically playing assistant to the AI, running around doing the busywork.
Download a CLI or IDE tool like Claude Code, OpenCode, Windsurf, or similar, and let it access ALL your codebase.
\ (You'll need to check compliance, set up safeguards, and respect any NDAs).
\ Open your terminal and start an interactive session. Let the agent navigate through all your code.
\ Describe what you want to accomplish at a high level and delegate to the orchestrator agent.
\ Review the proposed plan in the terminal.
\ Approve the changes to update your local files.
\ Let the agent run your tests and fix failures automatically.
Full project context through local AST and RAG indexing.
\ Self-healing code through automated shell feedback loops.
\ Multi-file edits in a single prompt.
\ Parallel development using multiple agent instances.
\ Iterative incremental learning and experimentation. Baby steps.
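The "self-healing" item in that list reduces to a simple pattern: run the tests, hand the failures back to the agent, repeat. The sketch below uses hypothetical `runTests` and `askAgentToFix` callbacks to stand in for the agent's shell integration, which tools like Claude Code, OpenCode, and Aider implement internally:

```javascript
// Self-healing loop sketch: test output is the feedback signal.
// `runTests` and `askAgentToFix` are hypothetical stand-ins for the
// agent's shell integration (e.g. shelling out to `npm test`).
async function selfHealingLoop(runTests, askAgentToFix, maxRounds = 3) {
  for (let round = 1; round <= maxRounds; round++) {
    const result = await runTests();
    if (result.passed) return { fixed: true, rounds: round };
    // Feed the concrete failure output back so the agent can edit the files.
    await askAgentToFix(result.failures);
  }
  // Stop burning tokens and hand the problem back to a human.
  return { fixed: false, rounds: maxRounds };
}
```

The `maxRounds` cap matters: long sessions drive up token costs, so an agent that cannot go green in a few rounds should give the problem back to you.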
We were all blown away when ChatGPT came out.
\ I wrote an article 2 days after its release, understanding it was a game-changer.
\ Even those of us who had been working with earlier GPT models were stunned.
\ Four years later, you still see many developers coding this way.
\ It works for small algorithms and functions, but falls apart for real software engineering.
// Please fix the login bug in this snippet:
async function loginUser(email, password) {
  const url = 'https://api.penrosebrain.com/login';
  try {
    const response = await fetch(url, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ email, password }),
    });
  } catch (error) {
    console.error('There was an error:', error.message);
    alert(error.message);
  }
}
// opencode: "Create a failing test, fix the login bug, run tests,
// ensure it passes the new test and all the previous ones,
// create a Pull Request so I can review it"
async function loginUser(email, password) {
  const url = 'https://api.penrosebrain.com/login';
  try {
    // Fixed: login should POST credentials, not PUT
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ email, password }),
    });
    // Fixed: surface HTTP errors and return the session payload
    if (!response.ok) {
      throw new Error(`Login failed with status ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error('There was an error:', error.message);
    alert(error.message);
  }
}
CLI agents have a learning curve.
\ Always review all changes before committing them.
\ Use a sandbox environment if you run untrusted code.
[X] Semi-Automatic
Don't use this for tiny, one-off scripts.
\ Web chats work better for creative brainstorming or generating images.
\ High token usage in long sessions can drive up your API costs.
[X] Intermediate
https://hackernoon.com/ai-coding-tip-003-force-read-only-planning?embedable=true
https://hackernoon.com/ai-coding-tip-006-review-every-line-before-commit?embedable=true
Connect external data using the Model Context Protocol (MCP).
\ Run local models if you need 100% privacy.
Move your AI assistant to the terminal.
\ You'll work faster and make fewer mistakes.
\ When you delegate the boring parts, you can focus on architecture and high-level design.
https://www.firecrawl.dev/blog/why-clis-are-better-for-agents?embedable=true
https://aider.chat/?embedable=true
https://code.claude.com/docs/en/overview?embedable=true
https://www.builder.io/blog/opencode-vs-claude-code?embedable=true
https://blog.marcnuri.com/boosting-developer-productivity-ai-2025?embedable=true
https://opencode.ai/docs/?embedable=true
Agentic Coding
Terminal Agents
Autonomous Coding Loops
Claude Code, OpenCode, Aider, Codex CLI.
The views expressed here are my own.
\ I am a human who writes as best as possible for other humans.
\ I use AI proofreading tools to improve some texts.
\ I welcome constructive criticism and dialogue.
\ I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.
This article is part of the AI Coding Tip series.
https://maxicontieri.substack.com/p/ai-coding-tips