The Practical Developer
A constructive and inclusive social network for software developers.

How lenders actually evaluate insurance during real estate deals (and why most investors misunderstand it)

2026-04-26 07:40:24

When a real estate deal moves from early underwriting into serious diligence, insurance stops being a formality. It becomes a decision point that can shape loan terms, delay closing, or quietly kill a transaction. Most investors still treat it as something administrative. Lenders treat it as risk validation.

That mismatch is where problems start.

Why insurance matters more in lending than most investors realize

From a lender’s perspective, insurance is not about compliance. It is about survivability. They are asking a simple question: if something catastrophic happens to this asset, will the capital stack be protected?

That question breaks down into very specific checks:

  • Is replacement cost accurate under current market conditions?
  • Are limits sufficient for worst-case loss scenarios?
  • Do policies actually align with ownership structures?
  • Are deductibles realistic given cash flow assumptions?

If any of these are unclear, underwriting slows down immediately.

The portfolio problem lenders actually care about

Most investors think insurance is evaluated at the property level. In reality, lenders increasingly look at aggregated exposure across the full portfolio.

A sponsor might present five stable assets in different markets. Individually, each policy may look fine. But together, they can reveal concentration risk that changes the entire credit profile.

Common lender concerns include:

  • Multiple assets exposed to the same catastrophe zone
  • Inconsistent replacement cost assumptions across properties
  • Fragmented liability limits across entities
  • Uneven deductibles that distort risk distribution

Without a consolidated view, investors often underestimate how connected their exposures actually are.

Where deals usually get delayed in diligence

Insurance rarely kills a deal outright on day one. It slows it down first.

The most common friction points are predictable:

Outdated valuation data

Replacement costs based on old appraisals or prior underwriting cycles can trigger immediate lender pushback, especially in inflationary construction markets.

Disorganized policy structure

Multiple carriers, entities, and renewal dates create confusion when lenders try to confirm continuity of coverage.

Weak exposure transparency

If an investor cannot quickly explain total insured value by geography or risk type, lenders assume conservative worst-case scenarios.

The result is not rejection, but tighter covenants and slower execution.

Why “deal-by-deal” insurance thinking fails

Most investors only focus on insurance when a deal is live. That creates a reactive cycle: gather documents, send to broker, wait for certificates, repeat.

The issue is that lenders are not evaluating a single moment in time. They are evaluating how well the portfolio is managed continuously.

If exposure data is only updated during acquisitions or renewals, it will almost always lag behind reality. And that lag shows up during underwriting.

The role of structured portfolio visibility

This is where disciplined portfolio tracking becomes a financing advantage, not just an operational one.

Investors who maintain consistent exposure data across all assets can answer lender questions instantly:

  • What is total insured value across the portfolio?
  • How much exposure sits in high-risk zones?
  • Are valuations updated to current construction costs?
  • Do limits scale appropriately across assets?
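
The questions above reduce to simple aggregations over a well-maintained exposure dataset. As a minimal sketch (the record fields and dollar figures here are hypothetical, not from any real portfolio), total insured value and concentration by catastrophe zone can be computed like this:

```python
# Hypothetical portfolio records; field names and values are illustrative.
portfolio = [
    {"asset": "A", "zone": "FL-wind", "insured_value": 12_000_000},
    {"asset": "B", "zone": "FL-wind", "insured_value": 9_500_000},
    {"asset": "C", "zone": "TX-hail", "insured_value": 7_000_000},
]

# Total insured value across the portfolio
tiv = sum(p["insured_value"] for p in portfolio)

# Exposure concentrated by catastrophe zone
by_zone = {}
for p in portfolio:
    by_zone[p["zone"]] = by_zone.get(p["zone"], 0) + p["insured_value"]

print(tiv)      # 28,500,000 total insured value
print(by_zone)  # FL-wind alone holds 21.5M of the 28.5M
```

The point is not the code but the discipline: if the underlying data is kept current, these answers exist before a lender asks.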

This is also where structured approaches like insurance portfolio management start to matter. Not because lenders require the label, but because they require the clarity it produces.

What strong insurance readiness looks like to lenders

Lenders tend to trust sponsors who demonstrate three things:

1. Accuracy

Current replacement values supported by recent data, not legacy estimates.

2. Consistency

Standardized coverage structures across properties and entities.

3. Visibility

Fast, clear reporting on exposure, limits, and risk concentration.

When those conditions are met, insurance becomes a non-issue in underwriting. When they are not, it becomes a negotiation point.

The real takeaway for investors

Insurance is often treated as paperwork in real estate transactions, but lenders treat it as a reflection of operational maturity.

Investors who manage insurance as part of their broader portfolio intelligence tend to experience fewer underwriting delays, fewer covenant surprises, and smoother closings.

In competitive markets, that operational clarity becomes a quiet but meaningful advantage.

Your Pipeline Is 25.3h Behind: Catching Business Sentiment Leads with Pulsebit

2026-04-26 07:39:52

Your pipeline just missed a critical insight: a 24h momentum drop of -0.587 in business sentiment. This significant dip indicates a shift in perception that could signal underlying trends worth exploring. The data reveals two articles clustered around the theme "How Undergraduate Business Education Is Evolving For Real-World Impact," showcasing a narrative that could inform your strategy. If your model doesn't account for multilingual origins or entity dominance, you’re at risk of being 25.3 hours behind in capturing these emerging sentiments.

English coverage led by 25.3 hours. German at T+25.3h. Confidence scores: English 0.85, Spanish 0.85, French 0.85. Source: Pulsebit /sentiment_by_lang.

This gap reveals a structural flaw in your sentiment analysis pipeline. If you’re not managing multilingual content effectively, you’re likely missing out on critical insights from global conversations. In this case, English-language articles led the conversation, while the German articles lagged by the same 25.3 hours. If your model isn't tuned to handle these variations, you’re effectively blind to shifts in sentiment that could impact your decision-making.

To catch this momentum shift, let's dive into the code. First, we need to filter by the geographic origin of the articles. Here’s how we can query our API to focus on English content:

Geographic detection output for business. India leads with 2 articles and sentiment +0.08. Source: Pulsebit /news_recent geographic fields.

import requests

# Define the parameters for the API call
params = {
    "topic": "business",
    "lang": "en",
    "momentum": -0.587
}

# API call to fetch sentiment data
response = requests.get("https://api.pulsebit.io/sentiment", params=params)
data = response.json()

print(data)

![Left: Python GET /news_semantic call for 'business'. Right: ](https://pub-c3309ec893c24fb9ae292f229e1688a6.r2.dev/figures/g3_code_output_split_1777160391257.png)
*Left: Python GET /news_semantic call for 'business'. Right: returned JSON response structure (clusters: 3). Source: Pulsebit /news_semantic.*

Next, we’ll run the cluster reason string through the sentiment API to score the narrative framing itself. This is crucial for understanding the sentiment surrounding our identified themes:

# Define the cluster reason string
cluster_reason = "Clustered by shared themes: undergraduate, business, education, evolving, real-world."

# API call to analyze the sentiment of the cluster reason
sentiment_response = requests.post("https://api.pulsebit.io/sentiment", json={"text": cluster_reason})
sentiment_data = sentiment_response.json()

print(sentiment_data)

Now that we've established how to track the emerging sentiment on business education, here are three specific builds you can implement tonight:

  1. Geo-Filtered Alert System: Set up a real-time alert system that triggers whenever a sentiment score dips below a certain threshold (e.g., -0.2) for English-language articles. This will ensure you're immediately notified of any significant sentiment shifts within your domain.

  2. Meta-Sentiment Dashboard: Create a dashboard that visualizes sentiment scores for clustered narratives. Use the meta-sentiment loop to analyze how narratives evolve over time, focusing on themes like business education and its real-world impact. This can guide your content strategy and marketing efforts.

  3. Custom Reports: Develop custom reports that pull together articles and sentiment scores based on specific signals. For instance, aggregate data around the keywords "business" and "education" to spot trends, while also incorporating the geo filter to ensure you're capturing the most relevant discussions.
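
Build #1 above is mostly a thresholding rule. Here is a minimal sketch of the decision logic; the response shape and field names (`lang`, `sentiment`) are assumptions modeled on the /sentiment_by_lang output shown earlier, not a documented Pulsebit schema:

```python
# Sketch of a geo-filtered alert rule: flag English-language records whose
# sentiment score dips below a threshold. Field names are assumed, not
# taken from official Pulsebit documentation.

ALERT_THRESHOLD = -0.2  # fire when sentiment dips below this

def should_alert(records, lang="en", threshold=ALERT_THRESHOLD):
    """Return the records for `lang` whose sentiment breaches the threshold."""
    return [r for r in records if r["lang"] == lang and r["sentiment"] < threshold]

# Example payload shaped like a hypothetical /sentiment_by_lang response
sample = [
    {"lang": "en", "sentiment": -0.587, "topic": "business"},
    {"lang": "de", "sentiment": -0.10, "topic": "business"},
    {"lang": "en", "sentiment": 0.12, "topic": "tech"},
]

alerts = should_alert(sample)
print(alerts)  # only the English -0.587 record breaches -0.2
```

Wire this to a scheduled poll of the API and a notifier of your choice and you have the real-time alert described above.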

By leveraging the insights from our API, you can stay ahead of the curve and ensure your models are capturing sentiment shifts as they happen.

To get started, visit pulsebit.lojenterprise.com/docs. You can copy-paste and run this in under 10 minutes, making it easy to integrate these insights into your workflow.

Mythos and Cyber Models: What does it mean for the future of software?

2026-04-26 07:24:05

Anthropic Made Its Model Worse On Purpose. Here's What That Tells You About the State of AI Security.

In the entire history of commercial AI model releases, no company has intentionally made a model worse on a published benchmark before shipping it to the public.

That changed this month.

Anthropic released Opus 4.7. And if you look at the CyberBench scores, it performs below Opus 4.6 — the model it was supposed to supersede. That regression was not a bug. It was a deliberate product decision, and understanding why they made it is one of the most important things a software architect can do right now.

The reason is a model called Claude Mythos. It is the most capable vulnerability-discovery system ever tested on real-world production software. It found a 27-year-old flaw in OpenBSD — one of the most security-hardened operating systems on the planet. It found a 16-year-old vulnerability in FFmpeg. It chained multiple Linux kernel weaknesses into a working privilege escalation exploit, going from ordinary user access to full machine control.

And then Anthropic looked at those results, looked at the systems the rest of the world runs on, and decided the right thing to do was to restrict access before releasing anything more capable publicly.

That decision is the signal. Everything else in this post explains what it means.

What Claude Mythos Actually Did

Mythos is not a research artifact or a red-team proof of concept. It is a production-grade capability that was released — under the codename Project Glasswing — to a small set of approximately 40 vetted organizations that operate critical software, specifically so they could begin hardening their systems before the model's capabilities became more widely known.

What it demonstrated in controlled environments:

Active zero-day discovery at scale. Mythos does not just match known CVE patterns. It analyzes real systems, identifies previously undocumented vulnerabilities, and produces working proof-of-concept exploit chains. The OpenBSD bug had existed since 1997. It was not obscure legacy code that nobody touched — OpenBSD is actively maintained and specifically designed to be resistant to exactly this kind of analysis. A 27-year-old bug surviving in that environment is not a failure of individual engineers. It is a signal about the limits of human-scale review.

Exploit chaining. Finding a single vulnerability is one thing. Combining multiple weaknesses into a viable attack path is the work that turns a theoretical risk into a real one. Mythos demonstrated the ability to do this across kernel-level Linux vulnerabilities, turning a sequence of individually low-severity issues into full privilege escalation. This is the kind of chain that typically takes a skilled attacker weeks to construct. The model did it as part of its analysis pass.

Scale that no human team can match. The significance is not any single finding — it is the rate. Human security researchers are bottlenecked by expertise, time, and context-switching. Mythos evaluates thousands of potential attack surfaces in parallel, continuously, without fatigue or prioritization constraints.

OpenAI Is Thinking the Same Thing

Anthropic is not operating in isolation. Within days of Mythos going out to Project Glasswing partners, OpenAI released GPT-5.4-Cyber — a variant of its flagship model fine-tuned specifically for defensive cybersecurity use cases. It is only available to vetted participants in their Trusted Access for Cyber (TAC) program.

The parallel is striking:

Anthropic                              OpenAI
─────────────────────────────────────────────────────
Claude Mythos                          GPT-5.4-Cyber
Project Glasswing (~40 partners)       TAC program (vetted participants)
Restricted pre-release access          Safety-guardrail modifications
                                       for authenticated defenders
Vulnerability discovery & chaining     Binary reverse engineering enabled

GPT-5.4-Cyber goes further in one specific way: it removes many standard safety guardrails for authenticated defenders, including support for binary reverse engineering — a capability that is normally off-limits. OpenAI's Codex Security tool has already contributed to fixing over 3,000 critical and high-severity vulnerabilities.

What this pattern tells you is not that these models are risky in an abstract sense. It is that both of the leading frontier AI labs have independently reached the same conclusion: their models are now powerful enough that unrestricted public access would be a net liability. That is not a marketing stunt. That is not regulatory positioning. That is two organizations treating their own work the way defense contractors treat classified technology.

The Shift That Actually Matters: Human Effort Is No Longer the Limit

For as long as software security has existed as a discipline, there has been a natural rate-limiting factor: human effort.

Finding vulnerabilities required skilled people with time, focus, and domain expertise. Even the most sophisticated state-level adversaries were constrained by how fast their teams could move. The difficulty of exploitation was, itself, a form of defense.

That constraint is gone.

Here is what the new operating environment looks like:

Old model (human-rate-limited):
─────────────────────────────────────────────────────
Attacker → manually analyze codebase
         → weeks/months per target
         → limited to known vulnerability patterns
         → exploitation requires specialists
         → limited parallelism

New model (AI-accelerated):
─────────────────────────────────────────────────────
AI system → continuous automated analysis
          → thousands of targets in parallel
          → identifies novel vulnerability classes
          → generates working exploit chains
          → operates 24/7 without fatigue

The attack surface has not changed. The cost of probing it has dropped by orders of magnitude.

Vulnerability discovery now happens continuously instead of periodically. Exploit development can be partially or fully automated. And as these models become accessible — either through legitimate programs or through underground markets where stripped-down variants already circulate — the population of actors capable of sophisticated attacks expands dramatically.

The Real Problem: The Remediation Gap

Here is the uncomfortable truth that the Mythos story exposes.

Most of the risk in software systems today does not come from vulnerabilities that haven't been found yet. It comes from vulnerabilities that have already been found, are already documented, and have not been patched.

Security teams work against a perpetual backlog. Systems are too fragile to update quickly. Regressions break things when patches go in. Dependency chains make change expensive. This is the normal operational state of almost every engineering organization running at scale.

What AI does is accelerate the discovery side without equally accelerating the remediation side. That asymmetry is the actual risk.

Discovery velocity         ████████████████████████████░░  (AI-accelerated)
Remediation velocity       ████████░░░░░░░░░░░░░░░░░░░░░░  (still human-rate-limited)
                                    ^^^
                            This gap is your attack surface

A system that finds 10,000 previously unknown vulnerabilities in a month is not obviously helpful if your team can patch 200. The remaining 9,800 are now known — potentially to adversaries — and unaddressed. The net effect can be a larger effective attack surface, even though the underlying systems have not changed at all.
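
The arithmetic above can be sketched as a toy model. The rates are the illustrative figures from the paragraph, not measurements from any real organization:

```python
# Back-of-the-envelope model of the remediation gap: discovery is
# AI-accelerated, patching is human-rate-limited, so the backlog of
# known-but-unpatched vulnerabilities compounds month over month.
found_per_month = 10_000   # AI-accelerated discovery rate (illustrative)
patched_per_month = 200    # human-rate-limited remediation capacity

backlog = 0
for month in range(1, 7):
    backlog += found_per_month - patched_per_month

print(backlog)  # known, unaddressed findings after six months: 58,800
```

Even if both numbers are off by an order of magnitude, the shape of the curve is the same: the backlog grows linearly with time unless remediation capacity scales with discovery.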

This is the design problem that the industry has not solved. Mythos forced the conversation into the open.

The Monoculture Risk Nobody Is Talking About

Individual vulnerabilities are dangerous. Vulnerabilities in software that runs everywhere are catastrophic.

The hidden amplification factor in this story is software monoculture: the same operating systems, the same libraries, the same frameworks are used across millions of production systems globally. A single vulnerability in glibc, OpenSSL, or the Linux kernel is not a bug in one application. It is a bug in the substrate that most of the world's software infrastructure runs on.

When AI accelerates vulnerability discovery in monoculture environments, the impact does not scale linearly — it scales by the number of systems running that codebase.

Traditional single-target exploit:
  1 attacker → 1 target → 1 breach

AI-discovered monoculture exploit:
  1 AI system → 1 vulnerability → millions of targets
                                 (same code, different deployments)

This is how the Mythos findings — an OpenBSD bug, an FFmpeg flaw — become systemic risks rather than isolated incidents. OpenBSD runs in firewalls, embedded systems, and network appliances across critical infrastructure. FFmpeg processes video in applications that touch billions of users. These are not edge cases.

An Unexpected Counterforce

There is one interesting development beginning to emerge from the same forces that created this risk.

As AI reduces the cost of building software, organizations may — over time — begin to build more customized, less standardized systems. When you can generate a bespoke authentication module in minutes instead of weeks, the calculus around using shared libraries changes.

If that shift materializes at scale, it could reduce the blast radius of any single vulnerability. Attackers cannot reuse the same exploit across millions of targets if the targets are no longer running identical code.

The catch is that this benefit only materializes if security practices evolve at the same pace as development. Right now, AI is accelerating development velocity significantly faster than it is accelerating security rigor. The window between "built with AI" and "secured with AI" is where the risk lives.

Where This Is Heading: AI vs. AI

The end state of this trajectory is a security landscape that operates entirely differently from today's.

Current state:
  Human attackers ──────────► Human defenders
  (slow, expertise-limited)    (slow, expertise-limited)

Near-term state:
  AI attackers ─────────────► Human defenders
  (fast, scalable)              (slow, expertise-limited)
                    ^^^
              Current danger zone

Future state:
  AI attackers ─────────────► AI defenders
  (fast, scalable)              (fast, scalable)
         └──────────────────────────┘
              Competing feedback loops

We are currently in the second phase — the danger zone. AI-accelerated attack capability is outpacing human-scale defense. The third phase, where AI defense catches up, is coming, but it is not here yet.

The organizations that close that gap fastest will not necessarily have the most capable models. They will have the tightest feedback loop between detection and remediation. Anthropic understood this when they degraded Opus 4.7 on CyberBench. They looked at Mythos's capabilities, understood that making something more capable publicly available was a liability before the defense side had caught up, and made a product decision that cost them a benchmark headline in exchange for reduced near-term risk.

That is the playbook. Build for the loop, not the leaderboard.

What Developers and Architects Should Actually Do Right Now

The model release news cycle will pass. The structural shift it represents will not. Here is how to think about your exposure:

Audit your patch lag. The remediation gap is your real risk surface. How long does it take your organization to go from "CVE published" to "patch deployed in production"? That number tells you more about your actual risk than your perimeter security posture.
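
One way to make that audit concrete: compute the publish-to-deploy lag for each recent fix and track the median. A minimal sketch, using hypothetical dates standing in for your own change records:

```python
from datetime import date

# Hypothetical patch records: (CVE published, patch deployed in production).
# In practice these would come from your ticketing or deployment system.
patch_log = [
    (date(2026, 1, 5), date(2026, 2, 20)),
    (date(2026, 1, 18), date(2026, 1, 30)),
    (date(2026, 2, 2), date(2026, 3, 25)),
]

# Days between publication and production deployment, per patch
lags = [(deployed - published).days for published, deployed in patch_log]

# Median lag is a more honest single number than the best case
median_lag = sorted(lags)[len(lags) // 2]
print(f"median publish-to-deploy lag: {median_lag} days")
```

If that median is measured in months, the remediation gap described earlier is your operating reality, whatever your perimeter tooling says.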

Treat your dependency graph as infrastructure. Libraries and shared frameworks are not just technical debt decisions — they are blast radius decisions. Every shared dependency is a vector through which a single discovered vulnerability reaches you. That calculus now needs to include AI-accelerated discovery timelines.

Start thinking about detection-to-remediation as a pipeline, not a process. The organizations that will handle the next phase of AI-accelerated attacks are the ones that have automated the boring parts of remediation so that their human capacity can focus on the genuinely novel cases.

Understand which of your systems run on monoculture infrastructure. OpenBSD, Linux kernel, FFmpeg, OpenSSL, glibc — if your systems touch these, you are exposed to a different risk profile than systems running on more customized stacks. Know which category you are in.

Key Takeaways

  • The intentional benchmark regression is the story. Anthropic degraded Opus 4.7 on CyberBench specifically because Mythos demonstrated that unrestricted public access to more capable models is a net liability for critical infrastructure. That is an industry-first decision worth understanding deeply.
  • Human effort is no longer the rate-limiting factor in vulnerability discovery. AI systems can probe attack surfaces at scale, continuously, across thousands of targets — and produce working exploit chains, not just theoretical flags.
  • The remediation gap is now the primary risk. AI accelerates discovery without equally accelerating patching. The asymmetry between those two velocities is your real attack surface.
  • Software monoculture amplifies everything. A single AI-discovered vulnerability in shared infrastructure (Linux, OpenSSL, FFmpeg) is not one bug in one system — it's one bug in the foundation of millions of systems simultaneously.
  • Both Anthropic and OpenAI are now treating their own models like classified defense technology. This is not regulatory theater. It is a calibrated signal that capability has outpaced the defense ecosystem's readiness.

The Question That Should Keep Architects Up at Night

Anthropic made their model worse on purpose because they understood something most of the industry has not caught up to yet: the capability is already here. The question that remains is who gets to use it first, and whether the defense side catches up before the attack side scales.

We like to believe that modern software systems are mature and well understood. They are not. A 27-year-old bug in a deliberately hardened operating system is not an anomaly — it is evidence that complexity has always outpaced our ability to fully audit what we build. AI is not introducing that complexity. It is exposing it.

Here is the question I want to leave you with: If a system like Mythos ran against your production infrastructure today, how long would it take your team to close what it found — and do you have a plan for the gap?

Drop your answer in the comments. I'm particularly curious how organizations with large legacy surface areas are thinking about this.

Credit: The technical analysis in this post is based on insights from Diary of an AI Architect by Anurag Karuparti — a newsletter worth following if you build or operate software at scale.

Software Developers: Redundant or Resilient?

2026-04-26 07:23:14

In the era before AI-assisted coding, my workflow for any feature followed this pattern:

  1. I would analyze the business problem. Even if the Product Manager spent time on it, I would read the documentation and ask clarifying questions. This established a foundational understanding of the problem.

  2. After some back-and-forth discussion, I would begin planning the implementation. This deepened my understanding of the problem.

  3. I would review the existing codebase to identify established patterns and determine what I could reuse. This strengthened my familiarity with the codebase.

  4. If no existing pattern applied, I would research similar scenarios and evaluate design patterns to find the best fit for the problem. This reinforced my coding practices and potentially uncovered new solutions.

  5. Finally, I would start implementing. As I coded, I would continuously consider improvements and alternative approaches. This increased my familiarity with both the problem and the solution.

After completing this process, I could often recall the implementation details and logic from memory during team discussions. If a bug arose, I could usually deduce its cause without immediately inspecting the code, often because I recognized an edge case I had overlooked during implementation.

Overall, this process helped me learn more, retain more knowledge, and perform more of the work myself. These were actually the most fun parts of the process. Today, I spend much of my time reviewing code. However, reviewing is not the same as writing it. As the saying goes in mathematics, you cannot learn simply by reading a textbook; you must engage with the material and put pen to paper.

Maybe times have changed, and I do not even need to know all those details. But then it makes me wonder: am I redundant in this process?

Some people might point out that you bring in taste and judgment. However, what stops a non-developer from showing these skills? They just have to ask AI for alternatives and pick the best solution based on their understanding.

There are still a few places where AI is not as good, especially where there is any integration, whether it involves hardware devices or multiple systems stitched together. However, this mostly covers missing bridges (i.e., AI cannot click buttons on a hardware device or check multiple systems at once). These tasks are limited. Software Engineers working in novel fields might also not feel redundant, but those people are few and far between.

This makes me lean toward "Redundant" as the answer for most dev jobs today. The only way forward seems to be moving to the next level, i.e., truly being an engineer (working with systems that do not exist yet) instead of being a mechanic or developer (working with known systems).

A Simple macOS Tool for Securely Overwriting Files (Without the CLI Headaches)

2026-04-26 07:23:08

Secure file deletion on macOS is one of those things everyone thinks they have handled — until they actually need it. Whether you're dealing with sensitive documents, logs, exports, or personal files, securely overwriting data is still a real requirement for many workflows.

The problem?
Most built‑in or third‑party solutions are either outdated, overly complex, or require command‑line steps that non‑technical users won’t touch. And many GUI tools haven’t kept up with modern macOS changes.

A colleague of mine built Overwrite Pro to solve exactly this. It’s a lightweight macOS utility that securely overwrites files using safe overwrite patterns, with a clean drag‑and‑drop interface and zero telemetry.

🔐 Why this matters
Secure deletion isn’t just for high‑security environments. It’s useful for:

  • developers handling sensitive test data
  • journalists or researchers working with confidential files
  • privacy‑focused users
  • IT admins sanitizing files before transfer
  • anyone who wants to ensure deleted files stay deleted

On macOS, simply dragging a file to the Trash doesn’t overwrite anything — it just marks the space as available. Overwrite Pro performs an actual overwrite on disk before deletion.
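
To make the distinction concrete, here is a minimal sketch of the overwrite-before-delete idea. This is not Overwrite Pro's implementation (the app uses native macOS APIs), just an illustration of the concept; note that on SSDs and copy-on-write filesystems like APFS, in-place writes are not guaranteed to land on the original blocks:

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Illustrative overwrite-before-delete: write zeros over the file's
    bytes in place, force them to disk, then unlink the file."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite past the OS cache
    os.remove(path)

# Usage: dispose of a temporary file containing sensitive bytes
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
overwrite_and_delete(tmp)
print(os.path.exists(tmp))  # False
```

Simply calling `os.remove` alone would skip the overwrite step, which is exactly the gap between the Trash and a secure delete.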

🧰 What Overwrite Pro does
  • Securely overwrites files using safe, irreversible patterns
  • Works entirely offline — no telemetry, no analytics, no external services
  • Uses native macOS APIs for reliability
  • Supports drag‑and‑drop for quick workflows
  • Has a clean, minimal UI that fits macOS
  • Runs fast and stays lightweight

🖥️ Why it was built
The goal wasn’t to create a full “security suite.” It was to build a small, focused tool that does one job well: permanently destroy files in a way that’s simple, reliable, and privacy‑respecting.

📦 App Store
If you want to check it out, here’s the link: Overwrite Pro.
Would love feedback from anyone working in security, IT, or privacy. What features would make a tool like this more useful in your workflow?

Why I Needed a Safe Way to Inspect QR Codes on iOS (and the Tool That Solved It)

2026-04-26 07:19:32

QR codes have quietly become one of the easiest ways to deliver malicious links. They show up in phishing kits, physical social‑engineering attempts, fake parking meters, restaurant menus, and even printed scam flyers. If you work in cybersecurity or DFIR, you’ve probably run into situations where you need to inspect a QR code without opening it.

The problem?
Most QR apps on iOS automatically open the link or make external requests. Many also include analytics or third‑party SDKs — not ideal when you’re handling suspicious payloads.

A colleague of mine built QR Lume, a small iOS utility designed specifically for this problem. It lets you safely inspect the raw contents of a QR code inside Apple’s sandbox, with zero telemetry and no third‑party tracking.

🔍 What it does
  • Shows the raw QR payload without opening anything
  • Runs fully inside Apple’s native sandbox
  • Makes no external requests unless you choose to
  • Contains no analytics, no tracking, no third‑party SDKs
  • Supports scanning from camera or photo library
  • Includes a hex + string viewer for deeper inspection
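
A hex + string viewer is easy to picture: offsets, hex bytes, and printable ASCII side by side. Here is a minimal Python sketch of that rendering (not QR Lume's code, just an illustration of the view it provides); the payload below is a made-up suspicious URL:

```python
def hex_string_view(payload: bytes, width: int = 16) -> str:
    """Render raw bytes as offset / hex / printable-ASCII columns — the kind
    of view a hex + string inspector shows for a decoded QR payload."""
    lines = []
    for off in range(0, len(payload), width):
        chunk = payload[off:off + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

# An illustrative decoded payload from a suspicious QR code: note the
# lookalike domain ("examp1e") and non-printable trailing bytes.
print(hex_string_view(b"https://examp1e-login.top/verify?id=\x00\x01"))
```

Seeing the bytes this way lets an analyst spot lookalike domains, embedded nulls, or binary padding before ever deciding whether to follow the link.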

🛡️ Why this matters
QR‑based phishing is growing, and mobile devices are often the weakest link. Having a safe, offline way to inspect QR data is useful for:

  • DFIR triage
  • Mobile security testing
  • Investigating suspicious physical QR codes
  • Teaching junior analysts safe inspection workflows
  • Privacy‑minded users who want to see what they’re scanning

📱 The app
If you’re curious, here’s the App Store link: QR Lume on the App Store.

Would love feedback from anyone working in mobile security, DFIR, or privacy. What features would make a tool like this even more useful in your workflow?