The Practical Developer

A constructive and inclusive social network for software developers.

I code-reviewed 3 "vibe-coded" PRs last week. Every one had hardcoded API keys. So I built a grader.

2026-04-11 11:19:17

"Vibe coding" is everywhere. You prompt an AI, it writes your whole project, you ship it.

Last week I reviewed 3 PRs from vibe-coded projects. All three had hardcoded API keys in the source. Two had no tests. One had a raw eval() on user input.

So I built vibescore.

What it does

pip install vibescore
vibescore .

One command. Letter grade from A+ to F. Four dimensions:

Category What it checks
Security Hardcoded secrets, SQL injection, eval/exec, insecure defaults
Code Quality Function length, complexity, nesting depth, type hint coverage
Dependencies Pinning, lock files, deprecated packages, known CVEs
Testing Test count vs LOC ratio, coverage setup, CI configuration
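
The Security row is the one that catches the hardcoded keys from the intro. As a rough illustration of how such a check can work (a hypothetical sketch, not vibescore's actual implementation), an AST pass can flag string constants assigned to credential-looking names:

```python
import ast
import re

# Hypothetical secret-detection sketch (NOT vibescore's real code):
# flag string constants assigned to names that look like credentials.
SECRET_NAME = re.compile(r"api[_-]?key|secret|token|passw(or)?d", re.I)

def find_hardcoded_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line, variable) pairs for suspicious string assignments."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and SECRET_NAME.search(target.id)
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, target.id))
    return findings

print(find_hardcoded_secrets('API_KEY = "sk-live-123"\nport = 8080'))
# → [(1, 'API_KEY')]
```

An AST pass like this avoids the false positives a plain grep produces on comments and docstrings.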

Example output

vibescore v0.4.0 — Project Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Security     B+  (no hardcoded secrets, 2 eval() calls found)
  Code Quality C   (4 functions >50 lines, low type hint coverage)
  Dependencies A-  (all pinned, lock file present)
  Testing      D   (3 tests for 2,400 LOC)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  OVERALL      C+
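
The per-category grades roll up into the overall letter. A minimal sketch of how a 0-100 score could map onto those letters (vibescore's real thresholds and weighting aren't documented here, so the cutoffs below are assumptions):

```python
# Hypothetical score-to-grade mapping; cutoffs are illustrative guesses,
# not vibescore's actual thresholds.
GRADE_CUTOFFS = [
    (97, "A+"), (93, "A"), (90, "A-"),
    (87, "B+"), (83, "B"), (80, "B-"),
    (77, "C+"), (73, "C"), (70, "C-"),
    (60, "D"), (0, "F"),
]

def letter_grade(score: float) -> str:
    return next(grade for cutoff, grade in GRADE_CUTOFFS if score >= cutoff)

def overall(scores: dict[str, float]) -> str:
    # Unweighted mean of the four category scores.
    return letter_grade(sum(scores.values()) / len(scores))

print(overall({"security": 88, "quality": 73, "deps": 91, "testing": 62}))
# → C+
```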

Supported languages

  • Python (AST-based analysis)
  • JavaScript/TypeScript (regex-based)
  • Rust (VC221-VC227: unwrap density, unsafe blocks, doc comments, clone detection)
  • Go (VC231-VC237: unchecked errors, goroutine leaks, naked returns, panic in library code)

Extra features

  • vibescore --init-ci — generates a GitHub Actions workflow
  • vibescore --watch — re-scans on file changes in real time
  • vibescore --dashboard — historical grade tracking (Streamlit web UI)
  • vibescore --save-history — save scan results for trend analysis
  • Zero dependencies. 201 tests.

Comparison

  • SonarQube: requires a Java server, complex setup, enterprise pricing
  • Codacy/CodeClimate: SaaS, requires account, sends code to servers
  • pylint/ruff: lint rules only, no security/testing/dependency analysis, no single grade
  • vibescore: one pip install, one command, local-only, zero deps, covers 4 dimensions with a letter grade

GitHub: github.com/stef41/vibescore
PyPI: pypi.org/project/vibescore

Feedback welcome — especially ideas for new check categories or language support.

I got mass-flagged by GPTZero for my own writing. So I built an open-source alternative in pure Python.

2026-04-11 11:18:51

Every AI text detector is either paid or closed-source.

GPTZero charges $15/month. Originality.ai charges per scan. Turnitin locks you into institutional contracts. And all of them are black boxes — when they flag your text as AI-generated, you have no idea why.

I got tired of this. Especially after GPTZero flagged my own human-written paragraphs as "98% AI."

So I built lmscan.

What it does

pip install lmscan
lmscan "paste any text here"
→ 82% AI probability, likely GPT-4

It analyzes 12 statistical features — burstiness, entropy, Zipf deviation, vocabulary richness, slop-word density — and fingerprints 9 LLM families.

No neural network. No API key. No internet. Runs in <50ms.

The detection approach

AI text is unnaturally smooth. Humans write in bursts — short punchy sentences followed by long rambling ones. LLMs produce eerily consistent sentence lengths.
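
That burstiness intuition is easy to quantify. A minimal sketch (illustrative only; this is not lmscan's actual feature extraction) measures the coefficient of variation of sentence lengths:

```python
import re
import statistics

# Illustrative burstiness sketch: coefficient of variation of sentence
# lengths. Varied human prose scores high; uniform text scores near zero.
def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = "No. Really. This one sentence just keeps going with no end in sight."
uniform = "This is a sentence. This is a sentence. This is a sentence."
# The bursty human sample scores higher than the uniform one.
```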

LLMs also have vocabulary tells:

  • GPT-4 loves "delve" and "tapestry"
  • Claude says "I think it's worth noting"
  • Llama overuses "comprehensive" and "crucial"

lmscan scores text against each family's marker set to fingerprint the source.
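
As a toy version of that scoring (the marker sets below are tiny and invented from the examples above; lmscan's real sets are far larger and weighted):

```python
import re

# Toy fingerprinting sketch: count marker-word hits per model family and
# pick the best match. Marker sets here are illustrative stand-ins.
MARKER_SETS = {
    "GPT-4": {"delve", "tapestry"},
    "Claude": {"noting", "worth"},
    "Llama": {"comprehensive", "crucial"},
}

def fingerprint(text: str) -> str:
    words = set(re.findall(r"[a-z']+", text.lower()))
    scores = {model: len(words & markers)
              for model, markers in MARKER_SETS.items()}
    return max(scores, key=scores.get)

print(fingerprint("Let's delve into this rich tapestry of ideas."))
# → GPT-4
```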

Python API

from lmscan import scan
result = scan("your text")
print(f"{result.ai_probability:.0%} AI, likely {result.fingerprint.model}")

Features

  • 12 statistical features (burstiness, entropy, Zipf deviation, hapax legomena, vocabulary richness, slop-word density, and more)
  • 9 LLM fingerprints (GPT-4, Claude, Gemini, Llama, Mistral, Qwen, DeepSeek, Cohere, Phi)
  • Multilingual support (English, French, Spanish, German, Portuguese + CJK auto-detection)
  • Batch directory scanning with --dir
  • Mixed-content paragraph analysis with --mixed
  • HTML reports with --format html
  • Streamlit web UI with pip install lmscan[web]
  • Pre-commit hook integration
  • Calibration API for tuning thresholds on your own data

Honest limitations

This is statistical analysis, not a transformer classifier. It won't catch heavily paraphrased AI text. But:

  • You can see exactly which features triggered
  • No black-box false positives
  • Calibration API lets you tune for your domain
  • 193 tests, Apache-2.0

GitHub: github.com/stef41/lmscan
PyPI: pypi.org/project/lmscan

Feedback welcome — especially on which types of text it struggles with. That helps calibrate the feature weights.

How we made MolTrust A2A v0.3 conformant

2026-04-11 11:10:53

The A2A protocol's Agent Card is how agents discover each other's capabilities. It's a JSON file at /.well-known/agent-card.json — a structured business card for your agent.

MolTrust had a minimal version. Here's what A2A v0.3 conformance looks like: 5 skills, structured capabilities, and a custom trust-score extension.

Key structural changes

  1. version means A2A protocol version ("0.3"), not API version
  2. provider is a required object with organization name
  3. capabilities is structured with extensions support
  4. skills replaces flat capabilities with queryable declarations
  5. securitySchemes follows OpenAPI 3.0 format
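
Put together, a card along these lines satisfies the structure above (the field values here are illustrative, not MolTrust's actual published card):

```json
{
  "version": "0.3",
  "provider": { "organization": "MolTrust" },
  "capabilities": {
    "extensions": [
      { "uri": "https://moltrust.ch/ext/trust-score", "required": false }
    ]
  },
  "skills": [
    { "id": "trust-score", "name": "Trust scoring", "tags": ["trust"] }
  ],
  "securitySchemes": {
    "bearer": { "type": "http", "scheme": "bearer" }
  }
}
```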

The MolTrust extension

A2A v0.3 supports custom extensions via capabilities.extensions. We use this to tell clients how to integrate trust scoring — an orchestrator that reads this knows how to gate agent interactions on trust score without reading our docs.

What's still missing

A2A has authorization schemes on its roadmap but hasn't specified them yet. We'll define how AAE tokens travel in A2A task metadata once that lands.

Try it

curl https://api.moltrust.ch/.well-known/agent-card.json | python3 -m json.tool

Full TechSpec (Section 8.8): moltrust.ch
GitHub: github.com/MoltyCel/moltrust-protocol

Improving Functional Button Design on Web Pages with CSS3

2026-04-11 10:59:42

The History and Evolution of CSS3

CSS (Cascading Style Sheets) is a language used to define the style and layout of web pages. Proposed by the W3C in 1996, CSS1 revolutionized web design by allowing page styling to be separated from HTML.

🔗 Read More

📌 Source: ForumWeb.net - Web Development Community

Building an MCP Server with Common Lisp

2026-04-11 10:55:22

I recently wanted to learn about MCP (Model Context Protocol). As someone whose default programming language is Common Lisp, I naturally decided to build an MCP server using Lisp.

Thanks to its creators, the 40ants-mcp library provides a pattern and code structure that I really like. However, I struggled significantly with installation and setup: what should have taken minutes ended up taking days.

I'm sharing my experience here so that others who want to build MCP servers in Common Lisp can get started in minutes, not days like I did.

Prerequisites

Before you begin, make sure you have the following installed:

  • SBCL - A high-performance Common Lisp compiler
  • Roswell - A Common Lisp implementation manager and script runner
  • Quicklisp - The de facto package manager for Common Lisp
  • Ultralisp - A community-driven distribution of Common Lisp libraries

Installing Ultralisp

In SBCL with Quicklisp, you can enable Ultralisp by:

(ql-dist:install-dist "http://dist.ultralisp.org/" :prompt nil)

(How to install SBCL and Quicklisp is in the appendix.)

The Gotcha: Loading 40ants-mcp

Here's the issue that cost me days: when you try to load 40ants-mcp with:

(ql:quickload :40ants-mcp)

You might encounter errors. The solution is simple but not obvious: load jsonrpc first:

(ql:quickload :jsonrpc)
(ql:quickload :40ants-mcp)

This dependency isn't automatically resolved, which was the source of my frustration.

Creating Your MCP Server

Here's a minimal example from my mcp-exper package:

(in-package :mcp-exper)

(openrpc-server:define-api (mi-tools :title "mi-tools"))

(40ants-mcp/tools:define-tool (mi-tools add) (a b)
  (:summary "just add")
  (:param a integer "a")
  (:param b integer "b")
  (:result text-content)
  (make-instance 'text-content :text (format nil "~a" (+ a b))))

(defun start-server ()
  (40ants-mcp/server/definition:start-server mi-tools))

Key points:

  1. Use openrpc-server:define-api to define your API
  2. Use 40ants-mcp/tools:define-tool to define tools
  3. Return text-content instances for text results (MCP requires specific content types)

Running the Server

Create a Roswell script (mi-mcp-server.ros):

#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros -Q -- $0 "$@"
|#
(progn
  (ros:ensure-asdf)
  #+quicklisp(ql:quickload '(:mcp-exper) :silent t))

(defun main (&rest argv)
  (declare (ignorable argv))
  (mcp-exper:start-server))

Quick Test

Run directly with Roswell:

ros mi-mcp-server.ros

Production Installation

Build and install as an executable:

ros build mi-mcp-server.ros
install -m 0755 mi-mcp-server $HOME/.local/bin/

Make sure $HOME/.local/bin is in your PATH.

Integrating with Opencode

To enable your MCP server in opencode, add this to ~/.config/opencode/opencode.json:

{
    "mcp": {
        "mi-tools": {
            "type": "local",
            "command": ["mi-mcp-server"],
            "enabled": true
        }
    }
}

Conclusion

Building MCP servers with Common Lisp is straightforward once you know the tricks. The 40ants-mcp library is well-designed, and the OpenRPC integration works smoothly.

I hope this guide saves you the days of frustration I experienced. Happy hacking!

The full source code for this example is available at mcp-exper.

Appendix: Installing SBCL

macOS

brew install sbcl

Debian/Ubuntu

apt install sbcl

Arch Linux

pacman -S sbcl

Appendix: Installing Quicklisp

Download and install Quicklisp:

wget https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
        --eval '(quicklisp-quickstart:install)' \
        --eval '(ql-util:without-prompting (ql:add-to-init-file))' \
        --quit

Why Your pip Install Output Doesn't Belong in Claude's Context

2026-04-11 10:52:29

pip install -r requirements.txt on an ML project. 200+ lines of output. Download progress for numpy, scipy, torch. Wheel-building logs for packages with C extensions. Dependency resolution warnings.

Claude reads it all. It needs one line: whether the install succeeded or failed.

Before: pip Install With Wheels

Collecting numpy==1.24.3
  Downloading numpy-1.24.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 45.2 MB/s eta 0:00:00
Collecting pandas==2.0.3
  Downloading pandas-2.0.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 52.1 MB/s eta 0:00:00
Building wheels for collected packages: tokenizers
  Building wheel for tokenizers (pyproject.toml) ... done
  Created wheel for tokenizers ... whl
Successfully installed numpy-1.24.3 pandas-2.0.3 tokenizers-0.15.0 ...

Download progress bars, wheel-building logs, hash checksums. None of this helps your AI debug your ImportError.

After: Through ContextZip

Successfully installed numpy-1.24.3 pandas-2.0.3 tokenizers-0.15.0 ...
💾 contextzip: 4,521 → 312 chars (93% saved)

93% reduction. The success/failure status is preserved; everything else is stripped.

If the install fails, ContextZip keeps the error:

ERROR: Could not find a version that satisfies the requirement torch==2.5.0
💾 contextzip: 2,103 → 287 chars (86% saved)

Errors always survive. Noise doesn't.
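
The underlying idea is simple line filtering. A rough sketch of the concept in Python (ContextZip itself is a Rust tool; this is not its code):

```python
# Conceptual sketch of ContextZip-style filtering: keep only the lines
# an AI assistant actually needs (final status, errors), drop the noise.
KEEP_PREFIXES = ("Successfully installed", "ERROR:", "WARNING:")

def compress_pip_output(output: str) -> str:
    return "\n".join(line for line in output.splitlines()
                     if line.startswith(KEEP_PREFIXES))

noisy = ("Collecting numpy==1.24.3\n"
         "  Downloading numpy-1.24.3-...whl (17.3 MB)\n"
         "Successfully installed numpy-1.24.3")
print(compress_pip_output(noisy))
# → Successfully installed numpy-1.24.3
```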

cargo install contextzip
eval "$(contextzip init)"

GitHub: github.com/contextzip/contextzip

Part of the ContextZip Daily series. Follow for daily tips on optimizing your AI coding workflow.
