2026-04-11 11:19:17
"Vibe coding" is everywhere. You prompt an AI, it writes your whole project, you ship it.
Last week I reviewed 3 PRs from vibe-coded projects. All three had hardcoded API keys in the source. Two had no tests. One had a raw eval() on user input.
So I built vibescore.
pip install vibescore
vibescore .
One command. Letter grade from A+ to F. Four dimensions:
| Category | What it checks |
|---|---|
| Security | Hardcoded secrets, SQL injection, eval/exec, insecure defaults |
| Code Quality | Function length, complexity, nesting depth, type hint coverage |
| Dependencies | Pinning, lock files, deprecated packages, known CVEs |
| Testing | Test count vs LOC ratio, coverage setup, CI configuration |
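As a sketch of what one check in the Security row might look like, a hardcoded-secret scan can be a handful of regexes over each line. The patterns and scoring below are illustrative, not vibescore's actual rules:

```python
import re

# Illustrative secret patterns -- not vibescore's real rule set.
SECRET_PATTERNS = [
    # key = "long-opaque-value" style assignments
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    # AWS access key ID shape
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

code = 'API_KEY = "sk-live-abcdef1234567890"\nx = 1\n'
print(find_secrets(code))  # one hit, on line 1
```

Real scanners add entropy checks and allowlists on top of this to cut false positives, but the line-by-line pattern pass is the core idea.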
vibescore v0.4.0 — Project Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Security B+ (no hardcoded secrets, 2 eval() calls found)
Code Quality C (4 functions >50 lines, low type hint coverage)
Dependencies A- (all pinned, lock file present)
Testing D (3 tests for 2,400 LOC)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
OVERALL C+
- vibescore --init-ci — generates a GitHub Actions workflow
- vibescore --watch — re-scans on file changes in real time
- vibescore --dashboard — historical grade tracking (Streamlit web UI)
- vibescore --save-history — save scan results for trend analysis

GitHub: github.com/stef41/vibescore
PyPI: pypi.org/project/vibescore
Feedback welcome — especially ideas for new check categories or language support.
2026-04-11 11:18:51
Every AI text detector is either paid or closed-source.
GPTZero charges $15/month. Originality.ai charges per scan. Turnitin locks you into institutional contracts. And all of them are black boxes — when they flag your text as AI-generated, you have no idea why.
I got tired of this. Especially after GPTZero flagged my own human-written paragraphs as "98% AI."
So I built lmscan.
pip install lmscan
lmscan "paste any text here"
→ 82% AI probability, likely GPT-4
It analyzes 12 statistical features — burstiness, entropy, Zipf deviation, vocabulary richness, slop-word density — and fingerprints 9 LLM families.
No neural network. No API key. No internet. Runs in <50ms.
AI text is unnaturally smooth. Humans write in bursts — short punchy sentences followed by long rambling ones. LLMs produce eerily consistent sentence lengths.
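That burstiness signal can be sketched in a few lines. This is an illustrative statistic (the coefficient of variation of sentence length in words), not lmscan's actual formula:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Higher = more variation in sentence length (more 'human' rhythm)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev relative to the mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = "No. Absolutely not. But then again, after thinking it over for a long while, I changed my mind entirely."
even = "The cat sat on the mat today. The dog ran in the park today. The bird flew over the house today."
print(burstiness(human) > burstiness(even))  # True
```

Uniform sentence lengths drive the score toward zero, which is exactly the "eerily consistent" pattern described above.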
LLMs also have vocabulary tells:
lmscan scores text against each family's marker set to fingerprint the source.
from lmscan import scan
result = scan("your text")
print(f"{result.ai_probability:.0%} AI, likely {result.fingerprint.model}")
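The family-fingerprinting idea sketched above amounts to counting marker hits per family. The marker words and scoring here are invented for illustration; lmscan's real marker sets are larger and weighted:

```python
# Hypothetical marker sets -- illustrative only, not lmscan's real data.
MARKERS = {
    "gpt-4":  {"delve", "tapestry", "multifaceted", "underscores"},
    "claude": {"certainly", "nuanced", "it's worth noting"},
}

def fingerprint(text: str) -> str:
    """Return the family whose marker set matches the text most often."""
    lowered = text.lower()
    scores = {
        family: sum(lowered.count(marker) for marker in markers)
        for family, markers in MARKERS.items()
    }
    return max(scores, key=scores.get)

print(fingerprint("Let us delve into this multifaceted tapestry."))  # gpt-4
```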
- --dir
- --mixed
- --format html
pip install lmscan[web]
This is statistical analysis, not a transformer classifier. It won't catch heavily paraphrased AI text. But:
GitHub: github.com/stef41/lmscan
PyPI: pypi.org/project/lmscan
Feedback welcome — especially on which types of text it struggles with. That helps calibrate the feature weights.
2026-04-11 11:10:53
The A2A protocol's Agent Card is how agents discover each other's capabilities. It's a JSON file at /.well-known/agent-card.json — a structured business card for your agent.
MolTrust had a minimal version. Here's what an A2A v0.3-conformant card looks like: 5 skills, structured capabilities, and a custom trust-score extension.
- version means the A2A protocol version ("0.3"), not the API version
- provider is a required object with the organization name
- capabilities is structured, with extensions support
- skills replaces flat capabilities with queryable declarations
- securitySchemes follows the OpenAPI 3.0 format

A2A v0.3 supports custom extensions via capabilities.extensions. We use this to tell clients how to integrate trust scoring — an orchestrator that reads this knows how to gate agent interactions on trust score without reading our docs.
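A trimmed sketch of what such a card can contain, following the fields listed above. The values, extension URI, and single skill shown are illustrative stand-ins, not MolTrust's actual card:

```json
{
  "version": "0.3",
  "name": "MolTrust",
  "provider": { "organization": "MoltyCel" },
  "capabilities": {
    "extensions": [
      { "uri": "https://moltrust.ch/ext/trust-score/v1", "required": false }
    ]
  },
  "skills": [
    { "id": "trust-score", "name": "Trust score lookup", "tags": ["trust"] }
  ],
  "securitySchemes": {
    "bearer": { "type": "http", "scheme": "bearer" }
  }
}
```

The curl command below fetches the real card for comparison.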
A2A has authorization schemes on its roadmap but hasn't specified them yet. We'll define how AAE tokens travel in A2A task metadata once that lands.
curl https://api.moltrust.ch/.well-known/agent-card.json | python3 -m json.tool
Full TechSpec (Section 8.8): moltrust.ch
GitHub: github.com/MoltyCel/moltrust-protocol
2026-04-11 10:59:42
The History and Evolution of CSS3
CSS (Cascading Style Sheets) is a language used to define the style and layout of web pages. CSS1, proposed by the W3C in 1996, revolutionized web design by making it possible to separate page styling from HTML.
📌 Source: ForumWeb.net - Web Development Community
2026-04-11 10:55:22
I recently wanted to learn about MCP (Model Context Protocol). As someone whose default programming language is Common Lisp, I naturally decided to build an MCP server using Lisp.
Thanks to the creators of 40ants-mcp, the library provides a nice pattern and code structure that I really like. However, I struggled significantly with installation and getting started. What should have taken minutes ended up taking days.
I'm sharing my experience here so that others who want to build MCP servers in Common Lisp can get started in minutes, not days like I did.
Before you begin, make sure you have the following installed:
In SBCL with Quicklisp, you can enable Ultralisp by evaluating:
(ql-dist:install-dist "http://dist.ultralisp.org/" :prompt nil)
(How to install SBCL and Quicklisp is in the appendix.)
Here's the issue that cost me days: when you try to load 40ants-mcp with:
(ql:quickload :40ants-mcp)
You might encounter errors. The solution is simple but not obvious—load jsonrpc first:
(ql:quickload :jsonrpc)
(ql:quickload :40ants-mcp)
This dependency isn't automatically resolved, which was the source of my frustration.
Here's a minimal example from my mcp-exper package:
(in-package :mcp-exper)

(openrpc-server:define-api (mi-tools :title "mi-tools"))

(40ants-mcp/tools:define-tool (mi-tools add) (a b)
  (:summary "just add")
  (:param a integer "a")
  (:param b integer "b")
  (:result text-content)
  (make-instance 'text-content :text (format nil "~a" (+ a b))))

(defun start-server ()
  (40ants-mcp/server/definition:start-server mi-tools))
Key points:
- openrpc-server:define-api to define your API
- 40ants-mcp/tools:define-tool to define tools
- text-content instances for text results (MCP requires specific content types)

Create a Roswell script (mi-mcp-server.ros):
#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros -Q -- $0 "$@"
|#
(progn
  (ros:ensure-asdf)
  #+quicklisp(ql:quickload '(:mcp-exper) :silent t))
(defun main (&rest argv)
  (declare (ignorable argv))
  (mcp-exper:start-server))
Run directly with Roswell:
ros mi-mcp-server.ros
Build and install as an executable:
ros build mi-mcp-server.ros
install -m 0755 mi-mcp-server $HOME/.local/bin/
Make sure $HOME/.local/bin is in your PATH.
To enable your MCP server in opencode, add this to ~/.config/opencode/opencode.json:
{
  "mcp": {
    "mi-tools": {
      "type": "local",
      "command": ["mi-mcp-server"],
      "enabled": true
    }
  }
}
Building MCP servers with Common Lisp is straightforward once you know the tricks. The 40ants-mcp library is well-designed, and the OpenRPC integration works smoothly.
I hope this guide saves you the days of frustration I experienced. Happy hacking!
The full source code for this example is available at mcp-exper.
Install SBCL with your system's package manager:
brew install sbcl      # macOS
apt install sbcl       # Debian/Ubuntu
pacman -S sbcl         # Arch
Download and install Quicklisp:
wget https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
--eval '(quicklisp-quickstart:install)' \
--eval '(ql-util:without-prompting (ql:add-to-init-file))' \
--quit
2026-04-11 10:52:29
Run pip install -r requirements.txt on an ML project and you get 200+ lines of output: download progress for numpy, scipy, torch; wheel-building logs for packages with C extensions; dependency resolution warnings.
Claude reads it all. It needs one line: whether the install succeeded or failed.
Collecting numpy==1.24.3
Downloading numpy-1.24.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 45.2 MB/s eta 0:00:00
Collecting pandas==2.0.3
Downloading pandas-2.0.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 52.1 MB/s eta 0:00:00
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (pyproject.toml) ... done
Created wheel for tokenizers ... whl
Successfully installed numpy-1.24.3 pandas-2.0.3 tokenizers-0.15.0 ...
Download progress bars, wheel-building logs, hash checksums. None of this helps your AI debug your ImportError.
Successfully installed numpy-1.24.3 pandas-2.0.3 tokenizers-0.15.0 ...
💾 contextzip: 4,521 → 312 chars (93% saved)
93% reduction. The success/failure status is preserved; everything else is stripped.
If the install fails, ContextZip keeps the error:
ERROR: Could not find a version that satisfies the requirement torch==2.5.0
💾 contextzip: 2,103 → 287 chars (86% saved)
Errors always survive. Noise doesn't.
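The filtering idea can be sketched in a few lines of Python. The patterns below are illustrative; ContextZip's real rules are more involved:

```python
import re

# Keep the final status line and any errors/warnings; drop download
# progress, wheel-building logs, and other noise. Illustrative patterns only.
KEEP = re.compile(r"^(Successfully installed|ERROR:|WARNING:)")

def compress_pip_output(raw: str) -> str:
    kept = [line for line in raw.splitlines() if KEEP.match(line)]
    return "\n".join(kept)

raw = """Collecting numpy==1.24.3
  Downloading numpy-1.24.3-...whl (17.3 MB)
Building wheels for collected packages: tokenizers
Successfully installed numpy-1.24.3 tokenizers-0.15.0"""
print(compress_pip_output(raw))  # Successfully installed numpy-1.24.3 tokenizers-0.15.0
```

Because errors match the keep-list rather than the drop-list, a failed install always surfaces its ERROR line.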
cargo install contextzip
eval "$(contextzip init)"
GitHub: github.com/contextzip/contextzip
Part of the ContextZip Daily series. Follow for daily tips on optimizing your AI coding workflow.