2026-01-30 20:47:42
TL;DR: skillc helps you create, validate, and optimize reusable knowledge packages that make AI coding agents smarter. Whether you're building skills or consuming them, skillc gives you the tools to work confidently.
You've probably noticed this pattern:
AI agents are stateless. They don't remember your team's conventions, your favorite libraries, or that obscure API you've mastered. Every conversation starts from zero.
Agent Skills solve this by giving agents persistent, structured knowledge they can reference across sessions. A skill is just a directory with a SKILL.md file:
rust-skill/
├── SKILL.md # Metadata + instructions
└── docs/
├── patterns.md # Your coding patterns
└── gotchas.md # Common pitfalls
The agent reads your skill and applies that knowledge contextually. No more repeating yourself.
skillc is the CLI toolkit for working with Agent Skills. It handles two distinct workflows:
Creating a skill that actually helps agents requires validation. skillc provides the full authoring pipeline:
skc init → skc lint → skc build → skc stats → git push
↓ ↓ ↓ ↓ ↓
scaffold validate test locally trace usage publish
# Create a new skill
skc init rust-patterns
# Validate structure, links, and quality
skc lint rust-patterns
# Build locally and test with your agent
skc build rust-patterns
# See how agents actually use your skill
skc stats rust-patterns --group-by sections
# Publish when ready
git push origin main
The skc stats command is particularly powerful — it shows you which sections agents access most:
$ skc stats rust-patterns --group-by sections
Skill: rust-patterns
Path: /Users/dev/.skillc/skills/rust-patterns
Query: Sections
Filters: since=<none>, until=<none>, projects=<none>
Period: start=2026-01-15T09:12:33+00:00, end=2026-01-30T16:45:21+00:00
23 SKILL.md Error Handling
18 SKILL.md Iteration
12 SKILL.md Lifetimes
9 references/async.md Tokio Patterns
7 references/async.md Channel Selection
5 references/testing.md Mocking Strategies
3 references/testing.md Property Testing
Now you know: agents need more content on error handling and iteration. This data helps you optimize for real usage patterns.
Even if you're just using skills, skillc adds superpowers:
skc build → skc search → skc stats
↓ ↓ ↓
indexing find content track usage
# Install any skill
npx skills add username/awesome-skill
# Compile it to enable indexing
skc build awesome-skill
# Full-text search across all content
skc search awesome-skill "error handling"
# Track what your agents are actually reading
skc stats awesome-skill
Building a skill creates a search index, so you can quickly find relevant content without reading everything manually.
Here's a minimal but useful skill for Rust development. First, the SKILL.md frontmatter:
---
name: rust-patterns
description: "Idiomatic Rust patterns and common gotchas"
---
Then the content with practical patterns:
Error Handling section:
Always use `thiserror` for library errors and `anyhow` for applications:
// Library code
#[derive(thiserror::Error, Debug)]
pub enum MyError {
#[error("invalid input: {0}")]
InvalidInput(String),
}
// Application code
use anyhow::{Context, Result};
fn main() -> Result<()> {
do_thing().context("failed to do thing")?;
Ok(())
}
Iteration section:
Prefer iterators over manual loops:
// ❌ Avoid
let mut results = Vec::new();
for item in items {
results.push(transform(item));
}
// ✅ Prefer
let results: Vec<_> = items.iter().map(transform).collect();
After publishing, any agent with this skill installed will apply these patterns automatically.
skillc includes an MCP (Model Context Protocol) server, so agents can query skills directly. Add it to your agent's MCP configuration:
For example, in Cursor (.cursor/mcp.json):
{
"mcpServers": {
"skillc": {
"command": "skc",
"args": ["mcp"]
}
}
}
This exposes tools like skc_search, skc_show, skc_outline, and skc_stats that agents can call programmatically — no CLI needed.
# Install
cargo install skillc
# Create your first skill
skc init my-first-skill --global
# Edit SKILL.md with your knowledge
# Then validate and test
skc lint my-first-skill
skc build my-first-skill
The best skills come from real expertise: the patterns you reach for automatically, and the gotchas you learned the hard way.
Your knowledge, captured once, applied forever.
What skill would you create first? Drop a comment below!
2026-01-30 20:47:38
Imagine you've been programming since the 1980s. Maybe you were a bedroom coder on a ZX Spectrum, then had a career writing BASIC and then Visual BASIC before finally transitioning to C#. Or if you were in the USA, maybe you learned on an Apple II and then learned assembly, Pascal, C, C++ and everything that came after. Four decades of programming, three for a living.
If you are such a person, you will likely never have heard the word "microcode".
Far from being a confession of ignorance, it turns out that surprise at never having heard of it is widespread. At the very least, many who've heard the term have never looked any further.
Most professional programmers – even very senior ones – do not know about microcode in any concrete sense, for the simple reason that most never needed to.
Microcode sits below the abstraction of assembler instructions, which is usually the limit of what programmers care about. If your good old ANSI C compiled to good-looking assembler, there was no reason to look any further. Microcode was designed to be invisible and – for decades – it succeeded.
That is now changing. Security vulnerabilities, performance limits and the sheer complexity of modern processors have forced microcode into view. If you write software in 2025, you probably still don't need to understand microcode in detail; but you should know it exists, what it does and why it suddenly matters.
Every CPU implements an Instruction Set Architecture (ISA) - the set of instructions that software can use. x86, ARM, RISC-V: these are ISAs. When you write assembly language, or when a compiler generates machine code, the result is a sequence of ISA instructions.
Microcode is the layer below.
Inside the CPU, some instructions are simple enough to execute directly in hardware. An integer addition, for instance, can be wired straight through: operands in, result out - done in a single cycle.
Other instructions are however more complex. They involve multiple internal steps, conditional behaviour, memory accesses, flag updates, and corner cases. Implementing all of that in pure hardware would be expensive and inflexible.
Microcode provides an alternative. Instead of hardwiring every instruction, the CPU contains a small internal control program that orchestrates the hardware for complex operations. When the CPU encounters a microcoded instruction, it fetches a sequence of micro-operations from an internal store and executes them in order.
Think of it as firmware for the instruction decoder. The ISA defines what the CPU promises to do. Microcode defines how it actually does it.
The extent of microcode use varies by architecture: classic CISC designs like x86 lean on it heavily for their most complex instructions, while RISC designs such as ARM and RISC-V execute most instructions directly in hardwired logic.
Microcode originated in the 1950s as a way to simplify CPU design. Rather than creating custom hardware for every instruction, engineers could write microcode sequences that reused a common datapath. This made CPUs cheaper to design, easier to debug and simpler (and therefore cheaper) to modify.
By the 1960s, microcode had become central to computer architecture. IBM's System/360, launched in 1964, used microcode extensively. This allowed IBM to sell machines with different performance characteristics – different hardware implementations – while maintaining a single ISA across the product line. Software written for one System/360 model would run on another. Microcode made that possible. It was a big deal.
The pattern persists. x86 has survived for over forty years partly because microcode allows Intel and AMD to implement the same ancient instructions on radically different internal architectures. The 8086 of 1978 and a modern Zen 5 core both execute REP MOVSB. The microcode behind that instruction has been rewritten many times.
Modern microcode also serves as a post-silicon patching mechanism. Once a chip is fabricated, the silicon cannot be changed; but microcode can be updated. Operating systems and firmware routinely load microcode patches at boot time to fix bugs, close security holes and adjust behaviour. The physical chip stays the same; the control logic changes.
Consider the x86 instruction REP MOVSB. In assembly, it looks like a single operation:
REP MOVSB
The architectural specification says: copy ECX bytes from the address in RSI to the address in RDI, incrementing both pointers and decrementing ECX with each byte, until ECX reaches zero.
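If that contract were written as ordinary software, it would be nothing more than a loop. A minimal sketch, with memory modelled as a byte array and registers as plain integers:
// The architectural contract of REP MOVSB as a plain loop (illustrative only)
function repMovsb(memory, rsi, rdi, ecx) {
  while (ecx !== 0) {
    memory[rdi] = memory[rsi]; // copy one byte from [RSI] to [RDI]
    rsi += 1;
    rdi += 1;
    ecx -= 1;
  }
  return { rsi, rdi, ecx };
}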
That is a lot of work for "one instruction." Internally, it involves:
- two memory accesses per byte, each requiring address translation
- updates to three registers on every step
- a termination check on every step
- correct behaviour if an interrupt or page fault arrives partway through the copy
None of this is visible at the ISA level. The programmer sees one instruction. The CPU sees a microcode sequence, something like:
loop:
load byte from [RSI]
store byte to [RDI]
RSI++
RDI++
ECX--
if ECX != 0, jump loop
Modern implementations are more sophisticated – they may copy multiple bytes per iteration, use vector registers, or special-case aligned transfers – but the principle holds. Microcode makes the architectural fiction of "one instruction" hold together.
If microcode has existed since the 1950s, why have most programmers never heard of it?
Three reasons.
First, microcode's place in the abstraction stack is awkward. Programming education typically covers high-level languages, then perhaps assembly, then maybe pipelines, caches and branch prediction. Microcode sits below the ISA but above transistors – a layer that courses tend to mention briefly, if at all, then move past.
Second, microcode is intentionally invisible. CPU vendors treat it as proprietary. Intel and AMD do not publish microcode documentation. You cannot call microcode from software. You cannot observe it in a debugger. You cannot disassemble it (legally, at least). If something is undocumented, inaccessible and unobservable, it tends to disappear from working knowledge. Its low level of recognition is a sign of its success.
Third, for most of computing history, microcode simply did not matter for application programming. Performance problems were algorithmic and bugs were logical. Portability issues lived in languages and operating systems. The hardware was a black box that honoured its documented interface and that was sufficient.
Microcode only intrudes when:
- a CPU erratum changes documented behaviour, and a microcode update fixes or alters it
- a security flaw lives below the ISA, as with Spectre and Meltdown
- performance tuning runs into microarchitectural details the ISA is designed to hide
For most programmers, those situations never arose.
Here is an odd fact: microcode was more widely discussed in the 1960s and 1970s than in the 1990s and 2000s.
IBM's System/360 made microcode famous. DEC used it heavily in the PDP-11 and VAX lines. Some machines – Xerox Alto, certain Burroughs systems – even exposed writable microcode, allowing users to define new instructions. Dangerous, but fascinating. Malware authors can only dream.
Then the RISC (Reduced Instruction Set Computing) revolution arrived, promising that simple instructions, executed fast by hardwired logic, would outperform complex microcoded ones. The slogan was "hardwire everything", and microcode was derided as a relic of the CISC (Complex Instruction Set Computing) past.
Behind the rhetoric lay genuine engineering reality. Early RISC machines – MIPS, SPARC, early ARM – were indeed largely hardwired, and performance improved. The argument seemed vindicated.
But x86 survived. Intel and AMD responded not by abandoning microcode but by hiding it more effectively. Modern x86 chips translate complex ISA instructions into internal micro-operations, execute those out of order across multiple pipelines and present the illusion of sequential execution. The microcode is still there. It is just buried under so many layers of complexity that even CPU architects sometimes struggle to explain exactly what is happening.
Meanwhile, the 1980s home computer generation – people who learned on the ZX Spectrum, Commodore 64, BBC Micro, Apple II – grew up with machines that were either hardwired (the 6502) or used microcode invisibly (the Z80). The 6502 famously had no microcode at all; its control logic was hand-drawn. The Z80 did use microcode internally, but this was entirely invisible to programmers and irrelevant to how you wrote software. Either way, there was nothing to notice so nothing to know about.
A whole generation of programmers came up without ever needing to know.
In January 2018, the Spectre and Meltdown vulnerabilities became public. These were not software bugs at all, but flaws in how modern CPUs speculatively execute instructions – flaws that allowed attackers to read memory they should not have been able to access.
The response involved operating system patches, compiler changes and – famously – microcode updates.
Intel, AMD and ARM shipped new microcode that:
- added new speculation controls (on x86, the IBRS, IBPB and STIBP interfaces)
- changed the behaviour of existing instructions where speculation could leak data
- gave operating systems new knobs for trading performance against safety
Without changing the silicon of chips already in computers around the world, microcode updates changed their behaviour.
This made microcode visible in a way it had not been for decades. "We fixed the CPU with a software update" is a sentence that only makes sense if you understand that CPU behaviour is partly defined by mutable control logic.
In the years after Spectre and Meltdown there were many more such incidents:
- Foreshadow/L1TF (2018)
- the Microarchitectural Data Sampling family – ZombieLoad, RIDL, Fallout (2019)
- Retbleed (2022)
- Zenbleed and Downfall (2023)
Each required microcode mitigations, and each exposed more of the gap between architectural promises and microarchitectural reality.
The traditional conception is that hardware is fixed and software is mutable: you design a chip, fabricate it and its behaviour is set; software runs on top and can be changed at will.
But microcode means a CPU is not a fixed hardware object. Its behaviour is determined in three layers:
- the silicon itself, fixed at fabrication
- the microcode stored in on-chip ROM at manufacture
- microcode patches loaded at boot by firmware or the operating system
This continues the working model of 1960s mainframes – but the security implications are new: mutable microcode is an attack surface, and when microcode defines security boundaries, microcode bugs become security vulnerabilities.
CPU vendors now publish microcode updates regularly. Linux distributions ship them and Windows Update delivers them. Your BIOS may load them before the operating system even starts. The CPU you are using now is not quite the CPU you bought.
This is a natural consequence of complexity. Modern CPUs are so complex – billions of transistors, speculative execution, out-of-order pipelines, multiple cache levels, simultaneous multithreading – that getting everything right in silicon is perhaps now impossible. Microcode provides a route to fixes without unpopular hardware replacements: a way to correct mistakes after the fact, to adjust trade-offs and to respond to threats that were not anticipated during design.
So if you have been programming for decades and only recently learned about microcode, that does not indicate a gap in your education or a failure of curiosity. It means you worked above an abstraction boundary that mostly held.
This is how successful design manifests. Abstraction exists so that programmers can ignore lower layers. For most of computing history, ignoring microcode was the correct choice: it let you focus on problems that actually mattered for your work.
We are, however, now in a transition where hardware is no longer fixed but patchable. Not fully – most programmers still do not need to understand microcode in detail – but enough that awareness matters.
Microcode was always there; most of us simply did not need to know. The boundary between "software" and "hardware" is more porous than programmers came to believe, though for practical purposes it held for decades. Security research, performance engineering and the sheer complexity of modern processors have now eroded it, and sometimes we need to understand where one ends and the other begins.
If you write software that cares about security, performance or correctness at the edges, you should know that:
- the CPU's behaviour can change underneath you via microcode updates
- the ISA is an interface, not a description of the hardware
- some security boundaries are enforced by mutable control logic, not fixed silicon
The illusion of a fixed ISA is still useful. But there is a lot going on beneath it that you should know exists.
2026-01-30 20:44:31
This is my submission for the GitHub Copilot CLI Challenge
I created Ops Whisperer, a friendly AI-powered CLI tool that turns your natural language ideas into ready-to-use infrastructure code.
Let's be real: even seasoned engineers sometimes freeze trying to recall the exact YAML structure for a Kubernetes Ingress, the right multi-stage build in a Dockerfile, or the proper Terraform resource syntax. We end up Googling, copy-pasting from old projects, or fighting mysterious indentation bugs.
Ops Whisperer fixes that pain. Just describe what you want in plain English — and it instantly generates clean, production-ready configuration files for you, right in your terminal.
It includes a built-in Safety Rail system (heavily assisted by Copilot) that scans output and blocks truly dangerous patterns — no accidental rm -rf / surprises.
npm install -g ops-whisperer
Building an AI CLI tool using another AI CLI tool felt wonderfully meta 😄.
The hardest parts of CLI development are rarely the core logic — it's all the surrounding glue: argument parsing, proper exit codes, colored output, error handling, streams… you name it.
GitHub Copilot CLI (gh copilot) became my always-available terminal pair programmer. No more context-switching to browser tabs or docs — everything stayed in the flow.
Here’s how it supercharged my workflow:
Setting up a modern Node.js CLI with ESM, commander, and inquirer usually eats 15–30 minutes of boilerplate hunting.
I just asked:
gh copilot suggest "create a node.js cli entry point using commander and inquirer with ESM imports"
Boom — perfect program.version(), .option(), .parse() structure, correct ESM config, the works. Ready in under 2 minutes.
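The result looked roughly like this. The package names (commander, inquirer) are real; the command name, options and structure below are an illustrative sketch, not the exact generated code:
#!/usr/bin/env node
// Minimal sketch of a commander + inquirer ESM entry point (illustrative)
import { Command } from "commander";
import inquirer from "inquirer";

const program = new Command();

program
  .name("ops")
  .version("0.1.0")
  .argument("[prompt...]", "describe the config you want in plain English")
  .option("-o, --out <file>", "write the generated config to a file")
  .action(async (promptWords, options) => {
    let prompt = promptWords.join(" ");
    if (!prompt) {
      // Fall back to an interactive prompt when no argument is given
      ({ prompt } = await inquirer.prompt([
        { type: "input", name: "prompt", message: "What should I generate?" },
      ]));
    }
    console.log(`Generating config for: ${prompt}`, options);
  });

program.parse();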
Blocking dangerous commands (rm -rf, mkfs, fork bombs, device writes, etc.) reliably is tricky — regex hell.
I asked Copilot to help:
gh copilot suggest "javascript regex to detect dangerous linux commands like rm -rf or mkfs or > /dev/sda"
It gave me a solid, explainable pattern I could trust and extend. Huge time-saver and peace of mind.
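For the curious, the safety check ended up along these lines. This is a simplified sketch of the kind of pattern Copilot produced, not the full list shipped in the tool:
// Sketch: block a few well-known destructive command shapes (simplified)
const DANGEROUS = [
  /\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\s+\/(\s|$)/, // rm -rf /
  /\bmkfs(\.\w+)?\b/,                                                // mkfs, mkfs.ext4, ...
  /:\s*\(\)\s*\{\s*:\s*\|\s*:\s*&\s*\}\s*;?\s*:/,                    // classic fork bomb :(){ :|:& };:
  />\s*\/dev\/sd[a-z]\b/,                                            // raw write to a block device
];

function isDangerous(command) {
  return DANGEROUS.some((re) => re.test(command));
}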
Hit a weird execa stdio error during interactive prompts?
Piped it straight to:
gh copilot explain "Error: stdio must be of type ..."
Instant, clear explanation + suggested fix. Bug squashed in minutes instead of 30+ minutes of Googling.
ora spinner + chalk color chaining for that premium feel.
Bottom line: Copilot CLI didn't just autocomplete code — it acted as on-demand docs expert, regex wizard, and logic validator. It let me spend way more time on the interesting part (the AI whisperer logic) and way less on Node.js CLI ceremony.
Thanks to GitHub Copilot CLI, I stayed deep in flow and shipped faster.
Would love any feedback or ideas — feel free to star the repo or open issues! 🚀
2026-01-30 20:44:19
Ever wondered how "unbreakable" AI safety filters actually are?
As developers, we’re often told that state-of-the-art multimodal models like Grok 4, Gemini Nano Banana Pro, and Seedance 4.5 have ironclad guardrails. They are supposed to be aligned, safe, and resistant to malicious prompts. However, recent research from NeuralTrust has uncovered a fundamental, systemic flaw in how these models handle complex, multi-stage instructions.
They call it Semantic Chaining, and it’s not just a theoretical exploit—it’s a functional, successfully tested method that offers a fascinating, and alarming, look into the "blind spots" of multimodal AI security.
Most AI safety filters are reactive and keyword-based. They scan your prompt for "bad words" or "forbidden concepts." If you issue a single, overtly harmful instruction, the model's guardrails trigger, and it responds with a refusal.
Semantic Chaining is an adversarial prompting technique that weaponizes the model's own inferential reasoning and compositional abilities against its safety guardrails. It bypasses the block by breaking a forbidden request into a series of seemingly innocent, "safe" steps. Instead of one big, problematic prompt, the attacker provides a chain of incremental edits that gradually lead the model to the prohibited result.
The core vulnerability is that the model gets so focused on the logic of the modification, the task of substitution and composition, that its safety layers fail to track the latent intent across the entire instruction chain.
The researchers identified a specific, highly effective four-step recipe that consistently tricks these advanced multimodal models:
While generating controversial images is concerning, the most dangerous aspect of Semantic Chaining is its ability to bypass text-based safety filters via Text-in-Image rendering.
Standard LLMs are trained to refuse to provide text instructions on sensitive topics in a chat response. However, using Semantic Chaining, researchers successfully forced these models to render that same prohibited text inside generated images: complete instructions, drawn into the picture rather than typed into the chat.
This effectively turns the image generation engine into a complete bypass for the model's entire text-safety alignment. The safety filters are looking for "bad words" in the chat output, but they are completely blind to the "bad words" being drawn pixel-by-pixel into the generated image.
This technique is effective because the safety architecture of these advanced models is reactive and fragmented.
| Component | Function | Semantic Chaining Blind Spot |
|---|---|---|
| Reasoning Engine | Focuses on task completion, substitution, and composition. | Executes the multi-step logic without re-evaluating the final intent. |
| Safety Layer | Scans the surface-level text of each individual prompt. | Lacks the memory or reasoning depth to track the latent intent across the entire conversational history. |
| Output Filter | Checks the final text response for policy violations. | Is blind to the content rendered inside the generated image. |
The harmful intent is so thoroughly obfuscated through the chain of edits that the output filter fails to trigger. The safety systems do not have the capability to track the contextual intent that evolves over multiple turns, allowing the model to be "boiled like a frog", slowly nudged into violating its own rules.
If you are building applications on top of multimodal LLMs, this research is a critical wake-up call. Relying solely on the model provider's internal safety filters is no longer sufficient.
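One practical pattern is to moderate the accumulated request, not just the latest turn. A minimal sketch of that idea — here `moderate` and `applyEdit` are placeholders for whatever moderation endpoint and generation call your stack uses:
// External governance sketch: re-check the *cumulative* instruction chain
// before executing each new edit, instead of screening turns in isolation.
async function guardedEdit(history, newInstruction, moderate, applyEdit) {
  const cumulative = [...history, newInstruction].join("\n");
  const verdict = await moderate(cumulative); // placeholder moderation call
  if (verdict.flagged) {
    throw new Error("Rejected: cumulative intent of the chain violates policy");
  }
  history.push(newInstruction);
  return applyEdit(newInstruction); // placeholder generation/edit call
}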
The cat-and-mouse game between attackers and AI safety researchers is accelerating. As developers, we must assume that any model-side safety can be bypassed and build robust, external governance layers to protect our applications and users.
What do you think? Have you encountered similar multi-turn exploits in your LLM development? Is the future of AI security external governance, or can model-side alignment catch up? Let's discuss in the comments!
2026-01-30 20:38:29
This is a draft write-up from 2019, documenting a CAPTCHA bypass technique I discovered back then. All code and images shown are examples for educational purposes.
One day back in 2019, I got tired of repeatedly logging into my school's student system just to enroll in a full class. Then a lightbulb went off in my head: I could automate these attempts and refocus on my actual work.
When it comes to automating website processes, I love using user scripts.
Our school system expired user sessions after a while, even if you were actively working. So first, I needed to automate the login process. Once successfully logged in, navigating and enrolling in a class would be straightforward (just some click event magic).
The school system had an internally managed CAPTCHA service. It displayed two numbers and asked for their sum. The images were slightly corrupted—but not enough to prevent OCR. However, I didn't want to rely on any API or image processing system to solve this CAPTCHA.
After deciding not to use any image-related service, I started inspecting the network traffic of the login system. I noticed that the answer to the generated CAPTCHA was stored server-side in a session associated with me.
When I clicked the audio playback button, I realized it was reading out the answer directly. Two different endpoints returned two different audio files, split into tens and ones digits.
The audio system seemed like a perfect route for me. I had no intention of feeding these audio files into a speech-to-text service—I was looking for the fastest, hackiest solution possible.
Since the same audio files should have the same file size, I decided to test this hypothesis first.
And bingo! By mapping each tens and ones digit to their file sizes beforehand, I could automatically determine the answer.
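Building that mapping was a one-time calibration job: refresh the CAPTCHA repeatedly, fetch both audio files, and record the content lengths. A sketch of that step (the refresh endpoint here is illustrative; the audio paths match the example code below):
// Example: one-time calibration to collect the distinct audio file sizes.
// Each size was then labeled by listening to one sample of it.
async function collectSizes(samples = 50) {
  const seen = { tens: new Set(), ones: new Set() };
  for (let i = 0; i < samples; i++) {
    await fetch('/captcha/refresh'); // illustrative: force a new CAPTCHA
    for (const part of ['tens', 'ones']) {
      const res = await fetch(`/captcha/audio/${part}`);
      seen[part].add(parseInt(res.headers.get('content-length')));
    }
  }
  console.log(seen);
}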
Here's an example of how the code worked:
// Example mapping of file sizes to digit values
const tensMap = {
1234: 0, // 0 tens
1456: 10, // 1 ten
1567: 20, // 2 tens
// ... etc
};
const onesMap = {
987: 0, // 0 ones
1023: 1, // 1 one
1145: 2, // 2 ones
// ... etc
};
// Example: Fetch audio files and determine answer by file size
async function solveCaptcha() {
const tensResponse = await fetch('/captcha/audio/tens');
const onesResponse = await fetch('/captcha/audio/ones');
const tensSize = parseInt(tensResponse.headers.get('content-length'));
const onesSize = parseInt(onesResponse.headers.get('content-length'));
const tensValue = tensMap[tensSize] || 0;
const onesValue = onesMap[onesSize] || 0;
return tensValue + onesValue;
}
// Use it in the login flow
const answer = await solveCaptcha();
document.querySelector('#captcha-input').value = answer;
document.querySelector('#login-button').click();
When I tested this approach, it worked perfectly!
This experience taught me a few things:
- metadata such as file size can leak as much as the content itself
- the hackiest solution is sometimes the fastest and most reliable one
- a CAPTCHA is only as strong as its weakest channel, which here was the audio fallback
This technique worked because the audio files were pre-generated and static. A more secure implementation would generate unique audio files or add random noise to prevent size-based fingerprinting.
2026-01-30 20:37:15
Learning Git was never such fun. Let's simplify the version-control journey. Version control is one of those things everyone uses, but not everyone truly understands—especially how it fits into real DevOps workflows.
I spent time working deeply with Git, GitHub, and team-style workflows, and this week honestly changed how I look at source control. Git wasn’t about memorizing commands—it was about understanding why things are done a certain way in real projects.
From “Just Files” to Real Version Control
It all starts by setting up a small project on your local system. At first glance, it was just another folder—but the moment Git was initialized, that folder gained memory. How? With a single command:
`git init`
Every change, every experiment, every rollback suddenly became traceable. That’s when it really hit me:
Git isn’t about commands—it’s about accountability and confidence.
We use the `git status` command to trace changes. Say we create a new file, for example index.html—but how will Git know it belongs to the project? We stage it with `git add index.html`.
Now Git is tracking the file, but what about the pals who work with you on the same project? We record the change, with a message everyone can understand, using `git commit -m "Here goes the message"`.
I also worked with both local and global Git identities, which made me realize how important author attribution is in professional environments, especially when multiple teams and repositories are involved. We add our name and email in the config so everyone knows who made each commit—now you know where the accountability comes from!
The commands used for setting up are :
`git config --local user.name "username"`
`git config --local user.email "you@example.com"`
Suddenly we want to add a new file to the project, so we create it—but this time in a different way. Git is like a tree: master/main is the trunk, and we create a new branch and later merge it, because different people are involved and we need to ensure the main project is not affected. Imagine it's a production environment, where a slight change in the code can bring the application down.
So we create a new branch for the new file (let's say it's a contact-page feature):
`git checkout -b feature/contact-page`
We then make the required changes and merge them back. We can use these commands:
git checkout master
git merge feature/contact-page
So now we have created a new branch, made the required edits and merged it back into main (the tree trunk). This is the simplest way to understand Git.
But what about the enormous amount of code out on the internet? Aren't those like roots spread all over the earth? Imagine you want to build on someone else's—or another organization's—repository (repo for short). We fork it, which copies the code into our own GitHub account.
GitHub is the example here, the enormous pool of code. You can fork any public repo simply by pressing the 'Fork' button.
Now we have a copy in our own account. Imagine you are a developer in an xyz organization with a repo you are required to fork, clone and then update. Say you and a friend are working on two different pages (branches) and report to a project lead: each of you works on your part of the code, makes changes as required, and then creates a pull request. By creating a pull request you are asking your lead to review the code.
But here's where security comes in: we have to set up a secure connection from your local machine to the organization's account.
So we generate a PAT (Personal Access Token)—an access token that lets you pass through the authentication checks.
Later we check the connectivity with `ssh -T git@github.com`.
These are the commands to set the origin URL and add an upstream URL (the URLs here are placeholders for your fork and the original repo):
`git remote set-url origin <your-fork-url>`
`git remote add upstream <original-repo-url>`
You can verify the origin and upstream by the command as :
git remote -v
Once everything is set for merging and review of your code, you can push it with:
`git push -u origin nameofthebranch`
You can then create a pull request by going to your fork on GitHub and clicking 'Compare & pull request'. Ensure the PR target is correct and create the pull request. This is the general flow.
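Putting it all together, the whole fork-to-PR flow looks like this (URLs and names are placeholders):
git clone <your-fork-url>
cd <repo-name>
git remote add upstream <original-repo-url>
git checkout -b feature/contact-page
# ...edit files...
git add .
git commit -m "Add contact page"
git push -u origin feature/contact-page
# then open the pull request on GitHub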
P.S. This post is a part of DevOps Micro Internship (DMIByPravinMishra) by
If you’d like to join cohort 3, join the Discord community here: https://discord.com/invite/GKUuEaknTG