2025-12-11 14:06:45
Hi, I’m JJ (yuasa), a security engineer.
In this post, I’ll try out Threat Thinker, an automated threat modeling tool that uses LLMs, on several different systems. From an AWS-based system to a smart home, we’ll see how an LLM surfaces threats from architecture diagrams. My goal is to give you a feel for what LLM-based threat modeling looks like, with real outputs included.
https://github.com/melonattacker/threat-thinker
Threat Thinker is a tool that performs automatic threat modeling from system architecture diagrams using an LLM. It can parse various diagram formats such as Mermaid, draw.io, screenshots of architecture diagrams, and OWASP Threat Dragon. From the relationships between components, it infers potential threats.
In traditional threat modeling, once you have an architecture diagram, developers and security engineers have to manually go through and identify threats one by one. In my experience, threats broadly fall into two types: "basic threats" that appear in almost any system with a similar architecture, and "system-specific threats" that depend on the business logic and how the system is operated.
By using Threat Thinker, you can automate the initial identification of those “basic threats.” Humans can then focus on deeper analysis of “system-specific threats” and on designing countermeasures. It supports both a CLI and a web UI, so even non-security specialists can use it without much friction.

Automatically identifying basic threats from an architecture diagram
Let’s use Threat Thinker to identify threats for three cases: an AWS-based system, a corporate network, and a smart home. For each one, we’ll feed the architecture diagram into Threat Thinker and see which threats it extracts.
Here is the architecture diagram of the target AWS-based system, created in Mermaid. It’s a fairly common web application architecture consisting of CloudFront → ALB → ECS → RDS/S3. We’ll use this to see how Threat Thinker extracts threats from a relatively simple stack.
graph LR
%% Trust boundaries
subgraph Internet
user[User]
end
subgraph AWS_Edge[Edge]
cf[CloudFront]
end
subgraph VPC[VPC]
subgraph PublicSubnet[Public subnet]
alb[ALB]
end
subgraph PrivateSubnet[Private subnet]
ecs[ECS Service]
rds[(Customer RDS<br>PII)]
s3[(S3 Bucket<br>Logs/Uploads)]
end
end
%% Data flows
user -- sends HTTPS request --> cf
cf -- forwards HTTPS request --> alb
alb -- routes HTTP request --> ecs
ecs -- reads/writes data (SQL/TLS) --> rds
ecs -- stores/reads objects (S3 API) --> s3
Below is a CLI-based execution example. You specify the Mermaid diagram file path via --diagram. With --infer-hints enabled, the LLM will also infer auxiliary information not explicitly written in the diagram and use that to reason about threats. In this example, we use OpenAI’s gpt-4.1 model and ask it to output up to 5 threats.
threat-thinker think \
--diagram path/to/diagram/system.mmd \
--infer-hints \
--topn 5 \
--llm-api openai \
--llm-model gpt-4.1 \
--out-dir path/to/report/dir
The top 5 threats in the generated Markdown report are as follows. These are all typical risks you might see when building a web application on AWS: issues around authentication/authorization, lack of encryption, insufficient logging and monitoring, and S3 misconfiguration.
| ID | Threat | Severity | STRIDE | Affected Components | Score |
|---|---|---|---|---|---|
| T001 | Potential Lack of Authentication/Authorization on ALB to ECS Path | High | Spoofing / EoP | ALB → ECS | 8.0 |
| T002 | Unencrypted Traffic Between ALB and ECS Allows Tampering and Disclosure | High | Tampering / Info Disc. | ALB → ECS | 8.0 |
| T003 | Exposure of PII in RDS Without Explicit Encryption at Rest | High | Info Disclosure | ECS ↔ RDS | 7.0 |
| T004 | Insufficient Logging and Monitoring for Sensitive Operations | Medium | Repudiation | ECS / RDS / S3 | 6.0 |
| T005 | Potential S3 Bucket Misconfiguration Exposing Internal Data | Medium | Info Disclosure | S3 | 6.0 |
In the HTML report, you can visually see which part of the architecture each threat maps to.

Visualizing where threats exist in the architecture
Here is the architecture diagram for the target corporate network, created in draw.io. It’s a simple small/medium business network split into three zones: Internet, DMZ, and internal network.

Corporate network architecture diagram
This time, we’ll use the web UI. You can start the web UI with threat-thinker webui.
$ threat-thinker webui
ℹ️ Starting Threat Thinker Web UI
* Running on local URL: http://127.0.0.1:7860
* To create a public link, set `share=True` in `launch()`.
Copy and paste the draw.io (XML) diagram source, and set Diagram Format to drawio.
Configure the options as needed and click Generate Report.
The top 5 threats in the Markdown report are as follows. They include risks such as attacks on the public-facing web server, injection vulnerabilities due to insufficient input validation, lateral movement from the DMZ to the internal network, and exposure of sensitive data stored on internal servers. Overall, it points out a balanced set of risks that you would typically expect in a corporate network.
| ID | Threat | Severity | STRIDE | Affected Components | Score |
|---|---|---|---|---|---|
| T001 | External Attackers Can Reach Public-Facing Web Server | High | Spoofing / Tampering | Internet → Web Server | 9.0 |
| T002 | Insufficient Input Validation on Public Web Server | High | Tampering / Info Disc. | Web Server | 8.0 |
| T003 | Potential Lateral Movement from DMZ to Internal Network | High | EoP / Info Disc. | Web Server → Internal Net | 8.0 |
| T004 | Sensitive Data Exposure on File and Directory Servers | High | Info Disclosure | File Server / AD | 8.0 |
| T005 | VPN Gateway Exposed to Credential Attacks | High | Spoofing / EoP | Internet → VPN GW | 8.0 |
Here is the architecture diagram for the target smart home system, created with the threat modeling tool OWASP Threat Dragon. It represents a typical cloud-connected smart home environment: residents use a mobile app to access a cloud control service, which in turn controls home devices such as IP cameras, smart locks, and smart speakers via the home router.

Smart home architecture diagram
We’ll use the web UI again for this example.
Threat Thinker has a RAG feature that lets you upload Markdown, HTML, and other documents to build a Knowledge Base that the LLM can reference during threat reasoning. Since our target is a smart home system, we’ll build a Knowledge Base based on the OWASP IoT Top 10.

Building a Knowledge Base based on OWASP IoT Top 10
Then we configure the threat inference settings so that the model uses that Knowledge Base.

Using the Knowledge Base during threat inference
The top 5 threats in the Markdown report are as follows. The model points out issues such as potentially unencrypted communication between the mobile app and the cloud service, lack of assurance that commands sent from the cloud to home devices are authentic, and insufficient protection for video/logs stored in the cloud. It also notes that if user authentication is weak, third parties could control devices.
Overall, the results feel reasonable for a threat analysis that references OWASP IoT Top 10.
| ID | Threat | Severity | STRIDE | Affected Components | Score |
|---|---|---|---|---|---|
| T001 | Insecure Communication Between Mobile App and Cloud Control Service | High | Tampering / Info Disc. | Mobile App ↔ Cloud | 9.0 |
| T002 | Lack of Authentication for Device Commands from Cloud to Home Network | High | Spoofing / EoP | Cloud → Home Devices | 9.0 |
| T003 | Insecure Storage of Sensitive Video/Logs in Cloud | High | Info Disc. / Repudiation | Cloud Storage | 8.0 |
| T004 | Unencrypted Video/Telemetry Data in Transit | High | Info Disc. / Tampering | Devices ↔ Cloud | 8.0 |
| T005 | Weak or Missing Authentication for Mobile App User Actions | High | Spoofing / EoP | User ↔ Mobile App | 8.0 |
When you use Threat Dragon as the input format, Threat Thinker can output the results back in Threat Dragon format with the inferred threats added to the diagram.

Threats added and linked to the relevant elements
In this post, we used Threat Thinker to perform LLM-based threat modeling on three different systems: an AWS-based system, a corporate network, and a smart home. We saw that just by feeding in an architecture diagram, Threat Thinker can automatically identify a solid set of basic threats.
At the same time, risks tied to business logic or to organization-specific operations still need human review. LLM-based threat modeling is not a replacement for expert review. Rather, it’s best used as a way to quickly generate an initial draft or as a safety net to prevent oversights. By importing guidelines like the OWASP IoT Top 10 into the Knowledge Base, you can also steer it toward more domain-specific reasoning.
If this sounds interesting, I encourage you to try running Threat Thinker against your own architecture diagrams and see what threats it surfaces.
2025-12-11 14:05:06
Is your React app dragging its feet, but you can't pinpoint the cause?
You might think you're following best practices, but some common development patterns are actually degrading your app's performance behind the scenes.
In this article, we'll uncover 5 sneaky anti-patterns that are silently killing your React app's speed, and show you how to fix them for a faster, smoother user experience.
So let's get started.
Anti-pattern 1
The Problem: Adding everything in a single context.
The Solution: Split contexts by concern.
Anti-pattern 2
The Problem: Passing inline objects/arrays as props to a component.
First Solution: Define constants outside the component.
Second Solution: Use useMemo for dynamic values.
Anti-pattern 3
The Problem: Displaying the entire list at once.
The Solution: Use a virtualization library such as react-window or react-virtualized.
Anti-pattern 4
The Problem: Calculating expensive values during rendering.
The Solution: Use the useMemo hook.
Anti-pattern 5
The Problem: Creating new functions on every render.
The Solution: Use event delegation.
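The inline objects/arrays pitfall above comes down to reference equality: React.memo and dependency arrays compare props by reference, and a fresh object literal on every render never compares equal to the previous one. Here is a standalone sketch of that mechanic in plain TypeScript (no React required; the names are illustrative):

```typescript
// A module-level constant: one object, one reference, reused forever.
const STABLE_STYLE = { color: "red" };

// Stands in for writing style={{ color: "red" }} inline in JSX:
// a brand-new object is created on every call (i.e. every render).
function inlineStyle() {
  return { color: "red" };
}

console.log(STABLE_STYLE === STABLE_STYLE);   // true  -> a memoized child can skip re-rendering
console.log(inlineStyle() === inlineStyle()); // false -> the child re-renders every time
```

This is why defining constants outside the component (or memoizing dynamic values with useMemo) restores the reference stability that memoization relies on.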
2025-12-11 13:56:43
Old school automation is dying. “Computer-use” AI is taking over. Here's how to adapt before your competitors do ↓
Most teams still think AI can only read the web and draft text.
They’re missing the next leap: AI that actually uses a computer like a human.
Clicks. Types. Scrolls. Waits. Decides.
I recently discovered a model called Lux that does exactly this.
Instead of just calling APIs, it lives inside thousands of virtual desktops and practices real workflows.
It doesn’t “simulate” work – it actually runs software, fills forms, and manages dashboards.
One second per action.
Endless patience.
No context-switch fatigue.
Imagine this on your team.
• QA tests run overnight across real UIs.
• Sales ops gets clean CRM data without weekend imports.
• Ops dashboards stay updated without manual copy-paste.
The smartest part isn’t the speed.
It’s the control.
Lux uses three modes to balance planning, speed, and strict oversight so you decide how much freedom it gets.
That’s the hidden truth about AI agents: trust is the real bottleneck.
Here’s a simple way to start today:
↳ Pick 1 workflow that already happens on a desktop (forms, dashboards, QA).
↳ Time how long it takes a human in a week.
↳ Ask: if an AI could do this at 1 second per action, with review, what would that unlock?
The companies that win won’t just “use AI”.
They’ll give AI a seat in front of a real computer.
What’s one workflow on your screen right now you wish you could hand off to an AI this week?
2025-12-11 13:36:14
Day 7: Lost & Found Data Detective – Turning Chaos into Organized Magic with Goose 🕵️♂️
What if a single AI-powered YAML file could take a chaotic pile of lost‑and‑found notes and instantly transform it into a clean, deduped, fully searchable web app that looks like it was built by an entire engineering team? I did that!
How... well let me tell you. I used a goose YAML recipe to instantly transform festival lost & found chaos into a beautiful, searchable, mobile-ready web app – deduplicating, categorizing, and flagging urgent finds with AI! I also had fun while doing it!
goose and my new fav recipes!
Day 7: Lost & Found Data Detective 🧣📱
The Challenge: Festival Lost & Found Mayhem
Imagine you're running a festival (or school fair, or conference...). Dozens of lost item notes flood in – hand-written, half-typed, wildly inconsistent:
blue scarf, found near ice rink, 2pm
BLUE SCARF - ice skating area - 2:15pm
iPhone 13 pro, black case, storytelling tent, 3pm - URGENT
red mitten for kid, cocoa booth, around 2:30
Traditionally, you'd read every note, merge the duplicates, and categorize each item by hand.
What if you could go from chaos to perfect order in minutes?
Enter: Lost & Found Data Detective (A wonderful goose Recipe)
For Day 7, I built a reusable goose YAML recipe that digests messy lists, merges dupes, categorizes items, flags urgencies, and spits out a gorgeous, searchable web app, complete with stats, categories, and mobile/responsive UI!
🛠 Tech Stack
🧪 My Experience (From Raw Notes to App)
Day 7’s dataset: 35+ scribbled notes, all different formats.
I loaded lost-and-found-detective.yaml into Goose Desktop, and the recipe generated festival-data/day2-lost-and-found.html.
What did my application do?
🎨 Why This Is a Game Changer
Who Can Use This?
100% success rate and rave reviews from volunteers!
Lessons & Insights
How You Can Use It
lost-and-found-detective.yaml (recipe)
Powered By
My Final Thoughts
Organizing lost & found shouldn’t be a horror story. Now, it’s instant, beautiful, and open source. Using goose’s declarative YAML recipes, I automated the entire lost-and-found workflow from messy notes to a clean, searchable web application. The system deduplicates entries, flags urgent items, and generates a responsive, offline-capable interface in seconds. With zero dependencies and real-time filters, it’s a scalable solution for any event needing fast, accessible data organization. I enjoyed this tremendously.
Day 7: Solved. Lost item chaos: Destroyed. Happy festival, everyone!
This post is part of my Advent of AI journey - AI Engineering: Advent of AI with goose Day 7 of AI engineering challenges.
Follow along for more AI adventures with Eri!
2025-12-11 13:36:14
Video editing is the process of transforming raw footage into a polished, engaging final product.
It involves cutting, arranging, adding effects, sound design, and color correction to enhance storytelling.
Skilled editing brings clarity, emotion, and visual impact to any video project.
2025-12-11 13:36:01
As your Model Context Protocol (MCP) server unveils its capabilities for AI interaction, a fundamental security challenge emerges: controlling access and usage. This guide will walk you through transforming your server into an impregnable digital stronghold, ensuring robust protection without compromising its utility. A truly effective server is always a well-guarded one.
Drawing from over a decade and a half in API development, I've internalized a critical principle: security must be foundational, not an afterthought. For an MCP server managing your files, data, and critical assets, multi-layered defense is indispensable. Yet, bolstering security doesn't equate to increased complexity. This article outlines the implementation of four vital security safeguards: rigorous input validation to thwart malicious data, robust authentication to verify user identities, precise authorization to define access privileges, and intelligent resource limiting to prevent misuse. Upon completion, your server will be primed for a production environment.
Before diving into code, let's establish our defense-in-depth strategy:
Principle: Treat all incoming data with suspicion. Rigorously validate, sanitize, and verify every piece of information.
Rationale: Inadequate parameter validation can open doors to critical vulnerabilities, such as directory traversal attacks (e.g., accessing sensitive files like ../../etc/passwd), code injection exploits, or even server instability.
Principle: Confirm the identity of every entity interacting with your server. Every incoming request needs to be linked to a confirmed and verified identity.
Rationale: Lacking authentication leaves your tools completely exposed to unauthorized access, much like an unlocked front door invites uninvited guests.
Principle: Validate specific access rights. Even after successful authentication, users should only be permitted to perform actions aligned with their roles.
Rationale: An intern, for instance, has no business accessing human resources records. Implementing fine-grained permissions is key to safeguarding sensitive data.
Principle: Establish clear quotas, set size caps, and implement connection timeouts.
Rationale: These measures are crucial to prevent a single malicious actor or an unforeseen error from overwhelming your server with an exorbitant number of requests, such as 10,000 per second.
We begin with perhaps the most critical defense: validating all incoming data. Implement this by creating src/security/validator.ts as follows:
// src/security/validator.ts
import path from 'path';
import { InputSchema } from '../mcp/protocol';
/**
* Validation error
*/
export class ValidationError extends Error {
constructor(
message: string,
public field?: string,
public expected?: string
) {
super(message);
this.name = 'ValidationError';
}
}
/**
* Parameter validator based on JSON Schema
*/
export class ParameterValidator {
/**
* Validate parameters according to schema
*/
static validate(params: any, schema: InputSchema): void {
// Check that params is an object
if (typeof params !== 'object' || params === null) {
throw new ValidationError('Parameters must be an object');
}
// Check required fields
for (const requiredField of schema.required ?? []) {
if (!(requiredField in params)) {
throw new ValidationError(
`Field '${requiredField}' is required`,
requiredField
);
}
}
// Validate each property
for (const [fieldName, fieldValue] of Object.entries(params)) {
const fieldSchema = schema.properties[fieldName];
if (!fieldSchema) {
throw new ValidationError(
`Field '${fieldName}' is not allowed`,
fieldName
);
}
this.validateField(fieldName, fieldValue, fieldSchema);
}
}
/**
* Validate a specific field
*/
private static validateField(
fieldName: string,
value: any,
schema: any
): void {
// Type validation
const actualType = typeof value;
const expectedType = schema.type;
if (expectedType === 'string' && actualType !== 'string') {
throw new ValidationError(
`Field '${fieldName}' must be a string`,
fieldName,
expectedType
);
}
if (expectedType === 'number' && actualType !== 'number') {
throw new ValidationError(
`Field '${fieldName}' must be a number`,
fieldName,
expectedType
);
}
if (expectedType === 'boolean' && actualType !== 'boolean') {
throw new ValidationError(
`Field '${fieldName}' must be a boolean`,
fieldName,
expectedType
);
}
// Enumeration validation
if (schema.enum && !schema.enum.includes(value)) {
throw new ValidationError(
`Field '${fieldName}' must be one of: ${schema.enum.join(', ')}`,
fieldName
);
}
// Length validation for strings
if (expectedType === 'string') {
if (schema.minLength !== undefined && value.length < schema.minLength) {
throw new ValidationError(
`Field '${fieldName}' must contain at least ${schema.minLength} characters`,
fieldName
);
}
if (schema.maxLength !== undefined && value.length > schema.maxLength) {
throw new ValidationError(
`Field '${fieldName}' cannot exceed ${schema.maxLength} characters`,
fieldName
);
}
}
// Range validation for numbers
if (expectedType === 'number') {
if (schema.minimum !== undefined && value < schema.minimum) {
throw new ValidationError(
`Field '${fieldName}' must be greater than or equal to ${schema.minimum}`,
fieldName
);
}
if (schema.maximum !== undefined && value > schema.maximum) {
throw new ValidationError(
`Field '${fieldName}' cannot exceed ${schema.maximum}`,
fieldName
);
}
}
// Pattern validation for strings
if (expectedType === 'string' && schema.pattern) {
const regex = new RegExp(schema.pattern);
if (!regex.test(value)) {
throw new ValidationError(
`Field '${fieldName}' doesn't match expected format`,
fieldName
);
}
}
}
}
/**
* File path validator
*/
export class PathValidator {
private allowedDirectories: string[];
private blockedPaths: string[];
constructor(allowedDirectories: string[], blockedPaths: string[] = []) {
// Resolve all paths to absolute
this.allowedDirectories = allowedDirectories.map(dir => path.resolve(dir));
this.blockedPaths = blockedPaths.map(p => path.resolve(p));
}
/**
* Validate that a path is safe
*/
validatePath(filePath: string): string {
// Resolve absolute path
const absolutePath = path.resolve(filePath);
// Check path traversal (../)
if (absolutePath.includes('..')) {
throw new ValidationError(
'Paths with ".." are not allowed (path traversal)'
);
}
// Check that path is in an allowed directory (exact match or a true subpath;
// the path.sep guard prevents "/allowed-evil" matching "/allowed")
const isInAllowedDir = this.allowedDirectories.some(dir =>
absolutePath === dir || absolutePath.startsWith(dir + path.sep)
);
if (!isInAllowedDir) {
throw new ValidationError(
`Access denied: path must be in one of the allowed directories`
);
}
// Check that path is not blocked (same exact-match-or-subpath logic)
const isBlocked = this.blockedPaths.some(blocked =>
absolutePath === blocked || absolutePath.startsWith(blocked + path.sep)
);
if (isBlocked) {
throw new ValidationError(
`Access denied: this path is explicitly blocked`
);
}
return absolutePath;
}
/**
* Add an allowed directory
*/
addAllowedDirectory(directory: string): void {
this.allowedDirectories.push(path.resolve(directory));
}
/**
* Block a specific path
*/
blockPath(pathToBlock: string): void {
this.blockedPaths.push(path.resolve(pathToBlock));
}
}
/**
* File size validator
*/
export class SizeValidator {
/**
* Validate that a size is acceptable
*/
static validateSize(
size: number,
maxSize: number,
fieldName: string = 'file'
): void {
if (size > maxSize) {
throw new ValidationError(
`The ${fieldName} is too large (max ${this.formatSize(maxSize)})`
);
}
}
/**
* Format size in bytes to readable format
*/
static formatSize(bytes: number): string {
const units = ['bytes', 'KB', 'MB', 'GB'];
let size = bytes;
let unitIndex = 0;
while (size >= 1024 && unitIndex < units.length - 1) {
size /= 1024;
unitIndex++;
}
return `${size.toFixed(2)} ${units[unitIndex]}`;
}
}
This validation module verifies parameter types, required and unexpected fields, enum membership, string lengths and patterns, and numeric ranges; it also validates file paths against allowed and blocked directories and enforces file size limits.
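To see concretely why PathValidator resolves paths before checking them, consider this standalone sketch using only Node's path module (the directory names are made up for illustration). The raw string appears to stay inside the allowed directory, but the resolved path escapes it:

```typescript
import * as path from "path";

// A traversal attempt: "/srv/files" is the hypothetical allowed directory.
const allowedDir = path.resolve("/srv/files");
const sneaky = "/srv/files/../../etc/passwd";

// Resolving collapses the ".." segments and reveals the real target.
const resolved = path.resolve(sneaky); // "/etc/passwd" on POSIX systems

// Subpath check with a separator guard, so that "/srv/files-evil"
// is not mistaken for a child of "/srv/files":
const isAllowed =
  resolved === allowedDir || resolved.startsWith(allowedDir + path.sep);

console.log(resolved, isAllowed); // rejected: isAllowed is false
```

Checking the raw string alone would have passed a naive prefix test, which is exactly the hole that resolution closes.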
Next, we'll build token-based authentication. The implementation below uses opaque session tokens as a simplified stand-in for JSON Web Tokens (JWT); in production, use a real JWT library. Set up src/security/auth.ts with the following content:
// src/security/auth.ts
import crypto from 'crypto';
/**
* User interface
*/
export interface User {
id: string;
username: string;
role: 'admin' | 'user' | 'readonly';
permissions: string[];
}
/**
* Simplified JWT token (for demo - use a real JWT lib in prod)
*/
interface Token {
userId: string;
username: string;
role: string;
permissions: string[];
expiresAt: number;
}
/**
* Authentication manager
*/
export class AuthManager {
private users: Map<string, User> = new Map();
private tokens: Map<string, Token> = new Map();
private readonly SECRET_KEY: string;
private readonly TOKEN_DURATION = 24 * 60 * 60 * 1000; // 24 hours
constructor(secretKey: string) {
this.SECRET_KEY = secretKey;
// Create some test users
this.createUser({
id: '1',
username: 'admin',
role: 'admin',
permissions: ['*'] // All permissions
});
this.createUser({
id: '2',
username: 'user',
role: 'user',
permissions: ['readFile', 'listFiles', 'searchFiles']
});
this.createUser({
id: '3',
username: 'readonly',
role: 'readonly',
permissions: ['readFile', 'listFiles']
});
}
/**
* Create a user
*/
createUser(user: User): void {
this.users.set(user.username, user);
}
/**
* Authenticate a user and generate a token
*/
authenticate(username: string, password: string): string | null {
// In production, verify hashed password!
// This is simplified for demo
const user = this.users.get(username);
if (!user) {
return null;
}
// Generate a token
const tokenId = crypto.randomBytes(32).toString('hex');
const token: Token = {
userId: user.id,
username: user.username,
role: user.role,
permissions: user.permissions,
expiresAt: Date.now() + this.TOKEN_DURATION
};
this.tokens.set(tokenId, token);
return tokenId;
}
/**
* Validate a token
*/
validateToken(tokenId: string): Token | null {
const token = this.tokens.get(tokenId);
if (!token) {
return null;
}
// Check expiration
if (Date.now() > token.expiresAt) {
this.tokens.delete(tokenId);
return null;
}
return token;
}
/**
* Revoke a token
*/
revokeToken(tokenId: string): void {
this.tokens.delete(tokenId);
}
/**
* Get a user
*/
getUser(username: string): User | undefined {
return this.users.get(username);
}
/**
* Clean expired tokens
*/
cleanExpiredTokens(): void {
const now = Date.now();
for (const [tokenId, token] of this.tokens.entries()) {
if (now > token.expiresAt) {
this.tokens.delete(tokenId);
}
}
}
}
/**
* Authentication middleware for Express
*/
export function authMiddleware(authManager: AuthManager) {
return (req: any, res: any, next: any) => {
// Get token from Authorization header
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({
success: false,
error: 'Missing authentication token'
});
}
const tokenId = authHeader.substring(7); // Remove "Bearer "
const token = authManager.validateToken(tokenId);
if (!token) {
return res.status(401).json({
success: false,
error: 'Invalid or expired token'
});
}
// Add user info to request
req.user = token;
next();
};
}
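The token lifecycle in AuthManager condenses to a small pattern: issue an unguessable random ID, store the session server-side with an expiry, and lazily delete it on a failed check. Here is a standalone sketch of just that pattern (the function names are illustrative, not part of the article's API):

```typescript
import * as crypto from "crypto";

// In-memory token store, as in AuthManager: token id -> session data.
const tokens = new Map<string, { username: string; expiresAt: number }>();

function issueToken(username: string, ttlMs: number): string {
  // 32 random bytes -> 64 hex characters, infeasible to guess
  const id = crypto.randomBytes(32).toString("hex");
  tokens.set(id, { username, expiresAt: Date.now() + ttlMs });
  return id;
}

function validateToken(id: string): string | null {
  const session = tokens.get(id);
  if (!session) return null;
  if (Date.now() > session.expiresAt) {
    tokens.delete(id); // lazy cleanup of expired tokens
    return null;
  }
  return session.username;
}

const id = issueToken("admin", 60_000);
console.log(validateToken(id));      // "admin"
console.log(validateToken("bogus")); // null
```

Because the session lives server-side, revocation is just a Map delete, which is the main operational difference from stateless JWTs.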
Moving forward, we'll build a permission system designed to ascertain whether a specific user is authorized to invoke a particular tool. Construct src/security/permissions.ts with the code below:
// src/security/permissions.ts
import { User } from './auth';
/**
* Permission error
*/
export class PermissionError extends Error {
constructor(message: string) {
super(message);
this.name = 'PermissionError';
}
}
/**
* Permission manager
*/
export class PermissionManager {
/**
* Check if a user has permission to use a tool
*/
static hasPermission(
user: User,
toolName: string,
params?: any
): boolean {
// Admins have access to everything
if (user.permissions.includes('*')) {
return true;
}
// Check specific permission
if (!user.permissions.includes(toolName)) {
return false;
}
// Additional contextual permissions
// For example, check allowed paths for readFile
if (toolName === 'readFile' && params?.file_path) {
return this.canAccessPath(user, params.file_path);
}
return true;
}
/**
* Check access to a specific path
*/
private static canAccessPath(user: User, filePath: string): boolean {
// In readonly, only read in certain folders
if (user.role === 'readonly') {
const allowedPaths = ['/public', '/docs'];
return allowedPaths.some(allowed =>
filePath.startsWith(allowed)
);
}
return true;
}
/**
* Get user permissions
*/
static getPermissions(user: User): string[] {
return user.permissions;
}
/**
* Check and throw error if no permission
*/
static requirePermission(
user: User,
toolName: string,
params?: any
): void {
if (!this.hasPermission(user, toolName, params)) {
throw new PermissionError(
`Permission denied: you don't have access to tool '${toolName}'`
);
}
}
}
/**
* Permission policy for a tool
*/
export interface ToolPolicy {
allowedRoles: string[];
requiredPermissions: string[];
rateLimit?: {
maxRequests: number;
windowMs: number;
};
}
/**
* Tool policy manager
*/
export class PolicyManager {
private policies: Map<string, ToolPolicy> = new Map();
/**
* Set a policy for a tool
*/
setPolicy(toolName: string, policy: ToolPolicy): void {
this.policies.set(toolName, policy);
}
/**
* Get a tool's policy
*/
getPolicy(toolName: string): ToolPolicy | undefined {
return this.policies.get(toolName);
}
/**
* Check that a user respects the policy
*/
checkPolicy(user: User, toolName: string): boolean {
const policy = this.policies.get(toolName);
if (!policy) {
return true; // No policy = allowed by default
}
// Check role
if (!policy.allowedRoles.includes(user.role) &&
!policy.allowedRoles.includes('*')) {
return false;
}
// Check permissions
const hasAllPermissions = policy.requiredPermissions.every(perm =>
user.permissions.includes(perm) || user.permissions.includes('*')
);
return hasAllPermissions;
}
}
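Stripped of the path-specific rules, the core permission check is a wildcard-or-exact-match lookup. A minimal standalone sketch (the user objects here are illustrative):

```typescript
type DemoUser = { role: string; permissions: string[] };

// Mirrors the heart of PermissionManager.hasPermission:
// "*" grants everything, otherwise the tool name must be listed explicitly.
function hasToolPermission(user: DemoUser, toolName: string): boolean {
  if (user.permissions.includes("*")) return true;
  return user.permissions.includes(toolName);
}

const admin: DemoUser = { role: "admin", permissions: ["*"] };
const viewer: DemoUser = { role: "readonly", permissions: ["readFile", "listFiles"] };

console.log(hasToolPermission(admin, "deleteFile"));  // true  (wildcard)
console.log(hasToolPermission(viewer, "readFile"));   // true  (explicit grant)
console.log(hasToolPermission(viewer, "deleteFile")); // false (not granted)
```

Keeping this check small and deny-by-default makes it easy to audit, which matters more than flexibility at this layer.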
To shield our server from potential abuse, let's implement a robust rate limiting framework. Create src/security/rateLimit.ts containing:
// src/security/rateLimit.ts
/**
* Usage record
*/
interface UsageRecord {
count: number;
resetAt: number;
}
/**
* Rate limiting manager
*/
export class RateLimiter {
private usage: Map<string, UsageRecord> = new Map();
constructor(
private maxRequests: number,
private windowMs: number
) {}
/**
* Check and increment counter for a user
*/
checkLimit(userId: string): boolean {
const now = Date.now();
const record = this.usage.get(userId);
// No record or expired window
if (!record || now > record.resetAt) {
this.usage.set(userId, {
count: 1,
resetAt: now + this.windowMs
});
return true;
}
// Limit reached
if (record.count >= this.maxRequests) {
return false;
}
// Increment counter
record.count++;
return true;
}
/**
* Get limit info for a user
*/
getLimitInfo(userId: string): {
current: number;
max: number;
resetsAt: Date;
} {
const record = this.usage.get(userId);
if (!record) {
return {
current: 0,
max: this.maxRequests,
resetsAt: new Date(Date.now() + this.windowMs)
};
}
return {
current: record.count,
max: this.maxRequests,
resetsAt: new Date(record.resetAt)
};
}
/**
* Reset counter for a user
*/
reset(userId: string): void {
this.usage.delete(userId);
}
/**
* Clean expired records
*/
cleanup(): void {
const now = Date.now();
for (const [userId, record] of this.usage.entries()) {
if (now > record.resetAt) {
this.usage.delete(userId);
}
}
}
}
/**
* Rate limiting middleware for Express
*/
export function rateLimitMiddleware(rateLimiter: RateLimiter) {
return (req: any, res: any, next: any) => {
const userId = req.user?.userId || req.ip;
if (!rateLimiter.checkLimit(userId)) {
const info = rateLimiter.getLimitInfo(userId);
return res.status(429).json({
success: false,
error: 'Request limit reached',
limit: {
max: info.max,
current: info.current,
resetsAt: info.resetsAt
}
});
}
next();
};
}
/**
* Quota manager per tool
*/
export class QuotaManager {
private quotas: Map<string, Map<string, number>> = new Map();
/**
* Set a quota for a user and tool
*/
setQuota(userId: string, toolName: string, maxUsage: number): void {
if (!this.quotas.has(userId)) {
this.quotas.set(userId, new Map());
}
this.quotas.get(userId)!.set(toolName, maxUsage);
}
/**
* Check and decrement quota
*/
checkQuota(userId: string, toolName: string): boolean {
const userQuotas = this.quotas.get(userId);
if (!userQuotas) {
return true; // No quota = unlimited
}
const remaining = userQuotas.get(toolName);
if (remaining === undefined) {
return true; // No quota for this tool
}
if (remaining <= 0) {
return false; // Quota exhausted
}
userQuotas.set(toolName, remaining - 1);
return true;
}
/**
* Get remaining quota
*/
getRemainingQuota(userId: string, toolName: string): number | null {
const userQuotas = this.quotas.get(userId);
if (!userQuotas) {
return null; // Unlimited
}
return userQuotas.get(toolName) ?? null; // ?? (not ||) so an exhausted quota of 0 isn't reported as unlimited
}
/**
* Reset a user's quota
*/
resetQuota(userId: string, toolName: string, maxUsage: number): void {
this.setQuota(userId, toolName, maxUsage);
}
}
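The fixed-window logic in RateLimiter.checkLimit is easiest to verify with an injected clock rather than waiting out real windows. Here is a standalone sketch of the same algorithm with the timestamp passed in (the real class uses Date.now() internally):

```typescript
// Fixed-window limiter with an injected clock, mirroring RateLimiter.checkLimit.
function makeLimiter(maxRequests: number, windowMs: number) {
  const usage = new Map<string, { count: number; resetAt: number }>();
  return (userId: string, now: number): boolean => {
    const record = usage.get(userId);
    if (!record || now > record.resetAt) {
      usage.set(userId, { count: 1, resetAt: now + windowMs }); // fresh window
      return true;
    }
    if (record.count >= maxRequests) return false; // limit reached
    record.count++;
    return true;
  };
}

const check = makeLimiter(2, 1000); // 2 requests per 1-second window
console.log(check("u1", 0));    // true  (1st request)
console.log(check("u1", 100));  // true  (2nd request)
console.log(check("u1", 200));  // false (over the limit)
console.log(check("u1", 1001)); // true  (window has reset)
```

Deterministic time makes the reset boundary trivially testable, a trick worth carrying into unit tests for the real class.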
You've done it! Your MCP server is now equipped with four critical security layers: input validation, token-based authentication, role and permission checks, and rate limiting with per-tool quotas. It is ready for production deployment.
With these safeguards in place, your server can confidently operate in a live environment. AIs can interact with it securely, users are assigned specific, appropriate permissions, and potential misuse is automatically curtailed. In the forthcoming, concluding installment of this series, we will integrate your fortified server with Claude Desktop, demonstrating the complete ecosystem functioning seamlessly with a real AI. Until then, I encourage you to challenge your newly built security mechanisms! Attempt to circumvent them, push their boundaries, and confirm every aspect is meticulously protected. A truly robust security system is one that has endured attempts at breach and emerged resilient.
Article published on December 10, 2025 by Nicolas Dabène - PHP & PrestaShop Expert with 15+ years of experience in software architecture and AI integration
Also read:
If you found this guide helpful and want to dive deeper into software architecture, AI integration, and more development insights, be sure to connect with me!