2025-12-25 01:55:27
Most conversations about Copilot, Teams, and Microsoft 365 security are happening at the tool layer.
That’s understandable — but it’s also where CMMC failures quietly begin.
Microsoft 365 is security-capable by design.
Copilot and Teams are productivity accelerators by intent.
Neither of them is non-compliant.
They are compliance-neutral.
CMMC doesn’t certify tools.
It evaluates architecture, trust boundaries, information flow, and evidence.
If collaboration is treated as a flat plane,
AI doesn’t break compliance — it simply amplifies whatever trust boundaries already exist.
That realization changes everything.
Not a tool.
Not a feature.
A posture.
Rahsi Defense Security Mesh™ is a way to make collaboration itself a provable, regulated surface — without slowing teams down.
This is where collaboration stops being assumed safe and becomes architecturally defensible.
This is not anti-Microsoft.
It exists because Microsoft’s Zero Trust and AI stack are strong enough to support it.
When you design around trust boundaries,
Copilot becomes an ally, not a risk.
If you work in this space, this will feel uncomfortably familiar, in a good way.
https://www.aakashrahsi.online/post/rahsi-defense-security-mesh
Silently shared for those who care about doing this right.
— Aakash Rahsi
2025-12-25 01:54:52
This project documents the process of imaging and configuring a Raspberry Pi to function as a headless database server on a local network, with a focus on OS deployment, remote access, and preparing the system for database hosting and homelab expansion.
The Raspberry Pi Foundation provides official step-by-step documentation, which I followed while adapting the process to my own setup.
Official Raspberry Pi Getting Started Documentation
Download and install the Raspberry Pi Imager for your operating system
Official Raspberry Pi Imager Software
After installation, ensure the software is fully updated before proceeding.
The 64-bit OS provides better memory handling and compatibility with modern services such as databases and containerized applications.
Once the storage device is connected to your computer and access is granted, it should appear in the Select Storage menu of the Imager.
Before writing the OS, the Raspberry Pi Imager allows several important pre-boot configurations.
Set a custom hostname to easily identify the device on the network.
Configure the timezone and keyboard layout based on your geographical location.
Create a user account that will be used for SSH access.
The Wi-Fi setup was skipped in favor of Ethernet.
SSH was enabled during imaging using password authentication.
This ensures the Pi can be accessed remotely without requiring a display.
Raspberry Pi Connect provides secure remote access features. Since this was new to me, I enabled it to explore its capabilities.
If you choose to enable Raspberry Pi Connect, you'll have to create an account and verify it.
After reviewing all customizations, proceed with writing the OS to the microSD card.
Warning: This process erases all existing data on the microSD card.
Once the writing process finishes successfully, the Write Complete page appears. Select the Finish button and remove the storage device containing the newly imaged microSD card.
Insert the newly imaged microSD card into the Raspberry Pi. The Raspberry Pi is now ready to be integrated into your home network.
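Once the Pi boots with the new image, you can confirm that SSH is reachable before attaching any display. Below is a minimal check written in Python; the hostname is a placeholder for the custom hostname set in the Imager, and resolving a .local name assumes your network supports mDNS.

import socket

PI_HOST = "raspberrypi.local"  # placeholder: use the custom hostname set in the Imager, or the Pi's IP
SSH_PORT = 22

try:
    # Open a TCP connection to the SSH port and read the server banner.
    with socket.create_connection((PI_HOST, SSH_PORT), timeout=5) as conn:
        banner = conn.recv(64).decode(errors="replace").strip()
        print(f"{PI_HOST} is reachable on port {SSH_PORT}: {banner}")
except OSError as err:
    print(f"Could not reach {PI_HOST}:{SSH_PORT} - {err}")

If the hostname does not resolve, check your router's client list for the Pi's IP address and use that instead.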
2025-12-25 01:54:11
In the rush to automate everything with Large Language Models (LLMs), many make a critical mistake: they treat AI as a binary choice, where either a human does the work or the machine does.
In reality, the most successful AI implementations exist on a spectrum of human intervention. We call these HITL (Human-in-the-Loop), HOTL (Human-on-the-Loop), and HOOTL (Human-out-of-the-Loop).
Choosing the wrong workflow can lead to either a "bottleneck" (too much human interference) or "hallucination disasters" (too much machine autonomy). Here is everything you need to know about these three pillars of AI architecture.
In an HITL workflow, the AI is a sophisticated assistant that cannot finish its task without a human "checkpoint."
When to use it:
In this example, Gemini writes a press release, but the script refuses to "publish" it until a human manually reviews and edits the text.
from google import genai
from dotenv import load_dotenv
load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"
def hitl_press_release(topic):
    """HITL: Human reviews and approves/edits AI output before finalizing."""
    prompt = f"Write a short press release for: {topic}"
    ai_draft = client.models.generate_content(
        model=MODEL_NAME,
        contents=prompt
    ).text
    print("\n--- [ACTION REQUIRED] REVIEW AI DRAFT ---")
    print(ai_draft)
    feedback = input("\nWould you like to (1) Accept, (2) Rewrite, or (3) Edit manually? ")
    if feedback == "1":
        final_output = ai_draft
    elif feedback == "2":
        critique = input("What should the AI change? ")
        return hitl_press_release(f"{topic}. Note: {critique}")
    else:
        final_output = input("Paste your manually edited version here: ")
    print("\n[SUCCESS] Press release finalized and saved.")
    return final_output
hitl_press_release("Launch of a new sustainable coffee brand")
In HOTL, the AI operates autonomously and at scale, but a human stands by a "dashboard" to monitor the outputs. The human doesn't approve every single item; instead, they intervene only when they see the AI deviating from the goal.
When to use it:
In this example, Gemini categorizes customer tickets. The human isn't asked for permission for every ticket, but they have a "Window of Intervention" to stop the process if the AI starts making mistakes.
from google import genai
from dotenv import load_dotenv
import time
load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"
def hotl_support_monitor(tickets):
    """On-the-Loop: Human monitors AI decisions in real-time and can veto."""
    print("System active. Monitoring AI actions... (Press Ctrl+C to PAUSE/VETO)")
    i = 0
    while i < len(tickets):
        ticket = tickets[i]
        try:
            response = client.models.generate_content(
                model=MODEL_NAME,
                contents=f"Categorize this ticket (Billing/Tech/Sales): {ticket}"
            )
            category = response.text.strip()
            print(f"[Log {i+1}] Ticket: {ticket[:30]}... -> Action: Tagged as {category}")
            time.sleep(2)  # brief window of intervention before the next ticket
            i += 1
        except KeyboardInterrupt:
            print(f"\n[VETO] Human supervisor has paused the system on ticket: {ticket}")
            action = input("Should we (C)ontinue or (S)kip this ticket? ")
            if action.lower() == "s":
                i += 1  # skip this ticket entirely
            # otherwise, retry the same ticket on the next pass
tickets = ["My bill is too high", "The app keeps crashing", "How do I buy more?"]
hotl_support_monitor(tickets)
The AI handles the entire process from start to finish. Human intervention only happens after the fact during an audit or a weekly performance review. This is the goal for high-volume, low-risk tasks.
When to use it:
This script takes customer reviews and summarizes them into a report without ever stopping to ask a human for help.
from google import genai
from dotenv import load_dotenv
load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"
def hootl_batch_processor(data_list):
    """Human-out-of-the-Loop: AI processes batch independently; human reviews final report."""
    print(f"Starting HOOTL process: {len(data_list)} items to process.")
    final_report = []
    for item in data_list:
        response = client.models.generate_content(
            model=MODEL_NAME,
            contents=f"Extract key sentiment (Happy/Sad/Neutral): {item}"
        )
        final_report.append({"data": item, "sentiment": response.text.strip()})
    return final_report
reviews = ["Great food!", "Slow service", "Expensive but worth it"]
report = hootl_batch_processor(reviews)
print("Final Report:", report)
| Workflow | Interaction Level | Human Effort | Latency (Speed) | Risk Tolerance |
|---|---|---|---|---|
| HITL | Active | High | Slow | Zero Tolerance (High Risk) |
| HOTL | Passive | Medium | Medium | Managed Risk (Scale + Safety) |
| HOOTL | None | Low | Very Fast | Low Risk (High Volume) |
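One way to turn the table into a decision rule is a small helper that maps a task's risk tolerance to an oversight model. This is an illustrative sketch that mirrors the comparison above; the label strings are assumptions, not part of any formal framework.

def choose_workflow(risk_tolerance: str) -> str:
    """Map a task's risk tolerance to an oversight model (mirrors the comparison table)."""
    if risk_tolerance == "zero":      # high-risk outputs: a human approves each one
        return "HITL"
    if risk_tolerance == "managed":   # scale with safety: a human monitors and can veto
        return "HOTL"
    return "HOOTL"                    # low-risk, high-volume: audit after the fact

for level in ["zero", "managed", "low"]:
    print(level, "->", choose_workflow(level))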
2025-12-25 01:47:51
A collaborative note-taking backend built with Vapor (Swift) that supports real-time collaboration, authentication, and drawing strokes persistence.
JWT-based authentication with user registration and login functionality.
// User model with email and password
final class User: Model, Content, Authenticatable {
    static let schema = "users"

    @ID(key: .id)
    var id: UUID?

    @Field(key: "email")
    var email: String

    @Field(key: "password_hash")
    var passwordHash: String
}
Endpoints:
POST /api/v1/auth/register - Register new user
POST /api/v1/auth/login - Login user
GET /api/v1/auth/me - Get current user (protected)

Custom authentication middleware that verifies JWT tokens and protects routes.
struct AuthMiddleware: AsyncMiddleware {
    func respond(to request: Request, chainingTo next: any AsyncResponder) async throws -> Response {
        let payload: UserToken = try await request.jwt.verify(as: UserToken.self)

        guard let user = try await User.find(payload.userID, on: request.db) else {
            throw Abort(.unauthorized, reason: "User not found")
        }

        request.auth.login(user)
        return try await next.respond(to: request)
    }
}
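For a quick end-to-end check of the auth endpoints listed above, here is a small hypothetical Python client. It assumes the server is running locally on Vapor's default port 8080, that register and login accept a JSON body with email and password fields, and that login responds with a JSON object containing a token field; adjust the base URL and field names to the actual responses.

import requests

BASE_URL = "http://localhost:8080/api/v1/auth"  # assumed local Vapor default port
CREDENTIALS = {"email": "test@example.com", "password": "secret123"}  # sample values

# Register a new user, then log in to obtain a JWT.
requests.post(f"{BASE_URL}/register", json=CREDENTIALS).raise_for_status()
login = requests.post(f"{BASE_URL}/login", json=CREDENTIALS)
login.raise_for_status()
token = login.json()["token"]  # assumed response shape

# Call the protected endpoint with the bearer token.
me = requests.get(f"{BASE_URL}/me", headers={"Authorization": f"Bearer {token}"})
print(me.status_code, me.json())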
Database configuration using FluentMongoDriver with automatic migrations.
// Configure MongoDB
try app.databases.use(
    .mongo(
        connectionString: Environment.get("MONGODB_URI") ?? "mongodb://localhost:27017"
    ),
    as: .mongo
)
// Run migrations
app.migrations.add(User.Migration())
app.migrations.add(NotesModel.Migration())
Full CRUD operations for notes with drawing strokes support.
final class NotesModel: Model, Content {
    @Field(key: "title")
    var title: String

    @Field(key: "strokes")
    var strokes: [DrawingStroke]

    @Parent(key: "user_id")
    var user: User
}

struct DrawingStroke: Codable {
    let points: [DrawingPoint]
    let color: DrawingColor
    let width: Double
    let timestamp: Date
}
Endpoints:
POST /api/v1/notes - Create note
GET /api/v1/notes - Get all notes
GET /api/v1/notes/get/:id - Get single note
PUT /api/v1/notes/:id - Update note
DELETE /api/v1/notes/:id - Delete note

WebSocket manager for real-time collaborative editing with note session management.
final class WebSocketManager {
    private var connections: [UUID: WebSocket] = [:]
    private var noteCollaborators: [UUID: Set<UUID>] = [:]

    func joinNoteSession(noteID: UUID, userID: UUID)
    func leaveNoteSession(noteID: UUID, userID: UUID)
    func broadcastToNote(noteID: UUID, message: String, excludeUserID: UUID?)
}
WebSocket Features:
Endpoint: WS /api/v1/auth/handleInvite (protected)
Share notes via JWT tokens with external users.
// Share token endpoint
GET /api/v1/notes/shared/:shareToken
// Verifies JWT token and returns note
let payload = try await req.jwt.verify(shareToken, as: ShareTokenPayload.self)
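On the client side, an external user with a share link only needs the token itself. A hypothetical Python fetch, reusing the local-server assumptions from the auth sketch above, might look like this; the token value is a placeholder.

import requests

share_token = "<share token from the invite>"  # placeholder
note = requests.get(f"http://localhost:8080/api/v1/notes/shared/{share_token}")
print(note.status_code, note.json())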
Sources/NoterPlayBackend/
├── Controllers/
│ ├── AuthenticationController.swift
│ ├── NotesController.swift
│ └── InviteController.swift
├── Middleware/
│ └── AuthMiddleware.swift
├── Models/
│ ├── User.swift
│ ├── NotesModel.swift
│ ├── UserToken.swift
│ ├── ShareTokenPayload.swift
│ └── InviteModel.swift
├── WSManager/
│ └── WebSocketManager.swift
├── configure.swift
└── routes.swift
# Install dependencies
swift package resolve
# Run the server
swift run
# With Docker
docker-compose up
Built with ❤️ using Vapor and Swift
2025-12-25 01:46:53
While working on a Terraform project, I ran into several Git push errors that initially felt confusing and frustrating. However, each error turned out to be a valuable learning moment. This article documents those issues step by step, explains why they happen, and shows how to fix them correctly.
If you’re learning Terraform, DevOps, or Infrastructure as Code, chances are you’ll encounter these same problems.
Error:
File .terraform/...terraform-provider-aws is larger than 100 MB
Why this happens
The .terraform/ directory was committed. This directory contains Terraform provider binaries, which can be hundreds of megabytes in size and should never be version-controlled.
Correct Fix
Add the following to .gitignore
.terraform/
*.tfstate
*.tfstate.backup
If the file already exists in Git history, the cleanest approach for new projects is to reinitialise the repository:
rm -rf .git
git init
git add .
git commit -m "<commit message>"
Error:
Push cannot contain secrets (AWS Access Key detected)
Why this happens
AWS credentials were hardcoded inside provider.tf. GitHub automatically scans commits for secrets and blocks pushes to prevent credential leaks.
What Not to Do
provider "aws" {
access_key = "AKIA..."
secret_key = "xxxx"
}
Correct Approach
provider "aws" {
region = "us-east-1"
}
Instead of hardcoding keys, keep them out of the repository entirely (or place them in a separate file that is listed in .gitignore) and supply credentials through one of the following:
aws configure
Environment variables
IAM roles (recommended for EC2, CloudShell, CI/CD)
⚠️ If credentials were committed, they should be rotated immediately, even if the push was blocked.
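GitHub's push protection is a useful safety net, but you can catch leaked keys even earlier on your own machine. The sketch below is a minimal, hypothetical pre-commit style check in Python that scans staged files for AWS access key IDs (standard long-term keys start with AKIA); adapt the pattern list to your needs.

import re
import subprocess
import sys

# Standard long-term AWS access key IDs look like AKIA followed by 16 characters.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def staged_files():
    """Return the paths of files currently staged for commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True
    )
    return [line for line in out.stdout.splitlines() if line]

def main():
    leaks = []
    for path in staged_files():
        try:
            with open(path, "r", errors="ignore") as handle:
                if AWS_KEY_PATTERN.search(handle.read()):
                    leaks.append(path)
        except OSError:
            continue  # deleted or unreadable files are skipped
    if leaks:
        print("Possible AWS access keys found in:", ", ".join(leaks))
        sys.exit(1)  # a non-zero exit blocks the commit when run as a pre-commit hook
    print("No AWS access keys detected in staged files.")

if __name__ == "__main__":
    main()

Wired into a pre-commit hook (with a suitable shebang and execute permission) or a pre-commit framework, a check like this fails the commit before the keys ever reach GitHub.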