RSS preview of Blog of The Practical Developer

Rahsi Defense Security Mesh™ | Copilot & Teams Enable Productivity | CMMC Compliance Demands Architecture, Policy and Governance

2025-12-25 01:55:27

Most conversations about Copilot, Teams, and Microsoft 365 security are happening at the tool layer.

That’s understandable — but it’s also where CMMC failures quietly begin.

Microsoft 365 is security-capable by design.

Copilot and Teams are productivity accelerators by intent.

Neither of them is non-compliant.

They are compliance-neutral.

CMMC Doesn’t Certify Products

CMMC doesn’t certify tools.

It evaluates architecture, trust boundaries, information flow, and evidence.

If collaboration is treated as a flat plane,

AI doesn’t break compliance — it simply amplifies whatever trust boundaries already exist.

That realization changes everything.

Introducing Rahsi Defense Security Mesh™

Not a tool.

Not a feature.

A posture.

Rahsi Defense Security Mesh™ is a way to make collaboration itself a provable, regulated surface — without slowing teams down.

What the Mesh Enforces

  • Explicit CUI / FCI / Unclassified collaboration zones
  • Copilot containment by scope, index, and classification
  • Cross-tenant trust as deny-by-default, not convenience
  • Assessor-grade audit spine that survives real investigations

This is where collaboration stops being assumed safe and becomes architecturally defensible.

Built on Microsoft — Not Against It

This is not anti-Microsoft.

It exists because Microsoft’s Zero Trust and AI stack are strong enough to support it.

When you design around trust boundaries,

Copilot becomes an ally, not a risk.

Who This Is For

If you work in:

  • Azure or Microsoft 365
  • Security or compliance architecture
  • Copilot or Teams governance
  • Defense Industrial Base (DIB) environments
  • CMMC Level 2 readiness

This will feel uncomfortably familiar — in a good way.

Read the Full Article

https://www.aakashrahsi.online/post/rahsi-defense-security-mesh

Silently shared for those who care about doing this right.

Aakash Rahsi

Headless Raspberry Pi Homelab – Part 1: OS Configuration & Remote Access

2025-12-25 01:54:52

Project Overview

This project documents imaging and configuring a Raspberry Pi as a headless database server on a local network, covering OS deployment, remote access, and preparing the system for database hosting and future homelab expansion.

Hardware Used

Hardware used to configure the Raspberry Pi

Installing the OS & Writing to the microSD card

The Raspberry Pi Foundation provides official step-by-step documentation, which I followed while adapting the process to my own setup.

Official Raspberry Pi Getting Started Documentation

Step 1: Install the Raspberry Pi Imager

Download and install the Raspberry Pi Imager for your operating system

Official Raspberry Pi Imager Software

After installation, ensure the software is fully updated before proceeding.

Raspberry Pi Imager Software Update

Step 2: Select Device & Operating System

  1. Launch Raspberry Pi Imager
  2. Select your Raspberry Pi model (I'm using a Raspberry Pi 4)
  3. Choose the Raspberry Pi OS (64-bit is recommended)

Select Raspberry Pi model menu

Why 64-bit?

The 64-bit OS provides better memory handling and compatibility with modern services such as databases and containerized applications.

Select Raspberry Pi OS menu

Step 3: Prepare the Storage Device

  1. The Samsung microSD card and adapter.
    Samsung microSD card and adapter

  2. Insert the Samsung microSD card and adapter into the Uni SD Card Reader.
    Storage device complete

  3. Connect the storage device to your computer.
    Storage device connected to computer

Once connected and recognized by your computer, the storage device should appear in the Select Storage menu of the Imager.

Storage device

Step 4: Configure OS Customization Settings

Before writing the OS, the Raspberry Pi Imager allows several important pre-boot configurations.

Hostname

Set a custom hostname to easily identify the device on the network.

Hostname menu

Localization

Configure the timezone and keyboard layout based on your geographical location.
Localization menu

Username & Password

Create a user account that will be used for SSH access.
User account menu

Step 5: Network & Access Configuration

Wi-Fi (Optional)

The Wi-Fi setup was skipped in favor of Ethernet.

Why Ethernet?

  • More stable connection
  • Lower latency
  • Easier to troubleshoot
  • Common practice in server environments

Choose Wi-Fi menu

SSH

SSH was enabled during imaging using password authentication.

SSH Authentication menu

This ensures the Pi can be accessed remotely without requiring a display.

Step 6: Raspberry Pi Connect (Optional)

Raspberry Pi Connect provides secure remote access features. Since this was new to me, I enabled it to explore its capabilities.

Raspberry Pi Connect menu

If you choose to enable Raspberry Pi Connect, you'll have to create an account and verify it.

Raspberry Pi Connect account creation form

Step 7: Write the OS Image

After reviewing all customizations, proceed with writing the OS to the microSD card.

Warning: This process erases all existing data on the microSD card.

Write Image menu

Once the write finishes, the Write Complete page appears. Select Finish, then remove the storage device containing the newly imaged microSD card.

Write Image complete

Step 8: Booting the Raspberry Pi

Insert the newly imaged microSD card into the Raspberry Pi. The Raspberry Pi is now ready to be integrated into your home network.

Raspberry Pi 4 with microSD card inserted

Next Steps (Part 2)

  • Connect the Raspberry Pi to the home network via Ethernet, using a Netgear switch and Xfinity router.
  • Verify that the Pi is properly recognized on the network using the Xfinity admin tool and Netgear admin tool.
  • Test connectivity by pinging the Pi from another device on the network or accessing it via SSH.
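The connectivity tests above can be sketched from another machine on the same network. The hostname `raspberrypi` and user `pi` below are placeholders; substitute whatever you set in Step 4:

```shell
#!/bin/sh
# Placeholder values -- replace with the hostname and username from Step 4.
PI_HOST="raspberrypi.local"   # mDNS name: <hostname>.local
PI_USER="pi"

# One ping with a 2-second timeout to confirm the Pi is reachable.
ping -c 1 -W 2 "$PI_HOST"

# First SSH session into the headless Pi (password auth, enabled in Step 5).
ssh "${PI_USER}@${PI_HOST}"
```

If `.local` resolution fails, the Pi's IP address from the router's admin tool works in its place.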

A Guide to HITL, HOTL, and HOOTL Workflows

2025-12-25 01:54:11

In the rush to automate everything with Large Language Models (LLMs), many make a critical mistake: they treat AI as a binary choice, where either a human does the work or the machine does.

In reality, the most successful AI implementations exist on a spectrum of human intervention. We call these HITL (Human-in-the-Loop), HOTL (Human-on-the-Loop), and HOOTL (Human-out-of-the-Loop).

Choosing the wrong workflow can lead to either a "bottleneck" (too much human interference) or "hallucination disasters" (too much machine autonomy). Here is everything you need to know about these three pillars of AI architecture.

1. Human-in-the-Loop (HITL)

In an HITL workflow, the AI is a sophisticated assistant that cannot finish its task without a human "checkpoint."

When to use it:

  • High-stakes legal or medical documents.
  • Creative writing where "voice" and "nuance" are vital.
  • Generating code for production systems.

Code Example

In this example, Gemini writes a press release, but the script refuses to "publish" it until a human manually reviews and edits the text.

from google import genai
from dotenv import load_dotenv

load_dotenv(override=True)

client = genai.Client()

MODEL_NAME = "gemini-2.5-flash"

def hitl_press_release(topic):
    """HITL: Human reviews and approves/edit AI output before finalizing."""
    prompt = f"Write a short press release for: {topic}"
    ai_draft = client.models.generate_content(
        model=MODEL_NAME,
        contents=prompt
    ).text

    print("\n--- [ACTION REQUIRED] REVIEW AI DRAFT ---")
    print(ai_draft)

    feedback = input("\nWould you like to (1) Accept, (2) Rewrite, or (3) Edit manually? ")

    if feedback == "1":
        final_output = ai_draft
    elif feedback == "2":
        critique = input("What should the AI change? ")
        return hitl_press_release(f"{topic}. Note: {critique}")
    else:
        final_output = input("Paste your manually edited version here: ")

    print("\n[SUCCESS] Press release finalized and saved.")
    return final_output

hitl_press_release("Launch of a new sustainable coffee brand")

2. Human-on-the-Loop (HOTL)

In HOTL, the AI operates autonomously and at scale, but a human stands by a "dashboard" to monitor the outputs. The human doesn't approve every single item; instead, they intervene only when they see the AI deviating from the goal.

When to use it:

  • Live social media moderation.
  • Real-time customer support chatbots.
  • Monitoring industrial IoT sensors.

Code Example

In this example, Gemini categorizes customer tickets. The human isn't asked for permission for every ticket, but they have a "Window of Intervention" to stop the process if the AI starts making mistakes.

from google import genai
from dotenv import load_dotenv
import time

load_dotenv(override=True)

client = genai.Client()

MODEL_NAME = "gemini-2.5-flash"

def hotl_support_monitor(tickets):
    """On-the-Loop: Human monitors AI decisions in real time and can veto."""
    print("System active. Monitoring AI actions... (Press Ctrl+C to PAUSE/VETO)")

    for i, ticket in enumerate(tickets):
        while True:
            try:
                response = client.models.generate_content(
                    model=MODEL_NAME,
                    contents=f"Categorize this ticket (Billing/Tech/Sales): {ticket}"
                )
                category = response.text.strip()

                print(f"[Log {i+1}] Ticket: {ticket[:30]}... -> Action: Tagged as {category}")

                time.sleep(2)  # window of intervention: supervisor can press Ctrl+C here
                break          # ticket handled; move on to the next one

            except KeyboardInterrupt:
                print(f"\n[VETO] Human supervisor has paused the system on ticket: {ticket}")
                action = input("Should we (C)ontinue or (S)kip this ticket? ")
                if action.lower() == 's':
                    break  # skip this ticket
                # otherwise retry the same ticket

tickets = ["My bill is too high", "The app keeps crashing", "How do I buy more?"]
hotl_support_monitor(tickets)

3. Human-out-of-the-Loop (HOOTL)

The AI handles the entire process from start to finish. Human intervention only happens after the fact during an audit or a weekly performance review. This is the goal for high-volume, low-risk tasks.

When to use it:

  • Spam filtering.
  • Translating massive databases of product descriptions.
  • Basic data cleaning and formatting.

Code Example

This script takes customer reviews and summarizes them into a report without ever stopping to ask a human for help.

from google import genai
from dotenv import load_dotenv

load_dotenv(override=True)

client = genai.Client()

MODEL_NAME = "gemini-2.5-flash"

def hootl_batch_processor(data_list):
    """Human-out-of-the-Loop: AI processes batch independently; human reviews final report."""
    print(f"Starting HOOTL process: {len(data_list)} items to process.")
    final_report = []

    for item in data_list:
        response = client.models.generate_content(
            model=MODEL_NAME,
            contents=f"Extract key sentiment (Happy/Sad/Neutral): {item}"
        )
        final_report.append({"data": item, "sentiment": response.text.strip()})

    return final_report

reviews = ["Great food!", "Slow service", "Expensive but worth it"]
report = hootl_batch_processor(reviews)
print("Final Report:", report)

Which one should you build?

Workflow   Interaction Level   Human Effort   Latency (Speed)   Risk Tolerance
HITL       Active              High           Slow              Zero tolerance (high risk)
HOTL       Passive             Medium         Medium            Managed risk (scale + safety)
HOOTL      None                Low            Very fast         Low risk (high volume)
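The decision table above can be encoded as a small routing helper. The risk-tolerance labels below are illustrative assumptions, not fixed rules; tune them to your own domain:

```python
def choose_workflow(risk_tolerance: str) -> str:
    """Map a task's risk tolerance to a human-oversight pattern.

    risk_tolerance: "zero" (high-stakes), "managed" (scale + safety),
    or "low" (high-volume, auditable after the fact).
    """
    if risk_tolerance == "zero":
        return "HITL"   # a human checkpoints every output
    if risk_tolerance == "managed":
        return "HOTL"   # a human monitors a dashboard and can veto
    return "HOOTL"      # fully autonomous, reviewed in periodic audits

print(choose_workflow("zero"))     # legal/medical documents -> HITL
print(choose_workflow("managed"))  # live moderation -> HOTL
print(choose_workflow("low"))      # spam filtering -> HOOTL
```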

Swift On Server's

2025-12-25 01:47:51

Vapor Drawing Backend - Project Overview

A collaborative note-taking backend built with Vapor (Swift) that supports real-time collaboration, authentication, and drawing strokes persistence.

Tech Stack

  • Framework: Vapor 4.115.0
  • Database: MongoDB with Fluent ORM
  • Authentication: JWT (JSON Web Tokens)
  • Real-time: WebSockets
  • Language: Swift 6.0

Features Implemented

1. Authentication System

JWT-based authentication with user registration and login functionality.

// User model with email and password
final class User: Model, Content, Authenticatable {
    static let schema = "users"

    @ID(key: .id)
    var id: UUID?

    @Field(key: "email")
    var email: String

    @Field(key: "password_hash")
    var passwordHash: String
}

Endpoints:

  • POST /api/v1/auth/register - Register new user
  • POST /api/v1/auth/login - Login user
  • GET /api/v1/auth/me - Get current user (protected)
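A quick way to exercise these endpoints from the command line. This is a sketch: the base URL, the `email`/`password` field names (inferred from the User model), and the shape of the login response are assumptions, so adjust to your deployment:

```shell
#!/bin/sh
# Assumed base URL -- adjust host and port to your Vapor deployment.
BASE="http://localhost:8080/api/v1/auth"

# Register a new user (JSON field names assumed from the User model).
curl -X POST "$BASE/register" \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "secret123"}'

# Log in; the response is expected to contain a JWT.
curl -X POST "$BASE/login" \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "secret123"}'

# Call the protected endpoint with the token from the login response.
curl "$BASE/me" -H "Authorization: Bearer <paste-token-here>"
```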

2. JWT Middleware

Custom authentication middleware that verifies JWT tokens and protects routes.

struct AuthMiddleware: AsyncMiddleware {
    func respond(to request: Request, chainingTo next: any AsyncResponder) async throws -> Response {
        let payload: UserToken = try await request.jwt.verify(as: UserToken.self)

        guard let user = try await User.find(payload.userID, on: request.db) else {
            throw Abort(.unauthorized, reason: "User not found")
        }
        request.auth.login(user)

        return try await next.respond(to: request)
    }
}

3. MongoDB with Fluent ORM

Database configuration using FluentMongoDriver with automatic migrations.

// Configure MongoDB
try app.databases.use(
    .mongo(
        connectionString: Environment.get("MONGODB_URI") ?? "mongodb://localhost:27017"
    ),
    as: .mongo
)

// Run migrations
app.migrations.add(User.Migration())
app.migrations.add(NotesModel.Migration())

4. Notes Management

Full CRUD operations for notes with drawing strokes support.

final class NotesModel: Model, Content {
    @Field(key: "title")
    var title: String

    @Field(key: "strokes")
    var strokes: [DrawingStroke]

    @Parent(key: "user_id")
    var user: User
}

struct DrawingStroke: Codable {
    let points: [DrawingPoint]
    let color: DrawingColor
    let width: Double
    let timestamp: Date
}

Endpoints:

  • POST /api/v1/notes - Create note
  • GET /api/v1/notes - Get all notes
  • GET /api/v1/notes/get/:id - Get single note
  • PUT /api/v1/notes/:id - Update note
  • DELETE /api/v1/notes/:id - Delete note

5. Real-time WebSocket Collaboration

WebSocket manager for real-time collaborative editing with note session management.

final class WebSocketManager {
    private var connections: [UUID: WebSocket] = [:]
    private var noteCollaborators: [UUID: Set<UUID>] = [:]

    func joinNoteSession(noteID: UUID, userID: UUID)
    func leaveNoteSession(noteID: UUID, userID: UUID)
    func broadcastToNote(noteID: UUID, message: String, excludeUserID: UUID?)
}

WebSocket Features:

  • Join/leave note sessions
  • Real-time stroke updates
  • Broadcast to all collaborators (excluding sender)
  • Personal messaging

Endpoint: WS /api/v1/auth/handleInvite (protected)

6. Note Sharing

Share notes via JWT tokens with external users.

// Share token endpoint
GET /api/v1/notes/shared/:shareToken

// Verifies JWT token and returns note
let payload = try await req.jwt.verify(shareToken, as: ShareTokenPayload.self)

Project Structure

Sources/NoterPlayBackend/
├── Controllers/
│   ├── AuthenticationController.swift
│   ├── NotesController.swift
│   └── InviteController.swift
├── Middleware/
│   └── AuthMiddleware.swift
├── Models/
│   ├── User.swift
│   ├── NotesModel.swift
│   ├── UserToken.swift
│   ├── ShareTokenPayload.swift
│   └── InviteModel.swift
├── WSManager/
│   └── WebSocketManager.swift
├── configure.swift
└── routes.swift

Getting Started

# Install dependencies
swift package resolve

# Run the server
swift run

# With Docker
docker-compose up

Vapor Drawing Github

There is still a lot to fix up; this is just a small prototype for exploration.

Built with ❤️ using Vapor and Swift

A Practical Guide to Troubleshooting Git Push Errors in Terraform Projects

2025-12-25 01:46:53

While working on a Terraform project, I ran into several Git push errors that initially felt confusing and frustrating. However, each error turned out to be a valuable learning moment. This article documents those issues step by step, explains why they happen, and shows how to fix them correctly.

If you’re learning Terraform, DevOps, or Infrastructure as Code, chances are you’ll encounter these same problems.

1️⃣ GitHub Rejects Large Files (>100 MB)

Error:

File .terraform/...terraform-provider-aws is larger than 100 MB

Why this happens
The .terraform/ directory was committed. This directory contains Terraform provider binaries, which can be hundreds of megabytes in size and should never be version-controlled.

Correct Fix
Add the following to .gitignore

.terraform/
*.tfstate
*.tfstate.backup

If the file already exists in Git history, the cleanest approach for new projects is to reinitialise the repository:

rm -rf .git
git init
git add .
git commit -m "Initial commit"
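If the repository's history is worth keeping, an alternative sketch is to stop tracking the directory instead of reinitialising. Note this only removes the files from future commits; large blobs already baked into old commits still require a history rewrite (e.g. with git filter-repo) before the push will succeed:

```shell
#!/bin/sh
# Stop tracking .terraform/ while keeping the files on disk.
echo ".terraform/" >> .gitignore
git rm -r --cached .terraform/
git add .gitignore
git commit -m "Stop tracking Terraform provider binaries"
```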

2️⃣ GitHub Push Protection Blocks Secrets

Error:

Push cannot contain secrets (AWS Access Key detected)

Why this happens
AWS credentials were hardcoded inside provider.tf. GitHub automatically scans commits for secrets and blocks pushes to prevent credential leaks.

What Not to Do

provider "aws" {
  access_key = "AKIA..."
  secret_key = "xxxx"
}

Correct Approach

provider "aws" {
  region = "us-east-1"
}
(Alternatively, keep the credentials in a separate file and add that file to .gitignore.)

Provide credentials securely using:

  • aws configure

  • Environment variables

  • IAM roles (recommended for EC2, CloudShell, CI/CD)

⚠️ If credentials were committed, they should be rotated immediately, even if the push was blocked.
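For the environment-variable option, the Terraform AWS provider picks up the standard AWS variables automatically, so nothing sensitive ever touches a tracked file. The values below are obvious placeholders:

```shell
#!/bin/sh
# Standard AWS environment variables, read automatically by the
# Terraform AWS provider -- no keys in provider.tf, nothing to commit.
export AWS_ACCESS_KEY_ID="AKIA...your-key-id..."
export AWS_SECRET_ACCESS_KEY="...your-secret-key..."
export AWS_DEFAULT_REGION="us-east-1"

terraform plan
```

These last only for the current shell session, which is usually what you want on a shared or ephemeral machine.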