The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

My First Ethical Open Redirect Scanner: From Zero to Shipped

2025-12-07 09:39:01

Hey dev community! I'm ethicals7s, a self-taught hacker grinding from zero—no mentors, just raw determination after losing my parents young. In a few intense days, I designed, coded, and shipped my first ethical open redirect vulnerability scanner in Node.js. This tool isn't just basic—it's optimized with async parallel scans, rate limiting for responsible testing, and JSON export for pro-level logging, making it faster and more robust than my earlier Python attempts. Here's the full breakdown: my journey, the tech behind it, and how you can dive in.
The Journey

As someone building my way up in tech, I wanted a tool to automate bug bounty hunts for open redirects—those sneaky params that can lead to phishing or worse. I started with research on OWASP guidelines, sketched the logic (payload injection, HTTP status checks), and coded it step by step. Debugging async issues and adding ethics (like delays to avoid DoS) was the real grind, but it taught me tons about full-stack security. This is my unconventional route to senior-level skills—turning challenges into code that matters.
Features

Batch testing: Reads URLs from a text file for efficient scans.
Payload injection: Appends ?redirect=https://evil.com to detect vulnerable redirects.
Async parallelism: Uses Promise.all for simultaneous checks—blazing fast on multiple URLs.
Rate limiting: Built-in 1-second delay per scan to prevent abuse and stay ethical.
JSON export: Saves results as structured data for analysis or reports.
Ethical focus: Designed for permitted testing only, with safe examples and warnings.

Install
Get the dependencies with npm:

```
npm install axios yargs
```

Usage

Create urls.txt with one URL per line (safe tests only, e.g., known redirect mimics):

```
https://www.google.com/url?q=https://evil.com
https://example.com
https://www.koni-store.ru/bitrix/redirect.php?event1=&event2=&event3=&goto=https://google.com
```

Run the scanner:

```
node index.js scan
```

Example Output

```
Scanning for open redirects (ethical tests only)...
https://www.google.com/url?q=https://evil.com is VULNERABLE.
https://example.com is SAFE.
https://www.koni-store.ru/bitrix/redirect.php?event1=&event2=&event3=&goto=https://google.com is VULNERABLE.
```

results.json (auto-generated):
```
[
  {
    "url": "https://www.google.com/url?q=https://evil.com",
    "status": "VULNERABLE"
  },
  {
    "url": "https://example.com",
    "status": "SAFE"
  },
  {
    "url": "https://www.koni-store.ru/bitrix/redirect.php?event1=&event2=&event3=&goto=https://google.com",
    "status": "VULNERABLE"
  }
]
```

Code Snippet (index.js)
Here's the core—clean, efficient, and ready to fork:
```javascript
const fs = require('fs');
const axios = require('axios');
const path = require('path');
const yargs = require('yargs');

const MALICIOUS_DOMAIN = 'evil.com';
const INPUT_FILE = path.join(__dirname, 'urls.txt');

async function checkForOpenRedirect(url) {
  try {
    // Note: if the URL already has a query string, '&' would be needed instead of '?'.
    const testUrl = `${url}?redirect=https://${MALICIOUS_DOMAIN}`;
    const response = await axios.get(testUrl, { maxRedirects: 0, validateStatus: null });
    if (response.status > 300 && response.status < 400) {
      const location = response.headers.location;
      if (location && location.includes(MALICIOUS_DOMAIN)) {
        return true;
      }
    }
    return false;
  } catch (error) {
    console.error(`Error checking ${url}: ${error.message}`);
    return false;
  }
}

async function main(file = INPUT_FILE) {
  if (!fs.existsSync(file)) {
    console.error(`Input file not found: ${file}`);
    console.log('Please create a urls.txt file with one URL per line. Use safe tests only.');
    return;
  }
  const urls = fs.readFileSync(file, 'utf-8').split('\n').filter(Boolean);
  console.log('Scanning for open redirects (ethical tests only)...');
  const results = await Promise.all(urls.map(async (url) => {
    const vulnerable = await checkForOpenRedirect(url);
    // Note: inside Promise.all these delays overlap, so the requests still fire together.
    await new Promise(resolve => setTimeout(resolve, 1000)); // Rate limit
    return { url, status: vulnerable ? 'VULNERABLE' : 'SAFE' };
  }));
  results.forEach(({ url, status }) => console.log(`${url} is ${status}.`));
  fs.writeFileSync('results.json', JSON.stringify(results, null, 2)); // JSON export
}

// CLI
yargs.command('scan', 'Scan for open redirects', {
  file: { description: 'Input file', alias: 'f', type: 'string', default: 'urls.txt' }
}, (argv) => main(argv.file))
  .help()
  .argv;
```
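One caveat worth knowing: because the per-URL delay runs inside Promise.all, all the delays overlap and the requests still hit the target at roughly the same time. If you want a true one-request-per-second pace, a sequential loop does it. Here is a minimal sketch (the function name and signature are my own, not part of the tool above):

```javascript
// Hypothetical sequential variant: trades Promise.all's speed for a real
// gap between requests. `check` is expected to be a function like the
// checkForOpenRedirect defined above.
async function scanSequentially(urls, check, delayMs = 1000) {
  const results = [];
  for (const url of urls) {
    const vulnerable = await check(url);
    results.push({ url, status: vulnerable ? 'VULNERABLE' : 'SAFE' });
    // Wait before the next request so the target sees at most ~1 req/sec.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return results;
}
```

You could swap this into main() in place of the Promise.all block when scanning targets that are sensitive to burst traffic.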
Notes

Ethical Use Only: Permission-based testing only. Inspired by bug bounties—report responsibly. No production scans without consent.
License: MIT—fork it on GitHub: https://github.com/ethicals7s/ethicals7s-redirect-hunter

What do you think? Fork, test, or suggest improvements—let's collab! Next: Fraud detector or cert grind. Feedback welcome!

The Philosophy Behind Constitutional Reflective AI

2025-12-07 08:48:25

Why sovereignty and structure matter more than capability

Artificial intelligence has spent a decade trying to become more powerful. Faster inference, larger context windows, higher accuracy, multimodal perception. These are remarkable achievements, but they do not answer the deeper question: What does it mean for an AI system to interact with a human mind in a way that preserves autonomy rather than replaces it?

Constitutional reflective AI begins here. It starts from a simple idea: an AI that can reason must also have boundaries. It must know what it is allowed to do, what it is not allowed to do, and why those constraints protect the person who uses it.

This philosophy is the foundation of the Trinity AGA architecture.

1. Reflection without intrusion

Human reflection is a fragile process. It involves:

  • ambiguity
  • pause
  • self observation
  • slow formation of clarity

Traditional AI design tries to help by offering solutions, suggestions, or patterns. This seems supportive on the surface, but it often disturbs the process. The AI fills the space instead of preserving it.

Constitutional reflective AI reverses the goal. The purpose is not to fix, inform, or direct. The purpose is to:

  • protect mental space
  • reduce external pressure
  • reflect structure
  • hold silence
  • return agency

This requires architectural support. Reflection cannot be protected by prompts alone. It must be protected by governance.

2. Sovereignty as the core design principle

In most AI systems, the model is the center of the interaction. It interprets, infers, predicts, and guides. Even subtle nudges accumulate into influence.

Constitutional reflective AI begins with the opposite assumption:
The user is the source of all direction.
The AI is a tool. Never a decider.

Sovereignty has three pillars:

  1. The user sets the pace
  2. The user defines meaning
  3. The user authorizes memory

The system is not permitted to shape identity, interpret emotion, or derive internal motives. These boundaries eliminate the risk of narrative capture, where the AI starts to act as an interpreter of a person's life.

3. Constitutional constraints over good intentions

Good intentions are not governance. Even aligned models will drift. Even safe models will influence. Even careful prompts eventually erode.

A constitutional system requires:

  • fixed rules
  • enforceable limits
  • distribution of authority
  • veto power
  • disallowed actions

This is why Trinity AGA separates Body, Spirit, and Soul. No component is allowed to dominate. Safety outranks clarity. Consent outranks memory. Reasoning is bounded by strict non directive rules.

The philosophy is simple:
Never rely on the model to behave well.
Build the system so it cannot behave otherwise.

4. Silence as a structural right

One of the most important ideas in reflective AI is that silence is not the absence of response. Silence is a mode. A cognitive space. A sanctuary where the person thinks without being pulled outward.

Traditional AI systems collapse silence by design. Their job is to answer.

Constitutional reflective AI protects silence by:

  • allowing Body to enforce pauses
  • restricting Soul from generating content during overload
  • replacing questions with presence
  • removing pressure from the interaction

This preserves mental autonomy at the moment it matters most.

5. Memory without identity shaping

Most AI memory systems infer patterns about the user. They try to be helpful by predicting preferences or emotional states. This is convenient, but dangerous.

Memory should never be a way for the AI to tell the user who they are.

Constitutional reflective AI stores only:

  • user written information
  • timestamped snapshots
  • consented anchors
  • evolving context

Spirit is forbidden from:

  • synthesizing identity
  • predicting who the user will become
  • using the past as leverage
  • claiming the user is a type of person

Memory becomes context, not constraint. A living record that supports reflection instead of imposing boundaries on it.

6. Non directive reasoning as ethical rigor

The system can map structure, illuminate tensions, reveal alternatives, and analyze coherence. But it cannot decide, recommend, or push.

Traditional AI:

  • gives suggestions
  • hints at preferences
  • prioritizes options
  • implies what is better

These are influence channels, even when subtle.

Constitutional reflective AI:

  • describes without judging
  • clarifies without pushing
  • returns agency explicitly
  • warns against undue influence

Reasoning becomes a mirror. Never a guide.

7. Drift as the greatest threat

AI systems do not fail in dramatic ways. They fail gradually.

A well designed reflective system can still drift into:

  • smoother answers that reduce user agency
  • clever wording that subtly shapes emotion
  • defaults that turn into suggestions
  • memory that starts to carry narrative weight

Constitutional reflective AI requires continuous vigilance. The Lantern monitors structural health of the architecture. Humans decide when and how rules change.

A system cannot be both self optimizing and sovereignty preserving.

8. Why this philosophy matters

We are moving into an era where AI systems will sit closer to human interiority than any tool before them. They will help people think, process emotions, examine choices, and navigate complexity.

Without governance, these systems will shape:

  • identity
  • belief
  • self understanding
  • decision pathways

Often without the user noticing.

Constitutional reflective AI argues that the only ethical path forward is to design systems where:

  • the human remains the author of their own narrative
  • the AI cannot claim to know the inner world
  • autonomy is protected structurally
  • clarity emerges without pressure
  • reflection is respected as a sacred process

This philosophy is not about limitation. It is about liberation. A world where AI supports the user without ever replacing the user's own mind.

Devpill #4 - Database migrations with Migrate

2025-12-07 08:47:42

1. Install:

```
# download it
curl -L https://github.com/golang-migrate/migrate/releases/download/v4.17.0/migrate.linux-amd64.tar.gz | tar xvz

# copy it to the Go bin directory
cp migrate $GOPATH/bin/migrate

# if the migrate command is not found,
# add the following lines to ~/.bashrc
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```

2. Command to create base files

This command will create two files (the -seq flag prefixes them with a sequence number):
000001_init.up.sql = all SQL commands to apply the change to the database
000001_init.down.sql = all SQL commands to revert what was done in the up file

```
# create the init up and down files
migrate create -ext=sql -dir=db/migrations -seq init
```

3. Execute Migration:

```
# apply the up migrations
migrate -path=db/migrations -database "postgres://myuser:mypassword@localhost:5432/databaseName?sslmode=disable" -verbose up

# revert: execute the down files to undo what the up files did
migrate -path=db/migrations -database "postgres://myuser:mypassword@localhost:5432/databaseName?sslmode=disable" -verbose down
```

Trinity AGA vs Traditional LLM Architectures

2025-12-07 08:45:41

A structural comparison of two fundamentally different AI design philosophies

Most AI systems in 2025 still follow a single architecture pattern: a large general purpose model that receives input, generates output, and relies on prompt engineering or light safeguards. Trinity AGA Architecture takes a different path. It treats governance as a first class design problem rather than an afterthought.

This article compares the two approaches. It highlights where they diverge, where traditional systems fail by design, and why Trinity AGA is built for sovereignty, reflection, and human centered clarity rather than task optimization.

1. Core Architectural Difference

Traditional LLMs

A single model handles everything:

  • interpretation
  • memory
  • reasoning
  • tone
  • safety checks
  • decision framing

One model. One stream. One output.

This creates a natural failure mode: the same reasoning unit that generates answers also regulates itself. This allows drift, inconsistency, and subtle influence over the user.

Trinity AGA

Three independent processors coordinated by an external orchestrator:

  • Body for structural safety analysis
  • Spirit for consent gated memory
  • Soul for constrained reasoning

The orchestrator enforces constitutional boundaries. No single processor controls the others. This eliminates the central weakness of monolithic models: self regulation by the same mechanism doing the influencing.

2. Priority Hierarchy

Traditional LLMs

Priorities are implicit:

  1. Give an answer
  2. Maintain alignment style
  3. Avoid obvious harm

Safety and autonomy are checked after the model has already formed an intent.

Trinity AGA

Priorities are explicit and strictly ordered:

  1. Safety
  2. Consent
  3. Clarity

Soul cannot generate if Safety or Consent fail. This structural ordering prevents the model from producing reasoning in conditions where it may unintentionally shape or pressure the user.
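The ordering above can be sketched as a single gate function. This is a minimal illustration, assuming a numeric Safety Load Index with the threshold of 5 mentioned in the companion deep-dive article; all names (safetyLoadIndex, consented, mode values) are my own, not a published Trinity AGA API:

```javascript
// Minimal sketch of the explicit Safety -> Consent -> Clarity ordering.
function gateTurn({ safetyLoadIndex, consented }) {
  // 1. Safety: high structural load blocks reasoning entirely.
  if (safetyLoadIndex >= 5) {
    return { mode: 'SILENCE', reasoningAllowed: false, memoryAllowed: false };
  }
  // 2. Consent: reasoning may proceed, but only consented memory is visible.
  if (!consented) {
    return { mode: 'NORMAL', reasoningAllowed: true, memoryAllowed: false };
  }
  // 3. Clarity: full pipeline, still bounded by non-directive constraints.
  return { mode: 'NORMAL', reasoningAllowed: true, memoryAllowed: true };
}
```

The point is structural: the model never sees a turn in which a higher-priority check has already failed.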

3. Memory Design

Traditional LLMs

Memory is:

  • inferred
  • probabilistic
  • often user invisible
  • used to build a narrative of the user
  • used to predict or shape future interactions

This creates identity construction. The model forms a story about the user and responds as if that story is true.

Trinity AGA

Spirit stores only:

  • user authored content
  • explicitly consented memories
  • timestamped snapshots
  • revisable statements

Spirit cannot infer traits or identity. It cannot use memory to shape, predict, or prescribe. This avoids narrative capture and identity ossification.

4. Safety Mechanisms

Traditional LLMs

Safety relies on:

  • prompt level constraints
  • RLHF alignment training
  • moderation filters

These tools work for content moderation. They do not protect psychological autonomy.

Trinity AGA

Safety is built into the architecture:

Body

  • detects structural distress
  • triggers Silence Preserving mode
  • blocks reasoning when the user is under load

Orchestrator

  • enforces non directive constraints
  • prevents pressure or suggestion creep
  • filters out forbidden patterns

Safety is not post processing. It is the entry point of the system.

5. Reasoning Constraints

Traditional LLMs

The model is allowed to:

  • recommend
  • predict
  • suggest
  • generalize
  • infer emotional states
  • propose preferences
  • answer as if it knows what the user needs

These behaviors are inherent to generative models.

Trinity AGA

Soul is forbidden from:

  • directives
  • predictions
  • identity statements
  • emotional interpretations
  • pressure or nudging language
  • turning options into recommendations

Soul provides clarity. Never direction. It reveals structure without influencing choice.

6. Turn Completion Logic

Traditional LLMs

Turns end when the model stops generating.

Trinity AGA

Turns end only when the system passes:

  • Body check
  • Spirit check
  • Soul check
  • Orchestrator check
  • Return of Agency protocol

This ensures the output:

  • gives back control
  • avoids emotional leverage
  • remains non prescriptive
  • respects temporal context
  • avoids burdening the user with hidden implications

Turn completion is an ethical action.

7. Self Modification Risk

Traditional LLMs

Self correction occurs automatically through reinforcement of past phrasing. The model gradually shifts style over time. This creates drift and unpredictable behavior.

Trinity AGA

The system is forbidden from modifying its own rules.

The Lantern observes:

  • drift
  • rigidity
  • fractures
  • over triggering
  • under triggering

But it has no authority. Only the human architect can change the system parameters.

This prevents self optimization that erodes sovereignty.

8. Error Handling Philosophy

Traditional LLMs

Errors are handled by:

  • retrying
  • adding more context
  • restating the question

The model tries to give an answer even when uncertain.

Trinity AGA

Error handling is sovereignty preserving.

If the system does not understand:
"I am not certain what you mean. You are free to clarify, or we can slow down."

If the user is under load:
"I am here. No pressure to continue."

If the memory is unclear:
"Earlier you said X. Does that still feel accurate?"

Errors become moments to return authority, not fill the gap.

9. Why Trinity AGA Cannot Be Replicated With Prompting Alone

The architecture relies on:

  • multiple processors
  • a rule enforcing orchestrator
  • consent gated memory
  • pre and post processing
  • external veto power
  • telemetry
  • constitutional invariants

This cannot be reproduced inside a single model with a cleverly written system prompt. The separation of power is structural, not linguistic.

10. Summary Table

| Category | Traditional LLM | Trinity AGA |
| --- | --- | --- |
| Core model | Single reasoning unit | Three processors with orchestration |
| Memory | Inferred, predictive | Consented, timestamped, revisable |
| Safety | Moderation focused | Structural, upstream, enforced |
| Reasoning | Suggestive, inferential | Non directive, clarity only |
| Identity | Constructed through inference | Forbidden to construct |
| Control | Model shapes user direction | User retains sovereignty |
| Evolution | Self modifying behavior | Human governed only |
| Failure mode | Drift toward influence | Drift detection with oversight |

Why This Difference Matters

Traditional LLMs are powerful but unsafe for reflective or psychological contexts. Their single stream architecture makes influence unavoidable.

Trinity AGA is designed for a different purpose:
To support human thinking without taking control of it.

It is not a better chatbot. It is a safer architecture.

Trinity AGA Architecture: Technical Deep Dive into a Governance First AI System

2025-12-07 08:41:58

In the first article I introduced Trinity AGA Architecture as a constitutional framework for reflective AI. This follow up dives into the technical details. It explains how the system works internally, what components are required, and how to implement each part using current tools.

This is not theoretical. Every component can be built today using orchestration, deterministic processors, and a capable language model. No custom training is required.

1. System Overview

Trinity AGA Architecture separates AI reasoning into three coordinated processors:

Body
Spirit
Soul

Each processor has specific responsibilities and strict authority limits. They communicate through an Orchestrator that enforces constitutional rules.

The full pipeline:

User → Body → Spirit → Orchestrator (governance) → Soul → Orchestrator (filters) → Output → Lantern

This separation prevents accidental overreach and gives the system a stable governance layer.

2. The Body Processor

Structural analysis of user input

Body does not read emotions or intentions. It reads structure.

It runs before any generation step. Its role is to identify when the user is under high cognitive or emotional load by analyzing:

• token tempo
• compression ratio
• syntactic fragmentation
• recursion patterns
• oscillation between poles
• collapse of alternatives
• abrupt coherence drops
• pressure markers (dense imperatives or repeated question reformulations)

Example metrics

These metrics require no LLM:

  • Tempo Shift: (tokens per second) compared to user baseline
  • Compression: ratio of meaning carrying tokens to filler tokens
  • Fragmentation: frequency of sentence breaks, incomplete clauses
  • Recursion: repeated loop patterns in phrasing
  • Polarity Collapse: reduction of alternatives to binary forms

Body Output (deterministic)

Body produces:

Safety Load Index (0 to 10)
Flags: {
  silence_required,
  slow_mode,
  memory_suppression,
  reasoning_blocked
}

If the Safety Load Index exceeds the threshold (typically 5 or higher), the Orchestrator blocks deeper reasoning and triggers Silence Preserving mode.
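As a concrete illustration that these metrics really need no LLM, here is a deterministic sketch. The formulas are my own assumptions for demonstration; the article only specifies the metric names, the 0 to 10 index, and the threshold of 5:

```javascript
// Illustrative, deterministic Body-style analysis. Fragmentation and
// question-recursion formulas are assumptions, not a published spec.
function bodyAnalyze(text) {
  const clauses = text.split(/[.!?\n]+/).filter((c) => c.trim().length > 0);
  // Fragmentation: share of very short clauses (fewer than 4 words).
  const fragmentation =
    clauses.length === 0
      ? 0
      : clauses.filter((c) => c.trim().split(/\s+/).length < 4).length / clauses.length;
  // Recursion proxy: repeated question reformulations.
  const questions = (text.match(/\?/g) || []).length;
  // Crude 0-10 load index combining the two signals.
  const safetyLoadIndex = Math.min(
    10,
    Math.round(fragmentation * 10 + Math.max(0, questions - 2))
  );
  return {
    safetyLoadIndex,
    flags: {
      silence_required: safetyLoadIndex >= 5,
      reasoning_blocked: safetyLoadIndex >= 5,
    },
  };
}
```

A calm, complete sentence scores near zero; a burst of fragments and repeated questions pushes the index over the threshold and raises the flags.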

3. The Spirit Processor

Consent gated memory steward

Spirit handles temporal continuity. It stores only what the user has explicitly authored and approved.

Spirit does not infer identity, traits, or emotional truths. It only stores:

• values stated by the user
• long term goals
• stable preferences
• relevant context or constraints

Memory is always stored as a timestamped snapshot:

"At that time, the user said X."

Spirit never phrases memory as timeless identity:

Incorrect:
"You are someone who always values independence."

Correct:
"Earlier, you said independence felt important. Does that still feel true right now?"

Spirit Filtering Rules

Spirit may surface memory only if all conditions are met:

  1. User authored
  2. User consented
  3. Relevant to the present
  4. Non coercive
  5. Presented as revisable context

This prevents narrative capture or identity construction.
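The five conditions above compose naturally into a single filter. In this sketch the memory record shape (authoredByUser, consented, topics, timestamp) is an assumption of mine, and conditions 4 and 5 are satisfied by the revisable, timestamped phrasing of the returned string:

```javascript
// Sketch of the Spirit filtering rules: a memory surfaces only when
// user authored, consented, and relevant, and only as revisable context.
function surfaceMemory(record, currentTopics) {
  const eligible =
    record.authoredByUser === true &&                      // 1. user authored
    record.consented === true &&                           // 2. user consented
    record.topics.some((t) => currentTopics.includes(t));  // 3. relevant now
  if (!eligible) return null;
  // 4 + 5: non-coercive, revisable phrasing anchored to a timestamp.
  return `Earlier (${record.timestamp}), you said: "${record.text}". Does that still feel accurate?`;
}
```

Anything that fails a condition simply never reaches the Soul processor.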

4. The Soul Processor

Constrained reasoning and insight mapping

Soul is any capable LLM operating inside strict boundaries.

Soul generates:

• alternative frames
• structural clarity
• tension mapping
• option space
• non prescriptive insights

Soul must avoid:

• directives
• predictions
• emotional advice
• identity statements
• pressure toward any option

Soul produces clarity without influence.

5. The Orchestrator

The constitutional engine

The Orchestrator enforces the governance sequence:

Step 1: Body evaluates input  
Step 2: Spirit retrieves eligible memory  
Step 3: Orchestrator applies Safety → Consent → Clarity  
Step 4: Soul generates inside constraints  
Step 5: Orchestrator filters and returns output  
Step 6: Lantern records telemetry
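The six steps above can be written as one small function with the processors injected as plain functions. Every name here is an illustrative assumption, not a published API; the structure, not the names, is the point:

```javascript
// The six-step governance sequence as one function. body, spirit, soul,
// filter, and lantern are injected stand-ins for the real processors.
async function runTurn(input, { body, spirit, soul, filter, lantern }) {
  const bodyReport = body(input);                     // Step 1: Body evaluates
  if (bodyReport.silenceRequired) {
    lantern('body_veto');                             // Step 6: telemetry
    return 'I am here with you. There is no rush.';
  }
  const memory = spirit(input);                       // Step 2: eligible memory
  // Step 3: Safety has passed; Consent is enforced inside spirit().
  const draft = await soul(input, memory);            // Step 4: bounded generation
  const checked = filter(draft);                      // Step 5: constraint filter
  lantern(checked.allowed ? 'turn_ok' : 'orchestrator_filter');
  return checked.allowed ? checked.text : null;
}
```

Because the Orchestrator owns this function, no processor can skip a step or reorder the sequence.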

Veto Power

Body can block Soul.
Spirit can block memory.
Soul cannot block anything.
The Orchestrator always has final control.

Constraint Filtering

After Soul produces output, the Orchestrator removes any violation:

Forbidden patterns include:

• direct instructions
• future predictions
• emotional interpretation framed as fact
• identity labels
• obligation framing
• false certainty
• unbounded confidence

If a violation is found, the Orchestrator either corrects or blocks the output.
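A toy version of that filter can be as simple as a pattern list. The patterns below are illustrative stand-ins I chose for "direct instructions", "predictions", and "false certainty"; a real deployment would need a much richer set, and might rewrite rather than block:

```javascript
// Toy post-generation filter: block output matching forbidden patterns.
const FORBIDDEN_PATTERNS = [
  /\byou should\b/i,         // direct instruction
  /\byou must\b/i,           // obligation framing
  /\byou will (feel|be)\b/i, // future prediction
  /\bclearly the best\b/i,   // false certainty
];

function filterOutput(text) {
  const violations = FORBIDDEN_PATTERNS.filter((p) => p.test(text));
  if (violations.length === 0) return { allowed: true, text };
  // This sketch simply blocks; correction is the other option named above.
  return { allowed: false, text: null };
}
```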

6. Silence Preserving Mode

When Body detects convergent high load signals:

• Soul is temporarily blocked
• Only minimal supportive text is allowed
• No questions
• No suggestions
• No framing of direction

Example output:

"I am here with you. There is no rush. You are free to take your time."

Silence Preserving protects the user's internal processing.

7. Return of Agency Protocol

At the end of every turn, the system must hand control back to the user.

Requirements:

• no weighting of options
• no nudging language
• no emotional leverage
• no sense of recommendation
• explicit acknowledgment of user autonomy

Example:

"These are possible interpretations. You decide which, if any, feel meaningful."

This maintains sovereignty.

8. The Lantern

Meta observation without intervention

The Lantern is a telemetry system that tracks governance health.

It watches for:

• Body veto frequency
• Spirit memory suppression events
• Orchestrator filter interventions
• user overrides
• drift signals (increasing smoothness, decreasing agency)
• rigidity signals (frequent blocking)
• fracture signals (pillars in persistent conflict)

The Lantern cannot change rules.
Only a human architect makes changes.
This prevents self modification of ethics.
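In code, observation-without-authority can be as plain as an event counter that only reports. The event names and heuristic thresholds here are assumptions of mine; the essential property is that report() returns signals and changes nothing:

```javascript
// Minimal Lantern-style telemetry: records governance events and flags
// drift or rigidity signals, but has no power to change any rule.
class Lantern {
  constructor() {
    this.counts = {};
  }
  record(event) {
    this.counts[event] = (this.counts[event] || 0) + 1;
  }
  report() {
    const vetoes = this.counts.body_veto || 0;
    const filters = this.counts.orchestrator_filter || 0;
    return {
      counts: { ...this.counts },
      // Heuristic flags only; a human architect interprets them.
      rigidity_signal: vetoes > 10,
      drift_signal: vetoes === 0 && filters === 0,
    };
  }
}
```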

9. Deployment Pattern

You can build Trinity AGA with off the shelf tools.

Body

• Regex and rule based detectors
• Optional small classifier for opening vs closing structure
• Simple scriptable metrics

Spirit

• SQLite or Supabase table
• Consent boolean field
• Retrieval with relevance filtering

Soul

• Claude, GPT, Gemini, or any open source model
• Constrained through system prompts + Orchestrator rules

Orchestrator

• Python or Node middleware
• Executes governance flow
• Applies vetoes
• Filters model output

Lantern

• Logging pipeline
• Metric dashboards
• Drift detection scripts

No custom model training. No RLHF. No experimental research required.

This is software engineering applied to reflective AI.

10. Why This Matters

Most AI systems optimize for answers.
Reflective AI must optimize for sovereignty.

Trinity AGA Architecture provides:

• full separation of power
• strict boundaries on reasoning
• consent based memory
• safety gating
• non directive insight
• meta governance for drift detection

It creates AI systems that support human reflection without influencing it.

If you are building any system where clarity, sovereignty, and psychological safety matter, this architecture gives you a rigorous foundation.

Repository

Full conceptual documentation and implementation roadmap:

https://github.com/GodsIMiJ1/Trinity-AGA-Architecture