The Practical Developer

A constructive and inclusive social network for software developers.

My Search to Become a DevRel: How Community, Teaching, WebAI and Vibe Coding Became My Path

2025-11-24 07:49:17

For years I have been doing the work of a Developer Advocate without ever holding the title.

Not because I was trying to check a career box, but because I genuinely love teaching, building, speaking, and helping people unlock what is possible with the Web. My journey into DevRel did not start with a job description. It started with curiosity, community, and an obsession with sharing what I learn.

This is the story I want to tell.

From Teaching English to Teaching Tech

Long before I ever touched WebGPU, AI APIs, or agentic patterns, I was a teacher.

I taught English to pay for college, and I did not know it at the time, but that experience shaped everything. I learned how to explain complex ideas simply. How to read a room. How to make someone feel capable, even if they are just starting.

When I eventually transitioned into software engineering, I did not stop teaching.
I taught through local meetups, study groups, DMs, and later through tech communities I organized myself. Teaching has always been the throughline of my career.

Falling in Love With the Web Again Because of AI

I have always loved the Web. The openness, the flexibility, the creativity. The Web is the most accessible platform ever created.

But WebAI changed everything for me.

The idea that I could use my existing web knowledge, the same JavaScript and browser APIs I had been mastering for years, to create AI-powered experiences was mind-blowing. It made AI feel native, natural, and ours as web developers.

WebAI gave me a way in.
Vibe coding made the possibilities feel infinite.
And suddenly, I could not stop building.

But I also could not stop sharing.

I started recording videos.
I started writing.
I started speaking, first locally, then nationally, and then internationally.

My first international talk was in Romania, and it changed everything. I realized something important:
This is the work I want to do. This is the work I am already doing.

Becoming a Developer Advocate Before Becoming a Developer Advocate

DevRel is not about clout, stages, or airport lounges.

At its core it is about:

  • Teaching

  • Empowering people

  • Sharing knowledge

  • Building community

  • Connecting ideas with the people who need them

  • Exploring new tools and showing what is possible

And I have been doing all of that for three years, not because it was my job, but because I could not avoid doing it.

I have organized meetups in my city out of passion.
I have brought new technologies to local communities.
I have created videos and tutorials so people can learn faster than I did.
I have spoken at events on my own budget, sometimes using my vacation days to do it.

I did not realize it then, but I was doing DevRel the slow and difficult way.
Out of pure love, not sustainability.

The Burnout: When Passion Is Not Enough

At first it was fine.

But as more invitations came in, as communities grew, as more people asked for help, something shifted. I was doing the work full time without the support or resources of an actual DevRel role.

I was using my free time, weekends, vacation days, and often my own money to show up for communities. Eventually it created a type of burnout that hurts because it comes from something you love.

That is when I realized something important:

If I want to keep teaching, speaking, and enabling builders, I need to do it in a sustainable way. I need to do it as my actual job.

Not as a side passion.
Not as extra work squeezed into the edges of my life.
But as my career.

Because DevRel is not only something I am good at.
It is the way I naturally move through the world.

Why DevRel Is the Path I Want to Commit To

I believe that DevRel is not a performance. It is a service.

I want to serve developer communities by:

  • Teaching

  • Helping developers use the Web as a platform for AI experiences

  • Sharing how I build and experiment in public

  • Bringing powerful tools to places that rarely see them

  • Showing beginners and non-engineers that they can build too

  • Creating content that teaches and inspires

  • Growing communities locally and globally

I have already done all of this. I simply need the chance to do it full time.

Why I Am Sharing This

This blog post is not only a reflection. It is a declaration.

I want to be a Developer Advocate.

Not someday. Not in an abstract way.
I am ready now.

I have built the habits, the skills, the community, and the love for the craft. I simply want the opportunity to keep growing, to keep teaching, to keep building bridges between technology and people.

DevRel is not a title I am chasing.
It is the role that finally matches the work I have been doing and the person I have become.

If You Are Reading This

If you are a DevRel professional, a manager, a founder, or someone who works in community or advocacy, or if you simply know me:

I am open to opportunities.
I am ready to create.
I am ready to contribute.
And more than anything, I am ready to help people build the future of the Web.

The Web taught me everything.
Community carried me forward.
Teaching is how I give back.

Rust in Microservices: Game-Changing Speed and Performance

2025-11-24 07:41:10

When it comes to choosing a programming language for a microservice architecture, developers often face a trade-off: development speed or runtime performance. Rust offers a unique combination of both.

Why Rust for Microservices?

🚀 Incredible Execution Speed

Rust delivers C/C++-level performance, which makes it an ideal choice for high-load microservices:

  • Zero-cost abstractions - high-level constructs do not slow down execution
  • No garbage collector - predictable performance with no garbage collection pauses
  • Compilation to native code - maximum optimization for the target platform

💪 Efficient Resource Usage

In the microservices world, where every service consumes memory and CPU, Rust shows impressive results:

// Example: a simple HTTP server with Axum
use axum::{
    routing::get,
    Router,
};

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(health_check));

    axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn health_check() -> &'static str {
    "OK"
}

Memory consumption comparison:

  • Node.js service: ~50-100 MB baseline
  • Go service: ~10-20 MB
  • Rust service: ~2-5 MB

⚡ Async at Full Power

Rust has one of the most efficient async runtimes thanks to Tokio:

  • Millions of concurrent connections on a single server
  • Minimal context-switching overhead
  • Zero-copy operations for network I/O
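
A minimal sketch of that model, using the same Tokio runtime as the Axum example above: each incoming connection is handled by a lightweight task rather than an OS thread, which is what makes very high connection counts practical. The port and echo logic are placeholders.

// Hypothetical echo server: one Tokio task per connection
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:4000").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Spawning a task is cheap; thousands can run concurrently.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            while let Ok(n) = socket.read(&mut buf).await {
                if n == 0 {
                    break; // connection closed
                }
                if socket.write_all(&buf[..n]).await.is_err() {
                    break;
                }
            }
        });
    }
}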

Real Benefits in Production

1. Lower Infrastructure Costs

When your microservice consumes 10x less memory, it means:

  • Fewer servers in the cluster
  • Lower bills from AWS/GCP/Azure
  • Faster scaling

2. Reliability Under Load

// Handling millions of requests with graceful degradation
// (fragment: assumes a `get_data` handler defined elsewhere)
use std::time::Duration;
use tower::ServiceBuilder;
use tower::limit::RateLimitLayer;

let app = Router::new()
    .route("/api/data", get(get_data))
    .layer(
        ServiceBuilder::new()
            .layer(RateLimitLayer::new(1000, Duration::from_secs(1)))
    );

3. Memory Safety = Service Stability

Rust guarantees the absence of:

  • Segmentation faults
  • Data races
  • Use-after-free errors

This means fewer late-night pages and more stable services.

Benchmark: Rust vs Other Languages

A simple JSON API test (10,000 requests):

Language | Time (sec) | Req/sec | Memory (MB)
Rust | 0.45 | 22,000 | 3
Go | 0.68 | 14,700 | 12
Node.js | 2.1 | 4,800 | 65
Python | 8.5 | 1,200 | 95

Ecosystem for Microservices

Rust has powerful frameworks and libraries:

Web frameworks:

  • Axum - ergonomic and fast
  • Actix-web - one of the fastest
  • Rocket - convenient syntax

For gRPC:

  • Tonic - full gRPC support

For async operations:

  • Tokio - the de facto standard
  • async-std - an alternative

When Should You Choose Rust?

Use Rust if:

  • You need maximum performance
  • The load is high (millions of RPS)
  • Stability is critical
  • Infrastructure costs matter

⚠️ You may want to think twice if:

  • The team has no Rust experience
  • Prototyping speed is critical
  • It is simple CRUD logic with no heavy load

Conclusion

Rust in a microservice architecture is not just a trend but a pragmatic choice for teams that value:

  • Performance
  • Reliability
  • Efficient resource usage

The learning curve may be steeper than with Go or Node.js, but the payoff in stability and speed is worth it.

Have you tried Rust for microservices? Share your experience in the comments! 🦀

#rust #microservices #performance #backend #ukrainian

AWS Policy Deep Dive

2025-11-24 07:37:11

Part 1: Do you think your data is safe with IAM alone? Let's talk about SCPs and RCPs.

🛡️ Cloud security requires a defense-in-depth strategy. 🛡️
When I started with AWS, I thought IAM was enough. But as I dug deeper, I realized the importance of additional layers such as SCPs and RCPs.

Understanding the difference between Identity, Resource, and Organizational Control is vital for protecting our environments. It is not just about "granting permissions"; it is about knowing where to apply them to guarantee least privilege and maximum security.

Here is my mental guide for staying oriented and knowing when we can, or should, use each of these.

1️⃣ Identity-based Policies (IAM Policies)

👉 The "who can do what." These are the most common. They are attached to Users, Groups, or Roles.
📝 Example: An operations user needs to start and stop EC2 instances, but must not create, terminate, or modify them.

  • Action: Create a policy that allows ec2:StartInstances and ec2:StopInstances on the team's instances (see the sketch below).
  • Result: If they try to modify or terminate an instance, AWS says "No."
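
A minimal sketch of what that identity-based policy could look like; the account ID and the instance-level scoping are placeholders you would adapt:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowStartStopOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:*:111122223333:instance/*"
    }
  ]
}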

2️⃣ Resource-based Policies

👉 El "Quién puede tocar ESTO" Aquí la regla vive en el recurso mismo (S3 Bucket, SQS, KMS Key), no en el usuario. Son vitales para accesos Cross-Account pero no restringidos a ellos. 📝 Ejemplo: Tienes un Bucket S3 de logs centralizados.

  • Acción: Colocas una Bucket Policy que dice: "Permitir s3:PutObject a la cuenta de Producción (Account B)".
  • Resultado: La cuenta B puede escribir logs ahí sin tener un usuario creado en tu cuenta de logs.
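
A simplified sketch of such a bucket policy; the bucket name and the Account B ID are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowProductionAccountPutLogs",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222233334444:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::central-logs-bucket/*"
    }
  ]
}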

3️⃣ Service Control Policies (SCPs)

👉 Las "Reglas de la Casa" (Límites de Identidad, popularmente guardrail o barandilla de seguridad) Aquí es donde muchos se confunden. Las SCPs NO dan permisos. Solo definen el máximo permiso posible. Si la SCP dice "No", no importa si tienes AdminAccess, es un "No". 📝 Ejemplo: Seguridad Compliance.

  • Acción: Aplicas una SCP en la raíz de tu Organización que niega ec2:RunInstances en cualquier región que no sea us-east-1.

- Resultado: Aunque seas Admin, si intentas levantar un servidor en Tokyo, fallará.
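
A simplified sketch of that region-restriction SCP, using the standard aws:RequestedRegion condition key (SCPs carry no Principal element):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEC2OutsideUsEast1",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": "us-east-1"
        }
      }
    }
  ]
}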

4️⃣ Resource Control Policies (RCPs)

👉 The "data perimeter" (resource limits). They work like SCPs, but are focused on restricting access to your resources, no matter who the caller is (even an external one). 📝 Example: a strict data perimeter.

  • Action: You create an RCP that says: "Our S3 buckets can only be accessed by principals that belong to our Organization (PrincipalOrgID)" (see the sketch below).
  • Result: If someone mistakenly configures a bucket as "Public" or tries to grant access to an external vendor account, the RCP blocks it automatically. Ideal for preventing data exfiltration attacks.
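
A simplified sketch of such an RCP; the organization ID is a placeholder, and a production data-perimeter policy typically adds further exceptions (for example, for trusted AWS service principals):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceOrgIdentitiesOnS3",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEqualsIfExists": {
          "aws:PrincipalOrgID": "o-xxxxxxxxxx"
        },
        "BoolIfExists": {
          "aws:PrincipalIsAWSService": "false"
        }
      }
    }
  ]
}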

Summary

Policy Type | Applies to... | Main Function | When to use it
IAM Policy | Users / Roles / Groups | Grant permissions | To grant permissions to users, groups, or roles
Resource Policy | S3, SQS, etc. | Grant access (including external) | To define who can access a resource
SCP | AWS accounts (except the organization's management account), Org Units | Restrict maximum permissions (identity) | To set an upper bound on what identities can do
RCP | Org resources, except resources in the organization's management account | Restrict maximum access (resource) | To set an upper bound on who can access your data

🔥 Bonus: Red Team Perspective

Understanding IAM, SCPs, and RCPs is not only vital for protecting your account and organization; it is also a key component in reducing the attack surface.

For example:

  • Red Team scenario: an attacker tries to assume roles or create users to escalate privileges.

  • SCPs block lateral movement: even with AdminAccess in a child account, the attacker cannot execute actions denied by the SCP at the Organization level (e.g., organizations:LeaveOrganization, iam:CreateUser).

  • RCPs prevent data exfiltration: a misconfigured bucket does not allow access from external accounts, even if the attacker has privileges elsewhere.

  • Conclusion: knowing and correctly applying these policies is like building walls before anyone reaches the core of your environment.

💡 Food for thought: why is it a critical risk to deploy production resources, or keep active users operating directly, in the organization's management account (also called the Payer Account)?

🔜 In the next post: we will dive deeper into SCPs, which SCPs every account should have, and how to configure them correctly so you can sleep soundly.

Fraud Detection with Knowledge Graphs: A Protégé and VidyaAstra Approach

2025-11-24 07:31:34

Fraud detection is one of the most common use cases presented for knowledge graph applications. It has already been covered in a variety of articles and implementations across the industry. I will show how to do it with Protégé and the VidyaAstra plugin.

Machine learning, neural networks, and artificial intelligence techniques are commonly used to detect credit card fraud and money laundering schemes. However, these approaches have significant limitations: they rely on statistical models that can be fooled by attackers using synthetic data sets or other methods known as adversarial attacks.

Graph databases and knowledge graphs represent the state of the art in fraud detection.

The reason is that they contain a massive amount of interconnected data, and even if one piece of information is incorrect or missing, the system can still identify fraudulent patterns through relationship analysis.

Knowledge graphs can be used to detect:

  • Fake identities (e.g., people trying to open accounts with fake ID cards)
  • Credit card fraud (e.g., someone applying for credit with a stolen credit card)
  • Money laundering (e.g., circular money flows and layering schemes)

This article presents my approach to building fraud detection knowledge graphs using Protégé and the VidyaAstra plugin, based on industry-standard techniques discussed in fraud detection literature.

Table of Contents

  1. What is a Knowledge Graph?
  2. Why Knowledge Graphs for Fraud Detection?
  3. Limitations of Traditional ML Approaches
  4. The Knowledge Graph Advantage
  5. Building a Fraud Detection Knowledge Graph
  6. Real-World Use Case: Circular Money Flow Detection
  7. Using VidyaAstra with Protégé
  8. Graph Algorithms for Fraud Detection
  9. Getting Started

What is a Knowledge Graph?

A knowledge graph is a database of facts and relations between different entities.

A knowledge graph can be used to represent the world, with objects being concepts or physical things, together with their attributes, relationships, and metadata. For example, a financial institution could have a knowledge graph containing information about its customers, accounts, loans, transactions, and employees. The institution might also have a separate knowledge graph containing information about its offices and locations.

In fraud detection, a knowledge graph represents:

  • Entities: Accounts, customers, transactions, merchants, IP addresses, devices, locations
  • Relationships: sends_money_to, owns_account, uses_device, located_at, shares_email_with
  • Attributes: transaction amounts, timestamps, risk scores, customer profiles

Graph databases are a natural way of building a knowledge graph because they provide an efficient way of storing relations between entities. A fact can be represented as an entity and its relationship with another entity can be represented as an edge between them. This representation enables us to use graph algorithms on our knowledge graph to find answers to various questions such as whether a user is fraudulent or if a transaction pattern indicates money laundering.

Why Knowledge Graphs for Fraud Detection?

Limitations of Traditional ML Approaches

Machine learning, neural networks, and AI techniques have made significant strides in fraud detection, but they face critical limitations:

1. Vulnerability to Adversarial Attacks

  • Statistical models can be fooled by synthetic data sets
  • Adversarial attacks can bypass pattern recognition
  • Hackers can craft transactions that appear legitimate to ML models

2. Black Box Problem

  • Neural networks provide predictions without explanations
  • Regulators and compliance officers need to understand WHY a transaction was flagged
  • Difficult to justify account freezing or SAR filing based on opaque model outputs

3. Statistical Limitations

  • Require large amounts of labeled fraud data (which is rare)
  • Struggle with new fraud patterns not seen in training data
  • High false positive rates (often 90%+ in production)
  • Cannot capture complex multi-hop relationships

4. Missing Contextual Understanding

  • Treat transactions in isolation
  • Don't understand relationships between entities
  • Can't reason about patterns like "money returning to origin through intermediaries"

The Knowledge Graph Solution

Knowledge graphs address these limitations by:

Relationship-Native: Connections are first-class citizens, not expensive joins

Context-Aware: Every entity exists within a web of relationships

Explainable: Query results show the exact path of reasoning

Pattern-Based: Define fraud patterns once, detect them everywhere

Robust: Missing or incorrect data doesn't break relationship analysis

Most importantly: Knowledge graphs combine the power of graph algorithms (DFS, cycle detection, community finding) with semantic reasoning (ontologies, inference rules) to detect fraud patterns that are invisible to traditional approaches.

The Knowledge Graph Advantage

How Knowledge Graphs Detect Fraud

The power of knowledge graphs for fraud detection comes from their ability to model and query complex relationships:

Traditional Database Approach:
SELECT * FROM transactions 
WHERE amount > 10000 AND suspicious_flag = TRUE
→ Finds individual suspicious transactions (high false positives)

Knowledge Graph Approach:
MATCH (a1:Account)-[:SENDS_MONEY]->(a2:Account)-[:SENDS_MONEY]->
      (a3:Account)-[:SENDS_MONEY]->(a4:Account)-[:SENDS_MONEY]->(a1)
RETURN a1, a2, a3, a4
→ Finds circular money flows (actual fraud pattern)

Key Capabilities

1. Multi-Hop Relationship Queries

Find patterns like:

  • Account A sends to B, B sends to C, C sends back to A (circular flow)
  • Multiple accounts sharing the same email, phone, or device ID
  • Transaction chains that end at known fraudulent merchants
  • Short paths between unrelated accounts (potential collusion)

2. Pattern Matching

Define suspicious patterns once in your ontology:

<owl:Class rdf:about="#CircularMoneyFlow">
  <rdfs:subClassOf rdf:resource="#FraudPattern"/>
  <rdfs:comment>
    Money returns to originating account through intermediaries
  </rdfs:comment>
</owl:Class>

Then detect them automatically using graph algorithms and SPARQL queries.

3. Semantic Reasoning

The ontology enables automatic inference:

Facts:
- Transaction_T1 connects Account_A to Account_B
- Account_A shares_email_with Account_C
- Account_C shares_device_with Account_D

Inferred Knowledge:
- Account_A potentially_colluding_with Account_D
- Risk_Score increases due to device sharing
- Pattern matches "Account Takeover" fraud type
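
This kind of inference can be encoded directly in the ontology. A hedged sketch of an OWL 2 property chain axiom follows; the property names potentiallyColludingWith, sharesEmail, and sharesDevice are illustrative rather than part of the generated ontology above:

<owl:ObjectProperty rdf:about="#potentiallyColludingWith">
  <!-- Illustrative names: sharesEmail followed by sharesDevice implies potentiallyColludingWith -->
  <owl:propertyChainAxiom rdf:parseType="Collection">
    <owl:ObjectProperty rdf:about="#sharesEmail"/>
    <owl:ObjectProperty rdf:about="#sharesDevice"/>
  </owl:propertyChainAxiom>
</owl:ObjectProperty>

With an axiom like this in place, a reasoner such as HermiT or Pellet can infer that Account_A potentially colludes with Account_D from the two shared-attribute facts alone.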

Building a Fraud Detection Knowledge Graph

Step 1: Data Modeling

The most important step is creating a graph of relationships between various pieces of information about users and transactions. The key is to associate all available information with account IDs:

Core Entities:

  • Accounts: Unique identifiers for financial accounts
  • Customers: People or businesses who own accounts
  • Transactions: Money transfers between accounts
  • Devices: Phones, computers used to access accounts
  • Locations: IP addresses, physical addresses
  • Merchants: Businesses receiving payments

Relationships to Model:

  • owns_account: Customer → Account
  • sends_money_to: Account → Account
  • uses_device: Account → Device
  • accessed_from: Account → IP Address
  • shares_email: Account → Account
  • shares_phone: Account → Account
  • located_at: Account → Location

Attributes:

  • Account: account_number, creation_date, account_type
  • Transaction: amount, timestamp, currency, status
  • Customer: name, DOB, tax_id, email, phone
  • Device: device_id, device_type, OS, browser

Step 2: Define Suspicious Patterns

Using a knowledge graph, you can build powerful rules that detect known fraudulent behavior:

Common Fraud Patterns to Look For:

A. Common Attributes (Identity Fraud)

  • Multiple accounts using the same email address
  • Multiple accounts using the same phone number
  • Same tax identification number across different names
  • Same device accessing unrelated accounts

B. Circular Money Flow (Money Laundering)

  • Money sent from Account A → B → C → D → back to A
  • Short timeframe between transactions in the cycle
  • Equal or similar amounts at each step
  • No legitimate business relationship between accounts

C. Rapid Transactions (Layering)

  • Short paths between multiple accounts
  • High transaction velocity (many transactions in short time)
  • Large total amount split across many small transactions
  • Transactions outside normal patterns (e.g., 3 AM on weekends)

D. Structuring (Smurfing)

  • Multiple transactions just below reporting threshold ($10,000)
  • Same source account splitting large amount
  • Coordinated timing across different accounts
  • Similar amounts to avoid detection

E. Account Takeover

  • Sudden change in transaction patterns
  • New device or location accessing account
  • Large withdrawals shortly after access change
  • Password/email changes followed by transfers

Step 3: Create the Ontology

Using Protégé and VidyaAstra, you can create an OWL ontology that formally defines your fraud detection domain:

Using VidyaAstra's "Create New Ontology" Mode:

Description:
"Create a fraud detection ontology for anti-money laundering. 
Include the following:

Entities:
- Account (with properties: account_id, balance, account_type, creation_date)
- Customer (with properties: customer_id, name, email, phone, tax_id)
- Transaction (with properties: transaction_id, amount, timestamp, currency)
- Device (with properties: device_id, type, ip_address)
- FraudPattern (parent class for all fraud types)

Fraud Pattern Types:
- CircularMoneyFlow (subclass of FraudPattern)
- MoneyLaundering (subclass of FraudPattern)
- Structuring (subclass of FraudPattern)
- AccountTakeover (subclass of FraudPattern)
- IdentityFraud (subclass of FraudPattern)

Relationships:
- sendsMoneyTo (Account to Account)
- ownsAccount (Customer to Account)
- usesDevice (Account to Device)
- sharesEmail (Account to Account)
- sharesPhone (Account to Account)
- involvedIn (Account to FraudPattern)
- detectedBy (FraudPattern to DetectionAlgorithm)

Detection Algorithms:
- DFS (Depth First Search for cycle detection)
- TarjanSCC (Strongly Connected Components)
- LouvainCommunity (Community detection for fraud rings)

Risk Levels:
- HighRisk, MediumRisk, LowRisk

Include data properties for risk scores, transaction amounts, and timestamps."

VidyaAstra will generate a complete OWL ontology in 20-30 seconds, including:

  • All class definitions with proper hierarchy
  • Object properties with domain and range
  • Data properties with appropriate types
  • Sample individuals for testing

Real-World Use Case: Circular Money Flow Detection

The Scenario

Circular money flow is a classic money laundering technique where funds are moved through a series of accounts and eventually return to the originating account. This creates the appearance of legitimate business activity while obscuring the illicit origin of funds.

Example Pattern:

Account A sends $50,000 → Account B
Account B sends $50,000 → Account C  
Account C sends $50,000 → Account D
Account D sends $50,000 → Account A (returns to origin)

This pattern is difficult to detect with traditional database queries because it requires:

  • Multi-hop relationship traversal (4 steps)
  • Cycle detection algorithms
  • Understanding that the pattern indicates fraud

Building the Ontology with VidyaAstra

Step 1: Create the Fraud Detection Ontology (5 minutes)

Open Protégé and launch the VidyaAstra plugin. Select "Create New Ontology" mode and provide this description:

Create a fraud detection ontology for anti-money laundering with circular money flow detection.

Include:
- Account entities with properties (account_id, balance, creation_date)
- Transaction entities linking accounts
- CircularMoneyFlow fraud pattern class
- MoneyLaundering parent class
- DFS and Tarjan cycle detection algorithms
- Risk levels (High, Medium, Low)
- Investigation and compliance action classes

Add relationships:
- sendsMoneyTo (Account to Account)
- involvedIn (Account to FraudPattern)
- detectedBy (FraudPattern to Algorithm)
- triggers (RiskLevel to Action)

Add data properties:
- riskScore (decimal 0-1)
- cycleLength (integer)
- totalAmount (decimal)
- detectionTimestamp (datetime)

VidyaAstra will generate a complete OWL ontology including all classes, properties, and basic individuals.

Step 2: Add Sample Transaction Data

Add these individuals to your ontology to represent the circular money flow pattern:

<!-- Accounts in the cycle -->
<owl:NamedIndividual rdf:about="#Account_A">
  <rdf:type rdf:resource="#Account"/>
  <accountId>ACC-001</accountId>
  <sendsMoneyTo rdf:resource="#Account_B"/>
  <involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>

<owl:NamedIndividual rdf:about="#Account_B">
  <rdf:type rdf:resource="#Account"/>
  <accountId>ACC-002</accountId>
  <sendsMoneyTo rdf:resource="#Account_C"/>
  <involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>

<owl:NamedIndividual rdf:about="#Account_C">
  <rdf:type rdf:resource="#Account"/>
  <accountId>ACC-003</accountId>
  <sendsMoneyTo rdf:resource="#Account_D"/>
  <involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>

<owl:NamedIndividual rdf:about="#Account_D">
  <rdf:type rdf:resource="#Account"/>
  <accountId>ACC-004</accountId>
  <sendsMoneyTo rdf:resource="#Account_A"/>
  <involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>

<!-- The fraud pattern instance -->
<owl:NamedIndividual rdf:about="#CircularFlow_001">
  <rdf:type rdf:resource="#CircularMoneyFlow"/>
  <cycleLength rdf:datatype="&xsd;integer">4</cycleLength>
  <totalAmount rdf:datatype="&xsd;decimal">200000</totalAmount>
  <riskScore rdf:datatype="&xsd;decimal">0.92</riskScore>
  <detectedBy rdf:resource="#DFS_Algorithm"/>
  <hasRiskLevel rdf:resource="#HighRisk"/>
</owl:NamedIndividual>

Step 3: Query Using Natural Language

Switch to "Basic Query" mode in VidyaAstra and ask:

"Show me all accounts involved in circular money flow"

VidyaAstra will:

  1. Translate your question to SPARQL
  2. Execute the query against the ontology
  3. Return results in plain English

Example SPARQL generated:

SELECT ?account ?accountId WHERE {
  ?account rdf:type :Account .
  ?account :involvedIn ?pattern .
  ?pattern rdf:type :CircularMoneyFlow .
  ?account :accountId ?accountId .
}

Result:

Found 4 accounts involved in circular money flow:
- Account_A (ACC-001)
- Account_B (ACC-002)
- Account_C (ACC-003)
- Account_D (ACC-004)

Pattern: A → B → C → D → A
Risk Score: 0.92 (High Risk)
Detection Method: DFS Algorithm

Step 4: Run Graph Algorithms

The ontology structure allows you to run various graph algorithms to detect and analyze fraud patterns:

1. Depth-First Search (DFS) for Cycle Detection

DFS is used to detect cycles in the transaction graph. It traverses the graph starting from each account and checks if it can return to the starting point.

# SPARQL query to find 4-hop cycles using DFS pattern
SELECT ?a1 ?a2 ?a3 ?a4
WHERE {
  ?a1 :sendsMoneyTo ?a2 .
  ?a2 :sendsMoneyTo ?a3 .
  ?a3 :sendsMoneyTo ?a4 .
  ?a4 :sendsMoneyTo ?a1 .
}

Complexity: O(V + E) where V = accounts, E = transactions

Best for: Finding simple cycles quickly

2. Tarjan's Strongly Connected Components (SCC)

Identifies groups of accounts where money can flow between any two accounts in the group. This is more sophisticated than simple cycle detection.

# Find accounts that are part of strongly connected components
SELECT ?account
WHERE {
  ?account :sendsMoneyTo+ ?otherAccount .
  ?otherAccount :sendsMoneyTo+ ?account .
  FILTER(?account != ?otherAccount)
}

Best for: Detecting complex fraud rings where money circulates among multiple accounts

3. Louvain Algorithm for Community Detection

Groups accounts into communities based on transaction patterns. Fraudulent accounts often form tight communities.

Use case: Identify clusters of accounts that primarily transact with each other, suggesting coordination or collusion.

4. PageRank for Account Importance

Assigns importance scores to accounts based on incoming and outgoing transaction patterns.

Use case: Identify "hub" accounts that are central to money laundering operations.

5. Shortest Path Analysis

Find the shortest path between two accounts to understand how money flows.

# Using property paths to find connections
SELECT ?intermediateAccount
WHERE {
  :SuspiciousAccount_A :sendsMoneyTo+ ?intermediateAccount .
  ?intermediateAccount :sendsMoneyTo+ :SuspiciousAccount_B .
}

Use case: Track how illicit funds move from source to destination.

Using VidyaAstra with Protégé

VidyaAstra extends Protégé with three key capabilities that make fraud detection ontology development accessible:

1. Basic Query Mode - Natural Language to SPARQL

Instead of writing complex SPARQL queries manually, fraud analysts can ask questions in plain English:

Traditional Approach:

SELECT ?account ?riskScore
WHERE {
  ?account :involvedIn ?pattern .
  ?pattern rdf:type :CircularMoneyFlow .
  ?pattern :riskScore ?riskScore .
  FILTER (?riskScore > 0.8)
}

VidyaAstra Approach:

Simply ask: "Which accounts are involved in high-risk circular money flows?"

The plugin:

  1. Analyzes the current ontology structure
  2. Sends the context + question to an LLM (GPT-4, Claude, Nvidia)
  3. Gets back valid SPARQL
  4. Executes it and returns results in plain English

2. Create New Ontology Mode - AI-Generated OWL

Instead of manually creating classes, properties, and individuals in OWL/RDF XML, describe what you need:

"Create a fraud detection ontology for credit card fraud including:
- Transaction entities with amount, timestamp, merchant
- Customer entities with account info
- Fraud patterns: velocity checks, geographic anomalies, merchant risk
- Detection rules for unusual spending patterns"

VidyaAstra generates a complete, valid OWL ontology in ~20 seconds.

3. Modify Ontology Mode - Intelligent Updates

Extend existing ontologies without manual XML editing:

"Add a new fraud type called 'Account Takeover' that includes:
- Login from new device
- Password change
- Followed by large withdrawal
Link it to HighRisk level"

The plugin updates your ontology, validates consistency, and applies changes immediately.

Graph Algorithms for Fraud Detection

Here are the key graph algorithms used in knowledge graph-based fraud detection systems:

1. Cycle Detection Algorithms

Depth-First Search (DFS)

  • Purpose: Find circular money flows
  • How it works: Traverses the graph and maintains a recursion stack to detect back edges (cycles)
  • Complexity: O(V + E)
  • Implementation in SPARQL:
SELECT ?start ?end
WHERE {
  ?start :sendsMoneyTo+ ?end .
  ?end :sendsMoneyTo+ ?start .
  FILTER(?start != ?end)
}

Tarjan's Strongly Connected Components

  • Purpose: Find groups of accounts where money circulates
  • How it works: Uses DFS with low-link values to identify SCCs in one pass
  • Complexity: O(V + E)
  • Use case: Detect complex fraud rings, not just simple cycles

2. Path Finding Algorithms

Shortest Path (Dijkstra/Bellman-Ford)

  • Purpose: Find how money moves from point A to B
  • Use case: Track layering schemes where money passes through multiple intermediaries
  • SPARQL Property Paths:
SELECT (COUNT(?intermediate) AS ?pathLength)
WHERE {
  :Account_A :sendsMoneyTo+ ?intermediate .
  ?intermediate :sendsMoneyTo+ :Account_B .
}

All Paths Enumeration

  • Purpose: Find all possible routes money can take
  • Use case: Identify alternative laundering paths, redundant connections

3. Community Detection Algorithms

Louvain Algorithm

  • Purpose: Identify clusters of accounts that transact primarily with each other
  • How it works: Optimizes modularity by iteratively moving nodes between communities
  • Use case: Detect organized fraud rings, mule account networks

Label Propagation

  • Purpose: Fast community detection for large graphs
  • How it works: Nodes adopt the most common label among their neighbors
  • Use case: Real-time fraud ring detection

4. Centrality Algorithms

PageRank

  • Purpose: Identify important "hub" accounts in the transaction network
  • Use case: Find money mules, central accounts in laundering operations

Betweenness Centrality

  • Purpose: Find accounts that act as bridges between different parts of the network
  • Use case: Identify intermediary accounts used for layering

Degree Centrality

  • Purpose: Count incoming/outgoing transactions per account
  • Use case: Detect accounts with unusual transaction volumes
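
A short SPARQL sketch of an out-degree count, assuming the sendsMoneyTo property used throughout this article:

# Count outgoing transfers per account and surface the busiest senders
SELECT ?account (COUNT(?recipient) AS ?outDegree)
WHERE {
  ?account :sendsMoneyTo ?recipient .
}
GROUP BY ?account
ORDER BY DESC(?outDegree)
LIMIT 20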

5. Pattern Matching Algorithms

Subgraph Isomorphism

  • Purpose: Find instances of known fraud patterns in the transaction graph
  • How it works: Match a template pattern against the full graph
  • SPARQL Example:
# Match the "smurfing" pattern: one source, multiple small transactions
SELECT ?source (COUNT(?dest) AS ?numTransactions) (SUM(?amount) AS ?total)
WHERE {
  ?source :sendsMoneyTo ?dest .
  ?transaction :from ?source ;
               :to ?dest ;
               :amount ?amount .
  FILTER(?amount < 10000)
}
GROUP BY ?source
HAVING (COUNT(?dest) > 10 && SUM(?amount) > 50000)

6. Temporal Analysis

Time-Window Queries

  • Purpose: Detect rapid transaction sequences
  • Use case: Layering detection, velocity checks
SELECT ?account (COUNT(?tx) AS ?count)
WHERE {
  ?tx :fromAccount ?account ;
      :timestamp ?time .
  FILTER(?time > "2024-11-23T00:00:00"^^xsd:dateTime &&
         ?time < "2024-11-23T01:00:00"^^xsd:dateTime)
}
GROUP BY ?account
HAVING (COUNT(?tx) > 10)

Visualization and Manual Inspection

In addition to automated graph algorithms, fraud analysts need to visually inspect suspicious patterns. Protégé's built-in visualization tools, combined with VidyaAstra's query capabilities, allow manual exploration:

OntoGraf Visualization

  1. Open OntoGraf view in Protégé
  2. Select a suspicious account (e.g., Account_A)
  3. Visualize relationships (:sendsMoneyTo, :sharesEmail, etc.)
  4. Manually trace money flow paths

SPARQL-Based Exploration

Use VidyaAstra to iteratively drill down:

Query 1: "Show accounts with more than 5 outgoing transactions"
Query 2: "Which of these accounts share email addresses?"
Query 3: "Show the transaction history for Account_XYZ"
Query 4: "Are any of these accounts involved in fraud patterns?"

This iterative, conversational approach combines automated detection with human expertise.

Integration with External Graph Databases

While Protégé is excellent for ontology development and testing, production fraud detection systems typically use dedicated graph databases for scalability:

Integration Architecture

┌─────────────────────────────────────────┐
│  Protégé + VidyaAstra                   │
│  • Ontology design & testing            │
│  • Pattern definition                   │
│  • Query prototyping                    │
└─────────────────────────────────────────┘
          ↓ (Export OWL)
┌─────────────────────────────────────────┐
│  Graph Database (Production)            │
│  • Apache Jena Fuseki                   │
│  • GraphDB                              │
│  • Neo4j (with neosemantics plugin)     │
│  • Amazon Neptune                       │
└─────────────────────────────────────────┘
          ↓
┌─────────────────────────────────────────┐
│  Real-time Transaction Processing       │
│  • Stream processing (Kafka, Flink)     │
│  • Pattern matching                     │
│  • Alert generation                     │
└─────────────────────────────────────────┘

Export Process

  1. Design ontology in Protégé with VidyaAstra
  2. Test queries on sample data
  3. Export OWL file
  4. Load into production triple store:
# Apache Jena Fuseki
curl -X POST \
  -H "Content-Type: application/rdf+xml" \
  --data-binary @fraud-detection-ontology.owl \
  http://localhost:3030/fraud/data

# GraphDB
curl -X POST \
  -H "Content-Type: application/rdf+xml" \
  --data-binary @fraud-detection-ontology.owl \
  http://localhost:7200/repositories/fraud/statements
  5. Populate with production transaction data
  6. Run graph algorithms at scale

Getting Started

Prerequisites

  1. Protégé 5.6.4+ - Download from https://protege.stanford.edu/
  2. Java 11+ - Required for running Protégé and plugins
  3. LLM API Access - OpenAI, Anthropic Claude, or Nvidia NGC API key
  4. VidyaAstra Plugin - Download from https://github.com/vishalmysore/vidyaastra-plugin

Installation

Step 1: Install Protégé

Download and install Protégé for your operating system.

Step 2: Install VidyaAstra Plugin

# Windows
Copy-Item vidyaastra-1.0.1.jar "C:\Program Files\Protege-5.6.7\plugins\"

# macOS
cp vidyaastra-1.0.1.jar "/Applications/Protege.app/Contents/Java/plugins/"

# Linux
cp vidyaastra-1.0.1.jar "$HOME/Protege-5.6.7/plugins/"

Step 3: Launch Protégé and Activate Plugin

  1. Start Protégé
  2. Go to Window → Views → Ontology Views → VidyaAstra View
  3. The VidyaAstra panel will appear

Step 4: Configure API Key

Enter your OpenAI/Claude/Nvidia API key in the VidyaAstra preferences.

Quick Start Example

Create Your First Fraud Detection Ontology:

  1. Select "Create New Ontology" mode
  2. Enter this description:
Create a fraud detection ontology for money laundering detection.
Include:
- Account and Transaction entities
- CircularMoneyFlow fraud pattern
- sendsMoneyTo relationship
- DFS cycle detection algorithm
- Risk levels and scores
  3. Click "Ask AI" and wait 20-30 seconds
  4. Save the generated ontology as fraud-detection.owl

Query Your Ontology:

  1. Switch to "Basic Query" mode
  2. Ask: "Show me all circular money flow patterns"
  3. VidyaAstra translates to SPARQL and returns results

Modify Your Ontology:

  1. Switch to "Modify Ontology" mode
  2. Request: "Add a Structuring fraud pattern for transactions below $10,000"
  3. Changes are applied and validated automatically

Technical Implementation Details

How VidyaAstra Works

1. Natural Language Query Processing

// Simplified flow
String userQuery = "Which accounts have high risk scores?";

// 1. Extract ontology context
String context = extractClassesAndProperties(activeOntology);

// 2. Build LLM prompt
String prompt = "Given this ontology:\n" + context + 
                "\nTranslate to SPARQL: " + userQuery;

// 3. Call LLM
String sparqlQuery = llm.complete(prompt);

// 4. Execute query
ResultSet results = ontology.executeQuery(sparqlQuery);

// 5. Format results
String answer = formatAsNaturalLanguage(results);

2. AI Ontology Generation

// Simplified flow
String description = "Create fraud detection ontology...";

// 1. Generate with strict prompt
String systemPrompt = "Generate valid OWL/RDF XML only. " +
                      "No markdown, no explanations.";

// 2. Get LLM response
String owlXml = llm.complete(systemPrompt, description);

// 3. Clean and validate
owlXml = removeMarkdown(owlXml);
owlXml = fixCommonXmlIssues(owlXml);

// 4. Validate with OWL API
OWLOntology ont = manager.loadFromString(owlXml);

// 5. Save
saveOntology(ont, "generated-ontology.owl");

SPARQL Query Examples

Find Circular Money Flows:

PREFIX : <http://example.org/fraud#>

SELECT DISTINCT ?account1 ?account2 ?account3 ?account4
WHERE {
  ?account1 :sendsMoneyTo ?account2 .
  ?account2 :sendsMoneyTo ?account3 .
  ?account3 :sendsMoneyTo ?account4 .
  ?account4 :sendsMoneyTo ?account1 .
}

Find Accounts Sharing Email:

SELECT ?account1 ?account2 ?email
WHERE {
  ?account1 :hasEmail ?email .
  ?account2 :hasEmail ?email .
  FILTER(?account1 != ?account2)
}

Find High-Risk Patterns:

SELECT ?pattern ?riskScore
WHERE {
  ?pattern rdf:type :FraudPattern .
  ?pattern :riskScore ?riskScore .
  FILTER(?riskScore > 0.80)
}
ORDER BY DESC(?riskScore)

Temporal Analysis - Rapid Transactions:

SELECT ?account (COUNT(?tx) AS ?txCount)
WHERE {
  ?tx :fromAccount ?account ;
      :timestamp ?time .
  FILTER(?time >= "2024-11-23T00:00:00"^^xsd:dateTime &&
         ?time <= "2024-11-23T02:00:00"^^xsd:dateTime)
}
GROUP BY ?account
HAVING (COUNT(?tx) > 5)

Conclusion

Why Knowledge Graphs for Fraud Detection

Fraud detection is fundamentally a relationship problem:

  • Money flows through networks of accounts
  • Fraudsters create patterns across transactions
  • Detection requires multi-hop analysis
  • Explanations need semantic context

Traditional approaches struggle with:

  • ML/Neural Networks: Black boxes vulnerable to adversarial attacks, can't explain decisions
  • Rule-Based Systems: Brittle, high false positives, miss complex patterns
  • SQL Databases: Multi-hop queries are slow and complex

Knowledge graphs solve these problems by natively representing relationships and enabling graph algorithms.

Why Protégé + VidyaAstra

Protégé provides:

  • Industry-standard OWL ontology editor
  • SPARQL query engine
  • Reasoning capabilities (Pellet, HermiT, ELK)
  • Visualization tools

VidyaAstra adds:

  • Natural language query interface (no SPARQL expertise needed)
  • AI-powered ontology generation (minutes vs. weeks)
  • Intelligent ontology modification
  • Multi-LLM support (OpenAI, Claude, Nvidia)

Together, they enable fraud analysts to build and query knowledge graphs without deep technical expertise in ontologies or SPARQL.

Next Steps

  1. Download Protégé and VidyaAstra
  2. Create your first fraud detection ontology using the examples in this article
  3. Load sample transaction data
  4. Query using natural language
  5. Extend with your specific fraud patterns
  6. Deploy to production graph database when ready

Resources

Sample Ontology

The complete fraud detection ontology example is available in this repository:

  • File: fraud-detection-ontology.owl
  • Includes: 28 classes, 12 object properties, 8 data properties
  • Sample Data: Circular money flow with 4 accounts

About

Author: Vishal Mysore

Repository: https://github.com/vishalmysore/vidyaastra-plugin

Disclaimer

This article presents my approach to fraud detection using knowledge graphs, building on industry-standard techniques with Protégé and the VidyaAstra plugin. The circular money flow use case is a well-documented fraud pattern in financial crime literature and has been covered in many articles before, and this implementation demonstrates how ontologies and graph algorithms can detect such patterns effectively.

https://medium.com/neo4j/find-circular-money-flow-with-neo4j-c9138e1c3183
https://www.journalofaccountancy.com/issues/2009/dec/20091793/
https://digitaldealer.com/news/circular-bank-statement-fraud-the-new-synthetic-income-scam-dealers-lenders-must-fight/168087/

Important Notes:

  • The views and techniques presented here are my own
  • This is an educational demonstration using publicly available fraud detection patterns documented in academic and industry literature
  • The examples use fictional data and scenarios for illustration purposes only
  • This implementation is not production-ready and should not be used for actual fraud detection without proper validation, compliance review, and security hardening
  • Organizations implementing fraud detection systems should consult with their legal, compliance, and security teams

The Dark Art Of Behavioral Enumeration And Why It Works Every Time

2025-11-24 07:26:22

There is an old truth that operators learned long before the rest of the world even noticed it existed. A person is never opaque. Not really. Every human you meet leaks tiny packets of behavioral data at all times. Micro signals. Rhythm quirks. Stress beads. Directional cues. Environmental preferences. Latent rituals. These signals do not announce themselves. They sit in the edges of perception like peripheral ghosts. Most people ignore them. Operators do not.

Behavioral enumeration is the practice of collecting those signals without ever touching the target. You watch. You listen. You record. You categorize. You build a living index of a person’s internal code before you ever open your mouth. It is the closest thing the real world has to source access for the human mind. The process is old but the precision is new. The modern operator does not guess. They enumerate. They compile. They pull a person’s behavior apart until the pieces line up and reveal the architecture beneath.

This works every time because behavior is deterministic under pressure. Once you know the pattern you know the person. Once you know the person you know how to move through them like you move through an unpatched system.

This is the dark art.

What Behavioral Enumeration Actually Is

Most people think behavioral reading is intuition. Operators know it is computation. Enumeration means you pull in distinct behavioral variables and slot them into categories that reveal structural truths. You are not looking for emotion or motive. You are looking for patterns that repeat. Humans are biological machines with recursive loops. They move according to internal rules they barely understand.

Enumerate enough of these rules and you can predict their decision tree before they reach the fork in the path.

Behavioral enumeration follows one assumption. Nothing people do is random. It looks random because you have not collected enough data to see the pattern. The moment you hit critical mass the randomness dissolves. The structure appears.

This is why enumeration works. Everything leaks.

The Three Surfaces Every Human Exposes

Operators treat humans like three surface layers that converge into one predictable organism.

Surface One: The Automatic Layer

This layer emerges from habits the person never consciously formed. Posture. Breathing pattern. Blink rhythm. Idle stance. Direction they default toward when entering a space. How they adjust their weight when they anticipate a decision. These are the lowest level loops. They rarely change and they reveal the foundation.

Enumeration begins here. If you watch a target long enough you start to see what they cannot hide. These are the immutable processes that drive all their higher level behaviors.

Surface Two: The Functional Layer

This layer contains everything a person does to operate within the world. The way they speak. The tempo of their voice. The pacing of their steps. How they react to waiting. How they respond to interruption. Their pattern of attention. Their choice of words. Their orientation toward noise. These are semi conscious traits. They can be masked for short periods but not indefinitely.

Functional behavior is where you see the footprint of their stress response. You see how they protect themselves. You see how they conserve energy. You see where their focus goes when they are tired. The functional layer predicts how they will behave under unexpected pressure.

Surface Three: The Interpersonal Layer

This is the layer that involves other humans. It is the cleanest layer to enumerate because social behavior is scripted. People are conditioned to respond predictably to status changes, confidence shifts, and conversational cues. The interpersonal layer is the easiest environment for enumeration because the patterns are obvious once you know where to look.

Every person belongs to a social archetype. Not a personality type. An archetype defined by how they position themselves inside a social field. Some dominate. Some minimize. Some mirror. Some deflect. Some perform. Some retreat. Once you know which archetype they run you can predict how they will treat you before they speak.

These three surfaces are enough to build a full operational map of a person.

Why Enumeration Is More Reliable Than Direct Interaction

When you interact with someone you trigger their performance layer. People behave differently when they know they are being watched. They script their responses. They reinforce their persona. They hide their loops. They become noise.

Pre contact enumeration bypasses all of that. You observe the target in their natural low awareness state. This is the only state where reality shows itself.

Behavioral enumeration lets you capture the raw signals. The pure unfiltered data. You are not influenced by their social mask. You do not get fooled by charisma or authority cues. You are reading the architecture beneath the behavior, not the behavior itself.

This is what makes it powerful.

The Operator’s Enumeration Cycle

There is a cycle operators use to pull a complete behavioral map before contact. It has five steps.

Step One: Identify Recurring Movements

Every person has small physical tics that emerge under minimal cognitive load. Thumb tap. Pocket check. Shoulder roll. Weight shift. Surface scan. These movements reveal the body’s default reset loop. That reset loop reveals stress tolerance, vigilance level, and internal focus style.

Step Two: Track Attention Drift

Where does their attention go when nothing is happening. Do they look at exits. Do they look at people. Do they look at screens. Do they look at the floor. This drift shows threat model, curiosity type, and internal drive orientation. Drift is more revealing than focus. Focus is conscious. Drift is instinct.

Step Three: Note Timing And Rhythm

Do they move fast or slow. Do they pause before acting. Do they interrupt. Do they hesitate. Timing shows their decision tree. Fast movers favor impulse. Slow movers favor evaluation. Hesitators favor external validation. Interruptors favor control. Once you know their timing you know their operational tempo.

Step Four: Document Stress Markers

You look for the micro behaviors that only appear under mild tension. Hair adjustment. Lip compression. Jaw shift. Rapid blink. Short breath. These markers reveal which emotional circuits dominate under pressure. Some people collapse inward. Some project outward. Some freeze. Some redirect. Stress markers predict exactly how they will behave if an interaction becomes difficult.

Step Five: Test Their Predictability From A Distance

You observe how they respond to small environmental changes. Loud noise. New person entering the space. Sudden line movement. Unexpected delay. Their reaction reveals whether their nervous system is rigid or adaptive. Predictable people are easier to steer. Adaptive people require a different interaction strategy.

Together these steps create the behavioral index. Once the index exists the target becomes readable.

Why Enumeration Feels Like Mind Reading To Outsiders

When you enumerate someone before speaking to them and then interact with them using the map you built you appear psychic. You appear intuitive. You appear like you know things you should not know.

This is because enumeration lets you avoid every blind alley before it appears. You never trigger their defenses. You never hit their resistance points. You never misread their tone. You always respond with exactly the cadence they expect. You match their reinforcement loop on the first try.

From their perspective you are perfectly aligned with them. From your perspective you are simply following the map.

This is why behavioral enumeration works. It produces an illusion of compatibility so strong that targets relax without realizing why.

The Advanced Level Operators Use

At higher levels enumeration involves pattern stacking. Operators start combining behavioral variables to produce predictive composites.

For example:

A target with slow timing, low drift, and strong stress markers under noise is likely conflict avoidant but internally rigid. You approach gently but assert concretely.

A target with fast timing, high drift, and minimal stress markers is novelty driven, reward seeking, and easily pulled into momentum. You approach with energetic pacing and open frameworks.

A target with low timing variation, consistent gaze patterns, and high environment scanning is vigilance trained. You approach with transparency and precision.

The combinations form archetypes far more accurate than personality tests or conventional profiling.

This is scientific intuition. It looks mystical but it is engineering.

The Real Reason It Works Every Time

Behavioral enumeration succeeds because people cannot stop being themselves. Even when they try. Even when they mask. Even when they lie. Even when they perform. The automatic layer always leaks through. The rhythm always leaks through. The timing always leaks through.

You cannot disguise rhythm. You cannot disguise baseline posture. You cannot disguise micro stress signals. These signals anchor everything else.

When you enumerate these primitives you bypass deception entirely.

Behavioral enumeration works because the human nervous system is predictable.

How Operators Use Enumeration In The Field

Enumeration serves three primary purposes.

It Predicts Resistance Points

You know exactly where a person will push back and exactly where they will let you through.

It Determines Communication Angle

You choose tone, pacing, word choice, and interaction strategy that aligns with their internal rhythm. This eliminates friction.

It Reduces Operational Risk

When you understand their stress cascade you know how they will react under pressure. This lets you avoid escalation entirely.

Enumeration is risk management. The psychological version of a firewall rule audit. You reveal the paths of least resistance. Once those paths appear you walk them.

Why You Need It Now More Than Ever

In 2025 every digital operation has a human component somewhere in the chain. Someone approves access. Someone presses a button. Someone answers an email. Someone misconfigures a system. Someone trusts the wrong message. Someone believes the wrong detail.

Behavioral enumeration lets you understand that person before they enter your orbit. It gives you a clean operational path. It grants you clarity in a time where everyone else operates blind.

It is not manipulation. It is literacy. Most people live inside their own heads. Operators read the world outside theirs.

Final Thought

Behavioral enumeration is the dark art that is not dark at all once you practice it. It is simply the science of paying attention at the level most people cannot sustain. When you enumerate someone you unlock their architecture. And once you know the architecture the interaction is no longer a negotiation. It becomes navigation.

There is no magic. Only data. Only rhythm. Only structure. And once you see the structure you never stop seeing it.

If you want the full field manual that goes far deeper into real behavioral reconnaissance, covert psychological mapping, and applied mind surface analysis, the pack is here:
Psychological Recon Techniques: A Field Manual for H4CK3RS

Building an Intelligent Portfolio Filtering System with Next.js and React Context

2025-11-24 07:25:59

The Challenge: Helping Recruiters Find What Matters

When building a portfolio website, one of the biggest challenges is information overload. You want to showcase everything you've built - your full-stack capabilities, data projects, UI/UX work, project management experience - but recruiters and hiring managers often have specific interests.

A front-end recruiter doesn't need to sift through your database optimization projects, and a data science manager shouldn't have to scroll past your Angular components.

The solution? An intelligent filtering system that asks visitors what they're interested in and quietly highlights the most relevant content throughout the entire portfolio.

Design Goals and UX Principles

Before writing a single line of code, I established clear design goals based on fundamental UX principles:

1. Progressive Disclosure

Don't overwhelm users with all filtering options at once. Start with high-level interests (Full Stack, Web Development, Data, etc.) and let the system handle the complexity behind the scenes.

2. Persistence Without Friction

Filter selections should persist across page navigation and browser sessions, but users should be able to adjust them at any time without going through a complex flow.

3. Subtle Over Aggressive

Visual highlighting should guide attention without screaming. Instead of hiding non-matching content (which feels restrictive), use subtle visual cues - lighter borders, reduced opacity - to naturally draw the eye to relevant items.

4. F-Pattern Optimization

Leverage the natural F-shaped reading pattern by placing filter controls at the top of each page and using horizontal layouts that align with how users scan content.

5. First-Time User Experience

Show a welcome modal only once per session, making it feel like a helpful concierge rather than an annoying popup.

6. URL-Based Context Setting

Enable direct linking to filtered views through URL parameters, allowing users to share specific portfolio contexts or bookmark their preferred view.

Technical Architecture

Technology Stack

  • Next.js 14 with App Router for server-side rendering and optimal performance
  • React Context API for global state management
  • TypeScript for type safety across the entire filtering system
  • localStorage and sessionStorage for different persistence needs
  • Tailwind CSS for responsive, utility-first styling

Core Components

The system is built around four key architectural pieces:

1. Central Filter Configuration (lib/filters.ts)

The heart of the system is a centralized configuration file that maps high-level interest categories to specific technology tags:

export const filterPresets: Record<InterestCategory, FilterPreset> = {
  'full-stack': {
    id: 'full-stack',
    label: 'Full Stack Development',
    description: 'End-to-end application development with modern frameworks',
    tags: [
      'React', 'Angular', 'Vue', 'Next.js',
      'Node.js', 'Express', '.NET', 'C#',
      'JavaScript', 'TypeScript', 'Python',
      'SQL', 'SQL Server', 'MongoDB',
      'REST APIs', 'CI/CD', 'Docker', 'Kubernetes'
    ]
  },
  // ... more presets
}

This approach follows the Single Source of Truth principle - all filtering logic stems from this one configuration, making it easy to add new categories or adjust mappings.

The file also includes helper functions for matching logic:

export function matchesFilters(itemTags: string[], activeFilters: string[]): boolean {
  return activeFilters.some(filter => itemTags.includes(filter))
}

export function getFilterStrength(itemTags: string[], activeFilters: string[]): number {
  if (activeFilters.length === 0) return 1
  const matches = itemTags.filter(tag => activeFilters.includes(tag)).length
  return matches / activeFilters.length
}

The getFilterStrength function is crucial - it returns a 0-1 score indicating how strongly an item matches active filters, enabling nuanced visual highlighting.
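
To make the scoring concrete, here is a hypothetical call against the full-stack preset shown above (the item tags are invented for illustration, and the @/lib/filters import path assumes a standard path alias):

import { filterPresets, matchesFilters, getFilterStrength } from '@/lib/filters'

// One portfolio item's tags, made up for this example
const itemTags = ['React', 'Node.js', 'MongoDB', 'Docker']
const active = filterPresets['full-stack'].tags // the 18 tags listed above

matchesFilters(itemTags, active)      // true - at least one tag overlaps
getFilterStrength(itemTags, active)   // 4 / 18 ≈ 0.22 - a partial match

With the thresholds used later in getHighlightClass, a score like 0.22 would earn the lighter partial-match ring rather than the strong one.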

2. Global State Management (contexts/FilterContext.tsx)

React Context provides global state without prop drilling. The FilterContext manages:

  • activeFilters: Array of currently active technology tags
  • selectedInterest: The high-level category chosen by the user
  • hasSeenWelcome: Whether the user has dismissed the welcome modal

The implementation uses lazy initialization to avoid hydration mismatches, a common gotcha in Next.js:

const [activeFilters, setActiveFilters] = useState<string[]>(() => {
  if (typeof window !== 'undefined') {
    const stored = localStorage.getItem('portfolioFilters')
    return stored ? JSON.parse(stored) : []
  }
  return []
})

This pattern ensures localStorage is only accessed on the client, preventing errors during server rendering (the server simply falls back to the empty default).

URL Parameter Integration

A powerful feature of the filtering system is the ability to set filters directly through URL parameters. This enables shareable links and bookmarkable views:

const loadInitialInterest = (): InterestCategory | null => {
  if (typeof window === 'undefined') return null

  // Check URL parameter first
  const urlParams = new URLSearchParams(window.location.search)
  const urlInterest = urlParams.get('interest')

  if (urlInterest && urlInterest in filterPresets) {
    return urlInterest as InterestCategory
  }

  // Fall back to stored preference
  const stored = localStorage.getItem('portfolioInterest')
  return stored ? (stored as InterestCategory) : null
}

This creates several powerful use cases:

  • Direct Links: Share https://www.ryanverwey.dev/?interest=full-stack to showcase full-stack work
  • Resume Integration: Add filtered links to your resume for role-specific portfolios
  • Job Applications: Send a ?interest=web-development link when applying for a front-end role
  • Social Media: Tweet different filtered views for different audiences

The URL parameter takes precedence over stored preferences, ensuring shared links always display the intended context. After initial load, the selection is persisted to localStorage for subsequent navigation.
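
The persistence itself can be a small effect inside the provider. A minimal sketch, reusing the portfolioFilters and portfolioInterest keys from the snippets above (the real provider may structure this differently):

// Inside the FilterContext provider component
useEffect(() => {
  localStorage.setItem('portfolioFilters', JSON.stringify(activeFilters))
  if (selectedInterest) {
    localStorage.setItem('portfolioInterest', selectedInterest)
  }
}, [activeFilters, selectedInterest])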

The context also provides methods for manipulating filters:

const applyPreset = useCallback((interest: InterestCategory) => {
  const preset = filterPresets[interest]
  setSelectedInterest(interest)
  setActiveFilters(interest === 'browse-all' ? [] : preset.tags)
}, [])
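
Tying the pieces together, here is a simplified sketch of the context's public surface and the consumer hook the page examples below rely on (the hook name useFilters is illustrative; addFilter and removeFilter match the usage shown in the Experience and Blog sections):

import { createContext, useContext } from 'react'
import type { InterestCategory } from '@/lib/filters'

interface FilterContextValue {
  activeFilters: string[]
  selectedInterest: InterestCategory | null
  hasSeenWelcome: boolean
  addFilter: (tag: string) => void
  removeFilter: (tag: string) => void
  applyPreset: (interest: InterestCategory) => void
}

const FilterContext = createContext<FilterContextValue | null>(null)

// Consumer hook used by pages and components
export function useFilters(): FilterContextValue {
  const ctx = useContext(FilterContext)
  if (!ctx) throw new Error('useFilters must be used within a FilterProvider')
  return ctx
}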

3. Welcome Modal (components/WelcomeModal.tsx)

First impressions matter. The welcome modal appears once per session, presenting six interest options in a clean, scannable grid:

<div className="fixed inset-0 z-50 flex items-center justify-center p-4 
     bg-black/50 backdrop-blur-sm animate-in fade-in duration-200">
  <div className="relative w-full max-w-lg bg-white dark:bg-zinc-900 
       rounded-xl shadow-xl">
    {/* Interest options in a 2-column grid */}
  </div>
</div>

Key UX decisions:

  • Backdrop blur creates depth and focuses attention
  • Max-width constraint ensures readability on large screens
  • Skip option respects user autonomy, no forced interaction
  • sessionStorage (not localStorage) means the modal returns in new sessions, gently reminding return visitors they can customize their view
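
In simplified form, the per-session flag behind that last point looks roughly like this (the storage key and setter name are illustrative):

// Lazy init: show the modal only if it has not been dismissed this session
const [hasSeenWelcome, setHasSeenWelcome] = useState(() =>
  typeof window !== 'undefined' && sessionStorage.getItem('hasSeenWelcome') === 'true'
)

const markWelcomeSeen = () => {
  sessionStorage.setItem('hasSeenWelcome', 'true')
  setHasSeenWelcome(true)
}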

4. Filter Bar (components/FilterBar.tsx)

The FilterBar appears consistently at the top of filtered pages, providing both context and control:

<div className="bg-white/80 dark:bg-zinc-950/80 backdrop-blur-sm 
     border-b border-zinc-200 dark:border-zinc-800 mb-8">
  <div className="container mx-auto max-w-6xl px-4 py-4">
    {/* Interest selector buttons */}
    {/* Active filters display */}
  </div>
</div>

The semi-transparent background with backdrop blur creates a modern, elevated feel while maintaining visual hierarchy.
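
The interest selector inside that container can be generated straight from filterPresets, so adding a new category in lib/filters.ts is enough to surface it in the UI. A sketch (styling classes are illustrative, and useFilters is the consumer hook sketched earlier):

function InterestSelector() {
  const { selectedInterest, applyPreset } = useFilters()

  return (
    <div className="flex flex-wrap gap-2">
      {Object.values(filterPresets).map((preset) => (
        <button
          key={preset.id}
          onClick={() => applyPreset(preset.id)}
          className={
            selectedInterest === preset.id
              ? 'px-3 py-1.5 text-sm rounded-lg bg-zinc-900 text-white dark:bg-zinc-100 dark:text-zinc-900'
              : 'px-3 py-1.5 text-sm rounded-lg bg-zinc-100 dark:bg-zinc-800 hover:bg-zinc-200 dark:hover:bg-zinc-700'
          }
        >
          {preset.label}
        </button>
      ))}
    </div>
  )
}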

Implementation Patterns

Highlighting Strategy

The highlighting system uses a three-tier approach based on match strength:

const getHighlightClass = (tags: string[]) => {
  if (activeFilters.length === 0) return ''
  const strength = getFilterStrength(tags, activeFilters)

  if (strength >= 0.5) {
    return 'ring-1 ring-zinc-400 dark:ring-zinc-600'
  } else if (strength > 0) {
    return 'ring-1 ring-zinc-300 dark:ring-zinc-700'
  } else {
    return 'opacity-40'
  }
}

This creates a visual hierarchy:

  • Strong matches (the item covers at least half of the active filter tags) get a noticeable but subtle ring
  • Partial matches get a lighter ring
  • Non-matches are dimmed but still visible

This follows the Recognition Over Recall principle - users can see all content but immediately recognize what's most relevant.
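
Applied to a card, the helper simply augments the base classes. A hypothetical usage (the project shape and base classes are illustrative):

<div
  className={`rounded-xl border border-zinc-200 dark:border-zinc-800 p-4 transition-all ${getHighlightClass(project.tags)}`}
>
  <h3 className="font-medium">{project.title}</h3>
  <p className="text-sm text-zinc-500">{project.description}</p>
</div>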

Type Safety Throughout

TypeScript ensures compile-time safety across the entire system:

export type InterestCategory = 
  | 'full-stack'
  | 'web-development'
  | 'back-end'
  | 'data'
  | 'project-management'
  | 'browse-all'

interface FilterPreset {
  id: InterestCategory
  label: string
  description: string
  tags: string[]
}

Union types for categories prevent typos and enable autocomplete. Every filter interaction is type-checked, reducing runtime errors.

Performance Considerations

Several optimizations ensure the filtering system doesn't impact performance:

  1. Memoization with useCallback: Filter operations are memoized to prevent unnecessary re-renders
  2. Lazy initialization: State is initialized once, not on every render
  3. Selective filtering: Projects page doesn't apply filtering logic (only highlighting) to avoid jarring layout shifts
  4. Static generation: Pages remain statically generated, filtering happens entirely client-side

Accessibility

The system incorporates several accessibility features:

  • Semantic HTML: Buttons use proper <button> elements, not divs
  • Focus management: Keyboard navigation works naturally through filter options
  • Color contrast: All text meets WCAG AA standards
  • Reduced motion: Animations respect prefers-reduced-motion
  • Screen reader labels: Filter buttons clearly announce their state
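
For the last two points, one way a filter button can express its state and respect motion preferences (aria-pressed and Tailwind's motion-reduce variant are the mechanisms assumed here, not a claim about the exact production markup):

function AccessibleSkillButton({ skill }: { skill: string }) {
  const { activeFilters, addFilter, removeFilter } = useFilters()
  const active = activeFilters.includes(skill)

  return (
    <button
      type="button"
      aria-pressed={active}
      onClick={() => (active ? removeFilter(skill) : addFilter(skill))}
      className="px-3 py-2 text-sm rounded-lg border transition-all motion-reduce:transition-none"
    >
      {skill}
    </button>
  )
}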

Page-Specific Implementations

About Page

The About page uses highlighting on both experience cards and skill buttons:

const renderSkillButton = (skill: string) => {
  const isFiltered = activeFilters.includes(skill)
  return (
    <button
      className={`block w-full px-3 py-2 text-sm rounded-lg border transition-all ${
        isFiltered
          ? 'border-zinc-400 bg-zinc-100 dark:bg-zinc-800 font-medium'
          : 'border-zinc-200 dark:border-zinc-700 hover:border-zinc-300'
      }`}
    >
      {skill}
    </button>
  )
}

This creates a cohesive experience where skills relevant to the selected interest are subtly emphasized.

Projects Page

Projects showcase a unique challenge - they have both category filters (Web Apps, Websites, Tools) and global filters. The solution:

const filteredProjects = projects.filter(p => {
  const categoryMatch = selectedCategory === 'All' || p.category === selectedCategory
  // Don't filter by activeFilters - only highlight
  return categoryMatch
})

Projects remain visible regardless of global filters (avoiding confusing disappearing content), but the highlighting guides attention to relevant tech stacks.

Experience Page

The Experience page combines a collapsible skill sidebar with the global FilterBar:

const toggleSkill = (skill: string) => {
  if (activeFilters.includes(skill)) {
    removeFilter(skill)
  } else {
    addFilter(skill)
  }
}

This allows granular control - users can click individual skills to add them to filters, creating a dynamic, exploratory experience.
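
addFilter and removeFilter themselves are thin wrappers around the state setter. A plausible implementation inside the provider, memoized with useCallback as mentioned in the performance notes:

const addFilter = useCallback((tag: string) => {
  setActiveFilters((prev) => (prev.includes(tag) ? prev : [...prev, tag]))
}, [])

const removeFilter = useCallback((tag: string) => {
  setActiveFilters((prev) => prev.filter((t) => t !== tag))
}, [])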

Blog Page

The blog integrates with existing category/tag filters while syncing with global state:

useEffect(() => {
  if (currentTag && !activeFilters.includes(currentTag)) {
    addFilter(currentTag)
  }
}, [currentTag, activeFilters, addFilter])

This creates continuity - selecting a tag in the blog updates global filters, so navigating to the Experience page automatically highlights related skills.

UI/UX Principles in Action

Visual Hierarchy

The entire system leverages Gestalt principles of perception:

  • Proximity: Related controls are grouped together
  • Similarity: Matching tags share visual styling
  • Continuity: The FilterBar maintains consistent positioning across pages
  • Figure-Ground: Highlighted items naturally pop against dimmed content

Cognitive Load Reduction

Rather than forcing users to understand tag taxonomies, the system presents intent-based presets:

  • "I'm interested in Full Stack Development" → System applies 20+ relevant tags
  • User never needs to know the underlying complexity
  • Follows Don Norman's principle of mapping mental models to system models

Feedback and Affordances

Every interaction provides clear feedback:

  • Hover states signal clickability
  • Active states show current selection
  • Transition animations create continuity (but are subtle, 150ms or less)
  • Read-only badges (active filters) provide context without inviting accidental interaction

Mobile Responsiveness

The system adapts to smaller screens:

flex-col sm:flex-row sm:items-center

On mobile, filter buttons stack vertically. On desktop, they flow horizontally. This maintains usability across all viewport sizes.

Lessons Learned

What Worked Well

  1. Central configuration: Having one source of truth made the system easy to extend and debug
  2. Subtle highlighting: Users report the filtering feels "helpful, not pushy"
  3. Browse All as default: Starting unfiltered respects user agency and prevents confusion
  4. Type safety: TypeScript caught numerous bugs during development

What I'd Do Differently

  1. Analytics integration: Tracking which interests users select would inform content strategy
  2. Fuzzy matching: Currently, matching is exact - "React" vs "ReactJS" won't match. A fuzzy matcher would be more forgiving (a rough sketch follows this list)
  3. A/B testing: Test different highlight intensities to optimize for conversion
  4. Filter combinations: Allow multiple interest selections simultaneously for hybrid roles
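
For the fuzzy matching idea, a rough sketch of the direction - normalize tags before comparing them (the normalization rules here are illustrative and would need tuning against the real tag list):

// Lowercase, strip punctuation, and drop a trailing "js" suffix
const normalizeTag = (tag: string) =>
  tag.toLowerCase().replace(/[^a-z0-9]/g, '').replace(/js$/, '')

export function matchesFiltersFuzzy(itemTags: string[], activeFilters: string[]): boolean {
  const normalized = itemTags.map(normalizeTag)
  return activeFilters.some((filter) => normalized.includes(normalizeTag(filter)))
}

With this, "React" and "ReactJS" both normalize to "react", and "Next.js" would match "nextjs".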

Performance Metrics

After deployment:

  • First Contentful Paint: < 1.2s
  • Largest Contentful Paint: < 2.0s
  • Cumulative Layout Shift: 0.01 (excellent)
  • Time to Interactive: < 2.5s

The filtering system adds negligible overhead - most logic is simple array operations, and React's reconciliation handles updates efficiently.

Conclusion

Building an intelligent filtering system isn't just about functionality - it's about understanding user intent and creating an interface that feels anticipatory rather than reactive. By combining thoughtful UX principles, modern React patterns, and performant architecture, we created a system that helps visitors find exactly what they're looking for without feeling restrictive.

The key insights:

  • Start with user research: Understand why people visit your portfolio
  • Map intents to implementations: High-level categories → specific tags
  • Prioritize subtlety: Guide, don't force
  • Maintain consistency: Same patterns across all pages
  • Enable shareability: URL parameters make filtered views shareable
  • Test on real users: Assumptions ≠ reality

This system demonstrates that sophisticated features don't require complex UIs. By leveraging fundamental design principles and modern web technologies, we created an experience that feels effortless - the hallmark of great design.

Real-World Application

The URL-based filtering has proven particularly valuable in job applications. Instead of sending recruiters to a generic portfolio, I can now send role-specific links:

  • Full-Stack Position: ?interest=full-stack - Highlights end-to-end development work
  • Front-End Role: ?interest=web-development - Emphasizes UI components and React expertise
  • Backend Position: ?interest=back-end - Showcases API development and database work
  • Data Role: ?interest=data - Features analytics, ETL, and visualization projects

This targeted approach increases engagement by showing recruiters exactly what they're looking for, without making them hunt through unrelated content.

Technologies Used: React, Next.js 14, TypeScript, Tailwind CSS, React Context API

Methodologies: User-Centered Design, Progressive Enhancement, Mobile-First Development, Accessibility-First

UI/UX Principles: Progressive Disclosure, Recognition Over Recall, F-Pattern Layout, Gestalt Principles, Visual Hierarchy