2025-11-24 07:49:17
For years I have been doing the work of a Developer Advocate without ever holding the title.
Not because I was trying to check a career box, but because I genuinely love teaching, building, speaking, and helping people unlock what is possible with the Web. My journey into DevRel did not start with a job description. It started with curiosity, community, and an obsession with sharing what I learn.
This is the story I want to tell.
Long before I ever touched WebGPU, AI APIs, or agentic patterns, I was a teacher.
I taught English to pay for college. I did not know it at the time, but that experience shaped everything. I learned how to explain complex ideas simply. How to read a room. How to make someone feel capable, even if they are just starting.
When I eventually transitioned into software engineering, I did not stop teaching.
I taught through local meetups, study groups, DMs, and later through tech communities I organized myself. Teaching has always been the throughline of my career.
I have always loved the Web. The openness, the flexibility, the creativity. The Web is the most accessible platform ever created.
But WebAI changed everything for me.
The idea that I could use my existing web knowledge, the same JavaScript and browser APIs I had been mastering for years, to create AI-powered experiences was mind-blowing. It made AI feel native, natural, and ours as web developers.
WebAI gave me a way in.
Vibe coding made the possibilities feel infinite.
And suddenly, I could not stop building.
But I also could not stop sharing.
I started recording videos.
I started writing.
I started speaking, first locally, then nationally, and then internationally.
My first international talk was in Romania, and it changed everything. I realized something important:
This is the work I want to do. This is the work I am already doing.
DevRel is not about clout, stages, or airport lounges.
At its core it is about:
Teaching
Empowering people
Sharing knowledge
Building community
Connecting ideas with the people who need them
Exploring new tools and showing what is possible
And I have been doing all of that for three years, not because it was my job, but because I could not avoid doing it.
I have organized meetups in my city out of passion.
I have brought new technologies to local communities.
I have created videos and tutorials so people can learn faster than I did.
I have spoken at events on my own budget, sometimes using my vacation days to do it.
I did not realize it then, but I was doing DevRel the slow and difficult way.
Out of pure love, not sustainability.
At first it was fine.
But as more invitations came in, as communities grew, as more people asked for help, something shifted. I was doing the work full time without the support or resources of an actual DevRel role.
I was using my free time, weekends, vacation days, and often my own money to show up for communities. Eventually it created a type of burnout that hurts because it comes from something you love.
That is when I realized something important:
If I want to keep teaching, speaking, and enabling builders, I need to do it in a sustainable way. I need to do it as my actual job.
Not as a side passion.
Not as extra work squeezed into the edges of my life.
But as my career.
Because DevRel is not only something I am good at.
It is the way I naturally move through the world.
I believe that DevRel is not a performance. It is a service.
I want to serve developer communities by:
Teaching
Helping developers use the Web as a platform for AI experiences
Sharing how I build and experiment in public
Bringing powerful tools to places that rarely see them
Showing beginners and non-engineers that they can build too
Creating content that teaches and inspires
Growing communities locally and globally
I have already done all of this. I simply need the chance to do it full time.
This blog post is not only a reflection. It is a declaration.
I want to be a Developer Advocate.
Not someday. Not in an abstract way.
I am ready now.
I have built the habits, the skills, the community, and the love for the craft. I simply want the opportunity to keep growing, to keep teaching, to keep building bridges between technology and people.
DevRel is not a title I am chasing.
It is the role that finally matches the work I have been doing and the person I have become.
If you are a DevRel professional, a manager, a founder, or someone who works in community or advocacy, or if you simply know me:
I am open to opportunities.
I am ready to create.
I am ready to contribute.
And more than anything, I am ready to help people build the future of the Web.
The Web taught me everything.
Community carried me forward.
Teaching is how I give back.
2025-11-24 07:41:10
When it comes to choosing a programming language for a microservice architecture, developers often face a trade-off: development speed or runtime performance. Rust offers a rare combination of both.
Rust delivers C/C++-level performance, which makes it an ideal choice for high-load microservices:
In the world of microservices, where every service consumes memory and CPU, Rust shows impressive results:
// Example: a simple HTTP server with Axum
use axum::{
    routing::get,
    Router,
};

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(health_check));

    axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn health_check() -> &'static str {
    "OK"
}
Memory consumption comparison:
Rust has one of the most efficient async runtimes, thanks to Tokio:
When your microservice consumes 10 times less memory, it means:
// Handling millions of requests with graceful degradation
use std::time::Duration;

use tower::{limit::RateLimitLayer, ServiceBuilder};

let app = Router::new()
    .route("/api/data", get(get_data))
    .layer(
        ServiceBuilder::new()
            .layer(RateLimitLayer::new(1000, Duration::from_secs(1)))
    );
Rust guarantees the absence of:
This means fewer late-night incident calls and more stable services.
A simple JSON API benchmark (10,000 requests):
| Language | Time (sec) | Req/sec | Memory (MB) |
|---|---|---|---|
| Rust | 0.45 | 22,000 | 3 |
| Go | 0.68 | 14,700 | 12 |
| Node.js | 2.1 | 4,800 | 65 |
| Python | 8.5 | 1,200 | 95 |
Rust has powerful frameworks and libraries:
Web frameworks:
For gRPC:
For async operations:
✅ Use Rust if:
⚠️ You may want to think twice if:
Rust in a microservice architecture is not just a trend; it is a pragmatic choice for teams that value:
The learning curve can be steeper than with Go or Node.js, but the payoff in stability and speed is worth it.
Have you tried Rust for microservices? Share your experience in the comments! 🦀
2025-11-24 07:37:11
🛡️ Cloud security requires a defense-in-depth strategy. 🛡️
When I started with AWS, I thought IAM was enough. As I dug deeper, I realized how important additional layers such as SCPs and RCPs are.
Understanding the difference between Identity, Resource, and Organizational Control is vital to protecting our environments. It is not just about "granting permissions"; it is about knowing where to apply them to guarantee least privilege and maximum security.
Here is my mental guide for not getting lost and for recognizing when we can, or should, use each of these.
IAM Policies
👉 The "who can do what." These are the most common policies; they attach to Users, Groups, or Roles.
📝 Example: An operations user needs to start and stop EC2 instances, but must not be able to create, terminate, or modify them. The policy allows only ec2:StartInstances and ec2:StopInstances.
Resource Policies
👉 The "who can touch THIS." Here the rule lives on the resource itself (S3 bucket, SQS queue, KMS key), not on the user. They are vital for cross-account access, but not restricted to it.
📝 Example: You have a centralized logging S3 bucket. Its bucket policy grants s3:PutObject to the Production account (Account B).
SCPs (Service Control Policies)
👉 The "house rules" (identity limits, popularly called guardrails). This is where many people get confused: SCPs do NOT grant permissions. They only define the maximum possible permission. If the SCP says "No," it does not matter that you have AdminAccess; it is still "No."
📝 Example: Security compliance. Deny ec2:RunInstances in any region other than us-east-1.
RCPs (Resource Control Policies)
👉 The "data perimeter" (resource limits). They work like SCPs, but they restrict access to your resources, no matter who the caller is (even an external one).
📝 Example: A strict data perimeter.
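To make the SCP example above concrete, here is a minimal sketch of that region guardrail as a policy document, written as a TypeScript constant purely for readability (an SCP is a plain JSON document attached through AWS Organizations; the statement ID is made up):

// Sketch: deny launching EC2 instances outside us-east-1.
// Remember that SCPs never grant anything; they only cap what identities may do.
const denyEc2OutsideUsEast1 = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'DenyEc2RunOutsideUsEast1', // hypothetical statement ID
      Effect: 'Deny',
      Action: 'ec2:RunInstances',
      Resource: '*',
      Condition: {
        StringNotEquals: { 'aws:RequestedRegion': 'us-east-1' },
      },
    },
  ],
}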
| Policy Type | Applies to... | Main Function | When to use it? |
|---|---|---|---|
| IAM Policy | User/Role/Group | Grant permissions | To grant permissions to users, groups, or roles |
| Resource Policy | S3, SQS, etc. | Grant access (even external) | To define who can access a resource |
| SCP | AWS accounts (except the organization's management account), Org Units | Restrict maximum permissions (Identity) | To set upper limits on what identities can do |
| RCP | The organization's resources, except those in the management account | Restrict maximum access (Resource) | To set upper limits on who can access your data |
Understanding IAM, SCPs, and RCPs is not only vital to protecting your account and organization; it is also a key component in reducing your attack surface.
For example:
Red Team scenario: an attacker tries to assume roles or create users in order to escalate privileges.
SCPs block lateral movement: even with AdminAccess in a child account, the attacker cannot execute actions denied by the SCP at the Organization level (e.g., organizations:LeaveOrganization, iam:CreateUser).
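A sketch of that kind of guardrail, again as a TypeScript constant for readability (illustrative only, with a made-up statement ID):

// Sketch: block common escalation and org-escape actions in member accounts.
const denyLateralMovement = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'DenyOrgEscapeAndUserCreation', // hypothetical statement ID
      Effect: 'Deny',
      Action: ['organizations:LeaveOrganization', 'iam:CreateUser'],
      Resource: '*',
    },
  ],
}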
RCPs prevent data exfiltration: a misconfigured bucket still refuses access from external accounts, even if the attacker holds privileges elsewhere.
Bottom line: knowing and correctly applying these policies is like raising walls before anyone reaches the core of your environment.
💡 Question to reflect on: why is it a critical risk to deploy production resources, or to keep active users operating directly, in the organization's management account (also called the Payer Account)?
🔜 In the next post: we will dive deeper into SCPs, which SCPs every account should have, and how to configure them correctly so you can sleep soundly.
2025-11-24 07:31:34
Fraud detection is one of the most commonly cited use cases for knowledge graph applications. It has already been covered in a variety of articles and implementations across the industry. Here I will show how to do it with Protégé and the VidyaAstra plugin.
Machine learning, neural networks, and other artificial intelligence techniques are commonly used to detect credit card fraud and money laundering schemes. These approaches have significant limitations, however: they rely on statistical models that can be fooled by attackers using synthetic data sets or other methods known as adversarial attacks.
Graph databases and knowledge graphs represent the state of the art in fraud detection.
The reason is that they contain a massive amount of interconnected data, and even if one piece of information is incorrect or missing, the system can still identify fraudulent patterns through relationship analysis.
Knowledge graphs can be used to detect:
This article presents my approach to building fraud detection knowledge graphs using Protégé and the VidyaAstra plugin, based on industry-standard techniques discussed in fraud detection literature.
A knowledge graph is a database of facts and relations between different entities.
A knowledge graph can represent the world: its objects can be concepts or physical things, along with their attributes, relationships, and metadata. For example, a financial institution could have a knowledge graph containing information about its customers, accounts, loans, transactions, and employees. The institution might also have a separate knowledge graph containing information about its offices and locations.
In fraud detection, a knowledge graph represents:
Graph databases are a natural way of building a knowledge graph because they provide an efficient way of storing relations between entities. A fact can be represented as an entity and its relationship with another entity can be represented as an edge between them. This representation enables us to use graph algorithms on our knowledge graph to find answers to various questions such as whether a user is fraudulent or if a transaction pattern indicates money laundering.
Machine learning, neural networks, and AI techniques have made significant strides in fraud detection, but they face critical limitations:
1. Vulnerability to Adversarial Attacks
2. Black Box Problem
3. Statistical Limitations
4. Missing Contextual Understanding
Knowledge graphs address these limitations by:
✅ Relationship-Native: Connections are first-class citizens, not expensive joins
✅ Context-Aware: Every entity exists within a web of relationships
✅ Explainable: Query results show the exact path of reasoning
✅ Pattern-Based: Define fraud patterns once, detect them everywhere
✅ Robust: Missing or incorrect data doesn't break relationship analysis
Most importantly: Knowledge graphs combine the power of graph algorithms (DFS, cycle detection, community finding) with semantic reasoning (ontologies, inference rules) to detect fraud patterns that are invisible to traditional approaches.
The power of knowledge graphs for fraud detection comes from their ability to model and query complex relationships:
Traditional Database Approach:
SELECT * FROM transactions
WHERE amount > 10000 AND suspicious_flag = TRUE
→ Finds individual suspicious transactions (high false positives)
Knowledge Graph Approach:
MATCH (a1:Account)-[:SENDS_MONEY]->(a2:Account)-[:SENDS_MONEY]->
(a3:Account)-[:SENDS_MONEY]->(a4:Account)-[:SENDS_MONEY]->(a1)
WHERE a1 <> a3 AND a2 <> a4
→ Finds circular money flows (actual fraud pattern)
1. Multi-Hop Relationship Queries
Find patterns like:
2. Pattern Matching
Define suspicious patterns once in your ontology:
<owl:Class rdf:about="#CircularMoneyFlow">
<rdfs:subClassOf rdf:resource="#FraudPattern"/>
<rdfs:comment>
Money returns to originating account through intermediaries
</rdfs:comment>
</owl:Class>
Then detect them automatically using graph algorithms and SPARQL queries.
3. Semantic Reasoning
The ontology enables automatic inference:
Facts:
- Transaction_T1 connects Account_A to Account_B
- Account_A shares_email_with Account_C
- Account_C shares_device_with Account_D
Inferred Knowledge:
- Account_A potentially_colluding_with Account_D
- Risk_Score increases due to device sharing
- Pattern matches "Account Takeover" fraud type
The most important step is creating a graph of relationships between various pieces of information about users and transactions. The key is to associate all available information with account IDs:
Core Entities:
Relationships to Model:
owns_account: Customer → Account
sends_money_to: Account → Account
uses_device: Account → Device
accessed_from: Account → IP Address
shares_email: Account → Account
shares_phone: Account → Account
located_at: Account → Location
Attributes:
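To make the model concrete, here is a minimal TypeScript sketch of this schema: the entities, a few of the core attributes used later in the ontology prompt, and the relationship edges listed above. The exact shapes are illustrative, not taken from any particular implementation.

// Minimal sketch of the fraud-detection graph schema described above.
interface Customer { customerId: string; name: string; email: string; phone: string }
interface Account { accountId: string; balance: number; accountType: string; creationDate: string }
interface Device { deviceId: string; type: string; ipAddress: string }

// sends_money_to edges carry the transaction attributes.
interface Transaction {
  transactionId: string
  fromAccountId: string
  toAccountId: string
  amount: number
  timestamp: string
  currency: string
}

// The remaining relationships, stored as simple edge lists keyed by IDs.
interface Edges {
  ownsAccount: Array<{ customerId: string; accountId: string }>   // Customer → Account
  usesDevice: Array<{ accountId: string; deviceId: string }>      // Account → Device
  accessedFrom: Array<{ accountId: string; ipAddress: string }>   // Account → IP Address
  sharesEmail: Array<{ accountId: string; otherAccountId: string }>
  sharesPhone: Array<{ accountId: string; otherAccountId: string }>
  locatedAt: Array<{ accountId: string; location: string }>       // Account → Location
}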
Using a knowledge graph, you can build powerful rules that detect known fraudulent behavior:
Common Fraud Patterns to Look For:
A. Common Attributes (Identity Fraud)
B. Circular Money Flow (Money Laundering)
C. Rapid Transactions (Layering)
D. Structuring (Smurfing)
E. Account Takeover
Using Protégé and VidyaAstra, you can create an OWL ontology that formally defines your fraud detection domain:
Using VidyaAstra's "Create New Ontology" Mode:
Description:
"Create a fraud detection ontology for anti-money laundering.
Include the following:
Entities:
- Account (with properties: account_id, balance, account_type, creation_date)
- Customer (with properties: customer_id, name, email, phone, tax_id)
- Transaction (with properties: transaction_id, amount, timestamp, currency)
- Device (with properties: device_id, type, ip_address)
- FraudPattern (parent class for all fraud types)
Fraud Pattern Types:
- CircularMoneyFlow (subclass of FraudPattern)
- MoneyLaundering (subclass of FraudPattern)
- Structuring (subclass of FraudPattern)
- AccountTakeover (subclass of FraudPattern)
- IdentityFraud (subclass of FraudPattern)
Relationships:
- sendsMoneyTo (Account to Account)
- ownsAccount (Customer to Account)
- usesDevice (Account to Device)
- sharesEmail (Account to Account)
- sharesPhone (Account to Account)
- involvedIn (Account to FraudPattern)
- detectedBy (FraudPattern to DetectionAlgorithm)
Detection Algorithms:
- DFS (Depth First Search for cycle detection)
- TarjanSCC (Strongly Connected Components)
- LouvainCommunity (Community detection for fraud rings)
Risk Levels:
- HighRisk, MediumRisk, LowRisk
Include data properties for risk scores, transaction amounts, and timestamps."
VidyaAstra will generate a complete OWL ontology in 20-30 seconds, including:
Circular money flow is a classic money laundering technique where funds are moved through a series of accounts and eventually return to the originating account. This creates the appearance of legitimate business activity while obscuring the illicit origin of funds.
Example Pattern:
Account A sends $50,000 → Account B
Account B sends $50,000 → Account C
Account C sends $50,000 → Account D
Account D sends $50,000 → Account A (returns to origin)
This pattern is difficult to detect with traditional database queries because it requires following a variable number of hops and recognizing when a path returns to its starting account.
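A toy TypeScript sketch of that check, using a depth-first search over a sends-money-to adjacency list (illustrative only, not the detection logic of any particular product):

// Toy DFS cycle detection over a sends-money-to adjacency list.
type Graph = Record<string, string[]>

function findCycles(graph: Graph, maxDepth = 6): string[][] {
  const cycles: string[][] = []

  function dfs(start: string, node: string, path: string[]) {
    if (path.length > maxDepth) return
    for (const next of graph[node] ?? []) {
      if (next === start && path.length >= 2) {
        cycles.push([...path, start]) // the money returned to its origin
      } else if (!path.includes(next)) {
        dfs(start, next, [...path, next])
      }
    }
  }

  for (const account of Object.keys(graph)) dfs(account, account, [account])
  return cycles
}

// The A → B → C → D → A pattern from the example above:
const transfers: Graph = { A: ['B'], B: ['C'], C: ['D'], D: ['A'] }
console.log(findCycles(transfers))
// Reports the same 4-hop cycle once per starting account, e.g. ["A","B","C","D","A"]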
Open Protégé and launch the VidyaAstra plugin. Select "Create New Ontology" mode and provide this description:
Create a fraud detection ontology for anti-money laundering with circular money flow detection.
Include:
- Account entities with properties (account_id, balance, creation_date)
- Transaction entities linking accounts
- CircularMoneyFlow fraud pattern class
- MoneyLaundering parent class
- DFS and Tarjan cycle detection algorithms
- Risk levels (High, Medium, Low)
- Investigation and compliance action classes
Add relationships:
- sendsMoneyTo (Account to Account)
- involvedIn (Account to FraudPattern)
- detectedBy (FraudPattern to Algorithm)
- triggers (RiskLevel to Action)
Add data properties:
- riskScore (decimal 0-1)
- cycleLength (integer)
- totalAmount (decimal)
- detectionTimestamp (datetime)
VidyaAstra will generate a complete OWL ontology including all classes, properties, and basic individuals.
Add these individuals to your ontology to represent the circular money flow pattern:
<!-- Accounts in the cycle -->
<owl:NamedIndividual rdf:about="#Account_A">
<rdf:type rdf:resource="#Account"/>
<accountId>ACC-001</accountId>
<sendsMoneyTo rdf:resource="#Account_B"/>
<involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>
<owl:NamedIndividual rdf:about="#Account_B">
<rdf:type rdf:resource="#Account"/>
<accountId>ACC-002</accountId>
<sendsMoneyTo rdf:resource="#Account_C"/>
<involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>
<owl:NamedIndividual rdf:about="#Account_C">
<rdf:type rdf:resource="#Account"/>
<accountId>ACC-003</accountId>
<sendsMoneyTo rdf:resource="#Account_D"/>
<involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>
<owl:NamedIndividual rdf:about="#Account_D">
<rdf:type rdf:resource="#Account"/>
<accountId>ACC-004</accountId>
<sendsMoneyTo rdf:resource="#Account_A"/>
<involvedIn rdf:resource="#CircularFlow_001"/>
</owl:NamedIndividual>
<!-- The fraud pattern instance -->
<owl:NamedIndividual rdf:about="#CircularFlow_001">
<rdf:type rdf:resource="#CircularMoneyFlow"/>
<cycleLength rdf:datatype="&xsd;integer">4</cycleLength>
<totalAmount rdf:datatype="&xsd;decimal">200000</totalAmount>
<riskScore rdf:datatype="&xsd;decimal">0.92</riskScore>
<detectedBy rdf:resource="#DFS_Algorithm"/>
<hasRiskLevel rdf:resource="#HighRisk"/>
</owl:NamedIndividual>
Switch to "Basic Query" mode in VidyaAstra and ask:
"Show me all accounts involved in circular money flow"
VidyaAstra will:
Example SPARQL generated:
SELECT ?account ?accountId WHERE {
?account rdf:type :Account .
?account :involvedIn ?pattern .
?pattern rdf:type :CircularMoneyFlow .
?account :accountId ?accountId .
}
Result:
Found 4 accounts involved in circular money flow:
- Account_A (ACC-001)
- Account_B (ACC-002)
- Account_C (ACC-003)
- Account_D (ACC-004)
Pattern: A → B → C → D → A
Risk Score: 0.92 (High Risk)
Detection Method: DFS Algorithm
The ontology structure allows you to run various graph algorithms to detect and analyze fraud patterns:
1. Depth-First Search (DFS) for Cycle Detection
DFS is used to detect cycles in the transaction graph. It traverses the graph starting from each account and checks if it can return to the starting point.
# SPARQL query to find 4-hop cycles using DFS pattern
SELECT ?a1 ?a2 ?a3 ?a4
WHERE {
?a1 :sendsMoneyTo ?a2 .
?a2 :sendsMoneyTo ?a3 .
?a3 :sendsMoneyTo ?a4 .
?a4 :sendsMoneyTo ?a1 .
}
Complexity: O(V + E) where V = accounts, E = transactions
Best for: Finding simple cycles quickly
2. Tarjan's Strongly Connected Components (SCC)
Identifies groups of accounts where money can flow between any two accounts in the group. This is more sophisticated than simple cycle detection.
# Find accounts that are part of strongly connected components
SELECT ?account
WHERE {
?account :sendsMoneyTo+ ?otherAccount .
?otherAccount :sendsMoneyTo+ ?account .
FILTER(?account != ?otherAccount)
}
Best for: Detecting complex fraud rings where money circulates among multiple accounts
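For readers who want the algorithm itself rather than the SPARQL property-path approximation, here is a compact TypeScript sketch of Tarjan's algorithm over the same kind of sends-money-to adjacency list (a toy version, not production code):

// Toy Tarjan SCC: every account in a component can reach every other one,
// which is exactly the "fraud ring" structure described above.
type Graph = Record<string, string[]>

function stronglyConnectedComponents(graph: Graph): string[][] {
  let index = 0
  const stack: string[] = []
  const onStack = new Set<string>()
  const indices = new Map<string, number>()
  const lowLink = new Map<string, number>()
  const components: string[][] = []

  function strongConnect(v: string) {
    indices.set(v, index)
    lowLink.set(v, index)
    index++
    stack.push(v)
    onStack.add(v)

    for (const w of graph[v] ?? []) {
      if (!indices.has(w)) {
        strongConnect(w)
        lowLink.set(v, Math.min(lowLink.get(v)!, lowLink.get(w)!))
      } else if (onStack.has(w)) {
        lowLink.set(v, Math.min(lowLink.get(v)!, indices.get(w)!))
      }
    }

    // v is the root of a component: pop everything above it off the stack.
    if (lowLink.get(v) === indices.get(v)) {
      const component: string[] = []
      let w: string
      do {
        w = stack.pop()!
        onStack.delete(w)
        component.push(w)
      } while (w !== v)
      components.push(component)
    }
  }

  for (const v of Object.keys(graph)) {
    if (!indices.has(v)) strongConnect(v)
  }
  // Components with more than one account are candidate fraud rings.
  return components.filter(c => c.length > 1)
}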
3. Louvain Algorithm for Community Detection
Groups accounts into communities based on transaction patterns. Fraudulent accounts often form tight communities.
Use case: Identify clusters of accounts that primarily transact with each other, suggesting coordination or collusion.
4. PageRank for Account Importance
Assigns importance scores to accounts based on incoming and outgoing transaction patterns.
Use case: Identify "hub" accounts that are central to money laundering operations.
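A toy sketch of the same idea in TypeScript (plain power iteration, not tied to any particular graph engine):

// Toy PageRank over the transaction graph: accounts that many other accounts
// route money into (directly or indirectly) end up with the highest scores.
type Graph = Record<string, string[]>

function pageRank(graph: Graph, damping = 0.85, iterations = 30): Map<string, number> {
  const nodes = Object.keys(graph)
  const n = nodes.length
  let rank = new Map(nodes.map((v): [string, number] => [v, 1 / n]))

  for (let i = 0; i < iterations; i++) {
    const next = new Map(nodes.map((v): [string, number] => [v, (1 - damping) / n]))
    for (const v of nodes) {
      const targets = graph[v] ?? []
      if (targets.length === 0) continue // toy version: dangling rank is simply dropped
      const share = (damping * (rank.get(v) ?? 0)) / targets.length
      for (const w of targets) next.set(w, (next.get(w) ?? 0) + share)
    }
    rank = next
  }
  return rank // sort descending to surface the "hub" accounts
}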
5. Shortest Path Analysis
Find the shortest path between two accounts to understand how money flows.
# Using property paths to find connections
SELECT ?intermediateAccount
WHERE {
:SuspiciousAccount_A :sendsMoneyTo+ ?intermediateAccount .
?intermediateAccount :sendsMoneyTo+ :SuspiciousAccount_B .
}
Use case: Track how illicit funds move from source to destination.
VidyaAstra extends Protégé with three key capabilities that make fraud detection ontology development accessible:
Instead of writing complex SPARQL queries manually, fraud analysts can ask questions in plain English:
Traditional Approach:
SELECT ?account ?riskScore
WHERE {
?account :involvedIn ?pattern .
?pattern rdf:type :CircularMoneyFlow .
?pattern :riskScore ?riskScore .
FILTER (?riskScore > 0.8)
}
VidyaAstra Approach:
Simply ask: "Which accounts are involved in high-risk circular money flows?"
The plugin:
Instead of manually creating classes, properties, and individuals in OWL/RDF XML, describe what you need:
"Create a fraud detection ontology for credit card fraud including:
- Transaction entities with amount, timestamp, merchant
- Customer entities with account info
- Fraud patterns: velocity checks, geographic anomalies, merchant risk
- Detection rules for unusual spending patterns"
VidyaAstra generates a complete, valid OWL ontology in ~20 seconds.
Extend existing ontologies without manual XML editing:
"Add a new fraud type called 'Account Takeover' that includes:
- Login from new device
- Password change
- Followed by large withdrawal
Link it to HighRisk level"
The plugin updates your ontology, validates consistency, and applies changes immediately.
Here are the key graph algorithms used in knowledge graph-based fraud detection systems:
Depth-First Search (DFS)
SELECT ?start ?end
WHERE {
?start :sendsMoneyTo+ ?end .
?end :sendsMoneyTo+ ?start .
FILTER(?start != ?end)
}
Tarjan's Strongly Connected Components
Shortest Path (Dijkstra/Bellman-Ford)
SELECT (COUNT(?intermediate) AS ?pathLength)
WHERE {
:Account_A :sendsMoneyTo+ ?intermediate .
?intermediate :sendsMoneyTo+ :Account_B .
}
All Paths Enumeration
Louvain Algorithm
Label Propagation
PageRank
Betweenness Centrality
Degree Centrality
Subgraph Isomorphism
# Match the "smurfing" pattern: one source, multiple small transactions
SELECT ?source (COUNT(?dest) AS ?numTransactions) (SUM(?amount) AS ?total)
WHERE {
?source :sendsMoneyTo ?dest .
?transaction :from ?source ;
:to ?dest ;
:amount ?amount .
FILTER(?amount < 10000)
}
GROUP BY ?source
HAVING (COUNT(?dest) > 10 && SUM(?amount) > 50000)
Time-Window Queries
SELECT ?account (COUNT(?tx) AS ?count)
WHERE {
?tx :fromAccount ?account ;
:timestamp ?time .
FILTER(?time > "2024-11-23T00:00:00"^^xsd:dateTime &&
?time < "2024-11-23T01:00:00"^^xsd:dateTime)
}
GROUP BY ?account
HAVING (COUNT(?tx) > 10)
In addition to automated graph algorithms, fraud analysts need to visually inspect suspicious patterns. Protégé's built-in visualization tools, combined with VidyaAstra's query capabilities, allow manual exploration:
Start from a suspicious individual (e.g., Account_A)
Follow its object properties (:sendsMoneyTo, :sharesEmail, etc.)
Use VidyaAstra to iteratively drill down:
Query 1: "Show accounts with more than 5 outgoing transactions"
Query 2: "Which of these accounts share email addresses?"
Query 3: "Show the transaction history for Account_XYZ"
Query 4: "Are any of these accounts involved in fraud patterns?"
This iterative, conversational approach combines automated detection with human expertise.
While Protégé is excellent for ontology development and testing, production fraud detection systems typically use dedicated graph databases for scalability:
┌─────────────────────────────────────────┐
│ Protégé + VidyaAstra │
│ • Ontology design & testing │
│ • Pattern definition │
│ • Query prototyping │
└─────────────────────────────────────────┘
↓ (Export OWL)
┌─────────────────────────────────────────┐
│ Graph Database (Production) │
│ • Apache Jena Fuseki │
│ • GraphDB │
│ • Neo4j (with neosemantics plugin) │
│ • Amazon Neptune │
└─────────────────────────────────────────┘
↓
┌─────────────────────────────────────────┐
│ Real-time Transaction Processing │
│ • Stream processing (Kafka, Flink) │
│ • Pattern matching │
│ • Alert generation │
└─────────────────────────────────────────┘
# Apache Jena Fuseki
curl -X POST \
-H "Content-Type: application/rdf+xml" \
--data-binary @fraud-detection-ontology.owl \
http://localhost:3030/fraud/data
# GraphDB
curl -X POST \
-H "Content-Type: application/rdf+xml" \
--data-binary @fraud-detection-ontology.owl \
http://localhost:7200/repositories/fraud/statements
Step 1: Install Protégé
Download and install Protégé for your operating system.
Step 2: Install VidyaAstra Plugin
# Windows
Copy-Item vidyaastra-1.0.1.jar "C:\Program Files\Protege-5.6.7\plugins\"
# macOS
cp vidyaastra-1.0.1.jar "/Applications/Protege.app/Contents/Java/plugins/"
# Linux
cp vidyaastra-1.0.1.jar "$HOME/Protege-5.6.7/plugins/"
Step 3: Launch Protégé and Activate Plugin
Window → Views → Ontology Views → VidyaAstra View
Step 4: Configure API Key
Enter your OpenAI/Claude/Nvidia API key in the VidyaAstra preferences.
Create Your First Fraud Detection Ontology:
Create a fraud detection ontology for money laundering detection.
Include:
- Account and Transaction entities
- CircularMoneyFlow fraud pattern
- sendsMoneyTo relationship
- DFS cycle detection algorithm
- Risk levels and scores
Save the generated ontology as fraud-detection.owl
Query Your Ontology:
Modify Your Ontology:
1. Natural Language Query Processing
// Simplified flow
String userQuery = "Which accounts have high risk scores?";
// 1. Extract ontology context
String context = extractClassesAndProperties(activeOntology);
// 2. Build LLM prompt
String prompt = "Given this ontology:\n" + context +
"\nTranslate to SPARQL: " + userQuery;
// 3. Call LLM
String sparqlQuery = llm.complete(prompt);
// 4. Execute query
ResultSet results = ontology.executeQuery(sparqlQuery);
// 5. Format results
String answer = formatAsNaturalLanguage(results);
2. AI Ontology Generation
// Simplified flow
String description = "Create fraud detection ontology...";
// 1. Generate with strict prompt
String systemPrompt = "Generate valid OWL/RDF XML only. " +
"No markdown, no explanations.";
// 2. Get LLM response
String owlXml = llm.complete(systemPrompt, description);
// 3. Clean and validate
owlXml = removeMarkdown(owlXml);
owlXml = fixCommonXmlIssues(owlXml);
// 4. Validate with OWL API
OWLOntology ont = manager.loadFromString(owlXml);
// 5. Save
saveOntology(ont, "generated-ontology.owl");
Find Circular Money Flows:
PREFIX : <http://example.org/fraud#>
SELECT DISTINCT ?account1 ?account2 ?account3 ?account4
WHERE {
?account1 :sendsMoneyTo ?account2 .
?account2 :sendsMoneyTo ?account3 .
?account3 :sendsMoneyTo ?account4 .
?account4 :sendsMoneyTo ?account1 .
}
Find Accounts Sharing Email:
SELECT ?account1 ?account2 ?email
WHERE {
?account1 :hasEmail ?email .
?account2 :hasEmail ?email .
FILTER(?account1 != ?account2)
}
Find High-Risk Patterns:
SELECT ?pattern ?riskScore
WHERE {
?pattern rdf:type :FraudPattern .
?pattern :riskScore ?riskScore .
FILTER(?riskScore > 0.80)
}
ORDER BY DESC(?riskScore)
Temporal Analysis - Rapid Transactions:
SELECT ?account (COUNT(?tx) AS ?txCount)
WHERE {
?tx :fromAccount ?account ;
:timestamp ?time .
FILTER(?time >= "2024-11-23T00:00:00"^^xsd:dateTime &&
?time <= "2024-11-23T02:00:00"^^xsd:dateTime)
}
GROUP BY ?account
HAVING (COUNT(?tx) > 5)
Fraud detection is fundamentally a relationship problem:
Traditional approaches struggle with:
Knowledge graphs solve these problems by natively representing relationships and enabling graph algorithms.
Protégé provides:
VidyaAstra adds:
Together, they enable fraud analysts to build and query knowledge graphs without deep technical expertise in ontologies or SPARQL.
The complete fraud detection ontology example is available in this repository:
fraud-detection-ontology.owl
Author: Vishal Mysore
Repository: https://github.com/vishalmysore/vidyaastra-plugin
This article presents my approach to fraud detection using knowledge graphs, building on industry-standard techniques with Protégé and the VidyaAstra plugin. The circular money flow use case is a well-documented fraud pattern in financial crime literature and has been covered in many articles before; this implementation demonstrates how ontologies and graph algorithms can detect such patterns effectively.
https://medium.com/neo4j/find-circular-money-flow-with-neo4j-c9138e1c3183
https://www.journalofaccountancy.com/issues/2009/dec/20091793/
https://digitaldealer.com/news/circular-bank-statement-fraud-the-new-synthetic-income-scam-dealers-lenders-must-fight/168087/
Important Notes:
2025-11-24 07:26:22
There is an old truth that operators learned long before the rest of the world even noticed it existed. A person is never opaque. Not really. Every human you meet leaks tiny packets of behavioral data at all times. Micro signals. Rhythm quirks. Stress beads. Directional cues. Environmental preferences. Latent rituals. These signals do not announce themselves. They sit in the edges of perception like peripheral ghosts. Most people ignore them. Operators do not.
Behavioral enumeration is the practice of collecting those signals without ever touching the target. You watch. You listen. You record. You categorize. You build a living index of a person’s internal code before you ever open your mouth. It is the closest thing the real world has to source access for the human mind. The process is old but the precision is new. The modern operator does not guess. They enumerate. They compile. They pull a person’s behavior apart until the pieces line up and reveal the architecture beneath.
This works every time because behavior is deterministic under pressure. Once you know the pattern you know the person. Once you know the person you know how to move through them like you move through an unpatched system.
This is the dark art.
Most people think behavioral reading is intuition. Operators know it is computation. Enumeration means you pull in distinct behavioral variables and slot them into categories that reveal structural truths. You are not looking for emotion or motive. You are looking for patterns that repeat. Humans are biological machines with recursive loops. They move according to internal rules they barely understand.
Enumerate enough of these rules and you can predict their decision tree before they reach the fork in the path.
Behavioral enumeration follows one assumption. Nothing people do is random. It looks random because you have not collected enough data to see the pattern. The moment you hit critical mass the randomness dissolves. The structure appears.
This is why enumeration works. Everything leaks.
Operators treat humans like three surface layers that converge into one predictable organism.
This layer emerges from habits the person never consciously formed. Posture. Breathing pattern. Blink rhythm. Idle stance. Direction they default toward when entering a space. How they adjust their weight when they anticipate a decision. These are the lowest level loops. They rarely change and they reveal the foundation.
Enumeration begins here. If you watch a target long enough you start to see what they cannot hide. These are the immutable processes that drive all their higher level behaviors.
This layer contains everything a person does to operate within the world. The way they speak. The tempo of their voice. The pacing of their steps. How they react to waiting. How they respond to interruption. Their pattern of attention. Their choice of words. Their orientation toward noise. These are semi conscious traits. They can be masked for short periods but not indefinitely.
Functional behavior is where you see the footprint of their stress response. You see how they protect themselves. You see how they conserve energy. You see where their focus goes when they are tired. The functional layer predicts how they will behave under unexpected pressure.
This is the layer that involves other humans. It is the cleanest layer to enumerate because social behavior is scripted. People are conditioned to respond predictably to status changes, confidence shifts, and conversational cues. The interpersonal layer is the easiest environment for enumeration because the patterns are obvious once you know where to look.
Every person belongs to a social archetype. Not a personality type. An archetype defined by how they position themselves inside a social field. Some dominate. Some minimize. Some mirror. Some deflect. Some perform. Some retreat. Once you know which archetype they run you can predict how they will treat you before they speak.
These three surfaces are enough to build a full operational map of a person.
When you interact with someone you trigger their performance layer. People behave differently when they know they are being watched. They script their responses. They reinforce their persona. They hide their loops. They become noise.
Pre contact enumeration bypasses all of that. You observe the target in their natural low awareness state. This is the only state where reality shows itself.
Behavioral enumeration lets you capture the raw signals. The pure unfiltered data. You are not influenced by their social mask. You do not get fooled by charisma or authority cues. You are reading the architecture beneath the behavior, not the behavior itself.
This is what makes it powerful.
There is a cycle operators use to pull a complete behavioral map before contact. It has five steps.
Every person has small physical tics that emerge under minimal cognitive load. Thumb tap. Pocket check. Shoulder roll. Weight shift. Surface scan. These movements reveal the body’s default reset loop. That reset loop reveals stress tolerance, vigilance level, and internal focus style.
Where does their attention go when nothing is happening. Do they look at exits. Do they look at people. Do they look at screens. Do they look at the floor. This drift shows threat model, curiosity type, and internal drive orientation. Drift is more revealing than focus. Focus is conscious. Drift is instinct.
Do they move fast or slow. Do they pause before acting. Do they interrupt. Do they hesitate. Timing shows their decision tree. Fast movers favor impulse. Slow movers favor evaluation. Hesitators favor external validation. Interruptors favor control. Once you know their timing you know their operational tempo.
You look for the micro behaviors that only appear under mild tension. Hair adjustment. Lip compression. Jaw shift. Rapid blink. Short breath. These markers reveal which emotional circuits dominate under pressure. Some people collapse inward. Some project outward. Some freeze. Some redirect. Stress markers predict exactly how they will behave if an interaction becomes difficult.
You observe how they respond to small environmental changes. Loud noise. New person entering the space. Sudden line movement. Unexpected delay. Their reaction reveals whether their nervous system is rigid or adaptive. Predictable people are easier to steer. Adaptive people require a different interaction strategy.
Together these steps create the behavioral index. Once the index exists the target becomes readable.
When you enumerate someone before speaking to them and then interact with them using the map you built you appear psychic. You appear intuitive. You appear like you know things you should not know.
This is because enumeration lets you avoid every blind alley before it appears. You never trigger their defenses. You never hit their resistance points. You never misread their tone. You always respond with exactly the cadence they expect. You match their reinforcement loop on the first try.
From their perspective you are perfectly aligned with them. From your perspective you are simply following the map.
This is why behavioral enumeration works. It produces an illusion of compatibility so strong that targets relax without realizing why.
At higher levels enumeration involves pattern stacking. Operators start combining behavioral variables to produce predictive composites.
For example:
A target with slow timing, low drift, and strong stress markers under noise is likely conflict avoidant but internally rigid. You approach gently but assert concretely.
A target with fast timing, high drift, and minimal stress markers is novelty driven, reward seeking, and easily pulled into momentum. You approach with energetic pacing and open frameworks.
A target with low timing variation, consistent gaze patterns, and high environment scanning is vigilance trained. You approach with transparency and precision.
The combinations form archetypes far more accurate than personality tests or conventional profiling.
This is scientific intuition. It looks mystical but it is engineering.
Behavioral enumeration succeeds because people cannot stop being themselves. Even when they try. Even when they mask. Even when they lie. Even when they perform. The automatic layer always leaks through. The rhythm always leaks through. The timing always leaks through.
You cannot disguise rhythm. You cannot disguise baseline posture. You cannot disguise micro stress signals. These signals anchor everything else.
When you enumerate these primitives you bypass deception entirely.
Behavioral enumeration works because the human nervous system is predictable.
Enumeration serves three primary purposes.
You know exactly where a person will push back and exactly where they will let you through.
You choose tone, pacing, word choice, and interaction strategy that aligns with their internal rhythm. This eliminates friction.
When you understand their stress cascade you know how they will react under pressure. This lets you avoid escalation entirely.
Enumeration is risk management. The psychological version of a firewall rule audit. You reveal the paths of least resistance. Once those paths appear you walk them.
In 2025 every digital operation has a human component somewhere in the chain. Someone approves access. Someone presses a button. Someone answers an email. Someone misconfigures a system. Someone trusts the wrong message. Someone believes the wrong detail.
Behavioral enumeration lets you understand that person before they enter your orbit. It gives you a clean operational path. It grants you clarity in a time where everyone else operates blind.
It is not manipulation. It is literacy. Most people live inside their own heads. Operators read the world outside theirs.
Behavioral enumeration is the dark art that is not dark at all once you practice it. It is simply the science of paying attention at the level most people cannot sustain. When you enumerate someone you unlock their architecture. And once you know the architecture the interaction is no longer a negotiation. It becomes navigation.
There is no magic. Only data. Only rhythm. Only structure. And once you see the structure you never stop seeing it.
If you want the full field manual that goes far deeper into real behavioral reconnaissance, covert psychological mapping, and applied mind surface analysis, the pack is here:
Psychological Recon Techniques: A Field Manual for H4CK3RS
2025-11-24 07:25:59
When building a portfolio website, one of the biggest challenges is information overload. You want to showcase everything you've built - your full-stack capabilities, data projects, UI/UX work, project management experience - but recruiters and hiring managers often have specific interests.
A front-end recruiter doesn't need to sift through your database optimization projects, and a data science manager shouldn't have to scroll past your Angular components.
The solution? An intelligent filtering system that asks visitors what they're interested in and quietly highlights the most relevant content throughout the entire portfolio.
Before writing a single line of code, I established clear design goals based on fundamental UX principles:
Don't overwhelm users with all filtering options at once. Start with high-level interests (Full Stack, Web Development, Data, etc.) and let the system handle the complexity behind the scenes.
Filter selections should persist across page navigation and browser sessions, but users should be able to adjust them at any time without going through a complex flow.
Visual highlighting should guide attention without screaming. Instead of hiding non-matching content (which feels restrictive), use subtle visual cues - lighter borders, reduced opacity - to naturally draw the eye to relevant items.
Leverage the natural F-shaped reading pattern by placing filter controls at the top of each page and using horizontal layouts that align with how users scan content.
Show a welcome modal only once per session, making it feel like a helpful concierge rather than an annoying popup.
Enable direct linking to filtered views through URL parameters, allowing users to share specific portfolio contexts or bookmark their preferred view.
The system is built around four key architectural pieces:
1. Filter Presets (lib/filters.ts)
The heart of the system is a centralized configuration file that maps high-level interest categories to specific technology tags:
export const filterPresets: Record<InterestCategory, FilterPreset> = {
'full-stack': {
id: 'full-stack',
label: 'Full Stack Development',
description: 'End-to-end application development with modern frameworks',
tags: [
'React', 'Angular', 'Vue', 'Next.js',
'Node.js', 'Express', '.NET', 'C#',
'JavaScript', 'TypeScript', 'Python',
'SQL', 'SQL Server', 'MongoDB',
'REST APIs', 'CI/CD', 'Docker', 'Kubernetes'
]
},
// ... more presets
}
This approach follows the Single Source of Truth principle - all filtering logic stems from this one configuration, making it easy to add new categories or adjust mappings.
The file also includes helper functions for matching logic:
export function matchesFilters(itemTags: string[], activeFilters: string[]): boolean {
return activeFilters.some(filter => itemTags.includes(filter))
}
export function getFilterStrength(itemTags: string[], activeFilters: string[]): number {
if (activeFilters.length === 0) return 1
const matches = itemTags.filter(tag => activeFilters.includes(tag)).length
return matches / activeFilters.length
}
The getFilterStrength function is crucial - it returns a 0-1 score indicating how strongly an item matches active filters, enabling nuanced visual highlighting.
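For instance (the tag values here are only for illustration):

// A card tagged React + SQL + Python, scored against four active filter tags
getFilterStrength(['React', 'SQL', 'Python'], ['React', 'Node.js', 'SQL', 'TypeScript'])
// => 0.5 - two of the four active filters match, so the card gets mid-strength highlighting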
2. Filter Context (contexts/FilterContext.tsx)
React Context provides global state without prop drilling. The FilterContext manages:
The implementation uses lazy initialization to avoid hydration mismatches, a common gotcha in Next.js:
const [activeFilters, setActiveFilters] = useState<string[]>(() => {
if (typeof window !== 'undefined') {
const stored = localStorage.getItem('portfolioFilters')
return stored ? JSON.parse(stored) : []
}
return []
})
This pattern ensures localStorage is only accessed on the client, preventing server/client render discrepancies.
A powerful feature of the filtering system is the ability to set filters directly through URL parameters. This enables shareable links and bookmarkable views:
const loadInitialInterest = (): InterestCategory | null => {
if (typeof window === 'undefined') return null
// Check URL parameter first
const urlParams = new URLSearchParams(window.location.search)
const urlInterest = urlParams.get('interest')
if (urlInterest && urlInterest in filterPresets) {
return urlInterest as InterestCategory
}
// Fall back to stored preference
const stored = localStorage.getItem('portfolioInterest')
return stored ? (stored as InterestCategory) : null
}
This creates several powerful use cases:
Sharing https://www.ryanverwey.dev/?interest=full-stack to showcase full-stack work
Sending ?interest=web-development when applying to a front-end position
The URL parameter takes precedence over stored preferences, ensuring shared links always display the intended context. After the initial load, the selection is persisted to localStorage for subsequent navigation.
The context also provides methods for manipulating filters:
const applyPreset = useCallback((interest: InterestCategory) => {
const preset = filterPresets[interest]
setSelectedInterest(interest)
setActiveFilters(interest === 'browse-all' ? [] : preset.tags)
}, [])
3. Welcome Modal (components/WelcomeModal.tsx)
First impressions matter. The welcome modal appears once per session, presenting six interest options in a clean, scannable grid:
<div className="fixed inset-0 z-50 flex items-center justify-center p-4
bg-black/50 backdrop-blur-sm animate-in fade-in duration-200">
<div className="relative w-full max-w-lg bg-white dark:bg-zinc-900
rounded-xl shadow-xl">
{/* Interest options in a 2-column grid */}
</div>
</div>
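The once-per-session behaviour itself is a small effect. Here is a sketch of one way it can be done; the hook name and the sessionStorage key are hypothetical, not taken from the actual codebase:

import { useEffect, useState } from 'react'

// Sketch: show the welcome modal only once per browser session.
function useWelcomeOncePerSession() {
  const [showWelcome, setShowWelcome] = useState(false)

  useEffect(() => {
    // sessionStorage clears when the tab closes, so the modal reappears on a
    // fresh visit but not on every page navigation within the same session.
    if (!sessionStorage.getItem('portfolioWelcomeSeen')) {
      setShowWelcome(true)
      sessionStorage.setItem('portfolioWelcomeSeen', 'true')
    }
  }, [])

  return { showWelcome, dismiss: () => setShowWelcome(false) }
}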
Key UX decisions:
4. Filter Bar (components/FilterBar.tsx)
The FilterBar appears consistently at the top of filtered pages, providing both context and control:
<div className="bg-white/80 dark:bg-zinc-950/80 backdrop-blur-sm
border-b border-zinc-200 dark:border-zinc-800 mb-8">
<div className="container mx-auto max-w-6xl px-4 py-4">
{/* Interest selector buttons */}
{/* Active filters display */}
</div>
</div>
The semi-transparent background with backdrop blur creates a modern, elevated feel while maintaining visual hierarchy.
The highlighting system uses a three-tier approach based on match strength:
const getHighlightClass = (tags: string[]) => {
if (activeFilters.length === 0) return ''
const strength = getFilterStrength(tags, activeFilters)
if (strength >= 0.5) {
return 'ring-1 ring-zinc-400 dark:ring-zinc-600'
} else if (strength > 0) {
return 'ring-1 ring-zinc-300 dark:ring-zinc-700'
} else {
return 'opacity-40'
}
}
This creates a visual hierarchy: items matching at least half of the active filters get a visible ring, items with any match get a lighter ring, and items with no matches are dimmed to 40% opacity rather than hidden.
This follows the Recognition Over Recall principle - users can see all content but immediately recognize what's most relevant.
TypeScript ensures compile-time safety across the entire system:
export type InterestCategory =
| 'full-stack'
| 'web-development'
| 'back-end'
| 'data'
| 'project-management'
| 'browse-all'
interface FilterPreset {
id: InterestCategory
label: string
description: string
tags: string[]
}
Union types for categories prevent typos and enable autocomplete. Every filter interaction is type-checked, reducing runtime errors.
Several optimizations ensure the filtering system doesn't impact performance:
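One representative example is memoizing each card's match strength so it is recomputed only when the card's tags or the active filters change. This is a sketch: the component shape, the useFilters hook name, and the import paths are assumptions, not code from the actual site.

import { useMemo } from 'react'
import { getFilterStrength } from '../lib/filters'        // assumed path
import { useFilters } from '../contexts/FilterContext'    // assumed hook name and path

// Sketch: avoid recomputing match strength on every render.
function ProjectCard({ title, tags }: { title: string; tags: string[] }) {
  const { activeFilters } = useFilters()

  const strength = useMemo(
    () => getFilterStrength(tags, activeFilters),
    [tags, activeFilters]
  )

  const highlight =
    strength >= 0.5 ? 'ring-1 ring-zinc-400' : strength > 0 ? 'ring-1 ring-zinc-300' : 'opacity-40'

  return <div className={highlight}>{title}</div>
}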
The system incorporates several accessibility features:
Real <button> elements, not divs
Animations respect prefers-reduced-motion
The About page uses highlighting on both experience cards and skill buttons:
const renderSkillButton = (skill: string) => {
const isFiltered = activeFilters.includes(skill)
return (
<button
className={`block w-full px-3 py-2 text-sm rounded-lg border transition-all ${
isFiltered
? 'border-zinc-400 bg-zinc-100 dark:bg-zinc-800 font-medium'
: 'border-zinc-200 dark:border-zinc-700 hover:border-zinc-300'
}`}
>
{skill}
</button>
)
}
This creates a cohesive experience where skills relevant to the selected interest are subtly emphasized.
Projects showcase a unique challenge - they have both category filters (Web Apps, Websites, Tools) and global filters. The solution:
const filteredProjects = projects.filter(p => {
const categoryMatch = selectedCategory === 'All' || p.category === selectedCategory
// Don't filter by activeFilters - only highlight
return categoryMatch
})
Projects remain visible regardless of global filters (avoiding confusing disappearing content), but the highlighting guides attention to relevant tech stacks.
The Experience page combines a collapsible skill sidebar with the global FilterBar:
const toggleSkill = (skill: string) => {
if (activeFilters.includes(skill)) {
removeFilter(skill)
} else {
addFilter(skill)
}
}
This allows granular control - users can click individual skills to add them to filters, creating a dynamic, exploratory experience.
The blog integrates with existing category/tag filters while syncing with global state:
useEffect(() => {
if (currentTag && !activeFilters.includes(currentTag)) {
addFilter(currentTag)
}
}, [currentTag, activeFilters, addFilter])
This creates continuity - selecting a tag in the blog updates global filters, so navigating to the Experience page automatically highlights related skills.
The entire system leverages Gestalt principles of perception:
Rather than forcing users to understand tag taxonomies, the system presents intent-based presets:
Every interaction provides clear feedback:
The system adapts to smaller screens:
.flex-col sm:flex-row sm:items-center
On mobile, filter buttons stack vertically. On desktop, they flow horizontally. This maintains usability across all viewport sizes.
After deployment:
The filtering system adds negligible overhead - most logic is simple array operations, and React's reconciliation handles updates efficiently.
Building an intelligent filtering system isn't just about functionality - it's about understanding user intent and creating an interface that feels anticipatory rather than reactive. By combining thoughtful UX principles, modern React patterns, and performant architecture, we created a system that helps visitors find exactly what they're looking for without feeling restrictive.
The key insights:
This system demonstrates that sophisticated features don't require complex UIs. By leveraging fundamental design principles and modern web technologies, we created an experience that feels effortless - the hallmark of great design.
The URL-based filtering has proven particularly valuable in job applications. Instead of sending recruiters to a generic portfolio, I can now send role-specific links:
?interest=full-stack - Highlights end-to-end development work
?interest=web-development - Emphasizes UI components and React expertise
?interest=back-end - Showcases API development and database work
?interest=data - Features analytics, ETL, and visualization projects
This targeted approach increases engagement by showing recruiters exactly what they're looking for, without making them hunt through unrelated content.
Technologies Used: React, Next.js 14, TypeScript, Tailwind CSS, React Context API
Methodologies: User-Centered Design, Progressive Enhancement, Mobile-First Development, Accessibility-First
UI/UX Principles: Progressive Disclosure, Recognition Over Recall, F-Pattern Layout, Gestalt Principles, Visual Hierarchy