
Modern Identity Governance Strategies for Evolving IT Environments

2025-12-12 03:10:02

As organizations continue transitioning from traditional network perimeters to distributed, cloud-first architectures, identity governance has become one of the most critical pillars of cybersecurity. Employees now access corporate resources from personal devices, home networks, and global locations—and as a result, IT teams must ensure that authentication, authorization, and lifecycle management remain both seamless and secure. Modern identity governance focuses not only on managing who has access to what but also on continuously evaluating risk signals, enforcing least privilege, and automating processes that were once manual and error-prone.

The Shift Toward Zero Trust Foundations

Zero Trust principles have reshaped how organizations design access policies. Instead of assuming that users inside the network can be trusted, Zero Trust requires continuous verification based on context: the user’s identity, device status, location, and security posture. Modern identity platforms increasingly integrate risk-based controls that adapt access requirements in real time. For example, logins from unusual locations may trigger multifactor authentication, while high-risk sessions can be automatically blocked until verified by an administrator.

This shift is especially important as remote and hybrid workforces expand, increasing the number of unmanaged devices and off-network connections. Identity governance solutions capable of ingesting threat intelligence, correlating behavior patterns, and enforcing granular policy conditions give organizations a significant security advantage.
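
To make this concrete, here is a minimal sketch of what such risk-based decision logic might look like. The signal names and thresholds are illustrative, not any particular vendor's API:

// Illustrative risk-based access sketch (hypothetical signals and thresholds).
function evaluateLogin(context) {
  let risk = 0;

  if (!context.device.isManaged) risk += 30;   // unmanaged device
  if (context.location.isUnusual) risk += 40;  // unfamiliar location
  if (context.threatIntel.flagged) risk += 50; // known-bad indicators

  if (risk >= 80) return { action: "block", until: "admin-verified" };
  if (risk >= 40) return { action: "challenge", method: "mfa" };
  return { action: "allow" };
}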

Automating Lifecycle Management

Employee onboarding, role changes, and departures introduce opportunities for misconfigurations and excessive access if handled manually. Automated identity lifecycle management reduces these risks by tying permissions to HR events and predefined role templates. When an employee joins the company, the system automatically assigns the appropriate applications and group memberships. When they move to a new department, their previous permissions are revoked and replaced with new ones that reflect their updated responsibilities.

Automation also ensures timely deprovisioning. Dormant accounts are among the most attractive targets for attackers, and identity governance tools that monitor account activity can flag or disable unused credentials before they become a liability.
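
Conceptually, the joiner/mover/leaver automation described above reduces to mapping HR events onto role templates. A minimal sketch, with hypothetical grant/revoke helpers standing in for whatever provisioning calls your identity platform exposes:

// Hypothetical HR-event-driven lifecycle sketch.
const roleTemplates = {
  engineering: ["github", "jira", "vpn"],
  finance: ["erp", "expense-tool"],
};

function onHrEvent(event, user) {
  switch (event.type) {
    case "hired": // joiner: grant role-based access
      grant(user, roleTemplates[event.department]);
      break;
    case "transferred": // mover: swap the old role for the new one
      revoke(user, roleTemplates[event.fromDepartment]);
      grant(user, roleTemplates[event.toDepartment]);
      break;
    case "terminated": // leaver: deprovision everything promptly
      revokeAll(user);
      break;
  }
}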

Improving Access Visibility and Compliance

Regulatory frameworks such as SOX, HIPAA, and GDPR require organizations to demonstrate tight control over access to sensitive information. Identity governance platforms provide centralized visibility into who has access to which systems, when that access was granted, and whether it aligns with internal security policies. Access reviews—once tedious and spreadsheet-driven—are now automated and streamlined, reducing the burden on department managers and compliance teams.

Comprehensive reporting helps auditors validate that access controls are being enforced consistently. Meanwhile, automated alerts notify administrators when privileged accounts deviate from policy, enabling rapid remediation.
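
A policy-deviation check of that kind can be sketched simply: compare each account's actual entitlements against its role template and flag the excess (all names here are illustrative):

// Illustrative access-review check: flag entitlements beyond the role template.
function findExcessAccess(account, roleTemplate) {
  const excess = account.entitlements.filter((e) => !roleTemplate.includes(e));
  return excess.length > 0
    ? { user: account.user, excess, action: "review" }
    : null;
}

// Example: a privileged account holding more than its role allows
findExcessAccess(
  { user: "jdoe", entitlements: ["erp", "prod-db-admin"] },
  ["erp"]
);
// -> { user: "jdoe", excess: ["prod-db-admin"], action: "review" }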

Building a Resilient Identity Strategy

A resilient identity governance program blends automation, policy logic, and continuous monitoring. Organizations that invest in scalable identity tools can reduce operational friction, strengthen compliance, and respond quickly to emerging threats. As identity remains the new security perimeter, IT teams must evaluate solutions that can evolve alongside their infrastructure.

For a deeper look at how different identity systems shape authentication, security models, and administrative workflows, consider reviewing this comparison of Entra ID vs Active Directory, which highlights key architectural distinctions.

Luna API built with Xano Backend

2025-12-12 03:09:08

How I Built a Developer Opportunity API in 3 Days with Xano + AI


An AI-powered API that matches developers with hackathons, jobs, and grants - built for the Xano Challenge

The Problem

I was tired of manually searching 10+ platforms every week:

  • DevPost for hackathons
  • LinkedIn for jobs
  • Various grant databases
  • Twitter for announcements

Why isn't there ONE API that aggregates all of this and tells me what's actually relevant to MY skills?

So I built one. In 3 days.

What I Built

The Developer Opportunity Aggregator API is a RESTful API that:

  1. Aggregates hackathons, jobs, grants, scholarships, and learning resources
  2. Matches opportunities to your developer profile using an intelligent algorithm
  3. Ranks results by how well they fit YOUR skills, experience, and interests

First AI prompt used to build out almost the entire API backend

Build this API in Xano exactly as specified in this plan. Start with the database schema, then build endpoints in this order: [attached the plan that's in the project root]. Use XanoScript for the matching algorithm logic.

The Magic: Smart Matching

Instead of just filtering, the API calculates a match score (0-100) for each opportunity:

{
  "title": "AI Hackathon 2025",
  "match_score": 92,
  "match_reasons": [
    "Matches 3/4 required skills (TypeScript, Python, AI/ML)",
    "Experience level is perfect fit (Advanced)",
    "Aligns with 2/3 interests (AI, Backend)"
  ]
}

The algorithm weights:

  • Skills Match (40%) - Do your skills match the requirements?
  • Experience Level (30%) - Are you qualified?
  • Interest Alignment (20%) - Does it match what you're looking for?
  • Location (10%) - Remote-friendly? Relocation works?

The Tech Stack

  • Xano - No-code backend for database + API
  • AI-Assisted Development - Generated initial endpoint logic
  • OpenAPI 3.1 - Full documentation spec

Why Xano?

I needed to ship fast. Xano let me:

  • Design database schema visually
  • Build REST endpoints without writing server code
  • Focus on business logic (the matching algorithm)

Key Endpoints

1. Register Developer Profile

curl -X POST "https://api.example.com/api:v1/developers_register" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "dev@example.com",
    "skills": ["TypeScript", "React", "Python"],
    "experience_level": "advanced",
    "interests": ["AI", "Backend"]
  }'

2. Get Personalized Matches

curl "https://api.example.com/api:v1/opportunities_matching?developer_id=YOUR_ID"

Returns opportunities sorted by match score!

3. Search with Filters

curl "https://api.example.com/api:v1/opportunities_search?type=hackathon&skills=Python"

The Matching Algorithm

Here's the core logic (simplified):

// Simple array intersection helper (added so the snippet runs standalone)
function intersection(a, b) {
  return a.filter((item) => b.includes(item));
}

function calculateMatchScore(developer, opportunity) {
  let score = 0;

  // Skills (40% weight)
  const matchedSkills = intersection(
    developer.skills,
    opportunity.required_skills
  );
  if (opportunity.required_skills.length > 0) {
    score += (matchedSkills.length / opportunity.required_skills.length) * 40;
  }

  // Experience (30% weight)
  if (developer.level >= opportunity.required_level) {
    score += 30;
  } else {
    score += 15; // partial credit
  }

  // Interests (20% weight)
  const matchedInterests = intersection(
    developer.interests,
    opportunity.category
  );
  if (opportunity.category.length > 0) {
    score += (matchedInterests.length / opportunity.category.length) * 20;
  }

  // Location (10% weight)
  if (opportunity.remote || developer.willing_to_relocate) {
    score += 10;
  }

  // Deadline bonus (assumes opportunity.deadline is an ISO date string)
  const daysUntilDeadline =
    (new Date(opportunity.deadline) - Date.now()) / (1000 * 60 * 60 * 24);
  if (daysUntilDeadline >= 0 && daysUntilDeadline < 3) {
    score += 5;
  }

  return Math.min(score, 100);
}

What I Learned

1. Data modeling is everything

Spent the first day just designing the schema. Worth it.

2. Test as you build

Created a Postman collection alongside development. Caught bugs early.

3. AI accelerates, doesn't replace

AI helped generate boilerplate. I still had to:

  • Fix edge cases
  • Handle null values properly
  • Debug UUID parsing errors

What's Next?

  • [ ] Connect real data sources (DevPost API, GitHub Jobs)
  • [ ] Add email notifications for high-match opportunities
  • [ ] Build a simple frontend

Try It Yourself

API Documentation: Documentation

Live API: https://xmsx-hkkx-nz6p.n7e.xano.io/api:v1

GitHub: Github

Postman Collection: Postman

Thanks for Reading!

If you're building something similar or have questions about the Xano + AI workflow, drop a comment below!

Built for the Xano AI-Powered Backend Challenge 2025

#xano #api #ai #hackathon #buildingpublic

I Got Tired of Rewriting Payment Code, So I Built a Unified SDK for Africa

2025-12-12 03:08:16

Last weekend, I hit a wall.

I was working on a project that needed payment integration. I started with Paystack. Halfway through, I realized for business reasons, I needed to switch to Flutterwave.

"Should be a quick 10-minute job," I thought.

I was wrong. It was a nightmare.

The Fragmentation Problem

Switching from Paystack to Flutterwave wasn't a simple swap. It was a complete refactor.

  • Inconsistent Data: Paystack expects the amount in kobo (50000). Flutterwave wants Naira (500).
  • Prop Name Hell: Was it publicKey or public_key? ref or tx_ref?
  • Different Response Objects: Success callbacks returned completely different data structures.

My clean codebase was suddenly littered with if (provider === 'paystack') statements. It was messy, hard to maintain, and felt incredibly inefficient. If I was struggling with this, thousands of other African developers probably were too.

A senior engineer's solution would be to write an Adapter pattern—a translation layer to normalize the data. But that’s dozens of lines of boilerplate code just to get started.
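
For illustration, the translation layer a hand-rolled adapter needs might look roughly like this (field names follow the differences described above; a sketch, not the SDK's internals):

// Sketch of a hand-rolled adapter layer (illustrative field names).
const adapters = {
  paystack: {
    toConfig: ({ apiKey, amount, email }) => ({
      key: apiKey,
      email,
      amount: amount * 100, // Paystack wants kobo
    }),
    fromResponse: (res) => ({ transactionId: res.reference }),
  },
  flutterwave: {
    toConfig: ({ apiKey, amount, email }) => ({
      public_key: apiKey,
      customer: { email },
      amount, // Flutterwave wants Naira
    }),
    fromResponse: (res) => ({ transactionId: res.tx_ref }),
  },
  // ...and the same again for Monnify and Remita. The boilerplate adds up.
};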

There had to be a better way. So I built one.

Introducing use-africa-pay

use-africa-pay is an open-source SDK that unifies Africa's top payment gateways (Paystack, Flutterwave, Monnify, and Remita) under one simple, consistent API.

It’s the Adapter pattern you don't have to write.

Before: The Manual, Messy Way

// Every provider needs its own config and logic
if (provider === 'paystack') {
  // ...handle Paystack's props, amount in kobo, and response
}
if (provider === 'flutterwave') {
  // ...handle Flutterwave's props, amount in Naira, and response
}

After: The Unified, Clean Way

With use-africa-pay, you write your logic once.

import { useAfricaPay } from '@use-africa-pay/core';

function PaymentButton({ provider, amount, email }) {
  const { initializePayment } = useAfricaPay({
    provider: provider, // 'paystack' | 'flutterwave' | 'monnify'
    apiKey: '...',
    amount: amount, // Always in the base currency (e.g., Naira)
    email: email,
    onSuccess: (response) => {
      // Standardized response!
      console.log('Success! Transaction ID:', response.transactionId);
    },
    onClose: () => console.log('Payment closed.'),
  });

  return <button onClick={initializePayment}>Pay Now</button>;
}

That’s it. Now, switching from Paystack to Flutterwave is as simple as changing the provider prop. The SDK handles all the messy translation in the background.

We Need Your Help! This is an Open-Source Project

I built this over a weekend, but its potential is huge. The goal is to create the ultimate payment layer for African developers, built by the community.

This is a perfect opportunity if you're looking for your first open-source contribution. We have several issues labeled good first issue that are ready to be picked up.

How you can contribute:

  • ⭐ Star the repo on GitHub to show your support.
  • 💻 Pick up an issue and submit a PR.
  • 💡 Suggest a new feature or request integration for another payment gateway.

Let's work together to solve this problem for everyone.

What other payment gateways should we add next? Let me know in the comments!

Quic-test: an open tool for testing QUIC, BBRv3, and FEC under real-world network conditions

2025-12-12 03:05:53

This article was prepared as part of the CloudBridge Research project focused on optimizing network protocols (BBRv3, MASQUE, FEC, QUIC).

Project: github.com/cloudbridge-research/quic-test

Video demo of quic-test

Screenshot of interface

Why we built this

When we began studying the behavior of QUIC, BBRv3, and Forward Error Correction in real networks — from Wi-Fi to mobile networks and regional backbones — we ran into a simple problem: there were almost no tools capable of accurately reproducing real-world network conditions.

You can use iperf3, but it is limited to TCP and basic UDP. You can take standalone QUIC libraries, but they lack visualization and load generation. You can write custom simulators, but they do not reflect real channel behavior. Want to test how BBRv3 performs between Moscow and Novosibirsk? You would need to find three servers across different datacenters, configure netem, collect metrics by hand, and hope the results are reproducible.

There was no comprehensive QUIC tester with charts, channel profiles, FEC, BBRv3 support, TUI, and Prometheus metrics.
So we built quic-test — an open tool we use inside CloudBridge Research for all experiments. And now we share it with the community, universities, and engineers.

What is quic-test

A fully open laboratory environment for analyzing the behavior of QUIC, BBRv3, and FEC in real networks.
Not a simulator, but a real engineering instrument — all our research is based on measurements collected with it.

Who quic-test is for

Network engineers — compare TCP/QUIC/BBRv3 in your own networks and understand where QUIC provides real benefits.

SRE and DevOps teams — test service behavior under packet loss and high RTT, prepare for production issues before they appear.

Educators and students — run modern labs on transport protocols with real metrics and visualization.

Researchers — gather datasets for ML routing models, publish reproducible results.

Key capabilities

Protocols: QUIC (RFC 9000), HTTP/3, 0-RTT resumption.

Congestion Control: BBRv2 (stable), BBRv3 (experimental), CUBIC, Reno.

Forward Error Correction: XOR-FEC (working), RS-FEC (in development).

Network profiles: mobile (4G/LTE), Wi-Fi, lossy (3–5% loss), high-latency (regional routes).
All profiles are based on real measurements from our CloudBridge Edge PoPs.

Metrics: Prometheus, Grafana, TUI visualization (quic-bottom in Rust).

Comparison: TCP vs QUIC under identical conditions.

Quick Start

Docker

The simplest way to start is with the prebuilt Docker images:

# Run client (performance test)
docker run mlanies/quic-test:latest --mode=client --server=demo.quic.tech:4433

# Run server
docker run -p 4433:4433/udp mlanies/quic-test:latest --mode=server

Build from source

git clone https://github.com/twogc/quic-test
cd quic-test

# Build FEC library (requires clang)
cd internal/fec && make && cd ../..

# Build main tool (requires Go 1.21+)
go build -o quic-test cmd/quic-test/main.go

# Run
./quic-test --mode=client --server=demo.quic.tech:4433

First test: QUIC vs TCP

Server:

./quic-test --mode=server --listen=:4433

Client:

./quic-test --mode=client --server=127.0.0.1:4433 --duration=30s

QUIC vs TCP comparison:

./quic-test --mode=client --server=127.0.0.1:4433 --compare-tcp --duration=30s

TUI visualization:

quic-bottom --server=127.0.0.1:4433

The results include RTT, jitter, throughput, retransmissions, packet loss, FEC recovery, and fairness under parallel tests.
These are the exact metrics we used when achieving jitter <1 ms on PoP↔PoP.

Real network profiles

Profiles are based on actual measurements from CloudBridge Edge PoPs (Moscow, Frankfurt, Amsterdam).

Mobile (4G/LTE)

RTT 50–150 ms (avg ~80), throughput 5–50 Mbps (avg ~20), 0.1–2% loss, jitter 10–30 ms.
On this profile we tested FEC and achieved ~+10% goodput at 5% loss.

Wi-Fi

Burst losses and micro-drop behavior typical for office/home Wi-Fi.

Lossy

3–5% stable loss — ideal for testing FEC recovery efficiency.
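
For intuition, XOR-based FEC sends one parity packet per group: the parity is the XOR of all payloads, so a single lost packet can be rebuilt by XOR-ing the parity with everything that did arrive. A toy sketch of the idea (quic-test's real implementation is the C++ FEC module described under Architecture):

// Toy XOR-FEC sketch: one parity packet per group recovers one loss.
// Packets are assumed to be equal-length Uint8Arrays here.
function makeParity(packets) {
  const parity = new Uint8Array(packets[0].length);
  for (const p of packets) {
    for (let i = 0; i < p.length; i++) parity[i] ^= p[i];
  }
  return parity;
}

function recoverLost(receivedPackets, parity) {
  // XOR of the parity with all received packets yields the missing one.
  const missing = Uint8Array.from(parity);
  for (const p of receivedPackets) {
    for (let i = 0; i < p.length; i++) missing[i] ^= p[i];
  }
  return missing;
}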

High-latency

RTT 50–150 ms — typical interregional RU↔EU routes that we tested.

Usage:

./quic-test --mode=client --profile=mobile --compare-tcp --duration=60s

Custom profiles

./quic-test --mode=client --profile=custom \
  --rtt=100ms \
  --bandwidth=10mbps \
  --loss=1% \
  --jitter=20ms

Metrics & Grafana integration

Server with Prometheus:

./quic-test --mode=server --prometheus-port=9090

Available metrics

  • quic_rtt_ms
  • quic_jitter_ms
  • quic_loss_total
  • fec_recovered
  • tcp_goodput_mbps
  • quic_goodput_mbps
  • bbrv3_bandwidth_est
  • quic_datagram_rate
  • connection_drops
  • queue_delay_ms

We use these metrics in Grafana and in AI Routing Lab.

Research examples

Case 1: Mobile profile (5% loss)

Metric    Baseline QUIC   QUIC + FEC 10%   Gain
Goodput   3.628 Mbps      3.991 Mbps       +10%
Jitter    0.72 ms         0.72 ms          stable
RTT P50   51.25 ms        51.25 ms         stable

TCP CUBIC shows 4–6× degradation here.

Case 2: VPN tunnels with 10% loss

Metric            TCP       QUIC      QUIC + FEC 15%   Gain
Throughput        25 Mbps   45 Mbps   68 Mbps          +172%
Retransmissions   18,500    12,200    3,800            -79%
P99 RTT           450 ms    320 ms    210 ms           -53%

Other results

  • PoP↔PoP (Moscow—Frankfurt—Amsterdam): jitter <1 ms, connection time 9.20 ms
  • BBRv2 vs BBRv3 on satellite-like profiles: +16% throughput
  • Production profile: 9.258 Mbps goodput at 30 connections

More in docs/reports/.

Usage in universities

quic-test was originally designed as a teaching laboratory environment.

Available lab works

Lab #1: QUIC basics — RTT, jitter, handshake, 0-RTT, connection migration.
Lab #2: TCP vs QUIC — losses, HOL blocking, performance.
Lab #3: Losses & FEC — redundancy trade-offs.
Lab #4: BBRv3 vs CUBIC — congestion control comparison.
Lab #5: NAT traversal — ICE/STUN/TURN.
Lab #6: HTTP/3 performance — multiplexing vs HOL-blocked HTTP/2.

Materials are available in docs/labs/.

Architecture

Detailed scheme in docs/ARCHITECTURE.md. Summary:

Go core — transport, QUIC, measurements (quic-go v0.40, BBRv2 and experimental BBRv3).
Rust TUI — quic-bottom, real-time visualization.
C++ FEC module — AVX2 SIMD optimized, stable XOR-FEC, experimental RS-FEC.
Metrics — Prometheus, HDR histograms.
Network emulation — token bucket, delay queue, random drop.
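
As an aside, the token-bucket shaping mentioned above is simple to sketch. The project's actual emulation lives in the Go core; this only illustrates the technique:

// Token-bucket sketch: packets may pass while tokens remain; otherwise
// they are delayed or dropped, which caps the emulated bandwidth.
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.rate = ratePerSec; // tokens added per second
    this.capacity = burst;  // maximum bucket size
    this.tokens = burst;
    this.last = Date.now();
  }

  allow(cost = 1) {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.rate
    );
    this.last = now;
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false; // not enough tokens: delay or drop the packet
  }
}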

Project status

Stable — QUIC client/server, TCP vs QUIC comparison, profiles, Prometheus, TUI.
Experimental — BBRv3, RS-FEC, MASQUE CONNECT-IP, TCP-over-QUIC.
Planned — automatic plotting, eBPF latency inspector, mini-PoP container.

Why we opened the project

CloudBridge Research is an independent research center (ANO "Center for Network Technology Research and Development", founded in 2025).
Our goal is to create an open stack of tools for engineers and universities.
We believe that open research accelerates technological progress and makes it accessible to everyone.

Related projects

AI Routing Lab — uses quic-test metrics to train delay prediction models (>92% accuracy target).
masque-vpn — QUIC/MASQUE VPN load-tested with quic-test, including high-loss scenarios.

How to reproduce our results

All configs and commands are in the repo.

Production profile (0.1% loss, 20 ms RTT):

./quic-test --mode=server --listen=:4433 --prometheus-port=9090

./quic-test --mode=client \
  --server=<server-ip>:4433 \
  --connections=30 \
  --duration=60s \
  --congestion=bbrv3 \
  --profile=custom \
  --rtt=20ms \
  --loss=0.1%

Mobile profile (5% loss) with FEC:

./quic-test --mode=server \
  --listen=:4433 \
  --fec=true \
  --fec-redundancy=0.10

./quic-test --mode=client \
  --server=<server-ip>:4433 \
  --profile=mobile \
  --fec=true \
  --fec-redundancy=0.10 \
  --duration=60s

Other scenarios are described in scripts/ and docs/reports/.

Contributions & feedback

We welcome issues, PRs, test reports, feature proposals, and integrations into university courses.

GitHub: https://github.com/cloudbridge-research/quic-test
Email: [email protected]
Blog: https://cloudbridge-research.ru

Conclusion

If you need to honestly evaluate how QUIC, BBRv3, and FEC behave in real networks — from Wi-Fi to mobile to regional backbones — try quic-test.

All results are reproducible, all tools are open, and this is a living engineering project that evolves with the community.

Try it, reproduce our findings, share your results — together we make networks better.

Day 2 — Beginning My 40-Day AWS DevOps Journey

2025-12-12 03:05:00

Today was Day 2 of my 40-day AWS + DevOps challenge.
I focused on strengthening Linux fundamentals because every DevOps job requires fast command-line skills — especially on EC2.

Here’s what I practiced today:

File & Permissions

chmod – make scripts executable
chown – fix file ownership
df -h – check disk space
du -sh – check folder size

Processes & Services

ps aux – list running processes
top – live CPU/RAM
systemctl status/start/restart – manage services (nginx, docker)

Networking

curl – check if a service/website is responding
ss -tulnp – check open ports

Logs

/var/log/ – main log directory
tail -f – watch logs in real time
journalctl -u <service> – check service logs (Amazon Linux 2023)

Pipes

ps aux | grep nginx
tail -f /var/log/messages | grep error