
Why Git Exists: The Pendrive Problem

2026-02-01 04:41:41

If you are a developer, you are probably already familiar with Git and GitHub. But do you know:

  • Why was Git created?
  • What problems existed before Git?
  • How was coding managed before version control systems?

In this article, you’ll find answers to all these questions.

How Was the World of Coding Before Git?

Let’s go back in time and imagine a world without Git.

You are a developer working smoothly on a project. One day, you need help to develop a new feature, so you ask your developer friend, Rishi. Rishi agrees and asks for the code.

To share the code, you zip the project, copy it to a pendrive, and hand it over to him. Rishi develops the feature, zips the updated code along with the existing code, and returns the pendrive to you.

When you unzip the project, the first problem you notice is that there is a lot of code, and you have no idea which part was written by whom. Since the feature works fine, you ignore this issue and continue working with the updated code.

A few days later, you discover a bug in the feature your friend wrote. You ask Rishi to fix it. Once again, you copy the entire project to the pendrive and give it to him. He fixes the bug, patches the code, and returns the pendrive.

Now you know the code has changed — but you don’t know exactly what changed or where. To understand the modifications, you have to sit down with your friend and walk through the changes line by line, which takes time.

Another major issue is conflicts and wasted time. Sometimes, while fixing a bug, some important code gets modified or removed without your knowledge. As a result, you may have to debug the entire project again.

Pen drive problem

Problems Faced Before Git

By now, you can clearly see the problems developers faced:

  • Code Sharing: Every time a feature needed to be developed or a bug needed fixing, the code had to be zipped and shared via a pendrive repeatedly.
  • Difficult to Track Code Changes: You had no visibility into what changes your friend made or which files were modified.
  • Hard to Collaborate: If multiple developers needed to work on the same project, tracking changes became almost impossible. Also, only one person could work at a time — the one who had the pendrive.

Thinking About a Solution

Now let’s think about solving these problems.


The first challenge is to track who made what changes. To solve this, imagine creating a software system that:

  • Tracks code changes
  • Stores the author of each change
  • Shows differences between old and new code
  • Maintains version history

This solution would address the first two problems: tracking changes and identifying contributors.
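
To make this concrete, here is a toy sketch in Python of the bookkeeping such a system needs: for every change, record the author, a timestamp, and a diff against the previous version. Git does far more than this, but the core record-keeping idea is the same.

import difflib
from datetime import datetime

history = []  # each entry records who changed the code, when, and what changed

def commit(author, new_code):
    old_code = history[-1]['code'] if history else ''
    diff = list(difflib.unified_diff(
        old_code.splitlines(), new_code.splitlines(), lineterm=''))
    history.append({'author': author, 'time': datetime.now(),
                    'code': new_code, 'diff': diff})

commit('You', 'def greet():\n    print("hello")')
commit('Rishi', 'def greet(name):\n    print(f"hello {name}")')

# "Who changed what, and when?" no longer needs a line-by-line meeting
for entry in history:
    print(entry['author'], entry['time'])
    print('\n'.join(entry['diff']))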

Birth of version control system

However, collaboration is still an issue. Only one developer can work at a time because the source code still exists in a single physical location.

To solve this, you think of creating a single source of truth for the code. You purchase a server, install this code-tracking system on it, and upload your entire project there.

Now, your friends can pull the code from the server, work independently, and push their changes back. Other developers can then pull the updated code from the same server.

Git as a distributed system

This is exactly what Git (the code-tracking system) and GitHub (the hosted server) do.

The Birth of Git

Git was created by Linus Torvalds in 2005 to manage development of the Linux kernel. As the project grew larger day by day, tracking changes became extremely difficult. To solve this problem, Linus developed Git — a distributed version control system (VCS).

Alternatives to Git

  • AWS CodeCommit
  • Apache Subversion
  • Unity Version Control
  • Fossil
  • Concurrent Versions System (CVS)

Alternatives to GitHub

  • Bitbucket
  • GitLab
  • Gitea
  • Codeberg

This was all about why Git was needed — the problems developers faced before version control systems and how Git solved those challenges. Understanding this background makes it much easier to appreciate Git’s power and importance in modern software development.

In the next article, we’ll dive into how Git actually works, explore its internal concepts, and take a closer look at the folder structure of the .git directory to understand what happens behind the scenes.

Streamlining Email Flow Validation in Microservices with API-Driven Approaches

2026-02-01 04:40:57

In modern microservices architectures, ensuring reliable email delivery and validating email flows is crucial for maintaining user trust and compliance. For a DevOps specialist, an API-driven approach to validating email flows provides a scalable, testable, and automation-friendly solution.

The Challenge:
Managing email validations in a distributed microservices environment can be complex. Instead of traditional monolithic approaches, each service may handle specific parts of the email process—from user registration to notifications. Validating these flows requires a centralized, yet flexible mechanism.

Solution Overview:
Implement a dedicated email validation service exposed via RESTful APIs. This service acts as a gatekeeper: it verifies email syntax and domain authenticity, and simulates deliverability without sending actual emails during testing. This approach lets the entire system programmatically validate email flows in CI/CD pipelines and at runtime.

Designing the Validation API:
Here's an example of a simple API endpoint for email validation:

from flask import Flask, request, jsonify
import re

app = Flask(__name__)

@app.route('/validate-email', methods=['POST'])
def validate_email():
    data = request.get_json(silent=True) or {}
    email = data.get('email')
    validation_result = {
        'is_valid': False,
        'reason': ''
    }

    # Guard against a missing or non-string email before running regex checks
    if not isinstance(email, str) or not email:
        validation_result['reason'] = 'Missing email field.'
        return jsonify(validation_result), 400

    # Basic syntax check
    email_regex = r"[^@]+@[^@]+\.[^@]+"  # Simplified regex for example
    if not re.match(email_regex, email):
        validation_result['reason'] = 'Invalid email syntax.'
        return jsonify(validation_result)

    # Domain validation example (could integrate with DNS API or third-party service)
    domain = email.split('@')[1]
    if domain.lower() in ['example.com', 'test.org']:
        validation_result['reason'] = 'Domain is in blocked list.'
        return jsonify(validation_result)

    # Simulate deliverability (here, just a placeholder)
    if email.endswith('@test.com'):
        validation_result['reason'] = 'Simulated undeliverable domain.'
        return jsonify(validation_result)

    # If all checks pass
    validation_result['is_valid'] = True
    validation_result['reason'] = 'Email is valid.'
    return jsonify(validation_result)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

This API gives every service a single, consistent endpoint for email validation checks.
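
For the domain check noted in the code, a lightweight next step is a DNS MX lookup. Here is a minimal sketch using the dnspython package (an extra dependency, not part of the service above); a domain that publishes no MX records is very unlikely to accept mail:

import dns.resolver
import dns.exception

def domain_accepts_mail(domain: str) -> bool:
    # A domain that publishes MX records can, in principle, receive email
    try:
        answers = dns.resolver.resolve(domain, 'MX', lifetime=3.0)
        return len(answers) > 0
    except dns.exception.DNSException:
        # Covers non-existent domains, empty answers, and timeouts
        return False

A helper like this could slot in right after the blocked-domain check, keeping the endpoint free of any real email sending.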

Integrating with Microservices:
Each microservice, whether handling user registration or notifications, can call this API during their workflows:

import requests

def validate_user_email(email):
    # A short timeout keeps a slow or unreachable validation service from blocking the caller
    response = requests.post(
        'http://email-validation-service:5000/validate-email',
        json={'email': email},
        timeout=5,
    )
    result = response.json()
    if result['is_valid']:
        # Proceed with email-dependent workflows
        pass
    else:
        # Handle invalid email scenario
        print(f"Invalid email: {result['reason']}")

This decouples email validation logic from core applications, promotes reusability, and simplifies testing.

Benefits of API-Driven Validation:

  • Scalability: Easy to integrate across multiple services.
  • Testability: Can perform validation in testing environments without sending real emails.
  • Automation: Integrate with CI/CD pipelines to catch issues early (see the test sketch below).
  • Extensibility: Easily incorporate additional checks or third-party verifications.
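
As a sketch of that CI/CD integration, the endpoint can be exercised with pytest and Flask's built-in test client. The import assumes the service above lives in a module named app (a hypothetical name; adjust it to your project layout):

import pytest
from app import app  # hypothetical module name for the validation service above

@pytest.fixture
def client():
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client

def test_rejects_invalid_syntax(client):
    resp = client.post('/validate-email', json={'email': 'not-an-email'})
    assert resp.get_json()['is_valid'] is False

def test_accepts_well_formed_email(client):
    resp = client.post('/validate-email', json={'email': 'user@realmail.dev'})
    assert resp.get_json()['is_valid'] is True

Run in the pipeline, tests like these catch regressions in the validation rules before they reach production.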

Conclusion:
Adopting an API-centric strategy for email flow validation enables microservices teams to maintain high reliability, improve testing efficiency, and facilitate better coordination across distributed components. As the system grows, centralized validation APIs prove essential for consistent quality assurance and compliance.


Securing Test Environments: How a Lead QA Engineer Mitigates PII Leaks with TypeScript and Open Source Tools

2026-02-01 04:39:36


In modern software development, protecting Personally Identifiable Information (PII) during testing is a critical concern. Leaking sensitive data in test environments can lead to compliance issues, reputational damage, and security vulnerabilities. As a Lead QA Engineer, taking proactive measures to prevent PII exposure using TypeScript and open source tools is essential.

Understanding the Challenge

Test environments often require realistic data to ensure the quality of the application. However, copying production data containing PII into testing datasets can risk accidental leaks. The challenge is to sanitize sensitive information automatically during testing, ensuring that no PII escapes.

Approach Overview

Our strategy involves creating a middleware or a data sanitization layer in the testing pipeline that detects PII and redacts or masks it dynamically. Leveraging TypeScript's typing system, along with open source libraries, provides a robust, maintainable, and type-safe solution.

Selecting Open Source Tools

The following tools form the core of the solution:

  • TypeScript: Ensures type safety and code clarity.
  • faker: Generates fake data to replace real PII.
  • class-transformer: Transforms data objects, allowing us to manipulate and sanitize data seamlessly.
  • Ajv: Validates data schemas to confirm sanitized data conforms to expected formats.

Implementation Details

The core component is a data transformer that examines incoming data objects, identifies fields containing PII, and replaces them with sanitized counterparts.

Step 1: Define Data Models with TypeScript Interfaces

interface User {
  id: string;
  name: string;
  email: string;
  ssn?: string; // Social Security Number
}

Step 2: Create a Sanitization Service

import * as faker from 'faker';

class UserSanitizer {
  static sanitize(user: User): User {
    // Replace PII fields with generated stand-ins; other fields pass through unchanged
    return {
      ...user,
      name: faker.name.findName(),
      email: faker.internet.email(),
      ssn: faker.helpers.replaceSymbolWithNumber('###-##-####'),
    };
  }
}

This service replaces PII fields with fake data. You can enhance it to detect PII fields dynamically by metadata or annotations.

Step 3: Automate Data Sanitization in Testing

Integrate this sanitizer into your test data setup:

// Example test data
const testUserRaw: User = {
  id: '12345',
  name: 'Jane Doe',
  email: 'jane.doe@example.com',
  ssn: '123-45-6789'
};

// Sanitized output
const testUserSanitized = UserSanitizer.sanitize(testUserRaw);
console.log(testUserSanitized);

Step 4: Validation of Sanitized Data

Use Ajv to validate data schemas:

import Ajv from 'ajv';

const ajv = new Ajv();
const userSchema = {
  type: 'object',
  properties: {
    id: { type: 'string' },
    name: { type: 'string' },
    email: { type: 'string', format: 'email' },
    ssn: { type: 'string', pattern: '^\\d{3}-\\d{2}-\\d{4}$' }
  },
  required: ['id', 'name', 'email'],
  additionalProperties: false
};

const validate = ajv.compile(userSchema);
const valid = validate(testUserSanitized);
if (!valid) {
  console.error(validate.errors);
} else {
  console.log('Sanitized data is valid');
}

Best Practices and Final Notes

  • Automate sanitization to run before data reaches test environments.
  • Limit PII exposure in logs and error reports.
  • Continuously update sanitization rules as data models evolve.
  • Combine with role-based access controls to further protect sensitive data.

By integrating these open source tools within your testing pipeline and enforcing strict sanitization practices, you can significantly reduce the risk of PII leaks, ensure compliance, and strengthen your security posture—all while leveraging the safety and tooling benefits of TypeScript.

Ensuring robust PII protection in test settings is not just about compliance—it's about responsible data stewardship. Implementing automated, type-safe sanitization layers with open source tools empowers QA teams to deliver quality software securely.


Why Web Accessibility Matters (And Why It's Not Just About Avoiding Lawsuits)

2026-02-01 04:37:32

If you're running a real estate website in New York, you've probably heard the term "web accessibility" thrown around. Maybe you've even received one of those scary demand letters. But here's the thing — accessibility isn't just a legal checkbox. It's about making sure everyone can use your website, including the people who need it most.

Let me break this down in a way that actually makes sense.

The Real Reason Accessibility Matters

Before we talk about lawsuits and compliance standards, let's talk about people.

About 1 in 4 adults in the United States lives with some form of disability. That's not a small number. These are people looking for apartments, browsing property listings, trying to download your offering memorandum, or filling out a contact form to schedule a viewing.

When your website isn't accessible, you're essentially putting a "closed" sign in front of a significant portion of your potential clients. Not because you meant to — but because nobody told you the door was locked.

Making your website accessible means:

  • A person using a screen reader can navigate your property listings
  • Someone with limited mobility can use your site with just a keyboard
  • A visitor with low vision can actually read your content
  • Anyone can download and read your PDFs with assistive technology

This isn't about compliance. This is about treating people with respect and giving everyone equal access to your business.

Okay, But Let's Talk About the Legal Side Too

I won't pretend the legal landscape doesn't exist — especially in New York.

New York has become one of the most active states for web accessibility lawsuits. Law firms actively scan websites for accessibility issues, and real estate companies are frequent targets. Why? Because real estate sites typically have:

  • PDF documents (offering memorandums, brochures, lease agreements)
  • Contact forms
  • Property search features
  • Image-heavy listings

All of these are common failure points for accessibility.

The standard that courts look to is WCAG 2.1 Level AA — a set of guidelines that define what makes a website accessible. If your site doesn't meet this standard, you're potentially exposed to litigation.

What Actually Works (And What Doesn't)

Here's where I need to be direct with you: accessibility widgets don't solve the problem.

You've probably seen those little accessibility icons on websites — tools like accessiBe, UserWay, and others. They promise one-click compliance. It sounds great, right?

The reality is different. Courts increasingly reject these overlay tools as proof of compliance. Multiple lawsuits have explicitly named websites that already had widgets installed. Why?

  • Overlay tools don't fix the underlying code problems
  • Screen reader users often disable or avoid these overlays because they interfere with their assistive technology
  • They create a false sense of security

Installing a widget and claiming compliance can actually increase your legal risk because it shows you were aware of the issue but chose a shortcut instead of a real solution.

What Actually Reduces Risk

Real accessibility comes from building it into your website properly. Here's what that looks like:

In the code itself:

  • Proper heading structure (H1, H2, H3 in logical order)
  • Alt text on every meaningful image
  • Keyboard navigation that works throughout the site
  • Forms with proper labels and error messages
  • Sufficient color contrast
  • Focus indicators for interactive elements

For your documents:

  • PDFs that are properly tagged and readable by screen readers
  • Accessible forms that can be filled out with assistive technology

For your process:

  • An accessibility statement on your website with a contact method for reporting issues
  • Regular audits (at least annually, and after any major redesign)
  • Documentation of your accessibility efforts
  • A remediation process when issues are reported

This is what defense attorneys actually want to show in court — evidence of ongoing, good-faith effort to maintain accessibility.

The Tools We Use

You don't need expensive consultants to get started. Here are practical tools that help identify issues:

  • axe DevTools — browser extension that catches many WCAG violations
  • WAVE — visual feedback about accessibility issues on your page
  • Lighthouse — built into Chrome, gives you a basic accessibility score
  • NVDA / VoiceOver — actual screen readers for manual testing

The key is making accessibility part of your development workflow, not an afterthought.

A Better Way to Think About This

Here's my perspective: accessibility and good web design are the same thing.

A website that's easy to navigate with a keyboard is also easier to navigate for everyone. Clear headings help screen reader users, but they also help every visitor scan your content. Proper color contrast isn't just for people with low vision — it helps anyone viewing your site on a phone in bright sunlight.

When you build with accessibility in mind, you build a better website. Period.

What This Means for Your Business

If you're a real estate company in New York, accessibility should be part of your website strategy — not because you're scared of lawsuits, but because:

  1. It opens your business to more potential clients
  2. It demonstrates that you care about all members of your community
  3. It protects you legally (yes, this matters too)
  4. It results in a better website for everyone

The good news is that proper accessibility isn't prohibitively expensive or complicated. It requires attention and expertise, but it's entirely achievable.

Moving Forward

If you're concerned about your website's accessibility, here's a practical starting point:

  1. Run your site through an automated tool like WAVE or axe
  2. Try navigating your site using only your keyboard
  3. Check if your PDFs can be read by a screen reader
  4. Review your forms for proper labeling
  5. Look at your color contrast ratios (the math behind the ratio is sketched below)

These simple checks will tell you a lot about where you stand.
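
For the contrast check in particular, the math behind WCAG's ratio is simple enough to script yourself. Here is a minimal sketch in Python of the WCAG 2.1 relative luminance formula for 8-bit sRGB colors:

def relative_luminance(rgb):
    # Linearize each sRGB channel, then apply the WCAG luminance weights
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Mid-gray text on white scores about 4.48, just under the 4.5:1 AA minimum for normal text
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))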

Accessibility isn't a one-time fix — it's an ongoing commitment. But it's a commitment worth making, both for your business and for the people you serve.

From Childhood Dream to Cloud Run: Building My Portfolio That Took a Decade to Imagine

2026-02-01 04:35:33

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

The Dream Before the Code

Before I even knew what a framework was or understood how the web really worked, I had this vision: a unified digital space that would gather everything I wanted to share with the world. A place where people could actually know me through what I built.

I spent years browsing sites with incredible designs—some brilliant, others not so much. But what really caught my eye were the unique features: radio players streaming live music, browser-based games, AI-powered interactions, interactive databases. I wanted all of that. Not just to replicate it, but to make it mine.

Even back in high school, I ditched conventional "rules" just to build my logic and work toward this massive personal project. It started because of video games, but it evolved into something much bigger—more than a career, honestly, more like a marathon.

The Long Road: Jobs, Delays, and Divine Timing

Fast forward to four months before graduation. I landed my second professional job at Airtm, which forced me to pause my dream project and focus on theirs. My first corporate project taught me a harsh lesson: building for others takes longer than you think. So I made a decision—pause the Airtm project and finally start my portfolio. Something simpler, more personal, with massive professional benefits (including a Harvard-format CV, which was a nice touch).

Four days after my 24th birthday (August 11), I graduated. Clients started reaching out almost immediately. But I told them the same thing: "I need to help myself first. Then I'll help you."

Time pressure hit hard. I felt like I was drowning—starting, pausing, restarting over and over. But I committed everything to God, and on December 7, 2025, I finally began.

The Build: Chaos, Creativity, and a Google-Sponsored Miracle

I created the repository before this contest even existed. With my limited budget, I had to be smart about my tech choices. I prepared my AI assistants like personal tutors and evaluators, picked my stack carefully, and dove headfirst into development.

Then came December 20—I had to fly to the United States for Christmas with my family. I worried the momentum would die. But surprisingly, the trip became an opportunity to think bigger. I bought a US SIM card (Ultra Mobile) to get an American phone number with the +1 country code (a physical SIM, since my phone doesn't support eSIMs). I wanted Google Voice too, but it wasn't available in Colombia when I returned. A rookie mistake, but hey—when life gives you lemons, make lemonade.

Back home, I kept building. Adding backend features (my specialty—especially databases, since I've been writing SQL since 2015). Setting up my professional infrastructure: Google Workspace email, social media accounts, logos, brand identity. My personal brand was always there, but my business brand—Vixis Studio—needed proper structure.

I deployed to Deno Deploy first, squashed countless bugs, locked down security (thanks to my Kali Linux experience since 2015 and my CyberOps Associate + Ethical Hacker certifications from Cisco and my university in 2024).

Then Came the Turning Point

While implementing the blog functionality and setting up my newsletter, I discovered something that felt scripted by fate itself: the Google New Year, New You Portfolio Challenge.

A new year. A new me. A portfolio contest sponsored by Google, published on Dev.to. OF COURSE I'M PARTICIPATING.

I immediately migrated from Deno Deploy to Google Cloud. But here's the kicker—when I had no infrastructure for my radio streaming setup (Render, Railway, etc. all fell short), Google dropped over 1 MILLION Colombian pesos worth of cloud credits as a gift.

I literally shouted HALLELUJAH. Finally, I could practice my cloud skills properly. And what better way than with Google (even though I'd been using AWS for my CDN before)?

I started using Compute Engine, Cloud Run, Cloud Build, and other services to optimize everything. The best part? It adapted perfectly to my local development machine (thanks to Cursor, Antigravity, and WSL on my PC).

The Struggles: When Dreams Meet Reality

Let me be real: this wasn't easy.

I faced criticism as a kid and teenager. I was physically hurt, lied to, deceived. Lost opportunities because of my bad reputation. Not having God in my life back then was something I didn't even realize was missing. Failure seemed inevitable.

But I didn't quit.

Setting up the radio alone felt like an odyssey. At first, I had no infrastructure. Liquidsoap didn't work. Then FFmpeg failed. Back to Liquidsoap with different parameters and commands until finally—BOOM. I had my own radio streaming on my own site.

That moment hit different. Using Suno and my guitar skills, I even created my own custom jingle for the radio (a short audio ad between songs). I found myself listening to my own radio more than my Spotify playlists.

Live streaming was complicated too. Without money for a full broadcast setup, I downloaded a program called butt (yes, that's its name). I wanted to configure it in OBS Studio first, but couldn't find useful documentation. So I kept it simple: butt + Voicemeeter for audio management. Done.

I even tried integrating Google AdSense ads on the Radio and Home pages, but they rejected my application (still don't know why—first time doing this).

The Architecture: Phases of Creation

Before building, I planned every tool, every GUI, every CLI. I structured development into 8 clear phases:

  1. Phase 1: Initialize — Set up the foundation
  2. Phase 2: Tailwind CSS + Routing — Design system and navigation
  3. Phase 3: Frontend — Build the interface
  4. Phase 4: Backend + Database + Pricing + CDN + Radio — Core functionality
  5. Phase 5: Cybersecurity — Lock it down
  6. Phase 6: Deployment + CI/CD — Ship it
  7. Phase 7: Automation + AI — Smart workflows
  8. Phase 8: Monitoring + Logs + Maintenance — Keep it alive

This took a huge weight off my shoulders. My logic and focus leveled up—like Mario grabbing a red mushroom (because there are bad mushrooms too 😅).

What Makes This Portfolio Different

🎯 Core Features

Interactive Snake Timeline — An animated journey through my career with smooth scrolling and visual storytelling. Check out the homepage to see it in action.

Live Radio Streaming — Custom-built radio with Icecast/Liquidsoap, complete with playlist management, live streaming capabilities, and my own jingles.

Multi-Language Support — Full Spanish/English internationalization. The site dynamically switches languages without reloading.

Custom Admin Panel — I can manage all content (projects, blog posts, work experience, skills, education) without touching code.

Dynamic Pricing System — Integrated with Airtm's API (because paying $500/year for Stripe Atlas isn't realistic for me right now).

Unique Loading Animation — A skateboarding character animation that adds personality.

Figma-Style Interactive Comments — Tooltips and interactions inspired by professional design tools.

My Own Store — Showcasing services I offer (with external tools when needed).

🛠️ Tech Stack

  • Frontend: React 18, TypeScript, Vite
  • Runtime: Deno (modern, secure, fast)
  • Styling: Tailwind CSS
  • Animations: Framer Motion, GSAP
  • Backend: Supabase (PostgreSQL, Auth, Storage, Real-time)
  • Deployment: Google Cloud Run, Docker, Cloud Build
  • CDN: Custom CDN setup (cdn.vixis.dev)
  • Radio Streaming: Icecast, Liquidsoap, FFmpeg
  • CI/CD: Automated with Cloud Build
  • Security: CyberOps-certified configuration with CSP headers, secure environment variables

⚡ Why Google Cloud Run?

Serverless. No server management headaches.

Auto-scaling. Handles traffic spikes automatically.

Cost-effective. Pay only for what you use (plus those generous free credits until April 2026).

Global. Deploy close to users worldwide.

Container-based. Full control over the runtime environment.

🤖 A Note on AI Integration

You might notice I didn't include an AI chatbot or agent in this portfolio. This was a conscious design decision, not a technical limitation.

I know how to implement AI features—I specialize in backend and AI development. But for this project, I didn't see it as necessary. The portfolio isn't so immense that it requires an AI assistant to navigate. Sometimes the best technical decision is knowing when not to add complexity.

That said, if you find any bugs, have suggestions, or want to report issues, head over to the GitHub repository and open an issue. I'm always open to feedback and improvements.

🏗️ Deployment Configuration

The service is configured with the required contest label:

--labels dev-tutorial=devnewyear2026

Deployed to us-central1 with:

  • 512Mi memory
  • 1 vCPU
  • Auto-scaling (0 minimum, 10 maximum instances)
  • Port 8080
  • Public access

Portfolio

Radio Vixis in Portfolio

Visit vixis.dev (or https://portfolio-66959276136.us-central1.run.app) to experience:

  • The interactive timeline
  • Live radio streaming
  • Multi-language support
  • All my projects and work

You can also leave testimonials at vixis.dev/status or request songs for the radio playlist! 😎

How I Built It

The Development Journey

Total development time: 1.5 months. Much faster than my previous Airtm project that took several months.

Technical Challenges Solved

Challenge 1: Deno + Node.js Compatibility

  • Configured nodeModulesDir: "auto" in deno.json
  • Used npm: specifiers for Node packages
  • Built custom Vite plugin for module resolution

Challenge 2: Radio Streaming Infrastructure

  • Started with Liquidsoap (failed)
  • Tried FFmpeg (failed)
  • Back to Liquidsoap with proper configuration (SUCCESS)
  • Added butt + Voicemeeter for live streaming

Challenge 3: Multi-language Content Management

  • Centralized i18n configuration
  • Database schema with JSONB *_translations columns
  • Runtime language switching with React context

Challenge 4: Image Optimization

  • CDN integration (cdn.vixis.dev)
  • WebP format with automatic conversion
  • Lazy loading with Intersection Observer
  • Responsive images with srcset

Security Implementation

As a CyberOps Associate and Ethical Hacker, security was non-negotiable:

  • Content Security Policy (CSP) headers
  • X-Frame-Options, X-Content-Type-Options
  • HTTPS only
  • Supabase-managed authentication
  • Environment variables for all sensitive data

What I'm Most Proud Of

The Personal Victory

In total, this took 1.5 months. I knew it would be faster than the Airtm project. I may have forgotten some details, but this project is so deeply artistic and personal that no other project will ever be this special to me—not even years from now.

No matter what anyone says—whether they think it's "small," "ugly," or claim "I could do better"—I don't care.

This portfolio is my catapult. It launches my potential to heights I've never reached before. Any opinions you have can be left on this blog or especially in my Testimonials form at vixis.dev/status (you can also request songs for the radio playlist or my live streams 😎).

The Domino Effect

I congratulate myself on this victory I've gathered for my career. This is a domino effect for the entire world, starting with the people who accompany me.

But above all, my victory is dedicated to the Lord and living God, our connection in Jesus Christ. To Him be my victories forever, in spite of everything. ✝️

Why This Portfolio Deserves to Win

Innovation: Unique features like interactive timeline, integrated radio streaming, and custom admin panel.

Technical Excellence: Modern stack, clean architecture, optimal performance, certified security practices.

User Experience: Smooth interactions, responsive design, accessibility-first approach.

Scalability: Built to grow with additional features and content.

Professional Quality: Production-ready with proper security, monitoring, and CI/CD.

Lessons Learned

Building this portfolio taught me invaluable lessons:

  1. Start with Planning — Clear architectural decisions prevent technical debt
  2. Prioritize UX — Technical excellence means nothing if users struggle
  3. Embrace Modern Tools — New technologies can significantly improve developer experience
  4. Iterate and Improve — Launch MVP first, then enhance based on feedback
  5. Document Everything — Good documentation saves time long-term
  6. Security First — Build security from the start, not as an afterthought
  7. Performance Matters — Every millisecond counts for user experience
  8. Prioritize Low Cloud Costs — Don't run configurations that lead to surprise high costs
  9. Building a Radio Station — Maybe not like established ones, but I finally learned how to create a fully online, personalized radio
  10. Automation + AI — Automating repetitive processes (like social media outreach) led me to tools like n8n, Make, Manychat, Chatwoot, and many others

The Road Ahead

I've faced criticism, been hurt, lost opportunities due to a bad reputation. Not having God in my life made failure seem inevitable.

But I didn't give up.

Even when mounting the radio felt like an odyssey—no infrastructure, Liquidsoap failing, FFmpeg not working—I kept pushing until I finally had something I deeply appreciate: my own radio on my own site.

Creating my custom jingle with Suno and my guitar skills filled me with so much emotion that I now listen to my radio more than my own Spotify playlists.

Final Thoughts

This portfolio represents more than a technical achievement—it's a testament to continuous learning, creative problem-solving, and dedication to the craft.

The Dev New Year 2026 Challenge pushed me to think critically about every aspect:

  • How do I stand out in a competitive field?
  • What makes a portfolio memorable?
  • How can I demonstrate both creativity and technical skill?

The answer: Build something authentic that shows not just what I've done, but who I am as a developer.

Thank you for reading about my journey. Here's to new beginnings, continuous learning, and building amazing things in 2026!

🚀 Deployed on Google Cloud Run with label: dev-tutorial=devnewyear2026

🔗 Live Site: vixis.dev or https://portfolio-66959276136.us-central1.run.app

💼 Vixis Studio: My B2B organization for enterprise clients

🎸 Fun Fact: I'm also a certified Digital Marketer (HubSpot) and Guitarist (Yousician), B1 English certified, and I created my radio jingle thanks to Suno.

This portfolio is a testament to what happens when passion meets purpose, and technology meets creativity.

Ansible playbooks for the Linux minimalist

2026-02-01 04:31:01

Overview

I am sharing these playbooks because they have helped me manage virtual machines in my home lab and cloud instances. Users should have a basic knowledge of Ansible, some experience with Linux, and some familiarity with virtual machines. You can follow along with the repository on GitHub.

Environment

These playbooks were written with the following environment in place. You will need to adjust files and playbooks to meet your environment's needs.

  • Local network of 10.20.26.x
  • Working Bind/DNS service managing the home.internal domain
  • Working email service for the above domain
  • Working HashiCorp Vault instance to manage secrets
  • Debian 12 workstation to run the Ansible playbooks

Ansible Setup

To install Ansible and its dependencies on Debian 12, run the following command:

sudo apt install ansible

Next, clone this repository and change to the cloned directory. The next command will create an Ansible Vault file:

ansible-vault create secrets.yaml

This will ask you for an Ansible Vault password and then open secrets.yaml in a text editor. If you plan to use HashiCorp Vault to manage passwords, you can add your token to secrets.yaml here.

Text editing of secrets.yaml

Otherwise you can add your sudo passwords to secrets.yaml for Ansible Vault to manage.

If using HashiCorp Vault, you will also need to install the hvac Python package via pip:

sudo apt install python3-pip
python3 -m venv myenv
source myenv/bin/activate
pip install hvac

If you are using vault.yaml, it will connect to your HashiCorp Vault instance. Edit the URL to match your Vault instance's URL. vault.yaml is treated as a vars file; it pulls keys from a HashiCorp Vault path. For this example the path is kv/data/ansible. It uses the Ansible secret named ansible_token to store the token for HashiCorp Vault.
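
Under the hood, hvac makes this lookup a short exercise. Here is a minimal sketch of roughly what the vault.yaml lookup amounts to, assuming a KV version 2 engine mounted at kv; the URL and token are placeholders for your own instance:

import hvac

# Placeholder URL and token; use your Vault address and the ansible_token value
client = hvac.Client(url='https://vault.home.internal:8200', token='hvs.XXXXXXXX')

# Read the secret at kv/data/ansible (the KV v2 API adds the data/ segment itself)
secret = client.secrets.kv.v2.read_secret_version(path='ansible', mount_point='kv')
vault_data = secret['data']['data']
print(list(vault_data))  # e.g. per-host sudo password keys such as debiansrv1_sudo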

The next file to edit is the inventory.yaml file. This example shows how to pull secrets from both HashiCorp Vault and Ansible Vault. For example, the ansible_become_password for dev.devsrv1 is stored in Ansible Vault. The ansible_become_password for network.debiansrv1 is stored in HashiCorp Vault and returned in the key list called vault_data, under the key debiansrv1_sudo.

Inventory.yaml snapshot

The last step is to generate an SSH key pair if you don't already have one. The public key will be used to communicate with the servers that Ansible will manage.

ssh-keygen

Minimal Debian VM Setup

On to creating VM templates. The first template will be a fresh install of Debian 12. I changed the VM networking to use bridged mode. This places the VM on the local network using DHCP.

For a minimal install, deselect all options in Software Selection except SSH Server, since this is needed to manage the machine via Ansible.

Debian Software selection screen

After the instance is installed, stop it and clone the instance for further work. I call this template deb12base.

Preparing cloned template for Ansible

On a clean Debian 12 install, you cannot SSH in as root, so Ansible will connect as a regular user and escalate privileges with sudo. Both sudo and python3 (which Ansible modules require on managed hosts) need to be installed. Run the following as root:

apt install sudo python3
/usr/sbin/usermod -a -G sudo <sudouser>

Next, edit /etc/network/interfaces to give the template a static IP on the network. Add the host to DNS so you can use the hostname in the inventory file. There is a playbook for that: bind-update.yaml.

If you want to change the template hostname, use the following command:

sudo hostnamectl set-hostname <hostname>.home.internal

To allow Ansible playbooks to connect, copy your SSH public key to the new VM:

ssh-copy-id <sudouser>@<hostname>

After these changes, stop the instance and clone it for further work. I call this template deb12work.

Ansible Playbook usage

Now, on to using the Ansible playbooks.

The first Ansible playbook to run sets the system clock to use systemd-timesyncd:

ansible-playbook --ask-vault-pass clock-setup.yaml

The second playbook sets up an email client so services or cron jobs can notify a network user via email. This playbook copies a couple of files from the assets folder to the server. These files should be reviewed and edited before running the playbook.

ansible-playbook --ask-vault-pass mail-setup.yaml

Test the email setup by sending an email from the guest machine.

The next playbook installs rkhunter and ClamAV, along with cron jobs that run these tools daily. The cron jobs also run an apt upgrade check and email a list of packages that need upgrading, without actually installing them. Be sure to edit these so they notify the correct email address. The playbook will also copy a stricter /etc/ssh/sshd_config that prevents root logins and enforces safer encryption settings; rkhunter will complain if these are not set.

ansible-playbook --ask-vault-pass security-tools-install.yaml

This playbook will restart the server or guest instance.

The next playbook sets up nftables as a firewall. It isn't very strict, since it allows all outbound traffic, but it blocks inbound traffic except for SSH. Adjust it according to your needs.

ansible-playbook --ask-vault-pass nftables-install.yaml

The next playbook installs and configures fail2ban. This utility slows down attackers by putting their IP addresses in a jail for a period of time. Adjust the configuration to your preference.

ansible-playbook --ask-vault-pass fail2ban-install.yaml

Apt Upgrades

This playbook will upgrade apt packages on multiple servers. Configure as needed.

ansible-playbook --ask-vault-pass apt-upgrade.yaml

Install Nginx Server

This playbook will install nginx and compile it with ModSecurity, adding the ModSecurity rule set from OWASP. It will also add fail2ban jails for ModSecurity events and for repeated 400-level errors over time, to discourage system scanning. The playbook has a variable for the nginx version; be sure to set it to the version you are targeting.

ansible-playbook --ask-vault-pass nginx-install.yaml

Shutdown and Restart

These playbooks simply restart or shut down a group of instances.

ansible-playbook --ask-vault-pass restart.yaml
ansible-playbook --ask-vault-pass shutdown.yaml

Change Hostname

When cloning new instances from templates, you might want to change the hostname of the new clone. This playbook will also copy /etc/network/interfaces so you can set the static IP; make sure to edit that file before running. The call below also shows how to pass extra variables to ansible-playbook on the command line:

ansible-playbook --ask-vault-pass change-hostname.yaml -e 'hostname=devsrv1'

Finalizing templates by regenerating host keys

Up until now, I have been cloning templates and reusing them, which is fine except that the SSH host keys have been reused as well. As a final step, I want to regenerate the SSH host keys and finalize a VM template for re-use. Regenerating the keys will cause man-in-the-middle warnings the next time you connect, so you will have to follow the instructions to clear those up. I found this technique in this video: https://www.youtube.com/watch?v=oAULun3KNaI.

ansible-playbook --ask-vault-pass prep-template.yaml