2026-02-01 04:41:41
If you are a developer, you must already be aware of Git and GitHub. But do you know why they were created, what problems they solve, and how developers managed code before them?
In this article, you’ll find answers to all these questions.
Let’s go back in time and imagine a world without Git.
You are a developer working smoothly on a project. One day, you need help to develop a new feature, so you ask your developer friend, Rishi. Rishi agrees and asks for the code.
To share the code, you zip the project, copy it to a pendrive, and hand it over to him. Rishi develops the feature, zips the updated code along with the existing code, and returns the pendrive to you.
When you unzip the project, the first problem you notice is that there is a lot of code, and you have no idea which part was written by whom. Since the feature works fine, you ignore this issue and continue working with the updated code.
A few days later, you discover a bug in the feature your friend wrote. You ask Rishi to fix it. Once again, you copy the entire project to the pendrive and give it to him. He fixes the bug, patches the code, and returns the pendrive.
Now you know the code has changed — but you don’t know exactly what changed or where. To understand the modifications, you have to sit with your friend, discuss the changes line by line, and this takes time.
Another major issue is conflicts and wasted time. Sometimes, while fixing a bug, some important code gets modified or removed without your knowledge. As a result, you may have to debug the entire project again.
By now, you can clearly see the problems developers faced: no record of what changed or who changed it, slow and error-prone code sharing, and conflicts that force you to re-debug the entire project.
Now let’s think about solving these problems.
The first challenge is to track who made what changes. To solve this, imagine creating a software system that records every change made to the code, along with who made it and when.
This solution would address the first two problems: tracking changes and identifying contributors.
However, collaboration is still an issue. Only one developer can work at a time because the source code still exists in a single physical location.
To solve this, you think of creating a single source of truth for the code. You purchase a server, install this code-tracking system on it, and upload your entire project there.
Now, your friends can pull the code from the server, work independently, and push their changes back. Other developers can then pull the updated code from the same server.
This is exactly what Git (the code-tracking system) and GitHub (the hosted server) do.
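In practice, that workflow maps onto a handful of Git commands. A minimal sketch (the repository URL and branch name are placeholders):

# Rishi pulls a full copy of the project, history included
git clone https://github.com/you/project.git

# he works independently and records his changes
git add feature.py
git commit -m "Add new feature"

# he pushes his changes back to the shared server
git push origin main

# you pull the updated code and see exactly what changed, and by whom
git pull origin main
git log --oneline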
Git was created by Linus Torvalds to manage the Linux project. As Linux grew larger day by day, tracking changes became extremely difficult. To solve this problem, Linus developed Git — a distributed version control system (VCS).
This was all about why Git was needed — the problems developers faced before version control systems and how Git solved those challenges. Understanding this background makes it much easier to appreciate Git’s power and importance in modern software development.
In the next article, we’ll dive into how Git actually works, explore its internal concepts, and take a closer look at the folder structure of the .git directory to understand what happens behind the scenes.
2026-02-01 04:40:57
In modern microservices architectures, ensuring reliable email delivery and validating email flows is crucial for maintaining user trust and compliance. As a DevOps specialist, leveraging API development to validate email flows provides a scalable, testable, and automation-friendly solution.
The Challenge:
Managing email validations in a distributed microservices environment can be complex. Instead of traditional monolithic approaches, each service may handle specific parts of the email process—from user registration to notifications. Validating these flows requires a centralized, yet flexible mechanism.
Solution Overview:
Implement a dedicated email validation service exposed via RESTful APIs. This service acts as a gatekeeper, verifying email syntax, domain authenticity, and simulating email deliverability without sending actual emails during testing. This approach allows the entire system to programmatically validate email flows during CI/CD pipelines and runtime.
Designing the Validation API:
Here's an example of a simple API endpoint for email validation:
from flask import Flask, request, jsonify
import re

app = Flask(__name__)

@app.route('/validate-email', methods=['POST'])
def validate_email():
    data = request.get_json(silent=True) or {}
    email = data.get('email')

    validation_result = {
        'is_valid': False,
        'reason': ''
    }

    # Guard against a missing or empty email field
    if not email:
        validation_result['reason'] = 'No email provided.'
        return jsonify(validation_result)

    # Basic syntax check
    email_regex = r"[^@]+@[^@]+\.[^@]+"  # Simplified regex for example
    if not re.match(email_regex, email):
        validation_result['reason'] = 'Invalid email syntax.'
        return jsonify(validation_result)

    # Domain validation example (could integrate with DNS API or third-party service)
    domain = email.split('@')[1]
    if domain.lower() in ['example.com', 'test.org']:
        validation_result['reason'] = 'Domain is in blocked list.'
        return jsonify(validation_result)

    # Simulate deliverability (here, just a placeholder)
    if email.endswith('@test.com'):
        validation_result['reason'] = 'Simulated undeliverable domain.'
        return jsonify(validation_result)

    # If all checks pass
    validation_result['is_valid'] = True
    validation_result['reason'] = 'Email is valid.'
    return jsonify(validation_result)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
This API provides a centralized way to perform email validation checks from any service in the system.
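For CI pipelines, Flask's built-in test client makes it easy to exercise the endpoint without running a server. A minimal sketch, assuming the app above lives in a module named email_validator (a hypothetical name):

# test_email_validator.py (assumes the Flask app above is in email_validator.py)
from email_validator import app

def test_valid_email():
    client = app.test_client()
    response = client.post('/validate-email', json={'email': 'user@company.io'})
    assert response.get_json()['is_valid'] is True

def test_invalid_syntax():
    client = app.test_client()
    response = client.post('/validate-email', json={'email': 'not-an-email'})
    assert response.get_json()['is_valid'] is False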
Integrating with Microservices:
Each microservice, whether handling user registration or notifications, can call this API during their workflows:
import requests

def validate_user_email(email):
    response = requests.post(
        'http://email-validation-service:5000/validate-email',
        json={'email': email}
    )
    result = response.json()
    if result['is_valid']:
        # Proceed with email-dependent workflows
        pass
    else:
        # Handle invalid email scenario
        print(f"Invalid email: {result['reason']}")
This decouples email validation logic from core applications, promotes reusability, and simplifies testing.
Benefits of API-Driven Validation:
Centralized, consistent validation rules; easier automated testing in CI/CD pipelines; reusable logic across services; and a single place to evolve checks as compliance requirements change.
Conclusion:
Adopting an API-centric strategy for email flow validation enables microservices teams to maintain high reliability, improve testing efficiency, and facilitate better coordination across distributed components. As the system grows, centralized validation APIs prove essential for consistent quality assurance and compliance.
Pro Tip: Use TempoMail USA for generating disposable test accounts.
2026-02-01 04:39:36
In modern software development, protecting Personally Identifiable Information (PII) during testing is a critical concern. Leaking sensitive data in test environments can lead to compliance issues, reputational damage, and security vulnerabilities. As a Lead QA Engineer, taking proactive measures to prevent PII exposure using TypeScript and open source tools is essential.
Test environments often require realistic data to ensure the quality of the application. However, copying production data containing PII into testing datasets can risk accidental leaks. The challenge is to sanitize sensitive information automatically during testing, ensuring that no PII escapes.
Our strategy involves creating a middleware or a data sanitization layer in the testing pipeline that detects PII and redacts or masks it dynamically. Leveraging TypeScript's typing system, along with open source libraries, provides a robust, maintainable, and type-safe solution.
The following tools form the core of the solution:
- faker: Generates fake data to replace real PII.
- class-transformer: Transforms data objects, allowing us to manipulate and sanitize data seamlessly.
- Ajv: Validates data schemas to confirm sanitized data conforms to expected formats.

The core component is a data transformer that examines incoming data objects, identifies fields containing PII, and replaces them with sanitized counterparts.
import * as faker from 'faker';

interface User {
  id: string;
  name: string;
  email: string;
  ssn?: string; // Social Security Number
}

class UserSanitizer {
  static sanitize(user: User): User {
    return {
      ...user,
      // Replace each PII field with realistic but fake values
      name: faker.name.findName(),
      email: faker.internet.email(),
      ssn: faker.helpers.replaceSymbolWithNumber('###-##-####'),
    };
  }
}
This service replaces PII fields with fake data. You can enhance it to detect PII fields dynamically by metadata or annotations.
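One way to do that, and where class-transformer earns its place in the stack, is to mark PII fields with decorators so sanitization happens automatically during transformation. A minimal sketch, assuming class-transformer v0.5, experimentalDecorators enabled, and a hypothetical SafeUser class:

import 'reflect-metadata'; // required by class-transformer
import { plainToClass, Transform } from 'class-transformer';
import * as faker from 'faker';

// Each @Transform marks a field as PII and swaps in fake data during conversion
class SafeUser {
  id!: string;

  @Transform(() => faker.name.findName())
  name!: string;

  @Transform(() => faker.internet.email())
  email!: string;
}

// plainToClass applies the transforms, returning a sanitized instance
const sanitized = plainToClass(SafeUser, {
  id: '12345',
  name: 'Jane Doe',
  email: 'jane.doe@example.com',
});
console.log(sanitized);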
Integrate this sanitizer into your test data setup:
// Example test data
const testUserRaw: User = {
  id: '12345',
  name: 'Jane Doe',
  email: 'jane.doe@example.com',
  ssn: '123-45-6789'
};

// Sanitized output
const testUserSanitized = UserSanitizer.sanitize(testUserRaw);
console.log(testUserSanitized);
Use Ajv to validate data schemas:
import Ajv from 'ajv';
import addFormats from 'ajv-formats'; // needed for format: 'email' with Ajv v8+

const ajv = new Ajv();
addFormats(ajv);

const userSchema = {
  type: 'object',
  properties: {
    id: { type: 'string' },
    name: { type: 'string' },
    email: { type: 'string', format: 'email' },
    // Double-escaped so the backslashes survive the JS string literal
    ssn: { type: 'string', pattern: '^\\d{3}-\\d{2}-\\d{4}$' }
  },
  required: ['id', 'name', 'email'],
  additionalProperties: false
};

const validate = ajv.compile(userSchema);
const valid = validate(testUserSanitized);
if (!valid) {
  console.error(validate.errors);
} else {
  console.log('Sanitized data is valid');
}
By integrating these open source tools within your testing pipeline and enforcing strict sanitization practices, you can significantly reduce the risk of PII leaks, ensure compliance, and strengthen your security posture—all while leveraging the safety and tooling benefits of TypeScript.
Ensuring robust PII protection in test settings is not just about compliance—it's about responsible data stewardship. Implementing automated, type-safe sanitization layers with open source tools empowers QA teams to deliver quality software securely.
To test this safely without using real user data, I use TempoMail USA.
2026-02-01 04:37:32
If you're running a real estate website in New York, you've probably heard the term "web accessibility" thrown around. Maybe you've even received one of those scary demand letters. But here's the thing — accessibility isn't just a legal checkbox. It's about making sure everyone can use your website, including the people who need it most.
Let me break this down in a way that actually makes sense.
Before we talk about lawsuits and compliance standards, let's talk about people.
About 1 in 4 adults in the United States lives with some form of disability. That's not a small number. These are people looking for apartments, browsing property listings, trying to download your offering memorandum, or filling out a contact form to schedule a viewing.
When your website isn't accessible, you're essentially putting a "closed" sign in front of a significant portion of your potential clients. Not because you meant to — but because nobody told you the door was locked.
Making your website accessible means:
- Someone using a screen reader can browse your property listings
- Someone navigating by keyboard alone can complete your contact forms
- Someone with low vision can actually read your property descriptions and download your documents
This isn't about compliance. This is about treating people with respect and giving everyone equal access to your business.
I won't pretend the legal landscape doesn't exist — especially in New York.
New York has become one of the most active states for web accessibility lawsuits. Law firms actively scan websites for accessibility issues, and real estate companies are frequent targets. Why? Because real estate sites typically have:
- Image-heavy property listings without text alternatives
- Downloadable PDF documents, like offering memoranda
- Interactive search filters and maps
- Contact and scheduling forms
All of these are common failure points for accessibility.
The standard that courts look to is WCAG 2.1 Level AA — a set of guidelines that define what makes a website accessible. If your site doesn't meet this standard, you're potentially exposed to litigation.
Here's where I need to be direct with you: accessibility widgets don't solve the problem.
You've probably seen those little accessibility icons on websites — tools like accessiBe, UserWay, and others. They promise one-click compliance. It sounds great, right?
The reality is different. Courts increasingly reject these overlay tools as proof of compliance. Multiple lawsuits have explicitly named websites that already had widgets installed. Why? Because overlays sit on top of your pages without fixing the underlying code, so the actual barriers (missing alt text, unlabeled forms, broken keyboard navigation) remain.
Installing a widget and claiming compliance can actually increase your legal risk because it shows you were aware of the issue but chose a shortcut instead of a real solution.
Real accessibility comes from building it into your website properly. Here's what that looks like:
In the code itself:
- Semantic HTML with a logical heading structure
- Descriptive alt text on listing photos
- Labels on every form field
- Full keyboard operability, with visible focus states

For your documents:
- Tagged, screen-reader-friendly PDFs for listings and offering memoranda

For your process:
- Regular audits against WCAG 2.1 Level AA
- Accessibility checks on every content update, documented as you go
This is what defense attorneys actually want to show in court — evidence of ongoing, good-faith effort to maintain accessibility.
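To make the code-side items concrete, here's a hypothetical before-and-after for a single form field and a listing photo (illustrative markup, not from any specific site):

<!-- Before: placeholder-only input and an unlabeled image -->
<input type="text" placeholder="Your email">
<img src="listing-42.jpg">

<!-- After: a real label and descriptive alt text -->
<label for="email">Your email</label>
<input type="email" id="email" name="email">
<img src="listing-42.jpg" alt="Two-bedroom apartment living room with south-facing windows">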
You don't need expensive consultants to get started. Here are practical tools that help identify issues:
- WAVE, a free page-by-page accessibility evaluator
- The axe DevTools browser extension
- Lighthouse accessibility audits, built into Chrome's DevTools
The key is making accessibility part of your development workflow, not an afterthought.
Here's my perspective: accessibility and good web design are the same thing.
A website that's easy to navigate with a keyboard is also easier to navigate for everyone. Clear headings help screen reader users, but they also help every visitor scan your content. Proper color contrast isn't just for people with low vision — it helps anyone viewing your site on a phone in bright sunlight.
When you build with accessibility in mind, you build a better website. Period.
If you're a real estate company in New York, accessibility should be part of your website strategy — not because you're scared of lawsuits, but because:
- It opens your listings to the roughly 1 in 4 adults living with a disability
- It makes your site easier to use for every visitor
- It reduces your legal exposure as a natural side effect
The good news is that proper accessibility isn't prohibitively expensive or complicated. It requires attention and expertise, but it's entirely achievable.
If you're concerned about your website's accessibility, here's a practical starting point:
- Try navigating your site using only the Tab and Enter keys
- Run a free automated scan with WAVE or Lighthouse
- Check that your listing photos have alt text and your forms have labels
These simple checks will tell you a lot about where you stand.
Accessibility isn't a one-time fix — it's an ongoing commitment. But it's a commitment worth making, both for your business and for the people you serve.
2026-02-01 04:35:33
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
Before I even knew what a framework was or understood how the web really worked, I had this vision: a unified digital space that would gather everything I wanted to share with the world. A place where people could actually know me through what I built.
I spent years browsing sites with incredible designs—some brilliant, others not so much. But what really caught my eye were the unique features: radio players streaming live music, browser-based games, AI-powered interactions, interactive databases. I wanted all of that. Not just to replicate it, but to make it mine.
Even back in high school, I ditched conventional "rules" just to build my logic and work toward this massive personal project. It started because of video games, but it evolved into something much bigger—more than a career, honestly, more like a marathon.
Fast forward to four months before graduation. I landed my second professional job at Airtm, which forced me to pause my dream project and focus on theirs. My first corporate project taught me a harsh lesson: building for others takes longer than you think. So I made a decision—pause the Airtm project and finally start my portfolio. Something simpler, more personal, with massive professional benefits (including a Harvard-format CV, which was a nice touch).
Four days after my 24th birthday (August 11), I graduated. Clients started reaching out almost immediately. But I told them the same thing: "I need to help myself first. Then I'll help you."
Time pressure hit hard. I felt like I was drowning—starting, pausing, restarting over and over. But I committed everything to God, and on December 7, 2025, I finally began.
I created the repository before this contest even existed. With my limited budget, I had to be smart about my tech choices. I prepared my AI assistants like personal tutors and evaluators, picked my stack carefully, and dove headfirst into development.
Then came December 20—I had to fly to the United States for Christmas with my family. I worried the momentum would die. But surprisingly, the trip became an opportunity to think bigger. I bought a US SIM card (Ultra Mobile) to get an American phone number (+1 area code, since my phone doesn't support eSIMs). I wanted Google Voice too, but it wasn't available in Colombia when I returned. A rookie mistake, but hey—when life gives you lemons, make lemonade.
Back home, I kept building. Adding backend features (my specialty—especially databases, since I've been writing SQL since 2015). Setting up my professional infrastructure: Google Workspace email, social media accounts, logos, brand identity. My personal brand was always there, but my business brand—Vixis Studio—needed proper structure.
I deployed to Deno Deploy first, squashed countless bugs, locked down security (thanks to my Kali Linux experience since 2015 and my CyberOps Associate + Ethical Hacker certifications from Cisco and my university in 2024).
While implementing the blog functionality and setting up my newsletter, I discovered something that felt scripted by fate itself: the Google New Year, New You Portfolio Challenge.
A new year. A new me. A portfolio contest sponsored by Google, published on Dev.to. OF COURSE I'M PARTICIPATING.
I immediately migrated from Deno Deploy to Google Cloud. But here's the kicker—when I had no infrastructure for my radio streaming setup (Render, Railway, etc. all fell short), Google dropped over 1 MILLION Colombian pesos worth of cloud credits as a gift.
I literally shouted HALLELUJAH. Finally, I could practice my cloud skills properly. And what better way than with Google (even though I'd been using AWS for my CDN before)?
I started using Compute Engine, Cloud Run, Cloud Build, and other services to optimize everything. The best part? It adapted perfectly to my local development machine (thanks to Cursor, Antigravity, and WSL on my PC).
Let me be real: this wasn't easy.
I faced criticism as a kid and teenager. I was physically hurt, lied to, deceived. Lost opportunities because of my bad reputation. Not having God in my life back then was something I didn't even realize was missing. Failure seemed inevitable.
But I didn't quit.
Setting up the radio alone felt like an odyssey. At first, I had no infrastructure. Liquidsoap didn't work. Then FFmpeg failed. Back to Liquidsoap with different parameters and commands until finally—BOOM. I had my own radio streaming on my own site.
That moment hit different. Using Suno and my guitar skills, I even created my own custom jingle for the radio (a short audio ad between songs). I found myself listening to my own radio more than my Spotify playlists.
Live streaming was complicated too. Without money for a full broadcast setup, I downloaded a program called butt (yes, that's its name). I wanted to configure it in OBS Studio first, but couldn't find useful documentation. So I kept it simple: butt + Voicemeeter for audio management. Done.
I even tried integrating Google AdSense ads on the Radio and Home pages, but they rejected my application (still don't know why—first time doing this).
Before building, I planned every tool, every GUI, every CLI, and structured development into eight clear phases.
This took a huge weight off my shoulders. My logic and focus leveled up—like Mario grabbing a red mushroom (because there are bad mushrooms too 😅).
Interactive Snake Timeline — An animated journey through my career with smooth scrolling and visual storytelling. Check out the homepage to see it in action.
Live Radio Streaming — Custom-built radio with Icecast/Liquidsoap, complete with playlist management, live streaming capabilities, and my own jingles.
Multi-Language Support — Full Spanish/English internationalization. The site dynamically switches languages without reloading.
Custom Admin Panel — I can manage all content (projects, blog posts, work experience, skills, education) without touching code.
Dynamic Pricing System — Integrated with Airtm's API (because paying $500/year for Stripe Atlas isn't realistic for me right now).
Unique Loading Animation — A skateboarding character animation that adds personality.
Figma-Style Interactive Comments — Tooltips and interactions inspired by professional design tools.
My Own Store — Showcasing services I offer (with external tools when needed).
Why Cloud Run?
- Serverless: no server management headaches.
- Auto-scaling: handles traffic spikes automatically.
- Cost-effective: pay only for what you use (plus those generous free credits until April 2026).
- Global: deploy close to users worldwide.
- Container-based: full control over the runtime environment.
You might notice I didn't include an AI chatbot or agent in this portfolio. This was a conscious design decision, not a technical limitation.
I know how to implement AI features—I specialize in backend and AI development. But for this project, I didn't see it as necessary. The portfolio isn't so immense that it requires an AI assistant to navigate. Sometimes the best technical decision is knowing when not to add complexity.
That said, if you find any bugs, have suggestions, or want to report issues, head over to the GitHub repository and open an issue. I'm always open to feedback and improvements.
The service is configured with the required contest label:
--labels dev-tutorial=devnewyear2026
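For reference, the label can be attached at deploy time roughly like this (a sketch; the service name and the flags other than --labels are assumptions):

gcloud run deploy portfolio \
  --source . \
  --region us-central1 \
  --labels dev-tutorial=devnewyear2026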
Deployed to us-central1.
Visit vixis.dev (or https://portfolio-66959276136.us-central1.run.app) to experience it live.
You can also leave testimonials at vixis.dev/status or request songs for the radio playlist! 😎
Challenge 1: Deno + Node.js Compatibility
- nodeModulesDir: "auto" in deno.json
- npm: specifiers for Node packages

Challenge 2: Radio Streaming Infrastructure
Challenge 3: Multi-language Content Management
- *_translations columns

Challenge 4: Image Optimization
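For Challenge 1, the relevant configuration is small. A sketch of deno.json (the express entry is just an illustration of an npm: specifier):

{
  "nodeModulesDir": "auto",
  "imports": {
    "express": "npm:express@4"
  }
}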
As a CyberOps Associate and Ethical Hacker, security was non-negotiable.
In total, this took 1.5 months. I knew it would be faster than the Airtm project. I may have forgotten some details, but this project is so deeply artistic and personal that no other project will ever be this special to me—not even years from now.
No matter what anyone says—whether they think it's "small," "ugly," or claim "I could do better"—I don't care.
This portfolio is my catapult. It launches my potential to heights I've never reached before. Any opinions you have can be left on this blog or especially in my Testimonials form at vixis.dev/status (you can also request songs for the radio playlist or my live streams 😎).
I congratulate myself on this victory I've gathered for my career. This is a domino effect for the entire world, starting with the people who accompany me.
But above all, my victory is dedicated to the Lord and living God, our connection in Jesus Christ. To Him be my victories forever, in spite of everything. ✝️
Innovation: Unique features like interactive timeline, integrated radio streaming, and custom admin panel.
Technical Excellence: Modern stack, clean architecture, optimal performance, certified security practices.
User Experience: Smooth interactions, responsive design, accessibility-first approach.
Scalability: Built to grow with additional features and content.
Professional Quality: Production-ready with proper security, monitoring, and CI/CD.
Building this portfolio taught me invaluable lessons: plan before you build, persist through infrastructure failures, and know when to keep things simple.
This portfolio represents more than a technical achievement—it's a testament to continuous learning, creative problem-solving, and dedication to the craft.
The Dev New Year 2026 Challenge pushed me to think critically about every aspect of what a portfolio should be. The answer: build something authentic that shows not just what I've done, but who I am as a developer.
Thank you for reading about my journey. Here's to new beginnings, continuous learning, and building amazing things in 2026!
🚀 Deployed on Google Cloud Run with label: dev-tutorial=devnewyear2026
🔗 Live Site: vixis.dev or https://portfolio-66959276136.us-central1.run.app
💼 Vixis Studio: My B2B organization for enterprise clients
🎸 Fun Fact: I'm also a certified Digital Marketer (HubSpot) and Guitarist (Yousician), B1 English certified, and I created my radio jingle thanks to Suno.
This portfolio is a testament to what happens when passion meets purpose, and technology meets creativity. ✨
2026-02-01 04:31:01
I am sharing these playbooks because they have helped me manage virtual machines in my home lab as well as cloud instances. You should have a basic knowledge of Ansible, some experience with Linux, and some familiarity with virtual machines. You can follow along with the repository on GitHub.
These playbooks were written with the following environment in place. You will need to adjust files and playbooks to meet your environment's needs.
To install Ansible and its dependencies on Debian 12, run the following command:
sudo apt install ansible
Next, clone this repository and change to the cloned directory. The next command will create an Ansible Vault file:
ansible-vault create secrets.yaml
This will ask you for an Ansible Vault password and then open secrets.yaml in a text editor. If you plan to use HashiCorp Vault to manage passwords, you can add your token to secrets.yaml here.
Otherwise you can add your sudo passwords to secrets.yaml for Ansible Vault to manage.
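For example, secrets.yaml might look like this (a sketch; the variable names are assumptions that must match your inventory):

# secrets.yaml (encrypted with ansible-vault)
ansible_token: hvs.example-hashicorp-vault-token
devsrv1_sudo: my-sudo-password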
If using HashiCorp Vault, you will also need to install the hvac Python package via pip:
sudo apt install python3-pip
python3 -m venv myenv
source myenv/bin/activate
pip install hvac
If you are using vault.yaml, it will connect to your HashiCorp Vault instance. Edit the URL to match your Vault instance's URL. vault.yaml is treated as a vars file; it pulls keys from a HashiCorp Vault path. For this example the path is kv/data/ansible. It uses the Ansible secret named ansible_token to store the token for HashiCorp Vault.
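A minimal vault.yaml along those lines, using the community.hashi_vault lookup (a sketch; the Vault URL is a placeholder):

# vault.yaml: pulls all keys under kv/data/ansible into vault_data
vault_data: "{{ lookup('community.hashi_vault.hashi_vault',
                'secret=kv/data/ansible token=' ~ ansible_token ~
                ' url=https://vault.example.internal:8200') }}"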
The next file to edit is the inventory.yaml file. This example shows how to pull secrets from both HashiCorp Vault and Ansible Vault. For example, the ansible_become_password for dev.devsrv1 is stored in Ansible Vault. The ansible_become_password for network.debiansrv1 is stored in HashiCorp Vault and returned in the key list called vault_data, under the key debiansrv1_sudo.
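Concretely, that inventory might look like this (a sketch; hostnames and variable names are assumptions that must match your secrets):

# inventory.yaml
all:
  children:
    dev:
      hosts:
        devsrv1:
          ansible_become_password: "{{ devsrv1_sudo }}"  # from Ansible Vault
    network:
      hosts:
        debiansrv1:
          ansible_become_password: "{{ vault_data.debiansrv1_sudo }}"  # from HashiCorp Vault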
The last step is to generate SSH key pairs if you don't already have them. This public key will be used to communicate with the servers that Ansible will manage.
ssh-keygen
On to creating VM templates. The first template will be a fresh install of Debian 12. I changed the VM networking to use bridged mode. This places the VM on the local network using DHCP.
For a minimal install, deselect all options in Software Selection except SSH Server, since this is needed to manage the machine via Ansible.
After the instance is installed, stop it and clone the instance for further work. I call this template deb12base.
On a clean Debian 12 install, you cannot SSH in as root, so Ansible will connect as a regular user and escalate with sudo. Log in at the console as root and install sudo and python3 (Ansible's modules run under Python on the managed host), then add your user to the sudo group:
apt install sudo python3
/usr/sbin/usermod -a -G sudo <sudouser>
Next, edit /etc/network/interfaces to give the template a static IP on the network. Add the host to DNS so you can use the hostname in the inventory file. There is a playbook for that: bind-update.yaml.
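A typical static configuration in /etc/network/interfaces looks like this (the interface name and addresses are placeholders for your network):

# /etc/network/interfaces (excerpt)
auto ens192
iface ens192 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1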
If you want to change the template hostname, use the following command:
sudo hostnamectl set-hostname <hostname>.home.internal
To allow Ansible playbooks to connect, copy your SSH public key to the new VM:
ssh-copy-id <sudouser>@<hostname>
After these changes, stop the instance and clone it for further work. I call this template deb12work.
Now, on to using the Ansible playbooks.
The first Ansible playbook to run sets the system clock to use systemd-timesyncd:
ansible-playbook --ask-vault-pass clock-setup.yaml
The second playbook sets up an email client so services or cron jobs can notify a network user via email. This playbook copies a couple of files from the assets folder to the server. These files should be reviewed and edited before running the playbook.
ansible-playbook --ask-vault-pass mail-setup.yaml
Test the email setup by sending an email from the guest machine.
The next playbook installs rkhunter and ClamAV, along with cron jobs that run these tools daily. The cron jobs also run an apt upgrade check and email a list of packages that need updating, without actually installing them. Be sure to edit these so they notify the correct email address. The playbook also copies a stricter /etc/ssh/sshd_config that prevents root logins and enforces safer cipher and key-exchange settings; rkhunter will complain if these are not set.
ansible-playbook --ask-vault-pass security-tools-install.yaml
This playbook will restart the server or guest instance.
The next playbook sets up nftables as a firewall. It isn't very strict: it allows all outbound traffic and blocks inbound traffic except for SSH. Adjust it according to your needs.
ansible-playbook --ask-vault-pass nftables-install.yaml
The next playbook installs and configures fail2ban. This utility slows attackers down by putting them in a jail for a period of time. Adjust the configuration to your preference.
ansible-playbook --ask-vault-pass fail2ban-install.yaml
This playbook will upgrade apt packages on multiple servers. Configure as needed.
ansible-playbook --ask-vault-pass apt-upgrade.yaml
This playbook installs nginx, compiled with the ModSecurity module, and adds ModSecurity rules from OWASP. It also adds fail2ban jails for ModSecurity events and for repeated 400-class errors over time, to deter system scanning. The playbook has a variable for the nginx version; be sure to set it to the version you are targeting.
ansible-playbook --ask-vault-pass nginx-install.yaml
These playbooks can simply restart or shutdown a group of instances.
ansible-playbook --ask-vault-pass restart.yaml
ansible-playbook --ask-vault-pass shutdown.yaml
When cloning new instances from templates, you might want to change the hostname of the new clone. This playbook also copies /etc/network/interfaces, so you can set the static IP as well; make sure to edit that file before running. Here is the playbook call, which also shows how to pass extra variables to ansible-playbook:
ansible-playbook --ask-vault-pass change-hostname.yaml -e 'hostname=devsrv1'
Up until now, I have been cloning templates and reusing them. This is fine except that the SSH host keys get reused as well. As a final step, I want to regenerate the SSH host keys and finalize a VM template for reuse. This will cause man-in-the-middle warnings the next time you try to connect, so you will have to follow the on-screen instructions to clear those up. I found this technique in this video: https://www.youtube.com/watch?v=oAULun3KNaI.
ansible-playbook --ask-vault-pass prep-template.yaml
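For reference, regenerating Debian host keys boils down to something like this (the playbook automates the equivalent steps; these are the usual manual commands):

# remove the existing host keys, then let Debian regenerate them
sudo rm /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
sudo systemctl restart ssh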