The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Hand-Crafted Creative Counter-Culture against Toxic Digitization

2025-12-01 18:02:11

Let me assure you, I am no luddite. I use electricity, computers, the internet, and I develop websites as a professional web developer. I use AI when it makes sense, I use Google and StackOverflow, GitHub and digital notes online.

However, many fellow developers, customers, and users seem to fall for the misunderstood promises of the current tech and AI hype, failing to recognize marketing patterns, hype cycles, and psychological biases.

Constructive criticism like 8 Alternatives to AI for Coding and Creativity, Google Alternatives, and StackOverflow alternatives for web developers already shows that falling back on yesterday's tools is no solution to today's problems.

Deceptive patterns are a thing: misguided KPIs and online marketing insights reinforce bad practice. Treating the time users spend interacting with a website as a positive signal only serves ad publishers and content creators who promote quantity as consistency. A clear, focused experience that lets people reach their goals quickly falls below the radar and gets shadow-banned for not creating enough engagement, and this cycle keeps boosting bad UI and pushing poor search results to the top of page one, so users spend more time browsing results while seeing more paid ads. Part of generative AI's success is that we'd rather read a flattering slop post than wade through another SEO blog bloated with ad banners, or risk StackOverflow gatekeepers marking our carefully crafted question for deletion.

A Healthy Digital Information Retrieval Pyramid

We could compare third-party online services to buying groceries. Maybe we have some home-grown specialties or a small neighborhood garden. We could go to a local food market, fill our cart in a supermarket, or order something delivered to our door. Food quality and sustainability differ a lot. A single avocado that is shipped, stored, packaged, and delivered has a higher carbon footprint than one grown in a local greenhouse. An organic fruit bowl with overnight oats has better nutritional value than a toast sandwich that has been wrapped in plastic since leaving a factory oven a week earlier.

Sketch of a triangular pyramid shape with a kind of explosion on top: AI, Google search, Ecosia search, bookmarked websites, local-first

Offline-first may be a new trend: rediscovering autonomy with local language models, localhost development servers, and regional data centers independent of centralized cloud services like Cloudflare, whose recent outage took down a large part of today's mainstream internet. Offline-first is like a sunny kitchen window or a small greenhouse in grandma's garden: nice and useful, but it doesn't scale well. Still, that aligns with the IndieWeb POSSE principle: start at your own place, and keep and grow a local core of business and content resources that you control completely.

Offline First: Local Resources

Local resources also include my own analytical thinking and creativity! Before turning to AI or web search, I can ask myself whether I'm looking at the problem from a helpful angle, whether I have already checked typical points of failure, read error message details and log files, and used local code analysis like linters, autocompletion suggestions, or even a local language model. Local resources also include pen and paper: standing up and drawing a sketch of an information flow, design detail, or software architecture can help a lot! Likewise, sketching and painting can inspire and empower humans much more than crafting the right prompt to make an AI generate a picture.

Any third-party online service is a commodity that may or may not be available at a given moment. I wouldn't go so far as to keep a local copy of every website and its current IP address. But offline documentation and notes on paper won't hurt, and I even buy or borrow a printed book occasionally. I keep a link list of websites, formerly known as bookmarks, so I don't need to google or waste Perplexity tokens to land on an authoritative page like Mozilla's MDN web development reference.

Bookmarks, Ecosia, Google, AI

I use Ecosia as my default search engine to save energy and improve privacy. Only when the search results don't seem helpful, or when prior experience suggests they won't be (as with specific coding error messages), do I use Google instead. Within the search results, I favor well-known sites like MDN, DEV, or StackOverflow (which still doesn't have the best internal search). When the search results don't help, or when the problem seems too complex to phrase as a short sentence that a classic search engine might understand, I turn to AI. Search engines sometimes run an AI-mode query anyway; otherwise, I prefer Perplexity when I want to check information sources, or Claude when it comes to coding.

Creative Surprises and Intuition

Creativity often comes with surprise and intuition! When sketching my pen-and-paper version of an information retrieval pyramid, the diagonal walls didn't quite connect, leaving the top open. That made me think of an erupting volcano as an apt analogy for AI's verbosity and energy consumption.

Gateway Endpoints vs Interface Endpoints: What’s the Difference?

2025-12-01 17:51:05

AWS provides several ways to keep your workloads connected without exposing them to the public internet. One of the most useful tools for this is the VPC Endpoint, which enables private access from your VPC to AWS services over the AWS internal network. There are two main types: Gateway Endpoints and Interface Endpoints. Each endpoint type serves a different purpose, so choosing the right one matters for security, performance, and cost.

What VPC Endpoints Actually Do

A VPC Endpoint creates a private path so your resources can reach specific AWS services without requiring:

  • Public IPs
  • Internet Gateways
  • NAT Gateways
  • Direct internet routing

Both endpoint types enable private connectivity, but they operate differently inside the VPC.

Gateway Endpoints (for S3 and DynamoDB)

Gateway Endpoints are the simpler option. They work by adding routes to your route tables so that traffic to S3 or DynamoDB stays within AWS.

Key characteristics

  • Attached to route tables — not subnets or resources
  • No hourly cost
  • Supports only S3 and DynamoDB
  • Scales automatically with no bandwidth limits
  • Works at the subnet level through routing

This makes Gateway Endpoints ideal for workloads that frequently interact with S3 or DynamoDB and need a predictable, low-cost way to stay private.

Example use case

A private application uploading logs to S3 can use a Gateway Endpoint to avoid NAT Gateway charges and keep all traffic on the internal AWS network.
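This route-based setup can be sketched with boto3. Note this only builds the request parameters; the VPC and route table IDs are placeholders, and the actual API call is shown in a comment since it requires AWS credentials:

```python
# Sketch: creating a Gateway Endpoint for S3 with boto3.
# The IDs below (vpc-..., rtb-...) are placeholders, not real resources.

def gateway_endpoint_params(vpc_id, route_table_ids, region="us-east-1"):
    """Build the request parameters for ec2.create_vpc_endpoint (Gateway type)."""
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",  # the S3 service name
        "VpcEndpointType": "Gateway",
        # Gateway Endpoints attach to route tables, not subnets:
        "RouteTableIds": route_table_ids,
    }

params = gateway_endpoint_params("vpc-0abc1234", ["rtb-0def5678"])
print(params["ServiceName"])  # com.amazonaws.us-east-1.s3

# With AWS credentials configured, you would then run:
#   import boto3
#   boto3.client("ec2").create_vpc_endpoint(**params)
```

Once created, AWS inserts a route for the service's prefix list into each listed route table, so S3-bound traffic from those subnets never leaves the AWS network.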

Interface Endpoints (AWS PrivateLink)

Interface Endpoints work differently. Instead of modifying routes, they create Elastic Network Interfaces (ENIs) in your subnets. These ENIs act as private entry points for AWS services using PrivateLink.

Important traits

  • Creates ENIs with private IP addresses
  • Supports many AWS services (SSM, Secrets Manager, ECR, KMS, CloudWatch, etc.)
  • Charges per hour and per GB processed
  • Uses Security Groups for traffic filtering
  • Provides fine-grained, resource-level control

This makes Interface Endpoints ideal when you need controlled access to a wide range of AWS services.

Example use case

An EC2 instance retrieving secrets from AWS Secrets Manager through an Interface Endpoint, with Security Groups enforcing access restrictions.
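The same boto3 call covers this case with a different endpoint type. Again, the subnet and security-group IDs are placeholders and only the request parameters are built here:

```python
# Sketch: Interface Endpoint (PrivateLink) for Secrets Manager.
# Subnet and security-group IDs are placeholders.

def interface_endpoint_params(vpc_id, subnet_ids, sg_ids, region="us-east-1"):
    """Build the request parameters for ec2.create_vpc_endpoint (Interface type)."""
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.secretsmanager",
        "VpcEndpointType": "Interface",
        "SubnetIds": subnet_ids,        # one ENI is created per subnet/AZ
        "SecurityGroupIds": sg_ids,     # filter traffic reaching the ENIs
        "PrivateDnsEnabled": True,      # resolve the normal service hostname privately
    }

params = interface_endpoint_params(
    "vpc-0abc1234", ["subnet-0aaa", "subnet-0bbb"], ["sg-0ccc"]
)
```

Because each subnet gets its own ENI, the Security Groups in `sg_ids` become the choke point where access restrictions are enforced.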

How They Work Together

Both endpoint types enable private access, but through different mechanisms:

  • Gateway Endpoints use route tables to redirect S3/DynamoDB traffic.
  • Interface Endpoints expose AWS services as private IPs through ENIs.

In practice:

  • Use Gateway Endpoints for large, cost-sensitive workloads that rely heavily on S3 or DynamoDB.
  • Use Interface Endpoints when you need granular control or must access services beyond S3 and DynamoDB.

Choosing the Right Type

Use Gateway Endpoints when:

  • You only need S3 or DynamoDB
  • You want zero hourly cost
  • You need high throughput
  • You prefer subnet-wide behavior

Use Interface Endpoints when:

  • You need access to services like SSM, ECR, KMS, CloudWatch, or Secrets Manager
  • You want Security Group filtering
  • You need strict network isolation or compliance
  • You use PrivateLink for cross-VPC or third-party connectivity

Practical Examples

Private subnet accessing S3

  • Use Gateway Endpoint
  • Result: no internet exposure, no NAT cost

EC2 accessing Secrets Manager

  • Use Interface Endpoint
  • Result: controlled access through Security Groups

Microservices across VPCs

  • Use Interface Endpoint + PrivateLink
  • Result: no internet or VPC peering required

Fully isolated environment with no internet

  • Use Gateway Endpoint for S3
  • Result: workloads remain isolated but functional

Operational Notes

Gateway Endpoints

  • Very little maintenance
  • No Security Groups to configure
  • Easy to troubleshoot
  • Ideal for high-volume S3/DynamoDB traffic

Interface Endpoints

  • Requires correct Security Group configuration
  • Adds cost per AZ and per GB
  • DNS overrides may affect applications
  • Creates multiple ENIs, increasing resource management
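The per-AZ and per-GB charges lend themselves to a quick back-of-the-envelope model. The rates below are assumed round numbers for illustration only; check current AWS PrivateLink pricing for your region:

```python
# Back-of-the-envelope cost model for an Interface Endpoint.
# Both rates are ASSUMED placeholders, not actual AWS prices.

HOURLY_PER_AZ = 0.01   # USD per endpoint ENI per hour (assumed)
PER_GB = 0.01          # USD per GB processed (assumed)

def monthly_interface_cost(azs, gb_per_month, hours=730):
    """Estimate the monthly cost of one Interface Endpoint."""
    return azs * hours * HOURLY_PER_AZ + gb_per_month * PER_GB

# Two AZs, 500 GB/month:
cost = monthly_interface_cost(azs=2, gb_per_month=500)
print(round(cost, 2))  # about 19.6 USD/month under these assumed rates

# A Gateway Endpoint carrying the same S3 traffic would cost 0.
```

The takeaway: the hourly charge accrues per AZ whether or not traffic flows, which is why a fleet of rarely used Interface Endpoints deserves cost monitoring.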

Tips for Working with VPC Endpoints

  • Use Gateway Endpoints whenever possible for S3 and DynamoDB
  • Keep SG rules simple for Interface Endpoints
  • Monitor the cost of multiple Interface Endpoints
  • Enable Private DNS for easier service access
  • Use clear naming conventions for all endpoints

Conclusion

Gateway Endpoints and Interface Endpoints both enable private access to AWS services, but they operate differently. Gateway Endpoints offer a simple, free, route-based option for S3 and DynamoDB, while Interface Endpoints provide ENI-based, security-controlled access to a wide range of AWS services.

The Most Underrated Developer Tool Isn't GitHub Copilot, It's The Sun

2025-12-01 17:50:53

We are obsessed with our tools. We debate VS Code vs. Cursor, we optimize our dotfiles, and we buy $300 mechanical keyboards to type 5% faster. But what if the biggest bottleneck to your code quality isn't your IDE, but your biology?

I used to be that guy. In my years at several major internet companies, my "productivity stack" was simple: Caffeine, Dark Mode, and Noise-Canceling Headphones. I treated my body like a container for my brain—a container that was annoying because it needed sleep and food.

I wore my "cave dweller" status like a badge of honor. But looking back, I wasn't optimizing for output; I was optimizing for burnout.

Developer working in a dark room with code on screen

The "Efficiency" Lie We Tell Ourselves

There's a pervasive lie in the tech industry: Time at desk = Output.

We believe that if we step away from the screen, we are losing ground. But as I transitioned from a big tech PM to an independent developer, I realized something counter-intuitive: My best code was never written when I was forcing it.

It was written after a walk. It was written after I sat on my balcony for 15 minutes doing absolutely nothing.

When we lock ourselves in dark rooms, we disrupt our circadian rhythms. We confuse our cortisol and melatonin cycles. The result? "Brain fog." And what do we do? We drink more coffee to fight the fog, creating a loop of anxiety and crash. We are debugging code while running on a corrupted operating system.

Sunlight streaming through a green forest

Debugging Your Biology

I started treating my sunlight exposure with the same rigor I treat my git commits. I realized that sunlight is not just "nice to have"; it is the signal that synchronizes our internal clock.

But here's the problem: As a developer, it's easy to lose track of time. You dive into a bug at 10 AM, and suddenly it's 4 PM and you haven't seen the sky.

I needed a tool. Not another productivity tracker that yells at me to "work harder," but a gentle nudge to "be human."

Why I Built SunshinePal

I didn't build SunshinePal to gamify nature. I built it because I needed a visualizer for my "nature deficit."

SunshinePal uses the Apple Watch to passively track how much time I spend in daylight. It’s my "biological debugger." Now, when I see my sunlight ring is low, I don't feel guilty. I just know: "Okay, Micky, your hardware is overheating. Go outside."

The ROI of "Doing Nothing"

Since I started prioritizing my 30-60 minutes of daily sunlight, my code quality has gone up. My anxiety (that "tight chest" feeling I lived with for years) has gone down.

We have been scammed into believing that "grinding" is the only way to succeed. But the most sustainable growth comes from rhythm, not intensity.

**Code is logic, but you are biology.** You cannot cheat the system forever.

Silhouette of person meditating at sunset
Ready to debug your circadian rhythm? Learn more about SunshinePal or download it from the App Store.

How AI is Transforming Cloud Engineering: Practical Use Cases for 2025

2025-12-01 17:49:40

Introduction – Why AI and Cloud Matter in 2025

Have you ever wondered what happens behind the scenes when you upload a file, stream a video, or use an app that just works? By 2025, cloud technology is no longer just about storage or computation. It’s becoming intelligent, thanks to AI.

From my experience helping organizations migrate workloads, optimize infrastructure, and deploy applications, one thing is clear: AI is no longer an optional add-on. It is becoming the backbone of modern cloud engineering. In this article, I’ll walk you through how AI is transforming cloud engineering, share real-world examples, and give practical tips you can implement today.

Why AI + Cloud Is Exploding in 2025

Cloud adoption continues to grow, with more enterprises embedding AI into core operations. Some key reasons include:

  • AI-enabled cloud services help scale business operations, manage costs, and accelerate innovation.
  • Companies are increasingly relying on AI to automate cloud infrastructure, optimize performance, and improve security.
  • AI adoption in cloud environments allows organizations to handle complex workloads with agility and efficiency.

The moment is now: cloud is mainstream, and AI is being deeply integrated into its core functions.

How AI Is Changing Cloud Engineering

1. Smart Resource Allocation and Cost Optimization

Balancing cloud performance and cost has always been a challenge. AI solves this by predicting workloads and optimizing resource usage dynamically.

Practical examples:

  • AI-driven frameworks can automatically scale microservices up or down based on real-time demand, reducing costs while improving performance.
  • Predictive models forecast CPU and memory usage for big data pipelines, helping teams avoid over-provisioning and saving money.

From my experience managing a mid-sized SaaS migration, implementing AI-based autoscaling reduced monthly compute bills by nearly 28 percent without affecting performance.
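The core idea behind predictive autoscaling can be shown in a few lines: forecast the next interval's load and size the replica count to a target utilization. The capacity and utilization numbers below are illustrative assumptions, not figures from any particular platform:

```python
# Minimal sketch of predictive autoscaling: forecast the next interval's
# load with a moving average, then size replicas for ~70% utilization.
# Capacity-per-replica and the target are made-up illustrative values.

from math import ceil

def forecast(history, window=3):
    """Moving-average forecast of requests/sec for the next interval."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def replicas_needed(history, capacity_per_replica=100, target_util=0.7):
    predicted = forecast(history)
    return max(1, ceil(predicted / (capacity_per_replica * target_util)))

load = [120, 150, 180, 240, 300]  # requests/sec over recent intervals
print(replicas_needed(load))      # 4
```

Real AI-driven autoscalers replace the moving average with learned models, but the control loop — predict, then provision ahead of demand — is the same.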

2. Automated DevOps and Deployment

Manual deployment and configuration are time-consuming and error-prone. AI is making these processes smarter:

  • Large language model-driven DevOps systems can generate and refine cloud configurations automatically.
  • AI-assisted tools reduce errors, speed up deployment, and help manage complex multi-tenant environments.

In one project, AI-powered configuration tools cut deployment time nearly in half and drastically reduced configuration drift.
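Configuration drift itself is easy to make concrete: compare the declared state against the observed state and report any differing keys. The config shape below is invented for illustration:

```python
# Sketch of a configuration-drift check: diff the desired (declared)
# state against the actual (observed) state.

def find_drift(desired: dict, actual: dict) -> dict:
    """Return {key: (desired, actual)} for every mismatched setting."""
    keys = desired.keys() | actual.keys()
    return {
        k: (desired.get(k), actual.get(k))
        for k in keys
        if desired.get(k) != actual.get(k)
    }

desired = {"instance_type": "t3.medium", "min_size": 2, "max_size": 6}
actual  = {"instance_type": "t3.small",  "min_size": 2, "max_size": 6}
print(find_drift(desired, actual))  # {'instance_type': ('t3.medium', 't3.small')}
```

AI-assisted tooling layers remediation suggestions on top of a diff like this, but detecting the drift is the prerequisite either way.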

3. AI-First Workload Management

Cloud platforms are now becoming first-class homes for AI and machine learning workloads:

  • Businesses use AI cloud services for predictive analytics, image recognition, NLP, and anomaly detection.
  • Migrating AI workloads to cloud infrastructure reduces training time and enables near real-time insights.

For example, a retail analytics team I worked with reduced model training time from several hours to under 20 minutes by using cloud AI pipelines, enabling faster demand forecasting.

4. Improved Security and Compliance

AI is helping make cloud environments more secure and compliant:

  • AI-based monitoring tools detect anomalies and suspicious activity faster than humans.
  • AI enforces governance, identifies vulnerabilities, and optimizes data flows.

However, careful planning is essential to ensure AI integration does not introduce new risks.
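As a baseline for what AI-based monitoring builds on, here is a minimal statistical anomaly detector. The threshold and sample data are illustrative:

```python
# Flag metric samples that sit far from the mean, measured in
# standard deviations (z-score). Threshold and data are illustrative.

from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

latencies_ms = [20, 22, 19, 21, 20, 23, 21, 20, 22, 500]  # one spike
print(anomalies(latencies_ms))  # [500]
```

Production systems use learned baselines that adapt to seasonality, but pairing any such detector with human oversight, as recommended above, is what keeps false positives from eroding trust.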

Common Pitfalls and Misconceptions

From my experience, teams often stumble when:

  • They treat AI as a magic solution without monitoring or governance.
  • Data pipelines are messy or inconsistent, causing AI to underperform.
  • Security and compliance are neglected during AI integration.
  • Cost management is ignored, leading to unexpected bills from AI workloads.

Advanced Tips and Emerging Trends

For those ready to go deeper:

  • Use reinforcement learning-based resource management for dynamic workloads.
  • Adopt AI-driven DevOps systems to manage infrastructure as code efficiently.
  • Treat AI as a first-class citizen in cloud architecture for pipelines, model training, and deployment.
  • Implement FinOps practices that track AI resource efficiency and costs.
  • Use AI-based monitoring to enhance security and compliance, combined with human oversight.

Actionable Takeaways

Here’s what beginners and professionals can do today:

  1. Audit cloud usage to find workloads with variable demand for AI-based autoscaling.
  2. Migrate a small AI or analytics workload to the cloud to measure cost and performance improvements.
  3. Use AI-enabled infrastructure tools for deployment and configuration.
  4. Implement basic FinOps to monitor usage, cost, and efficiency.
  5. Plan for governance and security by defining roles, permissions, and monitoring practices.

Conclusion

AI is transforming cloud engineering from static infrastructure management to intelligent, dynamic, and efficient systems. It enables cost savings, improved performance, smarter deployment, and stronger security. Success requires careful planning, governance, and ongoing monitoring.

If you want to learn more about cloud engineer roles and responsibilities in this AI-driven cloud environment, check out this resource: Cloud Engineer Roles and Responsibilities

Which AI-powered cloud use cases are you most excited to explore in 2025? Share your thoughts in the comments – I’d love to hear your experiences.

Veeam Backup for Microsoft 365 Installation Guide

2025-12-01 17:46:53

Veeam Backup for Microsoft 365 is a comprehensive solution designed to protect your organization’s critical data stored within Microsoft 365 services, including Exchange Online, SharePoint Online, OneDrive for Business, and Microsoft Teams. By providing robust backup, restoration, and recovery capabilities, it ensures that your data is safe, accessible, and fully compliant with legal and regulatory requirements. Microsoft 365 provides powerful cloud productivity tools but operates on a shared responsibility model, meaning that while Microsoft safeguards its infrastructure, protecting your data remains your responsibility. Veeam fills this gap by offering a reliable, flexible, and user-friendly backup solution tailored to meet the demands of modern businesses.

Key Features of Veeam Backup for Microsoft 365

  1. Comprehensive Backup: Securely back up Microsoft 365 data, including emails, files, and conversations, to on-premises storage or the cloud of your choice.

  2. Granular Recovery: Restore individual emails, files, or entire sites with precision, reducing downtime and disruption during data recovery scenarios.

  3. Flexible Deployment: Deploy on-premises, in the cloud, or in a hybrid environment, allowing you to customize the solution to suit your infrastructure needs.

  4. Advanced Search and eDiscovery: Easily search and retrieve critical data for compliance, legal, or operational requirements with powerful eDiscovery tools.

  5. Microsoft Teams Support: Backup and recovery of Microsoft Teams data, including channels, tabs, and shared files, to ensure uninterrupted collaboration.

  6. Automation and Scalability: Automate routine backup tasks with PowerShell and RESTful APIs, while scaling effortlessly to support enterprise-level environments.

  7. Secure and Reliable: Utilize encryption for backup data in transit and at rest, ensuring maximum security and compliance with industry standards.

By choosing Veeam Backup for Microsoft 365, businesses can achieve peace of mind knowing their critical data is safeguarded against accidental deletion, security threats, and regulatory risks.
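To illustrate the automation mentioned in point 6, here is a sketch of driving the product through its RESTful API. The host, port, endpoint path, and token below are placeholders, and the request is only built, never sent; consult the Veeam Backup for Microsoft 365 REST API reference for the real routes and authentication flow:

```python
# Sketch of automating backup jobs via a RESTful API.
# BASE, TOKEN, and the route are PLACEHOLDERS — check the product's
# REST API reference for the actual endpoints.

import json
from urllib.request import Request

BASE = "https://backup.example.local:4443/api"  # placeholder host/port
TOKEN = "<bearer-token>"                        # obtained from the auth endpoint

def start_job_request(job_id: str) -> Request:
    """Build (but do not send) an HTTP request that would start a backup job."""
    body = json.dumps({"start": True}).encode()
    return Request(
        f"{BASE}/Jobs/{job_id}/action",         # placeholder route
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = start_job_request("backup-job-1")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would require a reachable server and a valid token; the same pattern applies to listing jobs or querying restore points.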

Prerequisites

Before proceeding with the installation, ensure the following requirements are met:

- System Requirements:
  • A supported Windows Server OS (Windows Server 2016 or later recommended)
  • At least 4 CPU cores and 8 GB RAM for small environments (adjust for larger deployments)
  • Sufficient storage space for backup data

- Microsoft 365 Requirements:
  • An account with the necessary permissions (Global Administrator or Application Administrator role) in Microsoft 365
  • Modern authentication enabled for better security

- Software Requirements:
  • Microsoft .NET Framework 4.7.2 or later
  • A supported version of Microsoft PowerShell (v5.1 or later)

- License:
  • A valid license key for Veeam Backup for Microsoft 365

Step 1: Download the Installer

  1. Visit the Veeam official website and log in with your Veeam account.
  2. Navigate to the Veeam Backup for Microsoft 365 product page.
  3. Download the latest version of the software.

Step 2: Install Veeam Backup for Microsoft 365

  1. Run the Installer:
  • Double-click the downloaded setup file to launch the installer.
  • Select your preferred language and click OK.

  2. Accept the License Agreement:
  • Read and accept the terms of the License Agreement.
  • Click Next to continue.

  3. Choose Installation Type:
  • Select Full Installation to install the Veeam Backup for Microsoft 365 server and console on the same machine.
  • Alternatively, select Custom Installation to install components on separate servers.

  4. Select Installation Folder:
  • Choose the destination folder or accept the default location.
  • Click Next.

  5. Install Prerequisites:
  • The installer will automatically detect and install any missing prerequisites.
  • Click Install to proceed.

Conclusion

By following the installation steps in this guide, you have successfully set up Veeam Backup for Microsoft 365 on your system. With the software now installed, you are ready to:

  • Configure your backup infrastructure
  • Connect it to your Microsoft 365 environment
  • Safeguard critical data, including emails, files, and collaborative content

This setup ensures your organization is better equipped to handle data recovery scenarios, minimize downtime, and meet compliance requirements.

Next, proceed with configuring:

  • Repositories
  • Backup jobs
  • Retention policies

Tailor these settings to your business needs for optimal protection. For ongoing success, regularly update your Veeam installation and monitor your backups to ensure continuous, reliable protection.