2026-02-12 20:20:10
My boss at work decided it was time to test my knowledge, patience, and DevOps engineering skills, so he gave me a task that seemed simple at first glance but was actually quite challenging.
We decided to build our own local mail server from scratch for use among our colleagues. Instead of using ready-made Docker images, I created a mail server using Postfix and Dovecot, implemented virtual users, enabled LMTP delivery, and secured everything.
Before we start creating our own mail server, we need to decide on the technologies we will use.
Without a doubt, the core of our project is Postfix and Dovecot, which together serve a single purpose: helping us build a mail server.
I'll quickly explain what they do.
Postfix
Postfix is responsible for handling SMTP traffic: it accepts mail from clients, queues it, and routes each message toward its destination.
Dovecot
Dovecot acts as the mail access server: it authenticates users, serves mailboxes over IMAP, and performs the final delivery into Maildir via LMTP.
Before configuring the services, let's quickly understand how email would flow through the system.
A mail server is essentially a pipeline, and to debug it precisely we need to know how messages flow between its components.
In my setup, the mail flow looks like this:
Client -> SMTP -> LMTP -> Dovecot -> Maildir
Client -> IMAP -> Dovecot -> Maildir
Sending an Email
When a user sends an email, the client connects to Postfix over SMTP (port 587), authenticates via SASL, and Postfix hands the accepted message to Dovecot over LMTP, which writes it into the recipient's Maildir.
Instead of using Postfix's built-in virtual delivery agent, I chose LMTP because it lets Dovecot handle the final delivery step directly.
Reading an Email
When a user checks their mailbox, the client connects to Dovecot over IMAP (port 143), authenticates, and Dovecot reads the messages straight from the Maildir.
Mail Storage Format: Maildir
Maildir is the mail storage format Dovecot uses here to store messages on the server.
In Maildir, each message is stored as an individual file inside a structured directory hierarchy ( tmp/, new/, and cur/ ).
This design eliminates the file-locking issues common in the older mbox format and allows multiple processes to work on a mailbox safely in parallel.
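For illustration, after one message has been delivered, the directory tree of a hypothetical mailbox for receiver@domain would look roughly like this (the long file name is generated by Dovecot):
/home/vmail/domain/receiver/Maildir/
    cur/
    new/
        1767000000.M41PQ12.hostname      (one file = one message)
    tmp/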
SSL/TLS Encryption: Secure Communication
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are protocols that encrypt the communication between email servers and clients, protecting your emails and login credentials during transmission over the internet.
Once the architecture is clear, we can start our project. First, let's install the necessary components.
sudo apt update
sudo apt install postfix dovecot-imapd dovecot-lmtpd
You might ask: "Why virtual users?" Creating a Linux system user for each email account would tightly couple mail accounts with operating system accounts.
This approach has several drawbacks: every mailbox becomes a full login account on the server, adding or removing users means managing OS accounts, and security isolation suffers.
Now that we have installed all the main components, we can move on to configuring Postfix.
/etc/postfix/main.cf
Basic Identity
myhostname = hostname.domain
mydomain = domain
myorigin = $mydomain
These parameters define how the server identifies itself when sending and receiving mail.
Network Configuration
inet_interfaces = all
inet_protocols = all
mynetworks = 127.0.0.0/8 ...
mydestination = localhost, localhost.$mydomain
relayhost =
Mailbox Settings
mailbox_size_limit = 0
Setting this to 0 means there is no restriction on mailbox size.
Recipient && Relay Restrictions
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
This prevents the server from becoming an open relay.
SASL Authentication
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
This tells Postfix not to handle authentication itself but to pass SASL requests to Dovecot through the private/auth socket.
TLS
smtpd_tls_cert_file = /etc/postfix/certs/server.crt
smtpd_tls_key_file = /etc/postfix/certs/server.key
smtpd_tls_security_level = may
smtp_tls_security_level = may
We tell the server to use opportunistic TLS: encrypt the connection whenever the other side supports it, but don't refuse unencrypted connections. For a local environment this is perfectly fine.
Virtual Domain && LMTP
virtual_mailbox_domains = domain
virtual_transport = lmtp:unix:private/dovecot-lmtp
Once Postfix is fully configured... wait! We are not done yet.
After setting up Postfix, we need to make sure the mail server can actually receive connections. For that, we need to open the necessary ports in our firewall (UFW) and configure master.cf.
| Port | Protocol | Purpose |
|---|---|---|
| 25 | TCP | SMTP for receiving mail from other mail servers |
| 587 | TCP | SMTP submission for sending mail from email clients (authenticated) |
| 465 | TCP | SMTP over SSL (optional, secure submission) |
sudo ufw allow 25/tcp
sudo ufw allow 587/tcp
sudo ufw allow 465/tcp
sudo ufw reload
sudo ufw status
The master.cf file defines how Postfix listens and handles connections.
The submission service defines how outgoing mail from authenticated clients is accepted.
/etc/postfix/master.cf
submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
Each -o option overrides default Postfix behavior for this service only, like forcing TLS and enabling authentication.
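We opened port 465 in the firewall, but master.cf only enables the submission service so far. If you also want implicit TLS on 465, the usual extra stanza looks like the sketch below (standard Postfix options; adjust to your needs):
smtps     inet  n       -       y       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes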
Now that we are finished configuring Postfix, we can move on to configuring Dovecot.
While Postfix handles sending and receiving mail, Dovecot takes care of user authentication and mail storage access.
Create the vmail User
All virtual mailboxes will be stored under a dedicated system user called vmail. This avoids creating a system account for each email user, improving security and manageability.
sudo groupadd -g 5000 vmail
sudo useradd -u 5000 -g 5000 -d /home/vmail -m -s /bin/bash vmail
sudo passwd vmail
Create a Virtual User Password File
Dovecot needs a list of virtual users and their passwords. This can be a simple text file for local testing.
/etc/dovecot/passwd
sender@domain:{PLAIN}passwd:5000:5000::/home/vmail/domain/sender
receiver@domain:{PLAIN}passwd:5000:5000::/home/vmail/domain/receiver
In Dovecot, virtual users are not real Linux users, but they still need file system access to read and write their mailboxes. Virtual users are mapped to vmail's UID/GID so that Dovecot processes access the files as vmail.
| Field | Meaning |
|---|---|
| sender@domain | Virtual email address (login name). |
| {PLAIN}passwd | Password for this user ({PLAIN} = plain text for testing). |
| 5000 | UID of the system user Dovecot should use (vmail). |
| 5000 | GID of the system group (vmail). |
| (empty) | Typically used for extra info such as quota; empty here. |
| /home/vmail/domain/sender | Home directory / Maildir path for this user; Dovecot stores the mail here. |
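{PLAIN} is convenient for a quick test, but storing hashes costs nothing extra. Dovecot's doveadm pw tool generates them; a minimal sketch (the scheme choice is up to you):
doveadm pw -s SHA512-CRYPT
# prompts for the password twice and prints something like {SHA512-CRYPT}$6$...
# paste that string into /etc/dovecot/passwd in place of {PLAIN}passwd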
Configure Password and User Databases
/etc/dovecot/conf.d/auth-passwdfile.conf.ext
passdb passwd-file {
driver = passwd-file
auth_username_format = %{user}
passwd_file_path = /etc/dovecot/passwd
}
userdb static-users {
driver = static
fields {
uid = 5000
gid = 5000
home = /home/vmail/domain/%{user | username}
mail = maildir:~/Maildir
}
}
This file tells Dovecot how to find virtual users and which system account should own their mail files. There are two main sections: passdb and userdb.
passwd_file_path - Location of the file with virtual users and passwords.
userdb static-users - This is a user database named static-users. It tells Dovecot who owns the mailbox.
driver = static - Same system UID/GID for all virtual users.
uid = 5000 - The system user ID used to access mailbox files.
gid = 5000 - The system group ID used to access mailbox files.
home - The home path of a virtual user.
mail - Location of the user's Maildir inside the home directory.
Mail Storage Configuration
/etc/dovecot/conf.d/10-mail.conf
mail_driver = maildir
mail_home = /home/vmail/domain/%{user | username}
mail_path = ~/Maildir
This is where we tell Dovecot where and how to store messages for each virtual user.
%{user | username} takes the local part of the email ( everything before @ ).
Authentication Settings
/etc/dovecot/conf.d/10-auth.conf
auth_mechanisms = plain login
auth_allow_cleartext = no
#!include auth-system.conf.ext
!include auth-passwdfile.conf.ext
This file controls how users log in and which authentication backends Dovecot uses.
Master Process && Socket Configuration
/etc/dovecot/conf.d/10-master.conf
service imap-login {
inet_listener imap {
port = 143
}
}
service auth {
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
unix_listener auth-userdb {
mode = 0660
user = vmail
group = vmail
}
}
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
mode = 0600
user = postfix
group = postfix
}
}
This file controls how Dovecot runs its services ( IMAP, LMTP, authentication ) and how they communicate with other processes.
| Service | Purpose | Socket/Port | User/Group | Notes |
|---|---|---|---|---|
| imap-login | IMAP client connections | TCP 143 | - | Allows clients to read mail |
| auth | User authentication | /var/spool/postfix/private/auth | postfix:postfix | Postfix can check credentials |
| auth-userdb | User info query | internal | vmail:vmail | Dovecot internal use |
| lmtp | Mail delivery from Postfix | /var/spool/postfix/private/dovecot-lmtp | postfix:postfix | Delivers mail to Maildir |
LMTP Protocol Settings
/etc/dovecot/conf.d/20-lmtp.conf
protocol lmtp {
auth_username_format = %{user}
}
This file controls how Dovecot handles mail delivery via LMTP from Postfix.
Also, we should enable LMTP in Dovecot config file.
/etc/dovecot/dovecot.conf
protocols = imap lmtp
SSL/TLS Settings
/etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_server_cert_file = /etc/postfix/certs/server.crt
ssl_server_key_file = /etc/postfix/certs/server.key
These settings make sure that all client connections to Dovecot are encrypted, keeping passwords and emails secure.
Note: since we are creating a local mail server, we won't need more than this, as we won't be sending messages to external domains such as Gmail.
Firewall Settings
Let's allow port 143 for the IMAP protocol.
sudo ufw allow 143/tcp
sudo ufw reload
Even though our mail server is local, I'm still building it inside my corporate network, and email clients and Postfix still rely on DNS to resolve domain names.
Forward Zone maps domain names to IP addresses.
Records: The forward zone contains different types of records.
A RECORD - Maps a hostname to an IP address.
ANAME - Some DNS servers support ANAME records as a CNAME-like alias at the zone apex (root domain).
CNAME - Creates an alias from one hostname to another.
MX Record - Directs email to the mail server for our domain.
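As a sketch only, the forward zone for a made-up internal domain mail.lan could contain records like these (names and addresses are placeholders):
mail.lan.        IN  MX     10 mx.mail.lan.
mx.mail.lan.     IN  A      192.168.10.5
imap.mail.lan.   IN  CNAME  mx.mail.lan.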
Putting It All Together
Monitoring logs is crucial for catching failed deliveries, authentication errors, and relay attempts before users notice them.
Postfix logs are usually written to syslog.
sudo tail -f /var/log/mail.log
What to look for:
| Log Entry | Meaning |
|---|---|
| status=sent | Email was successfully delivered. |
| status=bounced | Delivery failed (check recipient or MX). |
| reject_unauth_destination | Someone tried to relay through your server — Postfix blocked it. |
| sasl_method=PLAIN | Authentication attempt logged. |
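In practice I rarely read the whole log; grepping for these markers is faster. Two throwaway examples:
grep 'status=bounced' /var/log/mail.log          # show failed deliveries
grep -c 'status=sent' /var/log/mail.log          # count successful deliveries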
To see how Dovecot logs authentication and mail access, we can use journalctl -u dovecot. To make the logs more verbose, edit /etc/dovecot/conf.d/10-logging.conf.
auth_debug_passwords = yes
log_debug = category=mail
mail_plugins {
  notify = yes
  mail_log = yes
}
To check if ports are listening:
sudo ss -tulpn | grep -E '25|587|465|143'
Now that Postfix and Dovecot are configured, it's crucial to verify that the ports are listening, authentication works, and mail actually gets delivered.
We can do this easily with Telnet ( or OpenSSL ).
First things first, restart your services.
sudo systemctl restart postfix
sudo systemctl restart dovecot
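It also helps to let both daemons validate their own configuration before testing; both commands ship with the packages we installed:
sudo postfix check     # reports syntax and permission problems in main.cf / master.cf
sudo doveconf -n       # prints the effective non-default Dovecot settings, or an error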
Test SMTP ( Port 587 / 25 )
telnet domain 587
You should see a response like:
220 domain ESMTP Postfix
Now send a test email manually:
EHLO domain.com
AUTH LOGIN
MAIL FROM:<sender@domain>
RCPT TO:<receiver@domain>
DATA
Subject: Test Email

Hello! This is a test.
.
QUIT
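One caveat: our submission service enforces STARTTLS (smtpd_tls_security_level=encrypt), so a plain telnet session on 587 won't even be offered AUTH. The same test over OpenSSL, plus the base64 values that AUTH LOGIN expects, looks roughly like this (credentials are the placeholders from /etc/dovecot/passwd):
openssl s_client -starttls smtp -connect domain:587 -crlf
printf 'sender@domain' | base64    # answer to the Username: prompt
printf 'passwd' | base64           # answer to the Password: prompt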
Test IMAP ( Port 143 )
telnet domain 143
You should see:
* OK Dovecot ready.
Now check the inbox:
a login receiver@domain passwd
b select inbox
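If the test message arrived, two more tagged commands let you read it and end the session cleanly:
c fetch 1 body[]
d logout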
You now have a fully functional local mail server that is capable of handling virtual users, sending and receiving emails, and securely storing messages in Maildir format!
If you have any questions or suggestions, I will be happy to hear them! Criticism is also welcome!
2026-02-12 20:10:08
confdroid_resources is a small, focused Puppet module that automates the installation and configuration of common YUM / DNF repositories on Rocky Linux (and other Enterprise Linux 9 family systems).
Its primary purpose is to make sure essential third-party repositories — especially EPEL — are reliably present on every managed node without causing duplicate resource conflicts.
Rocky Linux (and other RHEL derivatives) rely on YUM/DNF repositories to provide software packages.
Many useful packages — debugging tools, monitoring agents, additional utilities, and more — live outside the base OS repositories and are only available through EPEL (Extra Packages for Enterprise Linux).
In Puppet, repository definitions are effectively singletons: a given yumrepo resource can only be declared once per catalog. Attempting to declare the same repository in multiple places (for example, in different modules that depend on EPEL) causes a duplicate resource conflict and catalog compilation fails.
The cleanest solution is to declare shared repositories in exactly one central place and have everything else depend on it.
That central place is confdroid_resources.
The module declares the most commonly needed repositories (starting with EPEL) in a controlled, idempotent way.
Because these repositories are declared only once and included globally, downstream modules can safely install packages that depend on them without worrying about duplication errors.
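For illustration only (the module's actual code may differ), the kind of resource confdroid_resources centralizes is a plain yumrepo declaration along these lines:
yumrepo { 'epel':
  ensure   => present,
  descr    => 'Extra Packages for Enterprise Linux 9',
  baseurl  => 'https://dl.fedoraproject.org/pub/epel/9/Everything/$basearch/',
  gpgcheck => 1,
  gpgkey   => 'https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9',
  enabled  => 1,
}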
You can also enable or disable individual repositories via class parameters or Hiera/Foreman data — without touching the rest of your codebase.
include confdroid_resources
This way, the module is applied on all hosts, since likely all will need the repos. In case any hosts should not have it enabled, the parameter $rs_enable_epel should be set to 0.
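Assuming automatic parameter lookup on the main class (a sketch, not the module's documented interface), that can be a single line of Hiera data for the hosts that should skip EPEL:
# hieradata/nodes/no-epel-host.yaml
confdroid_resources::rs_enable_epel: 0
With the repositories guaranteed everywhere else, installing an EPEL-only package is just a matter of ordering: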
package { 'htop':
ensure => installed,
require => Class['confdroid_resources'],
}
Or better yet, simply include the requirement in your module:
require confdroid_resources
This simply ensures that the confdroid_resources module is applied first.
No need to declare yumrepo { 'epel': … } again — it's already there.
Add the module to your environment (via the ConfDroid Forge or by cloning the repo), then simply include it in your base profile or node classification.
# Example in a base profile class
class profile::base {
include confdroid_resources
}
That's it — EPEL (and any future shared repos) will be available everywhere, without duplication headaches.
Happy automating! 🚀
Questions, feedback, or feature requests → https://feedback.confdroid.com
Did you find this post helpful? You can support me.
2026-02-12 20:09:59
We’ve all been there. You have a side project running on a $5 VPS. You wrote a quick bash script to run pg_dump or mysqldump every night at 3 AM. You put it in crontab. You felt responsible. You felt safe.
Then, six months later, you actually need that data.
You go to your backup folder. File size: 0kb.
Or worse, the file is there, but when you try to restore it, you realize you never tested the restore command, and version mismatches are throwing errors everywhere.
This is exactly why I started building Oops Backup.
As developers, we obsess over the backup part. "Is the data saved?"
But we rarely obsess over the restore part. "Can I get this app back online in 5 minutes if the server catches fire?"
I realized that my collection of haphazard shell scripts and cron jobs had three major flaws:
No Visibility: If a backup failed silently (disk full, credentials changed), I wouldn't know until it was too late.
Storage Hell: Storing backups on the same server as the database is a recipe for disaster. Moving them to S3 via scripts is annoying to maintain.
Restore Anxiety: Restoring a database usually meant SSH-ing in, finding the file, unzipping it, and praying the import command worked.
I decided to build a tool that treats Backup & Restore as a first-class citizen, not an afterthought.
I’m a huge fan of keeping infrastructure simple. Oops Backup is built to run cleanly in Docker containers. I manage my own infra with Portainer, and I wanted Oops Backup to feel just as native.
Automated Scheduling: No more manual cron editing.
Storage: Native integrations for S3-compatible storage (AWS, Cloudflare R2, Backblaze B2), Oops Storage or SFTP transfer.
One-Click Restore: This is the killer feature. You can browse your backups and restore them to a specific database instance via the UI.
Building in public means admitting when you’re wrong.
Initially, I wanted to support everything: MongoDB, PostgreSQL, MySQL, and Microsoft SQL Server.
Last week, I officially dropped support for MSSQL.
Why? Because trying to support the Windows-heavy ecosystem of MSSQL inside a lightweight tool was bloating the project and distracting me from the core user base. 95% of indie hackers and devs I know run on the "LAMP" or "MERN" stack equivalents.
By cutting MSSQL, I could double down on making the Postgres, Mongo, and MySQL experience flawless. Sometimes, you have to cut a feature to save the product.
I’m currently finalizing and polishing the UI. I’m building this in public and sharing the journey on X, Threads and Bluesky.
If you’re tired of trusting a backup.sh file you wrote three years ago, I’d love for you to check out what I’m building.
Check it out at Oops Backup
I’m looking for brutally honest feedback. What’s the one feature your current backup solution is missing?
2026-02-12 20:08:42
While navigating the Dev.to feed, I noticed a UI inconsistency: the Follow button in the user profile hover card displays duplicated text. When hovering over a user’s avatar or username, the tooltip preview shows overlapping “Follow” labels, e.g., FollowFollow. This seems to be a frontend rendering issue, likely caused by double-rendering the button label or an unexpected state duplication.
The profile preview tooltip should display a single, properly styled Follow button with one label. The behavior should match other follow interactions across the site, maintaining UI consistency.
Likely caused by frontend rendering or state duplication, for example the button label being rendered twice or an unexpected state duplication in the hover card component.
Verified not caused by browser extensions (tested with all extensions disabled).
Occurs in dark mode; light mode not yet tested.
Could affect accessibility if screen readers interpret both labels.
Severity: Low to Medium – primarily a visual/UX inconsistency, but it may confuse users.
2026-02-12 20:06:42
This is a submission for the GitHub Copilot CLI Challenge
I used Copilot CLI and Copilot SDK to create a CLI application 🖥️ called DevScope.
DevScope is a macOS-first, privacy-focused CLI tool that helps developers understand how they actually spend their time across applications, terminal commands, and browser activity.
Github code - https://github.com/sonu0702/devscope
Developers generate a lot of activity data: time in applications, terminal commands, browser activity, and more.
But raw metrics do not answer higher-level questions about focus, intent, or progress, and existing tools rarely bridge that gap.
DevScope is designed to live in the terminal, stay transparent and explainable, and treat Copilot as a thinking partner instead of just autocomplete.
🚀 GitHub Copilot is used in two distinct ways:
During development
At runtime (Copilot SDK)
This project demonstrates how Copilot can be both a developer tool and a product feature.
Why an Agent?
Raw metrics like “time spent” or “commands used” do not answer higher-level questions such as what you were focused on, what you were trying to accomplish, and whether you actually made progress.
To solve this, DevScope is designed to integrate the GitHub Copilot SDK as a reasoning layer on top of locally collected activity data.
This application is at an early stage; everyone is welcome to contribute with ideas or code.
2026-02-12 20:06:38
Sarah McLachlan totally wowed at her Tiny Desk Concert, giving everyone chills with her amazing, gentle voice. She even threw a surprise country twist into "Building a Mystery" that felt nothing short of miraculous!
This set really highlighted her deep artistic soul and killer skills, proving she's way more than just a Lilith Fair founder. Honestly, it was one of the warmest and most poignant performances the Tiny Desk has ever seen.
Watch on YouTube