2026-01-16 06:04:37
We all have that one song in our listening history that we hope nobody sees. But instead of hiding my shame, I decided to automate it!
I recently built my own custom AI bot on Poe called lastfm-roaster. Its only job is to look at a person's Last.fm music taste (if you're curious, here's mine as an example) and absolutely destroy them.
But there was a problem: The roasts were trapped in the chatbot interface. I wanted them delivered straight to my inbox every morning so I could easily forward the best (worst) burns to my friends for a laugh.
So, I built a pipeline.
Today, I’m going to show you how I connected my custom Poe bot to a Python script that pings the API, analyzes my listening history, and uses regex to inject rotating neon colors into a beautiful HTML email.
Best of all? It runs entirely for free on GitHub Actions, and the API costs less than a cup of coffee for the entire year.
Onward!
I went to Poe and created a new bot using their ScriptBot feature. I gave it a specific system prompt roughly like this:
"You are a pretentious music critic. Your job is to analyze Last.fm profiles and roast them mercilessly. Be sarcastic, use slang, and do not hold back."
After iterating back and forth with the ScriptBot until I was satisfied with the results, I had the intelligence (lastfm-roaster), but I needed to get the output out of Poe and into my email.
We are going to use GitHub Actions to run this, which means we need to keep our API keys safe. Never hardcode passwords in your script!
Create a new private repository on GitHub, then go to Settings > Secrets and variables > Actions.
Add these three secrets:
POE_API_KEY
EMAIL_ADDRESS
EMAIL_PASSWORD
We need a few Python libraries to make this magic happen. Create a file named requirements.txt in your repo:
openai
markdown
(Yes, we use openai! Poe's API is now compatible with the OpenAI client, which makes it super easy to use. My bot's underlying model, Gemini 2.5 Flash, is reached through the Poe API, so I don't have to manage that key either.)
The script (lastfm_roast.py)
Here is the cool part. I didn't want the email to look boring. I wanted the "insults" (the bold text) to pop in different colors.
We use Python's re (Regex) and itertools to find every <strong> tag my bot generates and cycle through a "Dracula" color palette to inject inline CSS styles.
import os
import smtplib
import markdown
import re
import itertools
from email.message import EmailMessage
from openai import OpenAI

# Configs (loaded safely from GitHub Secrets)
POE_API_KEY = os.environ.get("POE_API_KEY")
EMAIL_ADDRESS = os.environ.get("EMAIL_ADDRESS")
EMAIL_PASSWORD = os.environ.get("EMAIL_PASSWORD")
LASTFM_URL = "https://www.last.fm/user/profoundlypaige"

# --- NEON PALETTE ---
# A list of bright colors that look good on dark backgrounds
# (Pink, Cyan, Green, Orange, Purple, Yellow)
COLORS = ["#FF79C6", "#8BE9FD", "#50FA7B", "#FFB86C", "#BD93F9", "#F1FA8C"]


def get_roast():
    """Pings the Poe API to get the roast."""
    client = OpenAI(api_key=POE_API_KEY, base_url="https://api.poe.com/v1")
    try:
        print("🔥 Fetching roast from Poe...")
        response = client.chat.completions.create(
            model="lastfm-roaster",
            messages=[
                {"role": "user", "content": f"Roast my music taste: {LASTFM_URL}"}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error fetching roast: {e}"


def inject_colors(html_content):
    """Finds every <strong> tag and injects a different color from the palette."""
    color_cycle = itertools.cycle(COLORS)

    def replace_match(match):
        next_color = next(color_cycle)
        # Returns <strong style="color: #CODE">
        return f'<strong style="color: {next_color}">'

    # Regex to replace <strong> with the colored version
    return re.sub(r'<strong>', replace_match, html_content)


def create_html_email(roast_text):
    # 1. Convert Markdown to basic HTML
    raw_html = markdown.markdown(roast_text)
    # 2. Inject the rotating neon colors into bold tags
    colorful_html = inject_colors(raw_html)
    # 3. Wrap in the styled container
    html_template = f"""\
<!DOCTYPE html>
<html>
<head>
<style>
    body {{ margin: 0; padding: 0; background-color: #121212; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; }}
    .container {{
        max-width: 600px;
        margin: 40px auto;
        background-color: #1e1e1e;
        border-radius: 16px;
        overflow: hidden;
        box-shadow: 0 10px 30px rgba(0,0,0,0.5);
        border: 1px solid #333;
    }}
    .header {{
        background: linear-gradient(135deg, #2b2b2b 0%, #1a1a1a 100%);
        padding: 30px;
        text-align: center;
        border-bottom: 2px solid #333;
    }}
    /* The title is a gradient text effect */
    .header h1 {{
        margin: 0; font-size: 28px; letter-spacing: 2px; text-transform: uppercase;
        background: -webkit-linear-gradient(#FF79C6, #8BE9FD);
        -webkit-background-clip: text;
        -webkit-text-fill-color: transparent;
    }}
    .content {{ padding: 30px; color: #d1d5db; line-height: 1.7; font-size: 16px; }}
    h2 {{
        color: #ffffff;
        border-left: 5px solid #BD93F9; /* Purple accent */
        padding-left: 15px;
        margin-top: 30px;
        text-transform: uppercase;
        font-size: 18px;
        letter-spacing: 1px;
    }}
    ul {{ padding-left: 20px; }}
    li {{ margin-bottom: 10px; }}
    /* Styles for links */
    a {{ color: #8BE9FD; text-decoration: none; border-bottom: 1px dotted #8BE9FD; }}
    .footer {{
        background-color: #121212;
        padding: 20px;
        text-align: center;
        font-size: 12px;
        color: #555;
    }}
</style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>🔥 The Daily Burn</h1>
        </div>
        <div class="content">
            {colorful_html}
        </div>
        <div class="footer">
            Served fresh by Poe API, Gemini 2.5 Flash, & GitHub Actions<br>
            <a href="{LASTFM_URL}" style="color:#555; border:none;">View your tragic Last.fm profile</a>
        </div>
    </div>
</body>
</html>
"""
    return html_template


def send_email(roast_text):
    msg = EmailMessage()
    msg["Subject"] = "Your Daily Last.fm Roast 🎸"
    msg["From"] = EMAIL_ADDRESS
    msg["To"] = EMAIL_ADDRESS
    # Plain-text fallback first, then the styled HTML version
    msg.set_content(roast_text)
    msg.add_alternative(create_html_email(roast_text), subtype='html')
    try:
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
            smtp.login(EMAIL_ADDRESS, EMAIL_PASSWORD)
            smtp.send_message(msg)
        print("✅ Email sent successfully!")
    except Exception as e:
        print(f"❌ Failed to send email: {e}")


if __name__ == "__main__":
    roast = get_roast()
    send_email(roast)
I don't want to run this manually -- I want to be roasted automatically. So we'll use a GitHub Actions workflow to run this script every day at 12:00 UTC.
Create .github/workflows/daily_roast.yml:
name: Daily Lastfm Roast

on:
  schedule:
    - cron: '0 12 * * *' # Noon UTC, every day

jobs:
  roast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: python lastfm_roast.py
        env:
          POE_API_KEY: ${{ secrets.POE_API_KEY }}
          EMAIL_ADDRESS: ${{ secrets.EMAIL_ADDRESS }}
          EMAIL_PASSWORD: ${{ secrets.EMAIL_PASSWORD }}
This is my favorite part. Poe charges "Compute Points" to run the underlying model that powers the bot (I used Gemini 2.5 Flash for mine). I checked my consumption after a few test runs to see what kind of bill I was racking up.
Here is the breakdown from my dashboard: each roast costs roughly $0.0048 in Compute Points.
If I run this every single day for a year: $0.0048 * 365 days = $1.75.
$1.75 per year.
For less than two bucks, I can get a state-of-the-art LLM to analyze my listening trends and tell me my taste in indie pop is "derivative and sad" every single morning. That is high-value ROI. 📈
Before (Chat Interface):
Trapped in an app. Hard to share. Markdown text.
After (The Neon Upgrade):
Now, when I open my email, I get a sleek, dark-mode card. The band names—the targets of the insults—are highlighted in Pink, Cyan, and Green, making sure I don't miss exactly who I'm being mocked for listening to.
And because it's an email, I can instantly forward the roast to my friends so they, too, can laugh at my pain.
This pattern (Custom Bot + API + HTML Generation + Actions) is my go-to for almost all my personal automation. It’s robust, free to host, and creates genuinely fun daily interactions.
Repositories mentioned:
Let me know in the comments if you try this out, or share the worst roast the AI gave you, and happy coding! ✨
2026-01-16 06:02:55
A great policy is worthless if it's not being followed. When I look at our detection platform, I see a number of resources that violate the policy we've carefully constructed. Trying to remediate these is like playing whack-a-mole: just when I think I've got them all, new ones pop up.
Take S3, for example. Whether these are old resources or new ones, I routinely find one out of compliance, such as a bucket with the public access block turned off. Even with the block off, the bucket is not necessarily public, but it's one step (or one mistake) away from being available to anyone. The public access block is a guardrail AWS added six years ago to prevent one of the most common vulnerabilities at the time: public S3 buckets. Back then, it seemed like every month a major company was leaking data through an exposed bucket.
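To make the guardrail concrete, here's a rough sketch of the check-and-fix itself using the AWS SDK for JavaScript v3 (ensurePublicAccessBlock is my own helper name, not an AWS API). The remediation is trivial for one bucket; the hard part is running it continuously across every bucket in every account, which is exactly what a PSPM automates:

```typescript
import {
  S3Client,
  GetPublicAccessBlockCommand,
  PutPublicAccessBlockCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// All four settings on: the bucket can't be made public via ACL or policy.
const FULL_BLOCK = {
  BlockPublicAcls: true,
  IgnorePublicAcls: true,
  BlockPublicPolicy: true,
  RestrictPublicBuckets: true,
};

export async function ensurePublicAccessBlock(bucket: string): Promise<void> {
  try {
    const { PublicAccessBlockConfiguration: c } = await s3.send(
      new GetPublicAccessBlockCommand({ Bucket: bucket })
    );
    if (
      c?.BlockPublicAcls &&
      c.IgnorePublicAcls &&
      c.BlockPublicPolicy &&
      c.RestrictPublicBuckets
    ) {
      return; // already compliant, nothing to do
    }
  } catch {
    // GetPublicAccessBlock throws if the block was never configured at all.
  }
  // Drift detected (or no config): put the full block back in place.
  await s3.send(
    new PutPublicAccessBlockCommand({
      Bucket: bucket,
      PublicAccessBlockConfiguration: FULL_BLOCK,
    })
  );
}
```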
So how can you prevent things like this? One option is AWS Config; however, setting up auto-remediation with it is difficult and often requires a custom Lambda function that you then have to maintain. This is where Preventive Security Posture Management (PSPM) comes in.
A PSPM focuses on enforcing policy continuously and automatically, not just detecting violations after the fact. Instead of alerting you that something drifted from policy requiring manual cleanup, a PSPM prevents or immediately corrects the drift as it happens.
Now wait a minute: if I already have a CNAPP, why do I need a PSPM? A CNAPP provides broad visibility across cloud, workload, and application risk, including runtime and vulnerability context. A PSPM like Turbot complements a CNAPP by ensuring cloud policies are always enforced, preventing misconfigurations from occurring or persisting in your environment.
Without automated policy enforcement, you rely on people and processes; violations will always happen, and detection plus manual cleanup can take weeks or months. To be secure, automatic policy enforcement needs to be in place. Turbot makes this possible without having to write lots of custom code.
Automate prevention to reduce alert fatigue, eliminate manual cleanup, and close security gaps as soon as they appear. Learn more about Turbot here: https://fandf.co/4ps42Qm
Special thanks to Turbot for sponsoring this post!
2026-01-16 05:57:24
You spend 15 minutes filling out a long configuration form. You get a Slack notification, switch tabs, reply to a colleague, and grab a coffee.
30 minutes later, you come back to the form and click "Save".
The page flashes. The login screen appears. And your data is gone.
This is the most annoying UX pattern in web development, and we need to stop doing it.
Why does this happen? Usually, it's because the JWT (access token) expired, the backend returned a 401 Unauthorized, and the frontend code did exactly what the tutorials said to do:
// Don't do this
axios.interceptors.response.use(null, error => {
  if (error.response.status === 401) {
    window.location.href = '/login'; // RIP data 💀
  }
  return Promise.reject(error);
});
Developers often argue: "But it's a security requirement! The session is dead!"
Yes, the session is dead. But that doesn't mean you have to kill the current page state.
If a user is just reading a dashboard, a redirect is fine. But if they have unsaved input (forms, comments, settings), a redirect is a bug.
Here is how a robust app handles this:
Intercept: Catch the 401 error.
Queue: Pause the failed request. Do not reload the page.
Refresh: Try to get a new token in the background (using a refresh token) OR show a modal asking for the password again.
Retry: Once authenticated, replay the original request with the new token.
The user doesn't even notice. The form saves successfully.
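Here's roughly what that looks like in practice. This is a minimal sketch of the pattern with axios; the /auth/refresh endpoint, the accessToken response field, and showReauthModal are placeholders for your app's own pieces, not a real library API:

```typescript
import axios, { AxiosError, InternalAxiosRequestConfig } from "axios";

const api = axios.create({ baseURL: "/api" });

// Deduplicate refreshes: many requests can hit a 401 at the same time,
// but we only want one call to the refresh endpoint.
let refreshPromise: Promise<string> | null = null;

function refreshAccessToken(): Promise<string> {
  refreshPromise ??= axios
    .post("/auth/refresh", {}, { withCredentials: true }) // assumed endpoint
    .then((res) => res.data.accessToken as string)        // assumed response shape
    .finally(() => {
      refreshPromise = null;
    });
  return refreshPromise;
}

// Placeholder: open a re-login modal that keeps the page state intact.
function showReauthModal(): void {}

type RetriableConfig = InternalAxiosRequestConfig & { _retried?: boolean };

api.interceptors.response.use(null, async (error: AxiosError) => {
  const original = error.config as RetriableConfig | undefined;

  if (error.response?.status === 401 && original && !original._retried) {
    original._retried = true; // guard against infinite 401 -> refresh loops
    try {
      const token = await refreshAccessToken();
      original.headers.set("Authorization", `Bearer ${token}`);
      return api(original); // replay the original request; the caller never notices
    } catch {
      showReauthModal(); // refresh failed: re-authenticate in place, never redirect
    }
  }
  return Promise.reject(error);
});
```

The _retried flag is the important design detail: without it, a broken refresh endpoint would send you into an infinite 401-refresh-retry loop.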
Implementing the "Silent Refresh" is tricky, but testing it is even more annoying.
Access tokens usually last 1 hour. You can't ask your QA team to "wait 60 minutes and then click Save" to verify the fix.
You need a way to trigger a 401 error exactly when you click the button, even if the token is valid.
Instead of waiting for the token to expire naturally, we can just delete it "mid-flight."
I use Playwright for this. We can intercept the outgoing request and strip the Authorization header before it hits the server.
This forces the backend to reject the request, triggering your app's recovery logic immediately.
Here is a Python/Playwright snippet I use to verify my apps are "expiry-proof":
from playwright.sync_api import Page, expect

def test_chaos_silent_logout(page: Page):
    # 1. Login and go to a form
    page.goto("/login")
    # ... perform login logic ...
    page.goto("/settings/profile")

    # 2. Fill out data
    page.fill("#bio", "Important text I don't want to lose.")

    # 3. CHAOS: Intercept the 'save' request
    def kill_token(route):
        headers = route.request.headers
        # We manually delete the token to simulate expiration
        if "authorization" in headers:
            del headers["authorization"]
        # Send the "naked" request. Backend will throw 401.
        route.continue_(headers=headers)

    # Attach the interceptor
    page.route("**/api/profile/save", kill_token)

    # 4. Click Save
    page.click("#save-btn")

    # 5. Check if we survived
    # If the app is bad, we are now on /login.
    # If the app is good, it refreshed the token and retried:
    # the text should still be there, and the save should succeed.
    expect(page.locator("#bio")).to_have_value("Important text I don't want to lose.")
    expect(page.locator(".success-message")).to_be_visible()
Network failures and expired tokens are facts of life. Your app should handle them without punishing the user.
If you want to build high-quality software, treat 401 Unauthorized as a recoverable error, not a fatal crash.
PS: If you need to test this on real mobile devices where you can't run Playwright scripts, you can use a Chaos Proxy to strip headers on the network level.
2026-01-16 05:47:49
Using a model class (even a simple PORO in app/models) for features has several advantages in Rails.
This is why developers often prefer “a model class” over scattering logic in controllers, helpers, or initializers.
Below are the key advantages:
If a feature’s logic is placed in a model class (e.g., FeatureFlag, BetaAccess, Onboarding), you can reuse it from controllers, views, background jobs, rake tasks, and the console, instead of duplicating the logic in multiple places.
Rails controllers and views should stay thin.
Putting domain logic in a model keeps your design clean (Fat Model, Skinny Controller).
Example:
Instead of:
if user.admin? && SomeConfig.beta_enabled?
You do:
if BetaAccess.allowed_for?(user)
Models are the easiest to test:
RSpec.describe BetaAccess do
  describe ".allowed_for?" do
    # ...
  end
end
No need to spin up controllers or simulate web requests.
If your feature has logic that may grow, models keep it in one place.
Example: onboarding flow:
class Onboarding
  def completed?(user)
    user.profile_filled? && user.verified? && user.tutorial_done?
  end
end
If you later add new onboarding rules, you update one class.
A dedicated model communicates intent clearly:
if FeatureFlag.enabled?(:new_ui)
is more readable than:
if Rails.configuration.x.new_ui_enabled
or random constants.
You might start with:
class FeatureFlag
  FLAGS = { new_ui: false }
end
Later decide to store flags in DB:
class FeatureFlag < ApplicationRecord
end
Same interface, no major changes in the rest of the app.
A model class naturally fits into authorization and service layers.
Example:
class FeatureFlagPolicy < ApplicationPolicy
  def enable?
    user.admin?
  end
end
Using a model (ActiveRecord or PORO) gives you all of the above: reusability, thin controllers, easy testing, one home for growing logic, and clear intent. That said, a model class is not the only valid approach; you can use a service or other patterns.
The reason some teams choose a model instead of a service depends on what kind of logic they are modeling.
Here’s a clear breakdown.
A service object usually represents something the system does:
SendEmail.new.call
ImportCsv.new.call
CreateSubscription.new.call
Services are “verb-like.”
Feature flags (or similar domain rules) are not actions.
They’re state + rules about state.
Example:
FeatureFlag.enabled?(:new_ui)
A service would feel unnatural:
FeatureFlagChecker.new(:new_ui).call
You’re not doing anything; you're querying domain state.
If you put this kind of logic in a service, you often end up with an anti-pattern: the checks get scattered across callers instead of living in one place. A model consolidates them.
Rails gives you conveniences for models: a conventional, autoloaded home (app/models) and predictable naming. Services don’t get these benefits by default.
Feature flags, onboarding rules, eligibility rules, user states: these aren’t actions. They are entities in your domain.
Rails convention: entities → model classes.
Example domain models that aren’t ActiveRecord:
Authenticator
Cart
Checkout
Onboarding
SubscriptionRules
FeatureFlag
Rails devs often keep PORO domain models in app/models.
Models give you intuitive, domain-driven APIs:
FeatureFlag.enabled?(:new_ui)
Eligibility.for(user).allowed?
Onboarding.completed?(user)
PlanPrice.for(:premium)
Services… not so much:
FeatureFlagService.new(:new_ui).enabled?
EligibilityService.new(user).allowed?
OnboardingService.new(user).completed?
This is more verbose, less “domain-sounding.”
Feature flags start simple but tend to grow more complex over time.
Models evolve cleanly.
Services become spaghetti when they accumulate state + logic.
A service is appropriate if the logic is an action the system performs, with side effects.
Examples:
GenerateReport
ChargeCreditCard
SendWelcomeEmail
It is not a good fit for domain state and rules; those map better to models.
In Rails (especially with gems like ActiveInteractor or Interactor), an Interactor is a pattern for encapsulating a single unit of business logic: usually a transactional action that may touch multiple models, call services, and succeed or fail as a single unit.
Think of Interactors as “coordinators of actions”, often orchestrating multiple models and services.
Example:
class CreateOrder
  include Interactor

  def call
    order = Order.create!(context.params)
    PaymentProcessor.charge(order)
    NotificationMailer.order_created(order).deliver_later
    context.order = order
  rescue StandardError => e
    context.fail!(error: e.message)
  end
end
| Concept | Responsibility | Examples |
|---|---|---|
| Model | Represents domain state & rules; encapsulates attributes & behavior | FeatureFlag, User, Subscription |
| Service | Performs a discrete action; can use multiple models | PaymentProcessor, EmailSender |
| Interactor | Orchestrates a workflow or transaction using models & services; handles success/failure | CreateOrder, SendWeeklyReport, EnrollUserInCourse |
Key difference:
- A model answers a question: FeatureFlag.enabled?(:new_ui)
- A service performs an action: PaymentProcessor.charge(order)
- An interactor orchestrates a workflow: CreateOrder.call(params: order_params)
Analogy: models are the nouns of your domain, services are the verbs, and interactors are the recipes that combine them.
Example workflow:
# Model
class User; end
class FeatureFlag; end

# Service
class WelcomeEmailSender; end

# Interactor
class OnboardNewUser
  include Interactor

  def call
    user = User.create!(context.params)
    WelcomeEmailSender.send(user)
    context.success_message = "Welcome #{user.name}!"
  rescue => e
    context.fail!(error: e.message)
  end
end
| Pattern | Best for | Bad for |
|---|---|---|
| Model (PORO or ActiveRecord) | Domain concepts, rules, states | One-time actions |
| Service | Executable actions (“do X”) | Representing domain objects |
| Initializer / config | Static rules | Rules that may grow or need dependencies |
| Interactor | Orchestrating multi-step workflows / transactions | Single-purpose state or simple rules |
Originally posted at DevBlog.
2026-01-16 05:38:00
BragDoc now automatically groups your achievements into meaningful themes. Discover what you've really been working on and tell a better story in your next performance review.
You've been tracking your Achievements in BragDoc, maybe dozens of them by now. But when it's time to write your self-review or update your resume, you're still staring at a long list, trying to figure out what story it tells.
Today we're launching Workstreams — a feature that automatically discovers the themes and patterns in your work.
Workstreams are AI-generated clusters of related achievements. Instead of a chronological list, you see your work organized by what it actually represents.
Each workstream tells a story about a theme in your career. The groupings happen automatically based on what your achievements are actually about, not just when you did them or which project they belonged to.
Performance reviews become easier. Six months' worth of Achievements can be overwhelming, but Workstreams put each one into context. Each workstream becomes a talking point. You can immediately see where you've been spending your time and what impact you've had in each area.
Career patterns become visible. Are you becoming more specialized or more generalist? Are you developing the skills you want to grow? Workstreams show you the actual shape of your work over time.
Your story writes itself. When someone asks "what have you been working on?", you have a real answer. Not a list of tasks, but a narrative about the themes that define your contributions.
Workstreams help you prepare for performance reviews by providing clear context for how the hundreds of Achievements you've logged this year fit into the themes that define your contributions. Seeing the wood for the trees is the first step in writing a great self-review.
Next week, we'll be launching the first version of the bragdoc.ai Performance Reviews tool, which builds on Workstreams and automatically creates excellent self-reviews to support your next Performance Review.
The documents it creates can be completely personalized to how your company does performance reviews, and you can create as many Performance Reviews as you want, with each one aware of and able to refer back to the last. More to come next week!
Workstreams are available to all BragDoc users with 20 or more achievements. Just click "Generate Workstreams" in your dashboard and watch as your achievements organize themselves into meaningful groups.
Want to see it in action first? Try our instant demo — it's pre-loaded with sample data so you can explore Workstreams right away.
Head to the Workstreams feature page for the full details on how it works, use cases, technical specs, and FAQs.
Workstreams is our biggest feature launch since BragDoc v2. We can't wait to hear what patterns you discover in your own career. Give it a try and let us know what you think.
2026-01-16 05:36:07
I remember perfectly the first time I held a Raspberry Pi in my hands. It looked more like a toy or a loose component than a real computer. My first reaction was doubt: could this credit-card-sized board really run a full operating system? Back then I was used to giant towers and heavy laptops, so the idea of "minimal computing" stirred up a mix of curiosity and skepticism.
My biggest frustration at the start was choosing the software. I tried installing lightweight versions of other systems, but nothing quite fit. Everything ran slowly, drivers failed, or the configuration was a technical nightmare that drained my desire to experiment. That's when I discovered that the key wasn't the hardware, but the soul you give the board.
If you want to get the most out of this ecosystem, you have to understand what the Raspbian operating system is (now officially known as Raspberry Pi OS). Learning about this system was a turning point for my projects. Based on Debian, it is a gem of optimization. What surprised me most is how it squeezes every hertz of the processor and every megabyte of RAM to deliver a real desktop experience that is fluid and, above all, extremely stable.
What began as doubt became a healthy obsession. Thanks to how lightweight Raspbian is, I turned that little board into a file server, an ad blocker for my whole house, and eventually a retro gaming console. The magic of this operating system is that it removes the barriers to entry: you don't need to be a systems engineer to get started, but it gives you every tool to become one if you have enough patience.
Today, even though many alternatives exist, I still think that for anyone starting out in microcomputing there is nothing like the official system. It's where the community is, where the tutorials always work, and where you really learn how Linux works under the hood.
If you have a Raspberry Pi gathering dust in a drawer because you didn't know what to install on it, I encourage you to give this system a chance. It's the cheapest and most fun way to reclaim sovereignty over your own technology.