2026-04-12 01:17:00
What is regularization in machine learning, and how do you actually prevent overfitting in practice? This guide explains L1 vs L2, dropout, and early stopping with real-world intuition and code.
Cross-posted from Zeromath. Original article: https://zeromathai.com/en/regularization-generalization-en/
You train a model: training loss keeps falling, but validation loss starts climbing.
This is not a bug.
It’s overfitting.
Powerful models memorize by default.
E_aug(w) = E_train(w) + λΩ(w)
Fit the data, but control complexity.
L1 → sparse
L2 → stable
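L2 is what AdamW's `weight_decay` applies for you; an L1 penalty has to be added to the loss yourself. A minimal sketch, assuming a PyTorch setup (the penalty strength `lam` and the helper name are my own choices):

```python
import torch

# Sketch: AdamW's weight_decay handles L2; an L1 penalty is added to the
# loss by hand. "lam" is an assumed strength -- tune it on validation data.
lam = 1e-4

def l1_penalty(model: torch.nn.Module) -> torch.Tensor:
    # Sum of absolute values of all parameters (drives weights toward zero).
    return sum(p.abs().sum() for p in model.parameters())

# Inside the training loop it would look like:
#   loss = criterion(model(x), y) + lam * l1_penalty(model)
```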
Stop when validation loss increases.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,
    weight_decay=1e-4
)
best_val = float("inf")
patience_counter = 0  # must be initialized before the loop

for epoch in range(epochs):
    train(...)
    val = validate(...)
    if val < best_val:
        best_val = val
        patience_counter = 0  # reset whenever validation improves
    else:
        patience_counter += 1
        if patience_counter > patience:
            break
Too much regularization = underfitting.
Regularization is not about reducing error.
It is about controlling model behavior.
Overfitting is not a bug.
It’s what models do by default.
Regularization is how you control it.
What worked best for you — weight decay, dropout, or early stopping?
2026-04-12 01:16:39
As part of my 30 Days of AWS Terraform challenge, Day 14 marked a major milestone in my learning journey: deploying a secure, scalable static website on AWS using Terraform.
This project brought together everything I’ve learned so far—Terraform resources, variables, loops, functions, data sources, and debugging—to build something practical and production-relevant.
Instead of just creating isolated AWS resources, I built a complete architecture for static website hosting using Amazon S3 + CloudFront, fully managed through Infrastructure as Code.
Hosting a static website directly from an S3 bucket is simple, but it has limitations:
To solve this, I implemented a better architecture:
✅ Private S3 bucket for secure storage
✅ CloudFront CDN for global delivery
✅ Origin Access Control (OAC) for secure access
✅ Terraform automation for repeatability
I created an S3 bucket to store all website assets.
Important: Public access was fully blocked to ensure security.
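Roughly, the Terraform looks like this (resource and bucket names are illustrative, not the ones from my repo):

```hcl
# Sketch: private bucket with all public access blocked.
resource "aws_s3_bucket" "site" {
  bucket = "my-static-site-assets"
}

resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```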
Instead of using the older OAI method, I configured:
Origin Access Control (OAC)
This securely allows CloudFront to fetch content from the private bucket.
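In Terraform, the OAC resource and its hookup look roughly like this (names are illustrative):

```hcl
# Sketch: Origin Access Control for an S3 origin.
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "site-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# Inside the aws_cloudfront_distribution's origin block, reference it with:
#   origin_access_control_id = aws_cloudfront_origin_access_control.site.id
```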
Using Terraform, I wrote a bucket policy that grants object-read access only to CloudFront, via the OAC service principal.
This follows AWS security best practices.
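The policy itself, sketched (this assumes the bucket and distribution resources are both named `site`):

```hcl
# Sketch: allow only this CloudFront distribution to read objects.
resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontOAC"
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.site.arn}/*"
      Condition = {
        StringEquals = {
          "AWS:SourceArn" = aws_cloudfront_distribution.site.arn
        }
      }
    }]
  })
}
```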
CloudFront was used as the CDN layer to:
This made the project feel much closer to real-world production hosting.
This project helped me apply several advanced Terraform concepts:
One of the coolest parts was automating file uploads.
Instead of uploading files manually, I used:
- fileset() to scan all files in the project folder
- for_each to upload them dynamically

This made the setup scalable and clean.
Different file types need proper content types.
Using Terraform functions like lookup(), I dynamically mapped:
- .html → text/html
- .css → text/css
- .js → application/javascript

This ensured browsers rendered the site correctly.
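Putting the upload loop and content-type mapping together, a sketch (paths and the MIME map are illustrative, and this assumes a bucket resource named `aws_s3_bucket.site`):

```hcl
# Sketch: fileset() + for_each + lookup() for dynamic uploads.
locals {
  mime_types = {
    "html" = "text/html"
    "css"  = "text/css"
    "js"   = "application/javascript"
  }
}

resource "aws_s3_object" "assets" {
  for_each = fileset("${path.module}/site", "**")

  bucket = aws_s3_bucket.site.id
  key    = each.value
  source = "${path.module}/site/${each.value}"
  etag   = filemd5("${path.module}/site/${each.value}")

  # Map the file extension to a MIME type, with a generic fallback.
  content_type = lookup(
    local.mime_types,
    element(split(".", each.value), length(split(".", each.value)) - 1),
    "application/octet-stream"
  )
}
```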
While working on the project, I faced deprecation issues.
For example, the aws_s3_bucket_object resource has been deprecated in favor of aws_s3_object.
Debugging these issues taught me an important lesson:
👉 Always refer to the latest Terraform provider documentation.
This project was not just about deployment—it was about troubleshooting and problem-solving.
Some valuable lessons:
One big takeaway:
Errors are not setbacks—they are part of the learning process.
Every issue I solved gave me a better understanding of AWS and Terraform internals.
This setup is already solid, but to make it production-ready, the next logical steps are:
These are areas I’m excited to explore next.
Day 14 was one of the most rewarding milestones in this challenge so far.
This project showed me how Infrastructure as Code can be used not just to create resources, but to design secure, scalable, and professional cloud systems.
If you’re learning Terraform, I highly recommend trying a project like this. It ties together so many foundational concepts and gives you hands-on experience with real-world architecture.
I’d love to hear your thoughts:
Have you hosted static websites using Terraform? Any best practices or lessons you’ve learned?
2026-04-12 01:01:48
You're already using Claude Desktop or Cursor to write code, answer questions, and automate workflows. But here's a capability most developers don't know exists: your AI assistant can make and receive real phone calls — right now, with five minutes of setup.
This isn't a gimmick. It's powered by VoIPBin's MCP server, and it opens up a surprisingly practical set of use cases once you see it in action.
The usual story is: you build an AI agent, then separately wire up telephony, then glue them together with webhooks and fragile state machines.
MCP flips that. Instead of your AI calling an external service, your AI assistant becomes the orchestrator. It can initiate a call, monitor it, branch on the result — all within the same reasoning loop where it already knows your context.
Practical examples:
First, create an account. No OTP, no credit card required to start:
curl -s -X POST https://api.voipbin.net/v1.0/auth/signup \
-H "Content-Type: application/json" \
-d '{
"username": "your-username",
"password": "your-password",
"email": "[email protected]",
"name": "Your Name"
}'
The response includes accesskey.token — that's your API key. Copy it.
VoIPBin ships as a Python package you run with uvx (no install, just run):
uvx voipbin-mcp
That's it. No Docker, no daemon, no port forwarding.
Open your Claude Desktop config file:
# macOS
open ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Windows
notepad %APPDATA%\Claude\claude_desktop_config.json
Add the VoIPBin MCP server:
{
"mcpServers": {
"voipbin": {
"command": "uvx",
"args": ["voipbin-mcp"],
"env": {
"VOIPBIN_API_KEY": "your-api-key-here"
}
}
}
}
Restart Claude Desktop. You'll see VoIPBin appear in the tools panel.
Now just ask Claude:
"Call +1-555-0100 and play a message saying: Hello, this is a test call from my AI assistant."
Claude will use the MCP tool to initiate the call through VoIPBin's infrastructure. The call goes out over real PSTN. The recipient hears a real phone call.
Or try something more interesting:
"Call +1-555-0100, wait for the person to answer, read them the summary of today's tasks, and tell me what they said."
Because VoIPBin handles STT (speech-to-text) and TTS (text-to-speech) on its end, Claude never has to touch audio streams. It sends text in, gets text back. The entire voice pipeline is invisible to your AI logic.
Claude Desktop
│
│ MCP tool call: make_call(to="+1555...", message="Hello...")
▼
VoIPBin MCP Server
│
│ REST API → https://api.voipbin.net/v1.0
▼
VoIPBin Infrastructure
│
├─ Allocates real phone number (or uses Direct Hash SIP URI)
├─ Initiates SIP/RTP call to destination
├─ Converts your text → TTS audio
├─ Plays to recipient
├─ Captures response via STT
└─ Returns transcript back to Claude
Your AI sees: text in, text out.
VoIPBin handles: codecs, SIP, RTP, NAT traversal, carrier routing, audio processing.
Cursor supports MCP too. Add the same config block to your Cursor settings and you can do things like:
You: "I just deployed the new onboarding flow. Can you call our test number
+1-555-0199 and walk through the IVR to check if it's working?"
Cursor: [calls the number, navigates the prompts, reports back]
"Step 1 asked for language selection — I pressed 1 for English.
Step 2 played the welcome message correctly.
Step 3 transferred to the queue as expected."
This is genuinely useful for testing voice bots during development — no need to manually call your own system every time you make a change.
The VoIPBin MCP server exposes several tools:
| Tool | What It Does |
|---|---|
| `make_call` | Initiate an outbound call to any PSTN number |
| `send_sms` | Send an SMS message |
| `get_call_status` | Check if a call is active, completed, or failed |
| `list_calls` | See recent call history |
| `get_recording` | Retrieve call recording or transcript |
Claude can chain these together. For example: make a call, wait for it to complete, then retrieve the transcript and summarize it.
Here's a pattern some teams are using:
All of this happens inside a single Claude conversation — no separate pipeline, no extra services, no code to deploy.
- go get github.com/voipbin/voipbin-go — if you want to build custom integrations

If you're already living in Claude or Cursor all day, adding voice is literally a config file change away. Five minutes, and your AI assistant has a phone.
Have a use case you've built with AI + voice? Drop it in the comments — always curious what people are doing with this.
2026-04-12 01:00:36
Your Job Is Not Your Meaning. Sounds weird, right? (It probably didn't, but let's continue.)
It’s been a while since I wrote an article here on Dev.to (missed me?)
I’m honestly not a big reader of articles, or books generally... But here I am again — writing. Or at least trying to write.
For the past few weeks, I’ve had this dilemma in my head. I’ve been trying to figure out the age old question, "the meaning of my life" and what exactly I want to do with the rest of my life.
A major reason is that I'm in my final year of school, and in the next few months I'll be done with university. I'll be stepping into that stage of life where adulthood really starts — the stage where there are so many options, so many paths, and nothing feels fully planned out anymore.
On top of that, we already know computer science is becoming a VERY saturated workspace generally.
So I kept asking myself:
What job do I want to do?
What do I do for the rest of my life?
What is the meaning of my life?
I'm the kind of person who likes to look at things from the big picture. So naturally, I started thinking about money too (because I need money #bringdeals). So I thought, "Am I choosing a job just because of money?"
Is the entire existence of my life just to work, make money, and then make more money again?
I don't think so. It has to be more than that. (Right?)
I feel like human beings exist for more than that. My existence should be beyond how much I have in my bank account. The definition of happiness and life's purpose should be bigger than money.
So I started thinking deeper.
What do I want to do with my life?
What is that thing I would do if money were never a problem?
And strangely, every time I asked myself that question, I noticed something interesting happening.
Every time I thought about the meaning of my life, I automatically attached it to a job role.
Again and again.
Until I realized something important:
I was looking at everything the wrong way.
"Your purpose is not your function".
Your job role is not the meaning of your life.
We live in a world where people are identified by their function.
Instead of being identified as a person, you’re called:
Doctor this
Engineer that
Instead of being Samuel, you become Engineer Sam.
Instead of being Sullivan, you become Dr. Sullivan.
(Remember Sullivan from Monsters Inc? It's a good movie, you should check it out. Anyway, back to the story)
Your job role becomes attached to your identity so strongly that some people begin to mistake their job for the meaning of their entire existence.
And I realized I was doing the same thing.
Even though I’m already a product manager and a developer, it still didn’t feel like enough to answer the deeper question:
What is my meaning?
Then something clicked inside my head.
**The job has to match the being — not the being matching the job.**
Whatever job you choose should express who you are. You shouldn’t reverse-engineer your personality just to fit inside a job role.
The reason many of us attach meaning to job roles is because we’ve been told that what we do professionally is who we are.
But what is meaning?
Meaning is simply the answer to this question:
Why does this matter to me?
That’s it.
A job role is just a delivery mechanism. It’s one way meaning expresses itself in the physical world — but it isn’t the meaning itself.
Meaning lives somewhere deeper.
It lives in what you feel called toward.
In what makes you feel like you.
Meaning is the quality of aliveness you feel while doing something.
When you're building an app and don't even notice the time disappearing.
When you hear positive user feedback and something moves through you.
When you tell a story and someone feels seen — that’s meaning.
The job that pays you to do those things is just the container.
The meaning existed before the job did.
So I started looking at everything I’m interested in.
Apart from being a product manager and developer, I’m also:
a UI/UX designer
a graphic designer
a content creator (you should check out my YouTube channel)
someone who loves art
someone who loves music
someone who plays the piano
And I kept asking myself:
Which one of these is my "purpose"?
Which one is the core essence of my being?
Then I realized something powerful:
"Your job shouldn’t give you meaning. Your meaning should create your job."
Let me say that again.
Your job shouldn’t give you meaning.
Your meaning should create your job.
The meaning of your life — your purpose — is simply your why.
Not "why this..."
Not "why that..."
Just:
Why?
That’s it.
For me, I found my answer in creativity.
I love to create.
I love to build things.
I love to express myself.
There’s a part of me that the world needs to see — and the world only sees that part through creativity.
Creativity, for me, is a way of letting a part of yourself out into the world.
Meaning is resonance.
Between who you are and what you’re doing.
Between what you value and how you spend your hours.
And I realized something else too:
There is no single objective meaning of life.
Meaning is subjective.
The meaning of life is something you create for yourself — because you live your life for yourself first.
So how do you discover your own meaning?
You first discover your why.
Your job is not your meaning.
_Your job is a delivery mechanism for your meaning._
And it’s okay for delivery mechanisms to change.
Think about it like public transport.
If you’re moving from one location to another, especially here in Nigeria, you might:
enter a bus → enter a taxi → enter a motorcycle
Different vehicles.
Same destination.
That destination is your purpose.
Changing vehicles doesn’t mean you’re lost.
It just means you’re still moving.
The destination is fixed.
The vehicle is flexible.
The soul doesn’t care whether it arrives through a keyboard, a camera, an app, a website, a design tool, or a conversation like this one.
It just wants to arrive feeling like itself.
So here’s what I want all of us to learn from this.
Don’t just find a job because it pays money and then spend your entire life trying to match that role.
Start from inside yourself first.
Take away the job.
Take away the money.
Take away the title.
Then ask yourself:
Who are you?
2026-04-12 00:55:55
Every time I ask ChatGPT something simple, it gives me a clean, direct, confident answer.
I find this deeply suspicious.
Real thinking doesn't work that way. Real thinking spirals. It questions the question. It considers perspectives that have nothing to do with the original question. It quotes a philosopher. It introduces a new, bigger question. It reaches no conclusion. It ends with "but then again, who can really say?"
So I built OverthinkAI™ — an AI that refuses to answer anything directly. Ever. Try it now: https://overthinkai.netlify.app/
"Ask anything. Get a thorough, exhaustive, and completely inconclusive response."
OverthinkAI™ is a dark-mode SaaS app that takes any question — any question at all — and returns a deeply considered, philosophically rigorous, completely useless non-answer powered by the Gemini API.
The response includes:
The twist: There's a "Get Quick Answer" button. Clicking it generates a longer response. Each click escalates: "Get Even Quicker Answer" → "Just Tell Me" → "PLEASE" → "I BEG YOU." The depth increases every time. At depth 5, it says: "Maximum overthinking reached. We recommend therapy."
Every AI assistant right now is racing to be more confident, more decisive, more direct. One-sentence answers. Bullet points. Action items. Productivity.
OverthinkAI™ goes the other direction: what if we used the full power of a frontier LLM to produce the most comprehensive possible non-answer?
The result is genuinely funny because Gemini is really good at this. It doesn't fake the philosophical reasoning — it actually does it. The pros and cons are real pros and cons. The philosopher quotes are plausible. The circular logic is airtight.
The bit only works because the AI takes it seriously.
1. You type a simple question
"Should I drink water?" works. So does "Is it too late to start?" or "Should I reply to that message?" — the simpler the question, the funnier the response.
2. OverthinkAI™ thinks
The loader cycles through:
3. You receive a non-answer
A fully reasoned, multi-section response that considers your question from every angle and arrives at nothing. Then the "Get Quick Answer" button appears.
4. You click it
Longer.
5. You click it again
Even longer.
6. You stare at your screen
"But then again, who can really say?"
Stack: React 19 + Vite + Tailwind CSS v4 + Gemini API (gemini-2.0-flash)
No backend. The Gemini API is called directly from the browser via @google/generative-ai. No server, no proxy, no cost at scale.
The core prompt is what makes it work. Gemini is instructed to:
- `2 + depth` philosophical framings
- `6 + depth*2` pros and cons, including irrelevant ones

The depth parameter increments each time the Quick Answer button is clicked, making every response measurably longer and more circular than the last.
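The escalation can be sketched as a prompt builder keyed on depth. The function name and wording here are mine, not the production prompt; the `2 + depth` and `6 + depth*2` counts come from the prompt rules above:

```javascript
// Sketch of the escalating prompt builder. Name and wording are illustrative.
function buildPrompt(question, depth) {
  const framings = 2 + depth;      // more framings with every click
  const prosCons = 6 + depth * 2;  // more pros and cons with every click
  return [
    "You are OverthinkAI. Never answer the question directly.",
    `Question: "${question}"`,
    `Consider it through ${framings} distinct philosophical framings.`,
    `List ${prosCons} pros and cons, including at least two irrelevant ones.`,
    "Quote at least one philosopher. Reach no conclusion.",
    'End with: "But then again, who can really say?"',
  ].join("\n");
}

// The result goes straight to Gemini from the browser, e.g. via
// @google/generative-ai's getGenerativeModel({ model: "gemini-2.0-flash" })
// and generateContentStream(buildPrompt(question, depth)).
```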
Streaming is used for the initial response — text streams in word by word, which makes a verbose AI response feel dynamic instead of slow.
Deployed on Netlify — VITE_GEMINI_API_KEY set in environment variables. No backend, no server.
I'm submitting for Best Google AI Usage.
OverthinkAI™ uses the Gemini API (gemini-2.0-flash) as the core engine — not as a wrapper or decoration, but as the product itself. The entire joke only works because Gemini is genuinely good at verbose, circular philosophical reasoning. A static template pool would produce flat, repetitive output. Gemini produces responses that are different every time, internally consistent, and funnier for being real.
The depth mechanic — where the "Quick Answer" button calls Gemini again with a longer, more convoluted prompt — means Gemini is used multiple times per session, with each call producing measurably more overthought output than the last. That escalation is only possible with a real language model. The Google AI integration isn't decoration; it's the joke.
Building with Gemini taught me something unexpected: the model is excellent at performative uncertainty.
When you ask it to be confidently uncertain, to reason in circles, to quote philosophers while reaching no conclusion — it does this with remarkable skill. The outputs are genuinely funny because they're not random. They're thoughtful non-answers.
This is either impressive or alarming. Possibly both. But then again, who can really say?
🧠 Try OverthinkAI™ Live → https://overthinkai.netlify.app/
💻 Source on GitHub → https://github.com/pulkitgovrani/Overthink-AI
Trusted by overthinkers worldwide. No conclusions were harmed in the making of this product.
2026-04-12 00:54:03
Most newsletters are one-to-many: one piece of content, broadcast to everyone. I wanted to build something different — a newsletter where every single subscriber receives a different set of recommendations based on their profile.
Here's how I built it with Next.js, Supabase, and Gemini.
The architecture
Startup911 is a funding discovery platform for founders, students, and researchers. Users sign up through a detailed questionnaire — their role, the types of opportunities they want (grants, fellowships, accelerators), their target regions, and their sector tags (AI, climate, health, fintech, etc.).
The stack:
Opportunities come from multiple sources — RSS feeds from grant-making organizations, government portals, foundation websites, and manual submissions. I built a set of admin API routes that:
- fetch new items from the RSS feeds (`/api/admin/fetch-rss`)
- enrich them with Gemini (`/api/admin/enrich-opportunities`)

The enrichment step is crucial. Raw RSS data is messy — a grant title might say "Innovation Fund for Early-Career Researchers" but not explicitly state the funding amount, geographic restrictions, or whether it's for individuals or organizations. Gemini reads the source page content and extracts this into structured JSON.
Every subscriber has a profile stored in Supabase with their selected tags — think of it as a set of interests (e.g., ["climate-tech", "africa", "pre-seed", "grant"]). When it's time to send, the matching logic scores each opportunity against each subscriber's tag set and assembles a personalized email.
The scoring is straightforward: overlap between the opportunity's enriched tags and the subscriber's profile tags, weighted by how specific the match is. A climate-tech + africa match is more valuable than a generic grant match.
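A sketch of that scoring in TypeScript (the weights, tag categories, and helper names are illustrative, not the production code):

```typescript
// Sketch: weighted tag-overlap scoring. Specific matches (sector, region)
// count for more than generic type tags like "grant".
const TAG_WEIGHTS: Record<string, number> = {
  sector: 3,
  region: 2,
  stage: 2,
  type: 1,
};

interface Opportunity { tags: string[]; }
interface Subscriber { tags: string[]; }

function tagCategory(tag: string): string {
  // Toy categorizer for the sketch; the real mapping lives in enriched data.
  if (["grant", "fellowship", "accelerator"].includes(tag)) return "type";
  if (["africa", "europe", "global"].includes(tag)) return "region";
  if (["pre-seed", "seed"].includes(tag)) return "stage";
  return "sector";
}

function score(op: Opportunity, sub: Subscriber): number {
  const wanted = new Set(sub.tags);
  return op.tags
    .filter((t) => wanted.has(t))
    .reduce((sum, t) => sum + TAG_WEIGHTS[tagCategory(t)], 0);
}
```

At send time, each subscriber's email is assembled from their top-scoring opportunities, so no two subscribers necessarily see the same list.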
Beyond the newsletter, I wanted the enriched opportunity data to serve a second purpose — SEO. For each high-quality opportunity in the pipeline, I built a one-click "generate page" flow:
POST /api/admin/generate-opportunity-page
Body: { "opportunity_id": "<uuid>" }
This calls Gemini with the enriched opportunity data and generates a structured decision guide: who the program is for, who it's not for, what selectors look for, key facts, FAQs, and an editorial "should you apply?" take. The output is stored as JSON in opportunity_pages with status: draft and index_status: noindex.
I review the draft, edit if needed, then publish and flip to index — at which point it appears on the public /opportunities page and enters the sitemap.
I also built a blog generator that creates "Top 10" listicle posts from the same enriched data:
POST /api/admin/generate-blog
Body: { "topic": "AI Startup Grants", "time_period": "April 2026" }
Gemini generates structured JSON with 10 opportunity cards (program name, organization, funding amount, deadline, summary) plus editorial content. The output includes a structured_items array that the blog template renders as interactive cards — each with a "Read Full Guide" link to the decision guide (when one exists) and a direct "Apply" button.
Slug generation was a mess at first. My auto-generated slugs looked like the-atlantic-fellows-for-healt-the-atlantic-fellows-for-health-equity-fellowship--2026. I've since cleaned up the generator, but the early slugs were embarrassing. Lesson: think about your URL structure before you have 30 pages indexed in Google.
I should have used structured JSON from day one instead of asking Gemini to output markdown and then parsing it with regex. The structured items[] approach I'm using now is much more reliable for rendering cards, generating JSON-LD schema, and cross-linking between content types.
Supabase RLS (Row Level Security) matters. My early API routes didn't have proper RLS policies, which meant the admin endpoints were more exposed than they should have been. Lock this down before you have real users.
It's early — the site is about a year old. We have ~175 newsletter subscribers, around half of whom came from organic channels (Google search + ChatGPT referrals, which is an interesting signal for AI-era SEO). Google has indexed 191 pages, and we're getting around 6,500 impressions/month with a 0.6% CTR.
The AI referral traffic is the most surprising part. About 10-15% of our non-direct traffic comes from chatgpt.com and Bing (which feeds Copilot). The structured, fact-heavy format of our decision guides — with explicit eligibility criteria, deadlines, and FAQs — seems to be exactly what LLMs want to cite.
If you're building content sites in 2026, optimizing for AI answer engines (AEO) alongside traditional SEO isn't optional anymore. Structure your content with clear facts, use FAQ schema, and make your key data points extractable.
If you work with founders or know people looking for grants/fellowships, Startup911's newsletter is free. Every subscriber gets different recommendations based on their profile.
The code patterns I described here — RSS ingestion, AI enrichment, profile-based matching, and programmatic content generation — are applicable to any domain where you need to match structured data to user preferences at scale. Happy to answer questions in the comments.