RSS preview of the blog of The Practical Developer

Your 2025 Marketing Budget Probably Lied to You: A Q1 2026 Audit Framework

2025-12-03 13:10:05

Look, we all started 2025 with a beautiful spreadsheet. Color-coded tabs. Projected ROI that would make a CFO weep with joy. Maybe you even had a deck with those gradient charts that look impressive in meetings.

And then reality happened.

Here's the thing about marketing budgets: they're aspirational documents masquerading as financial planning. By December, that meticulously planned allocation looks about as accurate as a weather forecast from six months ago. The paid social budget got blown in Q2 when CPMs spiked. The content strategy pivoted three times. And that experimental budget for "emerging platforms"? Yeah, nobody can quite remember what that money actually bought.

But December is when we get honest. Not because we're suddenly virtuous, but because Q1 2026 budget decisions are happening whether we're ready or not.

So let's audit this thing properly. Not the sanitized version for your boss, the real one.

Start With What Actually Happened (Not What You Reported)

Pull your actual spending data. Not the budget tracker you update monthly with optimistic projections. The real numbers from your accounting system, payment processors, and that one company card that three people share.

I've done this exercise with dozens of marketing teams, and the gap between "budgeted spend" and "actual spend" averages around 23%. Sometimes it's overspend in channels that worked. More often, it's underspend in initiatives that never quite launched because Karen left in March and nobody picked up her projects.

Break it down by channel:

  • Paid advertising (search, social, display, programmatic)
  • Content production (writing, design, video, freelancers)
  • Tools and platforms (all those SaaS subscriptions)
  • Agency and contractor fees
  • Events and sponsorships
  • That "miscellaneous" category that's somehow 18% of total spend

For each channel, you need three numbers: budgeted, actual spent, and actual results. Yes, results. Revenue, leads, whatever your north star metric is. Because spending the budgeted amount isn't an achievement if it generated nothing.

The Attribution Problem Nobody Wants to Discuss

Here's where it gets messy.

You spent $47,000 on LinkedIn ads in 2025. Great. What did that generate? If you're using last-click attribution, you probably have a number. If you're being honest about how B2B buying actually works, you know that number is fiction.

That LinkedIn ad contributed to deals. So did the email campaign. And the webinar. And the sales rep who remembered to follow up. Attribution modeling promises to solve this. In practice, it mostly generates reports that everyone interprets differently in budget meetings.

So here's what actually works: segment your analysis.

For channels with clear attribution (direct response, bottom-funnel PPC), use the data you have. Google Ads conversions, Facebook lead forms, that Shopify integration that actually tracks properly—these give you real numbers.

For everything else—brand awareness, content marketing, that podcast sponsorship—you need proxy metrics. Traffic trends. Survey data on brand recall. Pipeline velocity changes. Deals that mentioned specific content in the sales notes.

Is this perfect? No. But pretending you have perfect attribution when you don't is worse. At least this way you're honest about the confidence level of each decision.

If you're still figuring out how content fits into your broader strategy, the principles in AI in Content Marketing: 2025 Strategy Guide apply here—especially the parts about measuring content contribution beyond last-click metrics.

Find Your Expensive Mistakes

Every budget has them. The channel that consumed resources and produced approximately nothing. The tool you bought in January and last logged into in March. The agency relationship that's mostly status calls at this point.

I'm not talking about experiments that didn't work. Those are fine. Necessary, even. I'm talking about the stuff you kept funding out of inertia or because admitting failure felt awkward.

One client I worked with was spending $3,200 monthly on a marketing automation platform. Sounds reasonable. Except they were using exactly two features: email sending and a basic contact database. Features they could get from a $300/month tool. The expensive platform had been "essential" when they bought it three years ago. Now it was just expensive.

Look for:

  • Tools with declining usage (check login data, feature utilization)
  • Channels with increasing costs but flat or declining results
  • Contractors or agencies you're paying retainers to but not actually using
  • Ad campaigns running on autopilot that nobody's optimized in quarters
  • Subscriptions for team members who left (happens more than you'd think)

One brutal exercise: calculate cost per result for each major channel. Not ROAS or fancy metrics. Dollars spent divided by leads generated, or revenue created, or whatever matters to your business.

Some channels will look terrible. That's fine if they serve a strategic purpose (brand awareness, top-of-funnel reach). It's not fine if they're just... there.
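
If it helps to see the arithmetic written down, here is a minimal Python sketch of that cost-per-result calculation. Every channel name and number below is a placeholder, not a benchmark:

# Hypothetical 2025 spend and results per channel; swap in your real numbers.
channels = {
    "paid_search": {"spend": 120_000, "leads": 800},
    "paid_social": {"spend": 47_000, "leads": 150},
    "content": {"spend": 60_000, "leads": 300},
}

for name, data in channels.items():
    cost_per_result = data["spend"] / data["leads"]  # dollars spent / results generated
    print(f"{name}: ${cost_per_result:,.0f} per lead")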

What Actually Worked (And Why)

Now the good part.

Every budget has bright spots. Channels that overperformed. Campaigns that hit. That one piece of content that somehow generated leads for eight months straight.

But here's what matters: understanding why they worked.

Did your Google Ads performance improve because you got better at targeting, or because a competitor stopped bidding on your keywords? Did that viral LinkedIn post succeed because of brilliant strategy, or because you accidentally timed it with a news cycle?

Context matters for 2026 planning. If something worked because of a one-time condition, you can't just budget for it to work again.

I watched a SaaS company triple their content budget for 2025 because one whitepaper generated 400 leads in 2024. Sounds smart. Except that whitepaper worked because it addressed a specific regulatory change that happened once. They spent 2025 trying to recreate lightning in a bottle.

For each successful channel or campaign, document:

  • What specifically drove results (audience, messaging, timing, format)
  • Whether those conditions still exist or will exist in Q1 2026
  • Whether the success was scalable or a one-time win
  • What you learned that applies to other channels

The goal isn't just to fund what worked. It's to understand the principles behind what worked so you can apply them elsewhere.

The Q1 2026 Reality Check

Okay, you've audited 2025. Now comes the hard part: planning Q1 2026 with actual wisdom instead of hopeful projections.

First, acknowledge what's different. Market conditions change. Your competition isn't static. The platforms you use keep "improving" things (usually by making them more expensive or complicated). And your business priorities might have shifted.

Q1 is weird for marketing budgets. You've got fresh annual budget, which feels abundant. But you also have aggressive Q1 targets because someone in finance decided 30% of annual goals should happen in the first three months. These forces create interesting tensions.

Here's a framework that actually works:

Protect Your Base (50-60% of budget)
These are channels with proven ROI that you understand. The stuff that might not be exciting but reliably generates results. For most B2B companies, this is search ads, email marketing, and core content production. For e-commerce, it's probably paid social, Google Shopping, and retention campaigns.

Don't get creative with your base. Optimize it, sure. But this isn't where you experiment.

Scale Your Winners (25-35% of budget)
Those channels from 2025 that worked and have room to grow. The key phrase is "room to grow." Doubling budget doesn't double results in most channels. You hit saturation points, efficiency curves flatten, and CPMs increase as you expand targeting.

Be realistic about scale potential. If a channel generated $100K in revenue from $10K spend, can you actually deploy $50K effectively? Or will you just bid up your own costs and see diminishing returns?

Experiment Intelligently (10-20% of budget)
New channels, new tactics, new approaches. This is your learning budget. The stuff that might fail but could also become next year's "scale your winners" category.

The trick with experimental budget: define success criteria upfront. Not "we'll see how it goes." Actual metrics and timeframes. "If this doesn't generate 50 qualified leads by end of Q1, we're cutting it." That kind of clarity.

And here's the thing nobody wants to hear: some of your 2025 "base" budget needs to move to experimental. Because markets shift, channels mature, and what worked last year stops working. If you're not actively testing new approaches, you're slowly becoming obsolete.

Tools and Platforms: The Subscription Audit

Let's talk about your marketing tech stack. Or as I like to call it, "the collection of SaaS products you swore you'd fully implement."

According to ChiefMartec, the average marketing team uses 23 different tools. I've seen teams with 40+. And here's what's wild: most teams actively use maybe 30% of the features they're paying for.

Go through every subscription:

  • When did you last use it? (Actually use it, not just log in)
  • What specific value does it provide that you can't get cheaper elsewhere?
  • How many team members actually access it?
  • Is there overlap with other tools you're paying for?

SEMrush, Ahrefs, and Moz all do similar things. You probably don't need all three. HubSpot, Marketo, and Pardot all do marketing automation. Pick one and actually use it properly instead of paying for multiple platforms at partial capacity.

One team I advised was paying for both Canva Pro and Adobe Creative Cloud. Plus a stock photo subscription. Plus a video editing tool. Plus a separate design tool for social media. Total cost: about $8,000 annually. After consolidation: $3,200, with better integration and less tool-switching.

The question isn't "is this tool useful?" Everything's useful. The question is "is this tool worth its cost relative to alternatives and our actual usage?"

Building Your Q1 2026 Allocation

You've got the data. You know what worked, what didn't, and what's worth trying. Now you need an actual budget allocation that won't fall apart by February.

Start with revenue targets. Work backwards. If you need to generate $X in revenue, and your average customer value is $Y, and your close rate is Z%, you need this many leads. Which means this much traffic, this much engagement, this much reach.

Yes, it's a funnel. Funnels are boring. They're also how math works.
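
As a rough sketch of that backwards math (the figures are illustrative placeholders, not targets):

# Work backwards from a revenue target to the lead volume you need.
revenue_target = 500_000        # Q1 revenue goal in dollars (placeholder)
avg_customer_value = 10_000     # average deal size in dollars (placeholder)
close_rate = 0.20               # share of qualified leads that become customers (placeholder)

customers_needed = revenue_target / avg_customer_value   # 50 customers
leads_needed = customers_needed / close_rate              # 250 qualified leads
print(f"~{leads_needed:.0f} qualified leads needed to hit ${revenue_target:,}")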

Then allocate budget to channels based on:

  1. Proven efficiency (cost per lead, cost per acquisition)
  2. Scale potential (can this channel actually deliver the volume you need?)
  3. Speed (how quickly does this channel generate results?)
  4. Control (how much can you actually influence outcomes?)

Q1 is not the time for long-bet strategies. You need channels that can generate results within the quarter. The experimental brand awareness campaign can wait until Q2 when you've got some wins under your belt.

Be specific with allocations. Not "$50K for paid social" but "$50K for paid social: $30K Facebook/Instagram, $15K LinkedIn, $5K testing TikTok ads for the younger demographic segment."

And build in contingency. Not a "miscellaneous" line item where budget goes to die. An actual contingency fund for mid-quarter opportunities or adjustments. I usually recommend 10-15% of total budget.

Because here's what will definitely happen in Q1: something will change. A competitor will do something unexpected. A platform will update its algorithm. A piece of content will overperform and you'll want to amplify it. Having flexible budget to respond is the difference between capitalizing on opportunities and watching them pass by.

The Conversation With Finance

At some point, you need to defend this budget to someone who thinks marketing is "the department that makes the brochures."

Here's what works: speak their language.

Don't talk about impressions, engagement, or brand awareness. Talk about customer acquisition cost, lifetime value, and payback period. Show them the math that connects budget to revenue.

"We spent $X, which generated Y leads, which closed at Z rate, which produced $W in revenue, which gives us a payback period of M months."

That's a sentence finance people understand.

Also, be honest about what you don't know. "This experimental budget might not work, but here's the upside if it does, and here's how we'll know within 60 days whether to continue or cut it."

CFOs respect intellectual honesty more than marketing optimism. They've heard too many promises about "viral campaigns" and "exponential growth." Show them you're making evidence-based decisions with clear success metrics, and they'll usually give you the budget.

Making It Stick

The final piece: actually following the budget you just created.

I know, revolutionary concept.

Set up monthly reviews. Not just "did we spend the money" but "did we get the results we expected?" If a channel is underperforming by mid-February, you need to know and adjust. Not in March when you've already blown through the quarterly budget.

Use a simple dashboard. Budgeted spend, actual spend, target results, actual results. Four columns. Update it weekly. Share it with your team so everyone knows where things stand.

And give yourself permission to adjust. A budget isn't a prison sentence. If something's not working, stop doing it. If something's working better than expected, shift resources toward it.

The goal isn't budget compliance. It's business results.

What You're Really Doing Here

Look, this whole exercise isn't really about spreadsheets and allocation percentages. It's about getting honest with yourself about what marketing actually accomplished in 2025 and what it can realistically accomplish in Q1 2026.

Most marketing budgets are built on optimism and last year's numbers plus 20%. This approach—starting with brutal honesty about what worked and what didn't—gives you something better. Not perfect, but better.

You'll still make mistakes. Some Q1 bets won't pay off. Some channels will underperform. That's fine. The difference is you'll know faster, adjust quicker, and waste less money on things that aren't working.

And next December, when you're auditing 2026, you'll have better data, clearer insights, and maybe even a budget that mostly resembles what actually happened.

Start the audit this week. Not January. By the time Q1 actually starts, you want this done so you can focus on execution instead of planning.

Your 2025 budget lied to you. Your 2026 budget doesn't have to.

How I Built a Multi-Platform AI Chatbot with n8n and LangBot

2025-12-03 13:08:59

Connecting n8n's visual workflow automation with LangBot's multi-platform bot framework creates a powerful, code-free way to deploy AI chatbots across QQ, WeChat, Discord, Telegram, Slack, and more. This tutorial shows you how to integrate these tools in minutes.

What You'll Need

  • Python 3.8+ installed
  • Node.js 18+ installed
  • npm or npx available
  • 15 minutes of your time

Deploy LangBot in 3 Commands

LangBot is a production-ready bot framework that connects to multiple messaging platforms and AI services including Dify, FastGPT, Coze, OpenAI, Claude, and Gemini. Deploy it instantly:

cd your-workspace
mkdir -p langbot-instance && cd langbot-instance
uvx langbot@latest

On first launch, LangBot initializes automatically. Open http://127.0.0.1:5300 in your browser.

LangBot initialization screen

Register your admin account when prompted. You'll land on the dashboard where you can manage bots, models, pipelines, and integrations.

LangBot dashboard

Set Up n8n Workflow Automation

n8n is an open-source automation platform with 400+ integrations and powerful AI capabilities. Launch it locally:

cd your-workspace
mkdir -p n8n-data
export N8N_USER_FOLDER=$(pwd)/n8n-data
npx n8n

Visit http://127.0.0.1:5678 and create your owner account.

n8n initial setup

Build Your AI Workflow

Create a new workflow in n8n. You'll need two essential nodes:

n8n workflow editor

Add the Webhook Trigger

Click "+" on the canvas and add a Webhook node. Configure it:

  • HTTP Method: POST
  • Response Mode: Streaming (enables real-time chat responses)
  • Authentication: None (adjust for production)

Webhook node configuration

Add the AI Agent

Press Tab, navigate to the "AI" category, and select AI Agent.

Configure the Chat Model: Click "Chat Model" and choose "OpenAI Chat Model". Add your credentials:

  • API Key: Your OpenAI API key (or compatible service key)
  • Base URL: For OpenAI alternatives like Claude, Gemini, or local models, update to your provider's endpoint

Critical Step - Fix the Prompt Source: By default, the AI Agent expects a Chat Trigger node, which won't work with webhooks. Here's how to fix it:

  1. Find "Source for Prompt (User Message)" in the AI Agent settings
  2. Change from "Connected Chat Trigger Node" to "Define below"
  3. Switch to "Expression" mode
  4. Enter: {{ $json.body }}

This expression pulls the user's message from the webhook request body.

Configured webhook with AI Agent

Activate and Get Your Webhook URL

Save the workflow and toggle the activation switch (top-right). Switch to the "Production URL" tab and copy the webhook URL:

http://localhost:5678/webhook/{your-webhook-id}

Connect LangBot to n8n

Back in the LangBot dashboard, navigate to Pipelines and click the default "ChatPipeline".

LangBot pipelines page

Switch to the AI tab and select "n8n Workflow API" from the Runner dropdown. Configure:

  • Webhook URL: Paste your n8n production webhook URL
  • Authentication Type: None (match your n8n webhook settings)
  • Timeout: 120 seconds
  • Output Key: response

Click Save.

Test It Out

In the Pipeline editor, click "Debug Chat" on the left sidebar. Send a test message like "What is LangBot?"

If everything works, you'll see LangBot send the message to n8n, where the AI Agent processes it and streams back a response.

Troubleshooting

Error: "Expected to find the prompt in an input field called 'chatInput'"

This means the AI Agent is still configured for a Chat Trigger node. Fix it:

  1. Open the AI Agent configuration
  2. Set "Source for Prompt (User Message)" to "Define below"
  3. Switch to Expression mode
  4. Enter: {{ $json.body }}

Test Your Webhook Directly

Verify the webhook works with curl:

curl -X POST http://localhost:5678/webhook/your-webhook-id \
  -H "Content-Type: application/json" \
  -d '{"body": "Hello, can you introduce yourself?"}'

You should receive streaming JSON with the AI's response.
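
If you prefer testing from a script instead of curl, here is a rough Python equivalent. It assumes the third-party requests package is installed and uses the same placeholder webhook ID as above:

import requests

url = "http://localhost:5678/webhook/your-webhook-id"
payload = {"body": "Hello, can you introduce yourself?"}  # same payload shape as the curl test

# stream=True lets us print the response chunks as n8n streams them back.
with requests.post(url, json=payload, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        if chunk:
            print(chunk, end="", flush=True)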

How the Integration Works

Here's the complete flow:

  1. User sends a message via QQ, WeChat, Discord, Telegram, Slack, LINE, or any LangBot-supported platform
  2. LangBot's Pipeline receives the message and calls the n8n Workflow API
  3. n8n's Webhook node captures the request and passes it to the AI Agent
  4. The AI Agent uses OpenAI, Claude, Gemini, or your configured LLM to generate a response
  5. n8n streams the response back to LangBot
  6. LangBot delivers the response to the user on their original platform

Why This Combination Works

LangBot + n8n unlocks powerful capabilities:

  1. No-Code AI Logic: Design conversation flows visually in n8n without touching code
  2. Multi-Platform Reach: Deploy the same AI across QQ, WeChat, Discord, Telegram, Slack, LINE, DingTalk, and Lark simultaneously
  3. Flexible AI Models: Swap between OpenAI GPT, Anthropic Claude, Google Gemini, Coze, Dify, local models, and more
  4. Rich Integrations: Connect n8n's 400+ integrations - databases, APIs, Notion, Airtable, Google Sheets, Slack, and beyond
  5. Tool-Calling Agents: AI Agent can trigger n8n tools like database queries, API calls, or custom functions
  6. Workflow Extensions: Add preprocessing, content moderation, logging, or custom business logic

Perfect For:

  • Enterprise customer support bots
  • Knowledge base Q&A systems
  • Multi-platform community management
  • Task automation assistants
  • Unified chat interfaces for teams

Next Steps

Extend your bot further:

  • Integrate Dify or FastGPT for advanced RAG (retrieval-augmented generation)
  • Add vector database nodes (Pinecone, Qdrant, Weaviate) for knowledge retrieval
  • Connect business APIs for real-time data
  • Implement conversation memory and context tracking
  • Add content filtering and moderation workflows
  • Use Langflow or Coze for additional AI orchestration

This integration gives you the flexibility of code-based AI frameworks like LangChain with the simplicity of visual workflow builders - all while reaching users across every major messaging platform.

Ready to deploy your multi-platform AI assistant? Start with LangBot and n8n today.

ML Observability & Monitoring — The Missing Layer in ML Systems (Part 7)

2025-12-03 13:02:56

🔎 ML Observability & Monitoring — The Missing Layer in ML Systems

Part 7 of The Hidden Failure Point of ML Models Series

Most ML systems fail silently.

Not because models are bad…

Not because algorithms are wrong…

But because nobody is watching what the model is actually doing in production.

Observability is the most important layer of ML engineering — yet also the most neglected.

This is the part that determines whether your model will survive, decay, or collapse in the real world.

❗ Why ML Systems Need Observability (Not Just Monitoring)

Traditional software monitoring checks:

  • CPU
  • Memory
  • Requests
  • Errors
  • Latency

This works for software.

But ML models are different.

They fail in ways standard monitoring can’t detect.

ML systems need three extra layers:

  1. Data monitoring
  2. Prediction monitoring
  3. Model performance monitoring

Without these, failures remain invisible until business damage is done.

🎯 What ML Observability Actually Means

Observability answers 3 questions:

  1. Is the data still similar to what the model was trained on?
  2. Is the model making consistent predictions?
  3. Is the model still performing well today?

If any answer becomes No, your model is silently breaking.

⚡ The Three Types of Monitoring Every ML System Must Have

1) 🧩 Data Quality & Data Drift Monitoring

Your model is only as good as the data flowing into it.

What to track:

  • Missing values
  • Unexpected nulls
  • New categories
  • Value distribution changes
  • Range changes
  • Outliers
  • Schema mismatches

Example:

A location-based model starts receiving coordinates outside valid regions.

Accuracy drops.

No errors are thrown.

But predictions degrade massively.

You won’t know unless you monitor data.
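
As one possible shape for this kind of check, here is a minimal Python sketch using pandas, numpy, and scipy. The latitude/longitude column names, thresholds, and DataFrames are assumptions made for illustration; PSI is implemented by hand and the KS test comes from scipy.stats:

import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

def check_incoming_batch(train_df: pd.DataFrame, live_df: pd.DataFrame) -> list:
    alerts = []
    # Data quality: unexpected nulls and coordinates outside valid regions.
    if live_df["latitude"].isna().mean() > 0.01:
        alerts.append("latitude: more than 1% missing values")
    if not live_df["latitude"].between(-90, 90).all():
        alerts.append("latitude: values outside the valid range")
    # Drift: PSI and a two-sample KS test per feature.
    for col in ["latitude", "longitude"]:
        if psi(train_df[col].dropna(), live_df[col].dropna()) > 0.2:  # 0.2 is a common alert threshold
            alerts.append(f"{col}: PSI above 0.2, significant drift")
        if ks_2samp(train_df[col].dropna(), live_df[col].dropna()).pvalue < 0.01:
            alerts.append(f"{col}: KS test rejects the same-distribution hypothesis")
    return alerts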

2) 🔁 Model Prediction Monitoring

Even if data is fine, outputs can still behave strangely.

What to track:

  • Prediction distribution
  • Sudden spikes in a single class
  • Prediction confidence dropping
  • Unusual drift in probability scores
  • Segment-level prediction stability

Example:

A fraud model suddenly outputs:

probability_of_fraud = 0.01 for 97% of transactions

Looks normal at infrastructure level.

But prediction behavior has collapsed.
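
A small sketch of what such a check could look like, assuming you log each batch's fraud probabilities; the thresholds and baseline values below are illustrative assumptions:

import numpy as np

def check_prediction_health(scores, baseline_low_share=0.80, tolerance=0.10):
    """Flag a batch whose share of near-zero fraud scores jumps far above its historical baseline."""
    scores = np.asarray(scores)
    low_share = float((scores < 0.05).mean())  # fraction scored as "almost certainly not fraud"
    alerts = []
    if low_share > baseline_low_share + tolerance:
        alerts.append(f"{low_share:.0%} of scores are near zero (baseline about {baseline_low_share:.0%})")
    return alerts

# The collapsed fraud model from the example: 97% of transactions scored at 0.01.
batch = np.concatenate([np.full(970, 0.01), np.random.uniform(0.2, 0.9, 30)])
print(check_prediction_health(batch))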

3) 🎯 Model Performance Monitoring (Real-World Metrics)

This is the hardest part because:

  • Ground truth often arrives days or weeks later
  • You don’t immediately know whether predictions were correct

Two techniques solve this:

A) Delayed Performance Tracking

Compare predictions vs true labels when they arrive.

B) Proxy Performance

Real-world signals such as:

  • Chargeback disputes
  • Customer complaints
  • Manual review overrides
  • Acceptance/rejection patterns

These indicate model quality before ground truth arrives.
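
For technique (A), one possible shape is to join logged predictions with ground truth as it arrives and recompute metrics on whatever slice is labelled so far. The table and column names here are hypothetical, and scikit-learn is assumed to be available:

import pandas as pd
from sklearn.metrics import precision_score, recall_score

def delayed_performance(pred_log: pd.DataFrame, labels: pd.DataFrame, threshold: float = 0.5) -> dict:
    """Score logged predictions against ground truth that arrived later."""
    joined = pred_log.merge(labels, on="transaction_id", how="inner")
    y_pred = (joined["fraud_score"] >= threshold).astype(int)
    y_true = joined["is_fraud"].astype(int)
    return {
        "labelled_fraction": len(joined) / max(len(pred_log), 1),  # how much ground truth exists yet
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "recall": recall_score(y_true, y_pred, zero_division=0),
    }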

🧭 Complete ML Observability Blueprint

Your production ML system should monitor:

Data Layer

  • Schema violations
  • Missing values
  • Drift (PSI, JS divergence, KS test)
  • Outliers
  • Category shifts

Feature Layer

  • Feature drift
  • Feature importance stability
  • Feature correlation changes
  • Feature availability

Prediction Layer

  • Output distribution
  • Confidence distribution
  • Class imbalance
  • Segment-wise prediction consistency

Performance Layer

  • Precision/Recall/F1 over time
  • AUC
  • Cost metrics
  • Latency
  • Throughput

Operational Layer

  • Model serving errors
  • Pipeline failures
  • Retraining failures

🧠 Why Most Teams Ignore Observability (But Shouldn’t)

Common excuses:

  • “We’ll add monitoring later.”
  • “We don’t have infrastructure for this.”
  • “The model is working fine right now.”
  • “Drift detection is too complicated.”

But ignoring observability leads to:

  • Silent model decay
  • Wrong predictions with no alerts
  • Millions in business losses
  • Loss of user trust
  • Late detection of catastrophic errors

🔥 Real Failures Caused by Missing Observability

1) Credit Scoring System Failure

A bank’s ML model approved risky users because a single feature drifted 2 months earlier.

Nobody noticed.

Approval rates skyrocketed.

Losses followed.

2) Ecommerce Recommendation Collapse

A feature pipeline failed silently.

All products returned the same embedding vector.

Users saw irrelevant recommendations for weeks.

3) Fraud Detection Blind Spot

Model performance dropped suddenly during festival season.

Reason: new fraud patterns.

No drift detection → fraud surged.

🛠 Practical Tools & Techniques for ML Observability

Model Monitoring Platforms

  • Arize AI
  • Fiddler
  • WhyLabs
  • Evidently AI
  • MonitoML
  • Datadog + custom model dashboards

Statistical Drift Methods

  • Population Stability Index (PSI)
  • KL Divergence
  • Kolmogorov–Smirnov (KS) test
  • Jensen–Shannon divergence

Operational Monitoring

  • Prometheus
  • Grafana
  • OpenTelemetry

Feature Store Monitoring

  • Feast
  • Redis-based feature logs
  • Online/offline feature consistency checks
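
For that last point, a consistency check can be as simple as sampling entities and comparing the values the online store serves against what the offline pipeline computed. Everything below (the key name, numeric-only features, the tolerance) is an assumption made for the sketch:

import pandas as pd

def online_offline_skew(online_df: pd.DataFrame, offline_df: pd.DataFrame,
                        key: str = "entity_id", tolerance: float = 1e-6) -> dict:
    """Return, per feature, the share of sampled entities whose online and offline values disagree."""
    merged = online_df.merge(offline_df, on=key, suffixes=("_online", "_offline"))
    features = [c[: -len("_online")] for c in merged.columns if c.endswith("_online")]
    mismatch_rates = {}
    for feature in features:
        diff = (merged[f"{feature}_online"] - merged[f"{feature}_offline"]).abs()
        mismatch_rates[feature] = float((diff > tolerance).mean())
    return mismatch_rates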

🧩 The Golden Rule

If you aren’t monitoring it, you’re guessing.

And guessing is not ML engineering.

Observability is not optional.

It is the backbone of reliable ML systems.

✔ Key Takeaways

  • Models decay silently: without monitoring, you won't see it happening.
  • Observability ≠ monitoring: ML needs deeper tracking than standard software.
  • Data drift kills models: you must detect it early.
  • Prediction drift matters: output patterns reveal issues fast.
  • Ground truth is delayed: use proxy metrics in the meantime.
  • Observability = model survival: it is essential for long-lived ML systems.

🔮 Coming Next — Part 8

How to Architect a Real-World ML System (End-to-End Blueprint)

Pipelines, training, serving, feature stores, monitoring, retraining loops.

🔔 Call to Action

Comment “Part 8” if you want the final chapter of this core series.

Save this article — observability will save your ML systems one day.

Why AI Won't Take our Coding Job: A Future Where Engineers and AI Thrive Together!

2025-12-03 13:00:00

AI is transforming software engineering faster than ever, but instead of replacing human developers, it’s becoming a powerful partner. Together, humans and AI are shaping a future where development teams are more productive, creative, and impactful. The future of coding isn’t about competition—it’s about collaboration.

Human–AI Collaboration: Why It Works

AI tools are great at handling repetitive tasks like generating code, running tests, and spotting bugs. This frees developers to focus on the parts of the job that machines can’t handle: creative problem-solving, strategic thinking, and designing solutions that really fit business needs.

Humans bring something AI can’t replicate—contextual understanding, ethical judgment, and the spark of innovation. While AI can crunch data and suggest improvements, it’s the human developer who decides what makes sense and what aligns with larger goals.

Working together, humans and AI can scale workflows, streamline hiring, and create entirely new hybrid roles like “AI-augmented software engineer.” AI handles the heavy lifting, but humans guide direction and ensure accountability.

The Evolving Role of Developers

The role of the developer is shifting. Writing code is still important, but much of it now involves curating, reviewing, and managing AI-generated logic.

  • Quality assurance engineers design the tests AI will run and validate results with their domain expertise.
  • Architects and product engineers use AI insights to make smart, context-aware design decisions.
  • New specialties are emerging, from Senior Machine Learning Engineers to Generative AI Engineers—roles that demand both coding skills and AI fluency.

AI can analyze patterns, handle large-scale operations, and boost efficiency—but it still relies on humans for creativity, ethical oversight, and nuanced judgment. Skills like adaptability, communication, and emotional intelligence remain uniquely human and essential to software development.

Why Collaboration is Key

The most successful teams treat AI as a collaborator, not a replacement. Upskilling, embracing AI tools, and leveraging human strengths are essential to staying ahead. When humans and AI work together, the results are faster, higher-quality software, new job opportunities, and innovation that neither could achieve alone.

AI doesn’t take away coding jobs—it amplifies the value of developers. By pairing human creativity with AI efficiency, the next wave of technology will be shaped by teams who know how to work with intelligent systems, not against them.

Don't Learn Prompt Engineering. Here's What Matters More

2025-12-03 13:00:00

I originally posted this post on my blog.

I'm not an AI evangelist, and I'm not a hater either.

I've tried AI for coding. And after a week or two, I noticed how dependent I was becoming. Since then, I've used AI for small coding tasks, like generating small functions or finding typos, not to treat English as my main programming language.

Marcus Hutchins, a security researcher, puts it boldly in his post Every Reason Why I Hate AI and You Should Too:

I'd make a strong argument that what you shouldn't be doing is "learning" to do everything with AI. What you should be doing is learning regular skills. Being a domain expert prompting an LLM badly is going to give you infinitely better results than a layperson with a "World’s Best Prompt Engineer" mug.

I agree with the core message.

When everybody is relying on AI, it's time to go back to old-school habits:

  • Read code
  • Write trustworthy tests
  • Devour classic textbooks
  • Troubleshoot bugs with pen and paper

And outside coding: read books on paper, take notes by hand, and write our own summaries. To develop taste and judgment.

Using AI is like holding a calculator on a math exam. Even the best calculator is useless if you don't know what to compute.

Build skills, then leverage AI.

I used to think coding was only about cracking symbols. That's where AI shines. But coding is also about talking to non-tech managers, negotiating deadlines, and saying no politely.

And that’s why I wrote Street-Smart Coding: 30 Ways to Get Better at Coding, to share the skills I wish I’d learned to become a confident coder.

Get your copy of Street-Smart Coding here. It's the roadmap I wish I had when I was starting out.

Welcome Thread - v354

2025-12-03 13:00:00

  1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.

  2. Reply to someone's comment, either with a question or just a hello. 👋

  3. Come back next week to greet our new members so you can one day earn our Warm Welcome Badge!
