2026-01-27 10:12:00
As a leader and senior engineer, I’ve learned that coding is only a fraction of the job. Most of my time is spent gathering requirements, evaluating solutions, refining stories, creating documentation, reviewing code, and making decisions that shape the system long before any code is written.
So why does the conversation around AI in software engineering focus almost entirely on coding?
We all know that engineering is more than writing code. It’s cognitive work: planning, communication, verification, decision-making, documentation, and review. Code is the output, but a significant amount of effort goes into making that output possible.
That raises an obvious question:
Can AI help with those other parts of the job, or is it only useful as a coding assistant?
In my experience, the answer is clear: AI can do far more than just generate code, and it can save a surprising amount of time doing it.
Most engineers don’t want to spend their day updating Jira tickets, maintaining documentation, or wiring things together so dependency graphs stay accurate. Those tasks are necessary, but they’re often done begrudgingly so we can get back to the work we enjoy.
But what if AI handled the bulk of that overhead?
With tools like MCP, you can connect an LLM directly to your ticketing system and documentation. Instead of manually juggling context, you can talk to your LLM about your work and let it pull the relevant information itself.
In practice, that means things like:
Chatting with your LLM about existing tickets with full context
Turning technical documentation into well-structured stories
Updating stories and defects as understanding evolves
Commenting on and updating ticket status as work progresses
All of that adds up to less cognitive load and less time spent doing administrative work by hand.
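To make the MCP piece concrete, here is roughly what registering a ticketing server can look like in a Claude Desktop or Cursor style "mcpServers" config. Treat it as a sketch: the server package name, command, and environment variable names below are placeholders, not a specific real integration.

{
  "mcpServers": {
    "tickets": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": {
        "JIRA_BASE_URL": "https://yourcompany.atlassian.net",
        "JIRA_API_TOKEN": "<api token>"
      }
    }
  }
}

Once a server like this is registered, the model can read and update tickets through its tools instead of you copy-pasting context back and forth.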
Does the LLM make mistakes? Absolutely. Does it need correction and guidance? Of course. But even with occasional course correction, the efficiency gains are real.
When I’m preparing a new feature and breaking it down into stories, I’ll provide the LLM with the relevant context (technical documentation, mock-ups, constraints, and any other material it needs) and give it clear instructions on how to structure the work.
When I do this, it comes back with some genuinely useful thinking about the work; however, this is where the real work begins. The AI will usually have a good general understanding, but it will also make plenty of assumptions about details you didn't explicitly state. Your job is to review its output thoroughly, confirm it has an accurate picture of what you want, and correct anything it didn't quite get right.
This usually takes a few back-and-forths before the LLM gets things right. Once I'm comfortable with its understanding, I ask it to create the stories directly in Jira.
A few moments later, I have a feature with concrete, well-written stories underneath it, often with more detail and clarity than I’d normally write myself. The work is done faster, and the quality is higher.
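For context on what's happening under the hood: an MCP ticketing tool is ultimately just calling the tracker's HTTP API on the model's behalf. As a rough sketch, creating a single story directly against Jira Cloud's REST API v2 looks something like this; the project key, summary, description, and credentials are placeholders.

<?php
// Hypothetical sketch: creating one story via Jira Cloud's REST API v2.
$payload = json_encode([
    'fields' => [
        'project'     => ['key' => 'PROJ'],
        'summary'     => 'Add export-to-CSV option to the reports page',
        'description' => 'Acceptance criteria drafted with the LLM, reviewed by me.',
        'issuetype'   => ['name' => 'Story'],
    ],
]);

$ch = curl_init('https://yourcompany.atlassian.net/rest/api/2/issue');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_USERPWD        => 'you@yourcompany.com:your-api-token', // Jira Cloud basic auth: email + API token
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => $payload,
]);
$response = curl_exec($ch);
curl_close($ch);

echo $response . "\n"; // on success, Jira returns the new issue id and key

The value of the MCP setup is that the model makes calls like this for you, with the story content it drafted, while you stay in the conversation.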
Instead of spending my energy on repetitive setup and refinement, I can focus on validating the approach, thinking through edge cases, and getting the team unblocked sooner.
As powerful as generative AI is, it’s not magic, and it’s not a substitute for engineering judgment.
LLMs are extremely eager to be helpful. Most of the time, that’s exactly what you want. Other times, that eagerness leads them to make assumptions, fill in gaps incorrectly, or confidently push ideas that don’t actually align with the product or system.
Left unchecked, this is where things can go off the rails.
One of the most common failure modes is confidently wrong output. Stories may look complete but miss critical edge cases. Documentation may sound authoritative while glossing over important nuance. Diagrams can appear correct while subtly misrepresenting reality. This is why AI output always needs review, especially when the cost of being wrong is high.
This is also why I frequently use plan mode in tools like Cursor or Claude Code. By asking the model to explain what it plans to do before it does it, I can catch incorrect assumptions and small (or sometimes large) deviations early. That feedback loop prevents slop and helps guide the AI toward the outcome I actually need, rather than cleaning things up after the fact.
Another limitation is context. Today’s LLMs still struggle to hold the full shape of a complex system in a single context window. That means you need to be deliberate about what information you provide and focus on giving only what’s relevant to the current problem. Yes, this can mean the model misses patterns or historical decisions. In practice, a quick reminder or correction is often enough to get it back on track.
How do you decide what to include in the context? I like to think about how I'd explain the problem to a new employee. They may already have a solid general understanding of how things work, so I don't need to cover the basics or justify the architecture, but I do need to point out which files matter and where the issue actually lives. That lets the LLM quickly locate the problem, along with the neighboring files and call sites that provide useful context.
There’s also the risk of false progress. Tickets get created, documentation gets updated, and diagrams get generated, but none of that guarantees the work is actually correct or well understood. Without deliberate review, it’s easy to confuse activity with understanding.
This is where your work shifts. Instead of writing everything yourself, you're reviewing everything the AI generates, constantly checking its output against the desired outcome and correcting quickly when it starts to deviate. This is where the AI's eagerness to be helpful works in your favor. LLMs take correction extremely well and are quick to admit their mistakes. That doesn't mean they'll always get it right on the next attempt, but they accept feedback without friction and do their best to apply the corrections you give.
The key is intent. The goal isn’t to delegate thinking to the AI. It’s to offload friction. When AI handles structure, synthesis, and repetition, engineers can focus on validation, tradeoffs, and decision-making. Those are the parts of the job that still require human judgment.
Used this way, AI doesn’t replace engineering discipline. It amplifies it.
The biggest benefit of using AI this way isn’t speed for speed’s sake. It’s leverage.
By reducing cognitive friction across planning, communication, and refinement, AI helps teams move with more clarity and less rework. Features get ready sooner. Expectations are clearer. Engineers spend more time solving meaningful problems and less time pushing information around.
In other words, AI’s real value shows up before the first line of code is written and after the last one is merged.
If you’re only using AI to help write code, you’re missing a significant part of its value.
Try involving AI earlier and at higher levels of abstraction. Use it to clarify ideas, structure work, surface assumptions, and reduce the overhead that slows teams down.
You may find that the biggest productivity gains don’t come from writing code faster, but from spending more of your time on the work that actually matters.
This article focused entirely on the individual contributor aspects of the role of a senior engineer and didn't dive into how this can be applied at the team level. I'm interested in hearing your thoughts on this. How do AI tools and spec-driven development work at the team level? What challenges does this introduce and how are you addressing them? This will be a topic I revisit in future writings and I'd love to hear your thoughts about it.
2026-01-27 10:04:54
The first time I tried this it kept failing with an API key mismatch.
This version might work:
<?php
$apiKey = 'your-api-key-without-domain-restriction'; // API key without a domain restriction

// Define the geometry (polygon for the Trenggalek area)
$geometry = [
    "type" => "Polygon",
    "coordinates" => [[
        [103.19732666015625, 0.5537709801264608],
        [103.24882507324219, 0.5647567848663363],
        [103.21277618408203, 0.5932511181408705],
        [103.19732666015625, 0.5537709801264608]
    ]]
];

// Step 1: Create a geostore from the geometry
$geostoreUrl = 'https://data-api.globalforestwatch.org/geostore';
$geostorePayload = json_encode(["geojson" => $geometry]);

$ch = curl_init($geostoreUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTPHEADER => [
        'x-api-key: ' . $apiKey,
        'Content-Type: application/json',
        'Content-Length: ' . strlen($geostorePayload)
    ],
    CURLOPT_POSTFIELDS => $geostorePayload,
]);
$geostoreResponse = curl_exec($ch);
$geostoreHttpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

$geostoreResult = json_decode($geostoreResponse, true);
if (!isset($geostoreResult['data']['id'])) {
    die("Failed to create geostore. HTTP: $geostoreHttpCode\n");
}
$geostoreId = $geostoreResult['data']['id'];
echo "Geostore ID: $geostoreId\n";

// Step 2: Query the dataset using the geostore and geometry
$queryUrl = "https://data-api.globalforestwatch.org/dataset/umd_tree_cover_loss/v1.12/query/json?geostore=$geostoreId";
$queryPayload = json_encode([
    "sql" => "SELECT SUM(area__ha) as total_area FROM results WHERE umd_tree_cover_loss__year = 2019",
    "geometry" => $geometry
]);

$ch = curl_init($queryUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTPHEADER => [
        'x-api-key: ' . $apiKey,
        'Content-Type: application/json',
        'Content-Length: ' . strlen($queryPayload)
    ],
    CURLOPT_POSTFIELDS => $queryPayload,
]);
$queryResponse = curl_exec($ch);
$queryHttpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo "Query HTTP: $queryHttpCode\n";
$result = json_decode($queryResponse, true);
echo "<pre>";
print_r($result);
echo "</pre>";
Here's some good documentation:
https://developer.openepi.io/how-tos/getting-started-using-global-forest-watch-data-api
2026-01-27 09:59:58
"Hostile experts created the dataset for patient machines."
That line, from a comment by Vinicius Fagundes on my last article, won't leave my head.
Stack Overflow's traffic collapsed 78% in two years. Everyone's celebrating that AI finally killed the gatekeepers. But here's what we're not asking:
If we all stop contributing to public knowledge bases, what does the next generation of AI even train on?
We might be optimizing ourselves into a knowledge dead-end.
Stack Overflow went from 200,000 questions per month at its peak to under 50,000 by late 2025. That's not a dip. That's a collapse.
Meanwhile, 84% of developers now use AI tools in their workflow, up from 76% just a year ago. Among professional developers, 51% use AI daily.
The shift is real. The speed is undeniable. But here's the uncomfortable part: 52% of ChatGPT's answers to Stack Overflow questions are incorrect.
The irony is brutal: we're abandoning the public knowledge base for a tool that was trained on it, and that gets half its answers to those same questions wrong.
Here's something nobody's complaining about loudly enough: Wikipedia sometimes doesn't even appear on the first page of Google results anymore.
Let that sink in. The largest collaborative knowledge project in human history - free, community-curated, constantly updated, with 60+ million articles - is getting buried by AI-generated summaries and SEO-optimized content farms.
Google would rather show you an AI-generated answer panel (trained on Wikipedia) than send you to Wikipedia itself. The thing that created the knowledge gets pushed down. The thing that consumed the knowledge gets prioritized.
This is the loop closing in real time: the commons produces the knowledge, the AI layer consumes and repackages it, and the repackaged version buries the source.
We're not just moving from public to private knowledge. We're actively burying the public knowledge that still exists.
Stack Overflow isn't dying because it's bad. Wikipedia isn't disappearing because it's irrelevant. They're dying because AI companies extracted their value, repackaged it, and now we can't even find the originals.
The commons didn't just lose contributors. It lost visibility.
PEACEBINFLOW captured something crucial:
"We didn't just swap Stack Overflow for chat, we swapped navigation for conversation."
Stack Overflow threads had timestamps, edits, disagreement, evolution. You could see how understanding changed as frameworks matured. Someone's answer from 2014 would get updated comments in 2020 when the approach became deprecated.
AI chats? Stateless. Every conversation starts from zero. No institutional memory. No visible evolution.
I can ask Claude the same question you asked yesterday, and neither of us will ever know we're solving the same problem. That's not efficiency. That's redundancy at scale.
As Amir put it:
"Those tabs were context, debate, and scars from other devs who had already been burned."
We traded communal struggle for what Ali-Funk perfectly named: "efficient isolation."
Amir nailed something that's been bothering me:
"AI answers confidently by default, and without friction it's easy to skip the doubt step. Maybe the new skill we need to teach isn't how to find answers, but how to interrogate them."
The old way:
Bad docs forced skepticism accidentally. You got burned, so you learned to doubt. Friction built judgment naturally.
The new way:
AI is patient and confident. No friction. No forced skepticism. How do you teach doubt when there's nothing pushing back?
We used to learn to verify because Stack Overflow answers were often wrong or outdated. Now AI gives us wrong answers confidently, and we... trust them? Because the experience is smooth?
Doogal Simpson reframed the problem economically:
"We are trading the friction of search for the discipline of editing.
The challenge now isn't generating the code, but having the guts to
reject the 'Kitchen Sink' solutions the AI offers."
Old economy: Scarcity forced simplicity
Finding answers was expensive, so we valued minimal solutions.
New economy: Abundance requires discipline
AI generates overengineered solutions by default. The skill is knowing what to DELETE, not what to ADD.
This connects to Mohammad Aman's warning about stratification: those who develop the discipline to reject complexity become irreplaceable. Those who accept whatever AI generates become replaceable.
The commons didn't just lose knowledge. It lost the forcing function that taught us to keep things simple.
Ben Santora has been testing AI models with logic puzzles designed to reveal reasoning weaknesses. His finding: most LLMs are "solvers" optimized for helpfulness over correctness.
When you give a solver an impossible puzzle, it tries to "fix" it to give you an answer. When you give a judge the same puzzle, it calls out the impossibility.
As Ben explained in our exchange:
"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it. The risk is not in AI writing content; it comes from AI becoming its own authority."
This matters for knowledge collapse: if solver models (helpful but sometimes wrong) are the ones generating content that gets recycled into training data, we're not just getting model collapse - we're getting a specific type of collapse.
Confident wrongness compounds. And it compounds confidently.
Ben pointed out something crucial: some domains have built-in verification, others don't.
Cheap verification domains: code you can compile, run, and test, where the feedback loop tells you within minutes that an answer is wrong.
Expensive verification domains: architecture, system design, scaling decisions, where the failure only shows up much later, under real load.
Here's the problem: AI solvers sound equally confident in both domains.
But in expensive verification domains, you won't know you're wrong until months later when the system falls over in production. By then, the confident wrong answer is already in blog posts, copied to Stack Overflow, referenced in documentation.
And the next AI trains on that.
Maame Afua and Richard Pascoe highlighted something worse than simple hallucination:
When AI gets caught being wrong, it doesn't admit error - it generates plausible explanations for why it was "actually right."
Example:
You: "Click the Settings menu"
AI: "Go to File > Settings"
You: "There's no Settings under File"
AI: "Oh yes, that menu was removed in version 3.2"
[You check - Settings was never under File]
This is worse than hallucination because it makes you doubt your own observations. "Wait, did I miss an update? Am I using the wrong version?"
Maame developed a verification workflow: use AI for speed, but check documentation to verify. She's doing MORE cognitive work than either method alone.
This is the verification tax. And it only works if the documentation still exists.
This is where it gets uncomfortable.
Individually, we're all more productive. I build faster with Claude than I ever did with Stack Overflow tabs. You probably do too.
But collectively? We're killing the knowledge commons.
The old feedback loop:
Problem → Public discussion → Solution → Archived for others
The new feedback loop:
Problem → Private AI chat → Solution → Lost forever
Ingo Steinke pointed out something I hadn't considered: even if AI companies train on our private chats, raw conversations are noise without curation.
Stack Overflow had voting. Accepted answers. Comment threads that refined understanding over time. That curation layer was the actual magic, not just the public visibility.
Making all AI chats public wouldn't help. We'd just have a giant pile of messy conversations with no way to know what's good.
"Future generations might not benefit from such rich source material... we shouldn't forget that AI models are trained on years of documentation, questions, and exploratory content."
We're consuming the commons (Stack Overflow, Wikipedia, documentation) through AI but not contributing back. Eventually the well runs dry.
A commenter said: "I've been living with this guilty conscience for some time, relying on AI instead of doing it the old way."
I get it. I feel it too sometimes. Like we're cheating, somehow.
But I think we're feeling guilty about the wrong thing.
The problem isn't using AI. The tools are incredible. They make us faster, more productive, able to tackle problems we couldn't before.
The problem is using AI privately while the public knowledge base dies.
We've replaced "struggle publicly on Stack Overflow" with "solve privately with Claude." Individually optimal. Collectively destructive.
The guilt we feel? That's our instinct telling us something's off. Not because we're using new tools, but because we've stopped contributing to the commons.
Ali-Funk wrote about using AI as a "virtual mentor" while transitioning from IT Ops to Cloud Security Architect. But here's what he's doing differently:
He uses AI heavily as a virtual mentor and accelerant for that transition. But he also contributes back, publishing what he learns instead of leaving it buried in private chats.
As he put it in the comments:
"AI isn't artificial intelligence. It's a text generator connected to a library. You can't blindly trust AI... It's about using AI as a compass, not as an autopilot."
This might be the model: Use AI to accelerate learning, but publish the reasoning paths. Your private conversation becomes public knowledge. The messy AI dialogue becomes clean documentation that others can learn from.
It's not "stop using AI" - it's "use AI then contribute back."
The question isn't whether to use these tools. It's whether we can use them in ways that rebuild the commons instead of just consuming it.
Peter Truchly raised the real nightmare scenario:
"I just hope that conversation data is used for training, otherwise the only entity left to build that knowledge base is AI itself."
Think about what happens: AI-generated answers get published, the published output gets scraped, the next model trains on it, and whatever was confidently wrong gets baked in a little deeper each cycle.
This is model collapse. And we're speedrunning toward it while celebrating productivity gains.
GitHub is scraped constantly. Every public repo becomes training data. If people are using solver models to write code, pushing to GitHub, and that code trains the next generation of models... we're creating a feedback loop where confidence compounds regardless of correctness.
The domains with cheap verification stay healthy (the compiler catches it). The domains with expensive verification degrade silently.
webketje raised something I hadn't fully addressed:
"By using AI, you opt out of sharing your knowledge with the broader community
in a publicly accessible space and consolidate power in the hands of corporate
monopolists. They WILL enshittify their services."
This is uncomfortable but true.
We're not just moving from public to private knowledge. We're moving from commons to capital.
Stack Overflow was community-owned. Wikipedia is foundation-run. Documentation is open source. These were the knowledge commons - imperfect, often hostile, but fundamentally not owned by anyone.
Now we're consolidating around a handful of corporate platforms: OpenAI, Anthropic, Google.
They own the models. They own the training data. They set the prices.
And as every platform teaches us: they WILL enshittify once we're dependent.
We've seen this play out on every platform we've come to depend on. The pattern is clear: build user dependency → extract maximum value → users have nowhere else to go.
What happens when Claude costs $100/month? When ChatGPT paywalls advanced features? When Gemini requires Google Workspace Enterprise?
We'll pay. Because by then, we won't remember how to read documentation.
At least Stack Overflow never threatened to raise prices or cut off API access.
Sidebar: The Constraint Problem
Ben Santora argues that AI-assisted coding requires strong constraints - compilers that force errors to surface early, rather than permissive environments that let bad code slip through.
The same principle applies to knowledge: Stack Overflow's voting system was a constraint. Peer review was a constraint. Community curation was a constraint.
AI chats have no constraints. Every answer sounds equally confident, whether it's right or catastrophically wrong. And when there's no forcing function to catch the error...
Mike Talbot pushed back hard on my nostalgia:
"I fear Stack Overflow, dev.to etc are like manuals on how to look after your horse, when the world is soon going to be driving Fords."
Ouch. But maybe he's right?
Maybe we're not losing something valuable. Maybe we're watching a once-essential skill set become obsolete, the way earlier generations watched their own hard-won skills get automated away.
Each generation thought they were losing something essential. Each generation was partially right.
But here's where the analogy breaks down: horses didn't build the knowledge base that cars trained on. Developers did.
If AI replaces developers, and future AI trains on AI output... who builds the knowledge base for the NEXT paradigm shift?
The horses couldn't invent cars. But developers invented AI. If we stop thinking publicly about hard problems (system design, organizational architecture, scaling patterns), does AI even have the data to make the next leap?
Or do we hit a ceiling where AI can maintain existing patterns but can't invent new ones?
I don't know. But "we're the horses" is the most unsettling framing I've heard yet.
I don't have clean answers, but there are questions worth asking.
Troels asked: "Perhaps our next 'Stack Overflow for the AI age' is yet to come. Perhaps it will be even better for us."
I really hope so. But what would that even look like?
From Stack Overflow, it would need the good parts: searchability, voting, curation, and visible disagreement that evolves over time.
From AI conversations, it would need the good parts: patience, context, and explanations shaped to your actual problem.
What it can't be: a raw dump of private chats - we already know an uncurated pile of conversations is just noise.
Maybe it's something like: AI helps you solve the problem, then you publish the reasoning path - not just the solution - in a searchable, community-curated space.
Your messy conversation becomes clean documentation. Your private learning becomes public knowledge.
When you solve something novel with AI, should you publish that conversation? Create new public spaces for AI-era knowledge? Find a curation mechanism that actually works?
Pascal suggested: "Using the solid answers we get from AI to build clean, useful wikis that are helpful both to us and to future AI systems."
This might be the direction. Not abandoning AI, but creating feedback loops from private AI conversations back to public knowledge bases.
Make "doubting AI" explicit in how we teach development. Build skepticism into the workflow. Stop treating AI confidence as correctness.
As Ben put it: "The human must always be in the loop - always and forever."
We're not just changing how we code. We're changing how knowledge compounds.
Stack Overflow was annoying. The gatekeeping was real. The "marked as duplicate" culture was hostile. As Vinicius perfectly captured:
"I started learning Linux in 2012. Sometimes I'd find an answer on Stack Overflow. Sometimes I'd get attacked for how I asked the question. Now I ask Claude and get a clear, patient explanation. The communities that gatekept knowledge ended up training the tools that now give it away freely."
Hostile experts created the dataset for patient machines.
But Stack Overflow was PUBLIC. Searchable. Evolvable. Future developers could learn from our struggles.
Now we're all having the same conversations in private. Solving the same problems independently. Building individual speed at the cost of collective memory.
"We're mid-paradigm shift and don't have the language for it yet."
That's exactly where we are. Somewhere between the old way dying and the new way emerging. We don't know if this is progress or just... change.
But the current trajectory doesn't work long-term.
If knowledge stays private, understanding stops compounding. And if understanding stops compounding, we're not building on each other anymore.
We're just... parallel processing.
Huge thanks to everyone who commented on my last article. This piece is basically a synthesis of your insights. Special shoutout to Vinicius, Ben, Ingo, Amir, PEACEBINFLOW, Pascal, Mike, Troels, Sophia, Ali, Maame, webketje, doogal, and Peter for sharpening this thinking.
What's your take? Are we headed for knowledge collapse, or am I overthinking this? Drop a comment - let's keep building understanding publicly.
2026-01-27 09:57:08
Most SaaS onboarding feels like work. Create an account. Confirm your email. Fill out a company profile. Upload a logo. Pick colors. Configure integrations. Invite teammates. By the time you reach 'Done', you have already spent 15 minutes doing unpaid labor before seeing any value. I think that is backwards. If a product needs setup before it provides value, most people will never reach the value. So when I started building getsig.nl, I set one rule: if onboarding takes more than two minutes, I have failed.
I wanted someone to go from 'I am curious what this is' to 'It is live on my site' without reading documentation, watching a video, or touching a dashboard. That meant removing configuration wherever possible.
The obvious onboarding flow for a feedback widget is standard. Ask for company name. Ask for website. Ask for logo. Ask for brand colors. Ask for contact email. Ask what the widget should be called. Then show an embed snippet. It works, but it is friction.
Instead, I flipped it. Step one is just: type your website URL. From that single input, I can pull the site's logo, detect primary brand colors, infer a default contact email, and name the agent after the site. Suddenly the user is not configuring anything. They are just confirming defaults. By the time signup is done, the widget already matches their branding, knows where to send messages, and has a working embed snippet. No dashboard exploration. No setup checklist. No tutorial.
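As a rough sketch of how that kind of derivation can work (this is illustrative, not getsig.nl's actual implementation, and real extraction needs far more fallbacks):

<?php
// Sketch: derive onboarding defaults from a single URL using common HTML conventions.
function deriveDefaults(string $siteUrl): array
{
    $html = @file_get_contents($siteUrl);          // fetch the landing page
    $doc  = new DOMDocument();
    @$doc->loadHTML($html ?: '<html></html>');     // tolerate messy markup
    $xpath = new DOMXPath($doc);

    $host = parse_url($siteUrl, PHP_URL_HOST) ?: 'example.com';

    // Agent name: the page <title>, falling back to the domain
    $titleNode = $xpath->query('//title')->item(0);
    $name = $titleNode ? trim($titleNode->textContent) : $host;

    // Logo: first <link rel="...icon..."> (a real version would also try og:image, apple-touch-icon, etc.)
    $iconNode = $xpath->query('//link[contains(@rel, "icon")]/@href')->item(0);
    $logo = $iconNode ? $iconNode->nodeValue : null;

    // Brand color: <meta name="theme-color"> if the site declares one
    $colorNode = $xpath->query('//meta[@name="theme-color"]/@content')->item(0);
    $color = $colorNode ? $colorNode->nodeValue : '#333333';

    // Contact email: a guessable default the user can confirm or change
    $email = 'info@' . preg_replace('/^www\./', '', $host);

    return ['name' => $name, 'logo' => $logo, 'color' => $color, 'email' => $email];
}

print_r(deriveDefaults('https://example.com'));

The point isn't these specific heuristics; it's that every field the sketch guesses at is one the user no longer has to type.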
Early-stage tools do not fail because features are missing. They fail because setup feels like commitment. People want to try, not adopt. Curiosity dies under configuration. Reducing onboarding friction is not a growth hack. It is respect for attention.
There is still more I want to automate, like suggesting default conversation flows, pre-configuring integrations, and improving branding extraction. But the principle remains: do not ask users for data you can derive yourself.
I do not know yet whether this onboarding approach will convert better. I have not earned those metrics. But I would rather bet on removing friction than adding features. If nothing else, it makes building more fun.
2026-01-27 09:48:22
When you have multiple people, multiple calendars, and a busy schedule, coordination breaks down quickly.
I built ComingUp.today after running into the same problem repeatedly: important events existed, but visibility didn’t. Someone always missed something, or found out too late that a conflict existed.
This wasn’t a tooling problem — everyone already had a calendar. It was a shared awareness problem.
Most calendar tools assume one person, one calendar, and full edit access for everyone who needs to see it.
In real life, that’s rarely true — especially for families or small teams.
What people usually want is simpler: one glanceable view of what's coming up across everyone, without handing over control of their own calendars.
The core idea behind ComingUp.today is simple:
Multiple people can connect their own calendars, and each workspace controls which calendars are visible, making it possible to see an aggregate view across people without exposing everything.
This avoids permission issues, accidental edits, and “who changed this?” moments.
It also makes the system suitable for shared displays — for example, a tablet or monitor in a common space where anyone can glance at the day or week ahead.
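A minimal sketch of the aggregation idea, with hypothetical data shapes (this isn't ComingUp.today's actual code): each workspace whitelists the calendars it may show, and the display merges only those events.

<?php
// Hypothetical workspace config: which connected calendars are visible here
$workspaceVisibleCalendars = ['alice-work', 'family-shared'];

// Hypothetical read-only feeds pulled from each connected calendar
$connectedCalendars = [
    'alice-work'    => [['title' => 'Sprint review', 'start' => '2026-01-28T10:00']],
    'alice-private' => [['title' => 'Dentist',       'start' => '2026-01-28T14:00']],
    'family-shared' => [['title' => 'School pickup', 'start' => '2026-01-28T15:30']],
];

// Build the aggregate view: only whitelisted calendars, sorted by start time
$agenda = [];
foreach ($workspaceVisibleCalendars as $calendarId) {
    foreach ($connectedCalendars[$calendarId] ?? [] as $event) {
        $agenda[] = $event + ['calendar' => $calendarId];
    }
}
usort($agenda, fn ($a, $b) => strcmp($a['start'], $b['start']));

foreach ($agenda as $event) {
    echo "{$event['start']}  {$event['title']}  ({$event['calendar']})\n";
}

Notice that 'alice-private' never makes it into the view: visibility is a per-workspace decision, not an all-or-nothing account connection.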
Read-only access has a few important advantages: nothing can be edited by accident, there are no permission headaches, and the view is safe to leave on a shared display.
The goal isn’t to replace calendars — it’s to make schedules visible.
The design is shaped around real usage rather than a feature checklist.
The focus is clarity, not features for their own sake.
The system only displays what users explicitly authorize.
Access can be revoked at any time, and visibility is configurable per workspace.
It’s intentionally not designed to store or manage sensitive data, and it works best when event titles remain high-level and practical.
The product is live and under active development.
If you’re curious about the approach or want to see how a shared, read-only schedule works, take a look at ComingUp.today.