2026-01-19 18:13:43
If you use VS Code as your editor but rely on tmux as the real source of truth for your terminal sessions, it's natural to want the integrated terminal to always land in the right tmux session — without manual commands or duplicated sessions.
VS Code allows you to configure a terminal profile like this:
// "terminal.integrated.profiles.osx": {
// "tmux-shell": {
// "path": "tmux",
// "args": ["new-session", "-A", "-s", "${workspaceFolderBasename}"]
// }
// },
// "terminal.integrated.defaultProfile.osx": "tmux-shell"
What this does well: the `-A` flag attaches to an existing session with that name, or creates it if it doesn't exist, so reopening a workspace drops you back into the same tmux session instead of spawning a duplicate. For many setups, this is good enough and elegant.

The limitation: settings.json does not support conditional logic. Variables like ${workspaceFolderBasename} are substituted literally; you cannot compute, remap, or group them.
Imagine you have several related projects and you want to work on all of them within one shared tmux session to preserve context, panes, and layouts:
fe-dashboard, fe-admin, be-api, be-auth
With the native configuration, each folder forces its own tmux session, even if conceptually they all belong to the same "work context".
You cannot express rules like:
"If the folder starts with
fe-orbe-, use theworksession"
This is simply not representable in VS Code's terminal configuration.
VS Code injects environment variables such as VSCODE_PID and TERM_PROGRAM=vscode, which allow you to reliably detect when a terminal is launched from VS Code.
From there, the shell decides.
Example using fish:
```fish
# ~/.config/fish/config.fish
if set -q VSCODE_PID; or test "$TERM_PROGRAM" = "vscode"
    if not set -q TMUX
        set -l folder_name (basename (pwd))
        if string match -qr "^fe-.*" -- $folder_name
            set folder_name "work"
        else if string match -qr "^be-.*" -- $folder_name
            set folder_name "work"
        else
            set folder_name "projects"
        end
        # Attach to the session if it exists, otherwise create it
        tmux new-session -A -s $folder_name
    end
end
```
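For bash or zsh users, roughly equivalent logic can go in `~/.bashrc` or `~/.zshrc`. This is a sketch, assuming the same `fe-`/`be-` naming convention as the fish example; the helper name `session_for_folder` is invented for illustration:

```shell
# Map a folder name to a tmux session name; fe-/be- repos share one
# "work" session, everything else lands in a "projects" catch-all.
session_for_folder() {
  case "$1" in
    fe-*|be-*) echo "work" ;;
    *)         echo "projects" ;;
  esac
}

# Only act when the shell was launched from VS Code and we are not
# already inside tmux.
if { [ -n "${VSCODE_PID:-}" ] || [ "${TERM_PROGRAM:-}" = "vscode" ]; } \
    && [ -z "${TMUX:-}" ]; then
  tmux new-session -A -s "$(session_for_folder "$(basename "$PWD")")"
fi
```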
VS Code becomes just a launcher.
The decision-making logic lives where it belongs: the shell.
VSCODE_PID is not a formal API, but it has been stable for years.

If you use VS Code as your editor but tmux as your session manager, this integration removes friction and significantly improves day-to-day workflow.
👉 How do you manage your terminal sessions across projects?
2026-01-19 18:10:11
Humor me, try Googling the word **Web3.**
You’ll find a ton of results, most of them filled with big technical words that make it feel like you need a computer science degree just to understand what’s going on.
But here’s the truth: even if you’re not in the technical space, it’s important to understand the big idea behind Web3, especially if you’re an investor, creator, or just someone who uses the internet daily.
Because Web3 is changing how the internet works.
I'll break it all down, simply.
Web3 didn’t just appear overnight.
It started with Bitcoin.
What is Bitcoin?
Bitcoin is a digital currency that lets people send money directly to each other without banks.
Basically, instead of trusting a bank, people trust a system called a blockchain.
What is a blockchain?
A blockchain is like a public notebook that everyone can see, that records every transaction, and that no single party can secretly change.
So instead of one company controlling the data, everyone shares and verifies it together.
Then Came Ethereum
Five years after Bitcoin, Ethereum was launched. It added smart contracts: small programs that run on the blockchain itself, which made it possible to build full applications on top of it, not just send money.
So… What Is Web3?
In simple terms:
Web3 is the version of the internet where users own their data, money, and digital assets. Instead of sending transactions through banks, apps, or corporations, you send them directly to a decentralized blockchain.
Web3 is about ownership.

Instead of: the platform owns your data.

Web3 says: you own your data.
2026-01-19 18:09:36
Denis Tsyplakov, Solutions Architect at DataArt, explores the less-discussed side of AI coding agents. While they can boost productivity, they also introduce risks that are easy to underestimate.
In a short experiment, Denis asked an AI code assistant to solve a simple task. The result was telling: without strong coding skills and a solid grasp of system architecture, AI-generated code can quickly become overcomplicated, inefficient, and challenging to maintain.
People have mixed feelings about AI coding assistants. Some think they’re revolutionary, others don't trust them at all, and most engineers fall somewhere in between: cautious but curious.
Success stories rarely help. Claims like “My 5-year-old built this in 15 minutes” are often dismissed as marketing exaggeration. This skepticism slows down adoption, but it also highlights an important point: we need a realistic understanding of both the benefits and the limits of these tools.
Meanwhile, reputable vendors are forced to compete with hype-driven sellers, often leading to:
Drop in quality. Products ship with bugs or unstable features.
Development decisions driven by hype, not user needs.
Unpredictable roadmaps. What works today may break tomorrow.
Experiment: How Deep Does AI Coding Go?
I ran a small experiment using three AI code assistants: GitHub Copilot, JetBrains Junie, and Windsurf.
The task itself is simple. We use it in interviews to check candidates’ ability to elaborate on tech architecture. A senior engineer usually gives the correct approach in about 3 to 5 seconds. We’ve tested this repeatedly, and the result is always instant. (We'll have to create another task for candidates after this article is published.)
Copilot-like tools are historically strong at algorithmic tasks. So, when you ask them to implement a simple class with well-defined and documented methods, you can expect a very good result. The problems start when architectural decisions are required, that is, when the question is how exactly something should be implemented.
Junie, GitHub Copilot, and Windsurf showed similar results. Here is a step-by-step breakdown for the Junie prompting.
Prompt 1: Implement class logic
The result would not pass a code review. The logic was unnecessarily complex for the given task, but it is generally acceptable. Let’s assume I don't have skills in Java tech architecture and accept this solution.
Prompt 2: Make this thread-safe
The assistant produced a technically correct solution. Still, the task itself was trivial.
Prompt 3:
Implement method List<String> getAllLabelsSorted() that should return all labels sorted by proximity to point [0,0].
This is where things started to unravel. The code could be less wordy. As I mentioned, LLMs excel at algorithmic tasks, but that strength did not help here: the generated code unpacks each long into two ints and re-sorts the entries every time the method is called. At this point, I would expect a TreeMap, simply because it keeps all entries sorted and gives us O(log n) complexity for both inserts and lookups.
So I pushed further.
Prompt 4: I do not want to re-sort labels each time the method is called.
OMG!!! Cache!!! What could be worse!?
From there, I tried multiple prompts, aiming for a canonical solution with a TreeMap-like structure and a record with a comparator (without mentioning TreeMap directly, let's assume I am not familiar with it).
No luck. The more I asked, the hairier the solution became. I ended up with three screens of hardly readable code.
The solution I was looking for is straightforward: it uses specific classes, is thread-safe, and does not store excessive data.
Yes, this approach is opinionated. It has O(log n) complexity. But this is what I was aiming for. The problem is that I can get this code from AI only if I already know at least 50% of the solution and can explain it in technical terms. If you start using an AI agent without a clear understanding of the desired result, the output becomes effectively random.
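The original interview task is not reproduced in full here, so the following is only a sketch of the kind of solution described above, under some assumptions: labels are keyed by 2D integer points, and `getAllLabelsSorted()` returns them ordered by proximity to [0,0]. A ConcurrentSkipListMap (a thread-safe, TreeMap-like sorted map) plus a record with a comparator keeps entries sorted on insert, so nothing is re-sorted per call:

```java
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

public class LabelStore {
    // A point keyed by its coordinates; records give equals/hashCode for free.
    public record Point(int x, int y) {
        long distSq() { return (long) x * x + (long) y * y; }
    }

    // Sort by distance to [0,0]; tie-break on coordinates so distinct
    // points at equal distance are not collapsed into a single key.
    private static final Comparator<Point> BY_PROXIMITY =
        Comparator.comparingLong(Point::distSq)
                  .thenComparingInt(Point::x)
                  .thenComparingInt(Point::y);

    // ConcurrentSkipListMap keeps entries sorted (like TreeMap) and is
    // thread-safe, with O(log n) inserts and lookups.
    private final ConcurrentSkipListMap<Point, String> labels =
        new ConcurrentSkipListMap<>(BY_PROXIMITY);

    public void putLabel(int x, int y, String label) {
        labels.put(new Point(x, y), label);
    }

    // Values iterate in key order, so no re-sorting happens on each call.
    public List<String> getAllLabelsSorted() {
        return List.copyOf(labels.values());
    }
}
```

Compared with the cache the assistant reached for, this stores no redundant data: the sorted order is a property of the map itself.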
Can AI agents be instructed to use the right technical architecture? You can instruct them to use records, for instance, but you cannot instruct common sense. You can create a project.rules.md file that covers specific rules, but you cannot reuse it as a universal solution for each project.
The biggest problem is supportability. The code might work, but its quality is often questionable. Code that’s hard to support is also hard to change. That’s a problem for production environments that need frequent updates.
Some people expect that future tools will generate code from requirements alone, but that's still a long way off. For now, supportability is what matters.
AI coding assistants can quickly turn your code into an unreadable mess if:
Instructions are vague.
Results aren’t checked.
Prompts aren’t fine-tuned.
That doesn’t mean you shouldn’t use AI. It just means you need to review every line of generated code, which takes strong code-reading skills. The problem is that many developers lack experience with this.
From our experiments, there’s a limit to how much faster AI-assisted coding can make you. Depending on the language and framework, it can be up to 10-20 times faster, but you still need to read and review the code.
Code assistants work well with stable, traditional, and compliant code in languages with strong structure, such as Java, C#, and TypeScript. But when you use them with code that doesn’t have strong compilation or verification, things get messy: the problems surface later in the software development life cycle, for example during code review.
When you build software, you should know in advance what you are creating. You should also be familiar with current best practices (not Java 11, not Angular 12). And you should read the code. Otherwise, even with a super simple task, you will have non-supportable code very fast.
In my opinion, assistants are already useful for writing code, but they are not ready to replace code review. That may change, but not anytime soon.
Having all of these challenges in mind, here's what you should focus on:
Start using AI assistants where it makes sense.
If not in your main project, experiment elsewhere to stay relevant.
Review your language specifications thoroughly.
Improve technical architecture skills through practice.
Used thoughtfully, AI can speed you up. Used blindly, it will slow you down later.
2026-01-19 18:03:44
When I switched fully to Neovim a while back, one of the things I missed most from JetBrains IDEs (like RubyMine) was Git integration, especially the Git log view I’d use every day to quickly inspect recent commits and diffs. In RubyMine I’d often ask: “What changed in the last three commits?” and get an instant interactive history view.
Neovim, however, doesn’t have this kind of UI built in, and while there are many Git plugins out there (from inline signs like gitsigns.nvim to full interfaces like Neogit), none gave me exactly the workflow I wanted: a compact “pick commits, then diff them” experience.
So I built a plugin for that: gitlogdiff.nvim: a tiny Neovim plugin for listing recent Git commits and quickly seeing their diffs.
At its core, gitlogdiff.nvim gives you:
A simple commit list, sourced from git log, showing your recent history
Easy navigation with j/k and a keybinding to select commits
A keypress that opens a diff view (via diffview.nvim) comparing the selected commits
This workflow feels much closer to the Git log explorer I used in JetBrains, right inside Neovim, without leaving your editor.
The plugin, gitlogdiff.nvim, is on GitHub. Drop issues, ideas, or pull requests! It’s meant to stay lightweight but extendable, and someday might support alternative diff viewers beyond Diffview too.
Happy viming! 🧑‍💻
2026-01-19 18:00:04
After grinding through two sets of fairly heavy theory lessons over the weekend and writing about them, it was a relief to get back to a freeCodeCamp workshop.
The workshop in question had me building a list of major web browsers with the HTML boilerplate provided, as shown below:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>List of Browsers and Descriptions</title>
  </head>
  <body>
  </body>
</html>
```
As before on freeCodeCamp, this workshop puts the earlier theory lessons into practice, with description lists front and center!
In each exercise, I used description list elements to add a title and description. Every step followed the same pattern - listing the browser name along with a short explanation. Seven steps later, the workshop was complete, and the final result is shown in the code below:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>List of Browsers and Descriptions</title>
  </head>
  <body>
    <h1>List of Major Web Browsers</h1>
    <dl>
      <dt>Google Chrome</dt>
      <dd>This is a free web browser developed by Google and first released in 2008.</dd>
      <dt>Firefox</dt>
      <dd>This is a free web browser developed by the Mozilla Corporation and first created in 2004.</dd>
      <dt>Safari</dt>
      <dd>This browser was developed by Apple and is the default browser for iPhone, iPad and Mac devices.</dd>
      <dt>Brave</dt>
      <dd>This is a free web browser first released in 2016 that is based on the Chromium web browser.</dd>
      <dt>Arc</dt>
      <dd>This is a free Chromium based web browser first released in 2023 by The Browser Company.</dd>
    </dl>
  </body>
</html>
```
After that, I jumped into a few theory lessons surrounding text and time semantic elements. I’ll probably wrap those up later today, and the next post will be another workshop, this time for a Job Tips page.
2026-01-19 17:58:57
By 2026, workflow automation has evolved from a "nice-to-have" into the backbone of modern business operations. The debate, however, remains the same: n8n vs. Zapier. Both platforms have matured significantly, leveraging AI and expanded integration libraries, but they still cater to fundamentally different philosophies.
If you are trying to decide which tool belongs in your tech stack this year, this guide breaks down the differences in pricing, usability, power, and AI capabilities to help you make the right choice.
Zapier remains the undisputed king of accessibility. Its mission is to make automation available to everyone, regardless of technical ability. In 2026, Zapier’s "Natural Language Actions" have become even more refined, allowing users to describe a workflow in plain English and have the platform build it instantly. It is built for marketing teams, sales operations, and founders who want linear, reliable automations without touching a line of code.
n8n (nodemation) continues to dominate the "fair-code" and developer-friendly space. It uses a node-based visual editor that looks more like a flowchart than a simple checklist. While Zapier hides complexity, n8n embraces it, allowing for complex branching, loops, and deep data manipulation. It is designed for CTOs, engineers, and power users who need granular control over their data.
Zapier is linear. It follows a "Trigger -> Action" logic. While "Paths" (logic branching) exist, they can become cumbersome to manage in complex scenarios. The interface is polished, intuitive, and prevents you from making errors by restricting what you can do.
n8n is multi-dimensional. You can create workflows that branch into five different directions, merge back together, wait for external webhooks, and process data in batches. If you need to transform a JSON object or run a custom JavaScript function in the middle of a workflow, n8n makes it native and easy.
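To illustrate the kind of mid-workflow transform n8n makes native, here is a sketch of a per-item mapping in the shape n8n uses, where each item carries its data under a `json` property. The field names (`price`, `qty`, `total`) are invented for the example, and the snippet is written as a standalone function rather than n8n's exact node API:

```javascript
// Sketch of a per-item transform in the shape n8n workflows pass data:
// an array of items like { json: {...} }. Field names are illustrative.
function transformItems(items) {
  return items.map((item) => ({
    json: {
      ...item.json,
      total: item.json.price * item.json.qty, // derived field added in-flight
    },
  }));
}

// Standalone usage with sample data:
const sample = [
  { json: { sku: "A1", price: 10, qty: 3 } },
  { json: { sku: "B2", price: 4, qty: 5 } },
];
const out = transformItems(sample);
console.log(out.map((i) => i.json.total)); // [30, 20]
```

In Zapier, the same enrichment typically requires a dedicated Formatter or Code step; in n8n it is just another node in the graph.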
In 2026, automation is nothing without AI.
Pricing is often the deciding factor for businesses scaling up.
Zapier is purely cloud-based (SaaS). Your data lives on their servers (and the servers of the tools you connect). For most, this is fine. For strictly regulated industries (healthcare, finance, EU data compliance), this can be a hurdle.
n8n offers a Self-Hosted version. You can install it on your own AWS, DigitalOcean, or private server. This means the data never leaves your infrastructure, a critical feature for security-conscious enterprises in 2026.
In 2026, the gap has widened: Zapier has become the ultimate consumer automation tool, while n8n has solidified itself as the operating system for automated business logic. Choose the one that fits your team's DNA.