2025-11-14 17:48:21
AI Won’t Replace Developers - AI Users Will
AI coding tools are transforming software development automation, but not in the way most people think. The industry has it backwards. AI won’t replace developers - it will reward developers who use it. After rolling out AI coding tools across dozens of projects in two years, we’ve seen it up close. The job-loss panic? It completely misses the real story.
Remember what Excel did to accounting. It didn't eliminate accountants; it made the best ones 10x more productive and shifted their work to higher-value analysis. Those who refused to learn spreadsheets didn't lose jobs to Excel; they lost them to colleagues who mastered it.
AI coding tools are doing the same for software teams. Developers aren't being replaced by AI; they're being replaced by developers who use AI well.
Before you panic or celebrate, get clear on what AI actually does, and where it still falls short. Knowing those limits is what separates teams that thrive with AI from teams buried in pretty, broken output.
Where AI Coding Tools Fall Short in Real Projects
Real-world challenges
AI coding tools, like GitHub Copilot, Cursor and ChatGPT, aren’t miracle workers. They’re fast and helpful, but real systems are messy - legacy integrations, institutional knowledge, partial documentation, and business rules that live in people’s heads.
Last quarter we inherited a 2012 logistics platform—spaghetti code, half-documented endpoints, and inventory rules that no longer matched real operations. AI sped up refactors of simple functions, but it missed nuances: how stock moved between sites, why Sunday shipments spiked, and what “urgent” meant for a key contract. Tools optimize isolated tasks; they don’t understand an urgent 6 p.m. call from operations.
AI coding tools optimize isolated tasks. They see trees, not forests. Layer on messy requirements or legacy integrations, and they struggle to match human judgment.
A 2025 randomized controlled trial by METR revealed that experienced developers using AI coding assistants were actually 19% slower on familiar codebases, spending considerable time cleaning and refining AI outputs despite believing they were faster. This highlights the real-world limitations of AI coding tools in mature, complex projects and underscores the essential role of human expertise in reviewing and integrating AI-generated code.
Here’s the uncomfortable truth: debugging AI-generated code often requires more expertise than writing it. When a model proposes a “correct-looking” solution that fails under load, only deep system knowledge spots the flaw. The last mile - edge cases, production readiness, integrations - still demands senior-level thinking.
The Human Decision Layer
Machine learning predicts patterns; it doesn’t make business choices. AI coding tools optimize for the next plausible answer, not for risk, policy, or trade-offs. That’s why the human decision layer matters.
AI can draft code quickly, but humans must review it line by line for correctness, load behavior, and real-world usage. “Looks good” is not “works in production.”
Code that looks correct isn’t the same as code that is safe, compliant, and reliable under load. People weigh cost versus complexity, choose what to automate, set service levels, and decide when “good enough” is too risky.
According to Salesforce's engineering research, agentic tools can generate code quickly, but developers still review every line before production.
The pattern extends to analytics: agentic tools surface anomalies and model scenarios, but humans define thresholds, interpret context, and make decisions.
Can AI understand your business problem? Not yet, and maybe never fully. Because context isn’t just data points; it’s relationships, history, risk tolerance, and instincts sharpened by years of building and fixing things.
The complexity gap is real. Bridging it requires skilled people making sense of chaos with AI by their side, not instead of them.
When AI Works
After testing GitHub Copilot, Cursor, and other AI code assistants across multiple projects, we developed a practical framework:
Case Study: Invoice Automation
It’s Thursday, 4:30 p.m. The finance team is staring at hundreds of invoices. Historically, it took three people about three days—and the totals still didn’t quite line up.
We used an AI code assistant for the repetitive parts—endpoints, data models, validators, and scaffolding. Documentation was generated alongside the code, including clear API guides. This is the same approach behind MYGOM Invoices.
Then came the hard part: handling legacy invoices. The first pass needed refinement, so we added concrete examples and constraints and let the assistant update only what was required.
By Friday at noon, the first end-to-end run was live: invoices captured from email/shared folders, parsed for key fields, reconciled against bank payments (with up to 95% match accuracy), and spot-checked by humans for edge cases.
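The reconciliation step can be sketched in a few lines. This is an illustrative toy, not the production pipeline: the field names (`invoice_no`, `amount`, `reference`) and the matching rule are assumptions.

```python
# Toy invoice-to-payment reconciliation: match on amount plus an invoice-number
# substring in the payment reference, and send everything ambiguous to a human
# review queue. Field names and the matching rule are illustrative assumptions.

def reconcile(invoices, payments):
    matched, review = [], []
    unused = list(payments)
    for inv in invoices:
        candidates = [p for p in unused
                      if p["amount"] == inv["amount"]
                      and inv["invoice_no"] in p["reference"]]
        if len(candidates) == 1:
            matched.append((inv["invoice_no"], candidates[0]["reference"]))
            unused.remove(candidates[0])
        else:
            # Ambiguous or unmatched: exactly the edge cases humans spot-check
            review.append(inv["invoice_no"])
    return matched, review

invoices = [{"invoice_no": "INV-101", "amount": 250.0},
            {"invoice_no": "INV-102", "amount": 99.5}]
payments = [{"reference": "payment INV-101", "amount": 250.0}]

matched, review = reconcile(invoices, payments)
print(matched)  # [('INV-101', 'payment INV-101')]
print(review)   # ['INV-102']
```

The point of the sketch is the split: the deterministic bulk goes through automatically, and only the residue lands on a person's desk.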
Results: 40% reduction in processing time, 30% lower software spend, and 10× more invoices processed per person.
That’s not “AI replacing jobs.” It’s AI clearing repetitive work so people can focus on accuracy, cash-flow insight, and compliance.
How AI Code Assistants Change Developer Roles
This software development automation didn't just change delivery speed; it flipped team roles upside down.
The junior work - boilerplate, simple endpoints, repetitive patterns - is mostly automated. Our juniors now review AI output and debug integrations within their first six months. Mid-level developers take on problems that used to need a senior. Seniors focus on what matters most - mentoring, architecture, and business choices - not syntax.
Before AI (2022)
After AI (2025)
Prototyping Now
Two years ago, new features meant whiteboards and days of setup. Now a senior can set up data models and sample data with AI while mapping flows with product in real time. Ideas become demos in hours, not days.
Results We See
Critics say AI only helps simple tasks. They miss the point: AI removes the boring parts so people can focus on design, quality, and creativity.
Implementing AI Coding Tools
1. How to Write Effective Prompts for AI Code Assistants
Most teams treat AI coding tools like vending machines - type a request, expect perfect code. That’s backwards. Success comes from asking the right question, checking the answer, and iterating fast.
A first draft looked slick, but it assumed daily logins and referenced fields we don’t have. We added real field names, spelled out edge cases and business rules, tested, refined, and tested again. The loop: prompt → evaluate → refine → test. We move faster because we master this loop, not because we skip it.
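The loop can be made concrete as a small harness. Everything here is a stand-in: `generate` is a placeholder for whatever assistant you actually call, and the "tests" are plain callables.

```python
# Illustrative prompt -> evaluate -> refine -> test loop.
# `generate` is a stub standing in for a real AI assistant call.

def generate(prompt):
    # Placeholder model: it only handles negatives once the prompt mentions them.
    if "negative" in prompt:
        return lambda x: abs(x) * 2
    return lambda x: x * 2

def refine_loop(prompt, tests, max_rounds=3):
    for round_no in range(max_rounds):
        candidate = generate(prompt)
        failures = [name for name, check in tests if not check(candidate)]
        if not failures:
            return candidate, round_no + 1
        # Refine: fold the failing cases back into the prompt as constraints.
        prompt += " | must handle: " + ", ".join(failures)
    raise RuntimeError("abandon AI and hand-code it")

tests = [("positive input", lambda f: f(3) == 6),
         ("negative input must double magnitude", lambda f: f(-3) == 6)]

func, rounds = refine_loop("double the input", tests)
print(rounds)  # 2
```

Note the exit condition: when the loop budget runs out, the right move is to stop prompting and write the code yourself.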
What developers need now
2. System Design Over Syntax
AI is great at syntax and boilerplate. It won’t choose an architecture, plan for failure, or weigh trade-offs.
What matters instead
System thinking has replaced syntax grinding as the core developer skill.
Rollout Without Chaos
Step 1: Audit the work
Track a week of effort:
Step 2: Start with one pain
Good first targets: API docs, new-service scaffolding, test generation, docstrings.
Avoid as starters: core business logic, security, live databases.
Run a 30-day pilot and measure time saved and defect rates.
Step 3: Train review muscles
AI doesn’t replace code review - it raises the bar. Teach teams to spot hallucinated APIs, decide “good enough” vs. rewrite, iterate prompts, and know when to abandon AI and hand-code.
Ready or Not
Software development automation with AI isn’t about replacing humans - it’s about amplifying what skilled developers can accomplish. With the right setup, work that once took days now runs in hours - with humans handling checks and decisions. Developers spend less time on boilerplate and more time on architecture and outcomes. AI isn’t a crutch; it’s an amplifier.
AI won’t replace developers - developers who use AI will. The advantage goes to teams that pair automation with judgment, reviews, and clear guardrails.
The future won’t wait. Teams adopting AI are already shipping faster, documenting more, and burning out less. Teams that delay keep the grunt work, the bottlenecks, and the debt.
If you’re wrestling with legacy systems or aiming higher than your current capacity, start small: pick one workflow, run a 30-day pilot, and train the review muscle. If you want a hand shaping that pilot, or want to see how this looks in your context, reach out.
2025-11-14 17:43:37
Modularization is the most important factor for the long-term maintainability of a growing codebase. Two powerful options are available for Spring Boot projects: Spring Modulith and a multi-module setup. How do they differ, and which approach should you choose for your project?
With Spring Modulith, the main focus is on splitting the actual Java or Kotlin code into modules. Under a main package (for example my/company/app) all subpackages are treated as modules (for example product, inventory). Access to another module is only possible through a defined interface - usually a service in the top-level package of a module as well as other packages that are explicitly exposed. Internal details like repositories and entities are not available.
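To make that concrete, a typical Spring Modulith layout might look like this (package and class names are illustrative):

```
my/company/app/
├── product/
│   ├── ProductService.java        // top-level package = the module's API
│   └── internal/
│       ├── ProductRepository.java // internal: not accessible from other modules
│       └── ProductEntity.java
└── inventory/
    └── InventoryService.java      // may call ProductService, never the internals
```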
The entire structure is verified by the spring-modulith libraries. Other artifacts are taken from src/main/resources, just like in a monolith. With this approach, practically all external libraries of the application can be used without restrictions.
Multi-module projects are based on the native ability of Gradle or Maven to manage multiple subprojects and integrate them into the build process using the Spring Boot plugin. Each module has its own folder in the project (for example app/product, app/inventory) and each module has its own src/main/java (or src/main/kotlin) and src/main/resources folders. These modules are built as real jars - in the final executable fat jar, all other libraries are included again.
An existing monolith can be converted to a Spring Modulith project relatively quickly - you only need to add the dependency and change the package structure. IntelliJ even shows the modules and breaches directly. Hardly any changes are needed in the code itself, only the module interfaces must be defined and used correctly.
For multi-module projects, no additional library is required, but the build files (build.gradle, build.gradle.kts or pom.xml) need much more adjustment. Access is only possible along the defined dependency path - imports outside of the integrated modules are technically not possible. Restricting access to internal classes is not built in and should be defined through coding guidelines rather than code restrictions.
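As a sketch, the dependency path in a Gradle multi-module build is declared like this (module names are illustrative):

```groovy
// settings.gradle — register the subprojects
include 'product', 'inventory'

// inventory/build.gradle — inventory may use product's classes,
// but nothing grants access in the other direction
dependencies {
    api project(':product')
}
```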
For integration tests that cover the entire application, both approaches achieve a similar goal but in different ways.
In Spring Modulith, @ApplicationModuleTest with BootstrapMode.ALL_DEPENDENCIES is used. This runs a test in the current module including all dependent modules. Since Spring Modulith determines modules at runtime based on actual dependencies, missing modules may need to be added via extraIncludes. This can be the case, for example, if a module only contains configuration and has no exposed services.
In multi-module projects, the dependencies are declared in the build files (for example api project(':product') in a build.gradle). When integration tests with @SpringBootTest are run, the application automatically starts with the context available for that module. Since each module also provides its own resources, only those are actually available (for example, only the included Flyway scripts are executed). Some libraries also require additional configuration to operate in a multi-module setup. While libraries like Flyway and Thymeleaf work almost out of the box, support for others such as jte is missing completely and must be provided separately.
Both modularization approaches achieve the main goal of logically splitting the code and thus ensuring long-term maintainability. This makes both approaches clearly better than a monolith, often called the “big ball of mud.” In both cases, it is important to design the module structure around your domain, not around technical layers - so better product, inventory, order instead of services, controller, frontend. More details in best practices for multi-module projects.
Spring Modulith differs less from a monolith, so the learning and configuration effort is smaller. Interfaces need to be clearly defined, which can be either an advantage or a disadvantage. A special test can also generate documentation of the current module structure.
The higher setup effort of multi-module projects comes with the benefit of real, logical modules. All artifacts are directly assigned to a module, making it easier to split work within a team. The code itself is not aware of the modules, since the structure is managed by Maven or Gradle.

Overview of modularization options in Spring Boot
In both approaches, individual modules can later be separated into microservices if needed. However, this step should be carefully considered - architecture is not an end in itself, but the added benefits should clearly outweigh the new costs.
In the Bootify Builder, you can create a modularized Spring Boot app using either Spring Modulith or the multi-module approach. Define your custom module structure, including the assignments of entities and Spring Security configs, and get the full setup directly in the browser.
2025-11-14 17:39:53
"We need to refactor this code."
Blank stares.
"The architecture is getting messy."
More blank stares. (Even a hint of 'leave me alone, we have features to ship.')
"If we don't address this technical debt now, it'll slow us down later."
Polite nods. Then: "Can we just ship this feature first?"
I've had this conversation more times than I can count. And for years, I blamed the product managers.
They don't get it. They only care about features. They're ignoring the foundation while demanding we build another floor.
Then I realized: they weren't the problem. My communication was.
Here's what I was saying: "We need to refactor this code because the architecture is getting messy and technical debt is accumulating."
Here's what they heard: "I want to spend two weeks making things prettier instead of building what customers asked for."
We were speaking different languages. I was talking about code quality. They were thinking about customer value, revenue, and roadmap commitments.
Neither of us was wrong. We just weren't connecting.
I was in yet another meeting trying to explain why we needed to pause feature work to address technical debt.
The product manager's eyes were glazing over. I could see her mentally calculating how to politely end this conversation.
So I stopped mid-sentence and tried something different.
"Imagine you just cut half your finger off. You could properly clean it and put on a bandage, or you could just ignore it. What happens if you keep ignoring half cut off fingers?"
She looked at me like I'd lost my mind. But then she got it.
"They get infected."
"Exactly. And eventually, you can't use that hand at all. That's what technical debt does to our codebase."
Suddenly, we weren't debating code architecture. We were discussing wounds that fester, infections that spread, and hands that stop working.
Technical debt became something she could visualize. And visualization creates urgency.
We're not wrong to use these terms with each other. But with stakeholders who measure success in features shipped, revenue generated, and customer satisfaction scores, we need a different vocabulary.
Not because they're less intelligent. Because they're optimizing for different outcomes.
A product manager isn't ignoring technical debt out of malice.
They're focused on:
And "we need to refactor" doesn't map to any of those goals. So we need to show them how it does.
The change isn't about dumbing things down; it's about finding a shared language.
Here are the metaphors that have worked for me:
The Band-Aid on an Infected Wound
Every quick fix is a band-aid over a cut we didn't properly clean. Every shortcut is like painting over a crack in the wall instead of fixing the foundation.
At first, it looks fine: the wall looks painted, the cut is covered.
But band-aids fall off. Paint peels. And what's underneath is worse than when you started.
Why it works: Everyone understands infections get worse when ignored. Nobody argues with "this will get infected."
The Cracked Foundation
You can keep building floors on a cracked foundation. And for a while, it'll hold. But every new floor adds pressure. The cracks spread. And one day, the whole thing collapses—right when you need it most.
Why it works: It connects technical decisions to risk management, something every business leader thinks about.
Metaphors help, but you know what really works? Translating consequences into business language.
Instead of: "This code is hard to maintain."
Try: "Every new feature in this area takes 3x longer because of how it's structured. That's 20 extra hours per sprint we could be spending on new features."
Instead of: "We have technical debt here."
Try: "This is costing us $15,000 in developer time every quarter. If we invest two weeks now, we'll save that every quarter going forward."
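The dollar figure is easy to derive. A sketch of the arithmetic — the hourly rate and sprint cadence are assumptions, plug in your own:

```python
# Rough cost of the slowdown (all inputs are illustrative assumptions).
extra_hours_per_sprint = 20
sprints_per_quarter = 6
blended_hourly_rate = 125  # USD

quarterly_cost = extra_hours_per_sprint * sprints_per_quarter * blended_hourly_rate
print(quarterly_cost)  # 15000
```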
Instead of: "The architecture is messy."
Try: "Our bug rate in this module is 4x higher than elsewhere.
Customers are reporting issues every week. We can fix that, but it requires addressing the underlying structure."
Hours lost. Money wasted. Bugs multiplying. Velocity decreasing.
These are metrics stakeholders understand. These create urgency.
Here's the framework I use now:
Don't start with opposition. Start with alignment.
Show how the technical problem affects their goals, not yours.
Make the invisible visible. Give them numbers.
Frame it as an investment with clear returns, not a cost.
Empower them to make the decision with full information.
Before the next technical debt conversation:
During the conversation:
After you get buy-in:
Here's what I wish someone had told me earlier: Cross-discipline communication is a core engineering skill.
Not a soft skill. Not a nice-to-have. A core skill.
The best engineers I know aren't just technically excellent. They can translate technical concerns into business value, design implications, or user impact.
They understand that:
Finding the bridge between disciplines isn't about compromising your expertise. It's about making your expertise relevant to people who optimize for different outcomes.
When's the last time you tried to explain a technical problem and got blank stares? What language were you using?
What metaphors resonate with your specific stakeholders?
Are you quantifying the business impact of technical decisions, or just hoping people trust you?
How often do you start technical debt conversations with stakeholder goals vs. engineering concerns?
Technical debt will bite us in the ass. But saying that to stakeholders won't create urgency.
Band-aids on infected wounds will. Credit card interest will. Cracked foundations will.
Every quick fix is a band-aid over a cut we didn't properly clean.
Every shortcut is like painting over a crack in the wall instead of fixing the foundation.
At first, it looks fine. But band-aids fall off. Paint peels. And what's underneath is worse than when you started.
Your job isn't just to identify technical debt. It's to make non-technical people care about it as much as you do.
And that starts with speaking their language.
Photo by charlesdeluvio on Unsplash
2025-11-14 17:38:14
We've all been there. You get a stunning, pixel-perfect design from your UI/UX team. You spin up your project with Tailwind CSS, ready to fly... and then you hit a wall.
That beautiful "burnt sienna" brand color isn't in the default palette. The dashboard card needs a box-shadow that's just slightly different. The spacing in the hero section is a weirdly specific 22px.
What's the first instinct? For many of us, it's to crack open a custom.css file and start writing overrides.
/* DON'T DO THIS! */
.my-custom-card {
box-shadow: 3px 7px 15px 0px rgba(0,0,0,0.08);
margin-top: 22px;
}
.btn-primary {
background-color: #E97451 !important; /* oh no */
}
This feels easy at first, but it's a trap. You're creating a second source of truth, fighting Tailwind's cascade, and littering your codebase with !important tags. Your CSS becomes brittle, hard to maintain, and completely breaks the utility-first workflow.
This isn't just about "clean code" — it's about Proven Speed. A messy CSS override system slows down your entire team, making development cycles longer and shipping new features a chore.
Your tailwind.config.js file is your command center. Before you do anything else, 90% of your customization should happen right here. The key is using theme.extend. This merges your custom values with Tailwind's defaults, rather than replacing them.
Configure Custom Color Palettes and Design Tokens
Let's add that "burnt sienna" and a full brand palette.
// tailwind.config.js
module.exports = {
theme: {
extend: {
colors: {
'brand': {
'light': '#F9A184',
'DEFAULT': '#E97451', // Now 'bg-brand' works
'dark': '#C05739',
},
'burnt-sienna': '#E97451', // A one-off color
},
},
},
// ...
}
Now you can use classes like bg-brand, text-brand-light, or border-burnt-sienna just like any other default class.
Set Up Custom Breakpoints and Typography Scales
Does your design use a different responsive scale? Or a specific font family and size scale? Extend the theme!
// tailwind.config.js
const defaultTheme = require('tailwindcss/defaultTheme');
module.exports = {
theme: {
extend: {
// Add your custom breakpoints
screens: {
'3xl': '1920px',
},
// Set your project's fonts
fontFamily: {
'sans': ['Inter', ...defaultTheme.fontFamily.sans],
'serif': ['Merriweather', ...defaultTheme.fontFamily.serif],
},
// Create a custom typography scale
fontSize: {
'display-lg': ['4rem', { lineHeight: '1.1', letterSpacing: '-0.02em' }],
'display-md': ['3rem', { lineHeight: '1.2', letterSpacing: '-0.01em' }],
},
},
},
// ...
}
Now font-sans will use Inter, and you can use text-display-lg for your main headings.
Define Custom Spacing and Animation Curves
You can do the same for spacing, animation, keyframes, and more.
// tailwind.config.js
module.exports = {
theme: {
extend: {
spacing: {
'13': '3.25rem', // For when 12 (3rem) is too small and 14 (3.5rem) is too big
'section': '80px',
},
animation: {
'fade-in': 'fadeIn 0.5s ease-out forwards',
},
keyframes: {
fadeIn: {
'0%': { opacity: '0', transform: 'translateY(10px)' },
'100%': { opacity: '1', transform: 'translateY(0)' },
},
},
},
},
// ...
}
Now you can use pt-13, mb-section, and animate-fade-in.
Okay, but what about that 22px margin? You don't want to add 22px to your theme, as it's just a one-off. This is where arbitrary values (the square brackets) are a game-changer.
You can feed any value directly into a utility class.
Use Square Bracket Notation for One-Off Styling
Before (Messy CSS):
<div class="my-weird-margin">...</div>
.my-weird-margin { margin-top: 22px; }
After (Tailwind Magic):
<div class="mt-[22px]">...</div>
That's it! No config, no custom CSS file. It's co-located, explicit, and still a utility.
Combine Arbitrary Values with Modifiers
This power-move works with all of Tailwind's responsive and interactive modifiers.
<button
class="bg-[#E97451] p-[11px]
hover:bg-[#C05739] hover:shadow-[3px_5px_10px_rgba(0,0,0,0.1)]
md:mt-[22px]"
>
Click Me
</button>
Handle Whitespace and Resolve Ambiguities
Tailwind is smart. If your value has spaces (like in a grid-template-columns value), just use underscores. Tailwind will convert them to spaces.
<div class="grid-cols-[200px_1fr_min-content]">...</div>
Create Arbitrary Properties and Variants for Advanced Control
Need a CSS property Tailwind doesn't have a utility for? You can even write completely custom properties on the fly. This is amazing for one-off CSS variables or filters.
<div class="[--my-variable:10px] [mask-image:url(...)]">
This div uses a custom property and a mask-image,
all without leaving the HTML.
</div>
You can even create arbitrary variants for building custom selectors:
<div class="[&_p]:text-red-500">
<p>I am red.</p>
<div><p>I am also red.</p></div>
</div>
Quick Question: What's the most complex or "weird" one-off style you've had to build? Could arbitrary values have made it easier? Share your story in the comments!
Sometimes you find yourself using the same arbitrary value over and over. You could add it to the config, but what if it's a more complex group of properties?
Tailwind's plugin system is powerful, but now you can add simple custom utilities right in your main CSS file using the @utility directive.
Register Simple Custom Utilities with @utility
Let's say you constantly need to truncate text at 2 lines.
/* In your main CSS file (e.g., global.css) */
@utility text-truncate-2 {
overflow: hidden;
display: -webkit-box;
-webkit-box-orient: vertical;
-webkit-line-clamp: 2;
}
(Wait, you said no custom CSS! Well, this is technically CSS, but the @utility directive registers the class with Tailwind as a first-class utility, so it's "JIT-aware" and works with all your modifiers (hover:, md:, etc.) automatically. Note that @utility is a Tailwind v4 feature and lives at the top level of your CSS file; on v3, the closest equivalent is defining the class inside @layer utilities. This is the "right way".)
Create Functional Utilities That Accept Arguments
This is where it gets really powerful. You can create utilities that take arguments, just like Tailwind's core utilities.
Let's make a text-truncate utility that can accept any number.
/* In your main CSS file */
@utility text-truncate-* {
/* Set defaults */
overflow: hidden;
display: -webkit-box;
-webkit-box-orient: vertical;
/* Resolve the matched value: bare integers (text-truncate-3) and
arbitrary values (text-truncate-[5]) are both accepted */
-webkit-line-clamp: --value(integer, [integer]);
}
Now you can use this in your HTML:
<p class="text-truncate-2">... (2 lines)</p>
<p class="text-truncate-3">... (3 lines)</p>
<p class="text-truncate-[5]">... (5 lines, arbitrary!)</p>
You've just built a custom, functional utility that supports both bare values and arbitrary values, all from your CSS file.
What if you need a utility to apply in a very specific situation? For example, styling something only when a parent has an aria-disabled attribute?
You can create your own variants with @custom-variant.
Create Theme-Based and State-Based Custom Variants
Let's create a variant that applies when an element is inside a .dark class (for dark mode) and is part of a group (group-hover).
/* In your main CSS file */
/* This creates a 'group-dark-hover:' variant */
@custom-variant group-dark-hover (.dark .group:hover &);
Now you can use it:
<div class="dark">
<div class="group">
<p class="text-white group-dark-hover:text-blue-300">
This text turns blue on hover, but *only* in dark mode.
</p>
</div>
</div>
Implement Complex Multi-Rule Custom Variants
Let's try that aria-disabled example.
/* In your main CSS file */
/* Creates an 'aria-disabled:' variant that matches when an
ancestor has aria-disabled="true" */
@custom-variant aria-disabled ([aria-disabled="true"] &);
And in your HTML:
<div aria-disabled="true">
<button class="opacity-100 aria-disabled:opacity-50">
This button will be faded out.
</button>
</div>
This is powerful stuff! These newer features are a bit more advanced. Have you tried using @utility or @custom-variant in your projects yet? I'd love to hear your experience!
Finally, what if you really just need to add some global base styles (like for a body tag) or create a reusable component class (like .card)?
Don't fight the cascade. Use the @layer directive.
@layer base: For global styles, like setting a default font-smoothing or body background. Tailwind will load these first.
@layer components: For reusable, multi-utility component classes. Tailwind loads these after base styles but before utilities.
/* In your main CSS file */
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
body {
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
@apply bg-gray-50 text-gray-800;
}
}
@layer components {
.card {
@apply rounded-lg bg-white shadow-md p-6;
}
.btn-primary {
@apply py-2 px-4 bg-brand text-white font-semibold rounded-lg shadow-md;
}
.btn-primary:hover {
@apply bg-brand-dark;
}
}
Now you can use <div class="card">...</div> in your HTML. The best part? Utilities still win. If you add p-0 to that card (<div class="card p-0">...</div>), the padding will be 0. You're creating maintainable components without ever losing the power of utility-first.
By embracing these techniques, your entire workflow becomes streamlined.
Responsive Design: All these custom utilities and components (.card, text-truncate-2) will automatically work with Tailwind's built-in modifiers like md:, lg:, etc.
Dark Mode: The same goes for dark mode. dark:bg-brand and dark:card (if you defined it) just work.
Framework Integration: This approach is framework-agnostic. It works beautifully with React, Vue, Svelte, or simple HTML. Your tailwind.config.js and CSS file remain the single source of truth for your design system.
You've Unlocked Tailwind's True Power
By mastering theme customization, arbitrary values, and the @utility and @layer directives, you can build any design, no matter how complex or specific, without ever "dropping out" of the utility-first workflow.
You get the speed of utilities, the maintainability of a design system, and the precision of custom CSS, all rolled into one. Your codebase will be cleaner, your development process faster, and your confidence in tackling any design will skyrocket.
Your Challenge
You've seen the techniques, now it's time to put them into practice.
My challenge to you: Take one part of your portfolio or a side project that has some messy custom CSS. Try to refactor it using one of these methods:
Move custom colors or fonts into your tailwind.config.js.
Replace one-off overrides with arbitrary values ([...]).
Turn a common custom class into a component with @layer components and @apply.
Share your "before" and "after" thoughts in the comments below! Did you get to delete a bunch of custom CSS? What was the biggest win?
I'd love to see what you build.
Mastering Tailwind is a huge step, but sometimes you need a dedicated partner to build scalable, high-performance frontends that users love.
If you're looking for a team to help you accelerate your development and deliver flawless user experiences, I'd love to chat.
Visit frontend and UI/UX SaaS partner at hashbyt.com to see our work and learn more.
2025-11-14 17:34:42
Hello, web developers!
I want to quickly introduce a really cool and relatively new browser API that can be incredibly useful for frontend developers. We’re talking about URLPattern.
This API allows you to quickly parse and match URLs against predefined patterns.
URLPattern gives you access to every component of a URL: protocol, username, password, hostname, port, pathname, search, and hash.
Create a pattern using the new URLPattern() constructor:
const pattern = new URLPattern(...);
The object has two main methods:
test(url)
Returns a boolean — whether the URL matches the pattern.
exec(url)
Returns an object with extracted parameters from the URL.
Example of the result structure:
{
hash: { groups: { 0: '' }, input: '' },
hostname: { groups: { 0: 'domain.com' }, input: 'domain.com' },
inputs: ['https://domain.com/path/1'],
password: { groups: { 0: '' }, input: '' },
pathname: { groups: { id: '1' }, input: '/path/1' },
port: { groups: { 0: '' }, input: '' },
protocol: { groups: { 0: 'https' }, input: 'https' },
search: { groups: { 0: '' }, input: '' },
username: { groups: { 0: '' }, input: '' }
}
Named Parameters
const pattern = new URLPattern({ pathname: "/path/:id" });

pattern.test('https://domain.com/path/1');          // → true
pattern.test('https://domain.com/path/sub_path/1'); // → false
pattern.test('https://domain.com/path');            // → false

pattern.exec('https://domain.com/path/1').pathname.groups; // → { id: '1' }
Wildcard (*)
const pattern = new URLPattern({ pathname: "/path/*" });

pattern.test('https://domain.com/path/1');          // → true
pattern.test('https://domain.com/path/sub_path/1'); // → true
pattern.test('https://domain.com/path');            // → false

pattern.exec('https://domain.com/path/1').pathname.groups;          // → { 0: '1' }
pattern.exec('https://domain.com/path/sub_path/1').pathname.groups; // → { 0: 'sub_path/1' }
Optional Segment ({/sub_path}?)
const pattern = new URLPattern({ pathname: "/path{/sub_path}?" });

pattern.test('https://domain.com/path/1');          // → false
pattern.test('https://domain.com/path/sub_path');   // → true
pattern.test('https://domain.com/path/sub_path/1'); // → false
pattern.test('https://domain.com/path');            // → true

// Note: exec() returns null for URLs that don't match, so check the
// result before reading .pathname.groups:
pattern.exec('https://domain.com/path/1');          // → null (no match)
pattern.exec('https://domain.com/path/sub_path/1'); // → null (no match)
pattern.exec('https://domain.com/path/sub_path').pathname.groups; // → {}
Hostname + Pathname
const pattern = new URLPattern({
  hostname: ":sub.domain.com",
  pathname: "/path/:id"
});

pattern.test("https://api.domain.com/path/42");         // → true
pattern.test("https://api.domain.com/path");            // → false
pattern.test("https://other.other_domain.com/path/42"); // → false

pattern.exec("https://api.domain.com/path/42").pathname.groups; // → { id: '42' }
pattern.exec("https://api.domain.com/path/42").hostname.groups; // → { sub: 'api' }
In the URLPattern constructor, you can use the following properties: protocol, username, password, hostname, port, pathname, search, hash, and an optional baseURL string.
URLPattern is a very convenient API for working with URLs in the browser. It isn't available in some older browsers (a polyfill can fill the gap), but for modern web applications it provides a simple and concise way to parse and match URLs.
Personally, I’m very glad this API exists — it really simplifies the life of frontend developers.
Thank you all for your time and attention!
2025-11-14 17:34:20
In the competitive world of the paper and forest-products industry, small to mid-sized companies are under increasing pressure to deliver higher productivity, reduce costs, and meet stringent sustainability and regulatory requirements. At BrightPath Associates LLC, we specialise in supporting exactly these kinds of enterprises — helping them recruit executive leadership, engineers, and operations talent who can drive innovation and competitive advantage. That is why the topic of mill automation and advanced systems in the pulp & paper sector is especially relevant.
The process of taking raw wood fibre through pulping, forming, drying and finishing is inherently resource-intensive: requiring significant energy, water, chemical inputs, and labour. According to industry research, the integration of modern automation systems—IoT sensors, advanced process control (APC), SCADA systems, and artificial intelligence (AI)-driven analytics—can meaningfully improve operational metrics such as throughput, quality, and cost reduction.
For companies in the paper & forest-products industry that are navigating tight margins and demanding customers, these technologies are no longer optional—they are rapidly becoming business-critical enablers of sustainable growth and competitive differentiation.
Here are several of the advanced system types now being deployed and the advantages they bring:
Modern mills deploy a network of sensors and IIoT (Industrial Internet of Things) devices that continuously monitor conditions like temperature, moisture, pressure, flow rate and chemical concentrations. When paired with analytics platforms, these systems offer predictive insights—identifying a potential machine fault before it causes downtime, or highlighting inefficiencies in energy or water use. One study indicates such systems can reduce unplanned outages by up to 30% and material waste by 20-25%.
Rather than manually adjusting key parameters in pulping or drying processes, APC systems dynamically optimize these parameters in real time. This means more consistent product quality, fewer rejects and less variation in critical metrics. For example, controlling moisture content more precisely in the dryer section prevents over-drying (wasting energy) or under-drying (risking quality faults).
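To make the idea of "dynamically optimizing parameters in real time" concrete, here is a deliberately simplified sketch of the kind of closed-loop adjustment an APC system performs, using a toy proportional controller on dryer moisture. All numbers, units, and names are illustrative; real APC software uses multivariable models, constraints, and careful tuning.

```javascript
// Toy proportional controller for dryer moisture (illustrative numbers only).
// A real APC system uses multivariable models, constraints and tuning.
function nextSteamSetpoint(currentSetpoint, measuredMoisture, targetMoisture) {
  const gain = 0.5; // illustrative proportional gain
  const error = measuredMoisture - targetMoisture;
  // Too wet → raise the steam setpoint; too dry → lower it (saving energy).
  return currentSetpoint + gain * error;
}

let setpoint = 100; // arbitrary units
setpoint = nextSteamSetpoint(setpoint, 8.2, 7.5); // moisture above target
// setpoint rises to about 100.35, nudging the sheet toward the 7.5% target
```

The point of the sketch is the loop itself: measure, compare to target, adjust, repeat every few seconds, which is what removes the over-drying and under-drying swings of manual operation.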
Rather than relying only on manual inspection, firms are leveraging machine-vision systems and AI models that flag defects, identify trends and offer immediate correction suggestions. These systems are especially relevant when companies aim for high-grade speciality papers or board products where tolerances are tighter, and customer expectations are higher. The result: higher yield, lower rework, improved margins.
While large global mills may implement full green-field automation platforms, many small to mid-sized mills can benefit from targeted retrofits—such as converting a manual paper stacking line into a semi-automated cell, or implementing condition-based maintenance on critical assets rather than full-scale replacement. This modular approach reduces upfront cost, shortens deployment time, and improves ROI for the kinds of companies we partner with.
For executives, plant managers and talent strategists in the paper & forest-products sector, the value of mill automation goes beyond technical improvements. Here’s how to think about it:
- Sustainability and Compliance: Automation leads to resource-efficient operations (less water, less energy, less waste), which supports ESG goals, regulatory compliance and corporate reputation.
- Operational Resilience: Sophisticated automation systems reduce reliance on manual workforce (especially relevant given labour challenges) and make processes more robust against disruptions.
- Competitive Differentiation: For smaller mills, being able to offer higher quality, faster turnaround, more consistent output than rivals helps win contracts. Automation enables that.
- Talent Attraction & Retention: Younger engineers and technical staff increasingly seek work in digitally advanced operations—not legacy manual plants. Investing in automation helps position your company as forward-looking and appealing.
- Lead Generation & Business Development: As a recruitment partner, BrightPath can leverage your automation strategy as a proof point: reinforcing to prospects and candidates that your firm is committed to growth, innovation and operational excellence.
If your company is planning to deploy advanced automation systems, here are some practical guidelines:
- Begin with a pilot: Select a section of your mill (e.g., dryer, finishing line) where improvements will provide immediate cost/quality impact. Deploy sensors and analytics there, measure results, then scale.
- Link to talent strategy: Identify the digital and automation skills required (data analytics, IIoT architecture, control engineers, maintenance analytics), and build your hiring strategy accordingly.
- Partner wisely: Many technology vendors offer modular solutions tailored for smaller operations—tap into their domain expertise and ensure training is part of the package.
- Ensure change management: Automation is not only about tech; it's also about people. Provide upskilling and engagement for your team to embrace new systems rather than resist.
- Measure ROI and communicate it: Track metrics like reduced downtime, scrap rate, energy consumption, yield improvement. Communicate successes clearly to the leadership team and staff to maintain momentum.
Waiting is no longer an option. Market demands for sustainable packaging, digitisation and agility mean that even smaller mills must respond or risk falling behind. As one industry summary notes: “AI, IoT & automation are reshaping pulp and paper mills—driving efficiencies, real-time control and sustainable production in modern operations.”
The combination of technology, operations excellence and talent strategy creates a virtuous cycle: better technology attracts top talent, which drives better operations and improved results, which in turn enhances your reputation and business growth.
At BrightPath Associates, we specialise in partnering with paper & forest-products companies, especially small to mid-sized enterprises, to source and secure talent that is aligned with automation and digital transformation strategies. Whether you are hiring a Director of Automation, Controls Engineer, Data Analyst or Plant Manager with a digital-first mindset, our deep industry network and consultative approach give you an edge.
If you’re ready to explore how adopting advanced automation systems in your mill can both improve efficiency and strengthen your talent pipeline, we can help you design the hiring roadmap and identify candidates who fit your strategic vision.
If your company is seeking to transform its operations through automation — and build the leadership team to support that transformation — let’s connect. Drop us a comment below, ask your questions, or reach out directly so we can explore how BrightPath Associates can accelerate your recruitment strategy and operational excellence project.