2026-01-30 00:08:38
Most backup and sync tools assume one of two models: periodic point-in-time backups, or full two-way (mirroring) synchronization.
There is a common use case that falls between these models and is poorly served by existing tools:
Keep a NAS continuously updated from a local machine, one-way, while preserving existing files on the NAS.
This is the problem nas-sync-script-builder exists to solve.
The target scenario: a local machine continuously pushing changes to a NAS share, one-way, without ever deleting or overwriting what is already there.
Many tools approximate this behavior, but none provide it directly:
rsync can do it, but requires careful flag management on every invocation
lsyncd is powerful, but tricky to configure correctly
What’s missing is a safe, repeatable way to set up the mount, the initial copy, and the continuous watching.
nas-sync-script-builder is a configuration generator, not a daemon.
It provides:
installation of the required packages (rsync, lsyncd, cifs-utils)
a generated /etc/fstab block for mounting the NAS share
an initial one-way rsync pass
an lsyncd configuration for continuous updates
The Python tool exists to make this setup explicit, reviewable, and reproducible.
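To make the generated pieces concrete, here is a minimal Python sketch in the spirit of the tool. The function names and output formats are illustrative assumptions, not the tool's actual API; the rsync flags and lsyncd options shown are real ones.

```python
# Hypothetical sketch of a config-driven generator; names and output
# formats are illustrative, not nas-sync-script-builder's actual API.

def build_fstab_line(share: str, mountpoint: str, creds: str) -> str:
    """CIFS mount entry for the NAS share (requires cifs-utils)."""
    return f"{share} {mountpoint} cifs credentials={creds},_netdev 0 0"

def build_initial_sync(src: str, dst: str) -> list[str]:
    """One-way rsync that never deletes or clobbers newer files on the NAS."""
    # --update skips files that are newer on the destination;
    # deliberately no --delete, so existing NAS files are preserved.
    return ["rsync", "-a", "--update", src.rstrip("/") + "/", dst]

def build_lsyncd_config(src: str, dst: str) -> str:
    """lsyncd stanza for continuous one-way updates, with delete disabled."""
    return (
        'sync {\n'
        '    default.rsync,\n'
        f'    source = "{src}",\n'
        f'    target = "{dst}",\n'
        '    delete = false,\n'
        '}\n'
    )

print(build_fstab_line("//nas/share", "/mnt/nas", "/etc/nas-credentials"))
print(" ".join(build_initial_sync("/home/user/data", "/mnt/nas")))
```

The point is that every generated artifact is plain text you can read before applying, which is what makes the setup reviewable.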
2026-01-30 00:06:35
mcfly just hit us with his new track, “Bayé,” on Cercle Records! Get ready for some hypnotic afro-house grooves blended with a whole lot of emotion and nostalgic vibes – it’s a truly immersive experience that’s out everywhere now.
This guy, previously known as hip-hop's DJ Jeezy, totally reinvented himself as mcfly in 2024 and is quickly becoming a house music sensation, even catching the ears of industry heavyweights like Rampa and Bedouin.
Watch on YouTube
2026-01-30 00:00:00
Generative AI is stateless. It doesn't remember that it answered the same question 5 seconds ago. This leads to redundant token usage and unnecessary latency. Today, I solved this by implementing a Memoization Pattern using DynamoDB.
The Strategy
Instead of blindly calling bedrock.invoke_model(), I wrapped the call in a caching function:
Fingerprinting: I serialize the transaction list and hash it using MD5. This creates a unique key (cache_key).
Lookup: The Lambda checks DynamoDB for this key.
TTL (Time To Live): I set a 1-hour expiration on cached items using DynamoDB's native TTL feature.
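The three steps above can be sketched like this. This is a simplified stand-in: a dict plays the role of the DynamoDB table, and the real Lambda would use boto3's get_item/put_item with DynamoDB's native TTL attribute handling expiry.

```python
import hashlib
import json
import time

# In the real Lambda this dict is a DynamoDB table accessed via boto3
# (table.get_item / table.put_item), and DynamoDB's native TTL feature
# expires items automatically. Here a dict plus a manual check stands in.
_cache: dict[str, dict] = {}
TTL_SECONDS = 3600  # 1-hour expiration, as in the post

def fingerprint(transactions: list[dict]) -> str:
    """Serialize the transaction list and hash it to get a stable cache key."""
    payload = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.md5(payload).hexdigest()

def analyze_with_cache(transactions: list[dict], invoke_model) -> tuple[str, bool]:
    """Return (analysis, cache_hit). invoke_model is the expensive LLM call."""
    cache_key = fingerprint(transactions)
    item = _cache.get(cache_key)
    if item and item["ttl"] > time.time():
        return item["analysis"], True       # cache hit: no model call, no cost
    analysis = invoke_model(transactions)   # e.g. bedrock.invoke_model(...)
    _cache[cache_key] = {"analysis": analysis, "ttl": time.time() + TTL_SECONDS}
    return analysis, False

calls = []
def fake_model(txs):
    calls.append(1)
    return "summary"

print(analyze_with_cache([{"amount": 5}], fake_model))  # first call hits the model
print(analyze_with_cache([{"amount": 5}], fake_model))  # second is served from cache
```

Note that `sort_keys=True` matters: without it, two logically identical payloads could serialize differently and miss the cache.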
Now, when I refresh my dashboard, I see a ⚡ icon next to the analysis. That tells me: "This didn't cost you a penny."
2026-01-29 23:55:38
This article is a logical continuation of our dive into architecture. If in the first part we brought order inside the "black box" called ViewModel, here we'll step beyond its boundaries. You'll learn how to rip out navigation logic from ViewControllers and ViewModels, why prepare(for:sender:) is an architectural dead end, and how to build a navigation system that won't turn your project into spaghetti when you add the tenth screen. We'll break down the concept of Child Coordinators, solve memory leak problems, and discuss whether this pattern survived in the SwiftUI era.
If MVVM is responsible for what happens inside a screen, then Coordinator is the answer to the question "where are we going next?".
In Apple's standard approach, navigation is baked right into UIViewController. This is convenient for small demo projects, but in real production it leads to controllers knowing way too much about each other. When LoginViewController creates HomeViewController, you get tight coupling. Try changing the flow later or reusing LoginViewController somewhere else - and you'll understand why this is a bad idea.
I believe a controller should be selfish. It shouldn't know where it came from or where it's going. Its job is to display data and report that the user tapped a button. That's it.
Let's be honest: who among us hasn't written self.navigationController?.pushViewController(nextVC, animated: true) directly in a button tap handler?
Problems start when the flow grows and your navigation code fills up with if-else statements and hacks.
A coordinator is a simple object that encapsulates the logic of creating controllers and managing their lifecycle. Let's start with a basic protocol.
Swift
protocol Coordinator: AnyObject {
    var childCoordinators: [Coordinator] { get set }
    var navigationController: UINavigationController { get set }
    func start()
}
Why do we need childCoordinators? This is a critically important point for memory management. If you just create a coordinator and don't save a reference to it, it'll die right after exiting the method. The childCoordinators array keeps them "alive" while working with a specific flow.
The start() method is the entry point. No magic, just creating the needed screen and displaying it.
Swift
final class AuthCoordinator: Coordinator {
    var childCoordinators: [Coordinator] = []
    var navigationController: UINavigationController

    // Delegate for communication with parent (e.g., AppCoordinator)
    weak var parentCoordinator: MainCoordinator?

    init(navigationController: UINavigationController) {
        self.navigationController = navigationController
    }

    func start() {
        let viewModel = LoginViewModel()
        // Here we link the VM and Coordinator
        viewModel.coordinator = self
        let viewController = LoginViewController(viewModel: viewModel)
        navigationController.pushViewController(viewController, animated: true)
    }

    func showRegistration() {
        let regVC = RegistrationViewController()
        navigationController.pushViewController(regVC, animated: true)
    }
}
How does the ViewModel tell the coordinator "time to navigate"? I prefer one of two approaches: delegates or closures.
Personally, I've been leaning toward closures lately because they reduce boilerplate code. But if the flow is complex and there are many events, delegates look cleaner.
Swift
// Closure approach in ViewModel
final class LoginViewModel {
    var onLoginSuccess: (() -> Void)?
    var onForgotPassword: (() -> Void)?

    func loginPressed() {
        // ... API call
        onLoginSuccess?()
    }
}

// In Coordinator
func start() {
    let vm = LoginViewModel()
    vm.onLoginSuccess = { [weak self] in
        self?.showMainFlow()
    }
    // ...
}
The biggest pain in UIKit when using coordinators is the system back button. The user taps it, the controller gets removed from memory, and your coordinator is still hanging in the childCoordinators array. Congratulations, you have a memory leak.
There are a few ways to solve this:
UINavigationControllerDelegate: the navigationController(_:didShow:animated:) method lets you check which controller was removed and, if it was "ours", remove its coordinator too.
viewDidDisappear: have the controller itself report when it's being popped. Also has nuances.
I usually use the first option. It requires a bit more code, but it works reliably and preserves native system behavior.
Imagine the app as a tree. At the root is AppCoordinator. It decides whether to launch AuthCoordinator or MainCoordinator.
When AuthCoordinator finishes its work (user logged in), it must notify the parent. The parent removes it from childCoordinators, clears the stack, and launches the next flow.
Swift
func didFinishAuth() {
    // Remove from children list
    childCoordinators.removeAll { $0 === authCoordinator }
    // Launch main flow
    showMainTabbar()
}
Coordinator is the perfect place for dependency injection. Instead of passing NetworkService through five controllers, you keep it in the coordinator (or get it from a DI container) and pass it only to the ViewModel that actually needs it.
This makes the code insanely easy to test. You can create a coordinator with MockNetworkService and verify that it handles errors correctly.
With the release of NavigationStack in iOS 16 and NavigationPath, Apple gave us tools for managing navigation at the state level. Does this mean the death of the pattern?
Yes and no. In SwiftUI, the classic coordinator that holds UINavigationController is no longer needed. But the concept of separating navigation logic from View hasn't gone anywhere. Now we often call it Router.
Instead of manipulating controllers, Router manages NavigationPath (an array of data representing the stack).
Swift
class Router: ObservableObject {
    @Published var path = NavigationPath()

    func navigateToDetails(product: Product) {
        path.append(product)
    }
}
It's the same coordinator, just dressed in modern SwiftUI clothes.
A coordinator should not hand a UIViewController to the outside world. It should show or push it itself.
Use weak for references to parent coordinators, otherwise you'll create a retain cycle, and memory will never be released.
To sum up: Coordinator isn't just an extra layer of code. It's freedom. Freedom to swap screens in five minutes, freedom to test navigation separately from UI, and freedom to never see prepare(for:segue:) in your nightmares. Yes, it requires discipline and writing slightly more protocols, but in a project that lives longer than a couple of months, it pays off handsomely.
2026-01-29 23:55:08
Nuxt 4 + Tailwind + Cloudflare. Sounds like the dream stack, right?
When I decided to rebuild BulkPicTools, that’s exactly what I thought. "Nuxt 4 is built for speed," I told myself. "This is going to be an easy win."
Well, reality hit me pretty hard.
Honestly, this is the "Bleeding Edge Tax." Everything runs perfectly on localhost, but the moment you try to deploy, things start breaking. Since I’ve already spent the sleepless nights fixing this stuff, I’m writing it down so you (hopefully) don’t have to.
When I deployed the first version, I was feeling pretty good. The UI was snappy, Tailwind was doing its job. I ran a Lighthouse test just to see that sweet 100/100.
Score: < 60. 🔴

(Caption: Ouch. The initial mobile performance score was a disaster.)
I stared at the screen for a solid minute. Isn't Nuxt 4 supposed to be a performance beast?
Turns out, it was the CSS. By default, the build was generating a bunch of separate CSS files for Tailwind. The browser had to wait for all those network requests before it could paint anything. My First Contentful Paint (FCP) was trash.
The fix was a bit aggressive: I forced the styles directly into the HTML.
I stopped using <style> blocks in my Vue components entirely (strict discipline required) and forced Nuxt to inline the compiled Tailwind CSS.
Here is the "magic switch" I added to nuxt.config.ts:
// nuxt.config.ts
export default defineNuxtConfig({
  // Centralize all Tailwind styles here
  css: ['~/assets/css/tailwind.css'],
  features: {
    // This is the lifesaver: inline the styles to kill network requests
    inlineStyles: true
  },
})
After this? The score shot back up to 90+. It feels a bit weird to inline everything in 2025, but hey, if it makes the site fast, I’ll take it.

Finally seeing green. The "Inline Styles" config brought the mobile score from <60 back to 99.
This is the sentence every developer dreads.
I chose Cloudflare Pages because it’s free and fast. Locally, npm run generate worked flawlessly. I pushed to GitHub, poured a coffee, and waited for the green checkmark.
Build Failed. 💥
The error was complaining about a missing queryCollection. This is a core function I use to fetch blog content. It turns out Cloudflare’s build environment handles module resolution differently than my local Mac.
Did I have time to debug Cloudflare's internal Node environment? No.
So, I decided to "lie" to the compiler.
I wrote a Mock Adapter to trick the build process into thinking those modules existed.
First, I aliased the missing modules in the config:
// nuxt.config.ts
import { fileURLToPath } from 'url'

export default defineNuxtConfig({
  alias: {
    // Can't find it? Redirect to my fake file.
    '@nuxt/content/server': fileURLToPath(new URL('./adapter-content.ts', import.meta.url)),
    '@nuxt/content/dist/module.mjs': fileURLToPath(new URL('./adapter-content.ts', import.meta.url))
  },
})
Then, I created the "liar" file, adapter-content.ts:
// adapter-content.ts

// 1. Mock queryCollection for the Content v3 API
export const queryCollection = () => {
  return {
    all: () => Promise.resolve([]),
    where: () => ({
      all: () => Promise.resolve([])
    })
  }
}

// 2. Mock the legacy API too, just in case the Sitemap module asks for it
export const serverQueryContent = () => {
  return {
    find: () => Promise.resolve([]),
    where: () => ({
      find: () => Promise.resolve([])
    })
  }
}

export default {
  queryCollection,
  serverQueryContent
}
Is it a dirty hack? Yes. Did it work? Absolutely. The build passed, and the site went live. Good enough for me.
My plan is to launch dozens of specific tools, like "WebP to JPG" or "HEIC to PNG."
If I had to manually code the Title, Meta Description, JSON-LD Schema, and i18n support for every single one... I’d be stuck writing boilerplate code for the rest of my life. Plus, trends move fast. If I see a keyword opportunity, I need that page live in under 30 minutes.
So I stopped writing pages and started writing config.
I built a Configuration-Driven Engine: each tool is a config entry, rendered by a single generic .vue file that reads the config.
Breadcrumbs? Generated from the route.
Tool Schema? Filled in automatically.
Translations? It looks up the keys based on the ID.
Now, if I want to launch a new tool, I just commit a few lines of JSON. That’s how it should be.
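For illustration only, a new tool entry might look something like this. The actual schema is not public, so every field name here is a hypothetical guess at what such a config could contain:

```json
{
  "id": "heic-to-png",
  "route": "/heic-to-png",
  "from": "heic",
  "to": "png",
  "i18nKey": "tools.heicToPng",
  "schemaType": "WebApplication"
}
```

The generic page component then derives the title, meta description, JSON-LD, and translations from those few fields.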
Building with bleeding-edge tech like Nuxt 4 is a bit of a rollercoaster. You get the incredible performance, but you also have to be ready to patch the holes in the ecosystem yourself.
Is the code perfect? Definitely not. But BulkPicTools is live, it loads instantly, and I didn't lose all my hair building it.
Want to see if my hacks held up? Go check it out:
👉 BulkPicTools.com
2026-01-29 23:52:01
When I started building Kiploks, the original idea came from Radiks Alijevs, the creator and lead developer behind the project.
The vision was straightforward (at least on paper):
Build a powerful system for optimizing algorithmic trading strategies.
Distributed computing.
Large parameter spaces.
Automation at scale.
Like many engineers, we assumed the main bottleneck was compute and tooling.
That assumption turned out to be wrong.
If you’ve worked with trading strategies (or any data-driven system), you’ve probably seen this pattern: a strategy looks excellent in backtesting.
And then it falls apart in live conditions.
At first, the reaction is predictable:
“We just need better optimization.”
More data.
More parameters.
More compute.
But optimization doesn’t ask why something works.
It only asks how to maximize it.
After months of building and testing the engine, one thing became clear: optimization excels at fitting the past, not at explaining it.
The more powerful the optimizer became, the easier it was to generate strategies that looked robust - but weren’t.
As Radiks Alijevs noted during development:
A fast optimizer without deep analysis just produces convincing failures faster.
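The "convincing failures" Radiks describes are easy to reproduce. This is a hedged sketch with purely synthetic data, not Kiploks code: "optimize" a strategy parameter over mean-zero noise, and the best in-sample score always looks like a real edge even though none exists.

```python
import random

def sharpe_like_score(seed: int, n: int = 250) -> float:
    """Mean 'daily return' of one candidate parameter set. The data is
    pure mean-zero noise, so no candidate has any real edge."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(n)) / n

# "Optimization": pick the best of 200 parameter sets on in-sample noise.
best = max(range(200), key=sharpe_like_score)
best_in_sample = sharpe_like_score(best)

# "Live trading": the winning parameters see fresh noise (a new seed),
# and the apparent edge typically evaporates.
out_of_sample = sharpe_like_score(best + 1_000_000)

print(f"best in-sample mean:   {best_in_sample:+.4f}")
print(f"same params, new data: {out_of_sample:+.4f}")
```

The best-of-200 in-sample score is positive by construction of the selection step; only analysis (out-of-sample, stability, sensitivity) reveals that it was selection bias, not signal.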
That insight forced a hard rethink.
We were focused on compute, scale, and automation.
But those aren’t the questions professionals actually care about.
The real questions are: Why does this strategy work? When does it stop working? How fragile is it?
Those questions aren’t answered by optimization.
They’re answered by analysis.
At some point, development paused and the vision was revisited.
Instead of building a bigger optimizer, the focus shifted toward exposing fragility.
The goal changed from:
“Find the best strategy”
to:
“Understand whether this strategy should be traded at all.”
This shift fundamentally changed how Kiploks is being built.
Kiploks is no longer about generating “winning configs”.
Under the direction of Radiks Alijevs, development is centered on answering harder questions early.
In practice, this means fewer features - but much deeper ones.
This lesson isn’t unique to trading.
It applies to any data-driven system tuned against historical data.
Optimization without analysis creates false confidence.
And false confidence is expensive.
Kiploks continues to be built in public - but with a clearer goal.
Not to showcase impressive metrics,
but to expose where systems break.
That change has reshaped the engine, the research workflow, and the definition of progress itself.
Optimization is easy to sell.
Analysis is harder - and more honest.
Revisiting the vision wasn’t a setback.
It was a correction.
Kiploks is now about understanding robustness before trusting performance - a direction defined and developed by Radiks Alijevs.
And that turned out to be the real problem worth solving.