2026-02-11 22:18:59
Throughout my professional career, I’ve had the opportunity to work with people who approached challenges in very different ways. Different perspectives. Different styles of handling delivery. Different reactions to tight deadlines, innovation pressure, or critical production incidents.
I’ve seen technical managers visibly stressed over a production release gone wrong. I’ve seen managers push hard deadlines, insisting that a specific feature must be delivered by a certain date, no matter what.
But I’ve also worked with technical leads.
And I say lead intentionally.
Because these were the people who, regardless of the chaos around them, guided their teams toward success. They didn’t simply manage tasks — they led people. They created clarity in uncertainty. They built confidence in moments of doubt. They turned obstacles into opportunities to grow.
Years passed. I gained experience across projects, teams, and industries. Eventually, I found myself in a new position — mentoring a fairly large group of people.
To be honest, it was scary at first.
With responsibility comes pressure. And whenever I felt stress building up, I paused and asked myself one simple question: “What would a leader do?” More specifically: What would my last tech lead do?
If there is one essential lesson I learned from him, it’s this: when you give a team enough freedom, confidence, and resources, the outcome can surpass even your highest expectations. So that’s exactly what I chose to do.
Building, Not Controlling
For nearly five months, I focused on creating an environment of growth rather than control. I shared books, online courses, and practical learning materials. I designed requirements from scratch and let them implement solutions on their own. I encouraged learning by doing.
We evolved into working in teams, adopting Agile practices. We worked on presentation skills, collaboration, team spirit, and technical depth.
We discussed not only how to write code, but how to present yourself, how to communicate ideas, how to ask for help, and how to support others. It wasn’t just about technology. It was about building professionals.
The outcome? It amazed even me. Some of them are already contributing to commercial projects and delivering outstanding results. Others became involved in a Machine Learning proof of concept while also exploring Full Stack Development. Some are still in the learning phase — but growing so steadily that they could confidently walk into an interview and perform flawlessly. What changed wasn’t just their technical skills. It was their confidence.
So what was the secret ingredient? Leading without constraining. Empowering instead of controlling. Providing the right tools and trusting people to use them. Creating a mindset where saying “I don’t know” is not weakness — and asking for help is not failure.
True leadership isn’t about pressure. It isn’t about rigid control or fear of mistakes. It’s about building an environment where people feel capable, supported, and trusted. Because when you do that, they won’t just meet expectations. They’ll exceed them.
2026-02-11 22:17:53
Does your mobile site wobble side-to-side because of frustrating Elementor Flexbox Container Overflows that ruin the user experience? You finish a complex build. It looks great on your desktop monitor. Then you check the live site on your phone. The page slides left and right. This horizontal scroll is the "ghost of web development." It signals a breakdown in the structural relationship between parent containers and child widgets. You must master the logic of flexbox to kill this bug forever.
The horizontal scroll appears because a child element exceeds the parent container's maximum width. Imagine trying to fit a 12-inch ruler into a 10-inch box. The ruler will poke out. In web design, that "poking out" creates a white gap on the right and a shaky screen. Flexbox tries to fit everything on a single line by default. Sometimes a widget refuses to shrink. This forces the entire section to extend beyond the viewport edge.
What Is the Parent-Child Dance in Flexbox?
Many imported kits use fixed widths, such as 600px, for buttons or images. On a desktop, this looks fine. On a small mobile phone (375px), that 600px button is too big. It refuses to scale down. This forces the browser to display a horizontal scrollbar so you can see the rest of the button. You must find these hidden fixed values to stop the wobble.
The Navigator tool (Ctrl/Cmd + I) allows you to see the "skeleton" of your page without clicking the canvas. It reveals all hidden containers and widgets in an organized list. This is the fastest way to find a rebellious child element that is pushing the screen too wide.
Isolate the Section: Click the "Eye" icon next to a main section in the Navigator to hide it.
Test the Scroll: Check your site. If the horizontal scroll disappears, you found the broken section.
Drill Down: Open the section, then hide its children one by one.
Fix the Culprit: When you find the exact widget causing the leak, check its width and margin settings.
You solve this by teaching your containers to "Wrap" and your widgets to "Shrink." By default, Flexbox tries to keep everything in one straight horizontal line. If you have too many items, they will fly off the screen.
| Setting | Action | Real-World Result |
|---|---|---|
| Wrap | Set to Wrap | Items stack vertically instead of pushing the screen wide |
| Flex-Shrink | Set to 1 | Forces the widget to get smaller to stay inside the box |
| Flex-Grow | Set to 0 | Prevents items from stretching and breaking the layout |
| Overflow | Set to Initial | Lets you see the overflow while you are still fixing it |
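If you prefer to think in raw CSS, these settings map roughly to the properties below. This is a sketch with placeholder class names, not Elementor's generated selectors.
/* Rough CSS equivalents of the settings above; class names are placeholders */
.my-container {
  display: flex;
  flex-wrap: wrap;   /* items drop to a new row instead of widening the page */
}
.my-container > .my-widget {
  flex-shrink: 1;    /* allow the widget to get smaller to stay inside the box */
  flex-grow: 0;      /* don't stretch and break the layout */
  min-width: 0;      /* flex items can't shrink below their content size without this */
  max-width: 100%;   /* caps imported fixed widths (like 600px) at the parent's width */
}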
Negative margins are like pulling an object with a rope. If you pull a widget 50px to the right, you are literally dragging it off the screen. Beginners often use this to create overlapping images. However, the browser still counts that hidden 50px as part of the page width. This is the #1 cause of the horizontal wobble.
The Pro CSS Alternative
Instead of using negative margins, use Absolute Positioning. This allows the widget to float over other elements without affecting the page width.
/* ❌ THE BEGINNER MISTAKE */
.bad-widget {
margin-right: -100px; /* This breaks the mobile screen */
}
/* ✅ THE PRO SOLUTION */
.good-widget {
position: absolute;
right: -20px; /* Floats safely without stretching the page */
z-index: 5;
}
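One detail to check when you use this pattern: position: absolute places the widget relative to its nearest positioned ancestor, so the wrapper around it usually needs position: relative. A minimal sketch, assuming a wrapper class of your own:
/* Assumed wrapper around .good-widget; use your real container class */
.parent-section {
  position: relative; /* .good-widget now positions against this box, not the page */
}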
The "Nuclear Option" sets the container to Overflow: Hidden. This acts like a pair of scissors. It simply cuts off anything that tries to go outside the box. This is a great "safety net," but it should not be your only fix.
The Risks of Cutting Content
Some people set overflow-x: hidden on the entire <body>. While this stops the wobble, it comes with a major penalty: It breaks "Sticky" elements. If you use this global hack, your sticky headers or sidebars will likely stop working entirely. It is always better to fix the specific Elementor Flexbox Container Overflows one at a time.
Clean logic is about respecting boundaries. You must build your site like a set of nesting dolls. Every child must fit perfectly inside its parent. Use the Navigator to stay organized. Always set your containers to Wrap on mobile. Avoid negative margins that pull content into the "no-go" zone of the phone screen.
Stable designs require a solid foundation from the start. If you are building a complex marketplace, choose a framework designed for stability. A base like the Drivlex - Vehicles Buy/Sell Website Elementor Template uses professional flexbox architecture. It handles large amounts of automotive data without breaking the layout. Focus on growing your business and let the code stay firm. Apply these flexbox rules today and stop the wobble for good.
2026-02-11 22:06:31
Mobile browsers love making 100vh feel… optimistic.
You build a full-height layout, it looks fine — then the URL bar collapses, the on-screen keyboard opens, and suddenly your “full-height” section is taller than what is actually visible, with content hiding behind the keyboard.
Here’s a small, production-friendly approach: use the Visual Viewport (what the user can actually see) and expose it to CSS.
100vh breaks on mobile
On desktop, 100vh usually matches the visible area. On mobile, the visible area changes frequently because of the URL bar expanding and collapsing, the on-screen keyboard opening and closing, and other dynamic browser UI.
So 100vh can behave like “maximum possible height” rather than “currently visible height”.
The approach:
Read window.visualViewport.height (the visible area height).
Expose it to CSS as a custom property, --vvh.
Use min-height: var(--vvh) for full-height containers.
This tends to behave better when the URL bar expands or collapses and when the on-screen keyboard opens, because --vvh tracks what the user can actually see rather than the maximum possible height.
Setting --vvh
Add this once on the client (no deps):
function setVVH() {
const vv = window.visualViewport;
const h = vv?.height ?? window.innerHeight;
document.documentElement.style.setProperty("--vvh", `${Math.round(h)}px`);
}
setVVH();
window.visualViewport?.addEventListener("resize", setVVH);
window.visualViewport?.addEventListener("scroll", setVVH);
window.addEventListener("resize", setVVH);
Why Math.round()?
Some browsers report fractional heights, which can cause tiny “layout jitter”. Rounding makes it steadier.
.fullHeight {
min-height: var(--vvh, 100vh);
}
I prefer min-height because it’s more forgiving for real content (forms, error messages, dynamic blocks). If you truly need strict sizing, you can use height, just be aware it may feel more brittle.
<main class="fullHeight">
...
</main>
.modal {
max-height: var(--vvh, 100vh);
overflow: auto;
}
.appShell {
min-height: var(--vvh, 100vh);
display: grid;
grid-template-rows: auto 1fr auto;
}
Optional: batching updates with requestAnimationFrame
If you notice frequent events, schedule updates:
let scheduled = false;
function setVVH() {
const vv = window.visualViewport;
const h = vv?.height ?? window.innerHeight;
document.documentElement.style.setProperty("--vvh", `${Math.round(h)}px`);
}
function scheduleVVH() {
if (scheduled) return;
scheduled = true;
requestAnimationFrame(() => {
scheduled = false;
setVVH();
});
}
setVVH();
window.visualViewport?.addEventListener("resize", scheduleVVH);
window.visualViewport?.addEventListener("scroll", scheduleVVH);
window.addEventListener("resize", scheduleVVH);
There is no window during server render, so run this on the client. Browsers where the script never runs simply fall back to 100vh through the var() fallback.
2026-02-11 22:02:02
Hello everyone,
When installing Wazuh, you can install the components one by one, but for a first installation I recommend using the bash install script. You will need an Ubuntu 24.04 server for this. The hardware requirements from the official site are as follows:
| Agents | CPU | RAM | Storage (90 days) |
|---|---|---|---|
| 1–25 | 4 vCPU | 8 GiB | 50 GB |
| 25–50 | 8 vCPU | 8 GiB | 100 GB |
| 50–100 | 8 vCPU | 8 GiB | 200 GB |
To install with the bash script, run:
curl -sO https://packages.wazuh.com/4.14/wazuh-install.sh && sudo bash ./wazuh-install.sh -a
The -a flag performs an all-in-one installation, putting the Wazuh manager, indexer, and dashboard on the same host.
If the installation finishes without problems, it will print the admin credentials; save them somewhere safe.
Log in at https://<server_ip> and enter the admin credentials you saved.
In the dashboard, open the menu on the left and go to Agent management.
Click the Deploy new agent button, pick the operating system you are installing on, adjust the settings accordingly, and run the command it generates on that operating system. A few minutes later, the agent will start appearing in Wazuh. Later, if you like, you can split the agents into groups and tailor each agent group's configuration file accordingly.
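For reference, the generated command for a Debian/Ubuntu agent typically looks something like the sketch below. The package version, manager IP, and agent name here are placeholders, so copy the exact command from the dashboard:
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.14.0-1_amd64.deb && sudo WAZUH_MANAGER='192.168.1.10' WAZUH_AGENT_NAME='web-server-01' dpkg -i ./wazuh-agent_4.14.0-1_amd64.deb
sudo systemctl daemon-reload
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent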
2026-02-11 22:00:00
I need to talk about the developer who refactored our entire codebase over a weekend.
He replaced every function longer than 10 lines. Extracted every condition into a named method. Created abstractions for abstractions. He was proud. He sent the PR on Monday morning with the message: "Cleaned up the code 🧹"
The PR had 4,200 lines changed.
It took three people two weeks to review. We found bugs that didn't exist before. The code that used to be a little messy but obvious was now pristine and completely incomprehensible.
He was following every rule in the book. And the code was worse for it.
Somewhere along the way, "clean code" stopped being a philosophy and became a religion.
You know the commandments: functions must be short, every function does one thing, comments are a failure because code should be self-documenting, never repeat yourself, and names should explain everything.
Individually, these are fine guidelines. Applied without judgment, they're a disaster.
Here's a real pattern I see constantly:
def process_order(order):
    validate_order(order)
    calculate_total(order)
    apply_discount(order)
    save_order(order)
    send_confirmation(order)
Looks clean, right? Five short functions. Single responsibility. Uncle Bob would be proud.
Now try to understand what validate_order actually checks. You jump to that function. It calls three more functions. Each of those calls two more. You're seven levels deep and you've forgotten what you were looking for.
Compare:
def process_order(order):
    # Validate
    if not order.items:
        raise EmptyOrderError()
    if order.total < 0:
        raise InvalidTotalError()

    # Calculate
    order.total = sum(item.price * item.qty for item in order.items)
    if order.coupon:
        order.total *= (1 - order.coupon.discount)

    # Save and notify
    db.save(order)
    email.send_confirmation(order.user, order)
Longer? Yes. "Dirty"? By the book, sure. But I can read it top to bottom and understand the entire flow in 30 seconds. No jumping. No context switching. No "what does this function actually do" detective work.
Sometimes a 30-line function is cleaner than five 6-line functions.
public boolean isEligibleForPremiumDiscountBasedOnAccountAgeAndPurchaseHistory()
This is not a good name. This is a sentence pretending to be an identifier.
The "no comments, names should explain everything" rule produces this kind of code. Developers try to encode the entire function's behavior into its name because they've been told comments are a failure.
Comments aren't a failure. A well-placed // Users with 2+ years and $500+ in purchases get 15% off takes one second to read. That method name takes five seconds and still doesn't tell me the thresholds.
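For contrast, here is the comment-plus-plain-code version as a small sketch (the account methods are invented for illustration):
// Users with 2+ years and $500+ in purchases get 15% off
boolean eligibleForPremiumDiscount =
    account.ageInYears() >= 2 && account.totalPurchases() >= 500;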
I watched a junior extract a shared utility function because two endpoints had similar-looking validation logic. Three months later, that utility had 14 parameters and an options object because every new use case needed slight variations.
The original "duplication" was two functions with 5 lines each. Clear. Independent. Easy to change.
The "clean" version was a 60-line generic monster that nobody dared to touch because changing it might break 8 different endpoints.
Sometimes duplication is cheaper than the wrong abstraction. That's not my hot take — that's Sandi Metz, and she's right.
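A compressed, hypothetical sketch of that drift; every name here is invented for illustration:
# Before: two small, almost-duplicate validators. Clear, independent, easy to change.
def validate_signup(payload):
    if not payload.get("email"):
        raise ValueError("email required")
    if len(payload.get("password", "")) < 8:
        raise ValueError("password too short")

def validate_invite(payload):
    if not payload.get("email"):
        raise ValueError("email required")
    if not payload.get("inviter_id"):
        raise ValueError("inviter required")

# After: one "reusable" validator that every endpoint bends to its own needs.
def validate(payload, *, require_password=False, require_inviter=False,
             min_password_length=8, allow_missing_email=False, options=None):
    ...  # each new flag makes every existing caller harder to reason about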
Code tells you WHAT is happening. It cannot tell you WHY.
No amount of clean naming will explain why a workaround exists for a third-party bug, why the retry count is three and not five, or why the obvious “simpler” approach was already tried and rolled back.
These are comments. They're not code smells. They're context that the next developer needs to not break things.
Deleting comments because "clean code is self-documenting" is deleting knowledge.
Clean code isn't about following rules. It's about empathy.
Will the next person who reads this understand it quickly? That's it. That's the entire metric.
Sometimes that means short functions. Sometimes it means a long one with comments. Sometimes it means duplicating code. Sometimes it means a 40-character variable name.
The answer is always "it depends," and anyone who tells you otherwise is selling a book.
Optimize for reading, not writing. You write code once. People read it hundreds of times. If it's slightly more effort to write but way easier to read — do that.
Extract functions when they have a reason to exist. Not when a function hits some magic line count. If the extracted function would only ever be called from one place and its name is just a description of the code inside it — leave it inline.
Write comments for "why," not "what." // increment counter is useless. // retry up to 3 times because the payment gateway drops ~2% of first attempts is invaluable.
Ask your team. "Clean" is not objective. It's whatever your team can maintain. If everyone on the team finds the code clear, it's clean. Even if Uncle Bob wouldn't approve.
What's the worst "clean code" refactor you've witnessed? I know you have a story.
2026-02-11 22:00:00
When you build an agent-powered app, the instinct is to start with the app — set up a project, install dependencies, write scaffolding. Then somewhere in the middle, you start figuring out what the agent should actually do.
This is backwards.
Agent-first means starting with the agent. Get the brain working first. Once the agent behaves the way you want, expand outward: add tools, then build the shell around it. The agent is the product — everything else is infrastructure.
This matters because the agent will keep evolving. Prompts change, capabilities expand, behavior gets refined. If the agent is tangled with your application code, every change risks breaking something unrelated. Keep the brain separate from the body, and both can evolve on their own terms.
This guide uses Perstack — a toolkit for agent-first development. In Perstack, agents are called Experts: modular micro-agents defined in plain text (perstack.toml), executed by a runtime that handles model access, tool orchestration, and state management. Perstack supports multiple LLM providers including Anthropic, OpenAI, and Google. You define what the agent should do; the runtime makes it work.
Prerequisites: Node.js 22+ and an LLM API key.
export ANTHROPIC_API_KEY=sk-ant-...
An Expert is defined in a perstack.toml file:
[experts."reviewer"]
description = "Reviews code for security issues"
instruction = """
You are a security-focused code reviewer.
Check for SQL injection, XSS, and authentication bypass.
Explain each finding with a severity rating and a suggested fix.
"""
That's the entire definition. No SDK, no boilerplate, no orchestration code. Run it immediately:
npx perstack start reviewer "Review this login handler"
perstack start opens a text-based interactive UI where you can watch the Expert reason and act in real time.
Writing TOML by hand works, but there's a faster way. create-expert is a CLI that generates Expert definitions from natural language descriptions — it's itself an Expert that builds other Experts.
npx create-expert "A code review assistant that checks for security vulnerabilities, suggests fixes, and explains the reasoning behind each finding"
create-expert takes your description, generates a perstack.toml, test-runs the Expert against sample inputs, and iterates on the definition until behavior stabilizes. You get a working Expert — no code, no setup.
The description doesn't need to be precise. Start vague:
npx create-expert "Something that helps with onboarding new team members"
create-expert will interpret your intent, make decisions about scope and behavior, and produce a testable Expert. You can always refine from there.
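A hypothetical result, reusing only the fields shown earlier, might look something like this:
[experts."onboarding-buddy"]
description = "Helps new team members get set up and find what they need"
instruction = """
You help new team members during their first weeks.
Answer setup questions, point to the relevant documentation,
and suggest who to ask when you don't know the answer.
"""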
create-expert reads the existing perstack.toml in your current directory. Run it again with a refinement instruction, and it modifies the definition in place:
npx create-expert "Make it more concise. It's too verbose when explaining findings"
npx create-expert "Add a severity rating to each finding: critical, warning, or info"
npx create-expert "Run 10 tests with different code samples and show me the results"
Each iteration refines the definition. The Expert gets better, and you never open an editor.
Prototyping isn't just about getting the agent to run — it's about finding where it fails.
Write a test case that your agent should catch. For the code reviewer, create a file with a deliberate vulnerability:
npx create-expert "Read the file test/vulnerable.py and review it. It contains a SQL injection — make sure the reviewer catches it and suggests a parameterized query fix"
If the reviewer misses it, you've found a gap in the instruction. Refine and test again:
npx create-expert "The reviewer missed the SQL injection in the raw query on line 12. Update the instruction to pay closer attention to string concatenation in SQL statements"
This is the feedback loop that matters: write a scenario the agent should handle, test it, fix the instruction when it fails, repeat. By the time you build the app around it, you already know what the agent can and can't do.
At some point you need feedback beyond your own testing. perstack start makes this easy — hand someone the perstack.toml and they can run the Expert themselves:
npx perstack start reviewer
The interactive UI lets them try their own queries and see how the Expert responds. No app to deploy, no environment to configure beyond the API key.
Every execution is recorded as checkpoints in the local perstack/ directory. After a round of feedback, inspect what happened:
npx perstack log
npx perstack log --tools # what tools were called
npx perstack log --errors # what went wrong
You can review specific runs, filter by step, or export as JSON for deeper analysis. See the CLI Reference for the full set of options.
This gives you a lightweight evaluation workflow: distribute the TOML, collect usage, analyze the logs, refine the instruction.
At some point, your prototype will need more. The same perstack.toml scales — you're not throwing away work.