RSS preview of Blog of HackerNoon

Code Smell 319 - Hardcoded Stateless Properties

2026-04-13 01:00:40

Don't turn collaborators into permanent roommates

TL;DR: You should avoid storing stateless utility classes as instance variables initialized with new.

Problems 😔

  • Hardcoded dependencies
  • Testing difficulties
  • High coupling
  • Hidden side effects
  • Rigid design
  • Misleading intent
  • Premature Optimization
  • Stack clutter

Solutions 😃

  1. Use dependency injection
  2. Pass as parameter
  3. Use static methods
  4. Inline the logic
  5. Use local variables
  6. Inline object creation
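Solutions 2 and 3 can be sketched as follows. This is a minimal illustration, not the article's own example: DataFormatter, UpperCaseFormatter, ReportBuilder, and DataFormat are hypothetical names.

```typescript
// Solution 2: pass the collaborator as a parameter.
// DataFormatter and the classes below are illustrative names.
interface DataFormatter {
  format(data: unknown): unknown;
}

class UpperCaseFormatter implements DataFormatter {
  format(data: unknown): unknown {
    return String(data).toUpperCase();
  }
}

class ReportBuilder {
  // The tool arrives with the request and leaves with it;
  // it never becomes part of the builder's state.
  build(data: unknown, formatter: DataFormatter): unknown {
    return formatter.format(data);
  }
}

// Solution 3: a static method needs no instance at all.
class DataFormat {
  static upperCase(data: unknown): string {
    return String(data).toUpperCase();
  }
}
```

Either way, the stateless tool stays out of the object's instance variables.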

Refactorings ⚙️

https://hackernoon.com/refactoring-024-replace-global-variables-with-dependency-injection?embedable=true

https://hackernoon.com/refactoring-030-how-to-avoid-accidental-redundancy?embedable=true

https://hackernoon.com/refactoring-007-the-refactor-that-reveals-missing-concepts?embedable=true

Context 💬

Hardcoding a stateless class in the constructor creates permanent coupling.

Even if the class is cheap to instantiate, you lose the ability to swap it.

Stateless objects shouldn't be part of the object's internal state.

You confuse readers by making a tool look essential to the object's identity.

It makes testing harder because you can't mock the hardcoded dependency.

Sample Code 💻

Wrong 🚫

class UserProcessor {
  private provider: MockDataProvider;

  constructor() {
    // You hardcode the dependency here.
    // This makes the class harder to test.
    this.provider = new MockDataProvider();
  }

  process(data: any) {
    return this.provider.format(data);
  }
}

Right 👉

interface DataProvider {
  format(data: any): any;
}

class UserProcessor {
  // You inject the dependency via constructor.
  // Now you can swap it or mock it easily.
  constructor(private readonly provider: DataProvider) {}

  process(data: any) {
    return this.provider.format(data);
  }
}
// Simpler but coupled (solution 6: inline object creation)
class UserProcessor {
  constructor() {
    // Empty
  }

  process(data: any) {
    return new MockDataProvider().format(data);
  }
}
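Because the injected version depends only on the DataProvider interface, a test can pass a handwritten fake without any mocking library. FakeProvider below is a hypothetical test double; the interface and class are repeated so the snippet stands alone.

```typescript
// Repeating the injected design so this snippet runs standalone.
interface DataProvider {
  format(data: any): any;
}

class UserProcessor {
  constructor(private readonly provider: DataProvider) {}

  process(data: any) {
    return this.provider.format(data);
  }
}

// A hypothetical test double standing in for the real provider.
class FakeProvider implements DataProvider {
  format(data: any) {
    return { formatted: data };
  }
}

const processor = new UserProcessor(new FakeProvider());
console.log(processor.process("payload")); // → { formatted: 'payload' }
```

The hardcoded version cannot do this: the test would always hit the real MockDataProvider.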

Detection 🔍

Look for the new keyword inside constructors.

Watch for private properties instantiated directly in the constructor rather than passed as parameters.

Some linters and AST-based tools can flag this pattern when you create instances and assign them to private fields.
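As a rough illustration of automated detection, a naive scanner can look for new expressions inside constructor bodies. This sketch is deliberately simplistic: a real tool should use the TypeScript compiler API, since a regex breaks on nested braces and multiline constructors.

```typescript
// A naive detector: flags `new SomeClass` inside constructor bodies.
// Illustrative only; a production rule should parse the code properly.
function findHardcodedConstructions(source: string): string[] {
  const findings: string[] = [];
  // Capture the body between the constructor's braces (no nesting support).
  const constructorBody = /constructor\s*\([^)]*\)\s*\{([^}]*)\}/g;
  let match: RegExpExecArray | null;
  while ((match = constructorBody.exec(source)) !== null) {
    // Collect every `new ClassName` expression in that body.
    const constructions = match[1].match(/new\s+[A-Z]\w*/g) ?? [];
    findings.push(...constructions);
  }
  return findings;
}
```

Running it over the Wrong example above would report the hardcoded MockDataProvider construction.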

Tags 🏷️

  • Premature Optimization

Level 🔋

[X] Beginner

Why the Bijection Is Important 🗺️

Software should mimic a MAPPER of the real world.

In reality, a worker might use a tool to complete a task.

The tool is not a permanent physical attachment to the worker.

When you refactor to use dependency injection, you respect the bijection by treating collaborators as external entities, not internal state.

This keeps your simulation flexible and accurate.

AI Generation 🤖

AI generators frequently create this smell.

They often suggest code that just works by instantiating dependencies directly in the constructor to save time.

AI Detection 🧲

AI can easily detect this smell without explicit instructions.

When you show AI a class with new keywords in the constructor, it recognizes the pattern as hardcoded coupling.

AI identifies that stateless utility classes should be injected rather than instantiated internally.

The detection is straightforward because the pattern is syntactically obvious and semantically harmful.

Try Them! 🛠

Remember: AI Assistants make lots of mistakes

Suggested Prompt: remove the cached attribute


Conclusion 🏁

Storing stateless dependencies as instance variables makes your code rigid.

When you inject these dependencies instead, you improve testability and keep your objects focused on their true purpose.

Relations 👩‍❤️‍💋‍👨

https://hackernoon.com/code-smell-06-trying-to-be-a-clever-programmer?embedable=true

https://hackernoon.com/how-to-find-the-stinky-parts-of-your-code-part-iv-7sc3w8n?embedable=true

More Information 📕

https://hackernoon.com/coupling-the-one-and-only-software-designing-problem-9z5a321h?embedable=true

Disclaimer 📘

Code Smells are my opinion.

Credits 🙏

Photo by Possessed Photography on Unsplash


Coupling is the enemy of change

Rich Hickey

https://hackernoon.com/400-thought-provoking-software-engineering-quotes?embedable=true


This article is part of the CodeSmell Series.

https://hackernoon.com/how-to-find-the-stinky-parts-of-your-code-part-i-xqz3evd?embedable=true


I Let Karpathy's AutoResearch Agent Run Overnight!

2026-04-13 01:00:01

Andrej Karpathy’s viral autoresearch repo automates the most tedious part of machine learning: the trial-and-error experiment loop. By simply writing research goals in a markdown file, you can set an AI agent loose to modify code, run 5-minute training batches, and log the results. Left running overnight, the agent completed 40+ experiments and made surprisingly creative architectural tweaks, proving that the future of ML is less about manually tuning parameters and more about writing great instructions for AI agents.

SEO Isn't Dead But Your Strategies Have to Change

2026-04-13 01:00:00

The research took me deep enough that I have a lot more to share: how different AI systems handle citations differently, what the actual citation mechanics look like across ChatGPT versus Claude versus Perplexity, how to structure content specifically for AI summarization, and what the verification data across 168 platforms actually reveals about the crawling landscape.

You Should Be Managing Your AI Agents as Engineers: Here's Why

2026-04-13 00:59:59

Vibe coding is fine. We all need the speed it provides. High-quality engineering organizations are increasingly comfortable giving up line-by-line control over generated code. Vibe management is a different problem entirely.

As AI Models Converge, System Design Becomes the Differentiator

2026-04-13 00:47:03

buy the car, not the engine

every week someone posts “Claude destroyed GPT” or “Gemini is catching up.” Grok 4.20 just launched with four agents arguing with each other. DeepSeek V4 is imminent. it’s sports for nerds, and a distraction from the question that matters: what gets me a smart model with the tools it needs to do real work.

engines are not cars

think of AI as a car. the model (GPT, Claude, Gemini, Grok, DeepSeek) is the engine. the harness is the rest of the car — steering, brakes, fuel system, navigation, trunk.

Latent Patterns defines an agent harness as “the orchestration layer that constructs context, executes tool calls, enforces guardrails, and decides when each loop iteration should continue or stop.” if the model is the reasoning engine, the harness is the operating system that makes the engine useful, safe, and repeatable. they break it into five concerns: instruction layering, action mediation, loop control, policy enforcement, and memory strategy. in practice, most reliability problems blamed on “the model” are harness design problems.

same engine, completely different car

the Lotus Evora and the Toyota Camry share the same 3.5L V6. Toyota tunes it to 301hp for commuting. Lotus supercharges it to 400hp in a mid-engine track weapon. same engine. one hauls groceries, the other races. what changed? everything around the engine. this is happening in AI right now and it’s not subtle.


Gemini 3 Pro powers both Google Sheets and NotebookLM. in Sheets, it hits a 350-cell ceiling, can’t see your full spreadsheet, and has no undo. in NotebookLM, the same model uploads your entire document library, cites every claim back to its source, and generates audio overviews. one’s a formula helper in a cage. the other’s a research analyst.

GPT-5 powers both Copilot in Excel and ChatGPT. enterprise users report Copilot fails simple column sums and feels “night and day” slower than ChatGPT — despite using the same underlying model. ChatGPT gets file uploads, web search, custom GPTs, memory, and a model picker. one’s in a straitjacket. the other’s a full workbench.

Claude Sonnet 4 powers both GitHub Copilot and Claude Code. in Copilot it gets ~128K context (vs 1M native), a hidden system prompt, and no thinking control. in Claude Code it gets repo-wide reasoning, explicit thinking budgets, full MCP tool use, and your own custom instructions. one’s on a leash. the other’s unleashed.

or as Latent Patterns puts it: “two tools can use the same model and produce dramatically different outcomes because their harnesses differ in context assembly, policy checks, and loop control semantics.”

Evangelos Pappas tested this empirically: frontier models scored 24% pass@1 on real professional tasks in the APEX-Agents benchmark. the failures were overwhelmingly orchestration problems, not knowledge gaps. the engine knew the answer. the car couldn’t get there.

even OpenAI agrees. their “harness engineering” write-up describes building a million-line codebase with zero manually-written code. the bottleneck was never the model. it was the environment. “early progress was slower than we expected, not because Codex was incapable, but because the environment was underspecified.” when something failed, the fix was almost never “try harder.” it was: what tool, guardrail, or context is missing from the harness?

the convergence problem

every engine got dramatically more powerful. but they all got powerful at the same time.

take GPQA Diamond — 198 PhD-level science questions where human experts score about 65%. in November 2023, GPT-4 scored 39% — barely above a coin flip. one engine, mediocre.

by mid-2024, Claude 3 Opus hit ~56%, GPT-4o managed ~51%, Gemini 1.5 Pro was in the mix. four engines, all below human experts, 30+ point spread.

today? Gemini 3 Pro scores 91.9%, GPT-5.2 hits 92.4%, Claude Opus 4.5 reaches 87%. six engines, all above human experts, clustered within five points. the engines went from 39% to 92%. incredible. but the gap collapsed.

the small engines? GPT-5 mini, Haiku 4.5, Gemini 3 Flash, Phi-4, Mistral 7B — beat where frontier models were 18 months ago. run on your phone, cost pennies. Gartner predicts 3x more small task-specific models than general-purpose LLMs by 2027.

six companies make great V8s. a dozen more make great four-cylinders. the engine is a solved problem.

what this means for you

if you’re picking or building a car, you make different decisions depending on what you need. do you want a workhorse? a beater? do you plan to drive on rugged terrain? freeways all the way?

the same holds true when you pick or build “AI products”. the harness is where your taste and decision making live. every decision is a trade-off, and the right trade-off depends entirely on what you’re trying to do.


  • depth vs speed: do you let the model think for 30 seconds and return a thorough answer, or force a 2-second response that’s 80% as good? a legal research tool and a customer service bot need opposite answers to this question. same engine, opposite harness.
  • context vs cost: do you stuff the full conversation history into every call, or summarize aggressively and risk losing nuance? a therapy app and a code assistant make different bets here.
  • autonomy vs control: does the AI act on its own or wait for approval? a scheduling agent should book the meeting. a financial advisor should not execute the trade.

these are the same trade-offs car designers make. speed vs comfort. luxury vs mainstream. track suspension vs grocery-run ride quality. nobody asks “which engine does a Cayenne use?” because the engine isn’t the only thing that makes it a Cayenne. it’s every decision made around the engine to serve a specific driver.

make decisions that are engine-swappable: route hard questions to the V8, simple ones to the golf cart engine. know that your moat is the trade-offs you chose and why.
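a minimal sketch of what engine-swappable routing can look like (model names, costs, and the difficulty heuristic below are made up for illustration, not any vendor's real API):

```typescript
// Illustrative only: a harness-level router that picks an engine per request.
interface Model {
  name: string;
  costPerCall: number; // hypothetical dollars per call
  answer(prompt: string): string;
}

const golfCart: Model = {
  name: "small-model",
  costPerCall: 0.001,
  answer: (p) => `quick answer to: ${p}`,
};

const v8: Model = {
  name: "frontier-model",
  costPerCall: 0.5,
  answer: (p) => `thorough answer to: ${p}`,
};

// The harness decision: a crude difficulty heuristic picks the engine.
// Swapping either model touches nothing else in the system.
function route(prompt: string): Model {
  const looksHard = prompt.length > 200 || /prove|design|analyze/.test(prompt);
  return looksHard ? v8 : golfCart;
}
```

the point is not the heuristic, which is toy-grade here; it's that the routing decision lives in the harness, so engines stay swappable.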

if you’re picking tools: stop asking “which model does it use?” start asking: what can it read? what can it do with my files? does it remember me? how long can it focus? how does it handle mistakes? those are harness questions. that’s why the same model feels magic in one app and useless in another.

the analogy goes further than you think


once you stop arguing about engines, the design space explodes. you start asking better questions.


  • maybe you don’t need a faster car. maybe you need a shorter route. (that’s context engineering: the same engine covers more ground when you stop feeding it a 4,000-word system prompt and start giving it a map.)
  • maybe you don’t need a car at all. maybe you need a fleet of bicycles. (that’s small model routing: twenty Haiku calls that each cost a fraction of a cent, instead of one Opus call that takes 30 seconds and costs a dollar.)
  • maybe the problem isn’t the vehicle. maybe it’s the road. (that’s your data infrastructure: the smartest model in the world can’t reason about customers who haven’t converted yet if nobody’s piping that data into the context window.)
  • and maybe you’ve been optimizing the car when you should’ve been building a boat. (that’s the real question: not “how do I make AI better at this task?” but “is this even the right task for AI?”)

the engine debate is comfortable because it has a leaderboard. it’s measurable. it updates every week. but the hard problems, the ones where AI actually transforms a business, are all harness problems, road problems, route problems. they don’t have benchmarks. they require taste.

the engine matters less every quarter. the rest of the vehicle, the route, and the terrain is what determines whether you arrive.

Why More Code Doesn’t Necessarily Mean More Progress

2026-04-13 00:39:01

This article argues that while AI has dramatically accelerated code generation, software delivery systems—CI pipelines, code reviews, and approval processes—haven’t kept pace. The result is a new bottleneck in engineering workflows, where decision latency and slow feedback loops stall progress. To fully benefit from AI, teams must optimize infrastructure, streamline processes, and reinforce ownership, ensuring speed gains don’t come at the cost of quality.