RSS preview of Blog of HackerNoon

5 Things Crypto Whitepapers Don't Clearly Explain (And You Should Know About)

2026-04-14 05:20:54

We get it: crypto whitepapers can be a bit intimidating. They're full of technical terms, diagrams, equations, and charts. It's still important to read them, though, even if you don't understand everything. They're the foundational theory of a project, and they can tell you a lot about its viability. Of course, sometimes the language may sound precise while the implications stay blurry. Certain things may not seem clear if you don't have the whole context.

A whitepaper will explain the design and architecture of a protocol, but it usually spends far less time exploring how that design would behave in the wild. Let's look at some of the implications.

Tokenomics in the Long Term

This is an important section of many whitepapers: it describes the economic model of the crypto network. That includes supply, distribution, allocations, rewards, token burns, the minting schedule, and any other relevant features of the currency.

\ You’ll likely see a pie chart or similar, depicting percentages or the number of tokens. Seems fair, seems tidy. What they often fail to mention is the implication of those figures.

Example of Tokenomics with Ethereum. Image by Quadency

Does the currency have an unlimited supply? Then it's inflationary (its value may decrease over time). Do insiders have large token allocations? That could skew governance outcomes in the future, because they'll have more voting power. Is there a portion reserved to fund development? Are there incentives for participants to stay long-term?

Another concern is vesting schedules (the gradual release of tokens). If a large batch of tokens becomes transferable at once, early holders may sell, increasing pressure on the market and affecting prices. At the same time, the way a token is integrated into its own network matters: if the same token is required for gas fees, staking, governance, and rewards, demand can reinforce it and give it more value over time.
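To make the vesting math concrete, here is a minimal sketch with hypothetical numbers (no real project's allocation) of how a cliff-plus-linear schedule changes circulating supply:

```python
# Hypothetical allocation: 400M tokens liquid at launch, 600M to insiders,
# locked behind a 12-month cliff and then vesting linearly over 24 months.
def circulating_supply(month, public=400_000_000, insider=600_000_000,
                       cliff=12, vest_months=24):
    if month < cliff:
        unlocked = 0  # insider tokens still fully locked
    else:
        unlocked = insider * min(month - cliff, vest_months) // vest_months
    return public + unlocked

print(circulating_supply(6))   # 400000000: only public tokens circulate
print(circulating_supply(24))  # 700000000: half the insider batch is liquid
```

Every unlock step is potential sell pressure, which is why the shape of this curve matters at least as much as the pie chart.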

Network Security

Discovering how a crypto ecosystem protects itself is important, and you'll find terms like Byzantine Fault Tolerance (BFT), timestamp server, or public-key cryptography. Fees, rewards, and penalties are applied through different consensus mechanisms to maintain network integrity. That's just the beginning of the story, though. The design might look good on paper, but participation is just as important.

Security is rarely absolute. It depends on costs, rewards, and the distribution of influence, and all of this implies participation from an active community. If attacking the network becomes cheaper than defending it, the potential scenarios become gloomy. It's already happened, by the way: networks like Bitcoin Gold and Ethereum Classic have suffered 51% attacks, where malicious miners colluded to manipulate the chain.

Both networks survived, and again, it was because of participation. Their developers were there to modify the code, release emergency updates, and deal with the fallout. The more participation a distributed network has (on every front: nodes, average users, developers), the more secure it is. That's why newly created ledgers are more fragile until they acquire real users.

Laws and More Laws

Today, the crypto space isn't the unregulated wilderness it used to be. There are actual laws tied to it now, even though they didn't exist yet when many whitepapers (Bitcoin's included, of course) were created and released. Some more recent whitepapers include sections, or very small print, with warnings like this:

“This crypto-asset whitepaper has not been approved by any EU competent authority. The issuer is solely responsible for its content. Prospective holders must read the entire document and understand the risks.”

The risks include a total loss of investment or a lack of compensation, and crypto regulations like MiCA (Markets in Crypto-Assets) in the EU want you to know that upfront. This law established rules for issuers and service providers, and banned algorithmic stablecoins.

Beyond that particular region, rules vary wildly. A few countries have banned crypto outright, including Bangladesh, China, Tunisia, Egypt, and Morocco. If you use any cryptocurrency in these places, you may face harsh consequences. Meanwhile, in the rest of the world, you may still need to pay taxes and share your identity with centralized crypto exchanges to comply with Anti-Money Laundering (AML) and Countering the Financing of Terrorism (CFT) rules.

In several countries (especially the US), if a token is considered a security, its developers may face costly registration requirements or steep penalties. This has killed many, many crypto projects over the years. To know whether a token can be considered a security in the US, it's necessary to apply the Howey Test: was there an investment of money, in a common enterprise, with an expectation of profit, derived from the efforts of others?
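As a rough illustration (the real determination is a legal analysis made by courts and regulators, not a boolean function), the Howey Test's four prongs combine like this:

```python
def howey_test(investment_of_money: bool,
               common_enterprise: bool,
               expectation_of_profit: bool,
               from_efforts_of_others: bool) -> bool:
    """A transaction is an 'investment contract' (a security) only if
    all four prongs are met at once."""
    return all([investment_of_money, common_enterprise,
                expectation_of_profit, from_efforts_of_others])

# A token sale funding a team whose work buyers expect to profit from:
print(howey_test(True, True, True, True))     # True: likely a security
# A token bought purely to pay fees on a live, decentralized network:
print(howey_test(True, False, False, False))  # False
```

Failing even one prong changes the answer, which is why projects argue so hard about the "efforts of others" part.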

Simplicity vs. Complexity

If a network promises an innovative system built from scratch, never seen before, it could actually deliver in the end… but not without tradeoffs. Ethereum did just that. It created a Turing-complete environment for smart contracts and its own programming language (Solidity).

This offered a flexibility that expanded possibilities, but it also expanded the surface area for bugs and exploits. Auditing complex contracts requires expertise and time, and a single oversight can lead to large losses. That's exactly what happened with The DAO by Slock.it in 2016.

Old Slock.it and The DAO website retrieved by Internet Archive

The more complex a system is, the riskier it will be until it matures. We need to know that beyond the technical descriptions in a whitepaper. Now, this doesn't mean a simpler system is always better or more secure. Some ledgers sacrifice functionality in favor of security (as in the conservative development of Bitcoin), but sometimes simplicity just means an intermediary is doing the work for you.

If a cryptocurrency doesn't allow full ownership through private keys, someone else (a company or organization) will be handling your funds on your behalf. This may offer convenience, but it introduces counterparty risks: insolvency, hacks, operational failure, or mismanagement. It's important, then, to find a balance between complexity and simplicity and be aware of the risks.

Decentralized Consensus

The more decentralized a network, the freer (as in freedom) it is. Consensus mechanisms like Proof-of-Work (PoW) or Proof-of-Stake (PoS) aim to achieve that, with mixed results. Whitepapers often describe these systems with technical detail and clean diagrams, yet they spend less time exploring how power is distributed. Decentralization depends on how easy it is to participate and how influence is concentrated in practice.

In PoW systems, mining requires hardware, electricity, and scale. Over time, mining pools can dominate block production, which means decision-making power clusters around a smaller group of operators. PoS networks shift the dynamic from hardware to capital: "validators" are chosen based on how many tokens they lock up, so large stakers can accumulate outsized power over the network. Past incidents have shown how they can block and censor transactions.
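A tiny simulation with made-up stake figures shows how stake-weighted selection translates capital directly into block production:

```python
import random

random.seed(0)  # deterministic run for the example
stakes = {"whale": 600, "mid": 250, "small_1": 100, "small_2": 50}

def pick_validator():
    # Every locked token is one lottery ticket for producing the next block.
    names, weights = zip(*stakes.items())
    return random.choices(names, weights=weights)[0]

blocks = [pick_validator() for _ in range(10_000)]
share = blocks.count("whale") / len(blocks)
print(f"whale produced {share:.0%} of blocks")  # close to 60%, its stake share
```

With 60% of the stake, one participant produces roughly 60% of the blocks, and in many PoS designs that also means a matching share of governance votes.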

Other networks explore different structures. Obyte, for example, uses a Directed Acyclic Graph (DAG) without miners or "validators," aiming to remove gatekeepers entirely. This way, only users "approve" their own transactions, giving them ultimate power over them and leaving no intermediary in a position to censor them. Besides, an on-chain governance platform allows token holders to vote on important network parameters and updates.

Choose wisely, then. A good consensus mechanism can tell you how decentralized and secure a system can remain over time. It's also necessary to consider all the other factors missing from whitepapers to make better-informed decisions.


Featured Vector Image by user15245033 / Freepik

Tradition Is Telepathy: How Culture Creates Collective Intelligence

2026-04-14 04:26:06

The development of political fragmentation and the rigid division of groups into us and them has disrupted a primordial connection, one that once allowed humans to participate in something like a shared, distributed intelligence. Think of it as a legacy peer-to-peer network, running for millennia without central servers, gradually throttled to near-zero bandwidth by the demands of modernity. Ancient civilizations maintained what you might call a worldwide knowledge graph, coherent, interconnected, and holistic, before it was shattered and replaced by the reign of quantity: metrics, the tyranny of what can be measured.

The human capacity to read the emotional and cognitive states of others, to sync without explicit communication, is not mysticism. It is a latent biological feature. Mirror neurons. Embodied cognition. The social brain hypothesis. Modern neuroscience has been slowly rediscovering what ancient cultures engineered entire civilizations around. We deprecated the ziggurats into temples and misplaced their actual function: communal solidarity.

The obsession with private property and the commercial logic of ownership has reduced human existence to a zero-sum resource allocation problem. Every transaction becomes adversarial. Every relationship becomes transactional. This isn't a bug in capitalism; it's a core feature, and it is expensive, because it requires human beings to suppress every collective trait acquired through millennia of evolution. The cognitive overhead of constant status competition, of social uncertainty, of managing impressions instead of reality, burns cycles that evolution intended for something else entirely. Our top-of-the-line brains have become optimized purely for material conquest, corrupting the capacity for genuine synchrony, resonance, and coordinated intelligence.


Sacred tradition is not nostalgia. It is not religion in the political sense. It is shared protocol, a common language of sign, ritual, and rhythm that allows large groups of people to coordinate at the subconscious level, far below the bottleneck of explicit verbal communication. The reason teams at successful companies feel like something beyond the sum of their parts is that they have developed, consciously or not, the rudiments of a tribal culture: shared vocabulary, rituals, values, and mythologies. The best engineering cultures aren't built on org charts; they are built on tradition.

When you walk into a company where the culture is genuinely alive, where the values aren't a PDF on a shared drive but something people actually embody, you are witnessing tradition functioning as designed. It is high-bandwidth, low-latency coordination. It is telepathy, operationalized.


Within every human being lies what older frameworks called the universal subconscious, a substrate of intelligence that predates language, that communicates in pattern and emotion rather than proposition. Think of this as the Big Me: the self that is embedded in the collective, that resonates with others, that can perceive signal through the noise of explicit cognition. The conscious, rational, analytical mind, the Little Me, the one writing pitch decks and doing system design, is a useful but narrow interface layer. It is the watchman at the gate. And like any overzealous security system, it routinely blocks legitimate traffic.

The founder who can't let go of control. The engineer who can't trust her intuition. The executive paralyzed by analysis. These are failure modes of the Little Me strangling the Big Me. The fear, the status anxiety, the performance dread, these aren't irrational weaknesses to be optimized away. They are signals that the deeper system is under-resourced and over-managed.


The restoration of this capacity is not a spiritual project. It is an engineering problem. The formula is straightforward: reduce the noise of the Little Me, the status anxiety, the defensive rationalism, the zero-sum competitive dread, and the deeper bandwidth opens naturally. Meditation does this. Deep creative work does this. Great ritual does this. Great tradition does this. The Buddhist, the Stoic, the indigenous ceremony, the pre-game locker-room ritual, all are implementations of the same protocol: suspend the watchman, open the channel, let the collective intelligence route through.

The question Silicon Valley should be asking is not how to eliminate the ancient and the inherited in the pursuit of the new. It is how to recover and re-engineer the protocols that allowed human groups to think together, across time, across distance, across the limits of individual cognition.

Tradition is not the opposite of innovation. It is the substrate without which genuine collective intelligence is impossible. And collective intelligence, real telepathy, the kind that actually scales, is the only thing that has ever built anything worth building. It is the secret we hide from the machines; it is the secret that AI can never replicate.


Burmese-Coder-4B: A Burmese Coding LLM for Low-Resource Language AI

2026-04-14 04:17:43

I built Burmese-Coder-4B, a Burmese coding LLM for low-resource language AI. This article covers the motivation, data creation, two-stage fine-tuning pipeline, evaluation approach, and lessons from building a practical Burmese coding model with limited compute.

How to Set Up Claude Cowork Projects: A Step-by-Step Guide

2026-04-14 04:17:09

Claude Cowork with nothing set up is just a smarter ChatGPT. Claude Cowork set up right runs my business before I wake up.

Most people open the desktop app, start chatting, and miss the entire point. Cowork is not a chatbot. It's an operating system for how you work. And the gap between those two versions is entirely in the setup.

This is the step-by-step I use for every project I run. LinkedIn posts, newsletters, HackerNoon columns, client ops, internal Zen Media work. Same skeleton every time.

Follow this and your first project will be running your playbook by the end of the afternoon.

Before You Start

Three prerequisites before you open the app:


  1. Install the Claude desktop app. Cowork runs in the desktop app, not the browser. Download it from claude.ai/download.
  2. Know your plan tier. Free won't cut it. Pro gives you a solid base to work from, but won't do what I'm about to show you. Max is what I run, because I need to use Opus and other third-party apps without burning through limits mid-week. If you're building a real operating system for your work, this tier is required.
  3. Decide what this project is for. One job per project, one bite at a time. Pick one repeatable workflow: LinkedIn posts, client updates, blog content, daily briefings. Projects are scoped for a reason.

Step 1: Create the Project

Open the Claude desktop app. Click Projects. Create New Project. Name it after the actual job, not the tool. "Sarah's LinkedIn Posts" tells me exactly what it does; a generic name tells me nothing.

Every project is a self-contained workspace. Each one has its own:


  • Folder — a specific location on your computer that Cowork can read from and write to. Everything the project produces lives here.
  • Custom instructions — standing orders you write once that tell Claude how to behave inside this project (your voice, your rules, your non-negotiables).
  • Knowledge sources — reference documents Claude reads but never changes. Brand guides, past work, pillar pages.
  • Memory — a structured file system where Claude saves what it has learned about you over time. It persists from session to session.
  • Skills — playbooks that load automatically when Claude detects a matching task ("write a LinkedIn post" triggers the LinkedIn voice skill).
  • MCPs (connected tools) — Model Context Protocol servers that give Claude API-level access to your other tools: Gmail, Slack, Calendar, Drive, Monday, Todoist.
  • Scheduled tasks — jobs you set to run automatically on a cadence (daily briefings, weekly reports, competitive scans).


Step 2: Point It at a Folder (Security Matters Here)

Do not pass go. Setting up a local system to work with Claude is one of the best things you can do.

Cowork asks you to pick a folder on your computer. That folder becomes the project's workspace. Cowork can read, write, edit, and create files inside it.

Read that again. Cowork can see EVERYTHING in that folder.

What NOT to do:


  • Do not point it at your entire Documents folder
  • Do not point it at your Desktop
  • Do not point it at anything that contains tax returns, client contracts you haven't redacted, or a random passwords.txt from 2018

What to do instead: Create a scoped subfolder just for this project. Mine live at:


/Documents/Claude/Projects/[Project Name]

One folder per project. Cowork only sees what's inside that folder. Nothing else.

If you want to share files into the project later, you drop them into that folder. If you want Cowork to stop seeing a file, you move it out.


Step 3: Write Your Custom Instructions (Keep Them Short on Purpose)

Custom instructions are the standing orders for this project. You set them once. They apply to every session.

Here's the mistake most people make: they try to cram everything they want Claude to know into custom instructions. Their voice rules. Their formatting preferences. Their frameworks. Their past work. Their entire operating philosophy.

Don't do that. Custom instructions are the top layer. They should be short, high-leverage, and mostly about telling Claude where to go and what to avoid.

The rest of what you'd want to tell Claude belongs in deeper layers:


  • Detailed voice and structure rules → live in skills
  • Facts Claude needs to remember about you → live in memory
  • Reference material Claude should read → lives in knowledge sources

Here are my actual LinkedIn Posts project custom instructions, word for word:


"Always review my LinkedIn channel if you don't know my style and voice: www.linkedin.com/in/prsarahevans. Reference tools, resources and frameworks I've built. Use my site asksarah.ai, or review my PR@CTICAL newsletter for inspiration. Never use em dashes. Use exclamation points. Really nail my voice on LinkedIn."

That's it. Sixty words. But my LinkedIn voice skill is thousands of words. My memory has dozens of files on post styles, banned phrases, brand entities, and formatting rules. My knowledge sources include my pillar pages and top-performing posts.

Custom instructions point to all of that without duplicating it.

What to include in yours:


  • Where Claude should go to learn your voice (LinkedIn, website, past work)
  • A handful of non-negotiables you want enforced every time (for me: never use em dashes)
  • Cross-references to your owned properties (site, newsletter, show)
  • The posture you want Claude to take (draft first, ask after, or vice versa)

What to keep OUT:


  • Long lists of voice rules (those belong in a skill)
  • Past examples of your work (those belong in knowledge sources)
  • Evolving preferences (those belong in memory, where they can grow and change)

Short custom instructions plus deep skills plus growing memory is the stack that actually works. Cramming it all into one text box is how projects drift.

Step 4: Add Knowledge Sources

Knowledge sources are reference documents Claude reads but does not write to. They're different from the folder (where Claude reads AND writes) and different from memory (which Claude builds over time).

Upload documents that Claude should treat as authoritative background:


  • Your brand voice guide
  • Past top-performing work (your best LinkedIn posts, your best emails, your best decks)
  • Your messaging framework
  • Your style guide
  • Any pillar page or resource you want Claude to cite

For my LinkedIn Posts project: my asksarah.ai AI visibility pillar page, my top 20 highest-engagement posts from the last 90 days, and my brand voice guide.

Claude pulls from these automatically when it drafts. You don't have to re-paste them every time.

Step 5: Choose Your Default Model

Cowork lets you pick which Claude model handles your work. This is where tokens start to matter.


  • Sonnet is your daily driver. Fast, capable, cheap on tokens. Handles drafting, formatting, research pulls, email responses, calendar formatting, daily ops. 90% of what I do runs on Sonnet.
  • Opus is the big brain. Higher reasoning, deeper synthesis, better at strategy, complex multi-step work, long-form writing where the stakes are high. It burns roughly 5x the tokens of Sonnet per turn.

My rule: Sonnet by default. I switch to Opus when I'm writing a flagship research post, doing strategic planning, or working through something genuinely complex.

If you run Opus on "clean up this email," you're paying Opus prices for Sonnet work. You'll hit your usage cap faster and have nothing left when you actually need the big brain.

Set Sonnet as your default. Promote to Opus per conversation when you need it.

Step 6: Install or Connect Skills

Skills are the part most people don't know exists.

A skill is a folder of instructions that tells Claude exactly how to handle a specific kind of task. Think of it as a playbook. When Claude detects a trigger (like "write a LinkedIn post"), the skill loads and Claude runs YOUR system instead of defaulting to generic advice.

When to add a skill

The signal is repetition. If you find yourself doing any of these, it's time to build a skill:


  • You do the same kind of task more than three times
  • You give Claude the same correction every single session ("I told you: no em dashes")
  • You have a format, voice, or structure you want enforced every time without re-prompting
  • The task has specific rules (brand voice, compliance guardrails, formatting requirements)
  • You want the same output quality whether you're running it or a team member is
  • The task has sub-decisions (a LinkedIn post has 10 possible styles, and the choice changes the output)

If a task is a one-off or genuinely exploratory, a skill is overkill. Prompt it in the moment and move on.

Why skills matter more than better prompts

A better prompt helps you once. A skill compounds forever.


  • It loads on trigger, not on memory. You don't have to remember to tell Claude about your formatting rules. The skill fires automatically when you say "write a LinkedIn post."
  • It removes cognitive load. You stop thinking "did I tell Claude about the hashtag rule?" The skill already knows.
  • It enforces consistency across sessions. Monday's post follows the same system as Thursday's post, whether you wrote the prompt the same way or not.
  • It scales to a team. Drop the skill into a shared project and everyone runs the same playbook. Institutional knowledge stops living in one person's head.
  • It captures your edge. The skill IS your system. Every time you improve the skill, every future use benefits from that improvement.

I run custom skills for:


  • LinkedIn voice (my 10 post styles, my formatting rules, my banned phrases)
  • Newsletter production
  • Carousel image generation in my brand system
  • Client ops updates
  • Morning briefings
  • Editorial QA (catches AI fingerprints before I publish)

Every one of those skills exists because I found myself giving the same correction over and over. The correction became a rule. The rule became a skill.

How to add skills to a project

  1. Install a plugin from the Cowork plugin marketplace (plugins are bundles of skills + MCPs + tools)
  2. Or write your own skill file and drop it into your project folder under a skills/ directory
  3. Describe each skill clearly so Claude knows when to trigger it (the description is the trigger, so be specific)

A well-written skill is the difference between "Claude is helpful" and "Claude runs my playbook."
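For orientation, a skill is usually a folder containing a SKILL.md whose YAML frontmatter carries the name and the trigger description. The layout below follows that published convention, but every rule in it is an invented placeholder, not the author's actual LinkedIn skill:

```markdown
---
name: linkedin-voice
description: Use when drafting or editing LinkedIn posts. Enforces the
  author's voice, formatting rules, and banned phrases.
---

# LinkedIn Voice

## Rules (illustrative placeholders)
- Never use em dashes.
- Open with a one-line hook; no preamble.
- End with a question or a clear call to action.

## Banned phrases
- "game-changer"
- "in today's fast-paced world"
```

Because the `description` field is what Claude matches against, writing it as a concrete "use when..." sentence is what makes the skill fire reliably.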

Step 7: Connect Your MCPs

MCPs (Model Context Protocol servers) are how Cowork reaches into your other tools. Think of them as precise, API-level connectors. Not screen-scraping. Not clicking around.

The core four I connect to almost every project:


  • Gmail (read, search, draft)
  • Google Calendar (read, create events, find meeting times)
  • Slack (read channels, send messages, search threads)
  • Google Drive (search and fetch docs)

Project-specific connections:


  • Granola (meeting transcripts) for anything that touches calls
  • Monday.com (via Slack/Gmail notifications) for client ops
  • Todoist for task management
  • Todoist + Calendar for planning work

Security rule for MCPs: approve each one deliberately. Read the scope. Don't blanket-approve because it's faster. If a connector asks for more access than it needs, question it.

Step 8: Build Memory Over Time

Memory is the piece most people misunderstand.

Memory is not your chat history. Your chat history disappears when you close the window. Memory is a structured file system where Claude saves what it learns about you and it persists from session to session.

You don't set up memory once. You build it every time you work.

Every correction you make is a memory Claude writes. Every preference you state. Every piece of feedback. Every framework you introduce. Over time, your memory becomes the most valuable part of your project because it's trained on how YOU work.

Examples from my LinkedIn Posts project memory:


  • My 10 post styles and when to use each one
  • My banned AI phrases
  • Formatting rules I've corrected before
  • My Q2 2026 content themes
  • My brand entity strings (exact title, exact trademarks, exact author bio)

The more you correct, the smarter the project gets. Don't tolerate output you don't like. Correct it, name why, and Claude will remember.

Step 9: Schedule Your First Task

Scheduled tasks are the unlock. This is where Cowork stops being a chatbot and starts being an operating system.

You can schedule any task inside Cowork to run automatically: daily, weekly, or at a specific time.

My morning AI briefing runs at 5 AM, 10 AM, and 1 PM every weekday. It scans my Monday.com, Slack, Gmail, and Granola, then hands me a formatted briefing with tasks ready to paste into Todoist. I didn't open anything. It just shows up.

Start with one. Pick a repeatable task you do every week. Schedule it. See what happens.

Good candidates for your first scheduled task:


  • Daily briefing of what's on your plate
  • Weekly client ops update
  • Competitive scan in your industry
  • Morning news roundup in your niche
  • Weekly review of your own metrics

One scheduled task will change how you work. Three will change your business.

Step 10: Do a Security Review Before You Ship

Before you call the project live, walk through this checklist:


  • Is your folder scoped to this project only? Not your entire drive?
  • Do you know exactly what's inside that folder?
  • Did you approve each MCP with intention? Do you understand what data each one can access?
  • Are Computer Use and Chrome access set to per-session, not standing?
  • Does your plan tier give you the usage headroom for this workload?
  • If you're running client work, do your platform's terms allow AI processing of client data?

Skip any of these and you'll find out the hard way.

The Mental Model

Here's the full surface area of a Cowork project. Miss any one of these and you're using 30% of the platform.


  • Folder = what Cowork reads and writes
  • Knowledge sources = reference library it reads
  • Custom instructions = standing orders
  • Model default = how much brain (and how many tokens) per turn
  • Memory = what Claude has learned about you over time
  • Skills = playbooks that load on demand
  • MCPs = external tools Claude reaches into
  • Scheduled tasks = things that run without you

That's the whole game. Eight setup decisions, one security review, and you have a project that runs.

Maintenance: What to Revisit Monthly

Cowork projects aren't set-and-forget. Every month, I spend 15 minutes per project doing this:


  • Review memory files and delete anything outdated
  • Prune knowledge sources (old documents, stale references)
  • Check which scheduled tasks are still useful and which aren't
  • Audit MCP connections (remove anything I stopped using)
  • Update custom instructions if my workflow has shifted

Projects drift if you don't tend them. A 15-minute monthly review keeps each one sharp.

The Winners Aren't Who You Think

The people winning with Cowork right now are not the ones with the most AI knowledge. They're not developers. They're not power users.

They're the ones who stopped treating it like a chatbot and started treating it like infrastructure.

Scope folders tightly. Set clear custom instructions. Feed memory. Run skills. Connect MCPs. Schedule the repeatable work.



About the author: Sarah Evans is an AI visibility strategist and communications expert with 23+ years in PR. She's a partner at Zen Media and writes at asksarah.ai.

Related reading:

  • Vibecoding for Comms: asksarah.ai/vibecoding-for-comms
  • AI Visibility Guide: asksarah.ai/ai-visibility-guide
  • Subscribe to PR@CTICAL: Sarah's weekly newsletter on AI, PR, and communications

Found this useful? Share it with someone still using Cowork as a chatbot.

How Max Barinov Cut AI Token Consumption 10x While Parsing Adentris's Medical Records

2026-04-14 03:53:45

The problem is daunting. Imagine layers of human medical records, stacked on top of each other, often jotted down in a hurry with barely legible handwriting. Diagnoses, lab results, medication adjustments, billing codes, doctor's notes. Even the most seasoned archaeologist would be intimidated. And even if they have been fully digitalized, electronic medical records are still a swamp to navigate. They are the problem that nobody warns you about. A real challenge.

But Max Barinov is not afraid of such challenges.

The Promise of Large Language Models

When navigating EMRs, large language models are an appealing tool. But they also have their drawbacks and limits. Feed them too much information, haphazardly and recklessly, and they become overwhelmed. That's not something anyone involved in healthcare can afford. LLMs were supposed to make our lives easier, to eliminate the tedious grunt work.

With Adentris, an Austin, Texas-based, Y Combinator-backed company, Barinov has set to work developing an AI-supported platform that helps hospitals keep their EMRs compliant with US healthcare regulations, and with 1996's Health Insurance Portability and Accountability Act (HIPAA) in particular. This has not proven easy, as these medical records are often larger than any model's context window, and checking the compliance of a single patient chart can be very, very expensive.

The Failure of the Naive Approach

To remain compliant with HIPAA, which was created in part to protect patient healthcare data, personally identifiable information, such as patient names, Social Security numbers, or dates of birth, is replaced with secure, unique identifiers called data tokens. Processing a token has a cost, of course, and if one were to take what is referred to as the "naive approach" and simply feed a tokenized EMR into a model and ask it to assess its compliance, the costs would balloon. While the elegance of LLMs was supposed to be their ability to crunch masses of information, when it comes to EMRs, this approach falls flat.

First, all of those tokens are billable, making the use of LLMs unsustainable. Processing EMRs this way also takes more time, and clinicians see little benefit while the model churns through all of those data tokens. Finally, the model itself breaks down in the face of such overload. Overcome with data noise from enormous EMRs, it cannot really do its job properly. LLMs thus make faulty judgments about compliance, rendering their use almost pointless.

Max knew this was the case and last year set out to provide an off-the-shelf solution that could tame these massive EMRs. With more than a dozen years of programming under his belt, having moved from producing web applications into founding engineer roles at Y Combinator startups like Ziina, Max knew that resolving the issue would require some architectural redesign.

Cutting EMRs to Size, Chart by Chart

"My goal has been to build reliable AI systems that reduce cognitive load for users," says Max. "And I do this by focusing on deterministic interaction layers, token efficiency, and measurable outcomes."

He credits his experience with helping to hone this design philosophy. "Over time I gravitated to systems that make teams faster and products more reliable," Max recalls, "and then applied that approach to AI and conversational systems where determinism and cost control are critical."

This was the case with Adentris and the challenge of bloated EMRs. His solution was elegant in its simplicity. Rather than feeding an entire record to the model at once, Max designed a multi-agent architecture that processes each EMR chart by chart. These Adentris AI agents connect directly to hospitals' EMR systems through a custom Model Context Protocol (MCP) server that exposes structured EMR data from Kipu Health, a software platform that allows EMR sharing.

As the AI agents went to work, each chart was cut down to size for its compliance check. Information on medications, diagnoses, and doctor's notes became bite-sized and digestible for the LLM. For once, the model was not overwhelmed; it processed the data systematically and efficiently. The stack itself was pragmatic: Nest.js microservices, React and Next.js interfaces, and MongoDB and PostgreSQL for storage, all containerized and deployed on Azure Kubernetes Service. He also kept its operations restrained and minimalistic, constraining the architecture to its primary purpose: maintaining compliance with HIPAA.
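The chart-by-chart idea can be illustrated with a minimal sketch. Everything here is hypothetical, the type names, the `checkSection` stand-in, and the simple pattern check are illustrations only, not Adentris's actual implementation; a real system would call an LLM per section through the MCP layer.

```typescript
// Hypothetical sketch: split a tokenized EMR into per-chart sections and
// run a compliance check on each one, instead of sending the whole record
// to the model at once. Each section stays well under a context window.

type ChartSection = {
  id: string;
  kind: "medications" | "diagnoses" | "notes";
  text: string;
};

interface ComplianceFinding {
  sectionId: string;
  compliant: boolean;
  reason?: string;
}

// Stand-in for a per-section LLM call. Here we only flag an obvious
// de-identification failure (a raw SSN-shaped string that should have
// been replaced by a data token).
function checkSection(section: ChartSection): ComplianceFinding {
  const leaked = /\b\d{3}-\d{2}-\d{4}\b/.test(section.text);
  return {
    sectionId: section.id,
    compliant: !leaked,
    reason: leaked ? "possible raw PII in section" : undefined,
  };
}

// Audit a chart section by section; findings can be aggregated later.
function auditChart(sections: ChartSection[]): ComplianceFinding[] {
  return sections.map(checkSection);
}
```

The design point is that each call carries only one section's worth of tokens, so cost and context pressure scale with the size of a section, not of the whole record.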

"This has been my defining engineering challenge at Adentris," he says, "because medical records are far larger than any LLM's context window."

With Adentris, Max Barinov had developed a solution for a problem that was plaguing hospitals. The only remaining question was if it worked.

Optimization and Evaluation

There was another innovation that helped make the Adentris platform worthwhile. As Barinov notes, medical records are constantly being added to, and an LLM processing that data would hypothetically need to parse all of it over and over again to reassess compliance. That would still drive up processing costs, which is why he hit upon another idea: track deltas. Rather than rescanning everything, Adentris's system keeps tabs on what has already been scanned. Only updated chart sections trigger additional analysis; the rest of the data stays cached.
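Delta tracking like this is commonly done with content hashes. The sketch below is an assumption about how such a tracker might look, not Adentris's actual code: a section is re-analyzed only when its hash differs from the last one seen.

```typescript
import { createHash } from "node:crypto";

// Hypothetical delta tracker: re-analyze only chart sections whose
// content hash has changed since the last scan; unchanged sections
// are served from cache.
class DeltaCache {
  // sectionId -> SHA-256 hash of the last scanned content
  private seen = new Map<string, string>();

  private hash(text: string): string {
    return createHash("sha256").update(text).digest("hex");
  }

  // Returns true if the section is new or modified and needs a rescan.
  needsScan(sectionId: string, text: string): boolean {
    const h = this.hash(text);
    if (this.seen.get(sectionId) === h) return false; // unchanged: cached
    this.seen.set(sectionId, h);
    return true;
  }
}
```

With most of a record unchanged between visits, the expensive model calls are paid only for the handful of sections that actually moved.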

When Max assessed the cost reduction, he found that his system cut token consumption roughly tenfold. Inspired by OpenAI's evals methodology, he also embedded an evaluation framework into the development lifecycle of Adentris's platform. The system he had designed passed a HIPAA audit on the first attempt and reduced clinician documentation time by about 80 percent. He was able to ship the system within three months, a real achievement in the slow-moving world of regulated healthcare. The company then began offering its tool to US hospitals in a series of commercial pilots. Adentris had arrived.

For Max Barinov, it was par for the course. A computer science major at ITMO University, he had cut his teeth rebuilding a web platform at Ziina and once launched a UK-based investment platform from scratch in fewer than four months. "I have a fascination with making complex systems navigable," he says. It's a design mindset that he will continue to use in future projects.


:::tip This article is published under HackerNoon's Business Blogging program.

:::

\

What Gives a Senior Software Engineer the Credibility to Judge Others' Work

2026-04-14 03:49:39

Olha Krasnozhon is a Senior Software Developer whose work spans complex enterprise systems, peer-reviewed research, and jury roles at international technology competitions. This is a profile of how she got there — and what she thinks it means.

There is a version of a software engineering career that looks impressive on paper: strong companies, senior titles, years of experience. And then there is a different kind of signal — the kind that comes when the profession itself asks you to evaluate someone else's work. To serve on a competition jury. To peer-review a research submission. To be the person in the room whose job is to separate what actually holds up from what merely looks good.

Olha Krasnozhon has accumulated both kinds of signals. A Senior Software Developer with experience across international companies and complex systems, she has combined hands-on engineering practice with applied research, and in 2024 held jury roles at two separate international events. Neither of those roles is something you apply for. They are invitations — and they tell you something about how a field sees a person.

What real quality looks like when things go wrong

Krasnozhon is precise about where engineering quality actually reveals itself. A good demo, she argues, can impress people. The real test comes later — when the system is under load, or when something starts to fail. What matters then is whether the system remains understandable and predictable, and whether the team can clearly explain why it behaves the way it does.

This applies to engineers as much as to their systems. Strong engineers are not defined by speed alone but by judgment: the ability to make sound decisions, weigh trade-offs, and design things that remain coherent over time. In enterprise-level work, where systems evolve for years, that kind of thinking is what separates code that holds from code that accrues debt.

Why she publishes research alongside shipping code

Krasnozhon does not treat engineering practice and research as separate disciplines. For her, they address the same underlying problems from different angles. Practice shows whether ideas hold up when systems become difficult, unstable, or unpredictable. Research forces a different discipline: slowing down, defining a problem clearly, proposing a method, and presenting an idea in a form that others can examine and challenge.

In 2025 she published two papers. The first, in the Bulletin of Cherkasy State Technological University, proposed a strategy for adaptive quorum adjustment to achieve deterministic consensus under variable latencies. The second, in Information Technologies and Computer Engineering, laid out a methodology for designing memory-safe high-performance applications using layered resource isolation. Both address problems that appear in real production systems: predictability, coordination, safety, performance. The goal, as she describes it, is not to produce knowledge for its own sake but to turn ideas into structured thinking that engineers can reuse and build upon.

DEV Challenge XXI: what judging at scale actually involves

In 2024, Krasnozhon was invited to serve on the backend jury for DEV Challenge XXI — one of the largest IT competitions in Eastern Europe. The numbers give a sense of the responsibility involved: 2,575 registered participants across all categories, 107 finalists. The jury's work is part of what makes that narrowing possible.

The backend track included tasks drawn from real operational contexts — among them a challenge involving the processing and classification of large volumes of phone-call data for the Ministry of Foreign Affairs of Ukraine, and a FIFO-based warehouse profit calculation problem. Her role was not only to check whether solutions worked. The more substantive part was assessing architectural decisions, technical reasoning, and how participants handled edge cases and design trade-offs. Competitions like this, she observes, offer something that standard hiring processes rarely do: a window into how engineers think when they encounter problems they have never seen before.

Armenia Digital Awards 2024: evaluating products that are already in people's hands

Also in 2024, Krasnozhon served as a Jury Member at the Armenia Digital Awards, which covered a broad range of live digital products and services — platforms like e-hr.am, financial applications including Evocabank, AMIO Mobile, and EasyPay, and government services such as workpermit.am. These are not prototypes or competition submissions. They are products that real users interact with every day.

Her responsibility was to evaluate whether those products were genuinely strong from a technical and product standpoint — not simply whether they made a good first impression. Product maturity, technical architecture, and practical value were the factors that mattered. Awards events, in her view, offer a wider perspective than most evaluation contexts because they make it easier to see how meaningful a solution actually is, rather than how well it performs in a controlled setting.

Why senior engineers should say yes to these invitations

Krasnozhon is clear about why experienced engineers should accept evaluation and jury roles when they are offered. Competitions, hackathons, and awards are often where new ideas appear before they enter production systems or formal research channels. Practitioners with serious experience can recognize promising approaches early — and when they participate as judges or reviewers, they help keep the focus on substance rather than presentation.

There is also a generational dimension to it. These roles connect senior engineers with people earlier in their careers, and in doing so, help shape the professional standards that outlast any single product or company. For Krasnozhon, accepting such invitations is not primarily about recognition. It is about contributing to the community that shaped her own development — giving back to a field that invested in her before she had anything to show for it.

Build substance before reputation

The advice she offers to engineers who want lasting careers is simple and unsentimental: build substance before reputation. Technologies will keep changing. But clear thinking about systems, architecture, and trade-offs remains valuable across every generation of software. Good design is not about writing elegant code. It is about learning to think precisely and make decisions that remain understandable in the future.

External evaluation — research, technical competitions, peer review — is part of how the profession tests ideas and recognizes expertise. In the long run, the engineers who earn sustained trust are not the loudest ones. They are the ones whose judgment proves reliable over time.

That is, more or less, what an invitation to sit on a jury means. The profession has decided your judgment is reliable enough to use it on someone else's work. It is a quiet signal. But in this field, quiet signals tend to be the ones that matter.


:::tip This article is published under HackerNoon's Business Blogging program.

:::

\