HackerNoon

Want Your OpenTelemetry Project to Succeed? Be Sure to Follow This One Piece of Advice

2026-04-04 22:00:24

Leading your organization to adopt OpenTelemetry is a challenge. In addition to all the usual project hurdles, you'll face one of two situations: convincing your teams to start using OpenTelemetry, or convincing them to migrate from the telemetry tool they already use. Most people don't want to change.

You'll need a lot of effort and many baby steps. My tip is the following: the fewer the changes, the higher your chances of success. In this post, I want to tackle a real-world use case and describe which tools you can leverage to reduce the necessary changes.

The Context

Imagine a JVM application that already exposes metrics via JMX. To be more precise, let's take the example of a Spring Boot application running embedded Tomcat. The developers were supportive of the Ops team, or were tasked with doing Ops themselves.

They added the Spring Boot Actuator, as well as Micrometer JMX. A scheduled pipeline gets the metrics from JMX into a backend that ingests them.

MBeans shown in jconsole

During my previous talks on OpenTelemetry, I advised the audience to aim for the low-hanging fruit first: start with zero-code instrumentation. On the JVM, this translates to attaching the OpenTelemetry Java agent when launching the JVM:

java -javaagent:opentelemetry-javaagent.jar -jar app.jar

It looks like an innocent change at the individual level. Yet, it may be the responsibility of another team, or even a team outside your direct control. Slight changes pile upon slight changes, and change-management overhead compounds with each one. Hence, the fewer the changes, the less time you spend on change management, and the higher your chances of success.

Why not leverage the existing JMX beans?

Integrating JMX in OpenTelemetry

A quick search of JMX and OpenTelemetry yields the JMX Receiver, which is deprecated. It points to the JMX Metric Gatherer, which is also deprecated. The state of the art, at the time of this writing, is the JMX Metric Scraper.

This utility provides a way to query JMX metrics and export them to an OTLP endpoint. The JMX MBeans and their metric mappings are defined in YAML and reuse implementation from jmx-metrics instrumentation.

— JMX Metric Scraper

Note that the scraper is only available as a JAR. It is, however, trivial to create a Docker image out of it:

FROM eclipse-temurin:21-jre

ADD https://github.com/open-telemetry/opentelemetry-java-contrib/releases/latest/download/opentelemetry-jmx-scraper.jar \
    /opt/opentelemetry-jmx-scraper.jar

ENTRYPOINT ["java", "-jar", "/opt/opentelemetry-jmx-scraper.jar"]

While you can configure individual JMX bean values to scrape, the scraper provides config sets for a couple of common software systems that run on the JVM, e.g., Tomcat and Kafka. You can also provide your own config file for specific MXBeans. Here's a sample custom config file:

rules:
  - bean: "metrics:name=executor*,type=gauges"     # 1
    mapping:
      Value:
        metric: executor.gauge                     # 2
        type: gauge                                # 3
        unit: "{threads}"
        desc: Spring executor thread pool gauge
  1. JMX bean to map
  2. OpenTelemetry metric key
  3. Metric kind. See possible options here.

You can use it with the OTEL_JMX_CONFIG environment variable:

services:
  jmx-gatherer:
    build: ./jmx-gatherer
    environment:
      OTEL_SERVICE_NAME: jmx-otel-showcase
      OTEL_JMX_SERVICE_URL: service:jmx:rmi:///jndi/rmi://app:9010/jmxrmi
      OTEL_JMX_TARGET_SYSTEM: jvm,tomcat                                       # 1
      OTEL_JMX_CONFIG: /etc/jmx/springboot.yaml                                # 2
  1. JMX presets
  2. Reference the custom config file

Putting it all Together

Here are the components of a starting architecture to display the application's metrics:

  • JMX Metric Scraper
  • OpenTelemetry Collector
  • Prometheus
  • Grafana

Starting OpenTelemetry metrics architecture

The JMX Metric Scraper polls metrics from a JVM, using the JMX interface for this. It then pushes them to an OpenTelemetry Collector. For the demo, I chose a simple flow. The Collector exposes metrics in Prometheus format on an HTTP endpoint. A Prometheus instance polls them and exposes them for Grafana.
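For illustration, a minimal Collector configuration for this flow could look like the following sketch. The endpoints and exact layout are assumptions, not the demo's actual file:

```yaml
# Sketch: receive OTLP metrics pushed by the JMX Metric Scraper
# and expose them in Prometheus format for scraping.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```

The Prometheus instance would then be configured to scrape the Collector's port 8889, and Grafana would use Prometheus as a data source.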

Grafana Dashboard

Conclusion

In this post, I kept a JMX-enabled application as is and used the JMX Metric Scraper to send MBean metrics to the OpenTelemetry Collector.

Introducing OpenTelemetry into your information system doesn't need to involve more changes than necessary. You can, and probably should, keep as much of your existing assets as possible and focus your efforts on the required parts. It's always possible to change them at a later stage, when things are more stable. Once the migration has proven successful, it might be the right time to nudge teams toward the regular Java agent.

The complete source code for this post can be found on Codeberg.



Originally published at A Java Geek on March 29th, 2026.

There's a Problem with Distribution, and Nobody Talks About It

2026-04-04 20:59:59

I have 61 post drafts queued up. 91 reply drafts. 18 finished blog posts, voice-matched and slop-filtered, ready to go. The distribution problem nobody talks about isn't content scarcity - it's the gap between a full queue and zero published output. That gap is where builder ambition goes to die quietly, surrounded by perfectly organized markdown files and automated scheduling daemons that never fire.

The Search Experience on pkg.go.dev: How It Works

2026-04-04 20:00:24

We are excited to launch a new search experience on pkg.go.dev.

These changes were motivated by feedback we’ve received about the search page, and we hope you enjoy them. This blog post provides an overview of what you can expect to see on the site.

Grouping related package search results

Search results for packages in the same module are now grouped together. The most relevant package for the search request is highlighted. This change was made to reduce noise when several packages in the same module may be relevant to a search. For example, searching for “markdown” shows a row listing “Other packages in module” for several of the results.

Results for different major versions of the same module are also now grouped together. The highest major version containing a tagged release is highlighted. For example, searching for “github” shows the v39 module, with older versions listed as “Other major versions”.

Lastly, we have reorganized information related to imports, versions, and licenses. We also added links to these tabs directly from the search results page.

Introducing symbol search

Over the past year, we have introduced more information about symbols on pkg.go.dev and worked on improving the way that information is presented. We launched the ability to view the API history of any package. We also label symbols that are deprecated in the documentation index and hide them by default in the package documentation.

With this search update, pkg.go.dev now also supports searching for symbols in Go packages. When a user types a symbol into the search bar, they will be brought to a new search tab for symbol search results. There are a few different ways in which pkg.go.dev identifies that users are searching for a symbol. We’ve added examples to the pkg.go.dev homepage, and detailed instructions to the search help page.

Feedback

We’re excited to share this new search experience with you, and we would love to hear your feedback!

As always, please use the “Report an Issue” button at the bottom of every page on the site to share your input.

If you’re interested in contributing to this project, pkg.go.dev is open source! Check out the contribution guidelines to find out more.


Julie Qiu

This article is available on The Go Blog under a CC BY 4.0 DEED license.

Photo by Agence Olloweb on Unsplash

I Ran a Token Project for Three Years. Here's What Actually Happened.

2026-04-04 19:54:02

Most people in crypto see one part of the cycle. They get in, something happens, they get out.

I stayed for the whole thing. Launch, growth, peak, the slow deterioration you try to explain away, and the end. Three years inside a single project. I still do not know exactly what to make of it, but I will try to write down what I saw.


September 2022

I launched marumaruNFT with genuinely low ambitions.

I had been investing in IDOs since around 2020. The pattern was always the same: buy a token, watch it drop 90% within a few months. I did this ten times in a row. Ten tokens, all down more than 90%.

At some point I noticed something that I am not proud of noticing: tokens have a property that stock equity does not. When a token price collapses, the issuer's obligation to return capital basically disappears. If someone buys $10,000 worth of your token and it drops to 1/10 of its value, you only need to return $1,000 worth of value to make them whole. That math interested me before the building did.

That was my starting mindset. Small and a little cynical.

The presale raised over $100,000 anyway. The IDO boom was already fading, but money came in. Then more money came in. I was a first-time builder who had never designed a protocol before, and the numbers were going in a direction I did not fully understand. Peak liquidity hit approximately $6,000,000. The token appreciated roughly 500x from presale pricing over about twelve months. We got listed on CoinW. A Tokyo Stock Exchange-listed company filed two formal regulatory disclosures about their partnership with us. U.Today and Bitcoin.com covered the project editorially.

I did not have a framework for what was happening. I just watched it and kept building.


The Thing I Was Trying to Build

I had watched enough DeFi protocols collapse by this point to have a theory about why.

Hyper-APY, Play-to-Earn, liquidity mining — all of these inflate token supply while the underlying asset base stays the same. Prices can only go one direction under that pressure. The protocols were engineering their own decline. I thought I understood the problem.

My answer was: build a company. A real business generating real revenue. If the NFT marketplace worked, there would be an actual reason for the token to hold value that did not depend on how many new buyers showed up today. That was the thesis.

It failed.

The NFT market collapsed broadly during this period. The marketplace business model turned out to be weaker than I had modeled. And NFTs had product problems I probably should have weighed more heavily from the start. People perceived them as just image files. The usability was poor. The reputational deterioration was faster than I expected.

I sometimes think about whether building around AR or VR instead of NFTs would have changed things. Maybe. I cannot prove it. The decision to build around NFTs constrained everything downstream in ways that only became clear much later, when it was too late to pivot cleanly.

When the business failed, the token had nothing holding it up. In December 2025, MARU was delisted.

It was not fraud. Third parties with their own due diligence processes — a regulated exchange, a TSE-listed company, international crypto media — had all verified the project before associating with it. They checked. The project passed. Then it failed for other reasons.

Many investors who entered late lost money. That is true, and I am not going to minimize it. The critical assessments of me that still exist online are fair. That is a significant part of why No NPC Society ($NONPC) has to work. It is the closest thing I have to an honest response to the people who were hurt.


What I Got Wrong

I could list this cleanly. Four bullet points, each one a lesson. But that would make it look more organized than it actually was, and I think the disorganized version is more useful.

The first thing I got wrong was confusing price performance with design validation. Twelve months of upward price movement felt like evidence that the architecture was correct. It was not. Bull markets hide problems. By the time those problems became visible, the decisions that created them were already locked in. I had no way to evaluate the design independently of whether the price was going up, and I did not try to.

The second thing: I had $6 million in liquidity, and I treated that number as though it were a permanent property of the project. It was not. It was a snapshot of participant confidence and market momentum at a particular moment. When confidence weakened, the liquidity left. I had no mechanism that made it stay. I just had a large number, and I mistook its size for permanence.

The third thing I got wrong was harder to see while it was happening. Attention — media coverage, event participation, community energy — was doing more structural work than I realized. I thought the NFT marketplace would eventually replace it as the engine. It did not. And when attention faded, and the business failed, there was nothing else.

I know these things now. I did not know them clearly then. I learned them by living through the consequences.


NONPC: What I Built After

After MARU, I made one firm decision: no product.

I had seen what happens when a token's survival depends on a product working. I did not want to build that again. But I also came out of that experience with a conviction: a token that can only sustain itself through continuous external inflows is not structurally different from a liquidity mining scheme. The energy source is still attention and momentum, just dressed up differently.

So the question became whether a protocol could generate its own capital without a traditional product. That question produced the two engines inside No NPC Society.

ACE — Awakening Creator Engine — generates capital from the community's creative activity.

It runs as a recurring NFT contest. Community members submit original works, the community votes to select winners, and winning works are minted as NFTs and sold. The revenue split is fixed in the published specification and enforced on-chain: 50% goes directly to the creator. The remaining 50% enters a treasury, of which 80% is injected into the NONPC/SOL liquidity pool within 7 days of settlement, and 20% goes to DAO and operational funds.

No external capital required. Every transaction is disclosed publicly on-chain.

AFX — Awakening Flywheel Experiment — compounds existing liquidity using trading fees instead of new token issuance.

Every trade generates fees. A program-controlled smart contract account collects those fees automatically, rebalances them to match the current pool ratio, and reinvests them as permanently locked LP positions. No new tokens. No inflation. The only energy source is fees from real trading activity.

The execution is constrained to prevent the protocol's own operations from destabilizing the market: maximum once per week, size capped at 0.5% of pool SOL reserves per execution, 0.5% slippage cap, retry logic that reduces execution size from 100% to 70% to 50% on failure, with random delays of 2 to 4 hours between attempts to eliminate timing predictability.
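The sizing and retry constraints above can be sketched as follows. This is illustrative JavaScript, not the actual on-chain program; `planExecution` and its shape are hypothetical names:

```javascript
// Sketch of the AFX execution constraints described in the post:
// size capped at 0.5% of pool SOL reserves, retries shrink the
// execution from 100% to 70% to 50%, with a random 2–4 h delay.
function planExecution(poolSolReserves, attempt) {
  const sizeFactors = [1.0, 0.7, 0.5];            // 100% → 70% → 50% on retry
  if (attempt >= sizeFactors.length) return null; // give up after three tries
  const maxSize = poolSolReserves * 0.005;        // 0.5% of SOL reserves
  const delayHours = 2 + Math.random() * 2;       // random delay in [2, 4) hours
  return { size: maxSize * sizeFactors[attempt], delayHours };
}
```

Under these rules, a pool with 1,000 SOL in reserves would never see a single reinvestment larger than 5 SOL, and a failed attempt retries smaller rather than larger.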

When trading continues, locked liquidity grows. When trading stops, growth stops. But locked liquidity does not leave. That is the thing MARU was missing.

I want to be clear about what this does and does not solve. AFX compounds what exists. It does not manufacture value from nothing. If there is no trading activity, there are no fees to compound. What it does is prevent the specific failure mode I watched in MARU: liquidity that evaporates the moment external confidence drops.

Whether that is enough to build something that lasts, I genuinely do not know yet. This design is still being tested in real market conditions.


The Record

The failure does not disappear. The people who lost money do not disappear.

What I can do is document it honestly and make the next design verifiable. Every NONPC specification is publicly versioned on GitHub. Every on-chain state can be checked by anyone. The treasury wallet addresses are documented. The LP lock conditions are public.

If NONPC works, the reasons will be traceable. If it fails, that will be on the record as well.

That is what I owe.



Your AI Already Has Root Access to Your Life. You Just Don't Know It Yet.

2026-04-04 19:29:35

Every day, millions of developers hand their AI assistant the keys to their machine. Shell access. File system. Network. API keys stored in plaintext. And they do it through software pathways that were designed for web apps, not for systems that can reason, plan, and act.

This is not a theoretical problem. It's happening right now.

In January 2026, OpenClaw — the fastest-growing open-source AI agent, with over 100,000 GitHub stars in five days — became the center of a multi-vector security crisis. Researchers discovered that any website a developer visited could silently hijack their AI assistant. No plugins needed. No user interaction. Just a browser tab and a WebSocket connection.

The vulnerability chain was devastating: token theft, full gateway control, access to every connected service — Slack, Telegram, email, source code repositories. Researchers at Oasis Security demonstrated end-to-end compromise from a browser tab to full workstation takeover.

But here's the part that should worry you more than the CVE: even after the patch, the architectural problem remains. OpenClaw stores API keys and OAuth tokens in plaintext files. Its skills marketplace had over 800 malicious extensions — 20% of the entire registry. Microsoft's security team published a blog post whose first recommendation was essentially: don't use it on a machine that matters.

OpenClaw is not uniquely careless. It is transparently typical. It just happened to grow fast enough for the world to notice.


The pattern is always the same:

A powerful AI capability arrives. It gets packaged into a tool. The tool inherits ambient trust from the operating system. Users adopt it without understanding the operational risks. Credentials leak. Data leaks. And by the time someone writes a patch, the damage is structural.

This is what happens when you run intelligence inside environments designed for spreadsheets.


What would be different?

Imagine an execution environment — an AI Operating Substrate, an AIOS — designed from the ground up for one purpose: governing how intelligence is hosted, constrained, observed, and audited. Not an app. Not a plugin. Not a wrapper. A substrate.

In that substrate, every tool would be denied by default. A capability would need to be explicitly granted before it could execute. High-risk operations — shell access, file writes, network calls — would be permanently blocked at the policy level, not subject to override.

Every action would pass through a pipeline: decision, selection, validation, human approval, execution, and audit. If any stage fails, the pipeline stops. Not logs a warning. Stops.
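As a sketch, such a deny-by-default, fail-closed pipeline could be modeled like this. All names are hypothetical; no real AIOS exposes this API:

```javascript
// Illustrative sketch: capabilities are denied unless explicitly
// granted, high-risk capabilities are blocked with no override,
// and any failing pipeline stage halts execution entirely.
const GRANTED = new Set(['read_file']);                      // explicit allow-list
const BLOCKED = new Set(['shell', 'file_write', 'network']); // never overridable

function runPipeline(action, stages) {
  if (BLOCKED.has(action.capability)) throw new Error('blocked by policy');
  if (!GRANTED.has(action.capability)) throw new Error('capability not granted');
  for (const stage of stages) {
    // each stage (validation, human approval, …) must return true
    if (!stage(action)) throw new Error('pipeline stopped'); // fail closed
  }
  return 'executed';
}
```

The point of the sketch is the ordering: the policy check happens before any stage runs, and a failed stage throws rather than logging and continuing.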

Memory would not be an open store. It would be governed — segmented into planes with different rules: some append-only, some ephemeral with explicit time-to-live, some frozen and cryptographically signed. The system could not silently accumulate context or expand its own working memory.

And hardware — compute, storage, network — would be mediated through a capability-gated abstraction. The intelligence would need to hold the right credentials before touching any resource. No ambient access. No inherited trust.

The human would always hold the keys. Escalation, once triggered, would be terminal — no automated process could override it. The walls would be made of policy, telemetry, cryptographic trust, and audit trails. Not of hope.


This is not a fantasy. It is an engineering direction.

The question is not whether we need it. The question is whether we build it before the next OpenClaw, or after.

Privacy Isn't a Feature, It's an Obligation

2026-04-04 19:17:04

Building Trustworthy Applications Through Security by Design

Most applications that handle your personal data don't encrypt it.

They store it in plaintext on third-party servers. They send it through public APIs to train machine learning models. They back it up to cloud services owned by companies you've never heard of.

Then they call it "privacy."

They'll add a privacy policy. They'll mention encryption in their marketing. They'll promise not to sell your data (while still feeding it to third-party AI companies). And they'll sleep fine at night because they technically followed the letter of the law.

But here's what nobody says out loud: if you're building a tool that stores personal details about someone's relationships — their health struggles, their family situations, their career anxieties, their confidential professional contacts — you have an obligation. Not a feature. An obligation.

The obligation to make sure that data never leaves your control.

We started building Meaningful around this principle. And it forced us to rethink everything about how modern applications are built. More importantly, it forced us to rethink how we write secure code.


The problem with "privacy as a feature"

When privacy is treated as a feature, it becomes optional.

It's something you add because users ask for it. Something you charge money for. Something you mention in the landing page copy and then move on.

But privacy for relationship data isn't optional. It's fundamental.

Consider what happens when you log a note about your friend: "struggling with depression, mentioned it over coffee." That's not metadata. That's not an email. That's someone's private life. If that leaks, it doesn't just expose data. It violates trust.

Or what happens when you record a voice note after talking to a colleague: "internally frustrated with management, looking for exit opportunities." That's career-sensitive information. In the wrong hands, it ends someone's opportunities.

Most relationship management tools don't think about this at all. They treat relationship data the same way they treat any other user data: convenient to store, easy to monetize, useful for training models.

The problem is relationship data is different.

It's not your data alone. It's other people's personal lives. Their most precious memories. Things they told you in confidence that you're just keeping a note about.


What goes wrong when relationship data leaks

It's not hypothetical.

In 2021, a misconfigured S3 bucket leaked millions of contact records from a major CRM platform. Not just names and phone numbers. Notes. Conversation history. Personal details.

In 2023, an AI training dataset included relationship management tool exports — full chat histories, personal notes, everything. People's private thoughts about their friends and colleagues became training data for a language model.

It doesn't have to be a headline breach. Sometimes it's just negligence.

A developer leaves a MongoDB connection string with authentication disabled. A backup gets stored in the wrong region. A third-party API integration gets compromised. A disgruntled employee exports user data.

And suddenly, everything you recorded about your relationships is out there.

But here's what's worse: you probably didn't even know it was vulnerable. Because the tool you trusted with that data didn't encrypt it. They just promised to "keep it safe." Which is like promising to keep your house safe by having the front door unlocked but hoping no one walks in.

\

Why third-party AI is a privacy nightmare

Most "AI features" in relationship tools are actually just API calls to OpenAI or Anthropic.

Your relationship data gets sent to their servers. They process it. They return a response. And somewhere in the terms of service, it says they might use your data to improve their models.

The companies doing this aren't being malicious. They're following their business model. But the business model is: user data is a training asset.

So when you ask your personal CRM "who haven't I talked to in 3 months?" you're not just getting an answer. You're sending your entire contact list, interaction history, and relationship context to a third-party server. Where it gets logged. Cached. Possibly included in training data.

This is especially problematic for relationship data because it reveals patterns about your actual life. Your friends. Your family. Your professional network. Your vulnerabilities.

An AI trained on millions of people's relationship data can infer things about you that you never explicitly said. It can infer who your best friends are. Who your romantic interests might be. Who's struggling. Who's isolated.

That's not data you consented to share. And definitely not data you should have to share to use a relationship management tool.


How we built Meaningful differently

When we started building Meaningful, we made a different choice.

We decided: relationship data stays in your control. Full stop.

That meant several technical decisions that complicate our infrastructure but protect your privacy. And it meant establishing a security-first development process to ensure we never accidentally compromise that promise.

Encryption at rest with per-user keys and salts

All personal data — your contacts, your notes, your interaction history, anything sensitive — gets encrypted with AES-256-CTR before it touches our MongoDB database.

But here's the part that matters: we don't use a single master key for all users.

Each user gets a unique encryption salt stored in their account. When we encrypt their data, we derive a unique key for that user using:

userKey = HMAC-SHA256(ENCRYPTION_KEY, userId + encryptionSalt)

This means:

  • If our database is compromised, attackers get encrypted data
  • Even if the master encryption key is leaked, they'd need 400+ individual user salts to decrypt anything
  • Each user's data is isolated — a breach doesn't expose everyone

Each piece of encrypted data stores a random 16-byte initialization vector (IV) and the ciphertext. On read, we decrypt server-side and strip the ciphertext before sending anything to your browser. You never see the encrypted version. But the database only stores the encrypted version.

This means if someone breaks into our database, they get ciphertext. Not your actual data.

Private AI inference (no third-party APIs)

Our AI assistant, EdgeAI, runs on a dedicated DigitalOcean droplet that we control. Not OpenAI. Not Anthropic. Not any third-party API.

The model is Llama 3.2 3B, open-source and quantized to run on modest hardware (4GB RAM, 2 vCPUs). It processes your requests locally. Your data never leaves our infrastructure.

When you ask EdgeAI "who should I reconnect with?" your contact list, interaction history, and journal entries stay on our servers. The model reads them. Produces a response. And that's it. No logs sent to third parties. No training data harvested.

The trade-off is infrastructure cost and complexity. We pay for a dedicated droplet. We manage model updates ourselves. We own the responsibility if something breaks.

But the benefit is: your relationship data is never exposed to third-party training pipelines.

Local transcription for voice notes

We use faster-whisper (an open-source reimplementation of OpenAI's Whisper model) running locally to transcribe voice notes.

You record a note. It gets transcribed on your device or on our servers — but the audio never touches OpenAI. It stays local.

This matters because audio contains not just what you said, but tone, emotion, context. It's rich data about your relationships. Sending it to third parties for transcription would leak that.

Selective encryption based on context

We don't encrypt everything equally because not everything is equally sensitive.

A chat message where you ask EdgeAI "how do I politely decline an event?" gets encrypted because it might involve relationship context.

A chat message where you ask "what's the capital of France?" doesn't get encrypted because it's not personal data.

The decision is made at write time based on the intent of the conversation. App-query and app-action intents (anything touching your actual contacts and relationships) get encrypted. General knowledge and chitchat don't.

This is a tradeoff: encrypted data can't be cached or quickly searched across users. But it's the right tradeoff for relationship data.
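A write-time gate like this can be sketched in a few lines. The intent labels come from the post; the function name is a hypothetical of mine:

```javascript
// Only intents that touch personal relationship data get encrypted;
// general knowledge and chitchat are stored as-is.
const SENSITIVE_INTENTS = new Set(['app-query', 'app-action']);

function shouldEncrypt(intent) {
  return SENSITIVE_INTENTS.has(intent);
}
```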


Building a security-first development strategy

Here's what sets security apart from other technical choices: you can refactor a bad architecture. You can rewrite inefficient code. But you can't "fix" a security breach after the fact.

We take security seriously. And we don't share your data with anyone because we know how important it is for you.

That means security isn't something we patch in at the end. It's woven into how we think about every feature, every API, every line of code that touches your data.

Writing secure code isn't about writing it once and hoping it works. It's about building security reviews into your development process.

We didn't want to just implement encryption. We wanted to make sure the implementation was actually solid. So we built a systematic security review workflow.

Starting with the security-reviewer skill

We started by adapting the security-reviewer skill from the Claude Code community. This skill provides a comprehensive framework for detecting OWASP Top 10 vulnerabilities, hardcoded secrets, injection attacks, and unsafe cryptographic practices.

But we realized a general-purpose security reviewer wasn't enough for our specific needs. We needed custom security reviews for the three areas that matter most for Meaningful: encryption, authentication, and input validation.

Building three custom security skills

We created three specialized security review skills tailored to our codebase:

1. security-review-encryption.md

This skill checks:

  • No hardcoded encryption keys (always in process.env)
  • Salt generation uses crypto.randomBytes() (not predictable)
  • Key derivation uses HMAC-SHA256 (strong hash)
  • No plaintext data logged after encryption
  • Version field prevents algorithm downgrades (old data stays readable)

2. security-review-auth.md

This skill checks:

  • All sensitive routes have auth middleware
  • Passwords hashed with bcrypt (never plaintext comparison)
  • JWT tokens validated on every request
  • No hardcoded secrets
  • Role-based access control enforced

3. security-review-input.md

This skill checks:

  • All user inputs validated (type, length, pattern)
  • No SQL injection (using parameterized queries)
  • No XSS (React auto-escapes, but we verify)
  • File uploads have size and type limits
  • API responses filtered (passwords/secrets never returned)

Core security checks we run

For every code change touching encryption, auth, or user input, we run these checks:

# 1. Check for hardcoded secrets
grep -r "ENCRYPTION_KEY\|API_KEY\|SECRET" server/ \
  --include="*.js" | grep -v "process.env" | grep -v ".env"

# 2. Check for SQL injection patterns
grep -r "query.*req\.body\|query.*req\.query" server/ --include="*.js"

# 3. Check for plaintext password comparison
grep -r "password\s*==" server/ --include="*.js"

# 4. Check for unprotected routes
grep -r "router\.\(post\|put\|delete\)" server/routes --include="*.js" \
  | grep -v "auth" | grep -v "public"

# 5. Run npm audit
npm audit --audit-level=high

# 6. Check encryption key derivation
grep -r "deriveUserKey\|HMAC\|hmac" server/utils/encryption.js

Quick security audit script

We also use a quick shell script that developers can run before committing:

#!/bin/bash
echo "🔐 Security Audit — Meaningful"
echo "================================"

echo "1. Checking for hardcoded secrets..."
grep -r "ENCRYPTION_KEY\|API_KEY\|SECRET\|password" server/ \
  --include="*.js" | grep -v "process.env" | grep -v ".env" || echo "✅ No secrets found"

echo -e "\n2. Checking npm dependencies..."
npm audit --audit-level=high || echo "⚠️ Some vulnerabilities found"

echo -e "\n3. Checking for unprotected routes..."
grep -r "router\.\(post\|put\|delete\)" server/routes --include="*.js" \
  | grep -v ", auth" | grep -v "public" || echo "✅ All sensitive routes protected"

echo -e "\n4. Checking encryption usage..."
grep -r "encrypt\|decrypt" server/routes --include="*.js" \
  | grep -v "encryptionSalt" | head -5

echo -e "\n✅ Audit complete. Review findings above."

This is the kind of thing that's boring to do manually but trivial to automate. By running it every time code touches security-sensitive areas, we catch problems early.


The infrastructure cost of privacy

Being privacy-first is expensive.

We run a second server just for AI inference. We manage encryption keys and rotation ourselves. We store more data (encrypted data is larger than plaintext). We can't use certain optimization techniques that would require plaintext access.

If we were comfortable with the standard playbook — send everything to OpenAI, store data plaintext, train on user interactions — we could save significant costs.

But we're not.

Because relationship data isn't a cost center to optimize. It's something we're responsible for protecting.


What this means for you

When you use Meaningful, here's what's actually happening:

  1. Your contact information and relationship notes get encrypted with AES-256 using a key derived from your account
  2. Your voice notes get transcribed locally
  3. Your AI assistant processes everything on a private server
  4. Your calendar sync stays within your control
  5. Your data is never sent to third-party AI APIs for training
  6. We run security audits before every code change touching encryption, auth, or user input

This doesn't make Meaningful immune to data breaches. No system is. But it means if something does go wrong, the damage is limited to encrypted data. And we have a very high responsibility to protect it because we control it entirely.

It also means EdgeAI won't hallucinate as easily. Instead of sending sparse context to a 70B parameter model trained on the entire internet, we send rich, relevant context to a small model running on our infrastructure. The model stays grounded in your actual relationship data instead of making things up based on general knowledge.


The philosophy behind this

This isn't about being preachy about privacy.

It's about recognizing what relationship data actually is: someone's trust, written down.

And it's about recognizing that security is something you build into every decision, not something you bolt on at the end. It's not a feature. It's an obligation.

If you're building an application that touches people's personal information, you need to ask yourself: are we treating security like a feature we'll add later? Or are we building it into the foundation?

The difference matters. And your users will notice.


Resources

If you're building privacy-focused applications, you might find these resources useful:

  • Security-reviewer skill — the Claude Code community repository whose framework we adapted
  • Meaningful security skills — Our three custom security review skills (encryption, auth, input validation)
  • OWASP Top 10 — Essential reference for web application security

Try Meaningful — fully free during alpha.

The code is built on security. The commitment to your privacy is real.