
Infrastructure Will Determine Where the Next Internet Goes

2026-03-14 16:05:16

We’ve known for a long time that the internet has been all about apps, and that growth basically meant keeping users in one interface.

So, chats live in one product, tasks in another, calls in a third, files in a fourth, and each tool makes users open a separate account, carries its own permission model, and contains its own data policy. At this point, being online often means switching between closed apps.

Today, though, that approach has reached its limits. AI is accelerating automation and synthetic content, regulation is turning data location into a product requirement, and users spend more time in live interactions — calls, meetings, streams — where any glitch is noticed instantly and can cost you trust.

That’s why I think the real advantage now is going below the interface, to infrastructure that can scale, adapt, and be resilient.

Apps Hit a Ceiling

In fact, apps still run on the same core assumptions, and those assumptions now suppress growth. The ceiling shows up in three places: platforms own everything, real-time communication remains tied to one vendor, and scale now depends on geography and compliance.

Platforms Own the Relationship

For the past 15–20 years, the internet has been driven by apps that grew into closed ecosystems and became the entry points for social life, work, and communication. The main tradeoff was control: people don’t really own their contacts, content, or interaction history; they lease them from platforms.

So when a service gets blocked, changes the rules, or shuts down, users end up being deprived of their account, audience, history, and the entire network. That’s why “platform risk” feels so personal.

X is the most recent reminder. The platform can tighten rules to curb low-quality AI spam and content degradation, and that is arguably good for users. Even so, for businesses built on that ecosystem, an abrupt policy shift can cut reach, break workflows, and trigger costly rebuilds. You can build a strong product and still be forced into rework because a single vendor changed the terms.

Real-Time Communication Is Locked In

Real-time communication (RTC), meaning live audio and video calls, reflects the ceiling in the following way: a Zoom meeting exists inside Zoom, and a Google Meet call exists inside Google Meet. In other words, the vendor owns the infrastructure, users join on that vendor’s terms, and connecting one system to another still takes custom work.

Even teams that build real-time features into their own product can end up with the same dependency. They choose one cloud region map, one pricing model, one legal setup, and then design their entire product around it. Over time, switching stops being a technical change and becomes a real business risk.

So the main issue is the dependency — one provider can set pricing, limit coverage and define what “compliant” means in practice.

Scale Now Means Geography

In RTC products, scale punishes small mistakes. When usage spikes, delay and choppy audio appear quickly, and costs rise with them. That’s why many teams learn that the cheapest option only looks cheap at launch.

Now, scaling also means geography and rules. A product has to work for users in different countries, with different uptime promises, data-location rules, and country-based access restrictions — this is where old stacks start to break down. Vendors may promise compliance, but proving which routes traffic took and where data actually resided gets complicated.

Decentralized Networks Change the Tradeoffs

For a long time, teams treated RTC as a forced choice, as you either build it in-house and stay liable for operations, or accept deep cloud dependency and live with its risks. In this context, decentralized infrastructure adds a third option, because it removes the assumption that one company has to own the entire network.

Build It Yourself or Rent It

If a team builds RTC in-house, it pays with time and complexity: it needs servers in multiple regions, support for many SDKs and browsers, and constant tuning as traffic rises and falls. It also needs people to handle malfunctions when they happen — on-call shifts, late-night fixes, incident reports. And even then, the team must keep extra capacity for peak times, so some servers sit idle much of the time while the company still pays for them.

That’s why many teams choose a classic cloud provider instead. The start is easier, as you plug in, launch, and let someone else run the hardest parts. Even so, the caveats appear later. Costs become harder to predict as traffic grows, while coverage, in turn, remains uneven by region.

Distributed Scale Feels Different

A decentralized network spreads traffic across many machines, rather than one central cluster, so users experience fewer outages. If one location fails, the whole service doesn’t necessarily fail with it, or, if one region is weak, the system can route around it — so you get fewer “all-or-nothing” failures.

It also changes how a team expands. “We need reliable calls in country X” no longer implies a rebuild, because capacity can come from places closer to users. For real-time products, that proximity matters because users feel issues immediately. A bad call isn’t a minor bug — in that moment, it’s the product.

Escape the One-Vendor Trap

As I mentioned, in RTC, dependence reveals itself quickly when one provider can set pricing, decide coverage, and limit what you can do in a session. Moreover, exactly the same provider sets the rules for handling data and access.

Decentralized real-time networks, sometimes called decentralized RTC, or dRTC, change the balance, as a team can keep cloud-like speed and scale, but spread the risk across a network instead of one owner. I’m not claiming this is a panacea. It just gives teams more room to tackle regional outages, control cost, and avoid panic fixes when a vendor changes terms.

The Next Internet Demands Infrastructure

In the next 5–10 years, the internet will likely reward infrastructure that can prove, control, and adapt. Yes, that sounds like a headline, but it’s the reality teams have to accept: the web is becoming more automated, more global, and harder to trust. Apps still matter, but they can no longer carry that weight alone. Here is what I expect to matter most:

  • Trust stops being a UI promise. Deepfakes and voice cloning make it easy to fake what at first glance looks real. So treat every session like an event you may need to prove later, which means tracking who joined, when it ran, what changed and which rules applied. Build tamper-evident logs from the start, then, in case of disputes or audits, make them exportable.
  • Data sovereignty moves into product design. More buyers will ask where the data went and expect proof; simply put, “we comply” won’t be persuasive on its own. Provide infrastructure that can keep traffic inside a region when required, and make that provable during reviews.
  • AI gets wired into the network. Translation, noise suppression and quality repair work best in real time, close to the session. End devices will stay constrained, so teams should rely on the network to place lightweight compute near users and keep performance stable as conditions change.
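The tamper-evident logging mentioned above can be illustrated with a hash chain, where each session event commits to the hash of the previous entry, so any later modification is detectable. This is a minimal sketch; the event fields are hypothetical:

```python
import hashlib
import json

def append_event(log, event):
    """Append an event whose hash covers the previous entry's hash,
    so modifying any earlier entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    """Recompute every hash; return False if any entry was tampered with."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, {"type": "join", "user": "alice", "ts": "2026-03-14T16:00:00Z"})
append_event(log, {"type": "rule_applied", "rule": "region=EU"})
assert verify(log)
log[0]["event"]["user"] = "mallory"   # tampering is now detectable
assert not verify(log)
```

Exporting such a log for a dispute or audit then only requires handing over the entries; anyone can re-run the verification.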

That’s why the next decade is unlikely to be about just building more apps. The real work will drift under the screen, into infrastructure that can prove, route, and protect by design.

Beyond Native Bots: Integrating Generative AI and Third-Party Intelligence Into ServiceNow

2026-03-14 15:52:12

Most organizations that have deployed ServiceNow's native Virtual Agent eventually run into the same wall. The out-of-the-box bot handles password resets and ticket creation reasonably well, but the moment a user asks something nuanced, like why their laptop order is delayed or how to navigate a policy exception for remote work equipment, the conversation collapses into a dead end. The agent either escalates to a human or drops the user entirely. For enterprises operating at scale, that gap between what users expect and what the native tooling delivers is not just a usability problem. It is a measurable drag on productivity and IT credibility.

Over the past several years, a new class of third-party AI platforms has matured enough to fill that gap in a meaningful way. Integrating tools like Moveworks, Glean Search, and X-Bot AI into the ServiceNow ecosystem does not just patch existing weaknesses. It fundamentally changes what an intelligent virtual agent can do and how it reasons about enterprise context.

The Limitations of Native Virtual Agent

ServiceNow's Virtual Agent is a capable conversation designer with solid integration into ITSM workflows. It understands topics, can trigger flows, and surfaces catalog items. But it is fundamentally template-driven. The interaction logic relies on predefined conversation trees, which means it handles known problems well but fails unpredictably on anything outside those paths. There is also no real language understanding in the generative sense. Users must phrase requests in ways the bot has been trained to recognize, and when they do not, the experience degrades quickly.

The other challenge is knowledge fragmentation. Most enterprise environments do not keep all their support content inside ServiceNow. Policies live in Confluence. Procurement data sits in SAP. HR documentation is in Workday. Native Virtual Agent has no native way to reach across those boundaries in real time, which leaves users with partial answers or no answers at all.

Moveworks: Reasoning Across the Enterprise Stack

Moveworks introduced a different architecture for enterprise support automation, one built around large language model reasoning rather than conversation trees. When integrated with ServiceNow, Moveworks acts as the natural language layer that intercepts user intent, interprets it across multiple possible action paths, and then surfaces the right resolution, whether that means creating a ticket, approving a request, retrieving a policy document, or escalating with full context already populated.

What makes this integration technically interesting is how it connects to ServiceNow's underlying APIs without replacing the platform's workflow engine. Moveworks uses the Now Platform's REST APIs and integration spokes to execute actions, which means all the governance, auditing, and CMDB integrity that organizations rely on remains intact. The AI layer sits on top as an orchestration and understanding engine, not as a replacement. From an architectural standpoint, this is a much safer approach than trying to rebuild workflow logic inside a generative model.

In practice, organizations using this integration have seen measurable improvements in deflection rates for L1 support, particularly for access provisioning, software requests, and benefits-related questions that cut across multiple systems. The key differentiator is that Moveworks can handle multi-step resolution, not just single-turn Q&A.

Glean Search: Bringing Enterprise Knowledge Into the Conversation

One of the most underappreciated problems in enterprise AI deployments is retrieval quality. A language model is only as useful as the information it can access at inference time. For ServiceNow integrations, this matters a lot because users frequently ask questions that require pulling context from systems the platform does not natively index. Glean addresses this directly by acting as a unified enterprise search layer that connects to dozens of data sources, including Confluence, Google Drive, Slack, Salesforce, and GitHub, and then makes that index available to the virtual agent at query time.

When integrated into a ServiceNow virtual agent flow, Glean's API can be called as an enrichment step before the agent formulates its response. This means that instead of a static knowledge base lookup, the bot is conducting a live, permission-aware search across the entire enterprise knowledge graph. Permission-aware is the critical phrase here. Glean respects the user's existing access controls when returning results, which eliminates a common security concern around AI-powered search tools surfacing restricted content.

From a technical implementation perspective, Glean exposes a clean REST API that can be embedded inside a ServiceNow Integration Hub spoke or called through a Flow Designer action. The response payload includes ranked results with source metadata, which can be formatted and surfaced inside the chat interface without significant custom development. Teams that have implemented this describe the result as giving the virtual agent a memory that spans the entire organization.
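The enrichment step described above can be sketched as follows. Note that the endpoint URL, payload fields, and result shape here are illustrative assumptions, not Glean's actual API contract; the point is the pattern of building a permission-aware request and formatting ranked results for chat:

```python
import json
import urllib.request

# Hypothetical tenant endpoint -- consult the vendor's API reference for the real one.
SEARCH_URL = "https://example-tenant.glean.example/api/search"

def build_search_request(query, user_token):
    """Build a search request on behalf of the end user. Passing the user's
    own token lets the search layer enforce their existing access controls."""
    return urllib.request.Request(
        SEARCH_URL,
        data=json.dumps({"query": query, "pageSize": 5}).encode(),
        headers={"Authorization": f"Bearer {user_token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def format_for_chat(results):
    """Turn ranked results (title plus source metadata) into chat-friendly lines."""
    return [f"{i + 1}. {r['title']} [{r['source']}]" for i, r in enumerate(results)]

# Example response payload shape (hypothetical field names):
sample = [{"title": "Remote work equipment policy", "source": "Confluence"},
          {"title": "Laptop refresh FAQ", "source": "Google Drive"}]
print(format_for_chat(sample))
```

In a real integration the request construction and result formatting would live inside an Integration Hub spoke or Flow Designer action rather than ad hoc code.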

X-Bot AI: Conversational Intelligence with Domain Specificity

X-Bot AI takes a slightly different approach by focusing on domain-specific conversational intelligence that can be fine-tuned to an organization's particular processes and vocabulary. For enterprises with complex, industry-specific support needs, such as financial services firms with regulatory workflows or healthcare organizations with compliance-heavy request types, off-the-shelf language understanding often struggles. X-Bot's architecture allows for targeted model customization that reflects the actual language and logic of the business.

When connected to ServiceNow, X-Bot functions as a conversational front end that handles the natural language understanding and dialogue management while delegating transactional actions back to the Now Platform. This separation of responsibilities is technically clean and mirrors how mature enterprise AI architectures tend to be structured. The bot handles the conversation; the platform handles the record. Both sides do what they are built for.

Architectural Considerations for Integration

Building an intelligent virtual agent by combining these tools requires thinking carefully about where each component lives in the request lifecycle. A common pattern that works well in practice is to let the third-party AI platform handle intent classification and initial response generation, then use ServiceNow as the system of action for anything requiring a transaction, a workflow trigger, or a logged record. The middleware between these layers is usually Integration Hub or a lightweight API gateway, depending on the organization's existing infrastructure.

Latency is a real concern in these designs. Each additional API call adds response time, and users in a chat interface have very little tolerance for delays beyond two to three seconds. Caching frequently accessed knowledge results, parallelizing API calls where possible, and setting strict timeouts with graceful fallback behaviors are all necessary engineering considerations rather than optional optimizations.
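The timeout-with-fallback and fan-out patterns can be sketched in a few lines. This is a simplified illustration with dummy calls, not production middleware; real code would also log the timeout for observability:

```python
import concurrent.futures
import time

def call_with_fallback(fn, fallback, timeout_s):
    """Run an enrichment call under a strict timeout; on expiry, return a
    fallback answer instead of making the chat user wait."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return fallback
    finally:
        pool.shutdown(wait=False)

def parallel_enrichment(calls, timeout_s):
    """Fan out independent API calls so total latency tracks the slowest
    call rather than the sum of all calls."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(call_with_fallback, fn, None, timeout_s)
                   for name, fn in calls.items()}
        return {name: f.result() for name, f in futures.items()}

results = parallel_enrichment(
    {"kb": lambda: "kb-article-123",
     "slow": lambda: time.sleep(1.5) or "too late"},
    timeout_s=0.5,
)
print(results)  # {'kb': 'kb-article-123', 'slow': None}
```

The fallback value (here `None`) is where a graceful degradation message would plug in, e.g. answering from the static knowledge base instead.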

Security and data residency are equally important. When third-party AI platforms process user queries, they may be handling sensitive HR, legal, or financial content. Organizations need to review each vendor's data handling policies carefully and ensure that integration designs align with their data classification requirements. In regulated industries, this often means deploying certain components in a private cloud configuration rather than relying on shared SaaS infrastructure.

Where This Is Heading

The trajectory of enterprise AI in the ServiceNow ecosystem is moving toward what practitioners are starting to call agentic support, where the virtual agent does not just answer questions but takes sequences of actions autonomously, monitors outcomes, and adapts based on results. Moveworks, Glean, and X-Bot are all investing heavily in this direction. The organizations that will be best positioned to leverage these capabilities are the ones that have already done the foundational work of connecting their systems cleanly through ServiceNow and have established solid data governance practices.

Native Virtual Agent will continue to improve, and ServiceNow's own investments in Now Assist signal that the platform vendor takes generative AI seriously. But the pace of innovation in the third-party ecosystem is faster, and for enterprises with complex, heterogeneous environments, the composable approach of integrating specialized AI tools into the ServiceNow backbone is likely to remain the most practical path to delivering genuinely intelligent support at scale.

Risk Engineering Behind an Automated Pricing Engine for One Million SKUs

2026-03-14 15:40:22

Most automated pricing systems are built to optimize. Very few are built to survive failure.

When a system manages 1,000,000+ SKUs and processes 500,000 daily price updates across multiple marketplaces, pricing stops being a growth feature.

It becomes financial infrastructure.

At that scale:

  • A single incorrect price can erase 30% margin instantly
  • A misinterpreted API response can affect thousands of SKUs within minutes
  • A faulty rollout can create $15,000–$20,000 daily exposure per seller

The problem is not that errors occur. The problem is propagation velocity.

This article breaks down the architecture of a high-load pricing infrastructure engineered to minimize systemic financial risk through blast-radius containment, exposure modeling, and multi-layer validation.

System Scale and Operational Constraints

Before discussing architecture, scale must be understood.

Operational Footprint

  • 1,000,000+ SKUs under management
  • 500,000 price updates per day
  • Up to 70% of updates concentrated during peak hours
  • Sustained ~2000 RPS
  • 256 distributed workers (scaled from 20)
  • Event-driven microservices architecture

SLA Constraints

  • Price refresh ≤ 10 minutes
  • Anomaly detection ≤ 1 minute
  • 99.9% uptime (improved from 80%)
  • 100% SKU processing coverage

Financial Exposure Profile

  • Average item price: $15–$20
  • Margin exposure per incorrect price: ~30%
  • Potential daily exposure per large seller: $15,000–$20,000

At this scale:

  • Retry logic becomes financial risk logic
  • API contract changes become systemic threats
  • AI recommendations require containment layers

Architectural Principle: Survivability Over Optimization

The system was designed around a single constraint:

Every price change must be financially survivable.

Not optimal. Not aggressive. Survivable.

The architecture follows an event-driven microservice model with strict isolation boundaries.

Core Components

  • Market Data Ingestion
  • Canonical Pricing Model
  • Price Control Core
  • Risk & Guardrails Layer
  • Execution Service
  • Post-Apply Verification
  • Audit & Metrics

Optimization logic is deliberately separated from risk enforcement.

The pricing engine proposes. The risk layer governs.

Isolation and Blast-Radius Containment

One of the most underestimated risks in automated pricing is cross-client propagation.

If pricing anomalies spread across:

  • Sellers
  • Marketplaces
  • Product categories

…the blast radius becomes uncontrollable.

Isolation exists at multiple layers:

  • Marketplace-level queue segregation
  • Seller-level financial boundaries
  • SKU-level hard caps
  • Category-based staged execution

Queues are not performance tools. They are containment mechanisms.

Execution pipelines are partitioned by:

  • Marketplace
  • Store size
  • Inventory depth
  • Risk tier

Staged Rollout Model

New features are deployed using controlled expansion:

  1. Internal stores
  2. Risk-aware voluntary sellers
  3. Limited SKU segments
  4. Gradual scaling

Financial simulation precedes rollout. Rollback mechanisms are prepared before deployment.

Propagation is engineered — not assumed.

The Two-Phase Validation Model

The system does not trust its own output.

Every price passes through multi-layer validation.

Phase 1: Pre-Calculation Validation

  • Data freshness checks
  • Canonical normalization
  • Type and null enforcement

Phase 2: Pre-Send Guardrails

  • Hard min/max bounds
  • Percentage change caps (e.g., ±30% for new sellers)
  • Margin floor protection
  • Price corridor enforcement

Phase 3: Post-Apply Verification

  • Marketplace confirmation validation
  • Drift detection between calculated and applied values
  • Automatic anomaly escalation

This layered model reduced the incident rate from 3% to 0.1% during system evolution.
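The Phase 2 pre-send guardrails above can be sketched as a pure function that either accepts a proposed price or rejects it with a reason. The thresholds are illustrative, not the system's actual values:

```python
def check_guardrails(proposed, current, cost,
                     floor, ceiling,
                     max_change_pct=0.30, min_margin_pct=0.05):
    """Return (ok, reason). Every rule is a hard bound: a rejected price
    is never sent to the marketplace."""
    # Hard min/max bounds (the price corridor).
    if not (floor <= proposed <= ceiling):
        return False, "outside hard min/max bounds"
    # Percentage change cap, e.g. +/-30% for new sellers.
    if current > 0 and abs(proposed - current) / current > max_change_pct:
        return False, "percentage change cap exceeded"
    # Margin floor: never price below cost plus a minimum margin.
    if proposed < cost * (1 + min_margin_pct):
        return False, "below margin floor"
    return True, "ok"

assert check_guardrails(18.0, 17.0, 12.0, floor=10.0, ceiling=30.0) == (True, "ok")
# 17 -> 25 is a +47% jump, so the change cap blocks it.
assert check_guardrails(25.0, 17.0, 12.0, floor=10.0, ceiling=30.0)[0] is False
# Selling at cost violates the margin floor.
assert check_guardrails(12.0, 12.5, 12.0, floor=10.0, ceiling=30.0)[0] is False
```

Keeping this logic side-effect-free makes it trivially unit-testable, which matters when a bug in it is a financial event rather than a cosmetic one.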

Engineering the Risk Layer

At scale, pricing mistakes are not bugs.

They are financial events.

The risk layer was designed as an independent subsystem with its own:

  • Thresholds
  • Escalation paths
  • Freeze logic

It is not embedded inside the pricing engine.

It governs it.


Budget-at-Risk Modeling

Instead of validating SKUs individually, the system models exposure at portfolio level:

Risk Exposure ≈ Inventory × (Cost − Target Price) × Expected Velocity

If projected exposure exceeds seller-defined thresholds:

  • Price application is blocked
  • Fallback pricing is activated
  • Alerts are triggered
  • Manual confirmation may be required

Even small price deviations, when multiplied by inventory and velocity, become systemic financial events.

Exposure modeling prevents mass underpricing cascades.
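The portfolio-level gate described above can be sketched directly from the exposure formula. Field names and thresholds are illustrative:

```python
def budget_at_risk(skus):
    """Portfolio-level exposure: inventory x (cost - target price) x expected
    velocity, summed over SKUs whose target price sits below cost."""
    return sum(
        s["inventory"] * (s["cost"] - s["target_price"]) * s["velocity"]
        for s in skus
        if s["target_price"] < s["cost"]
    )

def gate(skus, threshold):
    """Block the whole batch when projected exposure exceeds the
    seller-defined threshold, instead of validating SKUs one by one."""
    exposure = budget_at_risk(skus)
    return ("block" if exposure > threshold else "apply"), exposure

batch = [
    {"inventory": 500, "cost": 16.0, "target_price": 15.0, "velocity": 0.4},
    {"inventory": 200, "cost": 15.0, "target_price": 18.0, "velocity": 0.2},
]
decision, exposure = gate(batch, threshold=150.0)
print(decision, exposure)  # block 200.0
```

Note how a mere one-dollar underprice on the first SKU produces $200/day of projected exposure once inventory and velocity are factored in, which is exactly the cascade the gate exists to catch.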

Self-Healing Data Architecture

External APIs introduce volatility:

  • Field format changes (e.g., null → 0)
  • Discount semantic shifts
  • Undocumented contract modifications
  • Stale or inconsistent responses

Mitigation mechanisms include:

  • Cross-source consistency validation
  • Anomaly scoring
  • Data quarantine
  • Automatic re-fetch and correction flows

Erroneous datasets are isolated before they affect pricing logic.

This increased SKU processing coverage from 80% to 100%.

AI With Guardrails — Not AI in Control

Machine learning is used as an advisory layer only.

ML provides:

  • Anomaly detection scores
  • Competitive pricing pattern recognition
  • Promotion impact modeling

It does not have write authority.

All outputs pass through:

  • Hard financial bounds
  • Risk-tier enforcement
  • Budget-at-risk limits

AI without containment in financial systems is not innovation.

It is volatility amplification.

Incident Case Study: API Contract Shift

A marketplace modified discount semantics without prior notice.

Previously:

  • Removing a discount required sending null

After update:

  • The API expected 0

Impact

  • ~15% of sellers affected
  • Discount values misinterpreted
  • Exposure risk could have escalated to six figures

Containment Mechanisms Activated

  • Two-phase validation detected abnormal deltas
  • Risk-tier escalation triggered freeze logic
  • Fallback pricing applied
  • Batch rollback restored last safe state
  • Alerts notified engineering

Financial losses were contained.

Post-incident improvements included:

  • Contract anomaly monitoring
  • Semantic abstraction layers

External volatility must be treated as systemic risk.
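The semantic abstraction layer added after this incident can be sketched as a pair of adapters: one normalizing every marketplace encoding of "no discount" into a single canonical value, one emitting whichever wire format the current API version expects. Function and field names are illustrative:

```python
def normalize_discount(raw):
    """Map marketplace-specific 'no discount' encodings (null, 0, empty
    string) onto one canonical internal value, so an upstream contract
    change touches only this adapter, never the pricing logic."""
    if raw in (None, 0, ""):
        return {"has_discount": False, "value": 0.0}
    return {"has_discount": True, "value": float(raw)}

def encode_discount(canonical, api_version):
    """Emit the wire format the marketplace currently expects."""
    if not canonical["has_discount"]:
        # Before the contract shift the API expected null; after it, 0.
        return None if api_version < 2 else 0
    return canonical["value"]

# Both legacy and new encodings collapse to the same canonical value:
assert normalize_discount(None) == normalize_discount(0)
assert encode_discount({"has_discount": False, "value": 0.0}, api_version=1) is None
assert encode_discount({"has_discount": False, "value": 0.0}, api_version=2) == 0
```

With this in place, contract anomaly monitoring only needs to watch the adapter boundary, not every consumer of discount data.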

Measurable Transformation

The architectural redesign produced sustained improvements:

  • Incident rate reduced from 3% to 0.1%
  • Uptime improved from 80% to 99.9%
  • SKU coverage increased from 80% to 100%
  • Test coverage expanded from 20% to 95%
  • Worker infrastructure scaled from 20 to 256
  • Client base grew to 10,000+ active sellers

Support load remained stable despite exponential growth.

This was not incremental optimization.

It was resilience engineering.

Industry Implications

Automated pricing is converging toward financial infrastructure.

Three macro-trends are emerging:

  1. Consolidation under larger financial institutions
  2. Marketplace-native pricing infrastructures
  3. AI-driven systems without adequate guardrails

The third trend is the most dangerous.

At scale, optimization without containment becomes fragility.

Financial automation requires architectural maturity comparable to banking systems.

Conclusion

When a system manages 1 million SKUs and processes 500,000 daily price updates, pricing becomes a financial control surface.

Engineering responsibility increases accordingly.

Automated pricing at scale is not about outperforming competitors.

It is about ensuring that optimization never becomes financial catastrophe.

That requires risk engineering — not just algorithms.

About the Author

Rodion Larin is a Financial Systems Architect and Head of Pricing Automation Engineering specializing in distributed high-load infrastructures for marketplace ecosystems.

He designed and implemented a financially resilient pricing architecture managing 1M+ SKUs and 500k daily price updates, reducing systemic pricing incidents from 3% to 0.1% while achieving 99.9% uptime.

Smart Technology, Passive Humans: The Psychology Behind Automated Living

2026-03-14 15:21:44

Technology was supposed to make life easier. And it did.

Food arrives without effort. Navigation requires no memory. Recommendations eliminate decision fatigue. Smart systems adjust lighting, temperature, and even our daily routines. Everything moves smoothly, efficiently, and with minimal friction.

Convenience has become the defining feature of modern innovation.

But what if convenience, while solving one problem, is quietly creating another?

Not environmental pollution. Not data pollution.

Behavioral pollution.

The Comfort of Effortless Living

Convenience reduces friction. And friction, historically, forced awareness.

When something required effort, we noticed it. We thought about it. We engaged with it. Walking to a store made distance real. Cooking made ingredients visible. Managing waste made consumption tangible.

Smart systems remove those moments.

Food delivery apps remove the experience of sourcing. Cloud storage removes the physicality of consumption. Digital payments remove the sensation of spending. AI suggestions remove the effort of deciding.

Nothing feels heavy anymore. And because nothing feels heavy, nothing feels consequential.

Convenience does not only change what we do. It changes how much we feel responsible for doing it.

The Psychology of Automation

Automation shifts cognitive load from human to machine. That’s the design goal.

But when responsibility shifts with it, something subtle happens: agency weakens.

When systems auto-correct, auto-fill, auto-renew, auto-recommend, and auto-optimize, participation becomes optional. The world feels managed. The invisible machinery of algorithms gives the impression that someone—or something—is handling the complexity.

The result is not laziness. It is distance.

We become observers of optimized systems instead of participants in conscious decisions.

Over time, this distance compounds.

Environmental Impact Without Friction

Take sustainability as an example.

We can now track carbon footprints through apps. We can invest in ESG funds with a click. We can purchase “carbon-neutral” services without altering habits.

Technology makes environmental alignment easier.

But ease can dilute intention.

When responsibility is packaged as a feature, it risks becoming symbolic rather than transformative. We check a box. We toggle a setting. We assume progress.

Yet true sustainability often requires inconvenience—less consumption, slower choices, intentional trade-offs.

Convenience smooths over the discomfort that meaningful change demands.

The Illusion of Efficiency

Efficiency feels virtuous. It signals improvement. Faster processes, streamlined logistics, reduced waste. But efficiency optimized for growth can mask broader consequences.

Streaming eliminates physical media—but increases energy demand. Fast delivery reduces waiting—but expands packaging waste. Digital convenience reduces travel—but increases data center emissions.

Each innovation solves a surface friction while redistributing impact elsewhere.

Convenience rarely eliminates cost. It relocates it.

When cost becomes invisible, responsibility follows.

Passive Consumption in a Smart World

The more intelligent systems become, the less active users need to be.

Recommendations decide what we watch. Feeds decide what we read. Maps decide where we go. Smart assistants decide what we need.

Choice remains technically available, but default behavior dominates.

Convenience trains predictability. Predictability reduces awareness.

And awareness is where responsibility begins.

Without awareness, behavior becomes automated. Without reflection, consumption becomes habitual.

A passive user is not malicious. But passivity scales.

Is Convenience the Problem?

Convenience itself is not inherently harmful. It improves accessibility. It saves time. It reduces barriers.

The problem emerges when convenience becomes the highest value.

When ease outranks intention.

When speed outranks reflection.

When optimization outranks responsibility.

Technology does not force passivity. But it makes passivity comfortable.

And comfort is persuasive.

Reclaiming Friction

Perhaps the solution is not to reject convenience, but to reintroduce conscious friction.

Not inefficiency for its own sake—but pauses that restore awareness.

Choosing slower options occasionally. Reviewing subscriptions instead of auto-renewing. Cooking instead of ordering. Reading beyond headlines. Questioning algorithmic suggestions.

Small acts of friction re-anchor responsibility.

Technology should assist agency, not replace it.

The goal is not to feel guilty for using smart tools. The goal is to remain awake while using them.

The Responsibility Question

As technology advances, responsibility becomes a design question as much as a personal one.

Can systems be built to encourage reflection instead of eliminating it?

Can convenience coexist with awareness?

Can innovation enhance responsibility rather than dilute it?

These are not technical limitations. They are value decisions.

Technology amplifies whatever incentives guide it.

If convenience remains the dominant incentive, passivity will scale alongside progress.

If responsibility becomes embedded in design, engagement might scale instead.

Why This Matters

In a world moving toward automation, responsibility cannot be outsourced entirely.

Tools can optimize outcomes. But intention still begins with humans.

Convenience is powerful. It reduces effort. It removes barriers. It accelerates life.

But when ease becomes the default expectation, participation weakens.

And a world run smoothly but thoughtlessly is still vulnerable.

Technology may continue to get smarter.

The deeper question is whether we will remain attentive enough to use it consciously.

Because the real risk is not intelligent machines.

It is comfortable humans.

The TechBeat: Create a Website Without Code: How Fabricate Turns Conversations Into Full-Stack Apps (March 14, 2026)

2026-03-14 14:11:26

How are you, hacker? 🪐 Want to know what's trending right now? The Techbeat by HackerNoon has got you covered with fresh content from our trending stories of the day! Set email preference here.

Create a Website Without Code: How Fabricate Turns Conversations Into Full-Stack Apps

By @boostlegends1 [ 9 Min read ] Build full-stack web apps without coding. Fabricate uses AI to generate frontend, backend, databases, and payments from simple prompts in minutes. Read More.

How to Build an AI Medical Scribe With AssemblyAI

By @assemblyai [ 6 Min read ] Building a medical scribe requires more than transcription accuracy. It's about creating a system that fits into clinical workflows while respecting privacy. Read More.

The Engineering Leader’s Playbook for Achieving Real ROI with AI

By @playerzero [ 8 Min read ] Learn how engineering leaders maximize AI ROI by reducing MTTR, preventing defects, reclaiming capacity, and turning incidents into operational advantage. Read More.

SERP Benchmarks: Success Rates and Latency at Scale

By @brightdata [ 8 Min read ] ​​We benchmark SERP APIs for success rate, ​​speed, and stability under load. Learn which setup delivers consistent results for AI agents ​​and deep research. Read More.

SecurityMetrics Announces Suite of CMMC Solutions for Defense Contractors of All Sizes

By @pr-securitymetrics [ 2 Min read ] SecurityMetrics announces new CMMC tool for compliance and validation, helping primes and subcontractors streamline the CMMC process. Read More.

It's Not Kubernetes. It Never Was.

By @davidiyanu [ 8 Min read ] Kubernetes isn't breaking your multi-cloud strategy; your org structure is. A senior engineer explains why your deployment pipeline is fighting you. Read More.

Beyond 10x Engineers: Designing AI-Native Teams Around Context

By @playerzero [ 4 Min read ] AI-native teams replace 10x engineer dependency with distributed judgment, boosting resilience, speed, and scalable execution. Read More.

A Beginner-to-Advanced Guide to Using DomoAI

By @jonstojanjournalist [ 4 Min read ] Learn how to use DomoAI from beginner to advanced, including 4K video, lip sync avatars, anime animation, and pro workflows. Read More.

The OpenClaw Saga: How the Last Two Weeks Changed the Agentic AI World Forever

By @thomascherickal [ 22 Min read ] OpenClaw has exposed the biggest issues in the hyperscaler companies, and it is the leader in AI agents. However, it is a security risk. Use local LLMs instead! Read More.

Which Crypto Exchanges Can You Actually Trust in 2026?

By @oleggbaglai [ 9 Min read ] MiCA is live. Bybit survived the biggest crypto hack in history. And trust, in 2026, is something users must verify for themselves - here is how. Read More.

C# Barcode Library In-Depth Comparison: Ranked by Use Case

By @ironsoftware [ 34 Min read ] We tested 12 .NET barcode libraries side by side to help you choose the right SDK for scanning, generation, PDFs, and mobile apps. Read More.

A Guide to HIPAA Compliance for Software Development

By @vanta [ 8 Min read ] HIPAA requires organizations to minimize access to protected health information. Read More.

AI Is Not Being Adopted. It Is Being Installed.

By @thegeneralist [ 8 Min read ] AI isn’t being adopted so much as installed. Sci-fi saw the power structure first, and that lens helps explain AI, work, and who pays first. Read More.

Why Integrate Dandelion++ Onto the Beldex Network?

By @beldexcoin [ 5 Min read ] Dandelion++ enhances Beldex privacy by protecting transaction origins at the network layer, preventing timing analysis and metadata-based deanonymization. Read More.

I Automated 80% of My Code Review With 5 Shell Scripts

By @tyingshoelaces [ 11 Min read ] Six months. One sticky note. Five shell scripts. Here's how to stop doing a computer's job every time Claude Code touches your codebase. Read More.

The Future of Millennial Relationships: How Virtual Intimacy Is Transforming Our Connections

By @socialdiscoverygroup [ 7 Min read ] In an age of constant connectivity, millennials are quietly redefining what intimacy looks like. Read More.

A Developer’s Introduction to Solana

By @manishmshiva [ 7 Min read ] A practical guide to Solana’s architecture, Proof of History, Rust-based development, and how it enables fast, low-cost decentralized applications. Read More.

I Built a Free Bloomberg Alternative and Made It Open Source Because $24K/yr Is Insane

By @raviteja-nekkalapu [ 4 Min read ] Bloomberg does all of this. But not for $2,000 a month. One developer built the 25-section stock report he actually wanted and open-sourced it. Read More.

MCP is Already Dying

By @omotayojude [ 3 Min read ] Is the Model Context Protocol a step backward? Explore why the traditional command line is still the most powerful & transparent interface for AI agents in 2026 Read More.

Why SaaS Companies Waste 60% of Their Google Ads Budget

By @kingdavvd [ 7 Min read ] Up to 60% of SaaS Google Ads budgets go to waste. Learn the structural mistakes behind it and how to fix campaigns for better ROI. Read More.

🧑‍💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️

ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME

We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️

CFG Tree Traversal: Mastering Pairing Functions and Bijections

2026-03-14 10:25:17

Table of Links

Abstract and 1. Introduction

2. Pairing functions

  3. Enumerating trees
  4. LZ-trees
  5. Conclusion and References

2 Pairing functions

To construct a bijection between trees and integers, we use a construction that has its roots in Cantor [12]’s proof that the rationals can be put into one-to-one correspondence with the integers. Cantor used a pairing function [13] to match up N × N with N itself:

\ π(x, y) = (x + y)(x + y + 1) / 2 + y.
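For concreteness, here is a small Python sketch of the standard Cantor pairing function and its inverse. The `+ y` convention used below is an assumption; Cantor's construction is sometimes stated with `+ x` instead, which simply swaps the roles of the coordinates.

```python
import math

def cantor_pair(x: int, y: int) -> int:
    # Walk the anti-diagonals of N x N: all pairs with the same x + y
    # share a diagonal, and y gives the offset along that diagonal.
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(n: int) -> tuple[int, int]:
    # Recover the diagonal index w from the largest triangular number
    # not exceeding n, then read off the offset y and the remaining x.
    w = (math.isqrt(8 * n + 1) - 1) // 2
    y = n - w * (w + 1) // 2
    return (w - y, y)
```

Because every natural number lands on exactly one diagonal position, `cantor_unpair` inverts `cantor_pair` for all inputs, which is what makes this a bijection rather than merely an injection.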

\ Other pairing functions are more convenient for some applications. A popular alternative, illustrated in Figure 1 is the Rosenberg-Strong pairing function [17],

\ R(x, y) = m² + m + x − y,  where m = max(x, y).

\ Then, for example, n = 147 can be broken down as,

\ 147 = 12² + 12 + 3 − 12 = R(3, 12), since m = ⌊√147⌋ = 12 and 147 − 12² = 3 ≤ 12, giving x = 3 and y = 12.
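A minimal Python sketch of Rosenberg-Strong pairing and unpairing follows. It assumes the common convention R(x, y) = m² + m + x − y with m = max(x, y); this is my reading of the construction, and the paper's Figure 1 may use a mirrored variant.

```python
import math

def rs_pair(x: int, y: int) -> int:
    # Rosenberg-Strong pairing: enumerate N x N in square "shells",
    # where shell m contains exactly the pairs with max(x, y) == m.
    m = max(x, y)
    return m * m + m + x - y

def rs_unpair(n: int) -> tuple[int, int]:
    # The shell index is floor(sqrt(n)); the offset d within the shell
    # says whether we sit on the top edge (y == m) or right edge (x == m).
    m = math.isqrt(n)
    d = n - m * m
    return (d, m) if d <= m else (m, 2 * m - d)
```

Under this convention, 147 decodes to the pair (3, 12). Note the use of `math.isqrt` rather than floating-point `sqrt`, so the shell index is exact for arbitrarily large integers.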

\ It may also not be obvious how to use this approach to generate from an arbitrary CFG, where the productions allowed at each step depend on the current nonterminal type. In particular, each nonterminal may have multiple possible expansions, and the set of expansions differs from nonterminal to nonterminal. A simple scheme, such as giving each CFG rule an integer code and then using a pairing function like R to recursively pair them together, will not in general produce a bijection, because some integer codes do not map onto full trees (for instance, pairings of two terminal rules in the CFG).

\ The issue is that in generating from a CFG, we have to encode a choice of which rule to expand next, of which there are only finitely many options; the number of choices will in general depend on the nonterminal. Our approach is to use two different pairing functions: a modular “pairing function” to encode which expansion to use and the Rosenberg-Strong pairing function to encode integers for the children of any node. Thus, if a given nonterminal has k expansions, we define a pairing function that pairs {0, 1, 2, . . . , k − 1} × N with N. A simple mod operation, shown in Figure 1, will work:

\ mod_k(c, n) = c + k·n,  which pairs {0, 1, . . . , k − 1} × N with N; the inverse sends m to (m mod k, ⌊m/k⌋).
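A minimal sketch of this modular pairing, under the assumption that the rule choice is stored in the low-order residue (n mod k) and the quotient is passed on to encode the node's children:

```python
def mod_pair(choice: int, rest: int, k: int) -> int:
    # Pair a bounded rule choice in {0, ..., k-1} with an unbounded
    # integer "rest" (used for the node's children) into one integer.
    assert 0 <= choice < k
    return choice + k * rest

def mod_unpair(n: int, k: int) -> tuple[int, int]:
    # Inverse: the residue selects which expansion rule to apply at
    # this nonterminal; the quotient encodes the rest of the subtree.
    return n % k, n // k
```

Since every natural number has a unique residue and quotient modulo k, this is a bijection for each fixed k, which is exactly the property needed to keep the overall tree encoding bijective.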

:::info Author:

(1) Steven T. Piantadosi.

:::


:::info This paper is available on arxiv under CC BY 4.0 license.

:::

[1] A function f such that f(x, y) > max(x, y).