Rss preview of Blog of HackerNoon

Designing for AI Agents

2026-02-14 17:55:35

For two decades, product design has revolved around a stable premise: users initiate, software responds. Even as AI crept into products such as recommendation engines, predictive text, and fraud detection, the interface still framed intelligence as reactive.

AI agents break that contract. An agent does not wait for instruction. It monitors context, forms intentions, takes actions, and adapts its strategy over time. It delegates to APIs, coordinates across systems, and sometimes executes decisions without asking first. In other words, it behaves less like a feature and more like a junior operator.

In late 2025, McKinsey reported that more than two-thirds of surveyed organizations use AI in more than one business function, and 23% say they are already scaling an “agentic AI system” somewhere in the enterprise. Meanwhile, Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI (up from <1% in 2024), enabling 15% of day-to-day work decisions to be made autonomously.

Designers are no longer shaping tools. They are shaping artificial actors.

The core UX question shifts from How does a user operate this system? to How does a human supervise, collaborate with, and constrain an autonomous one?

1. Designing for Delegation, Not Interaction

Traditional UX minimizes friction in task execution; agent UX maximizes clarity in delegation. Consider a revenue operations lead at a SaaS company who exports reports and adjusts forecasts manually. When she assigns a standing objective to an agent to monitor pipeline health and intervene when conversion drops below a threshold, the agent reviews CRM data, identifies weak segments, proposes pricing experiments, and drafts emails to account managers. No button is clicked to initiate each action; the system acts continuously.

The design problem here is not button placement. It is delegation architecture:

  • What scope of authority does the agent have?
  • Under what conditions does it act autonomously versus seek approval?
  • How are boundaries defined?

This means designing authority settings as first-class objects, not buried toggles. An agent might operate under clearly tiered modes:

  • Observation only
  • Recommendation with preview
  • Conditional auto-execution
  • Full autonomy within defined limits

Delegation becomes configurable, inspectable, and adjustable. If users cannot see the boundaries of an agent’s power, they will not trust it.
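The tiered modes above can be sketched as a first-class policy object. This is a hypothetical illustration, not a real framework: the names `AuthorityMode` and `AgentPolicy`, and the spend-limit guard, are assumptions chosen to make the tiers concrete.

```python
from enum import Enum, auto

class AuthorityMode(Enum):
    OBSERVE_ONLY = auto()            # agent may watch, never act
    RECOMMEND_WITH_PREVIEW = auto()  # agent proposes; a human approves each action
    CONDITIONAL_AUTO = auto()        # agent acts alone when guard conditions pass
    FULL_AUTONOMY = auto()           # agent acts alone within hard limits

class AgentPolicy:
    """Delegation as a configurable, inspectable object rather than a buried toggle."""

    def __init__(self, mode: AuthorityMode, max_spend: float = 0.0):
        self.mode = mode
        self.max_spend = max_spend  # example hard limit: budget per action

    def requires_approval(self, action_cost: float) -> bool:
        """Decide whether a human must approve before the agent executes."""
        if self.mode in (AuthorityMode.OBSERVE_ONLY,
                         AuthorityMode.RECOMMEND_WITH_PREVIEW):
            return True
        if self.mode is AuthorityMode.CONDITIONAL_AUTO:
            return action_cost > self.max_spend  # escalate beyond the threshold
        return False  # FULL_AUTONOMY, inside defined limits

policy = AgentPolicy(AuthorityMode.CONDITIONAL_AUTO, max_spend=500.0)
print(policy.requires_approval(100.0))   # within delegated authority
print(policy.requires_approval(2000.0))  # escalates to a human
```

Because the policy is a plain object, a settings screen can render it directly, making the agent's boundaries visible and adjustable.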

2. Making Autonomy Observable

The most destabilizing property of AI agents is invisibility. They act in background threads, across integrations, outside the visible screen. When humans don’t understand what automation is doing, they disengage—until something goes wrong.

UX for agents must therefore prioritize observability:

  • Real-time activity logs written in plain language
  • Clear articulation of triggers
  • A visible chain of decisions and data sources

In agent-driven systems, the audit trail is the UI.
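A minimal sketch of such an audit-trail entry, under the assumption that each logged action carries its trigger and data sources; the field names here are illustrative, not a real schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentLogEntry:
    action: str        # what the agent did, in plain language
    trigger: str       # why it acted: the condition that fired
    data_sources: list # where the supporting data came from
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def render(self) -> str:
        """One human-readable line: action, trigger, and evidence."""
        return (f"{self.action} | trigger: {self.trigger} | "
                f"sources: {', '.join(self.data_sources)}")

entry = AgentLogEntry(
    action="Drafted price-test email to 3 account managers",
    trigger="conversion in SMB segment fell below 12% threshold",
    data_sources=["CRM pipeline report", "billing events"],
)
print(entry.render())
```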

3. Calibrating Trust in Uncertain Systems

Autonomous agents don’t produce the exact same result every time. They make judgments based on patterns, signals, and probabilities. If people trust them too much, they stop paying attention. If they trust them too little, they step in and block useful automation.

UX design needs to calibrate trust. This involves:

  • Signaling confidence levels when presenting recommendations
  • Differentiating between high-certainty and exploratory actions
  • Surfacing uncertainty transparently (“Based on incomplete customer data”)
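One way to operationalize these points is to map model confidence to a presentation band. A hedged sketch: the thresholds and band names below are illustrative assumptions, and a real system should calibrate them against observed accuracy.

```python
def trust_banding(confidence: float, data_complete: bool) -> dict:
    """Return how a recommendation should be presented to the user."""
    if confidence >= 0.9 and data_complete:
        band = "high-certainty"
        presentation = "present as recommended action"
    elif confidence >= 0.6:
        band = "exploratory"
        presentation = "present with preview and rationale"
    else:
        band = "low-certainty"
        presentation = "surface as a hypothesis only"
    # Surface uncertainty transparently when inputs are incomplete.
    caveat = None if data_complete else "Based on incomplete customer data"
    return {"band": band, "presentation": presentation, "caveat": caveat}

print(trust_banding(0.94, data_complete=True))
print(trust_banding(0.72, data_complete=False))
```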

4. Orchestrating Across Ecosystems

AI agents rarely live inside a single interface. They coordinate across tools—CRM, billing systems, messaging apps, analytics platforms. This introduces a systems-level design challenge: the user experience spans multiple surfaces.

For instance, an operations agent may:

  • Detect a contract renewal date in a database
  • Draft a renewal proposal
  • Send a notification in Slack
  • Update forecast projections
  • Trigger billing workflows

The user’s awareness of this chain must persist across environments. UX cannot assume a centralized dashboard as the only locus of interaction. Instead, designers must build:

  • Cross-platform identity continuity (the agent feels like one entity everywhere)
  • Consistent intervention controls regardless of entry point
  • Context-aware notifications that explain why the agent is acting

The design canvas becomes distributed. The interface is no longer a screen; it is an ecosystem of coordinated touchpoints.

Behavioral Infrastructure Design

The companies that will lead in this next wave will be the ones that approach autonomy with discipline — defining clear boundaries and building systems that can be audited and adjusted over time. When software begins to act independently, usability is no longer the only benchmark. Accuracy and speed still matter, but they are not enough.

The deeper question is behavioral and psychological: are people comfortable allowing a system to take action in their name? The future of AI products will be determined less by technical capability and more by whether autonomy feels understandable, controllable, and worthy of trust.


:::info This story was published under HackerNoon’s Business Blogging Program.

:::


Judging the Future of Mobile AI: An Interview with Ivan Mishchenko

2026-02-14 17:54:08


An Interview with Android Engineer and AI Security Researcher Ivan Mishchenko

As artificial intelligence becomes deeply embedded in mobile applications, the role of engineering judgment is changing. Today, building AI-powered mobile products is no longer only about innovation speed or model accuracy. It is increasingly about responsibility, security, and the ability to translate experimental ideas into production-ready systems.

Hackathons have become an early reflection of this shift. Once focused primarily on rapid prototyping, they now function as testing grounds for applied AI, mobile security practices, and engineering maturity. In this environment, technical jury members play a critical role in shaping how emerging products are evaluated and refined.

Ivan Mishchenko, a Senior Android Engineer, AI security researcher, and recipient of the Digital Leaders Award 2025 (Developer of the Year – Artificial Intelligence), has served as a jury member at multiple international technology events, including the NextGen Hackathon (December 2025) and Armenia Digital Awards (November 2025). His expertise lies at the intersection of Android development, artificial intelligence, and mobile application security.

We spoke with Ivan about his role as a hackathon judge, the challenges of evaluating AI-driven mobile projects, and how real-world banking experience influences his approach to innovation and security.

Ivan, what was the primary focus of the NextGen Hackathon you judged?

The NextGen Hackathon focused on building practical digital products that integrate artificial intelligence in a meaningful way. A significant portion of the submissions were mobile-first applications, which immediately raises questions about privacy, data protection, and secure AI integration.

From the jury’s perspective, the challenge was not to reward novelty alone, but to assess whether AI genuinely improved the product and whether the solution could realistically evolve beyond a demo. This included evaluating how teams handled user data, how AI models were deployed, and whether the overall system demonstrated engineering discipline.

What criteria did you personally prioritize when evaluating mobile AI projects?

My evaluation framework was built around several core dimensions:

1. System Architecture

I paid close attention to how teams structured their applications. Even in hackathon conditions, architectural decisions matter. Clear separation of concerns, modular design, and scalability indicate whether a project can survive real-world growth.

2. Security by Design

Security was a central focus. Mobile applications often handle sensitive personal or financial data, and AI components introduce additional attack surfaces. I evaluated authentication flows, data storage practices, encryption usage, and awareness of AI-specific risks such as data leakage or model misuse.

3. Quality of AI Integration

I looked at whether AI was used appropriately for the problem domain. Strong projects demonstrated a clear understanding of why AI was needed, how inference was performed (on-device or cloud-based), and how the system behaved when models failed or returned uncertain results.

4. Engineering Maturity

Clean APIs, understandable logic, and realistic trade-offs matter even in early prototypes. Teams that demonstrated disciplined engineering consistently stood out.

From hackathon evaluation to production banking systems

One factor that strongly shaped my approach as a jury member is my direct experience implementing similar AI-driven security solutions in production banking environments.

I have worked on mobile banking applications in Georgia, including projects for Space Bank, where security, reliability, and regulatory compliance are non-negotiable requirements. In these environments, architectural mistakes or insecure design decisions can lead to financial loss, regulatory violations, or reputational damage.

In my professional work, I have been involved in:

  • Designing secure Android architectures for financial applications
  • Implementing biometric authentication flows with fallback mechanisms
  • Applying AI-assisted fraud detection logic based on user behavior analysis
  • Ensuring secure data storage, encryption, and controlled access to sensitive resources

This experience directly influenced how I evaluated hackathon projects. I assessed not only whether an idea worked in a demo, but whether it could realistically transition into a regulated, high-risk environment such as banking or fintech.

How did this real-world experience affect your feedback to teams?

One of the most valuable roles of an experienced judge is translating industry-grade constraints into guidance that early-stage teams can apply immediately.

When reviewing projects, I often referenced real-world challenges encountered in banking systems:

  • regulatory expectations around data protection
  • limitations of mobile hardware and battery usage
  • threat models involving compromised devices or network interception

By grounding feedback in these realities, teams were encouraged to rethink their solutions not merely as prototypes, but as potential future products. In several cases, teams adjusted their architectures during the hackathon itself by introducing on-device inference, improving authentication flows, or separating AI decision-making from UI logic.

What common issues did you observe in AI-powered mobile projects?

A recurring issue was over-reliance on external AI services without sufficient consideration for data security. Some teams transmitted sensitive user data to third-party APIs without anonymization or clear safeguards.

Another common challenge was misunderstanding the limitations of on-device AI. While on-device models improve privacy and latency, they require careful optimization and realistic expectations. I often suggested hybrid architectures that combine local inference with secure backend validation.
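The hybrid pattern described here can be sketched as follows. This is a toy illustration under stated assumptions: the local scorer stands in for a small on-device model (e.g. a TFLite classifier), the backend check stands in for a server-side decision with full history, and the threshold and field names are invented for the example.

```python
def on_device_score(transaction: dict) -> float:
    """Stand-in for a small local model; here a toy amount-based heuristic."""
    return 0.9 if transaction["amount"] > 1000 else 0.1

def backend_validate(transaction: dict) -> bool:
    """Stand-in for a secure server-side check with fresh models and history."""
    return transaction.get("device_trusted", False)

def assess(transaction: dict) -> str:
    risk = on_device_score(transaction)  # fast, private, works offline
    if risk < 0.5:
        return "allow"                   # low risk: no data leaves the device
    # High risk: escalate to the backend, sending only what is necessary.
    return "allow" if backend_validate(transaction) else "step-up-auth"

print(assess({"amount": 50, "device_trusted": False}))
print(assess({"amount": 5000, "device_trusted": False}))
```

The design choice is that privacy and latency benefits of local inference are kept for the common case, while high-risk decisions get the stronger backend validation.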

The strongest teams treated security as a design constraint rather than an obstacle. This mindset consistently led to more robust and credible solutions.

How does your research background influence your role as a judge?

My research focuses on AI-assisted security in Android applications, including phishing detection, biometric authentication, anomaly detection, and on-device machine learning. I have published peer-reviewed articles on these topics and presented research at international conferences.

Because of this background, I naturally evaluate products through a threat-oriented lens:

  • How could this system be abused?
  • What assumptions does it make about user behavior or device integrity?
  • How would it behave under adversarial conditions?

At the same time, I aim to balance academic rigor with practical guidance. Hackathons are about creativity and momentum, and my role is to help teams align innovation with realistic engineering expectations.

Developing a security-first framework for evaluating mobile AI

Through research, production work, and jury participation, I have developed a consistent framework for evaluating AI-powered mobile applications.

This framework combines:

  • mobile security principles
  • AI risk assessment
  • architectural scalability considerations

Rather than treating AI as an isolated feature, I evaluate it as part of a broader system that includes authentication, data storage, networking, and user interaction. This system-level perspective is still relatively uncommon in early-stage development environments, where AI is often judged primarily by accuracy metrics or novelty.

My contribution lies in introducing security-first, system-level thinking into AI product evaluation for mobile platforms, where constraints and risks differ significantly from web or cloud-only systems.

International recognition and professional trust

My invitations to serve as a jury member at events such as the NextGen Hackathon and Armenia Digital Awards reflect growing international recognition of my expertise in mobile AI security.

These roles involved direct responsibility for evaluating the work of competing teams, assessing technical quality, security posture, and innovation potential. Combined with international awards such as:

  • Digital Leaders Award 2025 (Developer of the Year – Artificial Intelligence)
  • Cases & Faces Award (AI & Machine Learning, USA)

they demonstrate sustained professional recognition across multiple countries and industry contexts.


Why jury participation matters for senior engineers

Serving as a judge is both a responsibility and a form of professional contribution. It allows experienced engineers to:

  • influence emerging technical standards
  • mentor early-stage teams
  • identify recurring industry risks at an early stage

For me, jury work reinforces the idea that engineering leadership extends beyond writing code. It includes evaluation, mentorship, and shaping the broader technology ecosystem.

Advice for teams building AI-powered mobile applications

There are three principles I consistently emphasize:

**Security is a feature, not an afterthought.** Users may not immediately notice good security, but they will notice its absence.

**AI should augment, not obscure.** If a team cannot explain what a model does and why it is needed, it is likely being applied too early or incorrectly.

**Engineer for reality, not demos.** Edge cases, failures, and misuse scenarios are where professional products are defined.

Looking ahead: responsible AI as a competitive advantage

As artificial intelligence becomes a standard component of mobile applications, long-term success will depend on trust, security, and responsible engineering.

My work — across banking systems, academic research, and international hackathon juries — is driven by the same goal: ensuring that innovation does not outpace reliability and user trust. The most successful AI-powered products of the future will be built by engineers who understand both technical capability and responsibility.

Participating as a jury member is one way I contribute to shaping that future by helping emerging teams align creativity with real-world standards.




Lorentz Logic: Solving for Unknown Cluster Counts via Differentiable Graph Entropy

2026-02-14 10:37:43

Table of Links

Abstract and 1. Introduction

  2. Related Work

  3. Preliminaries and Notations

  4. Differentiable Structural Information

    4.1. A New Formulation

    4.2. Properties

    4.3. Differentiability & Deep Graph Clustering

  5. LSEnet

    5.1. Embedding Leaf Nodes

    5.2. Learning Parent Nodes

    5.3. Hyperbolic Partitioning Tree

  6. Experiments

    6.1. Graph Clustering

    6.2. Discussion on Structural Entropy

  7. Conclusion, Broader Impact, and References

Appendix A. Proofs

B. Hyperbolic Space

C. Technical Details

D. Additional Results

5.1. Embedding Leaf Nodes


where ⊙ is the Hadamard product, masking the attentional weight if the corresponding edge does not exist in the graph.
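The masking step described here can be illustrated with a small NumPy example: the Hadamard product with the adjacency matrix zeroes out attention weights between node pairs that share no edge. The matrices below are toy values, not drawn from the paper.

```python
import numpy as np

A = np.array([[1, 1, 0],   # adjacency: edge (0,1) exists, (0,2) does not
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

W = np.array([[0.5, 0.3, 0.2],   # raw attentional weights between all pairs
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

masked = A * W   # Hadamard product: weights on non-edges become 0
# Re-normalize rows so each node's retained weights sum to 1.
masked = masked / masked.sum(axis=1, keepdims=True)
print(masked[0])  # weight from node 0 to node 2 is 0: edge (0,2) is absent
```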


:::info Authors:

(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);

(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;

(3) Hao Peng, Beihang University, Beijing 100191, China;

(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;

(5) Chunyang Liu, Didi Chuxing, Beijing, China;

(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.

:::


:::info This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-Noncommercial-Sharelike 4.0 International) license.

:::


LSEnet: Mastering Automatic Data Clustering in Curved Hyperbolic Space

2026-02-14 10:29:28


5. LSEnet

We propose a novel Lorentz Structural Entropy neural Net (LSEnet), which aims to learn the optimal partitioning tree T∗ in the Lorentz model of hyperbolic space, where we further incorporate node features with structural information via a graph convolutional network. First, we explain why we opt for hyperbolic space rather than Euclidean space.

Hyperbolic space is well suited to embed the partitioning tree, and Theorem 5.1 does not hold for Euclidean space.

The overall architecture of LSEnet is sketched in Figure 2. In hyperbolic space, LSEnet first embeds the leaf nodes of the tree and then recursively learns parent nodes, self-supervised by our new clustering objective in Eq. (4).




Intelligent Data Clustering: LSEnet and Automatic Graph Clustering in Curved Space

2026-02-14 10:25:26


4.3. Differentiability & Deep Graph Clustering






Intelligent Graph Clustering: Automatically Organizing Networks

2026-02-14 10:19:12


4.2. Properties

This part shows some general properties of the new formulation and theoretically demonstrates the inherent connection between structural entropy and graph clustering. We first give an arithmetic property regarding Definition 4.3 to support the following claim on graph clustering. The proofs of the lemma/theorems are detailed in Appendix A.




