2026-02-14 17:55:35
For two decades, product design has revolved around a stable premise: users initiate, software responds. Even as AI crept into products through recommendation engines, predictive text, and fraud detection, the interface still framed intelligence as reactive.
AI agents break that contract. An agent does not wait for instruction. It monitors context, forms intentions, takes actions, and adapts its strategy over time. It delegates to APIs, coordinates across systems, and sometimes executes decisions without asking first. In other words, it behaves less like a feature and more like a junior operator.
In late 2025, McKinsey reported that more than two-thirds of surveyed organizations use AI in more than one business function, and 23% say they are already scaling an “agentic AI system” somewhere in the enterprise. Meanwhile, Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI (up from <1% in 2024), enabling 15% of day-to-day work decisions to be made autonomously.
Designers are no longer shaping tools. They are shaping artificial actors.
The core UX question shifts from *How does a user operate this system?* to *How does a human supervise, collaborate with, and constrain an autonomous one?*
Traditional UX optimizes for frictionless task execution. Agent UX optimizes for clarity in delegation. Consider a revenue operations lead at a SaaS company who exports reports and adjusts forecasts manually. When she assigns an agent a standing objective to monitor pipeline health and intervene when conversion drops below a threshold, the agent reviews CRM data, identifies weak segments, proposes pricing experiments, and drafts emails to account managers. No button is clicked to initiate each action; the system acts continuously.
The design problem here is not button placement; it is delegation architecture. That means designing authority settings as first-class objects, not buried toggles. An agent might operate under clearly tiered modes, from observe-only through propose-and-approve to fully autonomous action within a defined scope.
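A minimal sketch of what such tiers could look like as a first-class object (the tier names and the `AgentAction` type are illustrative assumptions, not a prescribed API):

```kotlin
// Hypothetical sketch of tiered agent authority as a first-class object.
// Tier names and the AgentAction type are illustrative assumptions.
enum class AuthorityTier {
    OBSERVE,        // may monitor and report, never act
    PROPOSE,        // may draft actions for human approval
    ACT_REVERSIBLE, // may execute actions that can be undone
    ACT_FULL        // may execute any action within scope
}

data class AgentAction(val description: String, val reversible: Boolean)

fun isPermitted(tier: AuthorityTier, action: AgentAction): Boolean = when (tier) {
    AuthorityTier.OBSERVE        -> false
    AuthorityTier.PROPOSE        -> false // proposals are surfaced to a human instead
    AuthorityTier.ACT_REVERSIBLE -> action.reversible
    AuthorityTier.ACT_FULL       -> true
}
```

The specific tiers matter less than the fact that the boundary is explicit and checkable before every action.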
Delegation becomes configurable, inspectable, and adjustable. If users cannot see the boundaries of an agent’s power, they will not trust it.
The most destabilizing property of AI agents is invisibility. They act in background threads, across integrations, outside the visible screen. When humans don’t understand what automation is doing, they disengage—until something goes wrong.
UX for agents must therefore prioritize observability: persistent activity feeds, inspectable action histories, and plain-language explanations of why the agent acted.
In agent-driven systems, the audit trail is the UI.
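As a sketch of what a legible audit record might carry (field names are assumptions for illustration, not a standard schema):

```kotlin
import java.time.Instant

// Hypothetical audit-trail entry: every autonomous action leaves a
// human-readable record of what was done, why, and how to undo it.
data class AuditEvent(
    val timestamp: Instant,
    val agentId: String,
    val action: String,             // what the agent did, in plain language
    val rationale: String,          // the signal or rule that triggered it
    val affectedSystems: List<String>,
    val reversible: Boolean,
    val undoHint: String?           // how a human can roll the action back
)
```

Rendering these records as a first-class activity feed, rather than a buried log, is what turns background behavior into something a human can actually supervise.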
Autonomous agents don’t produce the exact same result every time. They make judgments based on patterns, signals, and probabilities. If people trust them too much, they stop paying attention. If they trust them too little, they step in and block useful automation.
UX design needs to calibrate trust. This involves surfacing the agent’s confidence, signaling when human review is worthwhile, and making it easy to correct or override a decision.
AI agents rarely live inside a single interface. They coordinate across tools—CRM, billing systems, messaging apps, analytics platforms. This introduces a systems-level design challenge: the user experience spans multiple surfaces.
For instance, an operations agent may detect an anomaly in an analytics platform, update the relevant record in the CRM, pause a billing run, and notify account owners through a messaging app.
The user’s awareness of this chain must persist across environments. UX cannot assume a centralized dashboard as the only locus of interaction. Instead, designers must build consistent status indicators, cross-surface notifications, and a shared activity history that follows the agent wherever it acts.
The design canvas becomes distributed. The interface is no longer a screen; it is an ecosystem of coordinated touchpoints.
The companies that will lead in this next wave will be the ones that approach autonomy with discipline — defining clear boundaries and building systems that can be audited and adjusted over time. When software begins to act independently, usability is no longer the only benchmark. Accuracy and speed still matter, but they are not enough.
The deeper question is behavioral and psychological: are people comfortable allowing a system to take action in their name? The future of AI products will be determined less by technical capability and more by whether autonomy feels understandable, controllable, and worthy of trust.
:::info This story was published under HackerNoon’s Business Blogging Program.
:::
2026-02-14 17:54:08
As artificial intelligence becomes deeply embedded in mobile applications, the role of engineering judgment is changing. Today, building AI-powered mobile products is no longer only about innovation speed or model accuracy. It is increasingly about responsibility, security, and the ability to translate experimental ideas into production-ready systems.
Hackathons have become an early reflection of this shift. Once focused primarily on rapid prototyping, they now function as testing grounds for applied AI, mobile security practices, and engineering maturity. In this environment, technical jury members play a critical role in shaping how emerging products are evaluated and refined.
Ivan Mishchenko, a Senior Android Engineer, AI security researcher, and recipient of the Digital Leaders Award 2025 (Developer of the Year – Artificial Intelligence), has served as a jury member at multiple international technology events, including the NextGen Hackathon (December 2025) and Armenia Digital Awards (November 2025). His expertise lies at the intersection of Android development, artificial intelligence, and mobile application security.
We spoke with Ivan about his role as a hackathon judge, the challenges of evaluating AI-driven mobile projects, and how real-world banking experience influences his approach to innovation and security.
The NextGen Hackathon focused on building practical digital products that integrate artificial intelligence in a meaningful way. A significant portion of the submissions were mobile-first applications, which immediately raises questions about privacy, data protection, and secure AI integration.
From the jury’s perspective, the challenge was not to reward novelty alone, but to assess whether AI genuinely improved the product and whether the solution could realistically evolve beyond a demo. This included evaluating how teams handled user data, how AI models were deployed, and whether the overall system demonstrated engineering discipline.
My evaluation framework was built around several core dimensions:
1. System Architecture
I paid close attention to how teams structured their applications. Even in hackathon conditions, architectural decisions matter. Clear separation of concerns, modular design, and scalability indicate whether a project can survive real-world growth.
2. Security by Design
Security was a central focus. Mobile applications often handle sensitive personal or financial data, and AI components introduce additional attack surfaces. I evaluated authentication flows, data storage practices, encryption usage, and awareness of AI-specific risks such as data leakage or model misuse.
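To make that concrete on Android: encrypted local storage is nearly free with Jetpack Security, and its absence in a prototype handling sensitive data is a red flag. A minimal sketch using androidx.security:security-crypto (simplified; error handling omitted):

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Minimal sketch: encrypted key-value storage via Jetpack Security,
// instead of storing tokens or PII in plain SharedPreferences.
fun secureStorage(context: Context) =
    EncryptedSharedPreferences.create(
        context,
        "secure_prefs",
        MasterKey.Builder(context)
            .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
            .build(),
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
```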
3. Quality of AI Integration
I looked at whether AI was used appropriately for the problem domain. Strong projects demonstrated a clear understanding of why AI was needed, how inference was performed (on-device or cloud-based), and how the system behaved when models failed or returned uncertain results.
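A hedged sketch of that failure-handling pattern (the types and the threshold are illustrative, not a fixed recipe):

```kotlin
// Illustrative pattern: treat model output as a suggestion with a
// confidence score, and fall back to a safe default when uncertain.
data class Inference(val label: String, val confidence: Float)

sealed class Decision {
    data class Automatic(val label: String) : Decision()
    data class NeedsReview(val label: String, val confidence: Float) : Decision()
    object Unavailable : Decision()
}

fun decide(result: Inference?, threshold: Float = 0.85f): Decision = when {
    result == null                 -> Decision.Unavailable // model failed entirely
    result.confidence >= threshold -> Decision.Automatic(result.label)
    else                           -> Decision.NeedsReview(result.label, result.confidence)
}
```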
4. Engineering Maturity
Clean APIs, understandable logic, and realistic trade-offs matter even in early prototypes. Teams that demonstrated disciplined engineering consistently stood out.
One factor that strongly shaped my approach as a jury member is my direct experience implementing AI-driven security solutions in production banking environments.
I have worked on mobile banking applications in Georgia, including projects for Space Bank, where security, reliability, and regulatory compliance are non-negotiable requirements. In these environments, architectural mistakes or insecure design decisions can lead to financial loss, regulatory violations, or reputational damage.
In my professional work, I have been involved in secure authentication and biometric flows, AI-assisted phishing and fraud detection, and bringing on-device machine learning into production mobile applications.
This experience directly influenced how I evaluated hackathon projects. I assessed not only whether an idea worked in a demo, but whether it could realistically transition into a regulated, high-risk environment such as banking or fintech.
One of the most valuable roles of an experienced judge is translating industry-grade constraints into guidance that early-stage teams can apply immediately.
When reviewing projects, I often referenced real-world challenges encountered in banking systems: strict regulatory compliance, the risk of sending sensitive data to external services, and the need for predictable behavior when a model fails or returns uncertain results.
By grounding feedback in these realities, teams were encouraged to rethink their solutions not merely as prototypes, but as potential future products. In several cases, teams adjusted their architectures during the hackathon itself by introducing on-device inference, improving authentication flows, or separating AI decision-making from UI logic.
A recurring issue was over-reliance on external AI services without sufficient consideration for data security. Some teams transmitted sensitive user data to third-party APIs without anonymization or clear safeguards.
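A minimal illustration of the missing safeguard (the regex patterns are simplified stand-ins; real PII redaction needs a proper detection pipeline, not two patterns):

```kotlin
// Simplified sketch: strip obvious identifiers before text leaves the device.
// This only illustrates the principle of redacting before calling a
// third-party API; production anonymization is considerably stricter.
private val EMAIL = Regex("""[\w.+-]+@[\w-]+\.[\w.]+""")
private val PHONE = Regex("""\+?\d[\d\s()-]{7,}\d""")

fun redact(text: String): String =
    text.replace(EMAIL, "[email]").replace(PHONE, "[phone]")
```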
Another common challenge was misunderstanding the limitations of on-device AI. While on-device models improve privacy and latency, they require careful optimization and realistic expectations. I often suggested hybrid architectures that combine local inference with secure backend validation.
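In code, that hybrid shape often reduces to something like the following sketch, where `OnDeviceModel` and `ValidationApi` are hypothetical stand-ins:

```kotlin
// Hypothetical hybrid flow: private, low-latency inference on-device,
// with a backend confirming the result before any high-impact action.
data class LocalResult(val label: String, val confidence: Float)

interface OnDeviceModel { suspend fun classify(input: String): LocalResult }
interface ValidationApi { suspend fun confirm(label: String): Boolean }

suspend fun classifySafely(
    model: OnDeviceModel,
    api: ValidationApi,
    input: String,
    threshold: Float = 0.85f
): String? {
    val local = model.classify(input)             // raw input never leaves the device
    if (local.confidence < threshold) return null // uncertain: defer to a human or default
    // Only the confident, low-sensitivity result crosses the network.
    return if (api.confirm(local.label)) local.label else null
}
```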
The strongest teams treated security as a design constraint rather than an obstacle. This mindset consistently led to more robust and credible solutions.
My research focuses on AI-assisted security in Android applications, including phishing detection, biometric authentication, anomaly detection, and on-device machine learning. I have published peer-reviewed articles on these topics and presented research at international conferences.
Because of this background, I naturally evaluate products through a threat-oriented lens: what data could leak, how a model could be manipulated or misused, and where the trust boundaries of the system actually lie.
At the same time, I aim to balance academic rigor with practical guidance. Hackathons are about creativity and momentum, and my role is to help teams align innovation with realistic engineering expectations.
Through research, production work, and jury participation, I have developed a consistent framework for evaluating AI-powered mobile applications.
This framework combines architectural review, security-by-design analysis, and an assessment of how appropriately AI is integrated into the product.
Rather than treating AI as an isolated feature, I evaluate it as part of a broader system that includes authentication, data storage, networking, and user interaction. This system-level perspective is still relatively uncommon in early-stage development environments, where AI is often judged primarily by accuracy metrics or novelty.
My contribution lies in introducing security-first, system-level thinking into AI product evaluation for mobile platforms, where constraints and risks differ significantly from web or cloud-only systems.
My invitations to serve as a jury member at events such as the NextGen Hackathon and Armenia Digital Awards reflect growing international recognition of my expertise in mobile AI security.
These roles involved direct responsibility for evaluating the work of competing teams, assessing technical quality, security posture, and innovation potential. Combined with international awards such as the Digital Leaders Award 2025 (Developer of the Year – Artificial Intelligence), they demonstrate sustained professional recognition across multiple countries and industry contexts.

Serving as a judge is both a responsibility and a form of professional contribution. It allows experienced engineers to evaluate emerging work rigorously, mentor early-stage teams, and pass production-grade standards on to the wider ecosystem.
For me, jury work reinforces the idea that engineering leadership extends beyond writing code. It includes evaluation, mentorship, and shaping the broader technology ecosystem.
There are three principles I consistently emphasize:
**Security is a feature, not an afterthought.** Users may not immediately notice good security, but they will notice its absence.
**AI should augment, not obscure.** If a team cannot explain what a model does and why it is needed, it is likely being applied too early or incorrectly.
**Engineer for reality, not demos.** Edge cases, failures, and misuse scenarios are where professional products are defined.
As artificial intelligence becomes a standard component of mobile applications, long-term success will depend on trust, security, and responsible engineering.
My work — across banking systems, academic research, and international hackathon juries — is driven by the same goal: ensuring that innovation does not outpace reliability and user trust. The most successful AI-powered products of the future will be built by engineers who understand both technical capability and responsibility.
Participating as a jury member is one way I contribute to shaping that future by helping emerging teams align creativity with real-world standards.
:::info This story was published under HackerNoon’s Business Blogging Program.
:::
2026-02-14 10:37:43
Experiments

where ⊙ is the Hadamard product, masking the attention weight when the corresponding edge does not exist in the graph.
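The equation this clause modifies is not preserved in this excerpt; a standard form of adjacency-masked attention consistent with the description (notation assumed, not necessarily the authors' exact formula) is

```latex
\hat{\boldsymbol{\alpha}} = \boldsymbol{\alpha} \odot \mathbf{A},
\qquad
A_{ij} =
\begin{cases}
1, & (v_i, v_j) \in \mathcal{E}, \\
0, & \text{otherwise},
\end{cases}
```

where **α** collects the unmasked attention weights and **A** is the adjacency matrix of the graph.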
:::info Authors:
(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);
(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;
(3) Hao Peng, Beihang University, Beijing 100191, China;
(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;
(5) Chunyang Liu, Didi Chuxing, Beijing, China;
(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.
:::
:::info This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-Noncommercial-Sharelike 4.0 International) license.
:::
2026-02-14 10:29:28
Experiments
We propose a novel Lorentz Structural Entropy neural Net (LSEnet), which aims to learn the optimal partitioning tree T* in the Lorentz model of hyperbolic space, where we further incorporate node features with structural information via a graph convolutional network. First, we show why we opt for hyperbolic space rather than Euclidean space.
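For reference, the Lorentz model (standard definition, not quoted from the paper) realizes n-dimensional hyperbolic space of curvature −1 as

```latex
\mathbb{L}^{n} = \left\{ \mathbf{x} \in \mathbb{R}^{n+1} \;:\; \langle \mathbf{x}, \mathbf{x} \rangle_{\mathcal{L}} = -1,\ x_0 > 0 \right\},
\qquad
\langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}} = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i .
```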

Hyperbolic space is well suited to embed the partitioning tree, and Theorem 5.1 does not hold for Euclidean space.
The overall architecture of LSEnet is sketched in Figure 2. In hyperbolic space, LSEnet first embeds the leaf nodes of the tree and then recursively learns the parent nodes, self-supervised by our new clustering objective in Eq. (4).
:::info Authors:
(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);
(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;
(3) Hao Peng, Beihang University, Beijing 100191, China;
(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;
(5) Chunyang Liu, Didi Chuxing, Beijing, China;
(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.
:::
:::info This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-Noncommercial-Sharelike 4.0 International) license.
:::

2026-02-14 10:25:26
Experiments

:::info Authors:
(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);
(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;
(3) Hao Peng, Beihang University, Beijing 100191, China;
(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;
(5) Chunyang Liu, Didi Chuxing, Beijing, China;
(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.
:::
:::info This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-Noncommercial-Sharelike 4.0 International) license.
:::
2026-02-14 10:19:12
Experiments
This part establishes some general properties of the new formulation and theoretically demonstrates the inherent connection between structural entropy and graph clustering. We first give an arithmetic property of Definition 4.3 to support the subsequent claim on graph clustering. The proofs of the lemma and theorems are detailed in Appendix A.
:::info Authors:
(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);
(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;
(3) Hao Peng, Beihang University, Beijing 100191, China;
(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;
(5) Chunyang Liu, Didi Chuxing, Beijing, China;
(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.
:::
:::info This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-Noncommercial-Sharelike 4.0 International) license.
:::