
RSS preview of the HackerNoon blog

5 Risks You Have To Take as a Leader

2026-01-22 06:28:17

The biggest risk a leader can take is playing it safe and taking no risks at all: going with popular decisions instead of pushing for unusual prospects; faking confidence and projecting an image of perfection instead of showing up authentically, admitting limitations, and acknowledging what they don’t know; saying yes all the time to please people and build likability instead of saying no to focus on high-impact work, even if it displeases someone in the short term; staying silent to maintain peace and harmony instead of speaking up and voicing concerns; maintaining the status quo for fear of failure instead of pushing for continuous reinvention; and keeping tight control over the team instead of empowering people and letting go.

Leaders need a high appetite for taking risks, not just in choosing unconventional paths, taking bold bets, or setting aggressive business targets, but also in the way they lead their teams—what they choose to hide and what they choose to share, how they balance freedom and control, what image they project and the message it sends to their teams, and how they handle difficult situations that are messy and hard. It’s often a tricky balance, one that requires taking risks without going overboard and stepping into the unproductive zone.

For example:

Giving boundaryless freedom can lead to very bad decisions.

Sharing information that doesn’t concern people in the team can confuse and distract them.

Displaying extreme emotions in the name of authenticity can dilute the impact of the message being conveyed.

Speaking truth without a sign of compassion can seem cruel and inhuman.

Every situation at work has some risk involved—risk of failure, risk to reputation, risk of judgment, risk of criticism, risk of disappointment, risk of misunderstanding. These risks can often prevent leaders from engaging in behaviors that are uncomfortable at first. When perceived risk hijacks the amygdala, it exaggerates negative outcomes and sidelines logical reasoning, making leaders hyper-focused on avoiding threats rather than exploring opportunities.

But leaders who don’t take these risks limit their team’s growth and potential. People in the organization take their cues from their leaders and model their behaviors—leaders who don’t embrace risks indirectly tell their teams to play it safe too.

Leadership is scarce because few people are willing to go through the discomfort required to lead. This scarcity makes leadership valuable.…It’s uncomfortable to stand up in front of strangers. It’s uncomfortable to propose an idea that might fail. It’s uncomfortable to challenge the status quo. It’s uncomfortable to resist the urge to settle. When you identify the discomfort, you’ve found the place where a leader is needed. If you’re not uncomfortable in your work as a leader, it’s almost certain you’re not reaching your potential as a leader.

— Seth Godin

Here are the 5 risks every leader must take daily because it’s impossible to get better at anything without consistent practice:

Making Unpopular Decisions

It’s safe to go with the majority and nod in agreement to a popular decision. You don’t have to voice your concern, share your opinion, or express a disagreement because doing these things often comes with a risk.

What if it goes wrong?

What if others don’t like it?

What if they turn against you?

But prioritizing consensus, popularity, approval, and likability keeps the possibility of a better decision out of reach. You may not share your opinion when it doesn’t align with the majority because it requires standing up with courage and conviction. You may not speak up when you disagree because you worry about how it will be perceived. You may agree to a decision that you know won’t work because telling others they’re wrong is often scary.

Challenging the status quo, voicing your concerns, and sharing disruptive ideas is risky—but it’s the risk you’ve got to take as a leader. It may subject you to frowns from people who think your ideas are crazy. You may face resistance at first. Some might even disapprove of it. Others might resent you for your ability to think creatively and provide a fresh perspective.

The true mark of a leader is the willingness to stick with a bold course of action — an unconventional business strategy, a unique product-development roadmap, a controversial marketing campaign — even as the rest of the world wonders why you're not marching in step with the status quo. In other words, real leaders are happy to zig while others zag. They understand that in an era of hyper-competition and non-stop disruption, the only way to stand out from the crowd is to stand for something special.

— Bill Taylor

To build risk-taking capacity for speaking up without falling for groupthink, ask these questions:

  1. Am I saying yes to this decision because I really believe in it or because it aligns with the majority?
  2. Are all ideas simply small variations of one another, tried-and-tested approaches, or things that have less risk involved? What would be a completely unique approach that we haven’t explored yet?
  3. Why are other options less exciting compared to the current choice?
  4. What’s the worst that could happen if this decision does not work out as expected? What’s my plan B?
  5. How can I get buy-in without intimidating others or pushing them away?

Avoiding new opportunities for fear of failure, dismissing ideas because they seem too risky, or defaulting to tried-and-tested methods over bold initiatives caps your team’s potential. Have the courage and conviction to stand alone. Take the risk of navigating uncharted territory.

Showing Vulnerability

You may put on a facade of strength by hiding your vulnerabilities to protect yourself from being exposed. Bringing your authentic self to work by admitting gaps in your knowledge, sharing your mistakes, or expressing your true emotions and feelings often comes with a risk.

What if people doubt your competence?

What if they don’t respect you?

What if it makes you look weak?

But projecting an image of confidence, faking knowledge when you don’t know something, and hiding your true emotions and feelings prevent you from building a bond with people at work. Leaders aren’t expected to be perfect—they’re required to be human. What builds respect isn’t your successes but how gracefully you handle failures. What develops a sense of connection isn’t a flawless image, but the flaws you were willing to share. What enables safety isn’t fancy messages or words of encouragement, but how you model safety through your own behaviors and actions.

Vulnerability is not weakness—admitting mistakes, not having all the answers, or saying “I don’t know” does not hurt your credibility as a leader. In fact, it increases approachability, builds likability, and earns respect. Pretending to know something or coming across as a “know-it-all” frustrates others—they can see when you genuinely have the knowledge and experience and when you’re just faking it. But remember this: authenticity can’t be an excuse for burdening others by oversharing or for justifying reckless behavior. You have to seek a balance by defining clear boundaries for yourself and others.

Fear of being shamed causes people to put on masks and live in fear and pretense, creating a stronghold of pride. Authentic, transparent leaders encourage people to develop trust through their own honesty and vulnerability. They do not view transparency as weakness, but recognize it as a source of their virtue, power and anointing because power flows through humility.

― Laura Gagnon

To build risk-taking capacity while showing up authentically without going overboard, ask these questions:

  1. What information do I need to share with others? Is it important for them to know? How will it be helpful without overwhelming them?
  2. How can I combine my struggles with the solutions I implemented so that it encourages others to stay resilient and not develop a complaining attitude?
  3. How can I express my lack of confidence in something without coming across as unsure or indecisive?
  4. How can I share what I’m feeling without unsettling others or making them feel responsible for fixing it?

Leaders aren’t deeply admired for their intelligence, knowledge, experience, or skills, but for the way they make others feel—human. Don’t hide your mistakes. Don’t cover your flaws. Show up authentically.

Speaking Hard Truths

Difficult conversations are, by nature, tricky. They touch on topics that no one likes to talk about. They involve addressing differences of opinion, emotional issues, sensitive subjects, or other potential sources of conflict—anything you find hard to talk about. They are challenging because they require you to navigate discomfort, uncertainty, and a wide range of complex emotions.

You may ignore difficult situations at work or put them off for too long—an employee not performing, a high performer displaying toxic behavior, or stakeholders making unreasonable demands. These situations are sensitive and often need to be handled with care. Staying silent and doing nothing seems like the safer option, since speaking up and failing to get the alignment you need can feel even riskier. It’s much easier to avoid emotionally draining and mentally exhausting situations than to step into them consciously.

What if they don’t agree with you?

What if they quit?

What if they go behind your back to seek approvals?

But putting off difficult conversations is a bad idea because issues left unaddressed escalate over time. What was once a manageable problem can grow into a much larger issue if not addressed in time. Constant worry about unresolved issues can take a toll on your mental health and lead to increased stress, anxiety, and even feelings of helplessness. Ignoring important issues or sweeping them under the rug erodes trust, builds resentment, and damages relationships.

No matter how hard a conversation is, you can’t put it off or delay it forever. Addressing issues directly, providing clarity, and seeking closure can help you gain trust and respect, and also alleviate stress.

Beginning a conversation is an act of bravery. When you initiate a conversation, you fearlessly step into the unknown. Will the other person respond favorably or unfavorably? Will it be a friendly or hostile exchange? There is a feeling of being on the edge. That nanosecond of space and unknowing can be intimidating. It shows your vulnerability.

— Sakyong Mipham

To build risk-taking capacity for speaking hard truths, ask these questions:

  1. How am I dealing with this difficult situation—am I facing the situation head-on or seeing the problem, closing my eyes, and getting busy with something else?
  2. What’s the impact of not addressing this issue at the right time?
  3. What’s the worst that can happen if I speak the truth?
  4. How can I communicate in a manner that does not cause the other person to react badly or turn defensive?

Difficult conversations, though necessary, are hard to crack. Fear of a bad outcome or not knowing what to say can prevent you from speaking hard truths. Stop playing silly games. Engage in healthy dialogue right when you need it the most.

Letting Go of Control

You may be involved in every small decision, every minute detail, and every communication that happens in your team. Staying on top of everything makes it less likely for things to go wrong—the risk feels minimized when you’re in control. Letting go requires you to relinquish that control, which can leave you with feelings of anxiety, insecurity, and helplessness.

What if they make a big mistake?

What if they misuse it?

What if they overshadow you?

But not empowering your team to make their own decisions or demanding that they consult you on every problem prevents them from developing the skills required to grow in their role—if you keep doing all the thinking for your team, they’ll never develop creative thinking skills. If you keep solving their problems, they’ll never learn to navigate complexity. If you keep preventing mistakes, they’ll become more reckless and inattentive.

Empowerment is risky, but it’s the only way to develop future leaders. Unless people in the team get the freedom and opportunity to own their decisions, make mistakes, and try different strategies to achieve their goals, they’ll always be dependent on someone else, which will not only slow them down but also prevent them from developing the skills required to grow in their role. Both freedom and control are necessary—but you have to seek the right balance. Without taking that risk, you’ll be left with a team that can’t keep up as business scales and expectations expand.

Micromanagement happens when you keep power to yourself. Empowerment is when you give power to your team.

― Nick Chellsen

To build risk-taking capacity for letting go by enabling your team to do great things independently, ask these questions:

  1. Is my team clear on the goals and the outcomes they are expected to achieve? What information might be missing that can prevent them from succeeding?
  2. Do people in the team have the skills and knowledge required to make their own decisions? What gaps exist? How can these gaps be filled without my continuous intervention?
  3. Have I set clear decision boundaries with my team on the kind of decisions they can make independently and the ones where I need to be involved?
  4. Do I hold my team accountable to meet their deadlines while not compromising on quality?
  5. Do I encourage my team to learn from their mistakes, put a new plan into place, and keep moving forward instead of berating them and filling them with feelings of incompetence and self-doubt?

Keeping tight control over your team for fear of failure prevents you from scaling and building a high-performing team. It’s a recipe for short-term wins, not long-term success. Coach your team; don’t spoon-feed them. Let them spread their wings.

Saying No

You may say “yes” to every request, every opportunity, and every change you’re asked to consider. Being agreeable puts less burden on you to prioritize and also reduces chances of conflict—saying no can be risky because you don’t know how others will respond or how your decisions will turn out.

What if they take it personally?

What if it upsets them?

What if you let go of a great opportunity?

But committing to more than you can handle or saying “yes” to inconsequential activities will ultimately hurt your reputation as you fail to meet commitments or create the desired impact. Saying “yes” brings short-term comfort—you don’t have to worry about how others will respond or about making the wrong decision. But the consequences you didn’t consider turn into regret when you finally have to face them.

Your responsibility as a leader isn’t to please everyone or make them happy; it’s to multiply your impact and the value you add by risking saying no. A no that lands right doesn’t need lengthy explanations—they come across as justifications and often distract and confuse the other person. Instead, be precise. State your reason by being straightforward, clear, and concise—three elements of good communication.

The great art is to learn to integrate the two, to marry yes and no. That is the secret to standing up for yourself and what you need, without destroying valuable agreements and precious relationships. That is what a ‘Positive No’ seeks to achieve.

― William Ury

Instead of a knee-jerk yes or no, build risk-taking capacity for saying no by asking these questions:

  1. What’s this request about—what exactly is it asking me to do?
  2. What excites me about this opportunity?
  3. What’s the cost of taking it on—in terms of effort, time required, and the impact on the team’s existing priorities? What’s the scale and scope of the request? What kind of time commitment does it demand?
  4. What’s the cost of not doing it? How important is it to the person and the organization?
  5. How does it align with my team’s current plans and commitments?
  6. What could be my reason for saying no?

No is risky, but so is yes. Every “yes” you say has an opportunity cost—doing something will always come at the cost of not doing something else. Give yourself permission to say no. Protect your team’s time and energy.

Leadership is all about making the jump, taking risks, and learning from your mistakes. It's about falling, dusting ourselves off, and getting back up again and again and again.

― Sebastien Richard

Summary

  1. Leaders need to build a very strong appetite for taking risks, not just in business decisions (defining strategies, setting targets, and taking bets), but also in how they lead their teams.
  2. Standing up and suggesting an unpopular choice is often risky—it may not work, others may not like it, or you may face a lot of resistance. But not challenging the status quo and staying with safe options limits your impact. Don’t take the easy road—fight for choices that are hard at first, but rewarding in the end.
  3. Expressing gaps in your knowledge or sharing your fears can be risky—what if others doubt your competence or your authenticity is mistaken for weakness? But faking knowledge or pretending to be someone you’re not prevents you from bonding and building trust. Showing up authentically as a leader builds connection—seeing the real you makes you more trustworthy and appealing as a human. Vulnerability is not weakness—balance it by defining boundaries without overwhelming others with too much information or excessive emotions.
  4. Facing difficult situations head-on and resolving conflict evokes strong feelings of fear: pointing out performance gaps, addressing toxic behavior, or confronting unreasonable stakeholders is often risky—others may turn defensive or resent you for speaking the truth. But not addressing these issues at the right time makes the problem worse. Care personally and challenge directly—be candid and compassionate to make yourself heard.
  5. Being too involved with your team gives you a sense of control and makes it feel less risky, as it gives you the opportunity to make decisions, solve problems, and avoid mistakes. But doing all the thinking for your team keeps them dependent and prevents them from learning and growing. Let go of control. Empower your team—optimize for long-term growth, not short-term wins.
  6. Saying yes to every request and every change appears less risky, as you don’t have to worry about upsetting someone or letting go of a great opportunity. But not prioritizing work makes you overcommit—you overpromise and underdeliver, which hurts your credibility. Learn to say no without feelings of shame or guilt. Don’t just make commitments, keep them, too.

This story was previously published here. Follow me on LinkedIn or here for more stories.

Does MariaDB Depend on MySQL?

2026-01-22 05:32:43

When MariaDB was first announced in 2009 by Michael “Monty” Widenius, it was positioned as a “fork of MySQL.” I think that was a Bad Idea™. Okay, maybe it wasn’t a bad idea as such. After all, MariaDB indeed is a fork of MySQL. But what is a fork in the software sense, and how is this reflected in MariaDB? A fork is a software project that takes the source code of another project and continues development independently from the original.

Forks often start by maintaining compatibility with their parent project, but they can evolve to become fully detached, with their own features, architecture, bug tracker, mailing list, development philosophy, and community. This is the case with MariaDB, with the addition that it continues to be highly compatible with old MySQL versions and with its current ecosystem at large.

Before we dig into it, let me clarify that I like MySQL. It was the very first database that I installed during my university days, and I have used it in hobby as well as production projects for a long time. So, why did I affirm that positioning MariaDB as a fork of MySQL was a bad idea? In short, because MariaDB doesn’t depend on MySQL. The idea of defining MariaDB merely as a fork of MySQL leads to misconceptions about its future. Take, as an example, this old comment on Hacker News, which refers to the phrase “RIP Open Source MySQL”:

“Forgive my ignorance, but doesn’t this harm MySQL forks as well? Since the test cases are unavailable from now on, say for example they wanted to reimplement a certain feature, isn’t it much harder for them to validate that their implementation works correctly?”

I sympathize with the author of this comment. We were unintentionally misled by the “fork of MySQL” slogan. I encounter this kind of lack of clarity more often than I would like. But the reality is that the development of MariaDB has been independent for many years already. MariaDB developers don’t wait for MySQL to implement features, write test cases, fix bugs, or innovate. They write their own tests, create their own features, and solve problems in their own way.

When Oracle changes something in MySQL or restricts access to a component, that has no meaningful impact on MariaDB’s roadmap, because the projects have diverged so significantly that they’re essentially different database systems that happen to share some common ancestry, remain highly compatible (you can use MySQL connectors and tools with MariaDB), and be named after Monty’s children.
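As a minimal illustration of that connector-level compatibility, here is a sketch that points the standard Python MySQL connector at a MariaDB server. The host, credentials, and database name are placeholders; PyMySQL or the mysql CLI would work much the same way, although exact behavior can vary with driver versions and authentication plugins.

```python
# Minimal sketch: a MySQL connector talking to a MariaDB server unchanged.
# Host, credentials, and database names are hypothetical placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="mariadb.example.com",
    user="app_user",
    password="app_password",
    database="app_db",
)

cur = conn.cursor()
cur.execute("SELECT VERSION()")   # MariaDB reports something like '11.x.y-MariaDB'
print(cur.fetchone()[0])

cur.close()
conn.close()
```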

So, how come projects like Ubuntu “depend” on upstream projects (e.g., Debian) and others like MariaDB don’t? In his paper Why Open Source Software/Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers!, David A. Wheeler (Director of Open Source Supply Chain Security at the Linux Foundation) identifies four potential outcomes for software fork attempts:

  1. The death of the fork: The most common outcome, since keeping a software project alive requires considerable effort.
  2. A re-merging of the fork with the original: Both software projects rejoin each other.
  3. The death of the original: Users and developers move to the new, younger project.
  4. Successful branching: Both find success with different developers and end users.

For years, the MySQL-MariaDB situation was clearly a successful branching where both projects found new homes: one in Oracle, the other in the new MariaDB Foundation/MariaDB plc duo. Contrary to what many would have thought, Oracle invested in MySQL and continued its development in the open despite having its own closed-source relational database.

For a period of time, MariaDB kept merging MySQL code commit by commit. However, this changed in 2014 when Oracle stopped publishing MySQL’s source code on Launchpad. Even though MariaDB still merges changes from InnoDB, this marked a clear point of divergence in codebases.

Recent (and not so recent) findings and events show that Oracle has slowed down, at best on the innovation front and at worst on the maintenance side as well. In his article Stop using MySQL in 2026, it is not true open source, Otto Kekäläinen (former Software Development Manager at AWS) shows that “the number of git commits on github.com/mysql/mysql-server has been significantly declining in 2025.”

He also highlights the steep decrease in MySQL’s popularity according to DB-Engines, as well as the reported “degraded performance with newer MySQL versions.” Are we witnessing a “death of the original” here? I don’t know.

In light of all this, many developers are starting to evaluate migration strategies to other relational databases, with MariaDB and TiDB being two of the most attractive options. According to Otto Kekäläinen, “TiDB only really shines with larger distributed setups, so for the vast majority of regular small and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB.”

What about the elephant in the room, you might ask? PostgreSQL is a database with tons of forks and third-party extensions that you can download, which makes it popular not only for its features but also for the sheer number of companies marketing their PostgreSQL flavor online. For applications currently using MySQL, however, migrating to PostgreSQL requires a lot of work, including SQL code and connector migrations—two tasks that can be close to zero-effort with MariaDB. Check, for example, this crazy live broadcast where Cantamen (Germany’s leading car-sharing service provider) migrates from MySQL to MariaDB with the help of Monty himself.

Let’s get back to my highly opinionated introductory statement… MariaDB is, as we have now learned, a detached fork of MySQL, and, to be fair, it has also been positioned as a “MySQL replacement,” which is accurate. I’m glad to see the “replacement” slogan more and more often, as opposed to the “fork” one. I personally suggested to Kaj Arnö (Executive Chairman at the MariaDB Foundation) that they go with something even stronger, like “MariaDB fixes MySQL.” That’s a bit too strong, perhaps. I’m glad they softened it to “MariaDB is the Future of MySQL”.

AI Hype vs Reality in Cybersecurity Explained

2026-01-22 05:11:46

Artificial intelligence (AI) has quickly become a hot topic in modern cybersecurity and is often talked about as the cure-all for an increasingly hostile threat landscape. From automated threat detection to self-healing systems, AI is frequently touted as the technology that will finally tip the balance in defenders’ favor.

Yet, behind the bold claims and vendor marketing, the day-to-day reality of how AI is really used in security operations is far more nuanced. As cyber threats continue to grow, separating what AI can deliver realistically today from what remains aspirational has become essential.

The Hype: AI as the Ultimate Cybersecurity Solution

Much of the conversation around AI in cybersecurity has been shaped by bold promises and rapid adoption, often blurring the line between what the technology can do and what it is expected to do. Before examining AI’s role in security operations, it’s worth unpacking how hype, perception, and pressure have influenced its reputation.

The “Silver Bullet” Myth

In marketing materials and conference keynotes, AI is often promoted as a flawless, all-seeing defense mechanism — one capable of identifying every threat, stopping every attack, and doing so with minimal human intervention.

This framing is particularly appealing as security teams must contend with rising alert volumes and increasingly automated attack techniques. However, real-world research reveals a gap between expectation and execution. In the 2025 Exabeam report, 71% of executives said AI had improved productivity, but only 22% of frontline security analysts agreed—a sharp disconnect between leadership perception and operational reality.

In practice, AI tools perform best when automating narrow, well-defined tasks rather than serving as a comprehensive or autonomous security solution.

The Influence of Generative AI

The rapid rise of generative AI has further intensified these inflated expectations. Tools like ChatGPT have demonstrated how convincingly AI can generate responses, analyze information, and adapt to user input, leading many to assume similar capabilities can be seamlessly applied across cybersecurity.

The technology is undoubtedly influential, but research helps clarify where those assumptions break down. Studies examining the use of generative AI in security operations show that while these models can streamline tasks such as alert summarization and phishing analysis, they still struggle with contextual decision-making.

This is especially true for decisions that require understanding attacker intent and organizational risk tolerance. As a result, generative AI is most effective when supporting analysts rather than replacing human judgment.

The C-Suite Squeeze

Beyond the tech marketing and media narratives, executive pressure has become a powerful driver of AI adoption in cybersecurity. Boards and C-suite leaders increasingly expect security teams to be using AI, even when expectations are loosely defined or misaligned with operational readiness.

For CISOs, this often creates a top-down mandate driven by fear of falling behind competitors or missing out on perceived innovation. In many organizations, AI becomes a strategic checkbox rather than a capability deployed with clear goals and constraints. As a result, some teams find themselves implementing AI tools before they have the data quality, governance structures, or internal expertise to support them effectively.

The Current Reality of AI in Cybersecurity

While the hype often frames AI as transformational, its real-world role in cybersecurity is far more practical and constrained. Today’s AI deployments focus less on replacing analysts and more on improving speed, scale, and consistency across specific security tasks.

Current Capabilities

In practice, AI is most effective when applied to well-scoped, data-intensive problems. Security teams commonly use machine learning models to enhance threat detection, identify anomalous behavior across large datasets, and automate repetitive workflows such as alert triage and log correlation.
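As a toy illustration of what such a narrow, data-intensive task can look like in practice, the sketch below fits an unsupervised anomaly detector over simple per-host log features. The feature set, numbers, and threshold are invented for illustration and are not drawn from any particular product or study.

```python
# Toy sketch: unsupervised anomaly scoring over per-host log features.
# Features, data, and thresholds are invented purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: [login_failures, megabytes_out, distinct_ports] per host per hour.
normal = rng.normal(loc=[2.0, 50.0, 5.0], scale=[1.0, 10.0, 2.0], size=(1000, 3))
suspicious = np.array([[40.0, 900.0, 60.0]])   # e.g. brute force plus exfiltration
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(events)        # lower score = more anomalous

# Surface the most anomalous events for analyst triage rather than auto-blocking.
flagged = np.argsort(scores)[:5]
print("events to triage:", flagged)
```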

To understand how widely these capabilities are being applied, researchers have examined the current body of work on AI in cybersecurity. A systematic review of AI in cybersecurity found that out of 2,395 studies surveyed, 236 were identified as primary works focused on use cases.

This number demonstrates the growing body of documented research where AI is actively deployed across functions like detection, response, and protection rather than only in theory. The analysis suggests that AI’s role in cybersecurity has moved beyond isolated experimentation and into task-specific operational use.

Real-world case studies also reinforce this role. Analysis of AI-driven detection techniques shows that machine learning-based systems can adapt to evolving threat behaviors and support faster investigation workflows, provided the underlying data is robust and relevant. These outcomes point to AI’s strength as an efficiency multiplier rather than a stand-alone defensive system.

The Limitations

Despite these gains, AI in cybersecurity remains constrained by several structural limitations. Effective models need large volumes of high-quality training data, but this is something many organizations struggle to maintain. Incomplete datasets, noisy logs, or biased inputs can lead to inaccurate detections or missed threats, undermining trust in automated systems.

More critically, machine learning models can themselves be vulnerable to manipulation. Research in adversarial machine learning shows that carefully crafted inputs can cause models to misclassify or evade detection, creating opportunities for attackers to defeat defenses built around AI logic.
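To make that point concrete, here is a minimal fast-gradient-sign (FGSM) sketch in PyTorch, showing how a small perturbation in the direction of the loss gradient can flip a classifier’s decision. The tiny model, input, and epsilon are placeholders; this is a textbook illustration, not a claim about any specific security product.

```python
# Minimal FGSM sketch: craft a small perturbation that pushes a model toward
# misclassification. Model, feature vector, and epsilon are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in detector
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # feature vector for one event
y = torch.tensor([1])                        # its true label, e.g. "malicious"

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()          # FGSM step along the loss gradient

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```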

These findings show why human oversight remains essential. AI may accelerate analysis, but it can’t independently reason about threat intent, business impact, or novel attack strategies. As a result, most organizations continue to deploy AI as part of a layered defense strategy rather than as a primary decision-maker.

Where Management and Strategy Make a Difference

Even the most advanced AI systems remain tools. In cybersecurity, their effectiveness depends more on how security teams deploy them than on algorithmic sophistication. AI can surface anomalies, correlate signals, and accelerate analysis.

What it can’t do is independently prioritize risk, weigh business impact, or adapt strategy in response to changing organizational goals. Without clearly defined escalation paths and informed human judgment, AI becomes another source of alerts.

This is where people and processes play a decisive role. Research across industries has shown that management work accounts for over 20% of productivity variation, and cybersecurity is no exception. Teams with strong leadership and well-defined response strategies are far better positioned to integrate AI into daily operations in a way that amplifies analyst expertise rather than replacing it.

Conversely, poorly managed teams often struggle to extract value even from sophisticated AI platforms, finding that automation without strategy can exacerbate confusion instead of reducing it. In short, successful AI adoption in cybersecurity hinges on the human systems that guide its use.

A Glimpse Into the Next Generation of AI in Cybersecurity

Looking ahead, much of the innovation in AI-driven cybersecurity is focused on making defenses more adaptive. One area gaining traction is the use of AI-powered deception technologies, which aim to shift security from passive detection to active engagement.

For instance, AI-driven honeypots are increasingly designed to adapt their behavior dynamically in real time, learning from attacker interactions and automatically modifying decoys to better mirror real production environments. This approach allows defenders to gather higher-quality intelligence on attacker techniques while increasing the cost and complexity of successful intrusions.

Still, these emerging capabilities point toward evolution, not replacement. While AI-enhanced honeypots and autonomous response systems may improve visibility and slow attackers, they also introduce new operational challenges like model governance and the risk of false confidence.

The most likely future state is not fully autonomous security, but increasingly intelligent tools that augment human teams. As AI systems become more capable of interaction and adaptation, their success will continue to depend on careful oversight and a realistic understanding of where automation ends and human judgment must take over.

Separating Signal from Noise

AI has undeniably changed how cybersecurity teams detect and respond to threats, but its impact is often overstated as a stand-alone solution. In reality, today’s AI tools work best when applied to specific problems and guided by experienced teams who understand their limitations.

As the technology continues to evolve, closing the gap between hype and value will depend on how carefully organizations integrate AI into their security strategies. For most teams, progress will come from using AI as one part of a balanced, human-led defense.

Probabilistic ML: Natural Gradients and Statistical Manifolds Explained

2026-01-22 05:00:06

Table of Links

Abstract and 1. Introduction

  2. Some recent trends in theoretical ML

    2.1 Deep Learning via continuous-time controlled dynamical system

    2.2 Probabilistic modeling and inference in DL

    2.3 Deep Learning in non-Euclidean spaces

    2.4 Physics Informed ML

  3. Kuramoto model

    3.1 Kuramoto models from the geometric point of view

    3.2 Hyperbolic geometry of Kuramoto ensembles

    3.3 Kuramoto models with several globally coupled sub-ensembles

  4. Kuramoto models on higher-dimensional manifolds

    4.1 Non-Abelian Kuramoto models on Lie groups

    4.2 Kuramoto models on spheres

    4.3 Kuramoto models on spheres with several globally coupled sub-ensembles

    4.4 Kuramoto models as gradient flows

    4.5 Consensus algorithms on other manifolds

  5. Directional statistics and swarms on manifolds for probabilistic modeling and inference on Riemannian manifolds

    5.1 Statistical models over circles and tori

    5.2 Statistical models over spheres

    5.3 Statistical models over hyperbolic spaces

    5.4 Statistical models over orthogonal groups, Grassmannians, homogeneous spaces

  6. Swarms on manifolds for DL

    6.1 Training swarms on manifolds for supervised ML

    6.2 Swarms on manifolds and directional statistics in RL

    6.3 Swarms on manifolds and directional statistics for unsupervised ML

    6.4 Statistical models for the latent space

    6.5 Kuramoto models for learning (coupled) actions of Lie groups

    6.6 Grassmannian shallow and deep learning

    6.7 Ensembles of coupled oscillators in ML: Beyond Kuramoto models

  7. Examples

    7.1 Wahba’s problem

    7.2 Linked robot’s arm (planar rotations)

    7.3 Linked robot’s arm (spatial rotations)

    7.4 Embedding multilayer complex networks (Learning coupled actions of Lorentz groups)

  8. Conclusion and References

2.2 Probabilistic modeling and inference in DL

Learning in general can be viewed as a process of updating certain beliefs about the state of the world based on the new information. This abstract point of view underlies the broad field of Probabilistic ML [17]. In this Subsection we mention certain aspects of this field which are the most relevant for the present study.

The general idea of updating beliefs can be formalized as learning an optimal (according to a certain criterion) probability distribution. This further implies that implementation of probabilistic ML algorithms involves optimization over spaces of probability distributions. Therefore, gradient flows on spaces of probability measures [18, 19] are essential ingredients of probabilistic modeling in ML. The notion of gradient flow requires a metric structure. The distance between two probability measures should represent the degree of difficulty of distinguishing between them, provided that a limited number of samples is available. Metrics on spaces of probability measures are induced by the Hessians of various divergence functions [20, 21]. The classical (and parametrization-invariant) choice is the Kullback-Leibler divergence (KL divergence), also referred to as relative entropy. This divergence induces the Fisher (or Fisher-Rao) information metric on spaces of probability measures, thus turning them into statistical manifolds [20]. When optimizing over a family of probability distributions, the Euclidean (or so-called "vanilla") gradient is inappropriate. Ignoring this fact leads to inaccurate or incorrect algorithms. Instead, one should use the gradient with respect to the Fisher information metric, which is named the natural gradient [22, 23, 24]. In RL this must be taken into account when designing stochastic policies. In particular, the well-known actor-critic algorithm has been modified in order to respect the geometry of the space of probability distributions [25].
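For the reader's convenience, the standard formulas behind this paragraph can be sketched as follows (a summary of textbook material, cf. [20, 22], not new results): the Fisher information matrix is the Hessian of the KL divergence at zero displacement, and the natural gradient preconditions the vanilla gradient by its inverse.

```latex
% Fisher information for a parametric family p_\theta and its relation to the KL divergence
F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[ \nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top} \right],
\qquad
D_{\mathrm{KL}}\!\left( p_\theta \,\Vert\, p_{\theta + \delta\theta} \right) \approx \tfrac{1}{2}\, \delta\theta^{\top} F(\theta)\, \delta\theta .

% Natural-gradient update for a loss L(\theta) with step size \eta
\theta_{k+1} = \theta_k - \eta\, F(\theta_k)^{-1} \nabla_\theta L(\theta_k).
```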

Another way of introducing a metric on spaces of probability distributions is the Wasserstein metric. Fokker-Planck equations are gradient flows in the Wasserstein metric. The potential function for these flows is the KL divergence between the instantaneous and the (unknown) stationary distribution [26]. Yet another metric sometimes used in ML is the Stein metric [27].
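To make this statement explicit (again a sketch of the standard result behind [26]), the Fokker-Planck equation with potential V can be written as the Wasserstein gradient flow of the relative entropy with respect to its stationary distribution:

```latex
% Fokker--Planck equation as a Wasserstein-2 gradient flow of the KL divergence
\partial_t \rho_t
  = \nabla \cdot \left( \rho_t \nabla V \right) + \Delta \rho_t
  = \nabla \cdot \left( \rho_t \, \nabla \frac{\delta}{\delta \rho}\, D_{\mathrm{KL}}\!\left( \rho_t \,\Vert\, \pi \right) \right),
\qquad
\pi(x) \propto e^{-V(x)} .
```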


:::info Author:

(1) Vladimir Jacimovic, Faculty of Natural Sciences and Mathematics, University of Montenegro Cetinjski put bb., 81000 Podgorica Montenegro ([email protected]).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::


Benchmarking 1B Vectors with Low Latency and High Throughput

2026-01-22 04:56:20

As AI-driven applications move from experimentation into real-time production systems, the expectations placed on vector similarity search continue to rise dramatically. Teams now need to support billion-scale datasets, high concurrency, strict p99 latency budgets, and a level of operational simplicity that reduces architectural overhead rather than adding to it.

ScyllaDB Vector Search was built with these constraints in mind. It offers a unified engine for storing structured data alongside unstructured embeddings, and it achieves performance that pushes the boundaries of what a managed database system can deliver at scale. The results of our recent high-scale benchmark on 1 billion vectors demonstrate both ultra-low latency and highly predictable behaviour under load.


Architecture at a Glance

To achieve low-single-millisecond performance across massive vector sets, ScyllaDB adopts an architecture that separates the storage and indexing responsibilities while keeping the system unified from the user’s perspective. The ScyllaDB nodes store both the structured attributes and the vector embeddings in the same distributed table. Meanwhile, a dedicated Vector Store service – implemented in Rust and powered by the USearch engine optimized to support ScyllaDB’s predictable single-digit millisecond latencies – consumes updates from ScyllaDB via CDC and builds approximate-nearest-neighbour (ANN) indexes in memory. Queries are issued to the database using a familiar CQL expression such as:


SELECT … ORDER BY vector_column ANN_OF ? LIMIT k;

They are then internally routed to the Vector Store, which performs the similarity search and returns the candidate rows. This design allows each layer to scale independently, optimising for its own workload characteristics and eliminating resource interference.
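To make the flow concrete from the application side, here is a hedged sketch of what a client call might look like using the Python Cassandra/ScyllaDB driver. The keyspace, table, and column names are hypothetical, and it assumes the driver in use maps the vector column to a plain list of floats.

```python
# Hedged sketch: issuing an ANN query through the Python driver.
# Keyspace, table, and column names are hypothetical placeholders.
from cassandra.cluster import Cluster  # pip install scylla-driver (or cassandra-driver)

cluster = Cluster(["scylla-node-1.example.com"])
session = cluster.connect("demo_keyspace")

stmt = session.prepare(
    "SELECT doc_id, title FROM documents "
    "ORDER BY embedding ANN_OF ? LIMIT 10"
)

query_vector = [0.0] * 96  # a 96-dimensional embedding produced by your model
for row in session.execute(stmt, [query_vector]):
    print(row.doc_id, row.title)

cluster.shutdown()
```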


Benchmarking 1 Billion Vectors

To evaluate real-world performance, ScyllaDB ran a rigorous benchmark using the publicly available yandex-deep_1b dataset, which contains 1 billion vectors of 96 dimensions. The setup consisted of six nodes: three ScyllaDB nodes running on i4i.16xlarge instances, each equipped with 64 vCPUs, and three Vector Store nodes running on r7i.48xlarge instances, each with 192 vCPUs. This hardware configuration reflects realistic production deployments where the database and vector indexing tiers are provisioned with different resource profiles. The results focus on two usage scenarios with distinct accuracy and latency goals (detailed in the following sections).

A full architectural deep-dive, including diagrams, performance trade-offs, and extended benchmark results for higher-dimension datasets, can be found in the technical blog post Building a Low-Latency Vector Search Engine for ScyllaDB. These additional results follow the same pattern seen in the 96-dimensional tests: exceptionally low latency, high throughput, and stability across a wide range of concurrent load profiles.

Scenario #1 — Ultra-Low Latency with Moderate Recall

The first scenario was designed for workloads such as recommendation engines and real-time personalisation systems, where the primary objective is extremely low latency and recall can be moderately relaxed. We used index parameters m = 16, ef-construction = 128, ef-search = 64, and Euclidean distance. At approximately 70% recall and with 30 concurrent searches, the system maintained a p99 latency of only 1.7 milliseconds and a p50 of just 1.2 milliseconds, while sustaining 25,000 queries per second.

When expanding the throughput window (still keeping p99 latency below 10 milliseconds), the cluster reached 60,000 QPS for k = 100 with a p50 latency of 4.5 milliseconds, and 252,000 QPS for k = 10 with a p50 latency of 2.2 milliseconds. Importantly, utilizing ScyllaDB’s predictable performance, this throughput scales linearly: adding more Vector Store nodes directly increases the achievable QPS without compromising latency or recall.
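For readers who want a feel for what these index parameters mean, here is a rough standalone sketch using the USearch Python bindings, where m roughly corresponds to connectivity, ef-construction to expansion_add, and ef-search to expansion_search. This only illustrates the parameters on a small local index; it is not the managed Vector Store's actual internal configuration.

```python
# Rough sketch of scenario #1 parameters expressed with the USearch Python bindings.
# Mirrors m=16, ef-construction=128, ef-search=64 on 96-dimensional Euclidean vectors.
# Illustration only, not the Vector Store's internal setup.
import numpy as np
from usearch.index import Index

index = Index(
    ndim=96,              # vector dimensionality (as in yandex-deep_1b)
    metric="l2sq",        # squared Euclidean distance
    connectivity=16,      # roughly the HNSW parameter m
    expansion_add=128,    # roughly ef-construction
    expansion_search=64,  # roughly ef-search
)

vectors = np.random.rand(100_000, 96).astype(np.float32)
index.add(np.arange(len(vectors)), vectors)

matches = index.search(np.random.rand(96).astype(np.float32), 10)  # k = 10
print(matches.keys)  # ids of the 10 approximate nearest neighbours
```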

Figure: Latency and throughput depending on the concurrency level, for recall equal to 70%.

Scenario #2 — High Recall with Slightly Higher Latency

The second scenario targets systems that require near-perfect recall, including high-fidelity semantic search and retrieval-augmented generation pipelines. Here, the index parameters were significantly increased to m = 64, ef-construction = 512, and ef-search = 512. This configuration raises compute requirements but dramatically improves recall.

With 50 concurrent searches and recall approaching 98%, ScyllaDB kept p99 latency below 12 milliseconds and p50 around 8 milliseconds while delivering 6,500 QPS. When shifting the focus to maximum sustained throughput while keeping p99 latency under 20 milliseconds and p50 under 10 milliseconds, the system achieved 16,600 QPS. Even under these settings, latency remained notably stable across values of k from 10 to 100, demonstrating predictable behaviour in environments where query limits vary dynamically.

Figure: Latency and throughput depending on the concurrency level, for recall equal to 98%.

Detailed Results

Table: Summary of the results for some representative concurrency levels.

Unified Vector Search Without the Complexity

Integrating Vector Search with ScyllaDB delivers substantial performance and networking cost advantages. The vector store resides close to the data, with just a single network hop between metadata and embedding storage in the same availability zone. This locality, combined with ScyllaDB’s shard-per-core execution model, allows the system to provide real-time latency and massive throughput even under heavy load. The result is that teams can accomplish more with fewer resources compared to specialised vector-search systems.

In addition to being fast at scale, ScyllaDB’s Vector Search is also simpler to operate. Its key advantage is its ability to unify structured and unstructured retrieval within a single dataset. This means you can store traditional attributes and vector embeddings side-by-side and express queries that combine semantic search with conventional search. For example, you can ask the database to “find the top five most similar documents, but only those belonging to this specific customer and created within the past 30 days.” This approach eliminates the common pain of maintaining separate systems for transactional data and vector search, and it removes the operational fragility associated with syncing between two sources of truth.

This also means there is no ETL drift and no dual-write risk. Instead of shipping embeddings to a separate vector database while keeping metadata in a transactional store, ScyllaDB consolidates everything into a single system. The only pipeline you need is the computational step that generates embeddings using your preferred LLM or ML model. Once written, the data remains consistent without extra coordination, backfills, or complex streaming jobs.

Operationally, ScyllaDB simplifies the entire retrieval stack. Because it is built on ScyllaDB’s proven distributed architecture, the system is highly available, horizontally scalable, and resilient across availability zones and regions. Instead of operating two or three different technologies – each with its own monitoring, security configurations, and failure modes – you only manage one. This consolidation drastically reduces operational complexity while simultaneously improving performance.


Roadmap

The product is now in General Availability. This includes Cloud Portal provisioning, on-demand billing, a full range of instance types, and additional performance optimisations. Self-service scaling is planned for Q1. By the end of Q1 we will introduce native filtering capabilities, enabling vector search queries to combine ANN results with traditional predicates for more precise hybrid retrieval.

Looking further ahead, the roadmap includes support for scalar and binary quantisation to reduce memory usage, TTL functionality for lifecycle automation of vector data, and integrated hybrid search combining ANN with BM25 for unified lexical and semantic relevance.


Conclusion

ScyllaDB has demonstrated that it is capable of delivering industry-leading performance for vector search at massive scale, handling a dataset of 1 billion vectors with p99 latency as low as 1.7 milliseconds and throughput up to 252,000 QPS. These results validate ScyllaDB Vector Search as a unified, high-performance solution that simplifies the operational complexity of real-time AI applications by co-locating structured data and unstructured embeddings.

These benchmarks showcase the current state of ScyllaDB’s scalability. With the enhancements planned on the roadmap, including scalar quantization and sharding, these performance limits are set to increase over the next year. Nevertheless, even now, the feature is ready for latency-critical workloads such as fraud detection or recommendation systems.

Deep Learning via Continuous-Time Systems: Neural ODEs and Normalizing Flows Explained

2026-01-22 04:45:02

Table of Links

Abstract and 1. Introduction

  2. Some recent trends in theoretical ML

    2.1 Deep Learning via continuous-time controlled dynamical system

    2.2 Probabilistic modeling and inference in DL

    2.3 Deep Learning in non-Euclidean spaces

    2.4 Physics Informed ML

  3. Kuramoto model

    3.1 Kuramoto models from the geometric point of view

    3.2 Hyperbolic geometry of Kuramoto ensembles

    3.3 Kuramoto models with several globally coupled sub-ensembles

  4. Kuramoto models on higher-dimensional manifolds

    4.1 Non-Abelian Kuramoto models on Lie groups

    4.2 Kuramoto models on spheres

    4.3 Kuramoto models on spheres with several globally coupled sub-ensembles

    4.4 Kuramoto models as gradient flows

    4.5 Consensus algorithms on other manifolds

  5. Directional statistics and swarms on manifolds for probabilistic modeling and inference on Riemannian manifolds

    5.1 Statistical models over circles and tori

    5.2 Statistical models over spheres

    5.3 Statistical models over hyperbolic spaces

    5.4 Statistical models over orthogonal groups, Grassmannians, homogeneous spaces

  6. Swarms on manifolds for DL

    6.1 Training swarms on manifolds for supervised ML

    6.2 Swarms on manifolds and directional statistics in RL

    6.3 Swarms on manifolds and directional statistics for unsupervised ML

    6.4 Statistical models for the latent space

    6.5 Kuramoto models for learning (coupled) actions of Lie groups

    6.6 Grassmannian shallow and deep learning

    6.7 Ensembles of coupled oscillators in ML: Beyond Kuramoto models

  7. Examples

    7.1 Wahba’s problem

    7.2 Linked robot’s arm (planar rotations)

    7.3 Linked robot’s arm (spatial rotations)

    7.4 Embedding multilayer complex networks (Learning coupled actions of Lorentz groups)

  8. Conclusion and References

2.1 Deep Learning via continuous-time controlled dynamical system

In 2017, Weinan E proposed new architectures of NNs realized through continuous-time controlled dynamical systems [10]. This proposal was motivated by earlier observations that NNs (most notably, ResNets) can be regarded as Euler discretizations of controlled ODEs. In parallel, a number of studies [11, 12, 13] enhanced and expanded theoretical foundations of ML by adapting classical control-theoretic techniques to the new promising field of applications.

This line of research resulted in a tangible outcome which was named the Neural ODE [14]. The underlying idea is to formalize some ML tasks as optimal control problems. In fact, deep limits of ResNets with constant weights yield continuous-time dynamical systems [15]. In such a setup, the weights of the NN are replaced by control functions. Training of the model is realized through minimization of the total error (or total loss) using Pontryagin's maximum principle. Backpropagation corresponds to the adjoint ODE, which is solved backwards in time.
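To make the ResNet-as-Euler-discretization observation concrete, here is a minimal PyTorch sketch (our own illustration, not code from [14] or [15]): a stack of residual updates with step size h is exactly an explicit Euler scheme for x'(t) = f(x(t)), and a Neural ODE replaces the loop with a black-box solver trained via the adjoint method.

```python
# Minimal sketch: a residual stack as an explicit Euler discretization of
# x'(t) = f(x(t)). Illustrative only; a Neural ODE would hand the vector field
# to an ODE solver (e.g. torchdiffeq.odeint) and train it via the adjoint method.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """The right-hand side f(x) shared along the whole trajectory."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

def euler_resnet(f: nn.Module, x0: torch.Tensor, steps: int = 20, h: float = 0.05):
    """x_{k+1} = x_k + h * f(x_k): each step plays the role of one residual block."""
    x = x0
    for _ in range(steps):
        x = x + h * f(x)
    return x

f = VectorField(dim=8)
x0 = torch.randn(32, 8)        # a batch of inputs
xT = euler_resnet(f, x0)       # state at "time" T = steps * h
print(xT.shape)                # torch.Size([32, 8])
```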

A similar way of encoding maps underlies the concept of continuous-time normalizing flows [16]. Normalizing flows are dynamical systems, usually described by ODEs or PDEs. These systems are trained with the goal of learning a sequence (or a flow) of invertible maps between the observed data, originating from an unknown and complicated target probability distribution, and some simple (typically Gaussian) distribution. Once the normalizing flow is trained, the target distribution is approximated. The model is capable of generalizing the observed data and making predictions by sampling from the simple distribution and mapping the samples along the learned flow.
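The training objective behind this description is the change-of-variables formula, which in the continuous-time case takes the "instantaneous" form used by continuous normalizing flows [16]; the sketch below restates the standard formulation rather than anything specific to this paper.

```latex
% Discrete-time normalizing flow: invertible map x = f(z), base density p_z
\log p_x(x) = \log p_z\!\left( f^{-1}(x) \right) + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right| .

% Continuous-time flow \dot{z}(t) = g(z(t), t): instantaneous change of variables
\frac{\partial \log p\left( z(t) \right)}{\partial t} = - \operatorname{tr}\!\left( \frac{\partial g}{\partial z(t)} \right).
```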

We have mentioned two concepts (Neural ODEs and normalizing flows) that have recently had a significant impact. Their success reflects the general trend of growing interest in the control-theoretic point of view on ML. Most theoretical advances in Reinforcement Learning (RL) rely on Control Theory (CT) [12, 13]. As the theoretical foundations of RL are being established, the boundary between RL and CT is becoming blurred.


:::info Author:

(1) Vladimir Jacimovic, Faculty of Natural Sciences and Mathematics, University of Montenegro Cetinjski put bb., 81000 Podgorica Montenegro ([email protected]).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::
