
RSS preview of Blog of HackerNoon

The HackerNoon Newsletter: AI Doesn’t Mean the End of Work for Us (1/14/2026)

2026-01-15 00:01:51

How are you, hacker?


🪐 What’s happening in tech today, January 14, 2026?


The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, The First Outer Solar System Landing Occurred in 2005, the NSA was reported to be using spyware on other countries in 2014, Net Neutrality took a big hit in 2014, and we present you with these top-quality stories. From AI Doesn’t Mean the End of Work for Us to When a Product Actually Needs AI (And When It’s Just Slop), let’s dive right in.

SEO + Storytelling: Write Content That Ranks and Resonates


By @hackernoon-courses [ 3 Min read ] SEO is about writing so your audience — and search engines — both understand and trust your work. Read More.

Mass Schooling Invented “Smart” and “Dumb”: Here’s How It Happened


By @praisejamesx [ 6 Min read ] Explore why IQ tests are trainable proxies, how mass schooling was designed for obedience, and why the real test of intelligence is simply getting what you want. Read More.

AI Doesn’t Mean the End of Work for Us


By @bernard [ 4 Min read ] I believe that AI’s impact and future pathways are overstated because human nature is ignored in such statements. Read More.

Symfony Search That Doesn’t Go Down: Zero-Downtime Elasticsearch + Async Indexing


By @mattleads [ 11 Min read ] Stop blocking user saves on Elasticsearch. Learn a senior Symfony pattern: decouple indexing with Messenger and ship zero-downtime reindexing using aliases. Read More.

When a Product Actually Needs AI (And When It’s Just Slop)


By @mindaugascaplinskas [ 6 Min read ] The challenge for product managers, CEOs, or anyone in digital business is knowing when your product actually needs AI. Read More.


🧑‍💻 What happened in your world this week?

It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We got you covered ⬇️⬇️⬇️


ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME


We hope you enjoy this wealth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️


SEO + Storytelling: Write Content That Ranks and Resonates

2026-01-15 00:00:24

Search engine optimization often sounds like a dark art, but at its core it’s just good writing with a bit of extra care. The goal isn’t to chase algorithms — it’s to make your content as discoverable and useful as possible. Why? Because when done right, SEO brings more readers to your work and keeps them there — without sacrificing the voice and style that make you unique.

But how does one do SEO? Is it the keywords? Are there hacks that you’re unaware of? Or is the internet at large simply gaming Google?

Well, the truth is a lot simpler: SEO is about writing so your audience — and search engines — both understand and trust your work. Despite the fact that algorithms keep updating, SERPs keep mutating, and trends age like milk, the fundamentals of useful, findable, and readable content remain consistent over the years.

So, how should you do SEO? Well, here are some tips to think about the next time you’re writing an article.


Start With Intent

Every search begins with a problem to solve. A reader who types “best laptops under $800” doesn’t want theory; they want a shortlist they can trust. Someone searching “how to clear browser cache” isn’t looking for history — they need clear, step-by-step instructions.

The first step in SEO is recognizing that intent drives everything. Write to solve the problem behind the query, and you’ll already be ahead of most content online.


Structure Like a Journalist

Readers skim. Search engines skim too. That’s why good structure matters.

When writing for SEO, a strong headline that mirrors the search query sets the stage. Following it up with a short intro that acknowledges the reader’s problem builds trust. Subheads then help guide the reader’s eye through the piece, while concise paragraphs and the occasional table, screenshot, or list keep things digestible.

If someone can scan just the headings and still understand your article, you’ve nailed the structure.


Don’t Forget the Edges

The “edges” of a piece — title tags, meta descriptions, URLs, image alt text, and internal links — are often where visibility is won or lost. They might feel like small details, but these are the signposts that tell both readers and search engines what your content is about. Spend a few minutes polishing these, and you’ll dramatically improve how easily people find and click on your work.


Keep Content Alive

SEO doesn’t end when you hit publish. The best-performing content is updated over time. Maybe it’s swapping in new data, fixing outdated screenshots, or linking to a newer article. These refreshes send a strong signal that your content is current and reliable. Think of it as maintenance that pays compounding returns: a small update today can keep a post relevant — and ranking — for months or even years.



Search engines don’t buy into clickbait; readers don’t tolerate obscurity. When you solve the exact problem the query implies, in a structure that’s easy to scan and easy to trust, you win the click, the dwell time, the snippet—and the next assignment. Treat SEO like an editor’s craft, not a trick. The craft compounds.

TL;DR: The Four Rules That Do 80% of the Work

  • Write for intent, not keywords. One search = one job to be done. Solve it completely.
  • Structure like a journalist. Clear H1 → scannable H2/H3 → crisp paragraphs → helpful visuals.
  • Optimize the edges. Titles, URLs, intros, alt text, internal links, and schema win featured real estate.
  • Update with purpose. Refreshes and internal links are compounding interest for content.

Taking in all of this at once might seem a bit overwhelming. The good news is that you don’t have to.

The HackerNoon Blogging Course, with its self-paced structure, on-demand video lessons, practical tools and templates (yours to keep), exercises, and a community to learn with, gives you the resources you need to grow your reach and authority as a writer. And that’s just one of eight modules curated by a stellar Editorial team responsible for publishing 150,000+ drafts from contributors all over the world.

Want to write a winning SEO post that thrives even in the age of AI?


:::tip Sign up for the HackerNoon Blogging Course today!

:::


Optimization Performance on Synthetic Gaussian and Tree Embeddings

2026-01-15 00:00:03

Table of Links

Abstract and 1. Introduction

2. Related Works

3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

4.1 Synthetic Dataset

In general, we observe a small gain in average test accuracy and weighted F1 score from SDP and Moment relative to PGD. Notably, we observe that Moment often shows more consistent improvements compared to SDP, across most of the configurations. In addition, Moment gives smaller optimality gaps 𝜂 than SDP. This matches our expectation that Moment is tighter than the SDP.

Although in some cases, for example when 𝐾 = 5, Moment achieves significantly smaller losses than both PGD and SDP, this is generally not the case. We emphasize that these losses are not direct measurements of the max-margin hyperbolic separators’ generalizability; rather, they combine margin maximization with a penalty for misclassification that scales with 𝐶. Hence, the observation that test accuracy and weighted F1 score improve even though the loss computed using the solutions extracted from SDP and Moment is sometimes higher than that from PGD might be due to the complicated loss landscape. More specifically, the observed increases in loss can be attributed to the intricacies of the landscape rather than to any ineffectiveness of the optimization methods. Based on the accuracy and F1 score results, SDP and Moment empirically identify solutions that generalize better than those obtained by running gradient descent alone. We provide a more detailed analysis of the effect of hyperparameters in Appendix E.2 and report runtime in Table 4. The decision boundary for Gaussian 1 is visualized in Figure 5.

Figure 3: Three Synthetic Gaussian (top row) and Three Tree Embeddings (bottom row). All features are in H2 but visualized through stereographic projection on B2. Different colors represent different classes. For the tree datasets, the graph connections are also visualized but not used in training. The selected tree embeddings come directly from Mishne et al. [6].

Synthetic Tree Embedding. As hyperbolic spaces are well suited to embedding trees, we generate random tree graphs and embed them into H2 following Mishne et al. [6]. Specifically, we label nodes as positive if they are children of a specified node and negative otherwise. Our models are then evaluated on subtree classification, aiming to identify a boundary that includes all the children nodes within the same subtree. Such a task has various practical applications. For example, if the tree represents a set of tokens, the decision boundary can highlight semantic regions in the hyperbolic space that correspond to subtrees of the data graph. We emphasize that a common feature of such subtree classification tasks is data imbalance, which usually leads to poor generalizability. Hence, we use this task to assess our methods’ performance in this challenging setting. Three embeddings are selected and visualized in Figure 3, and performance is summarized in Table 1. The runtime of the selected trees can be found in Table 4. The decision boundary of tree 2 is visualized in Figure 6.
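As a toy sketch of this labeling scheme only (the tree, root, and split node below are made up for illustration, and the hyperbolic embedding step from Mishne et al. [6] is not reproduced here), the positive class can be read off as the descendants of a chosen node:

```python
# Toy sketch of subtree labeling: descendants of a chosen split node are
# positive, everything else negative. The tree, root, and split node are
# illustrative only; they are not the paper's data or embedding pipeline.
import networkx as nx

tree = nx.balanced_tree(r=2, h=3)      # small binary tree with 15 nodes
rooted = nx.bfs_tree(tree, 0)          # orient edges away from root node 0
split_node = 1                         # node whose subtree defines the positive class
positives = nx.descendants(rooted, split_node)

labels = {node: (1 if node in positives else -1) for node in rooted.nodes}
print(sum(v == 1 for v in labels.values()), "positive /", len(labels), "total")
```

Even in this tiny example the positive class is a small fraction of the nodes, which is exactly the imbalance the task is meant to stress.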

Similar to the results on the synthetic Gaussian datasets, we observe better performance from SDP and Moment compared to PGD, and, owing to the data imbalance that gradient-descent methods typically struggle with, the gain in weighted F1 score is larger in this case. In addition, we observe large optimality gaps for SDP but a very tight gap for Moment, certifying the optimality of Moment even when class imbalance is severe.

Table 1: Performance on the synthetic Gaussian and tree datasets for 𝐶 = 10.0: 5-fold test accuracy and weighted F1 score ± 1 standard deviation, and the average relative optimality gap 𝜂 for SDP and Moment.


:::info Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).

:::


:::info This paper is available on arXiv under the CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.

:::


Performance Analysis of Optimization Methods on Hyperbolic Embeddings

2026-01-14 23:45:03

Table of Links

Abstract and 1. Introduction

2. Related Works

3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

4 Experiments

We validate the performance of the semidefinite relaxation (SDP) and the sparse moment-sum-of-squares relaxation (Moment) by comparing various metrics with those of projected gradient descent (PGD) on a combination of synthetic and real datasets. The PGD implementation adapts the MATLAB code of Cho et al. [4], with a learning rate of 0.001, 2000 epochs for the synthetic datasets, 4000 epochs for the real datasets, and a warm start from a Euclidean SVM solution.
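The hyperbolic objective and projection from Cho et al. [4] are not reproduced here; as a minimal sketch of the PGD-with-warm-start pattern only, with a plain hinge loss and a unit-norm-ball projection standing in for them and toy data in place of the embeddings, the loop looks like this:

```python
# Sketch of the PGD-with-warm-start pattern only. The hinge loss and unit-ball
# projection are stand-ins, NOT Cho et al.'s hyperbolic SVM objective or its
# Lorentz-manifold projection; the data here is synthetic toy data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))        # toy labels in {-1, +1}

w = LinearSVC(C=10.0).fit(X, y).coef_.ravel()            # Euclidean SVM warm start

lr, epochs, C = 1e-3, 2000, 10.0
for _ in range(epochs):
    margins = y * (X @ w)
    viol = margins < 1.0                                  # margin-violating points
    grad = w - C * (y[viol, None] * X[viol]).sum(axis=0)  # hinge subgradient
    w = w - lr * grad
    w = w / max(1.0, np.linalg.norm(w))                   # project onto the unit ball (stand-in)
```

The warm start matters because the subsequent gradient steps only refine a solution locally; the relaxation-based methods below are compared against exactly this kind of locally refined baseline.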

Datasets. For synthetic datasets, we construct Gaussian and tree embedding datasets following Cho et al. [4], Mishne et al. [6], and Weber et al. [7]. Regarding real datasets, our experiments include two machine learning benchmark datasets, CIFAR-10 [34] and Fashion-MNIST [35], with their hyperbolic embeddings obtained through a standard hyperbolic embedding procedure [1, 3, 5], to assess image classification performance. Additionally, we incorporate three graph embedding datasets—football, karate, and polbooks, obtained from Chien et al. [5]—to evaluate the effectiveness of our methods on graph-structured data. We also explore cell embedding datasets, including the Paul Myeloid Progenitors developmental dataset [36], the Olsson Single-Cell RNA sequencing dataset [37], the Krumsiek Simulated Myeloid Progenitors dataset [38], and the Moignard blood cell developmental trace dataset from single-cell gene expression [39], whose inherent geometric structure fits our methods well.

We emphasize that all features lie on the Lorentz manifold but are visualized on the Poincaré disk through stereographic projection when the dimension is 2.

Evaluation Metrics. The primary metrics for assessing model performance are average training and testing loss, accuracy, and weighted F1 score under a stratified 5-fold train-test split scheme. Furthermore, to assess the tightness of the relaxations, we examine the relative suboptimality gap, defined as

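A conventional form for such a relative gap, offered here only as a hedged sketch (the exact definition of 𝜂 used in the experiments may differ), compares the objective value of the feasible solution extracted from a relaxation with the relaxation's optimal value, which lower-bounds the true optimum:

\[
\eta \;=\; \frac{\hat{p} - p_{\mathrm{relax}}}{\lvert p_{\mathrm{relax}} \rvert},
\qquad p_{\mathrm{relax}} \;\le\; p^{\star} \;\le\; \hat{p},
\]

so that a small 𝜂 certifies the extracted solution is close to globally optimal, and 𝜂 = 0 certifies exact global optimality.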

Implementation Details. We use MOSEK [40] in Python as our optimization solver without any intermediate parser, since interacting directly with the solver saves substantial runtime in parsing the problem. MOSEK uses an interior-point method that updates parameters inside the feasible region without projections. All experiments are run and timed on a machine with 8 Intel Broadwell/Ice Lake CPUs and 40 GB of memory. Results over multiple random seeds are gathered and reported.

We first present the results on synthetic Gaussian and tree embedding datasets in Section 4.1, followed by results on various real datasets in Section 4.2. Code to reproduce all experiments is available on GitHub.[1]
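As a minimal sketch of the stratified 5-fold protocol from the Evaluation Metrics paragraph (with a placeholder classifier and synthetic toy data standing in for the HSVM solvers and hyperbolic embeddings), the accuracy and weighted F1 numbers can be gathered as follows:

```python
# Minimal sketch of stratified 5-fold evaluation; LogisticRegression and the
# toy data are placeholders, not the paper's hyperbolic SVM solvers or datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

accs, f1s = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="weighted"))

print(f"accuracy {np.mean(accs):.3f} ± {np.std(accs):.3f}, "
      f"weighted F1 {np.mean(f1s):.3f} ± {np.std(f1s):.3f}")
```

Stratification keeps the class proportions of each fold close to those of the full dataset, which is what makes the weighted F1 comparison meaningful on the imbalanced tree datasets.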


:::info Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).

:::


:::info This paper is available on arXiv under the CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.

:::

[1] https://github.com/yangshengaa/hsvm-relax

Improving Global Optimization in HSVM and SDP Problems

2026-01-14 23:30:03

Table of Links

Abstract and 1. Introduction

2. Related Works

3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

3.4 Moment-Sum-of-Squares Relaxation

The SDP relaxation in Equation (8) may not be tight, particularly when the resulting W has a rank much larger than 1. Indeed, we often find W to be full-rank empirically. In such cases, the moment-sum-of-squares relaxation may be beneficial. Specifically, it can certifiably find the global optimum, provided that the solution exhibits a special structure known as the flat-extension property [30, 32].
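For reference, and assuming the usual notation in which M_𝜅(y) denotes the order-𝜅 moment matrix of a pseudo-moment sequence y (a sketch of the standard condition rather than a statement about this paper's specific construction), the flat-extension property is the rank-stabilization condition

\[
\operatorname{rank} M_{\kappa}(\mathbf{y}) \;=\; \operatorname{rank} M_{\kappa - d}(\mathbf{y}),
\]

where 𝑑 is determined by the maximum degree of the constraint polynomials; when it holds, the relaxation value is exact and globally optimal minimizers can be extracted from the moment matrix.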


With all these definitions established, we can present the moment-sum-of-squares relaxation [9] of the HSVM problem, outlined in Equation (7), as


Note that 𝑔(q) ⩾ 0, as previously defined, serves as the constraints in the original formulation. Additionally, when forming the moment matrix, the degree of the generated monomials is 𝑠 = 𝜅 − 1, since all constraints in Equation (7) have maximum degree 1. Consequently, Equation (13) is a convex program and can be implemented as a standard SDP using mainstream solvers. We further emphasize that by progressively increasing the relaxation order 𝜅, we can, in theory, find increasingly better solutions, as suggested by Lasserre [33]


where 𝐵 is an index set mapping the moment matrix to the entries generated by w alone, ensuring that moment matrices with overlapping regions share the same values, as required. We refer to the last constraint as the sparse-binding constraint.

Unfortunately, our solution empirically does not satisfy the flat-extension property, so we cannot certify global optimality. Nonetheless, in practice, it achieves significant performance improvements on selected datasets over both projected gradient descent and the SDP-relaxed formulation. Similarly, this formulation does not directly yield decision boundaries, and we defer discussion of the extraction methods to Appendix B.2.

Figure 2: Star-shaped sparsity pattern in Equation (13), visualized with 𝑛 = 4.


:::info Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).

:::


:::info This paper is available on arXiv under the CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.

:::


Semidefinite Relaxation for Hyperbolic SVMs: A Polynomial Approach

2026-01-14 23:15:03

Table of Links

Abstract and 1. Introduction

2. Related Works

3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

3.3 Semidefinite Formulation

Since Equation (7) is a non-convex quadratically-constrained quadratic programming (QCQP) problem, we can apply a semidefinite relaxation (SDP) [8]. The SDP formulation is given by

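As a hedged sketch of the general shape of such a relaxation (the generic QCQP lifting, not the paper's HSVM-specific Equation (8)), one replaces the rank-one matrix ww⊤ with a positive semidefinite variable W and drops the rank constraint:

\[
\min_{\mathbf{w}} \; \mathbf{w}^{\top} A_0 \mathbf{w}
\;\; \text{s.t.} \;\; \mathbf{w}^{\top} A_i \mathbf{w} \le b_i
\qquad \Longrightarrow \qquad
\min_{W \succeq 0} \; \langle A_0, W \rangle
\;\; \text{s.t.} \;\; \langle A_i, W \rangle \le b_i,
\]

where the matrices A_i and bounds b_i are generic stand-ins. If the optimal W happens to have rank one, the relaxation is tight and w can be recovered by factorizing W = ww⊤; otherwise, as Section 3.4 discusses, W may be full-rank and extraction heuristics are required.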

:::info Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).

:::


:::info This paper is available on arXiv under the CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.

:::
