HackerNoon

We are an open and international community of 45,000+ contributing writers publishing stories and expertise for 4+ million curious and insightful monthly readers.

RSS preview of Blog of HackerNoon

NewsUnfold Tests a Crowdsourced Approach to Detecting Media Bias

2025-12-03 06:19:47

Table of Links

  1. Abstract and Introduction
  2. Related Work
  3. Feedback Mechanisms
  4. The NewsUnfold Platform
  5. Results
  6. Discussion
  7. Conclusion
  8. Acknowledgments and References

A. Feedback Mechanism Study Texts

B. Detailed UX Survey Results for NewsUnfold

C. Material Bias and Demographics of Feedback Mechanism Study

D. Additional Screenshots


4 The NewsUnfold Platform

Tailored toward news readers, NewsUnfold highlights potentially biased sentences in articles (Figure 4) and incorporates the Highlights feedback module (Figure 4) assessed in Section 3 to create a comprehensive, cost-effective, crowdsourced dataset through reader feedback. The feedback mechanism additionally includes a free-text field (Figure 4) where readers can justify their feedback.

Application Design

NewsUnfold’s responsive design draws inspiration from news aggregation platforms,[10] aiming to represent an environment with regularly updated content to which users return. By clarifying the purpose of our research and the societal importance of media bias, and by giving access to automated bias classification, we encourage voluntary feedback contributions.

The landing page states NewsUnfold’s mission: encouraging bias-aware reading and collecting feedback to refine bias detection and mitigate its negative effects. To further motivate contributions, it emphasizes the value of individual users’ feedback in enhancing bias-detection capabilities. Clicking a call-to-action button guides users to the Article Overview Page (Figure 14). As a preliminary stage, this page displays 12 static articles spanning nine subjects, balanced by the amount of bias and political orientation. Offering different articles enables readers to compare the amount of bias across articles. Selecting an article directs users to NewsUnfold’s Article Reading Page, which integrates the bias highlights and feedback mechanism. Table 2 outlines its essential components. The sparkles highlight controversial sentences or sentences that received the least feedback to enable balanced feedback collection (Figure 4). From the Article Overview Page (Figure 14), users can additionally initiate a tutorial (Figure 14) guiding them through the bias highlights (Figure 4) and the feedback mechanism (Figure 4), concluding with a pointer to the UX survey (Figure 16). After each article, we show three recommended articles (Figure 16).

Study Design

The study addresses four aspects:

1. Engagement: Measure the amount of voluntary feedback from readers without monetary incentives.

2. Data Quality: Assess the quality of the collected feedback.

3. Classifier: Investigate classifier performance when integrating feedback-generated labels.

4. User Experience: Evaluate the user experience and perception of NewsUnfold, focusing on bias highlights (1 in Figure 4) and feedback (2 in Figure 4) for a user-centered design approach.

During the study, readers can freely explore the platform, select articles, decide to provide anonymous feedback, and choose to participate in the UX survey. Unlike the preliminary study, participants are not sourced from crowdworking platforms but reached via LinkedIn, Instagram, and university boards. The outreach briefly introduces NewsUnfold with a link to its landing page. Readers are informed of feedback data collection beforehand.

To understand the readers’ experiences, a voluntary UX survey (5 in Figure 16) is available after reading an article.[11] In this study, we prioritize identifying UX issues among readers to boost participation and feedback efficiency, focusing on UX-oriented data collection over comprehensive quantitative analysis. To obtain user analytics, we use Umami,[12] a privacy-centric tool logging the number of clicks, unique visitors, country, language settings, device types, most-visited pages, and the number of tutorial initiations while keeping the anonymity of visitors.

Table 2: Key Elements of NewsUnfold. Yellow numbers appear in NewsUnfold figures.

To aggregate the collected feedback, we use a voting system with a minimum of five votes per sentence to establish sentence labels. The labels are stored in the same structure as BABE (Spinde et al. 2021b) to enable the merging of the two datasets. We apply a spam detection method by Raykar and Yu (2011) to filter out unreliable annotations. We calculate a score between 0 and 1 for each annotator and eliminate annotators in the 0.05th percentile. We assess the quality of the resulting dataset, similar to Section 3, using the IAA metric Krippendorff’s α and manual analysis.
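To make the aggregation step concrete, here is a minimal sketch in Python. It assumes feedback is stored as (sentence_id, annotator_id, label) tuples with binary labels and that a simple majority vote establishes the sentence label; the annotator-score input stands in for the Raykar and Yu (2011) method, which is not reproduced here, and all names are illustrative.

```python
from collections import defaultdict
import numpy as np

MIN_VOTES = 5           # minimum votes per sentence, as stated above
SPAM_PERCENTILE = 0.05  # annotators below this score percentile are removed

def aggregate_labels(feedback, annotator_scores):
    """Majority-vote sentence labels from reader feedback.

    feedback: iterable of (sentence_id, annotator_id, label) with label in {0, 1}
    annotator_scores: dict annotator_id -> reliability score in [0, 1]
                      (placeholder for the Raykar & Yu spam-detection score)
    """
    cutoff = np.percentile(list(annotator_scores.values()), SPAM_PERCENTILE)
    votes = defaultdict(list)
    for sentence_id, annotator_id, label in feedback:
        if annotator_scores[annotator_id] > cutoff:   # drop unreliable annotators
            votes[sentence_id].append(label)
    labels = {}
    for sentence_id, sentence_votes in votes.items():
        if len(sentence_votes) >= MIN_VOTES:          # only label well-covered sentences
            labels[sentence_id] = int(np.mean(sentence_votes) >= 0.5)  # tie counts as biased (assumption)
    return labels
```

The resulting labels can then be stored in the BABE structure and checked with Krippendorff’s α as described above.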

As HITL systems center around iteratively improving machine performance through user input, we evaluate the integration of feedback data into classifier training. The training process adopts hyperparameter configurations from Spinde et al. (2021b) with a pre-trained model from Hugging Face.[13] We train and evaluate the model with data from NUDA added to the 3,700 BABE sentences and compare it against the baseline classifier (Spinde et al. 2021b) using the F1 score (Powers 2008).
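As an illustration of this training step, the following hedged sketch fine-tunes the Hugging Face checkpoint named in footnote [13] on a merged sample and reports the F1 score. The tiny in-line dataset and the hyperparameters are placeholders; the study adopts the configuration of Spinde et al. (2021b), not the values shown here.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "mediabiasgroup/DA-RoBERTa-BABE"  # pre-trained model from footnote [13]

# Placeholder rows; in the study these would be the 3,700 BABE sentences plus NUDA.
rows = [
    {"text": "The senator's reckless scheme will ruin hard-working families.", "label": 1},
    {"text": "The senate votes on the proposal next week.", "label": 0},
]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = Dataset.from_list(rows).map(tokenize, batched=True)
eval_ds = Dataset.from_list(rows).map(tokenize, batched=True)  # placeholder eval split

def compute_metrics(pred):
    preds = np.argmax(pred.predictions, axis=-1)
    return {"f1": f1_score(pred.label_ids, preds)}  # compare against the BABE-only baseline

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bias-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```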


:::info Authors:

(1) Smi Hinterreiter;

(2) Martin Wessel;

(3) Fabian Schliski;

(4) Isao Echizen;

(5) Marc Erich Latoschik;

(6) Timo Spinde.

:::


:::info This paper is available on arxiv under CC0 1.0 license.

:::

[10] E.g., Google News (https://news.google.com).

[11] The survey consists of 9 questions: two scales and eight optional open-ended queries. Appendix B contains a detailed breakdown of the survey and its results.

[12] https://umami.is

[13] https://huggingface.co/mediabiasgroup/DA-RoBERTa-BABE

Color-Coded Bias Warnings Boost Accuracy and Efficiency in AI Feedback

2025-12-03 06:19:39

Table of Links

  1. Abstract and Introduction
  2. Related Work
  3. Feedback Mechanisms
  4. The NewsUnfold Platform
  5. Results
  6. Discussion
  7. Conclusion
  8. Acknowledgments and References

A. Feedback Mechanism Study Texts

B. Detailed UX Survey Results for NewsUnfold

C. Material Bias and Demographics of Feedback Mechanism Study

D. Additional Screenshots


3 Feedback Mechanisms

As the evaluation of feedback mechanisms for media bias remains unexplored, in a preliminary study, we design and assess two HITL feedback mechanisms for their suitability for data collection. Using sentences from news articles labeled by the classifier from Spinde, Hamborg, and Gipp (2020), we compare the mechanisms Highlights, Comparison, and a control group without visual highlights. Our analysis focuses on (1) dataset quality, assessed using Krippendorff’s α; (2) engagement, quantified by feedback given on each sentence[6]; (3) agreement with expert annotations, evaluated through F1 scores; and (4) feedback efficiency, measured by the time required in combination with engagement and agreement.
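The following sketch illustrates how these four measures could be computed for one group, assuming the group’s feedback is arranged as a sentences × annotators matrix with 1 = biased, 0 = not biased, and NaN = no feedback; the exact operationalizations (e.g., efficiency as annotations per minute) are our own simplifications, not the paper’s formulas.

```python
import krippendorff                 # pip install krippendorff
import numpy as np
from sklearn.metrics import f1_score

def evaluate_group(ratings, expert_labels, total_minutes):
    """ratings: array of shape (n_sentences, n_annotators) with values {0, 1, np.nan}."""
    # (1) dataset quality: inter-annotator agreement
    alpha = krippendorff.alpha(reliability_data=ratings.T,  # rows = annotators
                               level_of_measurement="nominal")
    # (2) engagement: share of sentence/annotator pairs that received feedback
    engagement = float(np.mean(~np.isnan(ratings)))
    # (3) agreement with experts: F1 of the per-sentence majority vote
    majority = (np.nanmean(ratings, axis=1) >= 0.5).astype(int)
    f1 = f1_score(expert_labels, majority)
    # (4) efficiency: collected annotations per minute of reading time
    efficiency = float(np.sum(~np.isnan(ratings))) / total_minutes
    return {"alpha": alpha, "engagement": engagement, "f1": f1, "efficiency": efficiency}
```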

In the Highlights mechanism, biased sentences are colored yellow, and non-biased ones are grey, inspired by Spinde et al. (2022). Participants indicate their agreement or disagreement with these classifications through a floating module (Figure 2). The Comparison mechanism displays sentence pairs. For the first sentence, participants provide feedback on the AI’s classification as in Highlights. The second sentence has no color coding, prompting users with “What do you think?” (Figure 3), thereby aiming to foster an independent bias assessment and mitigate anchoring effects. Participants in the control group do not see any highlights, solely encountering the feedback module with the second question from Comparison.

We use the BABE classifier trained by Spinde et al. (2021b) to generate the sentence labels and highlights. The classifier currently shows the highest performance, fine-tuning the large language model RoBERTa on an extensive dataset of linguistic bias annotated by experts at both sentence and word levels. The BABE-based model on Hugging Face[7] generates the probability of each sentence in an article being biased or not biased. We accordingly assign the label with the higher probability.
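For illustration, a minimal sketch of this labeling step using the checkpoint from footnote [7]; the example sentences are made up, and the exact label strings returned depend on the model’s configuration.

```python
from transformers import pipeline

# Text-classification pipeline over the BABE-based checkpoint (footnote [7]).
classifier = pipeline("text-classification",
                      model="mediabiasgroup/da-roberta-babe-ft")

sentences = [
    "The senator's reckless scheme will ruin hard-working families.",
    "The senate votes on the proposal next week.",
]

# The pipeline returns, per sentence, the label with the higher probability;
# that label decides whether the sentence is highlighted as biased.
for sentence, result in zip(sentences, classifier(sentences)):
    print(f"{result['label']} ({result['score']:.2f})  {sentence}")
```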

Study Design

To assess the two mechanisms, we recruit 240 participants, balanced regarding gender, from Prolific.[8] On the study website built for this purpose, depicted in Figure 13, they view two articles from different political orientations paired with one feedback mechanism per group. During the study, users freely determine their annotation count and time spent, with a progress bar showing the number of annotated sentences. Not interacting with any sentences prompts a pop-up, but they can click ‘next’ to proceed.


We guarantee GDPR conformity through a preliminary data processing agreement.

Figure 2: The feedback mechanism Highlights uses the BABE classifier to highlight biased sentences in yellow and non-biased sentences in grey. Readers can agree or disagree with this classification through the feedback module on the right.

Figure 3: The feedback mechanism Comparison operates on sentence pairs and uses the BABE classifier to highlight the first sentence as biased in yellow. Readers can agree or disagree with this classification through the feedback module on the right. The next sentence is merely outlined. Here, the feedback module asks for a bias rating without the classifier anchor.

A demographic survey and an introduction to media bias follow (Appendix A). A post-introduction attention test confirms participants’ understanding of media bias, which, if failed twice, results in study exclusion. Then, participants read through a description of the study task and proceed to give feedback on the two articles. Lastly, a concluding trustworthiness question ensures data reliability. If participants clicked through the study inattentively, they could indicate that their data is not usable for research (Draws et al. 2021) while still receiving full pay (Spinde et al. 2022).

Results

The 240 participants in the study spent an average of 11:24 minutes, with a compensation rate of £7.89/hr. Twelve participants failed the attention test once, but only one was excluded for a second failure. We further excluded 33 participants who flagged their data as unsuitable for research. Therefore, the analysis includes data from 206 participants: 69 control group participants, 66 Comparison group participants, and 71 Highlights group participants (p = .84, f = .23, α = .05). 104 participants identified as female, 99 as male, and 3 as other, with an average age of 36.62 years (SD = 13.74). The sample, on average, exhibits a left slant (Figure 11 and Figure 12) with higher education (Figure 7). 196 participants indicated advanced English levels, 9 intermediate, and 1 beginner (Figure 9). News reading frequency averaged around once a day (Figure 10).

Notably, we observe a high overall engagement, with even the least annotated sentences receiving feedback from 70% of the participants. We detail the results of the feedback mechanism study, including engagement, IAA, F1 scores, and efficiency, in Table 1. The Highlights group exhibits higher engagement than the Comparison group, containing more collected data. Also, Highlights demonstrates higher efficiency by collecting more feedback data in less time without compromising quality, measured by IAA and agreement with the expert standard.

The increases in engagement and efficiency are significant at a .05 significance level. Due to variance inhomogeneity indicated by a significant Levene test (p < .05), we applied Welch’s ANOVA for unequal variances. Post-hoc Holm-Bonferroni adjustments revealed significant differences between the CONTROL and HIGHLIGHTS groups, with p < .0167 for efficiency and p < .025 for engagement. The Games-Howell post-hoc test confirmed these results. As in previous research, IAA and F1 scores from crowdsourcers are low due to the complex and subjective task (Spinde et al. 2021c). F1 score differences are not significant (ANOVA with Holm-Bonferroni, p > .05).

Table 1: Overview of Feedback Interactions per Group.

Given the comparable IAA and F1 scores across groups, we integrate Highlights within NewsUnfold to optimize data collection efficiency.
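A hedged sketch of this statistical comparison, using synthetic placeholder data rather than the study’s measurements: Levene’s test checks variance homogeneity, and pingouin provides Welch’s ANOVA and the Games-Howell post-hoc test (the Holm-Bonferroni step is omitted here).

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import levene

rng = np.random.default_rng(0)
df = pd.DataFrame({   # synthetic placeholder data, one row per participant
    "group": ["control"] * 69 + ["comparison"] * 66 + ["highlights"] * 71,
    "efficiency": np.concatenate([rng.normal(1.0, 0.3, 69),
                                  rng.normal(1.1, 0.3, 66),
                                  rng.normal(1.4, 0.5, 71)]),
})

groups = [g["efficiency"].to_numpy() for _, g in df.groupby("group")]
print(levene(*groups))                                                      # variance homogeneity check

print(pg.welch_anova(data=df, dv="efficiency", between="group"))            # ANOVA for unequal variances
print(pg.pairwise_gameshowell(data=df, dv="efficiency", between="group"))   # post-hoc pairwise comparisons
```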


:::info Authors:

(1) Smi Hinterreiter;

(2) Martin Wessel;

(3) Fabian Schliski;

(4) Isao Echizen;

(5) Marc Erich Latoschik;

(6) Timo Spinde.

:::


:::info This paper is available on arxiv under CC0 1.0 license.

:::

[6] Readers can modify their annotations at any time; however, each unique sentence annotation counts as a single interaction for our feedback metric.

[7] https://huggingface.co/mediabiasgroup/da-roberta-babe-ft

[8] https://www.prolific.co

[9] Experts have at least six months’ experience in media bias. Consensus was achieved through majority or discussion.

Human-in-the-Loop Models Could Improve Media Bias Detection, Study Finds

2025-12-03 06:19:31

Table of Links

  1. Abstract and Introduction
  2. Related Work
  3. Feedback Mechanisms
  4. The NewsUnfold Platform
  5. Results
  6. Discussion
  7. Conclusion
  8. Acknowledgments and References

A. Feedback Mechanism Study Texts

B. Detailed UX Survey Results for NewsUnfold

C. Material Bias and Demographics of Feedback Mechanism Study

D. Additional Screenshots


2 Related Work

Media Bias

Various studies (Lee et al. 2022; Recasens, Danescu-Niculescu-Mizil, and Jurafsky 2013; Raza, Reji, and Ding 2022; Hube and Fetahu 2019; Ardèvol-Abreu and Zúñiga 2017; Eberl, Boomgaarden, and Wagner 2017) highlight the complex nature of media bias, or, more specifically, linguistic bias (Recasens, Danescu-Niculescu-Mizil, and Jurafsky 2013; Wessel et al. 2023; Spinde et al. 2024). Individual backgrounds, such as demographics, news consumption habits, and political ideology, influence the perception of media bias (Druckman and Parkin 2005; Eveland Jr. and Shah 2003; Ardèvol-Abreu and Zúñiga 2017; Kause, Townsend, and Gaissmaier 2019). Content resonating with a reader’s beliefs is often viewed as neutral, while dissenting content is perceived as biased (Kause, Townsend, and Gaissmaier 2019; Feldman 2011). Enhancing awareness of media bias can improve the ability to detect bias at various levels — word-level, sentence-level, article-level, or outlet-level (Spinde et al. 2022; Baumer et al. 2015).

While misinformation is closely connected to media bias and has received much research attention, most news articles do not fall into strict categories of veracity (Weil and Wolfe 2022). Instead, they frequently exhibit varying degrees of bias, underlining the importance of media bias research.

Automatic Media Bias Detection

NLP methods can automate bias detection, enabling large-scale bias analysis and mitigation systems (Wessel et al. 2023; Spinde et al. 2021b; Liu et al. 2021; Lee et al. 2022; Pryzant et al. 2020; He, Majumder, and McAuley 2021). Yet, current bias models’ reliability for end-consumer applications is limited (Spinde et al. 2021b) due to their dependency on the training dataset’s quality. These models often rely on small, handcrafted, and domain-specific datasets, frequently using crowdsourcing (Wessel et al. 2023), which cost-effectively delegates annotation to a diverse, non-expert community (Xintong et al. 2014). The subjective nature of bias and potential inaccuracies from non-experts can result in lower agreement, more noise (Spinde et al. 2021c), and the perpetuation of harmful stereotypes (Otterbacher 2015). Conversely, expert-curated datasets offer higher agreement but come at a greater cost (Spinde et al. 2024).

Datasets used for automated media bias detection need to stay updated (Wessel et al. 2023), annotations should be collected across demographics (Pryzant et al. 2020), and media bias awareness reduces misclassification (Spinde et al. 2021b). The limited range of topics and periods covered by current datasets and the complexities involved in annotating bias decrease the accuracy of media bias detection tools. This, in turn, impedes their widespread adoption and accessibility for everyday users (Spinde et al. 2024). To make the data collection process less resource-intensive and optimize gathering human feedback, we raise media bias awareness by algorithmically highlighting bias and gathering feedback from readers.

Media Bias Awareness

News-reading websites like AllSides[4] or GroundNews[5] offer approaches for media bias awareness at article and topic levels (Spinde et al. 2022; An et al. 2021; Park et al. 2009). However, research on these approaches is sparse. One approach uses ideological classifications (An et al. 2021; Park et al. 2009; Yaqub et al. 2020) to show contrasting views at the article level. At the text level, studies use visual bias indicators like bias highlights (Spinde et al. 2020, 2022; Baumer et al. 2015), with learning effects persisting post-highlight removal (Spinde et al. 2022). As the creation of media bias datasets does not include media bias awareness research, NewsUnfold connects these research areas.

HITL Platforms for Crowdsourcing Annotations

HITL learning improves machine learning algorithms through user feedback, refining existing classifiers instead of creating new labels (Mosqueira-Rey et al. 2022; Sheng and Zhang 2019). Enhanced classifier precision can be achieved by combining crowdsourcing and HITL approaches, leveraging user feedback to generate labels via repeated-labeling, and increasing the number of annotations (Xintong et al. 2014; Karmakharm, Aletras, and Bontcheva 2019; Sheng and Zhang 2019; Stumpf et al. 2007). For instance, “Journalists-In-The-Loop” (Karmakharm, Aletras, and Bontcheva 2019) continuously refines rumor detection by soliciting visual veracity ratings from journalists. Similarly, Mavridis et al. (2018) suggest a HITL system to detect media bias in videos. They plan to extract bias cues through comparative analysis and sentiment analysis and rely on scholars to validate the output. However, their system stays in the conceptual phase. Brew, Greene, and Cunningham (2010)’s web platform crowdsources news article sentiments and re-trains classifiers based on non-expert majority votes, emphasizing the effectiveness of diversified annotations and user demographics over mere annotator consensus. Demartini, Mizzaro, and Spina (2020) propose combining automatic methods, crowdsourced workers, and experts to balance cost, quality, volume, and speed. Their concept uses automated methods to identify and classify misinformation, passing some to the crowd and experts for verification in unclear cases. Similar to Mavridis et al. (2018), they do not implement their system and describe no UI details.

As no HITL system has been implemented to address media bias, we aim to close this gap by integrating automatic bias highlights based on expert annotation data readers can review. To mitigate possible anchoring bias and uncritical acceptance of machine judgments, we test a second feedback mechanism aimed at increasing critical thinking (Vaccaro and Waldo 2019; Furnham and Boo 2011; Jakesch et al. 2023; Shaw, Horton, and Chen 2011).


:::info Authors:

(1) Smi Hinterreiter;

(2) Martin Wessel;

(3) Fabian Schliski;

(4) Isao Echizen;

(5) Marc Erich Latoschik;

(6) Timo Spinde.

:::


:::info This paper is available on arxiv under CC0 1.0 license.

:::

[4] https://www.allsides.com/

[5] https://ground.news/

Can User Feedback Fix Slanted News? Researchers Think So

2025-12-03 06:19:24

:::info Authors:

(1) Smi Hinterreiter;

(2) Martin Wessel;

(3) Fabian Schliski;

(4) Isao Echizen;

(5) Marc Erich Latoschik;

(6) Timo Spinde.

:::

Table of Links

  1. Abstract and Introduction
  2. Related Work
  3. Feedback Mechanisms
  4. The NewsUnfold Platform
  5. Results
  6. Discussion
  7. Conclusion
  8. Acknowledgments and References

A. Feedback Mechanism Study Texts

B. Detailed UX Survey Results for NewsUnfold

C. Material Bias and Demographics of Feedback Mechanism Study

D. Additional Screenshots

Abstract

Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets in response to changes in context.

1 Introduction

Media bias, slanted or one-sided media content, impacts public opinion and decision-making processes, especially on web platforms and social media (Ardèvol-Abreu and Zúñiga 2017; Eberl, Boomgaarden, and Wagner 2017; Spinde et al. 2023). News consumers are frequently unaware of the extent and influence of bias (Kause, Townsend, and Gaissmaier 2019; Spinde et al. 2020; Ribeiro et al. 2018), leading to limited awareness of specific issues and narrow, one-sided points of view (Ardèvol-Abreu and Zúñiga 2017; Eberl, Boomgaarden, and Wagner 2017). As promoting media bias awareness has beneficial effects (Park et al. 2009; Spinde et al. 2022), emphasis on the need for methods that automatically detect media bias is growing (Wessel et al. 2023). Such methods potentially impact user behavior, as they facilitate the development of systems that analyze various subtypes of bias comprehensively and in real time (Spinde et al. 2021a).

Several approaches have been developed for automated media bias classification (Wessel et al. 2023; Spinde et al. 2024; Liu et al. 2021; Hube and Fetahu 2019; Vraga and Tully 2015). However, they share a challenge: While datasets are vital for training machine-learning models, the intricate and subjective nature of media bias makes the manual creation of these datasets time-consuming and expensive (Spinde et al. 2021b). Crowdsourcing is cost-effective but can yield unreliable annotations with low annotator agreement (Recasens, Danescu-Niculescu-Mizil, and Jurafsky 2013). In contrast, expert raters ensure consistency but lead to substantial costs (Spinde et al. 2021b),[1] making scaling data collection challenging (Spinde et al. 2021b). Consequently, the media bias domain lacks reliable datasets for effective training of automatic detection systems (Wessel et al. 2023). Successful Human-in-the-loop (HITL) approaches addressing similar challenges (Mosqueira-Rey et al. 2022; Karmakharm, Aletras, and Bontcheva 2019) remain untested for media bias, particularly visual methods (Karmakharm, Aletras, and Bontcheva 2019).

We propose a HITL feedback mechanism showcased on NewsUnfold, a news-reading platform that visually indicates linguistic bias to readers and collects user input to improve dataset quality. NewsUnfold is the first approach employing feedback collection to gather a media bias dataset. In the first of three phases (Figure 1), since visual HITL methods for media bias annotation have not previously been tested, we conduct a study comparing two feedback mechanisms (Section 3). Second, we implement a feedback mechanism on NewsUnfold (Section 4). Third, we use NewsUnfold with 12 articles to curate the NewsUnfold Dataset (NUDA), comprising approximately 2,000 annotations (Section 4). Notably, the collected feedback annotations exhibit a 90.97% agreement with expert annotations and a 26.31% higher inter-annotator agreement (IAA) than the baseline, the expert-annotated BABE dataset (Spinde et al. 2021b).[2] This increase is also visible when the dataset is used in classifier training, resulting in an F1 score of .824, an increase of 2.49% compared to the baseline BABE performance. While the platform’s design is adaptable to diverse subtypes of bias, we facilitate our evaluation by focusing on linguistic bias, defined by Spinde et al. (2024) as bias by word choice that transmits a perspective manifesting prejudice or favoritism towards a specific group or idea. Although linguistic bias is neither objective nor binary, collecting binary labels is a promising way to handle the challenges arising from its ambiguous and complex nature (Spinde et al. 2021b). A UX study involving 13 participants highlights high ease of use and enthusiasm for the concept. Participants also reported a strong perceived impact on critical reading and expressed positive sentiment toward the highlights.

In this work, we:


1. Explore feedback mechanisms for the first time in the context of automated media bias detection methods.

2. Introduce and evaluate NewsUnfold, a news-reading platform highlighting bias in news articles, making media bias detection models accessible for everyday news consumers. NewsUnfold collects feedback on bias highlights to improve its automatic detection.[3]

3. Generate the NewsUnfold Dataset (NUDA) incorporating approximately 2,000 annotations.

4. Present classifiers trained using NUDA and benchmarked against existing methodologies, enhancing performance when combined with other datasets.

Figure 1: Three-step process of the NewsUnfold Development and Evaluation.

This paper proposes a design for a cost-effective HITL system to improve and scale media bias datasets. Such feedback mechanisms can be integrated into various media platforms to highlight media bias and related concepts. Further, the system can adapt to changes in language and context, facilitating applied endeavors to run models on news sites and social media to understand and mitigate media bias and increase readers’ awareness.


:::info This paper is available on arxiv under CC0 1.0 license.

:::

[1] For example, in the expert-based BABE dataset, one sentence label costs four to six euros, varying with rater count.

[2] The IAA evaluates how consistently different individuals assess or classify the same dataset (Hayes and Krippendorff 2007).

A Traveler's Guide to Method Dispatch in Swift

2025-12-03 06:18:10

Introduction

Method dispatch is an algorithm used to select the appropriate method that needs to be invoked upon a call. The primary goal of method dispatch is to provide the program with information on where it can find the executable code for a specific method in memory.

Types of Method Dispatch

Compiled languages have three types of method dispatch:

  • Static or direct dispatch
  • Table or virtual dispatch
  • Message dispatch

Static Dispatch

Static dispatch is the fastest dispatch method in Swift. Since there is no method overriding available, there is only one implementation of the method, and it resides at a single location in memory.

We can opt into static dispatch using keywords such as static, final, and private.

Static dispatch is the default method dispatch for value types, since their methods can't be overridden.

Let’s look at some examples:

Final Keyword

Once we add the final keyword to a class, its methods do not support overriding, and this is when static dispatch comes into play.

// MARK: Final class
final class ClassExample {
    // MARK: Static dispatch
    func method() {
        // implementation ...
    }
}

Protocol Extension

Once you add a default implementation of a protocol using an extension, its dispatch method switches to static dispatch instead of using a Witness Table.

// MARK: Protocol Extension
protocol ProtocolExample {}  // no requirements; the extension below provides a default implementation

extension ProtocolExample {
    // MARK: Direct Dispatch
    func method() {
        // implementation ...
    }
}

class ClassExample2: ProtocolExample {}

let classExample2 = ClassExample2()
classExample2.method()

Class Extension

When a method is implemented in an extension, it can't be overridden by subclasses. In this case, static dispatch is used.

// MARK: Example Class Extension
class ClassExample3 {}

extension ClassExample3 {
    // MARK: Direct Dispatch
    func method() {
        // implementation ...
    }
}

let classExample3 = ClassExample3()
classExample3.method()

Access Control

We can’t access a private method outside of the class body. This means that the method can’t be overridden and uses static dispatch.

// MARK: Access Control
class ClassExample4 {
    // MARK: Direct Dispatch
    private func method() {
        // implementation ...
    }
}

Table Dispatch

Table dispatch is used when we have to deal with inheritance. It is the default dispatch type for class methods in Swift.

Virtual Table

For each class (and subclass), a virtual table is created that contains information about the implemented methods and stores a reference to the appropriate implementation for each of them. The main disadvantage of virtual table dispatch is that it is slower than static dispatch.

Let’s look at an example:

// MARK: Virtual Table
class ParentClass {
    func method1() {}
    func method2() {}
}

class ChildClass: ParentClass {
    override func method1() {}
    func method3() {}
}

For each class, its own virtual table is created as follows: ParentClass's table maps method1 and method2 to its own implementations, while ChildClass's table maps method1 to its override, method2 to the inherited ParentClass implementation, and method3 to its own implementation.

Witness Table

A Witness Table is used for protocol requirements and is created for each type that conforms to the protocol. The CPU uses this table to determine where it should look for an appropriate implementation. Each type (value and reference) that conforms to a protocol has its own Protocol Witness Table, which contains pointers to the methods of the type required by the protocol.

Let’s look at an example:

// MARK: Witness Table Dispatch
protocol ProtocolExample {
    func method1()
    func method2()
}

class ClassExample1: ProtocolExample {
    func method1() {}
    func method2() {}
}

class ClassExample2: ProtocolExample {
    func method1() {}
    func method2() {}
}

In this case, a witness table is created for each class: ClassExample1's table points the protocol requirements method1 and method2 to ClassExample1's implementations, and ClassExample2's table points them to ClassExample2's implementations.

Message Dispatch

Message Dispatch is the most dynamic method dispatch style. It looks for an appropriate implementation during runtime. Because it operates during runtime, we can use Method Swizzling to change method implementations.

If you want to use message dispatch, you need to add @objc dynamic before a method implementation.

// MARK: Message Dispatch
class ClassExample: NSObject {
    @objc dynamic
    func method() {}
}

class SubClassExample: ClassExample {
    @objc dynamic
    override func method() {}
}

let subclass = SubClassExample()
subclass.method()

The implementation of the method is searched for within SubClassExample. If there is no implementation of this method in that class, the search continues in the parent class, and so on until it reaches NSObject.

Method Dispatch

Let’s combine all the types into a single table:

| Dispatch type | How the implementation is found | Typical triggers |
| --- | --- | --- |
| Static (direct) | Single known implementation, resolved at compile time | Value types, final classes, private methods, extension methods |
| Table (virtual / witness) | Lookup in a per-class virtual table or per-type witness table | Class inheritance, protocol conformances |
| Message | Objective-C runtime lookup, walking up the class hierarchy | @objc dynamic methods |

Conclusion

In summary, method dispatch in Swift is a critical aspect of code execution, impacting performance and flexibility. By choosing the right dispatch method, developers can optimize their code, ensure adaptability, and leverage Swift’s dynamic features effectively. Understanding and mastering method dispatch is essential for building efficient and adaptable Swift applications.

Meet Klink Finance: HackerNoon Company of the Week

2025-12-03 04:03:42

Welcome one, welcome all to another HackerNoon Company of the Week feature.

Every week, we highlight a standout company from our Tech Company Database that’s making waves in the global tech ecosystem and positively impacting the lives of its users. Our database features everything from S&P giants to rising stars in the Startup scene.

This week’s pick is Klink Finance—an international earnings platform and affiliate network that links users, advertisers, and partners across the globe.


:::tip Want to be featured on HackerNoon’s Company of the Week?

Request Your Tech Company Page on HackerNoon Today!

:::


Fun Facts About Klink Finance

  • Klink serves a global user base across more than 140 countries, making it a truly international earnings & affiliate platform.
  • It’s more than a “get-paid-to” site: Klink combines a consumer-facing earnings app (mobile/web) and a Web3 affiliate infrastructure for advertisers and partners.


  • Through its network, it partners with major players across the crypto and fintech sectors — names like Arbitrum Foundation, Coinbase, Crypto.com, Revolut and several more reportedly integrate via Klink’s offer network.


  • The platform doesn’t require upfront investment: users reportedly earn via simple tasks — surveys, app trials, games — making it accessible even to first-time crypto earners.

HackerNoon 🤝 Klink Finance

Klink Finance has published a series of insightful articles on HackerNoon that collectively outline their vision for the future of digital advertising and user-driven rewards. Rather than product updates alone, their pieces map out a broader shift from fractured Web2 ad systems to a transparent, token-powered Web3 economy.

Check out the Klink Finance business profile here.


Join HackerNoon Business Blogging

HackerNoon’s Business Blogging Program is one of the many ways we help brands grow their reach and connect with the right audience. This program lets businesses publish content directly on HackerNoon to boost brand awareness and build SEO authority by tapping into ours.

Here’s what’s in it for you:

  • Full editorial support – we’ll help refine your story so it truly shines.
  • Multiple permanent placements – across HackerNoon, plus social media amplification.
  • Audio storytelling – your articles converted into audio format and distributed via RSS feeds.
  • Global reach – automatic translation into 12–76 languages.
  • SEO & domain authority boost – piggyback on HackerNoon’s trusted brand to strengthen your search rankings.

:::tip Publish Your First Story with HackerNoon Today

:::


And that’s all for this week, hackers.

Stay creative, Stay iconic.

HackerNoon Team
