The Practical Developer

A constructive and inclusive social network for software developers.

Exploratory testing on mobile: the messy checks that find real bugs

2025-12-22 17:00:00

Portfolio version (canonical, with full context and styling):

https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/exploratory-testing.html

TL;DR

  • What it is: risk-driven exploratory sessions where design, execution, and analysis happen together.
  • Platform context: mobile (Android), where interruptions and device state changes are normal.
  • Timebox: short focused sessions, not long wandering playthroughs.
  • Approach: charters, controlled variation, observation-led decisions.
  • Outputs: defects and observations that explain behaviour, with enough context to reproduce.

Flow of a mobile exploratory test session: test charter → timebox (20 to 45 minutes) → controlled variation → outputs (defects, context notes, bug reports, evidence).

About this article

Exploratory testing is often summarised as “testing without scripts”. In real mobile QA work, that description is incomplete.

This article explains exploratory testing on mobile as it is actually applied in a practical workflow: session structure, risk focus, interruptions and recovery, and how this approach consistently finds issues that scripted checks often miss.

Examples are drawn from a real Android mobile game pass, but the focus here is the method, not the case study.

What exploratory testing actually means

In practice, exploratory testing is a way of working where test design, execution, and analysis happen together.

You are not following a pre-written script. You are observing behaviour and choosing the next action based on risk, evidence, and what the product is doing right now.

That does not mean “random testing”. It means structured freedom: you keep a clear intent, and you keep your changes controlled so outcomes remain interpretable.

Why exploratory testing matters on mobile

Mobile products rarely fail under perfect conditions. They fail when something changes unexpectedly. On Android especially, many failure modes are contextual and lifecycle-driven.

  • Alarms, calls, and notifications interrupt active flows.
  • Apps are backgrounded and resumed repeatedly.
  • Network quality changes during critical moments (login, purchase, reward claim).
  • UI must remain usable on small screens and unusual aspect ratios.

Applied insight: For mobile exploration, compare performance across devices where possible and probe interruptions: lock screen, phone calls, network drops, switching Wi-Fi/data, rotation, and kill/restart recovery.

Radu Posoi, Founder, AlkoTech Labs (ex Ubisoft QA Lead)

Exploratory sessions target these risks directly instead of assuming a clean uninterrupted journey.

Exploratory testing workflow in practice

Exploratory test charters, not scripts

Sessions start with a charter: a short statement of intent.

For example, “Explore reward claim behaviour under interruptions” or “Explore recovery after network loss”.

The charter defines focus, not steps.

Timeboxed exploratory testing sessions

Exploratory testing works best in short sessions. Timeboxing forces prioritisation and prevents unfocused wandering.

Typical sessions range from 20 to 45 minutes.

Applied insight: Before you go deep, verify the basics first. A short daily smoke test protects the golden path, so deeper exploratory work is not wasted rediscovering obvious breakage.

Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)

Controlled variation: one variable at a time

Rather than changing everything at once, one variable is altered at a time: lock state, network type, lifecycle state.

This keeps results interpretable and defects reproducible.
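To make controlled variation concrete, here is a minimal sketch of how a session probe might script one interruption at a time on a connected Android device, driving adb from Python. The package name and the probe sequence are hypothetical; the adb commands themselves are standard, though some (like toggling Wi-Fi via svc) depend on device and OS version.

import subprocess
import time

PACKAGE = "com.example.game"  # hypothetical package under test

def adb(*args: str) -> None:
    """Run one adb command against the connected device."""
    subprocess.run(["adb", *args], check=True)

def probe_lock_unlock() -> None:
    adb("shell", "input", "keyevent", "KEYCODE_POWER")   # lock the screen
    time.sleep(5)
    adb("shell", "input", "keyevent", "KEYCODE_WAKEUP")  # wake it again

def probe_wifi_drop() -> None:
    adb("shell", "svc", "wifi", "disable")  # drop the network mid-flow
    time.sleep(10)
    adb("shell", "svc", "wifi", "enable")

def probe_background_resume() -> None:
    adb("shell", "input", "keyevent", "KEYCODE_HOME")  # background the app
    time.sleep(5)
    adb("shell", "monkey", "-p", PACKAGE, "1")         # bring it back to the foreground

# One variable per attempt: run a probe, observe, take notes, repeat.
for probe in (probe_lock_unlock, probe_wifi_drop, probe_background_resume):
    print(f"--- {probe.__name__} ---")
    probe()

Scripting the interruption keeps the timing repeatable, which is exactly what makes the resulting defect reproducible.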

Exploratory testing session checklist (charter, timebox, evidence)

  • Charter chosen (risk and focus)
  • Timebox set (20 to 45 mins)
  • Variables defined (one at a time)
  • Notes captured live
  • Evidence captured when it happens
  • Bug report drafted while context is fresh

Common mobile bugs found with exploratory testing

Exploratory testing is effective at surfacing issues that are low-frequency but high-impact, especially on mobile.

  • Soft locks where the UI appears responsive but progression is blocked.
  • State inconsistencies after backgrounding or relaunch.
  • Audio or visual desynchronisation after OS-level events.
  • UI scaling or readability problems that only appear in specific contexts.

Android exploratory testing example: reward claim soft lock

Scenario: reward claim flow under interruptions (Android).

During an exploratory session, repeatedly backgrounding and resuming the app while a reward flow was mid-animation triggered a soft lock: the UI stayed visible, but the claim state never completed, blocking progression.

This did not appear during clean uninterrupted smoke testing because the trigger was lifecycle timing and state recovery.

Why this matters: it is normal user behaviour on mobile, not a rare edge case. Exploratory sessions hit it because they are designed to.

Bug reporting for exploratory testing: notes and evidence

Because exploratory testing is adaptive, notes and evidence matter more than in scripted runs. Findings must be supported with enough context to reproduce and diagnose.

Applied insight: High-impact exploratory bugs live or die by their evidence. Capture context (client and device state), include frequency (for example, 3/3 or 10/13), and attach a clear repro so the issue is actionable.

Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)

  • Screen recordings captured during the session, not recreated later.
  • Notes that include context, not just actions (device state, network, lifecycle transitions).
  • Bug reports that clearly separate expected behaviour from actual behaviour.

The goal is to make exploratory findings actionable, not anecdotal.

Exploratory testing skills shown in this mobile pass

  • Risk-based testing decisions
  • Test charter creation and execution
  • Defect analysis and clear bug reporting
  • Reproduction step clarity under variable conditions
  • Evidence-led communication
  • Mobile UI and interaction awareness
  • Device and network variation testing

Key takeaways for mobile QA

  • Exploratory testing is structured, not random.
  • Mobile risk is contextual, not just functional.
  • Interruptions and recovery deserve dedicated exploration.
  • Good notes and evidence make exploratory work credible and actionable.

Exploratory testing FAQ (mobile QA)

How do you stop exploratory testing becoming random wandering?

By using a clear charter, a strict timebox, and controlled variation. If you can’t explain what you were trying to learn in that session, the charter is too vague.

What do you write down during an exploratory session?

The variables that matter for reproduction: device state, network, lifecycle transitions, and what changed between attempts. Notes should capture context, not just button presses.

How do you reproduce a bug found through exploration?

First, reduce the scenario to the smallest set of steps that still triggers the issue. Then rerun it while changing one variable at a time until the trigger conditions are clear.

What makes mobile exploratory testing different from PC or console?

Mobile failure modes are often lifecycle and OS-driven: backgrounding, notifications, lock/unlock, network switching, permissions, battery and performance constraints. Normal user behaviour creates timing and recovery issues that clean runs will miss.

Evidence and case study links

This dev.to post stays focused on the workflow. The case study links out to the workbook structure, runs, and evidence.

Announcing NgRx 21: Celebrating a 10 Year Journey with a fresh new look and @ngrx/signals/events

2025-12-22 16:59:27

We are pleased to announce the latest major version of the NgRx framework with some exciting new features, bug fixes, and other updates.

10 Year Anniversary 🎉

The NgRx team

Ten years ago, NgRx started as a small experiment created by Rob Wormald, Mike Ryan, and Brandon Roberts to bring Redux-style state management to Angular applications. Since then, Angular has changed a lot, our tooling has changed a lot, and the way we build applications has changed a lot – but NgRx is still the go-to solution many of us reach for managing state in Angular.

Over the years, NgRx has grown from @ngrx/store into a full platform: router integration, entity helpers, devtools, component utilities, ESLint rules, and most recently Signals.

Huge thanks to Alex Okrushko, Brandon Roberts, Marko Stanimirović, Mike Ryan, Rainer Hahnekamp, Tim Deschryver, everyone who has been on the team along the way, and the many contributors. Equally important are the countless content creators who helped the rest of us learn how to use these tools well.

If you have used NgRx in the last decade, you’re part of that story. Thank you. 🙏

A few key numbers highlight the confidence teams have placed in NgRx for building quality, production-ready applications over the years:

  • 483 contributors
  • 2,250 commits
  • 2,400 issues closed
  • @ngrx/store has been downloaded 174,070,253 times
  • @ngrx/signals has been downloaded 14,665,960 times

New Website 🎨

To celebrate this milestone we’re shipping a refreshed website at https://ngrx.io.

The new site is built with Analog and has a modernized look and feel, improved navigation, and better performance. We're also working to update the docs and examples to make it easier to find what you need, whether you're just getting started or looking for advanced patterns.

A huge shout-out to Mike Ryan for the new design and setup for the new website, and to Brandon Roberts, Duško Perić, and Tim Deschryver for the content migration and improvements.

Welcome Rainer 🏆

We are thrilled to welcome Rainer to the NgRx core team! Rainer has been an active member of the NgRx community for years, from giving talks and workshops at conferences and meet-ups, to contributing code, documentation, and support to fellow developers. His expertise and passion for Angular and NgRx make him a valuable addition to our team. Please join us in welcoming Rainer to the NgRx family!

The Events Plugin is stable 👏

In NgRx 19 we introduced @ngrx/signals/events as an experimental way to model event-driven workflows in your Angular applications using Signals and Signal Stores.

Thanks to your feedback, the APIs have been cleaned up and the rough edges sanded down. This gives us the confidence to promote the Events plugin to stable in this release.

If you want to explore how to use the Events plugin in your app, check out the Events documentation.

The addition of Scoped Events

The most important addition is support for scoped events. Instead of broadcasting every event globally, you can scope events to a specific part of your app – the local scope, parent scope, or the global scope. Typical examples include local state management scenarios where events should stay within a specific feature, or micro-frontend architectures where each remote module needs its own isolated event scope.

import { provideDispatcher } from '@ngrx/signals/events';

@Component({
  providers: [
    // 👇 Provide local `Dispatcher` and `Events` services
    // at the `BookSearch` injector level.
    provideDispatcher(),
    BookSearchStore,
  ],
})
export class BookSearch {
  readonly store = inject(BookSearchStore);
  readonly dispatch = injectDispatch(bookSearchEvents);

  constructor() {
    // 👇 Dispatch event to the local scope.
    this.dispatch.opened();
  }

  changeQuery(query: string): void {
    // 👇 Dispatch event to the parent scope.
    this.dispatch({ scope: 'parent' }).queryChanged(query);
  }

  triggerRefresh(): void {
    // 👇 Dispatch event to the global scope.
    this.dispatch({ scope: 'global' }).refreshTriggered();
  }
}

To learn more about scoped events, check out the scoped events documentation.

signalMethod and rxMethod accept a computation method

In addition to providing a Signal, it is also possible to provide a computation function to the methods signalMethod and rxMethod. This is a small change that improves the developer experience, and aligns our API with Angular's resource and linkedSignal methods.

@Component({
  /* ... */
})
export class Numbers {
  readonly logSum = signalMethod<{ a: number; b: number }>(
    ({ a, b }) => console.log(a + b)
  );

  constructor() {
    const num1 = signal(1);
    const num2 = signal(2);
    this.logSum(() => ({ a: num1(), b: num2() }));
    // console output: 3

    setTimeout(() => num1.set(3), 3_000);
    // console output after 3 seconds: 5
  }
}

Special thanks to Rainer and Marko for their incredible work on @ngrx/signals!

Updating to NgRx v21 🚀

To start using NgRx 21, make sure to have the following minimum versions installed:

  • Angular version 21.x
  • Angular CLI version 21.x
  • TypeScript version 5.9.x
  • RxJS version ^6.5.x or ^7.5.x

NgRx supports using the Angular CLI ng update command to update your NgRx packages. To update your packages to the latest version, run the command:

ng update @ngrx/store@21
ng update @ngrx/signals@21

A Big Thank You to Our Community! ❤️

NgRx is a community-driven project, and we are immensely grateful for everyone who contributes their time and expertise. Your bug reports, feature requests, documentation improvements, and pull requests are what make this project thrive.

We want to give a special shout-out to a few individuals for their direct contributions to this release:

We also want to extend a huge thank you to our sponsors. Your financial support is crucial for the continued development and maintenance of NgRx.
A special thanks to our Gold sponsor, Nx, and our Bronze sponsor, House of Angular.

Sponsor NgRx 🤝

If you are interested in sponsoring the continued development of NgRx, please visit our GitHub Sponsors page for different sponsorship options, or contact us directly to discuss other sponsorship opportunities.

What's Next?

We are incredibly excited about what you will build with NgRx v21. We encourage you to try out the new features and share your feedback with us. We are especially interested in hearing your thoughts on the newly stabilized Events plugin. Please open issues and discussions on our GitHub repository to let us know what you think.

To stay up-to-date with the latest news, follow us on X / Twitter and LinkedIn.

Thank you for being part of the NgRx community! Happy coding!

How AI Knows a Cat Is Like a Dog: An Intuitive Guide to Word Embeddings

2025-12-22 16:53:51

Have you ever wondered how a computer knows that a cat is more like a dog than a car?

To a machine, words are just strings of characters or arbitrary ID numbers. But in the world of Natural Language Processing, we’ve found a way to give words a home in a multi-dimensional space, where a word’s neighbors are its semantic relatives.

In this post, we’ll explore the fascinating world of word embeddings. We’ll start with the intuition (no deep technical dives) and build up a clear understanding of what word embeddings really are (along with code), and how they enable AI systems to capture meaning and relationships in human language.

Word embeddings in NLP, comparing cat, dog, and car in space.

The Magic of Word Math: Static Embeddings

Imagine if you could do math with ideas. The classic example in the world of embeddings is:

King - Man + Woman ≈ Queen

This isn’t just a clever trick! This is the power of Static Embeddings like GloVe (Global Vectors for Word Representation).

GloVe works by looking at massive amounts of text to see how often words appear near each other. It then assigns each word a fixed numerical vector. Because these vectors represent the “meaning”, words that are semantically similar end up close together.

King is closer to queen than to man or woman.
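You can reproduce this analogy with pre-trained vectors. Here is a minimal sketch using gensim's downloadable GloVe model (glove-wiki-gigaword-100 is one of gensim's hosted pre-trained sets; the first run downloads it):

import gensim.downloader as api

# Load 100-dimensional GloVe vectors trained on Wikipedia + Gigaword.
glove = api.load("glove-wiki-gigaword-100")

# king - man + woman ≈ ?
result = glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # expected: [('queen', <similarity score>)]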

The Bank Problem: When One Vector Isn’t Enough

As powerful as static models like GloVe are, they have a blind spot called polysemy: words with multiple meanings.

Think about the word “bank”:

  1. I need to go to the bank to deposit some money. (A financial institution).
  2. We sat on the bank of the river. (The edge of a river).

Bank vs river bank (two different meanings of one word).

In a static model like GloVe, bank has one single, fixed vector. That vector is an average across all the contexts the model saw during training, which means the model can’t truly distinguish between a place where you keep your savings and the grassy side of a river.

The Solution: Contextual Embeddings with BERT

This is where Dynamic or Contextual Embeddings, like BERT (Bidirectional Encoder Representations from Transformers), have changed the game. Unlike GloVe, BERT doesn’t just look up a word in a fixed dictionary. It looks at the entire sentence to generate a unique vector for a word every single time it appears.

When BERT processes our two bank sentences, it recognizes the surrounding words (like “river” or “deposit”) and generates two completely different vectors. It understands that the context changes the core identity of the word.

Here is the simple usage of BERT with PyTorch in code:

import torch
from transformers import BertTokenizer, BertModel

# Load tokenizer and model (here from a local directory; pass
# 'bert-base-uncased' instead to download from the Hugging Face Hub)
tokenizer = BertTokenizer.from_pretrained('./bert_model')
model_bert = BertModel.from_pretrained('./bert_model')

# Prevent training
model_bert.eval()

def print_token_embeddings(sentence, label):
    """
    Tokenizes a sentence, runs it through BERT,
    and prints the first 5 values of each token's embedding.
    """
    # Tokenize input
    inputs = tokenizer(sentence, return_tensors="pt")

    # Forward pass
    with torch.no_grad():
        outputs = model_bert(**inputs)

    # Contextual embeddings for each token
    embeddings = outputs.last_hidden_state[0]

    # Convert token IDs back to readable tokens
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    # Print token + partial embedding
    print(f"\n--- {label} ---")
    for token, vector in zip(tokens, embeddings):
        print(f"{token:<12} {vector.numpy()[:5]}")

# Example sentences
sentence1 = "I went to the bank to deposit money."
sentence2 = "We sat on the bank of the river."

# Compare contextual embeddings
print_token_embeddings(sentence1, "Sentence 1")
print_token_embeddings(sentence2, "Sentence 2")

Output

The output shows that BERT assigns different vectors to the word bank based on its surrounding context.

Sentence 1: I went to the bank to deposit money.

Sentence 2: We sat on the bank of the river.
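To quantify the difference rather than eyeball raw values, you can compare the two bank vectors directly. A minimal sketch that builds on the tokenizer and model_bert loaded above (the helper below is an addition, not part of the original snippet):

from torch.nn.functional import cosine_similarity

def bank_vector(sentence):
    """Return the contextual embedding of the token 'bank' in a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model_bert(**inputs)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = tokens.index("bank")
    return outputs.last_hidden_state[0, idx]

v_money = bank_vector("I went to the bank to deposit money.")
v_river = bank_vector("We sat on the bank of the river.")

# A static model would score exactly 1.0 here; BERT scores noticeably lower.
print(cosine_similarity(v_money, v_river, dim=0).item())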

Which Model Should You Use?

Choosing the right embedding depends entirely on your specific task and your available computational resources.

Static Embeddings (like GloVe) are the best choice when you need a fast, computationally lightweight solution with a small memory footprint. They are perfect for straightforward tasks like document classification, where the broader meaning of words is usually sufficient.
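As an illustration of that lightweight style, a common pattern is to average a document's word vectors into a single fixed-size feature vector for a downstream classifier. A minimal sketch, reusing gensim's downloadable GloVe vectors (the classifier hookup is assumed, not shown):

import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")

def doc_vector(text: str) -> np.ndarray:
    """Average the GloVe vectors of a document's in-vocabulary words."""
    words = [w for w in text.lower().split() if w in glove]
    if not words:
        return np.zeros(glove.vector_size)
    return np.mean([glove[w] for w in words], axis=0)

# Each document becomes one 100-dimensional vector, ready for any
# off-the-shelf classifier such as logistic regression.
print(doc_vector("the cat sat on the mat").shape)  # (100,)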

On the other hand, Contextual Embeddings (such as BERT) are necessary when your task requires a deep understanding of language and ambiguity, such as question answering or advanced chatbots. They excel at handling words with multiple meanings, which is often the key to an application’s success. However, keep in mind that they require more computational power and a larger memory footprint.

Wrapping Up

Embeddings are the foundation of how AI reads and processes our human world. Whether you are using a pre-trained model like BERT or building a simple embedding model from scratch using PyTorch’s nn.Embedding layer, you are essentially building a bridge between human thought and machine calculation.

What do you think? If you were training a model from scratch today, what specific vocabulary or niche topic would you want it to learn first? Let me know in the comments 👇.

Note: All illustrations in this post were generated using DALL·E 3.

Quick Quiz

Let’s test your understanding. Share your answer in the comments 👇.

How does text data differ from image data in machine learning?

References

  1. Devlin, J. et al. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
  2. Stanford NLP Group. GloVe: Global Vectors for Word Representation.
  3. Spot Intelligence. GloVe Embeddings Explained.

My Journey Contributing to Django: From Intimidation to a Merged Ticket 🚀

2025-12-22 16:46:57

Contributing to Django felt intimidating at first.

This isn’t just any open-source project. Django powers millions of applications worldwide. The codebase is massive, the standards are high, and every change goes through careful review. As someone still growing as a backend engineer, I often wondered:

Am I really ready to contribute here?

Spoiler: You don’t need to feel ready; you just need the right environment, guidance, and consistency. That environment, for me, was Djangonaut Space.

Djangonaut Space

Djangonaut Space is more than a mentorship program; it’s a launchpad into real-world open source. It pairs contributors with experienced navigators and captains who help you move from "I want to contribute" to "my code is live in Django."

Through the program, I learned:

  • How to read and understand Django’s internals
  • How to navigate tickets, discussions, and code reviews
  • How to communicate clearly with maintainers
  • How to accept feedback without ego, and improve fast
  • Most importantly, I learned that open source is collaborative, not competitive.

The Ticket That Changed Everything

My contribution focused on improving argparse error handling in Django management commands, a small but meaningful enhancement that improves developer experience.
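For context on that area: management commands declare their CLI options through argparse in add_arguments, so argparse’s error behavior is what developers see when a command receives bad input. A minimal, hypothetical sketch of that surface (not the ticket’s actual code):

from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = "Hypothetical command showing where argparse fits in."

    def add_arguments(self, parser):
        # parser is a standard argparse.ArgumentParser subclass.
        parser.add_argument("--count", type=int, default=1)

    def handle(self, *args, **options):
        if options["count"] < 0:
            # CommandError is Django's idiom for a clean CLI error message.
            raise CommandError("count must be non-negative")
        self.stdout.write(f"Running {options['count']} time(s)")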

At first, the problem looked simple. But as with most Django work, the real challenge wasn’t writing the code; it was:

  • Understanding existing behavior
  • Making changes without breaking backwards compatibility
  • Writing code that aligns with Django’s philosophy
  • Justifying decisions clearly during review

The review process taught me a lot:

  • My code was reviewed line by line
  • I rebased multiple times
  • I received suggestions that genuinely improved the implementation
  • I learned when to push back and when to listen

And then it happened.

🎉 The ticket was merged into Django’s main branch.

Seeing that merge wasn’t just exciting, it was validating. It confirmed that with the right mentorship and persistence, contributing to major open-source projects is absolutely achievable.

What This Journey Taught Me

This experience reshaped how I think about open source and software engineering in general.

Here are a few lessons I’m carrying forward:

  • You don’t need to know everything to contribute
  • Good questions are as valuable as good code
  • Feedback is a gift, even when it’s tough
  • Consistency beats confidence
  • Community accelerates growth

Open source isn’t about being perfect. It’s about showing up, learning in public, and improving with every iteration.

If you’re a developer sitting on the fence, wondering whether you’re good enough to contribute to a large project like Django, this is your sign.

Start small. Ask questions. Join a mentorship program if you can. And don’t underestimate the impact of one well-reviewed contribution. That one ticket might just change how you see yourself as a developer.

🙏 Acknowledgements & Thanks

This journey wouldn’t have been possible without the incredible people behind my Djangonaut Space team, Team Mars.

Navigator: @lilian — thank you for the guidance, structure, and encouragement throughout the journey.

Captain: @sean — thank you for the thoughtful reviews, patience, and encouragement.

Djangonauts:
@eddy and @rim Choi, thank you for the collaboration, discussions, and shared learning. Building alongside you made the experience even better.

And to the wider Django community, thank you for maintaining such a welcoming and high-quality open-source ecosystem.

Future of AI

2025-12-22 16:36:10

The Future of AI: Trends Shaping the Next Decade

A New Era Where AI Becomes Ubiquitous in Our Lives

The future of artificial intelligence is poised for significant transformation as it integrates deeper into our daily lives. In a rapidly evolving landscape, several key trends are driving advancements that promise to reshape industries and transform how we work and interact with technology.

Key Facts Driving the Future of AI:

  • Exponential growth in AI-driven devices: According to the 2025 AI Index Report by Stanford HAI, 223 FDA-approved AI-enabled medical devices were introduced in 2023 alone, a significant leap from just six in 2015. Similarly, Waymo grew to more than 150,000 autonomous rides per week by the end of 2024, demonstrating AI’s robust integration into daily transportation.
  • Generative AI models like GPT-4 are revolutionizing various sectors: In a recent interview with Forbes Insights, Ramakrishnan from Accenture noted that generative AI models such as GPT-4 have shown unprecedented accuracy and speed compared to custom-built machine learning models. He emphasized the cost-effectiveness of these models, noting that their affordability will make them more accessible to businesses of all sizes.
  • AI democratization: The rise of user-friendly platforms enabling non-experts to use AI is a critical aspect of this transformation. These platforms aim to bridge the gap between technical expertise and everyday applications, making AI solutions more attainable for entrepreneurs, educators, and small businesses.

The Role of Human Oversight in AI:

  • Ethical considerations are paramount: As AI becomes more prevalent, it is crucial that there be a robust system of human oversight. In an interview with The Economist, Dr. Smith from the University of California, Berkeley highlighted the importance of integrating human decision-making into AI systems to ensure ethical outcomes.
  • Emerging trends in AI ethics and governance: The 2025 AI Index Report underscores the increasing focus on ethical AI practices. Meanwhile, 64% of respondents in a McKinsey & Company survey identified generative AI as potentially the most transformative technology in a generation.

Key Takeaways:

  • Increased Accessibility: Generative AI models are becoming more accessible and affordable, enabling their use across various sectors.
  • Human Oversight is Essential: While AI accelerates innovation, human oversight remains indispensable for ethical deployment.
  • Ethical Considerations Are Central: As AI integration deepens, the importance of addressing ethical challenges cannot be overstated.

Conclusion:

The future of AI promises a world where it seamlessly integrates into our daily lives, driving significant advancements in healthcare, transportation, and beyond. However, this integration also brings forth critical considerations such as ensuring human oversight and fostering ethical AI practices. As we move forward, the key to harnessing these transformative powers will be balancing innovation with responsibility.


Containerization 2025: Why containerd 2.0 and eBPF are Changing Everything

2025-12-22 16:36:01

The containerization landscape, perennially dynamic, has seen a flurry of practical, durable advancements over late 2024 and through 2025. As senior developers, we're past the "hype cycle" and into the trenches, evaluating features that deliver tangible operational benefits and address real-world constraints. This past year has solidified several trends: a relentless push for enhanced security across the supply chain, fundamental improvements in runtime efficiency, a significant leap in build ergonomics for multi-architecture deployments, and the emergence of WebAssembly as a credible, albeit nascent, alternative for specific workloads. Here's a deep dive into the developments that genuinely matter.

The Evolving Container Runtime Landscape: containerd 2.0 and Beyond

The foundation of our containerized world, the container runtime, has seen significant evolution, most notably with the release of containerd 2.0 in late 2024. This isn't merely an incremental bump; it's a strategic stabilization and enhancement of core capabilities seven years after its 1.0 release. The shift away from dockershim in Kubernetes v1.24 pushed containerd and CRI-O to the forefront, solidifying the Container Runtime Interface (CRI) as the standard interaction protocol between the kubelet and the underlying runtime.

containerd 2.0 brings several key features to the stable channel that warrant close attention. The Node Resource Interface (NRI) is now enabled by default, providing a powerful extension mechanism for customizing low-level container configurations. This allows for finer-grained control over resource allocation and policy enforcement, akin to mutating admission webhooks but operating directly at the runtime level. Developers can leverage NRI plugins to inject specific runtime configurations or apply custom resource management policies dynamically, a capability that was previously more cumbersome to implement without direct runtime modifications. Consider a scenario where an organization needs to enforce specific CPU pinning or memory page allocation for performance-critical workloads; an NRI plugin can now mediate this at container startup, ensuring consistent application across diverse node types without altering the core containerd daemon.

Another notable advancement is the stabilization of image verifier plugins. While the CRI plugin in containerd 2.0 doesn't yet fully integrate with the new transfer service for image pulling, and thus isn't immediately available for Kubernetes workloads, its presence signals a robust future for image policy enforcement at pull-time. These plugins are executable programs that containerd can invoke to determine if an image is permitted to be pulled, offering a critical control point for supply chain security. Once integrated with the CRI, this will allow Kubernetes administrators to define granular policies – for instance, only allowing images signed by specific keys or those with a verified Software Bill of Materials (SBOM) – directly at the node level, before a container even attempts to start. This shifts policy enforcement left, preventing potentially compromised images from ever landing on a node.

The containerd configuration has also seen an update, moving to v3. Migrating existing configurations is a straightforward process using containerd config migrate. While most settings remain compatible, users leveraging the deprecated aufs snapshotter will need to transition to a modern alternative. This forces a necessary cleanup, promoting more performant and maintained storage backends.

Bolstering the Software Supply Chain: Sigstore's Ascent

The year 2025 marks a definitive pivot in container image signing, with Sigstore firmly establishing itself as the open standard for software supply chain security. Docker, recognizing the evolving landscape and the limited adoption of its legacy Docker Content Trust (DCT), began formally retiring DCT (which was based on Notary v1) in August 2025. This move, while requiring migration for a small subset of users, clears the path for a more unified and robust approach to image provenance.

graph TD
    A["📥 OIDC Identity"] --> B{"🔍 Fulcio Check"}
    B -->|Valid| C["⚙️ Issue Certificate"]
    B -->|Invalid| D["🚨 Reject Request"]
    C --> E["📊 Sign & Log (Rekor)"]
    D --> F["📝 Audit Failure"]
    E --> G(("✅ Image Signed"))

    classDef input fill:#6366f1,stroke:#4338ca,color:#fff
    classDef process fill:#3b82f6,stroke:#1e40af,color:#fff
    classDef success fill:#22c55e,stroke:#15803d,color:#fff
    classDef error fill:#ef4444,stroke:#b91c1c,color:#fff
    classDef decision fill:#8b5cf6,stroke:#6d28d9,color:#fff
    classDef endpoint fill:#1e293b,stroke:#475569,color:#fff

    class A input
    class C,E process
    class B decision
    class D,F error
    class G endpoint

Sigstore addresses the critical need for verifiable supply chain integrity through a suite of tools: Cosign for signing and verifying OCI artifacts, Fulcio as a free, public root Certificate Authority issuing short-lived certificates, and Rekor as a transparency log for all signing events. This trifecta enables "keyless" signing, a significant paradigm shift. Instead of managing long-lived static keys, developers use OIDC tokens from their identity provider (e.g., GitHub, Google) to obtain ephemeral signing certificates from Fulcio. Cosign then uses this certificate to sign the image, and the signature, along with the certificate, is recorded in the immutable Rekor transparency log.

For instance, signing an image with Cosign is remarkably streamlined:

# Authenticate with your OIDC provider
# cosign will often pick this up automatically from environment variables.

# Sign an image (keyless)
cosign sign --yes <your-registry>/<your-image>:<tag>

# Verify an image
cosign verify <your-registry>/<your-image>:<tag>

The --yes flag in cosign sign bypasses interactive prompts, crucial for CI/CD pipelines. The verification step, cosign verify, queries Rekor to ensure the signature's authenticity and integrity, linking it back to a verifiable identity. This provides strong, auditable provenance without the operational overhead of traditional PKI.

Turbocharging Builds with Buildx and BuildKit

Docker's Buildx, powered by the BuildKit backend, has matured into an indispensable tool for any serious container development workflow, particularly for multi-platform image builds and caching strategies. The traditional docker build command, while functional, often suffers from performance bottlenecks and limited cross-architecture support. BuildKit fundamentally re-architects the build process using a Directed Acyclic Graph (DAG) for build operations, enabling parallel execution of independent steps and superior caching mechanisms.

The standout feature, multi-platform builds, is no longer a niche capability but a practical necessity in a world diversifying rapidly into amd64, arm64, and even arm/v7 architectures. Buildx allows a single docker buildx build command to produce a manifest list containing images for multiple target platforms, eliminating the need for separate build environments.

Consider this Dockerfile:

# Dockerfile
FROM --platform=$BUILDPLATFORM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /app/my-app ./cmd/server

# The final stage defaults to the target platform, so no --platform flag is needed here
FROM alpine:3.18
COPY --from=builder /app/my-app /usr/local/bin/my-app
CMD ["/usr/local/bin/my-app"]

To build for both linux/amd64 and linux/arm64 and push to a registry:

docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry/my-app:latest \
  --push .

Performance-wise, BuildKit's caching is superior. Beyond local layer caching, Buildx supports registry caching, where previous build layers pushed to a registry can be leveraged for subsequent builds, significantly reducing build times for frequently updated projects. This is particularly impactful in CI/CD pipelines where build environments are often ephemeral.

eBPF: Redefining Kubernetes Networking and Observability

The integration of eBPF (extended Berkeley Packet Filter) into Kubernetes networking and observability stacks has moved from experimental curiosity to a foundational technology in late 2024 and 2025. eBPF allows sandboxed programs to run directly within the Linux kernel, triggered by various events, offering unprecedented performance and flexibility without the overhead of traditional kernel-to-user-space context switches.

For networking, eBPF-based Container Network Interface (CNI) plugins like Cilium and Calico are actively replacing or offering superior alternatives to iptables-based approaches. The core advantage lies in efficient packet processing. Instead of traversing complex iptables chains for every packet, eBPF programs can make routing and policy decisions directly at an earlier point in the kernel's network stack. This drastically reduces CPU overhead and latency, especially in large-scale Kubernetes clusters.

Beyond performance, eBPF profoundly enhances observability. By attaching eBPF programs to system calls, network events, and process activities, developers can capture detailed telemetry data directly from the kernel in real-time. Tools like Cilium Hubble leverage eBPF to monitor network flows in Kubernetes, providing deep insights into service-to-service communication, including latency, bytes transferred, and policy enforcement decisions.
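To make kernel-side observability concrete, here is a minimal sketch using the BCC Python bindings, a common way to prototype eBPF programs (requires root and the bcc package; this is an illustrative example, not tied to any particular CNI). It logs every execve on the node, the same mechanism that runtime-security tooling builds on:

from bcc import BPF

# eBPF program source, compiled and loaded into the kernel at runtime.
prog = r"""
int trace_execve(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the execve syscall entry point (name resolved per kernel version).
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

print("Tracing execve calls... Ctrl-C to stop.")
b.trace_print()  # stream bpf_trace_printk output from the kernel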

WebAssembly: A New Paradigm for Cloud-Native Workloads

WebAssembly (Wasm), initially conceived for the browser, has undeniably crossed the chasm into server-side and cloud-native environments, presenting a compelling alternative to traditional containers for specific use cases in 2025. Its core advantages—blazing fast startup times, minuscule footprint, and strong sandbox security—make it particularly attractive for serverless functions and edge computing. As the parallel evolution of Node.js, Deno, and Bun in 2025 shows, the runtime landscape is diversifying to meet these performance demands.

Wasm modules typically start in milliseconds, a stark contrast to the seconds often required for traditional container cold starts. Integrating Wasm with Kubernetes is primarily achieved through CRI-compatible runtimes and shims. Projects like runwasi provide a containerd shim that enables Kubernetes to schedule Wasm modules alongside traditional Linux containers.

For example, to run a Wasm application with crun:

# runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm-crun
handler: crun
---
# wasm-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
  annotations:
    module.wasm.image/variant: compat
spec:
  runtimeClassName: wasm-crun
  containers:
  - name: my-wasm-app
    image: docker.io/myuser/my-wasm-app:latest
    command: ["/my-wasm-app"]

Kubernetes API Evolution: Staying Ahead of Deprecations

Kubernetes consistently refines its API surface to introduce new capabilities and remove deprecated features. In late 2024 and 2025, vigilance against API deprecations and removals remains a critical operational task. The Kubernetes project adheres to a well-defined deprecation policy across Alpha, Beta, and GA APIs.

The implications are clear: developers must actively monitor deprecation warnings. Since Kubernetes v1.19, any request to a deprecated REST API returns a warning. Automated tooling and CI/CD pipeline checks are essential for identifying resources using deprecated APIs.

# Example: ask the API server which deprecated APIs have been requested recently
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

Proactive migration planning, well before an upgrade window, is the only sound approach to maintaining cluster stability. The Kubernetes v1.34 release (August 2025) and v1.31 (August 2024) both included deprecations and removals that required attention.

Enhanced Container Security Primitives: Beyond Image Scanning

While vulnerability scanning remains a fundamental best practice, recent developments focus on bolstering security primitives at the runtime level. A significant advancement in containerd 2.0 is the improved support for User Namespaces. This feature allows containers to run as root inside the container but map to an unprivileged User ID (UID) on the host system, drastically reducing the blast radius of a container escape.

Beyond user namespaces, the emphasis on immutable infrastructure and runtime monitoring has intensified. Runtime security solutions, often leveraging eBPF, provide crucial visibility into container behavior, detecting anomalies and policy violations in real-time. Furthermore, the push for least privilege extends to the container's capabilities. Developers are encouraged to drop unnecessary Linux capabilities (e.g., CAP_NET_ADMIN) and enforce read-only filesystems where possible.

Developer Experience and Tooling Refinements

The continuous refinement of developer tooling, particularly around Docker Desktop and local Kubernetes environments, has been a persistent theme throughout 2025. These improvements focus on enhancing security and simplifying complex workflows for the millions of developers relying on these platforms.

Docker Desktop has seen a steady stream of security patches addressing critical vulnerabilities (e.g., CVE-2025-9074). For local Kubernetes development, tools like kind and minikube continue to evolve, offering faster cluster provisioning. The integration of BuildKit and Buildx into local environments has significantly improved the efficiency of image building, particularly for those working with multi-architecture targets.

In essence, the developer experience has become more secure by default, with an emphasis on robust build processes and continuous security patching. The tools are making existing workflows more practical, secure, and efficient, which for senior developers, is often the most valuable kind of progress.


This article was originally published on DataFormatHub, your go-to resource for data format and developer tools insights.