Blog of The Practical Developer

A constructive and inclusive social network for software developers.

Majestic Labs vs. the Memory Wall

2025-11-11 23:47:01

Originally published at allenarch.dev

On November 10, 2025, three former Google and Meta silicon executives announced they've raised $100 million to build what they're calling a fundamentally different kind of AI server. Not faster chips. Not more GPUs. More memory, orders of magnitude more, packed into a single box that could replace entire racks of today's hardware (CNBC, Nov 10).

Majestic Labs' pitch is simple: the bottleneck in AI inference isn't compute anymore. It's memory. Specifically, the fixed compute-to-memory ratio that every GPU ships with and the KV cache bloat that comes free with every long-context request.

Key context:

  • Majestic Labs: $100M raised (Series A led by Bow Wave Capital, Sept 2025); founders Ofer Shacham (CEO, ex-Google/Meta silicon lead), Sha Rabii (President, ex-Google Argos video chip lead), Masumi Reynders (COO, ex-Google TPU biz dev). Claims patent-pending architecture delivers 1,000× typical server memory. Prototypes target 2027 (CNBC, Nov 10)
  • Global AI capex surge: Alphabet $91–93B (2025), Meta ≥$70B, Microsoft $34.9B in Q3 alone (+74% YoY), Amazon ~$118B (TrendForce, Oct 30)
  • vLLM PagedAttention: 2–4× throughput vs state-of-the-art at same latency; achieves near-zero KV cache waste (arXiv, Sept 2023)
  • CXL memory pooling: 100 TiB commercial pools available in 2025; XConn/MemVerge demo showed >5× performance boost for AI inference vs SSD (AI-Tech Park, Oct 2025)

The memory wall isn't new, but the scale is

You feel it first as a ceiling, not a wall. Batch a few more requests and tokens-per-second look great until you stretch the context or let tenant count creep up. Suddenly the GPU says no. Not because FLOPs tapped out. Because memory did.

"Nvidia makes excellent GPUs and has driven incredible AI innovation. We're not trying to replace GPUs across the board we're solving for memory-intensive AI workloads where the fixed compute-to-memory ratio becomes a constraint."

Ofer Shacham, Majestic Labs CEO (CNBC, Nov 10)

Translation: inference is a KV-cache business. Every token you generate requires storing attention keys and values for every previous token in the sequence, so KV cache memory grows linearly with context length (attention compute is what grows quadratically). Serve multi-tenant RAG and your index footprints follow you into VRAM. Disaggregate prefill and decode, and now you're passing state across workers, which means duplicating it or bottlenecking on fabric.
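To see the scale, here's a back-of-envelope sketch using the standard KV cache size formula; the model dimensions below are illustrative (roughly 7B-class), not taken from the article:

# KV cache bytes = 2 (K and V) x layers x kv_heads x head_dim
#                  x seq_len x batch x bytes per element
layers, kv_heads, head_dim = 32, 32, 128   # illustrative 7B-class shape
bytes_per_elem = 2                         # fp16
seq_len, batch = 32_000, 8                 # long context, modest batch

per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
total = per_token * seq_len * batch
print(f"{per_token / 2**10:.0f} KiB per token, {total / 2**30:.0f} GiB total")
# ~512 KiB per token, ~125 GiB total: more HBM than any single GPU
# carries, for just one 32k-context batch of 8.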

The cheapest way to buy back throughput is often not "more compute." It's more room.

Software has done heroic work to bend this curve. vLLM's PagedAttention achieves near-zero KV cache waste by borrowing virtual memory tricks from operating systems, delivering 2–4× higher throughput than prior systems at the same latency (arXiv, Sept 2023). NVIDIA's open-source Grove (part of Dynamo) popularized disaggregated prefill/decode workers so you can scale the hot path without over-provisioning the cold one (NVIDIA Developer Blog, Nov 2025). And CXL memory pooling moved from "interesting research" to 100 TiB commercial deployments in 2025, with demos showing >5× performance boost for AI workloads vs SSD-backed memory (AI-Tech Park, Oct 2025).

Still, the physics are stubborn. HBM ships in fixed ratios. Datacenter memory is expensive and fragmented. The only way to get "more room" today is to scale horizontally: add more nodes, duplicate state, pay network tax.

Majestic is betting that flipping the ratio at the box level changes the game. If each server carries 1,000× typical memory (their claim), you consolidate footprint, reduce duplication, and push batch/context limits higher without paying OOM tax.

Prototypes won't land until 2027. Bandwidth, latency, fabric integration, and TCO will determine whether this is a real shift or just a bigger box. But the thesis is grounded: memory-bound workloads are real, growing, and under-served by today's hardware.

What a T4 tells us about the slope

We ran a small vLLM benchmark on Google Colab (Tesla T4 16GB) to make the memory-throughput tradeoff concrete. Not production scale, just the shape of the curve.

Setup:

  • Hardware: Tesla T4 (16GB VRAM, Compute Capability 7.5)
  • Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 (max_model_len=2048, derived from model config)
  • Backend: vLLM with TORCH_SDPA attention (fp16 fallback), gpu_memory_utilization=0.70
  • Test grid: context lengths {512, 1024, 2048} tokens × batch sizes {1, 4}
  • Generation: 32 tokens per request, 3 iterations per config
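For reference, here's a minimal sketch of the harness. It assumes vLLM's offline LLM API; the actual notebook's prompt construction, warmup, and median-of-3 timing are omitted:

import time
from vllm import LLM, SamplingParams

# Offline engine, mirroring the setup above
llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    max_model_len=2048,
    gpu_memory_utilization=0.70,
    dtype="float16",  # T4 has no bf16; SDPA fallback
)
params = SamplingParams(max_tokens=32, temperature=0.0)

for ctx in (512, 1024, 2048):
    for batch in (1, 4):
        # Rough prompt sizing; leaves headroom for the 32 new tokens
        prompts = ["benchmark " * ((ctx - 64) // 3)] * batch
        t0 = time.perf_counter()
        outputs = llm.generate(prompts, params)
        dt = time.perf_counter() - t0
        new_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
        print(f"ctx={ctx} batch={batch} tps={new_tokens/dt:.2f} e2e={dt:.2f}s")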

Results:

Context  Batch  Decode TPS (median)  E2E Latency (median)  GPU Memory Used
512      1      4.57                 n/a                   ~10,990 MiB
512      4      98.43                1.30s                 ~11,006 MiB
1024     1      26.85                1.19s                 ~10,988 MiB
1024     4      96.81                1.32s                 ~11,010 MiB
2048     1      21.59                1.48s                 ~11,390 MiB
2048     4      80.27                1.59s                 ~11,396 MiB

Key observations:

  1. Batch scales throughput hard. Single-request runs deliver 4.57–26.85 tok/s. Batch 4 jumps to 80–98 tok/s. That's a 3.6–21× multiplier depending on context length.

  2. Long context taxes throughput and memory. At batch 4, going from 512 → 2048 tokens drops TPS from 98.43 → 80.27 (-18%), while GPU memory climbs ~390 MiB. The KV cache is visible in the numbers.

  3. Latency stays reasonable but creeps up. Median end-to-end for 32 tokens ranges 1.19–1.59s. P99 was 1.36–1.61s (not shown in table). This is a small model on modest hardware, so the absolute numbers are forgiving, but the slope is there.

This is exactly where Majestic's thesis lands. If you had 10× or 100× the memory per box, you could push batch and context higher without the OOM cliff. Long-context, multi-tenant inference, the stuff that's memory-bound today, gets headroom to breathe. The TPS-per-server number climbs, and you consolidate footprint instead of scaling horizontally and paying network tax.

It's a small test on a small model. But the curve is the curve. Memory limits batch. Batch limits throughput. More memory buys you more throughput per box for the workloads that matter.


Between Code and Pedagogy: What I Learned Teaching AI to Write Functional Programming Tutorials

2025-11-11 23:40:23

This semester, I decided to try something different in the Introduction to Functional Programming course.

Instead of keeping the focus entirely on Haskell, I used the language only in the first third of the course, as a conceptual foundation.

In the later stages, students worked with two functional stacks widely used on the web: Clojure/ClojureScript and Elixir/Phoenix LiveView.

The goal was twofold: to explore the practical applicability of modern functional programming and to investigate the role of Artificial Intelligence in producing teaching materials and software architectures.

Two approaches, the same problem

Both tutorials solve the same challenge: building a complete, persistent Todo List application, but with distinct philosophies.

Version                  Stack                                            Approach
Clojure/ClojureScript    Reagent 2.0 (React 18), Ring, Reitit, next.jdbc  Explicit reactivity on the frontend and a modular REST API
Elixir/Phoenix LiveView  LiveView, Ecto, Tailwind                         Reactivity integrated into the backend, no intermediate API

Both tutorials can be accessed here:

The role of Artificial Intelligence

The tutorials were produced in collaboration with several AI models (ChatGPT, Gemini, and Perplexity), starting from detailed prompts.

The AIs managed to generate working code and coherent explanations, but without pedagogical structure.

What was missing was didactic intent: the why behind each decision, the sequencing between steps, and reflection on common mistakes.

The AIs delivered roughly 80% of the technical work.

The remaining 20%, the most important part, depended on human engineering: testing, fixing, modularizing, and turning the material into a learning narrative.

It took about six hours of curation, review, and debugging before the content reached a consistent, instructive standard.

“Producing code with AI is simple. Turning it into knowledge takes experience, method, and purpose.”

What this experience revealed

The process reinforced an essential lesson: AI is a powerful tool for accelerating development and inspiring solutions, but human mediation remains irreplaceable.

It is the teacher, the researcher, and the engineer who assign meaning, build context, and turn code into learning.

These tutorials are more than technical guides.

They are an experiment in how to teach functional programming in the 21st century, integrating technology, pedagogy, and critical reflection on the role of artificial intelligence in the learning process.

📚 Tutorial references

Published by Sergio Costa

#Clojure #Elixir #ProgramaçãoFuncional #Educação #InteligênciaArtificial #Des

NestJS Week 2: Exception Filters, Query Params, and Why You Should Stop Using Try-Catch Everywhere

2025-11-11 23:35:29

Welcome back! If you missed Part 1 of my NestJS journey, I covered the basics: modules, controllers, services, DTOs, and validation.

Now we're getting into the good stuff—the patterns that separate messy code from clean, scalable backends.

What I learned this week:

  • Global exception filters
  • Environment configuration with ConfigService
  • The difference between @Param, @Query, and @Body
  • Building flexible filtering logic for APIs

Let's dive in.

Day 7: Global Exception Filters - Stop Repeating Yourself

The Problem with Try-Catch Everywhere

When I first started, my controllers looked like this:

@Get(':id')
async findOne(@Param('id') id: string) {
  try {
    return await this.productsService.findOne(id);
  } catch (error) {
    if (error instanceof NotFoundException) {
      return { statusCode: 404, message: 'Product not found' };
    }
    return { statusCode: 500, message: 'Something went wrong' };
  }
}

This is scattered, repetitive, and hard to maintain. Every endpoint needs the same error handling logic copy-pasted.

The Solution: Global Exception Filters

NestJS has a better way. Exception filters catch all thrown errors in one place and format responses consistently.

Here's how I set it up:

Step 1: Create the Exception Filter

// global-filters/http-exception-filter.ts
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  Logger,
} from '@nestjs/common';
import { Response } from 'express';
import { ConfigService } from '@nestjs/config';

@Catch(HttpException)
export class HttpExceptionFilter implements ExceptionFilter {
  private readonly logger = new Logger(HttpExceptionFilter.name);

  constructor(private configService: ConfigService) {}

  private buildErrorResponse(
    status: number,
    errorResponse: any,
    includeStack: boolean,
    stack?: string,
  ) {
    const base = {
      success: false,
      statusCode: status,
      error: errorResponse,
      timestamp: new Date().toISOString(),
    };
    return includeStack ? { ...base, stackTrace: stack } : base;
  }

  catch(exception: HttpException, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const status = exception.getStatus();
    const errorResponse = exception.getResponse();

    const isProduction =
      this.configService.get<string>('NODE_ENV') === 'production';

    this.logger.error(
      `Status ${status} Error Response: ${JSON.stringify(errorResponse)}`,
    );

    response.status(status).json(
      isProduction
        ? this.buildErrorResponse(status, errorResponse, false)
        : this.buildErrorResponse(status, errorResponse, true, exception.stack),
    );
  }
}

What's happening here?

  1. @Catch(HttpException) - This filter only catches HTTP exceptions
  2. Logger - Logs errors to console (can extend to log files/Sentry later)
  3. ConfigService - Reads environment variables
  4. Environment-based responses:
    • Development: Shows full stack traces for debugging
    • Production: Hides sensitive details

Step 2: Set Up Environment Variables

Install the config package:

npm i --save @nestjs/config

Create a .env file:

NODE_ENV=development
APP_PORT=3000

Update app.module.ts:

import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }), // Makes .env available everywhere
    // ... other modules
  ],
})
export class AppModule {}

Step 3: Apply the Filter Globally

In main.ts:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { ValidationPipe } from '@nestjs/common';
import { HttpExceptionFilter } from './global-filters/http-exception-filter';
import { ConfigService } from '@nestjs/config';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const configService = app.get<ConfigService>(ConfigService);

  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,
      forbidNonWhitelisted: true,
      transform: true,
    }),
  );

  app.useGlobalFilters(new HttpExceptionFilter(configService));

  const port = configService.get<number>('APP_PORT') || 3000;
  await app.listen(port);
}
bootstrap();

Now Your Controllers Are Clean AF

@Get(':id')
findOne(@Param('id') id: string) {
  return this.productsService.findOne(+id);
}

That's it. No try-catch. If the service throws a NotFoundException, the filter catches it and returns:

Development response:

{
  "success": false,
  "statusCode": 404,
  "error": "Product not found",
  "timestamp": "2025-11-11T10:30:00.000Z",
  "stackTrace": "Error: Product not found\n    at ProductsService.findOne..."
}

Production response:

{
  "success": false,
  "statusCode": 404,
  "error": "Product not found",
  "timestamp": "2025-11-11T10:30:00.000Z"
}
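For completeness, here's roughly what the service side could look like; the article doesn't show it, so the product shape below is assumed from the controller snippets:

// products.service.ts (sketch; names assumed from the controller snippets)
import { Injectable, NotFoundException } from '@nestjs/common';

@Injectable()
export class ProductsService {
  private products = [
    { id: 1, productName: 'Laptop', productType: 'electronics' },
  ];

  findOne(id: number) {
    const product = this.products.find((p) => p.id === id);
    if (!product) {
      // Thrown here; caught and formatted by the global HttpExceptionFilter
      throw new NotFoundException('Product not found');
    }
    return product;
  }
}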

Why This Approach Wins

  1. Centralized error handling - One place to manage all errors
  2. Cleaner code - Controllers focus on business logic
  3. Consistent API responses - Frontend devs will love you
  4. Better debugging - Errors are logged and formatted properly
  5. Scalability - Add error tracking (Sentry, Datadog) in one place

Note: This doesn't mean you'll never use try-catch in NestJS. But for standard HTTP exceptions, filters handle it better.
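One caveat: @Catch(HttpException) ignores anything that isn't an HttpException. If you also want unexpected errors to come back in the same response shape, a broader companion filter is one option. This is a sketch, not part of the original setup:

// global-filters/all-exceptions-filter.ts (illustrative companion filter)
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  HttpStatus,
  Logger,
} from '@nestjs/common';
import { Response } from 'express';

@Catch() // no argument: catches anything thrown, not just HttpException
export class AllExceptionsFilter implements ExceptionFilter {
  private readonly logger = new Logger(AllExceptionsFilter.name);

  catch(exception: unknown, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();

    // Reuse the HTTP status when available, otherwise fall back to 500
    const status =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;

    this.logger.error(`Unhandled exception: ${String(exception)}`);

    response.status(status).json({
      success: false,
      statusCode: status,
      error: 'Internal server error',
      timestamp: new Date().toISOString(),
    });
  }
}

A common convention when combining filters is to register the catch-all first, so the more specific HttpExceptionFilter takes precedence for HTTP exceptions.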

Days 8-12: WordPress Side Quest

Real talk—I worked on my 9-5 WordPress project these days. Learning NestJS doesn't pay the bills yet. 😅

But I kept the concepts fresh by thinking about how I'd structure the WordPress project if it were NestJS (spoiler: it would be way cleaner).

Day 13: Mastering @Param, @Query, and @Body

Time to understand how to extract data from incoming requests properly.

The Three Decorators

Decorator  Used For               Example URL
@Body()    POST/PUT request body  N/A
@Param()   Route parameters       /products/:id
@Query()   Query strings          /products?name=shirt&type=clothing

@Body - Extracting Request Body

Used for creating or updating resources:

@Post()
create(@Body() createProductDto: CreateProductDto) {
  return this.productsService.create(createProductDto);
}

Request body:

{
  "productName": "Laptop",
  "productType": "electronics",
  "price": 50000
}

@Param - Extracting Route Parameters

Used for identifying specific resources:

@Get(':id')
findOne(@Param('id') id: string) {
  return this.productsService.findOne(+id);
}

URL: http://localhost:3000/products/5

You can extract multiple params:

@Get(':category/:id')
findByCategory(
  @Param('category') category: string,
  @Param('id') id: string,
) {
  return this.productsService.findByCategoryAndId(category, +id);
}

URL: http://localhost:3000/products/electronics/5

@Query - Building Flexible Filters

This is where things get interesting. Query parameters are optional and perfect for filtering.

Step 1: Create a Query DTO

// dto/find-product-query.dto.ts
import { IsOptional, IsString } from 'class-validator';

export class FindProductQueryDto {
  @IsOptional()
  @IsString()
  productName?: string;

  @IsOptional()
  @IsString()
  productType?: string;
}

The @IsOptional() decorator means these fields don't have to be present.

Step 2: Use It in the Controller

@Get()
findAll(@Query() query: FindProductQueryDto) {
  return this.productsService.findAll(query);
}

Step 3: Implement Filtering Logic in the Service

findAll(query?: FindProductQueryDto) {
  // If no query params, return all products
  if (!query || Object.keys(query).length === 0) {
    return {
      message: 'All products fetched successfully',
      data: this.products,
    };
  }

  // Filter based on query params
  const filtered = this.products.filter((prod) => {
    const matchesName = query.productName
      ? prod.productName.toLowerCase().includes(query.productName.toLowerCase())
      : true;

    const matchesType = query.productType
      ? prod.productType.toLowerCase() === query.productType.toLowerCase()
      : true;

    return matchesName && matchesType; // Both conditions must match
  });

  return {
    message: 'Filtered products',
    data: filtered,
  };
}

How It Works

Request: GET /products?productName=laptop&productType=electronics

Logic:

  1. Check if productName matches (case-insensitive, partial match)
  2. Check if productType matches (case-insensitive, exact match)
  3. Return products that match both conditions

Using && vs ||:

  • && (AND) - Narrows the search (must match all filters)
  • || (OR) - Broadens the search (matches any filter)

I used && because I wanted to narrow down results progressively.
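One subtlety if you ever flip to ||: the unset-filter fallback has to change too. With the `: true` fallbacks above, OR would match every product whenever one filter is omitted. A sketch of the OR variant (illustrative, not from my service code):

// OR variant: a product matches if ANY provided filter hits.
// Note the `false` fallbacks: in OR mode, an unset filter must not
// auto-match, or the filtering stops doing anything.
function matchesAnyFilter(
  prod: { productName: string; productType: string },
  query: { productName?: string; productType?: string },
): boolean {
  const matchesName = query.productName
    ? prod.productName.toLowerCase().includes(query.productName.toLowerCase())
    : false;
  const matchesType = query.productType
    ? prod.productType.toLowerCase() === query.productType.toLowerCase()
    : false;
  return matchesName || matchesType;
}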

When to Use What?

Use @Query for:

  • Optional filtering (/products?name=laptop&sort=asc)
  • Pagination (/products?page=1&limit=10) - see the sketch after these lists
  • Search functionality

Use @Param for:

  • Required identifiers (/products/:id)
  • Hierarchical resources (/users/:userId/orders/:orderId)

Use @Body for:

  • Creating resources (POST)
  • Updating resources (PUT/PATCH)
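Pagination shows up in the @Query list above but isn't covered by the filter DTO, so here's one hedged way to extend the pattern; @Type comes from class-transformer, and the file and class names are illustrative:

// dto/pagination-query.dto.ts (illustrative, not from this week's code)
import { IsInt, IsOptional, Min } from 'class-validator';
import { Type } from 'class-transformer';

export class PaginationQueryDto {
  @IsOptional()
  @Type(() => Number) // query strings arrive as strings; coerce to number
  @IsInt()
  @Min(1)
  page?: number = 1;

  @IsOptional()
  @Type(() => Number)
  @IsInt()
  @Min(1)
  limit?: number = 10;
}

With transform: true already set in the global ValidationPipe (see main.ts above), page and limit arrive as real numbers, and an in-memory service can slice results with .slice((page - 1) * limit, page * limit).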

Key Takeaways from Week 2

  1. Global exception filters > try-catch everywhere
  2. ConfigService makes environment management clean
  3. @Query is perfect for flexible, optional filtering
  4. @Param is for required identifiers in the URL
  5. DTOs with @IsOptional() make query validation easy

What's Next?

Now that I've got the fundamentals down, it's time to level up:

  • Database integration (Mongoose + MongoDB)
  • Authentication with JWT
  • Guards and middleware
  • Database relations (User → Products → Orders)

If you're following along, let me know what you'd like me to cover next!

Connect With Me

I'm documenting my transition from frontend to full-stack. Let's connect and learn together!

🌐 Portfolio: tochukwu-nwosa.vercel.app

🚀 Latest Project: Tech Linkup - Discover tech events in Nigerian cities

💼 LinkedIn: nwosa-tochukwu

🐙 GitHub: @tochukwunwosa

🐦 Twitter/X: @tochukwudev

📝 Dev.to: @tochukwu_dev

Drop a comment if you're also learning NestJS or have tips for backend beginners! 👇

This is part of my #LearningInPublic series where I document my journey from frontend to full-stack development. Follow along for more!

Building Lootboxes with Verifiable Randomness on Polkadot Parachains

2025-11-11 23:35:02

Key Takeaways:

  • The Ideal Network is Polkadot’s randomness daemon for ink! smart contracts and parachain runtimes.

  • In this guide, we will implement a randomized lootbox in an ink! smart contract with the IDN!

Introduction

The Ideal Network enables verifiable randomness as a service (VRaaS) for the Polkadot ecosystem, like /dev/random for blockchains. Any parachain runtime or ink! smart contract (v5+) can now access cryptographically secure randomness through a simple subscription model!

The IDN brings unpredictable, verifiable randomness as a service for parachain runtimes and smart contracts:

  • On-demand Randomness - All outputs are unpredictable until revealed, unlike consensus-driven VRFs
  • Subscription-based Pricing - Predictable costs under your control, not per-call fees.
  • Blockchain-Native - No manual oracle interaction is needed for parachains, just create a subscription and receive randomness.
  • Multiple Integration Paths - The solution can be integrated with smart contracts or directly into a runtime. In addition, contracts deployed on the IDN receive randomness for free.

The full documentation and more integration guides can be found at https://docs.idealabs.network.

Just show me the code!

→ The full code for this contract is available on GitHub: https://github.com/ideal-lab5/contracts/blob/main/examples/vraas/lootbox/lib.rs

→ The IDN-SDK code is available at https://github.com/ideal-lab5/idn-sdk

Building Lootboxes with VRaaS

This guide demonstrates how to build a basic lootbox with ink! smart contracts using verifiable randomness as a service from the Ideal Network. Our lootbox will be quite simple, with the flow being [register → receive randomness → get random reward].

Setup & Prerequisites

  1. Install and Configure Rust

To get started, we need to install Rust and configure the toolchain to default to the latest stable version by running:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default stable
rustup update
rustup target add wasm32-unknown-unknown
rustup component add rust-src
  2. Install Cargo-Contract

The cargo-contract tool helps you set up and manage Wasm smart contracts written with ink!. To get started, simply run

cargo install --force --locked cargo-contract

→ Note: in case of issues, the full installation guide is here.

  3. Set Up the Execution Environment

You can deploy the resulting contract on any chain that has an open HRMP channel with the Ideal Network. For easy testing, you can check out the IDN from GitHub and run it on zombienet, giving you a local version of the IDN with RPC port 9933 and an example consumer chain on port 9944. To do so:

Check out the repo: git clone https://github.com/ideal-lab5/idn-sdk

Create a release build from the root: cargo build --release

Install zombienet: navigate to the e2e dir, follow the setup guide at https://github.com/paritytech/zombienet, and finally run:

cd e2e/
zombienet setup polkadot -y
export PATH=/home/driemworks/ideal/idn-sdk/e2e:$PATH
zombienet spawn -p native zombienet.toml 

⚠️ Once both the IDN and IDNC (test chain for deploying contracts, receiving subscriptions) are ready, you must open an HRMP channel between them. We have provided a convenient script to handle this:

cd e2e/scripts/setup
chmod +x open_hrmp.sh
./open_hrmp.sh

Failing to open an HRMP channel will guarantee that all calls to the IDN will fail.

Create the Contract

First, we will create a new contract and install the idn-contracts library:

cargo contract new lootbox
cd lootbox

Now, we need to add both idn-contracts and parity-scale-codec to the Cargo.toml. Make sure that default-features = false is set, and that you add each dependency to the std features array in your Cargo.toml:

[dependencies]
codec = { package = "parity-scale-codec", version = "3.7.4", default-features = false, features = [
    "derive",
] }
idn-contracts = { version = "0.1.0", default-features = false }
# other deps here

...

[features]
default = ["std"]
std = [
    "codec/std",
    "idn-contracts/std",
    # other deps here
]

Now we’re ready to get building our lootbox!

VRaaS Setup & Integration

First, we need to configure the idn-client in the contract. This is the primary mechanism through which you can create and manage subscriptions, as well as handle the logic for when randomness is received. To configure it, you need to set a few parameters. On Paseo, you can use our test network to deploy contracts for testing out VRaaS (explorer: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fidnc0-testnet.idealabs.network#/explorer).

Parameter                                                     Value
The IDN Parachain ID (on Paseo)                               4502
The IDN Manager Pallet Index in the IDN (on Paseo)            40
Your parachain ID (e.g. IDNC on Paseo)                        4594
Contracts Pallet index on your chain (e.g. IDNC on Paseo)     16
Contracts callback call index on your chain (IDNC on Paseo)   6
Maximum XCM fees                                              e.g. 1_000_000_000 (1 token)

#![cfg_attr(not(feature = "std"), no_std, no_main)]

#[ink::contract]
mod lootbox {

    use idn_contracts::prelude::*;

    #[ink(storage)]
    pub struct Lootbox {
        idn_client: IdnClient,
        subscription_id: Option<SubscriptionId>,
    }

    impl Lootbox {

        #[ink(constructor)]
        pub fn new() -> Self {
            Self {
                idn_client: IdnClient::new(
                    4502, 40, 4594, 16, 6, 1_000_000_000
                ),
                subscription_id: None,
            }
        }

        #[ink(message)]
        pub fn create_subscription(&mut self) {
            self.subscription_id = Some(
                self.idn_client
                    .create_subscription(100, 10, None, None, None, None)
                    .unwrap(),
            );
        }
    }
}

For a sanity check, run cargo contract build --release to compile the contract, and resolve any issues now.

Note: by default, subscriptions are made with 100 ‘credits’ with randomness delivered every 10 blocks. You can change this by specifying the first two parameters passed to the create_subscription call.

→ See our price simulator for a full breakdown of subscription pricing.

⚠️ There MUST be an open HRMP channel between the target parachain and the Ideal Network: https://substrate.stackexchange.com/questions/5445/how-to-open-hrmp-channels-between-parachains.

Adding Lootbox Mechanics

Now we’re ready to modify the contract to introduce our lootbox mechanics!

User Registration Mechanics

For our lootbox, we need to let users register to open the lootbox when we receive a pulse of randomness.

Update imports and the storage struct, and let's introduce a new event type:

use idn_contracts::prelude::*;
use ink::prelude::vec::Vec;
use ink::storage::Mapping;

#[ink(storage)]
pub struct Lootbox {
    idn_client: IdnClient,
    subscription_id: Option<SubscriptionId>,
    // track registered users for the next lootbox
    registered_users: Vec<AccountId>,
    // track if a user is already registered (to prevent duplicates)
    is_registered: Mapping<AccountId, bool>,
}

#[ink(event)]
pub struct UserRegistered {
    #[ink(topic)]
    user: AccountId,
    total_registered: u32,
}

Update the constructor:

#[ink(constructor)]
pub fn new() -> Self {
    Self {
        idn_client: IdnClient::new(4502, 40, 4594, 16, 6, 1_000_000_000),
        subscription_id: None,
        registered_users: Vec::new(),
        is_registered: Mapping::new(),
    }
}

Remove the default flipper functions and replace them with registration functionality:

/// Register to get a reward with the next dispatch
#[ink(message)]
pub fn register(&mut self) {
    let caller = self.env().caller();

    // Check if user is already registered
    if self.is_registered.get(caller).unwrap_or(false) {
        panic!("Already registered for this lootbox");
    }

    // Add user to registered list
    self.registered_users.push(caller);
    self.is_registered.insert(caller, &true);

    // Emit event
    self.env().emit_event(UserRegistered {
        user: caller,
        total_registered: u32::try_from(self.registered_users.len()).unwrap(),
    });
}

/// Get the number of registered users
#[ink(message)]
pub fn get_registered_count(&self) -> u32 {
    u32::try_from(self.registered_users.len()).unwrap()
}

/// Check if caller is registered
#[ink(message)]
pub fn is_user_registered(&self) -> bool {
    let caller = self.env().caller();
    self.is_registered.get(caller).unwrap_or(false)
}

/// Get all registered users (for testing/admin purposes)
#[ink(message)]
pub fn get_registered_users(&self) -> Vec<AccountId> {
    self.registered_users.clone()
}

Now, let’s build the lootbox

Define the lootbox rewards schema (put it above the lootbox struct)

#[derive(Debug, PartialEq, Eq, Clone)]
#[ink::scale_derive(Encode, Decode, TypeInfo)]
pub enum Reward {
    Bronze,  // 50% chance
    Silver,  // 30% chance
    Gold,    // 15% chance
    Diamond, // 5% chance
}

/// track user’s reward counts
#[derive(Debug, Default, PartialEq, Eq, Clone)]
#[ink::scale_derive(Encode, Decode, TypeInfo)]
#[cfg_attr(feature = "std", derive(ink::storage::traits::StorageLayout))]
pub struct RewardStats {
    bronze: u32,
    silver: u32,
    gold: u32,
    diamond: u32,
}

Add a new mapping to track user rewards, update the constructor, and introduce new getters.

#[ink(storage)]
pub struct Lootbox {
    idn_client: IdnClient,
    subscription_id: Option<SubscriptionId>,
    // Track registered users for the next lootbox
    registered_users: Vec<AccountId>,
    // Track if a user is already registered (to prevent duplicates)
    is_registered: Mapping<AccountId, bool>,
    // Track rewards received by users
    user_rewards: Mapping<AccountId, RewardStats>,
}

// ink events defined here

impl Lootbox {

        #[ink(constructor)]
        pub fn new() -> Self {
            Self {
                idn_client: IdnClient::new(
                    4502, 40, 4594, 16, 6, 1_000_000_000),
                subscription_id: None,
                registered_users: Vec::new(),
                is_registered: Mapping::new(),
                user_rewards: Mapping::new(),
            }
        }

        /// Get reward stats for a specific user
        #[ink(message)]
        pub fn get_user_rewards(&self, user: AccountId) -> RewardStats {
            self.user_rewards.get(user).unwrap_or_default()
        }

        /// Get reward stats for caller
        #[ink(message)]
        pub fn get_my_rewards(&self) -> RewardStats {
            let caller = self.env().caller();
            self.user_rewards.get(caller).unwrap_or_default()
        }

}

Add new events below the UserRegistered event

#[ink(event)]
pub struct LootboxOpened {
    total_users: u32,
}

#[ink(event)]
pub struct RewardGranted {
    #[ink(topic)]
    user: AccountId,
    reward: Reward,
}

And then implement the core lootbox logic:

/// Process lootbox with random bytes (would be called by VRaaS callback)
#[ink(message)]
pub fn open_lootbox(&mut self, random_bytes: [u8; 32]) {
    let user_count = self.registered_users.len();
    assert!(user_count > 0, "No users registered");

    self.env().emit_event(LootboxOpened {
        total_users: u32::try_from(user_count).unwrap(),
    });

    // Distribute rewards to each registered user
    for (index, user) in self.registered_users.iter().enumerate() {
        // Use different bytes for each user to get unique randomness
        let user_random = self.get_user_random(&random_bytes, index);
        let reward = self.determine_reward(user_random);

        // Update reward stats
        let mut stats = self.user_rewards.get(user).unwrap_or_default();
        match reward {
            Reward::Bronze => stats.bronze = stats.bronze.saturating_add(1),
            Reward::Silver => stats.silver = stats.silver.saturating_add(1),
            Reward::Gold => stats.gold = stats.gold.saturating_add(1),
            Reward::Diamond => stats.diamond = stats.diamond.saturating_add(1),
        };

        self.user_rewards.insert(user, &stats);

        // Emit event
        self.env().emit_event(RewardGranted {
            user: *user,
            reward,
        });
    }

    // Clear registration for next lootbox
    self.clear_registrations();
}

/// Determine reward based on random value
fn determine_reward(&self, random_value: u8) -> Reward {
    // Convert to 0-100 scale
    let roll =
        (u16::from(random_value).saturating_mul(100)).saturating_div(255);

    match roll {
        0..=49 => Reward::Bronze,    // 50%
        50..=79 => Reward::Silver,   // 30%
        80..=94 => Reward::Gold,     // 15%
        95..=100 => Reward::Diamond, // 5%
        _ => Reward::Bronze,
    }
}

/// Get unique random byte for each user
fn get_user_random(&self, random_bytes: &[u8; 32], user_index: usize) -> u8 {
    // Combine multiple bytes for better distribution
    let idx = user_index % 32;
    let next_idx = (user_index.saturating_add(1)) % 32;

    // XOR two bytes for more randomness
    random_bytes[idx] ^ random_bytes[next_idx]
}
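One loose end: open_lootbox calls clear_registrations, which the snippets above never define. Here's a minimal sketch that matches the storage struct, assuming ink!'s Mapping::remove:

/// Reset registration state for the next lootbox round
/// (not shown in the original snippets; a minimal sketch)
fn clear_registrations(&mut self) {
    // Remove each user's duplicate-prevention flag...
    for user in self.registered_users.iter() {
        self.is_registered.remove(user);
    }
    // ...and empty the registration list
    self.registered_users = Vec::new();
}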

Almost There! Implement the randomness receiver to power the lootbox

Paste this beneath the impl Lootbox {} block:

 impl IdnConsumer for Lootbox {
    #[ink(message)]
    fn consume_pulse(
        &mut self,
        pulse: Pulse,
        subscription_id: SubscriptionId,
    ) -> Result<(), Error> {
        let randomness = pulse.rand();
        self.open_lootbox(randomness);
        Ok(())
    }

    // Handle subscription quotes (optional)
    #[ink(message)]
    fn consume_quote(&mut self, quote: Quote) -> Result<(), Error> {
        Ok(())
    }

    // Handle subscription information responses (optional)
    #[ink(message)]
    fn consume_sub_info(&mut self, sub_info: SubInfoResponse) -> Result<(), Error> {
        Ok(())
    }
}

And that’s it! Now we just need to build and deploy the contract, create the subscription, and we are good to go!

Build and Deploy

  1. Build the contract with cargo contract build --release.

  2. Navigate to your chain explorer (e.g. polkadotjs) and upload the contract.
    a. If you are using the IDNC example chain described above, navigate to: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fidnc0-testnet.idealabs.network#/contracts.
    b. Then click ‘upload + deploy code’ and select the lootbox.contract file


⚠️ Important!!! Once the contract is deployed, you must fund the contract on BOTH the target chain and the IDN! Without this, XCM fees cannot be accounted for properly. To do this, simply copy the contract address and send it some tokens.

  3. Create a subscription by calling the create_subscription function

Wait ~3-4 blocks for the XCM to reach the IDN and create a subscription.

  4. Register for the lootbox

  5. Finally, wait until randomness is received, then query to get your rewards!

If you are not receiving pulses, it’s likely that your contract on the IDN is underfunded.

  6. Query the contract to view your rewards!

Troubleshooting

Create subscription is failing!

→ Ensure you have opened the hrmp channel (see above). Failing to open an HRMP channel will guarantee that all calls to the IDN will fail.

My subscription was created but I’m not receiving anything!

→ Make sure you have funded the contract address on BOTH chains (IDN + IDNC or whichever you are using).

→ Ensure your maximum XCM fees are high enough in your contract.

→ Double check that all parameters are correct when configuring the IDN client in your contract.

Conclusion

If you found this useful, give us some love!

How does it scale? The most basic benchmark on MongoDB

2025-11-11 23:32:53

Choosing a database requires ensuring that performance remains fast as your data grows. For example, if a query takes 10 milliseconds on a small dataset, it should still be quick as the data volume increases and should never approach the 100ms threshold that users perceive as waiting. Here’s a simple benchmark: we insert batches of 1,000 operations into random accounts, then query the account with the most recent operation in a specific category—an OLTP scenario using filtering and pagination. As the collection grows, a full collection scan would slow down, so secondary indexes are essential.

We create an accounts collection where each account belongs to a category and holds multiple operations (a typical one-to-many relationship), with an index to support our query on operations per category:

db.accounts.createIndex({
  category: 1,
  "operations.date": 1,
  "operations.amount": 1,
});

To increase data volume, this function inserts operations into accounts (randomly distributed to ten million accounts over three categories):

function insert(num) {
  const ops = [];
  for (let i = 0; i < num; i++) {
    const account  = Math.floor(Math.random() * 10_000_000) + 1;
    const category = Math.floor(Math.random() * 3);
    const operation = {
      date: new Date(),
      amount: Math.floor(Math.random() * 1000) + 1,
    };
    ops.push({
      updateOne: {
        filter: { _id: account },
        update: {
          $set: { category: category },
          $push: { operations: operation },
        },
        upsert: true,
      }
    });
  }
  db.accounts.bulkWrite(ops);
}

This adds 1,000 operations and should take less than one second:

let time = Date.now();
insert(1000);
console.log(`Elapsed ${Date.now() - time} ms`);

A typical query fetches the account, in a category, that had the latest operation:

function query(category) {
  return db.accounts.find(
    { category: category },
    { "operations.amount": 1 , "operations.date": 1 }
  )
    .sort({ "operations.date": -1 })
    .limit(1);
}

Such a query should take a few milliseconds:

let time = Date.now();
print(query(1).toArray());
console.log(`Elapsed ${Date.now() - time} ms`);
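To verify that the index (and not a collection scan) is serving this query as data grows, you can inspect the plan. This uses the standard cursor.explain; the exact output shape varies by MongoDB version:

// Expect an IXSCAN on the compound index rather than a COLLSCAN,
// with totalDocsExamined staying small as the collection grows.
const plan = db.accounts
  .find({ category: 1 }, { "operations.amount": 1, "operations.date": 1 })
  .sort({ "operations.date": -1 })
  .limit(1)
  .explain("executionStats");

printjson(plan.queryPlanner.winningPlan);
print(plan.executionStats.totalDocsExamined);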

I repeatedly insert new operations, in batches of one thousand, in a loop, and measure the time taken by the query while the collection grows, stopping once I reach one billion operations randomly distributed across the accounts:

for (let i = 0; i < 1_000_000; i++) { 
  // more data  
  insert(1000);  
  // same query
  const start = Date.now();  
  const results = query(1).toArray();  
  const elapsed = Date.now() - start;  
  print(results);  
  console.log(`Elapsed ${elapsed} ms`);  
}  
console.log(`Total accounts: ${db.accounts.countDocuments()}`);  

In a scalable database, the response time should not increase significantly while the collection grows. I've run this on MongoDB, and response time stays in single-digit milliseconds. I've also run it on an Oracle Autonomous Database with the MongoDB emulation, but I can't publish the results, as Oracle Corporation forbids the publication of database benchmarks (the DeWitt Clause). However, you can copy/paste this test and watch the elapsed time while the data grows on your own infrastructure.

✅Quick Tip: .wait() a Second

2025-11-11 23:31:12

TL;DR - Add a .wait()

cy.get('YOUR-SELECTOR').wait(1000).click()

For the past week, I’ve been working on the simple task of automating the click of a tab within a page. This should be simple, right?

//Navigate to the page
//Click the tab on the page
//Click the button that displays within the tab contents

.click() failed to open the tab.

.invoke() set the correct attribute, but the clicked tab contents did not display.

I applied breakpoints to the click action and watched each point in the code that triggered the click.

I added .trigger() and used mouseover, hover, and mousedown.

.focus() was added.

At this point, I was throwing everything in my power to click. this. tab.

[GIF: an animated character frantically typing on a laptop, flames behind them, in full panic mode]

It wasn’t until I found this answer on Stack Overflow that I found my first clue.

At first, I laughed. Not at the person but with them. My word! We’re just trying to click a thing! Then, for funsies, I applied their answer to my code.

The tab opened.

[Image: “OMG I FINALLY CLICKED THE TAB”]

Happy to find a solution, but wanting to know what triggered the click, I stripped away each part that might not be necessary. Here’s the result:

cy.get('YOUR-SELECTOR').wait(1000).click()

All it took was a second.

I know, I know: as QA Engineers, we stress and strain not to add waits to our code. It’s a code smell. It will slow down the tests, etc. And that’s correct. But sometimes, in code and life, all you need is a second.
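If the fixed wait still bugs you, one thing worth trying first in similar situations is waiting on a condition instead of a clock; Cypress retries the assertions until they pass or time out. Whether it helps depends on what the tab is actually waiting for (animation, hydration, handlers binding), and in my case the hard wait is what worked:

// Retry-based alternative: assert readiness, then click.
// Cypress re-runs the assertions until they pass or time out.
cy.get('YOUR-SELECTOR')
  .should('be.visible')
  .and('not.be.disabled')
  .click();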

Hope this helps someone and saves you DAYS of searching.

🙏

Want to support what you're reading? You can support my ~~caffeine addiction~~ writing by buying me a ☕️