
RSS preview of the blog of The Practical Developer

High-Throughput IoT Log Aggregator

2025-12-26 03:47:17

Imagine an industrial monitoring system receiving telemetry packets from thousands of sensors every second. The system must:

  • Ingest a batch of raw data packets.
  • Filter inactive sensors.
  • Aggregate temperature readings by Device ID.
  • Generate a textual summary log for the dashboard.

The Challenge: Since this runs continuously, any inefficiency (such as unnecessary memory allocation) triggers garbage-collection pauses, which can cause data loss. We must use memory-efficient patterns.

We used the Go language for this performance test.

System Flow Diagram

    A[Raw Data Ingestion] -->|Slice Pre-allocation| B(Batch Processing)
    B -->|Value Semantics| C{Filter Inactive}
    C -->|Map Pre-allocation| D[Aggregation]
    D -->|strings.Builder| E[Log Generation]
    E --> F[Final Report]
| Code Segment | Optimization Technique | Why it matters in this scenario |
| --- | --- | --- |
| type SensorPacket struct | Struct alignment | Millions of packets are kept in RAM. Saving 8 bytes per packet saves ~8 MB of RAM per million records. |
| make([]SensorPacket, 0, n) | Slice pre-allocation | The simulation loads 100,000 items. Without this, Go would grow the backing array ~18 times, copying memory each time. |
| make(map[int32]float64, 100) | Map size hint | We aggregate by device. Allocating buckets upfront prevents expensive rehashing as the map fills up. |
| var sb strings.Builder | String builder | Generating the report log involves many string appends. The builder avoids creating hundreds of temporary garbage strings. |
| func processBatch(...) | Value vs. pointer | config is passed by value (fast stack access); the report is returned by pointer (avoids copying the large map). |
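
The post offers the full benchmark source on request; in the meantime, here is a minimal sketch of how the five patterns in the table fit together (the batch size, device count, and sensor values are illustrative, not the author's actual test code):

package main

import (
  "fmt"
  "strings"
)

// Fields ordered largest-first so the struct packs into 24 bytes (see layout below).
type SensorPacket struct {
  Timestamp int64   // 8 bytes
  Value     float64 // 8 bytes
  DeviceID  int32   // 4 bytes
  Active    bool    // 1 byte + 3 bytes padding
}

type Config struct{ MinTemp float64 }

type Report struct {
  Sums   map[int32]float64
  Counts map[int32]int
}

// config is passed by value; the report (holding a large map) is returned by pointer.
func processBatch(packets []SensorPacket, config Config) *Report {
  r := &Report{
    Sums:   make(map[int32]float64, 100), // size hint avoids rehashing
    Counts: make(map[int32]int, 100),
  }
  for _, p := range packets {
    if !p.Active { // filter inactive sensors
      continue
    }
    r.Sums[p.DeviceID] += p.Value
    r.Counts[p.DeviceID]++
  }
  return r
}

func buildLog(r *Report) string {
  var sb strings.Builder // one growing buffer instead of many temporary strings
  sb.WriteString("--- BATCH REPORT ---\n")
  for id, sum := range r.Sums {
    fmt.Fprintf(&sb, "Device %d: Avg Temp %.2f\n", id, sum/float64(r.Counts[id]))
  }
  return sb.String()
}

func main() {
  const n = 100_000
  packets := make([]SensorPacket, 0, n) // pre-allocation: 1 allocation instead of ~18 grows
  for i := 0; i < n; i++ {
    packets = append(packets, SensorPacket{
      Timestamp: int64(i),
      Value:     40 + float64(i%100)/10,
      DeviceID:  int32(i % 100),
      Active:    i%7 != 0,
    })
  }
  fmt.Print(buildLog(processBatch(packets, Config{})))
}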

Optimized (Current Code):

[ Timestamp (8) ] [ Value (8) ] [ DeviceID (4) | Active (1) | Pad (3) ]
Total: 24 Bytes / Block

Unoptimized (If mixed):

[ Active (1) | Pad (7) ] [ Timestamp (8) ] [ DeviceID (4) | Pad (4) ] [ Value (8) ]
Total: 32 Bytes / Block (33% more memory than the optimized layout)
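
You can verify both layouts yourself with unsafe.Sizeof; a small check, with the field orders copied from the diagrams above:

package main

import (
  "fmt"
  "unsafe"
)

type Optimized struct { // 8 + 8 + 4 + 1 (+3 padding) = 24 bytes
  Timestamp int64
  Value     float64
  DeviceID  int32
  Active    bool
}

type Unoptimized struct { // padding after Active and DeviceID inflates it to 32 bytes
  Active    bool
  Timestamp int64
  DeviceID  int32
  Value     float64
}

func main() {
  fmt.Println(unsafe.Sizeof(Optimized{}))   // 24
  fmt.Println(unsafe.Sizeof(Unoptimized{})) // 32
}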

Example Results

--- Processing Complete in 6.5627ms ---
--- BATCH REPORT ---
Batch ID: 1766689634
Device 79: Avg Temp 44.52
Device 46: Avg Temp 46.42
Device 57: Avg Temp 45.37
Device 11: Avg Temp 44.54
Device 15: Avg Temp 46.43
... (truncated)

📊 Benchmark Results

The following benchmarks compare the performance of "naive" implementations versus "optimized" patterns using Go's best practices. The tests were run on an Intel Core i5-10300H CPU @ 2.50GHz.

| Operation | Implementation | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) | Performance Gain |
| --- | --- | --- | --- | --- | --- |
| Slice Append | Inefficient | 66,035 | 357,626 | 19 | - |
| Slice Append | Efficient (pre-alloc) | 15,873 | 81,920 | 1 | ~4.1x faster |
| String Build | Inefficient (+) | 8,727 | 21,080 | 99 | - |
| String Build | Efficient (Builder) | 244.7 | 416 | 1 | ~35.6x faster |
| Map Insert | Inefficient | 571,279 | 591,485 | 79 | - |
| Map Insert | Efficient (size hint) | 206,910 | 295,554 | 33 | ~2.7x faster |
| Struct Pass | By value (copy) | 0.26 | 0 | 0 | - |
| Struct Pass | By pointer (ref) | 0.25 | 0 | 0 | Similar |
  • Note on Structs: In micro-benchmarks with tight loops, the Go compiler heavily optimizes (inlines) function calls, making the difference between Value and Pointer negligible. However, in real-world applications with complex call stacks, passing large structs by Pointer significantly reduces CPU usage by avoiding memory copy operations.

💡 Key Takeaways
String Concatenation: Never use + in loops. The strings.Builder approach is over 35 times faster and uses 98% less memory because it avoids creating intermediate garbage strings.

Memory Pre-allocation: Telling Go how much memory you need upfront (for Slices and Maps) eliminates the overhead of resizing and copying data (Rehashing/Reallocation).

Slice: Reduced allocations from 19 to 1.

Map: Reduced allocations from 79 to 33.

Allocations matter: Every allocation (allocs/op) forces the Garbage Collector to work harder. Keeping this number low makes the entire application more stable and responsive.

Contact 📧

  • If anyone is interested, I can also share the code blocks I used for the performance test. You can contact me via my GitHub profile.

Build a Twitter-Style Microblog with Feeds (React Native)

2025-12-26 03:32:29

TL;DR

Build a Twitter-style microblog in React Native using Expo and Stream’s React Native tooling, backed by a minimal Express server that issues short-lived Stream tokens for demo users (Alice and Bob). The tutorial shows how to post short updates, follow and unfollow users, display a real-time aggregated timeline, and add reactions. The entire project is divided into server/ and client/ folders, containing runnable code snippets. The guide ensures that the steps are reproducible in an Expo environment, enabling readers to quickly prototype.

Introduction

Mobile users spend a notable amount of time in apps. According to a report by data.ai (formerly App Annie) via Influencer Marketing Hub, people globally now spend an average of 4.2 hours per day using mobile apps. On community forums like r/reactnative and r/reactjs, developers often ask how to build a Twitter-style feed with posts, follows, and reactions without having to write the backend logic from scratch.

This tutorial walks you through that exact scenario: you’ll build a mobile microblog using Expo and the Stream SDK, paired with a minimal Express backend for token generation and demo user flows.

What You’ll Build

You’ll create a Twitter-style microblog in React Native, powered by Stream’s Feed APIs, that demonstrates a social timeline architecture on mobile. The app will include sign-in with mock users (Alice and Bob), post composition, following, liking, and commenting. It runs locally using Expo, communicates with a lightweight Express backend for token generation, and syncs posts in real-time through Stream’s managed WebSocket infrastructure. The idea is to replicate the social feed mechanics, not Twitter’s design, by using minimal custom logic and relying on Stream’s aggregation and reaction system to handle ranking, pagination, and live updates.

The tutorial is structured for reproducibility. Each section introduces a major feature (posting, timeline aggregation, following) and immediately ties it to a code snippet that can be run within your local environment. The backend remains stateless, focusing solely on secure token creation, while the Stream SDK handles feeds, activities, and reactions. This pattern is valuable for mobile developers who need social functionality but prefer outsourcing heavy feed logic rather than maintaining it in Firebase, MongoDB, or through custom SQL joins.

By the end of this build, you’ll have:

  • A working mobile microblog with user timelines.
  • The follow/unfollow flow updates feeds instantly.
  • Like and comment functionality using Stream reactions.
  • Full real-time updates, delivered through Stream’s managed sockets.

Optional stretch improvements include a basic profile screen (avatar and bio), image upload using Expo Image Picker, and an entry-level moderation setup using Stream’s dashboard. The entire stack runs in two directories: /server for the token backend and /client for the Expo project, allowing developers to explore or extend individual layers without losing context. You can check the repo here.

Screenshots: Homepage, Alice Login View, Like Feature, Tweet Draft, Following Feature, Sign Out, Bob Login View, Alice has 1 follower (Bob), Feed Logs.

Prerequisites

Before writing any code, ensure your development environment is set up for both Node.js and React Native (Expo). You’ll need Node.js 18 or later, npm, and the Expo CLI (npm install -g expo-cli) for project scaffolding. The backend server runs on Express and connects to Stream’s cloud API using getstream. The frontend depends on Stream’s React Native SDK, axios (https://www.npmjs.com/package/axios) for HTTP calls, and a few utility libraries for convenience.

Here’s a quick summary of the packages we’ll install during the tutorial:

npm install react-native-activity-feed getstream axios express dotenv expo-image-picker

(expo-image-picker is optional, used only if you decide to add image uploads.)

You’ll also need a Stream account and API key: visit https://getstream.io/dashboard and sign in, and we’ll grab the keys in the next step. Keep in mind, the secret key never leaves the server.

The final project structure will look like this:

stream-microblog/
 ├── server/
 │   └── index.js
 ├── client/
 │   ├── App.js
 │   └── components/
 │       ├── Feed.js
 │       └── PostComposer.js
 ├── .env
 └── package.json

This setup prepares you for the main tutorial steps: initializing the Stream client, wiring token authentication between backend and mobile, and displaying feeds using Stream’s components. Once your keys and dependencies are ready, the next step will be to scaffold the project and configure the Express token service.

Project Architecture

A working feed system needs to separate client identity, secure token handling, and feed state management. In this project, the React Native app handles UI and local interactions, while the Express backend issues scoped Stream tokens. Stream’s managed API stores and distributes all feed activities.

The architecture is intentionally lean.

The mobile client interacts only with the Express server to obtain a temporary session token. This prevents secret keys from ever being shipped with the app bundle. Once authenticated, the app connects directly to Stream using the issued token. Stream then manages feed aggregation, ranking, and reaction fan-out in real-time, without requiring any database writes on your end.

In this pattern:

  • The React Native app requests a session token for a demo user.
  • The Express backend calls the Stream Node SDK to generate that token and returns it to the client.
  • The Stream API maintains feeds, reactions, and WebSocket subscriptions.

This separation models how production systems handle feed features while maintaining a fast local development flow. When the app posts an activity or follows another user, Stream distributes those updates to all related timelines through its managed fan-out layer, removing the need for manual socket logic or database triggers.

Step 1: Set up and Configure

a. Create a new Stream app

Start by creating a free developer account at https://getstream.io/dashboard. Once signed in, create a new app and copy the following values from the App Settings page:

  • API_KEY
  • API_SECRET
  • APP_ID

Save them for later; these credentials connect the Express server to Stream. The secret will remain private on the backend, while the key and app ID will be sent to the client through the token response.

b. Scaffold the mobile app

From your terminal, create a new Expo project named stream-microblog:

npx expo init stream-microblog
cd stream-microblog
npm install getstream axios express dotenv concurrently

The getstream package connects both the backend and the mobile client to Stream’s API; axios handles HTTP requests between the client and the server; and express powers the token endpoint.

c. Set up development scripts

To run the backend and Expo client together, define the following scripts in package.json:

"scripts": {
  "dev": "concurrently \"npm run start:server\" \"expo start\"",
  "start:server": "node server/index.js"
}

Now you can start both services with:

npm run dev

The Expo project will be accessible at localhost:8081, and the backend token server will be listening on port 8080.

With the environment ready, the next step will be to create the Express server that issues Stream tokens and seeds initial users.

Step 2: Create the Express Server

Where to get your credentials from the dashboard

Open the Stream dashboard and navigate to the Apps section. Click the app tile for your project, then open App Settings. Copy these three values into your .env file:

  • STREAM_API_KEY
  • STREAM_API_SECRET
  • STREAM_APP_ID

Use the same settings to populate the backend environment. Never commit STREAM_API_SECRET to source control.

Server goals

The server has three responsibilities:

  1. Issue a short-lived Stream token for a requested demo user via POST /session.
  2. Provide endpoints to follow and unfollow other users via Stream feeds.
  3. Seed demo users and initial follow relationships with a POST /seed route used only during local development.

Install and project layout

Create the server folder and install packages:

mkdir server
cd server
npm init -y
npm install express dotenv getstream stream-chat cors

Project layout under server/:

server/
 ├─ index.js          # server entry
 ├─ seed.js           # optional seeding helper
 └─ .env              # local env values (not committed)

.env example

Create server/.env and add:

STREAM_API_KEY=your_api_key_here
STREAM_API_SECRET=your_api_secret_here
STREAM_APP_ID=your_app_id_here
PORT=8080

Replace placeholders with values from the Stream dashboard.

server/index.js

Save this file as server/index.js. It is minimal and fully runnable for local development. It creates session tokens, seeds demo users, and exposes follow/unfollow and post routes.

require('dotenv').config();
const express = require('express');
const cors = require('cors');
const { StreamChat } = require('stream-chat');
const Stream = require('getstream');

const app = express();
const PORT = process.env.PORT || 8080;

app.use(cors());
app.use(express.json()); // built-in body parser; no extra dependency needed

// Load env
const STREAM_API_KEY = process.env.STREAM_API_KEY;
const STREAM_API_SECRET = process.env.STREAM_API_SECRET; // Keep ONLY on server
const STREAM_APP_ID = process.env.STREAM_APP_ID; // optional but handy for client

if (!STREAM_API_KEY || !STREAM_API_SECRET) {
  console.error('Missing Stream credentials in .env. See server/.env.example');
  process.exit(1);
}

// Stream clients
const serverClient = Stream.connect(STREAM_API_KEY, STREAM_API_SECRET);

// Helper: ensure a user exists in Stream
async function ensureUser(userId, data = {}) {
  // Upsert a user to Stream using server client
  // We use Stream Chat API user upsert for convenience, though regular Feeds also allow user creation when adding activities
  const chat = StreamChat.getInstance(STREAM_API_KEY, STREAM_API_SECRET);
  await chat.upsertUsers([{ id: userId, name: data.name || userId, ...data }]);
}

// POST /session - returns token + basic config
app.post('/session', async (req, res) => {
  try {
    const { userId } = req.body || {};
    if (!userId) return res.status(400).json({ error: 'userId required' });

    await ensureUser(userId, req.body.userData || {});

    // Create a Stream Feeds user token
    const token = serverClient.createUserToken(userId);

    return res.json({
      token,
      apiKey: STREAM_API_KEY,
      appId: STREAM_APP_ID,
      userId,
    });
  } catch (err) {
    console.error('session error', err);
    res.status(500).json({ error: 'failed to create session' });
  }
});

// POST /post - creates a post on user feed
app.post('/post', async (req, res) => {
  try {
    const { userId, text } = req.body || {};
    if (!userId || !text) return res.status(400).json({ error: 'userId and text required' });

    const userFeed = serverClient.feed('user', userId);
    const activity = await userFeed.addActivity({
      actor: `User:${userId}`,
      verb: 'post',
      object: 'text:post',
      text,
      time: new Date().toISOString(),
    });

    res.json(activity);
  } catch (err) {
    console.error('post error', err);
    res.status(500).json({ error: 'failed to post' });
  }
});

// POST /follow - currentUser follows target user
app.post('/follow', async (req, res) => {
  try {
    const { followerId, targetUserId } = req.body || {};
    if (!followerId || !targetUserId) return res.status(400).json({ error: 'followerId and targetUserId required' });

    const timeline = serverClient.feed('timeline', followerId);
    await timeline.follow('user', targetUserId);

    res.json({ ok: true });
  } catch (err) {
    console.error('follow error', err);
    res.status(500).json({ error: 'failed to follow' });
  }
});

// POST /unfollow
app.post('/unfollow', async (req, res) => {
  try {
    const { followerId, targetUserId } = req.body || {};
    if (!followerId || !targetUserId) return res.status(400).json({ error: 'followerId and targetUserId required' });

    const timeline = serverClient.feed('timeline', followerId);
    await timeline.unfollow('user', targetUserId, { keepHistory: true });

    res.json({ ok: true });
  } catch (err) {
    console.error('unfollow error', err);
    res.status(500).json({ error: 'failed to unfollow' });
  }
});

// Seed endpoint: creates demo users and default follow relations
app.post('/seed', async (req, res) => {
  try {
    const demoUsers = [
      { id: 'alice', name: 'Alice' },
      { id: 'bob', name: 'Bob' },
    ];

    for (const u of demoUsers) {
      await ensureUser(u.id, { name: u.name });
    }

    // Optional: have Alice follow Bob by default
    const aliceTimeline = serverClient.feed('timeline', 'alice');
    await aliceTimeline.follow('user', 'bob');

    res.json({ ok: true, users: demoUsers });
  } catch (err) {
    console.error('seed error', err);
    res.status(500).json({ error: 'failed to seed' });
  }
});

app.get('/', (_, res) => res.send('Stream Microblog server running'));

app.listen(PORT, () => {
  console.log(`Server listening on http://0.0.0.0:${PORT}`);
});

Start the server

From the project root or server/ folder run:

node server/index.js

If you prefer automatic restarts during development, install nodemon and run npx nodemon server/index.js. When running, POST /seed can be used to create demo users; POST /session returns the token the client will use.

Security notes

  • Do not expose STREAM_API_SECRET in any client code or public repository. Keep it server-side only.
  • Use proper authentication in production before returning tokens. For this tutorial, we use mocked authentication for learning and prototyping.

Step 3: Connect React Native to Stream

This section wires the client to the token endpoint and shows a minimal Expo app with a feed composer and a timeline. The code uses the react-native-activity-feed style components, which are straightforward to run in Expo. If you prefer a lower-level implementation later, we can switch to a different UI library; however, this approach yields runnable examples quickly.

Install client dependencies

From client/ inside the project root:

cd client
npx expo init   # choose the blank template if prompted (skip if you already scaffolded the app in Step 1)
npm install react-native-activity-feed getstream axios

If you do not want to use the react-native-activity-feed component wrappers, we can replace them with a simple FlatList and custom renderers. For this tutorial, we will use the existing components because they reduce UI boilerplate, allowing us to focus on feed wiring.

client/App.js

Replace the default App.js with the file below. It requests a session token for Alice, initializes Stream, provides a status-update composer, and displays the aggregated timeline. Save as client/App.js.

import React, { useState } from 'react';
import { SafeAreaView, View, Text, Button } from 'react-native';
import axios from 'axios';
import { StreamApp, OverlayProvider } from 'react-native-activity-feed';
import PostComposer from './components/PostComposer';
import Feed from './components/Feed';

// Use your LAN IP for mobile devices
const DEFAULT_SERVER = 'http://localhost:8080';

export default function App() {
  const [userId, setUserId] = useState(null);
  const [session, setSession] = useState(null);
  const [serverUrl, setServerUrl] = useState(DEFAULT_SERVER);

  const signIn = async (id) => {
    const res = await axios.post(`${serverUrl}/session`, { userId: id, userData: { name: id } });
    setSession(res.data);
    setUserId(id);
  };

  if (!userId || !session) {
    return (
      <SafeAreaView style={{ flex: 1, alignItems: 'center', justifyContent: 'center', padding: 24 }}>
        <Text style={{ fontSize: 22, fontWeight: 'bold', marginBottom: 16 }}>Stream Microblog</Text>
        <Text style={{ marginBottom: 8 }}>Start the backend, then choose a demo user:</Text>
        <Button title="Sign in as Alice" onPress={() => signIn('alice')} />
        <View style={{ height: 12 }} />
        <Button title="Sign in as Bob" onPress={() => signIn('bob')} />
        <View style={{ height: 24 }} />
        <Text style={{ fontSize: 12, color: '#666', textAlign: 'center' }}>
          If testing on a physical device, edit DEFAULT_SERVER in App.js to use your computer's LAN IP
        </Text>
      </SafeAreaView>
    );
  }

  return (
    <OverlayProvider>
      <StreamApp
        apiKey={session.apiKey}
        appId={session.appId}
        token={session.token}
        userId={session.userId}
      >
        <SafeAreaView style={{ flex: 1 }}>
          <PostComposer userId={userId} serverUrl={serverUrl} />
          <Feed userId={userId} />
        </SafeAreaView>
      </StreamApp>
    </OverlayProvider>
  );
}

This component does the following:

  • POST /session to the Express server to obtain apiKey, appId, and token.
  • Initializes StreamApp with those values so the SDK can connect to Stream directly.
  • Renders PostComposer (which wraps StatusUpdateForm) so the demo user can post updates.
  • Renders Feed (which wraps FlatFeed), displaying aggregated timeline activities and updating in real time.

How posting works end-to-end

  1. A user types text into StatusUpdateForm and submits.
  2. The react-native-activity-feed UI uses the initialized Stream client with the server-issued token and calls the Stream API to add an activity to the user:alice feed.
  3. Stream handles fan-out to all timelines that follow that user and emits updates to subscribed clients.

Follow and like buttons

To wire follow/unfollow, the client can call the server endpoints we created. Example function to follow a user:

// followUser.js (client helper snippet)
import axios from "axios";

const SERVER_URL = "http://localhost:8080"; // use your LAN IP on a physical device

export async function follow(followerId, targetUserId) {
  await axios.post(`${SERVER_URL}/follow`, { followerId, targetUserId });
}

Call that from a button press inside a custom Activity renderer or an inline control. The server will invoke Stream's follow API to connect timelines.

Likes and comments are handled by the react-native-activity-feed components through Stream Reactions. The SDK will manage reaction creation and propagation when the user presses the like button rendered by the Activity.

Local testing notes

  • When testing from a physical device running Expo, use your machine's IP address instead of localhost for the server URL, for example, http://192.168.1.12:8080/session.
  • If Expo fails to connect to localhost, open Expo dev tools and use the LAN or tunnel options. For stable local testing, I use Expo in LAN mode and the machine IP for server calls.

Step 4: Add Follow and Like Functionality

The feed is live, but currently, users cannot follow one another or view their updated timelines when following. We’ll add two things here:

  1. A Follow/Unfollow button for each activity author.
  2. Built-in Like and Comment reactions using Stream’s reaction system.

Server: Follow and unfollow routes

If you already created these routes earlier, confirm they match this version. They simply call the Stream API to follow or unfollow another user.

// server/index.js (add if missing)
app.post("/follow", async (req, res) => {
  const { followerId, targetUserId } = req.body;
  if (!followerId || !targetUserId) {
    return res.status(400).json({ error: "followerId and targetUserId required" });
  }
  const timeline = serverClient.feed("timeline", followerId);
  await timeline.follow("user", targetUserId);
  return res.json({ ok: true });
});

app.post("/unfollow", async (req, res) => {
  const { followerId, targetUserId } = req.body;
  if (!followerId || !targetUserId) {
    return res.status(400).json({ error: "followerId and targetUserId required" });
  }
  const timeline = serverClient.feed("timeline", followerId);
  await timeline.unfollow("user", targetUserId);
  return res.json({ ok: true });
});

Each time the mobile client hits these endpoints, Stream automatically updates all related timelines. There’s no need to refresh manually; fan-out propagation handles it.

Client: Custom activity component

The built-in Activity component from react-native-activity-feed is helpful but limited in terms of customization. To display a Follow button and personalized actions, wrap it in your own component.

Create a new file: client/components/Feed.js

// client/components/Feed.js
import React, { useCallback } from 'react';
import { View, Button } from 'react-native';
import axios from 'axios';
import { FlatFeed, Activity, LikeButton } from 'react-native-activity-feed';

const SERVER_URL = 'http://localhost:8080'; // use your LAN IP on a physical device

export default function Feed({ userId }) {
  const renderActivity = useCallback(({ item }) => {
    // The actor may be an enriched object or a plain "User:<id>" string
    const actorId = (item.actor && item.actor.id) || String(item.actor || '').replace('User:', '');

    // We do not track follow state in this simple example; the button always follows
    const followActor = () =>
      axios.post(`${SERVER_URL}/follow`, { followerId: userId, targetUserId: actorId });

    return (
      <View style={{ paddingHorizontal: 12 }}>
        <Activity
          activity={item}
          Footer={() => (
            <View style={{ flexDirection: 'row', alignItems: 'center', paddingVertical: 8 }}>
              <LikeButton reactionKind="like" reactionCounts={item.reaction_counts} activity={item} />
              {actorId !== userId && <Button title="Follow" onPress={followActor} />}
            </View>
          )}
        />
      </View>
    );
  }, [userId]);

  return (
    <FlatFeed
      feedGroup="timeline"
      userId={userId}
      notify
      options={{ withRecentReactions: true }}
      renderActivity={renderActivity}
    />
  );
}

This component renders:

  • The activity content using Stream’s Activity renderer.
  • A like button using LikeButton, which syncs automatically with Stream’s reaction backend.
  • A Follow button in the footer that calls your server’s /follow endpoint.

No extra wiring is needed: App.js from Step 3 already renders this component via <Feed userId={userId} />, giving your custom renderer access to the signed-in user for follow and reaction calls.

Testing follow logic

  1. Run both the server and Expo client (npm run dev).
  2. Open two simulators or devices (Alice and Bob).
  3. In Alice’s app, click Follow on Bob’s activity.
  4. Bob’s future posts appear in Alice’s timeline instantly.

That flow confirms the follow endpoint and feed propagation are working correctly.

Step 5: Real-Time Updates

The best part of Stream’s feed architecture is real-time delivery. You do not have to manage socket connections or polling intervals manually. The Stream React Native SDK opens a WebSocket connection on initialization and automatically pushes new activities or reactions to subscribed feeds.

When a user posts, likes, or follows someone, other users see updates instantly in their FlatFeed.

Behind the scenes

Each feed (for example, timeline:alice) subscribes to Stream’s WebSocket gateway. When the server or another client adds an activity to any feed that timeline:alice follows, Stream broadcasts that event to connected clients. The React Native SDK receives those updates and automatically re-renders the list.

You can confirm this behavior by opening two simulator windows side by side:

  • User A (Alice) posts a new message.
  • Within seconds, User B (Bob) sees the post appear in the feed without manual refresh.

Example timeline refresh demonstration

Add log listeners to the FlatFeed inside client/components/Feed.js for visibility:

<FlatFeed
  feedGroup="timeline"
  userId={userId}
  notify
  options={{ withRecentReactions: true }}
  onAddReaction={(reaction) => console.log("New reaction:", reaction)}
  onRemoveReaction={(reaction) => console.log("Reaction removed:", reaction)}
  onRefresh={() => console.log("Feed refreshed")}
  onLoadMore={() => console.log("Loading more activities")}
  renderActivity={renderActivity}
  style={{ flex: 1 }}
/>

Each time you post or like, you’ll see Stream events arriving through the console in real time.

Troubleshooting

If real-time updates do not appear:

  • Verify that both users belong to the same Stream app (the App ID in .env must match).
  • Ensure each client connects with a valid token from the backend.
  • Verify that outbound HTTPS traffic is allowed on your network; Stream’s WebSocket connection relies on port 443.

Step 6: Production Hardening

This section covers the production concerns you will face after the prototype. It focuses on security, moderation, scalability, observability, and cost control. Each subsection includes concrete steps and code when applicable.

Security and token management

Problem to solve

Do not ship the STREAM_API_SECRET in your client. If a secret leaks, all app feed operations can be abused. Tokens issued to clients must be scoped and rotated to limit the blast radius.

Practical pattern

  • Authenticate users on the backend before issuing a Stream token. For prototyping, we used a mocked flow. For production, require a session token, OAuth exchange, or JWT in the Authorization header before issuing Stream credentials.
  • Keep Stream tokens short-lived by rotating them on re-login and by providing a session-renew endpoint that re-issues a fresh Stream token when your server validates the application session. If sessions expire, the client must reauthenticate.

1. Minimal auth middleware example

Add a simple middleware that validates a server session token before returning a Stream token. This example uses a placeholder function, validateAppSession, which you must replace with your actual authentication check.

// server/auth-middleware.js
async function requireSession(req, res, next) {
  const auth = req.header("Authorization"); // Bearer <session-token>
  if (!auth) return res.status(401).json({ error: "no auth" });

  const token = auth.replace("Bearer ", "");
  const valid = await validateAppSession(token); // implement this

  if (!valid) return res.status(403).json({ error: "invalid session" });

  req.userId = valid.userId;
  return next();
}

module.exports = { requireSession };

Use it on the session endpoint:

// server/index.js (use the middleware)
const { requireSession } = require("./auth-middleware");

app.post("/session", requireSession, async (req, res) => {
  const userId = req.userId;
  const token = serverClient.createUserToken(userId);
  return res.json({
    apiKey: process.env.STREAM_API_KEY,
    appId: process.env.STREAM_APP_ID,
    token,
    userId
  });
});

2. Token rotation endpoint

Provide a renew endpoint so the client can request a new Stream token without having to re-enter credentials.

app.post("/session/renew", requireSession, async (req, res) => {
  const userId = req.userId;
  const token = client.createUserToken(userId);
  return res.json({ token, userId, issuedAt: Date.now() });
});

Client-side strategy

  • On app foreground or token expiry, send a POST request to /session/renew (a sketch follows below).
  • If the renewal fails, fall back to your regular sign-in flow.
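
A minimal renew helper on the client might look like this (the SERVER_URL constant and where you store your app session token are assumptions carried over from this tutorial's setup):

// client/renewToken.js (sketch)
import axios from 'axios';

const SERVER_URL = 'http://localhost:8080'; // match your backend address

// appSessionToken is whatever your own auth layer issued; the response
// contains a fresh Stream token to hand back to StreamApp
export async function renewStreamToken(appSessionToken) {
  const res = await axios.post(
    `${SERVER_URL}/session/renew`,
    {},
    { headers: { Authorization: `Bearer ${appSessionToken}` } }
  );
  return res.data.token;
}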

Activity validation and moderation

1. Validate content on the server

Implement server-side validation of activity payloads to prevent the storage of malformed or malicious content. Reject overly long text, strip dangerous HTML, and sanitize attachments.

Sample validation before posting:

function validatePostPayload({ actor, text }) {
  if (!actor || !text) return { ok: false, reason: "missing actor or text" };
  if (typeof text !== "string" || text.length > 280) return { ok: false, reason: "text too long" };
  // apply simple profanity filter or call external moderation service
  return { ok: true };
}

app.post("/post", requireSession, async (req, res) => {
  const { text } = req.body;
  const check = validatePostPayload({ actor: req.userId, text });

  if (!check.ok) return res.status(400).json({ error: check.reason });

  // add activity code here
});

2. Use Stream moderation features

Stream provides moderation tools in the dashboard. For higher assurance:

  • Tag activities with metadata, such as moderation status (e.g., { status: "pending" }), and surface flagged items in your admin UI.
  • Use webhooks to receive Stream moderation events and handle them in background jobs.

Simple webhook receiver stub:

// server/webhook.js
app.post("/webhook/stream", express.json(), async (req, res) => {
  // verify origin or verify signature if Stream provides it
  const event = req.body;
  // example: event.type === "flag" => take action
  console.log("stream webhook", event);
  res.sendStatus(200);
});

3. Automated content scanning

If you need tighter moderation, run new posts through an automated classifier (an external API or an in-house one) in a background worker before making them visible in public feeds. For example, add the activity to a staging feed, scan it, then move to the public timeline if it passes.
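
One possible shape for that staging worker, assuming a "staging" feed group configured in your Stream app and with classifyText standing in for your real moderation call:

// server/moderationWorker.js (sketch; the "staging" feed group and classifyText are assumptions)
const Stream = require('getstream');

const serverClient = Stream.connect(process.env.STREAM_API_KEY, process.env.STREAM_API_SECRET);

async function classifyText(text) {
  // stand-in: call your moderation API here; return true if the post is safe
  return !/badword/i.test(text);
}

async function drainStagingFeed(userId) {
  const staging = serverClient.feed('staging', userId);
  const { results } = await staging.get({ limit: 25 });

  for (const activity of results) {
    if (await classifyText(activity.text || '')) {
      // passed moderation: re-publish to the public user feed
      await serverClient.feed('user', userId).addActivity({
        actor: activity.actor,
        verb: activity.verb,
        object: activity.object,
        text: activity.text,
        foreign_id: activity.foreign_id,
      });
    }
    await staging.removeActivity(activity.id);
  }
}

module.exports = { drainStagingFeed };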

Scaling, throughput, and write amplification

1. Understand fan-out trade-offs

There are two standard models:

  • Push (fan-out on write): when a user posts, the server writes that activity into every follower's timeline. This favors fast reads at the cost of more write work. Stream uses optimized fan-out internally.
  • Pull (fan-out on read): compute the timeline at read time by querying recent activities from followed users or a central aggregated store. This favors fewer writes but heavier read computation.

2. Use Stream API batching for heavy write bursts

If you onboard users or migrate large datasets, use addActivities to write many activities in a single request.

Example of batch writes:

const activities = [
  { actor: "user:alice", verb: "post", object: "post:1", text: "hello" },
  { actor: "user:alice", verb: "post", object: "post:2", text: "hello again" },
];
await serverClient.feed("user", "alice").addActivities(activities);

3. Background workers for heavy operations

Offload fan-out triggers, indexing, or expensive enrichment to a worker queue. Use Bull, Bee-Queue, or a cloud queue to process background jobs. Keep the Express request path fast by enqueuing work instead of doing long operations in-line.
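
As a sketch, with Bull (the queue name and Redis URL are placeholders):

// server/queue.js (sketch using Bull)
const Queue = require('bull');
const Stream = require('getstream');

const serverClient = Stream.connect(process.env.STREAM_API_KEY, process.env.STREAM_API_SECRET);
const postQueue = new Queue('feed-work', 'redis://127.0.0.1:6379');

// In the request path: enqueue and return immediately
function enqueuePost(userId, text) {
  return postQueue.add(
    { userId, text },
    { attempts: 3, backoff: { type: 'exponential', delay: 1000 } }
  );
}

// In a worker process: do the slow Stream call off the request path
postQueue.process(async (job) => {
  const { userId, text } = job.data;
  await serverClient.feed('user', userId).addActivity({
    actor: `User:${userId}`,
    verb: 'post',
    object: 'text:post',
    text,
  });
});

module.exports = { enqueuePost };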

4. Rate limits and retries

Monitor Stream API responses for rate limit errors. Implement exponential backoff and idempotent retries for operations like follow/unfollow and addActivity. For idempotency, add a client-supplied foreign_id to activities and use upsert semantics when possible.

Example activity with foreign_id:

const activity = {
  actor: "user:alice",
  verb: "post",
  object: "post:12345",
  foreign_id: "post:12345",
  text: "hello again",
};
await serverClient.feed("user", "alice").addActivity(activity);
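
A small retry wrapper with exponential backoff might look like this (the attempt count, delays, and error-shape check are illustrative assumptions, not part of the Stream SDK):

// retry helper (sketch)
async function withBackoff(fn, maxAttempts = 5) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // assumption about the error shape; inspect your SDK's errors for the real status field
      const rateLimited = err.response && err.response.status === 429;
      if (attempt >= maxAttempts || !rateLimited) throw err;
      const delayMs = 2 ** attempt * 100; // 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// usage (inside an async function); retries are safe because foreign_id makes the write idempotent:
// await withBackoff(() => serverClient.feed("user", "alice").addActivity(activity));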

Observability and metrics

Track key signals:

  • API call counts and latencies for token issuance, follows, and posts.
  • Feed operation errors and rate limit events.
  • Active WebSocket connections and dropped connections.

Logging best practice

Log structured events with fields such as userId, operation, status, and durationMs. Ship logs to a central store, such as ELK, Datadog, or a cloud logging service. Alert on unusual spikes in errors or rate limit responses.

Example minimal logging:

const start = Date.now();
try {
  await serverClient.feed("user", "alice").addActivity(activity);
  console.log(JSON.stringify({ userId: "alice", op: "addActivity", status: "ok", dur: Date.now() - start }));
} catch (err) {
  console.error(JSON.stringify({ userId: "alice", op: "addActivity", status: "error", msg: err.message }));
}

Data retention, privacy, and compliance

1. Data lifecycle

Decide how long you will retain activities and user data. Stream allows you to delete activities via serverClient.feed(...).removeActivity(activityId).

2. GDPR and user requests

Provide endpoints for exporting and deleting data. When a user requests deletion, remove all required activities and user data, and document the flow.

Example remove activity:

await client.feed("user", "alice").removeActivity("post:12345");

Cost control

  • Estimate costs: Monitor API call volume, reaction counts, and MAU usage. For high-volume apps, use sampling or rate-limited features in dev and staging to keep costs predictable. Use Stream dashboard usage reports to identify hotspots.
  • Throttle large operations: When importing data or running batch operations, throttle the rate to keep usage within expected budgets.

Comparison: DIY Feed vs Stream

The compact table below provides a quick side-by-side view, followed by a deeper analysis of each key area.

| Aspect | DIY feed (self-managed) | Stream (managed feed API) |
| --- | --- | --- |
| Time to initial prototype | Weeks | Minutes to a few hours |
| Real-time updates | Implement sockets, queueing | Built-in WebSocket pub/sub |
| Ranking and aggregation | Build and tune ranking logic | API-level ranking and personalization |
| Moderation | Build filters, dashboards | Dashboard and moderation tools available |
| Scalability | Database sharding and queue complexity | Managed sharding and fan-out |
| Operational burden | High: backups, scaling, incidents | Lower: vendor-managed infrastructure |
| Cost predictability | DevOps and infra costs can be high | Usage-based; predictable with monitoring |
| Feature velocity | Slow for advanced features | Fast: reactions, enrichment, personalization |

Expanded comparison

1. Time to ship

DIY requires designing a feed schema, writing aggregation logic, implementing pagination and cursors, and building real-time delivery. Teams often underestimate this work. With Stream, you can focus on UX and client features, as aggregation and delivery are handled.

2. Real-time delivery and reliability

DIY real-time means maintaining socket servers, reconnection strategies, and backfills. Stream provides managed sockets and backfills for disconnected clients, lowering the engineering overhead.

3. Ranking and personalization

Implementing an effective "For You" ranking function requires signals, offline model training, and heuristics for freshness and relevance. Stream provides ranking and personalization primitives, and you can feed your own signals for hybrid ranking.

4. Moderation and safety

DIY requires a moderation pipeline, admin UI, and tooling to handle flags. Stream includes moderation tooling and methods for pulling flagged items into administrative workflows.

5. Operational complexity

DIY places maintenance for backups, scale testing, sharding, and incident response on your team. With a managed service, the vendor handles the heavy lifting, but you still own client-side resilience and server-side token issuance.

When to choose DIY

  • You need full data residency control and cannot use a third-party for compliance reasons.
  • Your feed logic is highly specialized in a way that managed services cannot support.

When to use Stream

  • You want to reduce time-to-market for social features and avoid building complex fan-out logic.
  • You prefer a managed system for scaling and real-time delivery while retaining the ability to extend with custom metadata and enrichment.

Conclusion

You have built a fully functional Twitter-style microblog using React Native and Stream’s feed infrastructure. The project demonstrates how to separate client experience from backend complexity, keeping your mobile layer lightweight while Stream handles aggregation, ranking, and real-time synchronization.

Your app now supports:

  • Posting short updates
  • Following and unfollowing other users
  • Likes and reactions
  • Real-time timeline updates across sessions

This setup works well for prototypes, MVPs, or small production apps where the goal is to quickly validate feed functionality. Once you move beyond this baseline, consider the following next steps to expand the system:

  • Add media attachments using Expo Image Picker and Stream’s upload API.
  • Integrate mobile push notifications for likes, follows, and mentions.
  • Introduce a personalized ranking feed (“For You”) using Stream’s enrichment and ranking features.
  • Build a profile screen that aggregates a user’s posts and followers.

The Stream SDK for React Native already exposes most of these features, so extending the app will involve more UI composition than backend code. To continue exploring, review Stream’s official documentation at Quick Start - React Native Activity Feeds Docs.

Additionally, to view a production-ready SwiftUI implementation, check out the Twitter clone project by Stream.

FAQs

1. Can I use Expo?

Yes. Expo works seamlessly with the React Native Activity Feed SDK and the Stream React Native SDK. You can run this tutorial entirely within an Expo-managed workflow without ejecting to a bare React Native project.

2. How are activities stored?

All activities, reactions, and feed updates are stored within Stream’s managed backend. Each post is an activity document that includes metadata, such as actor, verb, and object, which Stream uses to aggregate and distribute updates.

3. Can I upload images?

Yes. Stream supports image uploads through its content endpoint, and Expo’s expo-image-picker module integrates easily with it. You can capture images locally, upload them, and attach their URLs to activities as media attachments.

4. Does this work offline?

You can cache or queue posts locally and replay them when the connection is restored. Use AsyncStorage or another persistent storage to temporarily hold pending activities and resubmit them once the device is online.
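
For example, a tiny pending-post queue on top of AsyncStorage might look like this (the storage key and the /post endpoint follow this tutorial's server; when you call flushPosts is up to you):

// offlineQueue.js (sketch; assumes @react-native-async-storage/async-storage is installed)
import AsyncStorage from '@react-native-async-storage/async-storage';
import axios from 'axios';

const KEY = 'pending-posts';

// store a post locally while offline
export async function queuePost(userId, text) {
  const pending = JSON.parse((await AsyncStorage.getItem(KEY)) || '[]');
  pending.push({ userId, text });
  await AsyncStorage.setItem(KEY, JSON.stringify(pending));
}

// call when connectivity returns (e.g. from a NetInfo listener)
export async function flushPosts(serverUrl) {
  const pending = JSON.parse((await AsyncStorage.getItem(KEY)) || '[]');
  for (const post of pending) {
    await axios.post(`${serverUrl}/post`, post);
  }
  await AsyncStorage.removeItem(KEY);
}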

5. Is Stream free to use?

Stream offers a developer plan that includes up to 3 million API calls per month, which is sufficient for most prototypes and small apps. You can monitor usage and upgrade if needed through the Stream dashboard.

AI Look-Alike Search for OF Creators — Need Advice on Better Face Models

2025-12-26 03:31:23


I’m currently building an AI-based face similarity (look-alike) search for OF models as part of a real-world side project.

The dataset contains 100,000+ public OF model images, and the goal is to help users discover visually similar OF models based on facial features rather than usernames or text-based search.

This is not identity verification — the focus is purely on visual similarity.

What I’m Building (Quick Overview)

  • Users upload an image (reference photo / celebrity image)
  • The system finds OF models with similar facial characteristics
  • Results are ranked using face embeddings + vector similarity search
  • Everything currently runs on CPU, but I’m considering a move to GPU for scale and experimentation

What I’m Building (More Detail)

The system allows users to upload an image and receive a list of OF models with similar facial characteristics.

The intent is to support visual discovery, where perceived similarity matters more than exact identity matching.

Key Constraints

  • Similarity over identity

    The system ranks faces by perceived similarity (look-alike matching), not by strict identity verification.

  • Low tolerance for false positives

    Returning visually different faces as “similar” is more harmful than missing a potential match.

  • Real-world images

    The dataset consists of non-studio images with varying lighting, poses, resolutions, and overall quality.

  • Scalability

    The solution needs to scale beyond 100k+ images without significant drops in accuracy or performance.

Current Pipeline (CPU-Based)

At the moment, the entire pipeline runs on CPU only.

The setup looks like this:

  • Face detection and alignment
  • Feature extraction using a pre-trained face model
  • Storing embeddings in a vector index
  • Nearest-neighbor search using cosine similarity

At this scale, the system works reasonably well, but both accuracy and performance are starting to become limiting factors.

Current Model Setup (InsightFace)

Face embeddings are currently generated using InsightFace, specifically the buffalo_l model bundle.

The pipeline includes:

  • Face detection and alignment via InsightFace
  • Feature extraction using the buffalo_l model
  • Embeddings stored for similarity search
  • Cosine similarity for ranking similar faces

This provides a solid baseline, but for look-alike matching, small inaccuracies are very noticeable.
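
For reference, a minimal version of that pipeline with InsightFace and NumPy looks roughly like this (file paths, the gallery list, and the top-k value are illustrative; ctx_id=-1 selects CPU):

# minimal look-alike pipeline sketch (CPU)
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=-1, det_size=(640, 640))  # ctx_id=-1 -> CPU; >=0 -> GPU device id

def embed(path):
    """Detect, align, and embed the first face in an image."""
    faces = app.get(cv2.imread(path))
    if not faces:
        return None
    return faces[0].normed_embedding  # L2-normalized 512-d vector

# build the gallery index once (illustrative paths)
pairs = [(p, embed(p)) for p in ["model_001.jpg", "model_002.jpg"]]
pairs = [(p, v) for p, v in pairs if v is not None]
names = [p for p, _ in pairs]
vectors = np.stack([v for _, v in pairs])

# cosine similarity reduces to a dot product on normalized vectors
query = embed("reference.jpg")
scores = vectors @ query
top = np.argsort(-scores)[:5]
print([(names[i], float(scores[i])) for i in top])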

Where the System Struggles

As the dataset grows, several issues become more apparent:

  • Visually similar faces sometimes rank lower than expected
  • Different individuals with shared facial traits can appear as false positives
  • Lighting, pose, and image quality introduce noise
  • CPU inference becomes a bottleneck during re-indexing and experimentation

Because this is a look-alike use case, even small errors can significantly affect perceived quality.

CPU vs GPU — Is the Move Worth It?

I’m planning to migrate the pipeline to GPU-based inference, but I want to make sure the model choice justifies the move.

Some of the questions I’m evaluating:

  • Which face models provide the best results for visual similarity, not identity recognition?
  • Does GPU inference unlock meaningfully better accuracy, or is it mainly a speed improvement?
  • Are there models that are simply not practical to run on CPU at this scale?

If I’m going to reprocess 100k+ OF model images, I want to do it with the right model.

What I’m Looking for in a Better Face Model

I’m particularly interested in models that:

  • Produce high-quality embeddings for similarity search
  • Perform well on non-ideal, real-world images
  • Scale efficiently beyond 100k images
  • Benefit from GPU acceleration
  • Can be fine-tuned (or perform well out of the box) for look-alike matching

I’m open to both open-source and commercial solutions.

Real-World Context

This work is part of a discovery platform where users can upload an image and find visually similar OF models using AI-based face similarity.

The project is called Explore.Fans, and face similarity search is one of its core components.

👉 https://explore.fans

(Shared only for technical context.)

Questions for the Community

If you’ve worked with face similarity or face recognition models at scale, I’d really appreciate your input:

  • Which models gave you the best results for look-alike similarity?
  • Did GPU inference improve accuracy, or mostly performance?
  • Any experience fine-tuning models for similarity-based ranking?
  • Anything you’d avoid based on real-world experience?

Thanks in advance — happy to share more details if helpful.
Have a wonderful holiday!


Understandable AI

2025-12-26 03:31:04


The Next AI Revolution

In today’s AI landscape, we are witnessing a paradox: as systems become more capable, they become less comprehensible. The current trajectory prioritizes raw power over transparency, leading to the Black Box era.

Jan Klein is a key figure challenging this trajectory. His work at the intersection of architecture, standardization, and ethics advocates for a shift from systems that merely function to systems that can be intuitively understood. This evolution is known as Understandable AI (UAI).

1. The “Simple as Possible” Philosophy

Klein’s work is anchored in the Einsteinian principle:

“Everything should be made as simple as possible, but not simpler.”

In the context of AI, this is not about reducing capability, but about eliminating unnecessary complexity through code clarity and modular design.

Core Principles

  • Architectural Simplicity

    Rather than managing millions of opaque parameters, Klein advocates for modular architectures where data flows are traceable.

  • Cognitive Load Reduction

    A truly intelligent system should not require a manual; it should adapt to the user’s mental model, making decisions that are logically consistent with human reasoning.

2. Differentiating Explainable AI (XAI) vs. Understandable AI (UAI)

While the industry currently focuses on Explainable AI (XAI)—which attempts to interpret AI decisions after they occur—Klein proposes Understandable AI (UAI) as an intrinsic design standard.

| Feature | Explainable AI (XAI) | Understandable AI (UAI) |
| --- | --- | --- |
| Timing | Post-hoc (explanation after the fact) | Design-time (intrinsic logic) |
| Method | Approximations and heat maps | Logical transparency and reasoning |
| Goal | Interpretation of a result | Verification of the process |

3. Real-Life Challenges: When XAI Fails and UAI Succeeds

The “Explainability Trap” occurs when post-hoc explanations give a false sense of security. UAI provides concrete solutions for high-stakes sectors.

Healthcare Diagnostic Errors

  • XAI Failure: A deep learning model flags an X-ray for pneumonia. The heat map highlights a hospital watermark instead of the lungs.
  • UAI Solution: UAI restricts the model’s attention to biological features using Knowledge Representation, making it impossible for a watermark to influence the outcome.

Financial Credit Bias

  • XAI Failure: An AI denies a loan and cites “debt ratio,” while hidden logic uses “Zip Code” as a proxy for race.
  • UAI Solution: A modular glass box explicitly defines approved variables; unapproved variables are rejected at the design level.

Autonomous Vehicle “Ghost Braking”

  • XAI Failure: A car brakes suddenly. Saliency maps show a blurry area with no logical reason.
  • UAI Solution: Using Cognitive AI, the system must log a logical reason (e.g., “Obstacle detected”) before executing the brake command.

Recruitment and Talent Screening

  • XAI Failure: An AI penalizes resumes containing the word “Women’s” due to historical bias.
  • UAI Solution: Explicit Knowledge Modeling hard-codes job-relevant skills, preventing hidden discriminatory criteria.

Algorithmic Trading Feedback Loops

  • XAI Failure: Bots enter a feedback loop and crash the market.
  • UAI Solution: Verifiable Logic Chains enforce sanity checks and trigger a “Pause and Explain” mode for human intervention.

4. Shaping Global Standards (W3C & AI KR)

Klein is a driving force within the World Wide Web Consortium (W3C), defining how the future web handles intelligence.

  • AI KR (Artificial Intelligence Knowledge Representation)

    A common language enabling AI systems to share context and verify conclusions with semantic interoperability.

  • Cognitive AI

    Models reflecting human thinking—planning, memory, abstraction—transforming AI into a genuine assistant rather than a statistical tool.

5. UAI as a Legal Safeguard: The Audit Trail

As AI enters regulated sectors such as law, finance, and insurance, black-box systems become a legal liability.

  • The Problem: You cannot show a judge a million neurons and prove there was no bias.
  • The UAI Solution: UAI generates a human-readable record of every decision step, transforming outputs into admissible evidence and protecting organizations from regulatory penalties.

6. Business Compliance Checklist for UAI Implementation

  • Inventory & Risk Classification – Categorize AI systems by risk level
  • Architectural Audit – Shift from monolithic to modular “Glass Box” designs
  • Explicit Knowledge Modeling – Integrate AI KR with verifiable rules
  • Human-in-the-Loop – Present reasoning chains before execution
  • Continuous Logging – Maintain chronological records of decision rationales

7. The Klein Principle

“The intelligence of a system is worthless if it does not scale with its ability to be communicated.”

Klein emphasizes the “Simple as Possible” mandate. AI architecture must be stripped of unnecessary layers so every function remains visible and auditable. Simplicity is not a reduction of intelligence—it is its highest form.

Conclusion: Understandable AI (UAI)

Why Is Understandable AI the Next AI Revolution?

UAI represents the next revolution because the “Bigger is Better” era of AI has reached its social and ethical limit. While computational power has produced impressive results, it has failed to produce Trust.

Without trust, AI cannot be safely integrated into medicine, justice, or critical infrastructure.

The revolution led by Jan Klein redefines intelligence itself—shifting focus from massive parameter counts to Clarity. In this new era, an AI’s value is measured not only by output, but by its ability to be audited, controlled, and understood.

By adhering to the principle of Simple as Possible, Klein ensures that humanity remains the master of its tools. UAI is the bridge between human intuition and machine power, built to ensure technology serves humanity rather than dominating it through complexity.

Jan Klein

CEO @ dev.ucoz.org

The Most Expensive Sentence In The World?

2025-12-26 03:19:06

Most websites try to do more over time. More features, more content, more users, more engagement loops.

I wanted to see what would happen if I did the opposite.

So I built a site that will only ever show one public sentence.

That’s it.

No feed. No profiles. No infinite scroll. No accounts. Just one sentence that everyone sees when they visit the homepage.

When the site opens on New Year's 2026, anyone will be able to change that sentence, but only by paying more than the last person did. Whatever they pay becomes the new base price. The next person has to beat it.

Every overwrite will be logged permanently with a timestamp, price, and position in history.

www.overwriteme.com

Why build something this constrained?

Because most of the internet removes friction from speech entirely. Posting is cheap. Deleting is easy. Context disappears. Consequences are temporary.

I’m curious what happens when you introduce three constraints at once:
• Scarcity: there’s only one slot
• Cost: participation isn’t free
• Permanence: history is public and immutable

Those constraints already change how people talk about the idea, and I’m interested to see how they change behavior once it’s live.

Early messages will probably be jokes. As the price rises, people may hesitate. Messages may get shorter. Intent may become clearer. Eventually, the cost itself could become part of the meaning of the sentence.

The same words feel different at $5 than they do at $5,000.

The mechanics (kept deliberately simple)

When it launches, the system will work like this:
• One sentence is visible at a time
• To overwrite it, you pay the current base price plus a per-character cost
• Whatever you pay becomes the next base price
• The price never decreases
• All past sentences are preserved forever in a public history

There are no growth hacks here. No retention tricks. The system is intentionally small so the behavior is easy to observe.
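
Sketched in code, the overwrite rule is tiny (the starting price and per-character rate here are made up; the real numbers aren't public yet):

// overwrite pricing rule (sketch; amounts are illustrative)
let basePrice = 1.0; // whatever the last person paid
const PER_CHAR = 0.05;

function priceToOverwrite(sentence) {
  return basePrice + sentence.length * PER_CHAR;
}

function overwrite(sentence, amountPaid) {
  const minimum = priceToOverwrite(sentence);
  if (amountPaid < minimum) throw new Error(`must pay at least ${minimum}`);
  basePrice = amountPaid; // the new floor: the price never decreases
  return { sentence, amountPaid, at: new Date().toISOString() }; // logged permanently
}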

What I’m curious to learn

I don’t know if people will treat this as a joke, a billboard, a piece of art, or something else entirely.

What I want to observe is whether introducing cost and permanence changes:
• what people say
• how long they wait
• whether they feel ownership or hesitation
• how value alters language

If it fizzles out, that’s still a result.
If it turns into something people argue about, that’s interesting too.

This isn’t a startup (at least not in the usual sense)

I’m not trying to optimize this into a product with funnels and metrics. It’s closer to an experiment that happens to be implemented in code.

The goal isn’t growth. It’s observation.

Why I’m sharing this here

DEV has always been a place where people build strange things just to see what happens. This project comes from that same impulse: curiosity first, polish second, certainty last.

If you were building something intentionally small or constrained, what rule would you impose to meaningfully change user behavior?

Sometimes the most interesting systems aren’t the ones that grow fastest, but the ones that force people to pause.