2025-12-26 03:47:17
Imagine an industrial monitoring system receiving telemetry packets from thousands of sensors every second. The system must ingest each batch, filter out inactive sensors, aggregate readings per device, and generate a report log.
The Challenge: Since this runs continuously, any inefficiency (such as unnecessary memory allocation) triggers garbage collection pauses, which can lead to data loss. We must use memory-efficient patterns.
We used the Go language for this performance test.
System Flow Diagram
```mermaid
graph LR
    A[Raw Data Ingestion] -->|Slice Pre-allocation| B(Batch Processing)
    B -->|Value Semantics| C{Filter Inactive}
    C -->|Map Pre-allocation| D[Aggregation]
    D -->|strings.Builder| E[Log Generation]
    E --> F[Final Report]
```
| Code Segment | Optimization Technique | Why It Matters in This Scenario |
|---|---|---|
| `type SensorPacket struct` | Struct Alignment | Millions of packets are kept in RAM. Saving 8 bytes per packet saves ~8 MB of RAM per million records. |
| `make([]SensorPacket, 0, n)` | Slice Pre-allocation | The simulation loads 100,000 items. Without this, Go would resize the backing array ~18 times, copying memory each time. |
| `make(map[int32]float64, 100)` | Map Size Hint | We aggregate by device. Allocating buckets upfront prevents expensive rehashing when the map fills up. |
| `var sb strings.Builder` | String Builder | Generating the report log involves many string additions. The builder avoids creating hundreds of temporary garbage strings. |
| `func processBatch(...)` | Value vs. Pointer | `config` is passed by value (fast stack access). The report is returned by pointer (avoids copying the large map). |
Optimized (current code):

```
[ Timestamp (8) ] [ Value (8) ] [ DeviceID (4) | Active (1) | Pad (3) ]
Total: 24 bytes per packet
```

Unoptimized (if fields were interleaved):

```
[ Active (1) | Pad (7) ] [ Timestamp (8) ] [ DeviceID (4) | Pad (4) ] [ Value (8) ]
Total: 32 bytes per packet (33% more memory than the optimized layout)
```
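Putting these pieces together, here is a minimal, self-contained sketch of what the packet type and batch processor might look like. The field names follow the layout above, but the exact types and the `Config`/`Report` shapes are assumptions for illustration, not the author's exact code:

```go
package main

import (
	"fmt"
	"time"
)

// SensorPacket is ordered largest-first, so the compiler adds only
// 3 bytes of tail padding: 8 + 8 + 4 + 1 + 3 = 24 bytes total.
type SensorPacket struct {
	Timestamp int64   // 8 bytes
	Value     float64 // 8 bytes
	DeviceID  int32   // 4 bytes
	Active    bool    // 1 byte (+3 bytes padding)
}

// Config is small, so passing it by value keeps it on the stack.
type Config struct{ MinValue float64 }

// Report is returned by pointer to avoid copying the struct on return.
type Report struct {
	BatchID int64
	AvgTemp map[int32]float64
}

func processBatch(packets []SensorPacket, config Config) *Report {
	sums := make(map[int32]float64, 100) // size hint avoids rehashing
	counts := make(map[int32]int, 100)
	for _, p := range packets { // p is a 24-byte copy: cheap value semantics
		if !p.Active || p.Value < config.MinValue {
			continue // filter inactive sensors (MinValue is an illustrative config use)
		}
		sums[p.DeviceID] += p.Value
		counts[p.DeviceID]++
	}
	for id := range sums {
		sums[id] /= float64(counts[id])
	}
	return &Report{BatchID: time.Now().Unix(), AvgTemp: sums}
}

func main() {
	packets := make([]SensorPacket, 0, 100_000) // pre-allocated capacity
	packets = append(packets, SensorPacket{Timestamp: time.Now().Unix(), Value: 44.5, DeviceID: 79, Active: true})
	report := processBatch(packets, Config{MinValue: -40})
	fmt.Println(report.AvgTemp)
}
```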
Sample output from a run:

```
--- Processing Complete in 6.5627ms ---
--- BATCH REPORT ---
Batch ID: 1766689634
Device 79: Avg Temp 44.52
Device 46: Avg Temp 46.42
Device 57: Avg Temp 45.37
Device 11: Avg Temp 44.54
Device 15: Avg Temp 46.43
... (truncated)
```
The following benchmarks compare the performance of "naive" implementations versus "optimized" patterns using Go's best practices. The tests were run on an Intel Core i5-10300H CPU @ 2.50GHz.
| Operation Type | Implementation | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) | Performance Gain |
|---|---|---|---|---|---|
| Slice Append | Inefficient | 66,035 | 357,626 | 19 | - |
| Slice Append | Efficient (Pre-alloc) | 15,873 | 81,920 | 1 | ~4.1x faster |
| String Build | Inefficient (`+`) | 8,727 | 21,080 | 99 | - |
| String Build | Efficient (Builder) | 244.7 | 416 | 1 | ~35.6x faster |
| Map Insert | Inefficient | 571,279 | 591,485 | 79 | - |
| Map Insert | Efficient (Size hint) | 206,910 | 295,554 | 33 | ~2.7x faster |
| Struct Pass | By Value (Copy) | 0.26 | 0 | 0 | - |
| Struct Pass | By Pointer (Ref) | 0.25 | 0 | 0 | Similar |
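For reference, a minimal sketch of how the slice-append benchmark pair could be written (the batch size and names are illustrative, not the exact test code); run it with `go test -bench . -benchmem`:

```go
package perf

import "testing"

const n = 10_000 // illustrative batch size

// BenchmarkAppendInefficient grows the slice from zero capacity,
// forcing repeated reallocation and copying as it grows.
func BenchmarkAppendInefficient(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var data []int64
		for j := 0; j < n; j++ {
			data = append(data, int64(j))
		}
		_ = data
	}
}

// BenchmarkAppendEfficient pre-allocates the full capacity upfront,
// so the backing array is allocated exactly once per iteration.
func BenchmarkAppendEfficient(b *testing.B) {
	for i := 0; i < b.N; i++ {
		data := make([]int64, 0, n)
		for j := 0; j < n; j++ {
			data = append(data, int64(j))
		}
		_ = data
	}
}
```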
💡 Key Takeaways
- **String concatenation:** Never use `+` in loops. The `strings.Builder` approach is over 35 times faster and uses ~98% less memory because it avoids creating intermediate garbage strings (see the sketch after this list).
- **Memory pre-allocation:** Telling Go how much memory you need upfront (for slices and maps) eliminates the overhead of resizing and copying data (rehashing/reallocation).
  - Slice: reduced allocations from 19 to 1.
  - Map: reduced allocations from 79 to 33.
- **Allocations matter:** Every allocation (`allocs/op`) forces the garbage collector to work harder. Keeping this number low makes the entire application more stable and responsive.
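To make the first takeaway concrete, here is a small sketch contrasting the two report-building approaches (device data is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildReportNaive allocates a brand-new string on every += pass,
// leaving the old one behind as garbage for the collector.
func buildReportNaive(temps map[int32]float64) string {
	s := ""
	for id, avg := range temps {
		s += fmt.Sprintf("Device %d: Avg Temp %.2f\n", id, avg)
	}
	return s
}

// buildReportBuilder appends into a single growable buffer instead;
// *strings.Builder implements io.Writer, so Fprintf writes into it directly.
func buildReportBuilder(temps map[int32]float64) string {
	var sb strings.Builder
	for id, avg := range temps {
		fmt.Fprintf(&sb, "Device %d: Avg Temp %.2f\n", id, avg)
	}
	return sb.String()
}

func main() {
	temps := map[int32]float64{79: 44.52, 46: 46.42}
	fmt.Print(buildReportBuilder(temps))
	_ = buildReportNaive(temps) // kept only for comparison
}
```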
2025-12-26 03:32:29
Build a Twitter-style microblog in React Native using Expo and Stream’s React Native tooling, backed by a minimal Express server that issues short-lived Stream tokens for demo users (Alice and Bob). The tutorial shows how to post short updates, follow and unfollow users, display a real-time aggregated timeline, and add reactions. The entire project is divided into server/ and client/ folders, containing runnable code snippets. The guide ensures that the steps are reproducible in an Expo environment, enabling readers to quickly prototype.
Mobile users spend a notable amount of time in apps. According to a report by data.ai (formerly App Annie) via Influencer Marketing Hub, people globally now spend an average of 4.2 hours per day using mobile apps. On community forums like r/reactnative and r/reactjs, developers often ask how to build a Twitter-style feed with posts, follows, and reactions without having to write the backend logic from scratch.
This tutorial walks you through that exact scenario: you’ll build a mobile microblog using Expo and the Stream SDK, paired with a minimal Express backend for token generation and demo user flows.
You’ll create a Twitter-style microblog in React Native, powered by Stream’s Feed APIs, that demonstrates a social timeline architecture on mobile. The app will include sign-in with mock users (Alice and Bob), post composition, following, liking, and commenting. It runs locally using Expo, communicates with a lightweight Express backend for token generation, and syncs posts in real-time through Stream’s managed WebSocket infrastructure. The idea is to replicate the social feed mechanics, not Twitter’s design, by using minimal custom logic and relying on Stream’s aggregation and reaction system to handle ranking, pagination, and live updates.
The tutorial is structured for reproducibility. Each section introduces a major feature (posting, timeline aggregation, following) and immediately ties it to a code snippet that can be run within your local environment. The backend remains stateless, focusing solely on secure token creation, while the Stream SDK handles feeds, activities, and reactions. This pattern is valuable for mobile developers who need social functionality but prefer outsourcing heavy feed logic rather than maintaining it in Firebase, MongoDB, or through custom SQL joins.
By the end of this build, you'll have a sign-in flow for the demo users, a composer for posting updates, follow and unfollow between Alice and Bob, likes and comments, and a real-time aggregated timeline.
Optional stretch improvements include a basic profile screen (avatar and bio), image upload using Expo Image Picker, and an entry-level moderation setup using Stream’s dashboard. The entire stack runs in two directories: /server for the token backend and /client for the Expo project, allowing developers to explore or extend individual layers without losing context. You can check the repo here.
Before writing any code, ensure your development environment is set up for both Node.js and React Native (Expo). You’ll need Node.js 18 or later, npm, and the Expo CLI (npm install -g expo-cli) for project scaffolding. The backend server runs on Express and connects to Stream’s cloud API using getstream. The frontend depends on Stream’s React Native SDK, [axios](https://www.npmjs.com/package/axios) for HTTP calls, and a few utility libraries for convenience.
Here’s a quick summary of the packages we’ll install during the tutorial:
```bash
npm install react-native-activity-feed getstream axios express dotenv expo-image-picker
```
(expo-image-picker is optional, used only if you decide to add image uploads.)
You'll also need a Stream account and API key, which we'll set up in the next step. Visit https://getstream.io/dashboard and sign in. Keep in mind, the secret key never leaves the server.
The final project structure will look like this:
```
stream-microblog/
├── server/
│   └── index.js
├── client/
│   ├── App.js
│   └── components/
│       ├── Feed.js
│       └── PostComposer.js
├── .env
└── package.json
```
This setup prepares you for the main tutorial steps: initializing the Stream client, wiring token authentication between backend and mobile, and displaying feeds using Stream’s components. Once your keys and dependencies are ready, the next step will be to scaffold the project and configure the Express token service.
A working feed system needs to separate client identity, secure token handling, and feed state management. In this project, the React Native app handles UI and local interactions, while the Express backend issues scoped Stream tokens. Stream’s managed API stores and distributes all feed activities.
The architecture is intentionally lean.
The mobile client interacts only with the Express server to obtain a temporary session token. This prevents secret keys from ever being shipped with the app bundle. Once authenticated, the app connects directly to Stream using the issued token. Stream then manages feed aggregation, ranking, and reaction fan-out in real-time, without requiring any database writes on your end.
In this pattern, the mobile client owns UI state and local interactions, the Express server owns the secret and token issuance, and Stream owns feed storage, aggregation, and fan-out.
This separation models how production systems handle feed features while maintaining a fast local development flow. When the app posts an activity or follows another user, Stream distributes those updates to all related timelines through its managed fan-out layer, removing the need for manual socket logic or database triggers.
Start by creating a free developer account at https://getstream.io/dashboard. Once signed in, create a new app and copy the following values from the App Settings page:
- API_KEY
- API_SECRET
- APP_ID

Save them for later; these credentials connect the Express server to Stream. The secret will remain private on the backend, while the key and app ID will be sent to the client through the token response.
From your terminal, create a new Expo project named stream-microblog:

```bash
npx expo init stream-microblog
cd stream-microblog
npm install getstream axios express dotenv concurrently
```
The getstream package connects both the backend and the mobile client to Stream’s API; axios handles HTTP requests between the client and the server; and express powers the token endpoint.
To run the backend and Expo client together, define the following scripts in package.json:
"scripts": {
"dev": "concurrently \"npm run start:server\" \"expo start\"",
"start:server": "node server/index.js"
}
Now you can start both services with:

```bash
npm run dev
```

The Expo project will be accessible at localhost:8081, and the backend token server will listen on port 8080.
With the environment ready, the next step will be to create the Express server that issues Stream tokens and seeds initial users.
Open the Stream dashboard and navigate to the Apps section. Click the app tile for your project, then open App Settings. Copy these three values into your .env file:
- STREAM_API_KEY
- STREAM_API_SECRET
- STREAM_APP_ID

Use the same settings to populate the backend environment. Never commit STREAM_API_SECRET to source control.
The server has three responsibilities:
- Issue Stream user tokens via POST /session.
- Seed demo users via a POST /seed route used only during local development.
- Expose POST /post, /follow, and /unfollow routes that proxy feed actions through the server client.

Create the server folder and install packages:
```bash
mkdir server
cd server
npm init -y
npm install express dotenv getstream cors
```
Project layout under server/:
```
server/
├─ index.js   # server entry
├─ seed.js    # optional seeding helper
└─ .env       # local env values (not committed)
```
Create server/.env and add:
```
STREAM_API_KEY=your_api_key_here
STREAM_API_SECRET=your_api_secret_here
STREAM_APP_ID=your_app_id_here
PORT=8080
```
Replace placeholders with values from the Stream dashboard.
Save this file as server/index.js. It is minimal and fully runnable for local development. It creates session tokens, seeds demo users, and exposes follow/unfollow and post routes.
```js
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const Stream = require('getstream');

const app = express();
const PORT = process.env.PORT || 8080;

app.use(cors());
app.use(express.json()); // built-in body parser (Express 4.16+)

// Load env
const STREAM_API_KEY = process.env.STREAM_API_KEY;
const STREAM_API_SECRET = process.env.STREAM_API_SECRET; // Keep ONLY on server
const STREAM_APP_ID = process.env.STREAM_APP_ID; // optional but handy for client

if (!STREAM_API_KEY || !STREAM_API_SECRET) {
  console.error('Missing Stream credentials in .env. See server/.env.example');
  process.exit(1);
}

// Server-side Stream client. It holds the secret and never ships to the app.
const serverClient = Stream.connect(STREAM_API_KEY, STREAM_API_SECRET);

// Helper: ensure a user exists in Stream before tokens or activities reference it
async function ensureUser(userId, data = {}) {
  // getOrCreate returns the existing Stream user or creates it
  await serverClient.user(userId).getOrCreate({ name: data.name || userId, ...data });
}

// POST /session - returns token + basic config
app.post('/session', async (req, res) => {
  try {
    const { userId } = req.body || {};
    if (!userId) return res.status(400).json({ error: 'userId required' });

    await ensureUser(userId, req.body.userData || {});

    // Create a Stream Feeds user token
    const token = serverClient.createUserToken(userId);
    return res.json({
      token,
      apiKey: STREAM_API_KEY,
      appId: STREAM_APP_ID,
      userId,
    });
  } catch (err) {
    console.error('session error', err);
    res.status(500).json({ error: 'failed to create session' });
  }
});

// POST /post - creates a post on the user's feed
app.post('/post', async (req, res) => {
  try {
    const { userId, text } = req.body || {};
    if (!userId || !text) return res.status(400).json({ error: 'userId and text required' });

    const userFeed = serverClient.feed('user', userId);
    const activity = await userFeed.addActivity({
      actor: `User:${userId}`,
      verb: 'post',
      object: 'text:post',
      text,
      time: new Date().toISOString(),
    });
    res.json(activity);
  } catch (err) {
    console.error('post error', err);
    res.status(500).json({ error: 'failed to post' });
  }
});

// POST /follow - currentUser follows target user
app.post('/follow', async (req, res) => {
  try {
    const { followerId, targetUserId } = req.body || {};
    if (!followerId || !targetUserId) return res.status(400).json({ error: 'followerId and targetUserId required' });

    const timeline = serverClient.feed('timeline', followerId);
    await timeline.follow('user', targetUserId);
    res.json({ ok: true });
  } catch (err) {
    console.error('follow error', err);
    res.status(500).json({ error: 'failed to follow' });
  }
});

// POST /unfollow
app.post('/unfollow', async (req, res) => {
  try {
    const { followerId, targetUserId } = req.body || {};
    if (!followerId || !targetUserId) return res.status(400).json({ error: 'followerId and targetUserId required' });

    const timeline = serverClient.feed('timeline', followerId);
    await timeline.unfollow('user', targetUserId, { keepHistory: true });
    res.json({ ok: true });
  } catch (err) {
    console.error('unfollow error', err);
    res.status(500).json({ error: 'failed to unfollow' });
  }
});

// Seed endpoint: creates demo users and default follow relations
app.post('/seed', async (req, res) => {
  try {
    const demoUsers = [
      { id: 'alice', name: 'Alice' },
      { id: 'bob', name: 'Bob' },
    ];
    for (const u of demoUsers) {
      await ensureUser(u.id, { name: u.name });
    }
    // Optional: have Alice follow Bob by default
    const aliceTimeline = serverClient.feed('timeline', 'alice');
    await aliceTimeline.follow('user', 'bob');
    res.json({ ok: true, users: demoUsers });
  } catch (err) {
    console.error('seed error', err);
    res.status(500).json({ error: 'failed to seed' });
  }
});

app.get('/', (_, res) => res.send('Stream Microblog server running'));

app.listen(PORT, () => {
  console.log(`Server listening on http://0.0.0.0:${PORT}`);
});
```
From the project root or server/ folder run:
```bash
node server/index.js
```
If you prefer automatic restarts during development, install nodemon and run npx nodemon server/index.js. When running, POST /seed can be used to create demo users; POST /session returns the token the client will use.
Never expose STREAM_API_SECRET in any client code or public repository. Keep it server-side only.

This section wires the client to the token endpoint and shows a minimal Expo app with a feed composer and a timeline. The code uses the react-native-activity-feed style components, which are straightforward to run in Expo. If you prefer a lower-level implementation later, we can switch to a different UI library; however, this approach yields runnable examples quickly.
From client/ inside the project root:
```bash
cd client
npx expo init   # choose blank template if prompted
npm install react-native-activity-feed getstream axios
```
If you do not want to use the react-native-activity-feed component wrappers, we can replace them with a simple FlatList and custom renderers. For this tutorial, we will use the existing components because they reduce UI boilerplate, allowing us to focus on feed wiring.
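For illustration only, a lower-level version might look roughly like this, fetching a page of activities with the getstream client and rendering them in a plain FlatList (the component name and props are hypothetical, not part of the tutorial's code):

```jsx
// PlainFeed.js (illustrative alternative, not used in this tutorial)
import React, { useEffect, useState } from 'react';
import { FlatList, Text } from 'react-native';
import { connect } from 'getstream';

export function PlainFeed({ apiKey, appId, token, userId }) {
  const [activities, setActivities] = useState([]);

  useEffect(() => {
    // Client-side connection uses the server-issued token, never the secret
    const client = connect(apiKey, token, appId);
    client
      .feed('timeline', userId)
      .get({ limit: 20 })
      .then((res) => setActivities(res.results));
  }, [apiKey, appId, token, userId]);

  return (
    <FlatList
      data={activities}
      keyExtractor={(a) => a.id}
      renderItem={({ item }) => <Text>{item.text}</Text>}
    />
  );
}
```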
Replace the default App.js with the file below. It requests a session token for Alice, initializes Stream, provides a status-update composer, and displays the aggregated timeline. Save as client/App.js.
```jsx
import React, { useState } from 'react';
import { SafeAreaView, View, Text, Button } from 'react-native';
import axios from 'axios';
import { StreamApp, OverlayProvider } from 'react-native-activity-feed';
import PostComposer from './components/PostComposer';
import Feed from './components/Feed';

// Use your LAN IP instead of localhost when testing on a physical device
const DEFAULT_SERVER = 'http://localhost:8080';

export default function App() {
  const [userId, setUserId] = useState(null);
  const [session, setSession] = useState(null);
  const [serverUrl] = useState(DEFAULT_SERVER);

  const signIn = async (id) => {
    const res = await axios.post(`${serverUrl}/session`, { userId: id, userData: { name: id } });
    setSession(res.data);
    setUserId(id);
  };

  if (!userId || !session) {
    return (
      <SafeAreaView style={{ flex: 1, alignItems: 'center', justifyContent: 'center', padding: 24 }}>
        <Text style={{ fontSize: 22, fontWeight: 'bold', marginBottom: 16 }}>Stream Microblog</Text>
        <Text style={{ marginBottom: 8 }}>Start the backend, then choose a demo user:</Text>
        <Button title="Sign in as Alice" onPress={() => signIn('alice')} />
        <View style={{ height: 12 }} />
        <Button title="Sign in as Bob" onPress={() => signIn('bob')} />
        <View style={{ height: 24 }} />
        <Text style={{ fontSize: 12, color: '#666', textAlign: 'center' }}>
          If testing on a physical device, edit DEFAULT_SERVER in App.js to use your computer's LAN IP
        </Text>
      </SafeAreaView>
    );
  }

  return (
    <OverlayProvider>
      <StreamApp
        apiKey={session.apiKey}
        appId={session.appId}
        token={session.token}
        userId={session.userId}
      >
        <SafeAreaView style={{ flex: 1 }}>
          <PostComposer userId={userId} serverUrl={serverUrl} />
          <Feed userId={userId} />
        </SafeAreaView>
      </StreamApp>
    </OverlayProvider>
  );
}
```
This component does the following:

- Sends POST /session to the Express server to obtain apiKey, appId, and token.
- Initializes StreamApp with those values so the SDK can connect to Stream directly.
- Renders PostComposer, whose StatusUpdateForm lets the demo user post updates.
- Renders Feed, whose FlatFeed displays aggregated timeline activities and updates in real time.

When the user types into the StatusUpdateForm and submits, the react-native-activity-feed UI uses the initialized Stream client with the server-issued token and calls the Stream API to add an activity to the user:alice feed.
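App.js also imports client/components/PostComposer.js, which the tutorial does not list separately; a minimal sketch under the same assumptions might look like this (the userId and serverUrl props passed from App.js are unused in this bare version):

```jsx
// client/components/PostComposer.js (illustrative sketch)
import React from 'react';
import { View } from 'react-native';
import { StatusUpdateForm } from 'react-native-activity-feed';

// StatusUpdateForm uses the Stream client initialized by <StreamApp>,
// so posting here adds an activity to the signed-in user's "user" feed.
export default function PostComposer() {
  return (
    <View>
      <StatusUpdateForm feedGroup="user" />
    </View>
  );
}
```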
To wire follow/unfollow, the client can call the server endpoints we created. Example function to follow a user:

```js
// followUser.js (client helper snippet)
import axios from "axios";

const SERVER_URL = "http://localhost:8080"; // use your LAN IP on a physical device

export async function follow(followerId, targetUserId) {
  await axios.post(`${SERVER_URL}/follow`, { followerId, targetUserId });
}
```
Call that from a button press inside a custom Activity renderer or an inline control. The server will invoke Stream's follow API to connect timelines.
Likes and comments are handled by the react-native-activity-feed components through Stream Reactions. The SDK will manage reaction creation and propagation when the user presses the like button rendered by the Activity.
Notes for device testing:

- If testing on a physical device, use your machine's LAN IP instead of localhost for the server URL, for example, http://192.168.1.12:8080/session.
- If Expo cannot reach localhost, open Expo dev tools and use the LAN or tunnel options. For stable local testing, I use Expo in LAN mode and the machine IP for server calls.

The feed is live, but currently, users cannot follow one another or view their updated timelines when following. We'll add two things here: follow/unfollow routes on the server and a custom feed component with follow and like controls.
If you already created these routes earlier, confirm they match this version. They simply call the Stream API to follow or unfollow another user.
```js
// server/index.js (add if missing)
app.post("/follow", async (req, res) => {
  const { followerId, targetUserId } = req.body;
  if (!followerId || !targetUserId) {
    return res.status(400).json({ error: "followerId and targetUserId required" });
  }
  const timeline = serverClient.feed("timeline", followerId);
  await timeline.follow("user", targetUserId);
  return res.json({ ok: true });
});

app.post("/unfollow", async (req, res) => {
  const { followerId, targetUserId } = req.body;
  if (!followerId || !targetUserId) {
    return res.status(400).json({ error: "followerId and targetUserId required" });
  }
  const timeline = serverClient.feed("timeline", followerId);
  await timeline.unfollow("user", targetUserId);
  return res.json({ ok: true });
});
```
Each time the mobile client hits these endpoints, Stream automatically updates all related timelines. There’s no need to refresh manually; fan-out propagation handles it.
The built-in Activity component from react-native-activity-feed is helpful but limited in terms of customization. To display a Follow button and personalized actions, wrap it in your own component.
Create a new file: client/components/Feed.js
```jsx
// client/components/Feed.js
import React, { useCallback } from 'react';
import { View } from 'react-native';
import { FlatFeed, Activity, LikeButton } from 'react-native-activity-feed';

export default function Feed({ userId }) {
  const renderActivity = useCallback(({ item }) => {
    // The actor id is where a custom Follow button could call the /follow endpoint
    const actorId = (item.actor && item.actor.id) || String(item.actor || '').replace('User:', '');

    return (
      <View style={{ paddingHorizontal: 12 }}>
        <Activity
          activity={item}
          Footer={() => (
            <View style={{ flexDirection: 'row', alignItems: 'center', paddingVertical: 8 }}>
              <LikeButton reactionKind="like" reactionCounts={item.reaction_counts} activity={item} />
            </View>
          )}
        />
      </View>
    );
  }, [userId]);

  return (
    <FlatFeed
      feedGroup="timeline"
      userId={userId}
      notify
      options={{ withRecentReactions: true }}
      renderActivity={renderActivity}
    />
  );
}
```
This component renders:

- Each timeline activity through the stock Activity component.
- A footer with a LikeButton (built on Stream's ReactionToggleIcon), which syncs automatically with Stream's reaction backend.

App.js already renders this Feed component in place of a default FlatFeed; the snippet below shows the relevant lines.
```jsx
// client/App.js (inside <StreamApp>)
<PostComposer userId={userId} serverUrl={serverUrl} />
<Feed userId={userId} />
```
This renders your custom feed component inside the StreamApp context, giving it access to the authenticated session for API calls.
To test the flow:

- Start both services (npm run dev).
- Sign in as Alice in one simulator and Bob in another.
- Follow Bob from Alice's feed, then post from Bob.

That flow confirms the follow endpoint and feed propagation are working correctly.
The best part of Stream’s feed architecture is real-time delivery. You do not have to manage socket connections or polling intervals manually. The Stream React Native SDK opens a WebSocket connection on initialization and automatically pushes new activities or reactions to subscribed feeds.
When a user posts, likes, or follows someone, other users see updates instantly in their FlatFeed.
Each feed (for example, timeline:alice) subscribes to Stream’s WebSocket gateway. When the server or another client adds an activity to any feed that timeline:alice follows, Stream broadcasts that event to connected clients. The React Native SDK receives those updates and automatically re-renders the list.
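If you ever need to hook into these events outside the prebuilt components, the getstream client also exposes a low-level subscription. A sketch, assuming a client-side client created with connect(apiKey, token, appId):

```js
// Low-level realtime subscription with the getstream client
const timeline = client.feed('timeline', 'alice');

timeline.subscribe((data) => {
  // data.new holds added activities; data.deleted holds removed activity ids
  console.log('realtime update', data);
}).then(
  (subscription) => {
    // keep a reference; call subscription.cancel() to stop listening
  },
  (err) => console.error('subscribe failed', err)
);
```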
You can confirm this behavior by opening two simulator windows side by side: sign in as Alice in one and Bob in the other, then post from Bob and watch Alice's timeline update without a manual refresh.
Add log listeners to the FlatFeed for visibility:

```jsx
// client/components/Feed.js (FlatFeed with debug logging)
<FlatFeed
  feedGroup="timeline"
  userId={userId}
  notify
  options={{ withRecentReactions: true }}
  renderActivity={renderActivity}
  onAddReaction={(reaction) => console.log("New reaction:", reaction)}
  onRemoveReaction={(reaction) => console.log("Reaction removed:", reaction)}
  onRefresh={() => console.log("Feed refreshed")}
  onLoadMore={() => console.log("Loading more activities")}
  style={{ flex: 1 }}
/>
```
Each time you post or like, you’ll see Stream events arriving through the console in real time.
If real-time updates do not appear:

- Confirm the client is connecting with the apiKey, appId, and token returned by /session (the values in the server's .env must match).
- Restart the token server and the Expo bundler, then sign in again.

This section covers the production concerns you will face after the prototype. It focuses on security, moderation, scalability, observability, and cost control. Each subsection includes concrete steps and code when applicable.
Do not ship the STREAM_API_SECRET in your client. If a secret leaks, all app feed operations can be abused. Tokens issued to clients must be scoped and rotated to limit the blast radius.
Add a simple middleware that validates a server session token before returning a Stream token. This example uses a placeholder function, validateAppSession, which you must replace with your actual authentication check.
```js
// server/auth-middleware.js
async function requireSession(req, res, next) {
  const auth = req.header("Authorization"); // Bearer <session-token>
  if (!auth) return res.status(401).json({ error: "no auth" });
  const token = auth.replace("Bearer ", "");
  const valid = await validateAppSession(token); // implement with your auth system
  if (!valid) return res.status(403).json({ error: "invalid session" });
  req.userId = valid.userId;
  return next();
}

module.exports = { requireSession };
```
Use it on the session endpoint:
```js
// server/index.js (use the middleware)
const { requireSession } = require('./auth-middleware');

app.post("/session", requireSession, async (req, res) => {
  const userId = req.userId;
  const token = serverClient.createUserToken(userId);
  return res.json({
    apiKey: process.env.STREAM_API_KEY,
    appId: process.env.STREAM_APP_ID,
    token,
    userId,
  });
});
```
Provide a renew endpoint so the client can request a new Stream token without having to re-enter credentials.
app.post("/session/renew", requireSession, async (req, res) => {
const userId = req.userId;
const token = client.createUserToken(userId);
return res.json({ token, userId, issuedAt: Date.now() });
});
Client-side strategy:

- Keep the Stream token in memory only; do not persist it in long-lived storage.
- When the token expires or a request is rejected, call /session/renew to fetch a fresh token without re-entering credentials.

Implement server-side validation of activity payloads to prevent the storage of malformed or malicious content. Reject overly long text, strip dangerous HTML, and sanitize attachments.
Sample validation before posting:
```js
function validatePostPayload({ actor, text }) {
  if (!actor || !text) return { ok: false, reason: "missing actor or text" };
  if (typeof text !== "string" || text.length > 280) return { ok: false, reason: "text too long" };
  // apply a simple profanity filter or call an external moderation service here
  return { ok: true };
}

app.post("/post", requireSession, async (req, res) => {
  const { text } = req.body;
  const check = validatePostPayload({ actor: req.userId, text });
  if (!check.ok) return res.status(400).json({ error: check.reason });
  // add activity code here
});
```
Stream provides moderation tools in the dashboard. For higher assurance:

- Hold new posts in a review state (for example, add them with { status: "pending" }), and surface flagged items in your admin UI.
- Receive flag events via a webhook and route them into your moderation workflow.

Simple webhook receiver stub:
```js
// server/webhook.js
app.post("/webhook/stream", express.json(), async (req, res) => {
  // verify the origin or the signature if Stream provides one
  const event = req.body;
  // example: event.type === "flag" => take action
  console.log("stream webhook", event);
  res.sendStatus(200);
});
```
If you need tighter moderation, run new posts through an automated classifier (an external API or an in-house one) in a background worker before making them visible in public feeds. For example, add the activity to a staging feed, scan it, then move to the public timeline if it passes.
There are two standard models:

- Push (fan-out on write): each new activity is copied to followers' timelines when it is written, so reads are fast.
- Pull (fan-in on read): follower timelines are assembled at read time, so writes are cheap at the cost of slower reads.

If you onboard users or migrate large datasets, use addActivities to write many activities in a single request.
Example of batch writes:
```js
const activities = [
  { actor: "user:alice", verb: "post", object: "post:1", text: "hello" },
  { actor: "user:bob", verb: "post", object: "post:2", text: "hi" },
];
await serverClient.feed("user", "alice").addActivities(activities);
```
Offload fan-out triggers, indexing, or expensive enrichment to a worker queue. Use Bull, Bee-Queue, or a cloud queue to process background jobs. Keep the Express request path fast by enqueuing work instead of doing long operations in-line.
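As a sketch of that pattern with Bull (the queue name and Redis URL are illustrative), the request path only enqueues, and a worker does the heavy lifting:

```js
// server/queue.js (illustrative sketch using Bull; a local Redis is assumed)
const Queue = require('bull');

const enrichQueue = new Queue('feed-enrichment', 'redis://127.0.0.1:6379');

// Producer: enqueue from the request path and return immediately
async function enqueueEnrichment(activityId) {
  await enrichQueue.add({ activityId }, { attempts: 3, backoff: 5000 });
}

// Worker: process jobs off the hot path
enrichQueue.process(async (job) => {
  const { activityId } = job.data;
  // expensive enrichment / indexing happens here
  console.log('enriching', activityId);
});

module.exports = { enqueueEnrichment };
```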
Monitor Stream API responses for rate limit errors. Implement exponential backoff and idempotent retries for operations like follow/unfollow and addActivity. For idempotency, add a client-supplied foreign_id to activities and use upsert semantics when possible.
Example activity with foreign_id:
```js
const activity = {
  actor: "user:alice",
  verb: "post",
  object: "post:12345",
  foreign_id: "post:12345",
  text: "hello again",
};
await serverClient.feed("user", "alice").addActivity(activity);
```
Track key signals: Stream API latency and error rates, rate-limit responses, token issuance volume, and feed read/write throughput.
Log structured events with fields such as userId, operation, status, and durationMs. Ship logs to a central store, such as ELK, Datadog, or a cloud logging service. Alert on unusual spikes in errors or rate limit responses.
Example minimal logging:
```js
const start = Date.now();
try {
  await serverClient.feed("user", "alice").addActivity(activity);
  console.log(JSON.stringify({ userId: "alice", op: "addActivity", status: "ok", dur: Date.now() - start }));
} catch (err) {
  console.error(JSON.stringify({ userId: "alice", op: "addActivity", status: "error", msg: err.message }));
}
```
Decide how long you will retain activities and user data. Stream allows you to delete activities via client.feed(...).removeActivity(activityId).
Provide endpoints for exporting and deleting data. When a user requests deletion, remove all required activities and user data, and document the flow.
Example remove activity:
await client.feed("user", "alice").removeActivity("post:12345");
The compact table below provides a quick side-by-side view, followed by a deeper analysis of each key area.
| Aspect | DIY feed (self-managed) | Stream (managed feed API) |
|---|---|---|
| Time to initial prototype | Weeks | Minutes to a few hours |
| Real-time updates | Implement sockets, queueing | Built-in WebSocket pub/sub |
| Ranking and aggregation | Build and tune ranking logic | API-level ranking and personalization |
| Moderation | Build filters, dashboards | Dashboard and moderation tools available |
| Scalability | Database sharding and queue complexity | Managed sharding and fan-out |
| Operational burden | High: backups, scaling, incidents | Lower: vendor-managed infrastructure |
| Cost predictability | DevOps and infra costs can be high | Usage-based; predictable with monitoring |
| Feature velocity | Slow for advanced features | Fast: reactions, enrichment, personalization |
DIY requires designing a feed schema, writing aggregation logic, implementing pagination and cursors, and building real-time delivery. Teams often underestimate this work. With Stream, you can focus on UX and client features, as aggregation and delivery are handled.
DIY real-time means maintaining socket servers, reconnection strategies, and backfills. Stream provides managed sockets and backfills for disconnected clients, lowering the engineering overhead.
Implementing an effective "For You" ranking function requires signals, offline model training, and heuristics for freshness and relevance. Stream provides ranking and personalization primitives, and you can feed your own signals for hybrid ranking.
DIY requires a moderation pipeline, admin UI, and tooling to handle flags. Stream includes moderation tooling and methods for pulling flagged items into administrative workflows.
DIY places maintenance for backups, scale testing, sharding, and incident response on your team. With a managed service, the vendor handles the heavy lifting, but you still own client-side resilience and server-side token issuance.
You have built a fully functional Twitter-style microblog using React Native and Stream’s feed infrastructure. The project demonstrates how to separate client experience from backend complexity, keeping your mobile layer lightweight while Stream handles aggregation, ranking, and real-time synchronization.
Your app now supports demo sign-in for Alice and Bob, post composition, follow and unfollow, likes and comments, and a real-time aggregated timeline.
This setup works well for prototypes, MVPs, or small production apps where the goal is to quickly validate feed functionality. Once you move beyond this baseline, consider next steps such as profile screens, image attachments, and moderation workflows.
The Stream SDK for React Native already exposes most of these features, so extending the app will involve more UI composition than backend code. To continue exploring, review Stream’s official documentation at Quick Start - React Native Activity Feeds Docs.
Additionally, to view a production-ready SwiftUI implementation, check out the Twitter clone project by Stream.
**Can I build this with Expo, or do I need a bare React Native project?** Yes. Expo works seamlessly with the React Native Activity Feed SDK and the Stream React Native SDK. You can run this tutorial entirely within an Expo-managed workflow without ejecting to a bare React Native project.
**Where is the feed data stored?** All activities, reactions, and feed updates are stored within Stream's managed backend. Each post is an activity document that includes metadata, such as actor, verb, and object, which Stream uses to aggregate and distribute updates.
**Can I add image uploads?** Yes. Stream supports image uploads through its content endpoint, and Expo's expo-image-picker module integrates easily with it. You can capture images locally, upload them, and attach their URLs to activities as media attachments.
**What happens if a user posts while offline?** You can cache or queue posts locally and replay them when the connection is restored. Use AsyncStorage or another persistent store to temporarily hold pending activities and resubmit them once the device is online (see the sketch below).
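A rough sketch of that queue-and-replay idea (the package, key, and helper names are illustrative, not part of the tutorial's code):

```js
// offlineQueue.js (illustrative sketch using AsyncStorage)
import AsyncStorage from '@react-native-async-storage/async-storage';

const KEY = 'pendingPosts';

// Queue a post while offline
export async function queuePost(post) {
  const raw = (await AsyncStorage.getItem(KEY)) || '[]';
  const pending = JSON.parse(raw);
  pending.push(post);
  await AsyncStorage.setItem(KEY, JSON.stringify(pending));
}

// Replay queued posts once the device is back online
export async function flushQueue(send) {
  const raw = (await AsyncStorage.getItem(KEY)) || '[]';
  const pending = JSON.parse(raw);
  for (const post of pending) await send(post); // e.g. send = POST /post
  await AsyncStorage.removeItem(KEY);
}
```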
**How much does Stream cost for a prototype?** Stream offers a developer plan that includes up to 3 million API calls per month, which is sufficient for most prototypes and small apps. You can monitor usage and upgrade if needed through the Stream dashboard.
2025-12-26 03:31:23
I’m currently building an AI-based face similarity (look-alike) search for OF models as part of a real-world side project.
The dataset contains 100,000+ public OF model images, and the goal is to help users discover visually similar OF models based on facial features rather than usernames or text-based search.
This is not identity verification — the focus is purely on visual similarity.
The system allows users to upload an image and receive a list of OF models with similar facial characteristics.
The intent is to support visual discovery, where perceived similarity matters more than exact identity matching.
Similarity over identity
The system ranks faces by perceived similarity (look-alike matching), not by strict identity verification.
Low tolerance for false positives
Returning visually different faces as “similar” is more harmful than missing a potential match.
Real-world images
The dataset consists of non-studio images with varying lighting, poses, resolutions, and overall quality.
Scalability
The solution needs to scale beyond 100k+ images without significant drops in accuracy or performance.
At the moment, the entire pipeline runs on CPU only.
The setup looks like this: face detection, alignment, embedding generation, and similarity search all run on CPU over the 100k+ image corpus.
At this scale, the system works reasonably well, but both accuracy and performance are starting to become limiting factors.
Face embeddings are currently generated using InsightFace, specifically the buffalo_l model bundle.
The pipeline includes:

- Face detection and alignment
- Embedding generation with the buffalo_l model
- Similarity ranking over the stored embeddings

This provides a solid baseline, but for look-alike matching, small inaccuracies are very noticeable.
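For context, the baseline roughly corresponds to something like this (a sketch; the helper names and brute-force search are illustrative, not the project's exact code):

```python
# Baseline sketch: detection + embedding with InsightFace's buffalo_l bundle,
# then cosine-similarity ranking over stored embeddings.
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")  # detection + ArcFace recognition bundle
app.prepare(ctx_id=-1)                # negative ctx_id = CPU-only inference

def embed(image_bgr):
    """Detect, align, and embed the most prominent face (or return None)."""
    faces = app.get(image_bgr)
    if not faces:
        return None
    return faces[0].normed_embedding  # L2-normalized 512-d vector

def top_k(query_emb, gallery, k=10):
    """gallery: (N, 512) matrix of normalized embeddings."""
    sims = gallery @ query_emb        # cosine similarity via dot product
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]
```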
As the dataset grows, several issues become more apparent: queries get slower, and borderline false positives surface more often in the top results. Because this is a look-alike use case, even small errors can significantly affect perceived quality.
I’m planning to migrate the pipeline to GPU-based inference, but I want to make sure the model choice justifies the move.
Some of the questions I'm evaluating: whether a newer embedding model meaningfully outperforms buffalo_l for look-alike ranking, and how much GPU inference improves throughput at this scale.
If I’m going to reprocess 100k+ OF model images, I want to do it with the right model.
I'm particularly interested in models that:

- Handle real-world image quality (lighting, pose, resolution)
- Keep false positives low
- Scale beyond 100k+ images without accuracy or performance drops
I’m open to both open-source and commercial solutions.
This work is part of a discovery platform where users can upload an image and find visually similar OF models using AI-based face similarity.
The project is called Explore.Fans, and face similarity search is one of its core components.
👉 https://explore.fans
(Shared only for technical context.)
If you've worked with face similarity or face recognition models at scale, I'd really appreciate your input on model choice and pipeline design.
Thanks in advance — happy to share more details if helpful.
Have a wonderful holiday!
2025-12-26 03:31:04
In today’s AI landscape, we are witnessing a paradox: as systems become more capable, they become less comprehensible. The current trajectory prioritizes raw power over transparency, leading to the Black Box era.
Jan Klein is a key figure challenging this trajectory. His work at the intersection of architecture, standardization, and ethics advocates for a shift from systems that merely function to systems that can be intuitively understood. This evolution is known as Understandable AI (UAI).
Klein’s work is anchored in the Einsteinian principle:
“Everything should be made as simple as possible, but not simpler.”
In the context of AI, this is not about reducing capability, but about eliminating unnecessary complexity through code clarity and modular design.
Architectural Simplicity
Rather than managing millions of opaque parameters, Klein advocates for modular architectures where data flows are traceable.
Cognitive Load Reduction
A truly intelligent system should not require a manual; it should adapt to the user’s mental model, making decisions that are logically consistent with human reasoning.
While the industry currently focuses on Explainable AI (XAI)—which attempts to interpret AI decisions after they occur—Klein proposes Understandable AI (UAI) as an intrinsic design standard.
| Feature | Explainable AI (XAI) | Understandable AI (UAI) |
|---|---|---|
| Timing | Post-hoc (Explanation after the fact) | Design-time (Intrinsic logic) |
| Method | Approximations and heat maps | Logical transparency and reasoning |
| Goal | Interpretation of a result | Verification of the process |
The “Explainability Trap” occurs when post-hoc explanations give a false sense of security. UAI provides concrete solutions for high-stakes sectors.
Klein is a driving force within the World Wide Web Consortium (W3C), defining how the future web handles intelligence.
AI KR (Artificial Intelligence Knowledge Representation)
A common language enabling AI systems to share context and verify conclusions with semantic interoperability.
Cognitive AI
Models reflecting human thinking—planning, memory, abstraction—transforming AI into a genuine assistant rather than a statistical tool.
As AI enters regulated sectors such as law, finance, and insurance, black-box systems become a legal liability.
“The intelligence of a system is worthless if it does not scale with its ability to be communicated.”
Klein emphasizes the “Simple as Possible” mandate. AI architecture must be stripped of unnecessary layers so every function remains visible and auditable. Simplicity is not a reduction of intelligence—it is its highest form.
UAI represents the next revolution because the “Bigger is Better” era of AI has reached its social and ethical limit. While computational power has produced impressive results, it has failed to produce Trust.
Without trust, AI cannot be safely integrated into medicine, justice, or critical infrastructure.
The revolution led by Jan Klein redefines intelligence itself—shifting focus from massive parameter counts to Clarity. In this new era, an AI’s value is measured not only by output, but by its ability to be audited, controlled, and understood.
By adhering to the principle of Simple as Possible, Klein ensures that humanity remains the master of its tools. UAI is the bridge between human intuition and machine power, built to ensure technology serves humanity rather than dominating it through complexity.
Jan Klein
CEO @ dev.ucoz.org
2025-12-26 03:19:06
Most websites try to do more over time. More features, more content, more users, more engagement loops.
I wanted to see what would happen if I did the opposite.
So I built a site that will only ever show one public sentence.
That’s it.
No feed. No profiles. No infinite scroll. No accounts. Just one sentence that everyone sees when they visit the homepage.
When the site opens on New Year's 2026, anyone will be able to change that sentence, but only by paying more than the last person did. Whatever they pay becomes the new base price. The next person has to beat it.
Every overwrite will be logged permanently with a timestamp, price, and position in history.
Why build something this constrained?
Because most of the internet removes friction from speech entirely. Posting is cheap. Deleting is easy. Context disappears. Consequences are temporary.
I’m curious what happens when you introduce three constraints at once:
• Scarcity: there’s only one slot
• Cost: participation isn’t free
• Permanence: history is public and immutable
Those constraints already change how people talk about the idea, and I’m interested to see how they change behavior once it’s live.
Early messages will probably be jokes. As the price rises, people may hesitate. Messages may get shorter. Intent may become clearer. Eventually, the cost itself could become part of the meaning of the sentence.
The same words feel different at $5 than they do at $5,000.
The mechanics (kept deliberately simple)
When it launches, the system will work like this:
• One sentence is visible at a time
• To overwrite it, you pay the current base price plus a per-character cost
• Whatever you pay becomes the next base price
• The price never decreases
• All past sentences are preserved forever in a public history
There are no growth hacks here. No retention tricks. The system is intentionally small so the behavior is easy to observe.
What I’m curious to learn
I don’t know if people will treat this as a joke, a billboard, a piece of art, or something else entirely.
What I want to observe is whether introducing cost and permanence changes:
• what people say
• how long they wait
• whether they feel ownership or hesitation
• how value alters language
If it fizzles out, that’s still a result.
If it turns into something people argue about, that’s interesting too.
This isn’t a startup (at least not in the usual sense)
I’m not trying to optimize this into a product with funnels and metrics. It’s closer to an experiment that happens to be implemented in code.
The goal isn’t growth. It’s observation.
Why I’m sharing this here
DEV has always been a place where people build strange things just to see what happens. This project comes from that same impulse: curiosity first, polish second, certainty last.
If you were building something intentionally small or constrained, what rule would you impose to meaningfully change user behavior?
Sometimes the most interesting systems aren’t the ones that grow fastest, but the ones that force people to pause.