2026-01-12 15:52:13
The EU proposed the Digital Omnibus on November 19, 2025, updating consent and cookie handling requirements for websites operating in Europe.
This affects any site with EU traffic, regardless of where the company is based. The changes impact technical implementation, not just legal compliance.
The Digital Omnibus introduces machine-readable consent signals. Instead of relying only on click-based consent banners, websites must now process automated signals from browsers and operating systems.
This works through existing web standards. Browsers send headers or use APIs to communicate user preferences. Websites read these signals and apply them to cookie and tracking decisions.
Sites still need traditional consent interfaces for users who haven't set browser-level preferences. But the architecture must support both manual and automated consent flows.
Previous setup: User visits site, sees banner, clicks accept or reject, site stores preference in a cookie or local storage.
New setup: Browser checks if user set global preferences, sends signal to site, site applies preference automatically, only shows banner if no signal exists.
The backend needs logic to handle both scenarios. Check for automated signals first. Fall back to manual consent collection if no signal present.
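A minimal sketch of that precedence, assuming the Global Privacy Control header (Sec-GPC) as the automated signal; the helper names and purpose keys are illustrative, not requirements taken from the regulation text:

```python
# Sketch only: check an automated browser signal first, then any stored manual
# choice, and show the banner only if neither exists. "Sec-GPC" (Global Privacy
# Control) and the purpose names are illustrative assumptions.

def consent_from_signals(headers: dict) -> dict | None:
    """Derive a consent decision from browser-level signals, or None if absent."""
    if headers.get("Sec-GPC") == "1":  # user signalled an opt-out of tracking
        return {"analytics": False, "advertising": False}
    return None

def resolve_consent(headers: dict, stored_choice: dict | None):
    """Return (consent, show_banner) following signal-first precedence."""
    signal = consent_from_signals(headers)
    if signal is not None:
        return signal, False          # apply the automated signal, no banner
    if stored_choice is not None:
        return stored_choice, False   # user already answered the banner
    return None, True                 # no signal, no choice: ask via banner

# A browser sending Sec-GPC never sees the banner:
print(resolve_consent({"Sec-GPC": "1"}, stored_choice=None))
```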
GDPR and ePrivacy previously operated as separate regulations. Developers implemented different solutions for each, sometimes with conflicting approaches.
The Digital Omnibus merges these into one framework. Same consent standards apply whether dealing with cookies, tracking pixels, analytics, or data collection forms.
This simplifies architecture. One consent system covers all use cases instead of maintaining separate implementations for different regulation types.
Sites must maintain detailed consent logs. Every interaction needs recording: timestamp, user identifier, what was consented to, method of consent collection.
These logs must survive server crashes, database migrations, and system updates. They need to be queryable for audits and accessible for user data requests.
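A rough example of what one log record could contain; the fields are an assumption based on the requirements above, not a prescribed schema:

```python
# Illustrative consent-log record. Field names are an assumption, not a schema
# mandated by the regulation; the point is a durable, queryable audit trail.
import json
import uuid
from datetime import datetime, timezone

def consent_log_entry(user_id: str, purposes: dict, method: str) -> dict:
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "purposes": purposes,   # e.g. {"analytics": True, "advertising": False}
        "method": method,       # e.g. "banner_click" or "browser_signal"
    }

entry = consent_log_entry("user-123", {"analytics": True}, "browser_signal")
print(json.dumps(entry, indent=2))  # append to durable storage, not a cookie
```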
Consent Management Platforms handle this infrastructure. They provide APIs for consent capture, storage systems for logs, and admin interfaces for audit access.
Building this from scratch takes significant development time. Most teams integrate existing CMP solutions rather than creating custom systems.
Strictly necessary cookies don't require consent. These include session management, authentication, load balancing, and security features.
Everything else needs permission: analytics, advertising, social media widgets, chat plugins, recommendation engines.
The Digital Omnibus tightens definitions around what qualifies as essential. Teams need to audit their cookie usage and categorize each one accurately.
Misclassifying cookies creates compliance risk. A cookie marked essential that isn't truly necessary for site function violates the rules.
Start by checking if the current consent system supports machine-readable signals. Many older implementations only handle click events.
Update the consent checking logic. Before setting any non-essential cookie, verify consent through either automated signals or manual user action.
Implement proper logging. Every consent decision needs recording with full context for regulatory review.
Test across different browsers and operating systems. Signal implementations vary, and the system needs to handle all variations correctly.
Avoid treating consent as a one-time implementation. Regulations evolve, and systems need updates to stay compliant.
Avoid storing consent state only in cookies or local storage. These can be cleared, losing the consent record even though the interaction happened.
Don't assume all consent tools are equivalent. Different platforms offer different features, and choosing the wrong one creates technical debt.
Document cookie purposes clearly. Users and regulators need to understand what each cookie does and why consent is requested.
Consent checking happens on every page load. Inefficient implementations create latency.
Cache consent states where possible. Use fast lookups instead of querying databases repeatedly.
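A minimal caching sketch, assuming an in-process dictionary and a five-minute TTL; a real system might use Redis or a signed cookie instead:

```python
# Minimal sketch of caching consent lookups so every page load doesn't hit the
# database. The in-process dict and five-minute TTL are illustrative choices.
import time

_cache: dict[str, tuple[dict, float]] = {}
TTL_SECONDS = 300

def get_consent(user_id: str, load_from_db) -> dict:
    hit = _cache.get(user_id)
    if hit is not None and time.time() - hit[1] < TTL_SECONDS:
        return hit[0]                      # fast path: cached decision
    consent = load_from_db(user_id)        # slow path: fetch and cache
    _cache[user_id] = (consent, time.time())
    return consent

print(get_consent("user-123", lambda _uid: {"analytics": True}))
```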
Load consent interfaces asynchronously. Don't block page rendering waiting for consent systems to initialize.
Monitor consent system performance separately from main application metrics. Slowdowns here affect user experience across the entire site.
Consent logs contain personal data, so they fall under the same protection requirements as other user information.
Encrypt sensitive fields. Control access to consent records. Implement retention policies that match regulatory requirements.
When users request data deletion, consent logs usually remain for legal compliance purposes. Document this in privacy policies and deletion workflows.
The Digital Omnibus creates clearer technical requirements for consent implementation. Sites that build proper systems now avoid retrofitting later.
Plan the architecture to support both current and future consent mechanisms. Regulations will continue evolving, and flexible systems adapt easier.
Use established tools where possible. Custom implementations take time and create ongoing maintenance burden.
The technical changes align with improving user experience. Fewer intrusive banners and smoother interactions benefit everyone when implemented correctly.
2026-01-12 15:51:11
Hey fellow devs 👋!
As a developer who’s been building cross-platform apps for years, I’ve been frustrated to no end by app upgrades over the past six months:
I’d had enough. So I spent 9 months turning all my pain points and desired features into one project – the one I’m excited to share today: UpgradeLink, a fully open-source, all-in-one cross-platform app upgrade management system.
My core goal with this project is simple: to save developers from reinventing the wheel for app upgrades. One system to handle version iteration across all platforms.
Let me start with my core requirements – I wanted an upgrade system that’s full-platform compatible, easy to deploy, and customizable – but nothing on the market fit the needs of small teams or individual developers. So I built the foundation on Go Zero + simple-admin, and added these design choices based on my own hard-learned lessons:
This was my top priority – I never wanted to write separate upgrade logic for Windows vs. Mac again. So UpgradeLink natively supports:
I tested it myself: integrating it into my Tauri tool took just 10 minutes – way faster than writing custom scripts.
With handwritten upgrade scripts, my biggest pain was lack of granular control. So I added these to UpgradeLink:
As an indie developer, I don’t have time to maintain complex deployment workflows – so:
I’ve put the full deployment docs in the repo README – including Docker Compose and cluster deployment options, all validated by me personally.
I didn’t build this to create a "bloated, all-encompassing framework" – purely to solve a pain point for myself and fellow developers. After all, we want to focus on building our apps, not spending 90% of our time setting up upgrade services.
Right now, UpgradeLink has been running smoothly in several of my own open-source projects (like note-gen, MarkFlowy, and other Tauri tools) for almost a month – that’s why I feel confident releasing it as open source.
This project is still evolving fast, and I can’t do it all alone. I’d love your help:
If this tool helps you avoid the same upgrade headaches I faced, please give the repo a ⭐️ Star! It’s the biggest motivation for me to keep maintaining it.
GitHub Repo: https://github.com/toolsetlink/upgradelink
Gitee Repo: https://gitee.com/toolsetlink/upgradelink
Finally – the best part of open source is turning your own struggles into tools that help others. I hope UpgradeLink saves you some time and frustration!
Let’s make app upgrades easier together! 🚀
2026-01-12 15:42:31
I still remember the day I felt confident with Python functions.
I could write def without hesitation.
I could pass arguments.
I could even explain return to someone else.
And yet… my program was broken.
No errors.
No crashes.
Just logic that looked right and behaved wrong.
I was writing a small script. Nothing serious.
def calculate_total(price, tax):
print(price + tax)
total = calculate_total(100, 10)
print(total)
When I ran it, I saw:
110
None
My first thought was:
“Python is being weird.”
My second thought was worse:
“Maybe I don’t actually understand functions.”
I stared at the code longer than I’d like to admit.
The function worked.
The math was correct.
The number printed.
So why was total equal to None?
Then it hit me.
The function didn’t return anything.
It only talked to me, not to the program.
print() had trained me badly.
Every time I printed something, my brain thought:
“The function produced a value.”
But it didn’t.
It only showed output on the screen.
def calculate_total(price, tax):
return price + tax
One word changed everything.
Now the function didn’t just do work.
It communicated.
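A quick re-run of the same two lines, now with return in place, shows the difference:

```python
def calculate_total(price, tax):
    return price + tax   # the value goes back to the caller, not just the screen

total = calculate_total(100, 10)
print(total)  # 110, not None
```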
Feeling smarter, I moved on.
def increment(value):
value += 1
count = 10
increment(count)
print(count)
Still 10.
I felt betrayed again.
I passed the variable.
I modified it.
So why didn’t it change?
Because the function never touched count.
It received a temporary name pointing to the same value.
When execution ended, that name vanished.
No magic.
No shared memory.
Just execution boundaries.
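The fix follows the same lesson: return the new value and let the caller reassign it. A small sketch of that idea:

```python
def increment(value):
    return value + 1       # build a new value instead of mutating the argument

count = 10
count = increment(count)   # the caller decides what actually changes
print(count)               # 11
```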
Looking back, I realized something uncomfortable.
I wasn’t treating functions as execution steps.
I was treating them like containers.
Places where I dumped logic and hoped it worked.
But Python doesn’t work that way.
A function:
- Starts execution only when called
- Lives briefly
- Dies immediately after returning
- Leaves behind only what you return
Nothing else survives.
That day changed how I wrote Python.
I stopped asking:
“Does this function run?”
I started asking:
“What does this function return?”
I stopped trusting printed output.
I stopped assuming variables would change.
I stopped writing functions that felt correct but failed quietly.
Later, reading real-world code, I noticed something.
Professional functions:
Boring code is predictable code.
If you’ve ever:
You’re not alone.
I broke down this entire experience—step by step—through real mistakes developers make here:
👉 How to Define and Call Functions in Python
🔗 https://emitechlogic.com/define-and-call-functions-in-python/
Python functions aren’t confusing.
Our mental models are.
Once you understand that:
Functions stop surprising you.
And that’s when Python finally starts feeling calm instead of chaotic.
2026-01-12 15:40:08
TL;DR: I built a production-grade e-commerce cart in 14 hours to explore Laravel Volt. This article focuses on the engineering decisions that matter: concurrency handling, partial failure recovery, data snapshotting, and query optimization. Full source code available on GitHub.
Over the holiday break, I carved out a focused window to build something I'd been meaning to explore: a production-grade e-commerce cart using Laravel Volt. The entire project—from first commit to final polish—took roughly 14 hours spread across December 29-30, 2025.
This wasn't a side project for fun. It was part of a technical assessment where I was given flexibility to choose my stack. I opted to try Laravel Volt for the first time, knowing it would be a learning experience. Ultimately, the team was looking for a different implementation approach, and we parted ways amicably. But the exercise itself was valuable—it forced me to think deeply about correctness, concurrency, and the kind of engineering judgment that separates junior implementations from senior ones.
📦 The complete source code is available here: https://github.com/Ojsholly/laravel-ecommerce-cart
This article isn't about the UI or feature completeness. It's about the backend decisions that matter when you're building systems that handle real money, real inventory, and real user expectations.
| Layer | Technology |
|---|---|
| Framework | Laravel 12 |
| Language | PHP 8.4 |
| Frontend | Livewire 3 + Volt (first time using Volt) |
| Styling | Tailwind CSS |
| Database (Dev) | PostgreSQL |
| Database (Test) | SQLite (in-memory) |
| Background Jobs | Laravel Queues & Scheduler |
| Testing | Pest |
| Version Control | GitHub |
The choice of Volt was deliberate—I wanted to see how it felt to build reactive components without writing explicit Livewire classes. The learning curve was steeper than expected, but it paid off in reduced boilerplate.
When you're building an e-commerce cart, the interesting problems aren't in the UI. They're in the edge cases:
These are the problems that junior engineers often miss—and the ones that cause production incidents.
The most critical part of checkout is stock validation. If you don't lock rows properly, you'll oversell inventory.
Here's the naive approach (don't do this):
// ❌ Race condition: two requests can both see stock = 1
$product = Product::find($productId);
if ($product->stock_quantity >= $quantity) {
$product->decrement('stock_quantity', $quantity);
}
The problem: Between checking stock and decrementing it, another request can sneak in. Both requests see stock_quantity = 1, both proceed, and you've just sold two units of an item you only had one of.
| Timeline | User A | User B | Database Stock |
|---|---|---|---|
| T1 | Read stock → sees 1 | | 1 |
| T2 | | Read stock → sees 1 | 1 |
| T3 | Check: 1 >= 1 ✅ | | 1 |
| T4 | | Check: 1 >= 1 ✅ | 1 |
| T5 | Decrement → stock = 0 | | 0 |
| T6 | | Decrement → stock = -1 ❌ | -1 |
Result: Both purchases succeed, but inventory is now negative. You've oversold.
// ✅ Lock the row for the duration of the transaction
$product = Product::lockForUpdate()->find($productId);
if ($product && $product->hasStock($quantity)) {
$product->decrement('stock_quantity', $quantity);
}
This is wrapped in a database transaction, so the lock is held until the transaction commits. No other request can read or modify that row until we're done.
public function processCheckout(Cart $cart): Order
{
return DB::transaction(function () use ($cart) {
[$availableItems, $unavailableItems] = $this->validateStock($cart);
if ($availableItems->isEmpty()) {
throw new InsufficientStockException('All items are out of stock.');
}
$order = Order::create([/* ... */]);
foreach ($availableItems as $item) {
$this->createOrderItem($order, $item);
$this->decrementStock($item->product, $item->quantity);
}
$this->removeAvailableItemsFromCart($cart, $availableItems);
return $order;
});
}
private function validateStock(Cart $cart): array
{
$availableItems = collect();
$unavailableItems = collect();
foreach ($cart->items as $item) {
$product = Product::lockForUpdate()->find($item->product_id);
if ($product && $product->hasStock($item->quantity)) {
$availableItems->push($item);
} else {
$unavailableItems->push($item);
}
}
return [$availableItems, $unavailableItems];
}
💡 Key Takeaway: Without row-level locking, you'll have angry customers who paid for items you don't have. With it, you have a system that behaves correctly under load.
Here's a scenario that trips up a lot of implementations:
What should happen?
| Approach | Junior Implementation ❌ | Senior Implementation ✅ |
|---|---|---|
| Error Handling | Fail entire checkout with generic error | Identify which items are available |
| Cart Management | Delete all items (losing user intent) | Process only available items |
| Stock Handling | Proceed anyway and oversell | Leave unavailable items in cart |
| User Feedback | Confusing error message | Clear feedback on what was purchased |
// Separate available from unavailable items
[$availableItems, $unavailableItems] = $this->validateStock($cart);
// Only process available items
$pricing = $this->priceCalculationService->calculateOrderPricing($availableItems);
// Remove only what was purchased
$this->removeAvailableItemsFromCart($cart, $availableItems);
The key insight: Preserve user intent. If they wanted 5 items and only 4 are available, sell them 4 and keep the 5th in their cart. Don't make them start over.
On the frontend, unavailable items are clearly distinguished:
💡 Key Takeaway: This isn't just good UX—it's correct behavior. Partial failure is a reality in distributed systems. Handle it gracefully.
Here's a mistake I see often: storing only foreign keys in order items.
// ❌ What happens if the product is deleted or its price changes?
OrderItem::create([
'order_id' => $order->id,
'product_id' => $product->id,
'quantity' => $quantity,
]);
Six months later, the product is deleted or repriced. Now your order history is broken. You can't show customers what they actually paid for.
The fix: snapshot the product data at the time of purchase.
// ✅ Preserve historical accuracy
OrderItem::create([
'order_id' => $order->id,
'product_id' => $product->id,
'quantity' => $quantity,
'price_snapshot' => $product->price,
'product_snapshot' => $product->toSnapshot(),
]);
The toSnapshot() method captures everything needed to reconstruct the order:
public function toSnapshot(): array
{
return [
'id' => $this->id,
'uuid' => $this->uuid,
'name' => $this->name,
'description' => $this->description,
'price' => $this->price,
'images' => $this->images,
'primary_image' => $this->primary_image,
];
}
💡 Key Takeaway: Orders are legal documents. They need to be immutable and accurate, even if the underlying product data changes or disappears.
Low-stock notifications are a common feature. The naive implementation:
// ❌ Sends a notification every time stock decrements below threshold
if ($product->stock_quantity <= $lowStockThreshold) {
SendLowStockNotification::dispatch($product);
}
The problem: if stock is at 5 and the threshold is 10, you'll send a notification on every sale until stock hits 0. That's 5 emails for the same issue.
The fix: only notify when crossing the threshold.
private function decrementStock(Product $product, int $quantity): void
{
$lowStockThreshold = config('cart.low_stock_threshold', 10);
$stockBeforeDecrement = $product->stock_quantity;
$product->decrement('stock_quantity', $quantity);
$stockAfterDecrement = $product->fresh()->stock_quantity;
// Only notify when crossing from above to below threshold
if ($stockBeforeDecrement > $lowStockThreshold
&& $stockAfterDecrement <= $lowStockThreshold) {
SendLowStockNotification::dispatch($product);
}
}
💡 Key Takeaway: Notification fatigue is real. Admins will ignore your alerts if you spam them. Send one notification per threshold crossing, not one per sale.
Another common feature: daily sales reports sent via email. The challenge is aggregating data efficiently without N+1 queries.
Here's the job structure:
public function handle(): void
{
$stats = $this->calculateStats();
$recentOrders = $this->getRecentOrders();
$topProducts = $this->getTopSellingProducts();
Mail::to($adminEmail)->send(
new DailySalesReport($stats, $recentOrders, $topProducts, $this->date)
);
}
The interesting part is getTopSellingProducts(). You need to:
Here's how I did it:
private function getTopSellingProducts(): Collection
{
// Single query to get aggregated sales data
$topProductsData = Order::completed()
->whereDate('orders.created_at', $this->date)
->join('order_items', 'orders.id', '=', 'order_items.order_id')
->selectRaw('
order_items.product_id,
sum(order_items.quantity) as total_quantity,
sum(order_items.quantity * order_items.price_snapshot) as total_sales
')
->groupBy('order_items.product_id')
->orderByDesc('total_quantity')
->limit(5)
->get();
// Single query to fetch order items (which contain snapshots)
$orderItemsByProductId = OrderItem::whereIn('product_id', $topProductsData->pluck('product_id'))
->whereHas('order', fn($q) => $q->completed()->whereDate('created_at', $this->date))
->get()
->keyBy('product_id');
// Map aggregated data to snapshots
return $topProductsData->map(function ($item) use ($orderItemsByProductId) {
$orderItem = $orderItemsByProductId->get($item->product_id);
$snapshot = $orderItem->product_snapshot ?? [];
return [
'name' => $snapshot['name'] ?? 'Unknown Product',
'quantity' => $item->total_quantity,
'revenue' => number_format((float) $item->total_sales, 2),
];
});
}
💡 Key Takeaway: N+1 queries kill performance. Two queries for any number of products is better than N+1 queries for N products.
One pattern I stuck to throughout: keep business logic out of controllers and Livewire components.
Components should be thin:
public function placeOrder(): void
{
try {
$checkoutService = app(CheckoutService::class);
$order = $checkoutService->processCheckout($this->cart);
$this->dispatch('cart-updated');
$this->dispatch('notify', message: 'Order placed successfully!', type: 'success');
$this->redirect(route('orders.show', $order), navigate: true);
} catch (\Exception $e) {
$this->dispatch('notify', message: $e->getMessage(), type: 'error');
}
}
All the complexity lives in CheckoutService. This makes testing easier, logic reusable, and the codebase more maintainable.
This was my first time using Laravel Volt, and it was a mixed experience.
What I liked:
What I found challenging:
Would I use it again? For rapid prototyping, yes. For a large production app, I'd probably stick with traditional Livewire classes for better tooling and discoverability.
Building this cart in 14 hours was a forcing function. I couldn't afford to bikeshed or over-engineer. I had to focus on correctness first, polish second.
The things that mattered:
The things that didn't matter (in this timeframe):
💡 Key Takeaway: This is the kind of prioritization that defines senior engineering: knowing what to build, what to defer, and what to skip entirely.
The team ultimately went with a different tech stack, which is fine. Not every project is a fit, and that's okay. But the exercise itself was valuable—it forced me to think through problems I don't encounter in my day-to-day work, and it gave me a chance to explore Volt in a real-world context.
The repo is open source. Star it if you find the locking logic useful: https://github.com/Ojsholly/laravel-ecommerce-cart
The codebase includes:
It's not perfect, but it's correct. And in production systems, correctness is what matters.
Thanks for reading! If this helped you think differently about e-commerce cart implementation, consider sharing it with your team.
2026-01-12 15:38:25
Intro:
Citizen developers often face a subtle but critical challenge in Power Automate: many connectors look almost identical but behave very differently depending on whether they are tied to a personal account or an organizational account. Selecting the wrong connector can lead to Data Loss Prevention (DLP) violations, blocked flows, or compliance risks.
This article explores overlapping connectors, why they matter, and how to apply Design Thinking principles to make the right choice.
Why This Matters
| Category | Personal Connector | Business Connector | Learn More (URLs) | Typical Use Case | Why Business Is Preferred | Triggers/Actions Highlights | DLP Impact | Design Thinking Insight |
|---|---|---|---|---|---|---|---|---|
| File Storage | OneDrive | OneDrive for Business | https://learn.microsoft.com/connectors/onedrive/ ; https://learn.microsoft.com/connectors/onedriveforbusiness/ | Personal file automation | Built on SharePoint; enterprise sharing, legal hold, auditing | Advanced link sharing, file versioning | Personal often blocked | Is this file organizational? If yes → use Business connector |
| Spreadsheets | Excel Online (OneDrive) | Excel Online (Business) | https://learn.microsoft.com/connectors/excelonline/ ; https://learn.microsoft.com/connectors/excelonlinebusiness/ | Tables in personal OneDrive | Larger files, robust triggers across SharePoint/Teams | Table triggers, pagination | Personal often blocked | If file is in Teams/SharePoint → use Business connector |
| Note-Taking | OneNote | OneNote (Business) | https://learn.microsoft.com/connectors/onenote/ ; https://learn.microsoft.com/connectors/onenotebusiness/ | Personal notebooks | Shared team notebooks on SharePoint/OneDrive for Business | Page/section CRUD; search | Personal sometimes blocked | Shared notebooks require Business connector |
| Forms | Forms (Personal) | Microsoft Forms (Business) | https://learn.microsoft.com/connectors/microsoftforms/ | Personal surveys | Organizational data routing, governance, storage in SharePoint/Excel | When a new response is submitted; get response details | Personal often blocked | Is the form tied to work account? If yes → Business |
| Email | Outlook.com | Office 365 Outlook | https://learn.microsoft.com/connectors/outlook/ ; https://learn.microsoft.com/connectors/office365/ | Personal mail flows | Enterprise security, DLP, throttling for scale; deep integration | High-volume send limits, Teams hooks | Personal blocked in corporate | Always use Office 365 for work/school accounts |
| Calendar | Outlook.com Calendar | Office 365 Outlook (Calendar) | https://learn.microsoft.com/connectors/office365/ | Personal calendar automation | Teams meeting integration; shared calendars; org controls | Create/update events; get events | Personal often blocked | If Teams integration needed → Business connector |
| Tasks | Microsoft To Do | Planner / To Do (Business) | https://learn.microsoft.com/connectors/todo/ ; https://learn.microsoft.com/connectors/planner/ | Personal task lists | Team boards, assignments, buckets; enterprise auth | Create tasks, assign users, update status | Personal limited | Is the task collaborative? If yes → Planner |
| Cloud Storage | Dropbox | Dropbox for Business | https://learn.microsoft.com/connectors/dropbox/ | Personal file backup | Team folders, audit logs, admin controls | Create/Move files; list folders | Personal may be restricted | Enterprise Dropbox → Business connector |
| File Sharing | Box | Box for Business | https://learn.microsoft.com/connectors/box/ | Personal document sharing | Compliance features, admin policies, enterprise SSO | Upload/download, share links | Personal may be restricted | Use Business for retention/legal holds |
| Google Mail | Gmail | Gmail (Google Workspace) | https://learn.microsoft.com/connectors/gmail/ | Personal automation | Admin-controlled, org directory integration | Send email, read threads | Personal often blocked | Workspace accounts → Business connector |
| Google Drive | Google Drive | Google Drive (Workspace) | https://learn.microsoft.com/connectors/googledrive/ | Personal cloud storage | Shared drives, admin retention, security | Upload/download, list files | Personal may be restricted | Shared drives → Workspace connector |
| Google Calendar | Google Calendar | Google Workspace Calendar | https://learn.microsoft.com/connectors/googlecalendar/ | Personal scheduling | Shared calendars, room resources, admin policies | Create events, list calendars | Personal often blocked | Is calendar shared? If yes → Workspace connector |
| Contacts | Outlook.com Contacts | Office 365 Contacts | https://learn.microsoft.com/connectors/office365/ | Personal contacts | Global address list, Entra ID integration | Create/update contact; search | Personal blocked often | Org directory → Business connector |
Closing Recommendations:
2026-01-12 15:33:09
Chat history and memory allow agents to maintain context across conversations and remember user preferences, which enables agents to provide personalized experiences. Using the Microsoft Agent Framework, we can use in-memory chat message stores, persistent databases, and specialized memory services to cater to a variety of different use cases.
In this article, I'll show you a simple example of how we can use an Azure Cosmos DB Vector store to store conversations we have with an agent, and how we can retrieve conversations so that our agents can maintain context.
It's important to note that as I'm writing this (12th January 2026) the Microsoft Agent Framework is still in preview, so expect APIs and features to change as time goes by.
There are two main supported scenarios:
In-memory storage: This is when an Agent is built on a service, like OpenAI Chat Completion, that doesn't support in-service storage of chat history. Agent Framework stores the full chat history in-memory using the AgentThread object. In order to store the chat history to a third-party store, we can create a custom ChatMessageStore implementation to do so.
In-service storage: This is when an Agent is built on a service, like Azure AI Foundry Persistent Agents, that requires in-service storage of chat history. The framework stores the ID of the remote chat history in the AgentThread object, and no other chat history storage options are supported.
This article covers implementing a custom ChatMessageStore so that we can use Azure Cosmos DB to persist our agent conversations.
The Agent Framework stores chat history in-memory using the AgentThread object. The full chat history is stored in the thread object which is provided to the service (for example, OpenAI Chat Completion), on every run as context.
OpenAI Chat Completion is an example of a service that doesn't support in-service storage of chat history, but what we can do is use the Agent Framework to replace the default in-memory storage of chat history with a third-party storage option, such as Cosmos DB.
To use Azure Cosmos DB to store our chat messages, we can use the Azure CosmosDB NoSQL Vector Store connector. We'll need to install the following NuGet package:
dotnet add package Microsoft.SemanticKernel.Connectors.CosmosNoSql --prerelease
Once you've installed the package, we can construct the instance to our Azure CosmosDB NoSQL Vector store directly like so:
// Configure Cosmos DB as the Vector Store
var cosmosClient = new CosmosClient(config["cosmos-db-endpoint"], new AzureCliCredential(), new CosmosClientOptions()
{
UseSystemTextJsonSerializerWithOptions = JsonSerializerOptions.Default
});
var database = cosmosClient.GetDatabase(config["database-name"]);
var vectorStore = new CosmosNoSqlVectorStore(database);
When initializing our CosmosClient manually, we have to specify the UseSystemTextJsonSerializerWithOptions due to limitations in the default serializer.
There are a couple of different methods you can use to create a connection to Azure Cosmos DB, which are described in the documentation.
To create a custom ChatMessageStore, we need to implement the abstract ChatMessageStore class and implement some required methods.
internal class CosmosChatMessageStore : ChatMessageStore
{
private readonly CosmosNoSqlVectorStore _cosmosVectorStore;
public CosmosChatMessageStore(
CosmosNoSqlVectorStore cosmosVectorStore,
JsonElement serializedStoreState,
JsonSerializerOptions? jsonSerializerOptions = null)
{
this._cosmosVectorStore = cosmosVectorStore ?? throw new ArgumentNullException(nameof(cosmosVectorStore));
if (serializedStoreState.ValueKind is JsonValueKind.String)
{
this.ThreadDbKey = serializedStoreState.Deserialize<string>();
Console.WriteLine($"[CosmosChatMessageStore] Initialized with existing ThreadDbKey: {this.ThreadDbKey}");
}
else
{
Console.WriteLine($"[CosmosChatMessageStore] Initialized with no ThreadDbKey (new thread)");
}
}
public string? ThreadDbKey { get; private set; }
}
The ThreadDbKey is a unique key used to identify the chat history in the vector store on subsequent calls. This key is persisted as part of the AgentThread state, allowing the thread to be resumed later and continued with the same chat history.
In the constructor, we pass in the CosmosNoSqlVectorStore instance that we'll use to store our conversation history, along with the JsonElement for any previously stored state containing the ThreadDbKey. The constructor either starts a new conversation, if no ThreadDbKey exists, or resumes the existing conversation if it does.
We also need to implement three other methods:
- InvokedAsync - called at the end of an agent invocation to add the new messages to the store.
- InvokingAsync - called before an agent invocation to return the stored messages in the correct chronological order.
- Serialize - serializes the store's state (the ThreadDbKey) so the thread can be persisted and resumed later.
Let's take a look at the InvokedAsync method first:
public override async ValueTask InvokedAsync(InvokedContext context, CancellationToken cancellationToken = default)
{
this.ThreadDbKey ??= Guid.NewGuid().ToString("N");
Console.WriteLine($"[InvokedAsync] Storing {context.ChatMessageStoreMessages.Count()} messages with ThreadDbKey: {this.ThreadDbKey}");
var collection = this._cosmosVectorStore.GetCollection<string, ChatHistoryItem>("ChatHistory");
await collection.EnsureCollectionExistsAsync(cancellationToken);
var allNewMessages = context.RequestMessages.Concat(context.AIContextProviderMessages ?? []).Concat(context.ResponseMessages ?? []);
await collection.UpsertAsync(allNewMessages.Select(x => new ChatHistoryItem()
{
Key = this.ThreadDbKey + x.MessageId,
Timestamp = DateTimeOffset.UtcNow,
ThreadId = this.ThreadDbKey,
SerializedMessage = JsonSerializer.Serialize(x),
MessageText = x.Text
}), cancellationToken);
}
The purpose of this method is to save messages to the Vector Storage after we get a response from the agent.
If this is the first message, it will generate a ThreadDbKey. It will then collect all messages from the current turn (allNewMessages), then it will create new records for each message before inserting them into our Cosmos DB store.
Now let's take a look at the InvokingAsync method:
public override async ValueTask<IEnumerable<ChatMessage>> InvokingAsync(InvokingContext context, CancellationToken cancellationToken = default)
{
Console.WriteLine($"[InvokingAsync] Retrieving messages for ThreadDbKey: {this.ThreadDbKey}");
if (string.IsNullOrEmpty(this.ThreadDbKey))
{
Console.WriteLine($"[InvokingAsync] No ThreadDbKey - returning empty message list");
return [];
}
var collection = this._cosmosVectorStore.GetCollection<string, ChatHistoryItem>("ChatHistory");
await collection.EnsureCollectionExistsAsync(cancellationToken);
var records = collection.GetAsync(
x => x.ThreadId == this.ThreadDbKey, 10,
new() { OrderBy = x => x.Descending(y => y.Timestamp) },
cancellationToken);
List<ChatMessage> messages = [];
await foreach (var record in records)
{
Console.WriteLine($"[InvokingAsync] Retrieved record - Key: {record.Key}, Timestamp: {record.Timestamp}, MessageText: {record.MessageText?.Substring(0, Math.Min(50, record.MessageText?.Length ?? 0))}...");
messages.Add(JsonSerializer.Deserialize<ChatMessage>(record.SerializedMessage!)!);
}
messages.Reverse();
Console.WriteLine($"[InvokingAsync] Returning {messages.Count} messages in chronological order");
return messages;
}
This reads messages and is called before we send requests to the AI agent, providing our conversation history as context. It checks for the ThreadDbKey, queries Cosmos DB if it exists, and deserializes the messages from JSON back into ChatMessage objects.
Finally, let's take a look at the Serialize method:
public override JsonElement Serialize(JsonSerializerOptions? jsonSerializerOptions = null)
{
Console.WriteLine($"[Serialize] Serializing ThreadDbKey: {this.ThreadDbKey}");
return JsonSerializer.SerializeToElement(this.ThreadDbKey);
}
This method serializes the current store state to a JsonElement using the specified serialization options, saving the ThreadDbKey so that the conversation can be resumed later.
Our entire CosmosChatMessageStore should look similar to this:
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;
using Microsoft.SemanticKernel.Connectors.CosmosNoSql;
using System.Text.Json;
namespace CosmosDBChatHistory
{
internal class CosmosChatMessageStore : ChatMessageStore
{
private readonly CosmosNoSqlVectorStore _cosmosVectorStore;
public CosmosChatMessageStore(
CosmosNoSqlVectorStore cosmosVectorStore,
JsonElement serializedStoreState,
JsonSerializerOptions? jsonSerializerOptions = null)
{
this._cosmosVectorStore = cosmosVectorStore ?? throw new ArgumentNullException(nameof(cosmosVectorStore));
if (serializedStoreState.ValueKind is JsonValueKind.String)
{
this.ThreadDbKey = serializedStoreState.Deserialize<string>();
Console.WriteLine($"[CosmosChatMessageStore] Initialized with existing ThreadDbKey: {this.ThreadDbKey}");
}
else
{
Console.WriteLine($"[CosmosChatMessageStore] Initialized with no ThreadDbKey (new thread)");
}
}
public string? ThreadDbKey { get; private set; }
// AddMessageAsync
public override async ValueTask InvokedAsync(InvokedContext context, CancellationToken cancellationToken = default)
{
this.ThreadDbKey ??= Guid.NewGuid().ToString("N");
Console.WriteLine($"[InvokedAsync] Storing {context.ChatMessageStoreMessages.Count()} messages with ThreadDbKey: {this.ThreadDbKey}");
var collection = this._cosmosVectorStore.GetCollection<string, ChatHistoryItem>("ChatHistory");
await collection.EnsureCollectionExistsAsync(cancellationToken);
var allNewMessages = context.RequestMessages.Concat(context.AIContextProviderMessages ?? []).Concat(context.ResponseMessages ?? []);
await collection.UpsertAsync(allNewMessages.Select(x => new ChatHistoryItem()
{
Key = this.ThreadDbKey + x.MessageId,
Timestamp = DateTimeOffset.UtcNow,
ThreadId = this.ThreadDbKey,
SerializedMessage = JsonSerializer.Serialize(x),
MessageText = x.Text
}), cancellationToken);
}
// GetMessagesAsync
public override async ValueTask<IEnumerable<ChatMessage>> InvokingAsync(InvokingContext context, CancellationToken cancellationToken = default)
{
Console.WriteLine($"[InvokingAsync] Retrieving messages for ThreadDbKey: {this.ThreadDbKey}");
if (string.IsNullOrEmpty(this.ThreadDbKey))
{
Console.WriteLine($"[InvokingAsync] No ThreadDbKey - returning empty message list");
return [];
}
var collection = this._cosmosVectorStore.GetCollection<string, ChatHistoryItem>("ChatHistory");
await collection.EnsureCollectionExistsAsync(cancellationToken);
var records = collection.GetAsync(
x => x.ThreadId == this.ThreadDbKey, 10,
new() { OrderBy = x => x.Descending(y => y.Timestamp) },
cancellationToken);
List<ChatMessage> messages = [];
await foreach (var record in records)
{
Console.WriteLine($"[InvokingAsync] Retrieved record - Key: {record.Key}, Timestamp: {record.Timestamp}, MessageText: {record.MessageText?.Substring(0, Math.Min(50, record.MessageText?.Length ?? 0))}...");
messages.Add(JsonSerializer.Deserialize<ChatMessage>(record.SerializedMessage!)!);
}
messages.Reverse();
Console.WriteLine($"[InvokingAsync] Returning {messages.Count} messages in chronological order");
return messages;
}
public override JsonElement Serialize(JsonSerializerOptions? jsonSerializerOptions = null)
{
Console.WriteLine($"[Serialize] Serializing ThreadDbKey: {this.ThreadDbKey}");
return JsonSerializer.SerializeToElement(this.ThreadDbKey);
}
}
}
sealed class ChatHistoryItem
{
[VectorStoreKey]
public string? Key { get; set; }
[VectorStoreData]
public string? ThreadId { get; set; }
[VectorStoreData]
public DateTimeOffset? Timestamp { get; set; }
[VectorStoreData]
public string? SerializedMessage { get; set; }
[VectorStoreData]
public string? MessageText { get; set; }
}
To use the custom ChatMessageStore, we'll need to provide this using the ChatMessageStoreFactory when creating the agent like so:
AIAgent fitnessAgent = new AzureOpenAIClient(
new Uri(config["foundry-endpoint"]),
new AzureCliCredential())
.GetChatClient("gpt-4.1-mini")
.CreateAIAgent(new ChatClientAgentOptions
{
Name = "Fitness Agent",
ChatOptions = new ChatOptions()
{
Instructions = "You are a helpful fitness assistant."
},
ChatMessageStoreFactory = ctx =>
{
// This is our custom `ChatMessageStore`
return new CosmosChatMessageStore(vectorStore, ctx.SerializedState, ctx.JsonSerializerOptions);
}
});
With our ChatMessageStore provided to our agent, we can interact with our agent and create a new thread that will hold conversation state that will be persisted to Cosmos DB.
Within our CosmosChatMessageStore, we can use the ThreadDbKey to restore our conversation state and reload it into our resumed thread by deserializing the state into a JsonElement.
// First conversation
Console.WriteLine("=== First Conversation ===\n");
AgentThread agentThread = fitnessAgent.GetNewThread();
Console.WriteLine(await fitnessAgent.RunAsync("Give me 5 exercises for my legs", agentThread));
// Serialize and persist the thread state
JsonElement serializedThread = agentThread.Serialize();
string threadStateJson = JsonSerializer.Serialize(serializedThread, new JsonSerializerOptions() { WriteIndented = true });
Console.WriteLine("\n--- Serialized Thread State ---");
Console.WriteLine(threadStateJson);
Console.WriteLine("--- End Serialized Thread State ---\n");
// Resume the conversation
Console.WriteLine("\n=== Resuming Conversation ===\n");
JsonElement restoredThreadState = JsonSerializer.Deserialize<JsonElement>(threadStateJson);
AgentThread resumedThread = fitnessAgent.DeserializeThread(restoredThreadState);
Console.WriteLine(await fitnessAgent.RunAsync("Using the same exercises, give me a workout plan that I can use as part of a weight loss program", resumedThread));
var messageStore = resumedThread.GetService<CosmosChatMessageStore>();
Console.WriteLine($"\nThread is stored in vector store under key: {messageStore.ThreadDbKey}");
Now that we've configured our agent to use our custom ChatMessageStore, let's see it in action. I've created a fitness agent that I'll ask to suggest 5 leg exercises, and then to create a weight loss program using those same exercises. We should see the agent respond and our conversation state being persisted to Cosmos DB.
To run the agent, we can use dotnet run.
After our first request, we should see the following response:
=== First Conversation ===
[CosmosChatMessageStore] Initialized with no ThreadDbKey (new thread)
[InvokingAsync] Retrieving messages for ThreadDbKey:
[InvokingAsync] No ThreadDbKey - returning empty message list
[InvokedAsync] Storing 0 messages with ThreadDbKey: 3d5a0472eec6458899b0caa1e17786c7
Sure! Here are 5 effective leg exercises you can try:
1. **Squats**
- Targets: Quadriceps, hamstrings, glutes
- How to: Stand with feet shoulder-width apart, lower your hips down and back as if sitting in a chair, keep chest up, then stand back up.
2. **Lunges**
- Targets: Quadriceps, hamstrings, glutes, calves
- How to: Step forward with one leg, lower your body until both knees are bent at 90 degrees, push back to starting position, then switch legs.
3. **Deadlifts**
- Targets: Hamstrings, glutes, lower back
- How to: With feet hip-width apart, hold weights (optional), hinge at the hips while keeping back flat, lower the weights down, then stand back up.
4. **Calf Raises**
- Targets: Calves
- How to: Stand on the edge of a step or flat ground, raise your heels as high as possible, then lower back down slowly.
5. **Step-Ups**
- Targets: Quadriceps, hamstrings, glutes
- How to: Step onto a bench or sturdy platform with one foot, press through the heel to lift your body up, then step down and switch legs.
Let me know if you'd like tips on reps and sets!
[Serialize] Serializing ThreadDbKey: 3d5a0472eec6458899b0caa1e17786c7
--- Serialized Thread State ---
{
"storeState": "3d5a0472eec6458899b0caa1e17786c7"
}
--- End Serialized Thread State ---
A ThreadDbKey has been created, which the agent will use to retrieve our messages as part of its context when answering our second question. We should see the following response:
=== Resuming Conversation ===
[CosmosChatMessageStore] Initialized with existing ThreadDbKey: 3d5a0472eec6458899b0caa1e17786c7
[InvokingAsync] Retrieving messages for ThreadDbKey: 3d5a0472eec6458899b0caa1e17786c7
[InvokingAsync] Retrieved record - Key: 3d5a0472eec6458899b0caa1e17786c7chatcmpl-Cx242P0pnZcK7Ucoy6FrX4CV6o9cB, Timestamp: 12/01/2026 2:35:27 AM +00:00, MessageText: Sure! Here are 5 effective leg exercises you can t...
[InvokingAsync] Retrieved record - Key: 3d5a0472eec6458899b0caa1e17786c7, Timestamp: 12/01/2026 2:35:27 AM +00:00, MessageText: Give me 5 exercises for my legs...
[InvokingAsync] Returning 2 messages in chronological order
[InvokedAsync] Storing 2 messages with ThreadDbKey: 3d5a0472eec6458899b0caa1e17786c7
Certainly! Here's a beginner-friendly leg workout plan using those exercises, designed to support your weight loss goals by combining strength and calorie-burning movements. Aim to do this workout 2-3 times per week, alongside a balanced diet and regular cardio.
### Leg Workout for Weight Loss
**Warm-up (5-10 minutes):**
- Light jogging or brisk walking
- Leg swings and bodyweight squats
---
**Workout:**
1. **Squats**
- 3 sets of 12-15 reps
- Rest 30-45 seconds between sets
2. **Lunges** (alternating legs)
- 3 sets of 10-12 reps per leg
- Rest 30-45 seconds between sets
3. **Deadlifts** (use light to moderate weights or just bodyweight at first)
- 3 sets of 10-12 reps
- Rest 45 seconds between sets
4. **Calf Raises**
- 3 sets of 15-20 reps
- Rest 30 seconds between sets
5. **Step-Ups** (alternating legs)
- 3 sets of 10-12 reps per leg
- Rest 30-45 seconds between sets
---
### Additional Tips:
- Move quickly between exercises to keep your heart rate up, turning this into a circuit if you can.
- After completing all 5 exercises, rest 1-2 minutes, then repeat the circuit 2-3 times based on your fitness level.
- Include 20-30 minutes of moderate cardio (like walking, cycling, or swimming) on your non-strength days.
Would you like me to help create a full weekly workout schedule or include nutrition tips?
Thread is stored in vector store under key: 3d5a0472eec6458899b0caa1e17786c7
As we can see, the conversation has been reloaded using our ThreadDbKey (3d5a0472eec6458899b0caa1e17786c7) and the agent has created an exercise plan based on the exercises it suggested in its first answer.
In our Cosmos DB collection, our conversation history will be persisted as documents that look like our ChatHistoryItem object. Here's one document as an example:
{
"ThreadId": "3d5a0472eec6458899b0caa1e17786c7",
"Timestamp": "2026-01-12T02:35:27.7616746+00:00",
"SerializedMessage": "{\"AuthorName\":\"Fitness Agent\",\"CreatedAt\":\"2026-01-12T02:35:22+00:00\",\"Role\":\"assistant\",\"Contents\":[{\"$type\":\"text\",\"Text\":\"Sure! Here are 5 effective leg exercises you can try:\\n\\n1. **Squats** \\n - Targets: Quadriceps, hamstrings, glutes \\n - How to: Stand with feet shoulder-width apart, lower your hips down and back as if sitting in a chair, keep chest up, then stand back up.\\n\\n2. **Lunges** \\n - Targets: Quadriceps, hamstrings, glutes, calves \\n - How to: Step forward with one leg, lower your body until both knees are bent at 90 degrees, push back to starting position, then switch legs.\\n\\n3. **Deadlifts** \\n - Targets: Hamstrings, glutes, lower back \\n - How to: With feet hip-width apart, hold weights (optional), hinge at the hips while keeping back flat, lower the weights down, then stand back up.\\n\\n4. **Calf Raises** \\n - Targets: Calves \\n - How to: Stand on the edge of a step or flat ground, raise your heels as high as possible, then lower back down slowly.\\n\\n5. **Step-Ups** \\n - Targets: Quadriceps, hamstrings, glutes \\n - How to: Step onto a bench or sturdy platform with one foot, press through the heel to lift your body up, then step down and switch legs.\\n\\nLet me know if you\\u0027d like tips on reps and sets!\",\"Annotations\":null,\"AdditionalProperties\":null}],\"MessageId\":\"chatcmpl-Cx242P0pnZcK7Ucoy6FrX4CV6o9cB\",\"AdditionalProperties\":null}",
"MessageText": "Sure! Here are 5 effective leg exercises you can try:\n\n1. **Squats** \n - Targets: Quadriceps, hamstrings, glutes \n - How to: Stand with feet shoulder-width apart, lower your hips down and back as if sitting in a chair, keep chest up, then stand back up.\n\n2. **Lunges** \n - Targets: Quadriceps, hamstrings, glutes, calves \n - How to: Step forward with one leg, lower your body until both knees are bent at 90 degrees, push back to starting position, then switch legs.\n\n3. **Deadlifts** \n - Targets: Hamstrings, glutes, lower back \n - How to: With feet hip-width apart, hold weights (optional), hinge at the hips while keeping back flat, lower the weights down, then stand back up.\n\n4. **Calf Raises** \n - Targets: Calves \n - How to: Stand on the edge of a step or flat ground, raise your heels as high as possible, then lower back down slowly.\n\n5. **Step-Ups** \n - Targets: Quadriceps, hamstrings, glutes \n - How to: Step onto a bench or sturdy platform with one foot, press through the heel to lift your body up, then step down and switch legs.\n\nLet me know if you'd like tips on reps and sets!",
"id": "3d5a0472eec6458899b0caa1e17786c7chatcmpl-Cx242P0pnZcK7Ucoy6FrX4CV6o9cB",
"_rid": "s5xYAJun3qoCAAAAAAAAAA==",
"_self": "dbs/s5xYAA==/colls/s5xYAJun3qo=/docs/s5xYAJun3qoCAAAAAAAAAA==/",
"_etag": "\"6b0025f9-0000-1a00-0000-69645def0000\"",
"_attachments": "attachments/",
"_ts": 1768185327
}
In this article, we talked about how we can create a custom ChatMessageStore so that we can store agent chat history in external storage.
While I used Azure Cosmos DB in this article, there are a number of other connectors that you can use to store conversation history for agents like SQL Server, Redis, MongoDB and more! Take a look at the docs for more information.
If you have any questions about the content here, please feel free to reach out to me on BlueSky or comment below.
Until next time, Happy coding! 🤓🖥️