2025-11-18 12:31:34
Modern developers work across multiple databases, cloud environments, and tools. Yet, most existing database GUIs either focus on a single engine, lack AI assistance, or lock core features behind paid tiers.
To solve this, I’ve been building dbfuse-ai, an open-source, cross-database GUI with prompt-based AI-assisted SQL generation, driver-based extensibility, and a roadmap to support advanced features such as MCP integration, ER diagrams, schema exploration, and more.
This project is growing quickly, and I’m actively looking for contributors who are interested in databases, Node.js, backend systems, AI integrations, and open-source collaboration.
dbfuse-ai is a flexible JavaScript/TypeScript-based library that provides:
Repository:
https://github.com/kshashikumar/dbfuse-ai
NPM Link:
https://www.npmjs.com/package/dbfuse-ai
Docker Image:
https://hub.docker.com/r/shashikumarkasturi/dbfuse-ai
This project started as a simplified MySQL GUI but evolved into a fully modular system designed for long-term extensibility and cross-database operability.
Developers often deal with:
dbfuse-ai solves these problems by providing:
The goal is to build a one-stop open-source database platform powered by LLMs and modern developer workflows.
Already supported:
Planned:
All integrations follow a unified strategy adapter pattern.
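To give a feel for what that means in practice, here is a purely illustrative sketch (not the actual dbfuse-ai SDK) of the kind of contract a driver adapter implements:

```typescript
// Illustrative only -- not the real dbfuse-ai interface. With a strategy/adapter
// pattern, every engine (MySQL, PostgreSQL, and so on) implements the same small
// contract, so the GUI and AI layers never need engine-specific branches.
interface DatabaseDriver {
  connect(config: Record<string, unknown>): Promise<void>;
  listSchemas(): Promise<string[]>;
  runQuery(sql: string): Promise<{ fields: string[]; rows: unknown[] }>;
  disconnect(): Promise<void>;
}
```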
dbfuse-ai allows you to generate SQL queries using major LLM providers:
Example usage:
const dbFuse = new DBFuseAI();
// Prompt in, SQL out. Assumes an LLM provider has already been configured.
const query = await dbFuse.generateSQL({
  prompt: "Get the top 10 customers by revenue",
});
The goal is to make database work conversational, faster, and more intelligent.
Supports:
This enables secure and flexible access to local and remote databases.
The roadmap includes ambitious improvements, and contributors can pick issues they find interesting.
This will enable:
This unlocks powerful workflows such as AI-driven schema exploration and automated query reviews.
Adding:
All through a standardized driver SDK.
Planned features:
Coming soon:
A pluggable developer-friendly SDK to add:
This allows the community to shape the tool in any direction.
dbfuse-ai is built on a clean, layered architecture:
The codebase is beginner-friendly and open for contributions in all layers.
If you're passionate about:
…then you’re very welcome to contribute.
Star the repository
https://github.com/kshashikumar/dbfuse-ai
Explore open issues
New contributors can pick “good first issue” tasks.
Suggest new features
Open a Discussion or Issue with ideas.
Submit pull requests
Even small improvements are valuable.
Help build documentation
Tutorials and examples are highly needed.
dbfuse-ai is still early in development but has strong potential to become one of the most flexible, AI-powered, open-source database tools available.
With contributions from the community, the project can grow into a powerful platform for database developers, analysts, students, and AI-driven workflows.
If this project interests you, I would love to collaborate.
Repo:
https://github.com/kshashikumar/dbfuse-ai
Thank you for reading, and I look forward to building something meaningful with the open-source community.
2025-11-18 12:27:19
I spent a lot of time reading this article, so I might as well write about it >.<
https://wok.oblomov.eu/tecnologia/google-killing-open-web-2/
XSLT stands for Extensible Stylesheet Language Transformations.
Extensible Stylesheet Language (XSL): A language for describing how XML documents should be presented.
Transformations (T): The part that actually converts XML data into another format, like HTML, text, or another XML structure.
Essentially, XSLT is the engine that transforms XML documents according to rules defined in XSL stylesheets.
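For anyone who has never seen XSLT in action, here is a small, made-up example that uses the browser's XSLTProcessor API (the very machinery being removed) to turn an XML document into an HTML list:

```javascript
// Minimal illustration of in-browser XSLT. The XML and stylesheet are inlined
// and invented for this example; real pages usually load them as separate files.
const xmlSource = `<?xml version="1.0"?>
<books>
  <book><title>SICP</title></book>
  <book><title>TAOCP</title></book>
</books>`;

const xslSource = `<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/books">
    <ul>
      <xsl:for-each select="book">
        <li><xsl:value-of select="title"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>`;

const parser = new DOMParser();
const xmlDoc = parser.parseFromString(xmlSource, "application/xml");
const xslDoc = parser.parseFromString(xslSource, "application/xml");

const processor = new XSLTProcessor();
processor.importStylesheet(xslDoc);

// The transform produces ready-to-insert HTML from the XML data.
document.body.appendChild(processor.transformToFragment(xmlDoc, document));
```

Browsers can also apply a stylesheet declaratively, with zero JavaScript, via an `<?xml-stylesheet type="text/xsl" href="..."?>` processing instruction at the top of the XML file.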
Google removing XSLT support from Chrome/Chromium is a big deal. Even if XSLT feels old, it affects compatibility, the open web, and legacy systems in ways that matter.
Many agencies, banks, healthcare systems, and enterprise tools rely on XSLT to transform XML into readable HTML directly in the browser.
Examples:
Impact: Chrome removing XSLT means these systems stop rendering correctly. Fallbacks usually involve exporting static HTML files, which is expensive and slow to maintain.
XSLT is one of the only native, standardized, zero-JS ways to transform XML into:
Without XSLT:
While removing XSLT may seem like a security improvement, it has downsides:
Google dropping it:
Organizations need to:
This is non-trivial, especially for systems with millions invested.
Removing browser support doesn’t kill XSLT itself—but it removes an important runtime that made it accessible.
| Reason | Why It Matters |
|---|---|
| Breaks legacy systems | Thousands of public XML tools and portals stop working |
| Removes native transforms | Developers forced to use heavier JS |
| Security trade-offs | Larger JS = more vulnerabilities |
| Chrome dominance | Web standards drift toward Google’s decisions |
| Cost to enterprises | Migration, rewrites, maintenance overhead |
| Loss of a great tool | XSLT still solves XML transforms better than JS |
Next Steps:
2025-11-18 12:25:13
Timothy squinted at a code review comment on his screen. "Margaret, someone left a note saying I should use the 'walrus operator' here. I looked it up and found := but... I don't understand when I'm supposed to use it. It just looks like a typo to me."
Margaret came over and examined his code:
def process_file(filename):
    with open(filename, 'r') as file:
        line = file.readline()
        while line:
            if len(line) > 80:
                print(f"Long line: {line[:80]}...")
            line = file.readline()
"Ah, see the repetition?" Margaret pointed at the two line = file.readline() statements. "You're reading a line before the loop, then reading again at the end of each iteration. The walrus operator lets you assign and use a value in the same expression."
"But... where would I put it?" Timothy asked.
The Problem: Assignment Outside the Expression
"Let me show you," Margaret said, rewriting the function:
def process_file(filename):
    with open(filename, 'r') as file:
        while (line := file.readline()):
            if len(line) > 80:
                print(f"Long line: {line[:80]}...")
"Wait," Timothy blinked. "The assignment is... inside the while condition?"
"Exactly. Let me show you the structure."
Tree View:
process_file(filename)
    With open(filename, 'r') as file
        While (line := file.readline())
            If len(line) > 80
                └── print(f'Long line: {line[:80]}...')
English View:
Function process_file(filename):
    With open(filename, 'r') as file:
        While (line := file.readline()):
            If len(line) > 80:
                Evaluate print(f'Long line: {line[:80]}...').
Timothy studied the structure. "So line := file.readline() assigns the value to line AND returns it for the while condition to check?"
"Precisely," Margaret confirmed. "The walrus operator := is an assignment expression - it assigns a value and evaluates to that value. That's why you can use it inside conditions, comprehensions, anywhere an expression is valid."
"But I thought you couldn't do assignment inside expressions?" Timothy asked.
"You couldn't with the regular = operator," Margaret said. "That's statement-level assignment. The walrus operator := is expression-level assignment. Python 3.8 added it specifically for cases like this."
Understanding the Pattern
Margaret pulled up another example where the walrus operator shines:
def analyze_data():
    numbers = [1, 2, 3, 4, 5]
    squared = [y for x in numbers if (y := x * 2) > 4]
    return squared
Tree View:
analyze_data()
    numbers = [1, 2, 3, 4, 5]
    squared =
        Comprehension: y
            For x in numbers
                If (y := (x * 2)) > 4
    Return squared
English View:
Function analyze_data():
    Set numbers to [1, 2, 3, 4, 5].
    Set squared to:
        List comprehension: y
            For each x in numbers
                If (y := (x * 2)) > 4
    Return squared.
"Look at the structure," Margaret said. "The assignment y := x * 2 happens inside the comprehension's filter condition. Without the walrus operator, you'd have to compute x * 2 twice - once to check if it's greater than 4, and again to include it in the result."
Timothy traced the flow. "So I'm assigning y the value of x * 2, then immediately using that y value both in the comparison and as the result?"
"Exactly. The structure makes it clear - the assignment happens right where you need the value."
She showed him what the code would look like without the walrus operator:
def analyze_data_verbose():
    numbers = [1, 2, 3, 4, 5]
    squared = []
    for x in numbers:
        temp = x * 2
        if temp > 4:
            squared.append(temp)
    return squared
Tree View:
analyze_data_verbose()
    numbers = [1, 2, 3, 4, 5]
    squared = []
    For x in numbers
        temp = x * 2
        If temp > 4
            └── squared.append(temp)
    Return squared
English View:
Function analyze_data_verbose():
    Set numbers to [1, 2, 3, 4, 5].
    Set squared to [].
    For each x in numbers:
        Set temp to x * 2.
        If temp > 4:
            Evaluate squared.append(temp).
    Return squared.
"See the difference?" Margaret pointed at the two structures side by side. "The verbose version has a temporary variable and a separate loop. The walrus version does it all in the comprehension."
Where You Can Use It
Timothy nodded slowly. "So the walrus operator is useful when you want to use a computed value multiple times in the same expression?"
"That's one use case," Margaret said. "It's also great in if statements when you want to capture and test a value:"
def process_data(filename):
    if (data := read_file(filename)) is not None:
        return analyze(data)
    return None
Tree View:
process_data(filename)
    If (data := read_file(filename)) is not None
        └── Return analyze(data)
    Return None
English View:
Function process_data(filename):
    If (data := read_file(filename)) is not None:
        Return analyze(data).
    Return None.
"Without the walrus operator, you'd either call read_file() twice or add an extra line to store it first," Margaret explained. "The structure shows the assignment happens right in the condition check."
Timothy looked at his original while loop. "So in my file reading code, I was doing the assignment outside the loop and then again at the end. The walrus operator lets me do it right in the while condition."
"Exactly. The assignment happens where you need it - inside the expression."
"Does it work everywhere?" Timothy asked.
"Almost. You can use it in any expression context - if conditions, while conditions, comprehensions, function arguments. But you can't use it at the statement level. x := 5 by itself would be a syntax error. It has to be part of a larger expression."
Timothy refactored his file reading function, then sat back. "The := doesn't look like a typo anymore. It looks like I'm putting the assignment exactly where I need it in the structure."
"Now you understand why they call it the walrus operator," Margaret smiled. "The := looks like a walrus lying on its side. And just like a walrus, it's distinctive and purposeful."
Explore Python structure yourself: Download the Python Structure Viewer - a free tool that shows code structure in tree and plain English views. Works offline, no installation required.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.
2025-11-18 12:19:43
This article is Part 2 of my Prometric-Go series.
If you haven't read Part 1 yet — the introduction to the library — you can check it out here:
🔗 https://dev.to/asraful_haque/simplifying-prometheus-metrics-in-go-with-prometric-go-22ef
In this article we will instrument a real Go API using prometric-go, collect metrics in Prometheus, and visualize everything in Grafana — including a load-test using k6 to generate traffic.
To make this super easy, I created a ready-to-run sample project:
🔗 GitHub Repo: https://github.com/peek8/prometric-go-sample
This repository contains:
| Purpose | Tool |
|---|---|
| Metric instrumentation | prometric-go |
| Metrics storage | Prometheus |
| Metrics visualization | Grafana |
| Web API | Go + net/http |
| Traffic Generation | Grafana k6 |
k6 hits the API → API exposes metrics via /metrics → Prometheus scrapes → Grafana visualizes.
$ git clone https://github.com/peek8/prometric-go-sample
$ cd prometric-go-sample
$ make run
This will run the API server on port 7080. You can explore the Prometheus metrics exposed by the prometric-go library at:
http://localhost:7080/metrics
See the full list of exposed metrics in the prometric-go library README.
I use Grafana k6 to generate traffic against the sample API. The k6 script demonstrates a simple scenario that exercises the CRUD endpoints for Person objects.
These iterations are enough to generate plenty of Prometheus metrics to explore in Prometheus and the Grafana dashboard.
$ k6 run ./scripts/k6-scripts.js
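If you're curious what such a script looks like, here is a minimal sketch; the /persons endpoint and load profile are placeholders, and scripts/k6-scripts.js in the repo is the source of truth:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Hypothetical load profile for illustration only.
export const options = { vus: 10, duration: '1m' };

const BASE = 'http://localhost:7080';

export default function () {
  // Create a Person (the endpoint path is assumed here).
  const createRes = http.post(
    `${BASE}/persons`,
    JSON.stringify({ name: 'Ada', age: 36 }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  check(createRes, { 'create succeeded': (r) => r.status === 200 || r.status === 201 });

  // Read the collection back so the GET handlers show up in the metrics too.
  const listRes = http.get(`${BASE}/persons`);
  check(listRes, { 'list succeeded': (r) => r.status === 200 });

  sleep(1);
}
```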
Now if you hit the metrics endpoint, you will see the metric values changing.
Run Prometheus using the prometheus.yml file:
$ docker run \
-p 9090:9090 \
-v ./resources/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
N.B.: If you are using Podman, use host.containers.internal in the targets entry of the prometheus.yml file, i.e.:
targets: ["host.containers.internal:7080"]
Run Grafana using Docker:
$ docker run -d -p 3000:3000 grafana/grafana
Grafana will then be available at http://localhost:3000; use admin:admin to log in for the first time.
Create Grafana Data Source
Import Grafana Dashboard
Voila! You'll now see real-time metrics from the sample app on the dashboard:
| Category | Benefit |
|---|---|
| Go backend | Clean way to expose metrics |
| Prometheus | Scraping & querying the app |
| Grafana | Dashboarding for API performance |
| k6 | Generating programmable load |
| Observability | From raw counters → real visual insights |
The repo is intentionally simple, so you can fork it and adapt it for your own services.
Prometheus + Grafana can feel complex when you start from scratch —
but with prometric-go, you get:
👌 Meaningful metrics by default
📦 Domain-specific metrics with 2–3 lines of code
🚀 Dashboards ready to plug into production
If you try out the sample, I’d love to hear your feedback!
Support the project by giving a ⭐ to both repos:
🔗 https://github.com/peek8/prometric-go
🔗 https://github.com/peek8/prometric-go-sample
And feel free to reach out if you want help with alerting rules, histogram tuning, Grafana provisioning, or Kubernetes deployment.
2025-11-18 12:06:02
The relationship between data and applications is undergoing a fundamental shift. For decades, we've moved data to applications. Now, we're moving applications to data. This isn't just an architectural preference—it's becoming a necessity as businesses demand richer context, faster insights, and real-time operations. Here's what's driving this change:
Consider how Uber thinks about a driver who just completed a ride.
Without connected data: "Driver #47291 completed an 18-minute ride. Rating: 5 stars."
With connected data: "Driver #47291 completed an 18-minute ride during rush hour in San Francisco. Has a 4.92 rating over 2,847 trips, typically works evenings, now in a surge zone. The passenger is gold-status but gave 3 stars today (usually gives 5). Heavy rain—this driver's cancellation rate jumps 8% in rain."
Same event, different universe. The first tells you what happened. The second tells you why, predicts what might happen next, and suggests what action to take. When you view information through multiple dimensions—user behavior, location patterns, time series, weather, operational metrics—you move from reporting to insight.
In large organizations, data from everywhere converges in a central Enterprise Data Platform: CRM systems, transaction data, product telemetry, marketing attribution, customer service interactions.
This wasn't arbitrary. Connecting data is hard, and doing it repeatedly across different tools is wasteful. The EDP became the natural convergence point where data gets cleaned once, relationships between different sources get mapped, historical context accumulates, and governance gets enforced.
When you need to understand customer lifetime value, you need purchase history, support interactions, usage patterns, and marketing touchpoints. These don't naturally live together—they get connected in the EDP. This made it perfect for contextual analytics: not just because data lives there, but because the relationships and ability to view information from multiple angles exist in one place.
For years, the workflow was straightforward. When teams wanted to improve customer experience, run marketing campaigns, or optimize products, they'd: identify needed data from the EDP, procure a specialized tool (Qualtrics for customer experience, Segment for customer data, Hightouch for reverse ETL), build pipelines to extract and load data, then let the specialized tool work its magic.
Marketing got Braze. Customer success got Gainsight. Product got Amplitude. Each loaded with curated enterprise data.
This made sense—these platforms had years of domain expertise and optimized databases for specific use cases. But cracks started showing.
Every specialized tool works better with richer data. Your NPS scores don't just tell you satisfaction dropped—you want to know it dropped specifically among enterprise customers with multiple support tickets coming up for renewal.
Theoretically, send more data. Practically, this creates three problems:
First, you're duplicating your entire dataset across multiple tools. Your customer data lives in the EDP, in marketing, in customer success, in product analytics. Each copy needs syncing. Each represents another data quality surface.
Second, you're creating brittle pipelines. Different data models, different APIs, different limitations. Each pipeline is a failure point needing maintenance as schemas evolve.
Third, you're siloing insights. Marketing sees one version of the customer, product sees another, support a third. The connected data you built in the EDP gets disconnected as it flows into specialized tools.
This becomes an anti-pattern—working against what made the EDP valuable: keeping data connected.
If moving data to applications creates these problems, what if we inverted the pattern? Instead of extracting data from the EDP to specialized tools, build those capabilities on top of the EDP itself.
When the most contextual, connected data already lives in your Enterprise Data Platform, why ship it elsewhere? Why not build your customer experience dashboards, your marketing segmentation engines, your operational applications directly on the EDP?
This is where most people raise an objection.
For decades, we've been taught that analytical systems and operational systems are fundamentally different. Analytics platforms—data warehouses, lakes, EDPs—handle complex queries over large datasets, optimized for throughput. Operational systems—transactional databases—handle fast queries on specific records, optimized for latency.
You wouldn't run e-commerce checkout on a data warehouse. You wouldn't build real-time fraud detection on overnight batch jobs.
But here's what changed: the line between analytical and operational has blurred dramatically over the past five years.
Applications have become analytics-hungry. A decade ago, an operational application might look up a customer record. Today, that same application needs to compute lifetime value in real-time, analyze 90 days of behavior, compare against historical patterns, and aggregate data across multiple dimensions.
Meanwhile, data freshness requirements have compressed. Marketing campaigns that used to refresh daily now need hourly or minute-level updates. Customer health scores calculated overnight now need to reflect recent interactions within minutes.
And context requirements have exploded. It's no longer enough to know what a customer bought—you need what they viewed but didn't buy, what promotions they've seen, what support issues they've had, and what predictive models say about their churn likelihood.
This creates a new reality: operational applications need the rich, connected context of the EDP, but with operational characteristics—low latency, high availability, and fresh data.
EDPs can no longer be P2 or P3 systems that indirectly support business. They're becoming P1 systems powering business directly, in real-time, at the point of customer interaction.
For EDPs to power operational applications, three characteristics must change:
Traditional data pipelines moved data in batches—often daily, sometimes hourly, occasionally every 15 minutes if you were pushing it. This worked fine when insights were consumed the next morning in dashboard reviews.
It doesn't work when you're trying to trigger a marketing campaign based on a customer's action taken 30 seconds ago. It doesn't work when flagging potentially fraudulent transactions while they're still pending. It doesn't work when customer service needs to see what happened during the call that just ended.
The solution: Event-driven architecture, end to end. This isn't just about having a message queue somewhere. It means rethinking how data flows through your entire enterprise. When a customer completes a purchase, that event should propagate through your systems in seconds, not hours.
This is the architecture that makes an enterprise truly data-driven—not from yesterday's data, but from what's happening right now. Technologies like Kafka, Debezium for change data capture, and streaming platforms become foundational, not optional.
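As a rough sketch of what "propagate in seconds" looks like in code, here is a minimal Node.js producer that publishes a purchase event to Kafka the moment checkout completes; the topic name and payload shape are invented for illustration:

```javascript
// Sketch only: publish the event at the moment it happens instead of waiting
// for a nightly batch. Uses kafkajs; topic and payload fields are hypothetical.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'checkout-service', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function recordPurchase(order) {
  await producer.connect();
  await producer.send({
    topic: 'events.purchase_completed',
    messages: [{ key: order.customerId, value: JSON.stringify(order) }],
  });
}

recordPurchase({ customerId: 'c-123', total: 42.5, placedAt: new Date().toISOString() })
  .catch(console.error);
```

Downstream, the same stream can feed both the EDP and the operational layer sketched below.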
Users won't wait three seconds for a dashboard to load. They definitely won't wait 30 seconds for a page to render. Applications need to respond in hundreds of milliseconds, not seconds.
But here's the fundamental issue: modern data warehouses and lakes are built on storage-compute separation. This isn't a bug—it's an intentional design choice that provides enormous benefits for analytical workloads. You can scale storage and compute independently. You can spin up compute when needed and shut it down when you don't.
However, this separation introduces a first-principles problem: when you run a query, data needs to move from remote storage to compute nodes. Even with optimized formats like Parquet, even with clever caching—data still needs to travel. For analytical queries over large datasets, a few seconds is acceptable. For operational APIs, it's not.
Why this matters for operational workloads: Operational applications don't make single queries. They chain hundreds of API calls. A single page load might trigger dozens of queries. Real-time business decisions—approve this transaction, show this offer, flag this behavior—can't wait for data to move from storage to compute. They need millisecond responses.
The solution: Relational databases, where compute and storage live together. This is where solutions like Neon and serverless Postgres come into play.
The pattern: Keep your rich, historical, connected data in the EDP where it belongs—that's still the system of record. But sync the operational subset—the data that needs to power real-time applications—into a relational database optimized for low-latency queries.
This operational database becomes your fast access layer, holding the most frequently accessed data: current customer states, recent transactions, active orders. Everything else—full history, rarely accessed dimensions, large analytical datasets—stays in the EDP and is linked when needed.
Why relational databases? When compute and storage are together, query latency drops dramatically. No network hop to fetch data. Indexes live next to the data. Query planners optimize on actual data locality.
Why serverless Postgres? It solves the operational challenges that traditionally made databases hard to scale—automatic scaling, no provisioning for peak load—while maintaining the low-latency benefits of the relational model.
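To make the sync half of this pattern concrete, here is a deliberately simplified Node.js worker that consumes change events from Kafka and upserts the operational subset into Postgres; the topic, table, and column names are hypothetical, and the events are assumed to arrive as flat JSON records:

```javascript
// Sketch of the EDP-to-operational-layer sync. Uses kafkajs and node-postgres.
const { Kafka } = require('kafkajs');
const { Pool } = require('pg');

const kafka = new Kafka({ clientId: 'edp-sync', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'customer-state-sync' });
const pool = new Pool({ connectionString: process.env.OPERATIONAL_PG_URL });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'edp.customer_state', fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const change = JSON.parse(message.value.toString());

      // Keep only the operational subset: current state, not full history.
      await pool.query(
        `INSERT INTO customer_state (customer_id, lifetime_value, churn_risk, updated_at)
         VALUES ($1, $2, $3, now())
         ON CONFLICT (customer_id) DO UPDATE
           SET lifetime_value = EXCLUDED.lifetime_value,
               churn_risk     = EXCLUDED.churn_risk,
               updated_at     = now()`,
        [change.customer_id, change.lifetime_value, change.churn_risk],
      );
    },
  });
}

run().catch(console.error);
```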
When your data platform is used for monthly reports and strategic planning, a few hours of downtime is annoying but not catastrophic. When your data platform powers customer-facing applications, every minute of downtime directly impacts revenue.
This means treating your EDP—or at least the operational layer sitting on top of it—with the same availability standards you'd apply to any production application.
The solution: Active-active configurations, multi-region deployments, automatic failover. At minimum, the operational database layer needs production-grade infrastructure.
This shift is cultural as much as technical. It means your data team needs to adopt DevOps practices. It means SLAs matter. It means on-call rotations become part of data platform management.
None of these ideas are entirely new. People have been talking about operational analytics for years. So why is this pattern becoming critical now?
Several trends have converged:
The cost of computation has dropped dramatically. What was prohibitively expensive five years ago—maintaining real-time data pipelines, running operational databases on large datasets—is now economically feasible. Serverless architectures have made it even more accessible.
Competitive pressure has increased. Customers expect personalization, immediate responses, and consistency across channels. Companies that can deliver these experiences with richer context have a meaningful advantage.
The technology has matured. Event streaming platforms are production-ready. Change data capture tools reliably sync databases. Serverless databases handle operational workloads without traditional overhead. The pieces needed to build this architecture actually work now.
Data teams have the skills. A generation of data engineers who grew up building real-time pipelines and thinking about data as something that flows rather than sits have moved into leadership positions. The organizational knowledge exists to execute this pattern.
Here's what this looks like in practice:
Your Enterprise Data Platform remains the system of record—the place where data is cleaned, connected, and stored historically. Data flows into it through event-driven pipelines that capture changes as they happen, not in overnight batches.
On top of the EDP, an operational layer provides fast, consistent access to the subset of data needed for real-time applications. This might be a serverless Postgres instance that's automatically synced with your data platform, maintaining operational data with sub-second query latency.
Applications—whether internal tools, customer-facing features, or analytical dashboards—query the operational layer directly. They get the rich context of the EDP with the performance characteristics of an operational database.
The operational layer is treated as a P1 system: multi-region if needed, highly available, monitored like any production service.
Data flows through this architecture in near real-time. An event happens in a source system, gets captured and streamed to the EDP, triggers processing and transformation, and updates the operational layer—all within seconds or minutes.
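To make the read path equally concrete, here is a minimal sketch of an API handler serving customer context straight from the operational layer; Express and node-postgres are used purely as stand-ins, and the route and table names are again hypothetical:

```javascript
// Sketch of the read path: one indexed lookup against the operational subset,
// millisecond latency, but the row was assembled from the full context of the EDP.
const express = require('express');
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.OPERATIONAL_PG_URL });
const app = express();

app.get('/customers/:id/context', async (req, res) => {
  const { rows } = await pool.query(
    `SELECT customer_id, lifetime_value, churn_risk, updated_at
       FROM customer_state
      WHERE customer_id = $1`,
    [req.params.id],
  );
  if (rows.length === 0) return res.status(404).json({ error: 'not found' });
  res.json(rows[0]);
});

app.listen(3001);
```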
When you build applications on top of your connected data rather than extracting subsets to specialized tools, several things become possible:
Richer insights. You're not limited to the subset of data you could feasibly extract and load. Your application has access to the full context of the EDP.
Faster iteration. Adding a new dimension to your analysis doesn't require building a new pipeline and waiting for data to load. It's already there.
Reduced duplication. Data lives in fewer places. Updates happen in one location. Data quality issues are fixed once.
Better cross-functional work. When everyone is building on the same data foundation, insights are easier to share. Marketing and product aren't looking at different versions of customer behavior.
Lower operational overhead. Fewer pipelines to maintain, fewer data synchronization issues to debug, fewer copies of data to govern and secure.
You're trading specialized tools' optimizations for platform flexibility. You need teams capable of building applications and organizational buy-in to treat data platforms as production infrastructure. But for many organizations, the benefits—flexibility, reduced duplication, faster iteration, contextual insights—justify the investment.
Here's a legitimate question: what about all those cutting-edge features that specialized platforms offer? Qualtrics has StatsIQ and TextIQ—sophisticated analytics capabilities built over years. Segment has identity resolution algorithms refined across thousands of companies.
If we're building on our EDP instead of using these tools, are we throwing away innovation? Are we asking data teams to rebuild complex models from scratch?
Not necessarily. The key insight: you don't need to move data to leverage specialized capabilities. Bring those capabilities to where data lives, or let them operate on your data in place.
First, bring capabilities to the EDP. This is already happening. Many specialized analytics capabilities are becoming available as standalone services or libraries that operate directly on data platforms. Modern EDPs support user-defined functions, external ML service calls, and integration with specialized processing engines. You can invoke sentiment analysis APIs on text stored in your EDP. You can run statistical models using libraries that operate directly on your warehouse tables.
Second, let specialized tools operate in place. Instead of extracting data into Qualtrics, imagine Qualtrics connecting directly to your EDP and running its StatsIQ algorithms on your data where it sits. This "compute on data in place" trend is accelerating—it's the core idea behind data clean rooms, query federation, and interoperability standards.
Every time you add a step to move data, you introduce:
The most successful solutions operate on existing data in place. Think about dbt's success—it transforms data where it sits. Or how BI tools evolved from requiring extracts to connecting directly to warehouses. The winning pattern is always "work with data in place."
From vendors: APIs that operate on external data, federated query engines, embedded analytics libraries. Some will resist—their business models depend on data lock-in. But those that embrace this will win in a world where enterprises are consolidating their data.
From data platforms: Rich API layers, fine-grained access control, performance for external queries, support for specialized compute through user-defined functions and external procedures. Modern platforms like Snowflake's external functions, Databricks' ML capabilities, and BigQuery's remote functions are steps in this direction.
Your Enterprise Data Platform holds your connected, contextual data. Your operational layer provides fast access for real-time applications. And specialized analytics capabilities—whether built in-house or licensed from vendors—operate on this data without requiring it to be moved.
You get the rich context and operational efficiency of centralized, connected data. And you get the specialized capabilities of best-in-class tools. Without the brittleness, cost, and complexity of moving data between systems.
The shift from "move data to applications" to "move applications to data" reflects how central data has become. The line between operational and analytical systems has blurred.
Organizations adapting to this—event-driven architectures, operational databases near data platforms, treating EDPs as P1 systems—will act on richer context, respond faster, and deliver better experiences. Those maintaining old extraction patterns will fight complex pipelines and synchronization issues.
The technology exists. The question is organizational readiness.
The future of enterprise data isn't choosing between analytical power and operational performance. It's architectures delivering both.
2025-11-18 12:04:47
I spent 30 days building and polishing this Claude Code plugin for agent builders. It has 5 slash commands I use every day.
Now let me show you how magical they are and how I built and polished them over the past 30 days.
/generate-code-map-headers - Generate code map headers
When we use Claude Code to vibe code, it sometimes needs multiple steps to search for the most relevant code, trace the data flow, figure out call stacks, and so on.
If you run it every day when you finish work, the next morning Claude will understand the code relationships in one step and code much faster.
/design-refine - Iteratively refine website design to professional standards
If you're building frontend like me, you'll find it's annoying to deal with small design problems. You have to screenshot and tell Claude Code to fix them.
This slash command will run a browser agent and take screenshots of mobile and desktop at different sizes and fix the design problems.
You can also run it when you finish a day's work, and the next morning it will have fixed most of the small problems.
/linus-review-my-code - Get roasted for complexity (Linus-style: direct & honest)
You'll always find that Claude Code loves to add try-catch, if-else, and other over-engineering things.
So you need a style guide for it, like "let it crash," or "not too many classes, just simple functions," or "no over-abstraction."
I found that this prompt makes it review code like Linus, which catches most problems. It's really useful right after you finish some auto-accept edits: let it review your code and fix the issues.
/aaron-review-my-code - Get reviewed by the creator (Aaron: educational & principled)
If you're using ConnectOnion to build agents and don't want to read documentation, but still want to build elegant agents that follow the principles "make simple things simple and make complicated things possible," then run this!
/aaron-build-my-agent - Let Aaron build your agent (scaffolding done right)
If you want to build an agent but don't want to build it yourself, just run this and input what you want to build—let this prompt build one for you!
Use this command to install the marketplace:
/plugin marketplace add openonion/connectonion-claude-plugin
Then install the plugin:
/plugin install connectonion
The plugin is open source under Apache 2.0. Please check it out and install it. If you run into any problems, you're welcome to discuss them with me on our Discord server.