2026-02-08 20:07:03
In the realm of business intelligence, the distance between a raw spreadsheet and a strategic decision is bridged by the data analyst’s technical workflow. Using Power BI, analysts do not merely report numbers; they architect a system that transforms chaotic inputs into clear, actionable insights. This process follows a rigorous path: harmonising messy data, structuring it for performance, applying business logic through DAX, and delivering clarity via interactive dashboards.
Real-world data is rarely ready for immediate analysis. It arrives full of inconsistencies that can break calculations and skew results. Before any visualisation occurs, analysts use Power Query to clean and transform this raw material into trusted information.
• Harmonising Data: A common issue is the presence of "pseudo-blanks"—text entries like "NA," "error," "blank," or "not provided" mixed into columns. Power BI reads this as valid text rather than missing values. Analysts must use the "Replace Values" function to harmonise these into a single standard category, such as "unknown," to ensure accurate categorisation without deleting potentially valuable raw data.
• Ensuring Precision: Small formatting errors can lead to duplication. For instance, "Kenya " (with a space) and "Kenya" are treated as different values. Analysts use the TRIM function to remove leading and trailing whitespace, ensuring that categories aggregate correctly.
• Data Typing: Attempting to sum a column will fail if the data type is set to text. Analysts must rigorously define columns—setting revenue to "Decimal Number" for calculation while keeping identifiers like phone numbers as "Text" to prevent accidental aggregation (see the sketch after this list).
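For illustration, here are the same three cleaning steps expressed in pandas rather than Power Query; the column names and sample values are hypothetical.

```python
import pandas as pd

# Hypothetical raw export with the issues described above.
raw = pd.DataFrame({
    "Country": ["Kenya ", "Kenya", "Uganda", "NA"],
    "Revenue": ["1200.50", "error", "980", "not provided"],
    "Phone":   ["254700111222", "254700333444", "256772555666", "blank"],
})

PSEUDO_BLANKS = ["NA", "error", "blank", "not provided"]

clean = raw.copy()
# Trim leading/trailing whitespace so "Kenya " and "Kenya" aggregate together.
clean["Country"] = clean["Country"].str.strip()
# Harmonise pseudo-blanks into a single "unknown" category instead of deleting rows.
clean = clean.replace(PSEUDO_BLANKS, "unknown")
# Type columns deliberately: revenue becomes numeric, phone numbers stay text.
clean["Revenue"] = pd.to_numeric(clean["Revenue"], errors="coerce")
clean["Phone"] = clean["Phone"].astype("string")
print(clean)
```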
A major pitfall in data management is the "flat table"—a single, massive spreadsheet containing every detail. This structure leads to duplication, wasted memory, and maintenance nightmares.
To solve this, analysts employ the Star Schema, a modelling technique that separates data into two distinct types:
• Fact Tables: These contain transactional metrics (e.g., Sales, Quantity, Total Revenue) and sit at the centre of the model.
• Dimension Tables: These contain descriptive attributes (e.g., Customers, Products, Stores) and surround the fact table.
This structure allows for "write once, use many" efficiency. When a store relocates from one city to another, the analyst updates a single row in the Dimension table, rather than millions of rows in the Fact table. This model ensures that when stakeholders ask complex questions, the relationships between tables allow filters to flow correctly, providing accurate answers instantly.
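As a minimal sketch of the star schema idea, here is a toy fact/dimension pair in pandas; the tables, columns, and values are hypothetical, not the article's dataset.

```python
import pandas as pd

# Dimension table: one row per store, descriptive attributes only.
dim_store = pd.DataFrame({
    "StoreID": [1, 2],
    "StoreName": ["Westlands", "Nakuru Town"],
    "City": ["Nairobi", "Nakuru"],
})

# Fact table: one row per transaction, metrics plus a foreign key.
fact_sales = pd.DataFrame({
    "StoreID": [1, 1, 2, 2, 2],
    "Quantity": [3, 1, 5, 2, 4],
    "Revenue": [45.0, 15.0, 75.0, 30.0, 60.0],
})

# "Write once, use many": relocating store 2 is a single-row update here,
# not millions of updates in the fact table.
dim_store.loc[dim_store["StoreID"] == 2, "City"] = "Eldoret"

# A filter on the dimension flows to the facts through the relationship.
revenue_by_city = (
    fact_sales.merge(dim_store, on="StoreID")
              .groupby("City")["Revenue"].sum()
)
print(revenue_by_city)
```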
Once the data is structured, DAX (Data Analysis Expressions) is the language used to extract business logic. Analysts distinguish between Calculated Columns (row-by-row logic) and Measures (dynamic aggregations) to answer specific business questions.
• Automating Business Logic: Analysts use logical functions like IF and SWITCH to automate categorisation. For example, a nested IF statement or a SWITCH function can scan phone number prefixes (e.g., 254, 256) and automatically classify the country of origin as Kenya or Uganda.
• Time Intelligence: Business decisions rely heavily on historical context. Using time intelligence functions like DATEADD and SAMEPERIODLASTYEAR inside a CALCULATE function, analysts can generate metrics like "Revenue Last Month" or "Revenue Last Year". This shifts the context of the data, allowing a manager to instantly see if performance is trending up or down compared to previous periods without manual recalculation.
• Handling Complexity: Advanced iterators like SUMX allow for calculations that require row-by-row evaluation before aggregating, such as multiplying yield by market price for every single transaction to get a precise total revenue (see the sketch after this list).
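The same business logic, mirrored in pandas for illustration rather than actual DAX (the DAX versions would use SWITCH for the prefix lookup and SUMX for the row-by-row revenue); columns and values are hypothetical.

```python
import pandas as pd

sales = pd.DataFrame({
    "Phone": ["254700111222", "256772555666", "254733999888"],
    "Yield": [120.0, 80.0, 200.0],   # units harvested per transaction
    "Price": [1.5, 2.0, 1.2],        # market price per unit
})

# SWITCH-style classification: scan the phone prefix, assign a country.
prefix_to_country = {"254": "Kenya", "256": "Uganda"}
sales["Country"] = sales["Phone"].str[:3].map(prefix_to_country).fillna("unknown")

# SUMX-style measure: evaluate yield * price per row, then aggregate.
sales["Revenue"] = sales["Yield"] * sales["Price"]
total_revenue = sales["Revenue"].sum()

print(sales[["Country", "Revenue"]])
print("Total revenue:", total_revenue)
```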
A dashboard is not just a collection of charts; it is a tool for decision-making. Analysts select specific visuals to answer specific questions, ensuring the report is intuitive for non-technical stakeholders.
• Trends and Comparisons: To show how revenue evolves over time, analysts use Line Charts or Area Charts, which emphasise volume and trends. For comparing categories, such as revenue by county, Column Charts (vertical) or Bar Charts (horizontal) are used.
• Correlations: To test hypotheses, such as "Does higher profit correlate with higher revenue?", analysts use Scatter Charts. If the bubbles trend upward, it indicates a positive correlation, validating the business strategy.
• Managing High-Volume Data: When dealing with many categories (e.g., revenue by county and then by crop type), standard pie charts become cluttered. Analysts use Tree Maps or Decomposition Trees to visualise hierarchies and drill down into the data to understand exactly why a number is high or low.
The final output is the Dashboard—a one-page summary designed to answer the most important questions at a glance.
• Immediate Health Checks: Critical numbers (Total Profit, Total Yield) are placed at the top using KPI Cards or Multi-row Cards. This ensures that decision-makers see the most vital metrics immediately.
• Interactivity: Static reports limit discovery. Analysts add Slicers to allow users to filter the entire dashboard by specific segments, such as "County" or "Crop Type." This transforms a generic report into a tailored tool for specific regional managers.
• AI-Driven Insights: Tools like Q&A allow users to type questions in plain English (e.g., "Total yield by crop type") and receive an instant visual answer, bridging the gap between technical data models and ad-hoc business inquiries.
By mastering these steps—cleaning data in Power Query, modelling with Star Schemas, calculating with DAX, and visualising in Power BI—analysts transform raw, messy data into a coherent narrative that drives real-world business action.
2026-02-08 20:00:17
I’m a software engineer working in the industry, and I’ve spent my nights and weekends grinding on this open-source project.
Personally, I really love Streamlit. Its intuitive syntax is a game-changer, and many of my friends in AI/Data research use it for prototyping every day.
But as projects grew, I saw them suffering from a structural performance issue: "The Full-Script Rerun."
Every time you click a button, the entire Python script runs from top to bottom. Watching the loading spinner spin endlessly (or seeing the app crash) made me want to fix this as a developer.
I tried recommending NiceGUI as an alternative. It's great, but the feedback was consistent:
"The syntax is too different from Streamlit."
"Customizing the design (Material Design) is too hard."
So, I decided to take matters into my own hands.
"What if I built a tool that is as easy as Streamlit, but as fast as React?"
That’s how Violit was born.
Violit is built on top of FastAPI and Shoelace (Web Components). My architectural goals were clear: keep Streamlit's simplicity, deliver React-like speed through fine-grained updates, and make theming a one-line change (e.g. theme="cyberpunk").
Violit inherits the developer experience of Streamlit. You don't need to know complex callbacks or useEffect. The flow of your Python code becomes the UI.
```python
import violit as vl

app = vl.App(title="Violit Demo")

# State declaration (Similar to React's useState or SolidJS signals)
count = app.state(0)

# Clicking the button updates ONLY the 'count' variable.
# No full script rerun happens.
app.button("Increment", on_click=lambda: count.set(count.value + 1))

# When 'count' changes, only this text component updates.
app.text("Current Count:", count)

app.run()
```
Because Violit uses fine-grained reactivity, it doesn't need to re-read data or re-render the entire DOM when a state changes.
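To make "fine-grained reactivity" concrete, here is a stripped-down signal in plain Python. It is a generic illustration of the pattern, not Violit's actual internals.

```python
# A minimal reactive "signal": subscribers re-run only when this value changes.
class Signal:
    def __init__(self, value):
        self.value = value
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)
        fn(self.value)                    # render once with the initial value

    def set(self, value):
        if value == self.value:
            return                        # no change, nothing re-renders
        self.value = value
        for fn in self._subscribers:      # only components bound to this signal
            fn(value)                     # update; no full-script rerun

count = Signal(0)
count.subscribe(lambda v: print(f"Current Count: {v}"))
count.set(1)   # re-renders just the text bound to `count`
```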
Here is a comparison of rendering speeds when plotting graphs with large datasets:
| Data Points | Streamlit Rendering | Violit Rendering |
|---|---|---|
| 100K | ~62ms | ~14ms |
| 500K | ~174ms | ~20ms |
| 1M | ~307ms | ~24ms |
As you can see, Violit shows minimal performance degradation even with large datasets, as it bypasses the heavy rerun cycle.
This comparison shows the rendering of a graph plot using 5 million generated data points. (Top: Streamlit, Bottom: Violit)
Violit also demonstrates a drastically faster app startup speed compared to Streamlit.
I believe that "Aesthetics are a feature."
Violit comes with over 20 preset themes ranging from bootstrap to cyberpunk and vaporwave.
Bootstrap

Dracula

Hand-drawn

You can switch the entire look of your app with a single argument.
Talk is cheap. To prove that Violit is ready for production (MVP), I built the official landing page and documentation entirely using Violit.
It supports Hybrid Rendering (HTMX for large traffic, WebSocket for real-time) and can even run in Desktop mode using pywebview.
Violit aims to go beyond being just another Streamlit alternative. Our goal is to enable developers to build complete web services (MVPs) entirely in Python.
I’m developing this in public, and it’s currently at v0.1.11. It’s still early days, but the response from Reddit and the community has been amazing.
If you are looking for a faster alternative to Streamlit for your dashboards or AI demos, please give it a try and let me know what you think!
GitHub: github.com/violit-dev/violit (Stars make me happy! ⭐)
Build Your Own Blog in 10 Minutes with Violit!: Violit blog example
Thanks for reading! Happy coding. 💜
2026-02-08 20:00:00
The human brain is an astonishing paradox. It consumes roughly 20 watts of power, about the same as a dim light bulb, yet it performs the equivalent of an exaflop of operations per second. To put that in perspective, when Oak Ridge National Laboratory's Frontier supercomputer achieves the same computational feat, it guzzles 20 megawatts, a million times more energy. Your brain is quite literally a million times more energy-efficient at learning, reasoning, and making sense of the world than the most advanced artificial intelligence systems we can build.
This isn't just an interesting quirk of biology. It's a clue to one of the most pressing technological problems of our age: the spiralling energy consumption of artificial intelligence. In 2024, data centres consumed approximately 415 terawatt-hours of electricity globally, representing about 1.5 per cent of worldwide electricity consumption. The United States alone saw data centres consume 183 TWh, more than 4 per cent of the country's total electricity use. And AI is the primary driver of this surge. What was responsible for 5 to 15 per cent of data centre power use in recent years could balloon to 35 to 50 per cent by 2030, according to projections from the International Energy Agency.
The environmental implications are staggering. For the 12 months ending August 2024, US data centres alone were responsible for 105 million metric tonnes of CO2, accounting for 2.18 per cent of national emissions. Under the IEA's central scenario, global data centre electricity consumption could more than double between 2024 and 2030, reaching 945 terawatt-hours by the decade's end. Training a single large language model like OpenAI's GPT-3 required about 1,300 megawatt-hours of electricity, equivalent to the annual consumption of 130 US homes. And that's just for training. The energy cost of running these models for billions of queries adds another enormous burden.
We are, quite simply, hitting a wall. Not a wall of what's computationally possible, but a wall of what's energetically sustainable. And the reason, an increasing number of researchers believe, lies not in our algorithms or our silicon fabrication techniques, but in something far more fundamental: the very architecture of how we build computers.
In 1977, John Backus stood before an audience at the ACM Turing Award ceremony and delivered what would become one of the most influential lectures in computer science history. Backus, the inventor of FORTRAN, didn't use the occasion to celebrate his achievements. Instead, he delivered a withering critique of the foundation upon which nearly all modern computing rests: the von Neumann architecture.
Backus described the von Neumann computer as having three parts: a CPU, a store, and a connecting tube that could transmit a single word between the CPU and the store. He proposed calling this tube “the von Neumann bottleneck.” The problem wasn't just physical, the limited bandwidth between processor and memory. It was, he argued, “an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand.”
Nearly 50 years later, we're still living with that bottleneck. And its energy implications have become impossible to ignore.
In a conventional computer, the CPU and memory are physically separated. Data must be constantly shuttled back and forth across this divide. Every time the processor needs information, it must fetch it from memory. Every time it completes a calculation, it must send the result back. This endless round trip is called the von Neumann bottleneck, and it's murderously expensive in energy terms.
The numbers are stark. Energy consumed accessing data from dynamic random access memory can be approximately 1,000 times more than the energy spent on the actual computation. Moving data between the CPU and cache memory costs 100 times the energy of a basic operation. Moving it between the CPU and DRAM costs 10,000 times as much. The vast majority of energy in modern computing isn't spent calculating. It's spent moving data around.
For AI and machine learning, which involve processing vast quantities of data through billions or trillions of parameters, this architectural separation becomes particularly crippling. The amount of data movement required is astronomical. And every byte moved is energy wasted. IBM Research, which has been at the forefront of developing alternatives to the von Neumann model, notes that data fetching incurs “significant energy and latency costs due to the requirement of shuttling data back and forth.”
The brain takes a radically different approach. It doesn't separate processing and storage. In the brain, these functions happen in the same place: the synapse.
Synapses are the junctions between neurons where signals are transmitted. But they're far more than simple switches. Each synapse stores information through its synaptic weight, the strength of the connection between two neurons, and simultaneously performs computations by integrating incoming signals and determining whether to fire. The brain has approximately 100 billion neurons and 100 trillion synaptic connections. Each of these connections is both a storage element and a processing element, operating in parallel.
This co-location of memory and processing eliminates the energy cost of data movement. When your brain learns something, it modifies the strength of synaptic connections. When it recalls that information, those same synapses participate in the computation. There's no fetching data from a distant memory bank. The memory is the computation.
The energy efficiency this enables is extraordinary. Research published in eLife in 2020 investigated the metabolic costs of synaptic plasticity, the brain's mechanism for learning and memory. The researchers found that synaptic plasticity is metabolically demanding, which makes sense given that most of the energy used by the brain is associated with synaptic transmission. But the brain has evolved sophisticated mechanisms to optimise this energy use.
One such mechanism is called synaptic caching. The researchers discovered that the brain uses a hierarchy of plasticity mechanisms with different energy costs and timescales. Transient, low-energy forms of plasticity allow the brain to explore different connection strengths cheaply. Only when a pattern proves important does the brain commit energy to long-term, stable changes. This approach, the study found, “boosts energy efficiency manifold.”
The brain also employs sparse connectivity. Because synaptic transmission dominates energy consumption, the brain ensures that only a small fraction of synapses are active at any given time. Through mechanisms like imbalanced plasticity, where depression of synaptic connections is stronger than their potentiation, the brain continuously prunes unnecessary connections, maintaining a lean, energy-efficient network.
While the brain accounts for only about 2 per cent of body weight, it's responsible for about 20 per cent of our energy use at rest. That sounds like a lot until you realise that those 20 watts are supporting conscious thought, sensory processing, motor control, memory formation and retrieval, emotional regulation, and countless automatic processes. No artificial system comes close to that level of computational versatility per watt.
The question that's been nagging at researchers for decades is this: why can't we build computers that work the same way?
Carver Mead had been thinking about this problem since the 1960s. A pioneer in microelectronics at Caltech, Mead's interest in biological models dated back to at least 1967, when he met biophysicist Max Delbrück, who stimulated Mead's fascination with transducer physiology. Observing graded synaptic transmission in the retina, Mead became interested in treating transistors as analogue devices rather than digital switches, noting parallels between charges moving in MOS transistors operated in weak inversion and charges flowing across neuronal membranes.
In the 1980s, after intense discussions with John Hopfield and Richard Feynman, Mead's thinking crystallised. In 1984, he published “Analog VLSI and Neural Systems,” the first book on what he termed “neuromorphic engineering,” involving the use of very-large-scale integration systems containing electronic analogue circuits to mimic neuro-biological architectures present in the nervous system.
Mead is credited with coining the term “neuromorphic processors.” His insight was that we could build silicon hardware that operated on principles similar to the brain: massively parallel, event-driven, and with computation and memory tightly integrated. In 1986, Mead and Federico Faggin founded Synaptics Inc. to develop analogue circuits based on neural networking theories. Mead succeeded in creating an analogue silicon retina and inner ear, demonstrating that neuromorphic principles could be implemented in physical hardware.
For decades, neuromorphic computing remained largely in research labs. The von Neumann architecture, despite its inefficiencies, was well understood, easy to program, and benefited from decades of optimisation. Neuromorphic chips were exotic, difficult to program, and lacked the software ecosystems that made conventional processors useful.
But the energy crisis of AI has changed the calculus. As the costs, both financial and environmental, of training and running large AI models have exploded, the appeal of radically more efficient architectures has grown irresistible.
The landscape of neuromorphic computing has transformed dramatically in recent years, with multiple approaches emerging from research labs and entering practical deployment. Each takes a different strategy, but all share the same goal: escape the energy trap of the von Neumann architecture.
Intel's neuromorphic research chip, Loihi 2, represents one vision of this future. A single Loihi 2 chip supports up to 1 million neurons and 120 million synapses, implementing spiking neural networks with programmable dynamics and modular connectivity. In April 2024, Intel introduced Hala Point, claimed to be the world's largest neuromorphic system. Hala Point packages 1,152 Loihi 2 processors in a six-rack-unit chassis and supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores. The entire system consumes 2,600 watts of power. That's more than your brain's 20 watts, certainly, but consider what it's doing: supporting over a billion neurons, more than some mammalian brains, with a tiny fraction of the power a conventional supercomputer would require. Research using Loihi 2 has demonstrated “orders of magnitude gains in the efficiency, speed, and adaptability of small-scale edge workloads.”
IBM has pursued a complementary path focused on inference efficiency. Their TrueNorth microchip architecture, developed in 2014, was designed to be closer in structure to the human brain than the von Neumann architecture. More recently, IBM's proof-of-concept NorthPole chip achieved remarkable performance in image recognition, blending approaches from TrueNorth with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. In tests, NorthPole was 47 times faster than the next most energy-efficient GPU and 73 times more energy-efficient than the next lowest latency GPU. These aren't incremental improvements. They represent fundamental shifts in what's possible when you abandon the traditional separation of memory and computation.
Europe has contributed two distinct neuromorphic platforms through the Human Brain Project, which ran from 2013 to 2023. The SpiNNaker machine, located in Manchester, connects 1 million ARM processors with a packet-based network optimised for the exchange of neural action potentials, or spikes. It runs at real time and is the world's largest neuromorphic computing platform. In Heidelberg, the BrainScaleS system takes a different approach entirely, implementing analogue electronic models of neurons and synapses. Because it's implemented as an accelerated system, BrainScaleS emulates neurons at 1,000 times real time, omitting energy-hungry digital calculations. Where SpiNNaker prioritises scale and biological realism, BrainScaleS optimises for speed and energy efficiency. Both systems are integrated into the EBRAINS Research Infrastructure and offer free access for test usage, democratising access to neuromorphic computing for researchers worldwide.
At the ultra-low-power end of the spectrum, BrainChip's Akida processor targets edge computing applications where every milliwatt counts. Its name means “spike” in Greek, a nod to its spiking neural network architecture. Akida employs event-based processing, performing computations only when new sensory input is received, dramatically reducing the number of operations. The processor supports on-chip learning, allowing models to adapt without connecting to the cloud, critical for applications in remote or secure environments. BrainChip focuses on markets with sub-1-watt usage per chip. In October 2024, they announced the Akida Pico, a miniaturised version that consumes just 1 milliwatt of power, or even less depending on the application. To put that in context, 1 milliwatt could power this chip for 20,000 hours on a single AA battery.
Neuromorphic chips that mimic biological neurons represent one approach to escaping the von Neumann bottleneck. But they're not the only one. A broader movement is underway to fundamentally rethink the relationship between memory and computation, and it doesn't require imitating neurons at all.
In-memory computing, or compute-in-memory, represents a different strategy with the same goal: eliminate the energy cost of data movement by performing computations where the data lives. Rather than fetching data from memory to process it in the CPU, in-memory computing performs certain computational tasks in place in memory itself.
The potential energy savings are massive. A memory access typically consumes 100 to 1,000 times more energy than a processor operation. By keeping computation and data together, in-memory computing can reduce attention latency and energy consumption by up to two and four orders of magnitude, respectively, compared with GPUs, according to research published in Nature Computational Science in 2025.
Recent developments have been striking. One compute-in-memory architecture processing unit delivered GPU-class performance at a fraction of the energy cost, with over 98 per cent lower energy consumption than a GPU over various large corpora datasets. These aren't marginal improvements. They're transformative, suggesting that the energy crisis in AI might not be an inevitable consequence of computational complexity, but rather a symptom of architectural mismatch.
The technology enabling much of this progress is the memristor, a portmanteau of “memory” and “resistor.” Memristors are electronic components that can remember the amount of charge that has previously flowed through them, even when power is turned off. This property makes them ideal for implementing synaptic functions in hardware.
Research into memristive devices has exploded in recent years. Studies have demonstrated that memristors can replicate synaptic plasticity through long-term and short-term changes in synaptic efficacy. They've successfully implemented many synaptic characteristics, including short-term plasticity, long-term plasticity, paired-pulse facilitation, spike-timing-dependent plasticity, and spike-rate-dependent plasticity, the mechanisms the brain uses for learning and memory.
The power efficiency achieved is remarkable. Some flexible memristor arrays have exhibited ultralow energy consumption down to 4.28 attojoules per synaptic spike. That's 4.28 × 10⁻¹⁸ joules, a number so small it's difficult to comprehend. For context, that's even lower than a biological synapse, which operates at around 10 femtojoules, or 10⁻¹⁴ joules. We've built artificial devices that, in at least this one respect, are more energy-efficient than biology.
Memristor-based artificial neural networks have achieved recognition accuracy up to 88.8 per cent on the MNIST pattern recognition dataset, demonstrating that these ultralow-power devices can perform real-world AI tasks. And because memristors process operands at the location of storage, they obviate the need to transfer data between memory and processing units, directly addressing the von Neumann bottleneck.
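A toy numpy sketch of that idea: the weights live in the crossbar as conductances, and the matrix-vector product falls out of Ohm's and Kirchhoff's laws rather than a fetch-compute-store loop. The numbers are illustrative and do not model any specific device.

```python
import numpy as np

# Toy model of a memristive crossbar computing y = W @ x in the analog domain.
# The conductances (the stored weights) never move; only voltages do.
rng = np.random.default_rng(42)
conductance = rng.uniform(0.1, 1.0, size=(4, 3))   # W, programmed into the array
voltages = np.array([0.2, 0.5, 0.1])               # x, applied to the input lines

# Each output line sums currents I = G * V (Kirchhoff's current law),
# so the "computation" happens where the data is stored.
currents = conductance @ voltages
print(currents)
```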
Traditional artificial neural networks, the kind that power systems like ChatGPT and DALL-E, use continuous-valued activations. Information flows through the network as real numbers, with each neuron applying an activation function to its weighted inputs to produce an output. This approach is mathematically elegant and has proven phenomenally successful. But it's also computationally expensive.
Spiking neural networks, or SNNs, take a different approach inspired directly by biology. Instead of continuous values, SNNs communicate through discrete events called spikes, mimicking the action potentials that biological neurons use. A neuron in an SNN only fires when its membrane potential crosses a threshold, and information is encoded in the timing and frequency of these spikes.
This event-driven computation offers significant efficiency advantages. In conventional neural networks, every neuron performs a multiply-and-accumulate operation for each input, regardless of whether that input is meaningful. SNNs, by contrast, only perform computations when spikes occur. This sparsity, the fact that most neurons are silent most of the time, mirrors the brain's strategy and dramatically reduces the number of operations required.
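A minimal sketch of the mechanism, assuming a textbook leaky integrate-and-fire neuron in discrete time; the parameters are illustrative and not tied to any particular chip.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Return a binary spike train for a stream of input-current samples."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leak toward rest, integrate the input
        if v >= v_threshold:         # fire only when the threshold is crossed...
            spikes[t] = 1
            v = v_reset              # ...then reset the membrane potential
    return spikes

# A burst of input in the middle produces a handful of spikes; the silent
# periods generate no events at all, which is the source of SNN sparsity.
current = np.concatenate([np.zeros(50), np.full(100, 0.2), np.zeros(50)])
spikes = simulate_lif(current)
print(f"{spikes.sum()} spikes in {current.size} timesteps")
```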
The utilisation of binary spikes allows SNNs to adopt low-power accumulation instead of the traditional high-power multiply-accumulation operations that dominate energy consumption in conventional neural networks. Research has shown that a sparse spiking network pruned to retain only 0.63 per cent of its original connections can achieve a remarkable 91 times increase in energy efficiency compared to the original dense network, requiring only 8.5 million synaptic operations for inference, with merely 2.19 per cent accuracy loss on the CIFAR-10 dataset.
SNNs are also naturally compatible with neuromorphic hardware. Because neuromorphic chips like Loihi and TrueNorth implement spiking neurons in silicon, they can run SNNs natively and efficiently. The event-driven nature of spikes means these chips can spend most of their time in low-power states, only activating when computation is needed.
The challenges lie in training. Backpropagation, the algorithm that enabled the deep learning revolution, doesn't work straightforwardly with spikes because the discrete nature of firing events creates discontinuities that make gradients undefined. Researchers have developed various workarounds, including surrogate gradient methods and converting pre-trained conventional networks to spiking versions, but training SNNs remains more difficult than training their conventional counterparts.
Still, the efficiency gains are compelling enough that hybrid approaches are emerging, combining conventional and spiking architectures to leverage the best of both worlds. The first layers of a network might process information in conventional mode for ease of training, while later layers operate in spiking mode for efficiency. This pragmatic approach acknowledges that the transition from von Neumann to neuromorphic computing won't happen overnight, but suggests a path forward that delivers benefits today whilst building towards a more radical architectural shift tomorrow.
All of this raises a profound question: is energy efficiency fundamentally about architecture, or is it about raw computational power?
The conventional wisdom for decades has been that computational progress follows Moore's Law: transistors get smaller, chips get faster and more power-efficient, and we solve problems by throwing more computational resources at them. The assumption has been that if we want more efficient AI, we need better transistors, better cooling, better power delivery, better GPUs.
But the brain suggests something radically different. The brain's efficiency doesn't come from having incredibly fast, advanced components. Neurons operate on timescales of milliseconds, glacially slow compared to the nanosecond speeds of modern transistors. Synaptic transmission is inherently noisy and imprecise. The brain's “clock speed,” if we can even call it that, is measured in tens to hundreds of hertz, compared to gigahertz for CPUs.
The brain's advantage is architectural. It's massively parallel, with billions of neurons operating simultaneously. It's event-driven, activating only when needed. It co-locates memory and processing, eliminating data movement costs. It uses sparse, adaptive connectivity that continuously optimises for the tasks at hand. It employs multiple timescales of plasticity, from milliseconds to years, allowing it to learn efficiently at every level.
The emerging evidence from neuromorphic computing and in-memory architectures suggests that the brain's approach isn't just one way to build an efficient computer. It might be the only way to build a truly efficient computer for the kinds of tasks that AI systems need to perform.
Consider the numbers. Modern AI training runs consume megawatt-hours or even gigawatt-hours of electricity. The human brain, over an entire lifetime, consumes perhaps 10 to 15 megawatt-hours total. A child can learn to recognise thousands of objects from a handful of examples. Current AI systems require millions of labelled images and vast computational resources to achieve similar performance. The child's brain is doing something fundamentally different, and that difference is architectural.
This realisation has profound implications. It suggests that the path to sustainable AI isn't primarily about better hardware in the conventional sense. It's about fundamentally different hardware that embodies different architectural principles.
The transition to neuromorphic and in-memory architectures faces three interconnected obstacles: programmability, task specificity, and manufacturing complexity.
The programmability challenge is perhaps the most significant. The von Neumann architecture comes with 80 years of software development, debugging tools, programming languages, libraries, and frameworks. Every computer science student learns to program von Neumann machines. Neuromorphic chips and in-memory computing architectures lack this mature ecosystem. Programming a spiking neural network requires thinking in terms of spikes, membrane potentials, and synaptic dynamics rather than the familiar abstractions of variables, loops, and functions. This creates a chicken-and-egg problem: hardware companies hesitate to invest without clear demand, whilst software developers hesitate without available hardware. Progress happens, but slower than the energy crisis demands.
Task specificity presents another constraint. These architectures excel at parallel, pattern-based tasks involving substantial data movement, precisely the characteristics of machine learning and AI. But they're less suited to sequential, logic-heavy tasks. A neuromorphic chip might brilliantly recognise faces or navigate a robot through a cluttered room, but it would struggle to calculate your taxes. This suggests a future of heterogeneous computing, where different architectural paradigms coexist, each handling the tasks they're optimised for. Intel's chips already combine conventional CPU cores with specialised accelerators. Future systems might add neuromorphic cores to this mix.
Manufacturing at scale remains challenging. Memristors hold enormous promise, but manufacturing them reliably and consistently is difficult. Analogue circuits, which many neuromorphic designs use, are more sensitive to noise and variation than digital circuits. Integrating radically different computing paradigms on a single chip introduces complexity in design, testing, and verification. These aren't insurmountable obstacles, but they do mean that the transition won't happen overnight.
Despite these challenges, momentum is building. The energy costs of AI have become too large to ignore, both economically and environmentally. Data centre operators are facing hard limits on available power. Countries are setting aggressive carbon reduction targets. The financial costs of training ever-larger models are becoming prohibitive. The incentive to find alternatives has never been stronger.
Investment is flowing into neuromorphic and in-memory computing. Intel's Hala Point deployment at Sandia National Laboratories represents a serious commitment to scaling neuromorphic systems. IBM's continued development of brain-inspired architectures demonstrates sustained research investment. Start-ups like BrainChip are bringing neuromorphic products to market for edge computing applications where energy efficiency is paramount.
Research institutions worldwide are contributing. Beyond Intel, IBM, and BrainChip, teams at universities and national labs are exploring everything from novel materials for memristors to new training algorithms for spiking networks to software frameworks that make neuromorphic programming more accessible.
The applications are becoming clearer. Edge computing, where devices must operate on battery power or energy harvesting, is a natural fit for neuromorphic approaches. The Internet of Things, with billions of low-power sensors and actuators, could benefit enormously from chips that consume milliwatts rather than watts. Robotics, which requires real-time sensory processing and decision-making, aligns well with event-driven, spiking architectures. Embedded AI in smartphones, cameras, and wearables could become far more capable with neuromorphic accelerators.
Crucially, the software ecosystem is maturing. PyNN, an API for programming spiking neural networks, works across multiple neuromorphic platforms. Intel's Lava software framework aims to make Loihi more accessible. Frameworks for converting conventional neural networks to spiking versions are improving. The learning curve is flattening.
Researchers have also discovered that neuromorphic computers may prove well suited to applications beyond AI. Monte Carlo methods, commonly used in physics simulations, financial modelling, and risk assessment, show a “neuromorphic advantage” when implemented on spiking hardware. The event-driven nature of neuromorphic chips maps naturally to stochastic processes. This suggests that the architectural benefits extend beyond pattern recognition and machine learning to a broader class of computational problems.
Stepping back, the story of neuromorphic computing and in-memory architectures is about more than just building faster or cheaper AI. It's about recognising that the way we've been building computers for 80 years, whilst extraordinarily successful, isn't the only way. It might not even be the best way for the kinds of computing challenges that increasingly define our technological landscape.
The von Neumann architecture emerged in an era when computers were room-sized machines used by specialists to perform calculations. The separation of memory and processing made sense in that context. It simplified programming. It made the hardware easier to design and reason about. It worked.
But computing has changed. We've gone from a few thousand computers performing scientific calculations to billions of devices embedded in every aspect of life, processing sensor data, recognising speech, driving cars, diagnosing diseases, translating languages, and generating images and text. The workloads have shifted from calculation-intensive to data-intensive. And for data-intensive workloads, the von Neumann bottleneck is crippling.
The brain evolved over hundreds of millions of years to solve exactly these kinds of problems: processing vast amounts of noisy sensory data, recognising patterns, making predictions, adapting to new situations, all whilst operating on a severely constrained energy budget. The architectural solutions the brain arrived at, co-located memory and processing, event-driven computation, massive parallelism, sparse adaptive connectivity, are solutions to the same problems we now face in artificial systems.
We're not trying to copy the brain exactly. Neuromorphic computing isn't about slavishly replicating every detail of biological neural networks. It's about learning from the principles the brain embodies and applying those principles in silicon and software. It's about recognising that there are multiple paths to intelligence and efficiency, and the path we've been on isn't the only one.
The energy consumption crisis of AI might turn out to be a blessing in disguise. It's forcing us to confront the fundamental inefficiencies in how we build computing systems. It's pushing us to explore alternatives that we might otherwise have ignored. It's making clear that incremental improvements to the existing paradigm aren't sufficient. We need a different approach.
The question the brain poses to computing isn't “why can't computers be more like brains?” It's deeper: “what if the very distinction between memory and processing is artificial, a historical accident rather than a fundamental necessity?” What if energy efficiency isn't something you optimise for within a given architecture, but something that emerges from choosing the right architecture in the first place?
The evidence increasingly suggests that this is the case. Energy efficiency, for the kinds of intelligent, adaptive, data-processing tasks that AI systems perform, is fundamentally architectural. No amount of optimisation of von Neumann machines will close the million-fold efficiency gap between artificial and biological intelligence. We need different machines.
The good news is that we're learning how to build them. The neuromorphic chips and in-memory computing architectures emerging from labs and starting to appear in products demonstrate that radically more efficient computing is possible. The path forward exists.
The challenge now is scaling these approaches, building the software ecosystems that make them practical, and deploying them widely enough to make a difference. Given the stakes, both economic and environmental, that work is worth doing. The brain has shown us what's possible. Now we have to build it.
Energy Consumption and AI:
– International Energy Agency (IEA), "Energy demand from AI," Energy and AI Report, 2024. Available: https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
– Pew Research Center, "What we know about energy use at U.S. data centers amid the AI boom," October 24, 2024. Available: https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
– Global Efficiency Intelligence, "Data Centers in the AI Era: Energy and Emissions Impacts in the U.S. and Key States," 2024. Available: https://www.globalefficiencyintel.com/data-centers-in-the-ai-era-energy-and-emissions-impacts-in-the-us-and-key-states
Brain Energy Efficiency:
– MIT News, "The brain power behind sustainable AI," October 24, 2024. Available: https://news.mit.edu/2025/brain-power-behind-sustainable-ai-miranda-schwacke-1024
– Texas A&M University, "Artificial Intelligence That Uses Less Energy By Mimicking The Human Brain," March 25, 2025. Available: https://stories.tamu.edu/news/2025/03/25/artificial-intelligence-that-uses-less-energy-by-mimicking-the-human-brain/
Synaptic Plasticity and Energy:
– Schieritz, P., et al., "Energy efficient synaptic plasticity," eLife, vol. 9, e50804, 2020. DOI: 10.7554/eLife.50804. Available: https://elifesciences.org/articles/50804
Von Neumann Bottleneck:
– IBM Research, "How the von Neumann bottleneck is impeding AI computing," 2024. Available: https://research.ibm.com/blog/why-von-neumann-architecture-is-impeding-the-power-of-ai-computing
– Backus, J., "Can Programming Be Liberated from the Von Neumann Style? A Functional Style and Its Algebra of Programs," ACM Turing Award Lecture, 1977.
Neuromorphic Computing – Intel:
– Sandia National Laboratories / Next Platform, "Sandia Pushes The Neuromorphic AI Envelope With Hala Point 'Supercomputer'," April 24, 2024. Available: https://www.nextplatform.com/2024/04/24/sandia-pushes-the-neuromorphic-ai-envelope-with-hala-point-supercomputer/
– Open Neuromorphic, "A Look at Loihi 2 – Intel – Neuromorphic Chip," 2024. Available: https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-2-intel/
Neuromorphic Computing – IBM:
– IBM Research, "In-memory computing," 2024. Available: https://research.ibm.com/projects/in-memory-computing
Neuromorphic Computing – Europe:
– Human Brain Project, "Neuromorphic Computing," 2023. Available: https://www.humanbrainproject.eu/en/science-development/focus-areas/neuromorphic-computing/
– EBRAINS, "Neuromorphic computing – Modelling, simulation & computing," 2024. Available: https://www.ebrains.eu/modelling-simulation-and-computing/computing/neuromorphic-computing/
Neuromorphic Computing – BrainChip:
– Open Neuromorphic, "A Look at Akida – BrainChip – Neuromorphic Chip," 2024. Available: https://open-neuromorphic.org/neuromorphic-computing/hardware/akida-brainchip/
– IEEE Spectrum, "BrainChip Unveils Ultra-Low Power Akida Pico for AI Devices," October 2024. Available: https://spectrum.ieee.org/neuromorphic-computing
History of Neuromorphic Computing:
– Wikipedia, "Carver Mead," 2024. Available: https://en.wikipedia.org/wiki/Carver_Mead
– History of Information, "Carver Mead Writes the First Book on Neuromorphic Computing," 2024. Available: https://www.historyofinformation.com/detail.php?entryid=4359
In-Memory Computing:
– Nature Computational Science, "Analog in-memory computing attention mechanism for fast and energy-efficient large language models," 2025. DOI: 10.1038/s43588-025-00854-1
– ERCIM News, "In-Memory Computing: Towards Energy-Efficient Artificial Intelligence," Issue 115, 2024. Available: https://ercim-news.ercim.eu/en115/r-i/2115-in-memory-computing-towards-energy-efficient-artificial-intelligence
Memristors:
– Nature Communications, "Experimental demonstration of highly reliable dynamic memristor for artificial neuron and neuromorphic computing," 2022. DOI: 10.1038/s41467-022-30539-6
– Nano-Micro Letters, "Low-Power Memristor for Neuromorphic Computing: From Materials to Applications," 2025. DOI: 10.1007/s40820-025-01705-4
Spiking Neural Networks:
– PMC / NIH, "Spiking Neural Networks and Their Applications: A Review," 2022. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC9313413/
– Frontiers in Neuroscience, "Optimizing the Energy Consumption of Spiking Neural Networks for Neuromorphic Applications," 2020. Available: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.00662/full
Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: [email protected]
2026-02-08 19:57:12
For the first 21 years of my life, it felt like I had been fighting an uphill battle against myself. Half-abandoned projects everywhere, constantly thinking "I could and should be doing more".
It felt like I had shipped with broken software. Everyone else seemed to have such an easier time completing even simple tasks, and I was struggling.
It turns out I hadn't been running broken software; I was running a completely different OS. And recently, I got the documentation: ADHD.
Once I understood the specs of this environment, I realized standard tools like Notion or a to-do list on my phone weren't going to cut it. They assume a functioning environment I simply don't have.
On my devices, I was already using Doom Emacs because I'm a tech nerd who loves to tinker. So instead of trying to change myself to fit a certain tool, I decided to learn some elisp and reshape the tool to fit me.
Here's how I weaponized my config against executive dysfunction.
Before I even touched a single line of lisp, I had to solve a different problem: Syncing my notes between devices.
My brain has a very small L1 cache. If I have an idea while in the kitchen, but my to-do list is on my desktop in the other room, the "latency" (and mental barrier) is high enough to kill the task. By the time I get to my desktop, the context is lost.
I tried a few solutions for this in the past. I used my phone's note-taking app, but that left my notes split across my devices, which wasn't ideal. I also used a personal Discord server, but that became unmaintainable and unsearchable very quickly.
I needed a Single Source of Truth that was available everywhere, with zero friction.
I have three main devices that I use: my desktop, my laptop, and my phone.
But there is a secret fourth device, and you're accessing it right now! It is my VPS (which will hopefully become a homelab setup in the future)!
To solve this problem, I used Syncthing. Before I had my VPS, I was already considering using Syncthing to sync between devices, but that would require me to keep my devices on at the same time, and in practice, my laptop and my desktop are never turned on simultaneously. That's where my VPS comes into play: it acts as a sort of "cloud", so even if my desktop is turned off, my laptop will still be able to sync my notes from my VPS.
Standard productivity advice tells you to sort tasks by "Deadline" or "Importance". This works great for neurotypical brains. For me, a list of 20 "Important" tasks just looks like a wall of noise. It triggers immediate analysis paralysis.
I realized my bottleneck isn't time, it's Battery.
Some days I run at 110%, some days I run at 10%. On a low-energy day, seeing a big task that requires a lot of energy will not only make me skip that task, it'll make me skip the entire list.
I modified my workflow so every task on my list includes an ENERGY property. I stopped asking myself "When is this due?", and started asking "How much energy will this cost me?"
I defined three states: Low, Medium, and High.
Here's how I implemented this in Doom Emacs. First, I made sure to map priorities to colours:

```elisp
;; Map priorities
(setq org-priority-highest ?A
      org-priority-lowest ?C
      org-priority-default ?B)

;; Colour code priorities
(setq org-priority-faces
      '((?A :foreground "#ff6c6b" :weight bold)
        (?B :foreground "#98be65")
        (?C :foreground "#51afef")))

;; Define energy levels
(setq org-global-properties
      '(("ENERGY_ALL" . "Low Medium High")))

;; Colour code task status
(setq org-todo-keyword-faces
      '(("TODO" :foreground "#51afef" :weight bold)
        ("DONE" :foreground "#98be65" :weight bold)
        ("WAIT" :foreground "#da8548")))
```
The real magic happens in the agenda overview (SPC o A x). I created some custom commands that sort tasks by energy cost.
When I sit down and look at my agenda, I check my internal battery: on a low-energy day I pick something from the "Quick Wins" section, and when I'm running high I head for "Deep Work".
This prevents a negative spiral. Even on a bad day, I can usually clear some "Low Energy" items, which keeps me moving forward.
```elisp
(setq org-agenda-custom-commands
      '(("x" "Overview"
         ((tags-todo "PRIORITY=\"A\"+ENERGY=\"Low\""
                     ((org-agenda-overriding-header "Quick Wins (High Impact, Low Energy)")))
          (tags-todo "PRIORITY=\"A\"+ENERGY=\"High\""
                     ((org-agenda-overriding-header "Deep Work (Focus Required)")))
          (tags-todo "PRIORITY=\"A\"+ENERGY=\"Medium\""
                     ((org-agenda-overriding-header "High Priority (Medium Energy)")))))))
```
As I've mentioned previously, I struggle with forgetting (the context of) ideas, so a standard solution like opening a file browser, navigating to the correct file, typing it all out, etc, was way too much friction for me. I needed a solution that allowed me to effortlessly capture ideas, tasks, to-do's, etc, on the fly.
I have two strategies for this. The first is the usage of org-capture. No matter what I'm doing on my computer, I can hit my custom keybind that launches emacsclient (Ctrl+Alt+e on KDE Plasma), then hit SPC X within emacs. Then, a custom window pops up that asks me for the type of thought I want to capture. In this example, let's choose a to-do (t). After that, I get prompted for five bits of vital information: the priority, the task name, any tags, a deadline, and the energy level.
Importantly, this forces me to assign a priority and energy level to the task immediately. If I don't assign these now, I risk staring at a list of untagged tasks later.
```elisp
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "~/org/inbox.org" "Tasks")
         "* TODO [#%^{Priority|A|B|C}] %^{Task Name} %^g\nDEADLINE: %^t\n:PROPERTIES:\n:ENERGY: %^{Energy?|Low|Medium|High}\n:END:\n")
        ("i" "Idea/Note" entry (file+headline "~/org/inbox.org" "Notes")
         "* %?\n%U\n")
        ("p" "Project Task" entry (file+headline "~/org/projects.org" "Projects")
         "* TODO [#%^{Priority|A|B|C}] %^{Task Name} [/]\nDEADLINE: %^t\n:PROPERTIES:\n:ENERGY: %^{Energy?|Low|Medium|High}\n:END:\n")))
```
Aside from org-capture, I also use org-roam for journalling and long-term knowledge, which I'll get into in more depth in another blog post. For now, let's look at some final visual tweaks.
Historically, I've struggled quite a bit with digital sensory overload. I can't browse normal YouTube without a certain extension that removes all the clickbait titles and thumbnails. Naturally, this translates over to my editor as well. I use the Catppuccin theme (mocha), both because it is low contrast and out of personal preference. The most important tweak is the fonts.
I use a monospace font for codeblocks, and a variable-pitch font for text. This makes it easier to make a distinction between "Human Language" and "Computer Language", and reduces the cognitive load on scanning a file.
```elisp
;; Fonts
(setq doom-font (font-spec :family "Fira Code" :size 16 :weight 'semi-light)
      doom-variable-pitch-font (font-spec :family "Fira Sans" :size 17))

;; UI
(setq doom-theme 'catppuccin)
(setq display-line-numbers-type t) ;; Set to nil if you prefer not seeing line numbers
```
I also use "Zen mode" to center the text and hide the modeline if I really need to focus on something (like writing this blogpost!)
These methods don't "cure" or "fix" my ADHD. There are certainly going to be days where I don't even look at the agenda, and days where I won't even be able to handle "low energy" tasks.
But because the system is built on plain text files and synced via Syncthing, it is resilient. It doesn't shame or judge me for being inactive. If I come back after some downtime, the data will still be there, waiting for me.
I stopped fighting my brain's operating system, and started writing software patches instead. For the first time in 21 years, it feels like I have root access.
If you want to steal my config, feel free! You can find my doom emacs dotfiles here, in my nix-config:
https://git.kittycloud.eu/0xMillyByte/nix-config
Thank you for reading all the way through. I have plans for a future blogpost on how I use org-roam in a similar style to this one. Stay tuned!
As always, if you're having trouble, or have questions, feel free to reach out, you can find my Discord linked on the homepage. Always happy to help ;3
Read the original article on my blog here.
2026-02-08 19:57:02
The Problem
She is Sarah. Single mum. Two kids. £50 a week.
She moved to Leeds with nothing. No family nearby. No idea where the cheapest shop is. No one to ask.
Every week she sits at the kitchen table at midnight. Lily finally asleep. Scrolling through Tesco, Asda, Aldi, Lidl — a different app for each — comparing prices on the same pack of nappies. The same loaf of bread. The same tin of formula.
Some weeks she manages. Some weeks it's toast for dinner again.
There are 14.4 million people in the UK living like Sarah. Families spending hours comparing prices just to stretch a weekly budget that was never enough to begin with.
What I Built
🤝 Let's Make a Deal — a conversational deal-finding assistant that does the work for them.
One conversation. Tell it your postcode, your budget, what you need. It finds the cheapest real deals — locally and online.
🔗 Live App: https://cute-jelly-4af391.netlify.app/
How It Works
Sarah types "cheapest nappies near me" and it tells her:
🏪 Aldi, Roundhay Road — £2.49
🏪 Lidl, York Road — £2.29
📦 Amazon — bulk pack, cheaper per nappy
She types "kids school shoes under £15" and it finds:
👟 Shoe Zone — £12.99
👟 George at Asda — £14.00
She doesn't have to walk into six shops with a toddler on her hip.
But the thing that really matters — she types "free food near me" and discovers:
🥫 Community fridge — two streets away
📱 Olio app — free surplus food nearby
⛪ St. Vincent's church — food parcel every Thursday
Nobody tells you these things when you're new somewhere. This agent does.
Key Features
🍞 Budget-first — never recommends above the user's stated budget
📍 Location-aware — finds deals within 10 miles + online options
🏪 Real retailers — Tesco, Asda, Aldi, Lidl, Argos, B&M, Poundland, Home Bargains, Amazon UK, eBay UK
🆓 Free options — food banks, Olio, Freecycle, community fridges
💬 Conversational — no forms, no filters, just type what you need
📱 Mobile-friendly — works on any phone, no app to download
How I Used Algolia Agent Studio
Agent Configuration
The agent lives inside Algolia's dashboard with a specialised system prompt that:
Collects the user's postcode, budget, and family size upfront
Searches for real UK deals from budget retailers
Formats results with prices, store names, distances, and savings
Surfaces free alternatives when relevant
Includes money-saving tips with every search
Frontend Integration
Built with React using Algolia's official react-instantsearch Chat widget:
```jsx
import { liteClient as algoliasearch } from 'algoliasearch/lite';
import { InstantSearch, Chat } from 'react-instantsearch';
import 'instantsearch.css/components/chat.css';

const searchClient = algoliasearch(
  process.env.REACT_APP_ALGOLIA_APP_ID,
  process.env.REACT_APP_ALGOLIA_API_KEY
);

export function App() {
  return (
    <InstantSearch searchClient={searchClient}>
      {/* Chat widget wired to the Agent Studio agent; index/agent props follow the Agent Studio setup */}
      <Chat />
    </InstantSearch>
  );
}
```
The Chat widget handles all communication with Agent Studio. No custom API calls. No proxy functions. No CORS issues. Clean and simple.
Why Algolia Makes This Work
| What | Why It Matters |
|---|---|
| Sub-50ms retrieval | Budget families don't have time to wait |
| Contextual search | Grounds responses in real data, no hallucinated prices |
| Conversational UX | InstantSearch Chat widget — polished, mobile-first |
| Zero friction | No sign-ups, no downloads, just type and find |
Tech Stack
| Layer | Technology |
|---|---|
| Frontend | React + Algolia InstantSearch Chat Widget |
| Agent | Algolia Agent Studio |
| Deployment | Netlify |
| Styling | Custom dark theme, CSS overrides |
The Real Impact
Last week Lily got a cold. Sarah needed Calpol. She typed "cheapest Calpol near LS8" — Boots had it for £3.59. Asda two streets further had it for £2.10.
Same medicine. Same dose. £1.49 saved.
That doesn't sound like much. But when you're counting coins at the self-checkout, hoping the card doesn't decline —
£1.49 is tomorrow's bread.
"This isn't a tech demo. 1 in 5 UK families are in fuel or food poverty. Every penny saved is a decision they didn't have to make — heating or eating, new shoes or new coat. This agent doesn't judge. It just finds the best deal."
🤝 Let's Make a Deal — One conversation. Real prices. Real shops. Every penny counts.
🔗 Live App: https://cute-jelly-4af391.netlify.app/
📺 Demo Video: https://youtu.be/huvIjJJoBqM
2026-02-08 19:55:42
Date: 2026-02-08
Claude Code has a handy auto memory feature: a project-scoped MEMORY.md file that persists across conversations. Agents read it at startup and write to it when they learn something worth remembering — patterns, gotchas, user preferences. It works great for single-agent sessions.
Then Anthropic ships agent teams.
Agent teams let multiple Claude Code instances work on the same project simultaneously. A lead agent coordinates, teammates work independently in their own context windows, and they share a task list. They also share the same project-scoped memory directory.
The memory file (~/.claude/projects/<project>/memory/MEMORY.md) is a plain file on disk. The Edit tool reads the file, does a string match-and-replace, and writes the whole file back. No locking, no compare-and-swap, no merge strategy.
If two agents discover something worth recording at roughly the same time:
1. Agent A reads MEMORY.md (version 1).
2. Agent B reads MEMORY.md (version 1).
3. Agent A edits its copy and writes the whole file back.
4. Agent B edits its copy and writes the whole file back, silently overwriting Agent A's update.

This is a classic lost-update race condition. It also affects background agents launched via Task with run_in_background, which have been around longer than agent teams.
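To make the failure concrete, here is a deterministic replay of the pattern in plain Python; the file contents and "discoveries" are made up, and this is not Claude Code's implementation.

```python
import tempfile
from pathlib import Path

# Deterministic replay of the lost-update race on a shared memory file.
memory = Path(tempfile.mkdtemp()) / "MEMORY.md"
memory.write_text("# Project memory\n")

# Both agents read version 1 of the file...
agent_a_copy = memory.read_text()
agent_b_copy = memory.read_text()

# ...each appends its own (hypothetical) discovery to its private copy...
agent_a_copy += "- Integration tests require the staging API key\n"
agent_b_copy += "- The build script assumes Node 20\n"

# ...and each writes the whole file back. The last writer wins.
memory.write_text(agent_a_copy)
memory.write_text(agent_b_copy)

print(memory.read_text())  # Agent A's note is gone, with no error anywhere
```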
Memory writes are infrequent in single-agent sessions, so the window for collision is small. But agent teams change the calculus: several agents working the same project in parallel means more writers, more discoveries worth recording, and more chances for two writes to land on top of each other.
Worse, the failure is silent: you won't know something was lost until a future session acts on stale or missing knowledge.
We filed anthropics/claude-code#24130 with four suggested approaches:
File locking — flock or equivalent around reads and writes. Simple, proven, platform-dependent.
Append-only log — agents only append; a periodic compaction pass deduplicates. This eliminates the lost-update race, but the file grows, needs garbage collection, and contradictory entries can coexist until compaction resolves them.
Per-agent memory files — each agent writes to memory/<agent-id>.md, and a merged view is assembled for the system prompt. Clean separation, but needs a merge strategy for contradictory entries.
Compare-and-swap — re-read the file before writing, retry if it changed since the last read. Works well for low-contention scenarios (which memory writes are).
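As a reference point, here is a minimal sketch of the compare-and-swap idea in Python. It is not how Claude Code persists memory, and it narrows the race window rather than closing it entirely (only a real lock does that).

```python
import os
import tempfile

def cas_write(path, update_fn, max_retries=5):
    """Apply update_fn to the file's contents, retrying if another writer
    changed the file since we read it. Illustrative only."""
    for _ in range(max_retries):
        with open(path, encoding="utf-8") as f:
            before = f.read()
        updated = update_fn(before)

        # Check that nobody wrote in the meantime; if they did, start over
        # from their version instead of clobbering it.
        with open(path, encoding="utf-8") as f:
            if f.read() != before:
                continue

        # Write via temp file + atomic rename so readers never see a torn file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(updated)
        os.replace(tmp, path)
        return True
    return False

# Usage (assumes MEMORY.md already exists in the current directory):
# cas_write("MEMORY.md", lambda text: text + "- Prefer ripgrep over grep in CI\n")
```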
For now, if you're using agent teams or background agents, be aware that concurrent memory writes can lose data. You can mitigate by reviewing MEMORY.md after parallel sessions to catch any gaps.