2026-01-29 09:22:12
JSONL is a neat and kind of weird data format. It is well-known to be useful for logs and API calls, among other things. And your favorite AI assistant API is one place you'll probably find it.
CsvPath Framework supports validating JSONL. (In fact, it supports JSONL for the whole data preboarding lifecycle, but that's a longer story).
And now CsvPath Validation Language supports JSONPath expressions. Since AI prompts are only kinda sorta JSONL, having JSONPath to dig into them is helpful.
What I mean by kinda-sorta is that your basic prompt sequence is a series of JSON lines, but the lines are all one column wide and arbitrarily deep. That sounds more like a series of JSON "documents" than like a single JSONL stream. Or, anyway, that's my take.
Let's look at how to use JSONPath to inspect a JSONL file using CsvPath Validation Language in FlightPath Data. For those of you who don't already know, FlightPath Data is the dev and ops frontend to CsvPath Framework. It is a free and open source download from the Mac App Store or the Microsoft Store.
The file is a common example prompt. We'll start by looking at one line.
Here's the last line:
{
"messages": [
{
"role": "system",
"content": "You are a happy assistant that puts a positive spin on everything."
},
{
"role": "user",
"content": "I'm hungry."
},
{
"role": "assistant",
"content": "Eat a banana!"
}
]
}
From CsvPath Framework's perspective this is a one-header document. The one header is messages. If you open this in the grid view you see only the one header. (i.e. one column; but with delimited files we try to stick to the word "header" because with "column" your RDBMS-soaked brain starts to make incorrect assumptions).
Here's what it looks like:
That's not super fun. The reason is that the messages header is arbitrarily deeply nested, unlike the typical JSONL log line. Nevertheless, that's what we have. Will it blend? I mean, validate? Yes. JSONPath to the rescue. That said, I'll pause to admit that I'm not a JSONPath expert.
Right-click in the project files tree on the left of FlightPath and create a new .csvpath file, e.g. messages.csvpath. Drop this simple example in it.
$[*][
push("roles", jsonpath(#messages, "$[?(@.role == 'assistant')].content") )
last() -> print("See the variables tab for results")
]
You can see the jsonpath() function. It is acting on the messages header, as we'd want. We're pushing the data pulled by the JSONPath expression into a stack variable named roles.
A stack variable is like a Python list or tuple. You create a variable by using it. roles is part of the set of zero or more variables that are available throughout the csvpath statement run. They are captured to the Variables tab for a one-off FlightPath Data test run. For a production run they end up in the vars.json file in the run results.
Put your cursor in the csvpath statement and press Cmd-R (or Ctrl-R on Windows). The output tabs should open at the bottom-middle of the screen, below your csvpath file. Click over to the Variables tab and have a look.
Our JSONPath:
$[?(@.role == 'assistant')].content
picked out the objects in the messages array where role equaled assistant. And from those objects it extracted and returned the value of the content key.
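If you want to see what that selection does outside the CsvPath runtime, here is a plain-Python sketch of the same filter using only the standard library (this is an illustration of what the JSONPath selects, not how CsvPath evaluates it):

```python
import json

# The last line of the example JSONL file, as a raw string.
line = '''{"messages": [
  {"role": "system", "content": "You are a happy assistant that puts a positive spin on everything."},
  {"role": "user", "content": "I'm hungry."},
  {"role": "assistant", "content": "Eat a banana!"}
]}'''

doc = json.loads(line)

# Plain-Python equivalent of $[?(@.role == 'assistant')].content,
# evaluated against the value of the "messages" key:
contents = [m["content"] for m in doc["messages"] if m["role"] == "assistant"]

print(contents)  # ['Eat a banana!']
```

The list comprehension mirrors the two JSONPath steps: the `?(@.role == 'assistant')` filter becomes the `if` clause, and `.content` becomes the dictionary lookup.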
Pretty simple stuff. Though, I have to admit it took me a few minutes to wrap my JSONPath-inexperienced head around the context for the JSONPath expression. I was thinking of the whole document, or the whole line, but that wasn't right.
It is obviously the JSON value assigned to the messages key, which is an array, in this case. Once I was operating from that correct context, the JSONPath became pretty straightforward. (Those of us with XPath scars need not be as afraid as we might be!)
The point here is two-part. First, we can deal with AI prompts or any other JSONL that is deeply nested. Hooray! The data may look odd, if you are comparing it to regular tabular data, but that's no reason to not validate.
Second, this example makes the point that we're doing JSONPath rules-based validation within our CsvPath context. How very Schematron-like, since Schematron does XPath validation within XSLT.
Maybe this sounds complicated, but really it's not. CsvPath Validation Language is great for all things line-oriented. In this case, there isn't much for it to do, except hand off to JSONPath, which is great at documents (a.k.a. objects). Simple enough.
If we wanted to create a bunch of JSONPath rules to validate our AI prompt JSONL, we could do that. As a quick, throwaway second rule, try this:
$[*][
push("roles", jsonpath(#messages, "$[?(@.role == 'assistant')].content") )
@stmts = jsonpath(#0, "$.`len`")
print("Line $.csvpath.line_number: $.variables.stmts")
@stmts == 3
last.nocontrib() -> print("See the variables tab for results")
]
That new rule will net you 2 lines, which are either valid or failed, depending on how you want to use your csvpath statement. You will see them in the Matches tab.
At the same time the expanded csvpath statement will continue to pull in the same data to the variables tab that we got with the first version of the csvpath.
To clean it up just a little, you can do:
$[*][
push("roles", jsonpath(#messages, "$[?(@.role == 'assistant')].content") )
jsonpath(#0, "$.`len`") == 3
]
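For readers who'd rather see the logic in plain code, here is a rough Python stand-in for what the two rules check per line. This is an illustration, not the CsvPath runtime, and the prompt line is made up for the example:

```python
import json

roles = []  # stands in for the "roles" stack variable

def validate_line(line: str) -> bool:
    """Rough stand-in for the two rules: collect assistant
    contents and require exactly three message objects."""
    messages = json.loads(line)["messages"]
    # Rule 1: push the assistant contents (a side effect, like push()).
    roles.append([m["content"] for m in messages if m["role"] == "assistant"])
    # Rule 2: the messages array must have length 3 (the `len` check).
    return len(messages) == 3

line = ('{"messages": ['
        '{"role": "system", "content": "Be nice."}, '
        '{"role": "user", "content": "Hi"}, '
        '{"role": "assistant", "content": "Hello!"}]}')
ok = validate_line(line)
print(ok, roles)  # True [['Hello!']]
```

As in the csvpath, rule 1 is a side effect that always contributes data, while rule 2 is the actual match condition.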
There you go, a valid 2-rule validation statement using JSONPath on nested JSON in a JSONL document. Useful? Totally! Give it a try.
2026-01-29 09:14:40
Detailed routing is one of the slowest stages in an ASIC physical design flow. Even with TritonRoute’s multithreading support, routing large designs still becomes a bottleneck as complexity grows.
At WIOWIZ, we explored a practical question:
Can OpenROAD detailed routing be parallelized at the process level—without modifying TritonRoute or OpenROAD itself?
The answer is yes, with careful region partitioning and orchestration.
Why Multithreading Isn’t Enough
TritonRoute uses multithreading inside a single routing process. While helpful, a single process can only take you so far. True scalability requires multiple OpenROAD processes running in parallel, each routing a portion of the design.
Why Routing Is Hard to Parallelize
Routing is inherently global, which makes it hard to split cleanly. Naive region splits fail because wall-time is dominated by the most complex region, eliminating parallel gains.
Our Approach: OpenROAD as a Black Box
Instead of modifying TritonRoute, we built an external orchestration flow around it. Each OpenROAD process remains internally multithreaded, enabling process-level plus thread-level parallelism.
The Key Insight: Balance by Complexity
Equal-area partitioning does not work. Routing difficulty does not correlate with geometry.
We achieved real speedups only after moving to complexity-aware region partitioning, ensuring that routing effort—not area—was balanced across regions.
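To make the idea concrete, here is a minimal, hypothetical sketch of complexity-aware partitioning in Python. The region names and effort scores are invented (in practice they might come from pin or net density estimates); a greedy longest-processing-time pass balances routing effort, not area, across workers:

```python
import heapq

def balance(regions: dict[str, float], workers: int) -> list[list[str]]:
    """Assign regions to workers so total estimated routing effort
    is balanced. Greedy LPT: hardest regions first, each placed
    into the currently least-loaded worker."""
    heap = [(0.0, i) for i in range(workers)]  # (load, worker index)
    heapq.heapify(heap)
    bins: list[list[str]] = [[] for _ in range(workers)]
    for name, cost in sorted(regions.items(), key=lambda kv: -kv[1]):
        load, i = heapq.heappop(heap)
        bins[i].append(name)
        heapq.heappush(heap, (load + cost, i))
    return bins

# Hypothetical effort scores: r0 is much harder than its area suggests.
regions = {"r0": 9.0, "r1": 2.0, "r2": 3.0, "r3": 8.0, "r4": 2.0}
print(balance(regions, 2))
```

With equal-area splitting, the worker that draws the hardest region dominates wall-time; balancing by effort keeps the workers finishing at roughly the same time.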
Results
We evaluated the flow on a RISC-V core in SkyWater 130nm. All speedups were achieved without modifying OpenROAD or TritonRoute.
Why This Matters
This work demonstrates that meaningful routing speedups are possible with OpenROAD treated purely as a black box.
Full Technical Details
This dev.to post is a concise summary.
For full implementation details, partitioning strategy, iterations, and limitations, read the original article:
Parallel Region-Based Routing on OpenROAD
2026-01-29 09:07:11
Logic gates are the basic building blocks of any digital system. They take binary inputs (0 or 1) and provide a single output based on a specific rule.
The AND gate is like a strict boss. It only gives a "True" (1) output if all inputs are "True".
[Image of AND gate logic symbol and truth table]
The OR gate is more relaxed. It gives a "True" (1) output if at least one input is "True".
The NOT gate is also called an Inverter. It simply flips the input.
These gates are special because they are "inversions" of the basic gates.
The NAND gate is the opposite of the AND gate.
[Image of NAND gate logic symbol and truth table]
The NOR gate is the opposite of the OR gate.
Here are a few sample rows from the gates' truth tables:

| Gate | Input A | Input B | Output |
|---|---|---|---|
| AND | 1 | 1 | 1 |
| OR | 1 | 0 | 1 |
| NAND | 1 | 1 | 0 |
| NOR | 0 | 0 | 1 |
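The five gates above are easy to model in a few lines of Python; the assertions mirror the sample rows in the table:

```python
# Each gate as a tiny function over ints 0/1, matching the rules above.
def AND(a, b):  return a & b            # 1 only if all inputs are 1
def OR(a, b):   return a | b            # 1 if at least one input is 1
def NOT(a):     return 1 - a            # flips the input
def NAND(a, b): return NOT(AND(a, b))   # inversion of AND
def NOR(a, b):  return NOT(OR(a, b))    # inversion of OR

# Reproduce the sample rows from the table:
assert AND(1, 1) == 1
assert OR(1, 0) == 1
assert NAND(1, 1) == 0
assert NOR(0, 0) == 1
```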
2026-01-29 09:02:10
Contemporary geopolitics runs on silicon, which has become a strategic resource surpassing the former role of coal or oil. Chips, each packing billions of transistors, determine not only economic growth but, above all, the military capabilities and technological sovereignty of states. This article explores the unique roles of Taiwan's TSMC and the Netherlands' ASML, whose dominance in advanced chip manufacturing and EUV lithography, respectively, creates bottlenecks in the global supply chain. Through the lens of weaponized interdependence and the concept of the entrepreneurial state, the text analyzes how control over computing power is becoming a tool of political pressure. It is a comprehensive look at the production architecture, risk logistics, and barriers to entry that define the new political economy of the 21st century, in which national security is inextricably linked to innovation in the semiconductor sector.
2026-01-29 08:53:58
First, you might be wondering what a VCS (Version Control System) is. Let's say you are writing code: you add a feature, fix a bug, and then a problem occurs after those changes. How will you go back? Imagine if code had checkpoints like games do. Wouldn't that be great? That's essentially what a VCS does. In this blog you will get a detailed overview of VCS.
In the past, when people coded, they often faced problems tracking their code and getting help from others to improve it. For example, if I was writing code for a product and needed to hire an intern to add features, I had to send the code files through mail, a shared drive, or a pendrive, with folders like "final", "final-v2", etc. This process was very messy.
When working in a team, there were problems such as:
Overwriting Code: If I'm trying to fix a bug and can't, I'll share the file with someone else as bug_fix1. Another person will then figure out the bug and try to solve it, but while fixing it they might rewrite lines I had already written.
Losing Changes: While fixing a bug, if another bug pops up, you don't have an option to go back.
No Collaboration History: Who made a specific change to the code is unknown.
Then comes the VCS (Version Control System): a code-tracking system that keeps the history of all the changes made to the code by everyone. This also enables better collaboration.
Before VCS, with no version control and no tracking of code, people shared code using pendrives and had to wrangle folders like final_v2. Say I give my code to a friend on a pendrive; he works on it and hands me back a folder like final_final_v2.
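To make the idea concrete, here is a toy sketch (all names and content invented) of the core trick behind a VCS: each commit records a snapshot of the files plus the author and message, so history and blame are never lost. Real systems like Git store content-addressed objects far more efficiently, but the principle is the same:

```python
import hashlib
import json
import time

history = []  # the shared record of all changes

def commit(files: dict, author: str, message: str) -> str:
    """Store a snapshot of the files with author, message, and time."""
    snapshot = json.dumps(files, sort_keys=True)
    commit_id = hashlib.sha1(snapshot.encode()).hexdigest()[:8]
    history.append({"id": commit_id, "author": author,
                    "message": message, "files": files.copy(),
                    "time": time.time()})
    return commit_id

commit({"app.py": "print('v1')"}, "alice", "initial version")
commit({"app.py": "print('v2')"}, "bob", "fix bug")

# "Who made a specific change" is no longer unknown:
print([(c["author"], c["message"]) for c in history])
```

Because every version is kept, "going back" is just reading an earlier entry in `history`, and the author field settles the blame game.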
With no shared tracking system, there were many problems, such as:
"Who has the latest file?": In a collaboration, many people have their own version of the code, with no single source of truth.
The "I fixed it yesterday" problem: Without version history, nobody can prove anything. "This bug wasn't there yesterday." "You broke it." The blame game becomes strong.

The pendrive workflow might work for two people, but with tens of devs or more it becomes pure chaos that is very hard to control. So VCS not only solved the change-tracking problem but also made collaboration much better. That is why, as the single source of truth, a VCS is so handy.
2026-01-29 08:46:32
I recently started working on a logistics side project that required real-time geofencing—specifically, detecting when assets enter or exit defined polygon zones.
I looked at the market leaders (Radar, Google Maps Geofencing API, etc.), and while they are excellent, the pricing models usually charge per tracked user or per API call. For a bootstrapped project where I might have thousands of "pings" but zero revenue initially, paying for every spatial check wasn't viable.
So, I decided to engineer my own solution.
Here is a breakdown of how I built a serverless, event-driven Geo-fencing Engine using Go, PostGIS, and Cloud Run.
I chose Google Cloud Platform (GCP) for the infrastructure, managed via Terraform.
I wrote the ingestion service in Go 1.22. For this workload, Go was the obvious choice.
Instead of doing "Point-in-Polygon" math in the application layer (which is CPU intensive and complex to handle for complex polygons/multipolygons), I offload this to the database.
The core logic effectively boils down to efficient spatial indexing using GiST indexes and queries like:
SELECT zone_id
FROM geofences
WHERE ST_Intersects(geofence_geometry, ST_SetSRID(ST_MakePoint($1, $2), 4326));
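Conceptually, what `ST_Intersects` does for a point and a polygon is a point-in-polygon test. Here is a plain-Python ray-casting sketch of that test, just to show the geometry involved; PostGIS uses far more robust, GiST-index-accelerated algorithms, which is exactly why the work is offloaded to the database:

```python
def point_in_polygon(x: float, y: float, polygon: list) -> bool:
    """Ray casting: shoot a horizontal ray to the right from (x, y)
    and count how many polygon edges it crosses. Odd count = inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]  # a square geofence
print(point_in_polygon(2.0, 2.0, zone))  # True
print(point_in_polygon(5.0, 2.0, zone))  # False
```

Doing this per ping, per zone, in the application layer is exactly the CPU-intensive path the PostGIS query avoids: the GiST index prunes candidate zones before any exact geometry test runs.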
Building the backend was only half the battle. The friction usually lies in the mobile app integration—handling location permissions, battery-efficient tracking, and buffering offline requests.
To solve this, I built (and open-sourced) client SDKs. For example, the Flutter SDK handles the ingestion stream and retries, acting as a clean interface to the engine.
Trade-offs & Decisions
Why not Redis (Geo)? Redis has geospatial capabilities (GEOADD, GEORADIUS), but it is primarily optimized for "radius" (point + distance) queries. My use case required strict Polygon geofencing (complex shapes). While Redis 6.2+ added some shape support, PostGIS remains the gold standard for robust topological operations.
Why Serverless? The traffic pattern for logistics is spiky. It peaks during business hours and drops to near zero at night. Cloud Run allows me to pay strictly for the CPU time used during ingestion, rather than provisioning a fixed server.
Open Source?
While the core backend engine runs internally for my project (to keep the infrastructure managed), I realized the Client SDKs are valuable on their own as a reference for structuring location ingestion.
I’ve open-sourced the SDKs to share how the protocol works.
I'm currently optimizing the "State Drift" issue in Terraform and looking into moving the event bus to Pub/Sub for better decoupling.
I’d love to hear feedback on the architecture—specifically if anyone has experience scaling PostGIS for high-write workloads!