2026-03-08 07:00:00
HyTRec splits long histories: linear attention learns stable taste, softmax captures recent intent—fast, accurate recs even at 10k interactions.
2026-03-08 03:01:05
Like the rest of the Rust community, crates.io has been growing rapidly, with download and package counts increasing 2-3x year-on-year. This growth doesn't come without problems, and we have made some changes to download handling on crates.io to ensure we can keep providing crates for a long time to come.
This growth has brought with it some challenges. The most significant of these is that all download requests currently go through the crates.io API, occasionally causing scaling issues. If the API is down or slow, it affects all download requests too. In fact, the number one cause of waking up our crates.io on-call team is "slow downloads" due to the API having performance issues.
Additionally, this setup is also problematic for users outside of North America, where download requests are slow due to the distance to the crates.io API servers.
To address these issues, over the last year we have decided to make some changes:
Starting from 2024-03-12, cargo will begin to download crates directly from our static.crates.io CDN servers.
This change will be facilitated by modifying the config.json file on the package index. In other words: no changes to cargo or your own system are needed for the changes to take effect. The config.json file is used by cargo to determine the download URLs for crates, and we will update it to point directly to the CDN servers, instead of the crates.io API.
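For illustration, the relevant part of config.json looks roughly like the sketch below, with the dl endpoint pointing at the CDN instead of the API. The exact values are maintained by the crates.io team and may differ; this is only a hedged example of the shape of the file.

```json
{
  "dl": "https://static.crates.io/crates",
  "api": "https://crates.io"
}
```

cargo expands the dl template with the crate name and version to form the final download URL, so swapping this one value redirects all downloads.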
Over the past few months, we have made several changes to the crates.io backend to enable this:
We announced the deprecation of "non-canonical" downloads, which would be harder to support when downloading directly from the CDN.
We changed how downloads are counted. Previously, downloads were counted directly on the crates.io API servers. Now, we analyze the log files from the CDN servers to count the download requests.
The latter change has caused the download numbers of most crates to increase, as some download requests were not counted before. Specifically, crates.io mirrors were often downloading directly from the CDN servers already, and those downloads had previously not been counted. For crates with a lot of downloads these changes will be barely noticeable, but for smaller crates, the download numbers have increased quite a bit over the past few weeks since we enabled this change.
We expect these changes to significantly improve the reliability and speed of downloads, as the performance of the crates.io API servers will no longer affect the download requests. Over the next few weeks, we will monitor the performance of the system to ensure that the changes have the expected effects.
We have noticed that some non-cargo build systems are not using the config.json file of the index to build the download URLs. We will reach out to the maintainers of those build systems to ensure that they are aware of the change and to help them update their systems to use the new download URLs. The old download URLs will continue to work, but these systems will be missing out on the potential performance improvement.
We are excited about these changes and believe they will greatly improve the reliability of crates.io. We look forward to hearing your feedback!
Tobias Bieniek on behalf of the crates.io team
2026-03-08 03:00:44
There’s a question you’ve asked yourself a thousand times.
Why is it that you know exactly what you should be doing… and you still don’t do it?
You think it’s procrastination.
Or discipline.
Or laziness.
The classic explanations we reach for when nothing else makes sense.
Here’s the deal, my friend. You don’t have a thinking problem. You have a nervous system problem.
Your mind decides what it wants.
Your body decides what you’re allowed to become.
And your body?
It is loyal to one thing only:
The old you. The familiar you.
The predictable you.
Because predictable means safe.
And safe means survival.
Everything else (the future you, the ambitious you, the version of you who actually does the thing you promised yourself) that part is unfamiliar. Which means your nervous system rejects it before you even get one step in the door.
I call this identity dissonance. This is when your mind is negotiating with a past you that refuses to move out.
Your brain is not sitting there trying to sabotage you. It’s trying to keep you alive.
Even when “alive” means stuck.
Even when “alive” means miserable.
Even when “alive” means repeating the same patterns every year, in every relationship, in every attempt to reinvent yourself.
Your nervous system is obsessed with one thing: predictive safety.
If it can predict how your day goes, it feels secure.
If it can predict what version of you wakes up in the morning, it relaxes.
If it can predict your bad habits, your avoidance patterns, your self-sabotage routines…
…it actually feels safer than when you try to change.
Read that again.
The part of you that wants to grow is competing with the part of you that wants to stay alive. And your biology always chooses survival over evolution.
Your Reticular Activating System (RAS) decides what enters your awareness.
It filters the world for one purpose: confirm the identity it already believes you are.
If you believe you’re someone who hesitates?
Your RAS spotlights hesitation.
If you believe you’re someone who can’t follow through?
Your RAS highlights every example that reinforces that story.
Identity isn’t just who you think you are.
It’s what your biology is trained to notice.
Changing your life requires you to change what your brain pays attention to.
And that doesn’t happen with motivation.
Or productivity hacks.
Or waiting for the “right time.”
It happens when you stop negotiating with the old identity…
and start rewiring the new one.
The biggest lie in self-growth is that you need motivation to change.
Wrong.
You need internal permission.
Permission to become someone your past self doesn’t recognize.
Permission to break the habits that kept you safe but stuck.
Permission to outgrow the identity your nervous system has memorized.
This is where the real work begins.
Not with action.
Not with discipline.
But with identity alignment.
Your biology wants to know:
Who am I now?
Who am I becoming?
And is it safe to let go of who I used to be?
When you answer those questions, everything else (habits, discipline, consistency) stops feeling like a fight. Because now your body and your mind want the same thing.
This is why you can know things intellectually, and still not be able to change. This is why I was still struggling with perfectionism even after having written The Manuscript.
This is why the smartest people can get stuck for years, having read all the books, watched the podcasts, bought the courses.
Identity isn’t a “decision.”
It’s not a mindset.
It’s not even a conscious belief.
Identity is a prediction system.
Your brain constantly asks one question: “Who do I need to be to stay safe?”
And it answers that question using three systems working together:
Your brain isn’t reacting to the world. It’s predicting it. It uses your past behaviors, your old fears, your familiar patterns, and says: “Okay, this is who we’ve been. This version survived. Stick to that.”
That becomes your baseline identity, because it’s predictable.
Predictability = survival.
This is why change feels threatening even when it’s good for you.
When your mind wanders, when you self-reflect, when you imagine the future, your DMN lights up. This network builds the story of “you.” It stitches your past, your fears, your expectations, your self-image… and turns it into a narrative your brain trusts.
If that story says:
“I’m someone who hesitates.”
or
“I’m someone who starts strong and falls off.”
or
“I’m someone who overthinks everything…”
Then your brain defends that story as if it were a survival strategy.
And in a way, it is.
Because the DMN is not optimized for growth. It’s optimized for coherence, even if the coherent story is keeping you small. (I know, this probably hurts to read.)
Put all three together, and here’s the real truth:
Your identity is not chosen.
It’s reinforced.
Predicted.
Repeated.
Protected.
And your brain is protecting the only “you” it currently understands.
Being smart… and still stuck.
Knowing exactly what’s wrong with you and still waking up as the same person every day. That’s the real hell.
You think you’re growing because you understand more. But you’re not changing. You’re just becoming an expert in why you’re the same.
And if that hurts, good.
It should.
Because 2024 didn’t change you.
2025 didn’t change you.
And nothing about you right now suggests 2026 will be any different.
Not unless something breaks.
Not unless something snaps inside you.
Not unless you reach that moment where you finally admit:
“I can’t think my way out of who I am.”
Because you can’t.
If you could, you would have done it already.
And deep down, you know why.
You’re still negotiating with the identity that’s been running your life for years. And that version of you? It has no intention of dying quietly.
If you don’t interrupt it, it will repeat itself in 2026.
And 2027.
And every year after that until you finally break.
That’s the cost of staying the same.
2026-03-08 02:30:36
Visual regression testing is one of those problems that sounds simple until you actually scale it. Take a screenshot before your change, take one after, and diff them. If they don't match, something broke. Every CI pipeline doing visual testing runs this comparison thousands of times per day.
The JavaScript ecosystem has solid tools for this. pixelmatch is the standard — small, fast enough for most cases, zero dependencies. It's a well-written library. But “fast enough” starts to crack when your screenshots are 2K/4K resolution, when you're running batch comparisons across hundreds of pages, and when every millisecond in CI costs a penny.

I wanted to build something more efficient.
My first idea was to divide the image into blocks and skip unchanged regions entirely. Most UI screenshots between runs are 95%+ identical. Why scan 18 million pixels for a button color change?
At first glance, this sounds impossible. You can't know whether a block changed without touching its pixels.

But most image diff algorithms already use a cheap byte-level check before running expensive perceptual comparisons. The insight was to split those stages into two separate passes. In the cold pass, touch every pixel but only do the cheap check, marking blocks as changed or unchanged. In the hot pass, run the expensive work only on flagged blocks.
You still scan every pixel once. But the expensive work only runs on the 1-5% of blocks that actually differ.
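As a minimal sketch of the idea (the names coldPass, hotPass, and Block are mine, not BlazeDiff's; images are Uint32Array RGBA buffers): the cold pass does only a cheap 32-bit equality check per pixel and flags blocks, and the hot pass runs the expensive perceptual comparison only inside flagged blocks.

```typescript
type Block = { x0: number; y0: number; x1: number; y1: number };

// Cold pass: one cheap 32-bit equality check per pixel; flag blocks that differ.
function coldPass(
  a: Uint32Array, b: Uint32Array,
  width: number, height: number, blockSize: number,
): Block[] {
  const changed: Block[] = [];
  for (let y0 = 0; y0 < height; y0 += blockSize) {
    for (let x0 = 0; x0 < width; x0 += blockSize) {
      const x1 = Math.min(x0 + blockSize, width);
      const y1 = Math.min(y0 + blockSize, height);
      let differs = false;
      scan: for (let y = y0; y < y1; y++) {
        for (let x = x0; x < x1; x++) {
          if (a[y * width + x] !== b[y * width + x]) { differs = true; break scan; }
        }
      }
      if (differs) changed.push({ x0, y0, x1, y1 });
    }
  }
  return changed;
}

// Hot pass: run the expensive perceptual comparison only on flagged blocks.
function hotPass(
  a: Uint32Array, b: Uint32Array, width: number,
  blocks: Block[],
  perceptualDelta: (pa: number, pb: number) => number,
  maxDelta: number,
): number {
  let diffCount = 0;
  for (const { x0, y0, x1, y1 } of blocks) {
    for (let y = y0; y < y1; y++) {
      for (let x = x0; x < x1; x++) {
        const i = y * width + x;
        if (a[i] !== b[i] && perceptualDelta(a[i], b[i]) > maxDelta) diffCount++;
      }
    }
  }
  return diffCount;
}
```

On identical images the cold pass flags nothing, so the hot pass never runs; that is where the large wins on identical screenshots come from.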
That is the two-pass architecture: a cold pass that flags changed blocks, then a hot pass that compares only those.

The block size scales with image dimensions:
export function calculateOptimalBlockSize(
  width: number,
  height: number,
): number {
  const area = width * height;
  const scale = Math.sqrt(area) / 100;
  const rawSize = 16 * Math.sqrt(scale);
  // Round to the nearest power of two using bit operations
  const log2Val = Math.log(rawSize) * Math.LOG2E;
  return 1 << Math.round(log2Val); // Bit shift instead of Math.pow(2, x)
}
Small images get 8×8 blocks. A 4K screenshot gets 128×128 blocks. The formula ensures blocks are always powers of two (cache-line friendly) and that the granularity scales with image area.
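To make the scaling concrete, here is the same formula as a standalone sketch (using Math.LOG2E for the log2 conversion; blockSize is my name for it) evaluated at a few resolutions:

```typescript
// Same block-size formula, reproduced standalone for illustration.
function blockSize(width: number, height: number): number {
  const area = width * height;
  const scale = Math.sqrt(area) / 100;
  const rawSize = 16 * Math.sqrt(scale);
  // Round to the nearest power of two: 2^round(log2(rawSize))
  return 1 << Math.round(Math.log(rawSize) * Math.LOG2E);
}

console.log(blockSize(100, 100));   // 16
console.log(blockSize(1920, 1080)); // 64
console.log(blockSize(5600, 3200)); // 128
```

So a 5600×3200 screenshot is carved into 128×128 blocks, roughly 44×25 of them, which keeps the changed-block list small.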
The cold pass is where most of the speedup comes from. For a 5600×3200 image with a small UI change, you might have 2,000 blocks total, but only 15 contain differences. The hot pass, which performs the expensive calculations, covers less than 1% of the image. The remaining 99% gets a fast grayscale fill and moves on. When nothing changed, the cold pass rejects every block immediately.
The first version of BlazeDiff was pure TypeScript. The block-based design gave significant speedups over pixelmatch's linear scan, but JavaScript itself became the next bottleneck.
I went through several optimization passes:

- 32-bit pixel reads: the image data arrives as Uint8Array/Uint8ClampedArray, but the YIQ color comparison needs to work on 32-bit RGBA pixels. Casting to a Uint32Array view lets you read a full pixel in one operation instead of four byte reads.
- Allocation discipline: preallocating the changedBlocks array, avoiding intermediate buffers, and reusing output buffers across comparisons.
- A fast identical-image check before any per-pixel work:

Buffer.compare(
  Buffer.from(image1.data.buffer),
  Buffer.from(image2.data.buffer)
);
The JavaScript implementation ended up 1.5 times faster than pixelmatch on average and up to 88% faster on identical images. The numbers from benchmarks on an M1 Max:

| Benchmark | pixelmatch | BlazeDiff | Improvement |
|----|----|----|----|
| 4K image (different) | 302.29ms | 211.92ms | 29.9% |
| 4K image (identical) | 19.18ms | 2.39ms | 87.5% |
| Full page (different) | 331.94ms | 92.77ms | 72.1% |
| Full page (identical) | 63.18ms | 7.68ms | 87.8% |
The block-based optimization was clearly working. But 211ms for a single 4K diff is still slow when you're running hundreds of them. And at this point, the algorithm itself was optimized; the JavaScript runtime overhead was the remaining bottleneck.
While benchmarking, I discovered odiff — an image diff tool written in Zig with OCaml bindings. It was consistently faster than my JavaScript implementation.
This wasn't surprising. Native code has fundamental advantages: no GC pauses, no JIT warmup, real SIMD intrinsics, and full control over memory layout.

The reality is clear: once the algorithm is optimized, the language and runtime become the limiting factor. You can't out-optimize JavaScript's fundamental overhead.
The decision to port to Rust came down to a few factors:
The Rust port follows the same two-pass architecture. Here's the core of the cold pass (the part that identifies which blocks have changed):
// Cold pass: identify changed blocks
for by in 0..blocks_y {
    for bx in 0..blocks_x {
        let start_x = bx * block_size;
        let start_y = by * block_size;
        let end_x = (start_x + block_size).min(width);
        let end_y = (start_y + block_size).min(height);

        let has_diff = block_has_perceptual_diff(
            a32, b32, width, start_x, start_y, end_x, end_y, max_delta,
        );

        if has_diff {
            changed_blocks.push((start_x, start_y, end_x, end_y));
        } else if let Some(ref mut out) = output {
            if !options.diff_mask {
                fill_block_gray_optimized(
                    image1, out, options.alpha, start_x, start_y, end_x, end_y,
                );
            }
        }
    }
}
The SIMD acceleration is where things get interesting. The YIQ color delta computation (conversion of RGB differences into perceptual differences) is the hottest inner loop. Here's the core of the NEON implementation for ARM (processing 4 pixels simultaneously):
// Loads four pixels at once and extracts RGB channels using vector
// instructions before computing the YIQ perceptual delta.
#[cfg(target_arch = "aarch64")]
unsafe fn yiq_delta_4_neon(pixels_a: *const u32, pixels_b: *const u32) {
    let pa = vld1q_u32(pixels_a); // Load 4 RGBA pixels
    let pb = vld1q_u32(pixels_b);

    // Extract channels
    let r_a = vandq_u32(pa, mask_ff);
    let g_a = vandq_u32(vshrq_n_u32::<8>(pa), mask_ff);
    let b_a = vandq_u32(vshrq_n_u32::<16>(pa), mask_ff);

    // Convert to float for YIQ transform
    let r_a_f = vcvtq_f32_u32(r_a);

    // Alpha blending with FMA
    let br_a = vfmaq_f32(v255, vsubq_f32(r_a_f, v255), alpha_norm_a);

    // YIQ: y²×0.5053 + i²×0.299 + q²×0.1957
}
On x86_64, the same logic runs as SSE4.1 (4 pixels) or AVX2+FMA (8 pixels), selected at runtime.

Feature detection happens once per diff, and the fastest available SIMD path is used for the entire operation. No runtime dispatching per-pixel.
The initial Rust port matched odiff-level performance. Same algorithm, different language, and suddenly the runtime overhead was gone.
Matching odiff was the baseline. Several rounds of optimization pushed BlazeDiff well past it.
Two more wins came from I/O:

- Zero-copy pixel access: the decoded RGBA buffer is reinterpreted as &[u32] via a zero-copy cast. No allocation, no copy.
- Parallel decoding: when the two inputs share a format they go through a fused loader; otherwise both decode in parallel with rayon:

fn load_images<P1: AsRef<Path>, P2: AsRef<Path>>(
    path1: P1,
    path2: P2,
) -> Result<(Image, Image), DiffError> {
    if fmt1 == fmt2 {
        return match fmt1 {
            ImageFormat::Png => load_pngs(&path1, &path2),
            ImageFormat::Jpeg => load_jpegs(&path1, &path2),
        };
    }
    let results: Vec<_> = [
        (path1.as_ref().to_path_buf(), fmt1),
        (path2.as_ref().to_path_buf(), fmt2),
    ]
    .par_iter()
    .map(|(path, fmt)| match fmt {
        ImageFormat::Png => load_png(path),
        ImageFormat::Jpeg => load_jpeg(path),
    })
    .collect();
    let mut iter = results.into_iter();
    Ok((iter.next().unwrap()?, iter.next().unwrap()?))
}
The release profile squeezes out the rest:

[profile.release]
opt-level = 3
lto = "fat"
codegen-units = 1
panic = "abort"
strip = true
The results against odiff (including full image I/O: decode, diff, encode):

| Benchmark | odiff | BlazeDiff | Improvement |
|----|----|----|----|
| 4K/1 (5600×3200) | 1190.92ms | 293.86ms | 75.3% |
| 4K/2 | 1530.21ms | 363.50ms | 76.2% |
| 4K/3 | 1835.47ms | 389.67ms | 78.8% |
| Full page/1 | 1035.20ms | 472.99ms | 54.3% |
| Full page/2 | 598.79ms | 263.90ms | 55.9% |
3-4× faster on 4K images. The combination of block-based early exit, SIMD vectorization, parallel I/O, and aggressive compiler optimization adds up.
Having a fast Rust binary is great. But the target users are JavaScript developers writing visual tests with Playwright, Cypress, or Vitest. They need npm install, not cargo install.
The first approach was the obvious one:
Node.js → spawn binary → run diff → return result
Wrap the Rust binary in a Node.js package, spawn it as a child process, and parse the JSON output:
try {
  await execFileAsync(binaryPath, args);
  return { match: true };
} catch (err) {
  // Parse exit codes: 0=identical, 1=pixel-diff, 2=error
}
This works. It's simple, portable, and the binary handles all the heavy lifting. But in a CI pipeline diffing 500 screenshots, you're spawning 500 processes. Process creation isn't free. Fork, exec, memory mapping, shared library loading. The overhead per spawn is small, but it adds up. Benchmarks showed it was eating 30-40% of the total time on small images where the actual diff takes <5ms.
odiff solves this problem by running a persistent server process. You start it once, send diff requests over a socket, and it handles them without the spawn overhead.
I considered this approach and rejected it. The trade-offs didn't fit the use case: a persistent server means managing process lifecycles, coordinating sockets, and one more thing to babysit in CI. A visual testing library's compare(a, b) should just work, without requiring the user to manage server instances.

The design goal was clear: native performance with the simplicity of a function call.
The answer came from studying how other Rust-powered JavaScript tools handle this problem. Biome (the linter/formatter) uses a particularly clean architecture:
JavaScript API → N-API binding (.node file) → Rust core library
The key insight is N-API — Node.js's stable ABI for native addons. Instead of spawning a child process, the Rust code compiles to a .node shared library that loads directly into the Node.js process. Function calls from JavaScript to Rust become direct function pointer calls. No serialization, no IPC, no process overhead.
The packaging strategy is equally important. Platform-specific binaries are published as separate npm packages with os and cpu fields:
{
  "name": "@blazediff/bin-darwin-arm64",
  "os": ["darwin"],
  "cpu": ["arm64"],
  "files": ["blazediff", "blazediff.node"]
}
The main package lists these as optionalDependencies:
{
  "name": "@blazediff/bin",
  "optionalDependencies": {
    "@blazediff/bin-darwin-arm64": "3.5.0",
    "@blazediff/bin-darwin-x64": "3.5.0",
    "@blazediff/bin-linux-arm64": "3.5.0",
    "@blazediff/bin-linux-x64": "3.5.0",
    "@blazediff/bin-win32-arm64": "3.5.0",
    "@blazediff/bin-win32-x64": "3.5.0"
  }
}
npm/pnpm/bun only installs the package matching the current platform. No postinstall scripts. No compilation. Just a prebuilt binary.
BlazeDiff's final architecture uses a tiered approach. The N-API binding is the fast path. The spawned binary is the fallback.
JavaScript API
↓
Try N-API binding (in-process, ~0 overhead)
↓ (if unavailable)
Fall back to execFile (spawn binary)
↓
Rust diff engine
↓
SIMD + parallel I/O
The N-API binding is defined with napi-rs:
#[napi]
pub fn compare(
    base_path: String,
    compare_path: String,
    diff_output: Option<String>,
    options: Option<NapiDiffOptions>,
) -> Result<NapiDiffResult> {
    // Load images in parallel using rayon
    let (img1, img2) = load_images(&base_path, &compare_path)?;

    // Run the diff (output_image and diff_options are derived from
    // diff_output and options; the conversion is elided here)
    let result = diff(&img1, &img2, output_image.as_mut(), &diff_options)?;

    Ok(NapiDiffResult {
        match_result: result.identical,
        reason: if result.identical { None } else { Some("pixel-diff".into()) },
        diff_count: Some(result.diff_count),
        diff_percentage: Some(result.diff_percentage),
    })
}
On the JavaScript side, the binding loads lazily and caches the result:
let nativeBinding: NativeBinding | null = null;
let nativeBindingAttempted = false;

function tryLoadNativeBinding(): NativeBinding | null {
  if (nativeBindingAttempted) return nativeBinding;
  nativeBindingAttempted = true;
  const key = `${os.platform()}-${os.arch()}`;
  const platformInfo = PLATFORM_PACKAGES[key];
  if (!platformInfo) return null;
  try {
    const require = createRequire(import.meta.url);
    const binding = require(platformInfo.packageName) as NativeBinding;
    if (typeof binding?.compare === "function") {
      nativeBinding = binding;
      return binding;
    }
  } catch {
    // Native binding not available, will use execFile fallback
  }
  return null;
}
The Cargo configuration compiles the same crate as both a standalone binary and a cdylib (shared library for N-API):
[lib]
name = "blazediff"
crate-type = ["lib", "cdylib"]

[features]
default = []
napi = ["dep:napi", "dep:napi-derive"]
One codebase, two distribution targets: a CLI binary and a Node.js native module. Same SIMD optimizations, same block-based algorithm, same everything, just a different entry point.
The block-based design produced 72-88% speedups in pure JavaScript, before any Rust was written. Choosing the right algorithm matters more than choosing the right language. If I'd gone straight to Rust with a pixel-by-pixel approach, I'd have a fast pixel scanner instead of a fast image differ.
Once the algorithm was optimized, JavaScript's runtime overhead — GC pauses, JIT warmup, lack of SIMD, property lookup chains — became the dominant cost. For tight numerical loops over large buffers, no amount of micro-optimization can overcome the V8 overhead. This isn't a criticism of JavaScript. It's a recognition that different tools serve different purposes.
The jump from spawn(binary) to N-API was a larger improvement than some algorithmic changes. Architecture decisions — how you invoke code, how you transfer data, how you manage process lifecycles — compound with every call. When you're diffing 500 images, the difference between a function call and a process spawn is the difference between seconds and minutes.
N-API provides a stable ABI across Node.js versions, clean platform-specific packaging via optionalDependencies, and near-zero overhead for Rust-to-JS function calls. It's the same approach used by Biome, SWC, and other high-performance JavaScript tools. If you're building a performance-critical JavaScript library and JavaScript itself is the bottleneck, N-API + Rust is a proven path.
Every optimization was validated with benchmarks running on consistent hardware. The WASM route was explored and abandoned (I actually tried both Rust WASM and AssemblyScript — both were slower than native for this workload). The block size formula was tuned empirically across image sizes from 100×100 to 5600×3200. Performance intuition is often wrong. Measure.
JavaScript remains one of the best ecosystems for developer tooling: the package manager story, the testing framework integration, and the sheer breadth of the community. But sometimes the fastest path forward is to let native code do the heavy lifting while JavaScript remains the interface developers love.
BlazeDiff ships as a single npm install. Under the hood, it's 2,000 lines of SIMD-optimized Rust processing pixels through a two-pass block algorithm. From the developer's perspective, it's compare(a, b).
That's the goal: invisible performance. The user shouldn't have to know or care that Rust exists.
2026-03-08 02:00:58
Hi everyone,
Ross here—I’m an Investigative Data Journalist at The Markup. We publish a lot of words telling you what AI can (and often shouldn’t) do, but how can public policy keep AI in check?
Governments are ramping up their role in keeping AI algorithms accountable and limiting their harms. We covered President Joe Biden’s AI order back in October; last month, a new executive order required all federal agencies to designate a Chief AI Officer, and by December agencies must implement AI safeguards and publish transparency reports on federal AI deployments. Many individual states like New Jersey, Colorado, Massachusetts, and California—which The Markup will be looking at closely in the future as we join forces with CalMatters—are proposing and passing legislation regulating AI to combat discrimination.
The EU recently passed the world’s first comprehensive regulation on AI systems, and Canada’s AI safety bill has been working its way through Parliament since 2022. Canada also announced $2.4 billion CAD earlier this month towards public computing resources for AI models, while the U.S. recently launched its own pilot in January (projected to cost $2.6 billion to be fully operational).
To get more insight on upcoming AI policy, I reached out to my good friend Christelle Tessono, a graduate student at the University of Toronto’s Faculty of Information and a policy and research assistant at the Dais, a public policy and leadership think tank at Toronto Metropolitan University. Her work focuses on tackling the relationship between racial inequality and digital technology, with special attention to AI deployments.
Ross: What are the main things you’re working on?
Christelle: I’ve been focusing for the past couple of years on AI governance in Canada, discussed in public policy processes. I’ve also acquired expertise on facial recognition technology, gig work in the Canadian context, as well as looking at social media platform governance.
Ross: So I know that there’s a bunch of new legislation being proposed in Canada and the US as well, about regulating AI. And one of the big issues, something you mentioned, is how do you even define AI? So, to you, how should we define AI?
Christelle: I define AI as the set of computational tools used to process large amounts of data to identify patterns, make inferences, and generate assessments and recommendations.
Ross: Tell me what’s happening in AI legislation in the policy space in Canada and the U.S., or anywhere else in the world that you’re tracking now.
Christelle: I’ve been tracking what’s happening in the US, Canada, and Europe. But let’s talk about Canada, because I feel like people at the international level don’t really know what’s happening. Canada was the first country to develop a national AI strategy back in 2017, the Pan Canadian Artificial Intelligence Strategy, which in recent years has received over $125 million in funding to conduct research and drive AI adoption in the country. However, we have been very slow about developing enforceable regulation to address the harms and risks caused by AI systems.
Canada introduced the AI and Data Act (AIDA) back in June 2022 to regulate the use of AI systems in the private sector. But it never received public consultation, has an overly narrow definition for systems in scope, and doesn’t have prohibitions on certain types of AI systems like the EU AI Act.
Ross: Can you talk more about the difference between different AI legislations?
Christelle: The EU has a “risk based” framework, meaning that they’ve taken the time to outline the different types of systems that would fall under different types of risks, such as “high risk”, “limited risk”, “minimal risk”, and “no risk” in the law itself. Whereas here in Canada, the legislation states that this will apply to “high impact” systems, but it remains unclear whether the government will determine if a given product is considered “high impact” or if it is at the discretion of the developer. So in short, the Canadian proposed framework is an empty shell.
Ross: And do you know which framework the U.S. uses in its proposed law?
Christelle: The U.S. approach is very decentralized, with multiple initiatives across different agencies. Multiple bills have been introduced in the past, but none of them have gotten enough traction to be considered the singular approach that the U.S. will undertake at the federal level. At the state level there are a lot of initiatives, some have become law, such as the Artificial Intelligence Video Interview Act in Illinois that regulates AI in employment contexts. There are several that are also slowly making their way through legislative houses, such as algorithmic discrimination acts in Oklahoma (HB3835) and Washington (HB 1951), so I would say it’s a very decentralized approach. Then there is the AI Bill of Rights, which is a guiding document – so not enforceable.
Ross: What are some good properties of an AI system that you think systems that are deployed out in the world should have?
Christelle: When I think about a system, about an AI system, I don’t only think about the physical machinery, the data, the computing. I think about the context in which it is designed, developed and deployed.
First, an AI system should have a clear accountability framework. That is, do we know who’s responsible for what? And how can people sort of complain or alert authorities that there is a problem? If to me there’s no accountability, then the AI system is simply doomed to fail.
Then there’s transparency. As a researcher, I’m curious about learning how these products are not only developed, but the procurement process. Who decided to make the call for offers? How many people were provided with mock-ups of the product? What was the decision-making that led to this choice? Why are we using this specific product?
I [also] think about functionality: does the system work? Can it even achieve its intended goal? If there’s no match between the task and the capabilities of the system, then the system shouldn’t be operating at all. That is the case with many facial recognition systems used for categorizing people or even identifying their emotions. Facial recognition works in verification contexts… but… when you’re using it to try to categorize people and make predictions based on them… the functionality piece is not there.
I think a lot of people talk about fairness, like ensuring that the system is robust and not perpetuating bias. That to me is good as a property, but that’s not the first one I think about when it comes to the robustness of a system. As a human person, I cannot be 100% fair. So how can I impose that on a system? I think it’s better to figure out whether [a system] is able to complete the desired tasks we wanted to do.
Ross: You’ve talked about premature AI deployments. How can/should an agency decide whether a technology is ready to be deployed in the real world?
Christelle: First, there should be public consultations as to whether this is the right approach to dealing with a problem that the agency has identified. A lot of the issue right now is that we’re seeing technology being deployed without consultation, without regard for prior consultations on a variety of matters. Is this a real need, or are we using technology to [solve] a problem that doesn’t really need a technological intervention?
The second thing is functionality and ensuring that the system is robust. What are the metrics that the company is using in order to prove that their technology is up to task? What standardization bodies are they following? What types of regulations are they respecting? Like, has the company proven that they’re following standards that are followed everywhere else in the world?
The third piece is, again, accountability. How are we gonna responsibly use [this technology]? Are we making sure that we’re not firing people and using technology as a replacement for labor? Who’s gonna be supervising the technology?
Ross: On the topic of accountability, what does good accountability look like? How can the public actually raise concerns or fight back against tech that they think is being improperly deployed?
Christelle: The Canadian framework for AI doesn’t have a complaint mechanism. And that to me is like the first step with regards to accountability. For example, I’m a student, and [let’s say] there’s a problem with one of the assignments: I cannot upload it onto the website. I can send an email to the professor and say “Hey, I couldn’t submit the assignment because the online platform doesn’t work,” and that works. And if the professor doesn’t respond, then I can go to the dean or other student representative organizations. Like, there are mechanisms for me to flag an issue.
\ For AI systems, it’s hard because you can tell the company “Hey, your product is faulty,” but what if the product already removed all the money in your bank account because it assumed that you were making fraudulent transactions? Who do you actually complain to? And who’s gonna listen to you and make sure that this is dealt with in a timely fashion and isn’t burdensome on the person complaining? So in simpler terms, the properties of a good accountability framework rely on making it easier for people to complain once something goes wrong, and also ensuring that there are [complaint] options beyond the technology.
\ Ross: How do you design law to make sure that companies will actually abide by it?
Christelle: That’s something that we’ve been struggling with a lot conceptually in Canada. Some companies say that providing criminal penalties for contraventions of the act is too heavy a penalty. Others say that a small financial penalty just incentivizes companies to factor it into the operating costs of the product. So they’re just paying a small bill compared to what they can make if they continue producing and deploying that product.
\ I think that a way to answer those two challenges while also respecting human rights and building trust is having a flexible framework that has a regulator [who is] empowered to conduct proactive audits, impose fines, and draft regulations.
\ Ross: How should AI systems be audited?
Christelle: The proposed AI and Data Act in Canada says that if the minister suspects a violation, they can request the company to conduct an audit and deliver them the results. And the company itself has the choice of conducting the audit themselves or procuring the auditing service from a third party that they choose and pay. Deb Raji [a fellow accountability researcher] argues… when you let a company audit themselves, then you’re not getting… an impartial assessment of the problem.
\ I believe a way forward is to build specialized auditing teams within government that include [a variety of] experts who understand the socio-technical implications of AI: lawyers, technologists, sociologists, philosophers, [and others].
\ A lot of industry actors are rapidly developing [infrastructure for auditing AI]. While this is a positive thing for companies who want to use those services to assess their products, the government shouldn’t rely on them for audits. We shouldn’t be outsourcing expertise that we can develop in-house.
\ Ross: Given that Canada and the US are about to spend billions of dollars on AI, can you talk a bit about what that money will be used for, and what you see as any gaps in the funding?
Christelle: We need more money for regulatory infrastructure, and I really emphasize regulatory infrastructure as a term, because how can we audit or even develop guidelines on how to use systems if we don’t have public servants thinking about these things? We shouldn’t let industry dictate how technologies are used, when they’re used, and whether they should be used. I think this is a responsibility that the government needs to take on.
\ [There is] a meager $5.1 million [CAD] for the office of the AI and data commissioner of the country.
\ The office of the Privacy Commissioner of Canada has five times that budget. So $5 million [CAD] is nothing. If you have $2 billion [CAD] for computing infrastructure, who’s gonna regulate it? We need money for that.
\ There’s [also] $50 million [CAD] for upskilling and “training” people who are impacted by AI. The government didn’t give too much detail, but they alluded that this would be for, for example, content creators and artists who might be impacted by AI. They specifically use the word “training”, which is very interesting because creative workers [and] artists don’t need more skills to use AI; they just want their intellectual property/copyright to be respected, and to not see their work stolen.
\ Ross: There was a new bill just introduced in the US by a senator that would require all companies that use AI to disclose any copyrighted works that are included in their training data and to maintain a database of them. Do you have thoughts on this? Is it feasible?
Christelle: It’s an interesting idea, but I don’t know much about copyright and whether… just disclosing that you use copyrighted work is enough to prevent harms for workers who rely on copyright for their income. When it comes to AI policy, we need to think about these types of interventions as part of a broader puzzle.
\ Ross: Do you have advice for readers to [make their voices heard about AI policy]?
Christelle: I highly encourage people to learn more about how government operates and how laws are made, even if it’s at the municipal or state level in the US, because a lot of people benefit from the majority not knowing how bills are passed. So I highly encourage you all to do that.
\ I want people to be excited about finding new ways to deal with issues and also building community. Talk to your neighbors, talk to your friends, talk to your parents.
Want to learn more? Check out the OECD tracker of over 1,000 AI initiatives from 69 countries and territories. Want to get involved? Learn more about AI harms and contact your elected officials in the U.S., Canada, or wherever you might be.
\ Thanks for reading!
\ Ross Teixeira
Investigative Data Journalist
The Markup
\ Photo by Joshua Woroniecki on Unsplash
\
2026-03-08 01:00:51
Stop the memory rot
TL;DR: You can keep your AI sharp by forcing it to summarize and prune what it remembers (a.k.a. compacting).
You keep a single, long conversation open for hours.
\ You feed the AI with every error log and every iteration of your code.
\ Eventually, the AI starts to ignore your early instructions or hallucinate nonexistent functions.
Maintain a context.md file with your current stack and rules.
\ Large Language Models have limited attention.
\ Long context windows are a trap.
\ Many modern models offer a very large context window.
\ In practice, models ignore much of that context, to your frustration.
\ Even with large context windows, they prioritize the beginning and end of the prompt.
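The manual pruning this tip recommends can be sketched in a few lines. This is an illustrative sketch, not any particular tool’s API: it keeps your opening rules and the most recent turns, and collapses everything in between into a single summary stub that you (or a cheap model) fill in.

```python
def compact(messages, keep_recent=4):
    """Return a pruned copy of a chat history.

    messages: list of {"role": ..., "content": ...} dicts.
    The first message (your rules/stack) and the last `keep_recent`
    turns survive; the middle is replaced by one summary stub that
    you write by hand or generate with a cheaper model.
    """
    if len(messages) <= 1 + keep_recent:
        return list(messages)  # nothing worth pruning yet

    head = messages[:1]             # system rules / current stack
    tail = messages[-keep_recent:]  # recent intent
    middle = messages[1:-keep_recent]

    stub = {
        "role": "user",
        "content": f"[Summary of {len(middle)} earlier turns goes here]",
    }
    return head + [stub] + tail
```

This exploits the behavior described above: since the model attends mostly to the start and end of the prompt anyway, those are the only parts worth keeping verbatim.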
Here is the 500-line log of my failed build.
Also, remember that we changed the database schema three hours ago in this chat.
Add the unit tests as I described above.
Now, refactor the whole component.
I am starting a new session. Here is the current state:
We use *PostgreSQL* with the `Users` table schema [ID, Email].
The `AuthService` interface is [login(), logout()].
Refactor the `LoginComponent` to use these.
You must ensure you don't purge essential context.
\ If you prune too much, the AI might suggest libraries that conflict with your current setup.
\ Review the compacted information.
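A quick sanity check helps with that review. The sketch below is hypothetical (the 4-characters-per-token ratio is a crude rule of thumb, not a real tokenizer, and the keyword check is just an example): it flags a compacted context that is still too large, or that lost a fact you marked as essential.

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def check_compaction(context: str, must_keep, budget: int = 8000):
    """Return a list of warnings about a compacted context.

    must_keep: keywords (e.g. table or interface names) that must
    survive pruning; their absence suggests you purged too much.
    """
    warnings = []
    if approx_tokens(context) > budget:
        warnings.append("Context is still too large; prune further.")
    for keyword in must_keep:
        if keyword.lower() not in context.lower():
            warnings.append(f"Possible over-pruning: '{keyword}' is missing.")
    return warnings
```

An empty list means the compacted context fits the budget and still mentions every essential fact; anything else tells you what to restore or trim before starting the new session.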
[X] Semi-Automatic
You can use this tip manually in any chat interface.
\ If you use advanced agents like Claude Code or Cursor, they might handle some of this automatically, but manual pruning is still more reliable.
[X] Intermediate
https://maximilianocontieri.com/ai-coding-tip-004-use-modular-skills
https://hackernoon.com/ai-coding-tip-005-how-to-keep-context-fresh
AI Coding Tip 010 - Create Skill from Conversation
You are the curator of the AI's memory.
\ If you let the context rot, the code will rot, too.
\ Keep it clean and compact. 🧹
https://arxiv.org/abs/2307.03172
https://llmlingua.com/
https://www.ibm.com/think/topics/ai-hallucinations
https://www.promptingguide.ai/
The views expressed here are my own.
\ I am a human who writes as best as possible for other humans.
\ I use AI proofreading tools to improve some texts.
\ I welcome constructive criticism and dialogue.
\ I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.
This article is part of the AI Coding Tip series.
https://maximilianocontieri.com/ai-coding-tips
\