Bear Blog Trending Posts

Ranked according to the following algorithm: Score = log10(U) + (S / D * 8600), where U is upvotes and S/D is time.

My Default Apps for 2024

2024-12-26 22:33:00

This is my first time doing a post like this, but I was inspired by seeing so many others share their 2024 Default Apps[1], like Lou and Brandon. I thought it'd be fun to capture my go-to apps now and revisit this at the end of 2025 to see what's changed (if anything!).

  • 📧 Mail: Spark, Proton, Mail.app
  • 📝 Notes: Bear App, VoiceNotes
  • ✅ To-Do: Things 3
  • 📷 Photo: Default Camera, Lapse
  • 📸 Photo editing: Lightroom, RNI Films (iOS)
  • 🟦 Photo management: Photos.app, Flickr, Retro
  • 📆 Calendar: Gmail Cal, Amie
  • 📁 Cloud File Storage: iCloud, Proton Drive
  • 📖 RSS Reader: NetNewsWire
  • 👤 Contacts: Contacts.app
  • 🌐 Browser: Chrome, Orion (testing)
  • 💬 Messages: Messages.app, Discord, Signal
  • 📄 Word Processing: Obsidian
  • 💰 Budgeting: Budget
  • 🎧 Music: Spotify
  • 💿 Vinyl Catalog: Music Buddy
  • 🔒 Password Manager: 1Password
  • 🐘 Mastodon: IceCubes
  • 🦋 Bluesky: default Bluesky app
  • 📓 Journal: DayOne
  • 📺 Movie Tracker: Letterboxd
  • 📚 Book Tracker: GoodReads (unfortunately)
  • ✍️ Blogging: Bear blog
  • 🔗 Websites: Notes by JCProbably, Snaps by JCProbably, august morning
Comments

If you'd like to comment, please send me an email, respond on any social media of mine you know, or sign my Guestbook.


Lou: Another person who does their writing in Obsidian! Do you use any writing specific plugins? I'm loving the Language Tool integration.


Sylvia: I made a list like this for my former blog and am going to nab yours and make a new version with some categories added 😃

💬 Related: Sylvia's default apps for 2024.


Alvan: Wow, a lot of great apps! Will give VoiceNotes and Budget a try. Amie looks very good too. I wish they had trial version to at least see how it feels like.


Alex W: Nice list. I really need to redo and update mine too.

💬 Related: Alex's default apps



Reply by email
  1. Most of the above app usage happens on my iPhone 16 Pro, though I do use my iPad Pro and MacBook Pro from time to time.

The Military Con No One Talks About

2024-12-26 14:27:00

I come from a proud military family. Both my father and my step-father are retired veterans, and both of them are one-hundred percent disabled. I know what you are thinking: they must have been to war, lost a leg, or been shot. Nope, none of that happened to either one of them. Actually, in their twenty-plus years in the service, both of them rode a desk for the majority of it, never seeing any action. Neither lost a limb to an IED nor suffered any injury. So, how are they one-hundred percent disabled, and why are thousands of our tax dollars (a month) providing them a lifestyle that neither of them earned? That's a good question, and one I can sort of answer.

Not surprisingly, many members of the service exit into civilian life and realize that the rank and power they held within the military mean nothing in the civilian world. They constantly shake their heads and say things like "Damn civilians" while complaining that they wasted their time only for immigrants to come to this country and for the lazy to get welfare checks. Their anger and frustration are voiced to fellow ex-service members, and eventually they find their way to revenge: getting disability from the military.

Large groups of ex-military have a sort of unofficial way to navigate these things. They tell each other exactly what to say to specific doctors to get diagnosed with specific conditions. If you word it just right, you can claim that the asthma you've had since you were three years old is somehow related to your military service, or that your erectile dysfunction is not related to being morbidly obese and inactive for the past twenty years but to the stress you underwent while serving. (Both examples are 100% true.)

I watched as both my father and step-father conned the system, along with their friends and fellow family members. These are the guys who walk around with military hats on just so you'll say, "Thank you for your service," like they stormed the beaches of Normandy. What you don't realize is that they sat at desks and went to seminars while stealing office supplies for their entire careers.

What pisses me off is how much they brag about getting what they are owed, while in the same breath shitting all over people who need government assistance. Neither man was incapacitated in any way, and both worked full-time jobs with no accommodations following their end of service. Yet now they are in their sixties, living it up, and making more money than they did while they were working, and you and I are paying for a lifestyle they don't deserve. If it were limited to just the two of them I wouldn't be so irritated, but I know dozens of these grifters personally, and I can only imagine there are thousands more out there, while service members who underwent real trauma are denied benefits because they don't know the right words to say, or how to find the right doctor who will write down that their back pain is service-related even though it just started up a year ago.

What is equally bad is that you won't find an easy way to report these types of things. I even saw a post from a few years ago on a branch subreddit, and the guy was thrown to the wolves for even suggesting that you rat someone out. The media won't touch a story like this; it's suicide to run an anti-veteran story, and even worse if it's about disabled veterans.

It's driven me nuts for years. It takes everything in me not to just explode whenever I interact with these people. I have a lot of respect for our military, but the whole "these guys are honorable heroes" line is not always true, and that is why you won't see me walking up and thanking anyone for their service.

Reply via email

every love story is a ghost story

2024-12-26 12:31:00

I am sick again, for the second time in December. I can’t tell if this bout of congestion is a separate episode or simply a continuation of the last one from mid-December. Whichever it is, I’ve stopped taking my immunosuppressants again, and have spent all of Christmas break in bed.

Illness has visited me so frequently this year (especially the tail end) that it may spare both you and me time and words if I instead say when I feel well, rather than when I don't. But the curse of being well is that after a day or two of good health you forget what a blessing it is. Even a fish tossed back to sea doesn't go around thinking this is water, this is water for long.

That tomorrow is a workday feels cruel and rather pointless. Christmas Eve and Day fell on Tuesday and Wednesday this year, bisecting the week into two halves, a day and two days long. Why companies don’t give these days off baffles me; I’ve not known anyone who gets anything meaningful done on these weird abbreviated weeks punctuated by holidays. The silver lining is that nobody will be online, the day will crawl by, and Frith willing, I’ll have time to catch up on whatever needs catching up on. (Boy is there a lot, now that I’ve presented my outstretched hands looking for things to do at work.)

I have spent the days reading, mostly. Making up ground on my annual reading goal (I am halfway through number 40, the last one). Napping once a day around 1, for about sixty to ninety minutes. Eating soupy things — udon with bokchoy and fishcakes, pho made from leftover restaurant broth. Drinking steaming barley tea. Lots of water. Tissue box after tissue box after tissue box. Vaseline to make sure the skin around my nose doesn’t turn pink and sandpapery. Watching videos about city maps and public transit and NYC restaurants when I get bored of reading. And thinking about love, and friends, and people I haven’t seen in a while, maybe won’t ever see again. I am always thinking about them. That is the only line of Hemingway’s advice that I’ve actually stuck to, though I fear he may have meant something else when he said to always think of other people.

Server-Sent Events (SSE) Are Underrated

2024-12-26 05:26:00

Most developers know about WebSockets, but Server-Sent Events (SSE) offer a simpler, often overlooked alternative that deserves more attention. Let's explore why this technology is underrated and how it can benefit your applications.

What are Server-Sent Events?

SSE establishes a one-way communication channel from server to client over HTTP. Unlike WebSockets' bidirectional connection, SSE maintains an open HTTP connection for server-to-client updates. Think of it as a radio broadcast: the server (station) transmits, and clients (receivers) listen.
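
On the wire, an SSE response is just a long-lived HTTP response whose body is a stream of plain-text events. A minimal illustrative exchange (endpoint and payloads made up) looks like this:

HTTP/1.1 200 OK
Content-Type: text/event-stream

data: first message

data: {"price": 101.5}

data: a multi-line message
data: continues on a second data line

Each event ends with a blank line, and lines starting with a colon are comments (often used as keep-alive pings).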

Why are they Underrated?

Two main factors contribute to SSE's underappreciation:

  1. WebSocket's Popularity: WebSockets' full-duplex communication capabilities often overshadow SSE's simpler approach
  2. Perceived Limitations: The unidirectional nature might seem restrictive, though it's often sufficient for many use cases

Key Strengths of SSE

1. Implementation Simplicity

SSE leverages standard HTTP protocols, eliminating the complexity of WebSocket connection management.
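
Because it's plain HTTP, you can sanity-check an SSE endpoint with nothing more than curl (shown here against the /stream endpoint from the Flask example later in this post):

curl -N http://localhost:5000/stream

The -N flag disables curl's output buffering so events print as they arrive.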

2. Infrastructure Compatibility

SSE works seamlessly with existing HTTP infrastructure:

  • Load balancers
  • Proxies
  • Firewalls
  • Standard HTTP servers

3. Resource Efficiency

Lower resource consumption compared to WebSockets due to:

  • Unidirectional nature
  • Standard HTTP connection usage
  • No persistent socket maintenance

4. Automatic Reconnection

Built-in browser support for:

  • Connection interruption handling
  • Automatic reconnection attempts
  • Resilient real-time experience
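
Reconnection can also be steered from the server: an id: field tags each event so that, after a drop, the browser resends the last ID it saw in a Last-Event-ID request header, and a retry: field sets the reconnection delay in milliseconds. An illustrative event:

id: 42
retry: 5000
data: {"tick": 42}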

5. Clear Semantics

One-way communication pattern enforces:

  • Clear separation of concerns
  • Straightforward data flow
  • Simplified application logic

Practical Applications

SSE excels in these scenarios:

  1. Real-time News Feeds and Social Updates
  2. Stock Tickers and Financial Data
  3. Progress Bars and Task Monitoring
  4. Server Logs Streaming
  5. Collaborative Editing (for updates)
  6. Gaming Leaderboards
  7. Location Tracking Systems

Implementation Examples

Server-Side (Flask)

from flask import Flask, Response, stream_with_context
import time
import random

app = Flask(__name__)

def generate_random_data():
    while True:
        data = f"data: Random value: {random.randint(1, 100)}\n\n"
        yield data
        time.sleep(1)

@app.route('/stream')
def stream():
    return Response(
        stream_with_context(generate_random_data()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)

Client-Side (JavaScript)

const eventSource = new EventSource("/stream");

eventSource.onmessage = function(event) {
    const dataDiv = document.getElementById("data");
    dataDiv.innerHTML += `<p>${event.data}</p>`;
};

eventSource.onerror = function(error) {
    console.error("SSE error:", error);
};

Code Explanation

Server-Side Components:

  • /stream route handles SSE connections
  • generate_random_data() continuously yields formatted events
  • text/event-stream mimetype signals SSE protocol
  • stream_with_context maintains Flask application context

Client-Side Components:

  • EventSource object manages SSE connection
  • onmessage handler processes incoming events
  • onerror handles connection issues
  • Automatic reconnection handled by browser



Limitations and Considerations

When implementing SSE, be aware of these constraints:

1. Unidirectional Communication

  • Server-to-client only
  • Requires separate HTTP requests for client-to-server communication

2. Browser Support

  • Well-supported in modern browsers
  • May need polyfills for older browsers

3. Data Format

  • Primary support for text-based data
  • Binary data requires encoding (e.g., Base64)
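
Since the stream is text, a binary payload has to be serialised before it can ride on a data: line. A minimal sketch (the field name is made up):

import base64
import json

blob = b"\x89PNG\r\n"  # some binary payload, e.g. the start of a PNG file
payload = {"blob": base64.b64encode(blob).decode("ascii")}
event = f"data: {json.dumps(payload)}\n\n"  # ready to write to the stream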

4. Works Best with HTTP/2

As stated in the MDN documentation:

Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6). The issue has been marked as "Won't fix" in Chrome and Firefox. This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to www.example1.com and another 6 SSE connections to www.example2.com (per Stack Overflow). When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100)

Best Practices

  1. Error Handling

eventSource.onerror = function(error) {
    if (eventSource.readyState === EventSource.CLOSED) {
        console.log("Connection was closed");
    }
};

  2. Connection Management

// Clean up when done
function closeConnection() {
    eventSource.close();
}

  3. Reconnection Strategy

// Note: EventSource has no "onclose" event; a closed connection
// is detected in onerror via readyState.
let retryAttempts = 0;
const maxRetries = 5;

eventSource.onerror = function() {
    if (eventSource.readyState === EventSource.CLOSED && retryAttempts < maxRetries) {
        retryAttempts++;
        setTimeout(() => {
            // Reconnect logic: create a fresh EventSource here
        }, 1000 * retryAttempts);
    }
};

Real-World Example: ChatGPT's Implementation

Modern Large Language Models (LLMs) utilize Server-Sent Events (SSE) for streaming responses. Let's explore how these implementations work and what makes them unique.

The General Pattern

All major LLM providers implement streaming using a common pattern:

  • Return content-type: text/event-stream header
  • Stream data blocks separated by \r\n\r\n
  • Each block contains a data: JSON line

Important Note

While SSE typically works with the browser's EventSource API, LLM implementations can't use this directly because:

  • EventSource only supports GET requests
  • LLM APIs require POST requests
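
To stream over POST you drop EventSource and read the response body yourself. A rough sketch in Python with the requests library (URL, payload, and field names are placeholders, not any specific provider's API):

import json
import requests

response = requests.post(
    "https://api.example.com/v1/stream",  # hypothetical endpoint
    json={"prompt": "Hello", "stream": True},
    stream=True,  # don't buffer the whole body; iterate as chunks arrive
)

for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue  # skip blank separators and comment lines
    data = line[len("data: "):]
    if data == "[DONE]":  # OpenAI-style terminator, shown later in this post
        break
    print(json.loads(data))

In the browser, the same thing is usually done with fetch() and a ReadableStream reader rather than EventSource.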

OpenAI Implementation

Basic Request Structure

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, world?"}],
    "stream": true,
    "stream_options": {
      "include_usage": true
    }
  }'

Response Format

Each chunk follows this structure:

"data":{
   "id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
   "object":"chat.completion.chunk",
   "created":1735161718,
   "model":"gpt-4o-mini-2024-07-18",
   "system_fingerprint":"fp_0aa8d3e20b",
   "choices":[
      {
         "index":0,
         "delta":{
            "content":"!"
         },
         "logprobs":null,
         "finish_reason":null
      }
   ],
   "usage":null
}

"data":{
   "id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
   "object":"chat.completion.chunk",
   "created":1735161718,
   "model":"gpt-4o-mini-2024-07-18",
   "system_fingerprint":"fp_0aa8d3e20b",
   "choices":[
      {
         "index":0,
         "delta":{
            
         },
         "logprobs":null,
         "finish_reason":"stop"
      }
   ],
   "usage":null
}

HTTP Headers

Key headers returned by OpenAI:

HTTP/2 200
date: Wed, 25 Dec 2024 21:21:59 GMT
content-type: text/event-stream; charset=utf-8
access-control-expose-headers: X-Request-ID
openai-organization: user-esvzealexvl5nbzmxrismbwf
openai-processing-ms: 100
openai-version: 2020-10-01
x-ratelimit-limit-requests: 10000
x-ratelimit-limit-tokens: 200000
x-ratelimit-remaining-requests: 9999
x-ratelimit-remaining-tokens: 199978
x-ratelimit-reset-requests: 8.64s
x-ratelimit-reset-tokens: 6ms

Implementation Details

Stream Completion

The stream ends with:

data: [DONE]

Usage Information

Final message includes token usage:

"data":{
   "id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
   "object":"chat.completion.chunk",
   "created":1735161718,
   "model":"gpt-4o-mini-2024-07-18",
   "system_fingerprint":"fp_0aa8d3e20b",
   "choices":[
      
   ],
   "usage":{
      "prompt_tokens":11,
      "completion_tokens":18,
      "total_tokens":29,
      "prompt_tokens_details":{
         "cached_tokens":0,
         "audio_tokens":0
      },
      "completion_tokens_details":{
         "reasoning_tokens":0,
         "audio_tokens":0,
         "accepted_prediction_tokens":0,
         "rejected_prediction_tokens":0
      }
   }
}

Conclusion

SSE provides an elegant solution for real-time, server-to-client communications. Its simplicity, efficiency, and integration with existing infrastructure make it an excellent choice for many applications. While WebSockets remain valuable for bidirectional communication, SSE offers a more focused and often more appropriate solution for one-way data streaming scenarios.

My failed attempt at AGI on the Tokio Runtime

2024-12-26 01:41:00

Note: I am not, nor do I claim to be an expert in machine learning or neuroscience. This will become abundantly obvious as you continue reading.

A few weeks ago I decided to try and build AGI. OpenAI, DeepMind and xAI haven't delivered yet with the smartest researchers and billions in compute, so I had to take matters into my own hands (what could go wrong?).

I bought a couple of books on Artificial Intelligence and Neuroscience and started:

cargo new agi

Meta Strategy

Assume you are racing a Formula 1 car. You are in last place. You are a worse driver in a worse car. If you follow the same strategy as the cars in front of you, pit at the same time and choose the same tires, you will certainly lose. The only chance you have is to pick a different strategy.

The same goes for me. If I go down the transformer / deep learning route I am outgunned. The only hope I have is to try something completely novel (or, more precisely, to think I'm working on something novel only to discover it was done in the 1970s[1]).

Concrete Strategy

For reasons we'll cover in the following sections, I decided to go down the fully biologically inspired path. The rough idea was to build a fully asynchronous neural network and run it on a data center.

Neurons and Brains

When I started reading neuroscience I got the impression that we don't really understand how the brain works. It's complicated and complex, and the books I read model neuronal firing with nonlinear differential equations. But first, a small primer.

At a high level a neuron consists of 3 main components.

[Figure: diagram of a neuron]

The dendrites on the left act as inputs to the neuron from other neurons (we'll call those "pre-synaptic neurons"). The cell body has a cell membrane which acts as a barrier between the internals of the neuron and the goop surrounding it. The axon on the right is connected to the dendrites of other downstream neurons (we'll call those "post-synaptic neurons").

When a neuron receives a signal from a pre-synaptic neuron, it increases the potential in the neuron's cell body. If this potential rises past some threshold voltage (relative to the surrounding goop), the neuron fires a signal down its axon to the post-synaptic neurons and resets its internal voltage. After firing, a neuron has a rest period called the "refractory" period, during which it does not respond to stimuli. After the refractory period the neuron is ready to fire again.

[Figure: membrane potential rising to threshold and firing]

This is massively simplified. There are different types of neurons and a bunch of chemistry, but I'm going to hand-wave those away and call them "implementation details". In fact, I'm going to assume that the continuous nature of the signals fired is an implementation detail of the substrate, i.e. the underlying biological wetware, and is not functionally important. There is no reason why signals can't be binary.

Conductance-Based Models

I didn't mention earlier that the cell body leaks potential into the surrounding goop over time. In 1963 Alan Hodgkin and Andrew Huxley received the Nobel Prize in Physiology or Medicine for modelling this as a dynamical system governed by a series of nonlinear differential equations. They modelled the relationship between the flow of ions across the neuron's cell membrane and the voltage of the cell. The experimental work was done on the squid giant axon because it was large enough for an electrode to be placed inside it.

Again I'm going to hand-wave the chemistry away and call it an implementation detail, using a simplified "leaky integrate and fire" model.

$$C_m \frac{dV_m(t)}{dt} = I(t) - \frac{V_m(t)}{R_m}$$

This is also a differential equation over the capacitance, resistance and current across the neuron membrane and voltage of the cell. But really it boils down to:

  1. Pre-synaptic impulses increase membrane potential
  2. Time decreases membrane potential

Or in pseudocode:

let k = ...     // some decay constant
let delta = ... // some potential difference constant
loop {
    if signal.next() {
        let now = time::now()
        membrane_potential = membrane_potential * e^(-k * (now - previous_firing))
        membrane_potential += delta
        if membrane_potential > firing_threshold {
            fire()
            membrane_potential = 0
            previous_firing = now
        }
    }
}

[Figure: leaky integrate-and-fire membrane potential over time]

Encoding Information in Neuronal Signals

It looks like the jury is still out on how exactly neurons encode information. Namely, is information encoded in spike timings, i.e. when a neuron fires, or in firing rates, i.e. how often a neuron fires? There's a bunch of statistics and math that's been developed to talk intelligently about neuronal firing rates, but I'm going to assume that I don't care, because the firing rates are going to be emergent from the underlying spike timings anyway.

Design

Meditating on the structure of the neuron described above and on modern artificial neural networks like transformers, a few things jump out at you.

Even if a network of these neurons is not being driven externally, certain configurations allow signals to propagate in cycles in the neuronal graph. These configurations sustain themselves without external stimuli while at the same time not producing divergent outputs.

This is far-fetched, but it feels like something that might implement consciousness, rather than a pure feed-forward system.

Implementation

I decided to implement this network by employing something like an Actor Model on the Tokio runtime. Tokio is a fast asynchronous runtime for Rust that exposes primitives which make my life easier, such as broadcast channels to implement synapses. It would also be easy to hot-swap for a distributed version if I want to run my AI across multiple machines.

Neurons

Neurons are implemented pretty much as described above.

pub struct Neuron {
    #[allow(unused)]
    index: usize,
    membrane_potential: u32,
    axon: broadcast::Sender<Impulse>,
    dendrites: Vec<broadcast::Receiver<Impulse>>,
}

A broadcast::Sender is used to broadcast signals to the post-synaptic neurons, while the broadcast::Receivers attached to pre-synaptic neurons are used to drive the neuron.

An Impulse is just an empty tuple for now - we are assuming that the signal potential isn't important (or is constant) and information is encoded purely in the timing of firings (and consequently the firing rates).

To run the neuron we combine the dendrite receivers into a single stream and keep popping messages, implementing the leaky integrate-and-fire method:

impl Neuron {
    async fn start(mut self) {
        // Convert each receiver to a stream of messages
        let streams = self
            .dendrites
            .drain(..)
            .map(|mut rx| {
                Box::pin(async_stream::stream! {
                        loop {
                            match rx.recv().await {
                                Ok(msg) => yield msg,
                                Err(broadcast::error::RecvError::Closed) => break,
                                Err(broadcast::error::RecvError::Lagged(skipped)) => {
                                    // debug!("Receiver lagged by {} messages", skipped);
                                    continue;
                                }
                            }
                        }
                })
            })
            .collect::<Vec<_>>();

        // Combine all streams into a single unified stream
        let mut combined = stream::select_all(streams);
        let mut last_fire = Instant::now();

        // Process each message as it arrives from any receiver
        while let Some(_impulse) = combined.next().await {
            FIRINGS.fetch_add(1, Ordering::Relaxed);
            // Implement the "integrate and fire" method, ignoring
            // stimuli that arrive during the refractory period.
            let now = Instant::now();
            if now > last_fire + ABSOLUTE_REFACTORY_PERIOD {
                self.membrane_potential += 1;
                if self.membrane_potential > FIRING_THRESHOLD {
                    self.emit(Impulse);
                    self.membrane_potential = 0;
                    last_fire = now;
                }
            }
        }
    }

    fn emit(&self, impulse: Impulse) {
        if let Err(_e) = self.axon.send(impulse) {
            // Log the total number of firings before dying.
            println!("{}", FIRINGS.load(Ordering::Relaxed));
            panic!()
        }
    }
}

Brains

Brains are modelled as a bag of neurons with a set of inputs and outputs.

pub struct Brain {
    neurons: Vec<Neuron>,
    inputs: Vec<broadcast::Sender<Impulse>>,
    outputs: Vec<broadcast::Receiver<Impulse>>,
}

The synapses for the neurons are already constructed beforehand as a brain is built from DNA.

impl From<&Dna> for Brain {
    fn from(dna: &Dna) -> Self {
        let mut neurons = Vec::new();
        let mut broadcasts = Vec::new();

        // Step 1: Initialize neurons and broadcast channels
        for index in 0..Dna::num_neurons() {
            let (tx, rx) = broadcast::channel(CHANNEL_CAPACITY);
            neurons.push(Neuron {
                index,
                membrane_potential: 0,
                axon: tx.clone(),
                dendrites: Vec::new(),
            });
            broadcasts.push((tx, rx));
        }
        let connectivity = dna.connectivity();

        for (src, row) in connectivity.iter().enumerate() {
            for (dest, &value) in row.iter().enumerate() {
                if src == dest {
                    // do not allow neurons to wire back to themselves
                    continue;
                }
                if value == 1 {
                    let receiver = broadcasts[src].0.subscribe();
                    neurons[dest].dendrites.push(receiver);
                }
            }
        }

        let inputs = dna
            .inputs()
            .iter()
            .map(|input_id| broadcasts[*input_id].0.clone())
            .collect::<Vec<_>>();

        let outputs = dna
            .outputs()
            .iter()
            .map(|output_id| broadcasts[*output_id].0.subscribe())
            .collect::<Vec<_>>();

        Brain {
            neurons,
            inputs,
            outputs,
        }
    }
}

DNA

The average human brain has 85 billion neurons and over 100 trillion synaptic connections. If every neuron were connected to every other neuron you would get $$n(n-1)/2$$ synapses; with $$n = 85$$ billion, that is on the order of $$3.6 \times 10^{21}$$. Even in a sparsely connected brain you still get an unfeasibly large number of synapses for my 64 GB of RAM (neurons are thought to have 1,000-100,000 connections typically, depending on the type of neuron, its location, etc.).

The sheer number of neurons and synapses means that they are not deterministically encoded in your DNA. Instead, your DNA defines rules for protein synthesis which generate these neurons and synapses.

This seems hard. I'm going to go down the road of the C. elegans nematode, which has exactly 302 neurons. I'm not sure whether its synapses are hard-wired, but mine will be.

pub struct Dna<const NUM_NEURONS: usize, const NUM_INPUT: usize, const NUM_OUTPUT: usize> {
    potential_decay_ns: f64,
    threshold: u16,
    initiation_delay_ns: u64,
    connectivity: Box<[[u8; NUM_NEURONS]; NUM_NEURONS]>,
    // point to the input neurons of the connectivity matrix.
    input_neurons: [usize; NUM_INPUT],
    // point to the output neurons of the connectivity matrix.
    output_neurons: [usize; NUM_OUTPUT],
}

We define a hard-coded connectivity matrix in our brain's DNA. The inputs and outputs point to specific neurons in the brain irrespective of positioning.
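
For intuition, here is a tiny made-up 4-neuron connectivity matrix, where row = source and column = destination (matching how the Brain construction code above reads connectivity[src][dest]):

      n0 n1 n2 n3
  n0 [ 0  1  0  1 ]
  n1 [ 0  0  1  0 ]
  n2 [ 1  0  0  0 ]
  n3 [ 0  0  0  0 ]

Here n0 synapses onto n1 and n3, n1 onto n2, and n2 back onto n0, giving exactly the kind of cycle discussed earlier.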

Games

Our brain is going to try to get better at playing a simple game I created for it. The game is basically snake. Your score increases every time you eat food. You can only go up, down, left and right. A higher score is better.

#[derive(Clone, Copy, PartialEq, Debug)]
pub enum Direction {
    Up,
    Down,
    Left,
    Right,
}

#[derive(Clone, PartialEq)]
pub struct Position {
    x: i32,
    y: i32,
}

pub struct Game {
    pub width: usize,
    pub height: usize,
    pub snake: Position,
    pub direction: Direction,
    pub food: Position,
    pub(crate) score: usize,
    pub game_over: bool,
}

Organism

In order for our brain to play this game, it needs to be wrapped up in an organism. The organism is responsible for driving the inputs of the brain by reading the game state and playing the game using the brain's outputs.

The brain is constantly driven by the organism being fed the game's state even if it hasn't changed (much like you keep seeing an image in front of you even if it hasn't changed).

pub struct Organism {
    pub(crate) dna: Dna,
    inputs: Vec<broadcast::Sender<Impulse>>,
    outputs: Vec<broadcast::Receiver<Impulse>>,
}

impl Organism {
    pub fn new(dna: Dna) -> Organism {
        let brain = Brain::from(&dna);
        let (inputs, outputs) = brain.start();
        Self {
            dna,
            inputs,
            outputs,
        }
    }

    // Given a 2D representation of the world state
    // stimulates the appropriate input neurons.
    pub(crate) fn drive_input(&self, state: Vec<Vec<u8>>) {
        for (i, row) in state.iter().enumerate() {
            for (j, val) in row.iter().enumerate() {
                match val {
                    0 => continue,
                    _ => {
                        let index = i * row.len() + j;
                        self.inputs
                            .get(index)
                            .unwrap()
                            .send(Impulse)
                            .expect(&format!("Failed at index {}", index));
                    }
                }
            }
        }
    }
...

Training

Ok how the hell do we train this thing? Stochastic gradient descent with back-propagation won't work here (or if it does I have no idea how to implement it).

Instead I resorted to genetic algorithms. Genetic algorithms are a class of optimisation algorithms inspired by nature, combining Darwinian selection based on individual fitness with a small probability of genetic mutation to help explore the search space and escape local minima.

To do this for our Tokio brains requires a few steps:

  1. Initialise a population of $$N$$ DNA with random connectivity matrices
  2. Create brains from the DNA and put those brains in organisms and let them play our game.
  3. The $$\sqrt{N}$$ individuals with the highest scores are bred with each other resulting in a new population.
  4. Breeding works by splitting the connectivity matrix into sections and randomly picking sections from each parent (along with any other relevant genes); a rough sketch follows the training loop below
  5. Repeat
  6. Profit

    pub fn train(&mut self) {
        info!("Starting training.");
        let mut population = self.initialize_population();
        while self.epoch < self.max_epoch {
            let runtime = tokio::runtime::Runtime::new().unwrap();
            runtime.block_on(async {
                info!("Starting epoch: {}", self.epoch);
                let mut handles = vec![];
                for (id, dna) in population.iter().enumerate() {
                    let dna = dna.clone();
                    let handle = tokio::spawn(async move { Simulation::simulate(id, dna).await });
                    handles.push(handle);
                }
                let population_with_scores = join_all(handles)
                    .await
                    .into_iter()
                    .filter_map(|handle| match handle {
                        Ok(dna_and_score) => Some(dna_and_score),
                        Err(e) => {
                            error!("{}", e);
                            None
                        }
                    })
                    .collect::<Vec<_>>();
                let top_score = population_with_scores
                    .iter()
                    .map(|pop_with_score| pop_with_score.1)
                    .max();
                info!("Epoch: {}, Top Score: {:?}", self.epoch, top_score);
                population = self.reproduce_top_performers(population_with_scores);
                println!("{}", population.get(0).unwrap());
                self.epoch += 1;
            });
        }
    }
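
The breeding step itself isn't shown above, so here is a rough sketch of step 4 (written in Python for brevity rather than the project's Rust; the names, block size, and mutation rate are all made up, not the actual values):

import random

def crossover(parent_a, parent_b, block=8, mutation_rate=0.01):
    """Child inherits square blocks of the connectivity matrix from
    either parent at random, then a few random entries are flipped."""
    n = len(parent_a)
    child = [row[:] for row in parent_a]
    for i in range(0, n, block):
        for j in range(0, n, block):
            if random.random() < 0.5:  # take this block from parent B
                for r in range(i, min(i + block, n)):
                    child[r][j:j + block] = parent_b[r][j:j + block]
    # Small chance of mutation to escape local minima.
    for r in range(n):
        for c in range(n):
            if random.random() < mutation_rate:
                child[r][c] ^= 1  # flip a 0/1 synapse
    return child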

Results

Nothing. Nada. I couldn't get this to work at all past a score of 3, which would disappear in the next epoch!

For reference, a human easily gets arbitrarily high scores. My brains have 512 neurons with up to ~13,000 synapses. I'm not sure if this is due to the lack of neurons, but I doubt it.

If I had to guess I would say the culprits are:

  1. A huge number of impulses being generated means that Tokio struggled to process them all in a timely manner, and these neurons are timing-sensitive.
  2. Trying to do optimisation over a connectivity matrix by breaking it down into small chunks probably doesn't work.

Mother nature has defeated me once more. I'm going to put this project on ice for now. I'm going to continue reading neuroscience and pick it back up if / when inspiration strikes.


  1. I later found out that what I was building has been known for at least 50 years and is called a spiking neural network.

on using parentheses (i have so much more to say)

2024-12-25 19:12:00

(...)

You've probably picked up by now on my writing style- particularly my frequent use of parentheses and dashes. You only have to look at past post titles to see the pattern- it's become such a part of my writing voice that it even shows up in school essays, where I have to use a more formal tone.

Why am I such a fan of parentheses? The closest thing I can find to a reason is an example- the word 'like' became a filler word (yes, popularised by teenage girls) with multiple purposes, but most importantly to me as a mitigating word, or used to lessen the impact of a statement. Saying "I'm like, so angry right now" versus "I'm so angry right now"- I definitely prefer the former. Using parentheses, for me, is a way to lessen the impact of a statement, making my writing less formal.

Of course, there are other reasons. I like the visual look of parentheses- they break up a paragraph of otherwise solid words, they give the feeling of a secret being told, or an aside. Parentheses also allow me to add in more information to an otherwise complete sentence, finishing my thoughts when they don't make grammatical sense in the sentence. Parentheses make my writing feel more like a conversation- they add tone and nuance to the text.

For me, parentheses in writing have become like saying 'like' in speaking. It has slipped into my vocabulary until it has become a part of my style, and developing a unique writing style is definitely something I want to work on.

Until next time (and see you soon),

xoxo cec