
Bear Blog Trending Posts

Ranked according to the following algorithm: Score = log10(U) + (S / D * 8600), where U is a post's upvote count and S/D is a time term.
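
A minimal sketch of that scoring rule in Python; the feed description leaves S and D underspecified (it only says "S/D is time"), so treat the time term as illustrative:

```python
import math

def trending_score(upvotes: int, s: float, d: float) -> float:
    """Score = log10(U) + (S / D * 8600).

    U is the post's upvote count. S and D aren't pinned down by the
    feed description beyond "S/D is time", so the only safe reading
    is that a larger S/D (a fresher post) buys a bigger boost.
    """
    # Guard log10(0) for posts with no upvotes yet.
    return math.log10(max(upvotes, 1)) + (s / d) * 8600
```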

Thoughts about interfaces acting like people

2025-11-19 04:43:00

Here's a photo of a tipped-over Bird scooter:

[Photo: a tipped-over Bird scooter]

"Please help me up!" it says on the underside of the scooter's main platform.

I've been thinking a lot about the ambiguity of human oversight and control in the "delivery robot" industry in west LA, where a lot of those devices are being tested. Living near delivery robots - or, rather, live human labor that is being marketed as autonomous and robot-like - is new for me. Usually, I deal with objects pretending to be people. They pretend either superficially, like the scooter above, or robustly and insidiously, like voice chat assistants.

Companies depicting their live-ops services as if they are people is, I think, the most harmful form of "pretend person." Voice UIs in particular hijack the human instinct to understand words as thought, and to treat thinking beings with sympathy and trust. After spending a lot of time making two small Alexa games at a pair of game jams years and years ago, I grew to understand that even a relatively simple human-like voice UI can be extraordinarily manipulative.

Now that people can "speak to" an LLM, I think the issue is a lot more urgent. My strongest crank conviction is that we should regulate the use of software 'personalities' as disruptively as possible. I believe that it should be compulsory to disclose directly to the user - live, during a user session - when a voice assistant or audio UI is not a real person. After the seemingly-endless series of stories we got this past summer and fall about LLM-related deaths, I believe that text personalities like LLM chatbots also deserve extremely disruptive warnings, disclosures, and reminders.

We're very, very unlikely to ever get that kind of regulation. I'll probably always remain convinced that we needed it, though. Anything that can break the social spell of a conversational interface is beneficial to the humans who use it.

The other side of this earnestly-held crank opinion is that I don't have a problem when people are extremely rude to voice assistants. I think that when you have the full context on what they are, who made them, why, and for whom, you must grow to think of them as essentially tech company owners and investor boards wearing a little mask. Alexa is Jeff Bezos wearing a mask. You shouldn't feel the need to be polite to his little mask.

The trick is that I do reserve the right to judge people on the why and how of their rudeness. I don't think you should call the Alexa voice UI a cunt. I do think you should feel free to snap "Shut the fuck up, Jeff," at any Alexa product you're annoyed with. (There are certainly people who are rude to female voice assistants because they fantasize that they are able to berate, control, and demand service from a woman.) It would be self-protective for us all to be more distrustful of, and ruder to, tech company voice UIs. It should feel preposterous to extend Jeff's little mask the same courtesy you extend a real person.

And finally, to return to the delivery "robots" I wrote about yesterday: in a world where we need to protect one another from tech products harmfully pretending to be people, and where aggression and rudeness to voice UIs can be a perfectly good way of training yourself to distrust them... well, in that situation, it's even more insidious for companies to pretend their people are "robots". I believe that we have no hope of government regulation here, and that unions are probably going to be the last line along which human transparency might be defended. If your live human services are being sold to a customer as "autonomous", and if you can organize, I think you should demand that the company make your presence known to its customers.

The difficulty of unionizing teleoperation services at companies with more money than some nations is its own can of worms... but alongside an effort to break these companies up some more, I think it could have an impact. (I gotta keep telling myself that, anyway, because the alternative is extremely depressing!!)


These days, when I watch movies with voice interfaces or "AI" assistants in them, I find myself pretty surprised by how many fictional worlds seem to be full of people who never experience any ambiguity about whether they're talking to a person or to software. Everyone in a sci-fi setting has usually fully internalized the rules about what is or isn't "real" in their conversational world, and they usually all have a social script for how they're "supposed" to treat AI voice assistants.

Characters in modern film and TV are almost never rude or cruel to voice assistants except in scenes where they're being misunderstood by voice recognition. People in stories like these rarely ever get confused about whether something is a human or an AI unless that's, like, the entire point of the story. But in real life, we're constantly forced to interact with an unwanted voice UI, or a phone scammer voice that's pretending to be real. I have found myself really missing moments like these in movies, where humans express any material awareness of the false voices they interact with. Who made their voice assistant? How do they feel about that company or person? Are they ever tricked by a voice that is false when they expected it to be a real, live human?

I've also found myself increasingly frustrated by movies which use AI as a metaphor for real live human marginalization. The future is here, and "AI" is Sam Altman wearing a little mask; it is not a marginalized person. (It's possible that your delivery "robot" is actually a marginalized person, though!)

The reason I wrote this post at all was that I saw the new Running Man movie last week, and it contained a scene where the protagonist behaved toward an AI interface in a shockingly neutral way. It was so neutral that I was surprised, in the moment, that the script hadn't used this interaction to do a little more storytelling about the protagonist or about AI assistants in the world he lives in.

We all learned the forms of our classic stories too well over the last few decades. There is still too much urgency to understand every voice UI as a person, a Data or a Pinocchio. Or we understand it as Majel Barrett-Roddenberry's voice on the Enterprise - a voice with no obvious material or social history, just existing to lubricate a scene or a plot.

I still haven't seen much media that reflects the way I actually feel about conversational interfaces in the real world - frustrated, tricked, manipulated, and inconvenienced. And I haven't seen any media at all recently about the equally insidious trend of real human labor being marketed as if it is an autonomous system.

I hope we don't have to wait too long to see some fiction that reflects the future-as-it-actually-arrived!

november 17 etc

2025-11-18 12:59:00

[Image: a wall of Washington Wizards uniform colours]

Friday I shared a thought re: how I wanted to make stuff going forward.

The gist, if you didn't click through, is that I want to make the daily post a bit more writerly/freeform and push the linksy bits to a newsletter thinger that goes out once a week because the thing I always wanted to do was make a living being a guy who makes nice things for the internet and boy howdy I'm just going to fucking do it.

Anyway I thought about that thought, and the reasons behind it, and the reasons behind why I thought it was important to announce the thought and and and all damn weekend because I have never been guilty of underthinking anything. So, yeah, I am as self-obsessed as the next guy (if not more so) but it's because I actually put a lot of effort into doing these posts. I like them and I like people who read blogs (like you!) and moreover I like making stuff for people who like stuff because it seems like no one likes stuff anymore.

I put a lot of effort into everything I love to do, actually. I always have. But because I've always loved to do a lot of things and grew up with that perfectionist mindset, there's always been a pain underneath that love.

For a while I thought the cure was loving things less (because I have a broken brain and thought the lesson was that the world desensitizes us over time, like water smoothing a rock over the course of millennia), but over time I realized it was the inverse - that the cure is loving as many things as possible but doing fewer things with everything I have, and having more everything to give. As Jerry Maguire might say: "fewer clients, less money."

Anyway, the gist behind the gist is that the list of things I do with 100% of my whatever has gotten smaller as I've gotten older (and soberer, and healthier, and mellower, and happier) and that reducing has helped me understand something elementary and fundamental: love takes a lot out of a mf.

The colour wall at the top is all the colours that the Washington Wizards use in their uniforms, by the way. I was looking through hex codes for certain uniform colours the other day and found that one particularly soothing.

🌲 gonna
🌼 go
🌱 to
🌳 sleep
🌷 now

Be good to yourself.

If you enjoyed this post, click the little up arrow chevron thinger below the tags to help it rank in Bear's Discovery feed, and maybe consider sharing it with a friend or on your socials.

I caught Google Gemini using my data—and then covering it up

2025-11-18 08:14:00

I asked Google Gemini a pretty basic developer question. The answer was unremarkable, apart from it mentioning in conclusion that it knows I previously used a tool called Alembic:

[Screenshot: Gemini's answer]

Cool, it's starting to remember things about me. Let's confirm:

[Screenshot: Gemini's reply to a direct question about what it remembers]

Ok, maybe not yet.

However, clicking "Show thinking" for the above response is absolutely wild:

[Screenshot: the response's "Show thinking" trace]

I know about the “Personal Context” feature now — it’s great. But why is Gemini instructed not to divulge its existence? And why does it decide to lie to cover up violating its privacy policies? I’m starting to believe that “maximally truth-seeking” might indeed be the right north star for AI.

Send us your bear cards!

2025-11-18 01:50:00

A bear card is a tiny pixel art postcard that you can send to your fellow bear bloggers as appreciation, use to represent your own blog, or just use to make silly things on fun tiny canvases. Grab a template or go to the gallery to learn how to make your own and view the current collection.

[Image: an example bear card]

how it started - pixel cliques

Ava mentioned the teacups pixel clique and how it would be nice to have another one for bear blog. I didn't know what a pixel clique was, so I found 32-Bit Cafe's description:

Web cliques (shortened to just "cliques") are usually split into a couple of variations: either the clique is a link that goes back to the main clique website which then lists off all the members of the clique (a text clique) or there is an interactive element, such as creating pixel art, displayed on your own website that then links to other members in the clique in order to join (a pixel clique).

So joining bear cards simply means you submit your own unique postcard to the gallery, like the one above.

appreciation, expression and a go-to canvas

But I wanted a bit more than a pixel clique. I also wanted to make cards for other people, and found it fun and challenging to represent someone else's blog in just one card. I haven't started to make many cards for other people yet, but I think this, along with emails on bear blog, could be a nice excuse to not barge in and say I made fan art.

So bear cards, in addition to showing your designs, can also be sent to other people, with a "from" and "to" tag at the bottom, when displayed on your website or included as details in an email to be featured in the gallery.

I also found that this was a good medium to do the pixel art equivalent of shitposting. Like PICO-8's limited canvas and tools, the bear card template became a quick sketchbook to put down some ideas, or just explore how much can be conveyed in such a small canvas size, without worrying about colours, layers or details. It's nice to have a standardized set of constraints, thinking about how to use those constraints or even exploit them when I'm bored or procrastinating. So I've accordingly limited the canvas size to 88x62, and limited the colour palette to PICO-8's extended palette. You can choose your own palette, but I hope you pick a limited one, so you enjoy it as intended (more on breaking the rules of the bear card at the bottom of the gallery page).
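
If you'd rather start in code than in an editor, here's a tiny Pillow sketch of a canvas under those constraints. The colour list is PICO-8's standard 16-colour palette (the extended palette mentioned above adds 16 more, not reproduced here), and the workflow is just my guess, not part of the official template:

```python
from PIL import Image

# PICO-8's standard 16 colours; the extended palette adds 16 more.
PICO8 = [
    "#000000", "#1D2B53", "#7E2553", "#008751",
    "#AB5236", "#5F574F", "#C2C3C7", "#FFF1E8",
    "#FF004D", "#FFA300", "#FFEC27", "#00E436",
    "#29ADFF", "#83769C", "#FF77A8", "#FFCCAA",
]

card = Image.new("RGB", (88, 62), PICO8[7])  # an 88x62 bear card canvas
card.putpixel((44, 31), (255, 0, 77))        # one #FF004D pixel, dead centre
# Scale up with nearest-neighbour so the pixels stay crisp when shared.
card.resize((88 * 8, 62 * 8), Image.NEAREST).save("bear_card.png")
```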

I hope you will find bear cards as alluring as I do. I can't wait to see your cards.



Verifiability

2025-11-18 01:00:00

AI has been compared to various historical precedents: electricity, the industrial revolution, etc. I think the strongest analogy is that of AI as a new computing paradigm, because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job you'd look at is specifiability, i.e. are you just mechanically transforming information according to a rote, easy-to-specify algorithm (examples being typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of that era allowed us to write (by hand, manually). I call hand-written programs "Software 1.0".
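
To make "specifiable" concrete, here is a minimal sketch of the kind of rote information transform a hand-written program of that era could fully capture (the ledger format is invented for illustration):

```python
# Software 1.0: the whole behavior is an explicit, hand-written rule.
# Bookkeeping reduces to a mechanical fold over a ledger.
def balance(transactions: list[tuple[str, float]]) -> float:
    """Sum a ledger of ("credit" | "debit", amount) entries."""
    total = 0.0
    for kind, amount in transactions:
        total += amount if kind == "credit" else -amount
    return total

assert balance([("credit", 100.0), ("debit", 35.5)]) == 64.5
```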

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago. In this new programming paradigm, then, the most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about to what extent an AI can "practice" something. The environment has to be (a toy sketch of these properties follows the list):

  • resettable (you can start a new attempt),
  • efficient (a lot of attempts can be made) and
  • rewardable (there is some automated process to reward any specific attempt that was made).
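
Here's a toy sketch of those three properties as code (the class name and the guess-a-number task are my own illustration, not from the post):

```python
import random

class ToyEnvironment:
    """A verifiable task: resettable, efficient, rewardable."""

    def reset(self) -> None:
        # Resettable: every attempt starts from a fresh problem instance.
        self.target = random.randint(0, 9)

    def attempt(self, guess: int) -> float:
        # Rewardable: an automated check scores the attempt; no human
        # judge is needed in the loop.
        return 1.0 if guess == self.target else 0.0

# Efficient: attempts are cheap, so "practice" is just a tight loop
# that an optimizer could run millions of times against the reward.
env = ToyEnvironment()
total_reward = 0.0
for _ in range(10_000):
    env.reset()
    total_reward += env.attempt(random.randint(0, 9))
print(f"random policy reward rate: {total_reward / 10_000:.2f}")  # ~0.10
```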

The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out of the neural net's magic of generalization, fingers crossed, or arrive via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs. Tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative and strategic tasks, and tasks that combine real-world knowledge, state, context and common sense).

  • Software 1.0 easily automates what you can specify.
  • Software 2.0 easily automates what you can verify.

The thing that libs don't realize about means testing

2025-11-17 09:47:00

...is that it is insanely annoying!!

I am now signed up for unemployment insurance in the state of California. I spent a lot of time on the phone recently trying to submit, and then correct, my application for it. This is the first time I have been laid off in my life (I had a 15-year streak of dodging layoffs by leaving for a new job immediately before they occurred!) and therefore this is the first time I have been eligible for UI in any part of the US.

I've been against aggressive means-testing for a long time because it seems stupid, unfair, and a waste of energy to restrict our social safety net with massive bureaucratic surveillance programs. I knew from my friends that it is also unbelievably annoying; I now have that experience myself. It is actually more annoying than doing my taxes, which is crazy. (I'd assumed that it would be about exactly as annoying as doing my taxes.)

The forms are huge, the websites are oddly laid-out, and everything is written in the strangest, densest language possible. The phone trees are word puzzles - every verbal question they pose will have multiple long asides referring to possible exceptions, often listing out specific document names or federal programs. By the time the phone tree completes a question, I sometimes have a hard time remembering how the sentence began.

I had the experience of being manually marked as "unable or unwilling to work full time", which I needed to call and correct; the guy at the EDD office had no idea why I'd been marked that way and simply manually reversed it. This is when I learned that my ability or willingness to work within the CA unemployment insurance system is not something that I can attest to positively myself, as the worker in question; it's something that a stranger on the back end of the EDD's tech decides about me and manually inputs after reviewing my material.

Anyway, I was on the phone solving that and other problems for hours. Amazing!

That's the whole blog post today: means testing is more annoying than doing your taxes. In my experience, it was about exactly as annoying as getting challenged by health insurance. On a stress level, it is even comparable to the time that airport security tried to take all my diabetes equipment away from me, which was possibly the most terrifying and morally objectionable bureaucratic challenge I've navigated in my life. (I was, I think, sixteen.) That's it! That's the post!!