Bear Blog Trending Posts

Ranked according to the following algorithm: Score = log10(U) + (S / D * 8600), where U is the post's upvote count and S/D is a time term.
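A minimal sketch of how that score might be computed, assuming U is the upvote count and S and D are the time-related terms the description leaves undefined (their meaning and units here are assumptions for illustration only):

```python
import math

def trending_score(upvotes: int, s: float, d: float) -> float:
    """Toy version of the stated formula: log10(U) + (S / D * 8600).

    `upvotes` is U; `s` and `d` stand in for the time terms S and D,
    which the feed description does not define, so their meaning here
    is an assumption. A guard avoids log10(0) for unvoted posts.
    """
    return math.log10(max(upvotes, 1)) + (s / d * 8600)
```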

ai can't write like me

2026-02-20 11:09:00

AI can't write like me. It can't write like you. It can't write like any human, really. The only thing it's good at is bullshit PR speak, which is meant for companies that are not human. This is its purpose.

AI can write legibly. But it won't go off on a random tangent halfway through a blog post (certainly not with parentheses around it) and then reel itself back in.

AI isn't funny. It has all the charisma of a caricature of a Millennial written by an aging Hollywood executive whose best stand-up is a series of shitty knock-knock jokes.

AI can't showcase your passions like you can. It can't connect with a passion that has been forming since childhood.

AI can't be real with you. It'll shove out the most palatable thing because it is programmed to statistically write what YOU think you'll like. It doesn't consciously wonder about how its words will be perceived.

I think there are a lot of people who are worried AI will copy their style. Which leads me to believe that these people sanitize themselves.

Writing is art, and in order for something to be art it has to tap into the human experience. That's why conservative art is so dogshit: it's reactive rather than contemplative.

AI won't be able to capture your essence. Something will always be off about it. Even if it comes close, there's something in the way that will stop it from actually sounding human. It can only copy and paste short snippets at best, but it can't write something original from its own mind.

AI can't write like me, and it sure as hell can't write like you.




as of writing this...

Just got done playing Halo CE with my father-in-law. We're doing a run of all the Halo campaigns (going from Reach, CE, 2, 3, ODST). We beat Reach not too long ago, and I had the idea of us doing CE couch co-op. Then for Halo 2 we'll do it through MCC so he can see the updated graphics (since he's never played MCC).

Also put in an order for a 7th gen iPod, got a good deal on it. About $20 below normal pricing. Looked like it was in good condition, and all I needed was the motherboard anyway. Also got a new stylus for my turntable. Hopefully it'll be a nice upgrade. It hasn't come in yet, but I'll probably post something about it on my microblog when it gets here.

Did lots of drumming today. Had fun jamming to various songs and trying to improvise drum beats to songs I haven't fully studied. "Ain't It Fun?" by Paramore, "Armatage Shanks" and "The American Dream is Killing Me" by Green Day, and "Up From the Bottom" by Linkin Park were what I played.

The world isn't ending

2026-02-20 06:26:00

Every old Christian person I know (which is basically my entire family) thinks it's the end of the world. They're convinced that we're living in "perilous" times and that it's a wrap. Get ready. Christ is coming! (Although they say this every decade.)

Even outside religious circles, I've noticed this energy of doom with everyone. People's views of the future are bleak. They're not sure what's on the horizon, but they don't think it's anything good. Everyone seems to think we've reached the "end" of something, and even though they might not say the quiet part out loud, you can tell they're thinking it.

But honestly, I'm tired of the doom and gloom. It feels very first-world/Americanized, as if there aren't other people and countries who are already experiencing their own end of the world (Palestine, Sudan, Ukraine).

And speaking of Ukraine, all this reminds me of an article I read when the war first started there. When the whole world was still in shock, The Guardian1 published an article where they interviewed Ukrainian college students about their thoughts on the war. Their outlooks were bleak, with many of them saying their futures "are gone now."

They all thought that this was the end. But, four years later, we all know that the world didn't end for Ukraine. They're still fighting and dying, and the reality is that, when things go wrong, the world doesn't end; it just keeps going. The only difference is that reality changes to a newer, darker, shittier one. And, frustratingly, everyone just has to learn to deal with a new normal.

About a year after the war in Ukraine started, I had a client from Ukraine reach out to me for help with their website. It was a basic Shopify site selling some beauty product, but I remember her telling me in an email that sometimes her country's Internet goes out and not to worry if I don't hear from her for days. I thought, "Wow, I can't imagine living that way," but that was the everyday reality for her and millions of others. Their world didn't end; they just have to dodge bombs now while the Internet cycles in and out. Oh, and they still have to go to work, too.

The last part is important because I suspect a lot of people are secretly hoping for the end of the world so they can get out of work, which sounds ridiculous, but it makes sense. I remember in 2024 when everyone was freaking out about drones and people were questioning if they were from another planet (some jokingly, some seriously). I came across a comment that said, "If the aliens invade, can they do it before I have to go to work on Monday?"

All of this just feels like suicidal ideation on a global scale: we just want something to take us away from our lives. The reality is that we're not special, and entire worlds won't end for us the moment we're inconvenienced.

And that Ukrainian client of mine? She never got back to me.

  1. I unfortunately can't find the article for the life of me.

Saying goodbye to Discord.

2026-02-20 00:55:00

I’m going to delete my Discord account, that’s all but certain. I just haven’t gotten round to it yet. The moment BlueSky asked me to complete age verification I deleted my account. There was no hesitation and even a sense of relief. I’d been pushed into doing something I wanted to do anyway. It’s different this time.

Although I’m only part of one Discord community now, that’s exactly what it’s been: a community. Sure, there are features all in one place that many of the alternatives don’t have in full, but that’s not what I’ll miss.

Being part of a community that has its own history, its own inside jokes and unique relationships isn’t something that can be replicated. Some people will stay, some will move to one platform, others will land somewhere else, and there are those who will give it up altogether. The community I’m part of is centred around a podcast that I’m a fan of, so it’ll no doubt keep going, but I won’t be a part of it.

So whilst there are alternatives, I’m not after an alternative. I want to preserve what I’ve loved being a part of, but things won’t ever be the same. There’s not really a point to this, just putting down how I’m feeling.

Hails!

a [revised] solarpunk ai manifesto

2026-02-20 00:15:00

technology for earth, community, and the future


note: this text revises an earlier version of the solarpunk ai manifesto. the original remains archived here. this revision reflects further research, sharper language, and a more careful grounding of claims. parts of the earlier manifesto leaned on directional signals and aggregated summaries. for instance, an industry sector pie chart in the previous version suggested a level of quantification that the evidence does not firmly support. this revision removes that visual and grounds the argument more carefully in available research. i made this revision because ideas evolve, and so should their articulation.


ai today: a turning point

the current path: we are seeing increasingly centralised ai systems, expanding data centre infrastructure, rising energy and water demands, and compute- and capital-driven rather than human- and environment-centric use of ai.

result of the current path: mounting ecological and social pressures. if this type of growth continues without constraint, we are in trouble.

a truth: this trajectory is not inevitable. energy use (or sources), governance, and deployment models are political design choices, not natural laws.[1][2] the current reality of ai is shaped by poor judgment, horrendous decision-making, and tech-bro economic policies tethered to late-stage capitalism within the current technocratic and algorithmic dictatorship.[3]


social media and ai: a structural tension

today, social media platforms are major users of applied machine learning systems.

recommendation systems, ranking models, ad targeting, content moderation, and notification systems all rely heavily on machine learning infrastructure. exact global shares of ai usage by sector are not publicly disclosed, but personalisation and recommendation systems are among the most widely deployed ai applications worldwide.[4][5]

across billions of users, this results in continuous large-scale inference workloads being distributed across global data centre networks.

most platforms optimise for engagement, advertising yield, and growth. the ecological costs are rarely a primary optimisation metric. profit before the environment and people seems to be the persisting model.

environmental pressures

  • energy demand: data centre electricity use is rising rapidly. multiple analyses suggest ai-related workloads are a significant driver of projected growth in global data centre electricity demand through 2030.[6][4] the electricity required is massive and the demand continues to grow without restraint.
  • water usage: data centres require substantial water for cooling in many regions. measurement and disclosure remain inconsistent.[2] one should ask why we would allow these data centres to exist where water is a sacred resource, such as in queretaro, mexico (where a data centre boom is occurring). there is already plenty of water insecurity in queretaro, yet again ... capital before the environment and people.
  • carbon emissions: indirect emissions associated with large technology firms have increased in recent years alongside ai expansion, although exact attribution to ai versus other cloud services varies by report.[7] needless to say, in places whose primary energy sources are not clean, this will remain a problem.
  • material throughput: hardware acceleration cycles increase demand for specialised chips and infrastructure, contributing to embodied carbon and electronic waste. furthermore, this increases demand for these resources, and imperialist extractivism is the result.

social pressures

  • behavioural optimisation: algorithmic feeds shape attention patterns and emotional responses. research documents measurable psychological and behavioural effects of recommender systems and social ranking mechanisms.[8]
  • opportunity cost: high performance ai talent and compute resources are disproportionately concentrated in advertising, finance, and consumer platforms.
  • centralised control: compute, capital, and model development remain highly concentrated in a small number of corporations.
  • extractive data practices: large-scale data mining has relied heavily on scraping text, images, code, and creative work without explicit consent from authors. generative systems are often trained on cultural production that was never offered for machine replication. this raises ethical tensions around authorship, attribution, and the transformation of creative labour into raw training material.
  • creative asymmetry: while models can remix, imitate, and synthesise artistic styles at scale, the original creators rarely share in the economic or infrastructural benefit. even outside of formal copyright frameworks, there remains a question of consent, acknowledgement, and reciprocity.
  • erosion of authorship: generative ai complicates the boundary between inspiration and extraction. creative communities depend on shared culture, but they also depend on recognition of contribution. when creative works are absorbed into opaque training pipelines without dialogue, the social fabric of artistic production is strained, creating greater tension between authors and users.

note: as an anarchist, i reject all rigid intellectual property regimes. yet rejecting copyright does not require accepting unconsented appropriation. authorship can be understood not as ownership in a capitalist sense, but as a form of relational integrity. creative work emerges from lived experience, labour, and context. its incorporation into machine systems without consent or acknowledgement presents a moral tension that cannot be dismissed, even when it is framed as a technological inevitability.

it is reasonable to argue that a substantial share of frontier ai deployment today is directed toward advertising, recommendation, and consumer optimisation rather than ecological restoration. much of this development has relied on large scale scraping and ingestion of cultural material without explicit consent from creators. practices that would be contested or restricted at an individual level are often normalised when conducted at corporate scale. this asymmetry raises serious ethical concerns about power, accountability, and who bears the consequences of extraction.

the issue is not simply legality. it is structural. corporations operate within regulatory grey zones, while individual creators rarely have the resources to challenge appropriation of their work. the result is a system in which creative labour is absorbed into industrial-scale machine systems with limited transparency and little reciprocity.


green and solarpunk ai principles

  • local first: deploy lightweight, decentralised systems where possible.
  • renewable roots: power ai infrastructure with verifiable renewable energy sources wherever feasible.
  • small is beautiful: prioritise efficient, specialised models over unnecessarily large general systems.
  • open commons: share tools and knowledge for ecological and community use.[9]
  • carbon and water transparency: consistent public reporting of energy, water, and embodied material impacts (a toy reporting record is sketched after this list).
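as a purely illustrative sketch, not something proposed in this manifesto or drawn from any existing standard, the transparency principle above could be as mundane as a shared record that every deployment publishes; the field names below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImpactReport:
    """hypothetical per-deployment disclosure record (illustrative only)."""
    deployment: str            # name of the model or service being reported
    period: str                # reporting window, e.g. "2026-q1"
    energy_kwh: float          # electricity consumed during the period
    water_litres: float        # cooling water drawn during the period
    embodied_co2e_kg: float    # amortised share of hardware manufacturing emissions
    renewable_fraction: float  # 0.0 to 1.0, verifiably renewable share of energy

# example: a small community-run model publishing its quarterly footprint
print(ImpactReport("community-model-v1", "2026-q1", 1200.0, 300.0, 45.0, 0.92))
```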

from dirty ai to green ai

dirty ai → green ai

  • centralised and opaque → transparent and accountable
  • growth-driven optimisation → ecological constraint-aware design
  • compute maximisation → efficiency and sufficiency
  • concentrated ownership → community and public governance

ai usage across sectors

public reports do not provide a precise global breakdown of ai compute by sector. however, surveys of enterprise adoption show high uptake in marketing, sales, operations, finance, and customer experience functions.[4][5]

energy, agriculture, biodiversity monitoring, and climate applications represent a smaller but growing portion of documented ai deployment.

the imbalance is qualitative rather than precisely quantified. commercial optimisation dominates current large scale deployment of ai systems. ecological and public interest applications remain comparatively underfunded.

this reflects the political economy of the industry. as ai capabilities scale, so do valuations and executive wealth. capital concentrates upward while ecological crises intensify. intelligence at planetary scale, artificial or otherwise, is presently engineered to multiply capital more reliably than it multiplies care, resilience, or shared wellbeing.

we can do better than this.


solarpunk reallocation vision

currently dominant uses → possible reallocation

  • engagement ranking systems → local renewable grid optimisation
  • ad targeting → biodiversity monitoring
  • high frequency consumer prediction → climate adaptation modelling
  • attention engineering → public knowledge infrastructures
  • extractive data mining → community health and resilience systems
  • speculative financial modelling → cooperative economic planning tools
  • automated consumer persuasion → participatory civic decision systems
  • proprietary black-box models → transparent and auditable community models
  • centralised hyperscale data centres → distributed, community-scale compute

our future: a solarpunk ai world

  • ai assisting solar microgrid balancing and local storage optimisation (a toy dispatch rule is sketched after this list).
  • ai supporting forest monitoring, ecosystem restoration, and ocean observation.
  • citizen science networks augmented by open tools.
  • community-owned models trained with constrained, transparent energy budgets.
  • ai systems embedded in regenerative agriculture and low-energy housing design.
  • ai assisting in educational practices, encouraging exploration and curiosity rather than presenting shortcuts or replacing human memory and recall systems.
  • decentralised low-energy compute models where individuals control their data and dataflow.
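to make the first bullet above concrete, here is a deliberately tiny sketch, my own illustration rather than anything proposed in this manifesto, of the kind of local decision a microgrid controller makes each hour: charge storage when solar exceeds load, discharge when it falls short. all parameter names and values are assumptions:

```python
def dispatch_battery(solar_kw, load_kw, soc_kwh, capacity_kwh=10.0, max_rate_kw=3.0):
    """toy rule-based dispatch for one hour of a small solar microgrid.

    returns (new state of charge in kwh, grid import in kwh); negative grid
    import means surplus that is exported or curtailed. illustrative only.
    """
    surplus = solar_kw - load_kw
    if surplus >= 0:
        # charge the battery with surplus solar, limited by rate and headroom
        charge = min(surplus, max_rate_kw, capacity_kwh - soc_kwh)
        return soc_kwh + charge, -(surplus - charge)
    # cover the deficit from the battery first, then fall back to the grid
    discharge = min(-surplus, max_rate_kw, soc_kwh)
    return soc_kwh - discharge, (-surplus) - discharge

# example: a sunny midday hour charges the battery, an evening hour drains it
soc = 4.0
soc, grid = dispatch_battery(solar_kw=5.0, load_kw=2.0, soc_kwh=soc)  # grid == 0.0
soc, grid = dispatch_battery(solar_kw=0.0, load_kw=2.5, soc_kwh=soc)  # grid == 0.0
```

a learned controller would replace the fixed rule with forecasts of solar and load, but the point stands: this is small, local, auditable compute rather than hyperscale inference.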

these applications already exist in early forms within conservation technology networks and climate-focused ai initiatives.[9][10] but instead of being experiments in ai altruism, they need to be championed as the core goal.

another issue is that as compute models force centralisation and drag us further into the cloud, we are seeing the tech industry turning into a cloud-oligopoly effectively attacking local computing[11] and therefore decentralisation. without decentralisation, the utopian dreams of a solarpunk world assisted by ai are not possible. so one immediate milestone is to decentralise the world. decentralise the future.


closing remarks

we must appropriate machines not be appropriated by them.
guattari (1989)

the very notion of the domination of nature by man stems from the very real domination of human by human.
bookchin (1982)

ai systems can either intensify extraction or support regeneration. they can remain centralised behemoths, corporately controlled with minimal communal input, or become decentralised, publicly accountable, and collectively shaped. the direction depends on governance, infrastructure, and shared priorities.

the choice is still open, though the window narrows as infrastructure hardens and power consolidates.

avoiding ai will not protect us from it. we need to understand it. we need to experiment with it at human scale. small projects, local experiments, community tools such as my own thought experiment, the solar grove. literacy is a form of agency, and we need agency in this game.

democratising this technology means more than open access. it means redistributing knowledge, compute, and decision-making power. it means building systems that serve ecological stability and communal resilience rather than short-term and destructive extraction.

ai should not remain concentrated in the hands of a narrow technical and financial elite. like the early internet, it can either consolidate into enclosed platforms or evolve into a distributed commons.

the future of intelligence is not predetermined. it is negotiated. we need to play our part in this negotiation.


references

  1. strubell, e., ganesh, a., & mccallum, a. (2019). energy and policy considerations for deep learning in nlp. acl proceedings. https://doi.org/10.18653/v1/P19-1355

  2. oecd. (2022). measuring the environmental impacts of artificial intelligence compute and applications. https://www.oecd.org/

  3. hart, m., bavin, k., & lynes, a. (2025). artificial intelligence, capitalism, and the logic of harm: toward a critical criminology of ai. critical criminology, 33, 513–532. https://doi.org/10.1007/s10612-025-09837-0

  4. stanford institute for human centered artificial intelligence. (2025). ai index report 2025. https://hai.stanford.edu/ai-index

  5. mckinsey & company. (2024). the state of ai in early 2024. https://www.mckinsey.com/

  6. international energy agency. (2024). electricity 2024: analysis and forecast to 2026. https://www.iea.org/

  7. reuters. (2025). reporting on indirect emissions growth among major technology companies linked to data centre expansion. https://www.reuters.com/

  8. nature communications. (2022). research on social media, machine learning, and behavioural effects. https://www.nature.com/

  9. climate change ai. (2019). tackling climate change with machine learning. https://www.climatechange.ai/

  10. wildlabs. conservation technology community network. https://wildlabs.net/

  11. velazquez, j. the computational web and the old ai switcharoo. https://www.fromjason.xyz/p/notebook/the-computational-web-and-the-old-ai-switcharoo/

Tiny pixel bears (Bearblog Creation Festival)

2026-02-19 23:01:00

For the Bearblog Creation Festival, I drew tiny pixel bears1 inspired by the blog themes and art of fellow Bearbloggers. Can you guess who inspired each of these?

P.S. Some bears look better in dark mode. You can toggle a dark theme with the menu in the upper right.


Blue pixel bear

Purple pixel bear with a pink butterfly

Pixel bear in a strawberry costume with a bow tie

Pixel bear in a tree costume

Pixel bear in sunglasses with an alien and button pin

Colorful pixel bear with a glass of fizzy drink

Pink pixel bear in a bow tie with a cat

Pink pixel bear in a yellow beanie with a rainbow and sparkles

Light blue and purple pixel bear with a clock and sleeping bunny

Poppy pixel bear with a cat

Pixel bear with snowflakes

Purple pixel bear with pixelated edges and bright highlights

Green and pink pixel bear with kitty whiskers and a bow

Mauve pixel bear with eyes closed and little snores

Pixel bear with circuit pattern and sparkles

Pink pixel bear with a flower

Grayscale pixel bear with tentacles

Teal pixel bear with a camera

Gray pixel bear in a hoodie with a blue bird

(I had a lot of fun drawing these and might add more later.)


==I welcome others to make these too! Here's a template if you're interested. :) (Let me know if you create one!)==

  1. The pixel bears are 48×48 but enlarged to 144×144 here. I think they're more fun to look at big.

I'm Not Reading That

2026-02-19 16:30:00

You're absolutely right! In today's rapidly evolving digital landscape, the way we create and consume content is undergoing a seismic shift — one that represents a pivotal moment in the broader conversation around authenticity. It's not just redefining what it means to write: it's fundamentally reshaping the very fabric of creative expression as we know it.

I'm so sorry you had to read that, but apparently an increasing number of people are not sorry to "write it". AI has brought the effort of manufacturing endless quantities of poetic slop down to nearly zero.

Everywhere you turn now there is another AI slop post to read. At this point we've all but removed the humanity from platforms like LinkedIn, if there was ever much there in the first place.

The issue with AI content is not that it inherently reads badly. It's that for all of its ability to produce elegant prose, it does so at the total expense of any actual meaning.

The most offensive part for me as a reader is that someone has spent less time and less original thought producing their content than they expect from the reader they want to engage. The disrespect of treating your reader to the output of your one-sentence prompt fundamentally undermines the purpose of sharing your thoughts in the first place. If you have something to say, say it. Don't let the AI speak for you. If I wanted to hear Claude's latest opinion, I'd ask it myself.

Of course we know why people resort to producing AI content: it's easy, and the goal is cheap engagement, usually to funnel you into some product or service. The irony is that when something has so blatantly been written with AI, it actually just undermines everything you're trying to do. I, like so many others, simply disengage entirely.

The other issue is that when everyone's feeding prompts into the same few models, you end up with a sea of content that all sounds identical. Millions of people publishing the same voice saying nothing. The whole point of putting your thoughts out there is that they're shaped by your experience, your perspective. Strip that away and what exactly are you contributing?

Now to be clear, I'm not talking about the odd use of AI to help rewrite a sentence or spellcheck - or even just for helping edit a draft. What I am talking about is the near wholesale production of AI slop. These tools are supposed to raise the bar, not lower it. Yes, your post looks beautifully written, but I'd frankly take the typoed, broken English version of a genuine, novel insight over ChatGPT's perfectly polished "nothing" any day of the week.

I'm all for AI being used to boost productivity. There is absolutely a time and a place for it, but producing the content that is supposed to represent your thoughts and your expertise? That isn't it. If you've got nothing original to say, no amount of AI is going to fix that. Authenticity is the new order of the day.