2026-02-01 04:56:48
In Quantum Mechanics, there are two worlds. One is the 'real' world which you and I understand and observe and can discuss - where a ball is, how fast it moves, etc. The other is 'quantum' and is the world of the very small - atoms, electrons, etc.
Quantum Mechanics is essentially a set of mathematics that physicists use to determine what will happen. They set up an experiment, execute it, and they get certain results. Quantum Mechanics is the math that tells them what those results will be.
According to the 'Copenhagen Interpretation' of Quantum Mechanics, this 'quantum' world is not real, and the math tells you what will happen in the real world, but does not tell you anything about the quantum world itself. According to [the] Copenhagen [Interpretation], the fundamental nature of reality is not to be understood. TL;DR: Copenhagen says: "Shut up and calculate" - the math works, regardless of what the nature of reality actually is.
There are various other interpretations of Quantum Mechanics, and there is a long history of disagreement within the physics community around this realm of interpretation. All the physicists agree that the math works, but disagree about what the math means about the nature of reality itself.
The funny thing about Quantum Math is that it can't actually predict what any one photon or electron (or other subatomic particle) will do. The math says: When you take a measurement, there is a 20% chance the photon will be in THIS position, a 20% chance it'll be in THAT position, etc etc. But the math can't definitively say which of those positions the one photon will be in. If you shoot out 10 million photons, then 20% of all the photons shot out will be in THIS position, 20% in THAT position. So the Quantum Math can tell you the statistical distribution of what subatomic particles will do, but can't tell you what one specific particle will do.
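(If you like code, here's a toy Python sketch of that idea. The positions and probabilities are completely made up for illustration; in real QM they'd come from the wave function of a specific experiment.)

```python
import random
from collections import Counter

# Made-up odds for where one photon lands when measured.
probabilities = {"position A": 0.2, "position B": 0.2, "position C": 0.6}
positions = list(probabilities)
weights = list(probabilities.values())

# One photon: the math gives you odds, never a definite answer.
print("this photon landed at:", random.choices(positions, weights)[0])

# Ten million photons: the totals match the predicted distribution.
shots = random.choices(positions, weights, k=10_000_000)
for position, count in sorted(Counter(shots).items()):
    print(f"{position}: {count / len(shots):.1%}")
```

Run it a few times: the single-photon line changes unpredictably, while the percentages at the bottom barely budge.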
It's furkin weird. Classical physics doesn't work this way. If you throw a ball with X power in Y direction, the math can tell you exactly where the ball is going to go. If you throw a million balls this way, they'll all follow the exact same trajectory (at least if we ignore outside forces like wind). But the subatomic particles just don't work this way, and it's because they act like waves. The wavy nature of subatomic particles causes this strangeness, but I'm not going to dive into that in any detail here. You can read the book if you care. It's fucking excellent.
So anyway, one of the other interpretations of Quantum Mechanics is the "Many Worlds Interpretation". Like I said before, the one single photon has a 20% chance of going HERE, a 20% chance of going THERE, etc. WELL, in [the] Many Worlds [Interpretation], the photon goes both places, and in doing so creates alternate realities, alternate universes. There is a universe where the photon goes HERE, and there is a universe where the photon goes THERE. It turns out, the human observer only descends into one of these universes (and there are other human observers who descend into the other universes).
There are other viable theories as well that challenge the Copenhagen interpretation.
One of the wildest things about Quantum Mechanics is non-locality - the fact that some things happen faster than the speed of light. This violates Einstein's theories of Relativity, which say NOTHING can go faster than the speed of light. This has been validated experimentally too. Though, depending on your interpretation, you might be able to argue that speed-of-light (locality) is respected. I believe Many Worlds keeps locality intact.
Another wild thing is - the subatomic "particles" might not be particles at all. The (I think) popular conception of an electron is that it's a tiny little ball, even if it acts like a wave sometimes. But there might not be any actual "particles" at all. I think this is still an open question. But either way, the math works!
The exploration of the physics itself is incredibly interesting to me, and is the main reason I checked out this book from the library. It's one of the best written books I've ever read, and it's on a topic that is just SO INTERESTING to me.
But what surprised me about this book is the HISTORY. It primarily covers the period from about 1900 to about 2000. Not much seems to change from 1980 to 2018 in terms of the central conflict of the book - the Copenhagen Interpretation (and unwillingness to consider other interpretations) versus other interpretations (and the desire to continue investigating the fundamental nature of reality).
One of the most interesting history lessons in here is how Nazi Germany and World War 2 influenced the development of physics.
Einstein was a Jew, and so were many other prominent physicists of the time. Before Hitler came to power, German was the primary language used within the physics community. Because of Hitler's regime, a lot of Jewish scientists were pushed out of Academia, many going to the U.S. Through all of this, English became the new standard language in the physics community.
The War also caused a fundamental change in the physics community. Beforehand, there were about 400 physicists worldwide (the book does not discuss the Eastern World, so I assume this figure regards the West). But the War led to the development of new weapons (nuclear bombs) and new power sources (nuclear energy), among other developments. Funding for physics ballooned from about $7 million a year to about $400 million a year, and research in physics became much more singularly focused on practical applications.
The philosophy of physics, before the war, was an incredibly prominent feature within the physics community - the debates about the interpretations I discussed above. But the university curriculum changed with the new focus on pragmatism, and the philosophy side was lost. Textbooks started teaching the Copenhagen interpretation as if it were the one true philosophy, and other interpretations were almost entirely ignored. This is apparently still an issue today.
Also - Universities in the U.S. had "Jewish Quotas" starting around the 1920s, where Universities would only permit up to a certain number of Jewish students each year. Stanford's policy on this ended in the early 1960s.
Further, one scientist - David Bohm - presented an alternative to Copenhagen called 'Pilot Wave Theory', which I'm not sure how to explain. But the point is, it's an alternative to Copenhagen, and it was buried for many years because of fascism in the United States.
Bohm had spent some time in a communist group on campus. He eventually left because the group was basically pointless and didn't do anything useful. But this was during the Cold War. The House Un-American Activities Committee (HUAC) interviewed him. He was blacklisted and so was not able to get a job at any American university. He moved to Brazil, and the U.S. revoked his passport so he wouldn't be able to travel to Europe for physics conferences and such.
He did write a paper about Pilot Wave Theory and submitted it to a physics journal, but he wasn't able to go discuss and advocate for his theory due to the U.S.'s fascism. He eventually became a Brazilian citizen and was able to go to Europe with his Brazilian passport.
So yeah. The history is incredibly interesting. It's peppered in this book in a fantastic way. The story of physics is laid out so well. This is one of my all time favorite non-fiction books, and I give it a big recommend.
I have a couple ... are they complaints? notes? Whatever.
I've heard that Einstein's wife was actually responsible for a fair amount of his physics work. The book did not discuss this at all. I don't know if it's true. I'm curious.
Within physics there are Quarks. I've read a little bit about Quarks before (in a Michio Kaku book iirc), but I don't remember it well, and I was hoping this book would cover that. It didn't. I believe Quarks are a subatomic "particle".
Then there's String Theory. The book mentions String Theory but doesn't get into it. I'm curious and want to learn about it. I also had read about it from Michio Kaku, but like it was so long ago. I was a teenager and I'm 33 now.
I think there were a couple other science-theory things that I was interested in that this book didn't cover.
To be honest, these aren't really complaints. The book was about one (or two) thing(s) in particular - the history of Quantum Mechanics and the conflicts around the interpretations of Quantum Mechanics. These other topics probably didn't fit in all of that, and I'm not mad at that. They're just curiosities that were unsatisfied.
I am disappointed that this book didn't talk about the Einstein's Wife rumor. I wanted to learn if it was true, and learn some details about it. So that is a complaint.
Anyway. This is an absolutely fantastic book. I highly recommend it, if you have any interest in science. Some of the concepts are certainly difficult to understand, but I do feel the author does a good job of communicating them. Some shit's just complicated, yaknow?
2026-01-31 04:42:00
No Upvote Means No Upvote | Nick Hayes
I remember reading a post a bit back about someone finding a roundabout way of still upvoting a blogger, regardless of the measures the OP had put in place to stop getting upvotes. And I remember thinking: Is this okay?
To me, I’d feel like my very clear boundary was disregarded.
Iiii was the one who circumvented a hidden upvote button (unless someone else did it too). I also had those thoughts, and idk ... I just thought it was a silly little funny thing, and wouldn't be a big deal. I guess I don't know how the blogger in question felt about it, though, or if they even knew about it.
Boundaries don’t stop being boundaries just because they’re digital and “unseen”.
I'm inclined to agree with you, and I will probably not do this again.
2026-01-30 05:07:00
I've been reading about the history of Physics over the last 120 years, and a lot happens through scholarly journals. Physicists do experimental research, craft theories, come up with new mathematical formulas, and they write papers and submit them to scholarly journals. Other physicists read these papers, dismiss ideas they don't like (more on this in an upcoming book review), and build upon or test ideas they're interested in.
Much of the progress of science seems to be a result of these scholarly journals.
There was a somewhat rebellious group of physicists who formed a new semi-scholarly journal, meant as a sort of pre-publication venue for controversial ideas, so that thoughts & theories could be discussed even if they weren't a proper fit for the more established and orthodox journals.
And this SPARKED AN IDEA. An idea that may already exist in the world, idk.
But it would be so sick to have a journal like that for social change and activism. I think it would be best separated into two distinct journals.
One journal would focus on societal problems and what solutions are actually attempted. Like, take a city with a lot of gun crime, leaders implemented some new policies, and saw a reduction in that gun crime. An article would describe the issue in that city, what the policies actually did, and the outcome of those policies.
The other journal would be focused on the activism side - community members & activists advocating for policy changes. Take the same example of a city with high gun crime, and there's a group with policy ideas that would hopefully address the gun crime. An article in this journal would go over the process the group took to form itself, put together their proposals, and the advocacy efforts that eventually got city leaders to adopt those policies.
I have engaged in a fair bit of advocacy and activism myself, and there's at least three articles I would want to submit to the activist journal. One would be about an incredibly easy and successful effort to overturn a discriminatory housing policy. Another would be about getting Pride Fest off the ground in my community, along with the successes and failures of those organizing efforts. A third would be about an abortion rights group I co-organized, which was far less successful. There's possibly other articles I would be interested in submitting too.
Such journals would need to take certain stances about human rights, but that would get tricky. The racists are often extremely effective at making change happen. Leftists could learn from their efforts. But the journal shouldn't be promoting any sort of bigotry. So idunno where you'd wanna draw the line on that. It also shouldn't necessarily be leftist - a lot of important issues are decidedly not leftist - right to repair, clean air & water, access to healthcare.
Certain approaches are leftist, certain solutions are leftist, but not all successful approaches and solutions are, and so such a journal probably shouldn't be "leftist", even if it supports certain human rights that U.S. Republicans are opposed to.
2026-01-30 04:41:00
I often wonder where my soul goes when I sleep.
I don't know what a soul is, per se. I know I experience things. I experience sensations and thoughts and feelings. I often wonder why my meat sack (human body + mind) needs to be experienced at all. My meat sack could take in stimuli, process & encode it, and produce outputs. I don't see any reason why any of it has to be experienced.
So for me, the soul is ... either the experience itself, or the thing that does the experiencing. It very well may be a creation of the mind, or perhaps it comes from somewhere else and attaches to the mind. This "somewhere else" could be some 5th dimension we're not able to sense or detect. I don't know.
I also experience dreams. I find it incredibly likely that my mind is the generator of those dreams. But I also find it plausible that something more mystical is happening. My favorite idea is that my soul is generally attached to my mind/body, and then when I sleep it floats around to other planes and experiences other things. Kind of like changing TV channels.
I also wonder, in my dreams, do the other people have internal worlds like I do? Is there a soul experiencing from those other people's perspectives? Or are those people just figments of my dream hallucinations?
If they don't have souls, if they are imaginations, then why am I convinced that people in real life are ... well, real? Why am I convinced that my real life best friends experience their own meat sacks?
The entire concept of anything being "real" also blows my fucking mind. Like the fact that anything exists at all. So yeah, we have a universe and alla that. But where did the stuff of the universe come from? If it has always existed, umm, fucking how? The fact that there is any existence at all blows my fucking mind.
And so, I often wonder about the nature of reality, and especially consciousness. And for the large chunks of my sleep that are dead black, with no dreams, I wonder if my soul is elsewhere, and just completely detached from my body.
2026-01-29 04:53:00
As a teen, I considered myself a "Grammar Nazi". I would frequently correct people's grammar, asserting that I knew the one "right" way to speak.
In my early 20s, I learned about how (some) dictionaries are made. They look and see how words are actually used by regular people, then the dictionary records descriptions of what people mean when using those words.
I always thought language was prescriptive before that - an authoritative body says "this is the right way to use English" and then it was "smart" and "right" to use language that way.
Some dictionaries have worked this way historically, and there are writing guides that take a prescriptive approach, but I came to understand language differently. It was no longer a strict set of rules to be followed; language was a malleable thing created through everyday use by everyday people.
Those "rules" were just guidelines, suggestions. It was my choice whether I adhered to them or not. Other people get to make that same choice, and it doesn't have to be the same as mine.
Yesterday, I wrote about my thoughts on LLMs, and I took a tone of moral superiority, like I know what's "right" and what's "wrong". The stakes are higher when dealing with morals than with sentence structure, but I realized last night that it was a very similar mindset to my "Grammar Nazi" days.
I argued yesterday that it is immoral to use AI at all, because using it supports a deeply unethical system, even if the individual use isn't unethical in-and-of itself.
Morality is a tricky thing, because it depends on loads of underlying assumptions, and most actions in our modern world are deeply entangled with many complex systems.
Buying a tomato can be moral (good) because you're feeding yourself and your family, while being immoral (bad) because it depends on exploitative labor practices. But not buying a tomato means not feeding your family and letting a tomato go to waste - but hey, you didn't support exploitative labor practices!
Like I said, it's complicated.
But what is morality, anyway?
It's easy to feel there is an objective moral position sometimes, but just like dictionaries, I do not believe in a moral authority who prescribes what is right and wrong.
Morality really is just a set of personal values, and many decisions and actions don't fall perfectly in the "moral good" or "moral bad" camp, but often in some grey area, like with the tomatoes. Our values are often at odds with each other, and we have to make decisions about how to balance these conflicts.
I'd buy the tomato and feed my family, regardless of the downsides. I don't support exploitative labor, but I need to feed my family. (I don't have a family to feed, this is just a hypothetical)
So what does this mean with regard to my position on using AI? Quite frankly, I don't know.
I once had a reckoning with myself about being a "Grammar Nazi". And now I'm wondering if I need a similar reckoning with regard to morality.
But morality is not language, and morality has real-world consequences. I feel very strongly that the U.S. Government (ICE) should not murder people. I'm not going to say that my values on this are just "personal". I'm not going to let someone off the hook for feeling differently. I'm going to take the position of moral authority on that, and say that I'm "right" and if you disagree, then you're "wrong". I will not budge on this. I will not give grace on this. I will be prescriptive on this.
But on other issues, I'm not sure I'm comfortable taking such a strong stance about what's "right" and what's "wrong".
I've been having this same internal conflict with regard to eating animals. And now I'm having this internal battle about my prescriptive view regarding AI.
I thought I'd get to the bottom of my feelings in this post, and do some of that moral reckoning. But I've merely opened up some questions for me to explore. I'm curious where these thoughts will lead in the coming days, weeks, months, and years.
Will I have a reckoning about morality, like I did about language?
2026-01-28 04:20:00
I was reluctant to write this post because AI is already talked about a lot on Bear, and I've seen a complaint about us being sort of an echo chamber and needing other topics to talk about. That's a fair complaint, but fuck it. I blog, at least in part, to help me process, and if someone complains about it, that's okay.
I've been accused of "black and white" thinking by friends. I take some offense to this, because it is dismissive of my views. It's probably not entirely wrong, but it is dismissive.
I think it is morally wrong to use LLMs. I think many of the individual uses in-and-of themselves are not immoral, but I think any use of LLMs is immoral because it supports the further development of these LLMs, and the LLMs are deeply unethical.
To build an LLM, one requires massive amounts of training data. That is - all of the books, art, movies & shows, YouTube Videos, blog posts, websites, software, and basically every single thing humans have ever made. You may not care if your blog posts or book or art were consumed by LLMs without your consent. But other people do. We have longstanding copyright systems to prevent theft of creative works.
These laws have been entirely ignored when training LLMs. There have been some lawsuits about this, but it doesn't really change anything. It happened and the AI companies will keep trucking along. These works were stolen not for individual consumption like when you pirate a movie to watch at home, but so a machine could be built that will be used to replace human creativity, so that the few Capitalists (owners of these machines) can make exorbitant profits.
And then when you ask an AI for information, instead of you visiting a news website and reading journalism, you get a summary of that (stolen) journalism from an AI. When your Google search result shows you an AI answer, you're not clicking through to websites, so they're not getting your clicks, and Google is collecting all the ad revenue. Google is stealing both creative works and money from these websites through this process.
(I am actually PRO piracy, especially when it comes to works that have recuperated their costs. But theft for corporate profits and LLM use is not the same as theft for personal use.)
LLMs require a huge amount of processing power in order to be trained and in order to continue running. Many companies are building their own AI models and building data centers nationwide in the U.S. and probably globally. China is building data centers underwater to help keep them cool.
The raw materials to build data centers are already significant - building materials, GPUs, RAM, etc. The purchasing of computer parts for data centers has caused significant price increases for consumers buying computer parts. This part of it is the least of my gripes.
Data Centers require a significant amount of electricity, generate a lot of heat, and require a lot of cooling. This exacerbates climate change. The electricity is somewhat solvable by the development of more solar and wind energy, as well as the building of new nuclear energy. But in the short term, at least one data center has been caught using an illegal source of energy (methane-based I think?) that produces significant air pollution and health problems for nearby populations.
Further, what's actually been happening is that everybody's energy bills are increasing. There is too much demand for electricity, the grid can barely keep up, and so everybody's energy cost goes up. Data Centers don't pay for all of the increased demand. They get subsidized by everybody who has power in their homes. I suspect this increased-cost aspect will level out in a few years as energy capacity increases.
Next is the water. They use water for cooling, which depletes local resources. This is already an issue in farming where large agricultural producers over-use water, and smaller farmers and residents are hurt. We should not be taking people's water or polluting the air or exacerbating climate change, and using AI signals to these companies that they should continue doing these things. It signals to lawmakers that the people want it.
Republicans in the U.S. have been largely pro-data center, saying they create jobs. They lambast democrats for opposing data centers and ignore all of the downsides. But the job creation is mostly a lie. A lot of the jobs related to data centers are in the initial construction. A lot of the people who participate in that construction come in from other communities.
The influx of outside people puts a strain on local economies, drives up housing prices, and makes people's communities harder to live in, economically speaking. These construction jobs are short-term. Once the data centers are built, they are not great sources of jobs, yet they continue to pollute the local environment, drive up energy costs, and use tons of water.
And then there's the job losses and low quality coming from these LLMs. Many tech companies have laid off great numbers of programmers and are boasting about the amount of code they write using AI. Oftentimes these same companies have seen increases in bugs and security vulnerabilities in their code. Lawyers have been caught using AI to write legal briefs in which imaginary legal cases are cited.
Game developers are using AI to generate 3D assets, taking jobs from artists. Large corporations are using AI to generate commercials, taking jobs from artists. Corpos are also generating highly targeted ad campaigns using a bunch of AI-generated alternatives to target different racial groups. Again, this takes jobs, but it also has the added dystopian aspect of giving large companies far more control over the population by manipulating us with infinitely malleable AI-generated ad campaigns.
I don't hate the idea of a world where we all work less and get a universal basic income or transition into a moneyless society or whatever. But (a lack of) AI isn't the reason we haven't done these things. Our global society has become exponentially more productive over the last 100 years, and yet our goal remains "full employment" or damn near it. The roadblock is social-political, not technological. Though, I do admit, if jobs are taken by AI, this could grow the social-political capital toward a post-work society.
Deepfakes are the most obvious danger. The ability to generate videos and images that seem entirely real, as a means to spread political propaganda and increase a government's fascistic control over its population. This is happening now. Misinformation isn't new, but the AI deepfake nightmare has escalated things. Further, it's unsettling as a regular person just watching videos and having a whole new layer of "is this real?" Like that already existed to some extent, but it's a whole nother level now. (also, Grok is generating child porn and this problem will not go away)
They lie, but you can't tell. LLMs communicate with a confident tone. You ask them a question, they give you an answer, and they sound authoritative. Apple had previously rolled out notification summaries and then rolled back the feature because it was giving false information and misrepresentations of the news and of communications with friends and family.
It is incredibly important to understand that LLMs do not know anything; that isn't how they work. When you generate an image, nobody thinks the AI "remembers" this scene and is showing you what it "remembers". You understand the image is generated, fake, a figment of "imagination". Well, text is the same way. It does not remember (stolen) books and encyclopedias and web pages. What it is doing is generating text. It's kind of like autocomplete, except farrr more advanced. This is not a recitation of knowledge, but a generation of words that seem correct for the given context based on complex underlying mathematical models.
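(Here's a toy Python sketch of that "advanced autocomplete" mechanic. A real LLM computes its probabilities with billions of learned parameters; this hard-coded table is invented purely for illustration.)

```python
import random

# A toy "language model": for each two-word context, a made-up
# probability distribution over possible next words. Nothing here is
# retrieved from memory or looked up as a fact.
next_word_probs = {
    ("the", "sky"): {"is": 0.9, "was": 0.1},
    ("sky", "is"): {"blue": 0.6, "falling": 0.3, "green": 0.1},
}

def generate(words, steps):
    # Repeatedly sample a plausible next word: weighted dice rolls,
    # not a recitation of knowledge.
    for _ in range(steps):
        context = tuple(words[-2:])
        if context not in next_word_probs:
            break  # this toy only knows two contexts
        probs = next_word_probs[context]
        words.append(random.choices(list(probs), list(probs.values()))[0])
    return " ".join(words)

# Usually prints "the sky is blue" - but about 10% of the time it
# prints "the sky is green", with exactly the same confident tone.
print(generate(["the", "sky"], steps=2))
```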
Of course, LLMs do get a lot right, with regard to information. But if you use them, you should understand that they're not operating from knowledge in the way you or I might.
I'm also utterly creeped out by the consumer-facing AI tools for images. The ability to change colors or remove people from photos or whatever else. It makes it incredibly easy for an (emotionally) abusive partner to lie to you about what happened. I don't want fake memories. I don't want my ex removed from the family photo after we break up. I don't want the stranger removed from the background. I don't want to be lied to about what my life was. But it is incredibly easy for anybody to forge a false reality through edited images now.
All of this stolen data, pollution, increased energy prices, depletion of resources, misinformation, and job destruction is purportedly to make our lives easier, but what it's really doing is replacing humanity.
I care about art - drawings, photos, videos, movies, poems (okay I'm not into poems), novels, etc - because people made it. I'm not interested in machine-generated art. The whole point of living is to be human and do human shit and connect with each other. If there is no outlet for human creativity (because all the stuff we're consuming is AI-generated), then there is no fucking point in being a human, in being alive. Of course, you can still draw and make silly little videos even if the AI has taken over Hollywood. But I just have no interest in the computer-generated "culture".
Next is communication. (FB) Messenger started prompting me recently to "summarize with AI" the last 3 or 4 messages from my best friend. In comments on FB, it tries really hard to get me to use phrases the AI has suggested for me to say.
If my friends were summarizing my messages with AI and sending me AI-written messages, I'd be really hurt. It's just a pretend humanity at that point. Actually processing MY words and coming up with YOUR words is human, and is a purpose of friendship and human connection. If you're using AI for this, that connection is fake.
We also have resumes being written by AI and job applications being reviewed by AI, and it's just all garbage and inhuman and cold and dystopian.
A lot of the ads for AI products have also been extremely anti-human and promoted basically being a piece of shit. One I remember was using AI to summarize documents you were supposed to read for a meeting, and faking your way through it, because you slacked off and didn't do your work. Another, much stupider one, is a dad with his kid racing a snail and a slug. The Dad asks his phone's AI to predict which will win. Just fucking hang out with your kid, bro.
And a small note about personal growth and learning - when you use an AI to "know" things for you, you do not learn and do not grow as a person. The shortcut you take to get the output you desire also skips the human aspect of putting in effort and learning and growing.
This is the part where I glaze AI a little bit.
AI has been advertised to watch you work out and give you tips and a workout plan, or to photograph your plumbing and tell you how to fix a problem. These are both very useful things. They're also anti-human, rely on stolen information, and are prone to giving you false information. But still, some potential usefulness.
Summarizing large documents - Let's say you want highlights from a school board meeting or a 200 page bill from the U.S. Congress. Summaries could be useful for regular people. But again, it is prone to error, and there are already journalists who do these things, so it lowers reliability and destroys jobs. But still, potentially useful.
Health - This may be more of a U.S. problem, but it can be really hard to get good treatment for whatever condition you may or may not have. Being able to "discuss" your illness with an AI and get information from it has the potential to be extremely useful. If you're someone struggling with a health condition and doctors have not been helpful, it's hard for me to hold it against you for using AI to get help. Again, it is prone to misinformation, and this brings in a whole new slew of privacy concerns. But I admit the utility. If you're doing this, please talk to your doctor about what the AI tells you, and don't do anything risky based on the AI's advice.
I think it also has the potential to be useful in law, especially for laypeople. But you get the point by now. Useful, unreliable, kills jobs, removes humanity, but possibly useful.
Some mundane tasks are made much simpler by AI. I complained earlier about photo editing. WELL. We've had digital tools for extensive photo editing for several years now. Experts had the ability to remove people from photos, add things to photos, create amazing CGI, and all kinds of stuff. Simplifying this creation process can make it easier for people to turn their creative ideas into products that can be shared with others. There is definitely utility to that. I still hate it.
I could go on about utility, but I'll stop here.
I am an AI hater. I have some very practical concerns about AI in regard to Earth's resources and the elimination of jobs and information problems. I also have deep ethical concerns about the stolen works and violation of consent, as well as the erasure of humanity.
There are many counterpoints to many of the things I've griped about here. There are also some specific uses I actually hope to see - medical breakthroughs, for example.
I also think that it can be extremely useful for programming (though I refuse to use it), and there's some question of, like: Why should I be working SO HARD to write software when I don't have to? This same question applies to many creative fields, journalism in some cases, and the legal field. Partially, I write code because I actually enjoy programming. But there are parts I don't like, and it might be nice to outsource those. I won't, though.
This post has two purposes. First is to advocate for people to stop using AI and to summarize the reasons why. Second, is to help me process my thoughts and feelings with regard to LLMs.
I believe that it is wrong to use AI for any purpose, not because the specific thing you're doing with it is immoral (though sometimes it is), but because using it supports a broader system that is highly unethical. Even using it for "good" things still supports all of the "bad" things.
Oh and I forgot to talk about AI's use in military operations. Whatever.
But I also understand the benefit for some people may be significant, like if you use AI to help you with your personal health.
My individual refusal to use AI isn't stopping anything. Your individual refusal won't either. But data centers are built in local communities, and one day this community may be yours. When it's coming to your town or your state, you should be ready to oppose it in a meaningful way, and you should do so without hypocrisy if you can.
I do the right thing, first and foremost, because it is right and I want to be the kind of person who does the right thing. I also care about the broader impact on the world. My actions alone aren't enough to fix things. But collective action is, and we should all see ourselves as part of this collective.
I don't use Spotify because they pay artists half as much as other platforms, because they paid Joe Rogan $100 million to platform nazis, because their CEO contributes money to AI-based warfare. My choice to use Deezer isn't fixing the problems with Spotify. But it does mean I'm not contributing to the hellscape that is Spotify, and it means I'm contributing more money to artists than I would on Spotify. Plus, it was a super easy switch that has almost no impact on my life.
If everyone tried to do the right thing, and was willing to educate themselves (or participate in a community that advises them) on issues of the world, we would not have the problems we do today.
But it's also not that simple, I know. I pay taxes when I shop at stores. Those taxes help fund foreign wars and the genocide in Gaza and the ICE Nazis in our streets. I drive a pickup truck that uses gas. I watch hella YouTube videos, generating ad revenue for Google, a company that participates in warfare, builds LLMs, and violates everyone's privacy.
I am not innocent. I also contribute to the hellscape. I am not a big fan of purity tests. But I am a big fan of doing the right thing when you can. But perhaps it is unfair for me to decide that "there's not a viable alternative to YouTube" justifies my usage of it, and then suggest that you are unjustified in using AI. I think it's more likely that my use of YouTube is unjustified and the moral, ethical thing would be to stop using it. Maybe I'll think more deeply on this in the next few months.
At the very least, I ask that you consider the information I've shared today, and reflect upon your participation in this system. I don't share any of this to shame or judge you. I share this to inform you and to advocate for a better world and to ask you to participate in the better world. If you choose to use AI, it is my duty to give you grace, respect your choice, accept you, and let you be.
Take your time. Even if you're willing to hate AI with me, you don't need to make that decision today. Sleep on it.
Like I said about my post on Animal Agriculture:
I'm coming to think the root of all evil is not money nor power, but complacency and compartmentalization.
Please read my followup post, "Grammar Nazi", where I challenge the moral prescriptivism I display in this post.
Edit: I FORGOT TO MENTION - I watch Neural Viz. This is a YouTube channel which uses generative AI to create videos. It seems the person behind these videos is doing creative work, even if depending on AI to generate the product. This (Neural Viz) is my one personal exception for using and consuming generative AI. I also agree, in a rational sense, with my argument above that this is immoral. And I'm deeply uncomfortable about this. And almost unwilling to look at it honestly. I feel somewhat justified too. I'm not going to stop watching/supporting Neural Viz. Make of that what you will. It definitely makes me somewhat of a hypocrite. I'd still decommission all the ai-related data centers TODAY though, even if it meant no more Neural Viz.