2025-06-02 22:08:09
Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.
The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.
One thing is clear: teachers are not OK.
They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the English meanings of the words they’re being taught, and students who use AI in the middle of conversations. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”
Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all.
Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.
I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you.
"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."
We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.
I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?
I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.
It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.
Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.
I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.
LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do.
LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like, and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in class and treated like midterm exams. My quizzes are also done in class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change, for several reasons.
Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one.
There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police.
Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified?
I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded.
I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot.
I teach 18-year-olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that.
I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom.
The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.
"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."
You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. There's also a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.
But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are.
I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void.
Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.
When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways: shallow, misleading, and inaccurate analysis of data, and pointless, meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and ask it to create an audio podcast. And the results were predictably awful: the audio was full of random, meaningless vocalizations at bizarre times; the “female” character was incredibly dumb and vapid (she sounded like the “manic pixie dream girl” trope from those awful movies); and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself.
In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students.
When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.
I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.
"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"
However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for.
This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated.
ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the "content creators," casting everyone else into the creatively bereft role of the content "consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero-tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection.
I’m in higher ed, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences).
Given the widespread use of LLMs by college students, I now have an ongoing and seemingly unresolvable tension: how to evaluate student work. I can often spot when students have used the technology, both because I have thousands of samples of student writing accumulated over time and because I cross-reference my impressions with one or more AI-detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, they can help with confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship.
"LLMs have absolutely blown up what I try to accomplish with my teaching"
I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say. “I’m just using the technology to save time, organize them more quickly, bounce them back and forth,” etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, people who are still learning to think or problem-solve in more sophisticated and creative ways will be poor evaluators of information and less likely to produce relevant and credible versions of it.
I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment.
Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration.
I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!”
"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"
Some of my students openly talk about using AI for all their assignments, and I agree with those who say the technology—along with gaps in their education due to the long-term effects of COVID—has gotten us to a point where a lot of young Gen Z and Gen Alpha students are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!).
A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which, at least for me, always involves huge amounts of labor.
It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!!
2025-06-02 21:39:38
The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.
“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.”
The moderator said that they have banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.
The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”
The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT, one that is giving him “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots.
As a website that has covered AI a lot, and because we are constantly asking readers to tip us interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, often inscrutable transcripts of chatlogs with ChatGPT and other files they say prove this behavior.
The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote.
“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”
This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems.
“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”
OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.”
“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”
In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.
Both the r/ChatGPT post that the r/accelerate moderator refers to and the moderator announcement itself refer to these users as “Neural Howlround” posters, a term that originates from a self-published paper and refers to the high-pitched feedback loop produced by putting a microphone too close to the speaker it’s connected to.
The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.
The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”
Drake then asked ChatGPT to analyze its own behavior in these instances, and it produced some text that seems profound but doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.
Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has too much to do with ChatGPT reinforcing people’s delusions other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”
“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”
On this, the r/accelerate moderator seems to agree.
“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”
2025-05-31 21:00:40
Welcome back to the Abstract!
This week, scientists accidentally discovered a weird thing in space that is like nothing we have ever seen before. This happens a lot, yet never seems to get old.
Then, a shark banquet, the Ladies Anuran Choir, and yet another reason to side-eye shiftwork. Last, a story about the importance of finishing touches for all life on Earth (and elsewhere).
Dead Stars Still Get Hyped
Wang, Ziteng et al. “Detection of X-ray emission from a bright long-period radio transient.” Nature.
I love a good case of scientific serendipity, and this week delivered with a story about a dead star with the cumbersome name ASKAP J1832−0911.
The object, which is located about 15,000 light years from Earth, was first spotted flashing in radio every 44 minutes by the wide-field Australian Square Kilometre Array Pathfinder (ASKAP). By a stroke of luck, NASA’s Chandra X-ray Observatory, which has a very narrow field-of-view, happened to be pointed the same way, allowing follow-up observations of high-energy X-ray pulses synced to the same 44-minute cycle.
This strange entity belongs to a new class of objects called long-period radio transients (LPTs) that pulse on timescales of minutes and hours, distinguishing them from pulsars, another class of dead stars with much shorter periods that last seconds, or milliseconds. It is the first known LPT to produce X-ray pulses, a discovery that could help unravel their mysterious origin.
ASKAP J1832−0911 exhibits “correlated and highly variable X-ray and radio luminosities, combined with other observational properties, [that] are unlike any known Galactic object,” said researchers led by Ziteng Wang of Curtin University. “This X-ray detection from an LPT reveals that these objects are more energetic than previously thought.”
It’s tempting to look at these clockwork signals and imagine advanced alien civilizations beaming out missives across the galactic transom. Indeed, when astronomer Jocelyn Bell discovered the first pulsar in 1967, she nicknamed it Little Green Men (LGM-1) to acknowledge this outside possibility. But dead stars can have just as much rhythm as (speculative) live aliens. Some neutron stars, like pulsars, flash with a precision similar to that of atomic clocks. These pulses are driven either by the extreme dynamics within the dead stars or by orbital interactions between a dead star and a companion star.
Wang and his colleagues speculate that ASKAP J1832−0911 is either “an old magnetar” (a type of pulsar) or an “ultra-magnetized white dwarf,” though the team adds that “both interpretations present theoretical challenges.” Whatever its nature, this stellar corpse is clearly spewing out tons of energetic radiation during “hyper-active” phases, hinting that other LPTs might occasionally get hyped enough to produce X-rays.
“The discovery of X-ray emission from ASKAP J1832−0911 raises the exciting possibility that some LPTs are more energetic objects emitting X-rays,” the team said. “Rapid multiwavelength follow-up observations of ASKAP J1832−0911 and other LPTs, will be crucial in determining the nature of these sources.”
Rotting Whale Carcass, Served Family-Style
On April 9, 2024, scientists spent nearly nine hours watching a bunch of sharks feed on a giant chunk of dead whale floating off the coast of Kailua-Kona, Hawaii, which is a pretty cool item in a job description. The team has now published a full account of the feast, attended by a dozen whitetip and tiger sharks, which sounds vaguely reminiscent of a cruise-ship cafeteria.
“Individuals from both species filtered in and out of the scene, intermittently feeding either directly on the carcass or on fallen scraps,” said researchers led by Molly Scott of the University of Hawaii at Manoa. “Throughout this time, it did not appear that any individual reached a point of satiation and permanently left the area; rather, they stayed, loitering around the carcass and intermittently feeding.”
All the Ladies in the House Say RIBBIT
Shout out to the toadettes—we hear you, even if nobody else does. Female anurans (the group that contains frogs and toads) are a lot more soft-spoken than their extremely vocal male conspecifics. This has led to “a male-biased perspective in anuran bioacoustics,” according to a new study that identified and analyzed female calls in more than 100 anuran species.
“It is unclear whether female calls influence mate attraction, whether males discriminate among calling females, or whether female–female competition occurs in species where females produce advertisement calls or aggressive calls,” said researchers led by Erika Santana of Universidade de São Paulo. “This review provides an overview of female calling behaviour in anurans, addressing a critical gap in frog bioacoustics and sexual selection.”
The Reason for the Season(al Affective Disorders)
Why are you tired all the time? It’s the perennial question of our age (and many previous ones). One factor may be that our ancient sense of seasonality is getting thrown off by modern shiftwork, according to a study that tracked the step count, heart rate, and sleep patterns of more than 3,000 medical residents in the U.S. with wearable devices for a year.
“We show that there is a relationship between seasonal timing and shiftwork adaptation, but the relationship is not straightforward and can be influenced by many other external factors,” said researchers led by Ruby Kim of the University of Michigan.
“We find that a conserved biological system of morning and evening oscillators, which evolved for seasonal timing, may contribute to these interindividual differences,” the team concluded. “These insights highlight the need for personalized strategies in managing shift work to mitigate potential health risks associated with circadian disruption.”
In short, blame that afternoon slump on an infinity of ancestral seasons past.
Finishing Touches on a Planet
Marchi, Simone et al. “The shaping of terrestrial planets by late accretions.” Nature.
Earth wasn’t finished in a day; in fact, it took anywhere from 60 to 100 million years for 99 percent of our planet to coalesce from debris in the solar nebula. But the final touch—that last 1 percent—is disproportionately critical to the future of rocky planets like our own. That’s the conclusion of a study that zooms in on the bumpy phase called “late accretion,” which often involves global magma oceans and bombardment from asteroids and comets.
“Late accretion may have been responsible for shaping Earth’s distinctive geophysical and chemical properties and generating pathways conducive to prebiotic chemistry,” said researchers led by Simone Marchi of the Southwest Research Institute and Jun Korenaga of Yale University. “The search for an Earth’s twin may require finding rocky planets not only with similar bulk properties…but also with similar collisional evolution in their late accretions.”
Thanks for reading! See you next week.
2025-05-31 02:07:34
The surveillance company Flock told employees at an all-hands meeting Friday that its new people search product, Nova, will not include hacked data from the dark web. The announcement comes a little over a week after 404 Media broke the news about internal tension at the company over plans to use breached data, including from a 2021 Park Mobile data breach.
Immediately following the all-hands meeting, Flock published details of its decision in a public blog post it says is designed to "correct the record on what Flock Nova actually does and does not do." The company said that following a "lengthy, intentional process" about what data sources it would use and how the product would work, it has decided not to supply customers with dark web data.
"The policy decision was also made that Flock will not supply dark web data," the company wrote. "This means that Nova will not supply any data purchased from known data breaches or stolen data."
Flock Nova is a new people search tool that will let police connect license plate data from Flock’s automated license plate readers with other data sources in order to, in some cases, more easily determine who a car may belong to and the people they might associate with.
404 Media previously reported on internal meetings, presentation slides, discussions, and Slack messages in which the company discussed how Nova would work. Part of those discussions centered on the data sources that could be used in the product. “You're going to be able to access data and jump from LPR to person and understand what that context is, link to other people that are related to that person [...] marriage or through gang affiliation, et cetera,” a Flock employee said during an internal company meeting, according to an audio recording. “There’s very powerful linking.”
In meeting audio obtained by 404 Media, an employee discussed the potential use of the hacked Park Mobile data, which became controversial within the company.
“I was pretty horrified to hear we use stolen data in our system. In addition to being attained illegally, it seems like that could create really perverse incentives for more data to be leaked and stolen,” one employee wrote on Slack in a message seen by 404 Media. “What if data was stolen from Flock? Should that then become standard data in everyone else’s system?”
In Friday’s all-hands meeting with employees, a Flock executive said that it was previously “talking about capabilities that were possible to use with Nova, not that we were necessarily going to implement when we use Nova. And in particular one of those issues was about dark web data. Would Flock be able to supply that to our law enforcement customers to solve some really heinous crimes like internet crimes against children? Child pornography, human trafficking, some really horrible parts of society.”
“We took this concept of using dark web data in Nova and explored it because investigators told us they wanted to do it,” the Flock executive said in audio reviewed by 404 Media. “Then we ran it through our policy review process, which by the way this is what we do for all our new products and services. We ran this concept through the policy review process, we vetted it with product leaders, with our executive team, and we made the decision to not supply dark web data through the Nova platform to law enforcement at all.”
Flock said in its Friday blog that the company will supply customers with "public records information, Open-Source intelligence, and license plate reader data." The company said its customers can also connect their own data into the program, including their own records management systems, computer-aided dispatch, and jail records "as well as all of the above from other agencies who agree to share that data."
As 404 Media has repeatedly reported, the fact that Flock allows its customers to share data with a huge network of police is what differentiates Flock as a surveillance tool. Its automated license plate readers collect data, which can then be shared as part of either a searchable statewide or nationwide network of ALPR data.
2025-05-31 00:56:37
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss an exciting revamp of The Abstract, tech betrayals, and the "it's for cops" defense.
EMANUEL: Most of you already know this but we are expanding The Abstract, our Saturday science newsletter by the amazing Becky Ferreira. The response to The Abstract since we launched it last year has been very positive. People have been writing in to let us know how much they appreciate the newsletter as a nice change of pace from our usual coverage areas and that they look forward to it all week, etc.
First, as you probably already noticed, The Abstract is now its own separate newsletter that you can choose to get in your inbox every Saturday. This is separate from our daily newsletter and the weekend roundup you’re reading right now. If you don’t want to get The Abstract newsletter, you can unsubscribe from it like you would from all our other newsletters. For detailed instructions on how to do that, please read the top of this edition of The Abstract.
2025-05-30 01:35:11
Earlier this month, authorities in Texas performed a nationwide search of more than 83,000 automatic license plate reader (ALPR) cameras while looking for a woman who they said had a self-administered abortion, including cameras in states where abortion is legal, such as Washington and Illinois, according to multiple datasets obtained by 404 Media.
The news shows in stark terms how police in one state are able to take the ALPR technology, made by a company called Flock and usually marketed to individual communities to stop carjackings or find missing people, and turn it into a tool for finding people who have had abortions. In this case, the sheriff told 404 Media the family was worried for the woman’s safety and so authorities used Flock in an attempt to locate her. But health surveillance experts said they still had issues with the nationwide search.
“You have this extraterritorial reach into other states, and Flock has decided to create a technology that breaks through the barriers, where police in one state can investigate what is a human right in another state because it is a crime in another,” Kate Bertash of the Digital Defense Fund, who researches both ALPR systems and abortion surveillance, told 404 Media.