2026-01-11 13:34:00
"We will make the new ð algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed." Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date. Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users. Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot... In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.
Read more of this story at Slashdot.
2026-01-11 10:34:00
An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges." Phys.org publishes the announcement from Sandia National Lab:

In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond...

"We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..."

Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now — 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
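The announcement doesn't spell out the algorithm itself. One classic bridge between event-driven, stochastic computation and PDEs is the random-walk solution of the heat equation: the value u(x, t) for u_t = D * u_xx equals the average of the initial condition over the endpoints of many random walkers started at x. The Python sketch below is only a loose, conventional-hardware illustration of that connection, not the method from the Nature Machine Intelligence paper.

import random

def heat_solution(x: float, t: float, f, D: float = 1.0,
                  n_walkers: int = 5000, n_steps: int = 100) -> float:
    """Monte Carlo estimate of u(x, t) for u_t = D * u_xx on the real line."""
    dt = t / n_steps
    step = (2.0 * D * dt) ** 0.5      # many small +/- steps approximate Brownian motion
    total = 0.0
    for _ in range(n_walkers):
        pos = x
        for _ in range(n_steps):
            pos += step if random.random() < 0.5 else -step
        total += f(pos)               # sample the initial condition at the walk's endpoint
    return total / n_walkers

# Example: a "hot spot" initial condition centered at 0 spreads out over time.
f = lambda x: 1.0 if abs(x) < 0.5 else 0.0
print(heat_solution(0.0, 0.01, f))    # close to 1: the heat hasn't spread yet
print(heat_solution(0.0, 2.0, f))     # much lower: the heat has diffused away from the center

Neuromorphic solvers aim to do this kind of stochastic bookkeeping with spiking hardware events instead of a Python loop, which is where the claimed energy efficiency comes from.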
Read more of this story at Slashdot.
2026-01-11 06:34:00
Is there a trend? This week four different articles appeared on various tech-news sites with authors bragging about switching to Linux. "Greetings from the year of Linux on my desktop," quipped the Verge's senior reviews editor, who finally "got fed up and said screw it, I'm installing Linux." They switched to CachyOS — just like this writer for the videogame magazine Escapist:

I've had a fantastic time gaming on Linux. Valve's Windows-to-Linux translation layer, Proton, and even CachyOS' bundled fork have been working just fine. Of course, it's not perfect, and there's been a couple of instances where I've had to problem-solve something, but most of the time, any issues gaming on Linux have been fixed by swapping to another version of Proton. If you're deep in online games like Fortnite, Call of Duty, Destiny 2, GTAV or Battlefield 6, it might not be the best option to switch. These games feature anti-cheats that look for versions of Windows or even the heart of the OS, the kernel, to verify the system isn't going to mess up someone's game.... CachyOS is thankfully pre-packed with Nvidia drivers, meaning I didn't have to dance around trying to find them.... Certain titles will perform worse than their counterparts, simply due to how the bods at Nvidia are handling the drivers for Linux. This said, I'm still not complaining when I'm pushing nearly 144fps or more in newer games. The performance hit is there, but it's nowhere near enough to stave off even an attempt to mess about with Linux. Do you know how bizarre it is to say it's "nice to have a taskbar again"? I use macOS daily for a lot of my work, which uses a design baked back in the 1990s through NeXT. Seeing just a normal taskbar that doesn't try to advertise to me or crash because an update killed it for some reason is fantastic. That's how bad it is out there right now for Windows.

"I run Artix, by the way," joked a senior tech writer at Notebookcheck (adding "There. That's out of the way...")

I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn't choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux... Artix differs from Arch in that it does not use SystemD as its init system. I won't go down the rabbit hole of init systems here, but suffice it to say that Artix boots lightning quick (less than 10 seconds from a cold power on) and is pretty light on system resources. However, it didn't come "fully assembled..."

The biggest problem I ran into after installing Artix on the [MacBook] Air was the lack of wireless drivers, which meant that WiFi did not work out of the box. The resolution was simple: I needed to download the appropriate WiFi drivers (Broadcom drivers, to be exact) from Artix's main repository. This is a straightforward process handled by a single command in the Terminal, but it requires an internet connection... which my laptop did not have. Ultimately, I connected a USB-to-Ethernet adapter, plugged the laptop directly into my router, and installed the WiFi drivers that way. The whole process took about 10 minutes, but it was annoying nonetheless. For the record, my desktop (an AMD Ryzen 7 6800H-based system) worked flawlessly out-of-the-box, even with my second monitor's uncommon resolution (1680x1050, vertical orientation). I did run into issues with installing some packages on both machines.
Trying to install the KDE desktop environment (essentially a different GUI for the main OS) resulted in strange artifacts that put white text on white backgrounds in the menus, and every resolution I tried failed to correct this bug. After reverting to XFCE4 (the default desktop environment for my Artix install), the WiFi signal indicator in the taskbar disappeared. This led to me having to uninstall a network manager installed by KDE and re-linking the default network manager to the runit services startup folder. If that sentence sounds confusing, the process was much more so. It has been resolved, and I have a WiFi indicator that lets me select wireless networks again, but only after about 45 minutes of reading manuals and forum posts.

Other issues are inherent to Linux. Not all games on Steam that are deemed Linux compatible actually are. Civilization III Complete is a good example: launching the game results in the map turning completely black. (Running the game through an application called Lutris resolved this issue.) Not all the software I used on Windows is available in Linux, such as Greenshot for screenshots or uMark for watermarking photos in bulk. There are alternatives to these, but they don't have the same features or require me to relearn workflows...

Linux is not a "one and done" silver bullet to solve all your computer issues. It is like any other operating system in that it will require users to learn its methods and quirks. Admittedly, it does require a little bit more technical knowledge to dive into the nitty-gritty of the OS and fully unlock its potential, but many distributions (such as Mint) are ready to go out of the box and may never require someone to open a command line...

[T]he issues I ran into on Linux were, for the most part, my fault. On Windows or macOS, most problems I run into are caused by a restriction or bug in the OS. Linux gives me the freedom to break my machine and fix it again, teaching me along the way. With Microsoft's refusal (either from pride or ignorance) to improve (or at least not crapify) Windows 11 despite loud user outrage, switching to Linux is becoming a popular option. It's one you should consider doing, and if you've been thinking about it for any length of time, it's time to dive in.

And tinkerer Kevin Wammer switched from MacOS to Linux, saying "Linux has come a long way" after more than 30 years — but "Windows still sucks..."
Read more of this story at Slashdot.
2026-01-11 05:34:00
A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist. "When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..."

Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot." But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.

The column includes this caveat: "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..." "Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
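Tangle's internals aren't public, so any code here is guesswork. As a purely hypothetical Python sketch of the structure the column describes (threads containing intentions, intentions accumulating reflections, plus a circle of supporters), with every name invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Reflection:
    note: str                          # something meaningful the user experienced

@dataclass
class Intention:
    description: str                   # a concrete step toward a thread
    reflections: list = field(default_factory=list)

@dataclass
class Thread:
    name: str                          # e.g. "mentor", "caregiver", "traveler"
    intentions: list = field(default_factory=list)

@dataclass
class User:
    threads: list = field(default_factory=list)
    supporters: list = field(default_factory=list)   # friends who encourage progress

me = User(threads=[Thread(name="mentor",
                          intentions=[Intention(description="Check in with a junior colleague weekly")])],
          supporters=["a friend"])

In the product as described, the AI's role would be to propose the Thread entries (after reading a user's calendar and photos) and to nudge the user toward adding Intentions; the sketch only captures the nouns, not that behavior.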
Read more of this story at Slashdot.
2026-01-11 04:34:00
A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. The Post adds that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy."

AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study...

The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all." The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.
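The article doesn't reproduce the study's evaluation pipeline. As a purely hypothetical Python sketch of the bookkeeping behind figures like "2.5 percent completed" or "nearly half poor quality" (one outcome label per project, then a fraction per category), with toy data that is not the real dataset and ignores that the study's categories can overlap:

from collections import Counter

def outcome_rates(outcomes: list) -> dict:
    """Return the fraction of projects falling in each outcome category."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: count / total for label, count in counts.items()}

# Toy, mutually exclusive labels for illustration only.
sample = (["success"] * 3 + ["poor_quality"] * 58 +
          ["incomplete"] * 42 + ["technical_failure"] * 17)
print(outcome_rates(sample))   # e.g. {'poor_quality': 0.48, 'incomplete': 0.35, ...}

The hard part of the real benchmark is not this arithmetic but the grading itself, since deciding whether a 3D animation or a formatted manuscript meets the client's brief requires human (or carefully validated automated) review of each deliverable.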
Read more of this story at Slashdot.
2026-01-11 03:34:00
Amazon "has submitted plans for a large-format store near Chicago that would be larger than a Walmart Supercenter," reports CNBC: As part of the plans, Amazon has proposed building a one-story, 229,000-square-foot building [on a 35-acre lot] in Orland Park, Illinois, that would offer a range of products, such as groceries, household essentials and general merchandise, the city said on Saturday. By comparison, Walmart's U.S. Supercenters typically average 179,000 square feet... The Orland Park Plan Commission approved Amazon's proposal on Tuesday, and it will now proceed to a vote from the full village board. That meeting is scheduled for January 19. In a statement cited by CNBC, an Amazon spokesperson called it "a new concept that we think customers will be excited about."
Read more of this story at Slashdot.