Hackaday

Hackaday serves up Fresh Hacks Every Day from around the Internet. Our playful posts are the gold-standard in entertainment for engineers and engineering enthusiasts.

Your Supercomputer Arrives in the Cloud

2025-12-11 11:00:26

For as long as there have been supercomputers, people like us have seen the announcements and said, “Boy! I’d love to get some time on that computer.” But now that most of us have computers and phones that greatly outpace a Cray 2, what are we doing with them? Of course, a supercomputer today is still bigger than your PC by a long shot, and if you actually have a use case for one, [Stephen Wolfram] shows you how you can easily scale up your processing by borrowing resources from the Wolfram Compute Services. It isn’t free, but you pay with Wolfram service credits, which are not terribly expensive, especially compared to buying a supercomputer.

[Stephen] says he has about 200 cores of local processing at his house, and he still sometimes has programs that run overnight. If your program is already written in the Wolfram Language and uses parallelism — something easy to do with that toolbox — you can simply submit it as a remote batch job.

What constitutes a supercomputer? You get to pick. You can just offload your local machine using a single-core 8GB virtual machine — still a supercomputer by 1980s standards. Or you can get machines with up to 1.5TB of RAM and 192 cores. Not enough for your mad science? No worries, you can map a computation across more than one machine, too.

As an example, [Stephen] shows a simple program that tiles pentagons:

When the number of pentagons gets large, a single line of code sends it off to the cloud:

RemoteBatchSubmit[PentagonTiling[500]]

The basic machine class did the work in six minutes and 30 seconds for a cost of 5.39 credits. He also shows a meatier problem running on a 192-core, 384GB machine. That job took less than two hours and cost a little under 11,000 credits (credits cost from just over $4 per 1,000 to $6 per 1,000, depending on how many you buy, so this job cost about $55 to run). If two hours is too much, you can map the same job across many small machines, get the answer in a few minutes, and spend fewer credits in the process.
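
Mapping across machines is, fittingly, also meant to be a one-liner. Here is a minimal, hypothetical sketch using the Wolfram Language’s documented RemoteBatchMapSubmit function; the tile counts and the result property shown are our own illustrative assumptions, not taken from [Stephen]’s post:

job = RemoteBatchMapSubmit[PentagonTiling, {100, 200, 300, 400, 500}] (* each list element becomes its own child job, potentially on its own machine *)
results = job["EvaluationResults"] (* collect the per-element results once the array job finishes; assumed property name *)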

Supercomputers today are both very different from old supercomputers and yet still somewhat the same. If you really want time on that Cray you always dreamed of, you might think about simulation.

Volumetric Display With Lasers and Bubbly Glass

2025-12-11 08:00:52

King Tut, with less resolution than he's had since Deluxe Paint

There’s a type of dust-collector that’s been popular since the 1990s, where a cube of acrylic or glass is laser-etched in a three-dimensional pattern. Some people call them bubblegrams. While it could be argued that bubblegrams are a sort of 3D display, they’re more like a photograph than a TV. [Ancient] had the brainwave that since these objects work by scattering light, he could use them as a proper 3D video display by controlling the light scattered from an appropriately-designed bubblegram.

Appropriately designed, in this case, means a point cloud, which is not exactly exciting to look at on its own. It’s when [Ancient] adds the colour laser scanning projector that things get exciting. Well, after some very careful alignment. We imagine that if this were to become more than a demonstrator, some sort of machine-vision auto-alignment would be desirable, but [Ancient] is able to conquer three-dimensional keystoning manually for this demonstration. Considering he is, in effect, projection-mapping onto the tiny bubbles in the crystal, that’s impressive work. Check out the video embedded below.

With only around 38,000 points, the resolution isn’t exactly high-def, but it is enough for a very impressive proof-of-concept. It’s also not nearly as creepy as the Selectric-inspired mouth-ball that was the last [Ancient] project we featured, and a lot less likely to take your fingers off than the POV-based volumetric display [Ancient] was playing DOOM on a while back.

For the record, this one runs the same DOOM port, too; it’s using the same basic code as [Ancient]’s other displays, which you can find on GitHub under an MIT license.

Thanks to [Hari Wiguna] for the tip.

Production KiCad Template Covers All Your Bases

2025-12-11 05:00:53

Ever think about all the moving parts involved in taking a big KiCad project into production? You need to provide manufacturer documentation, assembly instructions and renders for them to reference, every output file they could want, and all of it has to stay up to date at all times. [Vincent Nguyen] has a software pipeline to create all the files and documentation you could ever want upon release – with an extensive installation and usage guide, helping you make your KiCad projects truly production-grade.

This KiBot-based project template has no shortage of features. It generates assembly documents with custom processing for a number of production scenarios like DNPs, stackup and drill tables, and fab notes; it adds niceties like a table of contents and 3D renders to the KiCad-produced documents, going well beyond KiCad’s spartan defaults; and it autogenerates every output you could want, from Gerbers, .step and BOM files to ERC/DRC reports and visual diffs.
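
Under the hood, KiBot is driven by a YAML configuration that lists the outputs to produce. As a rough, hypothetical sketch of what such a file looks like (the names, directories, and options in [Vincent Nguyen]’s actual template are far more extensive):

kibot:
  version: 1

outputs:
  - name: gerbers
    comment: Fabrication Gerbers
    type: gerber
    dir: Fabrication
  - name: bom
    comment: Bill of materials
    type: bom
    dir: Documentation
  - name: step_model
    comment: 3D model for mechanical checks
    type: step
    dir: Documentation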

This pipeline is GitHub-tailored, but it can also be run locally, and it works wonderfully for those moments when you need to release a PCB into the wild while making sure that as little as possible can go wrong during production. With all the features, it might take a bit to get used to. Don’t need the full treatment, just some images for your GitHub page? Then use this simple plugin to auto-add render images to your KiCad repositories.

We thank [Jaac] for sharing this with us!

FLOSS Weekly Episode 858: YottaDB: Sometimes the Solution is Bigger Servers

2025-12-11 03:30:26

This week Jonathan chats with K. S. Bhaskar about YottaDB. This very high performance database has some unique tricks! How does YottaDB run across multiple processes without a daemon? Why is it licensed AGPL, and how does that work with commercial deployments? Watch to find out!

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or have the guest contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

Why LLMs are Less Intelligent than Crows

2025-12-11 02:00:26

The basic concept of human intelligence entails self-awareness alongside the ability to reason and apply logic to one’s actions and daily life. Despite the very fuzzy definition of ‘human intelligence’, and despite many aspects of said human intelligence (HI) also being observed among other animals, like crows and orcas, humans over the ages have always known that their brains are more special than those of other animals.

Currently the Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted model, defining distinct types of abilities that range from memory and processing speed to reasoning ability. While admittedly not perfect, it gives us a baseline to work with when we think of the term ‘intelligence’, whether biological or artificial.

This raises the question of how, in the context of artificial intelligence (AI), the CHC model translates to the technologies we see in use today. When can we expect to subject an artificial intelligence entity to an IQ test and have it handily outperform a human on all metrics?

Types Of Intelligence

While the basic CHC model contains ten items, the full model is even more expansive, as can be seen in the graphic below. Most important are the overarching categories and the reasoning for the individual items in them, as detailed in the 2014 paper by Flanagan and McGrew. Of these, reasoning (Gf, for fluid intelligence), acquired knowledge and memory (long and short term) are arguably the most relevant when it comes to ‘general intelligence’.

Current and expanded CHC theory of cognitive abilities. Source: Flanagan & McGrew (1997).

Fluid intelligence (Gf), or reasoning, entails the ability to discover the nature of the problem or construction, to use a provided context to fill in the subsequent steps, and to handle abstract concepts like mathematics. Crystallized intelligence (Gc) can be condensed to ‘basic skills’ and general knowledge, including the ability to communicate with others using a natural language.

The basic memory abilities pertain to short-term (Gsm) and long-term recall (Glr) abilities, in particular attention span, working memory and the ability to recall long-term memories and associations within these memories.

Beyond these basic types of intelligence and abilities, many more are defined, but they mostly expand on the basic four: visual processing (Gv) and various visual tasks, the speed of memory operations, reaction time, reading and writing skills, and various domain-specific knowledge abilities. Thus it makes sense to initially limit the evaluation of both HI and AI to this constrained framework.

Are Humans Intelligent?

North American Common Raven (Corvus corax principalis) in flight at Muir Beach in Northern California (Credit: Copetersen)

It’s generally considered a foregone conclusion that because humans as a species possess intelligence, ergo every human being possesses HI. However, within the CHC model there is a lot of wriggle room to tone down this simplification. A big part of IQ tests is to test these specific forms of intelligence and skills, after all, creating a mosaic that’s then boringly reduced to a much less meaningful number.

The main discovery over the past decades is that the human brain is far less exceptional than we had assumed. For example, crows and their fellow corvids easily keep up with humans in a range of skills and abilities. As far as fluid intelligence is concerned, they clearly display inductive and sequential reasoning, as they can solve puzzles and create tools on the spot. Similarly, corvids regularly display the ability to count and estimate volumes, demonstrating quantitative reasoning, and have repeatedly demonstrated an understanding of water volume, the density of objects, and the relationship between the two.

In Japanese parks, crows have been spotted manipulating public faucets for drinking and bathing, adjusting the flow to either a trickle or a strong stream depending on what they want. Corvids score high on the Gf part of the CHC model, though it should be said that the Japanese crow in the article did not turn the faucet back off again, which might just be because they do not care if it keeps running.

When it comes to crystallized intelligence (Gc) and the memory-related Gsm and Glr abilities, corvids score pretty high as well. They have been reported to remember human faces, to learn from other crows by observing them, and to be excellent at mimicking the sounds that other birds make. There is evidence that corvids and other avian dinosaur species (‘birds’) are capable of learning to understand human language, and even of communicating with humans using these learned words.

The key here is whether the animal understands the meaning of the vocalization and what vocalizing it is meant to achieve when interacting with a human. Both parrots and crows show signs of being able to learn significant vocabularies of hundreds of words and conceivably a basic understanding of their meaning, or at least what they achieve when uttered, especially when it comes to food.

Whether non-human animals are capable of complex human speech remains a highly controversial topic, of course, though we are breathlessly awaiting the day that the first crow looks up at a human and tells the hairless monkey what they really think of them and their species as a whole.

The Bears

The bear-proof garbage bins at Yosemite National Park. (Credit: detourtravelblog)

Meanwhile there’s a veritable war of intellects going on in US National Parks between humans and bears, one that involves keeping the latter out of food lockers and trash bins while the humans themselves begin to struggle the moment the bear-proof mechanism requires more than two hand motions. This sometimes escalates to the point where bears are culled after they defeat mechanisms using brute force.

Over the decades bears have learned that human food is easier to obtain and far more filling than all-natural food sources, yet humans are no longer willing to share. The result is an arms race where bears are more than happy to use any means necessary to obtain tasty food. Ergo we can put the Gf, Gc and memory-related scores for bears at a level that suggests highly capable intellects, with a clear ability to learn, remember, and defeat obstacles through cleverness. Sadly, the bear body doesn’t lend itself well to creating and using tools the way a corvid can.

Despite the flaws of the CHC model and the weaknesses inherent in the associated IQ test scores, it does provide some rough idea of how these assessed capabilities are distributed across a population, leading to a distinct Bell curve for IQ scores among humans and conceivably for other species if we could test them. Effectively this means that there is likely significant overlap between the less intelligent humans and smarter non-human animals.

Although H. sapiens is undeniably an intelligent species, the reality is that its intelligence wasn’t some divinely gifted power, but rather an evolutionary quirk that it shares with many other lifeforms. This does, however, make it infinitely more likely that we can replicate it with a machine and/or computer system.

Making Machines Intelligent

Artificial Intelligence Projects for the Commodore 64, by Timothy J. O’Malley

The conclusion we have thus reached after assessing HI is that if we want to make machines intelligent, they need to acquire at least the Gf, Gc, Gsm and Glr capabilities, and at a level that puts them above that of a human toddler, or a raven if you wish.

Exactly how to do this has been the subject of much research and study over the past millennia, with automatons (‘robots’) being one way to pour human intellect into a form that alleviates manual labor. Of course, this is effectively merely on par with creating tools, not an independent form of intelligence. For that we need to make machines capable of learning.

So far this has proved very difficult. What we have managed is to condense existing knowledge that has been annotated by humans into a statistical model, with large language models (LLMs) as the pinnacle of the current AI hype bubble. These are effectively massively scaled-up language models following the same basic architecture as those that hobbyists were playing with back in the 1980s on their home computers.

With that knowledge in mind, it’s not so surprising that LLMs do not even really register on the CHC model. In terms of Gf there’s not even a blip of reasoning, especially not inductively, but then you would not expect this from a statistical model.

As far as Gc is concerned, the fundamental flaw of a statistical model is what it does not know. It cannot know what it doesn’t know, nor does it understand anything about what is stored in its weights; the model is just as fixed in its functioning as an industrial robot. Chalk up another hard fail here.

Although the context window of LLMs can be considered a kind of short-term memory, it is very limited in its functionality. Immediate recall of a series of elements may work depending on the front-end, but cognitive operations invariably fail, even very basic ones such as adding two numbers. This makes Gsm iffy at best, and more realistically a complete fail.

Finally, Glr should be a lot easier, as LLMs are statistical models that can compress immense amounts of data for easy recall. But this associative memory is an artefact of human annotation of training data, and is fixed at the time of training the model. After that, it does not remember outside of its context window, and its ability to associate text is limited to the previous statistical analysis of which words are most likely to occur in a sequence. This fact alone makes the entire Glr ability set a complete fail as well.

Piecemeal Abilities

Although an LLM is not intelligent by any measure and has no capacity to ever achieve intelligence, as a tool it’s still exceedingly useful. Technologies such as artificial neurons and large language models have enabled feats like machine vision that can identify objects in a scene with an accuracy that depends on the training data, and by training an LLM on very specific data sets the resulting model can be a helpful statistical tool.

These are all small fragments of what an intelligent creature is capable of, condensed into tool form. Much like hand tools, computers and robots, these are all tools that we humans have crafted to make certain tasks easier or possible. And much like the corvid that bends a piece of wire into shape to open a lock, or times the dropping of nuts with a traffic light to safely scoop up freshly car-crushed snacks, the only intelligence involved so far is still found in our biological brains.

All of which may change as soon as we figure out a way to replicate abstract aspects such as reasoning and understanding, but that’s still a whole kettle of theoretical fish at this point in time, and the subject of future articles.


Cheap 10x10cm Hotplate Punches Above Its Weight

2025-12-11 00:30:23

For less than $30 USD, you can get a 10×10 centimeter hotplate with 350 watts of power. Sounds mighty fine to us, so surely there must be a catch? Maybe not: as [Stefan Nikolaj]’s review of this AliExpress hotplate details, it seems to be just fine.

At this price, you’d expect some shoddy electronics inside, or maybe outright fiery design decisions, in the vein of other cheap heat-producing tech we’ve reviewed over the years. Nope – the control circuitry seems well-built by our standards, with isolation and separation where it matters, a properly fused input, and a firmly earthed chassis. [Stefan] highlights just two possible problem areas: a wire nut that could potentially be dodgy, and the lack of a thermal fuse. Both can be remedied easily enough after you get one of these, and for the price, it’s a no-brainer. Apart from the review, there are also general usage recommendations from [Stefan] at the end of the blog post.

While we’re happy to see folks designing their own PCB hotplates or modifying old waffle irons, the availability of cheap turn-key options like this means there’s less of a reason to go the DIY route. Now, if you’re in the market for even more build volume, you can get one of the classic reflow ovens, and maybe do a controller upgrade while you’re at it.