Hackaday

Hackaday serves up Fresh Hacks Every Day from around the Internet. Our playful posts are the gold-standard in entertainment for engineers and engineering enthusiasts.

How TTY Opened Up Telephone Communication for the Hearing Impaired

2026-04-30 22:00:04

The telephone was an invention that revolutionized human communication. No more did you have to physically courier a letter from one place to another, or send a telegram, or have a runner carry the message for you. Instead, you could have a direct conversation with another person a great distance away. All well and good if you can speak and hear, of course, but rather useless if you happen to be deaf.

Those hard of hearing were not left entirely out of the communication revolution, however. Well before IP switched networks and the Internet became a thing, there was already a way for the deaf to communicate over the plain old telephone network—thanks to the teletypewriter!

Over The Wires

The teletypewriter (TTY) has been around for a long time. The first device came into being in 1964, developed by James C. Marsters and Robert Weitbrecht, both deaf. Their idea was to create a method for deaf individuals to communicate over the phone network in a textual manner. To this end, the group sourced teleprinters formerly used by the US Department of Defense, and hooked them up with acoustic couplers that would allow them to mate with the then-ubiquitous AT&T Model 500 telephone. Thus, the TTY was born. A user could dial another TTY machine, and key in a message, which would print out at the other end. The receiving user could then respond in turn in the same manner.

A Miniprint 425 TDD device. Note the acoustic coupler on top, the VFD for displaying messages, the printer, and the SK and GA keys which automatically key in these regularly-used abbreviations. Credit: public domain

The early machines used simple frequency-shift keying to encode the characters of the alphabet and some basic control codes, allowing text messages to be sent back and forth over a regular analog telephone call. In the US, where they eventually became known as telecommunications devices for the deaf (TDDs), the devices used an improved development of Baudot code (the USA-TTY variant of ITA-2) to send signals over the phone lines.

This involved representing characters with five bits, which was enough to cover the 26 letters of the English alphabet, plus 0-9 and a few control codes. Transmission rates were slow, typically just 45.5 to 50 baud. With start and stop bits framing each 5-bit character, this limited transmission to roughly six characters per second.
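One common framing for these links was the 7.42-unit "60 speed" code: a start bit, five data bits, and a stop condition held for roughly 1.42 bit times. The arithmetic below is a quick sketch of how that framing bounds throughput; the exact stop-bit length is an assumption, as it varied between equipment.

```python
# A rough model of a 45.45 baud Baudot/ITA-2 TTY link. Each 5-bit character
# is framed with a start bit and ~1.42 stop-bit times, which is what limits
# throughput, not the 5 data bits alone.
BAUD = 45.45
BITS_PER_CHAR = 1 + 5 + 1.42  # start + data + stop (classic 7.42-unit code)

def chars_per_second(baud: float = BAUD) -> float:
    """Upper bound on character throughput for a given symbol rate."""
    return baud / BITS_PER_CHAR

def frame(code5: int) -> list[int]:
    """Frame a 5-bit code: start bit (0), data LSB-first, stop bit (1).
    Real hardware holds the stop condition for ~1.42 bit times."""
    if not 0 <= code5 < 32:
        raise ValueError("Baudot codes are 5 bits")
    return [0] + [(code5 >> i) & 1 for i in range(5)] + [1]

print(f"{chars_per_second():.1f} chars/sec")
```

Run as-is, this works out to about 6.1 characters per second, which is where the familiar "60 words per minute" figure for teletype links comes from.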

The sign on the left indicates a payphone with a TTY device attached. These were rare installs back in the landline era, and vanishingly few remain today. Credit: CC BY-SA 4.0

TTYs quickly caught on as useful devices for the deaf and hard of hearing, and developed their own norms, similar to other textual telecommunications methods that came before. Users would key “GA” for “go ahead” to indicate the other party could “speak” on the half-duplex link, since two users typing at the same time would lead to garbled messages. “SK” stood for “stop keying,” indicating the end of a call. Abbreviations were common to save effort, such as “CU” (see you) and “TMW” (tomorrow).

Relay Service

At its heart, the TTY was a very useful device for allowing its users to communicate via textual means to others with compatible hardware. However, alone, a TTY could not allow a deaf user to communicate effectively with regular telephone users. To enable greater accessibility, many organizations developed telecommunications relay services.

TTY machines led to the establishment of relay services that allowed deaf users to make regular phone calls with assistance from an operator. Credit: screenshot, Australian National Relay Service

These first existed as a number that deaf TTY users could call to reach a human operator equipped with a TTY machine of their own. The operator would place calls on the deaf individual’s behalf, voicing their typed messages to the other party. In turn, the operator would key out the other party’s spoken responses so the deaf individual could read back the conversation.

The first relay service was established by Converse Communications in Connecticut in 1974. The concept was quickly picked up by many other telecommunications operators around the world to provide an accessibility aid to those who needed it. These days, relay services still exist, though a great many relay services now operate over IP-based systems rather than via phone lines and TTY devices.

Hanging On

TTY still exists to some degree out in the world today. There are still subscribers with analog phone lines, and the basic TTY technology still fundamentally works over these links. However, the rise of SMS text messaging and widespread Internet connectivity has obviated many of the use cases for TTY technology. There have also been cases where digital upgrades to the phone network made TTY operation more difficult, though efforts have been made to ensure compatibility on some networks, particularly for emergency use.

Ultimately, TTY was a technology that brought telecommunications access to a greater number of people than ever before. Like the landline phone and the fax machine, it’s no longer such a feature of modern life. However, it was an important link to the world for many in the deaf and hard of hearing community, and was greatly valued for the connection and accessibility it provided.

Transcribing Source Code: The Original IBM PC DOS

2026-04-30 19:00:05

Doing software archaeology can be a harrowing task, as rarely do you find complete snapshots of particular versions of software. Case in point: the development of MS-DOS – also known as IBM PC DOS – from 86-DOS, which recently got a lucky break in the form of printed source listings. These printouts come courtesy of [Tim Paterson], the creator of 86-DOS and of MS-DOS during his time working for Microsoft.

These code listings contain the sources of the 86-DOS 1.00 kernel, multiple development snapshots, and also listings for utilities like CHKDSK. The printed listings additionally contain many handwritten notes, making transcribing them into working source code somewhat of a chore. The results can be found on the GitHub project page, with the original scans available on Archive.org.

Of the ten bundles of continuous-feed printouts, all but two have been transcribed so far, with the various DOS kernels and the Seattle Computer Products (SCP) assembler source already ready for compilation. This includes 86-DOS 1.00, MS-DOS 1.25, and PC-DOS 1.00-dev, all of which require the same SCP assembler to create a binary.

In the project page README a number of blog posts are also linked that add even more technical detail. Anyone who wants to pitch in with transcribing and/or testing recovered source code is welcome to do so.

Building an x86 Gaming PC Without Intel, NVIDIA, or AMD Parts

2026-04-30 16:00:14

This is an interesting challenge from the “why not?” files — [GPUSpecs] over on YouTube built a gaming PC without using a single component from NVIDIA, Intel, or AMD. That immediately makes us think of high-power ARM workstations, or perhaps even the new “AI workstations” becoming available with RISC-V architecture, but the challenge here was specifically “gaming PC,” not workstation. A gaming PC, without a GPU from one of those three? To make it even more interesting, the x86 CPU isn’t Intel or AMD either.

If you’re of a certain vintage, you may remember Cyrix, which reverse-engineered the x86 ISA and made its own compatible chips in the 90s before being bought out by National Semiconductor, and later VIA Technologies. VIA partnered with the Shanghai municipal government to found Zhaoxin, and it is from Zhaoxin that the KaiXian KX-7000 CPU hails — an x86-64 device that isn’t Intel or AMD. We’ve actually covered the company before. This particular chip benchmarks like an old i5: not spectacular, but usable.

The GPU is also Chinese: a Moore Threads MTT S80, with 16 GB of GDDR6 VRAM, 4096 shading units, 256 texture mapping units, and 256 ROPs. On paper, that looks like a very respectable graphics card, but it’s not clear how well the games [GPUSpecs] tested were actually using it. Based on the numbers he was getting in his testing, there are some serious driver issues with this card. Even Black Myth: Wukong, which is supposed to be a game the card targets, sat at 13.6 FPS on low settings at 1080p. That feels like integrated-graphics territory, not something a beefy GPU should give you — but it matches what other reviewers were saying when the card first came out.

So if you’re looking for a sanction-proof gaming rig, we’re sorry to say it’s not quite ready for triple-A. On the other hand, it’s a neat hack, and we didn’t know such a box could even be built. Right now, it looks like you will need at least one of the big three names to game: you can game on ARM with NVIDIA graphics, or even with Intel graphics, and of course on AMD, which has been in the works the longest.

Network Scanner Finds Every Raspberry Pi

2026-04-30 13:00:41

DHCP is great for getting machines on the network with a minimum of fuss. However, it can also make remote administration a pain because you never know which IP you’re supposed to be SSHing into. [Philipp] ran into this problem quite often, so decided to whip up an app to make things easier. 

At its heart, the app is a simple network scanner, of which many already exist. However, [Philipp] found that many options on Android were peppered with ads that made them highly undesirable to use. Thus, he whipped up his own, with a particular eye to working with the Raspberry Pi. It’s not uncommon for a hacker to have a few scattered around the home network, and it can be a real chore keeping track of where they all end up in IP land. The scanner can specifically single out the Raspberry Pi boards on the network via MAC-OUI and mDNS detection. Plus, just in case you need them, [Philipp] threw in some GPIO pinouts and electronics calculators to make the app more useful.
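The OUI half of that detection is straightforward: the first three octets of a MAC address identify the manufacturer, and the Raspberry Pi organisations have a handful of registered prefixes. A minimal sketch of the matching logic, assuming a small subset of the registered Pi OUIs (check the IEEE registry for the complete, current list):

```python
# Spot Raspberry Pi boards by matching a MAC's OUI (first three octets)
# against known Raspberry Pi prefixes. The set below is a subset of the
# prefixes registered to Raspberry Pi; it is not exhaustive.
PI_OUIS = {"B8:27:EB", "DC:A6:32", "E4:5F:01", "28:CD:C1"}

def normalize_mac(mac: str) -> str:
    """Accept 'b8-27-eb-xx-xx-xx', 'b827.ebxx.xxxx', etc. and return
    the canonical colon-separated uppercase form."""
    digits = "".join(c for c in mac if c.isalnum()).upper()
    if len(digits) != 12:
        raise ValueError(f"not a MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def is_raspberry_pi(mac: str) -> bool:
    """True if the MAC's OUI belongs to the known Pi prefixes above."""
    return normalize_mac(mac)[:8] in PI_OUIS
```

mDNS detection complements this nicely, since a freshly imaged board typically announces itself as `raspberrypi.local` even before you know its MAC.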

If you’ve been looking for an open-source network scanner without all the ugly junk, this project might just be for you. You can also check out the source over on GitHub if that’s relevant to your interests. We’ve seen some interesting custom network scanners before, too. If you’re whipping up some fun packet-flinging software of your own, don’t hesitate to drop us a note on the tips line!

Why Large Language Models Inevitably Suffer Model Collapse During Self-Learning

2026-04-30 10:00:20

There is a persistent belief in the ‘AI’ community that large language models (LLMs) have the ability to learn and self-improve by tweaking the weights in their vector space. Although there’s scant evidence that tweaking a probability vector space is anything like the learning process in biological brains, we nevertheless get sold the idea that artificial general intelligence (AGI) is just around the corner if we do just enough tweaking.

Instead of emerging super intelligence, the most likely outcome is what is called model collapse, with a recent paper by [Hector Zenil] going over the details on why self-training/learning in LLMs and similar systems is a fool’s errand. For those who just want the brief summary with all the memes, [Metin] wrote a blog post covering the basics.

In the end, an LLM, like a diffusion model (DM), is a statistical model of its input data, from which a statistically likely output can be generated (inferred) in response to an input query. It follows intuitively that using that output to adjust the model will, over time, make it converge on a kind of statistical singularity rather than some ‘AI singularity’ event. This is also why these models need to be constantly trained on external, human-generated data in order to prevent such a collapse.

In the paper, [Hector] constructs a mathematical model to demonstrate that an LLM, DM, or similar statistical model undergoes degenerative dynamics whenever that external input is reduced. Although the paper suggests a mechanism to counter the entropy decay within the model, the ultimate point is that a statistical model cannot improve itself without continuous external anchoring.

The idea of LLMs being at all intelligent in any sense has been a contentious one, with the concept of language models being equated with ‘AI’ dating back to the 20th century, including as fun home computer projects. Much of the problem probably lies in humans projecting intelligent behavior onto these statistical models, turning LLMs into ‘counterfeit humans’, not helped by how closely generated text can resemble something written by a human, even if completely confabulated.

Thanks to [deshipu] for the tip.

How to Destroy a Humidity Sensor With Humidity

2026-04-30 07:00:37

An often-overlooked section in the datasheets for popular humidity sensors like the BME280 and DHT22 is the ‘non-condensing humidity’ bit, which puts an important constraint on the environments you can use these sensors in. This was the painful lesson [Mellow Labs] recently had to learn when several such sensors kicked the bucket after being used in a nicely steamed-up bathroom. Fortunately, it introduced him to sensors that are rated for use in condensing environments, such as the SHT40 demonstrated in the video.

This particular sensor is made by Sensirion, and as the datasheet shows, it features a built-in heater that allows it to keep working even in a condensing environment. The heater has three power levels, controlled via the I2C interface, with each activation limited to one second to prevent overheating the sensor.

Of note is that you cannot take valid measurements while the heater is operating, and its use obviously increases power draw significantly. That mostly leaves when to turn on the heater as an exercise for the engineer, with [Mellow Labs] conservatively opting to start the heater when relative humidity hits 70%.
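The sensor returns raw 16-bit ticks over I2C; the conversion formulas below are from the SHT4x datasheet, while the 70% trigger threshold is the policy from the video. The I2C transaction itself is omitted here, so this is just the host-side math and heater decision, not a complete driver:

```python
# Host-side math for an SHT4x-style sensor. Raw values are 16-bit ticks;
# conversion per the SHT4x datasheet. Heater policy per [Mellow Labs].

def ticks_to_rh(ticks: int) -> float:
    """RH% = -6 + 125 * ticks / 65535, clamped to the physical 0..100 range
    (the raw formula can report slightly outside it)."""
    rh = -6.0 + 125.0 * ticks / 65535.0
    return max(0.0, min(100.0, rh))

def ticks_to_celsius(ticks: int) -> float:
    """T(degC) = -45 + 175 * ticks / 65535."""
    return -45.0 + 175.0 * ticks / 65535.0

def should_run_heater(rh_percent: float, threshold: float = 70.0) -> bool:
    """Pulse the heater once RH reaches the threshold. Remember that a
    reading taken during the heater pulse itself is not valid."""
    return rh_percent >= threshold
```

In a real driver you would issue the measurement command, wait out the conversion time, read six bytes (temperature ticks, CRC, humidity ticks, CRC), then apply these conversions before deciding on a heater pulse.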

In the comments on the video, other suitable sensors were pitched, including the Bosch BME690, which is similarly rated for condensing environments. All of which condenses down to the importance of reading the datasheet for any part you intend to use in a demanding environment.