2025-06-21 04:24:21
Kenneth Chang and Irena Hwang of the New York Times take a deep dive into the unique data challenges of the new Vera C. Rubin Observatory, which is powered by a 3.2-gigapixel camera:
Each image taken by Rubin’s camera consists of 3.2 billion pixels that may contain previously undiscovered asteroids, dwarf planets, supernovas and galaxies. And each pixel records one of 65,536 shades of gray. That’s 6.4 billion bytes of information in just one picture…. Rubin will capture about 1,000 images each night.
Although Rubin will take a thousand images a night, those are not what will be sent out into the world at first. Rather, the computers at SLAC will create small snapshots of what has changed compared with what the telescope saw previously… Just one image will contain about 10,000 highlighted changes. An alert will be generated for each change — some 10 million alerts a night.
Storing, transmitting, and disseminating that much data leads to some interesting problems, like having enough storage onsite in case of outages, stringing fiberoptic cable across the Atacama desert, and processing the images to provide manageable data for astronomers to access remotely.
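The Times’ arithmetic checks out, too. Here’s a quick back-of-the-envelope sketch (raw pixel data only; real image files would also carry calibration data and metadata):

```swift
// Back-of-the-envelope math from the figures quoted above.
let pixelsPerImage = 3_200_000_000.0   // 3.2 gigapixels
let bytesPerPixel = 2.0                // 65,536 gray levels = 16 bits = 2 bytes
let bytesPerImage = pixelsPerImage * bytesPerPixel   // 6.4 billion bytes, as the Times says
let imagesPerNight = 1_000.0
let terabytesPerNight = bytesPerImage * imagesPerNight / 1e12
print("\(terabytesPerNight) TB of raw pixels per night")   // 6.4 TB
```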
2025-06-21 02:15:58
Last week’s Six Colors podcast was recorded entirely on iPads running iPadOS 26, mine in California and Dan’s in Massachusetts. The podcast is usually just for Six Colors members, but you can listen to it here if you want.
You’ll be disappointed if you expect to hear anything special about it, though. We both recorded it on our usual Shure MV7 USB microphones, and it just doesn’t sound any different at all. (For the full iPad extravaganza, I should’ve edited it in Ferrite on my iPad, but for expediency’s sake, I didn’t at the time. I’ve since done that just for kicks, and that’s the image at the top of this story.)
It’s probably worth explaining why this feature has so many podcasters and other creators in a bit of a tizzy. Many podcasts are recorded remotely, with people all over the world, and they usually use some sort of app to have that real-time conversation. It was Skype back in the day, and these days it’s often Zoom or a web-based recording program like Riverside. Because those apps prioritize real-time delivery of audio and video over fidelity, the quality is frequently bad by necessity.
To ensure that the very best audio and video is used in the final product, we tend to use a technique called a “multi-ender.” In addition to the lower-quality call that’s going on, we all record ourselves on our local device at full quality, and upload those files when we’re done. The result is a final product that isn’t plagued by the dropouts and other quirks of the call itself. I’ve had podcasts where one of my panelists was connected to us via a plain old phone line—but they recorded themselves locally and the finished product sounded completely pristine.
The problem has been iPadOS and iOS, which won’t let you run a videoconferencing app and simultaneously run a second app to capture your microphone and video locally. One app at a time is the rule, especially when it comes to using cameras and microphones. Individual iPhone and iPad videoconferencing apps can choose to build in local-recording features if they want, but in practice… they just don’t.
Apple has solved this in an interesting way. What it’s not doing is allowing multiple apps access to the microphone (so far as I can tell; I just tried it, and the moment I started a FaceTime call, my local recording app stopped). Instead, Apple has built in a system feature, found in Control Center, that will capture local audio and video when you’re on a call. It works only while another app is actively using the microphone and camera, so it can’t be set to surreptitiously record stuff, and it displays a recording symbol at the top of the screen when it’s running. When you’re done, you can tap that symbol and it’ll save the file to the Files app.
The file it saves is marked as an mp4 file, but it’s really a container featuring two separate content streams: full-quality video saved in HEVC (H.265) format, and lossless audio in the FLAC compression format. Regardless, I haven’t run into a single format conversion issue. My audio-sync automations on my Mac accept the file just fine, and Ferrite had no problem importing it, either. (The only quirk was that it captured audio at 24-bit, 48 kHz and I generally work at 16-bit, 44.1 kHz. I have no idea if that’s because of my microphone or because of the iPad, but it doesn’t really matter since converting sample rates and dithering bit depths is easy.)
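If your tools are pickier than mine, the conversion is a quick job for ffmpeg. Here’s a minimal sketch (assuming ffmpeg is installed via Homebrew; the file names are placeholders, not what the iPad actually produces):

```swift
import Foundation

// A minimal sketch of the conversion step: drop the HEVC video stream
// and resample the FLAC audio down to a 16-bit/44.1 kHz WAV file.
let ffmpeg = Process()
ffmpeg.executableURL = URL(fileURLWithPath: "/opt/homebrew/bin/ffmpeg")
ffmpeg.arguments = [
    "-i", "Local Recording.mp4",  // the capture saved to the Files app
    "-vn",                        // no video in the output
    "-ar", "44100",               // resample 48 kHz to 44.1 kHz
    "-sample_fmt", "s16",         // reduce 24-bit samples to 16-bit
    "podcast-track.wav"
]
do {
    try ffmpeg.run()
    ffmpeg.waitUntilExit()
} catch {
    print("ffmpeg failed to launch: \(error)")
}
```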
Even in Developer Beta 1, this feature is pretty solid. What’s missing is a better preview of the audio levels and the ability to adjust audio gain, since different microphones have different gain levels and not all of them are easily adjustable. Beyond that, though, this feature is a winner. Podcasters should be rejoicing—I know I am.
2025-06-21 01:00:31
My thanks to Turbulence Forecast for sponsoring Six Colors this week. Whether you want to keep your nerves in check, are flying with kids, or even want to know the best time to get out of your airline seat for a dash to the bathroom, Turbulence Forecast is the easiest way to know in advance just how smooth or bumpy your next flight is going to be.
This is a weather app of a different kind, charting your flight routes and sending you a detailed PDF with multiple maps, with updates pushed right to your phone. You’ll see exactly where it might get bumpy and where it should stay nice and smooth, up to five days ahead, with regular updates as the weather changes.
Turbulence Forecast can see out as far as five days, making it easier to pick a calmer day if your plans are flexible. They’re even tracking their forecast stats so you can see how steady their predictions have been over time.
And if you want a personal touch? Turbulence Forecast offers a Handcrafted Forecast — where the founder, who’s been doing this for 20 years, personally looks at your flight, answers your questions, and even suggests smoother options if you’ve got some wiggle room. (Bonus, he’s a huge Six Colors fan.)
Fly smarter, and much smoother, with Turbulence Forecast, available for iOS and Android as well as at turbulenceforecast.com.
2025-06-20 00:08:44
This year, John Voorhees and I returned to the scene of the crime—the place where we got a demo in 2024 of Swift Assist, a feature that never shipped, even though we could’ve sworn we saw it demoed live—to see the updated Xcode with AI assistance. Same room, same people, but this time the feature wasn’t just promised, it was shipping in Developer Beta 1.
More to the point, as John writes on MacStories, Apple had entirely rearchitected the tool so that developers can use any AI system they want and update to new models as they become available:
I’m not a developer, so I’m not going to review Swift Assist (a name that is conspicuously absent from Apple’s developer tool press release, by the way), but the changes are so substantial that the feature I was shown this year hardly resembles what I saw in 2024. Unlike last year’s demo, this version can revise multiple project files and includes support for multiple large language models, including OpenAI’s ChatGPT, which has been tuned to work with Swift and Xcode. Getting started with ChatGPT doesn’t require an OpenAI account, but developers can choose to use their account credentials from OpenAI or another provider, like Anthropic. Swift Assist also supports local model integration. If your chosen AI model takes you down a dead end, code changes can be rolled back incrementally at any time.
This is perhaps the best sign that Apple’s attitude toward AI and Apple’s role in the world has changed dramatically since 2024. If, in late 2025 or early 2026, a new coding model becomes all the rage with developers, Xcode will be able to use that model. That’s a big step forward for Apple.
2025-06-19 21:00:53
Another Worldwide Developers Conference is in the books, and after a week of keynotes, briefings, and travel, I’ve finally had a chance to sit and zoom out to the 35,000-foot view of the company’s latest announcements.
The Apple of 2025 has definitely learned some lessons.
In hindsight, last year’s event seems even rockier, with the company hustling to unveil Apple Intelligence, including showing off features that still have yet to ship. To its credit, Apple avoided doubling down on those mistakes with this year’s announcements without fully repudiating its previous steps. Instead, the company went back to focusing on the assets that make it the best at what it does. In other words, the ones that let Apple be Apple.
WWDC is a time for Apple to talk about its platforms, and those platforms are one of the company’s biggest competitive advantages. Not just in terms of size, which reaches into the billions of devices, but in terms of its ardent ecosystem.
Or, as Steve Ballmer might put it…developers.
Apple’s 2025 announcements play to this strength, most prominently in the choice to open up access to on-device foundation AI models for third parties. Previously, developers’ options were to make a (probably expensive) deal with a third-party AI provider such as OpenAI, to pack in a third-party on-device model (a real can of worms), or to spend a lot of money and energy building their own model. While Apple’s on-device models might not be as powerful or cutting edge as those from competitors, they do have the benefit of being free, both in terms of cost and in terms of being part of the platform.
And that opens up so many opportunities for developers to enhance their apps or even create new types of apps, which in turn bestows benefits on Apple’s platform: reminding developers that it’s the best place to build their app.
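For a concrete sense of what “part of the platform” means, here’s roughly what calling the on-device model looks like, based on the Foundation Models framework in the developer beta (these API names come from the beta and could change before release):

```swift
import FoundationModels

// A sketch of “free, on-device” inference for a third-party app,
// per the developer beta; API names may change before release.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

No API key, no per-token bill, no network round trip: the model ships with the OS.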
That’s just the most prominent example, too. Apple also announced that Live Translate will be available as an API for developers of other voice communication apps, that the new animated lock screen for Music will also be available to other media apps, and that authentication codes can now be autofilled from other messaging apps. All of these features make third-party apps more useful by offering features that aren’t just restricted to Apple’s own software.
Speaking of third-party developers, one claim that often gets leveled at Apple is its tendency to “Sherlock” apps: that is, provide functionality in its OS that’s already offered by third-party apps, thus arguably making it harder for those apps to survive.
However, that phenomenon has always been more nuanced, thanks to the fact that Apple more often than not implements only the lowest common denominator version of these pieces of functionality.
For example, take the new Clipboard History feature of macOS. It does only what its name suggests: keeps a history of things on your clipboard. That’s useful enough in and of itself, but what it doesn’t do is any of the features offered by the slew of third-party clipboard managers. Myself, I’m a longtime user of Tapbots’s Pastebot, which not only provides a clipboard history but also lets me store snippets for frequent use or apply filters to text on the clipboard, such as converting it to plain text or turning Markdown into HTML. Those are features Apple’s not about to build into its own Clipboard History feature, because they don’t really deal with the stated purpose: keeping track of items that were on your clipboard.
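To make the distinction concrete, here’s an illustrative sketch (mine, not Apple’s actual implementation) of the “lowest common denominator” core that a clipboard history amounts to:

```swift
// Illustrative only: a clipboard history is, at its core, just a
// bounded list of the most recent pasteboard items.
struct ClipboardHistory {
    private(set) var items: [String] = []
    let capacity: Int

    init(capacity: Int = 50) {
        self.capacity = capacity
    }

    mutating func record(_ item: String) {
        items.insert(item, at: 0)        // newest entry first
        if items.count > capacity {
            items.removeLast()           // drop the oldest entry
        }
    }
}
```

Everything Pastebot layers on top (snippet storage, paste filters like Markdown-to-HTML) lives outside that simple loop, and that’s the part Apple isn’t chasing.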
Moreover, you can make the argument that Apple’s approach can actually be a boon for third-party developers. For one thing, most customers of these apps are likely not about to give them up simply because Apple has implemented part of the feature set. For another, the existence of Apple’s own approach actually clues users into the fact that such features exist. A user who had never before thought of having a clipboard history might find themselves wishing the feature went even further, and as a result seek out more capable alternatives.
The same can be said for Apple’s improvements to Spotlight, which would seem to position it directly in competition with popular third-party launchers like Alfred, Raycast, and LaunchBar. But those apps provide a ton of features not available in Spotlight, and people who start using Spotlight’s new features and enjoy them might be inspired to check out those more capable alternatives.
This brings us to one of the best moves from Apple in this year’s platform updates: a focus on features that people actually use.
In tech parlance, we often call much of what Apple showed off this year “quality of life” enhancements. Sometimes that term gets deployed with a negative connotation, as if to contrast with big marquee feature announcements, but to be honest, these are my favorite type of features. Because who doesn’t want their life to be a little higher quality?
A lot of these quality of life features are underpinned by some form of AI—or, to use the preferred term from the pre-LLM gold rush, “machine learning.” These are the kinds of things that Apple’s been doing for years, like making it possible to search through your photos by describing what’s in them, or even to copy text in columns.
And that’s a win for Apple, because it’s not just about throwing up your hands and saying “AI can solve this!” or assuming that LLMs are the answer to literally any computing problem. It’s about tactically deploying the right tool for the job, whether it’s automatically categorizing reminders or providing summaries of voicemails.
The end result should always be about computers doing the tedious work so that we don’t have to, not about replacing the things that we really enjoy doing. And Apple this year seems to have remembered that its core business is making products that people like.
In recent years, Apple’s annual platform updates have come with a raft of footnotes about features being available “later this fall” or sometimes “later this year” or occasionally “¯\_(ツ)_/¯”.
Having learned its lesson from last year’s enhanced Siri features that never arrived, the company has apparently decided that it’s going to show off only what it plans on shipping. That’s why almost all the features announced in the keynote this year are available, right now, in the developer beta and, to my knowledge, all the features are intended to be available when the updates ship in the fall.
Obviously, that’s good from a marketing perspective, since it prevents Apple from running afoul of problems like having to pull commercials that feature unshipped features. It should also reassure company watchers who worried that Apple was getting out over its skis by overpromising and underdelivering.
And, as a bonus, it means that Apple has the opportunity to roll out new features throughout the year in subsequent updates. That means more opportunities for them to capture the attention of the press, and, more importantly, more chances to, in their own oft-repeated words, surprise and delight their customers. Something we could all use a little more of these days.
2025-06-19 03:55:20
Apple has largely tied major revisions of tvOS to the launch of new Apple TV hardware over the years. Since the introduction of Apple TV+, WWDC’s tvOS “features” have mostly been sizzle reels of Apple TV+ shows, with very little about tvOS itself. This WWDC gave us a trickle of announcements that don’t seem to align with what I would consider to be the rough spots in the tvOS user experience.
It is possible that Apple is holding back meaningful revisions until they launch an updated Apple TV box this fall. Maybe they’ll even mention the 10th anniversary of tvOS itself, which was unveiled in September of 2015 at the iPhone 6S launch event. Until then, I guess we should reflect on what’s been announced instead of on wish lists of what could be.
I’m not going to rip into the design in beta 1, which is mostly a conservative evolution of what came before, but with highlights on edges. However, Apple has singled out one very specific part of the interface as working as intended, and I will push back on that.
Apple has two kinds of Liquid Glass (Regular and Clear) and Clear is supposed to be used over rich media, like video. The only things that define the existence of the controls are the highlights and brighter/blurry refractions visible through the clear elements.
Well, gee whiz, aren’t clear glass playback controls going to be difficult to see over video, especially when it’s playing through the controls?
To make the controls easier to discern, Apple applies a dimming layer on everything around the controls, but not on the video visible through the controls. It’s like someone stenciled out aftermarket window tinting.
Apple says this is on purpose in its Meet Liquid Glass WWDC video, when demonstrating playback controls on iOS. In its Newsroom post for tvOS, it says: “tvOS 26 is designed to keep the focus on what’s playing so users never miss a moment.”
This is bananas. How is this getting out of the way of the content? You can barely discern the playback timeline and playhead while motion is occurring through the element, which causes it to pulse in a thin strip. What is being achieved here? The playback controls and timeline should be flat. No one is going to feel sad that there’s no glass effect in this one spot, where it serves no practical or artistic purpose other than being a wicked smart shader demo.
Another notable change in the interface is the pivot from horizontal thumbnails to portrait-orientation posters. Apple says that this means more tiles can fit on the screen, but that’s only more tiles visible in one row, and it’s only one additional tile over the smallest scale thumbnails (6 posters instead of 5 thumbnails). The older design had thumbnails that matched the aspect ratio of the TV in various sizes so you’d get more rows with fewer titles visible on screen in each row.
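Some purely illustrative arithmetic shows the tradeoff (the 1920-point grid width and gutter-free layout are my assumptions, not Apple’s specs; the 5-versus-6 tile counts are what the interface shows):

```swift
// Compare row heights at the tile counts described above.
let screenWidth = 1920.0
let thumbWidth = screenWidth / 5        // five 16:9 thumbnails per row
let thumbHeight = thumbWidth * 9 / 16   // 216 points tall
let posterWidth = screenWidth / 6       // six 2:3 posters per row
let posterHeight = posterWidth * 3 / 2  // 480 points tall
print(posterHeight / thumbHeight)       // ≈ 2.2: each poster row is over twice as tall
```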
To compensate for this difference in aspect ratio, the text that was below or next to the thumbnails is now on top of them. I’ll let readers debate which is more legible, and whether or not the text is always helpful.
This decision pushes content downward. If you want to see what kind of category you’re in the mood for, you’ll do more scrolling, which means it will take you longer to count the number of times the TV app recommends you watch “Stick.” Unless you really want to flip through one particular row of the interface one title faster, it’s not really an improvement.
I’m unclear on Apple’s continued push to get developers to adopt its user profile system. It really doesn’t provide any benefit to the developers of large streaming services, which need to have their own multi-platform profile systems with personalized content recommendations, and it doesn’t provide substantial benefit to households with shared viewing.
I have no animosity towards user profile improvements whatsoever, and I do appreciate that on your first boot of tvOS 26 you can say you never want to see the profile switcher. However, system-level user profiles just don’t feel like the area of the TV viewing experience that needs this much attention when compared to other aspects.
If I were being generous, I could hypothesize that this emphasis on user profiles is because there will be some genuine effort put into personalizing the TV app based on the active user profile.
Unfortunately, you still can’t express any kind of preference in “personalized” areas of the interface to mark a recommended show as watched (without first adding each title to your Watch List and then marking it there) nor can you express that you have no interest in a title.
Even if increased personalization is on the horizon, there’s no reason to expect that to work as well as the personalization offered in each streaming app’s own recommendation systems. Such a thing requires developer participation and cooperation with Apple.
Speaking of developer participation…
The 10th anniversary of Single Sign-On is next year, so we’ll be celebrating this latest attempt a little early. That first attempt used a convoluted system that recognized your cable provider and authenticated all the individual apps that worked with your existing subscriptions, so you wouldn’t have to sign in to each one. Just 18 months later, Apple announced zero sign-on: if you were on a qualifying provider’s internet network, the apps would authenticate on their own.
It’s safe to say that these systems almost immediately became obsolete, because they were centered on a business relationship between customers and service providers that was in rapid decline. Apple’s blind spot here was believing that anything not subscribed to via a cable provider would be subscribed to via Apple. Due to Apple’s App Store policies on subscriptions, many streamers have left the App Store behind. That means people have to do little sign-on dances that make using Apple products as frustrating as cheap streaming hardware.
Instead of repairing its relationships with streamers, it’s providing this very latest sign-on feature, which links accounts via your Apple Account email address… but requires streamers to want to implement it. I hope they do, and I hope it works to make everyone happier.
I find myself scratching my head at the announcement that you can use iPhones as microphones for Apple TV-mediated karaoke.
Look, this feature won’t hurt me, or cause harm to the world—with the possible exception of those within earshot—but it’s such a niche thing to do. I have to imagine that someone took a look at the collection of technologies that Apple had built and realized they could put them together, you know, for fun!
I hope people who use this feature do have fun. But it’s a strangely specific thing to use as a selling point, when there are other use cases for the Apple TV, such as watching television, that might be better places to focus.
I want tvOS to improve, and I’m frustrated when another WWDC comes along and the changes are as minor as they were this year. I hold out some hope that there’s more to announce, and that it’s being held back for a new Apple TV hardware announcement. But for now, we’ve got tvOS 26… and it cuts down on information density and makes see-through timelines.
tvOS needs to sort out the dichotomy between the home screen and the TV app. The current TV app is a mess and needs to be upgraded to support features that Apple has never taken a single pass at, like a universal live guide. I don’t expect them to be perfect, but it would be nice if we could see that Apple is making an effort. Change is long overdue for a platform that many take for granted. Apple needs to try harder at the TV part of tvOS.