2026-01-14 01:11:25
I enjoyed this thought-provoking piece by (award-winning developer) Matt Birchler, writing for Birchtree on how he’s been making so-called “micro apps” with AI coding agents:
I was out for a run today and I had an idea for an app. I busted out my own app, Quick Notes, and dictated what I wanted this app to do in detail. When I got home, I created a new project in Xcode, I committed it to GitHub, and then I gave Claude Code on the web those dictated notes and asked it to build that app.
About two minutes later, it was done…and it had a build error.
And:
As a simple example, it’s possible the app that I thought of could already be achieved in some piece of software someone’s released on the App Store. Truth be told, I didn’t even look, I just knew exactly what I wanted, and I made it happen. This is a quite niche thing to do in 2026, but what if Apple builds something that replicates this workflow and ships it on the iPhone in a couple of years? What if instead of going to the App Store, they tell you to just ask Siri to make you the app that you need?
John and I are going to discuss this on the next episode of AppStories, which covers the second part of the experiments we did over our holiday break. As I’ll mention in the episode, I ended up building 12 web apps for things I have to do every day, such as appending text to Notion just how I like it or controlling my TV and Hue sync box. I didn’t even think to search the App Store to see if new utilities existed: I “built” (or, rather, steered the building of) my own progressive web apps, and I’m using them every day. As Matt argues, this is a very niche thing to do right now: it requires a terminal, lots of scaffolding around each project, and deeper technical knowledge than the average person who would just prompt “make me a beautiful todo app” is likely to have. But the direction seems clear, and the timeline is accelerating.
I also can’t help but remember this old rumor from 2023 about Apple exploring the idea of letting users rely on Siri to create apps on the fly for the then-unreleased Vision Pro. If only the guy in charge of the Vision Pro went anywhere and Apple got their hands on a pretty good model for vibe-coding, right?
2026-01-14 00:33:14

Source: Apple.
Today, Apple announced Apple Creator Studio, a suite of creativity apps for the Mac and iPad combined with premium content and features for productivity apps across the company’s platforms. This collection of apps, which includes the debut of Pixelmator Pro for iPad, offers tools for creative professionals, aspiring artists, students, and others working across a wide variety of fields, including music, video, and graphic design.
The bundle includes a number of apps – Final Cut Pro, Motion, Compressor, Logic Pro, MainStage, and Pixelmator Pro – each covered in more detail below.
It also features a new Content Hub with premium graphics and photos for Apple’s iWork suite – Pages for word processing, Keynote for presentations, and Numbers for spreadsheets – as well as exclusive templates, themes, and AI features. The company says these features will also come to its Freeform canvas app soon.
Apple Creator Studio will be available on Wednesday, January 28, for $12.99/month or $129/year with a one-month free trial. Students and teachers can subscribe at a discounted rate of $2.99/month or $29.99/year, and three months of Apple Creator Studio will come free with the purchase of a new Mac or iPad. The subscription also includes Family Sharing, allowing users to share the apps and features with up to five family members.
With this offering, Apple is combining several disparate offerings for creatives into a single package that looks quite compelling. Because many of these apps are also available individually – some of them for free – there are a lot of details to get into regarding what’s new, what’s included, and what’s available elsewhere. Let’s get into it.

Beat Detection in Final Cut Pro. Source: Apple.
Final Cut Pro will continue to be available as a one-time $299.99 purchase on the Mac. Whether you purchase it that way or subscribe to access the app on both Mac and iPad, you’ll get a variety of new features, including Transcript Search, which finds moments in footage based on dialogue, and Visual Search. Beat Detection will help video editors make cuts to match the rhythm of music playing under a video. And Montage Maker on the iPad is a new AI feature that automatically pulls together the best moments of a user’s footage into a montage, with options to adjust the pacing, edit the video to match a music track, and reframe from a horizontal aspect ratio to vertical for sharing on social media.
Mac-exclusive video tools Motion and Compressor are included in the bundle but also remain available to purchase separately for $49.99 each.

The Sound Library in Logic Pro. Source: Apple.
Logic Pro will similarly be available to purchase on the Mac for $199.99, but both the Mac and iPad versions will be included in Creator Studio. New features include the addition of a Synth Player to the app’s collection of AI Session Players; the player can be directed based on the sound the user is going for, and it can even access third-party Audio Units and play hardware synthesizers. Chord ID turns recorded music into readable chord progressions. The Mac version of Logic Pro gets a new Sound Library, while the iPad version gains natural language search for the Sound Browser via Music Understanding as well as the Quick Swipe Comping feature previously exclusive to the Mac.
MainStage for the Mac is available as part of Creator Studio or as a separate one-time $29.99 purchase.

Pixelmator Pro for iPad. Source: Apple.
Pixelmator Pro comes to the iPad for the first time with a UI optimized for touch and the Apple Pencil as well as file compatibility with its Mac counterpart. Familiar image editing tools like smart selection, Super Resolution, Deband, and Auto Crop are available in the iPad version, and the Apple Pencil integration is optimized for painting digital art. A new Warp tool is available as a Creator Studio exclusive in both the Mac and iPad versions of Pixelmator Pro.

The new Content Hub. Source: Apple.
Pages, Keynote, and Numbers will remain free and receive a Liquid Glass update, but Creator Studio subscribers gain access to new tools within Apple’s productivity apps. The Content Hub is a collection of images, graphics, and illustrations available for subscribers to include in their documents and presentations. Subscribers also have access to exclusive themes and templates, as well as experimental features. In Keynote, subscribers can generate presentations from an outline, create speaker notes based on slide content, and automatically clean up the layout of their slides. And Numbers includes a new Magic Fill feature to generate formulas and fill in tables automatically.
Apple has long offered powerful apps for creatives, but they’ve never been put together in a single package in this way before. For those who rely on these tools for their everyday work, it’s an exciting proposition to get access to everything, including exclusive and experimental features, for a single price. At the same time, it will be interesting to see how these changes are communicated in practice to those who aren’t subscribed. For example, how prominent will the Content Hub be for the many, many free users of the iWork apps?
We’ll find out exactly how it all works on January 28. In the meantime, I’m encouraged to see all the progress being made on Apple’s creative tools, especially on the iPad, and I look forward to giving Apple Creator Studio a try.
2026-01-13 04:26:59

Step 1: Transcribe with parakeet-mlx.
I had wanted to transcribe AppStories and MacStories Unwind for years, but the tools at the time were either too inaccurate or too expensive. That changed three years ago with OpenAI’s Whisper, an open-source speech-to-text model that blew away other readily available options.
Still, the results weren’t good enough to publish those transcripts anywhere. Instead, I kept them as text-searchable archives to make it easier to find and link to old episodes.
Since then, a cottage industry of apps has arisen around Whisper transcription. Some of those tools do a very good job with what is now an aging model, but I have never been satisfied with their accuracy or speed. However, when we began publishing our podcasts as videos, I knew it was finally time to start generating transcripts because as inaccurate as Whisper is, YouTube’s automatically generated transcripts are far worse.

VidCap in action.
My first stab at video transcription was to use apps like VidCap and MacWhisper. After a transcript was generated, I’d run it through MassReplaceIt, a Mac app that lets you create and apply a huge dictionary of spelling corrections using a bulk find-and-replace operation. As I found errors in AI transcriptions by manually skimming them, I’d add those corrections to my dictionary. As a result, the transcriptions improved over time, but it was a cumbersome process that relied on me spotting errors, and I didn’t have time to do more than scan through each transcript quickly.
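For anyone curious what that kind of dictionary-based cleanup looks like, here’s a minimal sketch in Python. The file names and example corrections are hypothetical, and MassReplaceIt itself is a GUI app that works differently; this just illustrates the bulk find-and-replace idea.

```python
import json
import re
from pathlib import Path

# Hypothetical corrections file, e.g. {"App Stories": "AppStories", "Mac Stories": "MacStories"}.
CORRECTIONS_FILE = Path("corrections.json")

def apply_corrections(text: str, corrections: dict[str, str]) -> tuple[str, int]:
    """Apply each known misspelling as a whole-word find-and-replace."""
    total = 0
    for wrong, right in corrections.items():
        pattern = re.compile(rf"\b{re.escape(wrong)}\b")
        text, count = pattern.subn(right, text)
        total += count
    return text, total

if __name__ == "__main__":
    corrections = json.loads(CORRECTIONS_FILE.read_text())
    transcript = Path("transcript.txt").read_text()
    cleaned, fixed = apply_corrections(transcript, corrections)
    Path("transcript-cleaned.txt").write_text(cleaned)
    print(f"Fixed {fixed} known misspellings.")
```

The catch, of course, is that a dictionary like this only ever fixes mistakes you’ve already spotted once by hand.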
That’s why I was so enthusiastic about the speech APIs that Apple introduced last year at WWDC. The accuracy wasn’t any better than Whisper, and in some circumstances it was worse, but it was fast, which I appreciate given the many steps needed to get a YouTube video published.
The process was sped up considerably when Claude Skills were released. A skill can combine a script with instructions to create a hybrid automation with both the deterministic outcome of scripting and the fuzzy analysis of LLMs.

Transcribing with yap.
First, I’d run yap, a command line tool that transcribes videos using Apple’s speech-to-text framework. Next, I’d open the Claude app, attach the resulting transcript, and run a skill that executed my script to replace known spelling errors. Then, Claude would analyze the text against its knowledge base, looking for other likely misspellings. When it found one, Claude would reply with some textual context, asking whether the proposed change should be made. After I responded, Claude would further improve my transcript, and I’d tell Claude which of its suggestions to add to the script’s dictionary, helping improve the results a little each time I used the skill.
Over the holidays, I refined my skill further and moved it from the Claude app to the Terminal. The first change was switching to parakeet-mlx, an Apple silicon-optimized version of NVIDIA’s Parakeet model that was released last summer. Parakeet isn’t as fast as Apple’s speech APIs, but it’s more accurate, and crucially, its mistakes are phonetically closer to the right answers than the ones made by Apple’s tools. Consequently, Claude is more likely to catch mistakes that aren’t in my dictionary of misspellings during its final review.
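If you want to try parakeet-mlx yourself, the basic transcription call is only a few lines. This is a sketch based on the project’s documented Python API; the model name and file path are placeholders, so check the parakeet-mlx README for the current details.

```python
from parakeet_mlx import from_pretrained

# Placeholder checkpoint; any MLX-converted Parakeet model should work.
model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")

# Transcribe an episode's audio track (the path is a placeholder).
result = model.transcribe("appstories-episode.m4a")
print(result.text)
```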

Managing the built-in corrections dictionary.
With Claude Opus 4.5’s assistance, I rebuilt the Python script at the heart of my Claude skill to run videos through parakeet-mlx, saving the results as a .srt or .txt file (or both) in the same location as the original file, with the filename prepended with “CLEANED TRANSCRIPT.” Because Claude Code can run scripts and access local files from Terminal, the transition to the final fuzzy pass for errors is seamless. Claude asks permission to access the cleaned transcript file that the script creates and then generates a report with suggested changes.
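As a rough illustration of the script’s shape – a sketch, not the actual script – the snippet below assumes the parakeet-mlx API from the example above, a placeholder model name, the same hypothetical corrections.json format as the earlier sketch, and a guess at the shape of the library’s timed output.

```python
import json
from pathlib import Path

from parakeet_mlx import from_pretrained  # assumed API, as in the earlier sketch

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

def clean_transcript(video: Path, corrections: dict[str, str]) -> Path:
    """Transcribe a video and apply the known-misspellings dictionary."""
    model = from_pretrained("mlx-community/parakeet-tdt-0.6b-v2")  # placeholder model name
    result = model.transcribe(video)

    # Assumes the result exposes timed sentences; adjust to the library's actual output.
    blocks = []
    for index, sentence in enumerate(result.sentences, start=1):
        text = sentence.text
        for wrong, right in corrections.items():
            text = text.replace(wrong, right)
        blocks.append(
            f"{index}\n{srt_timestamp(sentence.start)} --> {srt_timestamp(sentence.end)}\n{text}\n"
        )

    # Save next to the original file, prefixed the way the post describes.
    out = video.with_name(f"CLEANED TRANSCRIPT {video.stem}.srt")
    out.write_text("\n".join(blocks))
    return out

if __name__ == "__main__":
    corrections = json.loads(Path("corrections.json").read_text())
    print(clean_transcript(Path("appstories-episode.mp4"), corrections))
```

From there, Claude Code reads the cleaned file the script produces and handles the fuzzy review pass.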

A list of obscure words Claude suggested changing. Every one was correct.
The last step is for me to confirm which suggested changes should be made and which should be added to the dictionary of corrections. The whole process takes just a couple of minutes, and it’s worth the effort. For the last episode of AppStories, the script found and corrected 27 errors, many of which were misspellings of our names, our podcasts, and MacStories. The final pass by Claude caught seven more issues, everything from a misspelling of the band name Deftones to Susvara, a model of headphones, and Bazzite, an open-source SteamOS project. Those are far from everyday words, but now their misspellings are not only fixed in the latest episode of AppStories, they’re also in the dictionary, where they’ll always be corrected whether Claude’s analysis catches them or not.

Claude even figured out “goti” was a reference to GOTY (Game of the Year).
I’ve used this same pattern over and over again. I have Claude build me a reliable, deterministic script that helps me work more efficiently; then, I layer in a bit of generative analysis to improve the script in ways that would be impossible or incredibly complex to code deterministically. Here, that generative “extra” looks for spelling errors. Elsewhere, I use it to do things like rank items in a database based on a natural language prompt. It’s an additional pass that elevates the performance of the workflow beyond what was possible when I was using a find-and-replace app and later a simple dictionary check that I manually added items to. The idea behind my transcription cleanup workflow has been the same since the beginning, but boy, have the tools improved the results since I first used Whisper three years ago.
2026-01-13 02:54:14
Last Friday, basketball fans in the Los Angeles Lakers market got their first glimpse of an immersive live game when the Lakers faced the Milwaukee Bucks on Spectrum Front Row on Apple Vision Pro. While that experience was limited geographically and only available to Spectrum customers via the Spectrum SportsNet app, the game replay is now available widely and for free in the NBA app. Vision Pro users in eligible regions outside Lakers territory can download the app, sign up for an NBA ID, and stream the game replay and highlights today. The full schedule and availability of immersive Lakers games were announced last week.
Being from Arkansas and not California, I missed out on the live premiere, but I was able to check out the game replay on my Vision Pro yesterday, and the experience was fantastic. Most of the game was shown from a front-row courtside perspective, which meant I was literally turning my head from side to side as the teams moved up and down the court. It was very different from the bird’s-eye view I’m used to watching televised sports from, and it really gave me the impression of being in the arena. At one point, when a member of the Lakers scored a point, I felt the urge to start clapping as if they could hear me, even though I was sitting in my bedroom, not at the Lakers game.
There were several other camera angles that the broadcast cut to from time to time. The behind-the-basket view was a fun way to take in the action when someone was about to score, and there was a roaming camera that brought you onto the court itself before the game and during halftime as well. The cuts were sparing, which made the whole experience feel less jarring than some of the immersive sports highlights we’ve seen on Vision Pro before, but the combination of immersive video and multiple angles offered the best of both worlds. It felt like I was actually there taking in the game, and no matter what was happening, I always got to see it from the best angle.
Even if you’re not a big fan of basketball or the Lakers, it’s worth checking out the replay to see what the experience is like. Right now, broadcasting a game in this way is a big undertaking, but I have a feeling it will only become more and more common with time. If this concept eventually expands to other sports and live experiences like concerts, theatrical performances, and more, it would make a really compelling case for the Vision Pro and the sorts of capabilities only visionOS can offer.
2026-01-12 23:56:59
Apple has confirmed to CNBC that it has entered into a multi-year partnership with Google to use the search giant’s models and cloud technology for its own AI efforts. According to an unnamed Apple spokesperson:
After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users.
The report still leaves many questions unanswered, including how Gemini fits in with Apple’s own Foundation Models and whether and to what extent Apple will rely on Google hardware. However, after months of speculation and reports from Mark Gurman at Bloomberg that Apple and Google were negotiating, it looks like we’re on the cusp of Apple’s AI strategy coming into better focus.
UPDATE:
Following Apple’s statement to CNBC, Apple and Google released a slightly more detailed joint statement, which Google published on X:
Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.
After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.
So, while the Apple Foundation Models that power Apple Intelligence will be based on Gemini and unspecified cloud technology, Apple Intelligence features themselves, including a more personalized Siri, will continue to run locally on Apple devices and on Apple’s Private Cloud Compute to maintain user privacy.
2026-01-12 21:22:05
Copilot Money, the only personal finance app to win an Apple Editor’s Choice award, now gives you a seamless, cross-device way to manage your money across iPhone, iPad, Mac, and Web.
With Copilot’s beautifully designed interface and powerful financial insights, you can see your spending, budgets, investments, and net worth all in one place. Everything syncs automatically across devices, so whether you’re at your desk or on the go, you always have an up-to-date picture of your finances.
Copilot is built to help you go deeper without feeling overwhelmed. Transactions are intelligently categorized, trends surface automatically, and insights are presented in a way that feels intuitive and confidence-building, not stressful.
Whether you’re tracking monthly spending, planning ahead, or working toward long-term goals, Copilot’s unified dashboard helps you feel clear, calm, and in control of your money.
Copilot Money is offering MacStories readers an extended two-month free trial with code MACSTORIES. Plus, for one more week, you can save 26% on your first year through the link below.
👉 Visit Copilot Money to explore the app and start your extended free trial today. Offer available to new users only, exclusively through this link.
Our thanks to Copilot Money for sponsoring MacStories this week.