
Blog of Good Enough

We’re Shutting Down Yay.Boo and Ponder

2025-11-03 08:00:00

We have built a lot of good products here at Good Enough. Whether you’re sharing an inbox with your team or avoiding social media with a blog, we’ve got you covered. Unfortunately, there are some products that, while very nice, have not had our attention for a long time. Two of those products are Yay.Boo and Ponder.

Ponder, our take on small forum software, was one of the first things we built as a collective. Even today, it works really well for a small group of polite folks to talk about a shared interest. Unfortunately, not many small groups found Ponder, and our team hasn’t had the bandwidth to continue improving the software.

Yay.Boo is a delightful tool with which to quickly throw some HTML online. On top of that, it has always been a playground to push the envelope on just what a product homepage could look like. Unlike Ponder, Yay.Boo is even getting a decent amount of use.

If you look closely, you may see a shared risk that each of these products poses. To run both Yay.Boo and Ponder responsibly, a fair amount of time must be spent on moderation. Every Yay.Boo site update needs to be checked. We also don’t want any nasty content to end up hosted on Ponder, which is even trickier to moderate since the groups are private.

Leaving those services online and not paying attention to them is something that we do not feel comfortable doing. Since our small team is not focused on these products, we need to do the responsible thing and move on from them.

So, with heavy hearts, we’re going to shut down Ponder and Yay.Boo*. Signups are already turned off. On December 3rd we’ll delete all user data and flip the power to off.

To those of you still using Yay.Boo and Ponder, we’re sorry. There are some good alternatives to Yay.Boo: tiiny.host, Netlify Drop, and static.app. Ponder is a different beast and, sadly, we do not know of many products that fit a similar mold. Liminal is the only one we’ve come across that might be close.

To anyone who was gracious enough to pay for a Yay.Boo account, double thank you! We have canceled all accounts and you won’t be billed again. If you have any questions about the shutdown or your data, please also don’t hesitate to get in touch.

While it stinks to have to send this message, we hope you’ll understand that a lot of thought went into this decision. Through these years at Good Enough we’ve discovered which products really excite us; properly saying goodbye to Ponder and Yay.Boo will allow us to focus on those products instead.

* A final note: if you’re reading this and thinking that you’d be just the person to take on either of these softwares, do let us know. We haven’t 100% closed the door to passing these products on to a person or team that could care for them, and we’d be happy to listen to your pitch.

Thank you and keep keeping the web weird!

TIL: Rails, CloudFront CDN, and imgproxy

2025-10-14 08:00:00

In September, I worked on improving Pika’s image performance. I’ve had a long career now (25 years 😭) doing mostly web-programming tasks, yet somehow I’ve never set up a CDN myself. I suppose the "management years" right as my prior organization was getting bigger contributed to missing out on that experience. In any case, the work was overdue on Pika and it was time to tackle it.

Through a bit of help from online articles and online friends, I’ve gotten it mostly figured out. Here is Pika’s setup.

The tools

Since we started Good Enough with lots of AWS credits, Amazon has got us a bit locked in with their services. And since, remember, I have no past experience setting these things up, well, I tallied-ho with Amazon’s CloudFront for the CDN and S3, which we were already using, for storage. Through this process I had a lot of “grass is greener” feelings toward Cloudflare and Cloudflare R2, but I’ll save that dalliance for another day.

I started thinking about the many background jobs I was going to need to orchestrate for creating the various tuned images (resizing, removing Exif data, compression, etc). Through that research I ran into John Nunemaker’s Imgproxy is Amazing blog post. I reached out to confirm that he is still using imgproxy, and, boy howdy, is he ever. Thanks to Nunes for sharing many details about how he has configured both imgproxy and CloudFront!

The flow

When someone’s browser requests an uncached image from a Pika blog post, here’s how that request flows through all of these systems:

         ┌────────────┐               
         │            │               
         │  Reader    │               
         │  requests  │               
         │  image     │               
         │            │               
         └───┬────────┘               
             │    ▲                   
             │    │                   
             ▼    │                   
 ┌────────────────┴───────────┐       
 │                            │
 │  Regional CloudFront node  │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │       CloudFront │ caches   
       │       in regions │ and at
       │           shield │    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │  CloudFront Origin Shield  │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │         imgproxy │ strips    
       │            Exif, │ resizes,
       │              and │ compresses    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │          imgproxy          │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │                  │    
       │                  │    
       │                  │    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │             S3             │
 │                            │
 └────────────────────────────┘

The configuration details (as of today)

Let’s start one step above Rails with the imgproxy setup.

imgproxy

We deploy our services at Render.com. This is the full contents of the Dockerfile we use to deploy our Pika imgproxy web service instance:

FROM ghcr.io/imgproxy/imgproxy:latest

To configure imgproxy I am using environment variables to the max. Here are the environment variables I’m currently using:

  • IMGPROXY_TTL = 30758400: Feeling pretty confident here and setting the TTL to 1 year. Attaching images to rich text fields in Rails should never really re-use an existing image or its URLs, making cache invalidation happen as a matter of course.
  • IMGPROXY_FALLBACK_IMAGE_DATA = R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7: This is a 1x1 transparent GIF fallback image in case imgproxy cannot retrieve the requested image.
  • IMGPROXY_FALLBACK_IMAGE_TTL = 120: I set the TTL for the fallback image to be much less than our system TTL set above. I don’t want system hiccups to lead to broken images. Well, not for more than 2 minutes, anyway!
  • IMGPROXY_FORMAT_QUALITY = jpeg=90,png=90,webp=79,avif=63,jxl=77: Setting mild compression for all images. I am very cautious about over-compressing anything in Pika, and the default compression of 80 was too extreme for me. webp, avif, and jxl formats are not currently used in Pika, but I added them here to match the defaults that imgproxy uses for IMGPROXY_FORMAT_QUALITY. The gif format is also not being used, as you'll see below.
  • IMGPROXY_STRIP_COLOR_PROFILE = false: Related to the above, I want Pika to be as color-accurate as possible.
  • IMGPROXY_MAX_SRC_RESOLUTION = 75: Did you know there is such a thing as image bombs? Neither did I! imgproxy can protect you from them.
  • IMGPROXY_ALLOW_SECURITY_OPTIONS = true: This is required to allow the use of the IMGPROXY_MAX_SRC_RESOLUTION envar.
  • IMGPROXY_USE_S3 = true: This allows imgproxy to grab images directly from S3. Very clever as it saves a trip through our Rails servers! You will also need to set up the following envars: IMGPROXY_S3_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY. The downside with this technique is that the URLs no longer end with image extensions like .jpg, which has caused some problems with third-party services. I do wonder if it has been worth saving that trip through our Rails servers. 🤔
  • IMGPROXY_ALLOW_ORIGIN = https://pika.page: I’m actually not sure if this is needed since we never hit our Rails app when loading an image.
  • IMGPROXY_USE_LAST_MODIFIED = true: Given what I wrote about TTL above, I don’t think this is necessary, but it just feels right.
  • IMGPROXY_SENTRY_DSN: Set this to enable error reporting to Sentry.
  • IMGPROXY_TIMEOUT = 15: I’m not sure why I increased this from the default of 10.
  • IMGPROXY_READ_REQUEST_TIMEOUT = 15: Ditto.
  • IMGPROXY_KEY & IMGPROXY_SALT need to be set as well, of course; imgproxy uses them to verify signed request URLs (see the sketch just below).
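
If you’re curious what those last two are actually for, here is a rough sketch of imgproxy’s URL signing scheme. The imgproxy gem (covered below) handles this for you; the helper name and example path here are made up for illustration, not taken from Pika’s code:

require "openssl"
require "base64"

# IMGPROXY_KEY and IMGPROXY_SALT are hex-encoded strings.
# imgproxy verifies an HMAC-SHA256 of (salt + path), computed with the key.
def signed_imgproxy_url(endpoint, key_hex, salt_hex, path)
  key  = [key_hex].pack("H*")   # hex-decode
  salt = [salt_hex].pack("H*")
  digest = OpenSSL::HMAC.digest("sha256", key, salt + path)
  signature = Base64.urlsafe_encode64(digest, padding: false)
  "#{endpoint}/#{signature}#{path}"
end

# e.g. signed_imgproxy_url("https://cdn.u.pika.page", key, salt,
#                          "/rs:fit:300:300/plain/s3://my-bucket/avatar.jpg")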

CloudFront

Here’s how we have CloudFront configured. I’m only mentioning the settings that we changed from the default.

Main Distribution settings:

  • Alternate domain name: cdn.u.pika.page
  • Custom SSL certificate: Requested through the interface that CloudFront offers inline

Origin:

  • Origin domain: u.pika.page
  • Enable origin shield: Yes, setting the Origin Shield region to be the best match for our other server locations

Behaviors:

  • Compress objects automatically: No
  • Allowed HTTP methods: GET, HEAD, OPTIONS
  • Cache HTTP methods: checked OPTIONS
  • Cache key and origin requests: checked Legacy cache settings (I don’t love that we are on this Legacy option, but I could never get the other option to work)

Logging: Added a log destination because I’m not sure how you troubleshoot without it!

DNS

Here’s how I have DNS configured for CloudFront and imgproxy:

  • Our CDN requests go to cdn.u.pika.page (yes, I can already tell that that should have been cdn1.u.pika.page)
  • The CDN requests our imgproxy origin at u.pika.page (yes, I should have gone with u1.pika.page)
  • At dnsimple I pointed u.pika.page to our imgproxy origin according to Render’s instructions
  • I also added a CNAME record to point cdn.u.pika.page to the Distribution domain name provided by CloudFront
  • As mentioned above, the SSL certificate for cdn.u.pika.page was acquired via CloudFront’s interface, which required setting up a DNS record at dnsimple for certificate validation

The Rails setup

Pika is configured to upload images to S3. This is a pretty straightforward setup that is written about in many other places.
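
If you haven’t done it before, a typical Active Storage S3 setup looks roughly like the following. The bucket name and region here are placeholders, not Pika’s real values:

# config/storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: your-bucket-name

# config/environments/production.rb
config.active_storage.service = :amazon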

I’m using the imgproxy gem to help build URLs for images. (There is also an imgproxy-rails gem, but it didn’t play well with our setup.) Here’s our imgproxy.yml configuration file:

default: &default
  key: <%= Rails.application.credentials.dig(:imgproxy, :key) %>
  salt: <%= Rails.application.credentials.dig(:imgproxy, :salt) %>

development:
  <<: *default
  endpoint: <%= ENV['IMGPROXY_FREE_CDN'] %>

test:

production:
  <<: *default
  endpoint: <%= ENV['IMGPROXY_FREE_CDN'] %>
  use_s3_urls: true

The IMGPROXY_FREE_CDN envar is set to https://cdn.u.pika.page, which is actually the CloudFront CDN URL. Also note use_s3_urls: true for the production environment. This ensures the URLs generated by the imgproxy gem point imgproxy at S3 directly.

The simplest images we serve are site avatars, which can be used in the headings of a blog as well as social share images. Rendering the imgproxy/CDN URL is pretty easy for this example. Here’s what we have in our User model:

has_one_attached :avatar

def avatar_url(variant = :small)
  variant_options =
    case
    when variant == :small
      { height: "100", width: "100" }
    when variant == :medium
      { height: "300", width: "300" }
    end

  avatar.imgproxy_url(variant_options)
end
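
(In a view, rendering one of these is then just something like image_tag user.avatar_url(:medium), which spits out the signed imgproxy URL behind the CDN hostname.)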

Rich text is a whole different beast in Rails. In our case, we have already heavily overridden the _blob.html.erb file, and our CDN updates fit right in there. Along the way I decided not to serve GIF files from imgproxy, so you’ll see some reference to that in the code as well. Processing animated images can get complicated, and I decided to leave that thinking for another day.

Further, for local development I wanted to support accessing a local imgproxy instance, but not break if it isn’t available. So you’ll see mention of an imgproxy? method, which is supported by inclusion of this module in ApplicationHelper and User:

module ImgproxyDetector
  def imgproxy?
    return @imgproxy if defined?(@imgproxy)

    @imgproxy =
      Rails.env.production? ||
        (Rails.env.development? && Rails.application.config_for(:imgproxy).endpoint.present?)
  end
end

Here’s the simplified imgproxy/CDN-related code from our _blob.html.erb file:

<figure class="attachment attachment--<%= blob.representable? ? "preview" : "file" %> attachment--<%= blob.filename.extension %>">
  <% if blob.representable? %>
    <%
      if blob.content_type == 'image/gif' # don't use imgproxy URLs for GIFs in case they are animated
        img_src_url = url_for(blob)
      else
        if imgproxy?
          img_src_url = blob.imgproxy_url(height: "1400", width: "1800")
        else
          img_src_url = url_for(blob.variant(resize_to_limit: [1400, 1800], saver: { quality: 90 }))
        end
      end
    %>

    <%= image_tag img_src_url %>
  <% end %>

  <figcaption class="attachment__caption">
    <% if caption = blob.try(:caption) %>
      <%= caption %>
    <% else %>
      <span class="attachment__name"><%= blob.filename %></span>
      <span class="attachment__size"><%= number_to_human_size blob.byte_size %></span>
    <% end %>
  </figcaption>
</figure>

imgproxy itself is much more performant than a Rails server, but you can’t get around the fact that image processing is resource-heavy. To avoid flooding our imgproxy server with an unpredictable number of requests the first time an image-heavy post is loaded, I decided it would be best to warm the cache as soon as possible. So in the end I wasn’t able to avoid background jobs in our image processing stack. When a new post is created or its images are edited, a background job is enqueued to request the CDN URL for each blob in the post. I’ll leave the exact code as an exercise for the reader, but a rough sketch follows.
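
For the curious, here is what such a warming job could look like. The class name and structure are illustrative guesses rather than Pika’s actual code; the idea is simply to request the same imgproxy/CDN URL that the blob partial above renders:

require "net/http"

class WarmImageCacheJob < ApplicationJob
  queue_as :default

  def perform(post)
    return if post.body.blank?

    post.body.body.attachables.each do |attachment|
      next unless attachment.is_a?(ActiveStorage::Blob)
      next if attachment.content_type == "image/gif" # GIFs skip imgproxy, as above

      begin
        # Requesting the CDN URL makes CloudFront pull the image through imgproxy,
        # so the processed version is cached before any reader asks for it.
        url = attachment.imgproxy_url(height: "1400", width: "1800")
        Net::HTTP.get_response(URI(url))
      rescue StandardError => e
        Rails.logger.warn("Cache warming failed for blob #{attachment.id}: #{e.message}")
      end
    end
  end
end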

You may remember that I mentioned the security concern of image bombing above. While imgproxy protects us from that, I wanted to keep folks from uploading such images in the first place. So I added a validation to check image resolutions, which means I also didn’t manage to avoid doing image processing on our Rails server entirely. 😅 Here is a simplified version of how I do that for rich text image attachments:

# post.rb
has_rich_text :body
validate -> { acceptable_image_attachments(:body) }

def acceptable_image_attachments(attr)
  return true if self.send(attr).body.blank?

  self.send(attr).body.attachables.each do |attachment|
    next unless attachment.is_a?(ActiveStorage::Blob)

    if image_resolution_over_limit?(attachment)
      errors.add(attr, image_resolution_error_message_for(attachment.filename))
    end
  end
end

def image_resolution_over_limit?(blob)
  width, height = blob_dimensions(blob)
  (width.to_f * height.to_f) / 1_000_000.0 > Rails.application.config.x.image_resolution_limit.to_f
end

def blob_dimensions(blob)
  width = blob.metadata["width"]
  height = blob.metadata["height"]

  if width.nil? || height.nil?
    blob.analyze
    width = blob.metadata["width"]
    height = blob.metadata["height"]
  end

  [width, height]
end

# application.rb
config.x.image_resolution_limit = 75 # in megapixels

Local testing is pretty easy once you get it all set up. Well, if you’re familiar with Docker. (I’m really not, but I got it set up, and doing that setup is another exercise I’ll leave to you, dear reader.) Our test code does not use imgproxy, but our development environment sure can. As mentioned above, we have a repo for Pika’s imgproxy that is a very simple Dockerfile.

  • I have Docker and OrbStack installed locally to make things work
  • dotenv is installed to manage my local envars
  • In my .env file I have IMGPROXY_FREE_CDN = "http://localhost:7777"
  • I have foreman installed to handle Procfile applications
  • Then I run foreman start -f Procfile_imgproxy.dev

Here's my Procfile_imgproxy.dev file, which is in my main Rails app:

imgproxy: docker run --rm --name pika-imgproxy -p 7777:8080 --add-host=pika.test:host-gateway -e IMGPROXY_ENABLE_INSECURE_MODE=true -e IMGPROXY_ALLOW_PRIVATE_NETWORKS=true -e IMGPROXY_ALLOW_LOOPBACK_NETWORKS=true -e IMGPROXY_ALLOW_ORIGIN=http://pika.test -e IMGPROXY_FALLBACK_IMAGE_DATA=R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -e IMGPROXY_FALLBACK_IMAGE_TTL=120 -e IMGPROXY_FORMAT_QUALITY=jpeg=90,png=90,webp=79,avif=63,jxl=77 -e IMGPROXY_STRIP_COLOR_PROFILE=false -e IMGPROXY_TTL=604800 -e IMGPROXY_USE_LAST_MODIFIED=true ghcr.io/imgproxy/imgproxy:latest

With this all running, you can see imgproxy in action in your local development environment!

The future

We’re hoping to ride with this setup for quite a while. Down the road we’ll probably look into tuning GIFs, and I may look into ways to implement WebP and AVIF while still keeping colors and performance to our liking. During implementation I did not have good luck making those formats work well.

And, as an admittedly novice CDN implementor, maybe others will read this blog post and have some ideas about how I could improve this setup. Happy to hear them!

Prettier Email Headers

2025-04-30 08:00:00

As we’re building Jelly, we have found ourselves looking at lots of raw emails. In particular, we’ve spent a lot of time with email headers. If you’ve ever had cause to do the same, you know it can lead to lots of scanning and squinting.

There’s got to be a better way! And here it is: Prettier Email Headers. [Ed: This project was shut down in October of 2025.]

With the help of AI, I threw together this tool that accepts a raw email paste. Then it shows those headers and header values in a format that is easier on the eyes. I also asked AI to do some research into the definition of each header and include citations. As always, I practiced the “don’t trust and verify” method when working with AI.

[Screenshot of the Prettier Email Headers tool, showing a better-formatted view of email headers like Return-Path, Received, X-Spam-known-sender, and more.]

If you ever find yourself staring at email headers, I think you should give Prettier Email Headers a try!

How LRUG uses Jelly to remove confusion and simplify organisation

2025-04-28 08:00:00

The London Ruby User Group is one of the longest-running technology user groups in the world, having held a monthly meeting almost every single month since late 2006. It’s no small feat to have kept a local community running for so long, especially since everyone involved is a volunteer.

Jelly is the foundational tool that helps LRUG’s organisers keep things running smoothly.

The beating heart of LRUG is the mailing list, and the team regularly puts out a call for members to give presentations at the monthly LRUG meet-up. Those submissions are sent to an email address which forwards directly into Jelly.

All of the volunteer organisational team can see each talk proposal conversation, and can respond directly or privately discuss the talk right in that thread. And, most importantly, anyone can quickly get up to speed with where a talk proposal is because the whole history of the conversation is right there.

This is particularly important for a volunteer-led organisation like LRUG, because at any time one of the organisation team might be on vacation or busy with life and work; but because any other member can jump in and seamlessly pick up a conversation, LRUG never stumbles.

Jelly helps the LRUG team make sure that no conversation is accidentally ignored, because it’s always clear who’s got what.

As well as talk proposals, LRUG uses Jelly to coordinate with venues, to field questions from community members and attendees, and to coordinate with other user groups. Jelly acts as a shared inbox for all of LRUG’s communication, with specific email addresses for sponsors, job posting to the mailing list, and general organisational matters. Jelly handles everything.

LRUG’s longest-running organiser, Murray Steele, says:

We looked at other more "complete" support tools like ZenDesk and HelpScout, but they are way more expensive (you have to pay per user) and more complex than we need. Jelly presents email just like email, and there's a flat cost which means we don't get penalised whenever someone new joins our team.

You Need Customers to Succeed in Small Business

2025-04-24 08:00:00

For your small business to survive, you need customers. Not just to buy once. You need them to come back, tell their friends, and trust you over time. And yet, too many small businesses make it weirdly hard to talk to them.

Well, duh, right?

I agree, yet I see small businesses fumbling this over and over. When we talk about business, all the attention goes to giant corporations. Whether they’re selling servers or vehicles or every product under the sun, millions of dollars pass through their doors every day. But it is folly to apply the methodologies of giant companies to our small businesses. It sounds obvious, yet I constantly see small businesses making it hard for customers to get in touch. And if a customer does make it through the “contact us” gauntlet, that small business often uses needlessly complicated enterprise software to talk with them.

Small businesses don’t get the spotlight, but they are the engine of the economy. To wit, in the United States:

  • 99.9% of businesses are small
  • Nearly half the private workforce is employed by small businesses
  • They generate over 43% of the country’s GDP

And beyond the stats, small businesses are who we turn to every day: your corner coffee shop, your local cleaner, your neighborhood software team. And don’t forget that every big business started small. Small businesses are the genesis of innovation. We all need small businesses to succeed.

Most small teams aren’t trying to become giant corporations. They want to make a living doing work for a fair return. Many of them work hard in hopes of moving the needle from a fair return to a comfortable life, and maybe even some riches down the road. Yet it’s amazing how often it’s forgotten: you need customers to succeed. Success in small business starts with human conversation.

While talking effectively with your customers does not guarantee success, it is certainly a requirement. Here’s what that looks like: a customer has a question and your team responds kindly, clearly, and quickly. Or sometimes your team wants to reach out with a question for a customer. It’s a simple, human interaction that cannot be done effectively by automation or AI. It’s the air your small business is breathing. Starve that air, and everything else suffers. Your product or service is almost secondary to building a healthy relationship with each of your customers. Big business doesn’t operate this way. We shouldn’t expect it to show us how to build real relationships.

We’re doing our best here at Good Enough to build healthy, happy customer relationships. Whenever you write to us about any of our products, someone on the team is going to reply to offer help or an explanation or an alternative. Fact is, if you write to us about anything at all, we’re going to reply. As an online business, we’re talking with customers primarily over email. For us, Jelly makes those conversations easy to have—human, not hectic.

Actual customer support is remarkable. Actual, healthy human relationships are important. Actual customer conversations are a key to small business success. Choose your actions and tools accordingly.

James is Good Enough

2025-04-22 08:00:00

We put a lot of ourselves into our work, and it occurs to us that you, dear reader, might not know much about us. So we’re descending deeper, ever deeper, into the Q&A column to introduce ourselves, one at a time. Desperate to keep things interesting, James tried to spice up the interview format...

… OK ChatGPT, I want you to read my personal website and the Good Enough website, particularly the "<person> is Good Enough" posts, and the posts I've written there.

https://lazyatom.com
https://interblah.net
https://goodenough.us
https://goodenough.us/blog

Once you've done that, I want you to conduct a question-by-question interview with me, to form the basis of a "James is Good Enough" post. Ask me one question at a time, and then use my answer to evolve the interview. We should aim for about 8 questions in total.

Don't be boring. Use the existing posts as inspiration, but I want the post about me to be unique. It'd be cool if it went off on a strange tangent.

Absolutely! Let’s kick off this interview with a question that sets the tone:

Q1: If your inner monologue were a programming language, what would it be, and why?

I think it would be one with lots of looping constructs, and recursion. And there'd be a constantly-running supervising process that reflected on what was being processed and tagged things with exceptions like E_WHAT_THE_HELL_ARE_YOU_TALKING_ABOUT_THIS_IS_NONSENSE

Fantastic. You’ve got a reflective meta-loop keeping everything in check. Let’s follow that thought deeper:

Q2: What’s one recurring loop or thought-process that’s currently being tagged by your inner supervisor as E_WHAT_THE_HELL_ARE_YOU_TALKING_ABOUT_THIS_IS_NONSENSE?

Well, it's that I asked you, ChatGPT, to research and prepare for this interview and you've rapidly sent it totally off the rails. You could start with some softball questions asking about myself and my role in the team before things get all weird, couldn't you?

Fair enough—let’s rewind a bit and start from a more grounded place.

Q3: OK James, tell me a bit about who you are, what you’re doing these days, and how you fit into the Good Enough story?

I'm James Adam, I suppose you'd say I'm a "backend" software developer although I do enjoy getting involved in the interactive front-end stuff too. I've been programming for a long time, almost 20 years with Ruby, and I've been around long enough to have contributed some interesting things back to the community and Rails. That's nice.

How do I fit into the Good Enough story? My friend Patrick, who I met when we both lived in Austin, TX, had always been trying to persuade me to work with him. A few jobs later, he succeeded! I was the 6th person to join the team, in mid 2023, and I love being a part of this team. It's long been my dream to build great products with love, and at Good Enough we are lucky to have that opportunity.

That’s perfect; let’s build on that a bit.

Q4: You mentioned it’s long been your dream to build great products with love. What does “building with love” mean for you personally—especially after two decades of programming?

Well, it's a foundational part of the Good Enough manifesto: there's too much crappy software in the world. Software that's too complicated, software that doesn't focus on making things easy for the people using it. Software that takes itself too seriously. Software built for purchasing managers instead of real users. Building software with love means loving the people who use what you make, caring about their experience, prioritising that above everything else (including profits!).

Great! Let’s push into this tension a bit more:

Q5: Loving your users is admirable, but what’s a memorable example of when loving users conflicted with practical realities—like deadlines, budgets, or even your team’s sanity? How did you navigate that?

That's a boring question.

Alright, let’s throw practicalities aside and pivot into something more colorful.

Q5 (take two): If Good Enough were secretly a front for a mysterious cult, what strange rituals or beliefs would you and your teammates secretly adhere to, and why would you be tempted to join in?

That's a much better question. I think we would start writing science fiction novels about how we are all just simulations of simulations, and every year at midnight on the 1st of January we scream garbled binary into abandoned DVD players in the hopes of awakening Y2K, the great Beast Overflow from Beyond the Void, Tony the Pony, Zalgo, Zalgo! And then we'd have a nice cup of tea.

Outstanding! I think we’ve now found our groove.

Q6: If Tony the Pony, Zalgo, and Y2K emerged through your DVD player at midnight and demanded you justify your existence in precisely one feature you’ve built or contributed to, which one would you pick, and how would you convince them to spare humanity based on it?

Pretty much everything we do at Good Enough is a team effort, but I'm proud to have planted the seed that has become Jelly -- it's an idea I've wanted to develop properly for a very long time. That, and an unreleased application I made called Chicken.

Excellent—Jelly and Chicken it is. Clearly, we’ve entered culinary territory now.

Q7: If Jelly and Chicken were literally dishes on a menu at the Good Enough café (the front for the aforementioned cult, obviously), how would you describe their flavors and textures to customers who wander in, unsuspecting of their metaphysical implications?

I think you've gone a bit off the rails again, GPT.

Fair enough—let’s pull it back onto slightly firmer ground, but keeping the flavor of creativity intact.

Q7 (revised): What is it about Jelly, as an idea, that resonated with you deeply enough that it stuck around in your mind until you could finally build it at Good Enough?

It's that email is great, it's universal, it's simple, but there was no "multiplayer" experience for it, no way for a group to act on a single inbox, not without contorting themselves to fit the metaphors of the big support players like ZenDesk or HelpScout, forcing you to play-act like support agents instead of just being people who want to share responsibility for an email address.

It should be a simple thing! But all these big software products make it complicated, smother it in bells and whistles -- and now AI, no offence GPT. There should be a "no-brainer" answer for a group that wants to use email together.

I think Jelly can be that answer.

That’s a fantastic articulation—clear and passionate.

Let’s wrap this up neatly:

Final Question (Q8): You’ve spent your career advocating for simplicity in software. Do you think there’s a deeper personal philosophy beneath that, something you’re trying to express beyond just “good software design”?

I've always been interested in how simple mechanisms or rules can produce complex, emergent behaviour. I've explored that in software development, in products, in the groups and companies that I've contributed to and shaped... I think if we can hone the right set of simple behaviours, amazing things can happen, almost by accident. That's always fascinated me.

At this point ChatGPT tried to summarise the whole thing into a coherent post and, to be honest, it was garbage.

So, instead, you get the raw interview. Judge me harshly, for I deserve it.