Blog of Good Enough

Good Enough LLC, makers of Pika, Letterbird, Yay.Boo, Album Whale, and more.

Introducing Letterbird 2.0

2026-01-13 00:42:50

At the end of 2023, we built and launched Letterbird, a stupid simple free contact form for the web. We use it for all our own products (Pika example), and thousands of you are also using it for your own internet email forms. Thank you!

We feel pretty good that we could deliver software that does one thing good enough to be called #done and walk away, and in fact, we’ve barely touched Letterbird since launch. However, a backlog of improvement ideas has been growing, and with our team’s renewed focus, it was a good time to look at that list.

Leveling up is cool, and considering we went years without an update, why not call this Letterbird 2.0? Here’s what’s new:

New: Customizable Dark Mode

We feel strongly that dark mode is a required accessibility feature for many readers, and so Letterbird (and all our other products) has always supported dark mode out-of-the-box. However, it was not customizable — you were stuck with a black form.

No longer! Now you can fully customize your form, including the shadowed-box container, in both light and dark mode:

This is especially useful when your form is embedded and you want to match the colors of the site.

New: Custom Form Fields

This one is particularly exciting: Pro subscribers can now add extra fields to their contact form!

You’ll still have the typical email-specific fields for “Name”, “Email”, “Subject” (optional), and “Body”, but you can now also add up to 10 extra fields for any other information you want to collect. Custom fields can be a text input, radio options (select one), or checkbox options (select multiple), and the answers will appear at the bottom of the email you receive.

Maybe you want to require all emailers to tell you their “Favorite album this year” (like we are doing for Album Whale), or maybe you want to know whether their email falls into a specific category, which you can then use to label messages in your inbox (like we are doing for our general form). This can be just for fun, or an extremely useful productivity tool.

We’d love to see what custom fields you use in your Letterbird!

New: Flat Layout Style

From the beginning, Letterbird was designed with a shadowed-box container for the form. This was a simple solution to make sure the form was still readable as users changed the background color or embedded the form on another website. Over time we’ve learned that this shadow style doesn’t always jibe with other website designs.

Now Pro subscribers can also choose to eschew that shadowed-box container and go with a flatter style, which should better fit in when embedded into just about any type of website. This works in tandem with full color customization in both light and dark mode.

As you can see in the screenshot above, we’re using this style for our own contact form on this very site.

New: Remove Letterbird’s Email Footer

Every email you get through your Letterbird contact form includes a small gray-text footer explaining that this message is from your contact form and who you’ll be replying to. It’s useful, but it’s included in the quoted-reply part of your reply email and some users would prefer to turn this off.

Pro subscribers can now turn it off!

Everything Else

  • Letterbird lets you translate all your form labels to another language, but customizing your confirmation message was previously locked behind a Pro subscription. We’ve removed that restriction so everyone can fully translate their form.

  • We’ve updated the attachments field (a Pro-only feature) to provide a more visual, better user experience. It’s now easier to see which files have been attached and how big each one is, and to remove errant files that were accidentally attached.

  • We’re now doing a better job auto-focusing the right input on page load, and resizing the “Body” part of the form as people type into it.

  • We’re also now doing an even better job protecting your inbox from spam and bots.

Plus: A Discount for 2026!

Letterbird is still a stupid simple free contact form on the web. These changes not only make the out-of-the-box version even better, but add much more useful functionality to a Pro subscription.

To celebrate, we’re running a promotion between now and the end of February: Enter code HAPPY2026 for 25% off your first year of Pro!

We’re going to call Letterbird #done again, for now, but if there are any other features you’d really like to see, we’d really like to hear about them!




Reply by email

How We Configure Our Rails Local CI

2026-01-10 01:41:39

Continuous integration is a great thing, and having tests and security checks run before every deploy is also a great thing. But if you’re a developer who has been shipping production code for more than a week, you definitely understand how much it can all feel like a house of cards that tumbles down nearly every day.

The Good Enough suite of products has been using GitHub Actions to make sure our automated test suites run before each deployment. The (mostly free) servers GitHub offers are predictably slow, with the Pika test suite generally taking close to ten minutes to run. (To that you say, “Delete most of your system tests!” Alas, due to Pika’s lovely editor, we unfortunately have to maintain quite a few system tests for the service.) Even after upgrading to, and paying for, a higher-powered GitHub Actions runner, we were seeing runs approaching eight minutes for Pika.

That’s already no fun, but even worse is the fact that our system tests were a bit flaky in the GitHub Actions environment. We eventually got the hint that running system tests in parallel there just isn’t possible, but even running them one at a time would lead to odd failures, in part because of how slowly things move in the Actions environment. So imagine the cycle of trying to deploy a Pika update and needing to run continuous integration two, three, or four times. Frustration!

There’s got to be a better way!

There is. Hopefully. With the arrival of Rails 8.1 came the option to set up local CI. As a team of two wanting to move a little more quickly and with a little less frustration, this seems like a perfect fit. Here’s how I’ve set it up for Pika…

ci.rb:

# Run using bin/ci

CI.run do
  step "Setup", "bin/setup --skip-server"

  step "Security: Gem audit", "bin/bundler-audit"
  step "Security: Brakeman code analysis", "bin/brakeman --quiet --no-pager --exit-on-warn --exit-on-error --confidence-level 2"
  step "Security: Importmap vulnerability audit", "bin/importmap audit"

  step "Tests: Rails", "bin/rails test"
  step "Tests: System", "bin/rails test:system"
  step "Tests: Seeds", "env RAILS_ENV=test bin/rails db:seed:replant"

  # Set a green GitHub commit status to unblock PR merge.
  # Requires the `gh` CLI and `gh extension install basecamp/gh-signoff`.
  if success?
    step "Signoff: All systems go. Ready for merge and deploy.", "gh signoff"
  else
    failure "Signoff: CI failed. Do not merge or deploy.", "Fix the issues and try again."
  end
end

In order for the importmap vulnerability audit to run successfully, I needed to update our Gemfile with openssl:

group :development, :test do
  gem "openssl"
end

Here’s an excerpt of Pika’s application_system_test_case.rb:

ENV["PARALLEL_WORKERS"] ||= "1"  # System tests seem less flakey when not run in parallel
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  browser_options = Selenium::WebDriver::Chrome::Options.new.tap do |opts|
    opts.add_argument("--window-size=1200,800")
    opts.add_argument("--disable-extensions")
    # Disable non-foreground tabs from getting a lower process priority
    opts.add_argument("--disable-renderer-backgrounding")
    # Normally, Chrome will treat a 'foreground' tab instead as backgrounded if the surrounding
    # window is occluded (aka visually covered) by another window. This flag disables that.
    opts.add_argument("--disable-backgrounding-occluded-windows")
    # Suppress all permission prompts by automatically denying them.
    opts.add_argument("--deny-permission-prompts")
    opts.add_argument("--enable-automation")
  end

  Capybara.register_driver :chrome_headless do |app|
    browser_options.add_argument("--headless")
    Capybara::Selenium::Driver.new(app, browser: :chrome, options: browser_options)
  end

  Capybara.register_driver :chrome do |app|
    Capybara::Selenium::Driver.new(app, browser: :chrome, options: browser_options)
  end

  if ENV["SYSTEM_TESTS_BROWSER"]
    driven_by :chrome, screen_size: [ 1200, 1000 ]
  else
    driven_by :chrome_headless, screen_size: [ 1200, 1000 ]
  end
end

Prerequisites to run local CI:

  • brew install gh

  • gh auth login

  • gh extension install basecamp/gh-signoff

  • Run: gh signoff install

This installs the GitHub command-line interface, authenticates it, installs the signoff extension, and turns on the signoff requirement in your repo.

Here’s the process:

  • Get all your changes pushed to a branch and make a PR

  • Make sure your local environment doesn’t have any lingering file changes or CI will fail

  • Run bin/ci

Running our Pika CI locally completes in under three minutes. That’s a big improvement! Upon successful completion of local CI, signoff will land on your branch, and you can merge and push to main.

If you ever need to move quickly, say in an emergency situation:

> gh signoff create -f
> git push

Since Lettini and I are both super-duper admins in our GitHub account, we needed one more update to protect us from willy-nilly pushing to main. In each repository on GitHub, under repo > Branches > Branch protection rules > main > Edit, I checked “Do not allow bypassing the above settings.”

It’s not all rainbows and unicorns

In an ideal world, hands-off CI is a really great thing. It will take a bit for these steps to become muscle memory. I hope they do! System tests are still notoriously flaky, but running tests only in our local environments means we shouldn’t have to account for both general flakiness and super-slow-test-running flakiness.

GitHub has a useful feature called Dependabot, which can apply security updates to your dependencies and create a pull request that’s often ready to merge. Sometimes we’ve just clicked that merge button in the past, feeling confident because our test suite had already run in GitHub Actions. Now we’ll have to pull down those branches to go through a local CI and signoff step in order to merge things.

If local CI doesn’t end up fitting us, I’ve also discovered there are faster, GitHub-Actions-based alternatives for automating CI, such as Blacksmith. These services have also historically been cheaper than paying GitHub for more server power, though recent policy changes at GitHub have changed that math.

And a thank you

I’d be remiss if I didn’t thank 37signals for opening up their Fizzy repository. This helped me to really streamline our application_system_test_case.rb, which had become a Frankenstein’s monster of a thing as I troubleshot system test issues over the years.




Reply by email

Good Enough Is Reorganizing

2025-11-25 08:00:00

Hello reader, Matthew here. For those of you who’ve been on this journey with us since the early days, you might know the story of Good Enough’s inception. It was started by Shawn and Barry as a modest effort to realize some fun product ideas, with the lofty goal of seeing if we could make the web a little more interesting while making at least enough money to cover costs. They put together a small team (that’s when I joined), and we set out on a stormy year of prototyping and building and bad ideas.

Fast forward to today, and after much trial-and-error, a zine, a printer experiment, and many illustrations that became stickers, we’re thankful to have found some modest product successes in Jelly and Pika. What we’ve learned along the way is that, to properly care for all our products as they continue to grow, more individual focus is needed.

Internally, our team has been mostly split up for a while now, as different team members gravitated to different products. Starting next week, we’ll be making it more official and reorganizing Good Enough into separate entities:

  • James will continue operating Jelly outside of Good Enough.

  • Barry and I (Matthew) will continue operating Pika, along with Letterbird, Album Whale, and other Good Enough services not named Yay.Boo and Ponder.

  • Shawn and Patrick will be moving on from Good Enough and back into the real world with an exciting brick-and-mortar project in New Jersey (we wish them all the luck!).

If you use any of our products and have interacted with us via support, you’ve probably noticed that the names above are exactly who you’d associate with each product. Not much is changing in that sense, and we’re excited and motivated to continue working on them. This reorganization simply enables us to grow the products we love in a more sustainable way. (Case in point: I’ve recently been working on new features for Letterbird! Stay tuned.)

In fact, you probably wouldn’t even notice this reorganization if we didn’t say anything. The only customer-facing change here is that there’ll be a newly formed entity named on our invoices and receipts and throughout our policies: We Are Good Enough LLC.

To keep up with the latest news for Jelly, please be sure to follow the Jelly Changelog. All other Good Enough news will continue to flow through the places you’ve become accustomed to as a Good Enough follower (see the links at the bottom of this page). If you have questions, email us.

And from all of us, thank you for being on this journey with us as we enter the next chapter!




Reply by email

We’re Shutting Down Yay.Boo and Ponder

2025-11-03 08:00:00

Update 12/03/2025: Yay.Boo is staying alive! 🙌 Keep your eyes peeled to Yay.Boo for updates.

We have built a lot of good products here at Good Enough. Whether you’re sharing an inbox with your team or avoiding social media with a blog, we’ve got you covered. Unfortunately, there are some products that, while very nice, have not had our attention for a long time. Two of those products are Yay.Boo and Ponder.

Ponder, our take on small forum software, was one of the first things we built as a collective. Even today, it works really well for a small group of polite folks to talk about a shared interest. Unfortunately, not many small groups found Ponder, and our team hasn’t had the bandwidth to continue improving the software.

Yay.Boo is a delightful tool with which to quickly throw some HTML online. On top of that, it has always been a playground to push the envelope on just what a product homepage could look like. Unlike Ponder, Yay.Boo is even getting a decent amount of use.

If you look closely, you may see a shared risk that each of these products poses. To run both Yay.Boo and Ponder responsibly, a fair amount of time must be spent on moderation. Every Yay.Boo site update needs to be checked. We also don’t want any nasty content to end up hosted on Ponder, which is even trickier to moderate since the groups are private.

Leaving those services online and not paying attention to them is something that we do not feel comfortable doing. Since our small team is not focused on these products, we need to do the responsible thing and move on from them.

So, with heavy hearts, we’re going to shut down Ponder and Yay.Boo (see above note 👆). Signups are already turned off. On December 3rd we’ll delete all user data and flip the power to off.

To those of you still using Yay.Boo and Ponder, we’re sorry. There are some good alternatives to Yay.Boo: tiiny.host, Netlify Drop, and static.app. Ponder is a different beast and, sadly, we do not know of many products that fit a similar mold. Liminal is the only one we’ve come across that might be close.

To anyone who was gracious enough to pay for a Yay.Boo account, double thank you! We have canceled all accounts and you won’t be billed again. If you have any questions about the shutdown or your data, please also don’t hesitate to get in touch.

While it stinks to have to send this message, we hope you’ll understand that a lot of thought went into this decision. Through these years at Good Enough we’ve discovered which products really excite us; properly saying goodbye to Ponder and Yay.Boo will allow us to focus on those products instead.

A final note: if you’re reading this and thinking that you’d be just the person to take on either of these products, do let us know. We haven’t 100% closed the door on passing them on to a person or team that could care for them, and we’d be happy to listen to your pitch.

Thank you and keep keeping the web weird!




Reply by email

TIL: Rails, CloudFront CDN, and imgproxy

2025-10-14 08:00:00

In September, I worked on improving Pika’s image performance. I’ve had a long career now (25 years 😭) doing mostly web-programming tasks, yet somehow I’ve never set up a CDN myself. I suppose the “management years” right as my prior organization was getting bigger contributed to missing out on that experience. In any case, the work was overdue on Pika and it was time to tackle it.

Through a bit of help from online articles and online friends, I’ve gotten it mostly figured out. Here is Pika’s setup.

The tools

Since we started Good Enough with lots of AWS credits, Amazon has got us a bit locked in with their services. And since, remember, I have no past experience setting these things up, well, I tallied-ho with Amazon’s CloudFront for the CDN and S3, which we were already using, for storage. Through this process I had a lot of “grass is greener” feelings toward Cloudflare and Cloudflare R2, but I’ll save that dalliance for another day.

I started thinking about the many background jobs I was going to need to orchestrate for creating the various tuned images (resizing, removing Exif data, compression, etc). Through that research I ran into John Nunemaker’s Imgproxy is Amazing blog post. I reached out to confirm that he is still using imgproxy, and, boy howdy, is he ever. Thanks to Nunes for sharing many details about how he has configured both imgproxy and CloudFront!

The flow

When someone’s browser requests an uncached image from a Pika blog post, here’s how an image request flows through all of these systems:

         ┌────────────┐               
         │            │               
         │  Reader    │               
         │  requests  │               
         │  image     │               
         │            │               
         └───┬────────┘               
             │    ▲                   
             │    │                   
             ▼    │                   
 ┌────────────────┴───────────┐       
 │                            │
 │  Regional CloudFront node  │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │       CloudFront │ caches   
       │       in regions │ and at
       │           shield │    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │  CloudFront Origin Shield  │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │         imgproxy │ strips    
       │            Exif, │ resizes,
       │              and │ compresses    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │          imgproxy          │
 │                            │
 └─────┬──────────────────────┘
       │                  ▲    
       │                  │    
       │                  │    
       │                  │    
       ▼                  │    
 ┌────────────────────────┴───┐
 │                            │
 │             S3             │
 │                            │
 └────────────────────────────┘

The configuration details (as of today)

Let’s start one step above Rails with the imgproxy setup.

imgproxy

We deploy our services at Render.com. This is the full contents of the Dockerfile we use to deploy our Pika imgproxy web service instance:

FROM ghcr.io/imgproxy/imgproxy:latest

To configure imgproxy I am using environment variables to the max. Here are the environment variables I’m currently using:

  • IMGPROXY_TTL = 30758400: Feeling pretty confident here and setting the TTL to 1 year. Attaching images to rich text fields in Rails should never really re-use an existing image or its URLs, making cache invalidation happen as a matter of course.

  • IMGPROXY_FALLBACK_IMAGE_DATA = R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7: This is a 1x1 transparent GIF fallback image in case imgproxy cannot retrieve the requested image.

  • IMGPROXY_FALLBACK_IMAGE_TTL = 120: I set the TTL for the fallback image to be much less than our system TTL set above. I don’t want system hiccups to lead to broken images. Well, not for more than 2 minutes, anyway!

  • IMGPROXY_FORMAT_QUALITY = jpeg=90,png=90,webp=79,avif=63,jxl=77: Setting mild compression for all images. I am very cautious about over-compressing anything in Pika, and the default compression of 80 was too extreme for me. webp, avif, and jxl formats are not currently used in Pika, but I added them here to match the defaults that imgproxy uses for IMGPROXY_FORMAT_QUALITY. The gif format is also not being used, as you’ll see below.

  • IMGPROXY_STRIP_COLOR_PROFILE = false: Related to the above, I want Pika to be as color-accurate as possible.

  • IMGPROXY_MAX_SRC_RESOLUTION = 75: Did you know there is such a thing as image bombs? Neither did I! imgproxy can protect you from them.

  • IMGPROXY_ALLOW_SECURITY_OPTIONS = true: This is required to allow the use of the IMGPROXY_MAX_SRC_RESOLUTION envar.

  • IMGPROXY_USE_S3 = true: This allows imgproxy to grab images directly from S3. Very clever as it saves a trip through our Rails servers! You will also need to set up the following envars: IMGPROXY_S3_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY. The downside with this technique is that the URLs no longer end with image extensions like .jpg, which has caused some problems with third-party services. I do wonder if it has been worth saving that trip through our Rails servers. 🤔

  • IMGPROXY_ALLOW_ORIGIN = https://pika.page: I’m actually not sure if this is needed since we never hit our Rails app when loading an image.

  • IMGPROXY_USE_LAST_MODIFIED = true: Given what I wrote about TTL above, I don’t think this is necessary, but it just feels right.

  • IMGPROXY_SENTRY_DSN: Set this to enable error reporting to Sentry.

  • IMGPROXY_TIMEOUT = 15: I’m not sure why I increased this from the default of 10.

  • IMGPROXY_READ_REQUEST_TIMEOUT = 15: Ditto.

  • IMGPROXY_KEY & IMGPROXY_SALT need set as well, of course.

CloudFront

Here’s how we have CloudFront configured. I’m only mentioning the settings that we changed from the default.

Main Distribution settings:

  • Alternate domain name: cdn.u.pika.page

  • Custom SSL certificate: Requested through the interface that CloudFront offers inline

Origin:

  • Origin domain: u.pika.page

  • Enable origin shield: Yes, setting the Origin Shield region to be the best match for our other server locations

Behaviors:

  • Compress objects automatically: No

  • Allowed HTTP methods: GET, HEAD, OPTIONS

  • Cache HTTP methods: checked OPTIONS

  • Cache key and origin requests: checked Legacy cache settings (I don’t love that we are on this Legacy option, but I could never get the other option to work)

Logging: Added a log destination because I’m not sure how you troubleshoot without it!

DNS

Here’s how I have DNS configured for CloudFront and imgproxy:

  • Our CDN requests go to cdn.u.pika.page (yes, I can already tell that that should have been cdn1.u.pika.page)

  • The CDN requests our imgproxy origin at u.pika.page (yes, I should have gone with u1.pika.page)

  • At dnsimple I pointed u.pika.page to our imgproxy origin according to Render’s instructions

  • I also added a CNAME record to point cdn.u.pika.page to the Distribution domain name provided by CloudFront

  • As mentioned above, the SSL certificate for cdn.u.pika.page was acquired via CloudFront’s interface, which required a DNS record to be set up at dnsimple during setup for certificate validation

The Rails setup

Pika is configured to upload images to S3. This is a pretty straightforward setup that is written about in many other places.

I’m using the imgproxy gem to help build URLs for images. (There is also an imgproxy-rails gem, but it didn’t play well with our setup.) Here’s our imgproxy.yml configuration file:

default: &default
  key: <%= Rails.application.credentials.dig(:imgproxy, :key) %>
  salt: <%= Rails.application.credentials.dig(:imgproxy, :salt) %>

development:
  <<: *default
  endpoint: <%= ENV['IMGPROXY_FREE_CDN'] %>
test:
production:
  <<: *default
  endpoint: <%= ENV['IMGPROXY_FREE_CDN'] %>
  use_s3_urls: true

The IMGPROXY_FREE_CDN envar is set to https://cdn.u.pika.page, which is actually the CloudFront CDN URL. Also note use_s3_urls: true for the production environment. This ensures the URLs generated by the imgproxy gem point imgproxy directly at S3.
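One detail the YAML leaves implicit is how the imgproxy gem actually picks up these values. Here’s a minimal sketch of that wiring, assuming the file is loaded with Rails.application.config_for from an initializer (the same config_for lookup the imgproxy? helper further down uses); this is an illustration, not the actual Pika code:

# config/initializers/imgproxy.rb (illustrative sketch; assumes the YAML
# above is loaded via config_for, which also evaluates its ERB tags)
settings = Rails.application.config_for(:imgproxy)

if settings.present?
  Imgproxy.configure do |config|
    config.key         = settings[:key]           # hex-encoded signing key
    config.salt        = settings[:salt]          # hex-encoded signing salt
    config.endpoint    = settings[:endpoint]      # the CloudFront URL in production
    config.use_s3_urls = !!settings[:use_s3_urls] # emit s3:// source URLs
  end
end

With the endpoint pointed at the CDN hostname, every URL the gem builds already goes through CloudFront, which then falls through to imgproxy and S3 exactly as in the flow diagram above.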

The simplest images we serve are site avatars, which can be used in the headings of a blog as well as social share images. Rendering the imgproxy/CDN URL is pretty easy for this example. Here’s what we have in our User model:

has_one_attached :avatar

def avatar_url(variant = :small)
  variant_options = case variant
                    when :small
                      { height: "100", width: "100" }
                    when :medium
                      { height: "300", width: "300" }
                    end
  avatar.imgproxy_url(variant_options)
end

Rich text is a whole different beast in Rails. In our case, we have already heavily overridden the _blob.html.erb file, and our CDN updates fit right in there. Along the way I decided not to serve GIF files from imgproxy, so you’ll see some reference to that in the code as well. Processing animated images can get complicated, and I decided to leave that thinking for another day.

Further, for local development I wanted to support accessing a local imgproxy instance, but not break if it isn’t available. So you’ll see mention of an imgproxy? method, which is supported by inclusion of this module in ApplicationHelper and User:

module ImgproxyDetector
  def imgproxy?
    return @imgproxy if defined?(@imgproxy)
    @imgproxy =
      (Rails.env.production? || (Rails.env.development? && Rails.application.config_for(:imgproxy).endpoint.present?))
  end
end

Here’s the simplified imgproxy/CDN-related code from our _blob.html.erb file:

<figure class="attachment attachment--<%= blob.representable? ? "preview" : "file" %> attachment--<%= blob.filename.extension %>">
  <% if blob.representable? %>
    <%
      if blob.content_type == 'image/gif' # don't use imgproxy URLs for GIFs in case they are animated
        img_src_url = url_for(blob)
      else
        if imgproxy?
          img_src_url = blob.imgproxy_url(height: "1400", width: "1800")
        else
          img_src_url = url_for(blob.variant(resize_to_limit: [1400, 1800], saver: { quality: 90 }))
        end
      end
    %>

    <%= image_tag img_src_url %>
  <% end %>

  <figcaption class="attachment__caption">
    <% if caption = blob.try(:caption) %>
    <%= caption %>
    <% else %>
      <span class="attachment__name"><%= blob.filename %></span>
      <span class="attachment__size"><%= number_to_human_size blob.byte_size %></span>
    <% end %>
  </figcaption>
</figure>

imgproxy itself is much more performant than a Rails server, but you can’t get around the fact that image processing is resource-heavy. To avoid flooding our imgproxy server with an unpredictable number of requests the first time an image-heavy post is loaded, I decided it would be best to warm the cache as soon as possible. So in the end I wasn’t able to avoid background jobs in our image processing stack. When a new post is created or its images are edited, a background job queries the CDN URL for each blob in the post. I’ll leave this code as an exercise for the reader.
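As a starting point for that exercise, here’s a minimal sketch of what such a job could look like. The job name is hypothetical, and the details (the 1400x1800 dimensions, skipping GIFs) simply mirror the choices described above; it is not the actual Pika implementation:

require "net/http"

# Hypothetical cache-warming job (illustrative sketch, not Pika's code).
class WarmImageCacheJob < ApplicationJob
  queue_as :default

  def perform(post)
    post.body.body.attachables.each do |attachment|
      next unless attachment.is_a?(ActiveStorage::Blob)
      # GIFs bypass imgproxy (see _blob.html.erb above), so skip them here too.
      next if attachment.content_type == "image/gif"

      # Request the same imgproxy/CDN URL the blob partial renders, so
      # CloudFront and imgproxy have it cached before the first reader arrives.
      url = attachment.imgproxy_url(height: "1400", width: "1800")
      Net::HTTP.get_response(URI.parse(url))
    end
  end
end

Enqueuing one job per post keeps the fan-out predictable: a handful of warm-up requests right after publishing, instead of a burst the first time a reader loads the page.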

You may remember that I mentioned the security concern of image bombing above. While imgproxy protects us from that, I wanted to avoid folks uploading such images in the first place. So I added a validation to check image resolutions, which means I also didn’t manage to avoid doing any image processing on our Rails server. 😅 Here is a simplified version of how I do that for rich text image attachments:

# post.rb
has_rich_text :body
validate -> { acceptable_image_attachments(:body) }

def acceptable_image_attachments(attr)
  return true if self.send(attr).body.blank?
  self.send(attr).body.attachables.each do |attachment|
    next unless attachment.is_a?(ActiveStorage::Blob)

    if image_resolution_over_limit?(attachment)
      errors.add(attr, image_resolution_error_message_for(attachment.filename))
    end
  end
end

def image_resolution_over_limit?(blob)
  width, height = blob_dimensions(blob)
  (width.to_f * height.to_f) / 1_000_000.0 > Rails.application.config.x.image_resolution_limit.to_f
end

def blob_dimensions(blob)
  width = blob.metadata["width"]
  height = blob.metadata["height"]
  if width.nil? || height.nil?
    blob.analyze
    width = blob.metadata["width"]
    height = blob.metadata["height"]
  end

  [width, height]
end

# application.rb
config.x.image_resolution_limit = 75 # in megapixels 

Local testing is pretty easy once you get it all set up. Well, if you’re familiar with Docker. (I’m really not, but I got it set up, and doing that setup is another exercise I’ll leave to you, dear reader.) Our test code does not use imgproxy, but our development environment sure can. As mentioned above, we have a repo for Pika’s imgproxy that is a very simple Dockerfile.

  • I have Docker and OrbStack installed locally to make things work

  • dotenv is installed to manage my local envars

  • In my .env file I have IMGPROXY_FREE_CDN = "http://localhost:7777"

  • I have foreman installed to handle Procfile applications

  • Then I run foreman start -f Procfile_imgproxy.dev

Here’s my Procfile_imgproxy.dev file, which is in my main Rails app:

imgproxy: docker run --rm --name pika-imgproxy -p 7777:8080 --add-host=pika.test:host-gateway -e IMGPROXY_ENABLE_INSECURE_MODE=true -e IMGPROXY_ALLOW_PRIVATE_NETWORKS=true -e IMGPROXY_ALLOW_LOOPBACK_NETWORKS=true -e IMGPROXY_ALLOW_ORIGIN=http://pika.test -e IMGPROXY_FALLBACK_IMAGE_DATA=R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -e IMGPROXY_FALLBACK_IMAGE_TTL=120 -e IMGPROXY_FORMAT_QUALITY=jpeg=90,png=90,webp=79,avif=63,jxl=77 -e IMGPROXY_STRIP_COLOR_PROFILE=false -e IMGPROXY_TTL=604800 -e IMGPROXY_USE_LAST_MODIFIED=true ghcr.io/imgproxy/imgproxy:latest

With this all running, you can see imgproxy in action in your local development environment!

The future

We’re hoping to ride with this setup for quite a while. Down the road we’ll probably look into tuning GIFs, and I may explore ways to implement WebP and AVIF while still keeping colors and performance to our liking. During implementation I did not have good luck making those formats work well.

And, as an admittedly novice CDN implementor, maybe others will read this blog post and have some ideas about how I could improve this setup. Happy to hear them!




Reply by email

Prettier Email Headers

2025-04-30 08:00:00

As we’re building Jelly, we have found ourselves looking at lots of raw emails. In particular, we’ve spent a lot of time with email headers. If you’ve ever had cause to do the same, you know it can lead to lots of scanning and squinting.

There’s got to be a better way! And here it is: Prettier Email Headers. [Ed: This project was shut down in October of 2025.]

With the help of AI, I threw together this tool that accepts a raw email paste. Then it shows those headers and header values in a format that is easier on the eyes. I also asked AI to do some research into the definition of each header and include citations. As always, I practiced the “don’t trust and verify” method when working with AI.

If you ever find yourself staring at email headers, I think you should give Prettier Email Headers a try!




Reply by email