Rails World - Amsterdam, Netherlands, September 4–5
The first conference is Rails World happening in
Amsterdam on 4 and 5 September.
The tickets are sold out! So if you did not get yours in time, make sure
you keep an eye out for next year :(
Rails World 2025
Friendly.rb - Bucharest, Romania, September 10–11
The next one, Friendly.rb, will
happen next week in Bucharest, Romania, on 10–11 September. Make sure you
don't have plans for 12 September, as we will have an outdoor day.
During the conference there will be a "Brew your own coffee
corner" where you can bring your own pour-overs and we will provide
freshly roasted coffee beans from various roasters to delight your
taste. There is also The Friendly Gameshow, a fun
surprise that awaits you as part of the conference. And on Friday, we
will have a
Friendly trip to Sinaia, an easy outdoor trip to one of the most
beautiful towns in the Carpathian Mountains.
Yes, I shared more details about this conference because I am one of the
co-organisers, so feel free to ping me about it. I wrote 3 articles
about why you should join Friendly.rb this year:
Friendly.rb conference
EuRuKo - Viana do Castelo, Portugal, September 18–19
The last one, three weeks from now, will happen in Viana do
Castelo, Portugal, and it is the biggest Ruby conference in
Europe: EuRuKo
Tickets are still available at https://2025.euruko.org, and on Saturday
they have a Ruby Safari - "Guided walking tour around Viana
do Castelo, exploring local gems and hidden spots".
When AI is writing tests in agentic mode, it's crucial to monitor its
output.
Sometimes, it uses tricks to pass the tests or creates unnecessary
tests.
Diff from a change made by an agentic LLM
For example, consider this diff, which has at least two issues:
1. The test is unnecessary for a medium-risk, medium-impact
feature. It merely checks that the Rails counter cache functions
correctly. This test was written in an earlier iteration that I had not
yet reviewed, so I had not removed it.
2. Notice how it added a position: rand(1..1000) to make some tests
pass because they lacked a position? This approach is just inviting
flaky tests!
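To make the second issue concrete, here is a hypothetical reconstruction of the pattern (the `Step` model and attributes are made up for illustration, not taken from the actual diff):

```ruby
# What the agent generated: random data to satisfy a missing attribute.
Step.create!(name: "Import", position: rand(1..1000)) # non-deterministic, can collide

# Deterministic alternative: an explicit value or an existing fixture.
Step.create!(name: "Import", position: 1)
```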
I added this to my Flakiness section:
- Use consistent data across tests to avoid flakiness. Prefer existing fixtures instead of generating data randomly.
- Do not use `rand` or anything similar when generating data for tests.
- Use `travel_to` for all time-sensitive tests (tests that assert on time or need time control to work)
- Use `.sort` when the results are not guaranteed to be in a specific order.
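Put together, here is a minimal sketch of these rules in a Minitest test. The `Article` model, the `stale?` and `tags` methods, and the fixture name are assumptions for illustration:

```ruby
require "test_helper"

class ArticleTest < ActiveSupport::TestCase
  test "marks articles older than 30 days as stale" do
    # travel_to keeps the clock deterministic for time-sensitive assertions
    travel_to Time.zone.local(2025, 9, 1, 12, 0, 0) do
      article = articles(:one) # existing fixture instead of random data
      article.update!(published_at: 31.days.ago)

      assert article.stale?
    end
  end

  test "returns the expected tags regardless of order" do
    # .sort because the query does not guarantee ordering
    assert_equal %w[rails ruby], articles(:one).tags.sort
  end
end
```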
No one will thank you for good error messages or default states, but
they will feel it! The difference between friction and flow is
invisible work.
This is also true for the development process:
Sloppy names turn into messy objects and interfaces. Messy objects lead
to slower teams.
The same goes for testing: be careful about how you organise the code
inside the test files, because if it gets messy, it will slow down
development and debugging in the future.
It all compounds.
Details are never just details! If you are using AI to generate code,
make sure it generates good names in your domain.
Curious how I ended up with an invoice nearing $100 for over 20 million
Class A operations on Cloudflare R2?
I started several Litestream processes across a variety of side
projects and forgot to set the sync interval! :) It defaults to 1s.
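For reference, the interval can be raised per replica in `litestream.yml`. A minimal sketch, with a hypothetical database path and bucket, and R2 configured through its S3-compatible endpoint:

```yaml
dbs:
  - path: /var/data/myapp/production.sqlite3
    replicas:
      - url: s3://my-backups/myapp
        endpoint: https://<account-id>.r2.cloudflarestorage.com
        # Defaults to 1s; a longer interval means far fewer Class A
        # operations, at the cost of a larger recovery window.
        sync-interval: 60s
```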
The database I was backing up changed more often than the main
production database: the SQLite file was constantly modified because
the app automatically queried various external services to check the
status of different entities.
I enjoy creating small automations using Ruby and Bash. While it might
be simpler to use Bash, I like #Ruby so much that I want to use it more.
Here are two examples of how to run all the tests that changed in
the current branch: one for Minitest and one for RSpec.
How to run all tests from the current branch using Minitest
How to run all tests from the current branch using RSpec
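As a sketch of the approach (not the exact scripts from the screenshots), assuming the base branch is `main` and the conventional `test/` and `spec/` layouts:

```ruby
#!/usr/bin/env ruby
# Run only the test files changed on the current branch.
changed = `git diff --name-only main...HEAD`.split("\n")

# Keep only test files that still exist (ignore deleted ones).
minitest_files = changed.grep(%r{\Atest/.*_test\.rb\z}).select { |f| File.exist?(f) }
spec_files     = changed.grep(%r{\Aspec/.*_spec\.rb\z}).select { |f| File.exist?(f) }

system("bin/rails", "test", *minitest_files) if minitest_files.any?
system("bundle", "exec", "rspec", *spec_files) if spec_files.any?
```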
You can, of course, modify this script to become an MCP, enabling tools like Claude Code or Cursor to run tests at the end of an implementation.
During multiple rounds of test runs and fixes, when focusing on each test individually, LLMs may make the most recent test pass while breaking previously passing ones.
Once you have agreed on a set of tests, whether written by you or by
AIs under your guidance, don't let the AIs refactor or change the tests
while they are implementing a feature.
If your tests are there as specifications, then the implementation
should be constrained by them. Therefore, they should not be changed
unless you direct them to do so.
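One way to make this stick is to encode it in the same rules file as the Flakiness section above; the wording here is my sketch, not a quote:
- Tests are the specification. Never modify or delete an existing test while implementing a feature.
- If a test seems wrong or blocks the implementation, stop and ask instead of changing it.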
I am researching personal websites, mostly those of technical people
and programmers, because I want to redo my own personal website - which
is old and due for a redesign.
One thing I noticed as a reader is that sometimes when I click
on the About section, the person's name is not written there.
But let’s be honest, sometimes you just want a framework that’s reliable, efficient, and doesn’t leave you wrestling with configuration files until 3 AM
Source: https://x.com/AmandaBPerino/status/1916800541746749817 or https://threadreaderapp.com/thread/1916800541746749817.html
That’s why I almost always comment on Hacker News or any forum
where I see people still spreading old narratives about Ruby and Rails.
I think we should not just accept them and move on but reply to them and
show why the narrative is wrong. Maybe not with the purpose of
convincing the author who is spreading that narrative but for other
people who might read that without knowing Ruby and Rails. If no one
refutes these statements, people from outside our community could take
them as truth.
I believe it’s important to engage with empathy and try to have a
dialogue with those spreading this myth. We should try to understand
where they heard about it, when they last checked the status of Ruby on
Rails, and how much they know about our ecosystem.