The uniqueness validation in ActiveRecord is a lie as soon as you have more than one app server process (or thread) running.
It performs a SQL SELECT to ensure that no other record with the same value exists in the database before marking the record as valid. However, if two processes check for the same value at the same time, before either saves, both records will appear valid to Rails and both will be saved.
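To make the race concrete, here is a minimal pure-Ruby sketch (no Rails) of the same check-then-insert sequence that the uniqueness validation performs. The array stands in for the table, and the interleaving shown is the worst case: both "processes" run their SELECT before either INSERTs.

```ruby
records = []  # stand-in for the users table

# Step 1: both "processes" run the uniqueness SELECT before either saves.
a_passes_validation = !records.include?("bob@example.com")
b_passes_validation = !records.include?("bob@example.com")

# Step 2: each saw no duplicate, so each performs its INSERT.
records << "bob@example.com" if a_passes_validation
records << "bob@example.com" if b_passes_validation

records.count("bob@example.com") # => 2: the duplicate slipped through
```

With a unique index in the database, the second INSERT would be rejected at step 2 instead of silently succeeding.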
If you have any uniqueness validations or `has_one` relationships in your ActiveRecord models, it's wise to back them up with a unique index in the database. That way, if the Rails uniqueness validation allows a duplicate record to slip past, you'll still guarantee the integrity of your data.
You can do this when creating a table, using the `create_table` method in a migration:

```ruby
create_table :users do |t|
  t.string :email
  t.index :email, unique: true
  # Use an array of columns for a compound index:
  # t.index [:email, :other_column], unique: true
end
```
Alternatively, you can add a unique index later using the `add_index` method in a migration:

```ruby
# add_index :table, :column, unique: true
add_index :users, :email, unique: true
```
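Once the index exists, the database itself rejects the losing INSERT and the Rails adapter raises `ActiveRecord::RecordNotUnique`, which you can rescue and turn into a friendly error rather than a 500. Here is a runnable sketch using pure-Ruby stand-ins for the database and the exception (the `insert!` and `create_user` names are illustrative, not Rails API):

```ruby
# Stand-in for ActiveRecord::RecordNotUnique so this runs without Rails.
class RecordNotUnique < StandardError; end

EMAILS = []  # stand-in for a users table with a unique index on email

# Stand-in for the INSERT: the unique index makes the database itself
# reject a duplicate, raising instead of silently writing a second row.
def insert!(email)
  raise RecordNotUnique, "duplicate key value" if EMAILS.include?(email)
  EMAILS << email
end

# Rescue the constraint violation and surface it like a validation error.
def create_user(email)
  insert!(email)
  :created
rescue RecordNotUnique
  :taken
end

create_user("bob@example.com") # => :created
create_user("bob@example.com") # => :taken
```

This is the belt-and-braces pattern: keep the Rails validation for friendly error messages in the common case, and rescue the constraint violation for the rare race.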
Prevention rather than treatment: consistency_fail
You should use the consistency_fail gem to verify that all of your indexes are in place. It scans your ActiveRecord models for uniqueness validations and `has_one` relationships, and warns you if any of them are missing indexes at the database level. I recommend running consistency_fail as part of your CI build. The CLI runner returns a failure exit code if indexes are missing, so it's easy to add to your build by simply executing `consistency_fail`.
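For example, as a CI step it might look like this (a sketch assuming GitHub Actions and that consistency_fail is in your Gemfile; adapt the shape to whatever CI you run):

```yaml
# Hypothetical CI job step: the build fails automatically when
# consistency_fail exits non-zero because an index is missing.
- name: Check for missing unique indexes
  run: bundle exec consistency_fail
```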
Data integrity matters!