Two users booked the same turf slot at the same time. Here's what was happening at the database level and how SELECT FOR UPDATE fixed it.
During testing, two users managed to book the same slot at the same time. Both got confirmation emails. Both showed up. That's the worst possible failure mode for a booking platform.
The original booking flow was straightforward:
1. Read the slot and check that its status is "available".
2. Create the booking and mark the slot "booked".
The problem is that step 1 and step 2 are two separate database operations. Under concurrent load, two requests can both pass step 1 before either completes step 2.
Request A reads slot → available
Request B reads slot → available
Request A writes booking → success
Request B writes booking → success
Both succeed. One slot, two bookings.
I hit this in testing when I simulated two users clicking "Book" on the same slot within milliseconds of each other. Both got confirmed. The bug wasn't theoretical, it happened on the first realistic concurrency test.
My first instinct was to add a status check right before the insert:
// This does not work
var slot models.Slot
db.First(&slot, slotID)
if slot.Status != "available" {
    return errors.New("slot already booked")
}

// Race condition lives here: another request can
// pass this check before you write below.
db.Model(&slot).Update("status", "booked")
The check and the write are still two operations. A concurrent request can slip between them. The problem is not the check; it's that the read and the write aren't atomic.
The solution is to lock the slot row at read time so no other transaction can lock or modify it until you're done. (In Postgres, FOR UPDATE doesn't block plain reads; it blocks writes and other FOR UPDATE reads on the same row, which is exactly what the booking path needs.)
func BookSlot(db *gorm.DB, slotID uint, userID uint) error {
    return db.Transaction(func(tx *gorm.DB) error {
        var slot models.Slot

        // Lock the row for the duration of this transaction.
        // Any other transaction that tries to SELECT FOR UPDATE
        // on the same row will block here until we're done.
        if err := tx.Set("gorm:query_option", "FOR UPDATE").
            First(&slot, slotID).Error; err != nil {
            return err
        }

        // We now hold the row lock; any concurrent booker is
        // blocked on the SELECT above. Check status inside the lock.
        if slot.Status != "available" {
            return errors.New("slot is no longer available")
        }

        // Update slot status.
        if err := tx.Model(&slot).
            Update("status", "booked").Error; err != nil {
            return err
        }

        // Create the booking record.
        booking := models.Booking{
            SlotID: slotID,
            UserID: userID,
            Status: "confirmed",
        }
        if err := tx.Create(&booking).Error; err != nil {
            return err
        }

        return nil
        // Transaction commits here. Lock released.
        // If anything above returned an error, GORM rolls back.
    })
}
SELECT FOR UPDATE tells Postgres to lock that row the moment it's read. Any other transaction that tries to read the same row with FOR UPDATE will wait until the first transaction commits or rolls back.
Request A locks the row → checks status → writes booking → commits → releases lock
Request B waits → lock released → reads row → status is now booked → returns error
One booking. Correct.
Wrapping everything in db.Transaction() means GORM automatically rolls back if any operation inside returns an error. I don't need to manually call tx.Rollback(): if the booking insert fails after the status update, both changes are reverted. The slot never ends up in a half-updated state.
Pessimistic locking works, but it means concurrent requests for the same slot queue up. That's fine for a turf booking platform: contention on any single slot is low, and a few milliseconds of wait is acceptable. If this were a high-throughput ticketing system where thousands of users hit the same row simultaneously, the queuing would become a bottleneck and I'd look at optimistic locking instead.
For SlotTurf's use case, pessimistic locking is the right call. Simple, correct, and the performance cost is irrelevant at this scale.
Database transactions aren't just for rollback safety. The isolation they provide, guaranteeing that your read and write happen atomically from Postgres's perspective, is what makes concurrent booking logic correct. Application-level status checks can't substitute for that.