Code review

Transaction propagation or 100% deadlock with JPA/Hibernate.

Transaction propagation deadlock

Continuing the topic of deadlocks, I want to tell you about another case of getting a deadlock with JPA/Hibernate. I have already met this deadlock twice, and it is related to transaction propagation.

How often do you manipulate transaction propagation, other than using the default one – REQUIRED (luckily the same name in both Hibernate and JPA)?

If you ask me, not often. So whenever I see propagation = REQUIRES_NEW in code, I always suspect that something might smell.
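To make the difference between the two modes concrete, here is a minimal plain-Java sketch of what REQUIRED versus REQUIRES_NEW means for the number of physical transactions a thread holds. This is not Spring or JPA code – the class, method, and transaction names are purely illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: simulates propagation semantics, not a real transaction manager.
public class PropagationSketch {
    enum Propagation { REQUIRED, REQUIRES_NEW }

    // "Physical" transactions currently open on this (single) thread.
    private static final Deque<String> txStack = new ArrayDeque<>();
    private static int txCounter = 0;

    // REQUIRED joins the caller's transaction if one exists;
    // REQUIRES_NEW always suspends it and opens a second one.
    static String begin(Propagation propagation) {
        if (propagation == Propagation.REQUIRED && !txStack.isEmpty()) {
            return txStack.peek();          // reuse the outer transaction
        }
        String tx = "tx-" + (++txCounter);  // open a brand-new transaction
        txStack.push(tx);
        return tx;
    }

    public static void main(String[] args) {
        String outer  = begin(Propagation.REQUIRED);      // opens tx-1
        String joined = begin(Propagation.REQUIRED);      // joins tx-1
        String inner  = begin(Propagation.REQUIRES_NEW);  // opens tx-2 while tx-1 is still open
        System.out.println(outer + " " + joined + " " + inner);
    }
}
```

The key point the sketch shows: with REQUIRES_NEW the same thread ends up holding two open transactions at once, and the inner one can end up waiting on locks held by the suspended outer one – which is exactly the kind of smell worth sniffing at in a code review.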


From silver bullet to deadlock… and back

Some problems are expected and easy to solve. In most cases we never even call them problems, but tasks within stories. Solving them we call development – simply put, a job.

Other problems are not expected, but they are still easy to solve once we finally know they exist. We call them bugs; generally we receive them from testers or support engineers. It may take the whole day to solve one: 20 minutes spent on the bug and the rest of the day on table football.

Most problems are expected but hard to solve, and this is the most unfortunate category, because these problems spend the rest of their miserable lives in a backlog: never reviewed, never analyzed, never mentioned during stand-ups, and in the end killed off when the scrum boards are cleaned up.

The most challenging ones (according to corporate etiquette we prefer to call them challenges rather than WTFs) are the unexpected and hard-to-solve problems. They don't even make it into a backlog, in most cases simply because nobody knows what to put in the issue title except “a shiny new bug”, and it takes time to investigate. Some developers call them phantom bugs and stamp them “Unreproducible”; other developers just ignore the existence of these bugs altogether. All developers, to be precise.

But the hardest problems are the ones you have no idea about. And this is a real challenge, not the kind from a book of company rules. Solving these problems makes you grow, become more mature, and finally stop being a junior developer. I personally hate them most of all – basically because nobody pays for solving them, but also because they make me feel almost as miserable as the bugs in a backlog. They are never fully solvable; in most cases they are avoided, and that avoidance is called best practices in books and video courses.

Most developers I’ve worked with put persistence problems in a backlog, secretly hoping that someone will delete them (the problems, not the developers, but who knows) or that the data storage system will be changed. Unfortunately, the persistence layer is persistently stable, and changing the DBMS is rarely an option. Although I sometimes feel almost personal pain over these left-forever issues, life is life and development costs money.

It’s not a problem, it’s a challenge

This time the wind of change blew somewhere between persistence and back-end: the problem occurred too often to send it to the backlog and was too scary to actually start working on – a deadlock. In my previous experience I had met deadlocks no more than ten times; in all cases they occurred in heavily used systems with thousands of simultaneous calls, in all cases the root cause was hard to find, in all cases the deadlock was solved at the database level by applying a magical combination of indices, and in all cases the DBA was blamed for all human sins.

This time things were drastically different: our system is still under development and we barely have any data in production; the tests run on almost empty environments, covering only a simple “super happy flow” scenario; we use the default settings of an MS-SQL database running in Docker; we use Hibernate as the ORM; the services are relatively small and the business logic is not sophisticated at all. What can go wrong?