@ApplicationScoped
public class MyTransactionEventListeningBean {

    void onBeginTransaction(@Observes @Initialized(TransactionScoped.class) Object event) {
        // This gets invoked when a transaction begins.
    }

    void onBeforeEndTransaction(@Observes @BeforeDestroyed(TransactionScoped.class) Object event) {
        // This gets invoked before a transaction ends (commit or rollback).
    }

    void onAfterEndTransaction(@Observes @Destroyed(TransactionScoped.class) Object event) {
        // This gets invoked after a transaction ends (commit or rollback).
    }
}
In listener methods, you can access more information about the transaction in progress by accessing the TransactionManager, which is a CDI bean and can be @Injected.
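For example, a minimal sketch (the bean name is illustrative; it assumes the standard jakarta.transaction and CDI APIs) that injects the TransactionManager to inspect the status of the transaction being completed:

@ApplicationScoped
public class TransactionStatusBean {

    @Inject
    TransactionManager transactionManager;

    void onBeforeEndTransaction(@Observes @BeforeDestroyed(TransactionScoped.class) Object event) throws SystemException {
        // The transaction is still associated with this thread here, so its status
        // (see jakarta.transaction.Status) can be read from the injected TransactionManager.
        int status = transactionManager.getStatus();
        // ... log or react to the status
    }
}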
In cloud environments where persistent storage is not available, such as when application containers are unable to use persistent volumes, you can configure the transaction management to store transaction logs in a database by using a Java Database Connectivity (JDBC) datasource.
However, in cloud-native apps, using a database to store transaction logs has additional requirements.
The narayana-jta extension, which manages these transactions, requires stable storage, a unique reusable node identifier, and a steady IP address to work correctly.
While the JDBC object store provides stable storage, users must still plan how to meet the other two requirements.
After you evaluate whether using a database to store transaction logs is right for you, Quarkus allows the following JDBC-specific configuration of the object store through the quarkus.transaction-manager.object-store.<property> properties, where <property> can be:

type (string): Configure this property to jdbc to enable the use of a Quarkus JDBC datasource for storing transaction logs. The default value is file-system.
datasource (string): Specify the name of the datasource for the transaction log storage. If no value is provided for the datasource property, Quarkus uses the default datasource.
create-table (boolean): When set to true, the transaction log table gets automatically created if it does not already exist. The default value is false.
drop-table (boolean): When set to true, the tables are dropped on startup if they already exist. The default value is false.
table-prefix (string): Specify the prefix for the related table name. The default value is quarkus_.
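For example, a minimal application.properties sketch (the txlog datasource name is illustrative; omit the datasource property to fall back to the default datasource):

quarkus.transaction-manager.object-store.type=jdbc
quarkus.transaction-manager.object-store.datasource=txlog
quarkus.transaction-manager.object-store.create-table=true
quarkus.transaction-manager.object-store.table-prefix=quarkus_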
For more configuration information, see the Narayana JTA - Transaction manager section of the Quarkus All configuration options reference.
Additional information:
Create the transaction log table during the initial setup by setting the create-table property to true.
JDBC datasources and ActiveMQ Artemis allow the enlistment and automatically register the XAResourceRecovery:
JDBC datasources are part of quarkus-agroal and require quarkus.datasource.jdbc.transactions=XA.
ActiveMQ Artemis is part of quarkus-pooled-jms and requires quarkus.pooled-jms.transaction=XA.
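For example, a minimal application.properties sketch enabling XA enlistment for both (assuming the quarkus-agroal and quarkus-pooled-jms extensions are present):

quarkus.datasource.jdbc.transactions=XA
quarkus.pooled-jms.transaction=XA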
Does it work everywhere I want to?
Yep, it works in your Quarkus application, in your IDE, in your tests, because all of these are Quarkus applications.
JTA has some bad press for some people.
I don’t know why.
Let’s just say that this is not your grandpa’s JTA implementation.
What we have is perfectly embeddable and lean.
Does it do 2 Phase Commit and slow down my app?
No, this is an old folk tale.
Let’s just say it essentially comes for free and lets you scale to more complex cases involving several datasources as needed.
I don’t need transactions when I do read-only operations, it’s faster.
Wrong.
First off, just disable the transaction by marking your transaction boundary with @Transactional(NOT_SUPPORTED) (or NEVER or SUPPORTS, depending on the semantics you want).
Second, it’s again a fairy tale that not using transactions is faster.
The answer is, it depends on your DB and how many SQL SELECTs you are making.
With no explicit transaction, the DB still creates a single-operation transaction context for each statement anyway.
Third, when you do several SELECTs, it’s better to wrap them in a single transaction because they will all be consistent with one another.
Say your DB represents your car dashboard: you can see the number of kilometers remaining and the fuel gauge level.
By reading both in one transaction, they will be consistent.
If you read one and the other from two different transactions, they can be inconsistent.
It can be more dramatic if you read data related to rights and access management, for example.
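To make both points concrete, here is a minimal sketch (the Dashboard entity, queries, and bean are hypothetical; it assumes Jakarta Persistence and jakarta.transaction.Transactional) showing a read that suppresses the transaction and a read that groups related SELECTs for consistency:

@ApplicationScoped
public class DashboardService {

    public record DashboardView(double fuelLevel, double remainingKm) {}

    @Inject
    EntityManager em;

    // Run a single read outside of any JTA transaction.
    @Transactional(Transactional.TxType.NOT_SUPPORTED)
    public double fuelLevel() {
        return em.createQuery("select d.fuelLevel from Dashboard d", Double.class)
                .getSingleResult();
    }

    // Wrap related reads in one transaction so they are consistent with each other.
    @Transactional
    public DashboardView snapshot() {
        double fuel = em.createQuery("select d.fuelLevel from Dashboard d", Double.class)
                .getSingleResult();
        double km = em.createQuery("select d.remainingKm from Dashboard d", Double.class)
                .getSingleResult();
        return new DashboardView(fuel, km);
    }
}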
Why do you prefer JTA vs Hibernate’s transaction management API
Managing transactions manually via entityManager.getTransaction().begin() and friends leads to butt-ugly code with tons of try/catch/finally blocks that people get wrong.
Transactions are also about JMS and other database access, so one API makes more sense.
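For illustration, a sketch (the Fruit entity and method names are hypothetical; the manual variant assumes a resource-local EntityManager) contrasting the manual pattern with the declarative JTA equivalent:

@ApplicationScoped
public class FruitService {

    @Inject
    EntityManager em;

    // Declarative JTA style: the interceptor begins, commits, or rolls back for you.
    @Transactional
    public void save(Fruit fruit) {
        em.persist(fruit);
    }

    // Manual, resource-local style: easy to get wrong.
    public void saveManually(EntityManager resourceLocalEm, Fruit fruit) {
        EntityTransaction tx = resourceLocalEm.getTransaction();
        try {
            tx.begin();
            resourceLocalEm.persist(fruit);
            tx.commit();
        } catch (RuntimeException e) {
            if (tx.isActive()) {
                tx.rollback();
            }
            throw e;
        }
    }
}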
It’s a mess because I don’t know if my Jakarta Persistence persistence unit is using JTA or resource-level transactions
It’s not a mess in Quarkus :)
Resource-level was introduced to support Jakarta Persistence in a non-managed environment.
But Quarkus is both lean and a managed environment, so we can safely always assume we are in JTA mode.
The end result is that the difficulties of running Hibernate ORM + CDI + a transaction manager in Java SE mode are solved by Quarkus.