Hibernate Reactive does not combine with blocking calls in Quarkus 3 #32665
/cc @DavideD (hibernate-reactive), @Sanne (hibernate-reactive), @gavinking (hibernate-reactive) |
Hmm, I added some more logs to see what's happening; it looks like the persisting is indeed done on a worker thread (both on Quarkus 2 and 3)
|
Could we (force) switch back to the event loop thread? |
Have you tried something like
CC @jponge |
I don't think this is a Mutiny issue per se.
If you want to bring back some execution to a Vert.x event-loop then you might capture the Vert.x context (see |
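A minimal sketch of that pattern, assuming the pipeline is subscribed from an event-loop thread (class and method names here are illustrative, not from the reproducer): capture the Vert.x `Context` up front and adapt it to an `Executor` so Mutiny's `emitOn` can hop back to it.

```java
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.infrastructure.Infrastructure;
import io.vertx.core.Context;
import io.vertx.core.Vertx;
import java.util.concurrent.Executor;

public class EventLoopHop {

    Uni<String> work() {
        // Capture the Vert.x context of the calling (event-loop) thread...
        Context ctx = Vertx.currentContext();
        // ...and adapt it to an Executor that emitOn can use.
        Executor eventLoop = command -> ctx.runOnContext(v -> command.run());

        return Uni.createFrom().item(this::blockingCall)
                // run the blocking part on a worker thread
                .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
                // hop back: downstream stages resume on the captured event loop
                .emitOn(eventLoop);
    }

    String blockingCall() {
        return "result";
    }
}
```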
Hm, what's the difference between
That's not very convenient. Maybe |
They're the same under the hood. For a long time the Quarkus executor was not a scheduled thread pool so we had to provide a wrapper on top of a non-scheduled executor, but that's not the case anymore.
Well when you subscribe, it happens from the caller thread. If it's a Vert.x event loop then the subscription starts in such a context. |
@wjglerum Since the

```java
// scheduled method is executed on a worker thread
@Scheduled(every = "1m")
void store() throws Throwable {
    Fruit fruit = fruitService.random();
    VertxContextSupport.subscribeAndAwait(() ->
            Panache.withTransaction(() -> fruitRepository.persist(fruit)));
}
```
|
👍 |
That indeed works, thanks! This wouldn't really work for more complex examples, though, where we have more calls with Unis and Multis in the reactive world. Next to
I tried doing something like it, but I can't quite figure out what the code should look like... This is what I came up with:

```java
@Singleton
@WithTransaction
public class FruitScheduler {

    @Inject
    FruitService fruitService;

    @Inject
    FruitRepository fruitRepository;

    @Scheduled(every = "1m")
    public void store() {
        Log.info("Scheduling fruit!");
        Context context = Vertx.currentContext();
        Fruit fruit = fruitService.random();
        context.runOnContext(v -> fruitRepository.save(fruit));
    }
}
```

However, that doesn't really work, as we now don't have a session on the context. We can fix that by wrapping this in a |
For example with something like this:

```java
@Singleton
@WithTransaction
public class FruitScheduler {

    @Inject
    FruitService fruitService;

    @Inject
    FruitRepository fruitRepository;

    @Scheduled(every = "1m")
    public Uni<Void> store() {
        Log.info("Scheduling fruit!");
        return fruitRepository.listAll()
                .chain(fruits -> fruitService.random().chain(fruit -> fruitRepository.save(fruit)))
                .replaceWithVoid();
    }
}
```
|
Noted 👍 |
What's the status of this? I was not able to get a clear picture from the existing comments. |
Hi, I would like to ask about a similar matter. Let's imagine we have a Hibernate request, a blocking action, and then a Hibernate request again. I ended up with the following; how optimal is it?

```java
@POST
@Authenticated
@Path("/completions")
public CompletionResults completions(@Valid CompletionQuery query) throws Throwable {
    Chat chat = VertxContextSupport.subscribeAndAwait(() -> getOrCreateChat(query));
    CompletionResults results = chatService.completions(query, securityIdentity.getPrincipal());
    results.setChatId(chat.id);
    VertxContextSupport.subscribeAndAwait(() -> persistMessages(chat, query.getMessages(), results.getChoices()));
    return results;
}
```

Ideally, I want to run a Hibernate request on the event loop, spawn a new thread for the blocking method, and then return to the event loop. My first attempt was like this |
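For comparison, a fully reactive version might look something like the sketch below. This assumes `getOrCreateChat` and `persistMessages` return `Uni` and that `chatService.completions(...)` is the blocking call (names as in the snippet above); it keeps the Hibernate Reactive calls on the event loop and offloads only the blocking part to a worker.

```java
@POST
@Authenticated
@Path("/completions")
public Uni<CompletionResults> completions(@Valid CompletionQuery query) {
    // Capture the event-loop context so we can hop back after the blocking call.
    Context ctx = Vertx.currentContext();
    Executor eventLoop = cmd -> ctx.runOnContext(v -> cmd.run());

    return getOrCreateChat(query)
            .chain(chat -> Uni.createFrom()
                    .item(() -> chatService.completions(query, securityIdentity.getPrincipal()))
                    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool()) // blocking part on a worker
                    .emitOn(eventLoop)                                        // back on the event loop
                    .chain(results -> {
                        results.setChatId(chat.id);
                        return persistMessages(chat, query.getMessages(), results.getChoices())
                                .replaceWith(results);
                    }));
}
```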
Frankly, if you're in a blocking world, just use Hibernate ORM: it will be easier for you. |
If one is going to use Hibernate ORM as opposed to Hibernate Reactive, there are no efficiency gains to be had by offloading to another thread; if anything, it would be worse. So the gist is: if you absolutely want to gain maximum efficiency, use Hibernate Reactive; otherwise stick with blocking operations using Hibernate ORM. |
The problem is that, if your operation takes too long on an I/O thread, Quarkus will stop the request and ask you to use |
I am not sure how that relates as you are never supposed to block any event loop, no matter what operations you perform. |
True, but Quarkus still cancels the request if the operation takes too long, even if it's actually not blocking |
That is configurable |
If we were in a world where we could combine Hibernate ORM and Hibernate Reactive in one and the same project and decide per API which one we use (or have long-running background cron jobs use Hibernate ORM and the API for querying some status or such use Hibernate Reactive), this would be acceptable. As we cannot freely combine the two, this is, frankly, very disappointing. I'm used to working in many languages, Scala, Kotlin, Rust, Node.js (bah) to name a few modern ones I use(d) professionally, and now with Quarkus it's quite often necessary to re-learn concepts and find workarounds for tasks that are easy with other solutions. This is not meant as a diss, I am super grateful you guys are doing amazing work to bring Java forward, but at the same time I must say: the tradeoffs hurt. We are currently evaluating migrating off of Quarkus at a client of mine because of stuff like this. This is not a threat or anything, you don't have to care, it's not about emotions or hurting the project, it's a matter of fact I wanted to share with you. We had a service running for many months now. Since yesterday the load increased and we suddenly get (without a new deployment!)
This is for a non-blocking call. |
Negative feedback is often more valuable than positive feedback, so thanks for sharing! The mixing of Hibernate ORM and Hibernate Reactive is something that is on the radar, but hasn't been done yet because other persistence related things get higher priority. As for the exception you are seeing, that is definitely a bug and we absolutely need to fix it. Can you open a new issue and attach a sample application that exhibits this problematic behavior? |
@geoand First off: Thanks! Currently I simply don't have the time to create a minimal reproducible example as this only happened since yesterday under more load and I cannot reproduce it locally so far. I am an open source maintainer myself (under another nick ;)) and would like to give back a little, but right now: I can't. |
Yeah, that's completely understood.
No problem. If and when you can create a sample we can use to debug the problem, please let us know - cc @DavideD |
@Froidoh thanks for your candid comments. The limitation of not being able to mix Hibernate classic ORM and Hibernate Reactive is one of the main reasons why Hibernate Reactive continues to be marked with "preview" status; we know it is annoying, but we also didn't want to hold back those building fully reactive applications from being able to access reactive. The next update of the docs will make the limitations more explicitly documented. That does not solve your problem, but I'm mentioning it here for others to be aware. On your specific issue, I'm curious to know a few things to clarify what kind of bug we are dealing with here.
if you have cases of
if you do NOT have
Thanks again. |
We use `<quarkus.platform.version>3.1.0.Final</quarkus.platform.version>`; the API in question is defined as:

```java
@POST
@Path("{bucket}")
@Produces(MediaType.APPLICATION_JSON)
public Uni<FileRow> uploadFile(
        @HeaderParam("X-File-Metadata") String metaData,
        @HeaderParam("Content-Type") String contentType,
        @PathParam("bucket") String bucketName,
        InputStream file) {
}
```

It should do a streaming file upload. At the time this API was created there was no way to do a streaming multipart file upload, in case you're wondering. We then proceed to fetch a bucket config from the database, and if it exists we try to persist the file in a non-blocking fashion via fs operations. In there we use an AsyncInputStream and let Vert.x handle a lot of stuff (biting off the buffer piece by piece so we don't choke on huge files and don't need to allocate too much memory at once). Once the file is uploaded we insert some data into a few db tables and return a result. Maybe there is a footgun in there indeed. |
The first thing to do would be to tre
Can you please fill in some pseudo-code showing what the impl does (and most importantly where blocking and non-blocking calls are used) |
@geoand trying with 3.3.1 we get:

```
org.hibernate.HibernateException: java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class java.lang.Integer (java.math.BigDecimal and java.lang.Integer are in module java.base of loader 'bootstrap')
```

Oh, how I missed the runtime errors of Java when working in Rust *g* We do not use BigDecimal in any of our code, btw. There is one occurrence of BigInteger though. |
@DavideD ^ |
Same with 3.2.5.Final. Not a problem with 3.1.0.Final. Didn't do a git bisect, but it starts with 3.2.0.Final (didn't try any release candidates). |
Can we have the stack trace? |
I am sorry, I missed this... you know what, it won't help, but here it is:

```java
@NonBlocking
public Uni<FileInformation> upload(String contentType, InputStream file, String originalFileName, String bucketName, MetaDataDto metadata) {
String normalizedFilename = originalFileName;
try {
byte[] decode = base64decoder.decode(originalFileName);
normalizedFilename = new String(decode, StandardCharsets.UTF_8);
} catch (Exception e) {
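// the name was not Base64-encoded; keep the original filename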
}
String filename = normalizedFilename;
return
bucketsService.getConfigIfAllowsNewFiles(bucketName)
.onItem()
.ifNull()
.failWith(BucketNotFoundException.noConfig(bucketName))
.flatMap(bucketConfig -> {
return persistFile(file, contentType, bucketConfig)
.flatMap(writtenFile -> repo.persistAndFlush(new FileInformation(
bucketConfig,
writtenFile,
contentType,
filename
)));
}
)
.flatMap(f -> f.updateMetaDataAndPersistChanges(metadata))
;
}
private Uni<AsyncFileWriteDto> persistFile(InputStream in, String contentType, BucketConfig bucketConfig) {
String fileName = UUID.randomUUID() + MimeTypes.getExtensionForMimeType(contentType);
String pathToDir = bucketConfig.pathToDir();
String filePath = pathToDir.concat("/").concat(fileName);
FileSystem nfs = vertx.fileSystem();
AsyncInputStream ais = new AsyncInputStream(vertx.getDelegate(), vertx.getDelegate().getOrCreateContext(), in, d -> updateHash(sha256Digest, d.getBytes()));
try {
return nfs
.mkdirs(pathToDir)
.flatMap(v -> nfs.createFile(filePath))
.flatMap(v -> nfs.open(filePath, new OpenOptions().setWrite(true)))
.flatMap(asyncFile ->
UniHelper.toUni(ais
.handler(data -> updateHash(sha256Digest, data.getBytes()))
.pipeTo(asyncFile.getDelegate()))
.flatMap(v -> {
BigInteger fileSize = BigInteger.valueOf(ais.getFileSize());
if (BigInteger.ZERO.equals(fileSize)) {
vertx.getDelegate().executeBlocking(p -> {
log.info(String.format("Deleting empty file at %s!", filePath));
nfs.deleteBlocking(filePath);
});
return Uni.createFrom().failure(new EmptyFileUploadException());
}
byte[] digest = sha256Digest.digest();
BigInteger bigInteger = new BigInteger(1, digest);
String fileHash = bigInteger.toString(16);
// Not all file systems support creation time, so don't even bother, let's define our own, it will be good enough
LocalDateTime creationTime = LocalDateTime.now();
LocalDateTime retentionTime = bucketConfig.calculateRetentionTimeFrom(creationTime);
// If everything was okay up until this point, we can make the file "unwritable"
ZonedDateTime zdt = ZonedDateTime.of(retentionTime, ZoneId.systemDefault());
vertx.getDelegate().executeBlocking(p -> {
// Yes, this is blocking as I found no way to do this asynchronously in Java
File f = new File(filePath);
try {
Files.setAttribute(Paths.get(filePath), "lastAccessTime", FileTime.from(zdt.toInstant()));
if (f.setLastModified(zdt.toInstant().toEpochMilli())) {
log.debug(String.format("set last modified of %s to %s", filePath, retentionTime));
} else {
log.error(String.format("Failed to set last modified of %s to %s", filePath, retentionTime));
}
if (!f.setReadOnly()) {
log.error(String.format("Failed to set %s to READ_ONLY", filePath));
}
p.complete();
} catch (IOException e) {
p.fail(e);
}
}, res -> {
if (res.succeeded()) {
log.debug(String.format("Successfully set lastAccessTime of %s to %s", filePath, retentionTime));
} else {
log.error(String.format("Failed to set lastAccessTime of %s to %s", filePath, retentionTime));
}
});
return Uni.createFrom().item(new AsyncFileWriteDto(filePath, creationTime, fileSize, fileHash, retentionTime));
}));
} catch (Exception e) {
log.error(String.format("Failed to persist file: %s", e));
return Uni.createFrom().failure(new FileUploadFailedException());
}
}
/**
* @author stw, antimist
* Taken from github
*/
public class AsyncInputStream implements ReadStream<Buffer> {
public static final int DEFAULT_READ_BUFFER_SIZE = 8192;
private static final Logger log = Logger.getLogger(AsyncInputStream.class);
// Based on the inputStream with the real data
private final ReadableByteChannel ch;
private final Vertx vertx;
private final Context context;
private boolean closed;
private boolean readInProgress;
private Handler<Buffer> dataHandler;
private final Handler<Buffer> hashCalculator;
private Handler<Void> endHandler;
private Handler<Throwable> exceptionHandler;
private final InboundBuffer<Buffer> queue;
private final int readBufferSize = DEFAULT_READ_BUFFER_SIZE;
private long readPos;
private long fileSize;
/**
* Create a new Async InputStream that can be used with a Pump
*
* @param in
* The input stream you want to write somewhere
*/
public AsyncInputStream(Vertx vertx, Context context, InputStream in, Handler<Buffer> hashCalculator) {
this.vertx = vertx;
this.context = context;
this.ch = Channels.newChannel(in);
this.queue = new InboundBuffer<>(context, 0);
this.hashCalculator = hashCalculator;
queue.handler(buff -> {
if (buff.length() > 0) {
handleData(buff);
} else {
handleEnd();
}
});
queue.drainHandler(v -> doRead());
}
public void close() {
closeInternal(null);
}
public void close(Handler<AsyncResult<Void>> handler) {
closeInternal(handler);
}
/*
* (non-Javadoc)
* @see io.vertx.core.streams.ReadStream#endHandler(io.vertx.core.Handler)
*/
@Override
public synchronized AsyncInputStream endHandler(Handler<Void> endHandler) {
check();
this.endHandler = endHandler;
return this;
}
/*
* (non-Javadoc)
* @see
* io.vertx.core.streams.ReadStream#exceptionHandler(io.vertx.core.Handler)
*/
@Override
public synchronized AsyncInputStream exceptionHandler(Handler<Throwable> exceptionHandler) {
check();
this.exceptionHandler = exceptionHandler;
return this;
}
/*
* (non-Javadoc)
* @see io.vertx.core.streams.ReadStream#handler(io.vertx.core.Handler)
*/
@Override
public synchronized AsyncInputStream handler(Handler<Buffer> handler) {
check();
this.dataHandler = handler;
if (this.dataHandler != null && !this.closed) {
this.doRead();
} else {
queue.clear();
}
return this;
}
/*
* (non-Javadoc)
* @see io.vertx.core.streams.ReadStream#pause()
*/
@Override
public synchronized AsyncInputStream pause() {
check();
queue.pause();
return this;
}
/*
* (non-Javadoc)
* @see io.vertx.core.streams.ReadStream#resume()
*/
@Override
public synchronized AsyncInputStream resume() {
check();
if (!closed) {
queue.resume();
}
return this;
}
@Override
public ReadStream<Buffer> fetch(long amount) {
queue.fetch(amount);
return this;
}
private void check() {
if (this.closed) {
throw new IllegalStateException("Inputstream is closed");
}
}
private void checkContext() {
if (!vertx.getOrCreateContext().equals(context)) {
throw new IllegalStateException("AsyncInputStream must only be used in the context that created it, expected: " + this.context
+ " actual " + vertx.getOrCreateContext());
}
}
private synchronized void closeInternal(Handler<AsyncResult<Void>> handler) {
check();
closed = true;
doClose(handler);
}
private void doClose(Handler<AsyncResult<Void>> handler) {
try {
ch.close();
if (handler != null) {
this.vertx.runOnContext(v -> handler.handle(Future.succeededFuture()));
}
} catch (IOException e) {
if (handler != null) {
this.vertx.runOnContext(v -> handler.handle(Future.failedFuture(e)));
}
}
}
public synchronized AsyncInputStream read(Buffer buffer, int offset, long position, int length,
Handler<AsyncResult<Buffer>> handler) {
Objects.requireNonNull(buffer, "buffer");
Objects.requireNonNull(handler, "handler");
Arguments.require(offset >= 0, "offset must be >= 0");
Arguments.require(position >= 0, "position must be >= 0");
Arguments.require(length >= 0, "length must be >= 0");
check();
ByteBuffer bb = ByteBuffer.allocate(length);
doRead(buffer, offset, bb, position, handler);
return this;
}
private void doRead() {
check();
doRead(ByteBuffer.allocate(readBufferSize));
}
private synchronized void doRead(ByteBuffer bb) {
if (!readInProgress) {
readInProgress = true;
Buffer buff = Buffer.buffer(readBufferSize);
doRead(buff, 0, bb, readPos, ar -> {
if (ar.succeeded()) {
readInProgress = false;
Buffer buffer = ar.result();
readPos += buffer.length();
fileSize = readPos;
// Empty buffer represents end of file
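// queue.write returns true while the InboundBuffer can accept more data,
// so keep reading until backpressure pauses us or we hit end of stream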
if (queue.write(buffer) && buffer.length() > 0) {
doRead(bb);
}
} else {
handleException(ar.cause());
}
});
}
}
private void doRead(Buffer writeBuff, int offset, ByteBuffer buff, long position, Handler<AsyncResult<Buffer>> handler) {
// ReadableByteChannel doesn't have a completion handler, so we wrap it into
// an executeBlocking and use the future there
vertx.executeBlocking(future -> {
try {
Integer bytesRead = ch.read(buff);
future.complete(bytesRead);
} catch (Exception e) {
log.error(e);
future.fail(e);
}
}, res -> {
if (res.failed()) {
context.runOnContext((v) -> handler.handle(Future.failedFuture(res.cause())));
} else {
// Do the completed check
Integer bytesRead = (Integer) res.result();
if (bytesRead == -1) {
//End of file
context.runOnContext((v) -> {
buff.flip();
writeBuff.setBytes(offset, buff);
buff.compact();
handler.handle(Future.succeededFuture(writeBuff));
});
} else if (buff.hasRemaining()) {
long pos = position;
pos += bytesRead;
// resubmit
doRead(writeBuff, offset, buff, pos, handler);
} else {
// It's been fully written
context.runOnContext((v) -> {
buff.flip();
writeBuff.setBytes(offset, buff);
buff.compact();
handler.handle(Future.succeededFuture(writeBuff));
});
}
}
});
}
private void handleData(Buffer buff) {
Handler<Buffer> handler;
synchronized (this) {
handler = this.dataHandler;
}
if (handler != null) {
checkContext();
hashCalculator.handle(buff);
handler.handle(buff);
}
}
private synchronized void handleEnd() {
Handler<Void> endHandler;
synchronized (this) {
dataHandler = null;
endHandler = this.endHandler;
}
if (endHandler != null) {
checkContext();
endHandler.handle(null);
}
}
private void handleException(Throwable t) {
if (exceptionHandler != null && t instanceof Exception) {
exceptionHandler.handle(t);
} else {
log.error("Unhandled exception", t);
}
}
public long getFileSize() {
return fileSize;
}
}
```

Maybe you see something that is absolutely wrong. |
Yes, the fix is going from:

```java
public class IdGenerator extends MutinyGenerator {
    @Override
    public Uni<Object> generate(Mutiny.Session session, Object owner, Object currentValue, EventType eventType) {
        return session
                .createNativeQuery("select OUR_SEQUENCE.nextval FROM dual")
                .getSingleResult()
                .map(x -> BigInteger.valueOf((Integer) x));
    }
    ...
```

to:

```java
public class IdGenerator extends MutinyGenerator {
    @Override
    public Uni<Object> generate(Mutiny.Session session, Object owner, Object currentValue, EventType eventType) {
        return session
                .createNativeQuery("select OUR_SEQUENCE.nextval FROM dual")
                .getSingleResult()
                .map(x -> ((BigDecimal) x).toBigInteger());
    }
    ...
```

I remember we had to change this a couple of times already in the past year(s), going through the versions. At some point it was a generic function and the result of the native query was typed, so this error would have been caught at compile time. But at some point this changed. |
So, the issue here is that you are running a native query without specifying what value you expect to receive. It's possible that with different versions of the driver, or Hibernate, or the database, you get something different in return.

```java
.createNativeQuery("select OUR_SEQUENCE.nextval FROM dual", BigInteger.class)
.getSingleResult()
```

or, at the very least:

```java
.createNativeQuery("select OUR_SEQUENCE.nextval FROM dual", BigDecimal.class)
.getSingleResult()
.map(BigDecimal::toBigInteger)
```
|
That's cool, thanks! But one would still need to cast this to an Object (or at least add a |
Yes, I think you are right. I don't know if there is any particular reason for not using the generic any more. I think this method is inspired by ORM, where the generate returns an
I will create an issue to make it generic again.
Actually, just returning a |
@Froidoh so if I understand correctly, what you are trying to do is something like the following:
Is that right? |
We get the metadata also from the call, I omitted that initially, but it's just an HTTP header. Sorry, it was an attempt to make the code a bit leaner. So it is:
|
Currently plain bytes, as back when we started this, multipart data could not be streamed at all in Quarkus, so we would have needed to allocate all the files on the server, which was an absolute no-go. A curl call would look like this:

```
curl --location 'localhost:8080/v1/buckets/test_bucket/files'
```
|
Quarkus should not be buffering the multipart contents, and if it is, that's a bug. Now, even if you use the raw HTTP body, you can just use the
Furthermore, you can also use |
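A minimal sketch of that approach, assuming RESTEasy Reactive's file body-parameter support (the framework writes the request body to a temporary file before invoking the endpoint; `FileRow`, the injected Mutiny `vertx`, and the target path are taken from or modeled on the snippets above):

```java
@POST
@Path("{bucket}")
@Produces(MediaType.APPLICATION_JSON)
public Uni<FileRow> uploadFile(@HeaderParam("X-File-Metadata") String metaData,
                               @PathParam("bucket") String bucketName,
                               File file) {
    // 'file' already sits in the (configurable) temp directory; move it to the
    // bucket's storage without ever holding the bytes in memory.
    return vertx.fileSystem()
            .move(file.getAbsolutePath(), "/mnt/buckets/" + bucketName + "/" + file.getName())
            .map(v -> new FileRow());
}
```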
@geoand is there any example of how to do this in a non-blocking manner while not allocating any additional memory on the server AND not relying on any temp folder? Because the server this runs on doesn't have a lot of HDD; the file system we push to is a special remote file system mounted on the server. If a lot of files get uploaded and all of them would need to temporarily reside on the server, we would have a problem. Maybe the "temp folder" is configurable and we could "pipe it through" to the remote file system... |
I have found what looks like a bug in our File handling which I am looking into. I'll post an update when I have figured it out. |
After #35659 is in, you will be able to use |
If you get a lot of concurrent requests uploading files, I suspect you will still have more room in your tmp folder on your HDD than if you streamed all those in memory and then later on your larger storage. Now, perhaps you're streaming them directly from the network to your larger storage, in which case, yeah, just let RESTEasy Reactive do it for you like Georgios said, and it will be done prior to invoking your endpoint, and give you a Note that this should work both for Multipart and a single file. |
Sadly no, as the server we are talking about is a container with like 200mb of hdd space available :/ |
So, did you try |
I did not, I am currently on vacation, and I must say that for the time being 2 of 3 of our services will migrate from Quarkus to Spring. This will probably require more hardware resources, but as I am the bottleneck of all development and involved in other projects as well, the decision was made and I think it's the right one. We'll see how everything works out with the advent of Java 21 and "green threads". At least now I am not the only person capable of writing somewhat acceptable code, because Quarkus/Mutiny really feels like another programming language if you only know "classic imperative Java". I want to say thanks for all your support and I wish you all the best. We are staying with Quarkus for one service that works reliably and makes users happy, but has no need to combine blocking calls with non-blocking ones :) |
That is all fine and well, but you can use Quarkus in an imperative way, and it's probably the best approach for most use cases. |
I would argue that this advice should be way more prominent on the website. If you search for Quarkus, you mostly find the Mutiny/Vert.x and non-blocking examples that distinguish Quarkus from other frameworks in Java-land. Also one additional note: what really made a big impact in the decision-making was a new requirement to use Redis. A colleague of mine read the docs and implemented it, and it doesn't work if there is more than one host specified. It took them a few minutes to get it working in Spring. Granted, they are way more familiar with Spring. |
Although we have been saying from day 1 that Quarkus can be used in both imperative and reactive mode (and the styles can even be mixed in the same application), the fact that you have not got that impression means we certainly need to do much better.
I would need to know more details about this to give a proper answer |
Hi all, I think we should close this as it's not particularly actionable; sorry for all the confusion. Allow me to give some advice: use Hibernate Reactive exclusively if you have a "pure" reactive application. Attempting to switch threads back and forth from blocking to reactive will only result in significant efficiency waste, making the use of Hibernate Reactive pointless. If you're on a regular executor (not the Vert.x/Netty threads), you're better off using the "regular" Hibernate ORM within a blocking thread, and remember there's nothing inherently bad about that, as we optimised the regular ORM a lot as well: if you find inefficiencies in it, let us know! On the other hand, if your entire flow of operations is running on the Vert.x (Netty) I/O threads, then (and only then) you can really benefit from Hibernate Reactive; but remember the benefit really stems from the fact that you're NOT switching threads and executors. Such switches are the operation to avoid to achieve a high-performance, highly efficient system, so attempting to shoehorn Hibernate Reactive operations from within a blocking thread just doesn't make much sense. HTH |
Describe the bug
In Quarkus 2 it's possible to combine blocking calls with writing to the database via Hibernate Reactive with Panache. It looks like this is no longer possible in Quarkus 3.
Take the following use case: a scheduled method delegates its blocking work to a worker pool with .runSubscriptionOn(Infrastructure.getDefaultWorkerPool()) (see https://smallrye.io/smallrye-mutiny/2.1.0/guides/imperative-to-reactive/#running-blocking-code-on-subscription) and then persists the result with Hibernate Reactive. This all worked fine on Quarkus 2 (the latest I tried was 2.16.6.Final), but doesn't work on Quarkus 3 (3.0.0.CR2).
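A hedged reconstruction of the failing pattern (illustrative names, not the attached reproducer verbatim):

```java
// Blocking work is delegated to a worker pool; the entity is then
// persisted with Hibernate Reactive inside the same pipeline.
@Scheduled(every = "1m")
Uni<Void> store() {
    return Panache.withTransaction(() ->
            Uni.createFrom().item(() -> fruitService.random())                // blocking call
                    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool()) // runs on a worker thread
                    .chain(fruit -> fruitRepository.persist(fruit)))          // Quarkus 3: fails, session touched off the Vert.x thread
            .replaceWithVoid();
}
```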
Expected behavior
I would expect that we can do some blocking work during a transaction when using Hibernate Reactive, especially when we delegate that blocking work to a worker thread and switch back to an event loop thread when we persist the entity with Hibernate Reactive.
Actual behavior
The scheduled method fails with the following exception:
How to Reproduce?
Attached reactive.zip is a simple project that reproduces the error.
./mvnw quarkus:dev
./mvnw quarkus:dev -Dorg.hibernate.reactive.common.InternalStateAssertions.ENFORCE=false
Output of `uname -a` or `ver`

Output of `java -version`

GraalVM version (if different from Java)
No response

Quarkus version or git rev

Build tool (i.e. output of `mvnw --version` or `gradlew --version`)

Additional information
Looks related to #32533