Cross-client Teku VC Lodestar BN is not working #8118

Closed
zilm13 opened this issue Mar 20, 2024 · 8 comments


zilm13 commented Mar 20, 2024

Description

According to the Kurtosis data (verified), the Teku VC is not working with the Lodestar BN. This should be fixed.
The current compatibility matrix is available at https://github.com/kurtosis-tech/ethereum-package?tab=readme-ov-file#beacon-node--validator-client-compatibility

This is the only incompatible pair left to resolve at the time of creating this issue.


zilm13 commented Mar 21, 2024

The Lodestar API response doesn't contain the required field finalized, so we fail on this. I think we shouldn't try to guess it; we can continue with this issue once Lodestar starts to provide a spec-compliant response.
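
For context, a sketch of the response wrapper that the standard Beacon API defines for state-dependent endpoints (the interface name is mine; the fields follow the spec):

// TypeScript sketch of the spec's response envelope; illustrative only.
interface StateResponseEnvelope<T> {
  execution_optimistic: boolean;
  finalized: boolean; // the field Lodestar was omitting at the time
  data: T;
}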


nflaig commented Apr 8, 2024

I picked this up from #8180

While experiencing the issue, the Teku VC complains with:
Apr 04 09:52:41 ursa-Xset teku[2401221]: 09:52:41.974 ERROR - Validator *** Error while connecting to beacon node event stream com.launchdarkly.eventsource.StreamIOException: java.net.ConnectException: Failed to connect to /127.0.0.1:1752 (See log file for full stack trace)

I am debugging Lodestar BN <> Teku VC right now, trying to figure out the remaining compatibility issues like ChainSafe/lodestar#6635 using Kurtosis. The Teku VC seems to be stuck at startup.

I have this massive stack trace; maybe it's helpful:

2024-04-08 22:43:32.374+00:00 | okhttp-eventsource-events[null]-1 | ERROR | teku-validator-log | Validator   *** Error while connecting to beacon node event stream
com.launchdarkly.eventsource.StreamIOException: java.net.SocketTimeoutException: timeout
at com.launchdarkly.eventsource.HttpConnectStrategy$Client.connect(HttpConnectStrategy.java:455) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.EventSource.tryStart(EventSource.java:292) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.EventSource.requireEvent(EventSource.java:595) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.EventSource.readAnyEvent(EventSource.java:390) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.background.BackgroundEventSource.pollAndDispatchEvent(BackgroundEventSource.java:194) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.background.BackgroundEventSource.access$900(BackgroundEventSource.java:73) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at com.launchdarkly.eventsource.background.BackgroundEventSource$1.run(BackgroundEventSource.java:141) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.base/java.lang.Thread.run(Unknown Source) [?:?]
Caused by: java.net.SocketTimeoutException: timeout
at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:146) ~[okio-jvm-3.6.0.jar:?]
at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:161) ~[okio-jvm-3.6.0.jar:?]
at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:339) ~[okio-jvm-3.6.0.jar:?]
at okio.RealBufferedSource.indexOf(RealBufferedSource.kt:430) ~[okio-jvm-3.6.0.jar:?]
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:323) ~[okio-jvm-3.6.0.jar:?]
at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:180) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:110) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154) ~[okhttp-4.12.0.jar:?]
at com.launchdarkly.eventsource.HttpConnectStrategy$Client.connect(HttpConnectStrategy.java:452) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
... 9 more
Caused by: java.net.SocketException: Socket closed
at java.base/sun.nio.ch.NioSocketImpl.endRead(Unknown Source) ~[?:?]
at java.base/sun.nio.ch.NioSocketImpl.implRead(Unknown Source) ~[?:?]
at java.base/sun.nio.ch.NioSocketImpl.read(Unknown Source) ~[?:?]
at java.base/sun.nio.ch.NioSocketImpl$1.read(Unknown Source) ~[?:?]
at java.base/java.net.Socket$SocketInputStream.read(Unknown Source) ~[?:?]
at okio.InputStreamSource.read(JvmOkio.kt:93) ~[okio-jvm-3.6.0.jar:?]
at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:128) ~[okio-jvm-3.6.0.jar:?]
at okio.RealBufferedSource.indexOf(RealBufferedSource.kt:430) ~[okio-jvm-3.6.0.jar:?]
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:323) ~[okio-jvm-3.6.0.jar:?]
at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:180) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:110) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201) ~[okhttp-4.12.0.jar:?]
at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154) ~[okhttp-4.12.0.jar:?]
at com.launchdarkly.eventsource.HttpConnectStrategy$Client.connect(HttpConnectStrategy.java:452) ~[okhttp-eventsource-4.1.1.jar:4.1.1]
... 9 more

I have been running the Teku VC with Lodestar in the past and this was not an issue previously, and there haven't been any changes to Lodestar's event stream API in a while, so I am not sure what I should be looking for on our side right now.


nflaig commented Apr 8, 2024

The Lodestar API response doesn't contain the required field finalized, so we fail on this. I think we shouldn't try to guess it; we can continue with this issue once Lodestar starts to provide a spec-compliant response.

This will be part of our next release (ChainSafe/lodestar#6645), so this issue should be resolved.

But I am seeing another issue: Teku tries to call postStateValidators and, if it gets a 404, retries via the getStateValidators API; so far so good. However, it looks like Teku does not properly format the query parameters: commas are encoded as %2C, which means that if you send multiple keys, Lodestar handles them as a single pubkey and does not return any data (see the illustration below).
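
To illustrate (endpoint per the standard Beacon API; the pubkeys are shortened placeholders):

// Illustrative TypeScript snippet, not Teku's actual code.
const ids = ["0xafa0594...", "0xa09202a..."];

// What Lodestar expects: a literal comma separating the ids, e.g.
// GET /eth/v1/beacon/states/head/validators?id=0xafa0594...,0xa09202a...
const ok = `/eth/v1/beacon/states/head/validators?id=${ids.join(",")}`;

// Encoding the whole value also escapes the comma as %2C, so the server
// sees a single id "0xafa0594...%2C0xa09202a..." and matches nothing.
const broken = `/eth/v1/beacon/states/head/validators?id=${encodeURIComponent(ids.join(","))}`;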

Which is why I think the Teku VC is stuck:

22:45:02.158 WARN  - Validator   *** Slashing protection not loaded for validators: afa0594, a09202a, a4ee262, a9a7af2, a36bd97, 9531099, a0f52ee, abcb932, 8bbff01, 8ab4ac6, ad2becc, ae411a9, b02b9ed, 91edd2b, 8ea53b5, ab5b878, 824f5d9, 8fc37a8, a338e96, 8b3edb7… (250 total)
22:45:07.149 WARN  - Unable to retrieve status for 250 validators.

The interesting part, though, is that this encoding issue was fixed a while ago; I am running consensys/teku:latest.


nflaig commented Apr 10, 2024

Making some progress: I have just implemented the POST validators endpoint, and this fixes the issues mentioned in #8118 (comment).
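
For reference, a sketch of that POST variant as defined by the standard beacon-API spec (request shape per the spec; the helper itself is illustrative, assuming a fetch-capable runtime):

// Sending the ids in a JSON body sidesteps query-string encoding issues.
async function postStateValidators(baseUrl: string, pubkeys: string[]): Promise<unknown> {
  const res = await fetch(`${baseUrl}/eth/v1/beacon/states/head/validators`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids: pubkeys }),
  });
  return res.json();
}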

But there is an issue where the Teku VC thinks the beacon node is not synced, yet I am not seeing GET /eth/v1/node/syncing being called. Any chance Teku uses the head event to determine whether the beacon node is synced?

2024-04-09 23:32:02.001+00:00 | ValidatorTimingChannel-1 | WARN  | teku-validator-log | Validator   *** Skipped producing sync_signature while node is syncing  Count: 1, Slot: 65
2024-04-09 23:32:02.216+00:00 | validator-async-0 | INFO  | teku-validator-log | Validator   *** Published attestation        Count: 9, Slot: 65, Root: fefe653716e7dec9dc1737f20ed9b3a7a0d596ff83bdacbcd8e2405c1f3d4314

/eth/v1/events?topics=head is called, but it fails to establish the stream due to #8118 (comment). I am still not sure why this happens; it looks like an issue at the socket layer.
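
For reference, a minimal sketch of that subscription (standard beacon-API event stream; the port and the eventsource npm package are illustrative choices, not what Teku uses internally):

import EventSource from "eventsource";

// Subscribe to head events from the beacon node (port is illustrative).
const es = new EventSource("http://127.0.0.1:8562/eth/v1/events?topics=head");
es.addEventListener("head", (event: MessageEvent) => {
  // Each head event carries the new head slot and block root as JSON.
  console.log("head event:", JSON.parse(event.data));
});
es.onerror = (err) => console.error("event stream error:", err);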

The last issue that might be worth mentioning is that block production fails as well, with the following error:

2024-04-09 23:58:10.038+00:00 | validator-async-0 | ERROR | teku-validator-log | Validator   *** Failed to produce block  Slot: 196 Validator: 89ac09c
tech.pegasys.teku.api.exceptions.RemoteServiceNotAvailableException: Server error from Beacon Node API (url = http://172.16.0.25:8562/eth/v1/beacon/blocks, status = 500, message = Internal Server Error)
at tech.pegasys.teku.validator.remote.typedef.ResponseHandler.serviceErrorHandler(ResponseHandler.java:142) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.ResponseHandler.handleResponse(ResponseHandler.java:118) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.handlers.AbstractTypeDefRequest.executeCall(AbstractTypeDefRequest.java:152) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.handlers.AbstractTypeDefRequest.postOctetStream(AbstractTypeDefRequest.java:145) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.handlers.SendSignedBlockRequest.sendSignedBlockAsSsz(SendSignedBlockRequest.java:93) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.handlers.SendSignedBlockRequest.sendSignedBlockAsSszOrFallback(SendSignedBlockRequest.java:82) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.handlers.SendSignedBlockRequest.sendSignedBlock(SendSignedBlockRequest.java:72) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.typedef.OkHttpValidatorTypeDefClient.sendSignedBlock(OkHttpValidatorTypeDefClient.java:142) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.RemoteValidatorApiHandler.lambda$sendSignedBlock$11(RemoteValidatorApiHandler.java:270) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.infrastructure.async.SafeFuture.of(SafeFuture.java:80) ~[teku-infrastructure-async-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.RemoteValidatorApiHandler.sendRequest(RemoteValidatorApiHandler.java:479) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.validator.remote.RemoteValidatorApiHandler.lambda$sendRequest$34(RemoteValidatorApiHandler.java:474) ~[teku-validator-remote-24.3.1.jar:24.3.1]
at tech.pegasys.teku.infrastructure.async.SafeFuture.of(SafeFuture.java:72) ~[teku-infrastructure-async-24.3.1.jar:24.3.1]
at tech.pegasys.teku.infrastructure.async.ScheduledExecutorAsyncRunner.lambda$createRunnableForAction$1(ScheduledExecutorAsyncRunner.java:124) ~[teku-infrastructure-async-24.3.1.jar:24.3.1]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.base/java.lang.Thread.run(Unknown Source) [?:?]

Not sure why Teku calls the v1 block API (the fork is Deneb), but in any case the call is never received by the Lodestar BN, although Teku reports a 500 Internal Server Error. I additionally confirmed this by looking at the network traffic.

zilm13 self-assigned this Apr 11, 2024

zilm13 commented Apr 11, 2024

Hello @nflaig,

Thank you for the collaboration!
I made a fix following your feedback: #8189
With it merged, I was able to run Teku-Lodestar (the :next Docker image), but only with count: 2, which is not required for any other BN/VC pair. When running a single node, I get the following issue for every submit/publish endpoint, and we receive the same kind of 500 error in Teku:

Apr-11 16:18:56.327[api]             error: Error on submitPoolAttestations [1] slot=1, index=0 - PublishError.InsufficientPeers
Error: PublishError.InsufficientPeers
    at Eth2Gossipsub.publish (file:///usr/app/node_modules/@chainsafe/libp2p-gossipsub/dist/src/index.js:1561:19)
    at async NetworkCore.publishGossip (file:///usr/app/packages/beacon-node/lib/network/core/networkCore.js:291:32)
Apr-11 16:18:56.328[api]             error: Error on submitPoolAttestations [0] slot=1, index=0 - PublishError.InsufficientPeers
Error: PublishError.InsufficientPeers
    at Eth2Gossipsub.publish (file:///usr/app/node_modules/@chainsafe/libp2p-gossipsub/dist/src/index.js:1561:19)
    at async NetworkCore.publishGossip (file:///usr/app/packages/beacon-node/lib/network/core/networkCore.js:291:32)
Apr-11 16:18:56.328[rest]            error: Req req-12 submitPoolAttestations error - Multiple errors on submitPoolAttestations
PublishError.InsufficientPeers
PublishError.InsufficientPeers
Error: Multiple errors on submitPoolAttestations
PublishError.InsufficientPeers
PublishError.InsufficientPeers
    at Object.submitPoolAttestations (file:///usr/app/packages/beacon-node/src/api/impl/beacon/pool/index.ts:104:15)
    at Object.handler (file:///usr/app/packages/api/src/utils/server/genericJsonServer.ts:45:23)
Apr-11 16:18:58.003[]                 info: Searching peers - peers: 0 - slot: 1 - head: (slot -1) 0x1a06…570b - exec-block: valid(0 0xd668…) - finalized: 0x0000…0000:0
Apr-11 16:19:00.152[api]             error: Error on publishAggregateAndProofs [0] slot=1, index=0 - PublishError.InsufficientPeers
Error: PublishError.InsufficientPeers
    at Eth2Gossipsub.publish (file:///usr/app/node_modules/@chainsafe/libp2p-gossipsub/dist/src/index.js:1561:19)
    at async NetworkCore.publishGossip (file:///usr/app/packages/beacon-node/lib/network/core/networkCore.js:291:32)
Apr-11 16:19:00.153[rest]            error: Req req-14 publishAggregateAndProofs error - PublishError.InsufficientPeers
Error: PublishError.InsufficientPeers
    at Eth2Gossipsub.publish (file:///usr/app/node_modules/@chainsafe/libp2p-gossipsub/dist/src/index.js:1561:19)
    at async NetworkCore.publishGossip (file:///usr/app/packages/beacon-node/lib/network/core/networkCore.js:291:32)
Apr-11 16:19:04.239[rest]            error: Req req-17 publishBlock error - PublishError.InsufficientPeers
Error: PublishError.InsufficientPeers
    at Eth2Gossipsub.publish (file:///usr/app/node_modules/@chainsafe/libp2p-gossipsub/dist/src/index.js:1561:19)
    at async NetworkCore.publishGossip (file:///usr/app/packages/beacon-node/lib/network/core/networkCore.js:291:32)

For all other issues:

  1. " Validator *** Error while connecting to beacon node event stream"
    I see this issue, it's ok with snooper disabled, will chec tomorrow a bit more, it looks like a snooper issue, because all is good with eventstream with snooper disabled
  2. I am running consensys/teku:latest
    you could use consensys/teku:develop now, it will include getStateValidators fix
  3. Skipped producing sync_signature while node is syncing
    it's linked with eventstream subscription failure, no head, no sync_signature could be produced, I will dig more, what is the source of the issue
  4. Not sure why Teku calls block v1 API (fork is deneb)
    It's not required to switch to v2 on Deneb, deneb should be supported for v1. We have v2 implemented but we don't want to make it default yet, see Switch to publishBlockV2 #8126
  5. but anyhow the call is never received by Lodestar bn although Teku claims an 500
    We send ssz encoded block, there was a bug in snooper (Kurtosis http requests logging proxy) with octet-streams, it's now fixed, if you update Kurtosis, it will work. So Teku will try ssz, get 415 error and fallback to json, which is supported by Lodestar. Also ssz could be disabled by flag, but not required as fallback is working.
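
A minimal sketch of the SSZ-then-JSON fallback described in point 5 (illustrative only, not Teku's actual implementation; assumes a fetch-capable runtime such as Node 18+):

// Try SSZ first; on 415 Unsupported Media Type, retry as JSON.
async function publishBlock(
  baseUrl: string,
  sszBytes: Uint8Array,
  jsonBlock: unknown
): Promise<Response> {
  const url = `${baseUrl}/eth/v1/beacon/blocks`;
  const sszRes = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: sszBytes,
  });
  if (sszRes.status !== 415) return sszRes;
  // The node rejected the SSZ body, so fall back to the JSON
  // representation, which Lodestar supports.
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(jsonBlock),
  });
}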


nflaig commented Apr 12, 2024

I made a fix following your feedback: #8189

Awesome, thanks for that. We also merged the POST variant of the validators endpoint (ChainSafe/lodestar#6655); our :next image has it now, and it will be released soon.

With it merged, I was able to run Teku-Lodestar (the :next Docker image), but only with count: 2, which is not required for any other BN/VC pair. When running a single node, I get the following issue for every submit/publish endpoint, and we receive the same kind of 500 error in Teku:

This is kinda expected; we require additional flags to run Lodestar without peers, see ethpandaops/ethereum-package#555.

For all other issues:

  1. Great catch; disabling the snooper indeed fixes all the remaining issues.
  2. Or Lodestar :next, which now has the POST endpoints.
  3. Seems fixed with the snooper disabled and the event stream working.
  4. Fair, I was thinking it might be part of the issue, but you are totally right; it's fine to keep using the v1 API.
  5. It's likely better for users to disable SSZ if they run with Lodestar as the beacon node (we will add support soon™), as the retry might be problematic due to the additional delay. Oh well, most people run mev-boost anyway.

Looks pretty healthy now:

09:43:41.056 INFO  - Validator   *** Published block              Count: 1, Slot: 156, Root: ce036eaea8d1977c8854d071c19c3e21fa5f083264d9940977b1854183d4d980, Blobs: 0, 0 (0%) gas, EL block: 039c0e1fa5f3fea49fba5dc38bb6492a03095cd92b20391492dafb9c035e8d9b (118)
09:43:41.721 INFO  - Validator   *** Published sync_signature     Count: 123, Slot: 156, Root: ce036eaea8d1977c8854d071c19c3e21fa5f083264d9940977b1854183d4d980
09:43:41.722 INFO  - Validator   *** Published attestation        Count: 9, Slot: 156, Root: ce036eaea8d1977c8854d071c19c3e21fa5f083264d9940977b1854183d4d980
09:43:49.434 INFO  - Validator   *** Published aggregate          Count: 1, Slot: 156, Root: ce036eaea8d1977c8854d071c19c3e21fa5f083264d9940977b1854183d4d980
09:43:49.489 INFO  - Validator   *** Published sync_contribution  Count: 15, Slot: 156, Root: ce036eaea8d1977c8854d071c19c3e21fa5f083264d9940977b1854183d4d980
09:43:53.707 INFO  - Validator   *** Published sync_signature     Count: 123, Slot: 157, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8
09:43:53.768 INFO  - Validator   *** Published attestation        Count: 3, Slot: 157, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8
09:44:01.404 INFO  - Validator   *** Published aggregate          Count: 1, Slot: 157, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8
09:44:01.550 INFO  - Validator   *** Published sync_contribution  Count: 12, Slot: 157, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8
09:44:09.769 INFO  - Validator   *** Published sync_signature     Count: 123, Slot: 158, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8
09:44:09.773 INFO  - Validator   *** Published attestation        Count: 7, Slot: 158, Root: 91fbfb851b05585d95aa5c2f3c9a1bd8de43e5de807779f29d53f476aecacfc8

Closing the compatibility issue on our side, thanks @zilm13, great collaboration 🚀


zilm13 commented Apr 12, 2024

Great!
I checked how it runs as a single node with the flags; everything is perfect.
Yeah, it's better to disable SSZ on production nodes in this case.
I've also contacted the Kurtosis team; they will investigate the event stream issue in the snooper. Let's see.
Thank you again. Closing the issue on our side, too.

zilm13 closed this as completed Apr 12, 2024
@barnabasbusa

An updated snooper has been released which supports SSZ. Thanks to @parithosh
