feat: aggregator service and events #61
Conversation
Force-pushed: cefd541 → bc0f4f8 → 71f31eb → e0ca58e → acef701 → 38650dc
I think nothing blocking, just some suggestions 🚀
`.env.tpl` (outdated)

```diff
@@ -1,8 +1,8 @@
 # These variables are only available in your SST code.

-# uncomment to try out deploying the api under a custom domain.
+# uncomment to try out deploying the `aggregator-api`` under a custom domain.
```
Suggested change:

```diff
-# uncomment to try out deploying the `aggregator-api`` under a custom domain.
+# uncomment to try out deploying the `aggregator-api` under a custom domain.
```
`package.json` (outdated)

```json
"@ucanto/client": "9.0.0",
"@ucanto/principal": "9.0.0",
"@ucanto/transport": "9.0.0",
```
Missing hats?
"@ucanto/client": "9.0.0", | |
"@ucanto/principal": "9.0.0", | |
"@ucanto/transport": "9.0.0", | |
"@ucanto/client": "^9.0.0", | |
"@ucanto/principal": "^9.0.0", | |
"@ucanto/transport": "^9.0.0", |
```js
const encodedBytes = JSONencode(aggregateOfferMessage)
return {
  MessageBody: toString(encodedBytes),
```
I believe `dag-json` has `stringify` and `parse` functions:

```js
import * as dagJSON from '@ipld/dag-json'

const str = dagJSON.stringify(aggregateOfferMessage)
```
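For illustration, a minimal sketch of what the encode/decode pair could look like with those helpers; the `encodeMessage`/`decodeMessage` names and the `aggregateOfferMessage` shape are assumptions here, not the PR's actual code:

```js
import * as dagJSON from '@ipld/dag-json'

// Encode: dag-json's stringify handles CIDs and Uint8Arrays directly,
// so no separate bytes-to-string conversion step is needed.
export const encodeMessage = (aggregateOfferMessage) => ({
  MessageBody: dagJSON.stringify(aggregateOfferMessage)
})

// Decode: parse the string body back into the original structure.
export const decodeMessage = (message) => dagJSON.parse(message.MessageBody)
```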
```js
return {
  MessageBody: toString(encodedBytes),
  // FIFO Queue message group id
  MessageGroupId: options.disableMessageGroupId ? undefined : aggregateOfferMessage.group
```
I'd argue this is not part of the encoded message and is an option you'd set when sending it...
I will create an issue to do this separately: #63
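A rough sketch of what moving the group id to the send site could look like, assuming the AWS SDK v3 SQS client; the function name and queue URL handling are placeholders, not the PR's actual code:

```js
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs'
import * as dagJSON from '@ipld/dag-json'

const client = new SQSClient({})

/** Send an aggregate offer, keeping the FIFO group id out of the encoded body. */
export async function sendAggregateOffer (queueUrl, aggregateOfferMessage) {
  await client.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    // the encoded body carries only the message content...
    MessageBody: dagJSON.stringify(aggregateOfferMessage),
    // ...while the FIFO message group id travels as a send-time option
    MessageGroupId: aggregateOfferMessage.group
  }))
}
```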
```js
export const decodeMessage = (message) => {
  const decodedBytes = fromString(message.MessageBody)
  return JSONdecode(decodedBytes)
}
```
Seems to be repeated a lot - would be nice to reuse something here?
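For example, the repeated pair could live in one shared module that every queue consumer imports; the module path and export names below are hypothetical:

```js
// hypothetical shared module, e.g. src/queue/codec.js
import { toString, fromString } from 'uint8arrays'
import { encode as JSONencode, decode as JSONdecode } from '@ipld/dag-json'

/** Encode a record into an SQS MessageBody string. */
export const encodeBody = (record) => toString(JSONencode(record))

/** Decode an SQS MessageBody string back into the original record. */
export const decodeBody = (body) => JSONdecode(fromString(body))
```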
```js
// if one we should put back in queue
if (sqsEvent.Records.length === 1) {
  return {
    batchItemFailures: sqsEvent.Records.map(r => ({
```
What is the difference between this and just throwing an error?
So that we can return 200 and account for metrics in a different way if we would like to. This is because it is possible to receive just 1 item in the batch if there is not enough throughput.
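For context, a rough sketch of the partial-batch-failure shape Lambda accepts when the handler returns successfully instead of throwing; the handler body and `processRecord` are illustrative, not the PR's exact code:

```js
/**
 * With ReportBatchItemFailures enabled on the event source mapping,
 * returning this shape tells SQS to retry only the listed messages,
 * while the invocation itself still succeeds and can be counted
 * separately in metrics. Throwing instead would fail the whole
 * invocation and retry every record in the batch.
 */
export async function handler (sqsEvent) {
  const failures = []
  for (const record of sqsEvent.Records) {
    try {
      await processRecord(record) // hypothetical per-record handler
    } catch {
      failures.push({ itemIdentifier: record.messageId })
    }
  }
  return { batchItemFailures: failures }
}
```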
```js
if (eventRawRecords.length !== 1) {
  return {
    statusCode: 400,
    body: `Expected 1 DynamoDBStreamEvent per invocation but received ${eventRawRecords.length}`
```
Suggested change:

```diff
-body: `Expected 1 DynamoDBStreamEvent per invocation but received ${eventRawRecords.length}`
+body: `Expected 1 DynamoDB record per invocation but received ${eventRawRecords.length}`
```
```js
const { ok, error } = await aggregatorEvents.handleInclusionInsertToUpdateState(context, record)
if (error) {
  return {
    statusCode: 500,
    body: error.message || 'failed to handle inclusion insert event'
  }
}
```
Not throwing here will remove the stack trace from the logs, so while you get a nice message in the body, it might be harder to debug.
I'm using this helper (Irakli suggested it) to make throwing a little easier. The code above becomes:
Suggested change:

```diff
-const { ok, error } = await aggregatorEvents.handleInclusionInsertToUpdateState(context, record)
-if (error) {
-  return {
-    statusCode: 500,
-    body: error.message || 'failed to handle inclusion insert event'
-  }
-}
+const ok = expect(
+  await aggregatorEvents.handleInclusionInsertToUpdateState(context, record),
+  'failed to handle inclusion insert event'
+)
```
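The helper itself isn't shown in this thread; a minimal sketch of what such an `expect` could look like, assuming a `{ ok, error }` result shape (this is an assumption, not necessarily the helper being referenced):

```js
/**
 * Unwrap a Result-style { ok, error } value, throwing when error is set
 * so the stack trace still shows up in the Lambda logs.
 */
export function expect (result, message) {
  if (result.error) {
    throw new Error(message, { cause: result.error })
  }
  return result.ok
}
```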
Cool, I will see how to adopt this.
One thing I would like is to rely on status codes to infer metrics, and this would make that difficult unless we explore it a bit further. Will leave that in the scope of this PR.
```js
if (sqsEvent.Records.length !== 1) {
  return {
    statusCode: 400,
    body: `Expected 1 sqsEvent per invocation but received ${sqsEvent.Records.length}`
```
Suggested change:

```diff
-body: `Expected 1 sqsEvent per invocation but received ${sqsEvent.Records.length}`
+body: `Expected 1 SQS message per invocation but received ${sqsEvent.Records.length}`
```
```js
if (sqsEvent.Records.length !== 1) {
  return {
    statusCode: 400,
    body: `Expected 1 sqsEvent per invocation but received ${sqsEvent.Records.length}`
```
Suggested change:

```diff
-body: `Expected 1 sqsEvent per invocation but received ${sqsEvent.Records.length}`
+body: `Expected 1 SQS message per invocation but received ${sqsEvent.Records.length}`
```
Force-pushed: 57b2b20 → 247bac6 → e7720a6
Suspect Issues: this pull request was deployed and Sentry observed the following issues:
This PR wires up the aggregator service and its events. All the old aggregator code was removed given it now lives in `filecoin-api`, so there is a huge diff of removed code. The `data` folder, `workflows` folders and `processor` folder were dropped.

Note that there is an `InclusionProofStore` where Proof bytes are stored. From the `filecoin-api` interface standpoint, the proof is available via the `inclusion-store`, but we can't stick it in DynamoDB. This means there are now 2 PUT ops behind it (note that if the first one, to the proof bucket, fails, there is no issue as nothing will happen with it).
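For illustration, a rough sketch of that two-PUT flow: write the proof bytes to the proof bucket first, and only then record the inclusion in DynamoDB. The client setup, bucket, table, key layout and function name are assumptions, not the PR's actual code:

```js
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb'

const s3 = new S3Client({})
const dynamo = new DynamoDBClient({})

export async function putInclusion ({ piece, aggregate, proofBytes }) {
  // 1st PUT: store the proof bytes in the proof bucket. If this fails,
  // nothing has been recorded yet, so there is no inconsistent state.
  await s3.send(new PutObjectCommand({
    Bucket: 'inclusion-proof-store', // assumed bucket name
    Key: `${aggregate}/${piece}`,    // assumed key layout
    Body: proofBytes
  }))

  // 2nd PUT: record the inclusion itself in DynamoDB, referencing the
  // proof by its bucket key instead of embedding the bytes.
  await dynamo.send(new PutItemCommand({
    TableName: 'inclusion-store',    // assumed table name
    Item: {
      piece: { S: piece },
      aggregate: { S: aggregate }
    }
  }))
}
```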