Send job logs to stdout #891
Comments
Seconded, trying to find a solution for this. Within self-hosted runners, Worker logs are output to _diag/, but these aren't the same logs that are output within GitHub |
I also tried to find the files inside the container, but I couldn't work out the structure. I was also unsure how GitHub Actions obfuscates secret data; if the masking is done on GitHub's servers, that could explain why the logs don't go to stdout. I haven't tested it yet, but I found these endpoints that may contain the logs, though that wouldn't be very practical for CloudWatch. https://docs.github.com/en/rest/reference/actions#download-job-logs-for-a-workflow-run |
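For reference, pulling a job's log through that endpoint looks roughly like this (a sketch: OWNER, REPO, and JOB_ID are placeholders, and $GITHUB_TOKEN is assumed to have access to the repository):

```shell
# Sketch of the "download job logs" endpoint linked above.
# OWNER, REPO, and JOB_ID are placeholders; the endpoint answers with a
# redirect to a short-lived download URL, hence -L to follow it.
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/repos/OWNER/REPO/actions/jobs/JOB_ID/logs"
```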
The secret masking happens on the runner, not on the server side. All job/step logs are located at You can get the logs via the API: https://docs.github.com/en/rest/reference/actions#download-job-logs-for-a-workflow-run So, is there any reason you have to get the logs from the runner instead of getting them via the API? |
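A minimal sketch of what runner-side masking means (hypothetical code, not the actual runner implementation): every registered secret is replaced before a log line leaves the runner, which is why logs fetched later from the server are already scrubbed.

```python
# Hypothetical illustration of runner-side secret masking (not the real runner
# code): each log line is scrubbed before it is uploaded or written anywhere.
def mask_line(line: str, secrets: list[str]) -> str:
    for secret in secrets:
        if secret:  # never substitute an empty string
            line = line.replace(secret, "***")
    return line

print(mask_line("login --token hunter2", ["hunter2"]))  # login --token ***
```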
Many log tools offer a straightforward Docker integration, such as CloudWatch, Fluentd, Loki and GCP. Those tools provide Docker logging drivers that automatically consume stdout and stream it into the chosen platform. This approach is better than consuming from the GitHub API because:
|
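As an example of such a driver, pointing the Docker daemon's default logging driver at CloudWatch is a one-file change. This is a sketch: the log group name is a hypothetical example, the region is an assumption, and the daemon host needs AWS credentials with CloudWatch Logs permissions.

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "gha-runner-logs",
    "awslogs-create-group": "true"
  }
}
```

Placed in /etc/docker/daemon.json, this streams every container's stdout/stderr to CloudWatch without touching the GitHub API at all.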
Yep agreed with the above, we stream our logs to Datadog in our case |
As of v2.300.0, setting |
Tried setting |
Good day. A few points I'd like to bring up here: re-opening this ticket, and issues with
|
As a follow-up to my earlier request/comment months back, it turned out this is pretty easy without relying on the
Create a config:

```shell
CUSTOM_CW_RULE="/opt/aws/pipe_gha_logs.json"
cat << 'EOF' > "$CUSTOM_CW_RULE"
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [{
          "file_path": "/opt/actions-runner/_diag/**",
          "log_group_name": "${runner_logs}",
          "log_stream_name": "{instance_id}"
        }]
      }
    }
  }
}
EOF
```

...and then just appending this config to whatever else the

```shell
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a append-config -m ec2 -s \
  -c "file:$CUSTOM_CW_RULE"
```

Since GHA is already filtering/masking sensitive info, this gives us masked logs in CloudWatch (making them easier to search, back up, offload, etc.). |
Can someone please let me know how to use "ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT"? Any documentation or sample config file? If it's just the environment variable, nothing happens after adding it. I'm still not able to see the output in the Worker log file.
|
However, this is very verbose, and some big JSON objects are emitted across multiple lines instead of being stringified. |
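For what it's worth, the variable has to be present in the runner process's own environment at startup; setting it in a workflow's env block has no effect. A sketch, assuming a standard ./run.sh launch from the runner's install directory:

```shell
# Sketch: export the flag before launching the runner (v2.300.0 or later),
# so the Worker's job log lines are mirrored to the runner's stdout.
export ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=1
./run.sh
```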
Describe the enhancement
I think a good enhancement for self-hosted runners would be the capability to send the logs generated by the jobs/steps to stdout. This goes very well with ephemeral containers (#510). A broader idea would be to allow an arbitrary output for the logs, such as a specific file. This could be implemented alongside the current behavior of sending the logs to GitHub, so we can still see them in a friendly interface during the job.
Use case
My current use case is sending logs to CloudWatch for debugging. Another case I can see is monitoring job errors/warnings on different stacks, such as ELK and Graylog.
Additional information
Since we are here, does anyone know a workaround for this? There seems to be no well-defined path where my logs are stored, so I can't redirect them to stdout using a symlink or something similar.
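One workaround sketch, assuming a default runner install under /opt/actions-runner: the Worker writes its diagnostic logs under _diag/, so they can at least be followed from there. Note these are the runner's diagnostic logs, not the exact log text rendered in the GitHub UI.

```shell
# Follow whatever the Worker is currently writing under _diag/.
# The path assumes a default install location; adjust to your runner directory.
tail -F /opt/actions-runner/_diag/Worker_*.log
```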