
Wire kibana config from fleet #4670

Merged (15 commits, Feb 15, 2021)
Conversation

@jalvz (Contributor) commented Feb 1, 2021

Motivation/summary

Adds support for central config and sourcemaps in managed mode.

Checklist

How to test these changes

In the Kibana UI, create an APM integration without an API key (or with a wrong one), and check that curl 'http://localhost:8200/config/v1/agents?service.name=foo' returns "Unauthorized". Then create a valid API key and update the APM policy with it. Repeat the same curl command; this time it should respond with a service-does-not-exist error.

For sourcemaps: index a sourcemap, and check that APM Server can read it back when ingesting errors, provided the right API key has been configured in the policy editor.

Related issues

Requires elastic/beats#23856

Closes #4573

@apmmachine (Contributor) commented Feb 1, 2021

💚 Build Succeeded


Build stats

  • Build Cause: Pull request #4670 updated

  • Start Time: 2021-02-11T18:48:07.695+0000

  • Duration: 43 min 58 sec

  • Commit: cc403ce

Test stats 🧪

Test Results: 0 failed, 4727 passed, 124 skipped, 4851 total


Steps errors 4


Run Window tests
  • Took 11 min 51 sec
Compress
  • Took 0 min 0 sec
  • Description: tar --exclude=coverage-files.tgz -czf coverage-files.tgz coverage
Compress
  • Took 0 min 0 sec
  • Description: tar --exclude=system-tests-linux-files.tgz -czf system-tests-linux-files.tgz system-tests
Test Sync
  • Took 3 min 27 sec
  • Description: ./.ci/scripts/sync.sh

@jalvz jalvz marked this pull request as ready for review February 4, 2021 16:41
@jalvz (Contributor, Author) commented Feb 4, 2021

Still missing tests, but opening this for review already.

@jalvz jalvz requested a review from a team February 4, 2021 16:42
@axw (Member) left a comment

I'm pretty confused about what this PR is meant to be doing.

We discussed that the API Key used by Fleet would not have sufficient privileges, i.e. would not have the APM space privileges required for querying central config. IIANM that's why you added the apm-server.kibana section to the integration package.

So why is elastic/beats#23856 needed, and why is the server still looking at the Fleet config?

beater/beater.go Outdated
@@ -220,7 +223,7 @@ func (s *serverCreator) CheckConfig(cfg *common.Config) error {

func (s *serverCreator) Create(p beat.PipelineConnector, rawConfig *common.Config) (cfgfile.Runner, error) {
integrationConfig, err := config.NewIntegrationConfig(rawConfig)
if err != nil {
if integrationConfig == nil || err != nil {
@axw (Member) commented:

It doesn't make sense to return a nil cfgfile.Runner unless there's an error being returned, and integrationConfig will/should only be nil if config.NewIntegrationConfig returns an error -- so I don't think this is right.
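The invariant behind this review comment can be sketched in isolation: a nil runner should only ever be returned together with a non-nil error, and the config is only nil when the constructor errored, so checking the error alone suffices. The types below are hypothetical stand-ins for config.NewIntegrationConfig and cfgfile.Runner, simplified for illustration:

```go
package main

import "errors"

// Hypothetical simplified stand-ins for the real types.
type integrationConfig struct{}

type runner interface{ Name() string }

type apmRunner struct{}

func (apmRunner) Name() string { return "apm-server" }

// newIntegrationConfig returns a nil config if and only if it also
// returns an error, mirroring the contract assumed in the review.
func newIntegrationConfig(raw map[string]interface{}) (*integrationConfig, error) {
	if raw == nil {
		return nil, errors.New("invalid integration config")
	}
	return &integrationConfig{}, nil
}

// create returns a nil runner only together with a non-nil error, so
// the single `err != nil` check is enough; an additional
// `integrationConfig == nil` check would be redundant.
func create(raw map[string]interface{}) (runner, error) {
	if _, err := newIntegrationConfig(raw); err != nil {
		return nil, err
	}
	return apmRunner{}, nil
}
```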

beater/config/config.go (comment resolved)
beater/beater.go Outdated
Comment on lines 288 to 291
cfg, err := config.NewConfig(args.RawConfig, args.KibanaConfig, elasticsearchOutputConfig(args.Beat))
if err != nil {
return nil, err
}
@axw (Member) commented:

Suggested change
cfg, err := config.NewConfig(args.RawConfig, args.KibanaConfig, elasticsearchOutputConfig(args.Beat))
if err != nil {
return nil, err
}
cfg, err := config.NewConfig(args.RawConfig, elasticsearchOutputConfig(args.Beat))
if err != nil {
return nil, err
}
if args.KibanaConfig != nil {
args.Kibana = *args.KibanaConfig
}

Seeing as this is the only place we pass in the arg, can we just replace the Kibana config afterwards and avoid forcing all other callers to pass in nil? Alternatively, we could not unpack Fleet.Kibana in the integration config, and instead merge it into the config in serverCreator.Create.

@@ -2,3 +2,6 @@ apm-server:
host: {{host}}
secret_token: {{secret_token}}
rum.enabled: {{enable_rum}}
kibana:
enabled: true
@axw (Member) commented:

Suggested change
enabled: true

enabled: true is implied, it's not necessary to explicitly specify it.

@jalvz (Author) replied:

Sure, I just like to be explicit.

@jalvz (Author) commented Feb 5, 2021

We discussed that the API Key passed used by Fleet would not have sufficient privileges

Indeed, this doesn't take the access_api_key from Fleet; just the Kibana connection information (host, etc.). We need that, right?

The API Key comes from user input in the package, as we talked about.

@@ -41,6 +41,11 @@ policy_templates:
required: true
show_user: true
default: false
- name: kibana_api_key
type: string
A reviewer (Contributor) asked:

should this be type: password?

@jalvz (Author) replied:

it appears there is no such thing...


@axw (Member) commented Feb 5, 2021

And this doesn't take the access_api_key from Fleet indeed, just the Kibana connection information (host, etc). We need that, right?

Ahhhh sorry, I see. Thanks for the explanation.

If elastic/beats#23856 does go ahead: is it not possible to have the spec extract "fleet.kibana" and merge it into "apm-server.kibana"? Is that the bit you said you couldn't get to work? I suppose this approach is fine, though.

Anyway, I see there's some contention on the Beats issue so let's wait until that's resolved?

@jalvz (Author) commented Feb 5, 2021

is it not possible to have the spec extract "fleet.kibana" and merge it into "apm-server.kibana"? Is that the bit you said you couldn't get to work?

I tried a few things; I believe the problem is that it is not possible to merge dicts on conflict, so we would lose either the api_key or the other fields.
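The merge problem described above can be demonstrated with a minimal Go sketch. Plain maps stand in for the config trees here, and the field names are illustrative; the point is that a naive last-wins merge replaces a conflicting nested section wholesale rather than combining its fields:

```go
package main

// merge performs a naive last-wins merge: on a key conflict the entire
// value from src replaces the one in dst, nested fields and all. There
// is no deep merge, so sibling fields under a conflicting key are lost.
func merge(dst, src map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(dst)+len(src))
	for k, v := range dst {
		out[k] = v
	}
	for k, v := range src {
		out[k] = v // conflict: src wins wholesale
	}
	return out
}
```

Under a merge like this, a "kibana" section carrying the host and another carrying the api_key cannot both survive, which is why the PR keeps them in separate config sections instead.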

@jalvz jalvz requested a review from axw February 9, 2021 13:46
@axw (Member) left a comment

LGTM. A couple of things I'd like to see:

  • A test for the new APIKey behaviour in kibana.ConnectingClient. I think this should be done before merging.
  • A system test for agent central config when running under Fleet, using the injected fleet.kibana config, as a followup.

@jalvz jalvz merged commit 8507904 into elastic:master Feb 15, 2021
axw pushed a commit to axw/apm-server that referenced this pull request Feb 19, 2021
* Adds kibana and sourcemap api keys to package
# Conflicts:
#	apmpackage/apm/0.1.0/agent/input/template.yml.hbs
#	apmpackage/apm/0.1.0/manifest.yml
#	changelogs/head.asciidoc
#	kibana/connecting_client.go
axw added a commit that referenced this pull request Feb 19, 2021
* Adds kibana and sourcemap api keys to package
# Conflicts:
#	apmpackage/apm/0.1.0/agent/input/template.yml.hbs
#	apmpackage/apm/0.1.0/manifest.yml
#	changelogs/head.asciidoc
#	kibana/connecting_client.go

Co-authored-by: Juan Álvarez <[email protected]>
@axw axw self-assigned this Feb 24, 2021
@axw (Member) commented Feb 24, 2021

I tested this with 7.12 BC2, and it doesn't seem to be working. I started the stack with apm-integration-testing (with elastic/apm-integration-testing#1069 applied):

./scripts/compose.py start 7.12 --bc=37f40745 --apm-server-managed --with-elastic-agent --package-registry-snapshot

Then looking at the APM Server logs in the Fleet UI, I see logs like:

[elastic_agent.apm_server][info] Kibana url: http://localhost:5601
[elastic_agent.apm_server][error] failed to obtain connection to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response: .

@jalvz (Author) commented Feb 24, 2021

Sorry @axw, the Beats backports are not merged yet.

@axw (Member) commented Mar 3, 2021

Tested again with BC3, works well!

Successfully merging this pull request may close these issues.

[Fleet] Support for APM Agent Central Config with zero configuration
4 participants