
<- home page

Features

This page gives a brief overview of the main features that Logsearch-for-cloudfoundry adds to Logsearch.

Logs retrieval from CloudFoundry

CloudFoundry can be configured to send its platform logs (logs of CloudFoundry components) via RELP. Logsearch has jobs that accept logs sent via RELP and forward them to syslog, so these CloudFoundry logs reach Logsearch with no additional effort.

For application logs CloudFoundry provides the firehose. To retrieve application logs from the firehose, Logsearch-for-cloudfoundry adds a job that runs the firehose-to-syslog utility, a Go program that reads logs from the firehose and sends them to syslog.

As a result, CloudFoundry logs (both platform and application) appear in syslog and are processed by Logsearch.

Exclude an application from getting its logs in ELK

CF applications can opt out of having their logs shipped to ELK. Technically, the filtering happens in the firehose-to-syslog utility, so the logs of "ignored" apps never reach syslog and, consequently, never get into ELK.

To keep an application's logs out of ELK, set the environment variable F2S_DISABLE_LOGGING=true for that app:

$ cf set-env YOUR_APP_NAME F2S_DISABLE_LOGGING true

No app restaging is necessary.

Set F2S_DISABLE_LOGGING to false when you want the app's logs to appear in ELK again.
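For example:

$ cf set-env YOUR_APP_NAME F2S_DISABLE_LOGGING false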

Logstash parsing rules

Logsearch ships with a set of parsing rules for syslog formats, which is a good start in the general case.

In addition, Logsearch-for-cloudfoundry provides a set of parsing rules for CloudFoundry logs, covering the log formats of CloudFoundry components, the firehose-to-syslog format (for application logs), and general formats such as JSON.
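To give an idea of what such rules look like, below is a minimal sketch in Logstash filter syntax. It is illustrative only, not one of the rules actually shipped by Logsearch-for-cloudfoundry, and the "app" target field name is an arbitrary choice for this example. It parses an application message as JSON when the message looks like JSON:

filter {
  # Illustrative sketch: parse @message as JSON when it starts with "{"
  # (the "app" target field is an arbitrary name chosen for this example)
  if [@message] =~ /^\s*\{/ {
    json {
      source => "@message"
      target => "app"
      tag_on_failure => ["app_json_parse_failure"]
    }
  }
}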

For more details on parsing, see the Logs parsing page.

Elasticsearch mappings

Logsearch-for-cloudfoundry provides Elasticsearch mappings for the logs indices. The mappings define reasonable rules for the fields:

  • All string fields are not_analyzed by default

    This mapping is defined in the Logsearch release.

  • All static fields (fields known at parsing time) are mapped according to their datatypes. Dynamic fields (e.g. JSON fields) are mapped using the default mappings described above and the Elasticsearch dynamic mapping mechanism.

  • Two string fields can be used for full-text search - @raw and @message - and both are defined as analyzed strings. Additionally, a not_analyzed copy of the @message field is defined as @message.raw, which can be used for analytics. The @raw field is set as the default full-text search field.

Mappings are uploaded to Elasticsearch as index templates. Note that Elasticsearch applies a template to the indices matching its index pattern (the template property) and according to its order field.
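For illustration, a minimal index template that makes string fields not_analyzed by default could look like the sketch below. The template name, index pattern, order value and Elasticsearch URL are placeholders, not the templates actually shipped by Logsearch-for-cloudfoundry:

$ curl -XPUT 'http://localhost:9200/_template/logs-example' -d '
{
  "template": "logs-*",
  "order": 99,
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ]
    }
  }
}'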

Mappings and Kibana Authentication

Note that the Kibana authentication plugin uses the @cf.org_id and @cf.space_id fields for data filtering (see below), and it is important that these fields stay not_analyzed. Keep this in mind if you decide to customize the mappings.
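If you do customize the mappings, you can verify how these fields ended up being mapped with the Elasticsearch field-mapping API; the index pattern and URL below are just examples:

$ curl 'http://localhost:9200/logs-*/_mapping/field/@cf.org_id,@cf.space_id?pretty'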

Kibana authentication plugin

Logsearch-for-cloudfoundry extends Kibana with an authentication plugin. The plugin uses UAA (the user authentication and authorization server for CloudFoundry) to authenticate a user and fetch the account information, including the CloudFoundry organizations and spaces the user has access to.

Based on this account information, the user is authorized in Kibana to see only the logs of applications running in those organizations and spaces. Admin users are authorized to see all data in Kibana, including CloudFoundry platform logs (admin users are members of the system organization - the organization that owns the CloudFoundry system domain).

Login

From a technical point of view, the authorization mechanism applies additional filters to all search requests made from Kibana to Elasticsearch in order to limit the data shown to the user. The filtering is done by the @cf.org_id and @cf.space_id fields; to make filtering by these fields possible, they are specified as not_analyzed in the Elasticsearch mappings (see the Elasticsearch mappings section above).
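Conceptually, the added filter is similar to the sketch below. This is only an illustration of the idea, not the plugin's actual query, and the GUID values are placeholders:

{
  "bool": {
    "should": [
      { "terms": { "@cf.org_id":   ["org-guid-1", "org-guid-2"] } },
      { "terms": { "@cf.space_id": ["space-guid-1"] } }
    ]
  }
}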

The plugin is delivered in the Logsearch-for-cloudfoundry deployment with the cf-kibana job (when Kibana is deployed to CloudFoundry) and as a plugin installed into the standalone Kibana provided by the Logsearch deployment.

Redirect after logout

The Kibana authentication plugin redirects the user to the UAA UI for login and logout. If you want users to be redirected back to the Kibana application after logout, make sure to enable the "redirect after logout" feature in the UAA server you are using. This feature is disabled by default in UAA; you can enable it in the deployment manifest of your UAA. Example:

properties:
  ...
  login:
    logout:
      redirect:
        url: /login
        parameter:
          disable: false
          whitelist:
          - https://my_kibana_domain/login
          - http://my_kibana_domain/login
...

(the example is based on the UAA logout configuration and the UAA-release spec)

Kibana saved objects

Kibana allows you to save searches, visualizations, and dashboards and then reuse them when exploring data.

To give you a head start with log analysis, Logsearch-for-cloudfoundry creates index patterns and a set of predefined searches, visualizations and dashboards in Kibana. These predefined Kibana objects are uploaded to Elasticsearch (the .kibana index) during deploy. Logsearch-for-cloudfoundry lets you skip the upload of these defaults and also specify custom data files to be uploaded to Kibana during deploy (see the Customization page).
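As a quick check after deploy, you can list the uploaded objects directly in Elasticsearch. The example below assumes a Kibana 4.x-style .kibana index, where each object kind (dashboard, search, visualization, index-pattern) is stored as a separate document type:

$ curl 'http://localhost:9200/.kibana/dashboard/_search?pretty'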

Note that any of the uploaded Kibana objects can later be deleted or modified using the Kibana interface.

Possibility to deploy Kibana as CloudFoundry application

Logsearch-for-cloudfoundry makes it possible to deploy Kibana to the CloudFoundry platform, so that instead of a standalone instance (the option provided by the Logsearch deployment) you get Kibana running in CloudFoundry.

The pros of this approach (compared to using a standalone Kibana instance):

  • Easier deployment
  • Automatic scalability and load balancing provided by CloudFoundry platform
  • Fewer resources are needed

When deploying, you can choose which approach to use. See the Deployment section for deploy instructions for each option.


For details on how these features are delivered in the Logsearch-for-cloudfoundry deployment, see the Jobs page. For customization options, visit the Customization page.


<- prev page | next page ->