On this page you can read about the parsing rules (Logstash filters) that Logsearch-for-cloudfoundry adds to Logsearch parsing.

By default, Logsearch-for-cloudfoundry stores parsed logs in indices named `logs-%{[@metadata][index]}-%{+YYYY.MM.dd}`, where `%{[@metadata][index]}` is calculated as follows:

- `platform` for CloudFoundry component logs,
- `app-%{[cf][org]}-%{[cf][space]}` for application logs (including CloudFoundry logs about applications).

As a result, the following indices are created:

- `logs-platform-%{+YYYY.MM.dd}` for platform logs,
- `logs-app-%{[cf][org]}-%{[cf][space]}-%{+YYYY.MM.dd}` for application logs.

Please note that the index name is configurable and can be customized. It can also be useful to read how to get a list of all indices in Elasticsearch.
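As an illustration, such an index pattern is applied in a Logstash `elasticsearch` output. A minimal sketch of what that output block can look like (the hosts and exact options here are illustrative, not the actual Logsearch-for-cloudfoundry template):

```
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    # Resolves to e.g. logs-platform-2016.09.26
    # or logs-app-myorg-myspace-2016.09.26
    index => "logs-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}
```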
Logsearch-for-cloudfoundry provides Logstash parsing rules which are used to parse incoming log events and create a set of fields from the parsed data. Some fields are common to application and platform logs, some are event-specific. There are also system fields added by Logstash. Read the sections below for detailed information about the fields that logs are split into when using Logsearch-for-cloudfoundry.
These fields are common to application and platform logs and store the following information from a log event:

- Log input (`@input`, `@index_type`)
- Log shipping (`@shipper.*` fields)
- Log source (`@source.*` fields)
- Log destination in Elasticsearch (`@metadata.index`, `@type`)
- Log message payload (`@message`, `@level`)
Field | Value examples | Comment |
---|---|---|
`@input` | syslog, relp, ... | |
`@index_type` | app, platform | Either `app` or `platform`. Default is `platform`. |
`@metadata.index` | platform, app-myorg-myspace, ... | Constructed as `app-${org}-${space}` for application logs. Note that `${space}` and `${org}` are omitted from the index name if the corresponding info is missing in the log event. The field is used to set the index name (`logstash_parser.elasticsearch.index` property in config). |
`@shipper.priority` | 6, 14, ... | |
`@shipper.name` | doppler_syslog, vcap.nats_relp, ... | |
`@source.host` | 192.168.111.63, ... | |
`@source.deployment` | cf-full-diego, ... | For application logs this value is shipped within a log event. For platform logs a deployment dictionary is provided which uses deployment names set with the `logstash_parser.deployment_name` property and maps CloudFoundry jobs to these names. (NOTE: the deployment dictionary is applied in Logsearch parsing rules.) |
`@source.job` | cell_z1, ... | |
`@source.job_index` | 52ba268e-5578-4e79-afa2-2ddefd70badg, ... | BOSH ID (GUID) of the job: the value of `spec.id` extracted from BOSH for the job. |
`@source.index` | 0, 1, ... | BOSH instance index: the value of `spec.index` extracted from BOSH for the job. |
`@source.vm` | cell_z1/0 | For entries where `@source.index` is set, calculated as `@source.job`/`@source.index`. |
`@source.component` | rep, nats, bbs, uaa, ... | |
`@source.type` | APP, RTR, STG, ...; system, cf | For application logs the field is set with CloudFoundry log source types. Additionally, for log events that don't specify a source type, a dictionary based on the event type is used: LogMessage -> LOG, Error -> ERR, ContainerMetric -> CONTAINER, ValueMetric -> METRIC, CounterEvent -> COUNT, HttpStartStop -> HTTP. For platform logs the value is either `system` or `cf`. |
`@type` | LogMessage, Error, ValueMetric, ...; system, cf, haproxy, uaa, vcap | The field is used to define the document type in Elasticsearch (set in the `logstash_parser.elasticsearch_index_type` property). It is set with values distinguishing logs of different types. |
`@message` | This is a sample log message text | |
`@level` | INFO, ERROR, WARN, ... | |
`@raw` | <13>2016-09-26T18:20:25.134194+00:00 192.168.111.63 vcap.rep [job=cell_z1 index=0] My log message | Stores the unparsed log event (as it arrived). (NOTE: this field is provided by the Logsearch deployment.) |
`@timestamp` | September 26th 2016, 21:04:17.928 | Set with the log event timestamp (the time when the log was collected by the CloudFoundry logging agent). |
`tags` | syslog_standard, app, logmessage, logmessage-app, ... | Stores tags set during parsing. A specific tag is set in each parsing snippet, which helps track parsing (tag name = snippet name). Fail tags are set in case of parsing failures. |
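Putting the common fields together, a parsed platform log event might be stored as an Elasticsearch document along these lines (a hypothetical illustration built from the example values in the table above, not actual output):

```json
{
  "@input": "relp",
  "@index_type": "platform",
  "@metadata": { "index": "platform" },
  "@shipper": { "priority": "6", "name": "vcap.nats_relp" },
  "@source": {
    "host": "192.168.111.63",
    "deployment": "cf-full-diego",
    "job": "cell_z1",
    "index": 0,
    "vm": "cell_z1/0",
    "component": "rep",
    "type": "cf"
  },
  "@type": "vcap",
  "@message": "My log message",
  "@level": "INFO",
  "@timestamp": "2016-09-26T18:20:25.134Z",
  "tags": ["syslog_standard", "platform"]
}
```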
These fields are specific to application logs only. They store CloudFoundry metadata about the application that emitted the log or that relates to the log event (e.g. metrics).
Field | Values |
---|---|
`@cf.org_id` | 2d5f8dc7-dcf4-443b-9491-a54d27db785f, ... |
`@cf.org` | myorg, ... |
`@cf.space_id` | c9290e71-780b-43ee-8074-f37ee33b2ff7, ... |
`@cf.space` | myspace, ... |
`@cf.app_id` | ee61d1b6-f08f-4f93-b93f-2a9b0ae82dfc, ... |
`@cf.app` | myapp, ... |
`@cf.app_instance` | 0, 1, 2, ... |
- Application logs are shipped as JSON events. The set of JSON fields varies between event types. All common fields from the JSON are mapped to the common fields and CF meta fields listed above. Other JSON fields (extra fields specific to a particular event type) are stored as `<@type>.<json field name>`. Example: `logmessage.message_type`. Additionally, the format of a log line (the message shipped in a log event) may vary between events. Fields parsed from a log line are stored as `<@source.type>.<field name>`. Example: `rtr.path`.
- Platform logs are shipped as plain-text events. The format is parsed and the common fields are set from the parsed data. The format of a log line (the message shipped in a log event) may vary between event types. For consistency, fields parsed from the log line are stored as `<@source.component>.<field name>`. Example: `uaa.pid`.
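For example, a parsed `LogMessage` event of source type `RTR` could combine common, CF and event-specific fields like this (a hypothetical illustration; the actual field set depends on the event):

```json
{
  "@index_type": "app",
  "@metadata": { "index": "app-myorg-myspace" },
  "@cf": { "org": "myorg", "space": "myspace", "app": "myapp", "app_instance": 0 },
  "@type": "LogMessage",
  "@source": { "type": "RTR" },
  "logmessage": { "message_type": "OUT" },
  "rtr": { "path": "/healthz" },
  "@message": "GET /healthz HTTP/1.1 200"
}
```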
Each parsed log event also has a set of Elasticsearch meta fields (prefixed with an underscore `_`).
Parsing rules are split into several logical snippets for clarity and easier maintenance. All the snippets are included in the `default.conf.erb` file, which is eventually used for parsing. The order of the snippets is important, because fields parsed in one snippet are then used in subsequent ones.
The parsing rules chain includes:
Contains general field parsing. Sets fields such as `@input`, `@index_type`, `[@metadata][index]`, etc.
General parsing of application logs retrieved from CloudFoundry by the firehose-to-syslog utility. Before shipping logs, firehose-to-syslog wraps them in JSON of a special format. The format varies between event types. For possible event types and their formats, see the CloudFoundry dropsonde-protocol, which firehose-to-syslog uses.
Most of application common fields are parsed in this snippet.
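To give an idea of the input this snippet deals with, a `LogMessage` event wrapped by firehose-to-syslog looks roughly like the JSON below. The field names here follow typical firehose-to-syslog output but should be verified against the version in use:

```json
{
  "cf_app_id": "ee61d1b6-f08f-4f93-b93f-2a9b0ae82dfc",
  "cf_app_name": "myapp",
  "cf_org_name": "myorg",
  "cf_space_name": "myspace",
  "event_type": "LogMessage",
  "message_type": "OUT",
  "origin": "rep",
  "source_type": "APP",
  "source_instance": "0",
  "msg": "This is a sample log message text"
}
```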
Parses LogMessage events.
Note that the snippet app-logmessage-app.conf parses APP log messages, i.e. the log lines emitted by applications themselves. The snippet parses several popular log formats: JSON, the Tomcat container logging format and the Logback status lines logging format. See the snippet for details on parsing.
- app-error.conf, app-containermetric.conf, app-valuemetric.conf, app-counterevent.conf, app-http.conf
Parses Error, ContainerMetric, ValueMetric, CounterEvent and HttpStartStop events respectively.
General parsing of CloudFoundry component logs. Parses logs based on the Metron Agent format.
Parsing rules for CloudFoundry haproxy, uaa and other vcap* components.
Performs field post-processing and cleanup.
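To illustrate the kind of rule these snippets contain, here is a hypothetical grok filter for the bracketed `[job=... index=...]` part of the Metron Agent format seen in the `@raw` example above (a sketch only, not the actual Logsearch-for-cloudfoundry rule):

```
filter {
  grok {
    # e.g. "[job=cell_z1 index=0] My log message"
    match => {
      "@message" => "\[job=%{NOTSPACE:[@source][job]} index=%{INT:[@source][index]}\] %{GREEDYDATA:@message}"
    }
    overwrite => ["@message"]
    tag_on_failure => ["fail/cloudfoundry/platform/grok"]
  }
}
```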