A minimal Collector configuration wires receivers and exporters together through pipelines in the service section:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: .local:443

service:
  extensions: []
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]
```

Receivers

A receiver, which can be push or pull based, is how data gets into the Collector. Receivers may support one or more data sources.

The receivers: section is how receivers are configured. Many receivers come with default settings, so simply specifying the name of the receiver is enough to configure it (for example, zipkin:). If configuration is required, or a user wants to change the default configuration, then such configuration must be defined in this section. Configuration parameters specified for which the receiver provides a default configuration are overridden.

Configuring a receiver does not enable it. Receivers are enabled via pipelines within the service section.

One or more receivers must be configured. A basic example of receivers is provided below. For detailed receiver configuration, see the receiver documentation.

```yaml
receivers:
  # Data sources: logs
  fluentforward:
    endpoint: 0.0.0.0:8006

  # Data sources: metrics
  hostmetrics:
    scrapers:
      cpu:
      disk:
      filesystem:
      load:
      memory:
      network:
      process:
      processes:
      paging:

  # Data sources: traces
  jaeger:
    protocols:
      grpc:
      thrift_binary:
      thrift_compact:
      thrift_http:

  # Data sources: traces
  kafka:
    protocol_version: 2.0.0

  # Data sources: traces, metrics
  opencensus:

  # Data sources: traces, metrics, logs
  otlp:
    protocols:
      grpc:
      http:

  # Data sources: metrics
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 5s
          static_configs:
            - targets:

  # Data sources: traces
  zipkin:
```

Processors

Processors are run on data between being received and being exported.

The processors: section is how processors are configured. Processors may come with default settings, but many require configuration. Configuration parameters specified for which the processor provides a default configuration are overridden.

Configuring a processor does not enable it. Processors are enabled via pipelines within the service section.

A basic example of the default processors is provided below. The full list of available processors can be found by combining the lists in the core and contrib repositories. For detailed processor configuration, see the processor documentation.

```yaml
processors:
  # Data sources: traces
  attributes:
    actions:
      - key: environment
        value: production
        action: insert
      - key: db.statement
        action: delete
      - key: email
        action: hash

  # Data sources: traces, metrics, logs
  batch:

  # Data sources: metrics
  filter:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - prefix/.*
          - prefix_.*

  # Data sources: traces, metrics, logs
  memory_limiter:
    check_interval: 5s
    limit_mib: 4000
    spike_limit_mib: 500

  # Data sources: traces
  resource:
    attributes:
      - key: cloud.zone
        value: zone-1
        action: upsert
      - key: k8s.cluster.name
        from_attribute: k8s-cluster
        action: insert
      - key: redundant-attribute
        action: delete

  # Data sources: traces
  probabilistic_sampler:
    hash_seed: 22
    sampling_percentage: 15

  # Data sources: traces
  span:
    name:
      to_attributes:
        rules:
          - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$
      from_attributes: [db.svc, operation]
      separator: '::'
```

Exporters

An exporter, which can be push or pull based, is how you send data to one or more backends/destinations. Exporters may support one or more data sources.

The exporters: section is how exporters are configured. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings.
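Since configuring a component does not enable it, it helps to see an exporters: section and a service: section side by side. The sketch below is illustrative only: the endpoint value is a placeholder assumption, and the choice of the memory_limiter and batch processors is one reasonable combination, not prescribed by the examples above.

```yaml
exporters:
  # Data sources: traces, metrics, logs
  otlp:
    # Placeholder endpoint; replace with your backend's OTLP/gRPC address.
    endpoint: otelcol.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      # Ordering matters: memory_limiter is recommended to run before batch.
      processors: [memory_limiter, batch]
      exporters: [otlp]
```

Any component referenced in a pipeline must also be defined in its corresponding top-level section, or the Collector will fail to start.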