go-carbon
Golang implementation of Graphite/Carbon server with classic architecture: Agent -> Cache -> Persister
Features
- Receive metrics from TCP and UDP (plaintext protocol)
- Receive metrics with Pickle protocol (TCP only)
- Receive metrics from HTTP
- Receive metrics from Apache Kafka
- storage-schemas.conf
- storage-aggregation.conf
- Carbonlink (requests to cache from graphite-web)
- Carbonlink-like gRPC api
- Logging with rotation support (reopen log if it moves)
- Many persister workers (using many cpu cores)
- Run as daemon
- Optional dump/restore on restart via `USR2` signal (`dump` config section): stop the persister, start writing new data to a file, dump the cache to a file, then stop everything (and restore from those files on the next start); see the example after this list
- Reload some config options without restart (`HUP` signal):
  - `whisper` section of the main config, `storage-schemas.conf` and `storage-aggregation.conf`
  - `graph-prefix`, `metric-interval`, `metric-endpoint`, `max-cpu` from the `common` section
  - `dump` section
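For example, a datapoint can be pushed over the plaintext protocol with standard shell tools, and the dump/restore and reload features are driven by signals. A minimal sketch (the port, metric name, and netcat flags are illustrative and vary by environment):

```
# send one datapoint over the plaintext TCP protocol: "<metric.path> <value> <timestamp>"
echo "some.test.metric 42 $(date +%s)" | nc -q0 localhost 2003

# dump the cache to file and stop (data is restored on the next start), see the "dump" config section
kill -USR2 "$(pidof go-carbon)"

# reload the supported config options without a restart
kill -HUP "$(pidof go-carbon)"
```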
Performance
Faster than default carbon. In all conditions :) How much faster depends on server hardware, storage-schemas, etc.
One comparison replaced "carbon" with "go-carbon" on a server with a load of up to 900 thousand metrics per minute.
There was also an effort to find the maximum possible performance of go-carbon on dedicated hardware (2x E5-2620v3, 128 GB RAM, local SSDs): stable performance was around 950k points per second, with short-term peaks of 1.2M points per second.
Efficient metric namespace patterns for trie index
Putting the most common namespaces at the beginning of the metric name can be beneficial for scaling. This does not affect the performance of the trigram index, but if you switch to the trie index to serve a larger number of metrics in a single go-carbon instance, your queries become more efficient. This naming pattern can also lead to a better metric namespace hierarchy.
For example, querying `sys.cpu.loadavg.app.host-0001` is faster than querying `sys.app.host-0001.cpu.loadavg` with the trie index. Especially when you have tens of thousands of hosts (host-0001, ..., host-9999), they all share the same prefix `sys.cpu.loadavg.app.host-` in the trie index and are compared only once during a query. This pattern therefore leads to better memory usage and query performance when using the trie+NFA/DFA index.
More details can be found in this blog post: To glob 10M metrics: Trie * DFA = Tree² for Go-Carbon.
Installation
Use binary packages from the releases page or build manually (requires golang 1.8+):
```
# build binary
git clone https://github.com/go-graphite/go-carbon.git
cd go-carbon
make
```
We are using packagecloud to host our packages!
At this moment we are building deb and rpm packages for i386, amd64 and arm64 architectures. Installation guides are available on packagecloud (see the links below).
Stable versions: Stable repo
Autobuilds (master, might be unstable): Autobuild repo
We're uploading Docker images to ghcr.io.
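A minimal way to try the image (a sketch; the image path, tag, ports, and mount points are assumptions, check the package page on ghcr.io and your config for the exact values):

```
docker pull ghcr.io/go-graphite/go-carbon:latest
docker run -d \
  -p 2003:2003 -p 2003:2003/udp -p 2004:2004 -p 7002:7002 -p 8080:8080 \
  -v /etc/go-carbon:/etc/go-carbon \
  -v /var/lib/graphite/whisper:/var/lib/graphite/whisper \
  ghcr.io/go-graphite/go-carbon:latest
```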
Also, you can download test packages from build artifacts: go to the list of test runs, click on the PR name, and click on "packages-^1" under the "Artifact" section.
Configuration
```
$ go-carbon --help
Usage of go-carbon:
  -check-config=false: Check config and exit
  -config="": Filename of config
  -config-print-default=false: Print default config
  -daemon=false: Run in background
  -pidfile="": Pidfile path (only for daemon)
  -version=false: Print version
```
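A common workflow is to print the default config, adjust it, and validate it before starting the daemon (a sketch using the flags above; the paths are illustrative):

```
# generate the default config, then edit it to taste
go-carbon -config-print-default > /etc/go-carbon/go-carbon.conf

# validate the config and exit
go-carbon -check-config -config /etc/go-carbon/go-carbon.conf

# run in the background
go-carbon -daemon -config /etc/go-carbon/go-carbon.conf -pidfile /var/run/go-carbon.pid
```

The annotated config below shows the available sections and options: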
[common]
# Run as user. Works only in daemon mode
user = "carbon"
# Prefix for storing all internal go-carbon graphs. Supported macros: {host}
graph-prefix = "carbon.agents.{host}"
# Endpoint to store internal carbon metrics. Valid values: "" or "local", "tcp://host:port", "udp://host:port"
metric-endpoint = "local"
# Interval of storing internal metrics. Like CARBON_METRIC_INTERVAL
metric-interval = "1m0s"
# Increase for configurations with multiple persister workers
max-cpu = 4
[whisper]
data-dir = "/var/lib/graphite/whisper"
# http://graphite.readthedocs.org/en/latest/config-carbon.html#storage-schemas-conf. Required
schemas-file = "/etc/go-carbon/storage-schemas.conf"
# http://graphite.readthedocs.org/en/latest/config-carbon.html#storage-aggregation-conf. Optional
aggregation-file = "/etc/go-carbon/storage-aggregation.conf"
# It's currently a go-carbon-only feature, not a standard graphite feature. Optional
# More details in doc/quotas.md
# quotas-file = "/etc/go-carbon/storage-quotas.conf"
# Worker threads count. Metrics sharded by "crc32(metricName) % workers"
workers = 8
# Limits the number of whisper update_many() calls per second. 0 - no limit
max-updates-per-second = 0
# Sparse file creation
sparse-create = false
# use flock on every file call (ensures consistency if there are concurrent read/writes to the same file)
flock = true
enabled = true
# Use hashed filenames for tagged metrics instead of human readable
# https://github.com/go-graphite/go-carbon/pull/225
hash-filenames = true
# specify to enable/disable compressed format (EXPERIMENTAL)
# See details and limitations in https://github.com/go-graphite/go-whisper#compressed-format
# IMPORTANT: Only one process/thread can write to compressed whisper files at a time. Especially when you are
# rebalancing graphite clusters (with buckytools, for example), flock needs to be enabled both in go-carbon and in your tooling.
compressed = false
# automatically delete empty whisper file caused by edge cases like server reboot
remove-empty-file = false
# Enable online whisper file config migration.
#
# online-migration-rate means metrics per second to migrate.
#
# To partially enable default migration for only some matched rules in
# storage-schemas.conf or storage-aggregation.conf, we can set
# online-migration-global-scope = "-" and enable the migration in the config
# files (more examples in deploy/storage-aggregation.conf and deploy/storage-schemas.conf).
#
# online-migration-global-scope can also be set to any combination of the 3 rules
# (xff,aggregationMethod,schema) as a csv string
# like: "xff", "xff,aggregationMethod", "xff,schema",
# or "xff,aggregationMethod,schema".
#
# online-migration = false
# online-migration-rate = 5
# online-migration-global-scope = "-"
[cache]
# Limit of in-memory stored points (not metrics)
max-size = 1000000
# Capacity of queue between receivers and cache
# Strategy to persist metrics. Values: "max","sorted","noop"
# "max" - write metrics with most unwritten datapoints first
# "sorted" - sort by timestamp of first unwritten datapoint.
# "noop" - pick metrics to write in unspecified order,
# requires least CPU and improves cache responsiveness
write-strategy = "max"
# If > 0 use bloom filter to detect new metrics instead of cache
bloom-size = 0
[udp]
listen = ":2003"
enabled = true
# Optional internal queue between receiver and cache
buffer-size = 0
[tcp]
listen = ":2003"
enabled = true
# Optional internal queue between receiver and cache
buffer-size = 0
[pickle]
listen = ":2004"
# Limit message size to prevent memory overflow
max-message-size = 67108864
enabled = true
# Optional internal queue between receiver and cache
buffer-size = 0
# You can define an unlimited number of additional receivers
# Common definition scheme:
# [receiver.<any receiver name>]
# protocol = "<any supported protocol>"
# <protocol specific options>
#
# All available protocols:
#
# [receiver.udp2]
# protocol = "udp"
# listen = ":2003"
# # Enable optional logging of incomplete messages (chunked by max UDP packet size)
# log-incomplete = false
#
# [receiver.tcp2]
# protocol = "tcp"
# listen = ":2003"
#
# [receiver.pickle2]
# protocol = "pickle"
# listen = ":2004"
# # Limit message size to prevent memory overflow
# max-message-size = 67108864
#
# [receiver.protobuf]
# protocol = "protobuf"
# # Same framing protocol as pickle, but message encoded in protobuf format
# # See https://github.com/go-graphite/go-carbon/blob/master/helper/carbonpb/carbon.proto
# listen = ":2005"
# # Limit message size to prevent memory overflow
# max-message-size = 67108864
#
# [receiver.http]
# protocol = "http"
# # This receiver receives data from POST requests body.
# # Data can be encoded in plain text format (default),
# # protobuf (with Content-Type: application/protobuf header) or
# # pickle (with Content-Type: application/python-pickle header).
# listen = ":2007"
# max-message-size = 67108864
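# #
# # A hypothetical usage sketch (assumes this receiver is enabled on :2007;
# # plain text is the default encoding):
# #   curl -X POST --data-binary "some.test.metric 42 $(date +%s)" http://localhost:2007/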
#
# [receiver.kafka]
# protocol = "kafka"
# # This receiver receives data from kafka
# # You can use Partitions and Topics to do sharding
# # State is saved in local file to avoid problems with multiple consumers
#
# # Encoding of messages
# # Available options: "plain" (default), "protobuf", "pickle"
# # Please note that for "plain" you must pass metrics with leading "\n".
# # e.g.
# # echo "test.metric $(date +%s) $(date +%s)" | kafkacat -D $'\0' -z snappy -T -b localhost:9092 -t graphite
# parse-protocol = "protobuf"
# # Kafka connection parameters
# brokers = [ "host1:9092", "host2:9092" ]
# topic = "graphite"
# partition = 0
#
# # Specify how often receiver will try to connect to kafka in case of network problems
# reconnect-interval = "5m"
# # How often receiver will ask Kafka for new data (in case there were no messages available to read)
# fetch-interval = "200ms"
#
# # Path to saved kafka state. Used for restarts
# state-file = "/var/lib/graphite/kafka.state"
# # Initial offset, if there is no saved state. Can be relative time or "newest" or "oldest".
# # In case the offset is unavailable (in the future, etc.) the fallback is "oldest"
# initial-offset = "-30m"
#
# # Specify kafka feature level (default: 0.11.0.0).
# # Please note that some features (consuming lz4 compressed streams) require kafka >0.11
# # You must specify the version in full, e.g. '0.11.0.0' is ok, but '0.11' is not.
# # Supported version (as of 22 Jan 2018):
# # 0.8.2.0
# # 0.8.2.1
# # 0.8.2.2
# # 0.9.0.0
# # 0.9.0.1
# # 0.10.0.0
# # 0.10.0.1
# # 0.10.1.0
# # 0.10.2.0
# # 0.11.0.0
# # 1.0.0
# kafka-version = "0.11.0.0"
#
# [receiver.pubsub]
# # This receiver receives data from Google PubSub
# # - Authentication is managed through APPLICATION_DEFAULT_CREDENTIALS:
# # - https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
# # - Currently the subscription must exist before running go-carbon.
# # - The "receiver_*" settings are optional and directly map to the google pubsub
# # libraries ReceiveSettings (https://godoc.org/cloud.google.com/go/pubsub#ReceiveSettings)
# # - How to think about the "receiver_*" settings: In an attempt to maximize throughput the
# # pubsub library will spawn 'receiver_go_routines' to fetch messages from the server.
# # These goroutines simply buffer them into memory until 'receiver_max_messages' or 'receiver_max_bytes'
# # have been read. This does not affect the actual handling of these messages which are processed by other goroutines.
# protocol = "pubsub"
# project = "project-name"
# subscription = "subscription-name"
# receiver_go_routines = 4
# receiver_max_messages = 1000
# receiver_max_bytes = 500000000 # default 500MB
[carbonlink]
listen = "127.0.0.1:7002"
enabled = true
# Close inactive connections after "read-timeout"
read-timeout = "30s"
# grpc api
# protocol: https://github.com/go-graphite/go-carbon/blob/master/helper/carbonpb/carbon.proto
# samples: https://github.com/go-graphite/go-carbon/tree/master/api/sample
[grpc]
listen = "127.0.0.1:7003"
enabled = true
# http://graphite.readthedocs.io/en/latest/tags.html
[tags]
enabled = false
# TagDB url. It should support /tags/tagMultiSeries endpoint
tagdb-url = "http://127.0.0.1:8000"
tagdb-chunk-size = 32
tagdb-update-interval = 100
# Directory for send queue (based on leveldb)
local-dir = "/var/lib/graphite/tagging/"
# POST timeout
tagdb-timeout = "1s"
[carbonserver]
# Please NOTE: carbonserver is not intended to fully replace graphite-web
# It acts as a "REMOTE_STORAGE" for graphite-web or carbonzipper/carbonapi
listen = "127.0.0.1:8080"
# Carbonserver support is still experimental and may contain bugs
# Or be incompatible with github.com/grobian/carbonserver
enabled = false
# Buckets to track response times
buckets = 10
# carbonserver-specific metrics will be sent as counters
# For compatibility with grobian/carbonserver
metrics-as-counters = false
# Read and Write timeouts for HTTP server
read-timeout = "60s"
write-timeout = "60s"
# Request timeout for each API call
request-timeout = "60s"
# Enable /render cache, it will cache the result for 1 minute
query-cache-enabled = true
# Hard limits the number of whisper files that get created each second. 0 - no limit
max-creates-per-second = 0
# Enable carbonV2 gRPC streaming render cache, it will cache the result for 1 minute
streaming-query-cache-enabled = false
# 0 for unlimited
query-cache-size-mb = 0
# Enable /metrics/find cache, it will cache the result for 5 minutes
find-cache-enabled = true
# Control trigram index
# This index is used to speed-up /find requests
# However, it will lead to increased memory consumption
# Estimated memory consumption is approx. 500 bytes per each metric on disk
# Another drawback is that it will recreate index every scan-frequency interval
# All new/deleted metrics will still be searchable until index is recreated
trigram-index = true
# carbonserver keeps track of all available whisper files in memory.
# This determines how often it will check FS for new or deleted metrics.
# If you only use the trie index, have