Introduction
Prometheus is a popular open source monitoring system, and Grafana is an open source tool that complements it on the visualization side. Together, the two tools help users make sense of complex metric data from containerized systems, and the combination has become one of the most common monitoring stacks used by DevOps teams.
Prometheus
Prometheus Concepts
To plan and carry out a sizing exercise for Prometheus, the following concepts need to be understood first.
Time series: A time series is a stream of timestamped values belonging to the same metric and the same set of labeled dimensions. Besides stored time series, Prometheus may generate temporary derived time series as the result of queries. Prometheus can handle millions of time series, and memory usage is directly proportional to the time series count. A time series is represented as a series of chunks, which ultimately end up in a time series file (one file per time series) on disk.
The prometheus_local_storage_memory_series metric reports the current number of series held in memory.
Scrape: Prometheus is a pull-based system. To fetch metrics, Prometheus sends an HTTP request called a scrape. It sends scrapes to targets based on its configuration.
Metrics & Labels: Every time series is uniquely identified by its metric name and optional key-value pairs called labels. The metric name specifies the general feature of a system that is measured (e.g. http_requests_total, the total number of HTTP requests received). The four metric types are Counter, Gauge, Histogram, and Summary (a minimal instrumentation sketch follows this list of concepts).
Labels: Labels enable Prometheus's dimensional data model: any given combination of labels for the same metric name identifies a particular dimensional instantiation of that metric (for example, all HTTP requests that used the method POST against the /api/tracks handler).
Samples: Samples form the actual time series data. Each sample consists of a float64 value and a millisecond-precision timestamp.
Instance/Target & Job: In Prometheus terms, an endpoint you can scrape is called an instance, usually corresponding to a single process. A collection of instances with the same purpose (a process replicated for scalability or reliability, for example) is called a job.
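To make these concepts concrete, below is a minimal Go sketch using the official client_golang library. It defines the http_requests_total counter with method and handler labels from the example above and exposes it on /metrics so Prometheus can scrape it; the port and the incrementing handler are illustrative assumptions, not part of the original text.

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// httpRequestsTotal is a Counter; each unique combination of the
// "method" and "handler" labels produces its own time series.
var httpRequestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total number of HTTP requests received.",
	},
	[]string{"method", "handler"},
)

func main() {
	prometheus.MustRegister(httpRequestsTotal)

	// Illustrative handler: every request increments the series matching
	// its labels (e.g. method="POST", handler="/api/tracks").
	http.HandleFunc("/api/tracks", func(w http.ResponseWriter, r *http.Request) {
		httpRequestsTotal.WithLabelValues(r.Method, "/api/tracks").Inc()
		w.WriteHeader(http.StatusOK)
	})

	// The /metrics endpoint is what Prometheus scrapes; this process is one
	// instance, and all replicas of it together would form a job.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}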
Capacity planning exercise for Prometheus
Capacity planning predominantly covers memory usage, disk usage, and CPU usage.
Memory usage
There are two parts to memory usage: ingestion and query. Both need to be considered in capacity planning for Prometheus.
Data ingestion: The memory requirement depends on the number of time series, the number of labels you have, and your scrape frequency, in addition to the raw ingest rate. On top of that, allow roughly 50% extra capacity as headroom for garbage collection overhead (a rough estimator is sketched at the end of this subsection).
Query: It is also important to account for query concurrency and for any complex, customized queries used to retrieve data from Prometheus.
I found this online capacity planning calculator helpful in validating the requirements.
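As a complement to such calculators, here is a rough back-of-the-envelope sketch in Go for the ingestion side. The active series count and the per-series memory cost are purely assumed figures for illustration (they are not Prometheus-published constants); only the 50% garbage collection headroom comes from the guidance above. Substitute values measured in your own environment, for example via prometheus_local_storage_memory_series.

package main

import "fmt"

func main() {
	// Illustrative assumptions; replace with measured values.
	const (
		activeSeries     = 1_000_000 // number of series held in memory (assumed)
		bytesPerSeries   = 3 * 1024  // assumed working-set cost per series, incl. labels
		gcOverheadFactor = 1.5       // ~50% extra headroom for garbage collection
	)

	ingestionBytes := float64(activeSeries*bytesPerSeries) * gcOverheadFactor
	fmt.Printf("estimated ingestion memory: %.1f GiB\n", ingestionBytes/(1<<30))
}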
Disk Usage
By default, the Prometheus server stores metrics in a local folder for a period of 15 days. Any production-ready deployment requires you to configure a persistent storage interface that can maintain historical metrics data and survive pod restarts.
Prometheus stores its on-disk time series data under the directory specified by the flag storage.local.path (the default path is ./data). The flag storage.local.retention allows you to configure the retention time for samples.
The rule of thumb that Prometheus recommends for determining the disk requirement is calculated as below:
needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
For example, for 15 days of storage: 1,296,000 (seconds) * 10,000 (samples/second) * 1.3 (bytes/sample) = 16,848,000,000 bytes, which would be approximately 16 gigabytes.
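As a quick sanity check of that arithmetic, the same rule of thumb can be expressed in a few lines of Go; the input figures simply mirror the worked example above and should be adjusted for your own retention and ingest rate.

package main

import "fmt"

func main() {
	// Inputs taken from the worked example above.
	const (
		retentionSeconds = 15 * 24 * 60 * 60 // 15 days of retention
		samplesPerSecond = 10_000            // ingested samples per second
		bytesPerSample   = 1.3               // typical on-disk cost per sample
	)

	neededBytes := retentionSeconds * samplesPerSecond * bytesPerSample
	fmt.Printf("needed_disk_space: %.0f bytes (~%.1f GiB)\n",
		neededBytes, neededBytes/(1<<30))
}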
To lower the rate of ingested samples, you can either reduce the number of time series you scrape (fewer targets or fewer series per target), or you can increase the scrape interval. However, reducing the number of series is likely more effective, due to compression of samples within a series.
More details on Prometheus storage can be found here.
Scale out
There are in fact various ways to scale and federate Prometheus. The general architecture is to run multiple sharded Prometheus servers, each scraping a subset of the targets and aggregating them up within the shard. A leader then federates the aggregates produced by the shards and aggregates them up to the job level.
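Under the hood, federation is just a scrape of each shard's /federate endpoint with a match[] selector. A real setup would configure this as a scrape job on the leader rather than hand-written code, but as a hedged sketch, the following Go snippet fetches aggregated series the same way a leader would; the shard address and the selector are made-up examples.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Hypothetical shard address; in practice this is a target in the
	// leader's scrape configuration, not application code.
	base := "http://prometheus-shard-0:9090/federate"
	params := url.Values{}
	params.Add("match[]", `{__name__=~"job:.*"}`) // pull only aggregated recording rules

	resp, err := http.Get(base + "?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // text exposition format, ready to be re-scraped
}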
An interesting read on scaling out can be found here for further information.
Grafana
The bottleneck for Grafana performance is typically the time series database backend when serving complex queries. By default, Grafana ships with SQLite, an embedded database stored in the Grafana installation location.