First things first, Prometheus is the second project to graduate from the Cloud Native Computing Foundation (CNCF), after Kubernetes. Not many projects have been able to graduate yet. Having a graduated monitoring project confirms how crucial it is to have monitoring and alerting in place, especially for distributed systems, which are pretty often the norm in Kubernetes. Now, let’s talk about Prometheus from a more technical standpoint. Prometheus pulls metrics (key/value) and stores the data as time series, allowing users to query data and alert in real time. At given intervals, Prometheus will hit targets to collect metrics, aggregate data, show data, or even alert if some thresholds are met, in spite of not having the most beautiful GUI in the world.
How Does Prometheus Integrate With Your Workloads?
Because Prometheus works by pulling metrics (or scraping metrics, as they call it), you have to instrument your applications properly.
Officially, Prometheus has client libraries for applications written in Go, Java, Ruby, and Python. Other languages like C#, Node.js, or Rust have support as well, but they’re not official (yet). And for short-lived applications like batch jobs, Prometheus can push metrics with a PushGateway. But keep in mind that the preferable way to collect data is to pull metrics from an application’s endpoint. When using client libraries, you get a lot of default metrics from your application. For example, in Go, you get the number of bytes allocated, the number of bytes used by the GC, and a lot more.
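As a rough illustration of the PushGateway path, the sketch below uses the Go client’s push package to send a single gauge from a short-lived job. The Pushgateway address (the default port 9091 on localhost), the job name, and the metric itself are assumptions made up for this example.

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// A gauge recording when this hypothetical batch job last finished.
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_batch_last_completion_timestamp_seconds",
		Help: "Unix timestamp of the last completed run of the example batch job.",
	})
	completionTime.SetToCurrentTime()

	// Push the metric once at the end of the job; Prometheus then scrapes
	// the Pushgateway instead of the short-lived process itself.
	// The URL and job name are assumptions for this example.
	if err := push.New("http://localhost:9091", "example_batch_job").
		Collector(completionTime).
		Push(); err != nil {
		log.Fatal("could not push to Pushgateway: ", err)
	}
}
```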
To start, I’m going to use an existing sample application from the client library in Go. It only emits random latency metrics while the application is running. I’m not going to explain every section of the code, but only a few sections that I think are crucial to understanding how to instrument an application. Let’s explore the code from the bottom to the top. At the bottom of the main.go file, the application is exposing a /metrics endpoint:

http.Handle("/metrics", promhttp.Handler())

This is the endpoint that prints metrics in a Prometheus format, and it uses the promhttp library for that. If you scroll up a little bit, you’ll see the go func() block that is in charge of emitting metrics in an infinite loop while the application is running.
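To make that structure concrete, here is a minimal sketch of the same shape, not the sample’s exact main.go: a goroutine observes a random value in a loop, and promhttp serves everything on /metrics. The metric name, the sleep interval, and the :8080 listen address are assumptions (the port matches the docker run mapping used below).

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A histogram holding randomly generated latency values (the name is illustrative).
var latency = promauto.NewHistogram(prometheus.HistogramOpts{
	Name: "example_random_latency_seconds",
	Help: "Randomly generated latency, for demonstration only.",
})

func main() {
	// The infinite loop that keeps emitting metrics while the application runs.
	go func() {
		for {
			latency.Observe(rand.Float64())
			time.Sleep(500 * time.Millisecond)
		}
	}()

	// The endpoint that prints the metrics in the Prometheus text format.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```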
Now, let’s compile (make sure the environment variable GOPATH is valid) and run the application with the following commands:

go get -d

Or, if you’re using Docker, run the following command:

docker run --rm -it -p 8080:8080 christianhxc/gorandom:1.0

Open a new browser window and make sure that the /metrics endpoint works.
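If you’d rather verify it from code than from a browser, a tiny Go program along these lines does the job; the URL assumes the 8080 port mapping shown above.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Fetch the metrics endpoint and print whatever the application exposes.
	// The host and port are assumptions matching the docker run mapping above.
	resp, err := http.Get("http://localhost:8080/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```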