You can output granular data points in JSON format. To do so, use `k6 run` with the `--out` flag, passing the path for your JSON file as the flag argument:

```bash
k6 run --out json=test_results.json script.js
```
```bash
docker run -it --rm \
  -v <scriptdir>:/scripts \
  -v <outputdir>:/jsonoutput \
  grafana/k6 run --out json=/jsonoutput/my_test_result.json /scripts/script.js
# Note that the docker user must have permission to write to <outputdir>!
```
If you want the result gzipped, pass a path ending in `.gz`:

```bash
k6 run --out json=test_results.gz script.js
```

```bash
docker run -it --rm \
  -v <scriptdir>:/scripts \
  -v <outputdir>:/jsonoutput \
  grafana/k6 run --out json=/jsonoutput/my_test_result.gz /scripts/script.js
# Note that the docker user must have permission to write to <outputdir>!
```
To inspect the output in real time, you can use a command like `tail -f` on the file you save:

```bash
tail -f test_results.json
```
The JSON output contains lines like the following:

```json
{"type":"Metric","data":{"type":"gauge","contains":"default","tainted":null,"thresholds":[],"submetrics":null},"metric":"vus"}
{"type":"Point","data":{"time":"2017-05-09T14:34:45.625742514+02:00","value":5,"tags":null},"metric":"vus"}
{"type":"Metric","data":{"type":"trend","contains":"time","tainted":null,"thresholds":["avg<1000"],"submetrics":null},"metric":"http_req_duration"}
{"type":"Point","data":{"time":"2017-05-09T14:34:45.239531499+02:00","value":459.865729,"tags":{"group":"::my group::json","method":"GET","status":"200","url":"https://quickpizza.grafana.com/api/tools"}},"metric":"http_req_duration"}
```
Each line either has information about a metric, or logs a data point (sample) for a metric. Lines consist of three items:

- `type` - can have the values `Metric` or `Point`, where `Metric` means the line declares a metric, and `Point` is an actual data point (sample) for a metric.
- `data` - a dictionary whose contents vary depending on the `type` above.
- `metric` - the name of the metric.
A `Metric` line has metadata about a metric. Here, `data` contains the following fields:

- `type` - the metric type (`gauge`, `rate`, `counter`, or `trend`)
- `contains` - information on the type of data collected (for example, `time` for timing metrics)
- `tainted` - whether this metric has caused a threshold to fail
- `thresholds` - any thresholds attached to this metric
- `submetrics` - any derived metrics created as a result of adding a threshold using tags
A `Point` line has actual data samples. Here, `data` contains these fields:

- `time` - the timestamp when the sample was collected
- `value` - the actual data sample; time values are in milliseconds
- `tags` - a dictionary with tag-name/tag-value pairs that can be used when filtering results data
You can use jq to process the k6 JSON output.

You can quickly create filters to return a particular metric from the JSON file:

```bash
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200")' myscript-output.json
```

And calculate an aggregated value (average, minimum, or maximum) of any metric:

```bash
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s 'add/length'
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s min
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s max
```
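The same aggregations can also be computed in a single pass over the file. A sketch in Python, mirroring the jq filter above (including its string comparison on the `status` tag); the `aggregate` function name and the inline sample points are assumptions for the example:

```python
import json

def aggregate(lines, metric="http_req_duration"):
    """Collect values for one metric from k6 JSON-lines output
    and return (average, minimum, maximum) in a single pass."""
    values = [
        obj["data"]["value"]
        for obj in map(json.loads, lines)
        if obj["type"] == "Point"
        and obj["metric"] == metric
        # Same string comparison the jq filter uses on the status tag;
        # tags may be null for some points, hence the `or {}` guard.
        and (obj["data"]["tags"] or {}).get("status", "") >= "200"
    ]
    return sum(values) / len(values), min(values), max(values)

# Inline sample points standing in for myscript-output.json.
sample = [
    '{"type":"Point","data":{"time":"t1","value":100.0,"tags":{"status":"200"}},"metric":"http_req_duration"}',
    '{"type":"Point","data":{"time":"t2","value":300.0,"tags":{"status":"200"}},"metric":"http_req_duration"}',
]
avg, lo, hi = aggregate(sample)
print(avg, lo, hi)  # 200.0 100.0 300.0
```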
For more advanced cases, check out the jq Manual.
If you want to see only the aggregated data, you can export the end-of-test summary to a JSON file. For more details, refer to the `handleSummary()` topic in the end-of-test summary docs.