Fluent Bit was designed for speed, scale, and flexibility in a very lightweight, efficient package.

Works for Logs, Metrics & Traces

Fluent Bit enables you to collect event data from any source, enrich it with filters, and send it to any destination.

Fluent Bit can read from local files and network devices, and can scrape metrics in the Prometheus format from your servers.
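As a sketch, here is what two such inputs might look like in Fluent Bit's classic configuration format; the paths, port, and tag names are illustrative placeholders:

```ini
# Tail local log files, tagging the events for later routing
[INPUT]
    Name tail
    Path /var/log/app/*.log
    Tag  app.logs

# Scrape a Prometheus-format metrics endpoint (host/port are examples)
[INPUT]
    Name prometheus_scrape
    Host 127.0.0.1
    Port 9100
    Tag  node.metrics
```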

All events are automatically tagged to determine filtering, routing, parsing, modification and output rules.
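For example, the tag assigned when an event enters the pipeline is matched against downstream rules; in this minimal sketch (tag names are illustrative), only events tagged `app.*` reach the output:

```ini
[INPUT]
    Name tail
    Path /var/log/app.log
    Tag  app.prod

# Match uses wildcard patterns against each event's tag
[OUTPUT]
    Name  stdout
    Match app.*
```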

Filters can modify data by calling an API (e.g., Kubernetes), remove extraneous fields, or add values.
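A sketch of both kinds of filter in the classic configuration format; the tag patterns, key names, and record values here are assumptions for illustration:

```ini
# Enrich events with Kubernetes metadata by querying the API server
[FILTER]
    Name  kubernetes
    Match kube.*

# Drop an extraneous field and add a static value to each record
[FILTER]
    Name       record_modifier
    Match      app.*
    Remove_key debug_info
    Record     env production
```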

Built-in reliability means that if you hit a network or server outage, you can resume from where you left off without data loss.
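This is typically done by enabling filesystem buffering so queued events survive restarts and outages; a minimal sketch (the storage path is a placeholder):

```ini
[SERVICE]
    # Directory where buffered chunks are persisted
    storage.path /var/log/flb-storage/
    storage.sync normal

[INPUT]
    Name         tail
    Path         /var/log/app.log
    # Buffer this input's events on disk rather than only in memory
    storage.type filesystem
```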

Fluent Bit can send data to a multitude of locations, including popular destinations like Splunk, Elasticsearch, OpenSearch, Kafka, and more.
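Multiple outputs can run side by side, each receiving the events whose tags it matches; a sketch with two destinations (hostnames, broker address, and tag patterns are illustrative):

```ini
# Send application logs to Elasticsearch
[OUTPUT]
    Name  es
    Match app.*
    Host  elasticsearch.internal
    Port  9200

# Send event streams to a Kafka topic
[OUTPUT]
    Name    kafka
    Match   events.*
    Brokers kafka:9092
    Topics  fluent-bit
```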

A Brief History of Fluent Bit

In 2014, the Fluentd team at Treasure Data began to see the need for a more lightweight log processor to be used in resource-constrained environments like embedded Linux and gateways. The objective was to provide all the speed, scale, and flexibility of Fluentd in a smaller, more efficient footprint. The result was Fluent Bit.

While Fluent Bit did gain rapid adoption in embedded environments, its lightweight, efficient design also made it attractive to those working across the cloud. Features to support more inputs, filters, and outputs were added, and Fluent Bit quickly became the industry standard unified logging layer across all cloud and containerized environments.

Fluent Bit has been deployed over a billion times and is trusted by some of the world’s largest and most complex organizations.