Generally, a non-buffering Buffer will be flushed to the Sink after
each logged message. This closely resembles the way traditional
loggers work in Ruby. A buffering Buffer, however, holds log messages,
fields, and tags for a longer time. Only at a specific point in time
are all buffered log messages and stored fields flushed to the Sink as
a single log event. A common scope for such an event is a single
request to a Rack app.
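The two flush strategies can be sketched in plain Ruby roughly as
follows; this is only an illustration of the concept, not Rackstash's
actual implementation:

require 'json'

# Non-buffering behavior: every message is written to the sink right
# away, resembling a traditional Ruby logger.
class ImmediateBuffer
  def initialize(sink)
    @sink = sink
  end

  def add(message)
    @sink.puts JSON.generate('message' => [message])
  end
end

# Buffering behavior: messages are collected and only written as one
# combined event when the buffer is explicitly flushed, e.g. at the
# end of a Rack request.
class EventBuffer
  def initialize(sink)
    @sink = sink
    @messages = []
  end

  def add(message)
    @messages << message
  end

  def flush
    @sink.puts JSON.generate('message' => @messages)
    @messages.clear
  end
end

With a buffer like the EventBuffer above, a Rack middleware could
create one buffer when the request starts and flush it in an ensure
block, yielding exactly one log event per request.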
Each Buffer instance can hold messages, fields, and tags. Together,
these form the log event which will eventually be written to the log
target. By adding fields and tags, you can attach highly detailed
structured information to your logs, which allows you to filter and
analyze them without having to parse complex multi-line log output.
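As a rough illustration of attaching such data, consider the snippet
below; the accessor names used here (fields, tag) are assumptions
about the eventual interface, not a fixed API:

logger.info 'Checkout completed'

# Structured fields become part of the flushed event and can be
# filtered on directly, without parsing the message text.
logger.fields['order_id'] = 'A-1234'
logger.fields['duration_ms'] = 87

# Tags mark the whole event, e.g. for coarse-grained filtering.
logger.tag 'checkout', 'billing'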
The fields follow the basic structure of plain Hashes and Arrays but
provide an interface better suited to our needs. Specifically:
* They check and enforce that the datatypes of keys and values are
strictly JSON-conforming. Only the basic datatypes are accepted;
other values are converted to one of them.
* Hashes only accept String keys.
* Basic values are always frozen.
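These rules can be sketched in plain Ruby roughly as follows; this is
only an illustration of the constraints listed above, not Rackstash's
actual implementation:

# Illustrative only: enforces the constraints described above.
def normalize_field_value(value)
  case value
  when Hash
    # Hashes only accept String keys; values are normalized recursively.
    value.each_with_object({}) do |(key, nested), hash|
      raise TypeError, 'field keys must be Strings' unless key.is_a?(String)
      hash[key.dup.freeze] = normalize_field_value(nested)
    end
  when Array
    value.map { |element| normalize_field_value(element) }
  when String
    value.dup.freeze   # basic values are always frozen
  when Integer, Float, true, false, nil
    value              # already JSON-conforming and immutable
  when Symbol
    value.to_s.freeze  # converted to a basic JSON type
  else
    raise TypeError, "#{value.class} is not a JSON-conforming field value"
  end
end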
Using methods named after the severity, users can easily log messages
at their intended severity. We support the block syntax throughout to
log expensive messages only if the log level is low enough:
logger.debug { compute_details_for_log }
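For completeness, the same severity-named methods also accept the
message directly, mirroring Ruby's standard Logger:

logger.info 'User signed in'
logger.warn 'Payment provider responded slowly'
logger.error 'Could not deliver confirmation mail'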
The Rackstash::Logger class will serve as the main public entry point
for users. It will eventually implement most of the interface of
Ruby's Logger.
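Assuming the interface matches Ruby's Logger as planned, usage could
look like this; the constructor arguments shown are an assumption and
may differ in the final implementation:

require 'rackstash'

# Assumed drop-in usage; the exact constructor arguments may change.
logger = Rackstash::Logger.new(STDOUT)

logger.info 'Application booted'
logger.warn 'Disk space is getting low'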
The idea of Rackstash is that we buffer multiple log messages along
with additional data until a combined log event is eventually flushed
to an underlying log target. This allows connected log messages and
data to be kept as a single unit from the start, without having to
painstakingly parse and reassemble them in later systems.