Each flow now has an associated executor which performs all actions
(writing events, closing, reopening) asynchronously by default using a
Concurrent::SingleThreadExecutor.
This improves the responsiveness of the application by moving the
(usually) IO-bound task of writing the logs to a background thread.
When a flow is created with `synchronous: true`, all actions run in the
calling thread as before, making the flow blocking.
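As a rough call-site sketch (the constructor arguments shown here are
assumptions; only the `synchronous: true` option is taken from the
description above):

```ruby
require 'rackstash'

# Default: actions run on a background Concurrent::SingleThreadExecutor.
flow = Rackstash::Flow.new($stdout)

# Opt-in blocking behavior: all actions run in the calling thread.
sync_flow = Rackstash::Flow.new($stdout, synchronous: true)
```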
Using the core `NotImplementedError` is not desirable since its
documentation includes:
> Note that if `fork` raises a `NotImplementedError`, then
> `respond_to?(:fork)` returns `false`.
Since we respond to the method but still raise an error, our usage of
the exception would not fulfill its documented contract.
A custom error instead of the default `NoMethodError` is still
desirable since it significantly helps with debugging. With a distinct
exception, we make it clear that the method is expected to exist and
simply wasn't implemented by a subclass, as opposed to the caller
misusing the object and calling entirely unexpected methods on it.
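A minimal sketch of the intended pattern; the error class name and the
`Adapter#write` example are illustrative assumptions:

```ruby
module Rackstash
  # Inherits from StandardError rather than reusing the core
  # NotImplementedError, avoiding its documented respond_to? semantics.
  class NotImplementedHereError < StandardError; end

  class Adapter
    # Abstract method: concrete adapters are expected to override this.
    def write(event)
      raise NotImplementedHereError, "#{self.class} must implement #write"
    end
  end
end
```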
During normal operation, the Flows will rescue all exceptions and log
them to the special error_flow. By default, we will write JSON logs to
STDERR.
The log location and format can be changed either globally by setting
(or replacing) Rackstash.error_flow, or individually for each Flow of a
Logger by setting Flow#error_flow.
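An illustrative sketch; the `Rackstash.error_flow` and `Flow#error_flow`
setters follow the description above, while the constructor arguments
are assumptions:

```ruby
# Change the error log target globally for all flows.
Rackstash.error_flow = Rackstash::Flow.new($stderr)

# Or override it for a single Flow of one Logger.
flow = Rackstash::Flow.new($stdout)
flow.error_flow = Rackstash::Flow.new(File.open('flow_errors.log', 'a'))
```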
The fields follow the basic structure of Ruby's Hashes and Arrays but
provide an interface better suited for us (see the sketch after this
list). Specifically:
* They check and enforce the data types of keys and values to be
  strictly JSON-conforming. Only the basic data types are accepted;
  other values are converted to them.
* Hashes only accept String keys.
* Basic values are always frozen.
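A sketch of the enforced behavior; the `Rackstash::Fields::Hash` class
name and the exact rejection/conversion rules are assumptions:

```ruby
fields = Rackstash::Fields::Hash.new

# Only String keys and JSON-conforming basic values are accepted.
fields['status'] = 200
fields['tags'] = ['api', 'v1']

# Basic values are frozen once stored.
fields['tags'].frozen? # => true (assumed)

# A non-String key would be rejected (or converted to a String):
# fields[:status] = 200 # => TypeError (assumed)
```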
The Rackstash::Logger class will serve as the main public entry point
for users. It will eventually implement most of the interface of Ruby's
Logger.
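Since the interface mirrors Ruby's Logger, usage should look familiar
(a sketch; the constructor arguments are assumptions):

```ruby
require 'rackstash'

logger = Rackstash::Logger.new($stdout)

# The familiar calls from Ruby's Logger are expected to work:
logger.info('Request started')
logger.warn { 'lazily built message' }
logger.debug? # severity predicate, as in ::Logger
```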
The idea of Rackstash is that we allow buffering multiple log messages
along with additional data until a combined log event is eventually
flushed to an underlying log target. This keeps related log messages
and data together as a single unit from the start, without having to
painstakingly parse and re-associate them in later systems.
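A hypothetical sketch of the buffering idea; the capture block and the
fields accessor shown here are assumptions, not a finalized API:

```ruby
logger = Rackstash::Logger.new($stdout)

logger.capture do
  logger.info 'Fetching user'
  logger.fields['user_id'] = 42
  logger.warn 'Cache miss for user 42'
end
# On block exit, all buffered messages and fields are flushed as one
# combined log event to the underlying log target.
```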