A single Sink is tied to a single Logger. It is responsible for:
* creating a log event from a Buffer on #write and sending it to each
of the Flows independently, and
* forwarding all actions to all of the defined Flows.
The Sink provides access to all configured data of the Logger that is
used for persisting the Buffers.
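Schematically, the write path might look like this minimal sketch; the
class layout and method names here are assumptions for illustration,
not necessarily Rackstash's actual interface:

```ruby
# Hypothetical sketch of a Sink's write path (names are assumed)
class Sink
  def initialize(flows)
    @flows = flows
  end

  # Build a single log event from the buffer, then hand it to each
  # configured Flow independently
  def write(buffer)
    event = buffer.to_event
    @flows.each { |flow| flow.write(event) }
  end
end
```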
A single Buffer can be sent to one or more Flows, each of which in turn
writes to a different adapter. A Flow object is responsible for
filtering, encoding, and finally persisting the event to an adapter.
Each Flow object can be configured differently, which allows a single
log event to be written to multiple targets as required.
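A minimal sketch of that filter → encode → persist pipeline, again with
assumed interfaces:

```ruby
# Hypothetical sketch of a Flow's pipeline (interfaces are assumed)
class Flow
  def initialize(adapter, encoder:, filters: [])
    @adapter = adapter
    @encoder = encoder
    @filters = filters
  end

  def write(event)
    # Each filter may transform the event or cancel it by returning nil
    @filters.each do |filter|
      event = filter.call(event)
      return unless event
    end
    # Encode the (possibly filtered) event and persist it on the adapter
    @adapter.write(@encoder.encode(event))
  end
end
```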
Previously, we counted successively equal lines. However, we want to
count the number of lines with distinct content to ensure proper
concurrency during the test.
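In Ruby terms, the test's counting roughly changes to something like
this (with a hypothetical `written_lines` array):

```ruby
# Count distinct line contents instead of runs of successive duplicates
distinct_lines = written_lines.uniq.size
```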
When setting `force: true` (the default), in both cases we now raise an
ArgumentError when setting a forbidden field and overwrite existing
fields. When setting it to `false`, we silently ignore forbidden or
existing fields in both cases.
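A hypothetical usage sketch of these semantics, assuming a fields hash
where `'message'` is a forbidden key and `'foo'` is already set:

```ruby
fields.deep_merge!({ 'foo' => 'new' })                     # force: true (default):
                                                           #   existing 'foo' is overwritten
fields.deep_merge!({ 'message' => 'boom' })                # force: true: raises
                                                           #   ArgumentError (forbidden field)
fields.deep_merge!({ 'message' => 'boom' }, force: false)  # forbidden field is ignored
fields.deep_merge!({ 'foo' => 'newer' }, force: false)     # existing 'foo' is kept
```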
We also allow a custom conflict resolution block to be passed to both
methods. In the case of deep_merge! and deep_merge, this applies to all
(potentially deeply nested) fields. Compatible objects, i.e. Hashes and
Arrays, are still always merged without calling the block.
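For conflicting scalar values, the block works much like the one
accepted by Ruby's `Hash#merge`; a hypothetical sketch:

```ruby
# Resolve conflicting scalar values by joining them; nested Hashes and
# Arrays are still merged recursively without consulting the block
merged = fields.deep_merge(other_fields) do |key, old_value, new_value|
  "#{old_value},#{new_value}"
end
```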
This encoder is useful for local consumption of the raw log stream, e.g.
during development where the developer might not care about any
additional fields. With this encoder, the log output mostly resembles a
"classic" line-based log feed.
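A minimal sketch of such an encoder; the `encode` interface and the
field names are assumptions:

```ruby
# Hypothetical encoder emitting only the timestamp and message of an
# event, dropping all other fields
class MessageEncoder
  def encode(event)
    "#{event['@timestamp']} #{event['message']}"
  end
end
```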
With this, we optimize the common case where we do have valid UTF-8
strings to begin with. If the given String is already frozen, as is
common for e.g. Hash keys, we don't even need to create a new object.
With this change, we also always return frozen strings from
`Rackstash::Fields::AbstractCollection#utf8_encode`. This avoids an
unnecessary object copy when inserting the string into a Hash and still
ensures that values are always frozen.
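The fast path can be sketched roughly like this (a simplified
assumption, not the verbatim implementation):

```ruby
def utf8_encode(str)
  if str.encoding == Encoding::UTF_8 && str.valid_encoding?
    # Common case: already valid UTF-8. If the string is frozen too
    # (e.g. a Hash key), return it without allocating a new object.
    str.frozen? ? str : str.dup.freeze
  else
    # Re-encode foreign or invalid strings, replacing any invalid or
    # undefined bytes, and return a frozen copy
    str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace).freeze
  end
end
```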
Users can provide a "callable", i.e. a proc or block, which will be
called for each written log. This allows users to handle the logs in a
custom way without having to write a full adapter.
Usually, users should still write a full adapter to handle all cases of
their wrapped log device.
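For example, assuming the Logger accepts such a callable as its log
target (a sketch of the idea, not necessarily the exact API):

```ruby
# Hypothetical: forward each written log line to a custom handler
logger = Rackstash::Logger.new(->(log) { $stdout.puts(log) })
```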
The global default (unless overridden by an adapter class) is to use
the JSON encoder since it's the most versatile and flexible option for a
logger today.
An adapter wraps a log device (e.g. a file, an underlying logger, ...)
and provides a uniform interface to write the encoded log event to its
final target.
By using a registry, we can create the required adapter instance for a
provided log device automatically.
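Conceptually, the registry maps device classes to adapter classes; a
hypothetical sketch (the real registry's interface may differ):

```ruby
# Hypothetical sketch of an adapter registry
ADAPTERS = {}

def register_adapter(device_class, adapter_class)
  ADAPTERS[device_class] = adapter_class
end

def adapter_for(device)
  # Pick the adapter registered for the first matching device class
  entry = ADAPTERS.find { |klass, _| device.is_a?(klass) }
  raise ArgumentError, "no adapter for #{device.class}" unless entry
  entry.last.new(device)
end
```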
`each_with_object` allocates an array for each key-value pair. Switching
to the slightly more verbose but less allocatey `each_pair` eliminates
these array allocations.
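Schematically, the change looks like this (with placeholder names):

```ruby
# Before: each iteration yields a [key, value] array that the block
# then destructures, allocating one array per pair
hash.each_with_object(result) do |(key, value), memo|
  memo[key] = value
end

# After: each_pair yields key and value separately to an arity-2 block,
# so no intermediate array is allocated
hash.each_pair do |key, value|
  result[key] = value
end
```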
This follows a similar change in Rails:
960de47f0e