* Add tests for polling for multiple messages
This commit adds a mock Redis interface and tests that poll the
mock interface for multiple messages at a time. These tests verify
that Flodgatt is robust against receiving incomplete messages,
including when the message break produces invalid UTF-8.
* Remove temporary files
* Implement faster buffered input
This commit implements a modified ring buffer for input from Redis.
Specifically, Flodgatt now limits the amount of data it fetches from
Redis in one syscall to 8 KiB (two pages on most systems). Flodgatt
will process all complete messages it receives from Redis and then
re-use the same buffer the next time it retrieves data. If
Flodgatt receives a partial message, it will copy the partial message
to the beginning of the buffer before its next read.
This change has little effect on Flodgatt under light load (because it
was rare for Redis to have more than 8 KiB of messages available at
any one time). However, my hope is that this will significantly
reduce memory use on the largest instances.
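The carry-over logic described above can be sketched as follows. This is a simplified illustration that assumes newline-delimited messages; `process_buffer` is a hypothetical helper, not Flodgatt's actual function:

```rust
/// Simplified sketch of the buffered-input carry-over. Flodgatt reads
/// at most 8 KiB (two pages on most systems) per syscall; this helper
/// drains every complete (newline-delimited) message from `buf`, then
/// shifts any trailing partial message to the front so the next read
/// can append to it.
fn process_buffer(buf: &mut Vec<u8>) -> Vec<String> {
    let mut msgs = Vec::new();
    let mut start = 0;
    while let Some(pos) = buf[start..].iter().position(|&b| b == b'\n') {
        // `from_utf8_lossy` tolerates a break that lands mid-codepoint,
        // producing replacement characters instead of panicking
        msgs.push(String::from_utf8_lossy(&buf[start..start + pos]).into_owned());
        start += pos + 1;
    }
    buf.drain(..start); // move the partial tail to the beginning
    msgs
}
```

A read that splits a message mid-stream simply leaves the tail in the buffer for the next pass.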
* Improve handling of backpressure
This commit alters how Flodgatt behaves if it receives enough messages
for a single client to fill that client's channel. (Because clients
regularly drain their channels, this should only occur if a single
client receives a large number of messages nearly simultaneously; this
is rare, but could occur, especially on large instances.)
Previously, Flodgatt would drop messages in the rare case when the
client's channel was full. Now, Flodgatt will pause the current Redis
poll and yield control back to the client streams, allowing the
clients to empty their channels; Flodgatt will then resume polling
Redis/sending the messages it previously received. With this approach,
Flodgatt will never drop messages.
However, the risk of this approach is that, by never dropping
messages, Flodgatt does not have any way to reduce the amount of work
it needs to do when under heavy load – it delays the work slightly,
but doesn't reduce it. What this means is that it would be
*theoretically* possible for Flodgatt to fall increasingly behind, if
it is continuously receiving more messages than it can process. Due
to how quickly Flodgatt can process messages, though, I suspect this
would only come up if an admin were running Flodgatt in a
*significantly* resource-constrained environment, but I wanted to
mention it for the sake of completeness.
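The pause-and-resume behavior can be sketched with a bounded standard-library channel; the real code uses futures/Tokio primitives, and `send_or_park` is a hypothetical helper:

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

/// Sketch of "never drop a message": if the client's bounded channel
/// is full, park the message for a retry instead of discarding it.
/// Returns `true` if the message was delivered.
fn send_or_park<T>(tx: &SyncSender<T>, msg: T, parked: &mut Option<T>) -> bool {
    match tx.try_send(msg) {
        Ok(()) => true,
        // Channel full: keep the message and yield so the client
        // stream can drain its channel before we retry
        Err(TrySendError::Full(msg)) => {
            *parked = Some(msg);
            false
        }
        // Client gone: also park (a real impl would clean up here)
        Err(TrySendError::Disconnected(msg)) => {
            *parked = Some(msg);
            false
        }
    }
}
```

The caller pauses its Redis poll whenever `send_or_park` returns `false` and retries the parked message on the next wakeup.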
This commit also adds a new /status/backpressure endpoint that
displays the current length of the Redis input buffer (which should
typically be low or 0). Like the other /status endpoints, this
endpoint is only enabled when Flodgatt is compiled with the
`stub_status` feature.
* Use monotonically increasing channel_id
Using a monotonically increasing channel_id (instead of a Uuid)
reduces memory use under load by ~3%.
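The idea can be sketched with an atomic counter; the names below are illustrative, not Flodgatt's actual identifiers:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// An 8-byte u64 counter replaces a 16-byte Uuid per channel, and
// needs no random-number generation.
static NEXT_CHANNEL_ID: AtomicU64 = AtomicU64::new(0);

/// Hand out a unique, monotonically increasing channel id.
fn new_channel_id() -> u64 {
    NEXT_CHANNEL_ID.fetch_add(1, Ordering::Relaxed)
}
```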
* Replace unbounded channels with bounded channels
This also slightly reduces memory use.
* Heap allocate Event
Wrapping the Event struct in an Arc avoids excessive copying and significantly reduces memory use.
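A minimal sketch of the pattern, using a simplified stand-in for the `Event` struct: cloning the `Arc` copies an 8-byte pointer per subscriber rather than the whole payload.

```rust
use std::sync::Arc;

// Simplified stand-in for Flodgatt's Event struct
struct Event {
    payload: String,
}

/// Share one heap-allocated Event among `n` subscribers. Each
/// `Arc::clone` bumps a reference count; the payload is never copied.
fn fan_out(event: Arc<Event>, n: usize) -> Vec<Arc<Event>> {
    (0..n).map(|_| Arc::clone(&event)).collect()
}
```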
* Implement more efficient unsubscribe strategy
* Fix various Clippy lints; bump version
* Update config defaults
* Initial [WIP] implementation
This initial implementation works to send messages but does not yet
handle unsubscribing properly.
* Implement UnboundedSender
* Implement UnboundedChannels for concurrency
This squashed commit rolls up a series of changes designed to improve
Flodgatt's public API/module boundary. Specifically, it limits the
number of items that are exported outside the top-level modules (in
some cases because they were already not needed outside their module
after the earlier code reorganization, and in some cases by using
public re-exports to expose particular items from a private module).
Similarly, this commit moves the `Event` struct to the `response`
module (maintaining privacy for the `Event`'s implementation details)
while re-exporting the `Id` struct that `Event` uses internally at the
top level.
All of these changes are made with the goal of making Flodgatt's code
easier to reason about in isolation, which should both make it easier to
maintain and make it easier for new contributors to make changes without
understanding the entire codebase. Additionally, having fewer public
modules will make documenting Flodgatt more extensively much easier.
This commit improves error handling in Flodgatt's main request-response loop, including the portions of that loop that were revised in #128.
This nearly completes the addition of more explicit error handling, but there will be a smaller part 3 to bring the handling of configuration/Postgres errors into conformity with the style here.
This squashed commit makes a fairly significant structural change to significantly reduce Flodgatt's CPU usage.
Flodgatt connects to Redis in a single (green) thread, and then creates a new thread to handle each WebSocket/SSE connection. Previously, each thread was responsible for polling the Redis thread to determine whether it had a message relevant to the connected client. I initially selected this structure both because it was simple and because it minimized memory overhead – no messages are sent to a particular thread unless they are relevant to the client connected to the thread. However, I recently ran some load tests that show this approach to have unacceptable CPU costs when 300+ clients are simultaneously connected.
Accordingly, Flodgatt now uses a different structure: the main Redis thread now announces each incoming message via a watch channel connected to every client thread, and each client thread filters out irrelevant messages. In theory, this could lead to slightly higher memory use, but tests I have run so far have not found a measurable increase. On the other hand, Flodgatt's CPU use is now an order of magnitude lower in tests I've run.
This approach does run a (very slight) risk of dropping messages under extremely heavy load: because a watch channel only stores the most recent message transmitted, if Flodgatt adds a second message before the thread can read the first message, the first message will be overwritten and never transmitted. This seems unlikely to happen in practice, and we can avoid the issue entirely by changing to a broadcast channel when we upgrade to the most recent Tokio version (see #75).
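The single-slot semantics behind that risk can be illustrated with a minimal stand-in for a watch channel (the real code uses Tokio's `watch` channel; `WatchSlot` is a hypothetical simplification):

```rust
use std::sync::{Arc, Mutex};

/// Minimal stand-in for a watch channel's single slot: a new value
/// overwrites an unread one, which is exactly the message-loss risk
/// described above. Each client thread reads `latest()` and filters
/// out messages irrelevant to its connection.
#[derive(Clone)]
struct WatchSlot<T>(Arc<Mutex<Option<T>>>);

impl<T: Clone> WatchSlot<T> {
    fn new() -> Self {
        WatchSlot(Arc::new(Mutex::new(None)))
    }
    /// Publish a value, replacing whatever was there before.
    fn send(&self, v: T) {
        *self.0.lock().unwrap() = Some(v);
    }
    /// Read the most recently published value, if any.
    fn latest(&self) -> Option<T> {
        self.0.lock().unwrap().clone()
    }
}
```

A broadcast channel (see #75) would instead queue both values, eliminating the overwrite.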
* Tweak release profile & micro optimizations
* Replace std HashMap with hashbrown::HashMap
The hashbrown::HashMap is faster than the std::collections::HashMap,
though it does not protect as well against malicious hash collisions
(e.g., in a DDoS). Since we don't expose the hashing externally,
we should switch to the faster implementation.
* Remove use of last_polled_time [WIP]
This commit stops removing subscriptions based on their last polled
time to test the impact of this change on CPU use. This is a WIP
because it does not yet remove subscriptions in any other way, which
(if deployed in production) would cause a memory leak – memory use
would grow with each new subscription and would never be reduced as
clients end their subscriptions.
* Fix bug with RedisConnection polling frequency
* Improve performance of EventStream
This commit changes the EventStream so that it no longer polls client
WebSocket connections to see if it should clean up the connection.
Instead, it cleans up the connection whenever it attempts to send a
ping or a message through the connection and receives an error
indicating that the client has disconnected. As a result, client
connections aren't cleaned up quite as quickly, but overall sys CPU
time should be dramatically improved.
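The cleanup-on-send-error pattern can be sketched with standard-library channels standing in for the WebSocket connections (`broadcast` is a hypothetical helper):

```rust
use std::sync::mpsc::Sender;

/// Instead of polling each connection to see if it is still alive,
/// drop a connection the first time a send fails (i.e., the client
/// has disconnected and its receiver is gone).
fn broadcast(clients: &mut Vec<Sender<String>>, msg: &str) {
    clients.retain(|tx| tx.send(msg.to_string()).is_ok());
}
```

Dead connections linger until the next ping or message, but no CPU is spent checking idle, healthy ones.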
* Remove empty entries from MsgQueues hashmap
Before this change, entries in the MsgQueue hashmap would remain once
added, even after their queues emptied. This could lead to a very
slight memory leak/increase, because the hashmap would grow each time
a new user connected and would never shrink. This is now fixed.
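The fix amounts to removing a queue's entry once it is drained, so the map shrinks as clients leave. A simplified sketch (`drain_and_prune` is a hypothetical helper, not the actual code):

```rust
use std::collections::{HashMap, VecDeque};

/// Drain all pending messages for `id`, then remove the now-empty
/// entry so the hashmap does not grow monotonically with connections.
fn drain_and_prune(queues: &mut HashMap<u64, VecDeque<String>>, id: u64) -> Vec<String> {
    let msgs: Vec<String> = queues
        .get_mut(&id)
        .map(|q| q.drain(..).collect())
        .unwrap_or_default();
    queues.remove(&id); // prune the empty entry
    msgs
}
```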
* Bump version and remove unused benchmark
* Add /status API endpoints [WIP]
* Finish /status API endpoints
This PR enables compiling Flodgatt with the `stub_status` feature.
When compiled with `stub_status`, Flodgatt has 3 new API endpoints:
/api/v1/streaming/status, /api/v1/streaming/status/per_timeline, and
/api/v1/streaming/status/queue. The first endpoint lists the total
number of connections, the second lists the number of connections per
timeline, and the third lists the length of the longest queue of
unsent messages (which should be low or zero when Flodgatt is
functioning normally).
Note that the number of _connections_ is not equal to the number of
connected _clients_. If a user is viewing the local timeline, they
would have at least two connections: one for the local timeline, and
one for their user timeline. Other users could have even more
connections.
I decided to make the status endpoints an option you enable at compile
time rather than at run time for three reasons:
* It keeps the API of the default version of Flodgatt 100%
  compatible with the Node server's API;
* I don't believe it's an option Flodgatt administrators will want
  to toggle on and off frequently;
* Using a compile-time option ensures that there is zero runtime
  cost when the option is disabled. (The runtime cost should be
  negligible either way, but there is value in being 100% sure that
  the cost can be eliminated.)
However, I'm happy to make it a runtime option instead if others think
that would be helpful.
* Initial work to support structured errors
* WIP error handling and RedisConn refactor
* WIP for error handling refactor
* Finish substantive work for Redis error handling
* Apply clippy lints
This very minor change moves tests from their current location in
submodules within the file under test into submodules in separate
files. This is a slight deviation from the normal Rust convention
(though only very slight, since the module structure remains the
same). However, it is justified here since the tests are fairly
verbose and including them in the same file was a bit unwieldy.
* Prevent Receiver from querying Postgres
Before this commit, the Receiver would query Postgres for the name
associated with a hashtag when it encountered one not in its cache.
This ensured that the Receiver never encountered a (valid) hashtag id
that it couldn't handle, but it caused an extra DB query and made
independent sections of the code more entangled than they need to be.
Now, we pass the relevant tag name to the Receiver when it first
starts managing a new subscription and it adds the tag name to its
cache then.
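The new flow can be sketched with a simplified stand-in for the Receiver; the struct and method names here are illustrative, not Flodgatt's actual API:

```rust
use std::collections::HashMap;

// Simplified stand-in for the Receiver's tag-name cache
struct Receiver {
    tag_cache: HashMap<i64, String>,
}

impl Receiver {
    /// The subscriber supplies the tag name up front, so the Receiver
    /// never needs to query Postgres itself.
    fn subscribe(&mut self, tag_id: i64, tag_name: &str) {
        self.tag_cache
            .entry(tag_id)
            .or_insert_with(|| tag_name.to_string());
    }

    /// Look up a tag name; every valid id was cached at subscribe time.
    fn tag_name(&self, tag_id: i64) -> Option<&str> {
        self.tag_cache.get(&tag_id).map(String::as_str)
    }
}
```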
* Improve module boundary/privacy
* Reorganize Receiver to cut RedisStream
* Fix tests for code reorganization
Note that this change includes testing some private functionality by
exposing it publicly in tests via conditional compilation. This
doesn't expose that functionality for the benchmarks, so the benchmark
tests do not currently pass without adding a few `pub use`
statements. This might be worth changing later, but benchmark tests
aren't part of our CI and it's not hard to change when we want to test
performance.
This change also cuts the benchmark tests that were benchmarking old
ways Flodgatt functioned. Those were useful for comparison purposes,
but have served their purpose – we've firmly moved away from the
older/slower approach.
* Fix Receiver for tests
Previously, if a toot's language was `null`, then it would be
permitted regardless of the user's language filter; however, if it
were `""` (the empty string) it was rejected. Now, the empty string
is treated like `null` and the toot is allowed.
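The corrected check can be sketched as a small predicate (`language_permitted` is a hypothetical helper, not the actual function):

```rust
/// A toot with no language (`None`) or an empty-string language is
/// always permitted; otherwise the language must be in the user's
/// allowed set.
fn language_permitted(toot_lang: Option<&str>, allowed: &[&str]) -> bool {
    match toot_lang {
        None | Some("") => true, // treat "" like null
        Some(lang) => allowed.contains(&lang),
    }
}
```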
When the `WHITELIST_MODE` environmental variable is set, Flodgatt
requires users to authenticate with a valid access token before
subscribing to any timelines (even those that are typically public).
Previously, the language filter was incorrectly applied to all
`update` messages. With this change, it is only applied to public
timelines (i.e., the local timeline and the federated timeline) which
is the behavior described in the docs.
* Fix panic on delete events
Previously, the code attempted to check the toot's language regardless
of event type. That caused a panic for `delete` events, which lack a
language.
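The shape of the fix, using a simplified `Event` type: match on the event variant rather than reading the language unconditionally.

```rust
// Simplified stand-in for the event types involved
enum Event {
    Update { language: Option<String> },
    Delete { id: u64 },
}

/// Only `update` events carry a language; everything else passes
/// through the filter instead of panicking on a missing field.
fn passes_language_filter(event: &Event, allowed: &[&str]) -> bool {
    match event {
        Event::Update { language: Some(lang) } => allowed.contains(&lang.as_str()),
        // `delete` events (and updates without a language) are allowed
        _ => true,
    }
}
```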
* WIP implementation of Message refactor
* Major refactor
* Refactor scope management to use enum
* Use Timeline type instead of String
* Clean up Receiver's use of Timeline
* Make debug output more readable
* Block statuses from blocking users
This commit fixes an issue where a status from A would be displayed on
B's public timelines even when A had B blocked (i.e., it would treat B
as though they were muted rather than blocked for the purpose of
public timelines).
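The corrected visibility rule can be sketched as a predicate; the function and parameter names are illustrative:

```rust
use std::collections::HashSet;

/// A's status is hidden from B's public timelines both when B blocks
/// A and when A blocks B. (Previously only the first check applied;
/// A blocking B was treated like a mere mute.)
fn status_visible(
    author: u64,
    viewer_blocks: &HashSet<u64>, // accounts the viewer has blocked
    blocks_viewer: &HashSet<u64>, // accounts that block the viewer
) -> bool {
    !viewer_blocks.contains(&author) && !blocks_viewer.contains(&author)
}
```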
* Fix bug with incorrect parsing of incoming timeline
* Disable outdated tests
* Bump version
* Read user and domain blocks from Postgres
This commit reads the blocks from pg and stores them in the User
struct; it does not yet actually filter the responses. It also does
not update the tests.
* Update tests
* Filter out toots involving blocked/muted users
* Add support for domain blocks
* Update test and bump version