* event cache: move it to the main SDK crate
* event cache: add requested Debug impl to `RoomEventCacheUpdate`
Somehow the compiler asked for it now...
* event cache: add missing copyright notice to store file
* event cache: use a weak reference to the client internally
This will make it possible to have the `Client` own an `EventCache` without a reference
cycle.
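The pattern described here can be sketched as follows. This is an illustrative reduction, not the SDK's actual types: `ClientInner`, `EventCache`, and `new_client` are hypothetical names standing in for the real structures; the point is only that the cache holds a `Weak` back-reference, so the client can own the cache without an `Arc` cycle.

```rust
use std::sync::{Arc, Weak};

// Hypothetical stand-ins for the SDK's types.
struct ClientInner {
    event_cache: EventCache,
}

struct EventCache {
    // Weak back-reference: does not keep the client alive.
    client: Weak<ClientInner>,
}

impl EventCache {
    // Upgrade on each use; returns `None` once the client is dropped.
    fn client(&self) -> Option<Arc<ClientInner>> {
        self.client.upgrade()
    }
}

// `Arc::new_cyclic` lets the cache capture a weak handle to the very
// `Arc` that will own it, without ever forming a strong cycle.
fn new_client() -> Arc<ClientInner> {
    Arc::new_cyclic(|weak| ClientInner {
        event_cache: EventCache { client: weak.clone() },
    })
}
```

Because the only strong reference flows from client to cache, dropping the last `Arc<ClientInner>` frees everything.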
* event cache: move the spawned task to its own function
* event cache: move RwLock from EventCache::inner to the only mutable field inside EventCacheInner
* event cache: have the Client own *the* event cache
The goal is to have a unique EventCache instance overall, that's available from everywhere in
the SDK, notably when creating timelines for rooms.
Because the event cache only owns a weak reference to the client, the Client can still
be dropped. In turn, this will close its sender of `RoomUpdates`, which will gracefully
close the task spawned in `EventCache::new` after it's done handling the latest updates.
* event cache: process room updates one at a time
* timeline: use the client-wide event cache instead of spawning one per timeline
This now means that we're passing the "initial events" to the event cache just before initializing
the timeline. As a result, there might be previous events that the event cache saw (coming from
sync), but now we can't decide where to put them; we drop the previously known events in that case.
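The replacement policy above can be sketched in a few lines. This is a toy model, not the SDK's code: events are plain strings and `set_initial_events` is a hypothetical helper, illustrating only that a conflicting cache is cleared rather than merged.

```rust
// Toy sketch: when initial events arrive but the cache already holds
// events from sync, we can't decide how to interleave them, so the
// previously known events are dropped in favor of the initial set.
fn set_initial_events(cache: &mut Vec<String>, initial: Vec<String>) {
    if !cache.is_empty() {
        // Can't reconcile ordering with what sync already gave us.
        cache.clear();
    }
    cache.extend(initial);
}
```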
* event cache: hey, turns out we don't even need the weak back-link
Keeping it as a separate commit, to make it easier to revert later.
* event cache: remove unused errors
Keeping the error type and results, though, because we might have store errors soonish.
* fixup! event cache: move the spawned task to its own function
* event cache: manually subscribe to the event cache
It was a bad idea to have it enabled by default, since some users may not be interested in all
updates for all rooms (e.g. bots). Instead, we make it so that the event cache must be
explicitly subscribed to, and we do it in two cases:
- in the UI `TimelineBuilder::build` method, because we're interested in updates to the current
room,
- in the `RoomListService`, because we *will* be interested in updates to room derived data (e.g.
unread counts, read receipts, and so on).
This avoids a bit of fiddling when creating the event cache in the client.
This is resilient when a parent Client is forked into a child Client, because the child
`EventCache` shares the same subscription as the parent's.
Adds a `TimelineEventTypeFilter` enum that either returns only events whose event type is included in a set of allowed event types, or all events except those whose event types are in a list of excluded event types. Clients can use it to define those lists of event types, which are then converted to ruma `TimelineEventType` values and used for filtering.
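The include/exclude shape of the filter can be sketched like this. It is a simplified model: the real filter operates on ruma's `TimelineEventType`, while here plain strings stand in for event types, and the variant and method names are illustrative.

```rust
// Simplified sketch of an include/exclude event-type filter.
enum TimelineEventTypeFilter {
    // Keep only events whose type is in the list.
    Include(Vec<String>),
    // Keep all events except those whose type is in the list.
    Exclude(Vec<String>),
}

impl TimelineEventTypeFilter {
    fn filter(&self, event_type: &str) -> bool {
        match self {
            TimelineEventTypeFilter::Include(types) => {
                types.iter().any(|t| t == event_type)
            }
            TimelineEventTypeFilter::Exclude(types) => {
                !types.iter().any(|t| t == event_type)
            }
        }
    }
}
```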
---
* matrix-sdk-ui: add `TimelineEventTypeFilter` to filter timeline events, either keeping only events of certain event types, or keeping all events except those of the excluded event types.
* ffi: add bindings to `TimelineEventTypeFilter` and `FilterTimelineEventType` so we can provide these event types from the FFI clients
* Fix format
* Fix tests
* Fix format again (using nightly toolchain)
* Remove `all_filter_...` functions as there is no right way to support it at the moment and they're just helpers
* Improve tests
* Make `TimelineEventFilterFn` public so it can be used in several layers.
* Make `TimelineEventTypeFilter` a struct in the FFI layer
* Add fns for creating a timeline with cache and event type filters
* Remove dead code
* Fix some review comments
* ffi: create new timeline initialization APIs, modify existing ones.
ui: make `Room::timeline()` return `None` if no timeline exists instead of lazily creating one.
More details:
- Added `init_timeline_with_builder` to `matrix_sdk_ui::room_list_service::Room` so a timeline can be initialized at will given a `TimelineBuilder`.
- Create `is_timeline_initialized()` fns in both the ui and ffi layers to check the status of the timeline.
- Make `matrix_sdk_ui::room_list_service::Room::timeline()` only return a timeline if it's already been initialized.
- Create FFI functions to expose these UI ones.
* Fix tests
* Fix some review comments
* Update bindings/matrix-sdk-ffi/src/room_list.rs
Signed-off-by: Benjamin Bouvier <public@benj.me>
---------
Signed-off-by: Benjamin Bouvier <public@benj.me>
Co-authored-by: Benjamin Bouvier <public@benj.me>
This was quite handy during development of the client-side computation of read receipts, to analyze some bugs.
It might not be useful to have it checked in for long, but I would love to make sure it keeps compiling
until we have more stable handling of read receipts in general.
This implements a value-based lock in the crypto stores. The intent is to use that for multiple processes to be able to make writes into the store concurrently, while still cooperating on who does them. In particular, we need this for #1928, since we may have up to two different processes trying to write into the crypto store at the same time.
## New methods in the `CryptoStore` trait
The idea is to introduce two new methods touching **custom values** in the crypto store:
- one to atomically insert a value, only if it was missing (so, not following the `upsert` semantics used in `set_custom_value`)
- one to atomically remove a custom value
Those two operations match the semantics we want:
- take the lock only if it isn't taken already == insert an entry only if it was missing
- release the lock == remove the entry
By looking at the number of lines affected by the query, we can infer whether the insert/remove happened or not, that is, if we managed to take the lock or not.
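The lock semantics above can be modeled in a few lines. This is a sketch only: a `HashMap` stands in for the store's custom values, `Store`, `insert_if_missing`, and `remove` are hypothetical names, and the returned `bool` plays the role of the affected-rows count the real queries report.

```rust
use std::collections::HashMap;

// Stand-in for the store's custom key/value table.
struct Store {
    values: HashMap<String, String>,
}

impl Store {
    // Take the lock: insert only if no holder is recorded yet.
    // Returns true iff the insert happened, i.e. we got the lock.
    fn insert_if_missing(&mut self, key: &str, holder: &str) -> bool {
        if self.values.contains_key(key) {
            false
        } else {
            self.values.insert(key.to_owned(), holder.to_owned());
            true
        }
    }

    // Release the lock: remove the entry. Returns true iff it existed.
    fn remove(&mut self, key: &str) -> bool {
        self.values.remove(key).is_some()
    }
}
```

In the real stores these two operations must be atomic at the database level; the in-memory map is only there to show the take/release mapping.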
## High-level APIs
I've also added a high-level API, `CryptoStoreLock`, that helps manage such a lock and adds some niceties on top of it:
- exponential backoff to retry attempts at acquiring the lock, when it was already taken
- attempt to gracefully recover when the lock has been taken by an app that's been killed by the environment
- full configuration of the key / value / backoff parameters
While it'd be nice to have something like a `CryptoStoreLockGuard`, it's hard to implement without being racy, because of the `async` statements that would happen in the `Drop` method (and async drop isn't stable yet).
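The backoff part of the acquisition loop can be sketched synchronously. This is illustrative, not the `CryptoStoreLock` implementation: the real code is async, sleeps between attempts, and reads the store; here `try_acquire` stands in for the atomic insert and the function name is hypothetical.

```rust
// Retry acquiring a lock with exponential backoff: double the wait
// after each failed attempt, and give up once it exceeds the cap.
fn acquire_with_backoff(
    mut try_acquire: impl FnMut() -> bool,
    mut wait_ms: u64,
    max_wait_ms: u64,
) -> bool {
    loop {
        if try_acquire() {
            return true;
        }
        if wait_ms > max_wait_ms {
            // Give up; a real implementation would surface an error.
            return false;
        }
        // A real implementation would sleep here, e.g.
        // `std::thread::sleep(Duration::from_millis(wait_ms))`.
        wait_ms *= 2;
    }
}
```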
## Test program
There's also a test program in which I shamelessly show my rudimentary unix skills; I've put it in the `labs/` directory but it could as well be a large integration test. A parent program initially fills a custom crypto store, then creates a `pipe()` for 1-way communication with a child created with `fork()`; the parent then sends commands to the child. These commands consist of reading and writing into the crypto store, using a lock. And while the child attempts to perform these operations, the parent tries hard to take the lock at the same time. This helped figure out a few issues and made sure that cross-process locking works as intended.
This patch should ideally be split into multiple smaller ones, but life
is life.
The main purpose of this patch is to fix and to test
`SlidingSyncListRequestGenerator`. This quest has led me to rename
multiple fields in `SlidingSyncList` and `SlidingSyncListBuilder`, like:
* `rooms_count` becomes `maximum_number_of_rooms`, it's not something
the _client_ counts, but it's a maximum number given by the server,
* `batch_size` becomes `full_sync_batch_size`, so that now, it
emphasizes that it's about full-sync only,
* `limit` becomes `full_sync_maximum_number_of_rooms_to_fetch`, so that
now, it also emphasizes that it's about full-sync only _and_ what the
limit is about!
This quest has continued with the renaming of the `SlidingSyncMode`
variants. After a discussion with the ElementX team, we've agreed on the
following renamings:
* `Cold` becomes `NotLoaded`,
* `Preload` becomes `Preloaded`,
* `CatchingUp` becomes `PartiallyLoaded`,
* `Live` becomes `FullyLoaded`.
Finally, _the pièce de résistance_.
In `SlidingSyncListRequestGenerator`, the `make_request_for_ranges`
method has been renamed to `build_request` and no longer takes a `&mut self`
but a simpler `&self`! It didn't make sense to me that something
that makes/builds a request was modifying `Self`. Because the type of
`SlidingSyncListRequestGenerator::ranges` has changed, all ranges now
have a consistent type (within this module at least). Consequently, this
method no longer needs to do a type conversion.
Still on the same type, the `update_state` method is much better
documented, and the off-by-one errors on range bounds are now all fixed.
The creation of new ranges happens in a new dedicated pure function,
`create_range`. It returns an `Option` because it's possible to not be
able to compute a range (previously, invalid ranges were considered
valid). It's used in the `Iterator` implementation. This `Iterator`
implementation contains a liiiittle bit more code, but at least now
we understand what it does, and it's clear what `range_start` and
`desired_size` we calculate. By the way, the `prefetch_request` method
has been removed: it's not a prefetch, it's a regular request; it was
calculating the range. But now there is `create_range`, and since it's
pure, we can unit test it!
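To make the "pure, hence testable" point concrete, here is a sketch of what a `create_range`-style helper could look like. The exact signature and clamping rules in the SDK may differ; this version only illustrates why the function returns an `Option` (an invalid range, e.g. a start past the end, yields `None`).

```rust
use std::ops::RangeInclusive;

// Illustrative pure helper: compute the next inclusive range given a
// start index, a desired size, and the maximum number of rooms.
fn create_range(
    start: u32,
    desired_size: u32,
    maximum_number_of_rooms: u32,
) -> Option<RangeInclusive<u32>> {
    // No valid range can be produced in these cases.
    if desired_size == 0 || start >= maximum_number_of_rooms {
        return None;
    }
    // Inclusive end, clamped to the last available index.
    let end = (start + desired_size - 1).min(maximum_number_of_rooms - 1);
    Some(start..=end)
}
```

Being a pure function of its inputs, it can be exercised directly in unit tests for all the edge configurations, without any `SlidingSync` machinery.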
_For dessert_, this patch adds multiple tests. This is now
possible because of the previous refactoring. First off, we test
`create_range` in many configurations. It's pretty easy to understand,
and since it's core to `SlidingSyncListRequestGenerator`, I'm pretty
happy with how it ends. Second, we test paging-, growing- and selective-
mode with a new macro: `assert_request_and_response`, which makes it
possible to “send” requests and to “receive” responses. The design of
`SlidingSync` allows mimicking requests and responses, which is great. We
don't really care about the responses here, but we care about the
requests' `ranges`, and about the `SlidingSyncList.state` after a response
is received. It also helps to see how ranges behave when the state is
`PartiallyLoaded` or `FullyLoaded`.
Add a second full-sync mode to sliding sync. The previous behavior, still the default for backwards compatibility but now named `PagingFullSync`, was to page through the list by the page size, invalidating the previous page of items. The newly added `GrowingFullSync` instead grows the window by the given `batch_size` per request, keeping its start at `0`. In the latter we might push more data over the connection and be slightly slower, but the top always stays active and thus reactive to changes.
Furthermore, the developer can now configure an optional maximum number of rooms to grow/paginate the full sync up to (so it stops before actually reaching `count`, whichever is smaller). This is already exposed via FFI, too.
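The difference between the two modes boils down to how the requested range evolves per request. The following sketch computes the range for the n-th request (0-based) with a given batch size; the function names are illustrative, not the SDK's API.

```rust
use std::ops::RangeInclusive;

// PagingFullSync: each request asks only for the next window,
// so earlier pages fall out of the requested range.
fn paging_range(n: u32, batch_size: u32) -> RangeInclusive<u32> {
    (n * batch_size)..=((n + 1) * batch_size - 1)
}

// GrowingFullSync: the window always starts at 0 and grows by
// `batch_size` per request, keeping the top of the list active.
fn growing_range(n: u32, batch_size: u32) -> RangeInclusive<u32> {
    0..=((n + 1) * batch_size - 1)
}
```

For the third request with a batch size of 20, paging asks for `40..=59` only, while growing asks for `0..=59`, which is exactly the extra-data/reactivity trade-off described above.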
- [x] add new full-sync-mode
- [x] add limited sync-up mode (similar to full sync but only to a limit `n`)
- [x] implement sync-up in jack-in