Progress notification infrastructure #1190
```rust
let library_paths: Vec<PathBuf> = library_paths.into_iter().map(PathBuf::from).collect();

let r = harp::command::r_executable(&r_home);
let package_sources = r
    .and_then(|r| PackageCache::new(r, library_paths.clone()).log_err())
    .map(|cache| Arc::new(cache) as Arc<dyn oak_sources::PackageSources>);

let library = Library::new(library_paths, package_sources);
```
I have lifted all of this up out of the constructor because:
- It is needed in the new package sources event loop
- We eventually want to get `library_paths` construction out of this constructor anyway (see the FIXME in here)
```rust
pub(crate) fn start(self) -> tokio::task::JoinSet<()> {
    let mut set = tokio::task::JoinSet::<()>::new();

    // Spawn latency-sensitive auxiliary loop. Must be first to initialise
    // global transmission channel.
    let aux = AuxiliaryState::new(self.client.clone());
    set.spawn(async move { aux.start().await });

    // Spawn main loop
    set.spawn(async move { self.main_loop().await });
```
It always felt weird to me that `GlobalState` controlled the spawning of everything. Really we have 3 event loops:
- `GlobalState::start()`
- `AuxiliaryState::start()`
- `PackageSourceState::start()`

`start_lsp()` now manages all 3 more directly.

(It was also kind of required to do it this way, because `PackageSourceState::new()` needs a few things that aren't available in `GlobalState`, so you can't start it from `GlobalState`.)
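The ownership change can be sketched like this. This is a hypothetical outline, not the PR's actual code: the `start_lsp` signature and the empty state structs are assumptions, and std threads stand in for tokio tasks to keep the sketch dependency-free. The structural point is that `start_lsp` spawns all three event loops itself instead of delegating to `GlobalState`:

```rust
use std::thread::{self, JoinHandle};

// Placeholder state types; the real ones carry channels, clients, etc.
struct GlobalState;
struct AuxiliaryState;
struct PackageSourceState;

impl GlobalState {
    fn main_loop(self) {}
}
impl AuxiliaryState {
    fn start(self) {}
}
impl PackageSourceState {
    fn start(self) {}
}

// `start_lsp` owns the spawning of all three event loops directly.
fn start_lsp(
    global: GlobalState,
    aux: AuxiliaryState,
    sources: PackageSourceState,
) -> Vec<JoinHandle<()>> {
    vec![
        // Auxiliary loop first: it initialises the global transmission channel.
        thread::spawn(move || aux.start()),
        thread::spawn(move || sources.start()),
        thread::spawn(move || global.main_loop()),
    ]
}
```

One nice side effect of this shape is that the caller gets back every handle, so shutdown and panic propagation are handled in one place.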
All AuxiliaryState support for arbitrary progress reporting lives here
Package sources event loop lives here
```diff
-None
+// We fully expect to be able to find an executable, it is required for many tasks
+panic!("Can't find R executable relative to {r_home:?}")
```
It was getting annoying for this to be optional, and it seems like critical info
```rust
/// # Reader / Writer split
///
/// The cache is split into [PackageCacheReader] and [PackageCacheWriter].
///
/// - [PackageCacheReader::get] reads from the cache and returns instantly. If the
///   package isn't in the cache, it simply returns [None].
///
/// - [PackageCacheWriter::insert] writes into the cache if the package sources aren't
///   already in the cache. Insertion can be very expensive, and takes place on a
///   dedicated tokio task to avoid blocking the main loop.
```
Most important change
```rust
/// Both carry the shared cache root lock, because both look at the cache and hand out
/// paths into it.
#[derive(Debug)]
struct PackageCacheShared {
```
This might be useful in the future as well
If the Reader needs to check whether packages are in the queue (i.e. if you just wait a moment they will be ready), then we could store some shared state about them in an `RwLock` here, since this struct is the data shared between a Reader/Writer pair. That was my eventual plan, at least.
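The eventual plan could look something like this. To be clear, this is a hypothetical sketch of the idea, not what the PR implements: the `pending` field, `enqueue`, and `is_pending` are all invented names, and `String` stands in for whatever package key the real cache uses:

```rust
use std::collections::HashSet;
use std::sync::{Arc, RwLock};

// Data shared between a Reader/Writer pair. The `pending` set is the
// speculative addition: packages the Writer has queued but not yet inserted.
#[derive(Debug, Default)]
struct PackageCacheShared {
    pending: RwLock<HashSet<String>>,
}

struct PackageCacheReader {
    shared: Arc<PackageCacheShared>,
}

struct PackageCacheWriter {
    shared: Arc<PackageCacheShared>,
}

impl PackageCacheWriter {
    // Record that a package is queued for (expensive) insertion.
    fn enqueue(&self, package: &str) {
        self.shared.pending.write().unwrap().insert(package.to_string());
    }
}

impl PackageCacheReader {
    // `true` means: a cache miss will resolve shortly, so the caller could
    // choose to wait rather than fall back.
    fn is_pending(&self, package: &str) -> bool {
        self.shared.pending.read().unwrap().contains(package)
    }
}
```

An `RwLock` fits here because reads (cache lookups) vastly outnumber writes (queue updates), so readers never block each other.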
Not completely hooked up yet - no one is sending events to the package sources event loop, as we are waiting on Salsa support before doing that.
But I feel like the two core changes compose nicely: you can treat these two features as orthogonal, where the package sources event loop is just one user of the progress reporter.
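The orthogonality can be sketched as follows. This is an illustrative outline only, not the PR's API: `ProgressEvent` and `index_packages` are invented names, and a plain mpsc channel stands in for the actual progress-reporting machinery. The package sources loop just produces progress events; any other loop could do the same:

```rust
use std::sync::mpsc;

// Invented event type: the progress reporter only sees opaque begin/end events.
#[derive(Debug, PartialEq)]
enum ProgressEvent {
    Begin(String),
    End(String),
}

// Stand-in for the package sources event loop body: it reports progress
// through the channel but knows nothing about how reports are rendered.
fn index_packages(progress: mpsc::Sender<ProgressEvent>, packages: &[&str]) {
    for pkg in packages {
        progress.send(ProgressEvent::Begin(pkg.to_string())).unwrap();
        // ... expensive source extraction would happen here ...
        progress.send(ProgressEvent::End(pkg.to_string())).unwrap();
    }
}
```

Because the producer only depends on the event type, the progress infrastructure and the package sources loop can land (and be reviewed) independently.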