
Refactor Logger to use the actor model#100

Closed
nahharris wants to merge 6 commits into kworkflow:unstable from nahharris:refactor/tokio-logger

Conversation

@nahharris
Contributor

@nahharris nahharris commented Feb 11, 2025

This is the starting point of major refactors in patch-hub, and actually a refactor of my own refactor.

Here I'm introducing a new approach for patch-hub: using the tokio runtime with the actor model to make the patch-hub code more flexible.

How it's implemented

The logger runs on a dedicated task and receives messages through a channel. The channel is wrapped in the LoggerTx struct, which is cheap to clone so other parts of our code can access the logger. This wrapper implements several methods that abstract away the low-level details of how to send the messages.

For this particular implementation, I opted to make every method non-async: they spawn a new task to run in the background, since we do not depend on the result of most of these methods (they just send a text message to be logged). For the cases where the completion of the task might matter (flush), we return a JoinHandle that can be awaited. The Logger actor itself handles messages synchronously, meaning a command sent to the actor must be fully handled before the next message can be processed. This won't be the case for every actor, but it makes sense that messages are logged in the same order they arrive.

There is also a mock version called MockLoggerTx which does nothing at all, just to be used on test scenarios.

To make LoggerTx and MockLoggerTx interchangeable, they implement LoggerActor trait, so every usage of this actor should use a generic type that implements this trait.
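As a rough illustration of this shape (not the actual patch-hub code), here is a dependency-free sketch using std threads and channels as a stand-in for tokio tasks; the command names and method set are simplified:

```rust
use std::sync::mpsc::{self, Sender};
use std::thread::{self, JoinHandle};

// Commands the logger actor understands (simplified).
enum LogCommand {
    Info(String),
    Flush,
}

// Unifying trait so the real and mock transmitters are interchangeable.
trait LoggerActor: Clone + Send {
    fn info(&self, msg: &str);
}

// Cheap-to-clone handle that sends commands down the channel.
#[derive(Clone)]
struct LoggerTx(Sender<LogCommand>);

impl LoggerActor for LoggerTx {
    fn info(&self, msg: &str) {
        // Fire-and-forget: callers never wait for the actor to process it.
        let _ = self.0.send(LogCommand::Info(msg.to_string()));
    }
}

// Mock that discards every message, for test scenarios.
#[derive(Clone)]
struct MockLoggerTx;

impl LoggerActor for MockLoggerTx {
    fn info(&self, _msg: &str) {}
}

// The actor: owns the receiving end and handles one command at a time,
// so messages are logged in the order they arrive.
fn spawn_logger() -> (LoggerTx, JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut lines = Vec::new();
        for cmd in rx {
            match cmd {
                LogCommand::Info(m) => lines.push(format!("[INFO] {m}")),
                LogCommand::Flush => break,
            }
        }
        lines
    });
    (LoggerTx(tx), handle)
}

// Generic callers only require `impl LoggerActor`.
fn some_function(logger: impl LoggerActor) {
    logger.info("called some_function");
}

fn main() {
    let (logger, handle) = spawn_logger();
    some_function(logger.clone());
    some_function(MockLoggerTx); // the mock drops the message
    let _ = logger.0.send(LogCommand::Flush);
    let lines = handle.join().unwrap();
    assert_eq!(lines, vec!["[INFO] called some_function"]);
}
```

The real implementation sends commands over a tokio channel and spawns tasks instead of threads, but the roles (actor loop, cheap-to-clone transmitter, mock, unifying trait) are the same.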

Usage Example

// Build a logger, throw it into a task, and get both the LoggerTx and a JoinHandle (to await the logger finishing)
let (logger, handle) = Logger::build("/some/path", LogLevel::Error, 0).unwrap().spawn();

// If a function needs to log something, just give it a clone of the logger
// fn some_function(logger: impl LoggerActor);
some_function(logger.clone());

// Dispatch a task that will log an info message (accepts any type that implements `Display`)
logger.info("Some information");

// Finish the logger and flush the logs to stdout
// Using the logger after a flush will cause a panic
logger.flush().await;

@nahharris nahharris force-pushed the refactor/tokio-logger branch 2 times, most recently from dd85ef2 to 7635af4 on February 12, 2025 14:29
@nahharris
Contributor Author

@davidbtadokoro I'm now confident enough that this is the approach I'll use along the way, so I'd say it's ready for a review.

But also, I have already prepared a new refactor for the config module using this model (check it here), which I think best represents what I want to do.

@nahharris nahharris force-pushed the refactor/tokio-logger branch from b75c2f1 to 0f04e11 on March 1, 2025 11:39
Collaborator

@davidbtadokoro davidbtadokoro left a comment


Hi @OJarrisonn, first of all, I really want to thank you for this marvelous work. It must have taken a lot of study and hours to come up with this, so I sincerely thank you for the effort. Also, I am deeply sorry about the huge delay...

How I Reviewed it

As this is a draft, I know there are some intended rough edges, so after a few comments, I refrained from "actually" reviewing it. In this sense, I considered the final product (after the 6 commits) and generally agreed with everything. I made a section below for the overarching request for changes, as the inline review would be unproductive, IMHO.

I've also tested it in many scenarios, and it works flawlessly!

I feel like we can move on to refining this PR in the direction you've taken! The idea is that you focus on making it ready for true review (based on the section below), and then we care for the details.

Request for Changes

  1. You should squash the last two commits (`refactor: use a trait to permit use of different actors` and `feat: create mock version of LoggerTx`) into the second one (`feat: implement logger actor`), as they are actually "overlapping each other", and a single commit would be cleaner (in terms of commit description, atomicity, and git history) and also easier to review;
  2. You've used refactor and feat preambles interchangeably. I know we are technically doing a refactor, but I do think we are actually changing the architecture so much that these fall into feat, IMHO;
  3. Rust async is notorious for being somewhat complicated, but you've managed to streamline it a lot! However, I am still a little confused about why we call the transmitter (LoggerTx) the "actor" since the actor itself is the Logger. The instantiated struct only becomes an actor when we call spawn, but I feel like it will be confusing to say that LoggerTx implements the LoggerActor trait, as the actor would be the package of transmitter plus the spawned task;
  4. On the same note as bulletpoint number 3, is the sole reason we are creating this generic trait to be able to mock the transmitter? If so, it seems overkill and not so much a good trade-off, as, for instance, we have to carry App<Logger: LoggerActor> everywhere. What about using the mockall crate for this and using straight implementations for LoggerTx? I don't know if the shift to tokio messes this up, so ignore this bulletpoint if it does;
  5. Last but certainly not least, don't forget to write descriptive commit messages! I know this is tiresome and may seem redundant considering you've done a great job documenting the codebase (it greatly helped me to understand!), but having a summary of the motivation, the change, and the implementation details is paramount for reviewing and for the git history, even more so in such a disruptive change!

Genuine Doubt

Can unit tests for these parallel entities be done similarly to how we do them in the sequential parts? I.e., could we have a function inside patch_hub::logger::tests that, say, (1) builds and spawns Logger, (2) sends commands, and (3) makes assertions?

@nahharris nahharris force-pushed the refactor/tokio-logger branch 2 times, most recently from b8933ad to 6c79f73 on March 5, 2025 18:52
@nahharris
Contributor Author

nahharris commented Mar 5, 2025

Rust async is notorious for being somewhat complicated, but you've managed to streamline it a lot! However, I am still a little confused about why we call the transmitter (LoggerTx) the "actor" since the actor itself is the Logger. The instantiated struct only becomes an actor when we call spawn, but I feel like it will be confusing to say that LoggerTx implements the LoggerActor trait, as the actor would be the package of transmitter plus the spawned task;

I agree the naming is not great at all. But from the usage perspective it is readable enough (read "not bad") to have a fn foo(logger: impl LoggerActor) instead of a fn foo(ltx: LoggerTx), since we won't use a send method but rather logging methods. So logger.info("Something") is better IMHO than ltx.info("Something") or, even worse, ltx.send(Logger::Command::Info("Something".to_string())).await

On the same note as bulletpoint number 3, is the sole reason we are creating this generic trait to be able to mock the transmitter? If so, it seems overkill and not so much a good trade-off, as, for instance, we have to carry App<Logger: LoggerActor> everywhere. What about using the mockall crate for this and using straight implementations for LoggerTx? I don't know if the shift to tokio messes this up, so ignore this bulletpoint if it does;

I've never used mockall, but I could try. I'm adopting this pattern here in advance for some specific later scenarios. For instance, I've already implemented an actor for environment variables: the default implementation makes calls to std::env, but the mock version uses a simple HashMap. This way we no longer need to use that lock in config::tests. If this level of flexibility is possible with mockall, then I can change my implementations
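To make the trade-off concrete, here is a hedged sketch of that environment-variable actor idea (names like `EnvActor` and `PATCH_HUB_LOG_LEVEL` are illustrative, not the actual implementation):

```rust
use std::collections::HashMap;
use std::env;

// Unifying interface for reading environment variables.
trait EnvActor {
    fn get(&self, key: &str) -> Option<String>;
}

// Default implementation: delegates to the real process environment.
struct RealEnv;
impl EnvActor for RealEnv {
    fn get(&self, key: &str) -> Option<String> {
        env::var(key).ok()
    }
}

// Mock: a plain HashMap, so tests never touch (or lock) the real env.
struct MockEnv(HashMap<String, String>);
impl EnvActor for MockEnv {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

// Example consumer: any code that reads the env takes a generic EnvActor.
fn log_level_from<E: EnvActor>(env: &E) -> String {
    env.get("PATCH_HUB_LOG_LEVEL").unwrap_or_else(|| "info".to_string())
}

fn main() {
    let mock = MockEnv(HashMap::from([(
        "PATCH_HUB_LOG_LEVEL".to_string(),
        "error".to_string(),
    )]));
    assert_eq!(log_level_from(&mock), "error");
    assert_eq!(log_level_from(&MockEnv(HashMap::new())), "info");
}
```

With mockall, the hand-written `MockEnv` would instead be auto-generated from the trait; the hand-rolled HashMap mock is what the trait-based approach buys without an extra dependency.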

Can unit tests for these parallel entities be done similarly to how we do them in the sequential parts? I.e., could we have a function inside patch_hub::logger::tests that, say, (1) builds and spawns Logger, (2) sends commands, and (3) makes assertions?

Absolutely. Tokio provides the #[tokio::test] macro (same usage as the built-in #[test]), which permits us to create tests for async scenarios (async fn test_something_idk)
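For illustration, that (1)-(2)-(3) shape can be sketched even without tokio, using a std thread and channel as a stand-in for the spawned task (with tokio the body would live in an `#[tokio::test] async fn` instead):

```rust
use std::sync::mpsc;
use std::thread;

// (1) build and spawn a tiny stand-in actor that collects messages,
// (2) send commands, (3) make assertions on what it handled.
fn actor_roundtrip() -> Vec<String> {
    let (tx, rx) = mpsc::channel::<String>();
    // The "actor": drains the channel until every sender is dropped.
    let handle = thread::spawn(move || rx.iter().collect::<Vec<_>>());
    tx.send("first".to_string()).unwrap();
    tx.send("second".to_string()).unwrap();
    drop(tx); // closing the channel lets the actor finish
    handle.join().unwrap()
}

fn main() {
    // Messages come back in the order they were sent.
    assert_eq!(actor_roundtrip(), vec!["first", "second"]);
}
```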

@davidbtadokoro
Collaborator

I've never used mockall, but I could try. I'm adopting this pattern here in advance for some specific later scenarios. For instance, I've already implemented an actor for environment variables: the default implementation makes calls to std::env, but the mock version uses a simple HashMap. This way we no longer need to use that lock in config::tests. If this level of flexibility is possible with mockall, then I can change my implementations

Nice! Let's leave it as is to focus on the architecture redesign. @lorenzoberts, do you think that using mockall in these parallel entities is viable? Considering the implementation you did on the LoreAPIClient and conceptually, I think it is, but I am not sure...

Absolutely. Tokio provides the #[tokio::test] macro (same usage as the built-in #[test]), which permits us to create tests for async scenarios (async fn test_something_idk)

Awesome, great news!

@davidbtadokoro
Collaborator

@OJarrisonn, thanks for the quick update, going to start reviewing the change 👍

@nahharris
Contributor Author

@davidbtadokoro could you please re-read the comment? I was editing it while you responded

I'm dumb and forgot to answer some of your points

@davidbtadokoro
Collaborator

I'm dumb and forgot to answer some of your points

C'mon man, dumb is something you certainly are not 🤣

I agree the naming is not great at all. But from the usage perspective it is readable enough (read "not bad") to have a fn foo(logger: impl LoggerActor) instead of a fn foo(ltx: LoggerTx), since we won't use a send method but rather logging methods. So logger.info("Something") is better IMHO than ltx.info("Something") or, even worse, ltx.send(Logger::Command::Info("Something".to_string())).await

True! I see how being "purist" here would be more confusing without much gain. I totally agree, so let's stick with this approach, as for all intents and purposes the callers are interacting with the actor, and it doesn't matter that its interface is just the transmitter (I think you can even argue that this enhances encapsulation). The comment was just to see if I am really grasping what is going on 😅

@nahharris
Contributor Author

Since the overall structure of this PR was approved and I am happy enough with it, I'll mark it as ready for review

@nahharris nahharris marked this pull request as ready for review March 7, 2025 13:53
davidbtadokoro pushed a commit to davidbtadokoro/patch-hub that referenced this pull request Mar 7, 2025
The project's CI relies heavily on the `checkout` action from GitHub
Actions that, essentially, deals with checking out to the correct commit
on which to run the workflow. By default, PRs check out to the temporary
merge commit resulting from the PR branch HEAD and the target branch
HEAD (`unstable` here in `patch-hub`). This logic is sound to "assure"
that the workflows run on a version of the PR that considers the latest
state of the branch it intends to merge.

However, it can cause strange side effects, such as when GitHub can
successfully merge, but the PR branch conflicts in other ways. In this
example [1][2], the PR kworkflow#100 changed the function `install_hooks()` to
expect an argument, but in `unstable`, there was a commit that
introduced a test that used `install_hooks()` without arguments. The
merge was done seamlessly, but it generated a compilation error.

In this sense, change the workflows to check out to PR branch HEAD
commit instead of merge commit [3] to avoid us guessing what happened
when debugging failing workflows. This has the downside of "wasting"
CI on "stale" PR branches, but demanding updated branches is
a maintainer's job (besides being a good practice from contributors).

[1]: https://github.com/kworkflow/patch-hub/actions/runs/13722146202/job/38391557671
[2]: https://github.com/kworkflow/patch-hub/actions/runs/13722146239/job/38379854217
[3]: https://github.com/actions/checkout/blob/main/README.md#Checkout-pull-request-HEAD-commit-instead-of-merge-commit

Signed-off-by: David Tadokoro <davidbtadokoro@usp.br>
Collaborator

@davidbtadokoro davidbtadokoro left a comment


Hi @OJarrisonn, and thanks again for the updates (and the patience). Your work here is marvelous, and it didn't make it as complicated as people say async is.

I left many comments, but most are about text meaning, formatting, and typos. There are some important comments also, so please see if you can address them.

Two things didn't fit the inline review that I would like to bring to you:

I think I could reproduce the bug once again

Even though you seemed to stabilize a lot of the problem of the flush overlapping with log and causing a panic, I spam-tested running and closing patch-hub and could reproduce it once more:

❯ cargo run
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/patch-hub`
The application panicked (crashed).
Message:  Failed to write to latest log file: Custom { kind: Other, error: "background task failed" }
Location: src/logger.rs:173

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

Making logger global

I have a doubt about something we shouldn't necessarily implement in this PR. I was wondering if we could make the logger "globally" available like you did in the past, so we don't need to clone the reference throughout the whole system. Not so much due to the cloning, as it seems really lightweight, but to avoid having to remember to pass it around; logging should (ideally) be available to all parts of patch-hub.

I have a hunch that this may seem incompatible with the direction we are moving, but I'm raising it just in case it is possible without much hassle.

src/main.rs Outdated
let args = Cli::parse();

utils::install_hooks()?;
let (logger, _) = Logger::build("/tmp", LogLevel::Info, 0).await?.spawn();
Collaborator


I understand that we first need to initialize the logger and that the value logs_path from Config isn't available yet, so that is why you use /tmp here. This is a "chicken and egg" problem: if we initialize the logger first, we can't access the user-defined log path, while initializing the configurations first won't allow us to log problems that may occur (PR #101 mentions this).

I consider that the logger should indeed come first, and maybe having a configurable log path isn't very attractive to the user. The XDG Base Directory Specification says that logs should go in ~/.local/state/<app-name> by default. I propose we do something similar to how we handle the config.json location, i.e., give precedence to an env var and default to ~/.local/state/patch-hub, while removing the related entry from Config.
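A minimal sketch of that precedence (an env var override falling back to the XDG state directory; the variable name `PATCH_HUB_LOGS_DIR` is hypothetical):

```rust
use std::path::PathBuf;

// Resolve the logs directory: a hypothetical PATCH_HUB_LOGS_DIR env var
// takes precedence; otherwise fall back to the XDG state dir
// (~/.local/state/patch-hub). The override is passed in so the logic
// stays testable without touching the real environment.
fn logs_dir(override_var: Option<String>, home: &str) -> PathBuf {
    match override_var {
        Some(dir) => PathBuf::from(dir),
        None => PathBuf::from(home).join(".local/state/patch-hub"),
    }
}

fn main() {
    assert_eq!(
        logs_dir(None, "/home/user"),
        PathBuf::from("/home/user/.local/state/patch-hub")
    );
    assert_eq!(
        logs_dir(Some("/var/log/ph".into()), "/home/user"),
        PathBuf::from("/var/log/ph")
    );
}
```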

I don't think we should address this in this PR, but please add a TODO (I will open an issue also) so we remember to enforce this in a future PR.

Collaborator


Now that I think of it, this challenge may be bigger, as how would the user define the LogLevel? Maybe:

Is initializing Config first, with any and all errors in its setup phase printed directly to the terminal with eprintln, a good idea?

Contributor Author


I don't think we should address this in this PR, but please add a TODO (I will open an issue also) so we remember to enforce this in a future PR.

I've already addressed this in the next PR (#111)

Is initializing Config first, with any and all errors in its setup phase printed directly to the terminal with eprintln, a good idea?

It might be; I'm just thinking that we would need some more complicated steps:

  1. Load config (`eprintln!` on error)
  2. Start logger with config opts
  3. Pass a LoggerTx to the config so further errors can be logged, not eprinted

I can also address this more intricate loading in the #111 PR where Config is an actor as well
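Those three steps could look roughly like this (all types and function names here are placeholders, not the real patch-hub API):

```rust
// Hypothetical three-phase startup matching the steps above.
struct Config { log_level: u8 }
struct LoggerTx;                // stand-in for the real channel handle
struct ConfigWithLogger { config: Config, logger: LoggerTx }

fn load_config() -> Result<Config, String> { Ok(Config { log_level: 2 }) }
fn start_logger(_level: u8) -> LoggerTx { LoggerTx }

fn bootstrap() -> Option<ConfigWithLogger> {
    // 1. Load config; on failure we can only eprintln, since no logger exists yet.
    let config = match load_config() {
        Ok(c) => c,
        Err(e) => {
            eprintln!("failed to load config: {e}");
            return None;
        }
    };
    // 2. Start the logger with options taken from the config.
    let logger = start_logger(config.log_level);
    // 3. Hand the LoggerTx to the config so later errors are logged, not eprinted.
    Some(ConfigWithLogger { config, logger })
}

fn main() {
    assert!(bootstrap().is_some());
}
```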

Contributor


Now that I think of it, this challenge may be bigger, as how would the user define the LogLevel? Maybe:

Is initializing Config first, with any and all errors in its setup phase printed directly to the terminal with eprintln, a good idea?

I might be missing something about the assumed implementation of user-configurable LogLevel but is there a reason this can't also be handled with environment variables or even extra configuration within the app that is shown optionally (say when DEBUG is set to true in the environment)?

Since we're having a custom in-house logging system, I feel it would be a shame if we had exceptions where we used eprintln instead 😅

Contributor Author


app::config is the module responsible for managing all settings in patch-hub. Managing the log level, either with environment variables or a config option, should be done within this module.

I don't see much of a problem if we use eprintln before our logger is set up

Collaborator


@OJarrisonn wrote:

It might be; I'm just thinking that we would need some more complicated steps:

1. Load config (`eprintln!`) on error

2. Start logger with config opts

3. Pass a LoggerTx to the config so further errors can be logged not `eprint`ed

I can also address this more intricate loading in the #111 PR where Config is an actor as well

Your idea is sound, and I confess I didn't think about logging things in Config after initialization. In any case, I think you shouldn't address this if it isn't straightforward. Let's focus on "actorizing" the logger and config first.

@ivinjabraham wrote:

I might be missing something about the assumed implementation of user-configurable LogLevel but is there a reason this can't also be handled with environment variables or even extra configuration within the app that is shown optionally (say when DEBUG is set to true in the environment)?

You are correct, we can. Actually, patch-hub gives maximum precedence to env vars when setting configs, but the thing is that (with the exception of where we save the config file itself), all other configurations should be "saveable" in a file, IMHO.

Since we're having a custom in-house logging system, I feel it would be a shame if we had exceptions where we used eprintln instead 😅

I agree with @OJarrisonn that using eprintln for this exception is OK.

davidbtadokoro pushed a commit that referenced this pull request Mar 13, 2025
davidbtadokoro pushed a commit that referenced this pull request Mar 13, 2025
Reviewed-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
@nahharris nahharris force-pushed the refactor/tokio-logger branch 2 times, most recently from 3b0029c to cab9ecc on March 13, 2025 19:14
@ivinjabraham
Contributor

I think I must have missed out on some previous discussion about architectural changes, so I apologize if these questions were already asked before. I've also never used the actor model before and just read up about it so sorry in advance for any beginner-level questions. That said, I wanted to ask a few things to better understand how this all works out as I'm currently working on documentation for the project.

  1. What motivated the custom logging system instead of established crates (tracing or log)? What advantages does it provide?
  2. Is this part of a broader actor-based architecture in patch-hub, or is it meant to be an isolated improvement? If it is the former, what are the long-term plans or goals for this shift?
  3. From what I understand, the actor model is generally used in distributed systems that need a high level of concurrency — such as game servers. Are there specific concurrency-related issues in patch-hub that this is solving?
  4. Since this is also (eventually) targeting a refactor of Config, can you elaborate on why we use the actor system to manage a config struct as well?

I suppose I'm missing out on why we need the extra concurrency requirement, since the only primary asynchronous operation is fetching content from lore.kernel.org which is network I/O-bound.

@nahharris nahharris force-pushed the refactor/tokio-logger branch from e831cb3 to 04f3c67 on March 15, 2025 12:14
@nahharris
Contributor Author

What motivated the custom logging system instead of established crates (tracing or log)? What advantages does it provide?

I don't think we have any advantages; we just didn't use a crate when we first implemented it

Is this part of a broader actor-based architecture in patch-hub, or is it meant to be an isolated improvement? If it is the former, what are the long-term plans or goals for this shift?

Yes, patch-hub will be entirely based on the actor model. About some of the goals:

  1. Patch-hub needs some architectural changes because the code was starting to get sort of messy. This will force us to reinterpret the overall code structure
  2. We need parallelism for some tasks, but the current implementation is not very thread-friendly
  3. The actor model may increase testability by allowing the decoupling of side-effectish code into separate actors

From what I understand, the actor model is generally used in distributed systems that need a high level of concurrency — such as game servers. Are there specific concurrency-related issues in patch-hub that this is solving?

It was designed for those kinds of systems, but it's not exclusive to them. When I first started thinking about this, I was willing to find some balance between OOP and FP that was concurrency/parallelism-friendly. The actor model seems like a good fit.

Since this is also (eventually) targeting a refactor of Config, can you elaborate on why we use the actor system to manage a config struct as well?

I don't know what else I could elaborate on beyond the core idea of the actor model: everything is an actor, just like "everything is an object" in OOP

@lorenzoberts
Collaborator

Hey, guys! I’ve been trying to follow all the discussions around the logger issue, but I may have missed something. If I bring up a topic that’s already been covered, please forgive me.

One of the key points @davidbtadokoro raised in the review, which I'd like to emphasize, is how we can make the logger global. This is also related to @ivinjabraham's question about why we don't use established crates.

I’ve worked with the “tracing” crate before, and while I still don't fully understand how they handle everything under the hood, I understand it’s designed to work well in a global context. I don’t think we need to delete this PR or abandon our current Logger implementation entirely, but if we’re aiming to refactor the logging logic, we should have a solid reason for sticking with our in-house solution rather than moving to an established crate like "tracing". Otherwise, I feel that we'll just be trying to reinvent the wheel.

Maybe we can also mix both worlds, but I do believe it would be a step backward to continue passing the logger as a function or struct argument whenever we need to use it.

Additionally, to gather metrics beyond logs, if we wanted to maintain the Logger pattern, we would need to create another similar module and also pass it as a parameter. However, this doesn't seem like the best solution.

@OJarrisonn, feel free to correct me if I’ve misunderstood anything. You have a better understanding of the Logger than I do, so if I’ve missed something, let me know!

@ivinjabraham
Contributor

ivinjabraham commented Mar 15, 2025

I agree with @lorenzoberts's thoughts. I was trying to document the proposed actors earlier today and it took a surprising amount of time to conceptualize — especially for a logger which isn't a functional requirement of the project. I was planning to have a doc explaining the 'why' and 'how' but without any clear advantages, I would have to skip the former.

I believe this extends to the broader push for the actor model too. While the actor model definitely has benefits (especially in terms of concurrency while mutating shared states), I think it might be over-engineering the problem at hand. Perhaps I've acquired a distaste for abstractions from working on drivers, but I don't believe these additional abstraction layers are the way to address our architectural problems.

Instead, I think a more effective path forward would be to refine our directory structure to extend from the initial MVC architecture. I believe the benefits of the actor model can be achieved with just Rust's concurrency and async model. We would have to refactor parts of the codebase to be modularised so as to fit in the new directories but I think that would be a one-time cost that will pay off in long-term maintainability.

I had planned to send a proposed structure, if it wasn't already in the works, in the GitHub discussion threads I opened earlier this month, but I think that was the wrong place entirely. As I'm currently away and travelling, I can't provide a full breakdown from my phone, but I can share a more detailed plan in the next twenty-four hours.

Add `tokio` crate as a dependency.

Signed-off-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
Here I've created the implementation of the logger actor. This actor is
responsible for logging messages.

The `Logger` struct is a simple refactor of the old
`app::logging::Logger`. It is responsible for storing messages that will
later be printed to stdout and also for providing methods such as `spawn`.

`spawn` is responsible for creating a dedicated task that will handle
all the messages received from all the transmitters.

The `LoggerTx` is a transmitter responsible for sending messages
down to the running actor. This is what should be used across the code
base to communicate with the actor.

The `MockLoggerTx` is a mock version of this logger which does nothing
at all.

Last but not least, the `LoggerActor` trait unifies the
interface of `LoggerTx` and `MockLoggerTx`. This permits swapping
the loggers in testing scenarios.

Signed-off-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
Removes the old logger implementation for cleanup purposes

Signed-off-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
Adapts the `utils` testing module to support the new async setup
function.

Signed-off-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
Adds a simple description to the `logger` field in the `App` struct

Signed-off-by: OJarrisonn <j.h.m.t.v.10@gmail.com>
@nahharris nahharris force-pushed the refactor/tokio-logger branch from 66f426b to 197ac96 on March 18, 2025 17:53
@davidbtadokoro
Collaborator

Hey @OJarrisonn, thank you so much for the work done on the actors rearchitecture! Going to close this PR and #111 for organization purposes, as we talked offline.

@ivinjabraham
Contributor

I’ve worked with the “tracing” crate before, and while I still don't fully understand how they handle everything under the hood, I understand it’s designed to work well in a global context. I don’t think we need to delete this PR or abandon our current Logger implementation entirely, but if we’re aiming to refactor the logging logic, we should have a solid reason for sticking with our in-house solution rather than moving to an established crate like "tracing". Otherwise, I feel that we'll just be trying to reinvent the wheel.

Maybe we can also mix both worlds, but I do believe it would be a step backward to continue passing the logger as a function or struct argument whenever we need to use it.

Additionally, to gather metrics beyond logs, if we wanted to maintain the Logger pattern, we would need to create another similar module and also pass it as a parameter. However, this doesn't seem like the best solution.

This is related to #47. Was a conclusion reached on this discussion? Are we continuing with the in-house logging system?
