
fix: s3 delivery after connect#264

Open
jeffersonfelixdev wants to merge 3 commits into asternic:main from zapperapi:s3-delivery

Conversation

@jeffersonfelixdev

Fix: S3 media delivery not applied until container restart

Description

When creating a user via POST /admin/users with mediaDelivery: "s3" and a valid S3 configuration (confirmed by /session/s3/test), the service did not upload message media to S3 after the user authorized the account. The /session/status endpoint correctly showed S3 as enabled with media_delivery: "s3", but incoming messages with attachments (images, audio, documents, video) were not sent to S3. Calling /session/disconnect and /session/connect did not fix the behaviour; only a full container restart did.

Root cause

The authentication middleware (authalice) populates the in-memory user cache (userinfocache) when a request is made with a token that is not yet cached. It loads the user from the database and builds a Values object to store in the cache.

The middleware’s SQL query did not select s3_enabled or media_delivery. So when a newly created user (with S3 configured) made their first authenticated request (e.g. /session/connect to authorize the session), the cache entry was written without the S3Enabled and MediaDelivery keys. The message handler in wmiau.go reads from the cache first: when the user is cached, it takes S3Enabled and MediaDelivery from there. With those keys missing (empty strings), the condition s3Config.Enabled == "true" always failed and S3 uploads were never triggered.
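The failure mode above can be sketched in a few lines. This is a simplified, hypothetical reconstruction, not the actual wuzapi code; the Values type, configFromCache, and shouldUpload are illustrative stand-ins for the cache entry and the handler's check:

```go
package main

import "fmt"

// Values mimics the cached user entry: a plain string map, so a
// missing key silently reads back as the empty string.
type Values struct{ m map[string]string }

func (v Values) Get(key string) string { return v.m[key] }

type S3Config struct{ Enabled, MediaDelivery string }

// configFromCache builds the handler's S3 config from the cache entry.
func configFromCache(v Values) S3Config {
	return S3Config{
		Enabled:       v.Get("S3Enabled"),
		MediaDelivery: v.Get("MediaDelivery"),
	}
}

// shouldUpload models the handler's gate on S3 uploads.
func shouldUpload(c S3Config) bool { return c.Enabled == "true" }

func main() {
	// Entry written by the old middleware: S3Enabled was never selected,
	// so Get returns "" and the check fails even though S3 is configured.
	stale := Values{map[string]string{"Id": "1", "Name": "alice"}}
	fmt.Println(shouldUpload(configFromCache(stale))) // false

	// Entry written after the fix: the key is present and uploads run.
	fixed := Values{map[string]string{"Id": "1", "S3Enabled": "true", "MediaDelivery": "s3"}}
	fmt.Println(shouldUpload(configFromCache(fixed))) // true
}
```

The point is that a string-map cache has no way to distinguish "S3 disabled" from "column never queried": both read back as "", which is why the bug only surfaced on the cache-fill path that skipped the columns.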

On container restart, connectOnStartup() runs and uses a different query that does include s3_enabled and media_delivery, so the cache was populated correctly and S3 worked, until the next user was created and authorized without a restart.

Solution

Update the authentication middleware in handlers.go so that when loading the user from the database it:

  1. Extends the query to select s3_enabled (as 'true'/'false') and COALESCE(media_delivery, 'base64').
  2. Adds S3Enabled and MediaDelivery to the Values struct written to userinfocache.
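The two steps can be sketched as follows. This is a simplified, hypothetical reconstruction, not the actual handlers.go code: the query constant, variable names, and the assumption that history scans as a string are all illustrative:

```go
package main

import (
	"database/sql"
	"fmt"
	"strings"
)

// Query extended with the two S3 columns; COALESCE defaults
// media_delivery to 'base64' for rows created before the column existed.
const authUserQuery = `
SELECT id, name, webhook, jid, events, proxy_url, qrcode, history,
       hmac_key IS NOT NULL AND length(hmac_key) > 0,
       CASE WHEN s3_enabled THEN 'true' ELSE 'false' END,
       COALESCE(media_delivery, 'base64')
FROM users WHERE token=$1 LIMIT 1`

// queryHasS3Columns is a tiny sanity check on the constant above.
func queryHasS3Columns() bool {
	return strings.Contains(authUserQuery, "s3_enabled") &&
		strings.Contains(authUserQuery, "media_delivery")
}

// loadUser scans one row, including the new s3Enabled and mediaDelivery
// values, and returns the map that would back the cached Values entry.
func loadUser(db *sql.DB, token string) (map[string]string, error) {
	var (
		id, name, webhook, jid, events, proxyURL, qrcode, history string
		hasHmac                                                   bool
		s3Enabled, mediaDelivery                                  string
	)
	row := db.QueryRow(authUserQuery, token)
	if err := row.Scan(&id, &name, &webhook, &jid, &events, &proxyURL,
		&qrcode, &history, &hasHmac, &s3Enabled, &mediaDelivery); err != nil {
		return nil, err
	}
	return map[string]string{
		"Id": id, "Name": name, "Webhook": webhook, "Jid": jid,
		"Events": events, "Proxy": proxyURL, "Qrcode": qrcode,
		"History": history, "HasHmac": fmt.Sprintf("%t", hasHmac),
		// The fix: the two S3 keys now reach the cache on first auth.
		"S3Enabled": s3Enabled, "MediaDelivery": mediaDelivery,
	}, nil
}

func main() {
	fmt.Println(queryHasS3Columns())
}
```

Because every scanned column also lands in the returned map, the cache-fill path and the startup path now agree on which keys exist, which is the invariant the bug violated.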

With this change, the first authenticated request after user creation (e.g. connect + QR auth) fills the cache with the correct S3 settings from the database, and the message handler uses S3 for media delivery without requiring a container restart.

Changes

  • handlers.go (authalice): include s3_enabled and media_delivery in the DB query and in the cached user Values.

This PR resolves #230.

@gemini-code-assist
Contributor

Summary of Changes

Hello @jeffersonfelixdev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug preventing S3 media delivery from functioning immediately for newly created users. Previously, S3 settings were not correctly loaded into the in-memory user cache by the authentication middleware, requiring a full container restart to activate S3 functionality. The implemented solution modifies the database query within the authentication process to include the necessary S3 enablement and media delivery preferences, ensuring that the user cache is populated with accurate information from the first authenticated request.

Highlights

  • S3 Media Delivery Fix: Resolved an issue where S3 media delivery was not applied for newly created users until a container restart, despite S3 being correctly configured.
  • Root Cause Identified: The authentication middleware's database query for user information was missing s3_enabled and media_delivery fields, leading to an incomplete userinfocache entry for new users.
  • Authentication Middleware Update: Modified the authalice middleware in handlers.go to extend the database query to include s3_enabled and media_delivery and ensure these values are correctly stored in the userinfocache.


Changelog
  • handlers.go
    • Updated the SQL query to select s3_enabled and media_delivery columns from the users table.
    • Modified the rows.Scan method to read the newly selected s3Enabled and mediaDelivery values.
    • Added S3Enabled and MediaDelivery fields to the Values struct before caching user information.

@gemini-code-assist bot left a comment
Code Review

This pull request correctly addresses a bug where S3 media delivery settings were not applied for new users until a container restart. The root cause was accurately identified as the authentication middleware's cache not being populated with the S3-related fields. The fix, which updates the SQL query in authalice and adds the new fields to the cached Values struct, is appropriate and well explained in the description. I've included a couple of suggestions to improve code maintainability.

log.Info().Msg("Looking for user information in DB")
// Checks DB from matching user and store user values in context
- rows, err := s.db.Query("SELECT id,name,webhook,jid,events,proxy_url,qrcode,history,hmac_key IS NOT NULL AND length(hmac_key) > 0 FROM users WHERE token=$1 LIMIT 1", token)
+ rows, err := s.db.Query("SELECT id,name,webhook,jid,events,proxy_url,qrcode,history,hmac_key IS NOT NULL AND length(hmac_key) > 0,CASE WHEN s3_enabled THEN 'true' ELSE 'false' END,COALESCE(media_delivery, 'base64') FROM users WHERE token=$1 LIMIT 1", token)
medium

This SQL query has become quite long, which can harm readability and maintainability. It's a good practice to extract complex queries into named constants. This improves readability and separates SQL from Go logic.

For example, you could define a constant at the package level:

const authaliceUserQuery = `
    SELECT
        id, name, webhook, jid, events, proxy_url, qrcode, history,
        hmac_key IS NOT NULL AND length(hmac_key) > 0,
        CASE WHEN s3_enabled THEN 'true' ELSE 'false' END,
        COALESCE(media_delivery, 'base64')
    FROM users
    WHERE token=$1
    LIMIT 1`

Then, you can use this constant in your query call, making the code cleaner.

Suggested change
- rows, err := s.db.Query("SELECT id,name,webhook,jid,events,proxy_url,qrcode,history,hmac_key IS NOT NULL AND length(hmac_key) > 0,CASE WHEN s3_enabled THEN 'true' ELSE 'false' END,COALESCE(media_delivery, 'base64') FROM users WHERE token=$1 LIMIT 1", token)
+ rows, err := s.db.Query(authaliceUserQuery, token)

Comment on lines 179 to 192
  v := Values{map[string]string{
  	"Id":      txtid,
  	"Name":    name,
  	"Jid":     jid,
  	"Webhook": webhook,
  	"Token":   token,
  	"Proxy":   proxy_url,
  	"Events":  events,
  	"Qrcode":  qrcode,
  	"History": historyStr,
  	"HasHmac": strconv.FormatBool(hasHmac),
+ 	"S3Enabled":     s3Enabled,
+ 	"MediaDelivery": mediaDelivery,
  }}

medium

To improve readability and long-term maintainability, it's a good practice to keep the keys in map literals sorted alphabetically. This makes it easier for developers to find specific keys when reading or modifying the code.

Suggested change
v := Values{map[string]string{
	"Id":      txtid,
	"Name":    name,
	"Jid":     jid,
	"Webhook": webhook,
	"Token":   token,
	"Proxy":   proxy_url,
	"Events":  events,
	"Qrcode":  qrcode,
	"History": historyStr,
	"HasHmac": strconv.FormatBool(hasHmac),
	"S3Enabled":     s3Enabled,
	"MediaDelivery": mediaDelivery,
}}
v := Values{map[string]string{
"Events": events,
"HasHmac": strconv.FormatBool(hasHmac),
"History": historyStr,
"Id": txtid,
"Jid": jid,
"MediaDelivery": mediaDelivery,
"Name": name,
"Proxy": proxy_url,
"Qrcode": qrcode,
"S3Enabled": s3Enabled,
"Token": token,
"Webhook": webhook,
}}

@jeffersonfelixdev
Author

After more tests I found that the S3 client was failing before uploading media. I added lazy client initialization to ensure the client is connected before uploading to S3.

It is working now.
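The lazy-initialization pattern described above can be sketched like this. It is illustrative only: the real project presumably wraps an actual S3/minio client, and the s3Client and s3Manager types here are hypothetical stand-ins:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// s3Client is a hypothetical stand-in for the real S3 client.
type s3Client struct{ endpoint string }

// s3Manager creates the client on first use instead of at startup.
type s3Manager struct {
	mu     sync.Mutex
	client *s3Client
}

// getClient connects lazily: the first upload initializes the client,
// and later calls reuse the cached instance.
func (m *s3Manager) getClient(endpoint string) (*s3Client, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.client != nil {
		return m.client, nil
	}
	if endpoint == "" {
		return nil, errors.New("s3: no endpoint configured")
	}
	// Real code would dial and authenticate here, returning any error
	// to the caller instead of failing silently mid-upload.
	m.client = &s3Client{endpoint: endpoint}
	return m.client, nil
}

// upload ensures a live client exists before attempting the transfer.
func (m *s3Manager) upload(endpoint, key string) error {
	c, err := m.getClient(endpoint)
	if err != nil {
		return err
	}
	fmt.Printf("uploading %s via %s\n", key, c.endpoint)
	return nil
}

func main() {
	var m s3Manager
	_ = m.upload("https://s3.example.com", "media/img.jpg")
}
```

The advantage over eager startup initialization is that a client built from stale or incomplete settings cannot linger: every upload path goes through getClient, so configuration fixed after startup takes effect on the next upload.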



Development

Successfully merging this pull request may close these issues.

S3 media upload not activated until full container restart (Docker Swarm)

1 participant
