Compare commits

...

72 Commits

Author SHA1 Message Date
Mathijs van Veluw
53f58b14d5 Fix admin diagnostics crash (#5886)
Better handle semver issues.
Fixes #5882
Fixes #5883
Fixes #5885

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-26 23:14:17 +02:00
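The "better handle semver issues" fix above guards the admin diagnostics against version strings that fail strict semver parsing. A minimal sketch of that kind of tolerant handling (names and logic are assumptions for illustration, not Vaultwarden's actual code):

```rust
// Hypothetical sketch: tolerantly parse version strings such as "1.33" or
// "v1.34.1-beta" that a strict semver parse would reject, so a malformed
// version string degrades to a fallback answer instead of crashing the page.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let v = v.trim().trim_start_matches('v');
    // Ignore any pre-release/build suffix like "-beta" or "+build".
    let core = v.split(|c| c == '-' || c == '+').next()?;
    let mut parts = core.split('.');
    let major = parts.next()?.parse().ok()?;
    // Missing minor/patch components default to 0 instead of failing.
    let minor = parts.next().unwrap_or("0").parse().ok()?;
    let patch = parts.next().unwrap_or("0").parse().ok()?;
    Some((major, minor, patch))
}

fn is_up_to_date(installed: &str, latest: &str) -> bool {
    match (parse_version(installed), parse_version(latest)) {
        (Some(i), Some(l)) => i >= l,
        // If either side fails to parse, report "not up to date" rather than panic.
        _ => false,
    }
}
```

Tuple comparison gives the usual major/minor/patch precedence for free.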
Mathijs van Veluw
ef7835d1b0 Sync with Upstream (#5798)
* WIP Sync with Upstream

WIP on syncing API Responses with upstream.
This is to prevent issues with new clients, and to find possible current issues with members, collections, groups, etc.

Signed-off-by: BlackDex <black.dex@gmail.com>

* More API Response fixes

- Some 2fa checks
- Some org checks
- Reconfigured the experimental flags and noted which are deprecated
  Also removed some hard-coded defaults.
- Updated crates

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add avatar color to emergency access api

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix spelling and some crate updates

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use PushId and always generate the PushId

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix clippy lints

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix several Push issues and API's

Signed-off-by: BlackDex <black.dex@gmail.com>

* Check if push_uuid is empty and generate when needed

Signed-off-by: BlackDex <black.dex@gmail.com>

* Updated some comments and removed old export format

Signed-off-by: BlackDex <black.dex@gmail.com>

* cargo update

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix bulk edit Fixes #5737

Signed-off-by: BlackDex <black.dex@gmail.com>

* Send an email when an account exists already

When you want to change your email address to one that already has an account, upstream sends an email to the existing account.
Let's do the same.

Kinda fixes #5630

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update 2fa removal/revoke email

Signed-off-by: BlackDex <black.dex@gmail.com>

* Allow col managers to import

This commit adds functionality to allow users with manage access to a collection, or managers with access to all collections, to import into an organization.

Fixes #5592

Signed-off-by: BlackDex <black.dex@gmail.com>

* Filter deprecated flags and only return active flags

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix grammar

Signed-off-by: BlackDex <black.dex@gmail.com>

* Rename Small to Compact

Signed-off-by: BlackDex <black.dex@gmail.com>

* Rebase with upstream and fix conflicts

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-26 21:00:59 +02:00
Mathijs van Veluw
3a44dc963b Update admin interface (#5880)
- Updated Backend Admin dependencies
- Fixed NTP time by using CloudFlare trace - Fixes #5797
- Fixed web-vault version check - Fixes #5761
- Fixed an issue with the css not hiding the 'Create Account' link.
  There were no braces around the function call.
  Also added a hide for newer web-vault versions as it still causes confusion with the cached /api/config.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-26 20:37:50 +02:00
Timshel
a039e227c7 web-client now requests email 2fa (#5871) 2025-05-26 20:24:30 +02:00
Timshel
602b18fdd6 Remove old client version check (#5874) 2025-05-23 17:24:03 +02:00
Timshel
bf04c64759 Toggle providers using class (#5832) 2025-05-16 18:54:44 +02:00
Timshel
2f1d86b7f1 remove Hide Business scss rules (#5855) 2025-05-16 18:53:01 +02:00
moodejb123
ff97bcfdda Add totp menu feature flag (#5850) 2025-05-16 18:52:00 +02:00
Mathijs van Veluw
73f2441d1a Update Rust, Crates and Web-Vault (#5860)
- Updated web-vault to v2025.5.0
- Updated Rust to v1.87.0
- Updated all the crates
- Replaced yubico with yubico_ng
- Fixed several new (nightly) clippy lints

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-05-16 18:49:43 +02:00
Helmut K. C. Tessarek
ad8484a2d5 feat: add ip address in logs when email 2fa token is invalid or not available (#5779)
* Update email.rs

Add ip_src on logs when email 2fa token is invalid or not available
Changes for fail2ban purposes

* Update email.rs

removed current_time

* fix: compile error

---------

Co-authored-by: setsecurity <set.ghost@gmail.com>
2025-05-12 19:27:43 +02:00
Josh
9813e480c0 Fix minimum Android version for self-host email alias feature flags (#5802)
* Fix minimum Android version for self-host email alias feature flags

* Update documentation for iOS
2025-05-12 19:25:46 +02:00
Timshel
bfe172702a Fix Yubico toggle (#5833) 2025-05-12 19:24:27 +02:00
Timshel
df42b6d6b0 Fix Yubico toggle (#5833) 2025-05-12 19:24:12 +02:00
Helmut K. C. Tessarek
2697fe8aba feat: add feature flag export-attachments (#5784) 2025-05-01 17:40:26 +02:00
Stefan Melmuk
674e444d67 respond with cipher json when deleting attachments (#5823) 2025-05-01 17:28:23 +02:00
Timshel
0d16da440d On member invite and edit access_all is not sent anymore (#5673)
* On member invite and edit access_all is not sent anymore

* Use MembershipType ordering for access_all check

Fixes #5711
2025-04-16 17:52:26 +02:00
Mathijs van Veluw
66cf179bca Updates and general fixes (#5762)
Updated all the crates to the latest version.
We can unpin mimalloc, since the musl issues have been fixed
Also fix a RUSTSEC https://osv.dev/vulnerability/RUSTSEC-2025-0023 for tokio

Fixed some clippy lints reported by nightly.

Ensure lints are also run on the macro crate.
This resulted in some lints being triggered, which I fixed.

Updated some GHA uses.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-09 21:21:10 +02:00
Mathijs van Veluw
025bb90f8f Fix debian docker building (#5752)
In previous attempts to get mysqlclient-sys to build and work I added some extra build variables.
These are not needed if you configure pkg-config correctly.
The same goes for OpenSSL btw.

This PR configures the pkg-config in the right way and allows the crates to build using the right lib paths automatically.
Because of this change, the lib/include paths were also no longer needed for some architectures, except for i386.

Also updated crates again.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-05 17:58:32 +02:00
Mathijs van Veluw
d5039d9c17 Add Docker Templates pre-commit check (#5749)
Added the same template check that runs via GitHub Actions to the pre-commit checks.
This should catch these mistakes before they are committed and pushed.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 19:02:19 +02:00
Daniel García
e7c796a660 Verify templates in CI (#5748)
* Verify templates in CI

* No need to install packages

* Remove unnecessary fetch depth
2025-04-04 18:14:19 +02:00
Daniel
bbbd2f6d15 Update Rust to 1.86.0 (#5744)
- also raise MSRV to 1.84.0

- fix `Dockerfile` template
- remove no longer needed `-vvv` argument for `cargo build`
2025-04-04 18:04:36 +02:00
Mathijs van Veluw
a2d7895586 Really fix building (#5745)
Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 16:53:09 +02:00
Mathijs van Veluw
8a0cb1137e Fix mysqlclient-sys building (#5743)
Because of some issues with mysqlclient we need to use buildtime bindgen.
This also needed some extra environment variables to point the bindgen to the correct files and correct version.

Also update some other crates.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 16:37:57 +02:00
Timshel
f960bf59bb Fix invited user registration without SMTP (#5712) 2025-04-04 13:54:28 +02:00
Mathijs van Veluw
3a1f1bae00 Update deps and web-vault (#5742)
- Updated crates
  Pinned mimalloc, since it has issues with musl
- Updated web-vault to v2025.3.1
- Updated bootstrap

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 12:18:09 +02:00
Mathijs van Veluw
8dfe805954 Update Rust, Crates and other deps (#5709)
- Updated Rust to v1.85.1
- Updated crates and fixed breaking changes
- Updated datatables js
- Updated GitHub Actions

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-03-19 17:39:53 +01:00
Mathijs van Veluw
07b869b3ef Some fixes for the new web-vault and updates (#5703)
- Added a new org policy
- Some new lint fixes
- Crate updates
  Switched to `pastey`, since `paste` is unmaintained.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-03-17 23:02:02 +01:00
Daniel García
2a18665288 Implement new registration flow with email verification (#5215)
* Implement registration with required verified email

* Optional name, emergency access, and signups_allowed

* Implement org invite, remove unneeded invite accept

* fix invitation logic for new registration flow (#5691)

* fix invitation logic for new registration flow

* clarify email_2fa_enforce_on_verified_invite

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2025-03-17 16:28:01 +01:00
Josh
71952a4ab5 Add AnonAddy/SimpleLogin self host feature flag (#5694)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-03-15 19:57:04 +01:00
Ben Sherman
994d157064 Add support for mutual-tls feature flag (#5698)
* Add support for mutual-tls feature flag

* Fix formatting

---------

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-03-15 19:46:42 +01:00
Timshel
1dae6093c9 Use subtle to replace deprecated ring::constant_time::verify_slices_are_equal (#5680) 2025-03-15 19:33:17 +01:00
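The principle behind this swap: a constant-time comparison must touch every byte regardless of where a mismatch occurs, so timing does not leak the position of the first differing byte. The `subtle` crate's `ConstantTimeEq` is the proper tool; this hand-rolled std-only sketch only illustrates the idea:

```rust
// Sketch of the idea behind subtle::ConstantTimeEq, not a replacement for it.
// A byte-wise XOR accumulator visits every byte pair, so the running time
// does not depend on where (or whether) the inputs differ.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Length is usually considered public, so an early return here is fine.
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```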
Daniel
6edceb5f7a Update Rust to 1.85.0 (#5634)
- also update the crates
2025-02-24 12:12:34 +01:00
Stefan Melmuk
359a4a088a allow CLI to upload files with truncated filenames (#5618)
due to a bug in the CLI the filename in the form-data is not complete if
the encrypted filename happens to contain a /
2025-02-19 10:40:59 +01:00
Mathijs van Veluw
3baffeee9a Fix db issues with Option<> values and upd crates (#5594)
Some tables were lacking an option to convert Option<> to NULL.
This commit will fix that.

Also updated the crates to the latest version available.
2025-02-14 17:58:57 +01:00
Daniel
d5c353427d Update crates & fix CVE-2025-25188 (#5576) 2025-02-12 10:21:12 +01:00
Mathijs van Veluw
1f868b8d22 Show assigned collections on member edit (#5556)
Because we were using the `has_full_access()` function, we did not return assigned collections for an owner/admin even if they did not have the `access_all` flag set.
This commit changes that to use the `access_all` flag instead, and returns assigned collections too.

While saving a member with assigned collections would still save those rights, and they were visible in the collection management view, they were not shown on the member itself.
So it did work, but was not visible.

Fixes #5554
Fixes #5555

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-02-07 22:33:11 +01:00
Mathijs van Veluw
8d1df08b81 Fix icon redirect not working on desktop (#5536)
* Fix icon redirect not working on desktop

We also need to exclude the header in case we do an external_icon call.

Fixes #5535

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add informational comments to the icon_external function

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix spelling/grammar

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-02-04 13:20:32 +01:00
Stefan Melmuk
3b6bccde97 add bulk-access endpoint for collections (#5542) 2025-02-04 09:42:02 +01:00
Daniel
d2b36642a6 Update crates & fix CVE-2025-24898 (#5538) 2025-02-04 01:01:06 +01:00
Mathijs van Veluw
a02fb0fd24 Update workflows and enhance security (#5537)
This commit updates the workflow files and also fixes some security issues which were reported by using zizmor https://github.com/woodruffw/zizmor

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-02-04 00:33:43 +01:00
Daniel
1109293992 Update Rust to 1.84.1 (#5508)
- also update the crates
- add necessary modifications for `rand` upgrade
- `small_rng` is enabled by default now
2025-02-01 13:16:32 +01:00
Mathijs van Veluw
3c29f82974 Allow all manager to create collections again (#5488)
* Allow all manager to create collections again

This commit checks if the member is a manager or better, and if so allows it to createCollections.
We actually check if it is less than a Manager, since `limitCollectionCreation` should be set to false to allow it and true to prevent it.

This should fix an issue discussed in #5484

Signed-off-by: BlackDex <black.dex@gmail.com>
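The "manager or better" check described above can be sketched with an enum whose derived ordering encodes the role hierarchy. Variant names and their order are illustrative assumptions, not the actual Vaultwarden types:

```rust
// Illustrative only: a Rust enum derives PartialOrd/Ord from declaration
// order, so "Manager or better" becomes a plain comparison. The real
// Vaultwarden membership type and its ordering may differ.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum MembershipType {
    User,
    Manager,
    Admin,
    Owner,
}

fn can_create_collections(member: MembershipType, limit_collection_creation: bool) -> bool {
    // Anyone below Manager is blocked when the org limits collection creation;
    // with the limit disabled, everyone may create collections.
    !(limit_collection_creation && member < MembershipType::Manager)
}
```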

* Fix some small issues

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-29 20:41:31 +01:00
Roman Ratiner
663f88e717 Fix Duo Field Names for Web Client (#5491)
* Fix Duo Field Names for Web Client

* Fix Api Validation

* Rename Duo Labels In Admin
2025-01-29 12:00:14 +01:00
Stefan Melmuk
a3dccee243 add and use new event types (#5482)
* add additional event_types

* use correct event_type when leaving an org

* use correct event type when deleting a user

* also correctly log auth requests

* add correct membership info to event log
2025-01-28 11:25:53 +01:00
Mathijs van Veluw
c0ebe0d982 Fix passwordRevisionDate format (#5477) 2025-01-27 20:16:59 +01:00
Win‮8201‭Linux‬
1b46c80389 Make sure the icons are displayed correctly in desktop clients (#5469) 2025-01-27 18:29:24 +01:00
Stefan Melmuk
2c549984c0 let invited members access OrgMemberHeaders (#5461) 2025-01-27 18:27:11 +01:00
Stefan Melmuk
ecab7a50ea hide already approved (or declined) devices (#5467) 2025-01-27 18:21:22 +01:00
Stefan Melmuk
2903a3a13a only validate SMTP_FROM if necessary (#5442) 2025-01-25 05:46:43 +01:00
Mathijs van Veluw
952992c85b Org fixes (#5438)
* Security fixes for admin and sendmail

Because the Vaultwarden Admin Backend endpoints did not validate the Content-Type during a request, it was possible to update settings via CSRF. But this was only possible if there was no `ADMIN_TOKEN` set at all. To make sure these environments are also safe, I added the needed content-type checks to the functions.
This could require some users who have scripts which use cURL, for example, to adjust their commands to provide the correct headers.

By using a crafted favicon and having access to the Admin Backend, an attacker could run custom commands on the host/container where Vaultwarden is running. The main issue here is that we allowed the sendmail binary name/path to be changed. To mitigate this we removed this configuration item, and only the `sendmail` binary name can be used.
This could cause some issues where the `sendmail` binary is not in the `$PATH` and thus not able to be started. In these cases the admins should make sure `$PATH` is set correctly or create a custom shell script or symlink at a location which is in the `$PATH`.

Added an extra security header and adjusted the CSP to be more strict by setting `default-src` to `none` and added the needed missing specific policies.

Also created a general email validation function which does some more checking to catch invalid email address not found by the email_address crate.

Signed-off-by: BlackDex <black.dex@gmail.com>
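A hedged sketch of the kind of extra checks such a general email validation helper can layer on top of a parsing crate (hypothetical code, not Vaultwarden's implementation):

```rust
// Hypothetical helper: structural checks that catch addresses a lenient
// parser may accept, such as doubled '@', a dotless domain, or embedded
// whitespace. Not the actual Vaultwarden function.
fn is_valid_email(addr: &str) -> bool {
    let mut parts = addr.split('@');
    let (local, domain) = match (parts.next(), parts.next(), parts.next()) {
        // Exactly one '@' yields exactly two parts.
        (Some(l), Some(d), None) => (l, d),
        _ => return false, // zero or more than one '@'
    };
    !local.is_empty()
        && domain.contains('.')
        && !domain.starts_with('.')
        && !domain.ends_with('.')
        && !addr.chars().any(char::is_whitespace)
}
```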

* Fix security issue with organizationId validation

Because of an invalid check/validation of the OrganizationId, which is usually located in the path but sometimes provided as a URL parameter, the parameter overruled the path ID during the Guard checks.
This resulted in someone being able to execute commands as an Admin or Owner of the OrganizationId fetched from the parameter, while the API endpoints then used the OrganizationId located in the path instead.

This commit fixes the extraction of the OrganizationId in the Guard and also added some extra validations of this OrgId in several functions.

Also added an extra `OrgMemberHeaders` which can be used to only allow access to organization endpoints which should only be accessible by members of that org.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update server version in config endpoint

Updated the server version reported to the clients to `2025.1.0`.
This should make Vaultwarden future proof for the newer clients released by Bitwarden.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix and adjust build workflow

The build workflow had an issue with some `if` checks.
For one they had two `$` signs, and it is not recommended to use `always()` since canceling a workflow does not cancel those calls.
Using `!cancelled()` is the preferred way.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update crates

Signed-off-by: BlackDex <black.dex@gmail.com>

* Allow sendmail to be configurable

This reverts a previous change which removed the sendmail to be configurable.
We now set the config to be read-only, and omit all read-only values from being stored during a save action from the admin interface.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add more org_id checks

Added more org_id checks to all functions which use the org_id in their path.

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-25 01:32:09 +01:00
Stefan Melmuk
c0be36a17f update web-vault to v2025.1.1 and add /api/devices (#5422)
* add /api/devices endpoints

* load pending device requests

* order pending authrequests by creation date

* update web-vault to v2025.1.1
2025-01-23 12:30:55 +01:00
Mathijs van Veluw
d1dee04615 Add manage role for collections and groups (#5386)
* Add manage role for collections and groups

This commit will add the manage role/column to collections and groups.
We need this to allow users who are part of a collection, either directly or via groups, to delete ciphers.
Without this, they are only able to either edit or view them when using new clients, since these check the manage role.

Still trying to keep it compatible with previous versions and able to revert to an older Vaultwarden version and the `access_all` feature of the older installations.
In a future version we should really check and fix these rights and create some kind of migration step to also remove the `access_all` feature and convert that to a `manage` option.
But this commit at least creates the base for this already.

This should resolve #5367

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix an issue with access_all

If owners or admins did not have the `access_all` flag set, for instance because they do not want to see all collections in the password manager view, they did not see any collections at all anymore.

This should fix that they are still able to view all the collections and have access to it.

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-21 23:33:41 +01:00
Stefan Melmuk
ef2695de0c improve admin invite (#5403)
* check for admin invite

* refactor the invitation logic

* cleanup check for undefined token

* prevent wrong user from accepting invitation
2025-01-20 20:21:44 +01:00
Daniel
29f2b433f0 Simplify container image attestation (#5387) 2025-01-13 19:16:10 +01:00
Mathijs van Veluw
07f80346b4 Fix version detection on bake (#5382) 2025-01-11 11:54:38 +01:00
Mathijs van Veluw
4f68eafa3e Add Attestations for containers and artifacts (#5378)
* Add Attestations for containers and artifacts

This commit will add attestation actions to sign the containers and binaries which can be verified via the gh cli.
https://cli.github.com/manual/gh_attestation_verify

The binaries from both Alpine and Debian based images are extracted and attested so that you can verify the binaries of all the containers.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust attest to use globbing

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-10 21:32:38 +01:00
Integral
327d369188 refactor: replace static with const for global constants (#5260)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-01-10 21:06:38 +01:00
Mathijs van Veluw
ca7483df85 Fix an issue with login with device (#5379)
During the refactoring done in #5320 a bug slipped through which changed a uuid.
This commit fixes this, and also makes some vars pass by reference.

Fixes #5377

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-10 20:37:23 +01:00
Helmut K. C. Tessarek
16b6d2a71e build: raise msrv (1.83.0) rust toolchain (1.84.0) (#5374)
* build: raise msrv (1.83.0) rust toolchain (1.84.0)

* build: also update docker images
2025-01-10 20:34:48 +01:00
Stefan Melmuk
871a3f214a rename membership and adopt newtype pattern (#5320)
* rename membership

rename UserOrganization to Membership to clarify the relation
and prevent confusion whether something refers to a member(ship) or user

* use newtype pattern

* implement custom derive macro IdFromParam

* add UuidFromParam macro for UUIDs

* add macros to Docker build

Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>

---------

Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>
2025-01-09 18:37:23 +01:00
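The newtype pattern adopted in the membership rename above can be sketched as follows: wrapping plain strings in distinct types lets the compiler reject a MembershipId where a UserId is expected. Type names here are illustrative, not the exact Vaultwarden definitions:

```rust
// Minimal newtype-pattern sketch (illustrative names): two id types that
// both wrap String but are not interchangeable at the type level.
#[derive(Debug, Clone, PartialEq, Eq)]
struct UserId(String);

#[derive(Debug, Clone, PartialEq, Eq)]
struct MembershipId(String);

fn lookup_member(id: &MembershipId) -> String {
    // Taking &MembershipId instead of &str means a UserId (or any other
    // string) cannot be passed here by mistake; the compiler rejects it.
    format!("membership:{}", id.0)
}
```

This is the whole point of the refactor: previously-confusable user and membership UUIDs become distinct types, and derive macros like `UuidFromParam` can then be generated per newtype.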
Mathijs van Veluw
10d12676cf Allow building with Rust v1.84.0 or newer (#5371) 2025-01-09 12:33:02 +01:00
Mathijs van Veluw
dec3a9603a Update crates and web-vault to v2025.1.0 (#5368)
- Updated the web-vault to use v2025.1.0 (pre-release)
- Updated crates

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-08 18:14:08 +01:00
Mathijs van Veluw
86aaf27659 Prevent new users/members to be stored in db when invite fails (#5350)
* Prevent new users/members when invite fails

Currently, when a (new) user is invited as a member to an org and SMTP is enabled, but sending the invite fails, the user is still created.
They just never receive a mail, and admins/owners need to re-invite the member again.
Since the dialog window stays on top when this fails, it invites the user to try again, but that will fail, mentioning the user is already a member.

To prevent this weird flow, this commit will delete the user, invite and member if sending the mail failed.
This allows the inviter to try again if there was a temporary hiccup for example, or contact the server admin and does not leave stray users/members around.

Fixes #5349

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust deleting records

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-08 18:13:45 +01:00
Stefan Melmuk
bc913d1156 fix manager role in admin users overview (#5359)
due to the hack the returned type has changed
2025-01-07 12:47:37 +01:00
Mathijs van Veluw
ef4bff09eb Fix issue with key-rotate (#5348)
The new web-vault seems to call an extra endpoint, which looks like it is only used when passkeys can be used for login.
Since we do not support this (yet), we can just return an empty data object.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 23:00:05 +01:00
Mathijs van Veluw
4816f77fd7 Add partial role support for manager only using web-vault v2024.12.0 (#5219)
* Add partial role support for manager only

- Add the custom role which replaces the manager role
- Added mini-details endpoint used by v2024.11.1

These changes try to add the custom role in such a way that it stays compatible with the older manager role.
It will convert a manager role into a custom role, and if a manager has `access-all` rights, it will enable the correct custom roles.
Upon saving it will convert these back to the old format.

What this does is make sure you are able to revert to an older version of Vaultwarden without issues.
This way we can support newer web-vault's and still be compatible with a previous Vaultwarden version if needed.

In the future this needs to be changed to full role support though.

Fixed the 2FA hide CSS since the order of options has changed

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide passkey login

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide create account

Signed-off-by: BlackDex <black.dex@gmail.com>

* Small changes for v2024.12.0

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide create account link

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add pre-release web-vault

Signed-off-by: BlackDex <black.dex@gmail.com>

* Rename function to mention swapping uuid's

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:31:59 +01:00
Mathijs van Veluw
dfd9e65396 Refactor the uri match fix and fix ssh-key sync (#5339)
* Refactor the uri match change

Refactored the uri match fix to also convert numbers within a string to an int.
If it fails it will be null.

Signed-off-by: BlackDex <black.dex@gmail.com>
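The coercion described above, a number inside a string becomes an int and anything unparsable becomes null, can be sketched like this (assumed shape, not the actual code):

```rust
// Sketch of the uri-match normalization: the match value may arrive as a
// number or as a string holding a number; coerce strings to an integer and
// fall back to None (serialized as JSON null) when parsing fails.
fn normalize_match(value: Option<&str>) -> Option<i32> {
    value.and_then(|s| s.trim().parse::<i32>().ok())
}
```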

* Fix ssh-key sync issues

If any of the mandatory ssh-key json data values are not a string or are an empty string, this will break the mobile clients.
This commit fixes this by checking if any of the values are missing or invalid and converts the json data to `null`.
It will ensure the clients can sync and show the vault.

Fixes #5343
Fixes #5322

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:11:46 +01:00
Mathijs van Veluw
b1481c7c1a Update crates and GHA (#5346)
- Updated crates to the latest version
- Updated GitHub Actions to the latest version

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:02:15 +01:00
Stefan Melmuk
d9e0d68f20 fix group issue in send_invite (#5321) 2024-12-31 13:28:19 +01:00
Timshel
08183fc999 Add TOTP delete endpoint (#5327) 2024-12-30 16:57:52 +01:00
Mathijs van Veluw
d9b043d32c Fix issues when uri match is a string (#5332) 2024-12-29 21:26:03 +01:00
Ephemera42
ed4ad67e73 Add inline-menu-positioning-improvements feature flag (#5313) 2024-12-20 17:49:46 +01:00
96 changed files with 7185 additions and 4127 deletions

@@ -5,6 +5,7 @@
 !.git
 !docker/healthcheck.sh
 !docker/start.sh
+!macros
 !migrations
 !src

@@ -229,7 +229,8 @@
 # SIGNUPS_ALLOWED=true
 ## Controls if new users need to verify their email address upon registration
-## Note that setting this option to true prevents logins until the email address has been verified!
+## On new client versions, this will require the user to verify their email at signup time.
+## On older clients, it will require the user to verify their email before they can log in.
 ## The welcome email will include a verification link, and login attempts will periodically
 ## trigger another verification email to be sent.
 # SIGNUPS_VERIFY=false
@@ -343,15 +344,17 @@
 ## Client Settings
 ## Enable experimental feature flags for clients.
 ## This is a comma-separated list of flags, e.g. "flag1,flag2,flag3".
+## Note that clients cache the /api/config endpoint for about 1 hour and it could take some time before they are enabled or disabled!
 ##
 ## The following flags are available:
-## - "autofill-overlay": Add an overlay menu to form fields for quick access to credentials.
-## - "autofill-v2": Use the new autofill implementation.
-## - "browser-fileless-import": Directly import credentials from other providers without a file.
-## - "extension-refresh": Temporarily enable the new extension design until general availability (should be used with the beta Chrome extension)
-## - "fido2-vault-credentials": Enable the use of FIDO2 security keys as second factor.
-## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
+## - "inline-menu-positioning-improvements": Enable the use of inline menu password generator and identity suggestions in the browser extension.
+## - "inline-menu-totp": Enable the use of inline menu TOTP codes in the browser extension.
 ## - "ssh-agent": Enable SSH agent support on Desktop. (Needs desktop >=2024.12.0)
+## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
+## - "export-attachments": Enable support for exporting attachments (Clients >=2025.4.0)
+## - "anon-addy-self-host-alias": Enable configuring self-hosted Anon Addy alias generator. (Needs Android >=2025.3.0, iOS >=2025.4.0)
+## - "simple-login-self-host-alias": Enable configuring self-hosted Simple Login alias generator. (Needs Android >=2025.3.0, iOS >=2025.4.0)
+## - "mutual-tls": Enable the use of mutual TLS on Android (Client >= 2025.2.0)
 # EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials

 ## Require new device emails. When a user logs in an email is required to be sent.
@@ -485,7 +488,7 @@
 ## Maximum attempts before an email token is reset and a new email will need to be sent.
 # EMAIL_ATTEMPTS_LIMIT=3
 ##
-## Setup email 2FA regardless of any organization policy
+## Setup email 2FA on registration regardless of any organization policy
 # EMAIL_2FA_ENFORCE_ON_VERIFIED_INVITE=false
 ## Automatically setup email 2FA as fallback provider when needed
 # EMAIL_2FA_AUTO_FALLBACK=false

.github/CODEOWNERS vendored (2 changes)

@@ -1,3 +1,5 @@
 /.github @dani-garcia @BlackDex
+/.github/** @dani-garcia @BlackDex
 /.github/CODEOWNERS @dani-garcia @BlackDex
 /.github/workflows/** @dani-garcia @BlackDex
+/SECURITY.md @dani-garcia @BlackDex

View File

@@ -1,4 +1,5 @@
name: Build name: Build
permissions: {}
on: on:
push: push:
@@ -13,6 +14,7 @@ on:
- "diesel.toml" - "diesel.toml"
- "docker/Dockerfile.j2" - "docker/Dockerfile.j2"
- "docker/DockerSettings.yaml" - "docker/DockerSettings.yaml"
pull_request: pull_request:
paths: paths:
- ".github/workflows/build.yml" - ".github/workflows/build.yml"
@@ -28,13 +30,17 @@ on:
jobs: jobs:
build: build:
name: Build and Test ${{ matrix.channel }}
permissions:
actions: write
contents: read
# We use Ubuntu 22.04 here because this matches the library versions used within the Debian docker containers # We use Ubuntu 22.04 here because this matches the library versions used within the Debian docker containers
runs-on: ubuntu-22.04 runs-on: ubuntu-22.04
timeout-minutes: 120 timeout-minutes: 120
# Make warnings errors, this is to prevent warnings slipping through. # Make warnings errors, this is to prevent warnings slipping through.
# This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes. # This is done globally to prevent rebuilds when the RUSTFLAGS env variable changes.
env: env:
RUSTFLAGS: "-D warnings" RUSTFLAGS: "-Dwarnings"
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
@@ -42,20 +48,19 @@ jobs:
          - "rust-toolchain" # The version defined in rust-toolchain
          - "msrv" # The supported MSRV
-    name: Build and Test ${{ matrix.channel }}
    steps:
-      # Checkout the repo
-      - name: "Checkout"
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
-      # End Checkout the repo

      # Install dependencies
      - name: "Install dependencies Ubuntu"
        run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl build-essential libmariadb-dev-compat libpq-dev libssl-dev pkg-config
      # End Install dependencies

+      # Checkout the repo
+      - name: "Checkout"
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
+        with:
+          persist-credentials: false
+          fetch-depth: 0
+      # End Checkout the repo

      # Determine rust-toolchain version
      - name: Init Variables
@@ -75,7 +80,7 @@ jobs:
      # Only install the clippy and rustfmt components on the default rust-toolchain
      - name: "Install rust-toolchain version"
-        uses: dtolnay/rust-toolchain@315e265cd78dad1e1dcf3a5074f6d6c47029d5aa # master @ Nov 18, 2024, 5:36 AM GMT+1
+        uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
        if: ${{ matrix.channel == 'rust-toolchain' }}
        with:
          toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -85,7 +90,7 @@ jobs:
      # Install the any other channel to be used for which we do not execute clippy and rustfmt
      - name: "Install MSRV version"
-        uses: dtolnay/rust-toolchain@315e265cd78dad1e1dcf3a5074f6d6c47029d5aa # master @ Nov 18, 2024, 5:36 AM GMT+1
+        uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
        if: ${{ matrix.channel != 'rust-toolchain' }}
        with:
          toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -93,11 +98,13 @@ jobs:
      # Set the current matrix toolchain version as default
      - name: "Set toolchain ${{steps.toolchain.outputs.RUST_TOOLCHAIN}} as default"
+        env:
+          RUST_TOOLCHAIN: ${{steps.toolchain.outputs.RUST_TOOLCHAIN}}
        run: |
          # Remove the rust-toolchain.toml
          rm rust-toolchain.toml
          # Set the default
-          rustup default ${{steps.toolchain.outputs.RUST_TOOLCHAIN}}
+          rustup default "${RUST_TOOLCHAIN}"

      # Show environment
      - name: "Show environment"
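A recurring change in this diff moves `${{ ... }}` expressions out of `run:` scripts and into `env:` blocks. A minimal shell sketch of why (the `evil` value below is a made-up stand-in, not from the workflow): an interpolated expression is pasted into the script text before the shell parses it, while an environment variable stays plain data.

```shell
#!/bin/sh
# Stand-in for an untrusted expression value; NOT from the workflow.
evil='stable; echo injected'

# Old pattern: the runner splices the value into the script text, so the
# shell would parse "rustup default stable; echo injected" as two commands.
# New pattern: the value arrives as an env var and is only ever expanded
# inside quotes, so it stays a single (harmless) argument.
RUST_TOOLCHAIN="$evil"
cmd="rustup default \"${RUST_TOOLCHAIN}\""
echo "$cmd"
```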
@@ -107,7 +114,8 @@ jobs:
      # End Show environment

      # Enable Rust Caching
-      - uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2.7.5
+      - name: Rust Caching
+        uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
        with:
          # Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
          # Like changing the build host from Ubuntu 20.04 to 22.04 for example.
@@ -119,37 +127,37 @@ jobs:
      # First test all features together, afterwards test them separately.
      - name: "test features: sqlite,mysql,postgresql,enable_mimalloc,query_logger"
        id: test_sqlite_mysql_postgresql_mimalloc_logger
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features sqlite,mysql,postgresql,enable_mimalloc,query_logger

      - name: "test features: sqlite,mysql,postgresql,enable_mimalloc"
        id: test_sqlite_mysql_postgresql_mimalloc
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features sqlite,mysql,postgresql,enable_mimalloc

      - name: "test features: sqlite,mysql,postgresql"
        id: test_sqlite_mysql_postgresql
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features sqlite,mysql,postgresql

      - name: "test features: sqlite"
        id: test_sqlite
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features sqlite

      - name: "test features: mysql"
        id: test_mysql
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features mysql

      - name: "test features: postgresql"
        id: test_postgresql
-        if: $${{ always() }}
+        if: ${{ !cancelled() }}
        run: |
          cargo test --features postgresql
      # End Run cargo tests
@@ -158,16 +166,16 @@ jobs:
      # Run cargo clippy, and fail on warnings
      - name: "clippy features: sqlite,mysql,postgresql,enable_mimalloc"
        id: clippy
-        if: ${{ always() && matrix.channel == 'rust-toolchain' }}
+        if: ${{ !cancelled() && matrix.channel == 'rust-toolchain' }}
        run: |
-          cargo clippy --features sqlite,mysql,postgresql,enable_mimalloc -- -D warnings
+          cargo clippy --features sqlite,mysql,postgresql,enable_mimalloc
      # End Run cargo clippy

      # Run cargo fmt (Only run on rust-toolchain defined version)
      - name: "check formatting"
        id: formatting
-        if: ${{ always() && matrix.channel == 'rust-toolchain' }}
+        if: ${{ !cancelled() && matrix.channel == 'rust-toolchain' }}
        run: |
          cargo fmt --all -- --check
      # End Run cargo fmt
@@ -177,22 +185,31 @@ jobs:
      # This is useful so all test/clippy/fmt actions are done, and they can all be addressed
      - name: "Some checks failed"
        if: ${{ failure() }}
+        env:
+          TEST_DB_M_L: ${{ steps.test_sqlite_mysql_postgresql_mimalloc_logger.outcome }}
+          TEST_DB_M: ${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}
+          TEST_DB: ${{ steps.test_sqlite_mysql_postgresql.outcome }}
+          TEST_SQLITE: ${{ steps.test_sqlite.outcome }}
+          TEST_MYSQL: ${{ steps.test_mysql.outcome }}
+          TEST_POSTGRESQL: ${{ steps.test_postgresql.outcome }}
+          CLIPPY: ${{ steps.clippy.outcome }}
+          FMT: ${{ steps.formatting.outcome }}
        run: |
-          echo "### :x: Checks Failed!" >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
-          echo "|Job|Status|" >> $GITHUB_STEP_SUMMARY
-          echo "|---|------|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (sqlite,mysql,postgresql,enable_mimalloc,query_logger)|${{ steps.test_sqlite_mysql_postgresql_mimalloc_logger.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (sqlite,mysql,postgresql)|${{ steps.test_sqlite_mysql_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (sqlite)|${{ steps.test_sqlite.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (mysql)|${{ steps.test_mysql.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|test (postgresql)|${{ steps.test_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|clippy (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.clippy.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "|fmt|${{ steps.formatting.outcome }}|" >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
-          echo "Please check the failed jobs and fix where needed." >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
+          echo "### :x: Checks Failed!" >> "${GITHUB_STEP_SUMMARY}"
+          echo "" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|Job|Status|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|---|------|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (sqlite,mysql,postgresql,enable_mimalloc,query_logger)|${TEST_DB_M_L}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${TEST_DB_M}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (sqlite,mysql,postgresql)|${TEST_DB}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (sqlite)|${TEST_SQLITE}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (mysql)|${TEST_MYSQL}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|test (postgresql)|${TEST_POSTGRESQL}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|clippy (sqlite,mysql,postgresql,enable_mimalloc)|${CLIPPY}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "|fmt|${FMT}|" >> "${GITHUB_STEP_SUMMARY}"
+          echo "" >> "${GITHUB_STEP_SUMMARY}"
+          echo "Please check the failed jobs and fix where needed." >> "${GITHUB_STEP_SUMMARY}"
+          echo "" >> "${GITHUB_STEP_SUMMARY}"
          exit 1
@@ -201,5 +218,5 @@ jobs:
      - name: "All checks passed"
        if: ${{ success() }}
        run: |
-          echo "### :tada: Checks Passed!" >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
+          echo "### :tada: Checks Passed!" >> "${GITHUB_STEP_SUMMARY}"
+          echo "" >> "${GITHUB_STEP_SUMMARY}"
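The summary steps above can be sketched outside of Actions; in this stand-in, `GITHUB_STEP_SUMMARY` is just a temp file and the outcome values are hard-coded placeholders rather than real `steps.*.outcome` results.

```shell
#!/bin/sh
# Sketch of the step-summary pattern; outcomes are stand-in values.
GITHUB_STEP_SUMMARY="$(mktemp)"
TEST_SQLITE="success"
CLIPPY="failure"

# Append a Markdown table to the (quoted) summary file, as the workflow does.
echo "|Job|Status|" >> "${GITHUB_STEP_SUMMARY}"
echo "|---|------|" >> "${GITHUB_STEP_SUMMARY}"
echo "|test (sqlite)|${TEST_SQLITE}|" >> "${GITHUB_STEP_SUMMARY}"
echo "|clippy|${CLIPPY}|" >> "${GITHUB_STEP_SUMMARY}"

summary="$(cat "${GITHUB_STEP_SUMMARY}")"
echo "$summary"
rm -f "${GITHUB_STEP_SUMMARY}"
```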

.github/workflows/check-templates.yml (new file)

@@ -0,0 +1,28 @@
+name: Check templates
+permissions: {}
+
+on: [ push, pull_request ]
+
+jobs:
+  docker-templates:
+    permissions:
+      contents: read
+    runs-on: ubuntu-24.04
+    timeout-minutes: 30
+    steps:
+      # Checkout the repo
+      - name: "Checkout"
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
+        with:
+          persist-credentials: false
+      # End Checkout the repo
+
+      - name: Run make to rebuild templates
+        working-directory: docker
+        run: make
+
+      - name: Check for unstaged changes
+        working-directory: docker
+        run: git diff --exit-code
+        continue-on-error: false


@@ -1,24 +1,20 @@
name: Hadolint
+permissions: {}

-on: [
-  push,
-  pull_request
-]
+on: [ push, pull_request ]

jobs:
  hadolint:
    name: Validate Dockerfile syntax
+    permissions:
+      contents: read
    runs-on: ubuntu-24.04
    timeout-minutes: 30
    steps:
-      # Checkout the repo
-      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
-      # End Checkout the repo
      # Start Docker Buildx
      - name: Setup Docker Buildx
-        uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3.7.1
+        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
        # https://github.com/moby/buildkit/issues/3969
        # Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
        with:
@@ -37,6 +33,12 @@ jobs:
        env:
          HADOLINT_VERSION: 2.12.0
      # End Download hadolint

+      # Checkout the repo
+      - name: Checkout
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
+        with:
+          persist-credentials: false
+      # End Checkout the repo

      # Test Dockerfiles with hadolint
      - name: Run hadolint


@@ -1,4 +1,5 @@
name: Release
+permissions: {}

on:
  push:
@@ -6,17 +7,23 @@ on:
      - main
    tags:
-      - '*'
+      # https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet
+      - '[1-2].[0-9]+.[0-9]+'

jobs:
  # https://github.com/marketplace/actions/skip-duplicate-actions
  # Some checks to determine if we need to continue with building a new docker.
  # We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
  skip_check:
-    runs-on: ubuntu-24.04
+    # Only run this in the upstream repo and not on forks
    if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
+    name: Cancel older jobs when running
+    permissions:
+      actions: write
+    runs-on: ubuntu-24.04
    outputs:
      should_skip: ${{ steps.skip_check.outputs.should_skip }}
    steps:
      - name: Skip Duplicates Actions
        id: skip_check
@@ -27,11 +34,17 @@ jobs:
    if: ${{ github.ref_type == 'branch' }}

  docker-build:
-    runs-on: ubuntu-24.04
-    timeout-minutes: 120
    needs: skip_check
    if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
-    # Start a local docker registry to extract the final Alpine static build binaries
+    name: Build Vaultwarden containers
+    permissions:
+      packages: write
+      contents: read
+      attestations: write
+      id-token: write
+    runs-on: ubuntu-24.04
+    timeout-minutes: 120
+    # Start a local docker registry to extract the compiled binaries to upload as artifacts and attest them
    services:
      registry:
        image: registry:2
@@ -56,37 +69,42 @@ jobs:
        base_image: ["debian","alpine"]

    steps:
-      # Checkout the repo
-      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
-        with:
-          fetch-depth: 0
      - name: Initialize QEMU binfmt support
-        uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
+        uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
        with:
          platforms: "arm64,arm"

      # Start Docker Buildx
      - name: Setup Docker Buildx
-        uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3.7.1
+        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
        # https://github.com/moby/buildkit/issues/3969
        # Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
        with:
+          cache-binary: false
          buildkitd-config-inline: |
            [worker.oci]
              max-parallelism = 2
          driver-opts: |
            network=host

+      # Checkout the repo
+      - name: Checkout
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
+        # We need fetch-depth of 0 so we also get all the tag metadata
+        with:
+          persist-credentials: false
+          fetch-depth: 0
+      # End Checkout the repo

      # Determine Base Tags and Source Version
      - name: Determine Base Tags and Source Version
        shell: bash
+        env:
+          REF_TYPE: ${{ github.ref_type }}
        run: |
-          # Check which main tag we are going to build determined by github.ref_type
-          if [[ "${{ github.ref_type }}" == "tag" ]]; then
+          # Check which main tag we are going to build determined by ref_type
+          if [[ "${REF_TYPE}" == "tag" ]]; then
            echo "BASE_TAGS=latest,${GITHUB_REF#refs/*/}" | tee -a "${GITHUB_ENV}"
-          elif [[ "${{ github.ref_type }}" == "branch" ]]; then
+          elif [[ "${REF_TYPE}" == "branch" ]]; then
            echo "BASE_TAGS=testing" | tee -a "${GITHUB_ENV}"
          fi
@@ -102,7 +120,7 @@ jobs:
      # Login to Docker Hub
      - name: Login to Docker Hub
-        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -111,12 +129,14 @@ jobs:
      - name: Add registry for DockerHub
        if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
        shell: bash
+        env:
+          DOCKERHUB_REPO: ${{ vars.DOCKERHUB_REPO }}
        run: |
-          echo "CONTAINER_REGISTRIES=${{ vars.DOCKERHUB_REPO }}" | tee -a "${GITHUB_ENV}"
+          echo "CONTAINER_REGISTRIES=${DOCKERHUB_REPO}" | tee -a "${GITHUB_ENV}"

      # Login to GitHub Container Registry
      - name: Login to GitHub Container Registry
-        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
@@ -126,12 +146,14 @@ jobs:
      - name: Add registry for ghcr.io
        if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
        shell: bash
+        env:
+          GHCR_REPO: ${{ vars.GHCR_REPO }}
        run: |
-          echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
+          echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${GHCR_REPO}" | tee -a "${GITHUB_ENV}"

      # Login to Quay.io
      - name: Login to Quay.io
-        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_USERNAME }}
@@ -141,17 +163,22 @@ jobs:
      - name: Add registry for Quay.io
        if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
        shell: bash
+        env:
+          QUAY_REPO: ${{ vars.QUAY_REPO }}
        run: |
-          echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.QUAY_REPO }}" | tee -a "${GITHUB_ENV}"
+          echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${QUAY_REPO}" | tee -a "${GITHUB_ENV}"

      - name: Configure build cache from/to
        shell: bash
+        env:
+          GHCR_REPO: ${{ vars.GHCR_REPO }}
+          BASE_IMAGE: ${{ matrix.base_image }}
        run: |
          #
          # Check if there is a GitHub Container Registry Login and use it for caching
          if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
-            echo "BAKE_CACHE_FROM=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }}" | tee -a "${GITHUB_ENV}"
-            echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},compression=zstd,mode=max" | tee -a "${GITHUB_ENV}"
+            echo "BAKE_CACHE_FROM=type=registry,ref=${GHCR_REPO}-buildcache:${BASE_IMAGE}" | tee -a "${GITHUB_ENV}"
+            echo "BAKE_CACHE_TO=type=registry,ref=${GHCR_REPO}-buildcache:${BASE_IMAGE},compression=zstd,mode=max" | tee -a "${GITHUB_ENV}"
          else
            echo "BAKE_CACHE_FROM="
            echo "BAKE_CACHE_TO="
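The repeated `${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}` idiom in these steps is plain POSIX parameter expansion: expand to the existing list plus a trailing comma only when the list is already non-empty. A standalone sketch with placeholder repo names (the real values come from repository variables):

```shell
#!/bin/sh
# Placeholder repo names; NOT the repository's actual vars.* values.
CONTAINER_REGISTRIES=""
for repo in docker.io/example/server ghcr.io/example/server quay.io/example/server; do
  # ${VAR:+text} expands to "text" only if VAR is set and non-empty,
  # so the first repo gets no leading comma.
  CONTAINER_REGISTRIES="${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${repo}"
done
echo "${CONTAINER_REGISTRIES}"
```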
@@ -159,13 +186,13 @@ jobs:
          #
      - name: Add localhost registry
-        if: ${{ matrix.base_image == 'alpine' }}
        shell: bash
        run: |
          echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}localhost:5000/vaultwarden/server" | tee -a "${GITHUB_ENV}"

      - name: Bake ${{ matrix.base_image }} containers
-        uses: docker/bake-action@2e3d19baedb14545e5d41222653874f25d5b4dfb # v5.10.0
+        id: bake_vw
+        uses: docker/bake-action@4ba453fbc2db7735392b93edf935aaf9b1e8f747 # v6.5.0
        env:
          BASE_TAGS: "${{ env.BASE_TAGS }}"
          SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -175,78 +202,119 @@ jobs:
        with:
          pull: true
          push: true
+          source: .
          files: docker/docker-bake.hcl
          targets: "${{ matrix.base_image }}-multi"
          set: |
            *.cache-from=${{ env.BAKE_CACHE_FROM }}
            *.cache-to=${{ env.BAKE_CACHE_TO }}

+      - name: Extract digest SHA
+        shell: bash
+        env:
+          BAKE_METADATA: ${{ steps.bake_vw.outputs.metadata }}
+        run: |
+          GET_DIGEST_SHA="$(jq -r '.["${{ matrix.base_image }}-multi"]."containerimage.digest"' <<< "${BAKE_METADATA}")"
+          echo "DIGEST_SHA=${GET_DIGEST_SHA}" | tee -a "${GITHUB_ENV}"
+
+      # Attest container images
+      - name: Attest - docker.io - ${{ matrix.base_image }}
+        if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
+        uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
+        with:
+          subject-name: ${{ vars.DOCKERHUB_REPO }}
+          subject-digest: ${{ env.DIGEST_SHA }}
+          push-to-registry: true
+
+      - name: Attest - ghcr.io - ${{ matrix.base_image }}
+        if: ${{ env.HAVE_GHCR_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
+        uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
+        with:
+          subject-name: ${{ vars.GHCR_REPO }}
+          subject-digest: ${{ env.DIGEST_SHA }}
+          push-to-registry: true
+
+      - name: Attest - quay.io - ${{ matrix.base_image }}
+        if: ${{ env.HAVE_QUAY_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
+        uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
+        with:
+          subject-name: ${{ vars.QUAY_REPO }}
+          subject-digest: ${{ env.DIGEST_SHA }}
+          push-to-registry: true

      # Extract the Alpine binaries from the containers
      - name: Extract binaries
-        if: ${{ matrix.base_image == 'alpine' }}
        shell: bash
+        env:
+          REF_TYPE: ${{ github.ref_type }}
        run: |
-          # Check which main tag we are going to build determined by github.ref_type
-          if [[ "${{ github.ref_type }}" == "tag" ]]; then
+          # Check which main tag we are going to build determined by ref_type
+          if [[ "${REF_TYPE}" == "tag" ]]; then
            EXTRACT_TAG="latest"
-          elif [[ "${{ github.ref_type }}" == "branch" ]]; then
+          elif [[ "${REF_TYPE}" == "branch" ]]; then
            EXTRACT_TAG="testing"
          fi

+          # Check which base_image was used and append -alpine if needed
+          if [[ "${{ matrix.base_image }}" == "alpine" ]]; then
+            EXTRACT_TAG="${EXTRACT_TAG}-alpine"
+          fi
+
          # After each extraction the image is removed.
          # This is needed because using different platforms doesn't trigger a new pull/download

          # Extract amd64 binary
-          docker create --name amd64 --platform=linux/amd64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
-          docker cp amd64:/vaultwarden vaultwarden-amd64
+          docker create --name amd64 --platform=linux/amd64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
+          docker cp amd64:/vaultwarden vaultwarden-amd64-${{ matrix.base_image }}
          docker rm --force amd64
-          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
+          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"

          # Extract arm64 binary
-          docker create --name arm64 --platform=linux/arm64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
-          docker cp arm64:/vaultwarden vaultwarden-arm64
+          docker create --name arm64 --platform=linux/arm64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
+          docker cp arm64:/vaultwarden vaultwarden-arm64-${{ matrix.base_image }}
          docker rm --force arm64
-          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
+          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"

          # Extract armv7 binary
-          docker create --name armv7 --platform=linux/arm/v7 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
-          docker cp armv7:/vaultwarden vaultwarden-armv7
+          docker create --name armv7 --platform=linux/arm/v7 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
+          docker cp armv7:/vaultwarden vaultwarden-armv7-${{ matrix.base_image }}
          docker rm --force armv7
-          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
+          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"

          # Extract armv6 binary
-          docker create --name armv6 --platform=linux/arm/v6 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
-          docker cp armv6:/vaultwarden vaultwarden-armv6
+          docker create --name armv6 --platform=linux/arm/v6 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
+          docker cp armv6:/vaultwarden vaultwarden-armv6-${{ matrix.base_image }}
          docker rm --force armv6
-          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}-alpine"
+          docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"

-      # Upload artifacts to Github Actions
-      - name: "Upload amd64 artifact"
-        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
-        if: ${{ matrix.base_image == 'alpine' }}
+      # Upload artifacts to Github Actions and Attest the binaries
+      - name: "Upload amd64 artifact ${{ matrix.base_image }}"
+        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        with:
-          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64
-          path: vaultwarden-amd64
+          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64-${{ matrix.base_image }}
+          path: vaultwarden-amd64-${{ matrix.base_image }}

-      - name: "Upload arm64 artifact"
-        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
-        if: ${{ matrix.base_image == 'alpine' }}
+      - name: "Upload arm64 artifact ${{ matrix.base_image }}"
+        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        with:
-          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64
-          path: vaultwarden-arm64
+          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64-${{ matrix.base_image }}
+          path: vaultwarden-arm64-${{ matrix.base_image }}

-      - name: "Upload armv7 artifact"
-        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
-        if: ${{ matrix.base_image == 'alpine' }}
+      - name: "Upload armv7 artifact ${{ matrix.base_image }}"
+        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        with:
-          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7
-          path: vaultwarden-armv7
+          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7-${{ matrix.base_image }}
+          path: vaultwarden-armv7-${{ matrix.base_image }}

-      - name: "Upload armv6 artifact"
-        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
-        if: ${{ matrix.base_image == 'alpine' }}
+      - name: "Upload armv6 artifact ${{ matrix.base_image }}"
+        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        with:
-          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6
-          path: vaultwarden-armv6
+          name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6-${{ matrix.base_image }}
+          path: vaultwarden-armv6-${{ matrix.base_image }}

+      - name: "Attest artifacts ${{ matrix.base_image }}"
+        uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
+        with:
+          subject-path: vaultwarden-*
      # End Upload artifacts to Github Actions
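The tag selection in the extract step above can be traced in isolation; `REF_TYPE` and `BASE_IMAGE` here are stand-ins for the `github.ref_type` and `matrix.base_image` context values, not real workflow inputs.

```shell
#!/bin/sh
# Stand-ins for the Actions context values used in the real step.
REF_TYPE="branch"
BASE_IMAGE="alpine"

if [ "${REF_TYPE}" = "tag" ]; then
  EXTRACT_TAG="latest"
elif [ "${REF_TYPE}" = "branch" ]; then
  EXTRACT_TAG="testing"
fi

# The "-alpine" suffix is now appended conditionally instead of being
# hard-coded into every docker command.
if [ "${BASE_IMAGE}" = "alpine" ]; then
  EXTRACT_TAG="${EXTRACT_TAG}-alpine"
fi

echo "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
```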


@@ -1,3 +1,6 @@
+name: Cleanup
+permissions: {}

on:
  workflow_dispatch:
    inputs:
@@ -9,10 +12,11 @@ on:
  schedule:
    - cron: '0 1 * * FRI'

-name: Cleanup
jobs:
  releasecache-cleanup:
    name: Releasecache Cleanup
+    permissions:
+      packages: write
    runs-on: ubuntu-24.04
    continue-on-error: true
    timeout-minutes: 30


@@ -1,37 +1,42 @@
-name: trivy
+name: Trivy
+permissions: {}

on:
  push:
    branches:
      - main
    tags:
      - '*'
  pull_request:
-    branches: [ "main" ]
+    branches:
+      - main
  schedule:
    - cron: '08 11 * * *'

-permissions:
-  contents: read

jobs:
  trivy-scan:
-    # Only run this in the master repo and not on forks
+    # Only run this in the upstream repo and not on forks
    # When all forks run this at the same time, it is causing `Too Many Requests` issues
    if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
-    name: Check
-    runs-on: ubuntu-24.04
-    timeout-minutes: 30
+    name: Trivy Scan
    permissions:
      contents: read
-      security-events: write
      actions: read
+      security-events: write
+    runs-on: ubuntu-24.04
+    timeout-minutes: 30
    steps:
      - name: Checkout code
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
+        with:
+          persist-credentials: false

      - name: Run Trivy vulnerability scanner
-        uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0 # v0.29.0
+        uses: aquasecurity/trivy-action@6c175e9c4083a92bbca2f9724c8a5e33bc2d97a5 # v0.30.0
        env:
          TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
          TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1


@@ -1,7 +1,7 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.6.0
+    rev: v5.0.0
    hooks:
      - id: check-yaml
      - id: check-json
@@ -31,7 +31,7 @@ repos:
        language: system
        args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--"]
        types_or: [rust, file]
-        files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
+        files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
        pass_filenames: false
      - id: cargo-clippy
        name: cargo clippy
@@ -40,5 +40,13 @@ repos:
        language: system
        args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
        types_or: [rust, file]
-        files: (Cargo.toml|Cargo.lock|rust-toolchain|clippy.toml|.*\.rs$)
+        files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
        pass_filenames: false
+      - id: check-docker-templates
+        name: check docker templates
+        description: Check if the Docker templates are updated
+        language: system
+        entry: sh
+        args:
+          - "-c"
+          - "cd docker && make"
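The widened `files:` pattern above can be sanity-checked with `grep -E`; the probe filenames below are just examples (note the dots are unescaped in the original pattern, so they match any character):

```shell
#!/bin/sh
# Pattern copied from the updated hook configuration above.
pattern='(Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)'

matches() { printf '%s\n' "$1" | grep -qE "$pattern"; }

# rust-toolchain.toml was not matched by the old pattern; it is now.
matches "rust-toolchain.toml" && toolchain="now-matched"
matches "src/main.rs" && rs="matched"
echo "${toolchain} ${rs}"
```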

Cargo.lock (generated)

File diff suppressed because it is too large


@@ -1,9 +1,12 @@
+[workspace]
+members = ["macros"]
+
[package]
name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
-rust-version = "1.82.0"
+rust-version = "1.85.0"
resolver = "2"

repository = "https://github.com/dani-garcia/vaultwarden"
@@ -39,8 +42,10 @@ unstable = []
 syslog = "7.0.0"

 [dependencies]
+macros = { path = "./macros" }
+
 # Logging
-log = "0.4.22"
+log = "0.4.27"
 fern = { version = "0.7.1", features = ["syslog-7", "reopen-1"] }
 tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
@@ -48,12 +53,12 @@ tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and
 dotenvy = { version = "0.15.7", default-features = false }

 # Lazy initialization
-once_cell = "1.20.2"
+once_cell = "1.21.3"

 # Numerical libraries
 num-traits = "0.2.19"
 num-derive = "0.4.2"
-bigdecimal = "0.4.7"
+bigdecimal = "0.4.8"

 # Web framework
 rocket = { version = "0.5.1", features = ["tls", "json"], default-features = false }
@@ -67,46 +72,50 @@ dashmap = "6.1.0"
 # Async futures
 futures = "0.3.31"
-tokio = { version = "1.42.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
+tokio = { version = "1.45.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }

 # A generic serialization/deserialization framework
-serde = { version = "1.0.216", features = ["derive"] }
+serde = { version = "1.0.219", features = ["derive"] }
-serde_json = "1.0.133"
+serde_json = "1.0.140"

 # A safe, extensible ORM and Query builder
-diesel = { version = "2.2.6", features = ["chrono", "r2d2", "numeric"] }
+diesel = { version = "2.2.10", features = ["chrono", "r2d2", "numeric"] }
 diesel_migrations = "2.2.0"
 diesel_logger = { version = "0.4.0", optional = true }
+
+derive_more = { version = "2.0.1", features = ["from", "into", "as_ref", "deref", "display"] }
+diesel-derive-newtype = "2.1.2"

 # Bundled/Static SQLite
-libsqlite3-sys = { version = "0.30.1", features = ["bundled"], optional = true }
+libsqlite3-sys = { version = "0.33.0", features = ["bundled"], optional = true }

 # Crypto-related libraries
-rand = { version = "0.8.5", features = ["small_rng"] }
+rand = "0.9.1"
-ring = "0.17.8"
+ring = "0.17.14"
+subtle = "2.6.1"

 # UUID generation
-uuid = { version = "1.11.0", features = ["v4"] }
+uuid = { version = "1.17.0", features = ["v4"] }

 # Date and time libraries
-chrono = { version = "0.4.39", features = ["clock", "serde"], default-features = false }
+chrono = { version = "0.4.41", features = ["clock", "serde"], default-features = false }
-chrono-tz = "0.10.0"
+chrono-tz = "0.10.3"
-time = "0.3.37"
+time = "0.3.41"

 # Job scheduler
-job_scheduler_ng = "2.0.5"
+job_scheduler_ng = "2.2.0"

 # Data encoding library Hex/Base32/Base64
-data-encoding = "2.6.0"
+data-encoding = "2.9.0"

 # JWT library
-jsonwebtoken = "9.3.0"
+jsonwebtoken = "9.3.1"

 # TOTP library
 totp-lite = "2.0.1"

 # Yubico Library
-yubico = { version = "0.12.0", features = ["online-tokio"], default-features = false }
+yubico = { package = "yubico_ng", version = "0.13.0", features = ["online-tokio"], default-features = false }

 # WebAuthn libraries
 webauthn-rs = "0.3.2"
@@ -115,63 +124,60 @@ webauthn-rs = "0.3.2"
 url = "2.5.4"

 # Email libraries
-lettre = { version = "0.11.11", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
+lettre = { version = "0.11.16", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
 percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
 email_address = "0.2.9"

 # HTML Template library
-handlebars = { version = "6.2.0", features = ["dir_source"] }
+handlebars = { version = "6.3.2", features = ["dir_source"] }

 # HTTP client (Used for favicons, version check, DUO and HIBP API)
-reqwest = { version = "0.12.9", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
+reqwest = { version = "0.12.15", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
-hickory-resolver = "0.24.2"
+hickory-resolver = "0.25.2"

 # Favicon extraction libraries
 html5gum = "0.7.0"
 regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
 data-url = "0.3.1"
-bytes = "1.9.0"
+bytes = "1.10.1"

 # Cache function results (Used for version check and favicon fetching)
-cached = { version = "0.54.0", features = ["async"] }
+cached = { version = "0.55.1", features = ["async"] }

 # Used for custom short lived cookie jar during favicon extraction
 cookie = "0.18.1"
 cookie_store = "0.21.1"

 # Used by U2F, JWT and PostgreSQL
-openssl = "0.10.68"
+openssl = "0.10.72"

 # CLI argument parsing
 pico-args = "0.5.0"

 # Macro ident concatenation
-paste = "1.0.15"
+pastey = "0.1.0"
-governor = "0.8.0"
+governor = "0.10.0"

 # Check client versions for specific features.
-semver = "1.0.24"
+semver = "1.0.26"

 # Allow overriding the default memory allocator
 # Mainly used for the musl builds, since the default musl malloc is very slow
-mimalloc = { version = "0.1.43", features = ["secure"], default-features = false, optional = true }
+mimalloc = { version = "0.1.46", features = ["secure"], default-features = false, optional = true }

-which = "7.0.0"
+which = "7.0.3"

 # Argon2 library with support for the PHC format
 argon2 = "0.5.3"

 # Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
-rpassword = "7.3.1"
+rpassword = "7.4.0"

 # Loading a dynamic CSS Stylesheet
 grass_compiler = { version = "0.13.4", default-features = false }
-[patch.crates-io]
-# Patch yubico to remove duplicate crates of older versions
-yubico = { git = "https://github.com/BlackDex/yubico-rs", rev = "00df14811f58155c0f02e3ab10f1570ed3e115c6" }
-
 # Strip debuginfo from the release builds
-# The symbols are the provide better panic traces
+# The debug symbols are to provide better panic traces
 # Also enable fat LTO and use 1 codegen unit for optimizations
 [profile.release]
 strip = "debuginfo"
@@ -206,7 +212,7 @@ codegen-units = 16
 # Linting config
 # https://doc.rust-lang.org/rustc/lints/groups.html
-[lints.rust]
+[workspace.lints.rust]
 # Forbid
 unsafe_code = "forbid"
 non_ascii_idents = "forbid"
@@ -230,13 +236,20 @@ unused_import_braces = "deny"
 unused_lifetimes = "deny"
 unused_qualifications = "deny"
 variant_size_differences = "deny"
+# Allow the following lints since these cause issues with Rust v1.84.0 or newer
+# Building Vaultwarden with Rust v1.85.0 and edition 2024 also works without issues
+if_let_rescope = "allow"
+tail_expr_drop_order = "allow"

 # https://rust-lang.github.io/rust-clippy/stable/index.html
-[lints.clippy]
+[workspace.lints.clippy]
 # Warn
 dbg_macro = "warn"
 todo = "warn"
+# Ignore/Allow
+result_large_err = "allow"

 # Deny
 case_sensitive_file_extension_comparisons = "deny"
 cast_lossless = "deny"
@@ -267,3 +280,6 @@ unused_async = "deny"
 unused_self = "deny"
 verbose_file_reads = "deny"
 zero_sized_map_values = "deny"
+
+[lints]
+workspace = true


@@ -48,8 +48,8 @@ fn main() {
 fn run(args: &[&str]) -> Result<String, std::io::Error> {
     let out = Command::new(args[0]).args(&args[1..]).output()?;
     if !out.status.success() {
-        use std::io::{Error, ErrorKind};
-        return Err(Error::new(ErrorKind::Other, "Command not successful"));
+        use std::io::Error;
+        return Err(Error::other("Command not successful"));
     }
     Ok(String::from_utf8(out.stdout).unwrap().trim().to_string())
 }
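The `build.rs` change above swaps `Error::new(ErrorKind::Other, ...)` for the shorter `Error::other(...)` constructor (stable since Rust 1.74). Both produce the same error; a quick sketch demonstrating the equivalence:

```rust
use std::io::{Error, ErrorKind};

fn main() {
    // Old style: spell out ErrorKind::Other explicitly
    let old = Error::new(ErrorKind::Other, "Command not successful");
    // New style: Error::other is a dedicated shorthand for the same thing
    let new = Error::other("Command not successful");

    // Same kind and same message either way
    assert_eq!(old.kind(), new.kind());
    assert_eq!(old.to_string(), new.to_string());
}
```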


@@ -1,11 +1,11 @@
 ---
-vault_version: "v2024.6.2c"
-vault_image_digest: "sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b"
+vault_version: "v2025.5.0"
+vault_image_digest: "sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e"
-# Cross Compile Docker Helper Scripts v1.5.0
+# Cross Compile Docker Helper Scripts v1.6.1
 # We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
 # https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
-xx_image_digest: "sha256:1978e7a58a1777cb0ef0dde76bad60b7914b21da57cfa88047875e4f364297aa"
+xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
-rust_version: 1.83.0 # Rust version to be used
+rust_version: 1.87.0 # Rust version to be used
 debian_version: bookworm # Debian release name to be used
 alpine_version: "3.21" # Alpine version to be used

 # For which platforms/architectures will we try to build images


@@ -19,23 +19,23 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull docker.io/vaultwarden/web-vault:v2024.6.2c
-#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.6.2c
-#     [docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b]
+#     $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
+#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
+#     [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b
-#     [docker.io/vaultwarden/web-vault:v2024.6.2c]
+#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
+#     [docker.io/vaultwarden/web-vault:v2025.5.0]
 #
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b AS vault
+FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault

 ########################## ALPINE BUILD IMAGES ##########################
 ## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
 ## And for Alpine we define all build images here, they will only be loaded when actually used
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.83.0 AS build_amd64
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.83.0 AS build_arm64
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.83.0 AS build_armv7
-FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.83.0 AS build_armv6
+FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.87.0 AS build_amd64
+FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.87.0 AS build_arm64
+FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.87.0 AS build_armv7
+FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.87.0 AS build_armv6

 ########################## BUILD IMAGE ##########################
 # hadolint ignore=DL3006
@@ -76,6 +76,7 @@ RUN source /env-cargo && \
 # Copies over *only* your manifests and build files
 COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
+COPY ./macros ./macros

 ARG CARGO_PROFILE=release


@@ -19,24 +19,24 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull docker.io/vaultwarden/web-vault:v2024.6.2c
-#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.6.2c
-#     [docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b]
+#     $ docker pull docker.io/vaultwarden/web-vault:v2025.5.0
+#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.5.0
+#     [docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b
-#     [docker.io/vaultwarden/web-vault:v2024.6.2c]
+#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e
+#     [docker.io/vaultwarden/web-vault:v2025.5.0]
 #
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:409ab328ca931439cb916b388a4bb784bd44220717aaf74cf71620c23e34fc2b AS vault
+FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:a0a377b810e66a4ebf1416f732d2be06f3262bf5a5238695af88d3ec6871cc0e AS vault

 ########################## Cross Compile Docker Helper Scripts ##########################
 ## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
 ## And these bash scripts do not have any significant difference if at all
-FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:1978e7a58a1777cb0ef0dde76bad60b7914b21da57cfa88047875e4f364297aa AS xx
+FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894 AS xx

 ########################## BUILD IMAGE ##########################
 # hadolint ignore=DL3006
-FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.83.0-slim-bookworm AS build
+FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.87.0-slim-bookworm AS build
 COPY --from=xx / /
 ARG TARGETARCH
 ARG TARGETVARIANT
@@ -89,24 +89,24 @@ RUN USER=root cargo new --bin /app
 WORKDIR /app

 # Environment variables for Cargo on Debian based builds
-ARG ARCH_OPENSSL_LIB_DIR \
-    ARCH_OPENSSL_INCLUDE_DIR
+ARG TARGET_PKG_CONFIG_PATH

 RUN source /env-cargo && \
     if xx-info is-cross ; then \
-        # Some special variables if needed to override some build paths
-        if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
-            echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
-            echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
-        fi && \
         # We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
         # Because of this we generate the needed environment variables here which we can load in the needed steps.
         echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
         echo "export CARGO_TARGET_$(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_LINKER=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
+        echo "export PKG_CONFIG=/usr/bin/$(xx-info)-pkg-config" >> /env-cargo && \
         echo "export CROSS_COMPILE=1" >> /env-cargo && \
-        echo "export OPENSSL_INCLUDE_DIR=/usr/include/$(xx-info)" >> /env-cargo && \
-        echo "export OPENSSL_LIB_DIR=/usr/lib/$(xx-info)" >> /env-cargo ; \
+        echo "export PKG_CONFIG_ALLOW_CROSS=1" >> /env-cargo && \
+        # For some architectures `xx-info` returns a triple which doesn't matches the path on disk
+        # In those cases you can override this by setting the `TARGET_PKG_CONFIG_PATH` build-arg
+        if [[ -n "${TARGET_PKG_CONFIG_PATH}" ]]; then \
+            echo "export TARGET_PKG_CONFIG_PATH=${TARGET_PKG_CONFIG_PATH}" >> /env-cargo ; \
+        else \
+            echo "export PKG_CONFIG_PATH=/usr/lib/$(xx-info)/pkgconfig" >> /env-cargo ; \
+        fi && \
+        echo "# End of env-cargo" >> /env-cargo ; \
     fi && \
     # Output the current contents of the file
     cat /env-cargo
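The shell pipeline in the hunk above derives Cargo's per-target environment variable names from the target triple with `tr '[:lower:]' '[:upper:]' | tr - _`. The same transform in Rust (the triple used below is just an illustrative example):

```rust
// Mimic: echo "$CARGO_TARGET" | tr '[:lower:]' '[:upper:]' | tr - _
// Cargo reads per-target settings from variables named like
// CARGO_TARGET_<UPPERCASED_TRIPLE_WITH_UNDERSCORES>_LINKER.
fn cargo_env_triple(target: &str) -> String {
    target.to_uppercase().replace('-', "_")
}

fn main() {
    let target = "armv7-unknown-linux-gnueabihf"; // example target triple
    let linker_var = format!("CARGO_TARGET_{}_LINKER", cargo_env_triple(target));
    assert_eq!(linker_var, "CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER");
}
```

This is why the Dockerfile appends lines like `export CARGO_TARGET_..._LINKER=/usr/bin/<triple>-gcc` to `/env-cargo`: Cargo picks the cross linker up purely from that env var name.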
@@ -116,6 +116,7 @@ RUN source /env-cargo && \
 # Copies over *only* your manifests and build files
 COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
+COPY ./macros ./macros

 ARG CARGO_PROFILE=release


@@ -109,24 +109,24 @@ WORKDIR /app
{% if base == "debian" %}
 # Environment variables for Cargo on Debian based builds
-ARG ARCH_OPENSSL_LIB_DIR \
-    ARCH_OPENSSL_INCLUDE_DIR
+ARG TARGET_PKG_CONFIG_PATH

 RUN source /env-cargo && \
     if xx-info is-cross ; then \
-        # Some special variables if needed to override some build paths
-        if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
-            echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
-            echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
-        fi && \
         # We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
         # Because of this we generate the needed environment variables here which we can load in the needed steps.
         echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
         echo "export CARGO_TARGET_$(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_LINKER=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
+        echo "export PKG_CONFIG=/usr/bin/$(xx-info)-pkg-config" >> /env-cargo && \
         echo "export CROSS_COMPILE=1" >> /env-cargo && \
-        echo "export OPENSSL_INCLUDE_DIR=/usr/include/$(xx-info)" >> /env-cargo && \
-        echo "export OPENSSL_LIB_DIR=/usr/lib/$(xx-info)" >> /env-cargo ; \
+        echo "export PKG_CONFIG_ALLOW_CROSS=1" >> /env-cargo && \
+        # For some architectures `xx-info` returns a triple which doesn't matches the path on disk
+        # In those cases you can override this by setting the `TARGET_PKG_CONFIG_PATH` build-arg
+        if [[ -n "${TARGET_PKG_CONFIG_PATH}" ]]; then \
+            echo "export TARGET_PKG_CONFIG_PATH=${TARGET_PKG_CONFIG_PATH}" >> /env-cargo ; \
+        else \
+            echo "export PKG_CONFIG_PATH=/usr/lib/$(xx-info)/pkgconfig" >> /env-cargo ; \
+        fi && \
+        echo "# End of env-cargo" >> /env-cargo ; \
     fi && \
     # Output the current contents of the file
     cat /env-cargo
@@ -143,6 +143,7 @@ RUN source /env-cargo && \
 # Copies over *only* your manifests and build files
 COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
+COPY ./macros ./macros

 ARG CARGO_PROFILE=release


@@ -133,8 +133,7 @@ target "debian-386" {
   platforms = ["linux/386"]
   tags = generate_tags("", "-386")
   args = {
-    ARCH_OPENSSL_LIB_DIR = "/usr/lib/i386-linux-gnu"
-    ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/i386-linux-gnu"
+    TARGET_PKG_CONFIG_PATH = "/usr/lib/i386-linux-gnu/pkgconfig"
   }
 }
@@ -142,20 +141,12 @@ target "debian-ppc64le" {
   inherits = ["debian"]
   platforms = ["linux/ppc64le"]
   tags = generate_tags("", "-ppc64le")
-  args = {
-    ARCH_OPENSSL_LIB_DIR = "/usr/lib/powerpc64le-linux-gnu"
-    ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/powerpc64le-linux-gnu"
-  }
 }

 target "debian-s390x" {
   inherits = ["debian"]
   platforms = ["linux/s390x"]
   tags = generate_tags("", "-s390x")
-  args = {
-    ARCH_OPENSSL_LIB_DIR = "/usr/lib/s390x-linux-gnu"
-    ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/s390x-linux-gnu"
-  }
 }

 // ==== End of unsupported Debian architecture targets ===

macros/Cargo.toml (new file, 16 lines)

@@ -0,0 +1,16 @@
+[package]
+name = "macros"
+version = "0.1.0"
+edition = "2021"
+
+[lib]
+name = "macros"
+path = "src/lib.rs"
+proc-macro = true
+
+[dependencies]
+quote = "1.0.40"
+syn = "2.0.101"
+
+[lints]
+workspace = true

macros/src/lib.rs (new file, 56 lines)

@@ -0,0 +1,56 @@
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_derive(UuidFromParam)]
pub fn derive_uuid_from_param(input: TokenStream) -> TokenStream {
    let ast = syn::parse(input).unwrap();

    impl_derive_uuid_macro(&ast)
}

fn impl_derive_uuid_macro(ast: &syn::DeriveInput) -> TokenStream {
    let name = &ast.ident;
    let gen_derive = quote! {
        #[automatically_derived]
        impl<'r> rocket::request::FromParam<'r> for #name {
            type Error = ();

            #[inline(always)]
            fn from_param(param: &'r str) -> Result<Self, Self::Error> {
                if uuid::Uuid::parse_str(param).is_ok() {
                    Ok(Self(param.to_string()))
                } else {
                    Err(())
                }
            }
        }
    };
    gen_derive.into()
}

#[proc_macro_derive(IdFromParam)]
pub fn derive_id_from_param(input: TokenStream) -> TokenStream {
    let ast = syn::parse(input).unwrap();

    impl_derive_safestring_macro(&ast)
}

fn impl_derive_safestring_macro(ast: &syn::DeriveInput) -> TokenStream {
    let name = &ast.ident;
    let gen_derive = quote! {
        #[automatically_derived]
        impl<'r> rocket::request::FromParam<'r> for #name {
            type Error = ();

            #[inline(always)]
            fn from_param(param: &'r str) -> Result<Self, Self::Error> {
                if param.chars().all(|c| matches!(c, 'a'..='z' | 'A'..='Z' |'0'..='9' | '-')) {
                    Ok(Self(param.to_string()))
                } else {
                    Err(())
                }
            }
        }
    };
    gen_derive.into()
}
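Both derives generate a Rocket `FromParam` impl that differs only in its validation predicate: `UuidFromParam` requires the route parameter to parse as a UUID, while `IdFromParam` only permits a safe character set. The `IdFromParam` check, extracted as a plain function (standalone sketch, no Rocket involved):

```rust
// The predicate behind IdFromParam: accept only [a-zA-Z0-9-], so a route
// parameter can never smuggle in path separators, query syntax, etc.
// (As written, like the derive's check, an empty string also passes.)
fn is_safe_id(param: &str) -> bool {
    param.chars().all(|c| matches!(c, 'a'..='z' | 'A'..='Z' | '0'..='9' | '-'))
}

fn main() {
    assert!(is_safe_id("vaultwarden-admin-00000-000000000000"));
    assert!(!is_safe_id("../../etc/passwd")); // rejected: '.', '/'
    assert!(!is_safe_id("abc?x=1")); // rejected: '?', '='
}
```

Because the impl is generated per newtype, each ID type (e.g. `UserId`, `OrganizationId`) gets its own guard without any hand-written boilerplate.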


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT 0; -- FALSE
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT 0; -- FALSE
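These migrations add a `manage` flag to both per-user and per-group collection access rows (defaulting to `FALSE`). A hypothetical sketch of how such a flag can combine with the existing access pair; the field and method names here are illustrative, not necessarily Vaultwarden's actual model:

```rust
// Hypothetical model of a collection-access row after the migration.
// `read_only` and `hide_passwords` are the pre-existing flags;
// `manage` is the newly added column (DEFAULT FALSE).
#[derive(Debug, Clone, Copy)]
struct CollectionAccess {
    read_only: bool,
    hide_passwords: bool,
    manage: bool,
}

impl CollectionAccess {
    // A manager implicitly has write access, regardless of read_only.
    fn can_edit_items(&self) -> bool {
        self.manage || !self.read_only
    }

    // Importing into the collection is reserved for managers
    // (matching the "Allow col managers to import" change in this sync).
    fn can_import(&self) -> bool {
        self.manage
    }
}

fn main() {
    let member = CollectionAccess { read_only: true, hide_passwords: true, manage: false };
    let manager = CollectionAccess { read_only: false, hide_passwords: false, manage: true };
    assert!(!member.can_import());
    assert!(manager.can_import() && manager.can_edit_items());
}
```

The `DEFAULT FALSE` in the migrations means every existing row keeps its current effective permissions; only rows explicitly updated gain the new capability.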


@@ -1,4 +1,4 @@
 [toolchain]
-channel = "1.83.0"
+channel = "1.87.0"
 components = [ "rustfmt", "clippy" ]
 profile = "minimal"


@@ -50,7 +50,7 @@ pub fn routes() -> Vec<Route> {
         disable_user,
         enable_user,
         remove_2fa,
-        update_user_org_type,
+        update_membership_type,
         update_revision_users,
         post_config,
         delete_config,
@@ -99,9 +99,10 @@ const DT_FMT: &str = "%Y-%m-%d %H:%M:%S %Z";
 const BASE_TEMPLATE: &str = "admin/base";

 const ACTING_ADMIN_USER: &str = "vaultwarden-admin-00000-000000000000";
+pub const FAKE_ADMIN_UUID: &str = "00000000-0000-0000-0000-000000000000";

 fn admin_path() -> String {
-    format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
+    format!("{}{ADMIN_PATH}", CONFIG.domain_path())
 }
 #[derive(Debug)]
@@ -170,7 +171,7 @@ struct LoginForm {
     redirect: Option<String>,
 }

-#[post("/", data = "<data>")]
+#[post("/", format = "application/x-www-form-urlencoded", data = "<data>")]
 fn post_admin_login(
     data: Form<LoginForm>,
     cookies: &CookieJar<'_>,
@@ -205,7 +206,7 @@ fn post_admin_login(
     cookies.add(cookie);

     if let Some(redirect) = redirect {
-        Ok(Redirect::to(format!("{}{}", admin_path(), redirect)))
+        Ok(Redirect::to(format!("{}{redirect}", admin_path())))
     } else {
         Err(AdminResponse::Ok(render_admin_page()))
     }
@@ -280,15 +281,15 @@ struct InviteData {
     email: String,
 }

-async fn get_user_or_404(uuid: &str, conn: &mut DbConn) -> ApiResult<User> {
-    if let Some(user) = User::find_by_uuid(uuid, conn).await {
+async fn get_user_or_404(user_id: &UserId, conn: &mut DbConn) -> ApiResult<User> {
+    if let Some(user) = User::find_by_uuid(user_id, conn).await {
         Ok(user)
     } else {
         err_code!("User doesn't exist", Status::NotFound.code);
     }
 }
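The signature change above is part of a broader move from stringly-typed `&str` UUIDs to dedicated ID newtypes (`UserId`, `OrganizationId`, `MembershipId`, ...). A minimal standalone sketch of the pattern; `find_user` is a hypothetical stand-in for `User::find_by_uuid`:

```rust
// Newtype IDs make it a compile error to pass an OrganizationId where a
// UserId is expected, even though both are just wrapped Strings.
#[derive(Debug, Clone, PartialEq)]
struct UserId(String);

#[derive(Debug, Clone, PartialEq)]
struct OrganizationId(String);

// A `From<String>` impl enables the `FAKE_ADMIN_UUID.to_string().into()`
// conversions seen later in this diff.
impl From<String> for OrganizationId {
    fn from(s: String) -> Self {
        Self(s)
    }
}

// Hypothetical stand-in for a typed lookup such as User::find_by_uuid.
fn find_user(user_id: &UserId) -> Option<String> {
    Some(format!("user:{}", user_id.0))
}

fn main() {
    let user_id = UserId("00000000-0000-0000-0000-000000000000".to_string());
    assert_eq!(find_user(&user_id), Some("user:00000000-0000-0000-0000-000000000000".to_string()));

    let org: OrganizationId = "00000000-0000-0000-0000-000000000000".to_string().into();
    // find_user(&org); // would not compile: expected &UserId, found &OrganizationId
    assert_eq!(org.0.len(), 36);
}
```

In the actual codebase the boilerplate for these wrappers is generated by `derive_more`, `diesel-derive-newtype`, and the new `macros` crate rather than written by hand.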
-#[post("/invite", data = "<data>")]
+#[post("/invite", format = "application/json", data = "<data>")]
 async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbConn) -> JsonResult {
     let data: InviteData = data.into_inner();
     if User::find_by_mail(&data.email, &mut conn).await.is_some() {
@@ -299,7 +300,9 @@ async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbCon
     async fn _generate_invite(user: &User, conn: &mut DbConn) -> EmptyResult {
         if CONFIG.mail_enabled() {
-            mail::send_invite(user, None, None, &CONFIG.invitation_org_name(), None).await
+            let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
+            let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
+            mail::send_invite(user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
         } else {
             let invitation = Invitation::new(&user.email);
             invitation.save(conn).await
@@ -312,7 +315,7 @@ async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbCon
     Ok(Json(user.to_json(&mut conn).await))
 }

-#[post("/test/smtp", data = "<data>")]
+#[post("/test/smtp", format = "application/json", data = "<data>")]
 async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
     let data: InviteData = data.into_inner();
@@ -381,29 +384,29 @@ async fn get_user_by_mail_json(mail: &str, _token: AdminToken, mut conn: DbConn)
     }
 }

-#[get("/users/<uuid>")]
-async fn get_user_json(uuid: &str, _token: AdminToken, mut conn: DbConn) -> JsonResult {
-    let u = get_user_or_404(uuid, &mut conn).await?;
+#[get("/users/<user_id>")]
+async fn get_user_json(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> JsonResult {
+    let u = get_user_or_404(&user_id, &mut conn).await?;
     let mut usr = u.to_json(&mut conn).await;
     usr["userEnabled"] = json!(u.enabled);
     usr["createdAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
     Ok(Json(usr))
 }

-#[post("/users/<uuid>/delete")]
-async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    let user = get_user_or_404(uuid, &mut conn).await?;
+#[post("/users/<user_id>/delete", format = "application/json")]
+async fn delete_user(user_id: UserId, token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    let user = get_user_or_404(&user_id, &mut conn).await?;

-    // Get the user_org records before deleting the actual user
-    let user_orgs = UserOrganization::find_any_state_by_user(uuid, &mut conn).await;
+    // Get the membership records before deleting the actual user
+    let memberships = Membership::find_any_state_by_user(&user_id, &mut conn).await;
     let res = user.delete(&mut conn).await;

-    for user_org in user_orgs {
+    for membership in memberships {
         log_event(
-            EventType::OrganizationUserRemoved as i32,
-            &user_org.uuid,
-            &user_org.org_uuid,
-            ACTING_ADMIN_USER,
+            EventType::OrganizationUserDeleted as i32,
+            &membership.uuid,
+            &membership.org_uuid,
+            &ACTING_ADMIN_USER.into(),
             14, // Use UnknownBrowser type
             &token.ip.ip,
             &mut conn,
@@ -414,17 +417,17 @@ async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyRe
     res
 }
-#[post("/users/<uuid>/deauth")]
-async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
-    let mut user = get_user_or_404(uuid, &mut conn).await?;
+#[post("/users/<user_id>/deauth", format = "application/json")]
+async fn deauth_user(user_id: UserId, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
+    let mut user = get_user_or_404(&user_id, &mut conn).await?;

-    nt.send_logout(&user, None).await;
+    nt.send_logout(&user, None, &mut conn).await;

     if CONFIG.push_enabled() {
         for device in Device::find_push_devices_by_user(&user.uuid, &mut conn).await {
-            match unregister_push_device(device.push_uuid).await {
+            match unregister_push_device(&device.push_uuid).await {
                 Ok(r) => r,
-                Err(e) => error!("Unable to unregister devices from Bitwarden server: {}", e),
+                Err(e) => error!("Unable to unregister devices from Bitwarden server: {e}"),
             };
         }
     }
@@ -435,47 +438,49 @@ async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notif
     user.save(&mut conn).await
 }

-#[post("/users/<uuid>/disable")]
-async fn disable_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
-    let mut user = get_user_or_404(uuid, &mut conn).await?;
+#[post("/users/<user_id>/disable", format = "application/json")]
+async fn disable_user(user_id: UserId, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
+    let mut user = get_user_or_404(&user_id, &mut conn).await?;
     Device::delete_all_by_user(&user.uuid, &mut conn).await?;
     user.reset_security_stamp();
     user.enabled = false;

     let save_result = user.save(&mut conn).await;

-    nt.send_logout(&user, None).await;
+    nt.send_logout(&user, None, &mut conn).await;

     save_result
 }

-#[post("/users/<uuid>/enable")]
-async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    let mut user = get_user_or_404(uuid, &mut conn).await?;
+#[post("/users/<user_id>/enable", format = "application/json")]
+async fn enable_user(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    let mut user = get_user_or_404(&user_id, &mut conn).await?;
     user.enabled = true;

     user.save(&mut conn).await
 }

-#[post("/users/<uuid>/remove-2fa")]
-async fn remove_2fa(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    let mut user = get_user_or_404(uuid, &mut conn).await?;
+#[post("/users/<user_id>/remove-2fa", format = "application/json")]
+async fn remove_2fa(user_id: UserId, token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    let mut user = get_user_or_404(&user_id, &mut conn).await?;
     TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
-    two_factor::enforce_2fa_policy(&user, ACTING_ADMIN_USER, 14, &token.ip.ip, &mut conn).await?;
+    two_factor::enforce_2fa_policy(&user, &ACTING_ADMIN_USER.into(), 14, &token.ip.ip, &mut conn).await?;
     user.totp_recover = None;
     user.save(&mut conn).await
 }

-#[post("/users/<uuid>/invite/resend")]
-async fn resend_user_invite(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    if let Some(user) = User::find_by_uuid(uuid, &mut conn).await {
+#[post("/users/<user_id>/invite/resend", format = "application/json")]
+async fn resend_user_invite(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    if let Some(user) = User::find_by_uuid(&user_id, &mut conn).await {
         //TODO: replace this with user.status check when it will be available (PR#3397)
         if !user.password_hash.is_empty() {
             err_code!("User already accepted invitation", Status::BadRequest.code);
         }

         if CONFIG.mail_enabled() {
-            mail::send_invite(&user, None, None, &CONFIG.invitation_org_name(), None).await
+            let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
+            let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
+            mail::send_invite(&user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
         } else {
             Ok(())
         }
@@ -485,42 +490,41 @@ async fn resend_user_invite(uuid: &str, _token: AdminToken, mut conn: DbConn) ->
 }

 #[derive(Debug, Deserialize)]
-struct UserOrgTypeData {
+struct MembershipTypeData {
     user_type: NumberOrString,
-    user_uuid: String,
-    org_uuid: String,
+    user_uuid: UserId,
+    org_uuid: OrganizationId,
 }

-#[post("/users/org_type", data = "<data>")]
-async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    let data: UserOrgTypeData = data.into_inner();
+#[post("/users/org_type", format = "application/json", data = "<data>")]
+async fn update_membership_type(data: Json<MembershipTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    let data: MembershipTypeData = data.into_inner();

-    let Some(mut user_to_edit) =
-        UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await
+    let Some(mut member_to_edit) = Membership::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await
     else {
         err!("The specified user isn't member of the organization")
     };

-    let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
+    let new_type = match MembershipType::from_str(&data.user_type.into_string()) {
         Some(new_type) => new_type as i32,
         None => err!("Invalid type"),
     };

-    if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
+    if member_to_edit.atype == MembershipType::Owner && new_type != MembershipType::Owner {
         // Removing owner permission, check that there is at least one other confirmed owner
-        if UserOrganization::count_confirmed_by_org_and_type(&data.org_uuid, UserOrgType::Owner, &mut conn).await <= 1 {
+        if Membership::count_confirmed_by_org_and_type(&data.org_uuid, MembershipType::Owner, &mut conn).await <= 1 {
             err!("Can't change the type of the last owner")
         }
     }

-    // This check is also done at api::organizations::{accept_invite(), _confirm_invite, _activate_user(), edit_user()}, update_user_org_type
+    // This check is also done at api::organizations::{accept_invite, _confirm_invite, _activate_member, edit_member}, update_membership_type
     // It returns different error messages per function.
-    if new_type < UserOrgType::Admin {
-        match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, &user_to_edit.org_uuid, true, &mut conn).await {
+    if new_type < MembershipType::Admin {
+        match OrgPolicy::is_user_allowed(&member_to_edit.user_uuid, &member_to_edit.org_uuid, true, &mut conn).await {
             Ok(_) => {}
             Err(OrgPolicyErr::TwoFactorMissing) => {
                 if CONFIG.email_2fa_auto_fallback() {
-                    two_factor::email::find_and_activate_email_2fa(&user_to_edit.user_uuid, &mut conn).await?;
+                    two_factor::email::find_and_activate_email_2fa(&member_to_edit.user_uuid, &mut conn).await?;
                 } else {
                     err!("You cannot modify this user to this type because they have not setup 2FA");
                 }
@@ -533,20 +537,20 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
     log_event(
         EventType::OrganizationUserUpdated as i32,
-        &user_to_edit.uuid,
+        &member_to_edit.uuid,
         &data.org_uuid,
-        ACTING_ADMIN_USER,
+        &ACTING_ADMIN_USER.into(),
         14, // Use UnknownBrowser type
         &token.ip.ip,
         &mut conn,
     )
     .await;

-    user_to_edit.atype = new_type;
-    user_to_edit.save(&mut conn).await
+    member_to_edit.atype = new_type;
+    member_to_edit.save(&mut conn).await
 }

-#[post("/users/update_revision")]
+#[post("/users/update_revision", format = "application/json")]
 async fn update_revision_users(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
     User::update_all_revisions(&mut conn).await
 }
@@ -557,7 +561,7 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
     let mut organizations_json = Vec::with_capacity(organizations.len());
     for o in organizations {
         let mut org = o.to_json();
-        org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &mut conn).await);
+        org["user_count"] = json!(Membership::count_by_org(&o.uuid, &mut conn).await);
         org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &mut conn).await);
         org["collection_count"] = json!(Collection::count_by_org(&o.uuid, &mut conn).await);
         org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
@@ -571,9 +575,9 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
     Ok(Html(text))
 }

-#[post("/organizations/<uuid>/delete")]
-async fn delete_organization(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
-    let org = Organization::find_by_uuid(uuid, &mut conn).await.map_res("Organization doesn't exist")?;
+#[post("/organizations/<org_id>/delete", format = "application/json")]
+async fn delete_organization(org_id: OrganizationId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
+    let org = Organization::find_by_uuid(&org_id, &mut conn).await.map_res("Organization doesn't exist")?;
     org.delete(&mut conn).await
 }
@@ -587,20 +591,14 @@ struct GitCommit {
     sha: String,
 }

-#[derive(Deserialize)]
-struct TimeApi {
-    year: u16,
-    month: u8,
-    day: u8,
-    hour: u8,
-    minute: u8,
-    seconds: u8,
-}
-
 async fn get_json_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
     Ok(make_http_request(Method::GET, url)?.send().await?.error_for_status()?.json::<T>().await?)
 }

+async fn get_text_api(url: &str) -> Result<String, Error> {
+    Ok(make_http_request(Method::GET, url)?.send().await?.error_for_status()?.text().await?)
+}
+
 async fn has_http_access() -> bool {
     let Ok(req) = make_http_request(Method::HEAD, "https://github.com/dani-garcia/vaultwarden") else {
         return false;
@@ -612,9 +610,10 @@ async fn has_http_access() -> bool {
 }

 use cached::proc_macro::cached;
-/// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already.
-/// It will cache this function for 300 seconds (5 minutes) which should prevent the exhaustion of the rate limit.
-#[cached(time = 300, sync_writes = true)]
+/// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already
+/// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit
+/// Any cache will be lost if Vaultwarden is restarted
+#[cached(time = 600, sync_writes = "default")]
 async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) {
     // If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
     if has_http_access {
@@ -632,7 +631,7 @@ async fn get_release_info(has_http_access: bool, running_within_container: bool)
             }
             _ => "-".to_string(),
         },
-        // Do not fetch the web-vault version when running within a container.
+        // Do not fetch the web-vault version when running within a container
         // The web-vault version is embedded within the container it self, and should not be updated manually
         if running_within_container {
             "-".to_string()
@@ -654,17 +653,18 @@ async fn get_release_info(has_http_access: bool, running_within_container: bool)
 async fn get_ntp_time(has_http_access: bool) -> String {
     if has_http_access {
-        if let Ok(ntp_time) = get_json_api::<TimeApi>("https://www.timeapi.io/api/Time/current/zone?timeZone=UTC").await
-        {
-            return format!(
-                "{year}-{month:02}-{day:02} {hour:02}:{minute:02}:{seconds:02} UTC",
-                year = ntp_time.year,
-                month = ntp_time.month,
-                day = ntp_time.day,
-                hour = ntp_time.hour,
-                minute = ntp_time.minute,
-                seconds = ntp_time.seconds
-            );
+        if let Ok(cf_trace) = get_text_api("https://cloudflare.com/cdn-cgi/trace").await {
+            for line in cf_trace.lines() {
+                if let Some((key, value)) = line.split_once('=') {
+                    if key == "ts" {
+                        let ts = value.split_once('.').map_or(value, |(s, _)| s);
+                        if let Ok(dt) = chrono::DateTime::parse_from_str(ts, "%s") {
+                            return dt.format("%Y-%m-%d %H:%M:%S UTC").to_string();
+                        }
+                        break;
+                    }
+                }
+            }
         }
     }
     String::from("Unable to fetch NTP time.")
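The time check above swaps the timeapi.io JSON lookup for Cloudflare's `/cdn-cgi/trace` endpoint, which returns plain `key=value` lines, and formats the `ts` epoch seconds with chrono. As a rough illustration of the same parsing, here is a std-only sketch; the sample trace body and the helper names are hypothetical, and the epoch-to-civil conversion stands in for what `chrono` does in the real code:

```rust
// Howard Hinnant's days-to-civil algorithm: convert days since the Unix
// epoch into a (year, month, day) triple without any date library.
fn civil_from_days(z: i64) -> (i64, u32, u32) {
    let z = z + 719468;
    let era = if z >= 0 { z } else { z - 146096 } / 146097;
    let doe = z - era * 146097; // day of era [0, 146096]
    let yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365; // year of era [0, 399]
    let y = yoe + era * 400;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100); // day of year, March-based
    let mp = (5 * doy + 2) / 153;
    let d = (doy - (153 * mp + 2) / 5 + 1) as u32;
    let m = (if mp < 10 { mp + 3 } else { mp - 9 }) as u32;
    (if m <= 2 { y + 1 } else { y }, m, d)
}

// Find the `ts` line in a /cdn-cgi/trace body, drop the fractional part of
// the epoch timestamp, and format it as "YYYY-MM-DD HH:MM:SS UTC".
fn trace_timestamp(body: &str) -> Option<String> {
    for line in body.lines() {
        if let Some((key, value)) = line.split_once('=') {
            if key == "ts" {
                // e.g. "1716928800.123" -> "1716928800"
                let secs: i64 = value.split_once('.').map_or(value, |(s, _)| s).parse().ok()?;
                let (days, rem) = (secs.div_euclid(86_400), secs.rem_euclid(86_400));
                let (y, m, d) = civil_from_days(days);
                return Some(format!(
                    "{y}-{m:02}-{d:02} {:02}:{:02}:{:02} UTC",
                    rem / 3600,
                    (rem % 3600) / 60,
                    rem % 60
                ));
            }
        }
    }
    None
}

fn main() {
    // Hypothetical trace body; the real endpoint emits one key=value per line.
    let body = "fl=123\nip=203.0.113.7\nts=1716928800.123\ncolo=AMS\n";
    println!("{}", trace_timestamp(body).unwrap()); // 2024-05-28 20:40:00 UTC
}
```

Returning `None` when no `ts` line parses mirrors the diff's behavior of falling through to the "Unable to fetch NTP time." string.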
@@ -697,6 +697,16 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
     // Get current running versions
     let web_vault_version = get_web_vault_version();

+    // Check if the running version is newer than the latest stable released version
+    let web_vault_pre_release = if let Ok(web_ver_match) = semver::VersionReq::parse(&format!(">{latest_web_build}")) {
+        web_ver_match.matches(
+            &semver::Version::parse(&web_vault_version).unwrap_or_else(|_| semver::Version::parse("2025.1.1").unwrap()),
+        )
+    } else {
+        error!("Unable to parse latest_web_build: '{latest_web_build}'");
+        false
+    };
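The new diagnostics flag reduces to a version comparison: the running web-vault counts as a pre-release when it sorts above the latest published build, with a safe fallback when either side fails to parse. A std-only sketch of that logic under simplified assumptions (plain `major.minor.patch` strings only; the real code uses the `semver` crate, which also handles pre-release tags and build metadata):

```rust
// Parse a plain "major.minor.patch" string; None on anything malformed.
fn parse_ver(v: &str) -> Option<(u64, u64, u64)> {
    let mut it = v.trim().split('.').map(|p| p.parse::<u64>().ok());
    match (it.next(), it.next(), it.next()) {
        (Some(Some(maj)), Some(Some(min)), Some(Some(pat))) => Some((maj, min, pat)),
        _ => None,
    }
}

// The running build is a "pre-release" when it is strictly newer than the
// latest published build. Unparsable input falls back to false, matching
// the defensive fallback the fix introduces.
fn is_pre_release(running: &str, latest: &str) -> bool {
    match (parse_ver(running), parse_ver(latest)) {
        // Lexicographic tuple comparison matches semver ordering for plain versions.
        (Some(r), Some(l)) => r > l,
        _ => false,
    }
}

fn main() {
    assert!(is_pre_release("2025.5.1", "2025.5.0"));
    assert!(!is_pre_release("2025.4.2", "2025.5.0"));
    assert!(!is_pre_release("garbage", "2025.5.0"));
    println!("ok");
}
```

The fallback is the point of the linked fix: a malformed version string must yield `false` (and a log line) rather than a crash in the admin diagnostics page.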
     let diagnostics_json = json!({
         "dns_resolved": dns_resolved,
         "current_release": VERSION,
@@ -705,6 +715,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
         "web_vault_enabled": &CONFIG.web_vault_enabled(),
         "web_vault_version": web_vault_version,
         "latest_web_build": latest_web_build,
+        "web_vault_pre_release": web_vault_pre_release,
         "running_within_container": running_within_container,
         "container_base_image": if running_within_container { container_base_image() } else { "Not applicable" },
         "has_http_access": has_http_access,
@@ -720,6 +731,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
         "overrides": &CONFIG.get_overrides().join(", "),
         "host_arch": env::consts::ARCH,
         "host_os": env::consts::OS,
+        "tz_env": env::var("TZ").unwrap_or_default(),
         "server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
         "server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the server date/time check as late as possible to minimize the time difference
         "ntp_time": get_ntp_time(has_http_access).await, // Run the ntp check as late as possible to minimize the time difference
@@ -729,7 +741,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
     Ok(Html(text))
 }

-#[get("/diagnostics/config")]
+#[get("/diagnostics/config", format = "application/json")]
 fn get_diagnostics_config(_token: AdminToken) -> Json<Value> {
     let support_json = CONFIG.get_support_json();
     Json(support_json)
@@ -740,16 +752,16 @@ fn get_diagnostics_http(code: u16, _token: AdminToken) -> EmptyResult {
     err_code!(format!("Testing error {code} response"), code);
 }

-#[post("/config", data = "<data>")]
+#[post("/config", format = "application/json", data = "<data>")]
 fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
     let data: ConfigBuilder = data.into_inner();
-    if let Err(e) = CONFIG.update_config(data) {
+    if let Err(e) = CONFIG.update_config(data, true) {
         err!(format!("Unable to save config: {e:?}"))
     }
     Ok(())
 }

-#[post("/config/delete")]
+#[post("/config/delete", format = "application/json")]
 fn delete_config(_token: AdminToken) -> EmptyResult {
     if let Err(e) = CONFIG.delete_user_config() {
         err!(format!("Unable to delete config: {e:?}"))
@@ -757,7 +769,7 @@ fn delete_config(_token: AdminToken) -> EmptyResult {
     Ok(())
 }

-#[post("/config/backup_db")]
+#[post("/config/backup_db", format = "application/json")]
 async fn backup_db(_token: AdminToken, mut conn: DbConn) -> ApiResult<String> {
     if *CAN_BACKUP {
         match backup_database(&mut conn).await {

View File

@@ -30,6 +30,7 @@ pub fn routes() -> Vec<rocket::Route> {
         profile,
         put_profile,
         post_profile,
+        put_avatar,
         get_public_keys,
         post_keys,
         post_password,
@@ -42,9 +43,8 @@ pub fn routes() -> Vec<rocket::Route> {
         post_verify_email_token,
         post_delete_recover,
         post_delete_recover_token,
-        post_device_token,
-        delete_account,
         post_delete_account,
+        delete_account,
         revision_date,
         password_hint,
         prelogin,
@@ -52,7 +52,9 @@ pub fn routes() -> Vec<rocket::Route> {
        api_key,
         rotate_api_key,
         get_known_device,
-        put_avatar,
+        get_all_devices,
+        get_device,
+        post_device_token,
         put_device_token,
         put_clear_device_token,
         post_clear_device_token,
@@ -68,18 +70,31 @@ pub fn routes() -> Vec<rocket::Route> {
 #[serde(rename_all = "camelCase")]
 pub struct RegisterData {
     email: String,
     kdf: Option<i32>,
     kdf_iterations: Option<i32>,
     kdf_memory: Option<i32>,
     kdf_parallelism: Option<i32>,
+    #[serde(alias = "userSymmetricKey")]
     key: String,
+    #[serde(alias = "userAsymmetricKeys")]
     keys: Option<KeysData>,
     master_password_hash: String,
     master_password_hint: Option<String>,
     name: Option<String>,
-    token: Option<String>,
     #[allow(dead_code)]
-    organization_user_id: Option<String>,
+    organization_user_id: Option<MembershipId>,
+    // Used only from the register/finish endpoint
+    email_verification_token: Option<String>,
+    accept_emergency_access_id: Option<EmergencyAccessId>,
+    accept_emergency_access_invite_token: Option<String>,
+    #[serde(alias = "token")]
+    org_invite_token: Option<String>,
 }

 #[derive(Debug, Deserialize)]
@@ -106,29 +121,93 @@ fn enforce_password_hint_setting(password_hint: &Option<String>) -> EmptyResult
     }
     Ok(())
 }

-async fn is_email_2fa_required(org_user_uuid: Option<String>, conn: &mut DbConn) -> bool {
+async fn is_email_2fa_required(member_id: Option<MembershipId>, conn: &mut DbConn) -> bool {
     if !CONFIG._enable_email_2fa() {
         return false;
     }
     if CONFIG.email_2fa_enforce_on_verified_invite() {
         return true;
     }
-    if org_user_uuid.is_some() {
-        return OrgPolicy::is_enabled_for_member(&org_user_uuid.unwrap(), OrgPolicyType::TwoFactorAuthentication, conn)
-            .await;
+    if let Some(member_id) = member_id {
+        return OrgPolicy::is_enabled_for_member(&member_id, OrgPolicyType::TwoFactorAuthentication, conn).await;
     }
     false
 }

 #[post("/accounts/register", data = "<data>")]
 async fn register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
-    _register(data, conn).await
+    _register(data, false, conn).await
 }

-pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult {
-    let data: RegisterData = data.into_inner();
+pub async fn _register(data: Json<RegisterData>, email_verification: bool, mut conn: DbConn) -> JsonResult {
+    let mut data: RegisterData = data.into_inner();
     let email = data.email.to_lowercase();
+
+    let mut email_verified = false;
+    let mut pending_emergency_access = None;
+
+    // First, validate the provided verification tokens
+    if email_verification {
+        match (
+            &data.email_verification_token,
+            &data.accept_emergency_access_id,
+            &data.accept_emergency_access_invite_token,
+            &data.organization_user_id,
+            &data.org_invite_token,
+        ) {
+            // Normal user registration, when email verification is required
+            (Some(email_verification_token), None, None, None, None) => {
+                let claims = crate::auth::decode_register_verify(email_verification_token)?;
+                if claims.sub != data.email {
+                    err!("Email verification token does not match email");
+                }
+                // During this call we don't get the name, so extract it from the claims
+                if claims.name.is_some() {
+                    data.name = claims.name;
+                }
+                email_verified = claims.verified;
+            }
+            // Emergency access registration
+            (None, Some(accept_emergency_access_id), Some(accept_emergency_access_invite_token), None, None) => {
+                if !CONFIG.emergency_access_allowed() {
+                    err!("Emergency access is not enabled.")
+                }
+                let claims = crate::auth::decode_emergency_access_invite(accept_emergency_access_invite_token)?;
+                if claims.email != data.email {
+                    err!("Claim email does not match email")
+                }
+                if &claims.emer_id != accept_emergency_access_id {
+                    err!("Claim emer_id does not match accept_emergency_access_id")
+                }
+                pending_emergency_access = Some((accept_emergency_access_id, claims));
+                email_verified = true;
+            }
+            // Org invite
+            (None, None, None, Some(organization_user_id), Some(org_invite_token)) => {
+                let claims = decode_invite(org_invite_token)?;
+                if claims.email != data.email {
+                    err!("Claim email does not match email")
+                }
+                if &claims.member_id != organization_user_id {
+                    err!("Claim org_user_id does not match organization_user_id")
+                }
+                email_verified = true;
+            }
+            _ => {
+                err!("Registration is missing required parameters")
+            }
+        }
+    }
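The token validation added above hinges on matching a tuple of `Option`s, so that exactly one registration flow is selected and every other combination of supplied tokens is rejected. A minimal, self-contained sketch of that dispatch pattern (`Flow`, `pick_flow`, and the `&str` token types are hypothetical stand-ins for the real claims and ID types):

```rust
// Each variant corresponds to one registration path in the diff above.
#[derive(Debug, PartialEq)]
enum Flow<'a> {
    EmailVerification(&'a str),
    EmergencyAccess(&'a str, &'a str),
    OrgInvite(&'a str, &'a str),
}

// Matching on the whole tuple at once makes the valid combinations explicit
// and funnels every other mix of tokens into a single error arm.
fn pick_flow<'a>(
    email_token: Option<&'a str>,
    emer_id: Option<&'a str>,
    emer_token: Option<&'a str>,
    member_id: Option<&'a str>,
    org_token: Option<&'a str>,
) -> Result<Flow<'a>, &'static str> {
    match (email_token, emer_id, emer_token, member_id, org_token) {
        (Some(t), None, None, None, None) => Ok(Flow::EmailVerification(t)),
        (None, Some(id), Some(t), None, None) => Ok(Flow::EmergencyAccess(id, t)),
        (None, None, None, Some(id), Some(t)) => Ok(Flow::OrgInvite(id, t)),
        _ => Err("Registration is missing required parameters"),
    }
}

fn main() {
    assert_eq!(pick_flow(Some("jwt"), None, None, None, None), Ok(Flow::EmailVerification("jwt")));
    // An emergency-access ID without its invite token is rejected.
    assert!(pick_flow(None, Some("emer1"), None, None, None).is_err());
    println!("ok");
}
```

The payoff of the tuple match over a chain of `if let`s is that mixed or partial token sets (say, an org invite token plus an emergency-access ID) cannot silently fall into the wrong branch.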
     // Check if the length of the username exceeds 50 characters (Same is Upstream Bitwarden)
     // This also prevents issues with very long usernames causing to large JWT's. See #2419
     if let Some(ref name) = data.name {
@@ -142,28 +221,25 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
     let password_hint = clean_password_hint(&data.master_password_hint);
     enforce_password_hint_setting(&password_hint)?;

-    let mut verified_by_invite = false;
-
     let mut user = match User::find_by_mail(&email, &mut conn).await {
-        Some(mut user) => {
+        Some(user) => {
             if !user.password_hash.is_empty() {
                 err!("Registration not allowed or user already exists")
             }

-            if let Some(token) = data.token {
+            if let Some(token) = data.org_invite_token {
                 let claims = decode_invite(&token)?;
                 if claims.email == email {
                     // Verify the email address when signing up via a valid invite token
-                    verified_by_invite = true;
-                    user.verified_at = Some(Utc::now().naive_utc());
+                    email_verified = true;
                     user
                 } else {
                     err!("Registration email does not match invite email")
                 }
             } else if Invitation::take(&email, &mut conn).await {
-                for user_org in UserOrganization::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
-                    user_org.status = UserOrgStatus::Accepted as i32;
-                    user_org.save(&mut conn).await?;
+                for membership in Membership::find_invited_by_user(&user.uuid, &mut conn).await.iter_mut() {
+                    membership.status = MembershipStatus::Accepted as i32;
+                    membership.save(&mut conn).await?;
                 }
                 user
             } else if CONFIG.is_signup_allowed(&email)
@@ -179,7 +255,10 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
             // Order is important here; the invitation check must come first
             // because the vaultwarden admin can invite anyone, regardless
             // of other signup restrictions.
-            if Invitation::take(&email, &mut conn).await || CONFIG.is_signup_allowed(&email) {
+            if Invitation::take(&email, &mut conn).await
+                || CONFIG.is_signup_allowed(&email)
+                || pending_emergency_access.is_some()
+            {
                 User::new(email.clone())
             } else {
                 err!("Registration not allowed or user already exists")
@@ -214,17 +293,21 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
         user.public_key = Some(keys.public_key);
     }

+    if email_verified {
+        user.verified_at = Some(Utc::now().naive_utc());
+    }
+
     if CONFIG.mail_enabled() {
-        if CONFIG.signups_verify() && !verified_by_invite {
+        if CONFIG.signups_verify() && !email_verified {
             if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid).await {
-                error!("Error sending welcome email: {:#?}", e);
+                error!("Error sending welcome email: {e:#?}");
             }
             user.last_verifying_at = Some(user.created_at);
         } else if let Err(e) = mail::send_welcome(&user.email).await {
-            error!("Error sending welcome email: {:#?}", e);
+            error!("Error sending welcome email: {e:#?}");
         }

-        if verified_by_invite && is_email_2fa_required(data.organization_user_id, &mut conn).await {
+        if email_verified && is_email_2fa_required(data.organization_user_id, &mut conn).await {
             email::activate_email_2fa(&user, &mut conn).await.ok();
         }
     }
@@ -253,7 +336,6 @@ async fn profile(headers: Headers, mut conn: DbConn) -> Json<Value> {
 #[serde(rename_all = "camelCase")]
 struct ProfileData {
     // culture: String, // Ignored, always use en-US
-    // masterPasswordHint: Option<String>, // Ignored, has been moved to ChangePassData
     name: String,
 }
@@ -305,9 +387,9 @@ async fn put_avatar(data: Json<AvatarData>, headers: Headers, mut conn: DbConn)
     Ok(Json(user.to_json(&mut conn).await))
 }
-#[get("/users/<uuid>/public-key")]
-async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
-    let user = match User::find_by_uuid(uuid, &mut conn).await {
+#[get("/users/<user_id>/public-key")]
+async fn get_public_keys(user_id: UserId, _headers: Headers, mut conn: DbConn) -> JsonResult {
+    let user = match User::find_by_uuid(&user_id, &mut conn).await {
         Some(user) if user.public_key.is_some() => user,
         Some(_) => err_code!("User has no public_key", Status::NotFound.code),
         None => err_code!("User doesn't exist", Status::NotFound.code),
@@ -366,7 +448,12 @@ async fn post_password(data: Json<ChangePassData>, headers: Headers, mut conn: D
         &data.new_master_password_hash,
         Some(data.key),
         true,
-        Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
+        Some(vec![
+            String::from("post_rotatekey"),
+            String::from("get_contacts"),
+            String::from("get_public_keys"),
+            String::from("get_api_webauthn"),
+        ]),
     );
     let save_result = user.save(&mut conn).await;
@@ -374,7 +461,7 @@ async fn post_password(data: Json<ChangePassData>, headers: Headers, mut conn: D
     // Prevent logging out the client where the user requested this endpoint from.
     // If you do logout the user it will causes issues at the client side.
     // Adding the device uuid will prevent this.
-    nt.send_logout(&user, Some(headers.device.uuid)).await;
+    nt.send_logout(&user, Some(headers.device.uuid.clone()), &mut conn).await;
     save_result
 }
@@ -434,7 +521,7 @@ async fn post_kdf(data: Json<ChangeKdfData>, headers: Headers, mut conn: DbConn,
     user.set_password(&data.new_master_password_hash, Some(data.key), true, None);
     let save_result = user.save(&mut conn).await;
-    nt.send_logout(&user, Some(headers.device.uuid)).await;
+    nt.send_logout(&user, Some(headers.device.uuid.clone()), &mut conn).await;
     save_result
 }
@@ -445,21 +532,21 @@ struct UpdateFolderData {
     // There is a bug in 2024.3.x which adds a `null` item.
     // To bypass this we allow a Option here, but skip it during the updates
     // See: https://github.com/bitwarden/clients/issues/8453
-    id: Option<String>,
+    id: Option<FolderId>,
     name: String,
 }
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct UpdateEmergencyAccessData {
-    id: String,
+    id: EmergencyAccessId,
     key_encrypted: String,
 }
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct UpdateResetPasswordData {
-    organization_id: String,
+    organization_id: OrganizationId,
     reset_password_key: String,
 }
@@ -484,48 +571,49 @@ fn validate_keydata(
     existing_ciphers: &[Cipher],
     existing_folders: &[Folder],
     existing_emergency_access: &[EmergencyAccess],
-    existing_user_orgs: &[UserOrganization],
+    existing_memberships: &[Membership],
     existing_sends: &[Send],
 ) -> EmptyResult {
     // Check that we're correctly rotating all the user's ciphers
-    let existing_cipher_ids = existing_ciphers.iter().map(|c| c.uuid.as_str()).collect::<HashSet<_>>();
+    let existing_cipher_ids = existing_ciphers.iter().map(|c| &c.uuid).collect::<HashSet<&CipherId>>();
     let provided_cipher_ids = data
         .ciphers
         .iter()
         .filter(|c| c.organization_id.is_none())
-        .filter_map(|c| c.id.as_deref())
-        .collect::<HashSet<_>>();
+        .filter_map(|c| c.id.as_ref())
+        .collect::<HashSet<&CipherId>>();
     if !provided_cipher_ids.is_superset(&existing_cipher_ids) {
         err!("All existing ciphers must be included in the rotation")
     }
     // Check that we're correctly rotating all the user's folders
-    let existing_folder_ids = existing_folders.iter().map(|f| f.uuid.as_str()).collect::<HashSet<_>>();
-    let provided_folder_ids = data.folders.iter().filter_map(|f| f.id.as_deref()).collect::<HashSet<_>>();
+    let existing_folder_ids = existing_folders.iter().map(|f| &f.uuid).collect::<HashSet<&FolderId>>();
+    let provided_folder_ids = data.folders.iter().filter_map(|f| f.id.as_ref()).collect::<HashSet<&FolderId>>();
     if !provided_folder_ids.is_superset(&existing_folder_ids) {
         err!("All existing folders must be included in the rotation")
     }
     // Check that we're correctly rotating all the user's emergency access keys
     let existing_emergency_access_ids =
-        existing_emergency_access.iter().map(|ea| ea.uuid.as_str()).collect::<HashSet<_>>();
+        existing_emergency_access.iter().map(|ea| &ea.uuid).collect::<HashSet<&EmergencyAccessId>>();
     let provided_emergency_access_ids =
-        data.emergency_access_keys.iter().map(|ea| ea.id.as_str()).collect::<HashSet<_>>();
+        data.emergency_access_keys.iter().map(|ea| &ea.id).collect::<HashSet<&EmergencyAccessId>>();
     if !provided_emergency_access_ids.is_superset(&existing_emergency_access_ids) {
         err!("All existing emergency access keys must be included in the rotation")
     }
     // Check that we're correctly rotating all the user's reset password keys
-    let existing_reset_password_ids = existing_user_orgs.iter().map(|uo| uo.org_uuid.as_str()).collect::<HashSet<_>>();
+    let existing_reset_password_ids =
+        existing_memberships.iter().map(|m| &m.org_uuid).collect::<HashSet<&OrganizationId>>();
     let provided_reset_password_ids =
-        data.reset_password_keys.iter().map(|rp| rp.organization_id.as_str()).collect::<HashSet<_>>();
+        data.reset_password_keys.iter().map(|rp| &rp.organization_id).collect::<HashSet<&OrganizationId>>();
     if !provided_reset_password_ids.is_superset(&existing_reset_password_ids) {
         err!("All existing reset password keys must be included in the rotation")
     }
     // Check that we're correctly rotating all the user's sends
-    let existing_send_ids = existing_sends.iter().map(|s| s.uuid.as_str()).collect::<HashSet<_>>();
-    let provided_send_ids = data.sends.iter().filter_map(|s| s.id.as_deref()).collect::<HashSet<_>>();
+    let existing_send_ids = existing_sends.iter().map(|s| &s.uuid).collect::<HashSet<&SendId>>();
+    let provided_send_ids = data.sends.iter().filter_map(|s| s.id.as_ref()).collect::<HashSet<&SendId>>();
     if !provided_send_ids.is_superset(&existing_send_ids) {
         err!("All existing sends must be included in the rotation")
     }
@@ -548,24 +636,24 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
     // TODO: See if we can optimize the whole cipher adding/importing and prevent duplicate code and checks.
     Cipher::validate_cipher_data(&data.ciphers)?;
-    let user_uuid = &headers.user.uuid;
+    let user_id = &headers.user.uuid;
     // TODO: Ideally we'd do everything after this point in a single transaction.
-    let mut existing_ciphers = Cipher::find_owned_by_user(user_uuid, &mut conn).await;
-    let mut existing_folders = Folder::find_by_user(user_uuid, &mut conn).await;
-    let mut existing_emergency_access = EmergencyAccess::find_all_by_grantor_uuid(user_uuid, &mut conn).await;
-    let mut existing_user_orgs = UserOrganization::find_by_user(user_uuid, &mut conn).await;
+    let mut existing_ciphers = Cipher::find_owned_by_user(user_id, &mut conn).await;
+    let mut existing_folders = Folder::find_by_user(user_id, &mut conn).await;
+    let mut existing_emergency_access = EmergencyAccess::find_all_by_grantor_uuid(user_id, &mut conn).await;
+    let mut existing_memberships = Membership::find_by_user(user_id, &mut conn).await;
     // We only rotate the reset password key if it is set.
-    existing_user_orgs.retain(|uo| uo.reset_password_key.is_some());
-    let mut existing_sends = Send::find_by_user(user_uuid, &mut conn).await;
+    existing_memberships.retain(|m| m.reset_password_key.is_some());
+    let mut existing_sends = Send::find_by_user(user_id, &mut conn).await;
     validate_keydata(
         &data,
         &existing_ciphers,
         &existing_folders,
         &existing_emergency_access,
-        &existing_user_orgs,
+        &existing_memberships,
         &existing_sends,
     )?;
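Only memberships that actually hold a reset-password key take part in the rotation; the rest are dropped in place with `Vec::retain` before `validate_keydata` runs. In isolation (a sketch with a stand-in struct, not the real `Membership` model):

```rust
// Stand-in for the membership record; only the fields relevant here.
struct MembershipStub {
    org_uuid: String,
    reset_password_key: Option<String>,
}

// In-place filter mirroring
// `existing_memberships.retain(|m| m.reset_password_key.is_some())`.
fn keep_rotatable(memberships: &mut Vec<MembershipStub>) {
    memberships.retain(|m| m.reset_password_key.is_some());
}
```

Filtering before validation matters: a membership without a reset-password key has nothing to re-encrypt, so requiring the client to supply a key for it would make every rotation fail.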
@@ -597,14 +685,14 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
     // Update reset password data
     for reset_password_data in data.reset_password_keys {
-        let Some(user_org) =
-            existing_user_orgs.iter_mut().find(|uo| uo.org_uuid == reset_password_data.organization_id)
+        let Some(membership) =
+            existing_memberships.iter_mut().find(|m| m.org_uuid == reset_password_data.organization_id)
         else {
             err!("Reset password doesn't exist")
         };
-        user_org.reset_password_key = Some(reset_password_data.reset_password_key);
-        user_org.save(&mut conn).await?
+        membership.reset_password_key = Some(reset_password_data.reset_password_key);
+        membership.save(&mut conn).await?
     }
     // Update send data
@@ -645,7 +733,7 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, mut conn: DbConn,
     // Prevent logging out the client where the user requested this endpoint from.
     // If you do logout the user it will causes issues at the client side.
     // Adding the device uuid will prevent this.
-    nt.send_logout(&user, Some(headers.device.uuid)).await;
+    nt.send_logout(&user, Some(headers.device.uuid.clone()), &mut conn).await;
     save_result
 }
@@ -661,7 +749,7 @@ async fn post_sstamp(data: Json<PasswordOrOtpData>, headers: Headers, mut conn:
     user.reset_security_stamp();
     let save_result = user.save(&mut conn).await;
-    nt.send_logout(&user, None).await;
+    nt.send_logout(&user, None, &mut conn).await;
     save_result
 }
@@ -687,6 +775,11 @@ async fn post_email_token(data: Json<EmailTokenData>, headers: Headers, mut conn
     }
     if User::find_by_mail(&data.new_email, &mut conn).await.is_some() {
+        if CONFIG.mail_enabled() {
+            if let Err(e) = mail::send_change_email_existing(&data.new_email, &user.email).await {
+                error!("Error sending change-email-existing email: {e:#?}");
+            }
+        }
         err!("Email already in use");
     }
@@ -698,10 +791,10 @@ async fn post_email_token(data: Json<EmailTokenData>, headers: Headers, mut conn
     if CONFIG.mail_enabled() {
         if let Err(e) = mail::send_change_email(&data.new_email, &token).await {
-            error!("Error sending change-email email: {:#?}", e);
+            error!("Error sending change-email email: {e:#?}");
         }
     } else {
-        debug!("Email change request for user ({}) to email ({}) with token ({})", user.uuid, data.new_email, token);
+        debug!("Email change request for user ({}) to email ({}) with token ({token})", user.uuid, data.new_email);
     }
     user.email_new = Some(data.new_email);
@@ -769,7 +862,7 @@ async fn post_email(data: Json<ChangeEmailData>, headers: Headers, mut conn: DbC
     let save_result = user.save(&mut conn).await;
-    nt.send_logout(&user, None).await;
+    nt.send_logout(&user, None, &mut conn).await;
     save_result
 }
@@ -783,7 +876,7 @@ async fn post_verify_email(headers: Headers) -> EmptyResult {
     }
     if let Err(e) = mail::send_verify_email(&user.email, &user.uuid).await {
-        error!("Error sending verify_email email: {:#?}", e);
+        error!("Error sending verify_email email: {e:#?}");
     }
     Ok(())
@@ -792,7 +885,7 @@ async fn post_verify_email(headers: Headers) -> EmptyResult {
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct VerifyEmailTokenData {
-    user_id: String,
+    user_id: UserId,
     token: String,
 }
@@ -807,14 +900,14 @@ async fn post_verify_email_token(data: Json<VerifyEmailTokenData>, mut conn: DbC
     let Ok(claims) = decode_verify_email(&data.token) else {
         err!("Invalid claim")
     };
-    if claims.sub != user.uuid {
+    if claims.sub != *user.uuid {
         err!("Invalid claim");
     }
     user.verified_at = Some(Utc::now().naive_utc());
     user.last_verifying_at = None;
     user.login_verify_count = 0;
     if let Err(e) = user.save(&mut conn).await {
-        error!("Error saving email verification: {:#?}", e);
+        error!("Error saving email verification: {e:#?}");
     }
     Ok(())
@@ -833,7 +926,7 @@ async fn post_delete_recover(data: Json<DeleteRecoverData>, mut conn: DbConn) ->
     if CONFIG.mail_enabled() {
         if let Some(user) = User::find_by_mail(&data.email, &mut conn).await {
             if let Err(e) = mail::send_delete_account(&user.email, &user.uuid).await {
-                error!("Error sending delete account email: {:#?}", e);
+                error!("Error sending delete account email: {e:#?}");
             }
         }
         Ok(())
@@ -849,7 +942,7 @@ async fn post_delete_recover(data: Json<DeleteRecoverData>, mut conn: DbConn) ->
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct DeleteRecoverTokenData {
-    user_id: String,
+    user_id: UserId,
     token: String,
 }
@@ -865,7 +958,7 @@ async fn post_delete_recover_token(data: Json<DeleteRecoverTokenData>, mut conn:
         err!("User doesn't exist")
     };
-    if claims.sub != user.uuid {
+    if claims.sub != *user.uuid {
         err!("Invalid claim");
     }
     user.delete(&mut conn).await
@@ -917,9 +1010,9 @@ async fn password_hint(data: Json<PasswordHintData>, mut conn: DbConn) -> EmptyR
         // paths that send mail take noticeably longer than ones that
         // don't. Add a randomized sleep to mitigate this somewhat.
         use rand::{rngs::SmallRng, Rng, SeedableRng};
-        let mut rng = SmallRng::from_entropy();
+        let mut rng = SmallRng::from_os_rng();
         let delta: i32 = 100;
-        let sleep_ms = (1_000 + rng.gen_range(-delta..=delta)) as u64;
+        let sleep_ms = (1_000 + rng.random_range(-delta..=delta)) as u64;
         tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
         Ok(())
     } else {
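The hunk above only renames the `rand` 0.9 API calls (`from_entropy` → `from_os_rng`, `gen_range` → `random_range`); the idea is unchanged: mask whether mail was sent by always sleeping roughly one second with random jitter. The bounds can be sketched without the `rand` crate by deriving cheap jitter from the clock (illustrative only; the real code uses `SmallRng`, which is the right tool):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Jitter a base delay by +/- delta milliseconds so mail-sending and
// non-mail paths are harder to distinguish by response time alone.
fn jittered_delay_ms(base: i32, delta: i32) -> u64 {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .subsec_nanos() as i32;
    // Map the sub-second nanosecond counter into [-delta, +delta].
    let jitter = nanos % (2 * delta + 1) - delta;
    (base + jitter) as u64
}
```

With `base = 1_000` and `delta = 100` the result always lands in `[900, 1_100]` ms, matching the range the endpoint sleeps for.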
@@ -967,7 +1060,7 @@ pub async fn _prelogin(data: Json<PreloginData>, mut conn: DbConn) -> Json<Value
     }))
 }
-// https://github.com/bitwarden/server/blob/master/src/Api/Models/Request/Accounts/SecretVerificationRequestModel.cs
+// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Auth/Models/Request/Accounts/SecretVerificationRequestModel.cs
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct SecretVerificationRequest {
@@ -1027,7 +1120,7 @@ async fn get_known_device(device: KnownDevice, mut conn: DbConn) -> JsonResult {
 struct KnownDevice {
     email: String,
-    uuid: String,
+    uuid: DeviceId,
 }
 #[rocket::async_trait]
@@ -1050,7 +1143,7 @@ impl<'r> FromRequest<'r> for KnownDevice {
         };
         let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
-            uuid.to_string()
+            uuid.to_string().into()
         } else {
             return Outcome::Error((Status::BadRequest, "X-Device-Identifier value is required"));
         };
@@ -1062,40 +1155,60 @@ impl<'r> FromRequest<'r> for KnownDevice {
     }
 }
+#[get("/devices")]
+async fn get_all_devices(headers: Headers, mut conn: DbConn) -> JsonResult {
+    let devices = Device::find_with_auth_request_by_user(&headers.user.uuid, &mut conn).await;
+    let devices = devices.iter().map(|device| device.to_json()).collect::<Vec<Value>>();
+    Ok(Json(json!({
+        "data": devices,
+        "continuationToken": null,
+        "object": "list"
+    })))
+}
+#[get("/devices/identifier/<device_id>")]
+async fn get_device(device_id: DeviceId, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let Some(device) = Device::find_by_uuid_and_user(&device_id, &headers.user.uuid, &mut conn).await else {
+        err!("No device found");
+    };
+    Ok(Json(device.to_json()))
+}
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct PushToken {
     push_token: String,
 }
-#[post("/devices/identifier/<uuid>/token", data = "<data>")]
-async fn post_device_token(uuid: &str, data: Json<PushToken>, headers: Headers, conn: DbConn) -> EmptyResult {
-    put_device_token(uuid, data, headers, conn).await
+#[post("/devices/identifier/<device_id>/token", data = "<data>")]
+async fn post_device_token(device_id: DeviceId, data: Json<PushToken>, headers: Headers, conn: DbConn) -> EmptyResult {
+    put_device_token(device_id, data, headers, conn).await
 }
-#[put("/devices/identifier/<uuid>/token", data = "<data>")]
-async fn put_device_token(uuid: &str, data: Json<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult {
+#[put("/devices/identifier/<device_id>/token", data = "<data>")]
+async fn put_device_token(
+    device_id: DeviceId,
+    data: Json<PushToken>,
+    headers: Headers,
+    mut conn: DbConn,
+) -> EmptyResult {
     let data = data.into_inner();
     let token = data.push_token;
     let Some(mut device) = Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await
     else {
-        err!(format!("Error: device {uuid} should be present before a token can be assigned"))
+        err!(format!("Error: device {device_id} should be present before a token can be assigned"))
     };
-    // if the device already has been registered
-    if device.is_registered() {
-        // check if the new token is the same as the registered token
-        if device.push_token.is_some() && device.push_token.unwrap() == token.clone() {
-            debug!("Device {} is already registered and token is the same", uuid);
-            return Ok(());
-        } else {
-            // Try to unregister already registered device
-            unregister_push_device(device.push_uuid).await.ok();
-        }
-        // clear the push_uuid
-        device.push_uuid = None;
-    }
+    // Check if the new token is the same as the registered token
+    // Although upstream seems to always register a device on login, we do not.
+    // Unless this causes issues, lets keep it this way, else we might need to also register on every login.
+    if device.push_token.as_ref() == Some(&token) {
+        debug!("Device {device_id} for user {} is already registered and token is identical", headers.user.uuid);
+        return Ok(());
+    }
     device.push_token = Some(token);
     if let Err(e) = device.save(&mut conn).await {
         err!(format!("An error occurred while trying to save the device push token: {e}"));
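The rewritten token check replaces `is_some()` + `unwrap()` + `clone()` with a single borrowing comparison. The idiom in isolation (a standalone sketch against a bare `Option<String>`, not the real `Device` type; it uses `as_deref()` where the diff compares `as_ref() == Some(&token)` against an owned `String`, but the effect is the same):

```rust
// Compare an optional stored token against an incoming token
// without cloning or unwrapping the stored value.
fn token_unchanged(stored: &Option<String>, incoming: &str) -> bool {
    // as_deref() turns &Option<String> into Option<&str>,
    // so the comparison is a cheap borrow-to-borrow equality test.
    stored.as_deref() == Some(incoming)
}
```

Besides being shorter, this avoids the original bug-prone shape: `device.push_token.unwrap()` moved the value out of the struct, which is why the old code needed `token.clone()` in the first place.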
@@ -1106,35 +1219,38 @@ async fn put_device_token(uuid: &str, data: Json<PushToken>, headers: Headers, m
     Ok(())
 }
-#[put("/devices/identifier/<uuid>/clear-token")]
-async fn put_clear_device_token(uuid: &str, mut conn: DbConn) -> EmptyResult {
+#[put("/devices/identifier/<device_id>/clear-token")]
+async fn put_clear_device_token(device_id: DeviceId, mut conn: DbConn) -> EmptyResult {
     // This only clears push token
-    // https://github.com/bitwarden/core/blob/master/src/Api/Controllers/DevicesController.cs#L109
-    // https://github.com/bitwarden/core/blob/master/src/Core/Services/Implementations/DeviceService.cs#L37
+    // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Controllers/DevicesController.cs#L215
+    // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/Services/Implementations/DeviceService.cs#L37
     // This is somehow not implemented in any app, added it in case it is required
+    // 2025: Also, it looks like it only clears the first found device upstream, which is probably faulty.
+    // This because currently multiple accounts could be on the same device/app and that would cause issues.
+    // Vaultwarden removes the push-token for all devices, but this probably means we should also unregister all these devices.
     if !CONFIG.push_enabled() {
         return Ok(());
     }
-    if let Some(device) = Device::find_by_uuid(uuid, &mut conn).await {
-        Device::clear_push_token_by_uuid(uuid, &mut conn).await?;
-        unregister_push_device(device.push_uuid).await?;
+    if let Some(device) = Device::find_by_uuid(&device_id, &mut conn).await {
+        Device::clear_push_token_by_uuid(&device_id, &mut conn).await?;
+        unregister_push_device(&device.push_uuid).await?;
     }
     Ok(())
 }
 // On upstream server, both PUT and POST are declared. Implementing the POST method in case it would be useful somewhere
-#[post("/devices/identifier/<uuid>/clear-token")]
-async fn post_clear_device_token(uuid: &str, conn: DbConn) -> EmptyResult {
-    put_clear_device_token(uuid, conn).await
+#[post("/devices/identifier/<device_id>/clear-token")]
+async fn post_clear_device_token(device_id: DeviceId, conn: DbConn) -> EmptyResult {
+    put_clear_device_token(device_id, conn).await
 }
 #[derive(Debug, Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct AuthRequestRequest {
     access_code: String,
-    device_identifier: String,
+    device_identifier: DeviceId,
     email: String,
     public_key: String,
     // Not used for now
@@ -1156,10 +1272,10 @@ async fn post_auth_request(
     };
     // Validate device uuid and type
-    match Device::find_by_uuid_and_user(&data.device_identifier, &user.uuid, &mut conn).await {
-        Some(device) if device.atype == client_headers.device_type => {}
-        _ => err!("AuthRequest doesn't exist", "Device verification failed"),
-    }
+    let device = match Device::find_by_uuid_and_user(&data.device_identifier, &user.uuid, &mut conn).await {
+        Some(device) if device.atype == client_headers.device_type => device,
+        _ => err!("AuthRequest doesn't exist", "Device verification failed"),
+    };
     let mut auth_request = AuthRequest::new(
         user.uuid.clone(),
@@ -1171,7 +1287,16 @@ async fn post_auth_request(
     );
     auth_request.save(&mut conn).await?;
-    nt.send_auth_request(&user.uuid, &auth_request.uuid, &data.device_identifier, &mut conn).await;
+    nt.send_auth_request(&user.uuid, &auth_request.uuid, &device, &mut conn).await;
+    log_user_event(
+        EventType::UserRequestedDeviceApproval as i32,
+        &user.uuid,
+        client_headers.device_type,
+        &client_headers.ip.ip,
+        &mut conn,
+    )
+    .await;
     Ok(Json(json!({
         "id": auth_request.uuid,
@@ -1188,16 +1313,17 @@ async fn post_auth_request(
     })))
 }
-#[get("/auth-requests/<uuid>")]
-async fn get_auth_request(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
-    let Some(auth_request) = AuthRequest::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
+#[get("/auth-requests/<auth_request_id>")]
+async fn get_auth_request(auth_request_id: AuthRequestId, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let Some(auth_request) = AuthRequest::find_by_uuid_and_user(&auth_request_id, &headers.user.uuid, &mut conn).await
+    else {
         err!("AuthRequest doesn't exist", "Record not found or user uuid does not match")
     };
     let response_date_utc = auth_request.response_date.map(|response_date| format_date(&response_date));
     Ok(Json(json!({
-        "id": uuid,
+        "id": &auth_request_id,
         "publicKey": auth_request.public_key,
         "requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
         "requestIpAddress": auth_request.request_ip,
@@ -1214,15 +1340,15 @@ async fn get_auth_request(uuid: &str, headers: Headers, mut conn: DbConn) -> Jso
 #[derive(Debug, Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct AuthResponseRequest {
-    device_identifier: String,
+    device_identifier: DeviceId,
     key: String,
     master_password_hash: Option<String>,
     request_approved: bool,
 }
-#[put("/auth-requests/<uuid>", data = "<data>")]
+#[put("/auth-requests/<auth_request_id>", data = "<data>")]
 async fn put_auth_request(
-    uuid: &str,
+    auth_request_id: AuthRequestId,
     data: Json<AuthResponseRequest>,
     headers: Headers,
     mut conn: DbConn,
@@ -1230,10 +1356,16 @@ async fn put_auth_request(
     nt: Notify<'_>,
 ) -> JsonResult {
     let data = data.into_inner();
-    let Some(mut auth_request) = AuthRequest::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else {
+    let Some(mut auth_request) =
+        AuthRequest::find_by_uuid_and_user(&auth_request_id, &headers.user.uuid, &mut conn).await
+    else {
         err!("AuthRequest doesn't exist", "Record not found or user uuid does not match")
     };
+    if headers.device.uuid != data.device_identifier {
+        err!("AuthRequest doesn't exist", "Device verification failed")
+    }
     if auth_request.approved.is_some() {
         err!("An authentication request with the same device already exists")
     }
@@ -1250,14 +1382,31 @@ async fn put_auth_request(
         auth_request.save(&mut conn).await?;
         ant.send_auth_response(&auth_request.user_uuid, &auth_request.uuid).await;
-        nt.send_auth_response(&auth_request.user_uuid, &auth_request.uuid, data.device_identifier, &mut conn).await;
+        nt.send_auth_response(&auth_request.user_uuid, &auth_request.uuid, &headers.device, &mut conn).await;
+        log_user_event(
+            EventType::OrganizationUserApprovedAuthRequest as i32,
+            &headers.user.uuid,
+            headers.device.atype,
+            &headers.ip.ip,
+            &mut conn,
+        )
+        .await;
     } else {
         // If denied, there's no reason to keep the request
         auth_request.delete(&mut conn).await?;
+        log_user_event(
+            EventType::OrganizationUserRejectedAuthRequest as i32,
+            &headers.user.uuid,
+            headers.device.atype,
+            &headers.ip.ip,
+            &mut conn,
+        )
+        .await;
     }
     Ok(Json(json!({
-        "id": uuid,
+        "id": &auth_request_id,
         "publicKey": auth_request.public_key,
         "requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
         "requestIpAddress": auth_request.request_ip,
@@ -1271,14 +1420,14 @@ async fn put_auth_request(
     })))
 }
-#[get("/auth-requests/<uuid>/response?<code>")]
+#[get("/auth-requests/<auth_request_id>/response?<code>")]
 async fn get_auth_request_response(
-    uuid: &str,
+    auth_request_id: AuthRequestId,
     code: &str,
     client_headers: ClientHeaders,
     mut conn: DbConn,
 ) -> JsonResult {
-    let Some(auth_request) = AuthRequest::find_by_uuid(uuid, &mut conn).await else {
+    let Some(auth_request) = AuthRequest::find_by_uuid(&auth_request_id, &mut conn).await else {
         err!("AuthRequest doesn't exist", "User not found")
     };
@@ -1292,7 +1441,7 @@ async fn get_auth_request_response(
     let response_date_utc = auth_request.response_date.map(|response_date| format_date(&response_date));
     Ok(Json(json!({
-        "id": uuid,
+        "id": &auth_request_id,
         "publicKey": auth_request.public_key,
         "requestDeviceType": DeviceType::from_i32(auth_request.device_type).to_string(),
         "requestIpAddress": auth_request.request_ip,

(File diff suppressed because it is too large.)

@@ -93,10 +93,10 @@ async fn get_grantees(headers: Headers, mut conn: DbConn) -> Json<Value> {
 }
 #[get("/emergency-access/<emer_id>")]
-async fn get_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
+async fn get_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
     check_emergency_access_enabled()?;
-    match EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await {
+    match EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await {
         Some(emergency_access) => Ok(Json(
             emergency_access.to_json_grantee_details(&mut conn).await.expect("Grantee user should exist but does not!"),
         )),
@@ -118,7 +118,7 @@ struct EmergencyAccessUpdateData {
#[put("/emergency-access/<emer_id>", data = "<data>")] #[put("/emergency-access/<emer_id>", data = "<data>")]
async fn put_emergency_access( async fn put_emergency_access(
emer_id: &str, emer_id: EmergencyAccessId,
data: Json<EmergencyAccessUpdateData>, data: Json<EmergencyAccessUpdateData>,
headers: Headers, headers: Headers,
conn: DbConn, conn: DbConn,
@@ -128,7 +128,7 @@ async fn put_emergency_access(
#[post("/emergency-access/<emer_id>", data = "<data>")] #[post("/emergency-access/<emer_id>", data = "<data>")]
async fn post_emergency_access( async fn post_emergency_access(
emer_id: &str, emer_id: EmergencyAccessId,
data: Json<EmergencyAccessUpdateData>, data: Json<EmergencyAccessUpdateData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
@@ -138,7 +138,7 @@ async fn post_emergency_access(
let data: EmergencyAccessUpdateData = data.into_inner(); let data: EmergencyAccessUpdateData = data.into_inner();
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -163,12 +163,12 @@ async fn post_emergency_access(
// region delete // region delete
#[delete("/emergency-access/<emer_id>")] #[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn delete_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let emergency_access = match ( let emergency_access = match (
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await, EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await,
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &headers.user.uuid, &mut conn).await, EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &headers.user.uuid, &mut conn).await,
) { ) {
(Some(grantor_emer), None) => { (Some(grantor_emer), None) => {
info!("Grantor deleted emergency access {emer_id}"); info!("Grantor deleted emergency access {emer_id}");
@@ -186,7 +186,7 @@ async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
} }
#[post("/emergency-access/<emer_id>/delete")] #[post("/emergency-access/<emer_id>/delete")]
async fn post_delete_emergency_access(emer_id: &str, headers: Headers, conn: DbConn) -> EmptyResult { async fn post_delete_emergency_access(emer_id: EmergencyAccessId, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn).await delete_emergency_access(emer_id, headers, conn).await
} }
@@ -227,7 +227,7 @@ async fn send_invite(data: Json<EmergencyAccessInviteData>, headers: Headers, mu
let (grantee_user, new_user) = match User::find_by_mail(&email, &mut conn).await { let (grantee_user, new_user) = match User::find_by_mail(&email, &mut conn).await {
None => { None => {
if !CONFIG.invitations_allowed() { if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", &email)) err!(format!("Grantee user does not exist: {email}"))
} }
if !CONFIG.is_email_domain_allowed(&email) { if !CONFIG.is_email_domain_allowed(&email) {
@@ -266,8 +266,8 @@ async fn send_invite(data: Json<EmergencyAccessInviteData>, headers: Headers, mu
if CONFIG.mail_enabled() { if CONFIG.mail_enabled() {
mail::send_emergency_access_invite( mail::send_emergency_access_invite(
&new_emergency_access.email.expect("Grantee email does not exists"), &new_emergency_access.email.expect("Grantee email does not exists"),
&grantee_user.uuid, grantee_user.uuid,
&new_emergency_access.uuid, new_emergency_access.uuid,
&grantor_user.name, &grantor_user.name,
&grantor_user.email, &grantor_user.email,
) )
@@ -281,11 +281,11 @@ async fn send_invite(data: Json<EmergencyAccessInviteData>, headers: Headers, mu
} }
#[post("/emergency-access/<emer_id>/reinvite")] #[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn resend_invite(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -307,8 +307,8 @@ async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> Emp
if CONFIG.mail_enabled() { if CONFIG.mail_enabled() {
mail::send_emergency_access_invite( mail::send_emergency_access_invite(
&email, &email,
&grantor_user.uuid, grantor_user.uuid,
&emergency_access.uuid, emergency_access.uuid,
&grantor_user.name, &grantor_user.name,
&grantor_user.email, &grantor_user.email,
) )
@@ -331,7 +331,12 @@ struct AcceptData {
} }
#[post("/emergency-access/<emer_id>/accept", data = "<data>")] #[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: &str, data: Json<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn accept_invite(
emer_id: EmergencyAccessId,
data: Json<AcceptData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let data: AcceptData = data.into_inner(); let data: AcceptData = data.into_inner();
@@ -355,7 +360,7 @@ async fn accept_invite(emer_id: &str, data: Json<AcceptData>, headers: Headers,
// We need to search for the uuid in combination with the email, since we do not yet store the uuid of the grantee in the database. // We need to search for the uuid in combination with the email, since we do not yet store the uuid of the grantee in the database.
// The uuid of the grantee gets stored once accepted. // The uuid of the grantee gets stored once accepted.
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_email(emer_id, &headers.user.email, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_email(&emer_id, &headers.user.email, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
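As the comment above notes, an invited grantee has no stored uuid until they accept, so the row is matched by (emergency-access id, grantee email) instead. A simplified in-memory version of that lookup — the struct and function names here are illustrative, not the actual Vaultwarden schema:

```rust
// Until the invite is accepted, grantee_uuid is None and the match is
// done on the invite id plus the grantee's email address.
pub struct EmergencyAccessRow {
    pub id: String,
    pub grantee_email: String,
    pub grantee_uuid: Option<String>,
}

pub fn find_by_id_and_grantee_email<'a>(
    rows: &'a [EmergencyAccessRow],
    id: &str,
    email: &str,
) -> Option<&'a EmergencyAccessRow> {
    // Email comparison is case-insensitive here for illustration.
    rows.iter().find(|r| r.id == id && r.grantee_email.eq_ignore_ascii_case(email))
}
```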
@@ -389,7 +394,7 @@ struct ConfirmData {
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")] #[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
async fn confirm_emergency_access( async fn confirm_emergency_access(
emer_id: &str, emer_id: EmergencyAccessId,
data: Json<ConfirmData>, data: Json<ConfirmData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
@@ -401,7 +406,7 @@ async fn confirm_emergency_access(
let key = data.key; let key = data.key;
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &confirming_user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &confirming_user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -441,12 +446,12 @@ async fn confirm_emergency_access(
// region access emergency access // region access emergency access
#[post("/emergency-access/<emer_id>/initiate")] #[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn initiate_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let initiating_user = headers.user; let initiating_user = headers.user;
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &initiating_user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &initiating_user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -479,11 +484,11 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
} }
#[post("/emergency-access/<emer_id>/approve")] #[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn approve_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -514,11 +519,11 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
} }
#[post("/emergency-access/<emer_id>/reject")] #[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn reject_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let Some(mut emergency_access) = let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(emer_id, &headers.user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -551,11 +556,11 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
// region action // region action
#[post("/emergency-access/<emer_id>/view")] #[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn view_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let Some(emergency_access) = let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &headers.user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -589,12 +594,12 @@ async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn
} }
#[post("/emergency-access/<emer_id>/takeover")] #[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn takeover_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?; check_emergency_access_enabled()?;
let requesting_user = headers.user; let requesting_user = headers.user;
let Some(emergency_access) = let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -628,7 +633,7 @@ struct EmergencyAccessPasswordData {
#[post("/emergency-access/<emer_id>/password", data = "<data>")] #[post("/emergency-access/<emer_id>/password", data = "<data>")]
async fn password_emergency_access( async fn password_emergency_access(
emer_id: &str, emer_id: EmergencyAccessId,
data: Json<EmergencyAccessPasswordData>, data: Json<EmergencyAccessPasswordData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
@@ -641,7 +646,7 @@ async fn password_emergency_access(
let requesting_user = headers.user; let requesting_user = headers.user;
let Some(emergency_access) = let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -662,9 +667,9 @@ async fn password_emergency_access(
TwoFactor::delete_all_by_user(&grantor_user.uuid, &mut conn).await?; TwoFactor::delete_all_by_user(&grantor_user.uuid, &mut conn).await?;
// Remove grantor from all organisations unless Owner // Remove grantor from all organisations unless Owner
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &mut conn).await { for member in Membership::find_any_state_by_user(&grantor_user.uuid, &mut conn).await {
if user_org.atype != UserOrgType::Owner as i32 { if member.atype != MembershipType::Owner as i32 {
user_org.delete(&mut conn).await?; member.delete(&mut conn).await?;
} }
} }
Ok(()) Ok(())
@@ -673,10 +678,10 @@ async fn password_emergency_access(
// endregion // endregion
#[get("/emergency-access/<emer_id>/policies")] #[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn policies_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user; let requesting_user = headers.user;
let Some(emergency_access) = let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(emer_id, &requesting_user.uuid, &mut conn).await EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else { else {
err!("Emergency access not valid.") err!("Emergency access not valid.")
}; };
@@ -701,11 +706,11 @@ async fn policies_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
fn is_valid_request( fn is_valid_request(
emergency_access: &EmergencyAccess, emergency_access: &EmergencyAccess,
requesting_user_uuid: &str, requesting_user_id: &UserId,
requested_access_type: EmergencyAccessType, requested_access_type: EmergencyAccessType,
) -> bool { ) -> bool {
emergency_access.grantee_uuid.is_some() emergency_access.grantee_uuid.is_some()
&& emergency_access.grantee_uuid.as_ref().unwrap() == requesting_user_uuid && emergency_access.grantee_uuid.as_ref().unwrap() == requesting_user_id
&& emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32 && emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
&& emergency_access.atype == requested_access_type as i32 && emergency_access.atype == requested_access_type as i32
} }
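The `is_some()` followed by `unwrap()` in the check above can be folded into a single `Option` comparison. A stand-alone sketch with the database types reduced to plain fields; the `RecoveryApproved` discriminant value is an assumption for illustration:

```rust
pub struct EmergencyAccessStub {
    pub grantee_uuid: Option<String>,
    pub status: i32,
    pub atype: i32,
}

// Assumed discriminant of EmergencyAccessStatus::RecoveryApproved.
pub const RECOVERY_APPROVED: i32 = 3;

pub fn is_valid_request_stub(
    ea: &EmergencyAccessStub,
    requesting_user_id: &str,
    requested_type: i32,
) -> bool {
    // as_deref() turns Option<String> into Option<&str>, so the
    // is_some() + unwrap() pair collapses into one equality check.
    ea.grantee_uuid.as_deref() == Some(requesting_user_id)
        && ea.status == RECOVERY_APPROVED
        && ea.atype == requested_type
}
```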


@@ -8,7 +8,7 @@ use crate::{
api::{EmptyResult, JsonResult}, api::{EmptyResult, JsonResult},
auth::{AdminHeaders, Headers}, auth::{AdminHeaders, Headers},
db::{ db::{
models::{Cipher, Event, UserOrganization}, models::{Cipher, CipherId, Event, Membership, MembershipId, OrganizationId, UserId},
DbConn, DbPool, DbConn, DbPool,
}, },
util::parse_date, util::parse_date,
@@ -29,9 +29,18 @@ struct EventRange {
continuation_token: Option<String>, continuation_token: Option<String>,
} }
// Upstream: https://github.com/bitwarden/server/blob/9ecf69d9cabce732cf2c57976dd9afa5728578fb/src/Api/Controllers/EventsController.cs#LL84C35-L84C41 // Upstream: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/AdminConsole/Controllers/EventsController.cs#L87
#[get("/organizations/<org_id>/events?<data..>")] #[get("/organizations/<org_id>/events?<data..>")]
async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult { async fn get_org_events(
org_id: OrganizationId,
data: EventRange,
headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
// Return an empty vec when org events are disabled. // Return an empty vec when org events are disabled.
// This prevents client errors // This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() { let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
@@ -44,7 +53,7 @@ async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders,
parse_date(&data.end) parse_date(&data.end)
}; };
Event::find_by_organization_uuid(org_id, &start_date, &end_date, &mut conn) Event::find_by_organization_uuid(&org_id, &start_date, &end_date, &mut conn)
.await .await
.iter() .iter()
.map(|e| e.to_json()) .map(|e| e.to_json())
@@ -59,14 +68,14 @@ async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders,
} }
#[get("/ciphers/<cipher_id>/events?<data..>")] #[get("/ciphers/<cipher_id>/events?<data..>")]
async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult { async fn get_cipher_events(cipher_id: CipherId, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled. // Return an empty vec when org events are disabled.
// This prevents client errors // This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() { let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0) Vec::with_capacity(0)
} else { } else {
let mut events_json = Vec::with_capacity(0); let mut events_json = Vec::with_capacity(0);
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, cipher_id, &mut conn).await { if Membership::user_has_ge_admin_access_to_cipher(&headers.user.uuid, &cipher_id, &mut conn).await {
let start_date = parse_date(&data.start); let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token { let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date) parse_date(before_date)
@@ -74,7 +83,7 @@ async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers,
parse_date(&data.end) parse_date(&data.end)
}; };
events_json = Event::find_by_cipher_uuid(cipher_id, &start_date, &end_date, &mut conn) events_json = Event::find_by_cipher_uuid(&cipher_id, &start_date, &end_date, &mut conn)
.await .await
.iter() .iter()
.map(|e| e.to_json()) .map(|e| e.to_json())
@@ -90,14 +99,17 @@ async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers,
}))) })))
} }
#[get("/organizations/<org_id>/users/<user_org_id>/events?<data..>")] #[get("/organizations/<org_id>/users/<member_id>/events?<data..>")]
async fn get_user_events( async fn get_user_events(
org_id: &str, org_id: OrganizationId,
user_org_id: &str, member_id: MembershipId,
data: EventRange, data: EventRange,
_headers: AdminHeaders, headers: AdminHeaders,
mut conn: DbConn, mut conn: DbConn,
) -> JsonResult { ) -> JsonResult {
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
// Return an empty vec when org events are disabled. // Return an empty vec when org events are disabled.
// This prevents client errors // This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() { let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
@@ -110,7 +122,7 @@ async fn get_user_events(
parse_date(&data.end) parse_date(&data.end)
}; };
Event::find_by_org_and_user_org(org_id, user_org_id, &start_date, &end_date, &mut conn) Event::find_by_org_and_member(&org_id, &member_id, &start_date, &end_date, &mut conn)
.await .await
.iter() .iter()
.map(|e| e.to_json()) .map(|e| e.to_json())
@@ -152,13 +164,13 @@ struct EventCollection {
date: String, date: String,
// Optional // Optional
cipher_id: Option<String>, cipher_id: Option<CipherId>,
organization_id: Option<String>, organization_id: Option<OrganizationId>,
} }
// Upstream: // Upstream:
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Events/Controllers/CollectController.cs // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Events/Controllers/CollectController.cs
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Services/Implementations/EventService.cs
#[post("/collect", format = "application/json", data = "<data>")] #[post("/collect", format = "application/json", data = "<data>")]
async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers, mut conn: DbConn) -> EmptyResult { async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.org_events_enabled() { if !CONFIG.org_events_enabled() {
@@ -180,11 +192,11 @@ async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers,
.await; .await;
} }
1600..=1699 => { 1600..=1699 => {
if let Some(org_uuid) = &event.organization_id { if let Some(org_id) = &event.organization_id {
_log_event( _log_event(
event.r#type, event.r#type,
org_uuid, org_id,
org_uuid, org_id,
&headers.user.uuid, &headers.user.uuid,
headers.device.atype, headers.device.atype,
Some(event_date), Some(event_date),
@@ -197,11 +209,11 @@ async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers,
_ => { _ => {
if let Some(cipher_uuid) = &event.cipher_id { if let Some(cipher_uuid) = &event.cipher_id {
if let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await { if let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
if let Some(org_uuid) = cipher.organization_uuid { if let Some(org_id) = cipher.organization_uuid {
_log_event( _log_event(
event.r#type, event.r#type,
cipher_uuid, cipher_uuid,
&org_uuid, &org_id,
&headers.user.uuid, &headers.user.uuid,
headers.device.atype, headers.device.atype,
Some(event_date), Some(event_date),
@@ -218,38 +230,39 @@ async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers,
Ok(()) Ok(())
} }
pub async fn log_user_event(event_type: i32, user_uuid: &str, device_type: i32, ip: &IpAddr, conn: &mut DbConn) { pub async fn log_user_event(event_type: i32, user_id: &UserId, device_type: i32, ip: &IpAddr, conn: &mut DbConn) {
if !CONFIG.org_events_enabled() { if !CONFIG.org_events_enabled() {
return; return;
} }
_log_user_event(event_type, user_uuid, device_type, None, ip, conn).await; _log_user_event(event_type, user_id, device_type, None, ip, conn).await;
} }
async fn _log_user_event( async fn _log_user_event(
event_type: i32, event_type: i32,
user_uuid: &str, user_id: &UserId,
device_type: i32, device_type: i32,
event_date: Option<NaiveDateTime>, event_date: Option<NaiveDateTime>,
ip: &IpAddr, ip: &IpAddr,
conn: &mut DbConn, conn: &mut DbConn,
) { ) {
let orgs = UserOrganization::get_org_uuid_by_user(user_uuid, conn).await; let memberships = Membership::find_by_user(user_id, conn).await;
let mut events: Vec<Event> = Vec::with_capacity(orgs.len() + 1); // We need an event per org and one without an org let mut events: Vec<Event> = Vec::with_capacity(memberships.len() + 1); // We need an event per org and one without an org
// Upstream saves the event also without any org_uuid. // Upstream saves the event also without any org_id.
let mut event = Event::new(event_type, event_date); let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid)); event.user_uuid = Some(user_id.clone());
event.act_user_uuid = Some(String::from(user_uuid)); event.act_user_uuid = Some(user_id.clone());
event.device_type = Some(device_type); event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string()); event.ip_address = Some(ip.to_string());
events.push(event); events.push(event);
// For each org a user is a member of store these events per org // For each org a user is a member of store these events per org
for org_uuid in orgs { for membership in memberships {
let mut event = Event::new(event_type, event_date); let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid)); event.user_uuid = Some(user_id.clone());
event.org_uuid = Some(org_uuid); event.org_uuid = Some(membership.org_uuid);
event.act_user_uuid = Some(String::from(user_uuid)); event.org_user_uuid = Some(membership.uuid);
event.act_user_uuid = Some(user_id.clone());
event.device_type = Some(device_type); event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string()); event.ip_address = Some(ip.to_string());
events.push(event); events.push(event);
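The loop above stores one event without an org plus one per membership. The same fan-out as a pure function over plain ids — `EventStub` is an illustrative stand-in, not the real `Event` model:

```rust
#[derive(Debug, PartialEq)]
pub struct EventStub {
    pub user_id: String,
    pub org_id: Option<String>,
}

pub fn fan_out_user_event(user_id: &str, org_ids: &[&str]) -> Vec<EventStub> {
    // One slot per org, plus one for the org-less event.
    let mut events = Vec::with_capacity(org_ids.len() + 1);
    // Upstream also saves the event without any org id.
    events.push(EventStub { user_id: user_id.to_string(), org_id: None });
    for org in org_ids {
        events.push(EventStub { user_id: user_id.to_string(), org_id: Some((*org).to_string()) });
    }
    events
}
```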
@@ -261,8 +274,8 @@ async fn _log_user_event(
pub async fn log_event( pub async fn log_event(
event_type: i32, event_type: i32,
source_uuid: &str, source_uuid: &str,
org_uuid: &str, org_id: &OrganizationId,
act_user_uuid: &str, act_user_id: &UserId,
device_type: i32, device_type: i32,
ip: &IpAddr, ip: &IpAddr,
conn: &mut DbConn, conn: &mut DbConn,
@@ -270,15 +283,15 @@ pub async fn log_event(
if !CONFIG.org_events_enabled() { if !CONFIG.org_events_enabled() {
return; return;
} }
_log_event(event_type, source_uuid, org_uuid, act_user_uuid, device_type, None, ip, conn).await; _log_event(event_type, source_uuid, org_id, act_user_id, device_type, None, ip, conn).await;
} }
#[allow(clippy::too_many_arguments)] #[allow(clippy::too_many_arguments)]
async fn _log_event( async fn _log_event(
event_type: i32, event_type: i32,
source_uuid: &str, source_uuid: &str,
org_uuid: &str, org_id: &OrganizationId,
act_user_uuid: &str, act_user_id: &UserId,
device_type: i32, device_type: i32,
event_date: Option<NaiveDateTime>, event_date: Option<NaiveDateTime>,
ip: &IpAddr, ip: &IpAddr,
@@ -290,31 +303,31 @@ async fn _log_event(
// 1000..=1099 Are user events, they need to be logged via log_user_event() // 1000..=1099 Are user events, they need to be logged via log_user_event()
// Cipher Events // Cipher Events
1100..=1199 => { 1100..=1199 => {
event.cipher_uuid = Some(String::from(source_uuid)); event.cipher_uuid = Some(source_uuid.to_string().into());
} }
// Collection Events // Collection Events
1300..=1399 => { 1300..=1399 => {
event.collection_uuid = Some(String::from(source_uuid)); event.collection_uuid = Some(source_uuid.to_string().into());
} }
// Group Events // Group Events
1400..=1499 => { 1400..=1499 => {
event.group_uuid = Some(String::from(source_uuid)); event.group_uuid = Some(source_uuid.to_string().into());
} }
// Org User Events // Org User Events
1500..=1599 => { 1500..=1599 => {
event.org_user_uuid = Some(String::from(source_uuid)); event.org_user_uuid = Some(source_uuid.to_string().into());
} }
// 1600..=1699 Are organizational events, and they do not need the source_uuid // 1600..=1699 Are organizational events, and they do not need the source_uuid
// Policy Events // Policy Events
1700..=1799 => { 1700..=1799 => {
event.policy_uuid = Some(String::from(source_uuid)); event.policy_uuid = Some(source_uuid.to_string().into());
} }
// Ignore others // Ignore others
_ => {} _ => {}
} }
event.org_uuid = Some(String::from(org_uuid)); event.org_uuid = Some(org_id.clone());
event.act_user_uuid = Some(String::from(act_user_uuid)); event.act_user_uuid = Some(act_user_id.clone());
event.device_type = Some(device_type); event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string()); event.ip_address = Some(ip.to_string());
event.save(conn).await.unwrap_or(()); event.save(conn).await.unwrap_or(());
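The match in `_log_event` above routes `source_uuid` into a field chosen by the event-type range. The routing alone can be sketched as a small pure function, with the target field names as strings for illustration:

```rust
pub fn source_field_for(event_type: i32) -> Option<&'static str> {
    match event_type {
        1100..=1199 => Some("cipher_uuid"),     // Cipher events
        1300..=1399 => Some("collection_uuid"), // Collection events
        1400..=1499 => Some("group_uuid"),      // Group events
        1500..=1599 => Some("org_user_uuid"),   // Org user events
        1700..=1799 => Some("policy_uuid"),     // Policy events
        // 1600..=1699 are organizational events and need no source_uuid;
        // all other codes are ignored, as in the handler above.
        _ => None,
    }
}
```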


@@ -23,9 +23,9 @@ async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
})) }))
} }
#[get("/folders/<uuid>")] #[get("/folders/<folder_id>")]
async fn get_folder(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult { async fn get_folder(folder_id: FolderId, headers: Headers, mut conn: DbConn) -> JsonResult {
match Folder::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await { match Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await {
Some(folder) => Ok(Json(folder.to_json())), Some(folder) => Ok(Json(folder.to_json())),
_ => err!("Invalid folder", "Folder does not exist or belongs to another user"), _ => err!("Invalid folder", "Folder does not exist or belongs to another user"),
} }
@@ -35,7 +35,7 @@ async fn get_folder(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResul
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
pub struct FolderData { pub struct FolderData {
pub name: String, pub name: String,
pub id: Option<String>, pub id: Option<FolderId>,
} }
#[post("/folders", data = "<data>")] #[post("/folders", data = "<data>")]
@@ -45,19 +45,25 @@ async fn post_folders(data: Json<FolderData>, headers: Headers, mut conn: DbConn
let mut folder = Folder::new(headers.user.uuid, data.name); let mut folder = Folder::new(headers.user.uuid, data.name);
folder.save(&mut conn).await?; folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::SyncFolderCreate, &folder, &headers.device.uuid, &mut conn).await; nt.send_folder_update(UpdateType::SyncFolderCreate, &folder, &headers.device, &mut conn).await;
Ok(Json(folder.to_json())) Ok(Json(folder.to_json()))
} }
#[post("/folders/<uuid>", data = "<data>")] #[post("/folders/<folder_id>", data = "<data>")]
async fn post_folder(uuid: &str, data: Json<FolderData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult { async fn post_folder(
put_folder(uuid, data, headers, conn, nt).await folder_id: FolderId,
data: Json<FolderData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
put_folder(folder_id, data, headers, conn, nt).await
} }
#[put("/folders/<uuid>", data = "<data>")] #[put("/folders/<folder_id>", data = "<data>")]
async fn put_folder( async fn put_folder(
uuid: &str, folder_id: FolderId,
data: Json<FolderData>, data: Json<FolderData>,
headers: Headers, headers: Headers,
mut conn: DbConn, mut conn: DbConn,
@@ -65,32 +71,32 @@ async fn put_folder(
) -> JsonResult {
    let data: FolderData = data.into_inner();

    let Some(mut folder) = Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await else {
        err!("Invalid folder", "Folder does not exist or belongs to another user")
    };

    folder.name = data.name;
    folder.save(&mut conn).await?;

    nt.send_folder_update(UpdateType::SyncFolderUpdate, &folder, &headers.device, &mut conn).await;

    Ok(Json(folder.to_json()))
}

#[post("/folders/<folder_id>/delete")]
async fn delete_folder_post(folder_id: FolderId, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
    delete_folder(folder_id, headers, conn, nt).await
}

#[delete("/folders/<folder_id>")]
async fn delete_folder(folder_id: FolderId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
    let Some(folder) = Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await else {
        err!("Invalid folder", "Folder does not exist or belongs to another user")
    };

    // Delete the actual folder entry
    folder.delete(&mut conn).await?;

    nt.send_folder_update(UpdateType::SyncFolderDelete, &folder, &headers.device, &mut conn).await;

    Ok(())
}
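The folder routes above swap raw `&str` UUID parameters for a dedicated `FolderId` type, so a folder ID can no longer be passed where a cipher or send ID is expected. A minimal, self-contained sketch of that newtype pattern (names illustrative; Vaultwarden generates its ID types through its own macro, which may differ in detail):

```rust
use std::fmt;

// Hypothetical stand-in for the typed IDs introduced by this diff
// (`FolderId`, `SendId`, `SendFileId`, ...).
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct FolderId(String);

impl From<String> for FolderId {
    fn from(s: String) -> Self {
        FolderId(s)
    }
}

impl fmt::Display for FolderId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        self.0.fmt(f)
    }
}

fn main() {
    let id: FolderId = String::from("3fa85f64-5717-4562-b3fc-2c963f66afa6").into();
    // The compiler now rejects mixing ID kinds; only the display form is a plain string.
    assert_eq!(id.to_string(), "3fa85f64-5717-4562-b3fc-2c963f66afa6");
}
```

The wrapper costs nothing at runtime but turns a whole class of "wrong UUID in the wrong slot" bugs into compile errors.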


@@ -18,7 +18,7 @@ pub use sends::purge_sends;
pub fn routes() -> Vec<Route> {
    let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
    let mut hibp_routes = routes![hibp_breach];
    let mut meta_routes = routes![alive, now, version, config, get_api_webauthn];

    let mut routes = Vec::new();
    routes.append(&mut accounts::routes());
@@ -124,7 +124,7 @@ async fn post_eq_domains(
    user.save(&mut conn).await?;

    nt.send_user_update(UpdateType::SyncSettings, &user, &headers.device.push_uuid, &mut conn).await;

    Ok(Json(json!({})))
}
@@ -184,22 +184,39 @@ fn version() -> Json<&'static str> {
    Json(crate::VERSION.unwrap_or_default())
}

#[get("/webauthn")]
fn get_api_webauthn(_headers: Headers) -> Json<Value> {
    // Prevent a 404 error, which also causes key-rotation issues
    // It looks like this is used when login with passkeys is enabled, which Vaultwarden does not (yet) support
    // An empty list/data also works fine
    Json(json!({
        "object": "list",
        "data": [],
        "continuationToken": null
    }))
}
#[get("/config")]
fn config() -> Json<Value> {
    let domain = crate::CONFIG.domain();

    // Officially available feature flags can be found here:
    // Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
    // Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
    // Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
    // iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
    let mut feature_states =
        parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
    feature_states.insert("duo-redirect".to_string(), true);
    feature_states.insert("email-verification".to_string(), true);
    feature_states.insert("unauth-ui-refresh".to_string(), true);
    Json(json!({
        // Note: The clients use this version to handle backwards compatibility concerns
        // This means they expect a version that closely matches the Bitwarden server version
        // We should make sure that we keep this updated when we support the new server features
        // Version history:
        // - Individual cipher key encryption: 2024.2.0
        "version": "2025.4.0",
        "gitHash": option_env!("GIT_REV"),
        "server": {
            "name": "Vaultwarden",
@@ -214,6 +231,12 @@ fn config() -> Json<Value> {
            "identity": format!("{domain}/identity"),
            "notifications": format!("{domain}/notifications"),
            "sso": "",
            "cloudRegion": null,
        },
        // Bitwarden uses this for the self-hosted servers to indicate the default push technology
        "push": {
            "pushTechnology": 0,
            "vapidPublicKey": null
        },
        "featureStates": feature_states,
        "object": "config",
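The `config()` change builds the flag map from user configuration and then force-enables a few server-controlled flags, so the forced entries always win. A reduced sketch of that interaction, assuming `parse_experimental_client_feature_flags` treats its input as a comma-separated list of enabled flag names (the real parser may differ):

```rust
use std::collections::HashMap;

// Assumed behavior of the parser: each comma-separated name becomes an enabled flag.
fn parse_flags(input: &str) -> HashMap<String, bool> {
    input
        .split(',')
        .map(str::trim)
        .filter(|f| !f.is_empty())
        .map(|f| (f.to_string(), true))
        .collect()
}

fn main() {
    let mut feature_states = parse_flags("inline-menu-positioning-improvements");
    // Server-controlled flags are inserted afterwards, overriding any user value:
    feature_states.insert("duo-redirect".to_string(), true);
    feature_states.insert("email-verification".to_string(), true);
    feature_states.insert("unauth-ui-refresh".to_string(), true);

    assert_eq!(feature_states.len(), 4);
    assert_eq!(feature_states.get("duo-redirect"), Some(&true));
}
```

Because `HashMap::insert` replaces existing entries, the order (parse first, then insert) is what guarantees the forced flags cannot be disabled from the experimental-flags config.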

File diff suppressed because it is too large


@@ -46,46 +46,42 @@ struct OrgImportData {
#[post("/public/organization/import", data = "<data>")]
async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: DbConn) -> EmptyResult {
    // Most of the logic for this function can be found here
    // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Services/Implementations/OrganizationService.cs#L1203

    let org_id = token.0;
    let data = data.into_inner();

    for user_data in &data.members {
        let mut user_created: bool = false;
        if user_data.deleted {
            // If user is marked for deletion and it exists, revoke it
            if let Some(mut member) = Membership::find_by_email_and_org(&user_data.email, &org_id, &mut conn).await {
                // Only revoke a user if it is not the last confirmed owner
                let revoked = if member.atype == MembershipType::Owner
                    && member.status == MembershipStatus::Confirmed as i32
                {
                    if Membership::count_confirmed_by_org_and_type(&org_id, MembershipType::Owner, &mut conn).await <= 1
                    {
                        warn!("Can't revoke the last owner");
                        false
                    } else {
                        member.revoke()
                    }
                } else {
                    member.revoke()
                };

                let ext_modified = member.set_external_id(Some(user_data.external_id.clone()));
                if revoked || ext_modified {
                    member.save(&mut conn).await?;
                }
            }
        // If user is part of the organization, restore it
        } else if let Some(mut member) = Membership::find_by_email_and_org(&user_data.email, &org_id, &mut conn).await {
            let restored = member.restore();
            let ext_modified = member.set_external_id(Some(user_data.external_id.clone()));
            if restored || ext_modified {
                member.save(&mut conn).await?;
            }
        } else {
            // If user is not part of the organization
@@ -97,25 +93,25 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: Db
                new_user.save(&mut conn).await?;

                if !CONFIG.mail_enabled() {
                    Invitation::new(&new_user.email).save(&mut conn).await?;
                }
                user_created = true;
                new_user
            }
        };

        let member_status = if CONFIG.mail_enabled() || user.password_hash.is_empty() {
            MembershipStatus::Invited as i32
        } else {
            MembershipStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
        };

        let mut new_member = Membership::new(user.uuid.clone(), org_id.clone());
        new_member.set_external_id(Some(user_data.external_id.clone()));
        new_member.access_all = false;
        new_member.atype = MembershipType::User as i32;
        new_member.status = member_status;
        new_member.save(&mut conn).await?;

        if CONFIG.mail_enabled() {
            let (org_name, org_email) = match Organization::find_by_uuid(&org_id, &mut conn).await {
@@ -123,8 +119,18 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: Db
                None => err!("Error looking up organization"),
            };

            if let Err(e) =
                mail::send_invite(&user, org_id.clone(), new_member.uuid.clone(), &org_name, Some(org_email)).await
            {
                // Upon error delete the user, invite and org member records when needed
                if user_created {
                    user.delete(&mut conn).await?;
                } else {
                    new_member.delete(&mut conn).await?;
                }
                err!(format!("Error sending invite: {e:?}"));
            }
        }
    }
@@ -149,9 +155,8 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: Db
            GroupUser::delete_all_by_group(&group_uuid, &mut conn).await?;

            for ext_id in &group_data.member_external_ids {
                if let Some(member) = Membership::find_by_external_id_and_org(ext_id, &org_id, &mut conn).await {
                    let mut group_user = GroupUser::new(group_uuid.clone(), member.uuid.clone());
                    group_user.save(&mut conn).await?;
                }
            }
@@ -164,20 +169,19 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: Db
    if data.overwrite_existing {
        // Generate a HashSet to quickly verify if a member is listed or not.
        let sync_members: HashSet<String> = data.members.into_iter().map(|m| m.external_id).collect();
        for member in Membership::find_by_org(&org_id, &mut conn).await {
            if let Some(ref user_external_id) = member.external_id {
                if !sync_members.contains(user_external_id) {
                    if member.atype == MembershipType::Owner && member.status == MembershipStatus::Confirmed as i32 {
                        // Removing owner, check that there is at least one other confirmed owner
                        if Membership::count_confirmed_by_org_and_type(&org_id, MembershipType::Owner, &mut conn).await
                            <= 1
                        {
                            warn!("Can't delete the last owner");
                            continue;
                        }
                    }
                    member.delete(&mut conn).await?;
                }
            }
        }
@@ -186,7 +190,7 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: Db
    Ok(())
}
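`ldap_import` applies the same guard in two places: a confirmed owner may only be revoked or deleted while at least one other confirmed owner remains. Reduced to plain data (a count instead of the `Membership::count_confirmed_by_org_and_type` DB call), the rule is just:

```rust
// Sketch of the "never remove the last confirmed owner" guard, with the
// database lookup replaced by a pre-computed count (illustrative only).
fn may_remove_owner(confirmed_owner_count: i64) -> bool {
    // Removal is allowed only while another confirmed owner would remain.
    confirmed_owner_count > 1
}

fn main() {
    assert!(!may_remove_owner(1)); // last owner: skip and log a warning instead
    assert!(may_remove_owner(2));
}
```

In the real handler the deleted case `continue`s with a warning rather than failing the whole sync, so one protected owner does not abort the import of the remaining members.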
pub struct PublicToken(OrganizationId);

#[rocket::async_trait]
impl<'r> FromRequest<'r> for PublicToken {
@@ -226,10 +230,11 @@ impl<'r> FromRequest<'r> for PublicToken {
            Outcome::Success(conn) => conn,
            _ => err_handler!("Error getting DB"),
        };
        let Some(org_id) = claims.client_id.strip_prefix("organization.") else {
            err_handler!("Malformed client_id")
        };
        let org_id: OrganizationId = org_id.to_string().into();
        let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(&org_id, &conn).await else {
            err_handler!("Invalid client_id")
        };
        if org_api_key.org_uuid != claims.client_sub {


@@ -2,6 +2,7 @@ use std::path::Path;
use chrono::{DateTime, TimeDelta, Utc};
use num_traits::ToPrimitive;
use once_cell::sync::Lazy;
use rocket::form::Form;
use rocket::fs::NamedFile;
use rocket::fs::TempFile;
@@ -12,11 +13,26 @@ use crate::{
    api::{ApiResult, EmptyResult, JsonResult, Notify, UpdateType},
    auth::{ClientIp, Headers, Host},
    db::{models::*, DbConn, DbPool},
    util::NumberOrString,
    CONFIG,
};

const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";

static ANON_PUSH_DEVICE: Lazy<Device> = Lazy::new(|| {
    let dt = crate::util::parse_date("1970-01-01T00:00:00.000000Z");
    Device {
        uuid: String::from("00000000-0000-0000-0000-000000000000").into(),
        created_at: dt,
        updated_at: dt,
        user_uuid: String::from("00000000-0000-0000-0000-000000000000").into(),
        name: String::new(),
        atype: 14, // 14 == Unknown Browser
        push_uuid: Some(String::from("00000000-0000-0000-0000-000000000000").into()),
        push_token: None,
        refresh_token: String::new(),
        twofactor_remember: None,
    }
});
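`ANON_PUSH_DEVICE` uses `once_cell::sync::Lazy` to build a sentinel device once, on first access, for notifications triggered by anonymous Send access. Since Rust 1.80 the same pattern exists in the standard library as `std::sync::LazyLock`; a reduced sketch with a stand-in struct (the real `Device` lives in Vaultwarden's db models):

```rust
use std::sync::LazyLock;

// Illustrative stand-in for db::models::Device.
struct AnonDevice {
    uuid: &'static str,
    atype: i32, // 14 == Unknown Browser
}

// Initialized lazily on first dereference, then shared for the process lifetime.
static ANON_PUSH_DEVICE: LazyLock<AnonDevice> = LazyLock::new(|| AnonDevice {
    uuid: "00000000-0000-0000-0000-000000000000",
    atype: 14,
});

fn main() {
    assert_eq!(ANON_PUSH_DEVICE.uuid, "00000000-0000-0000-0000-000000000000");
    assert_eq!(ANON_PUSH_DEVICE.atype, 14);
}
```

The sentinel replaces the bare all-zero UUID string the old code passed to `send_send_update`, so the notification path now always receives a `Device` value.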
// The max file size allowed by Bitwarden clients and add an extra 5% to avoid issues
const SIZE_525_MB: i64 = 550_502_400;
@@ -67,7 +83,7 @@ pub struct SendData {
    file_length: Option<NumberOrString>,

    // Used for key rotations
    pub id: Option<SendId>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -79,9 +95,9 @@ pub struct SendData {
/// There is also a Vaultwarden-specific `sends_allowed` config setting that
/// controls this policy globally.
async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> EmptyResult {
    let user_id = &headers.user.uuid;
    if !CONFIG.sends_allowed()
        || OrgPolicy::is_applicable_to_user(user_id, OrgPolicyType::DisableSend, None, conn).await
    {
        err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
    }
@@ -95,9 +111,9 @@ async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> Em
///
/// Ref: https://bitwarden.com/help/article/policies/#send-options
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &mut DbConn) -> EmptyResult {
    let user_id = &headers.user.uuid;
    let hide_email = data.hide_email.unwrap_or(false);
    if hide_email && OrgPolicy::is_hide_email_disabled(user_id, conn).await {
        err!(
            "Due to an Enterprise Policy, you are not allowed to hide your email address \
            from recipients when creating or editing a Send."
        )
    }
@@ -106,7 +122,7 @@ async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, c
    Ok(())
}
fn create_send(data: SendData, user_id: UserId) -> ApiResult<Send> {
    let data_val = if data.r#type == SendType::Text as i32 {
        data.text
    } else if data.r#type == SendType::File as i32 {
@@ -129,7 +145,7 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
    }

    let mut send = Send::new(data.r#type, data.name, data_str, data.key, data.deletion_date.naive_utc());
    send.user_uuid = Some(user_id);
    send.notes = data.notes;
    send.max_access_count = match data.max_access_count {
        Some(m) => Some(m.into_i32()?),
@@ -157,11 +173,11 @@ async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
    }))
}

#[get("/sends/<send_id>")]
async fn get_send(send_id: SendId, headers: Headers, mut conn: DbConn) -> JsonResult {
    match Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await {
        Some(send) => Ok(Json(send.to_json())),
        None => err!("Send not found", "Invalid send uuid or does not belong to user"),
    }
}
@@ -182,7 +198,7 @@ async fn post_send(data: Json<SendData>, headers: Headers, mut conn: DbConn, nt:
        UpdateType::SyncSendCreate,
        &send,
        &send.update_users_revision(&mut conn).await,
        &headers.device,
        &mut conn,
    )
    .await;
@@ -204,6 +220,8 @@ struct UploadDataV2<'f> {
// @deprecated Mar 25 2021: This method has been deprecated in favor of direct uploads (v2).
// This method still exists to support older clients, probably need to remove it sometime.
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L164-L167
// 2025: This endpoint doesn't seem to exist anymore in the latest version
// See: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Tools/Controllers/SendsController.cs
#[post("/sends/file", format = "multipart/form-data", data = "<data>")]
async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
    enforce_disable_send_policy(&headers, &mut conn).await?;
@@ -249,7 +267,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
        err!("Send content is not a file");
    }

    let file_id = crate::crypto::generate_send_file_id();
    let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
    let file_path = folder_path.join(&file_id);
    tokio::fs::create_dir_all(&folder_path).await?;
@@ -272,7 +290,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
        UpdateType::SyncSendCreate,
        &send,
        &send.update_users_revision(&mut conn).await,
        &headers.device,
        &mut conn,
    )
    .await;
@@ -280,7 +298,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
    Ok(Json(send.to_json()))
}
// Upstream: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Tools/Controllers/SendsController.cs#L165
#[post("/sends/file/v2", data = "<data>")]
async fn post_send_file_v2(data: Json<SendData>, headers: Headers, mut conn: DbConn) -> JsonResult {
    enforce_disable_send_policy(&headers, &mut conn).await?;
@@ -324,7 +342,7 @@ async fn post_send_file_v2(data: Json<SendData>, headers: Headers, mut conn: DbC
    let mut send = create_send(data, headers.user.uuid)?;

    let file_id = crate::crypto::generate_send_file_id();

    let mut data_value: Value = serde_json::from_str(&send.data)?;
    if let Some(o) = data_value.as_object_mut() {
@@ -338,7 +356,7 @@ async fn post_send_file_v2(data: Json<SendData>, headers: Headers, mut conn: DbC
    Ok(Json(json!({
        "fileUploadType": 0, // 0 == Direct | 1 == Azure
        "object": "send-fileUpload",
        "url": format!("/sends/{}/file/{file_id}", send.uuid),
        "sendResponse": send.to_json()
    })))
}
@@ -346,16 +364,16 @@ async fn post_send_file_v2(data: Json<SendData>, headers: Headers, mut conn: DbC
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendFileData {
    id: SendFileId,
    size: u64,
    fileName: String,
}

// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Tools/Controllers/SendsController.cs#L195
#[post("/sends/<send_id>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
async fn post_send_file_v2_data(
    send_id: SendId,
    file_id: SendFileId,
    data: Form<UploadDataV2<'_>>,
    headers: Headers,
    mut conn: DbConn,
@@ -365,8 +383,8 @@ async fn post_send_file_v2_data(
    let mut data = data.into_inner();

    let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
        err!("Send not found. Unable to save the file.", "Invalid send uuid or does not belong to user.")
    };

    if send.atype != SendType::File as i32 {
@@ -378,7 +396,11 @@ async fn post_send_file_v2_data(
    };

    match data.data.raw_name() {
        Some(raw_file_name)
            if raw_file_name.dangerous_unsafe_unsanitized_raw() == send_data.fileName
                // be less strict only if using CLI, cf. https://github.com/dani-garcia/vaultwarden/issues/5614
                || (headers.device.is_cli()
                    && send_data.fileName.ends_with(raw_file_name.dangerous_unsafe_unsanitized_raw().as_str())) => {}
        Some(raw_file_name) => err!(
            "Send file name does not match.",
            format!(
@@ -402,7 +424,7 @@ async fn post_send_file_v2_data(
        err!("Send file size does not match.", format!("Expected a file size of {} got {size}", send_data.size));
    }

    let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_id);
    let file_path = folder_path.join(file_id);

    // Check if the file already exists, if that is the case do not overwrite it
@@ -420,7 +442,7 @@ async fn post_send_file_v2_data(
        UpdateType::SyncSendCreate,
        &send,
        &send.update_users_revision(&mut conn).await,
        &headers.device,
        &mut conn,
    )
    .await;
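The relaxed file-name check in `post_send_file_v2_data` accepts an exact match normally, but for CLI clients also accepts the uploaded name being a suffix of the expected name (cf. issue #5614). The core predicate, extracted as a plain function for illustration:

```rust
// Illustrative extraction of the match-guard logic; `is_cli` stands in for
// `headers.device.is_cli()` and the names for the raw/expected file names.
fn file_name_matches(raw_name: &str, expected: &str, is_cli: bool) -> bool {
    raw_name == expected || (is_cli && expected.ends_with(raw_name))
}

fn main() {
    assert!(file_name_matches("report.pdf", "report.pdf", false));
    // CLI clients may send only a trailing portion of the expected name:
    assert!(file_name_matches("report.pdf", "docs/report.pdf", true));
    // Browsers still require an exact match:
    assert!(!file_name_matches("report.pdf", "docs/report.pdf", false));
}
```

Keeping the strict comparison for non-CLI devices preserves the original safety check while unblocking the CLI's differently-shaped file names.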
@@ -485,7 +507,7 @@ async fn post_access(
        UpdateType::SyncSendUpdate,
        &send,
        &send.update_users_revision(&mut conn).await,
        &ANON_PUSH_DEVICE,
        &mut conn,
    )
    .await;
@@ -495,14 +517,14 @@ async fn post_access(
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
async fn post_access_file(
    send_id: SendId,
    file_id: SendFileId,
    data: Json<SendAccessData>,
    host: Host,
    mut conn: DbConn,
    nt: Notify<'_>,
) -> JsonResult {
    let Some(mut send) = Send::find_by_uuid(&send_id, &mut conn).await else {
        err_code!(SEND_INACCESSIBLE_MSG, 404)
    };
@@ -542,22 +564,22 @@ async fn post_access_file(
        UpdateType::SyncSendUpdate,
        &send,
        &send.update_users_revision(&mut conn).await,
        &ANON_PUSH_DEVICE,
        &mut conn,
    )
    .await;

    let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
    let token = crate::auth::encode_jwt(&token_claims);
    Ok(Json(json!({
        "object": "send-fileDownload",
        "id": file_id,
        "url": format!("{}/api/sends/{send_id}/{file_id}?t={token}", &host.host)
    })))
}
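Several spots in this diff move positional `format!` arguments into inline named captures, as in the download URL above. A minimal sketch of the same construction (names and host value illustrative):

```rust
// Inline captures pull `send_id`, `file_id`, and `token` straight from scope,
// so argument order can no longer drift from the placeholders.
fn send_file_url(host: &str, send_id: &str, file_id: &str, token: &str) -> String {
    format!("{host}/api/sends/{send_id}/{file_id}?t={token}")
}

fn main() {
    let url = send_file_url("https://vault.example.com", "abc", "def", "tok");
    assert_eq!(url, "https://vault.example.com/api/sends/abc/def?t=tok");
}
```

Inline captures (stable since Rust 2021) work for any identifier in scope, which is also why the typed IDs only need a `Display` impl to slot into these strings.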
#[get("/sends/<send_id>/<file_id>?<t>")]
async fn download_send(send_id: SendId, file_id: SendFileId, t: &str) -> Option<NamedFile> {
    if let Ok(claims) = crate::auth::decode_send(t) {
        if claims.sub == format!("{send_id}/{file_id}") {
            return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).await.ok();
@@ -566,15 +588,21 @@ async fn download_send(send_id: SafeString, file_id: SafeString, t: &str) -> Opt
    None
}
#[put("/sends/<send_id>", data = "<data>")]
async fn put_send(
    send_id: SendId,
    data: Json<SendData>,
    headers: Headers,
    mut conn: DbConn,
    nt: Notify<'_>,
) -> JsonResult {
    enforce_disable_send_policy(&headers, &mut conn).await?;

    let data: SendData = data.into_inner();
    enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;

    let Some(mut send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
        err!("Send not found", "Send send_id is invalid or does not belong to user")
    };

    update_send_from_data(&mut send, data, &headers, &mut conn, &nt, UpdateType::SyncSendUpdate).await?;
@@ -635,14 +663,14 @@ pub async fn update_send_from_data(
send.save(conn).await?; send.save(conn).await?;
if ut != UpdateType::None { if ut != UpdateType::None {
nt.send_send_update(ut, send, &send.update_users_revision(conn).await, &headers.device.uuid, conn).await; nt.send_send_update(ut, send, &send.update_users_revision(conn).await, &headers.device, conn).await;
} }
Ok(()) Ok(())
} }
#[delete("/sends/<uuid>")] #[delete("/sends/<send_id>")]
async fn delete_send(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult { async fn delete_send(send_id: SendId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let Some(send) = Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else { let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user") err!("Send not found", "Invalid send uuid, or does not belong to user")
}; };
@@ -651,7 +679,7 @@ async fn delete_send(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<
UpdateType::SyncSendDelete, UpdateType::SyncSendDelete,
&send, &send,
&send.update_users_revision(&mut conn).await, &send.update_users_revision(&mut conn).await,
&headers.device.uuid, &headers.device,
&mut conn, &mut conn,
) )
.await; .await;
@@ -659,11 +687,11 @@ async fn delete_send(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<
Ok(()) Ok(())
} }
#[put("/sends/<uuid>/remove-password")] #[put("/sends/<send_id>/remove-password")]
async fn put_remove_password(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult { async fn put_remove_password(send_id: SendId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?; enforce_disable_send_policy(&headers, &mut conn).await?;
let Some(mut send) = Send::find_by_uuid_and_user(uuid, &headers.user.uuid, &mut conn).await else { let Some(mut send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user") err!("Send not found", "Invalid send uuid, or does not belong to user")
}; };
@@ -673,7 +701,7 @@ async fn put_remove_password(uuid: &str, headers: Headers, mut conn: DbConn, nt:
UpdateType::SyncSendUpdate, UpdateType::SyncSendUpdate,
&send, &send,
&send.update_users_revision(&mut conn).await, &send.update_users_revision(&mut conn).await,
&headers.device.uuid, &headers.device,
&mut conn, &mut conn,
) )
.await; .await;
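The access check in `download_send` above hinges on the download token's `sub` claim naming exactly this send/file pair. A minimal std-only sketch of that comparison (the JWT validation done by `crate::auth::decode_send` is elided; `token_matches` is a hypothetical helper, not part of this diff):

```rust
// Hypothetical helper mirroring the claim check in download_send:
// a signed token is only valid for one specific send/file pair.
fn token_matches(sub: &str, send_id: &str, file_id: &str) -> bool {
    sub == format!("{send_id}/{file_id}")
}

fn main() {
    assert!(token_matches("abc/def", "abc", "def"));
    assert!(!token_matches("abc/def", "abc", "other"));
    println!("ok");
}
```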

View File

@@ -7,7 +7,7 @@ use crate::{
     auth::{ClientIp, Headers},
     crypto,
     db::{
-        models::{EventType, TwoFactor, TwoFactorType},
+        models::{EventType, TwoFactor, TwoFactorType, UserId},
         DbConn,
     },
     util::NumberOrString,
@@ -16,7 +16,7 @@ use crate::{
 pub use crate::config::CONFIG;
 pub fn routes() -> Vec<Route> {
-    routes![generate_authenticator, activate_authenticator, activate_authenticator_put,]
+    routes![generate_authenticator, activate_authenticator, activate_authenticator_put, disable_authenticator]
 }
 #[post("/two-factor/get-authenticator", data = "<data>")]
@@ -34,6 +34,10 @@ async fn generate_authenticator(data: Json<PasswordOrOtpData>, headers: Headers,
         _ => (false, crypto::encode_random_bytes::<20>(BASE32)),
     };
+    // Upstream seems to also return `userVerificationToken`, but doesn't seem to be used at all.
+    // It should help prevent TOTP disclosure if someone keeps their vault unlocked.
+    // Since it doesn't seem to be used, and also does not cause any issues, lets leave it out of the response.
+    // See: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Auth/Controllers/TwoFactorController.cs#L94
     Ok(Json(json!({
         "enabled": enabled,
         "key": key,
@@ -95,7 +99,7 @@ async fn activate_authenticator_put(data: Json<EnableAuthenticatorData>, headers
 }
 pub async fn validate_totp_code_str(
-    user_uuid: &str,
+    user_id: &UserId,
     totp_code: &str,
     secret: &str,
     ip: &ClientIp,
@@ -105,11 +109,11 @@ pub async fn validate_totp_code_str(
         err!("TOTP code is not a number");
     }
-    validate_totp_code(user_uuid, totp_code, secret, ip, conn).await
+    validate_totp_code(user_id, totp_code, secret, ip, conn).await
 }
 pub async fn validate_totp_code(
-    user_uuid: &str,
+    user_id: &UserId,
     totp_code: &str,
     secret: &str,
     ip: &ClientIp,
@@ -121,11 +125,11 @@ pub async fn validate_totp_code(
         err!("Invalid TOTP secret")
     };
-    let mut twofactor =
-        match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn).await {
+    let mut twofactor = match TwoFactor::find_by_user_and_type(user_id, TwoFactorType::Authenticator as i32, conn).await
+    {
         Some(tf) => tf,
-        _ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
+        _ => TwoFactor::new(user_id.clone(), TwoFactorType::Authenticator, secret.to_string()),
     };
     // The amount of steps back and forward in time
     // Also check if we need to disable time drifted TOTP codes.
@@ -148,7 +152,7 @@ pub async fn validate_totp_code(
         if generated == totp_code && time_step > twofactor.last_used {
             // If the step does not equals 0 the time is drifted either server or client side.
             if step != 0 {
-                warn!("TOTP Time drift detected. The step offset is {}", step);
+                warn!("TOTP Time drift detected. The step offset is {step}");
             }
             // Save the last used time step so only totp time steps higher then this one are allowed.
@@ -157,7 +161,7 @@ pub async fn validate_totp_code(
             twofactor.save(conn).await?;
             return Ok(());
         } else if generated == totp_code && time_step <= twofactor.last_used {
-            warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
+            warn!("This TOTP or a TOTP code within {steps} steps back or forward has already been used!");
             err!(
                 format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
                 ErrorEvent {
@@ -175,3 +179,47 @@ pub async fn validate_totp_code(
         }
     );
 }
+#[derive(Debug, Deserialize)]
+#[serde(rename_all = "camelCase")]
+struct DisableAuthenticatorData {
+    key: String,
+    master_password_hash: String,
+    r#type: NumberOrString,
+}
+#[delete("/two-factor/authenticator", data = "<data>")]
+async fn disable_authenticator(data: Json<DisableAuthenticatorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
+    let user = headers.user;
+    let type_ = data.r#type.into_i32()?;
+    if !user.check_valid_password(&data.master_password_hash) {
+        err!("Invalid password");
+    }
+    if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
+        if twofactor.data == data.key {
+            twofactor.delete(&mut conn).await?;
+            log_user_event(
+                EventType::UserDisabled2fa as i32,
+                &user.uuid,
+                headers.device.atype,
+                &headers.ip.ip,
+                &mut conn,
+            )
+            .await;
+        } else {
+            err!(format!("TOTP key for user {} does not match recorded value, cannot deactivate", &user.email));
+        }
+    }
+    if TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty() {
+        super::enforce_2fa_policy(&user, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await?;
+    }
+    Ok(Json(json!({
+        "enabled": false,
+        "keys": type_,
+        "object": "twoFactorProvider"
+    })))
+}
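The drift handling changed above can be summarized: a code is accepted only when its time step lies within a small window around the server's current step AND is strictly newer than the last accepted step (replay protection). A std-only sketch of just that window logic, with the HMAC-based code generation itself elided:

```rust
// Std-only sketch of the acceptance window in validate_totp_code: a candidate
// time step is valid only if it is within ±`steps` of the current step AND
// strictly greater than the last step already used (blocks replays).
fn step_is_acceptable(current_step: i64, candidate_step: i64, last_used: i64, steps: i64) -> bool {
    (candidate_step - current_step).abs() <= steps && candidate_step > last_used
}

fn main() {
    // Current step 1000, allow 1 step of drift, last accepted step was 999.
    assert!(step_is_acceptable(1000, 1000, 999, 1)); // fresh code
    assert!(!step_is_acceptable(1000, 999, 999, 1)); // replayed step
    assert!(!step_is_acceptable(1000, 1003, 999, 1)); // outside the drift window
    println!("ok");
}
```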

View File

@@ -11,7 +11,7 @@ use crate::{
     auth::Headers,
     crypto,
     db::{
-        models::{EventType, TwoFactor, TwoFactorType, User},
+        models::{EventType, TwoFactor, TwoFactorType, User, UserId},
         DbConn,
     },
     error::MapResult,
@@ -26,8 +26,8 @@ pub fn routes() -> Vec<Route> {
 #[derive(Serialize, Deserialize)]
 struct DuoData {
     host: String, // Duo API hostname
-    ik: String,   // integration key
-    sk: String,   // secret key
+    ik: String,   // client id
+    sk: String,   // client secret
 }
 impl DuoData {
@@ -111,13 +111,16 @@ async fn get_duo(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbCo
         json!({
             "enabled": enabled,
             "host": data.host,
-            "secretKey": data.sk,
-            "integrationKey": data.ik,
+            "clientSecret": data.sk,
+            "clientId": data.ik,
             "object": "twoFactorDuo"
         })
     } else {
         json!({
             "enabled": enabled,
+            "host": null,
+            "clientSecret": null,
+            "clientId": null,
             "object": "twoFactorDuo"
         })
     };
@@ -129,8 +132,8 @@ async fn get_duo(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbCo
 #[serde(rename_all = "camelCase")]
 struct EnableDuoData {
     host: String,
-    secret_key: String,
-    integration_key: String,
+    client_secret: String,
+    client_id: String,
     master_password_hash: Option<String>,
     otp: Option<String>,
 }
@@ -139,8 +142,8 @@ impl From<EnableDuoData> for DuoData {
     fn from(d: EnableDuoData) -> Self {
         Self {
             host: d.host,
-            ik: d.integration_key,
-            sk: d.secret_key,
+            ik: d.client_id,
+            sk: d.client_secret,
         }
     }
 }
@@ -151,7 +154,7 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
         st.is_empty() || s == DISABLED_MESSAGE_DEFAULT
     }
-    !empty_or_default(&data.host) && !empty_or_default(&data.secret_key) && !empty_or_default(&data.integration_key)
+    !empty_or_default(&data.host) && !empty_or_default(&data.client_secret) && !empty_or_default(&data.client_id)
 }
 #[post("/two-factor/duo", data = "<data>")]
@@ -186,8 +189,8 @@ async fn activate_duo(data: Json<EnableDuoData>, headers: Headers, mut conn: DbC
     Ok(Json(json!({
         "enabled": true,
         "host": data.host,
-        "secretKey": data.sk,
-        "integrationKey": data.ik,
+        "clientSecret": data.sk,
+        "clientId": data.ik,
         "object": "twoFactorDuo"
     })))
 }
@@ -202,7 +205,7 @@ async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData)
     use std::str::FromStr;
     // https://duo.com/docs/authapi#api-details
-    let url = format!("https://{}{}", &data.host, path);
+    let url = format!("https://{}{path}", &data.host);
     let date = Utc::now().to_rfc2822();
     let username = &data.ik;
     let fields = [&date, method, &data.host, path, params];
@@ -228,11 +231,11 @@ const AUTH_PREFIX: &str = "AUTH";
 const DUO_PREFIX: &str = "TX";
 const APP_PREFIX: &str = "APP";
-async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
+async fn get_user_duo_data(user_id: &UserId, conn: &mut DbConn) -> DuoStatus {
     let type_ = TwoFactorType::Duo as i32;
     // If the user doesn't have an entry, disabled
-    let Some(twofactor) = TwoFactor::find_by_user_and_type(uuid, type_, conn).await else {
+    let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, type_, conn).await else {
         return DuoStatus::Disabled(DuoData::global().is_some());
     };
@@ -274,9 +277,9 @@ pub async fn generate_duo_signature(email: &str, conn: &mut DbConn) -> ApiResult
 fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64) -> String {
     let val = format!("{email}|{ikey}|{expire}");
-    let cookie = format!("{}|{}", prefix, BASE64.encode(val.as_bytes()));
-    format!("{}|{}", cookie, crypto::hmac_sign(key, &cookie))
+    let cookie = format!("{prefix}|{}", BASE64.encode(val.as_bytes()));
+    format!("{cookie}|{}", crypto::hmac_sign(key, &cookie))
 }
 pub async fn validate_duo_login(email: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
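The `sign_duo_values` change above only swaps positional `format!` arguments for captured identifiers; the signed value keeps the shape `<PREFIX>|base64(email|ikey|expire)|hmac(cookie)`. A sketch of that concatenation scheme, where `encode` and `hmac_sign` are toy std-only stand-ins (hex and a tagged string) for the real BASE64 encoder and `crypto::hmac_sign`:

```rust
// Toy stand-in for a base64 encoder: hex-encodes the bytes instead.
fn encode(s: &str) -> String {
    s.bytes().map(|b| format!("{b:02x}")).collect()
}

// Placeholder: a real implementation computes an HMAC over `data` with `key`.
fn hmac_sign(key: &str, data: &str) -> String {
    format!("hmac({key},{data})")
}

// Mirrors the concatenation shape from the diff: prefix, encoded payload, signature.
fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64) -> String {
    let val = format!("{email}|{ikey}|{expire}");
    let cookie = format!("{prefix}|{}", encode(&val));
    format!("{cookie}|{}", hmac_sign(key, &cookie))
}

fn main() {
    let signed = sign_duo_values("secret", "user@example.com", "IKEY", "TX", 1700000000);
    assert!(signed.starts_with("TX|"));
    assert!(signed.split('|').count() >= 3);
    println!("{signed}");
}
```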

View File

@@ -10,7 +10,7 @@ use crate::{
     api::{core::two_factor::duo::get_duo_keys_email, EmptyResult},
     crypto,
     db::{
-        models::{EventType, TwoFactorDuoContext},
+        models::{DeviceId, EventType, TwoFactorDuoContext},
         DbConn, DbPool,
     },
     error::Error,
@@ -21,7 +21,7 @@ use url::Url;
 // The location on this service that Duo should redirect users to. For us, this is a bridge
 // built in to the Bitwarden clients.
-// See: https://github.com/bitwarden/clients/blob/main/apps/web/src/connectors/duo-redirect.ts
+// See: https://github.com/bitwarden/clients/blob/5fb46df3415aefced0b52f2db86c873962255448/apps/web/src/connectors/duo-redirect.ts
 const DUO_REDIRECT_LOCATION: &str = "duo-redirect-connector.html";
 // Number of seconds that a JWT we generate for Duo should be valid for.
@@ -182,7 +182,7 @@ impl DuoClient {
             HealthCheckResponse::HealthFail {
                 message,
                 message_detail,
-            } => err!(format!("Duo health check FAIL response, msg: {}, detail: {}", message, message_detail)),
+            } => err!(format!("Duo health check FAIL response, msg: {message}, detail: {message_detail}")),
         };
         if health_stat != "OK" {
@@ -275,7 +275,7 @@ impl DuoClient {
         let status_code = res.status();
         if status_code != StatusCode::OK {
-            err!(format!("Failure response from Duo: {}", status_code))
+            err!(format!("Failure response from Duo: {status_code}"))
         }
         let response: IdTokenResponse = match res.json::<IdTokenResponse>().await {
@@ -379,7 +379,7 @@ fn make_callback_url(client_name: &str) -> Result<String, Error> {
 pub async fn get_duo_auth_url(
     email: &str,
     client_id: &str,
-    device_identifier: &String,
+    device_identifier: &DeviceId,
     conn: &mut DbConn,
 ) -> Result<String, Error> {
     let (ik, sk, _, host) = get_duo_keys_email(email, conn).await?;
@@ -417,7 +417,7 @@ pub async fn validate_duo_login(
     email: &str,
     two_factor_token: &str,
     client_id: &str,
-    device_identifier: &str,
+    device_identifier: &DeviceId,
     conn: &mut DbConn,
 ) -> EmptyResult {
     // Result supplied to us by clients in the form "<authz code>|<state>"
@@ -478,7 +478,7 @@ pub async fn validate_duo_login(
         Err(e) => return Err(e),
     };
-    let d: Digest = digest(&SHA512_256, format!("{}{}", ctx.nonce, device_identifier).as_bytes());
+    let d: Digest = digest(&SHA512_256, format!("{}{device_identifier}", ctx.nonce).as_bytes());
     let hash: String = HEXLOWER.encode(d.as_ref());
     match client.exchange_authz_code_for_result(code, email, hash.as_str()).await {

View File

@@ -10,7 +10,7 @@ use crate::{
     auth::Headers,
     crypto,
     db::{
-        models::{EventType, TwoFactor, TwoFactorType, User},
+        models::{EventType, TwoFactor, TwoFactorType, User, UserId},
         DbConn,
     },
     error::{Error, MapResult},
@@ -59,10 +59,9 @@ async fn send_email_login(data: Json<SendEmailLoginData>, mut conn: DbConn) -> E
 }
 /// Generate the token, save the data for later verification and send email to user
-pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+pub async fn send_token(user_id: &UserId, conn: &mut DbConn) -> EmptyResult {
     let type_ = TwoFactorType::Email as i32;
-    let mut twofactor =
-        TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await.map_res("Two factor not found")?;
+    let mut twofactor = TwoFactor::find_by_user_and_type(user_id, type_, conn).await.map_res("Two factor not found")?;
     let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
@@ -198,14 +197,20 @@ async fn email(data: Json<EmailData>, headers: Headers, mut conn: DbConn) -> Jso
 }
 /// Validate the email code when used as TwoFactor token mechanism
-pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &mut DbConn) -> EmptyResult {
+pub async fn validate_email_code_str(
+    user_id: &UserId,
+    token: &str,
+    data: &str,
+    ip: &std::net::IpAddr,
+    conn: &mut DbConn,
+) -> EmptyResult {
     let mut email_data = EmailTokenData::from_json(data)?;
-    let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
+    let mut twofactor = TwoFactor::find_by_user_and_type(user_id, TwoFactorType::Email as i32, conn)
         .await
         .map_res("Two factor not found")?;
     let Some(issued_token) = &email_data.last_token else {
         err!(
-            "No token available",
+            format!("No token available! IP: {ip}"),
             ErrorEvent {
                 event: EventType::UserFailedLogIn2fa
             }
@@ -221,7 +226,7 @@ pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, c
         twofactor.save(conn).await?;
         err!(
-            "Token is invalid",
+            format!("Token is invalid! IP: {ip}"),
             ErrorEvent {
                 event: EventType::UserFailedLogIn2fa
             }
@@ -324,11 +329,11 @@ pub fn obscure_email(email: &str) -> String {
         }
     };
-    format!("{}@{}", new_name, &domain)
+    format!("{new_name}@{domain}")
 }
-pub async fn find_and_activate_email_2fa(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
-    if let Some(user) = User::find_by_uuid(user_uuid, conn).await {
+pub async fn find_and_activate_email_2fa(user_id: &UserId, conn: &mut DbConn) -> EmptyResult {
+    if let Some(user) = User::find_by_uuid(user_id, conn).await {
         activate_email_2fa(&user, conn).await
     } else {
         err!("User not found!");
View File

@@ -173,17 +173,16 @@ async fn disable_twofactor_put(data: Json<DisableTwoFactorData>, headers: Header
 pub async fn enforce_2fa_policy(
     user: &User,
-    act_uuid: &str,
+    act_user_id: &UserId,
     device_type: i32,
     ip: &std::net::IpAddr,
     conn: &mut DbConn,
 ) -> EmptyResult {
-    for member in UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn)
-        .await
-        .into_iter()
+    for member in
+        Membership::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn).await.into_iter()
     {
         // Policy only applies to non-Owner/non-Admin members who have accepted joining the org
-        if member.atype < UserOrgType::Admin {
+        if member.atype < MembershipType::Admin {
             if CONFIG.mail_enabled() {
                 let org = Organization::find_by_uuid(&member.org_uuid, conn).await.unwrap();
                 mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
@@ -196,7 +195,7 @@ pub async fn enforce_2fa_policy(
                 EventType::OrganizationUserRevoked as i32,
                 &member.uuid,
                 &member.org_uuid,
-                act_uuid,
+                act_user_id,
                 device_type,
                 ip,
                 conn,
@@ -209,16 +208,16 @@ pub async fn enforce_2fa_policy(
 }
 pub async fn enforce_2fa_policy_for_org(
-    org_uuid: &str,
-    act_uuid: &str,
+    org_id: &OrganizationId,
+    act_user_id: &UserId,
     device_type: i32,
     ip: &std::net::IpAddr,
     conn: &mut DbConn,
 ) -> EmptyResult {
-    let org = Organization::find_by_uuid(org_uuid, conn).await.unwrap();
-    for member in UserOrganization::find_confirmed_by_org(org_uuid, conn).await.into_iter() {
+    let org = Organization::find_by_uuid(org_id, conn).await.unwrap();
+    for member in Membership::find_confirmed_by_org(org_id, conn).await.into_iter() {
         // Don't enforce the policy for Admins and Owners.
-        if member.atype < UserOrgType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
+        if member.atype < MembershipType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
             if CONFIG.mail_enabled() {
                 let user = User::find_by_uuid(&member.user_uuid, conn).await.unwrap();
                 mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
@@ -230,8 +229,8 @@ pub async fn enforce_2fa_policy_for_org(
         log_event(
             EventType::OrganizationUserRevoked as i32,
             &member.uuid,
-            org_uuid,
-            act_uuid,
+            org_id,
+            act_user_id,
             device_type,
             ip,
             conn,
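The rename above (`UserOrgType` → `MembershipType`) doesn't change the filter itself: only confirmed members ranked below Admin and lacking any 2FA entry are revoked. A std-only sketch of that predicate, where `MembershipType` and `Member` are simplified stand-ins (the real types live in `db::models` and define their own ordering):

```rust
// Simplified stand-in for the membership rank; derived Ord gives
// User < Manager < Admin < Owner, matching "below Admin" in the diff comment.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum MembershipType {
    User,
    Manager,
    Admin,
    Owner,
}

struct Member {
    atype: MembershipType,
    has_two_factor: bool,
}

// Mirrors the condition in enforce_2fa_policy_for_org: revoke only non-Admin,
// non-Owner members with no two-factor method configured.
fn should_revoke(member: &Member) -> bool {
    member.atype < MembershipType::Admin && !member.has_two_factor
}

fn main() {
    assert!(should_revoke(&Member { atype: MembershipType::User, has_two_factor: false }));
    assert!(!should_revoke(&Member { atype: MembershipType::Owner, has_two_factor: false }));
    assert!(!should_revoke(&Member { atype: MembershipType::User, has_two_factor: true }));
    println!("ok");
}
```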

View File

@@ -6,7 +6,7 @@ use crate::{
     auth::Headers,
     crypto,
     db::{
-        models::{TwoFactor, TwoFactorType},
+        models::{TwoFactor, TwoFactorType, UserId},
         DbConn,
     },
     error::{Error, MapResult},
@@ -104,11 +104,11 @@ async fn verify_otp(data: Json<ProtectedActionVerify>, headers: Headers, mut con
 pub async fn validate_protected_action_otp(
     otp: &str,
-    user_uuid: &str,
+    user_id: &UserId,
     delete_if_valid: bool,
     conn: &mut DbConn,
 ) -> EmptyResult {
-    let pa = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::ProtectedActions as i32, conn)
+    let pa = TwoFactor::find_by_user_and_type(user_id, TwoFactorType::ProtectedActions as i32, conn)
         .await
         .map_res("Protected action token not found, try sending the code again or restart the process")?;
     let mut pa_data = ProtectedActionData::from_json(&pa.data)?;

View File

@@ -11,7 +11,7 @@ use crate::{
     },
     auth::Headers,
     db::{
-        models::{EventType, TwoFactor, TwoFactorType},
+        models::{EventType, TwoFactor, TwoFactorType, UserId},
         DbConn,
     },
     error::Error,
@@ -148,7 +148,7 @@ async fn generate_webauthn_challenge(data: Json<PasswordOrOtpData>, headers: Hea
     )?;
     let type_ = TwoFactorType::WebauthnRegisterChallenge;
-    TwoFactor::new(user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
+    TwoFactor::new(user.uuid.clone(), type_, serde_json::to_string(&state)?).save(&mut conn).await?;
     let mut challenge_value = serde_json::to_value(challenge.public_key)?;
     challenge_value["status"] = "ok".into();
@@ -352,20 +352,20 @@ async fn delete_webauthn(data: Json<DeleteU2FData>, headers: Headers, mut conn:
 }
 pub async fn get_webauthn_registrations(
-    user_uuid: &str,
+    user_id: &UserId,
     conn: &mut DbConn,
 ) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
     let type_ = TwoFactorType::Webauthn as i32;
-    match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
+    match TwoFactor::find_by_user_and_type(user_id, type_, conn).await {
         Some(tf) => Ok((tf.enabled, serde_json::from_str(&tf.data)?)),
         None => Ok((false, Vec::new())), // If no data, return empty list
     }
 }
-pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> JsonResult {
+pub async fn generate_webauthn_login(user_id: &UserId, conn: &mut DbConn) -> JsonResult {
     // Load saved credentials
     let creds: Vec<Credential> =
-        get_webauthn_registrations(user_uuid, conn).await?.1.into_iter().map(|r| r.credential).collect();
+        get_webauthn_registrations(user_id, conn).await?.1.into_iter().map(|r| r.credential).collect();
     if creds.is_empty() {
         err!("No Webauthn devices registered")
@@ -376,7 +376,7 @@ pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> Json
     let (response, state) = WebauthnConfig::load().generate_challenge_authenticate_options(creds, Some(ext))?;
     // Save the challenge state for later validation
-    TwoFactor::new(user_uuid.into(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
+    TwoFactor::new(user_id.clone(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
         .save(conn)
         .await?;
@@ -384,9 +384,9 @@ pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> Json
     Ok(Json(serde_json::to_value(response.public_key)?))
 }
-pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
+pub async fn validate_webauthn_login(user_id: &UserId, response: &str, conn: &mut DbConn) -> EmptyResult {
     let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
-    let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
+    let state = match TwoFactor::find_by_user_and_type(user_id, type_, conn).await {
         Some(tf) => {
             let state: AuthenticationState = serde_json::from_str(&tf.data)?;
             tf.delete(conn).await?;
@@ -403,7 +403,7 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut
     let rsp: PublicKeyCredentialCopy = serde_json::from_str(response)?;
     let rsp: PublicKeyCredential = rsp.into();
-    let mut registrations = get_webauthn_registrations(user_uuid, conn).await?.1;
+    let mut registrations = get_webauthn_registrations(user_id, conn).await?.1;
     // If the credential we received is migrated from U2F, enable the U2F compatibility
     //let use_u2f = registrations.iter().any(|r| r.migrated && r.credential.cred_id == rsp.raw_id.0);
@@ -413,7 +413,7 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut
         if &reg.credential.cred_id == cred_id {
             reg.credential.counter = auth_data.counter;
-            TwoFactor::new(user_uuid.to_string(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
+            TwoFactor::new(user_id.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
                 .save(conn)
                 .await?;
             return Ok(());
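The last hunk above is the counter-update step of `validate_webauthn_login`: find the stored registration whose credential id matches the response and bump its signature counter before persisting. A std-only sketch of that lookup-and-update, with simplified stand-in types (the real code uses the webauthn library's `Credential`):

```rust
// Simplified stand-in for a stored WebAuthn registration.
#[derive(Debug, PartialEq)]
struct Registration {
    cred_id: Vec<u8>,
    counter: u32,
}

// Find the registration matching `cred_id` and update its signature counter;
// returns false when no stored credential matches.
fn update_counter(registrations: &mut [Registration], cred_id: &[u8], new_counter: u32) -> bool {
    for reg in registrations.iter_mut() {
        if reg.cred_id == cred_id {
            reg.counter = new_counter;
            return true;
        }
    }
    false
}

fn main() {
    let mut regs = vec![
        Registration { cred_id: vec![1, 2], counter: 5 },
        Registration { cred_id: vec![3, 4], counter: 7 },
    ];
    assert!(update_counter(&mut regs, &[3, 4], 8));
    assert_eq!(regs[1].counter, 8);
    assert!(!update_counter(&mut regs, &[9, 9], 1));
    println!("ok");
}
```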

View File

@@ -92,10 +92,10 @@ async fn generate_yubikey(data: Json<PasswordOrOtpData>, headers: Headers, mut c
     data.validate(&user, false, &mut conn).await?;
-    let user_uuid = &user.uuid;
+    let user_id = &user.uuid;
     let yubikey_type = TwoFactorType::YubiKey as i32;
-    let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &mut conn).await;
+    let r = TwoFactor::find_by_user_and_type(user_id, yubikey_type, &mut conn).await;
     if let Some(r) = r {
         let yubikey_metadata: YubikeyMetadata = serde_json::from_str(&r.data)?;

View File

@@ -63,15 +63,18 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
 // Build Regex only once since this takes a lot of time.
 static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
+// The function name `icon_external` is checked in the `on_response` function in `AppHeaders`
+// It is used to prevent sending a specific header which breaks icon downloads.
+// If this function needs to be renamed, also adjust the code in `util.rs`
 #[get("/<domain>/icon.png")]
 fn icon_external(domain: &str) -> Option<Redirect> {
     if !is_valid_domain(domain) {
-        warn!("Invalid domain: {}", domain);
+        warn!("Invalid domain: {domain}");
         return None;
     }
     if should_block_address(domain) {
-        warn!("Blocked address: {}", domain);
+        warn!("Blocked address: {domain}");
         return None;
     }
@@ -93,7 +96,7 @@ async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
     const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
     if !is_valid_domain(domain) {
-        warn!("Invalid domain: {}", domain);
+        warn!("Invalid domain: {domain}");
         return Cached::ttl(
             (ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
             CONFIG.icon_cache_negttl(),
@@ -102,7 +105,7 @@ async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
     }
     if should_block_address(domain) {
-        warn!("Blocked address: {}", domain);
+        warn!("Blocked address: {domain}");
         return Cached::ttl(
             (ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
             CONFIG.icon_cache_negttl(),
@@ -127,7 +130,7 @@ fn is_valid_domain(domain: &str) -> bool {
     // If parsing the domain fails using Url, it will not work with reqwest.
     if let Err(parse_error) = url::Url::parse(format!("https://{domain}").as_str()) {
-        debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
+        debug!("Domain parse error: '{domain}' - {parse_error:?}");
         return false;
     } else if domain.is_empty()
         || domain.contains("..")
@@ -136,18 +139,17 @@ fn is_valid_domain(domain: &str) -> bool {
         || domain.ends_with('-')
     {
         debug!(
-            "Domain validation error: '{}' is either empty, contains '..', starts with an '.', starts or ends with a '-'",
-            domain
+            "Domain validation error: '{domain}' is either empty, contains '..', starts with an '.', starts or ends with a '-'"
         );
         return false;
     } else if domain.len() > 255 {
-        debug!("Domain validation error: '{}' exceeds 255 characters", domain);
+        debug!("Domain validation error: '{domain}' exceeds 255 characters");
         return false;
     }
     for c in domain.chars() {
        if !c.is_alphanumeric() && !ALLOWED_CHARS.contains(c) {
-            debug!("Domain validation error: '{}' contains an invalid character '{}'", domain, c);
+            debug!("Domain validation error: '{domain}' contains an invalid character '{c}'");
return false; return false;
} }
} }
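The repeated `{}`-to-`{domain}` changes in this hunk switch the logging macros to Rust 2021 inline format args, which capture locals by name. Both forms produce identical output; a minimal sketch with a hypothetical helper:

```rust
// Inline format args capture the local variable by name, replacing the
// older positional form. `invalid_domain_msg` is an illustrative helper,
// not part of the patch.
fn invalid_domain_msg(domain: &str) -> String {
    format!("Invalid domain: {domain}")
}

fn main() {
    let domain = "exa|mple.com";
    // Equivalent to the pre-patch positional form:
    assert_eq!(invalid_domain_msg(domain), format!("Invalid domain: {}", domain));
}
```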
@@ -156,7 +158,7 @@ fn is_valid_domain(domain: &str) -> bool {
} }
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> { async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain); let path = format!("{}/{domain}.png", CONFIG.icon_cache_folder());
// Check for expiration of negatively cached copy // Check for expiration of negatively cached copy
if icon_is_negcached(&path).await { if icon_is_negcached(&path).await {
@@ -164,10 +166,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
} }
if let Some(icon) = get_cached_icon(&path).await { if let Some(icon) = get_cached_icon(&path).await {
let icon_type = match get_icon_type(&icon) { let icon_type = get_icon_type(&icon).unwrap_or("x-icon");
Some(x) => x,
_ => "x-icon",
};
return Some((icon, icon_type.to_string())); return Some((icon, icon_type.to_string()));
} }
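The match-with-fallback on `get_icon_type` collapses to `unwrap_or`, the idiomatic way to substitute a default for `None`. A sketch of the same shape, where `get_icon_type` is a toy stand-in that only sniffs the PNG magic bytes:

```rust
// Toy stand-in for the real content-type sniffer: recognizes only PNG.
fn get_icon_type(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        [0x89, b'P', b'N', b'G', ..] => Some("png"),
        _ => None,
    }
}

fn main() {
    let png_header = [0x89, b'P', b'N', b'G', 0x0D, 0x0A];
    // unwrap_or replaces the three-line match with an explicit default.
    assert_eq!(get_icon_type(&png_header).unwrap_or("x-icon"), "png");
    assert_eq!(get_icon_type(&[0u8; 4]).unwrap_or("x-icon"), "x-icon");
}
```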
@@ -189,7 +188,7 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
return None; return None;
} }
warn!("Unable to download icon: {:?}", e); warn!("Unable to download icon: {e:?}");
let miss_indicator = path + ".miss"; let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]).await; save_icon(&miss_indicator, &[]).await;
None None
@@ -231,7 +230,7 @@ async fn icon_is_negcached(path: &str) -> bool {
// No longer negatively cached, drop the marker // No longer negatively cached, drop the marker
Ok(true) => { Ok(true) => {
if let Err(e) = remove_file(&miss_indicator).await { if let Err(e) = remove_file(&miss_indicator).await {
error!("Could not remove negative cache indicator for icon {:?}: {:?}", path, e); error!("Could not remove negative cache indicator for icon {path:?}: {e:?}");
} }
false false
} }
@@ -291,9 +290,7 @@ fn get_favicons_node(dom: Tokenizer<StringReader<'_>, FaviconEmitter>, icons: &m
TAG_HEAD if token.closing => { TAG_HEAD if token.closing => {
break; break;
} }
_ => { _ => {}
continue;
}
} }
} }
@@ -533,10 +530,10 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
// Check if the icon type is allowed, else try an icon from the list. // Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&body); icon_type = get_icon_type(&body);
if icon_type.is_none() { if icon_type.is_none() {
debug!("Icon from {} data:image uri, is not a valid image type", domain); debug!("Icon from {domain} data:image uri, is not a valid image type");
continue; continue;
} }
info!("Extracted icon from data:image uri for {}", domain); info!("Extracted icon from data:image uri for {domain}");
buffer = body.freeze(); buffer = body.freeze();
break; break;
} }
@@ -576,7 +573,7 @@ async fn save_icon(path: &str, icon: &[u8]) {
create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder"); create_dir_all(&CONFIG.icon_cache_folder()).await.expect("Error creating icon cache folder");
} }
Err(e) => { Err(e) => {
warn!("Unable to save icon: {:?}", e); warn!("Unable to save icon: {e:?}");
} }
} }
} }


@@ -17,21 +17,26 @@ use crate::{
push::register_push_device, push::register_push_device,
ApiResult, EmptyResult, JsonResult, ApiResult, EmptyResult, JsonResult,
}, },
auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp}, auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp, ClientVersion},
db::{models::*, DbConn}, db::{models::*, DbConn},
error::MapResult, error::MapResult,
mail, util, CONFIG, mail, util, CONFIG,
}; };
pub fn routes() -> Vec<Route> { pub fn routes() -> Vec<Route> {
routes![login, prelogin, identity_register] routes![login, prelogin, identity_register, register_verification_email, register_finish]
} }
#[post("/connect/token", data = "<data>")] #[post("/connect/token", data = "<data>")]
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn) -> JsonResult { async fn login(
data: Form<ConnectData>,
client_header: ClientHeaders,
client_version: Option<ClientVersion>,
mut conn: DbConn,
) -> JsonResult {
let data: ConnectData = data.into_inner(); let data: ConnectData = data.into_inner();
let mut user_uuid: Option<String> = None; let mut user_id: Option<UserId> = None;
let login_result = match data.grant_type.as_ref() { let login_result = match data.grant_type.as_ref() {
"refresh_token" => { "refresh_token" => {
@@ -48,7 +53,7 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
_check_is_some(&data.device_name, "device_name cannot be blank")?; _check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?; _check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, &mut user_uuid, &mut conn, &client_header.ip).await _password_login(data, &mut user_id, &mut conn, &client_header.ip, &client_version).await
} }
"client_credentials" => { "client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?; _check_is_some(&data.client_id, "client_id cannot be blank")?;
@@ -59,17 +64,17 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
_check_is_some(&data.device_name, "device_name cannot be blank")?; _check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?; _check_is_some(&data.device_type, "device_type cannot be blank")?;
_api_key_login(data, &mut user_uuid, &mut conn, &client_header.ip).await _api_key_login(data, &mut user_id, &mut conn, &client_header.ip).await
} }
t => err!("Invalid type", t), t => err!("Invalid type", t),
}; };
if let Some(user_uuid) = user_uuid { if let Some(user_id) = user_id {
match &login_result { match &login_result {
Ok(_) => { Ok(_) => {
log_user_event( log_user_event(
EventType::UserLoggedIn as i32, EventType::UserLoggedIn as i32,
&user_uuid, &user_id,
client_header.device_type, client_header.device_type,
&client_header.ip.ip, &client_header.ip.ip,
&mut conn, &mut conn,
@@ -80,7 +85,7 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
if let Some(ev) = e.get_event() { if let Some(ev) = e.get_event() {
log_user_event( log_user_event(
ev.event as i32, ev.event as i32,
&user_uuid, &user_id,
client_header.device_type, client_header.device_type,
&client_header.ip.ip, &client_header.ip.ip,
&mut conn, &mut conn,
@@ -111,8 +116,8 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
// Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out // Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156 // See: https://github.com/dani-garcia/vaultwarden/issues/4156
// --- // ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await; // let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec); let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec, data.client_id);
device.save(conn).await?; device.save(conn).await?;
let result = json!({ let result = json!({
@@ -141,9 +146,10 @@ struct MasterPasswordPolicy {
async fn _password_login( async fn _password_login(
data: ConnectData, data: ConnectData,
user_uuid: &mut Option<String>, user_id: &mut Option<UserId>,
conn: &mut DbConn, conn: &mut DbConn,
ip: &ClientIp, ip: &ClientIp,
client_version: &Option<ClientVersion>,
) -> JsonResult { ) -> JsonResult {
// Validate scope // Validate scope
let scope = data.scope.as_ref().unwrap(); let scope = data.scope.as_ref().unwrap();
@@ -158,17 +164,17 @@ async fn _password_login(
// Get the user // Get the user
let username = data.username.as_ref().unwrap().trim(); let username = data.username.as_ref().unwrap().trim();
let Some(mut user) = User::find_by_mail(username, conn).await else { let Some(mut user) = User::find_by_mail(username, conn).await else {
err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)) err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {username}.", ip.ip))
}; };
// Set the user_uuid here to be passed back used for event logging. // Set the user_id here to be passed back used for event logging.
*user_uuid = Some(user.uuid.clone()); *user_id = Some(user.uuid.clone());
// Check if the user is disabled // Check if the user is disabled
if !user.enabled { if !user.enabled {
err!( err!(
"This user has been disabled", "This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username), format!("IP: {}. Username: {username}.", ip.ip),
ErrorEvent { ErrorEvent {
event: EventType::UserFailedLogIn event: EventType::UserFailedLogIn
} }
@@ -178,12 +184,11 @@ async fn _password_login(
let password = data.password.as_ref().unwrap(); let password = data.password.as_ref().unwrap();
// If we get an auth request, we don't check the user's password, but the access code of the auth request // If we get an auth request, we don't check the user's password, but the access code of the auth request
if let Some(ref auth_request_uuid) = data.auth_request { if let Some(ref auth_request_id) = data.auth_request {
let Some(auth_request) = AuthRequest::find_by_uuid_and_user(auth_request_uuid.as_str(), &user.uuid, conn).await let Some(auth_request) = AuthRequest::find_by_uuid_and_user(auth_request_id, &user.uuid, conn).await else {
else {
err!( err!(
"Auth request not found. Try again.", "Auth request not found. Try again.",
format!("IP: {}. Username: {}.", ip.ip, username), format!("IP: {}. Username: {username}.", ip.ip),
ErrorEvent { ErrorEvent {
event: EventType::UserFailedLogIn, event: EventType::UserFailedLogIn,
} }
@@ -201,7 +206,7 @@ async fn _password_login(
{ {
err!( err!(
"Username or access code is incorrect. Try again", "Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username), format!("IP: {}. Username: {username}.", ip.ip),
ErrorEvent { ErrorEvent {
event: EventType::UserFailedLogIn, event: EventType::UserFailedLogIn,
} }
@@ -210,7 +215,7 @@ async fn _password_login(
} else if !user.check_valid_password(password) { } else if !user.check_valid_password(password) {
err!( err!(
"Username or password is incorrect. Try again", "Username or password is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username), format!("IP: {}. Username: {username}.", ip.ip),
ErrorEvent { ErrorEvent {
event: EventType::UserFailedLogIn, event: EventType::UserFailedLogIn,
} }
@@ -223,7 +228,7 @@ async fn _password_login(
user.set_password(password, None, false, None); user.set_password(password, None, false, None);
if let Err(e) = user.save(conn).await { if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e); error!("Error updating user: {e:#?}");
} }
} }
@@ -242,11 +247,11 @@ async fn _password_login(
user.login_verify_count += 1; user.login_verify_count += 1;
if let Err(e) = user.save(conn).await { if let Err(e) = user.save(conn).await {
error!("Error updating user: {:#?}", e); error!("Error updating user: {e:#?}");
} }
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid).await { if let Err(e) = mail::send_verify_email(&user.email, &user.uuid).await {
error!("Error auto-sending email verification email: {:#?}", e); error!("Error auto-sending email verification email: {e:#?}");
} }
} }
} }
@@ -254,7 +259,7 @@ async fn _password_login(
// We still want the login to fail until they actually verified the email address // We still want the login to fail until they actually verified the email address
err!( err!(
"Please verify your email before trying again.", "Please verify your email before trying again.",
format!("IP: {}. Username: {}.", ip.ip, username), format!("IP: {}. Username: {username}.", ip.ip),
ErrorEvent { ErrorEvent {
event: EventType::UserFailedLogIn event: EventType::UserFailedLogIn
} }
@@ -263,11 +268,11 @@ async fn _password_login(
let (mut device, new_device) = get_device(&data, conn, &user).await; let (mut device, new_device) = get_device(&data, conn, &user).await;
let twofactor_token = twofactor_auth(&user, &data, &mut device, ip, conn).await?; let twofactor_token = twofactor_auth(&user, &data, &mut device, ip, client_version, conn).await?;
if CONFIG.mail_enabled() && new_device { if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await { if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await {
error!("Error sending new device email: {:#?}", e); error!("Error sending new device email: {e:#?}");
if CONFIG.require_device_email() { if CONFIG.require_device_email() {
err!( err!(
@@ -291,11 +296,11 @@ async fn _password_login(
// Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out // Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156 // See: https://github.com/dani-garcia/vaultwarden/issues/4156
// --- // ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await; // let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec); let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec, data.client_id);
device.save(conn).await?; device.save(conn).await?;
// Fetch all valid Master Password Policies and merge them into one with all true's and larges numbers as one policy // Fetch all valid Master Password Policies and merge them into one with all trues and largest numbers as one policy
let master_password_policies: Vec<MasterPasswordPolicy> = let master_password_policies: Vec<MasterPasswordPolicy> =
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy( OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
&user.uuid, &user.uuid,
@@ -307,6 +312,7 @@ async fn _password_login(
.filter_map(|p| serde_json::from_str(&p.data).ok()) .filter_map(|p| serde_json::from_str(&p.data).ok())
.collect(); .collect();
// NOTE: Upstream still uses PascalCase here for `Object`!
let master_password_policy = if !master_password_policies.is_empty() { let master_password_policy = if !master_password_policies.is_empty() {
let mut mpp_json = json!(master_password_policies.into_iter().reduce(|acc, policy| { let mut mpp_json = json!(master_password_policies.into_iter().reduce(|acc, policy| {
MasterPasswordPolicy { MasterPasswordPolicy {
@@ -319,10 +325,10 @@ async fn _password_login(
enforce_on_login: acc.enforce_on_login || policy.enforce_on_login, enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
} }
})); }));
mpp_json["object"] = json!("masterPasswordPolicy"); mpp_json["Object"] = json!("masterPasswordPolicy");
mpp_json mpp_json
} else { } else {
json!({"object": "masterPasswordPolicy"}) json!({"Object": "masterPasswordPolicy"})
}; };
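The merge above folds all applicable policies into one via `Iterator::reduce`, taking the OR of booleans and the maximum of numeric limits so the combined policy is at least as strict as each input. A sketch of that fold, with illustrative field names rather than the exact serde-mapped ones:

```rust
// Illustrative policy struct; field names are assumptions, not the
// patch's exact serde fields.
#[derive(Clone, Copy)]
struct MasterPasswordPolicy {
    min_complexity: u8,
    min_length: u32,
    require_upper: bool,
}

fn main() {
    let policies = vec![
        MasterPasswordPolicy { min_complexity: 2, min_length: 12, require_upper: false },
        MasterPasswordPolicy { min_complexity: 3, min_length: 8, require_upper: true },
    ];
    // reduce pairwise-merges: booleans OR'd, limits maxed, so the result
    // satisfies every source policy.
    let merged = policies
        .into_iter()
        .reduce(|acc, p| MasterPasswordPolicy {
            min_complexity: acc.min_complexity.max(p.min_complexity),
            min_length: acc.min_length.max(p.min_length),
            require_upper: acc.require_upper || p.require_upper,
        })
        .unwrap();
    assert_eq!(merged.min_complexity, 3);
    assert_eq!(merged.min_length, 12);
    assert!(merged.require_upper);
}
```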
let mut result = json!({ let mut result = json!({
@@ -353,13 +359,13 @@ async fn _password_login(
result["TwoFactorToken"] = Value::String(token); result["TwoFactorToken"] = Value::String(token);
} }
info!("User {} logged in successfully. IP: {}", username, ip.ip); info!("User {username} logged in successfully. IP: {}", ip.ip);
Ok(Json(result)) Ok(Json(result))
} }
async fn _api_key_login( async fn _api_key_login(
data: ConnectData, data: ConnectData,
user_uuid: &mut Option<String>, user_id: &mut Option<UserId>,
conn: &mut DbConn, conn: &mut DbConn,
ip: &ClientIp, ip: &ClientIp,
) -> JsonResult { ) -> JsonResult {
@@ -368,7 +374,7 @@ async fn _api_key_login(
// Validate scope // Validate scope
match data.scope.as_ref().unwrap().as_ref() { match data.scope.as_ref().unwrap().as_ref() {
"api" => _user_api_key_login(data, user_uuid, conn, ip).await, "api" => _user_api_key_login(data, user_id, conn, ip).await,
"api.organization" => _organization_api_key_login(data, conn, ip).await, "api.organization" => _organization_api_key_login(data, conn, ip).await,
_ => err!("Scope not supported"), _ => err!("Scope not supported"),
} }
@@ -376,21 +382,22 @@ async fn _api_key_login(
async fn _user_api_key_login( async fn _user_api_key_login(
data: ConnectData, data: ConnectData,
user_uuid: &mut Option<String>, user_id: &mut Option<UserId>,
conn: &mut DbConn, conn: &mut DbConn,
ip: &ClientIp, ip: &ClientIp,
) -> JsonResult { ) -> JsonResult {
// Get the user via the client_id // Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap(); let client_id = data.client_id.as_ref().unwrap();
let Some(client_user_uuid) = client_id.strip_prefix("user.") else { let Some(client_user_id) = client_id.strip_prefix("user.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip)) err!("Malformed client_id", format!("IP: {}.", ip.ip))
}; };
let Some(user) = User::find_by_uuid(client_user_uuid, conn).await else { let client_user_id: UserId = client_user_id.into();
let Some(user) = User::find_by_uuid(&client_user_id, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip)) err!("Invalid client_id", format!("IP: {}.", ip.ip))
}; };
// Set the user_uuid here to be passed back used for event logging. // Set the user_id here to be passed back used for event logging.
*user_uuid = Some(user.uuid.clone()); *user_id = Some(user.uuid.clone());
// Check if the user is disabled // Check if the user is disabled
if !user.enabled { if !user.enabled {
@@ -420,7 +427,7 @@ async fn _user_api_key_login(
if CONFIG.mail_enabled() && new_device { if CONFIG.mail_enabled() && new_device {
let now = Utc::now().naive_utc(); let now = Utc::now().naive_utc();
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await { if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await {
error!("Error sending new device email: {:#?}", e); error!("Error sending new device email: {e:#?}");
if CONFIG.require_device_email() { if CONFIG.require_device_email() {
err!( err!(
@@ -440,8 +447,8 @@ async fn _user_api_key_login(
// Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out // Because this might get used in the future, and is add by the Bitwarden Server, lets keep it, but then commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156 // See: https://github.com/dani-garcia/vaultwarden/issues/4156
// --- // ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await; // let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec); let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec, data.client_id);
device.save(conn).await?; device.save(conn).await?;
info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip); info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
@@ -469,10 +476,11 @@ async fn _user_api_key_login(
async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &ClientIp) -> JsonResult { async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &ClientIp) -> JsonResult {
// Get the org via the client_id // Get the org via the client_id
let client_id = data.client_id.as_ref().unwrap(); let client_id = data.client_id.as_ref().unwrap();
let Some(org_uuid) = client_id.strip_prefix("organization.") else { let Some(org_id) = client_id.strip_prefix("organization.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip)) err!("Malformed client_id", format!("IP: {}.", ip.ip))
}; };
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(org_uuid, conn).await else { let org_id: OrganizationId = org_id.to_string().into();
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(&org_id, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip)) err!("Invalid client_id", format!("IP: {}.", ip.ip))
}; };
@@ -519,6 +527,7 @@ async fn twofactor_auth(
data: &ConnectData, data: &ConnectData,
device: &mut Device, device: &mut Device,
ip: &ClientIp, ip: &ClientIp,
client_version: &Option<ClientVersion>,
conn: &mut DbConn, conn: &mut DbConn,
) -> ApiResult<Option<String>> { ) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(&user.uuid, conn).await; let twofactors = TwoFactor::find_by_user(&user.uuid, conn).await;
@@ -537,7 +546,10 @@ async fn twofactor_auth(
let twofactor_code = match data.two_factor_token { let twofactor_code = match data.two_factor_token {
Some(ref code) => code, Some(ref code) => code,
None => { None => {
err_json!(_json_err_twofactor(&twofactor_ids, &user.uuid, data, conn).await?, "2FA token not provided") err_json!(
_json_err_twofactor(&twofactor_ids, &user.uuid, data, client_version, conn).await?,
"2FA token not provided"
)
} }
}; };
@@ -574,7 +586,7 @@ async fn twofactor_auth(
} }
} }
Some(TwoFactorType::Email) => { Some(TwoFactorType::Email) => {
email::validate_email_code_str(&user.uuid, twofactor_code, &selected_data?, conn).await? email::validate_email_code_str(&user.uuid, twofactor_code, &selected_data?, &ip.ip, conn).await?
} }
Some(TwoFactorType::Remember) => { Some(TwoFactorType::Remember) => {
@@ -584,7 +596,7 @@ async fn twofactor_auth(
} }
_ => { _ => {
err_json!( err_json!(
_json_err_twofactor(&twofactor_ids, &user.uuid, data, conn).await?, _json_err_twofactor(&twofactor_ids, &user.uuid, data, client_version, conn).await?,
"2FA Remember token not provided" "2FA Remember token not provided"
) )
} }
@@ -614,8 +626,9 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
async fn _json_err_twofactor( async fn _json_err_twofactor(
providers: &[i32], providers: &[i32],
user_uuid: &str, user_id: &UserId,
data: &ConnectData, data: &ConnectData,
client_version: &Option<ClientVersion>,
conn: &mut DbConn, conn: &mut DbConn,
) -> ApiResult<Value> { ) -> ApiResult<Value> {
let mut result = json!({ let mut result = json!({
@@ -635,12 +648,12 @@ async fn _json_err_twofactor(
Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ } Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => { Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
let request = webauthn::generate_webauthn_login(user_uuid, conn).await?; let request = webauthn::generate_webauthn_login(user_id, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = request.0; result["TwoFactorProviders2"][provider.to_string()] = request.0;
} }
Some(TwoFactorType::Duo) => { Some(TwoFactorType::Duo) => {
let email = match User::find_by_uuid(user_uuid, conn).await { let email = match User::find_by_uuid(user_id, conn).await {
Some(u) => u.email, Some(u) => u.email,
None => err!("User does not exist"), None => err!("User does not exist"),
}; };
@@ -672,7 +685,7 @@ async fn _json_err_twofactor(
} }
Some(tf_type @ TwoFactorType::YubiKey) => { Some(tf_type @ TwoFactorType::YubiKey) => {
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await else { let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, tf_type as i32, conn).await else {
err!("No YubiKey devices registered") err!("No YubiKey devices registered")
}; };
@@ -684,13 +697,21 @@ async fn _json_err_twofactor(
} }
Some(tf_type @ TwoFactorType::Email) => { Some(tf_type @ TwoFactorType::Email) => {
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await else { let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, tf_type as i32, conn).await else {
err!("No twofactor email registered") err!("No twofactor email registered")
}; };
// Send email immediately if email is the only 2FA option // Starting with version 2025.5.0 the client will call `/api/two-factor/send-email-login`.
if providers.len() == 1 { let disabled_send = if let Some(cv) = client_version {
email::send_token(user_uuid, conn).await? let ver_match = semver::VersionReq::parse(">=2025.5.0").unwrap();
ver_match.matches(&cv.0)
} else {
false
};
// Send email immediately if email is the only 2FA option.
if providers.len() == 1 && !disabled_send {
email::send_token(user_id, conn).await?
} }
let email_data = email::EmailTokenData::from_json(&twofactor.data)?; let email_data = email::EmailTokenData::from_json(&twofactor.data)?;
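The new gate skips the automatic token email for clients at or above 2025.5.0, which call `/api/two-factor/send-email-login` themselves. The patch checks this with `semver::VersionReq`; the same logic can be sketched without the crate by comparing version triples lexicographically (`parse_version` is a hypothetical helper, not part of the patch):

```rust
// Hand-rolled stand-in for semver::VersionReq::parse(">=2025.5.0"):
// parse "major.minor.patch" and compare tuples lexicographically.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.split('.').map(|p| p.parse::<u64>().ok());
    Some((parts.next()??, parts.next()??, parts.next()??))
}

fn client_handles_email_login(client_version: Option<&str>) -> bool {
    match client_version.and_then(parse_version) {
        Some(v) => v >= (2025, 5, 0),
        // Unknown client: keep the legacy auto-send behavior.
        None => false,
    }
}

fn main() {
    assert!(client_handles_email_login(Some("2025.5.1")));
    assert!(!client_handles_email_login(Some("2024.12.0")));
    assert!(!client_handles_email_login(None));
}
```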
@@ -713,7 +734,66 @@ async fn prelogin(data: Json<PreloginData>, conn: DbConn) -> Json<Value> {
#[post("/accounts/register", data = "<data>")] #[post("/accounts/register", data = "<data>")]
async fn identity_register(data: Json<RegisterData>, conn: DbConn) -> JsonResult { async fn identity_register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await _register(data, false, conn).await
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct RegisterVerificationData {
email: String,
name: Option<String>,
// receiveMarketingEmails: bool,
}
#[derive(rocket::Responder)]
enum RegisterVerificationResponse {
NoContent(()),
Token(Json<String>),
}
#[post("/accounts/register/send-verification-email", data = "<data>")]
async fn register_verification_email(
data: Json<RegisterVerificationData>,
mut conn: DbConn,
) -> ApiResult<RegisterVerificationResponse> {
let data = data.into_inner();
if !CONFIG.is_signup_allowed(&data.email) {
err!("Registration not allowed or user already exists")
}
let should_send_mail = CONFIG.mail_enabled() && CONFIG.signups_verify();
let token_claims =
crate::auth::generate_register_verify_claims(data.email.clone(), data.name.clone(), should_send_mail);
let token = crate::auth::encode_jwt(&token_claims);
if should_send_mail {
let user = User::find_by_mail(&data.email, &mut conn).await;
if user.filter(|u| u.private_key.is_some()).is_some() {
// There is still a timing side channel here in that the code
// paths that send mail take noticeably longer than ones that
// don't. Add a randomized sleep to mitigate this somewhat.
use rand::{rngs::SmallRng, Rng, SeedableRng};
let mut rng = SmallRng::from_os_rng();
let delta: i32 = 100;
let sleep_ms = (1_000 + rng.random_range(-delta..=delta)) as u64;
tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
} else {
mail::send_register_verify_email(&data.email, &token).await?;
}
Ok(RegisterVerificationResponse::NoContent(()))
} else {
// If email verification is not required, return the token directly
// the clients will use this token to finish the registration
Ok(RegisterVerificationResponse::Token(Json(token)))
}
}
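The randomized sleep above narrows a timing side channel: when the address already has an account, no mail is sent, so the handler sleeps roughly as long as a mail send would take, plus or minus 100 ms of jitter. A std-only sketch of the delay calculation, using the clock's sub-second nanoseconds as a stand-in for the patch's `SmallRng`:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Compute a delay in [base - delta, base + delta] ms. The jitter source
// here is a stand-in; the patch uses rand::SmallRng.
fn jittered_sleep_ms(base_ms: u64, delta_ms: u64) -> u64 {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as u64;
    // Map the jitter into [-delta_ms, +delta_ms].
    base_ms - delta_ms + nanos % (2 * delta_ms + 1)
}

fn main() {
    let ms = jittered_sleep_ms(1_000, 100);
    assert!((900..=1_100).contains(&ms));
}
```

The jitter makes the "mail sent" and "mail skipped" paths harder to distinguish by response time, though as the patch's comment notes it only mitigates, not eliminates, the channel.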
#[post("/accounts/register/finish", data = "<data>")]
async fn register_finish(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, true, conn).await
} }
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts // https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
@@ -745,7 +825,7 @@ struct ConnectData {
#[field(name = uncased("device_identifier"))] #[field(name = uncased("device_identifier"))]
#[field(name = uncased("deviceidentifier"))] #[field(name = uncased("deviceidentifier"))]
device_identifier: Option<String>, device_identifier: Option<DeviceId>,
#[field(name = uncased("device_name"))] #[field(name = uncased("device_name"))]
#[field(name = uncased("devicename"))] #[field(name = uncased("devicename"))]
device_name: Option<String>, device_name: Option<String>,
@@ -768,7 +848,7 @@ struct ConnectData {
#[field(name = uncased("twofactorremember"))] #[field(name = uncased("twofactorremember"))]
two_factor_remember: Option<i32>, two_factor_remember: Option<i32>,
#[field(name = uncased("authrequest"))] #[field(name = uncased("authrequest"))]
auth_request: Option<String>, auth_request: Option<AuthRequestId>,
} }
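The recurring `String` to `UserId` / `DeviceId` / `OrganizationId` / `AuthRequestId` changes throughout this diff follow the newtype pattern: thin wrappers so the compiler rejects mixing up different kinds of identifiers. A minimal sketch, assuming `From` and `AsRef<str>` impls like the `.into()` / `.as_ref()` call sites imply:

```rust
// Newtype over String: same runtime representation, distinct type.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct UserId(String);

impl From<&str> for UserId {
    fn from(s: &str) -> Self {
        UserId(s.to_string())
    }
}

impl AsRef<str> for UserId {
    fn as_ref(&self) -> &str {
        &self.0
    }
}

// Stand-in for a database lookup that now demands a UserId, not any String.
fn find_by_uuid(id: &UserId) -> Option<&'static str> {
    (id.as_ref() == "abc").then_some("user record")
}

fn main() {
    let client_user_id: UserId = "abc".into();
    assert_eq!(find_by_uuid(&client_user_id), Some("user record"));
    // Passing a plain &str where &UserId is required no longer type-checks,
    // which is the point of the change.
}
```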
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult { fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {


@@ -10,7 +10,7 @@ use rocket_ws::{Message, WebSocket};
use crate::{ use crate::{
auth::{ClientIp, WsAccessTokenHeader}, auth::{ClientIp, WsAccessTokenHeader},
db::{ db::{
models::{Cipher, Folder, Send as DbSend, User}, models::{AuthRequestId, Cipher, CollectionId, Device, DeviceId, Folder, PushId, Send as DbSend, User, UserId},
DbConn, DbConn,
}, },
Error, CONFIG, Error, CONFIG,
@@ -53,13 +53,13 @@ struct WsAccessToken {
struct WSEntryMapGuard { struct WSEntryMapGuard {
users: Arc<WebSocketUsers>, users: Arc<WebSocketUsers>,
user_uuid: String, user_uuid: UserId,
entry_uuid: uuid::Uuid, entry_uuid: uuid::Uuid,
addr: IpAddr, addr: IpAddr,
} }
impl WSEntryMapGuard { impl WSEntryMapGuard {
fn new(users: Arc<WebSocketUsers>, user_uuid: String, entry_uuid: uuid::Uuid, addr: IpAddr) -> Self { fn new(users: Arc<WebSocketUsers>, user_uuid: UserId, entry_uuid: uuid::Uuid, addr: IpAddr) -> Self {
Self { Self {
users, users,
user_uuid, user_uuid,
@@ -72,7 +72,7 @@ impl WSEntryMapGuard {
impl Drop for WSEntryMapGuard { impl Drop for WSEntryMapGuard {
fn drop(&mut self) { fn drop(&mut self) {
info!("Closing WS connection from {}", self.addr); info!("Closing WS connection from {}", self.addr);
if let Some(mut entry) = self.users.map.get_mut(&self.user_uuid) { if let Some(mut entry) = self.users.map.get_mut(self.user_uuid.as_ref()) {
entry.retain(|(uuid, _)| uuid != &self.entry_uuid); entry.retain(|(uuid, _)| uuid != &self.entry_uuid);
} }
} }
@@ -101,6 +101,7 @@ impl Drop for WSAnonymousEntryMapGuard {
     }
 }
+#[allow(tail_expr_drop_order)]
 #[get("/hub?<data..>")]
 fn websockets_hub<'r>(
     ws: WebSocket,
@@ -129,7 +130,7 @@ fn websockets_hub<'r>(
         // Add a channel to send messages to this client to the map
         let entry_uuid = uuid::Uuid::new_v4();
         let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
-        users.map.entry(claims.sub.clone()).or_default().push((entry_uuid, tx));
+        users.map.entry(claims.sub.to_string()).or_default().push((entry_uuid, tx));
         // Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
         (rx, WSEntryMapGuard::new(users, claims.sub, entry_uuid, addr))
@@ -156,7 +157,6 @@ fn websockets_hub<'r>(
                 if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
                     yield Message::binary(INITIAL_RESPONSE);
-                    continue;
                 }
             }
@@ -186,6 +186,7 @@ fn websockets_hub<'r>(
     })
 }
+#[allow(tail_expr_drop_order)]
 #[get("/anonymous-hub?<token..>")]
 fn anonymous_websockets_hub<'r>(ws: WebSocket, token: String, ip: ClientIp) -> Result<rocket_ws::Stream!['r], Error> {
     let addr = ip.ip;
@@ -223,7 +224,6 @@ fn anonymous_websockets_hub<'r>(ws: WebSocket, token: String, ip: ClientIp) -> R
                 if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
                     yield Message::binary(INITIAL_RESPONSE);
-                    continue;
                 }
             }
@@ -290,7 +290,7 @@ fn serialize(val: Value) -> Vec<u8> {
 fn serialize_date(date: NaiveDateTime) -> Value {
     let seconds: i64 = date.and_utc().timestamp();
     let nanos: i64 = date.and_utc().timestamp_subsec_nanos().into();
-    let timestamp = nanos << 34 | seconds;
+    let timestamp = (nanos << 34) | seconds;
     let bs = timestamp.to_be_bytes();
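The parenthesization above is a clarity fix rather than a behavior change, since `<<` binds tighter than `|` in Rust. The packing itself follows MessagePack's timestamp-64 layout: a 30-bit nanosecond count in the upper bits and a 34-bit seconds value in the lower bits of one 64-bit integer. A small sketch of the packing and its inverse (helper names invented here):

```rust
// MessagePack timestamp-64 layout: nanoseconds occupy the upper 30 bits,
// seconds the lower 34 bits, of a single 64-bit integer.
fn pack_timestamp(seconds: i64, nanos: i64) -> i64 {
    // `<<` binds tighter than `|` in Rust, so the parentheses in the diff
    // only make the intent explicit; the value is unchanged.
    (nanos << 34) | seconds
}

fn unpack_timestamp(ts: i64) -> (i64, i64) {
    // Lower 34 bits are the seconds, the rest is the nanosecond count.
    (ts & ((1i64 << 34) - 1), ts >> 34)
}

fn main() {
    let ts = pack_timestamp(1_700_000_000, 500_000_000);
    assert_eq!(unpack_timestamp(ts), (1_700_000_000, 500_000_000));
}
```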
@@ -328,8 +328,8 @@ pub struct WebSocketUsers {
 }
 impl WebSocketUsers {
-    async fn send_update(&self, user_uuid: &str, data: &[u8]) {
-        if let Some(user) = self.map.get(user_uuid).map(|v| v.clone()) {
+    async fn send_update(&self, user_id: &UserId, data: &[u8]) {
+        if let Some(user) = self.map.get(user_id.as_ref()).map(|v| v.clone()) {
             for (_, sender) in user.iter() {
                 if let Err(e) = sender.send(Message::binary(data)).await {
                     error!("Error sending WS update {e}");
@@ -339,13 +339,13 @@ impl WebSocketUsers {
     }
     // NOTE: The last modified date needs to be updated before calling these methods
-    pub async fn send_user_update(&self, ut: UpdateType, user: &User) {
+    pub async fn send_user_update(&self, ut: UpdateType, user: &User, push_uuid: &Option<PushId>, conn: &mut DbConn) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
         let data = create_update(
-            vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
+            vec![("UserId".into(), user.uuid.to_string().into()), ("Date".into(), serialize_date(user.updated_at))],
             ut,
             None,
         );
@@ -355,19 +355,19 @@ impl WebSocketUsers {
         }
         if CONFIG.push_enabled() {
-            push_user_update(ut, user);
+            push_user_update(ut, user, push_uuid, conn).await;
         }
     }
-    pub async fn send_logout(&self, user: &User, acting_device_uuid: Option<String>) {
+    pub async fn send_logout(&self, user: &User, acting_device_id: Option<DeviceId>, conn: &mut DbConn) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
         let data = create_update(
-            vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
+            vec![("UserId".into(), user.uuid.to_string().into()), ("Date".into(), serialize_date(user.updated_at))],
             UpdateType::LogOut,
-            acting_device_uuid.clone(),
+            acting_device_id.clone(),
         );
         if CONFIG.enable_websocket() {
@@ -375,29 +375,23 @@ impl WebSocketUsers {
         }
         if CONFIG.push_enabled() {
-            push_logout(user, acting_device_uuid);
+            push_logout(user, acting_device_id.clone(), conn).await;
         }
     }
-    pub async fn send_folder_update(
-        &self,
-        ut: UpdateType,
-        folder: &Folder,
-        acting_device_uuid: &String,
-        conn: &mut DbConn,
-    ) {
+    pub async fn send_folder_update(&self, ut: UpdateType, folder: &Folder, device: &Device, conn: &mut DbConn) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
         let data = create_update(
             vec![
-                ("Id".into(), folder.uuid.clone().into()),
-                ("UserId".into(), folder.user_uuid.clone().into()),
+                ("Id".into(), folder.uuid.to_string().into()),
+                ("UserId".into(), folder.user_uuid.to_string().into()),
                 ("RevisionDate".into(), serialize_date(folder.updated_at)),
             ],
             ut,
-            Some(acting_device_uuid.into()),
+            Some(device.uuid.clone()),
         );
         if CONFIG.enable_websocket() {
@@ -405,7 +399,7 @@ impl WebSocketUsers {
         }
         if CONFIG.push_enabled() {
-            push_folder_update(ut, folder, acting_device_uuid, conn).await;
+            push_folder_update(ut, folder, device, conn).await;
         }
     }
@@ -413,48 +407,48 @@ impl WebSocketUsers {
         &self,
         ut: UpdateType,
         cipher: &Cipher,
-        user_uuids: &[String],
-        acting_device_uuid: &String,
-        collection_uuids: Option<Vec<String>>,
+        user_ids: &[UserId],
+        device: &Device,
+        collection_uuids: Option<Vec<CollectionId>>,
         conn: &mut DbConn,
     ) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
-        let org_uuid = convert_option(cipher.organization_uuid.clone());
+        let org_id = convert_option(cipher.organization_uuid.as_deref());
         // Depending if there are collections provided or not, we need to have different values for the following variables.
         // The user_uuid should be `null`, and the revision date should be set to now, else the clients won't sync the collection change.
-        let (user_uuid, collection_uuids, revision_date) = if let Some(collection_uuids) = collection_uuids {
+        let (user_id, collection_uuids, revision_date) = if let Some(collection_uuids) = collection_uuids {
             (
                 Value::Nil,
-                Value::Array(collection_uuids.into_iter().map(|v| v.into()).collect::<Vec<Value>>()),
+                Value::Array(collection_uuids.into_iter().map(|v| v.to_string().into()).collect::<Vec<Value>>()),
                 serialize_date(Utc::now().naive_utc()),
             )
         } else {
-            (convert_option(cipher.user_uuid.clone()), Value::Nil, serialize_date(cipher.updated_at))
+            (convert_option(cipher.user_uuid.as_deref()), Value::Nil, serialize_date(cipher.updated_at))
         };
         let data = create_update(
             vec![
-                ("Id".into(), cipher.uuid.clone().into()),
-                ("UserId".into(), user_uuid),
-                ("OrganizationId".into(), org_uuid),
+                ("Id".into(), cipher.uuid.to_string().into()),
+                ("UserId".into(), user_id),
+                ("OrganizationId".into(), org_id),
                 ("CollectionIds".into(), collection_uuids),
                 ("RevisionDate".into(), revision_date),
             ],
             ut,
-            Some(acting_device_uuid.into()),
+            Some(device.uuid.clone()), // Acting device id (unique device/app uuid)
         );
         if CONFIG.enable_websocket() {
-            for uuid in user_uuids {
+            for uuid in user_ids {
                 self.send_update(uuid, &data).await;
             }
         }
-        if CONFIG.push_enabled() && user_uuids.len() == 1 {
-            push_cipher_update(ut, cipher, acting_device_uuid, conn).await;
+        if CONFIG.push_enabled() && user_ids.len() == 1 {
+            push_cipher_update(ut, cipher, device, conn).await;
         }
     }
@@ -462,20 +456,20 @@ impl WebSocketUsers {
         &self,
         ut: UpdateType,
         send: &DbSend,
-        user_uuids: &[String],
-        acting_device_uuid: &String,
+        user_ids: &[UserId],
+        device: &Device,
         conn: &mut DbConn,
     ) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
-        let user_uuid = convert_option(send.user_uuid.clone());
+        let user_id = convert_option(send.user_uuid.as_deref());
         let data = create_update(
             vec![
-                ("Id".into(), send.uuid.clone().into()),
-                ("UserId".into(), user_uuid),
+                ("Id".into(), send.uuid.to_string().into()),
+                ("UserId".into(), user_id),
                 ("RevisionDate".into(), serialize_date(send.revision_date)),
             ],
             ut,
@@ -483,20 +477,20 @@ impl WebSocketUsers {
         );
         if CONFIG.enable_websocket() {
-            for uuid in user_uuids {
+            for uuid in user_ids {
                 self.send_update(uuid, &data).await;
             }
         }
-        if CONFIG.push_enabled() && user_uuids.len() == 1 {
-            push_send_update(ut, send, acting_device_uuid, conn).await;
+        if CONFIG.push_enabled() && user_ids.len() == 1 {
+            push_send_update(ut, send, device, conn).await;
         }
     }
     pub async fn send_auth_request(
         &self,
-        user_uuid: &String,
-        auth_request_uuid: &String,
-        acting_device_uuid: &String,
+        user_id: &UserId,
+        auth_request_uuid: &str,
+        device: &Device,
         conn: &mut DbConn,
     ) {
         // Skip any processing if both WebSockets and Push are not active
@@ -504,24 +498,24 @@ impl WebSocketUsers {
             return;
         }
         let data = create_update(
-            vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_uuid.clone().into())],
+            vec![("Id".into(), auth_request_uuid.to_owned().into()), ("UserId".into(), user_id.to_string().into())],
             UpdateType::AuthRequest,
-            Some(acting_device_uuid.to_string()),
+            Some(device.uuid.clone()),
         );
         if CONFIG.enable_websocket() {
-            self.send_update(user_uuid, &data).await;
+            self.send_update(user_id, &data).await;
         }
         if CONFIG.push_enabled() {
-            push_auth_request(user_uuid.to_string(), auth_request_uuid.to_string(), conn).await;
+            push_auth_request(user_id, auth_request_uuid, device, conn).await;
         }
     }
     pub async fn send_auth_response(
         &self,
-        user_uuid: &String,
-        auth_response_uuid: &str,
-        approving_device_uuid: String,
+        user_id: &UserId,
+        auth_request_id: &AuthRequestId,
+        device: &Device,
         conn: &mut DbConn,
     ) {
         // Skip any processing if both WebSockets and Push are not active
@@ -529,17 +523,16 @@ impl WebSocketUsers {
             return;
         }
         let data = create_update(
-            vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
+            vec![("Id".into(), auth_request_id.to_string().into()), ("UserId".into(), user_id.to_string().into())],
             UpdateType::AuthRequestResponse,
-            approving_device_uuid.clone().into(),
+            Some(device.uuid.clone()),
         );
         if CONFIG.enable_websocket() {
-            self.send_update(auth_response_uuid, &data).await;
+            self.send_update(user_id, &data).await;
         }
         if CONFIG.push_enabled() {
-            push_auth_response(user_uuid.to_string(), auth_response_uuid.to_string(), approving_device_uuid, conn)
-                .await;
+            push_auth_response(user_id, auth_request_id, device, conn).await;
         }
     }
 }
@@ -558,16 +551,16 @@ impl AnonymousWebSocketSubscriptions {
         }
     }
-    pub async fn send_auth_response(&self, user_uuid: &String, auth_response_uuid: &str) {
+    pub async fn send_auth_response(&self, user_id: &UserId, auth_request_id: &AuthRequestId) {
         if !CONFIG.enable_websocket() {
             return;
         }
         let data = create_anonymous_update(
-            vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
+            vec![("Id".into(), auth_request_id.to_string().into()), ("UserId".into(), user_id.to_string().into())],
             UpdateType::AuthRequestResponse,
-            user_uuid.to_string(),
+            user_id.clone(),
         );
-        self.send_update(auth_response_uuid, &data).await;
+        self.send_update(auth_request_id, &data).await;
     }
 }
@@ -579,14 +572,14 @@ impl AnonymousWebSocketSubscriptions {
         "ReceiveMessage", // Target
         [ // Arguments
             {
-                "ContextId": acting_device_uuid || Nil,
+                "ContextId": acting_device_id || Nil,
                 "Type": ut as i32,
                 "Payload": {}
             }
         ]
     ]
 */
-fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uuid: Option<String>) -> Vec<u8> {
+fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_id: Option<DeviceId>) -> Vec<u8> {
     use rmpv::Value as V;
     let value = V::Array(vec![
@@ -595,7 +588,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uui
         V::Nil,
         "ReceiveMessage".into(),
         V::Array(vec![V::Map(vec![
-            ("ContextId".into(), acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| V::Nil)),
+            ("ContextId".into(), acting_device_id.map(|v| v.to_string().into()).unwrap_or_else(|| V::Nil)),
             ("Type".into(), (ut as i32).into()),
             ("Payload".into(), payload.into()),
         ])]),
@@ -604,7 +597,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uui
     serialize(value)
 }
-fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id: String) -> Vec<u8> {
+fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id: UserId) -> Vec<u8> {
     use rmpv::Value as V;
     let value = V::Array(vec![
@@ -615,7 +608,7 @@ fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id
         V::Array(vec![V::Map(vec![
             ("Type".into(), (ut as i32).into()),
             ("Payload".into(), payload.into()),
-            ("UserId".into(), user_id.into()),
+            ("UserId".into(), user_id.to_string().into()),
         ])]),
     ]);


@@ -7,9 +7,9 @@ use tokio::sync::RwLock;
 use crate::{
     api::{ApiResult, EmptyResult, UpdateType},
-    db::models::{Cipher, Device, Folder, Send, User},
+    db::models::{AuthRequestId, Cipher, Device, DeviceId, Folder, PushId, Send, User, UserId},
     http_client::make_http_request,
-    util::format_date,
+    util::{format_date, get_uuid},
     CONFIG,
 };
@@ -28,20 +28,20 @@ struct LocalAuthPushToken {
     valid_until: Instant,
 }
-async fn get_auth_push_token() -> ApiResult<String> {
-    static PUSH_TOKEN: Lazy<RwLock<LocalAuthPushToken>> = Lazy::new(|| {
+async fn get_auth_api_token() -> ApiResult<String> {
+    static API_TOKEN: Lazy<RwLock<LocalAuthPushToken>> = Lazy::new(|| {
         RwLock::new(LocalAuthPushToken {
             access_token: String::new(),
             valid_until: Instant::now(),
         })
     });
-    let push_token = PUSH_TOKEN.read().await;
-    if push_token.valid_until.saturating_duration_since(Instant::now()).as_secs() > 0 {
+    let api_token = API_TOKEN.read().await;
+    if api_token.valid_until.saturating_duration_since(Instant::now()).as_secs() > 0 {
         debug!("Auth Push token still valid, no need for a new one");
-        return Ok(push_token.access_token.clone());
+        return Ok(api_token.access_token.clone());
     }
-    drop(push_token); // Drop the read lock now
+    drop(api_token); // Drop the read lock now
     let installation_id = CONFIG.push_installation_id();
     let client_id = format!("installation.{installation_id}");
@@ -68,44 +68,48 @@ async fn get_auth_push_token() -> ApiResult<String> {
         Err(e) => err!(format!("Unexpected push token received from bitwarden server: {e}")),
     };
-    let mut push_token = PUSH_TOKEN.write().await;
-    push_token.valid_until = Instant::now()
+    let mut api_token = API_TOKEN.write().await;
+    api_token.valid_until = Instant::now()
         .checked_add(Duration::new((json_pushtoken.expires_in / 2) as u64, 0)) // Token valid for half the specified time
         .unwrap();
-    push_token.access_token = json_pushtoken.access_token;
-    debug!("Token still valid for {}", push_token.valid_until.saturating_duration_since(Instant::now()).as_secs());
-    Ok(push_token.access_token.clone())
+    api_token.access_token = json_pushtoken.access_token;
+    debug!("Token still valid for {}", api_token.valid_until.saturating_duration_since(Instant::now()).as_secs());
+    Ok(api_token.access_token.clone())
 }
 pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbConn) -> EmptyResult {
-    if !CONFIG.push_enabled() || !device.is_push_device() || device.is_registered() {
+    if !CONFIG.push_enabled() || !device.is_push_device() {
         return Ok(());
     }
     if device.push_token.is_none() {
-        warn!("Skipping the registration of the device {} because the push_token field is empty.", device.uuid);
-        warn!("To get rid of this message you need to clear the app data and reconnect the device.");
+        warn!("Skipping the registration of the device {:?} because the push_token field is empty.", device.uuid);
+        warn!("To get rid of this message you need to logout, clear the app data and login again on the device.");
         return Ok(());
     }
-    debug!("Registering Device {}", device.uuid);
-    // generate a random push_uuid so we know the device is registered
-    device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
+    debug!("Registering Device {:?}", device.push_uuid);
+    // Generate a random push_uuid if it doesn't already have one
+    if device.push_uuid.is_none() {
+        device.push_uuid = Some(PushId(get_uuid()));
+    }
     //Needed to register a device for push to bitwarden :
     let data = json!({
+        "deviceId": device.push_uuid, // Unique UUID per user/device
+        "pushToken": device.push_token,
         "userId": device.user_uuid,
-        "deviceId": device.push_uuid,
-        "identifier": device.uuid,
         "type": device.atype,
-        "pushToken": device.push_token
+        "identifier": device.uuid, // Unique UUID of the device/app, determined by the device/app itself while registering
+        // "organizationIds:" [] // TODO: This is not yet implemented by Vaultwarden!
+        "installationId": CONFIG.push_installation_id(),
     });
-    let auth_push_token = get_auth_push_token().await?;
-    let auth_header = format!("Bearer {}", &auth_push_token);
+    let auth_api_token = get_auth_api_token().await?;
+    let auth_header = format!("Bearer {auth_api_token}");
     if let Err(e) = make_http_request(Method::POST, &(CONFIG.push_relay_uri() + "/push/register"))?
         .header(CONTENT_TYPE, "application/json")
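The token-fetching hunks above keep the relay credential cached and reuse it until half of its advertised `expires_in` lifetime has elapsed, so a fresh token is requested well before the relay would reject the old one. A minimal synchronous sketch of that policy (type and method names here are illustrative, not the crate's API):

```rust
use std::time::{Duration, Instant};

// Illustrative sketch of the half-lifetime token caching policy.
struct CachedToken {
    access_token: String,
    valid_until: Instant,
}

impl CachedToken {
    fn new() -> Self {
        Self { access_token: String::new(), valid_until: Instant::now() }
    }

    // Store a freshly fetched token, treating it as valid for only half
    // the advertised lifetime, as the diff's comment explains.
    fn store(&mut self, token: String, expires_in_secs: u64) {
        self.access_token = token;
        self.valid_until = Instant::now() + Duration::from_secs(expires_in_secs / 2);
    }

    // Return the cached token while it is still considered valid.
    fn get(&self) -> Option<&str> {
        if self.valid_until.saturating_duration_since(Instant::now()) > Duration::ZERO {
            Some(&self.access_token)
        } else {
            None // caller must request a new token from the identity endpoint
        }
    }
}

fn main() {
    let mut cache = CachedToken::new();
    assert!(cache.get().is_none()); // nothing cached yet
    cache.store("token".into(), 3600);
    assert_eq!(cache.get(), Some("token")); // reused for ~30 minutes
}
```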
@@ -126,18 +130,21 @@ pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbC
     Ok(())
 }
-pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
-    if !CONFIG.push_enabled() || push_uuid.is_none() {
+pub async fn unregister_push_device(push_id: &Option<PushId>) -> EmptyResult {
+    if !CONFIG.push_enabled() || push_id.is_none() {
         return Ok(());
     }
-    let auth_push_token = get_auth_push_token().await?;
-    let auth_header = format!("Bearer {}", &auth_push_token);
+    let auth_api_token = get_auth_api_token().await?;
+    let auth_header = format!("Bearer {auth_api_token}");
-    match make_http_request(Method::DELETE, &(CONFIG.push_relay_uri() + "/push/" + &push_uuid.unwrap()))?
-        .header(AUTHORIZATION, auth_header)
-        .send()
-        .await
+    match make_http_request(
+        Method::POST,
+        &format!("{}/push/delete/{}", CONFIG.push_relay_uri(), push_id.as_ref().unwrap()),
+    )?
+    .header(AUTHORIZATION, auth_header)
+    .send()
+    .await
     {
         Ok(r) => r,
         Err(e) => err!(format!("An error occurred during device unregistration: {e}")),
@@ -145,105 +152,110 @@ pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
     Ok(())
 }
-pub async fn push_cipher_update(
-    ut: UpdateType,
-    cipher: &Cipher,
-    acting_device_uuid: &String,
-    conn: &mut crate::db::DbConn,
-) {
+pub async fn push_cipher_update(ut: UpdateType, cipher: &Cipher, device: &Device, conn: &mut crate::db::DbConn) {
     // We shouldn't send a push notification on cipher update if the cipher belongs to an organization, this isn't implemented in the upstream server too.
     if cipher.organization_uuid.is_some() {
         return;
     };
-    let Some(user_uuid) = &cipher.user_uuid else {
+    let Some(user_id) = &cipher.user_uuid else {
         debug!("Cipher has no uuid");
         return;
     };
-    if Device::check_user_has_push_device(user_uuid, conn).await {
+    if Device::check_user_has_push_device(user_id, conn).await {
         send_to_push_relay(json!({
-            "userId": user_uuid,
-            "organizationId": (),
-            "deviceId": acting_device_uuid,
-            "identifier": acting_device_uuid,
+            "userId": user_id,
+            "organizationId": null,
+            "deviceId": device.push_uuid, // Should be the records unique uuid of the acting device (unique uuid per user/device)
+            "identifier": device.uuid, // Should be the acting device id (aka uuid per device/app)
             "type": ut as i32,
             "payload": {
-                "Id": cipher.uuid,
-                "UserId": cipher.user_uuid,
-                "OrganizationId": (),
-                "RevisionDate": format_date(&cipher.updated_at)
-            }
+                "id": cipher.uuid,
+                "userId": cipher.user_uuid,
+                "organizationId": null,
+                "collectionIds": null,
+                "revisionDate": format_date(&cipher.updated_at)
+            },
+            "clientType": null,
+            "installationId": null
         }))
         .await;
     }
 }
-pub fn push_logout(user: &User, acting_device_uuid: Option<String>) {
-    let acting_device_uuid: Value = acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| Value::Null);
-    tokio::task::spawn(send_to_push_relay(json!({
-        "userId": user.uuid,
-        "organizationId": (),
-        "deviceId": acting_device_uuid,
-        "identifier": acting_device_uuid,
-        "type": UpdateType::LogOut as i32,
-        "payload": {
-            "UserId": user.uuid,
-            "Date": format_date(&user.updated_at)
-        }
-    })));
-}
-pub fn push_user_update(ut: UpdateType, user: &User) {
-    tokio::task::spawn(send_to_push_relay(json!({
-        "userId": user.uuid,
-        "organizationId": (),
-        "deviceId": (),
-        "identifier": (),
-        "type": ut as i32,
-        "payload": {
-            "UserId": user.uuid,
-            "Date": format_date(&user.updated_at)
-        }
-    })));
-}
-pub async fn push_folder_update(
-    ut: UpdateType,
-    folder: &Folder,
-    acting_device_uuid: &String,
-    conn: &mut crate::db::DbConn,
-) {
-    if Device::check_user_has_push_device(&folder.user_uuid, conn).await {
-        tokio::task::spawn(send_to_push_relay(json!({
-            "userId": folder.user_uuid,
-            "organizationId": (),
-            "deviceId": acting_device_uuid,
-            "identifier": acting_device_uuid,
-            "type": ut as i32,
-            "payload": {
-                "Id": folder.uuid,
-                "UserId": folder.user_uuid,
-                "RevisionDate": format_date(&folder.updated_at)
-            }
-        })));
-    }
-}
-pub async fn push_send_update(ut: UpdateType, send: &Send, acting_device_uuid: &String, conn: &mut crate::db::DbConn) {
+pub async fn push_logout(user: &User, acting_device_id: Option<DeviceId>, conn: &mut crate::db::DbConn) {
+    let acting_device_id: Value = acting_device_id.map(|v| v.to_string().into()).unwrap_or_else(|| Value::Null);
+    if Device::check_user_has_push_device(&user.uuid, conn).await {
+        tokio::task::spawn(send_to_push_relay(json!({
+            "userId": user.uuid,
+            "organizationId": (),
+            "deviceId": acting_device_id,
+            "identifier": acting_device_id,
+            "type": UpdateType::LogOut as i32,
+            "payload": {
+                "userId": user.uuid,
+                "date": format_date(&user.updated_at)
+            },
+            "clientType": null,
+            "installationId": null
+        })));
+    }
+}
+pub async fn push_user_update(ut: UpdateType, user: &User, push_uuid: &Option<PushId>, conn: &mut crate::db::DbConn) {
+    if Device::check_user_has_push_device(&user.uuid, conn).await {
+        tokio::task::spawn(send_to_push_relay(json!({
+            "userId": user.uuid,
+            "organizationId": null,
+            "deviceId": push_uuid,
+            "identifier": null,
+            "type": ut as i32,
+            "payload": {
+                "userId": user.uuid,
+                "date": format_date(&user.updated_at)
+            },
+            "clientType": null,
+            "installationId": null
+        })));
+    }
+}
+pub async fn push_folder_update(ut: UpdateType, folder: &Folder, device: &Device, conn: &mut crate::db::DbConn) {
+    if Device::check_user_has_push_device(&folder.user_uuid, conn).await {
+        tokio::task::spawn(send_to_push_relay(json!({
+            "userId": folder.user_uuid,
+            "organizationId": null,
+            "deviceId": device.push_uuid, // Should be the records unique uuid of the acting device (unique uuid per user/device)
+            "identifier": device.uuid, // Should be the acting device id (aka uuid per device/app)
+            "type": ut as i32,
+            "payload": {
+                "id": folder.uuid,
+                "userId": folder.user_uuid,
+                "revisionDate": format_date(&folder.updated_at)
+            },
+            "clientType": null,
+            "installationId": null
+        })));
+    }
+}
+pub async fn push_send_update(ut: UpdateType, send: &Send, device: &Device, conn: &mut crate::db::DbConn) {
     if let Some(s) = &send.user_uuid {
         if Device::check_user_has_push_device(s, conn).await {
             tokio::task::spawn(send_to_push_relay(json!({
                 "userId": send.user_uuid,
-                "organizationId": (),
-                "deviceId": acting_device_uuid,
-                "identifier": acting_device_uuid,
+                "organizationId": null,
+                "deviceId": device.push_uuid, // Should be the records unique uuid of the acting device (unique uuid per user/device)
+                "identifier": device.uuid, // Should be the acting device id (aka uuid per device/app)
                 "type": ut as i32,
                 "payload": {
-                    "Id": send.uuid,
-                    "UserId": send.user_uuid,
-                    "RevisionDate": format_date(&send.revision_date)
-                }
+                    "id": send.uuid,
+                    "userId": send.user_uuid,
+                    "revisionDate": format_date(&send.revision_date)
+                },
+                "clientType": null,
+                "installationId": null
             })));
         }
     }
@@ -254,20 +266,20 @@ async fn send_to_push_relay(notification_data: Value) {
         return;
     }
-    let auth_push_token = match get_auth_push_token().await {
+    let auth_api_token = match get_auth_api_token().await {
         Ok(s) => s,
         Err(e) => {
-            debug!("Could not get the auth push token: {}", e);
+            debug!("Could not get the auth push token: {e}");
             return;
         }
     };
-    let auth_header = format!("Bearer {}", &auth_push_token);
+    let auth_header = format!("Bearer {auth_api_token}");
     let req = match make_http_request(Method::POST, &(CONFIG.push_relay_uri() + "/push/send")) {
         Ok(r) => r,
         Err(e) => {
-            error!("An error occurred while sending a send update to the push relay: {}", e);
+            error!("An error occurred while sending a send update to the push relay: {e}");
             return;
         }
     };
@@ -280,43 +292,47 @@ async fn send_to_push_relay(notification_data: Value) {
         .send()
         .await
     {
-        error!("An error occurred while sending a send update to the push relay: {}", e);
+        error!("An error occurred while sending a send update to the push relay: {e}");
     };
 }
-pub async fn push_auth_request(user_uuid: String, auth_request_uuid: String, conn: &mut crate::db::DbConn) {
-    if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
+pub async fn push_auth_request(user_id: &UserId, auth_request_id: &str, device: &Device, conn: &mut crate::db::DbConn) {
+    if Device::check_user_has_push_device(user_id, conn).await {
         tokio::task::spawn(send_to_push_relay(json!({
-            "userId": user_uuid,
-            "organizationId": (),
-            "deviceId": null,
-            "identifier": null,
+            "userId": user_id,
+            "organizationId": null,
+            "deviceId": device.push_uuid, // Should be the records unique uuid of the acting device (unique uuid per user/device)
+            "identifier": device.uuid, // Should be the acting device id (aka uuid per device/app)
             "type": UpdateType::AuthRequest as i32,
             "payload": {
-                "Id": auth_request_uuid,
-                "UserId": user_uuid,
-            }
+                "userId": user_id,
+                "id": auth_request_id,
+            },
+            "clientType": null,
+            "installationId": null
         })));
     }
 }
 pub async fn push_auth_response(
-    user_uuid: String,
-    auth_request_uuid: String,
-    approving_device_uuid: String,
+    user_id: &UserId,
+    auth_request_id: &AuthRequestId,
+    device: &Device,
     conn: &mut crate::db::DbConn,
 ) {
-    if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
+    if Device::check_user_has_push_device(user_id, conn).await {
         tokio::task::spawn(send_to_push_relay(json!({
-            "userId": user_uuid,
-            "organizationId": (),
-            "deviceId": approving_device_uuid,
-            "identifier": approving_device_uuid,
+            "userId": user_id,
+            "organizationId": null,
+            "deviceId": device.push_uuid, // Should be the records unique uuid of the acting device (unique uuid per user/device)
+            "identifier": device.uuid, // Should be the acting device id (aka uuid per device/app)
             "type": UpdateType::AuthRequestResponse as i32,
             "payload": {
-                "Id": auth_request_uuid,
-                "UserId": user_uuid,
-            }
+                "userId": user_id,
+                "id": auth_request_id,
+            },
+            "clientType": null,
+            "installationId": null
         })));
     }
 }


@@ -1,4 +1,3 @@
-use once_cell::sync::Lazy;
 use std::path::{Path, PathBuf};
 use rocket::{
@@ -13,8 +12,9 @@ use serde_json::Value;
 use crate::{
     api::{core::now, ApiResult, EmptyResult},
     auth::decode_file_download,
+    db::models::{AttachmentId, CipherId},
     error::Error,
-    util::{get_web_vault_version, Cached, SafeString},
+    util::Cached,
     CONFIG,
 };
@@ -54,46 +54,10 @@ fn not_found() -> ApiResult<Html<String>> {

 #[get("/css/vaultwarden.css")]
 fn vaultwarden_css() -> Cached<Css<String>> {
-    // Configure the web-vault version as an integer so it can be used as a comparison smaller or greater then.
-    // The default is based upon the version since this feature is added.
-    static WEB_VAULT_VERSION: Lazy<u32> = Lazy::new(|| {
-        let re = regex::Regex::new(r"(\d{4})\.(\d{1,2})\.(\d{1,2})").unwrap();
-        let vault_version = get_web_vault_version();
-        let (major, minor, patch) = match re.captures(&vault_version) {
-            Some(c) if c.len() == 4 => (
-                c.get(1).unwrap().as_str().parse().unwrap(),
-                c.get(2).unwrap().as_str().parse().unwrap(),
-                c.get(3).unwrap().as_str().parse().unwrap(),
-            ),
-            _ => (2024, 6, 2),
-        };
-        format!("{major}{minor:02}{patch:02}").parse::<u32>().unwrap()
-    });
-
-    // Configure the Vaultwarden version as an integer so it can be used as a comparison smaller or greater then.
-    // The default is based upon the version since this feature is added.
-    static VW_VERSION: Lazy<u32> = Lazy::new(|| {
-        let re = regex::Regex::new(r"(\d{1})\.(\d{1,2})\.(\d{1,2})").unwrap();
-        let vw_version = crate::VERSION.unwrap_or("1.32.1");
-        let (major, minor, patch) = match re.captures(vw_version) {
-            Some(c) if c.len() == 4 => (
-                c.get(1).unwrap().as_str().parse().unwrap(),
-                c.get(2).unwrap().as_str().parse().unwrap(),
-                c.get(3).unwrap().as_str().parse().unwrap(),
-            ),
-            _ => (1, 32, 1),
-        };
-        format!("{major}{minor:02}{patch:02}").parse::<u32>().unwrap()
-    });
-
     let css_options = json!({
-        "web_vault_version": *WEB_VAULT_VERSION,
-        "vw_version": *VW_VERSION,
         "signup_disabled": !CONFIG.signups_allowed() && CONFIG.signups_domains_whitelist().is_empty(),
         "mail_enabled": CONFIG.mail_enabled(),
-        "yubico_enabled": CONFIG._enable_yubico() && (CONFIG.yubico_client_id().is_some() == CONFIG.yubico_secret_key().is_some()),
+        "yubico_enabled": CONFIG._enable_yubico() && CONFIG.yubico_client_id().is_some() && CONFIG.yubico_secret_key().is_some(),
         "emergency_access_allowed": CONFIG.emergency_access_allowed(),
         "sends_allowed": CONFIG.sends_allowed(),
         "load_user_scss": true,
@@ -195,16 +159,16 @@ async fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
     Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).await.ok(), true)
 }

-#[get("/attachments/<uuid>/<file_id>?<token>")]
-async fn attachments(uuid: SafeString, file_id: SafeString, token: String) -> Option<NamedFile> {
+#[get("/attachments/<cipher_id>/<file_id>?<token>")]
+async fn attachments(cipher_id: CipherId, file_id: AttachmentId, token: String) -> Option<NamedFile> {
     let Ok(claims) = decode_file_download(&token) else {
         return None;
     };
-    if claims.sub != *uuid || claims.file_id != *file_id {
+    if claims.sub != cipher_id || claims.file_id != file_id {
         return None;
     }

-    NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).await.ok()
+    NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(cipher_id.as_ref()).join(file_id.as_ref())).await.ok()
 }

 // We use DbConn here to let the alive healthcheck also verify the database connection.


@@ -14,6 +14,10 @@ use std::{
     net::IpAddr,
 };

+use crate::db::models::{
+    AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
+    SendFileId, SendId, UserId,
+};
 use crate::{error::Error, CONFIG};

 const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
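These imports pull in the typed ID wrappers that replace bare `String` uuids throughout this PR. A minimal sketch of the newtype pattern involved, with illustrative type definitions (the real types live in `crate::db::models` and also derive serde and database traits):

```rust
// Newtype wrappers give each table's ID its own type, so the compiler
// rejects passing a CipherId where a UserId is expected.
#[derive(Clone, Debug, PartialEq, Eq)]
struct UserId(String);

#[derive(Clone, Debug, PartialEq, Eq)]
struct CipherId(String);

impl AsRef<str> for UserId {
    fn as_ref(&self) -> &str {
        &self.0
    }
}

// Illustrative lookup: accepts only a UserId. Calling it with a
// CipherId is now a compile error rather than a silent mix-up.
fn find_user(id: &UserId) -> bool {
    !id.as_ref().is_empty()
}

fn main() {
    let user = UserId("3fa85f64-5717-4562-b3fc-2c963f66afa6".into());
    assert!(find_user(&user));
    println!("ok");
}
```

The `as_ref()` calls that appear in the diff (e.g. in the `attachments` route) are the escape hatch back to `&str` for path joins and uuid validation.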
@@ -31,6 +35,7 @@ static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.
 static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.domain_origin()));
 static JWT_ORG_API_KEY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|api.organization", CONFIG.domain_origin()));
 static JWT_FILE_DOWNLOAD_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|file_download", CONFIG.domain_origin()));
+static JWT_REGISTER_VERIFY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|register_verify", CONFIG.domain_origin()));

 static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
 static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();
@@ -141,6 +146,10 @@ pub fn decode_file_download(token: &str) -> Result<FileDownloadClaims, Error> {
     decode_jwt(token, JWT_FILE_DOWNLOAD_ISSUER.to_string())
 }

+pub fn decode_register_verify(token: &str) -> Result<RegisterVerifyClaims, Error> {
+    decode_jwt(token, JWT_REGISTER_VERIFY_ISSUER.to_string())
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct LoginJwtClaims {
     // Not before
@@ -150,7 +159,7 @@ pub struct LoginJwtClaims {
     // Issuer
     pub iss: String,
     // Subject
-    pub sub: String,
+    pub sub: UserId,

     pub premium: bool,
     pub name: String,
@@ -171,7 +180,12 @@ pub struct LoginJwtClaims {
     // user security_stamp
     pub sstamp: String,
     // device uuid
-    pub device: String,
+    pub device: DeviceId,
+    // what kind of device, like FirefoxBrowser or Android derived from DeviceType
+    pub devicetype: String,
+    // the type of client_id, like web, cli, desktop, browser or mobile
+    pub client_id: String,

     // [ "api", "offline_access" ]
     pub scope: Vec<String>,
     // [ "Application" ]
@@ -187,19 +201,19 @@ pub struct InviteJwtClaims {
     // Issuer
     pub iss: String,
     // Subject
-    pub sub: String,
+    pub sub: UserId,

     pub email: String,
-    pub org_id: Option<String>,
-    pub user_org_id: Option<String>,
+    pub org_id: OrganizationId,
+    pub member_id: MembershipId,
     pub invited_by_email: Option<String>,
 }

 pub fn generate_invite_claims(
-    uuid: String,
+    user_id: UserId,
     email: String,
-    org_id: Option<String>,
-    user_org_id: Option<String>,
+    org_id: OrganizationId,
+    member_id: MembershipId,
     invited_by_email: Option<String>,
 ) -> InviteJwtClaims {
     let time_now = Utc::now();
@@ -208,10 +222,10 @@ pub fn generate_invite_claims(
         nbf: time_now.timestamp(),
         exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
         iss: JWT_INVITE_ISSUER.to_string(),
-        sub: uuid,
+        sub: user_id,
         email,
         org_id,
-        user_org_id,
+        member_id,
         invited_by_email,
     }
 }
@@ -225,18 +239,18 @@ pub struct EmergencyAccessInviteJwtClaims {
     // Issuer
     pub iss: String,
     // Subject
-    pub sub: String,
+    pub sub: UserId,

     pub email: String,
-    pub emer_id: String,
+    pub emer_id: EmergencyAccessId,
     pub grantor_name: String,
     pub grantor_email: String,
 }

 pub fn generate_emergency_access_invite_claims(
-    uuid: String,
+    user_id: UserId,
     email: String,
-    emer_id: String,
+    emer_id: EmergencyAccessId,
     grantor_name: String,
     grantor_email: String,
 ) -> EmergencyAccessInviteJwtClaims {
@@ -246,7 +260,7 @@ pub fn generate_emergency_access_invite_claims(
         nbf: time_now.timestamp(),
         exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
         iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
-        sub: uuid,
+        sub: user_id,
         email,
         emer_id,
         grantor_name,
@@ -263,20 +277,23 @@ pub struct OrgApiKeyLoginJwtClaims {
     // Issuer
     pub iss: String,
     // Subject
-    pub sub: String,
+    pub sub: OrgApiKeyId,

     pub client_id: String,
-    pub client_sub: String,
+    pub client_sub: OrganizationId,
     pub scope: Vec<String>,
 }

-pub fn generate_organization_api_key_login_claims(uuid: String, org_id: String) -> OrgApiKeyLoginJwtClaims {
+pub fn generate_organization_api_key_login_claims(
+    org_api_key_uuid: OrgApiKeyId,
+    org_id: OrganizationId,
+) -> OrgApiKeyLoginJwtClaims {
     let time_now = Utc::now();
     OrgApiKeyLoginJwtClaims {
         nbf: time_now.timestamp(),
         exp: (time_now + TimeDelta::try_hours(1).unwrap()).timestamp(),
         iss: JWT_ORG_API_KEY_ISSUER.to_string(),
-        sub: uuid,
+        sub: org_api_key_uuid,
         client_id: format!("organization.{org_id}"),
         client_sub: org_id,
         scope: vec!["api.organization".into()],
@@ -292,22 +309,49 @@ pub struct FileDownloadClaims {
     // Issuer
     pub iss: String,
     // Subject
-    pub sub: String,
-    pub file_id: String,
+    pub sub: CipherId,
+    pub file_id: AttachmentId,
 }

-pub fn generate_file_download_claims(uuid: String, file_id: String) -> FileDownloadClaims {
+pub fn generate_file_download_claims(cipher_id: CipherId, file_id: AttachmentId) -> FileDownloadClaims {
     let time_now = Utc::now();
     FileDownloadClaims {
         nbf: time_now.timestamp(),
         exp: (time_now + TimeDelta::try_minutes(5).unwrap()).timestamp(),
         iss: JWT_FILE_DOWNLOAD_ISSUER.to_string(),
-        sub: uuid,
+        sub: cipher_id,
         file_id,
     }
 }

+#[derive(Debug, Serialize, Deserialize)]
+pub struct RegisterVerifyClaims {
+    // Not before
+    pub nbf: i64,
+    // Expiration time
+    pub exp: i64,
+    // Issuer
+    pub iss: String,
+    // Subject
+    pub sub: String,
+
+    pub name: Option<String>,
+    pub verified: bool,
+}
+
+pub fn generate_register_verify_claims(email: String, name: Option<String>, verified: bool) -> RegisterVerifyClaims {
+    let time_now = Utc::now();
+    RegisterVerifyClaims {
+        nbf: time_now.timestamp(),
+        exp: (time_now + TimeDelta::try_minutes(30).unwrap()).timestamp(),
+        iss: JWT_REGISTER_VERIFY_ISSUER.to_string(),
+        sub: email,
+        name,
+        verified,
+    }
+}
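Like the other claim constructors, the new register-verify token is valid immediately (`nbf`) and expires 30 minutes later (`exp`). A sketch of that validity window using `std::time` in place of the chrono calls above; the struct and helper names here are illustrative:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative stand-in for the chrono-based timestamps above:
// nbf is "now", exp is now + 30 minutes, both as unix seconds.
struct ClaimsWindow {
    nbf: u64,
    exp: u64,
}

fn register_verify_window(now: SystemTime) -> ClaimsWindow {
    let now_secs = now.duration_since(UNIX_EPOCH).unwrap().as_secs();
    ClaimsWindow {
        nbf: now_secs,
        exp: now_secs + 30 * 60,
    }
}

fn main() {
    let w = register_verify_window(SystemTime::now());
    // The token is usable immediately and for exactly 30 minutes.
    assert_eq!(w.exp - w.nbf, 1800);
    assert!(w.nbf > 0);
    println!("ok");
}
```

A short window like this keeps a leaked registration token from being replayable indefinitely; the file-download claims above use an even tighter 5-minute window for the same reason.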
 #[derive(Debug, Serialize, Deserialize)]
 pub struct BasicJwtClaims {
     // Not before
@@ -331,14 +375,14 @@ pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
     }
 }

-pub fn generate_verify_email_claims(uuid: String) -> BasicJwtClaims {
+pub fn generate_verify_email_claims(user_id: UserId) -> BasicJwtClaims {
     let time_now = Utc::now();
     let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
     BasicJwtClaims {
         nbf: time_now.timestamp(),
         exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
         iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
-        sub: uuid,
+        sub: user_id.to_string(),
     }
 }
@@ -352,7 +396,7 @@ pub fn generate_admin_claims() -> BasicJwtClaims {
     }
 }

-pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
+pub fn generate_send_claims(send_id: &SendId, file_id: &SendFileId) -> BasicJwtClaims {
     let time_now = Utc::now();
     BasicJwtClaims {
         nbf: time_now.timestamp(),
@@ -371,7 +415,7 @@ use rocket::{
 };

 use crate::db::{
-    models::{Collection, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
+    models::{Collection, Device, Membership, MembershipStatus, MembershipType, User, UserStampException},
     DbConn,
 };
@@ -475,19 +519,19 @@ impl<'r> FromRequest<'r> for Headers {
         err_handler!("Invalid claim")
     };

-    let device_uuid = claims.device;
-    let user_uuid = claims.sub;
+    let device_id = claims.device;
+    let user_id = claims.sub;

     let mut conn = match DbConn::from_request(request).await {
         Outcome::Success(conn) => conn,
         _ => err_handler!("Error getting DB"),
     };

-    let Some(device) = Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &mut conn).await else {
+    let Some(device) = Device::find_by_uuid_and_user(&device_id, &user_id, &mut conn).await else {
         err_handler!("Invalid device id")
     };

-    let Some(user) = User::find_by_uuid(&user_uuid, &mut conn).await else {
+    let Some(user) = User::find_by_uuid(&user_id, &mut conn).await else {
         err_handler!("Device has no user associated")
     };
@@ -508,7 +552,7 @@ impl<'r> FromRequest<'r> for Headers {
     let mut user = user;
     user.reset_stamp_exception();
     if let Err(e) = user.save(&mut conn).await {
-        error!("Error updating user: {:#?}", e);
+        error!("Error updating user: {e:#?}");
     }
     err_handler!("Stamp exception is expired")
 } else if !stamp_exception.routes.contains(&current_route.to_string()) {
@@ -534,11 +578,30 @@ pub struct OrgHeaders {
     pub host: String,
     pub device: Device,
     pub user: User,
-    pub org_user_type: UserOrgType,
-    pub org_user: UserOrganization,
+    pub membership_type: MembershipType,
+    pub membership_status: MembershipStatus,
+    pub membership: Membership,
     pub ip: ClientIp,
 }

+impl OrgHeaders {
+    fn is_member(&self) -> bool {
+        // NOTE: we don't care about MembershipStatus at the moment because this is only used
+        // where an invited, accepted or confirmed user is expected if this ever changes or
+        // if from_i32 is changed to return Some(Revoked) this check needs to be changed accordingly
+        self.membership_type >= MembershipType::User
+    }
+    fn is_confirmed_and_admin(&self) -> bool {
+        self.membership_status == MembershipStatus::Confirmed && self.membership_type >= MembershipType::Admin
+    }
+    fn is_confirmed_and_manager(&self) -> bool {
+        self.membership_status == MembershipStatus::Confirmed && self.membership_type >= MembershipType::Manager
+    }
+    fn is_confirmed_and_owner(&self) -> bool {
+        self.membership_status == MembershipStatus::Confirmed && self.membership_type == MembershipType::Owner
+    }
+}
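The `>=` and `==` comparisons in these helpers only work because the membership type is an ordered enum. A sketch of the idea using a derived ordering over variants listed least-to-most privileged; note this is illustrative only — the real `MembershipType` maps database integers and defines its own comparison, so the variant list and derive here are assumptions:

```rust
// Deriving PartialOrd/Ord on a C-like enum orders variants by
// declaration order, so a privilege check becomes one comparison.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum MembershipType {
    User,
    Manager,
    Admin,
    Owner,
}

fn main() {
    // "Admin or above" style checks, as used by is_confirmed_and_admin().
    assert!(MembershipType::Owner >= MembershipType::Admin);
    assert!(MembershipType::Manager >= MembershipType::User);
    // A plain User does not pass an Admin gate.
    assert!(!(MembershipType::User >= MembershipType::Admin));
    println!("ok");
}
```

Centralizing these checks in named helpers also fixes a subtle class of bug: every guard now verifies `Confirmed` status alongside the role, instead of each call site remembering (or forgetting) to do both.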
 #[rocket::async_trait]
 impl<'r> FromRequest<'r> for OrgHeaders {
     type Error = &'static str;
@@ -549,55 +612,50 @@ impl<'r> FromRequest<'r> for OrgHeaders {
         // org_id is usually the second path param ("/organizations/<org_id>"),
         // but there are cases where it is a query value.
         // First check the path, if this is not a valid uuid, try the query values.
-        let url_org_id: Option<&str> = {
-            let mut url_org_id = None;
-            if let Some(Ok(org_id)) = request.param::<&str>(1) {
-                if uuid::Uuid::parse_str(org_id).is_ok() {
-                    url_org_id = Some(org_id);
-                }
-            }
-            if let Some(Ok(org_id)) = request.query_value::<&str>("organizationId") {
-                if uuid::Uuid::parse_str(org_id).is_ok() {
-                    url_org_id = Some(org_id);
-                }
-            }
-            url_org_id
-        };
+        let url_org_id: Option<OrganizationId> = {
+            if let Some(Ok(org_id)) = request.param::<OrganizationId>(1) {
+                Some(org_id.clone())
+            } else if let Some(Ok(org_id)) = request.query_value::<OrganizationId>("organizationId") {
+                Some(org_id.clone())
+            } else {
+                None
+            }
+        };

         match url_org_id {
-            Some(org_id) => {
+            Some(org_id) if uuid::Uuid::parse_str(&org_id).is_ok() => {
                 let mut conn = match DbConn::from_request(request).await {
                     Outcome::Success(conn) => conn,
                     _ => err_handler!("Error getting DB"),
                 };

                 let user = headers.user;
-                let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, org_id, &mut conn).await {
-                    Some(user) => {
-                        if user.status == UserOrgStatus::Confirmed as i32 {
-                            user
-                        } else {
-                            err_handler!("The current user isn't confirmed member of the organization")
-                        }
-                    }
-                    None => err_handler!("The current user isn't member of the organization"),
-                };
+                let Some(membership) = Membership::find_by_user_and_org(&user.uuid, &org_id, &mut conn).await else {
+                    err_handler!("The current user isn't member of the organization");
+                };

                 Outcome::Success(Self {
                     host: headers.host,
                     device: headers.device,
                     user,
-                    org_user_type: {
-                        if let Some(org_usr_type) = UserOrgType::from_i32(org_user.atype) {
-                            org_usr_type
+                    membership_type: {
+                        if let Some(member_type) = MembershipType::from_i32(membership.atype) {
+                            member_type
                         } else {
                             // This should only happen if the DB is corrupted
                             err_handler!("Unknown user type in the database")
                         }
                     },
-                    org_user,
+                    membership_status: {
+                        if let Some(member_status) = MembershipStatus::from_i32(membership.status) {
+                            // NOTE: add additional check for revoked if from_i32 is ever changed
+                            // to return Revoked status.
+                            member_status
+                        } else {
+                            err_handler!("User status is either revoked or invalid.")
+                        }
+                    },
+                    membership,
                     ip: headers.ip,
                 })
             }
@@ -610,8 +668,9 @@ pub struct AdminHeaders {
     pub host: String,
     pub device: Device,
     pub user: User,
-    pub org_user_type: UserOrgType,
+    pub membership_type: MembershipType,
     pub ip: ClientIp,
+    pub org_id: OrganizationId,
 }

 #[rocket::async_trait]
@@ -620,13 +679,14 @@ impl<'r> FromRequest<'r> for AdminHeaders {
     async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
         let headers = try_outcome!(OrgHeaders::from_request(request).await);
-        if headers.org_user_type >= UserOrgType::Admin {
+        if headers.is_confirmed_and_admin() {
             Outcome::Success(Self {
                 host: headers.host,
                 device: headers.device,
                 user: headers.user,
-                org_user_type: headers.org_user_type,
+                membership_type: headers.membership_type,
                 ip: headers.ip,
+                org_id: headers.membership.org_uuid,
             })
         } else {
             err_handler!("You need to be Admin or Owner to call this endpoint")
@@ -634,30 +694,19 @@ impl<'r> FromRequest<'r> for AdminHeaders {
         }
     }
 }

-impl From<AdminHeaders> for Headers {
-    fn from(h: AdminHeaders) -> Headers {
-        Headers {
-            host: h.host,
-            device: h.device,
-            user: h.user,
-            ip: h.ip,
-        }
-    }
-}
 // col_id is usually the fourth path param ("/organizations/<org_id>/collections/<col_id>"),
 // but there could be cases where it is a query value.
 // First check the path, if this is not a valid uuid, try the query values.
-fn get_col_id(request: &Request<'_>) -> Option<String> {
+fn get_col_id(request: &Request<'_>) -> Option<CollectionId> {
     if let Some(Ok(col_id)) = request.param::<String>(3) {
         if uuid::Uuid::parse_str(&col_id).is_ok() {
-            return Some(col_id);
+            return Some(col_id.into());
         }
     }

     if let Some(Ok(col_id)) = request.query_value::<String>("collectionId") {
         if uuid::Uuid::parse_str(&col_id).is_ok() {
-            return Some(col_id);
+            return Some(col_id.into());
         }
     }
@@ -672,6 +721,7 @@ pub struct ManagerHeaders {
     pub device: Device,
     pub user: User,
     pub ip: ClientIp,
+    pub org_id: OrganizationId,
 }
 #[rocket::async_trait]
@@ -680,7 +730,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
     async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
         let headers = try_outcome!(OrgHeaders::from_request(request).await);
-        if headers.org_user_type >= UserOrgType::Manager {
+        if headers.is_confirmed_and_manager() {
             match get_col_id(request) {
                 Some(col_id) => {
                     let mut conn = match DbConn::from_request(request).await {
@@ -688,7 +738,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
                         _ => err_handler!("Error getting DB"),
                     };

-                    if !Collection::can_access_collection(&headers.org_user, &col_id, &mut conn).await {
+                    if !Collection::can_access_collection(&headers.membership, &col_id, &mut conn).await {
                         err_handler!("The current user isn't a manager for this collection")
                     }
                 }
@@ -700,6 +750,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
                 device: headers.device,
                 user: headers.user,
                 ip: headers.ip,
+                org_id: headers.membership.org_uuid,
             })
         } else {
             err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
@@ -724,7 +775,7 @@ pub struct ManagerHeadersLoose {
     pub host: String,
     pub device: Device,
     pub user: User,
-    pub org_user: UserOrganization,
+    pub membership: Membership,
     pub ip: ClientIp,
 }

@@ -734,12 +785,12 @@ impl<'r> FromRequest<'r> for ManagerHeadersLoose {
     async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
         let headers = try_outcome!(OrgHeaders::from_request(request).await);
-        if headers.org_user_type >= UserOrgType::Manager {
+        if headers.is_confirmed_and_manager() {
             Outcome::Success(Self {
                 host: headers.host,
                 device: headers.device,
                 user: headers.user,
-                org_user: headers.org_user,
+                membership: headers.membership,
                 ip: headers.ip,
             })
         } else {
@@ -762,14 +813,14 @@ impl From<ManagerHeadersLoose> for Headers {
 impl ManagerHeaders {
     pub async fn from_loose(
         h: ManagerHeadersLoose,
-        collections: &Vec<String>,
+        collections: &Vec<CollectionId>,
         conn: &mut DbConn,
     ) -> Result<ManagerHeaders, Error> {
         for col_id in collections {
-            if uuid::Uuid::parse_str(col_id).is_err() {
+            if uuid::Uuid::parse_str(col_id.as_ref()).is_err() {
                 err!("Collection Id is malformed!");
             }
-            if !Collection::can_access_collection(&h.org_user, col_id, conn).await {
+            if !Collection::can_access_collection(&h.membership, col_id, conn).await {
                 err!("You don't have access to all collections!");
             }
         }
@@ -779,6 +830,7 @@ impl ManagerHeaders {
             device: h.device,
             user: h.user,
             ip: h.ip,
+            org_id: h.membership.org_uuid,
         })
     }
 }
@@ -787,6 +839,7 @@ pub struct OwnerHeaders {
     pub device: Device,
     pub user: User,
     pub ip: ClientIp,
+    pub org_id: OrganizationId,
 }

 #[rocket::async_trait]
@@ -795,11 +848,12 @@ impl<'r> FromRequest<'r> for OwnerHeaders {
     async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
         let headers = try_outcome!(OrgHeaders::from_request(request).await);
-        if headers.org_user_type == UserOrgType::Owner {
+        if headers.is_confirmed_and_owner() {
             Outcome::Success(Self {
                 device: headers.device,
                 user: headers.user,
                 ip: headers.ip,
+                org_id: headers.membership.org_uuid,
             })
         } else {
             err_handler!("You need to be Owner to call this endpoint")
@@ -807,6 +861,45 @@
         }
     }
 }

+pub struct OrgMemberHeaders {
+    pub host: String,
+    pub device: Device,
+    pub user: User,
+    pub membership: Membership,
+    pub ip: ClientIp,
+}
+
+#[rocket::async_trait]
+impl<'r> FromRequest<'r> for OrgMemberHeaders {
+    type Error = &'static str;
+    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
+        let headers = try_outcome!(OrgHeaders::from_request(request).await);
+        if headers.is_member() {
+            Outcome::Success(Self {
+                host: headers.host,
+                device: headers.device,
+                user: headers.user,
+                membership: headers.membership,
+                ip: headers.ip,
+            })
+        } else {
+            err_handler!("You need to be a Member of the Organization to call this endpoint")
+        }
+    }
+}
+
+impl From<OrgMemberHeaders> for Headers {
+    fn from(h: OrgMemberHeaders) -> Headers {
+        Headers {
+            host: h.host,
+            device: h.device,
+            user: h.user,
+            ip: h.ip,
+        }
+    }
+}

 //
 // Client IP address detection
 //
@@ -827,7 +920,7 @@ impl<'r> FromRequest<'r> for ClientIp {
                 None => ip,
             }
             .parse()
-            .map_err(|_| warn!("'{}' header is malformed: {}", CONFIG.ip_header(), ip))
+            .map_err(|_| warn!("'{}' header is malformed: {ip}", CONFIG.ip_header()))
             .ok()
         })
     } else {

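In the `ClientIp` guard above, the proxy-supplied header value falls back to `None` (with a warning) when it does not parse as an address. The parse step itself is plain `std::net::IpAddr` parsing, sketched here with an illustrative helper:

```rust
use std::net::IpAddr;

// Parse a client IP taken from a proxy header such as X-Real-IP.
// Returns None on malformed input, mirroring the .parse().ok()
// fallback in the guard (the warn! logging is omitted here).
fn parse_client_ip(header: &str) -> Option<IpAddr> {
    header.trim().parse::<IpAddr>().ok()
}

fn main() {
    // Both IPv4 and IPv6 literals are accepted by IpAddr's FromStr.
    assert!(parse_client_ip("203.0.113.7").is_some());
    assert!(parse_client_ip("2001:db8::1").is_some());
    // Garbage (or a spoofed, non-address header) yields None.
    assert!(parse_client_ip("not-an-ip").is_none());
    println!("ok");
}
```

Failing soft here matters: a malformed header should degrade to the socket's peer address rather than reject the request outright.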

@@ -1,8 +1,10 @@
-use std::env::consts::EXE_SUFFIX;
-use std::process::exit;
-use std::sync::{
-    atomic::{AtomicBool, Ordering},
-    RwLock,
-};
+use std::{
+    env::consts::EXE_SUFFIX,
+    process::exit,
+    sync::{
+        atomic::{AtomicBool, Ordering},
+        RwLock,
+    },
+};

 use job_scheduler_ng::Schedule;
@@ -12,7 +14,7 @@
 use crate::{
     db::DbConnType,
     error::Error,
-    util::{get_env, get_env_bool, parse_experimental_client_feature_flags},
+    util::{get_env, get_env_bool, get_web_vault_version, is_valid_email, parse_experimental_client_feature_flags},
 };

 static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
@@ -102,7 +104,7 @@
         let mut builder = ConfigBuilder::default();
         $($(
-            builder.$name = make_config! { @getenv paste::paste!(stringify!([<$name:upper>])), $ty };
+            builder.$name = make_config! { @getenv pastey::paste!(stringify!([<$name:upper>])), $ty };
         )+)+

         builder
@@ -114,6 +116,14 @@
         serde_json::from_str(&config_str).map_err(Into::into)
     }

+    fn clear_non_editable(&mut self) {
+        $($(
+            if !$editable {
+                self.$name = None;
+            }
+        )+)+
+    }
+
     /// Merges the values of both builders into a new builder.
     /// If both have the same element, `other` wins.
     fn merge(&self, other: &Self, show_overrides: bool, overrides: &mut Vec<String>) -> Self {
@@ -123,7 +133,7 @@
             builder.$name = v.clone();
             if self.$name.is_some() {
-                overrides.push(paste::paste!(stringify!([<$name:upper>])).into());
+                overrides.push(pastey::paste!(stringify!([<$name:upper>])).into());
             }
         }
     )+)+
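The new `clear_non_editable` expands to one `None` reset per config field whose `$editable` flag is false, so settings saved from the admin page cannot override locked values. A hand-expanded sketch of that expansion for a two-field builder; the field names here are illustrative, not the real config keys:

```rust
// Hand-expanded equivalent of what the macro generates: each Option
// field flagged non-editable gets reset to None before the builder
// is merged into the effective config.
#[derive(Default, Debug)]
struct ConfigBuilder {
    domain: Option<String>,      // editable from the admin page
    data_folder: Option<String>, // not editable: env/file only
}

impl ConfigBuilder {
    fn clear_non_editable(&mut self) {
        // The macro emits one such reset per `$editable == false` field.
        self.data_folder = None;
    }
}

fn main() {
    let mut b = ConfigBuilder {
        domain: Some("https://vw.example.com".into()),
        data_folder: Some("/tmp/evil".into()),
    };
    b.clear_non_editable();
    // The editable value survives; the locked one is dropped.
    assert!(b.domain.is_some());
    assert!(b.data_folder.is_none());
    println!("ok");
}
```

Because every field is an `Option`, a cleared value simply falls through to the environment variable or built-in default during the later `merge` step.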
@@ -221,7 +231,7 @@
     element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
     element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
     element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
-    element.insert("overridden".into(), (overridden.contains(&paste::paste!(stringify!([<$name:upper>])).into())).into());
+    element.insert("overridden".into(), (overridden.contains(&pastey::paste!(stringify!([<$name:upper>])).into())).into());
     element
 }),
 )+
@@ -365,19 +375,19 @@
     /// Data folder |> Main data folder
     data_folder: String, false, def, "data".to_string();
     /// Database URL
-    database_url: String, false, auto, |c| format!("{}/{}", c.data_folder, "db.sqlite3");
+    database_url: String, false, auto, |c| format!("{}/db.sqlite3", c.data_folder);
     /// Icon cache folder
-    icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
+    icon_cache_folder: String, false, auto, |c| format!("{}/icon_cache", c.data_folder);
     /// Attachments folder
-    attachments_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "attachments");
+    attachments_folder: String, false, auto, |c| format!("{}/attachments", c.data_folder);
     /// Sends folder
-    sends_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "sends");
+    sends_folder: String, false, auto, |c| format!("{}/sends", c.data_folder);
     /// Temp folder |> Used for storing temporary file uploads
-    tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
+    tmp_folder: String, false, auto, |c| format!("{}/tmp", c.data_folder);
     /// Templates folder
-    templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates");
+    templates_folder: String, false, auto, |c| format!("{}/templates", c.data_folder);
     /// Session JWT key
-    rsa_key_filename: String, false, auto, |c| format!("{}/{}", c.data_folder, "rsa_key");
+    rsa_key_filename: String, false, auto, |c| format!("{}/rsa_key", c.data_folder);
     /// Web vault folder
     web_vault_folder: String, false, def, "web-vault/".to_string();
 },
@@ -474,7 +484,8 @@ make_config! {
 disable_icon_download: bool, true, def, false;
 /// Allow new signups |> Controls whether new users can register. Users can be invited by the vaultwarden admin even if this is disabled
 signups_allowed: bool, true, def, true;
-/// Require email verification on signups. This will prevent logins from succeeding until the address has been verified
+/// Require email verification on signups. On new client versions, this will require verification at signup time. On older clients,
+/// this will prevent logins from succeeding until the address has been verified
 signups_verify: bool, true, def, false;
 /// If signups require email verification, automatically re-send verification email if it hasn't been sent for a while (in seconds)
 signups_verify_resend_time: u64, true, def, 3_600;
@@ -568,7 +579,7 @@ make_config! {
 authenticator_disable_time_drift: bool, true, def, false;
 /// Customize the enabled feature flags on the clients |> This is a comma separated list of feature flags to enable.
-experimental_client_feature_flags: String, false, def, "fido2-vault-credentials".to_string();
+experimental_client_feature_flags: String, false, def, String::new();
 /// Require new device emails |> When a user logs in an email is required to be sent.
 /// If sending the email fails the login attempt will fail.
@@ -660,9 +671,9 @@ make_config! {
 _enable_duo: bool, true, def, true;
 /// Attempt to use deprecated iframe-based Traditional Prompt (Duo WebSDK 2)
 duo_use_iframe: bool, false, def, false;
-/// Integration Key
+/// Client Id
 duo_ikey: String, true, option;
-/// Secret Key
+/// Client Secret
 duo_skey: Pass, true, option;
 /// Host
 duo_host: String, true, option;
@@ -677,7 +688,7 @@ make_config! {
 /// Use Sendmail |> Whether to send mail via the `sendmail` command
 use_sendmail: bool, true, def, false;
 /// Sendmail Command |> Which sendmail command to use. The one found in the $PATH is used if not specified.
-sendmail_command: String, true, option;
+sendmail_command: String, false, option;
 /// Host
 smtp_host: String, true, option;
 /// DEPRECATED smtp_ssl |> DEPRECATED - Please use SMTP_SECURITY
@@ -724,7 +735,7 @@ make_config! {
 email_expiration_time: u64, true, def, 600;
 /// Maximum attempts |> Maximum attempts before an email token is reset and a new email will need to be sent
 email_attempts_limit: u64, true, def, 3;
-/// Automatically enforce at login |> Setup email 2FA provider regardless of any organization policy
+/// Setup email 2FA at signup |> Setup email 2FA provider on registration regardless of any organization policy
 email_2fa_enforce_on_verified_invite: bool, true, def, false;
 /// Auto-enable 2FA (Know the risks!) |> Automatically setup email 2FA as fallback provider when needed
 email_2fa_auto_fallback: bool, true, def, false;
@@ -822,15 +833,25 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
 }
 }
-// TODO: deal with deprecated flags so they can be removed from this list, cf. #4263
+// Server (v2025.5.0): https://github.com/bitwarden/server/blob/4a7db112a0952c6df8bacf36c317e9c4e58c3651/src/Core/Constants.cs#L102
+// Client (v2025.5.0): https://github.com/bitwarden/clients/blob/9df8a3cc50ed45f52513e62c23fcc8a4b745f078/libs/common/src/enums/feature-flag.enum.ts#L10
+// Android (v2025.4.0): https://github.com/bitwarden/android/blob/bee09de972c3870de0d54a0067996be473ec55c7/app/src/main/java/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L27
+// iOS (v2025.4.0): https://github.com/bitwarden/ios/blob/956e05db67344c912e3a1b8cb2609165d67da1c9/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
+//
+// NOTE: Move deprecated flags to the utils::parse_experimental_client_feature_flags() DEPRECATED_FLAGS const!
 const KNOWN_FLAGS: &[&str] = &[
-    "autofill-overlay",
-    "autofill-v2",
-    "browser-fileless-import",
-    "extension-refresh",
-    "fido2-vault-credentials",
-    "ssh-key-vault-item",
+    // Autofill Team
+    "inline-menu-positioning-improvements",
+    "inline-menu-totp",
     "ssh-agent",
+    // Key Management Team
+    "ssh-key-vault-item",
+    // Tools
+    "export-attachments",
+    // Mobile Team
+    "anon-addy-self-host-alias",
+    "simple-login-self-host-alias",
+    "mutual-tls",
 ];
 let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
 let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
@@ -892,12 +913,12 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
 let command = cfg.sendmail_command.clone().unwrap_or_else(|| format!("sendmail{EXE_SUFFIX}"));
 let mut path = std::path::PathBuf::from(&command);
+// Check if we can find the sendmail command to execute when no absolute path is given
 if !path.is_absolute() {
-    match which::which(&command) {
-        Ok(result) => path = result,
-        Err(_) => err!(format!("sendmail command {command:?} not found in $PATH")),
-    }
+    let Ok(which_path) = which::which(&command) else {
+        err!(format!("sendmail command {command} not found in $PATH"))
+    };
+    path = which_path;
 }
 match path.metadata() {
@@ -931,8 +952,8 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
 }
 }
-if (cfg.smtp_host.is_some() || cfg.use_sendmail) && !cfg.smtp_from.contains('@') {
-    err!("SMTP_FROM does not contain a mandatory @ sign")
+if (cfg.smtp_host.is_some() || cfg.use_sendmail) && !is_valid_email(&cfg.smtp_from) {
+    err!(format!("SMTP_FROM '{}' is not a valid email address", cfg.smtp_from))
 }
 if cfg._enable_email_2fa && cfg.email_token_size < 6 {
@@ -1145,12 +1166,17 @@ impl Config {
 })
 }
-pub fn update_config(&self, other: ConfigBuilder) -> Result<(), Error> {
+pub fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
     // Remove default values
     //let builder = other.remove(&self.inner.read().unwrap()._env);
     // TODO: Remove values that are defaults, above only checks those set by env and not the defaults
-    let builder = other;
+    let mut builder = other;
+    // Remove values that are not editable
+    if ignore_non_editable {
+        builder.clear_non_editable();
+    }
     // Serialize now before we consume the builder
     let config_str = serde_json::to_string_pretty(&builder)?;
@@ -1185,7 +1211,7 @@ impl Config {
 let mut _overrides = Vec::new();
 usr.merge(&other, false, &mut _overrides)
 };
-self.update_config(builder)
+self.update_config(builder, false)
 }
/// Tests whether an email's domain is allowed. A domain is allowed if it /// Tests whether an email's domain is allowed. A domain is allowed if it
@@ -1194,7 +1220,7 @@ impl Config {
 pub fn is_email_domain_allowed(&self, email: &str) -> bool {
     let e: Vec<&str> = email.rsplitn(2, '@').collect();
     if e.len() != 2 || e[0].is_empty() || e[1].is_empty() {
-        warn!("Failed to parse email address '{}'", email);
+        warn!("Failed to parse email address '{email}'");
         return false;
     }
     let email_domain = e[0].to_lowercase();
@@ -1326,6 +1352,8 @@ where
 // Register helpers
 hb.register_helper("case", Box::new(case_helper));
 hb.register_helper("to_json", Box::new(to_json));
+hb.register_helper("webver", Box::new(webver));
+hb.register_helper("vwver", Box::new(vwver));
 macro_rules! reg {
     ($name:expr) => {{
@@ -1349,6 +1377,7 @@ where
 reg!("email/email_footer_text");
 reg!("email/admin_reset_password", ".html");
+reg!("email/change_email_existing", ".html");
 reg!("email/change_email", ".html");
 reg!("email/delete_account", ".html");
 reg!("email/emergency_access_invite_accepted", ".html");
@@ -1365,6 +1394,7 @@ where
 reg!("email/protected_action", ".html");
 reg!("email/pw_hint_none", ".html");
 reg!("email/pw_hint_some", ".html");
+reg!("email/register_verify_email", ".html");
 reg!("email/send_2fa_removed_from_org", ".html");
 reg!("email/send_emergency_access_invite", ".html");
 reg!("email/send_org_invite", ".html");
@@ -1429,3 +1459,42 @@ fn to_json<'reg, 'rc>(
 out.write(&json)?;
 Ok(())
 }
+// Parse the web-vault version as a semver version so it can be used in smaller/greater-than comparisons.
+// The default is based upon the version at the time this feature was added.
+static WEB_VAULT_VERSION: Lazy<semver::Version> = Lazy::new(|| {
+    let vault_version = get_web_vault_version();
+    // Use a single regex capture to extract version components
+    let re = regex::Regex::new(r"(\d{4})\.(\d{1,2})\.(\d{1,2})").unwrap();
+    re.captures(&vault_version)
+        .and_then(|c| {
+            (c.len() == 4).then(|| {
+                format!("{}.{}.{}", c.get(1).unwrap().as_str(), c.get(2).unwrap().as_str(), c.get(3).unwrap().as_str())
+            })
+        })
+        .and_then(|v| semver::Version::parse(&v).ok())
+        .unwrap_or_else(|| semver::Version::parse("2024.6.2").unwrap())
+});
+// Parse the Vaultwarden version as a semver version so it can be used in smaller/greater-than comparisons.
+// The default is based upon the version at the time this feature was added.
+static VW_VERSION: Lazy<semver::Version> = Lazy::new(|| {
+    let vw_version = crate::VERSION.unwrap_or("1.32.5");
+    // Use a single regex capture to extract version components
+    let re = regex::Regex::new(r"(\d{1})\.(\d{1,2})\.(\d{1,2})").unwrap();
+    re.captures(vw_version)
+        .and_then(|c| {
+            (c.len() == 4).then(|| {
+                format!("{}.{}.{}", c.get(1).unwrap().as_str(), c.get(2).unwrap().as_str(), c.get(3).unwrap().as_str())
+            })
+        })
+        .and_then(|v| semver::Version::parse(&v).ok())
+        .unwrap_or_else(|| semver::Version::parse("1.32.5").unwrap())
+});
+handlebars::handlebars_helper!(webver: | web_vault_version: String |
+    semver::VersionReq::parse(&web_vault_version).expect("Invalid web-vault version compare string").matches(&WEB_VAULT_VERSION)
+);
+handlebars::handlebars_helper!(vwver: | vw_version: String |
+    semver::VersionReq::parse(&vw_version).expect("Invalid Vaultwarden version compare string").matches(&VW_VERSION)
+);
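The `webver`/`vwver` helpers above boil down to "parse a dotted version string, compare it against a requirement". As a dependency-free illustration (the real code uses the `semver` crate; `parse_version` and `at_least` here are hypothetical names for the sketch), the same comparison can be expressed with lexicographically ordered tuples:

```rust
// Hypothetical, dependency-free sketch of the comparison the webver/vwver
// helpers perform, limited to a ">= minimum" style requirement.

fn parse_version(s: &str) -> Option<(u32, u32, u32)> {
    // Split "YYYY.M.P" (or "MAJOR.MINOR.PATCH") into three numeric parts.
    let mut parts = s.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

/// Returns true when `version` satisfies a ">= min" requirement.
fn at_least(version: &str, min: &str) -> bool {
    match (parse_version(version), parse_version(min)) {
        // Tuples compare lexicographically: major first, then minor, then patch.
        (Some(v), Some(m)) => v >= m,
        _ => false,
    }
}

fn main() {
    // 2025.1.1 is newer than the 2024.6.2 fallback used above.
    assert!(at_least("2025.1.1", "2024.6.2"));
    assert!(!at_least("2024.5.0", "2024.6.2"));
    println!("version checks ok");
}
```

The production code instead delegates to `semver::VersionReq::matches`, which also understands richer requirement syntax such as `>=2024.6.2, <2025.0.0`.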


@@ -6,7 +6,7 @@ use std::num::NonZeroU32;
 use data_encoding::{Encoding, HEXLOWER};
 use ring::{digest, hmac, pbkdf2};
-static DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
+const DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
 const OUTPUT_LEN: usize = digest::SHA256_OUTPUT_LEN;
 pub fn hash_password(secret: &[u8], salt: &[u8], iterations: u32) -> Vec<u8> {
@@ -56,11 +56,11 @@ pub fn encode_random_bytes<const N: usize>(e: Encoding) -> String {
 pub fn get_random_string(alphabet: &[u8], num_chars: usize) -> String {
     // Ref: https://rust-lang-nursery.github.io/rust-cookbook/algorithms/randomness.html
     use rand::Rng;
-    let mut rng = rand::thread_rng();
+    let mut rng = rand::rng();
     (0..num_chars)
         .map(|_| {
-            let i = rng.gen_range(0..alphabet.len());
+            let i = rng.random_range(0..alphabet.len());
             alphabet[i] as char
         })
         .collect()
@@ -84,14 +84,15 @@ pub fn generate_id<const N: usize>() -> String {
 encode_random_bytes::<N>(HEXLOWER)
 }
-pub fn generate_send_id() -> String {
-    // Send IDs are globally scoped, so make them longer to avoid collisions.
+pub fn generate_send_file_id() -> String {
+    // Send File IDs are globally scoped, so make them longer to avoid collisions.
     generate_id::<32>() // 256 bits
 }
-pub fn generate_attachment_id() -> String {
+use crate::db::models::AttachmentId;
+pub fn generate_attachment_id() -> AttachmentId {
     // Attachment IDs are scoped to a cipher, so they can be smaller.
-    generate_id::<10>() // 80 bits
+    AttachmentId(generate_id::<10>()) // 80 bits
 }
 /// Generates a numeric token for email-based verifications.
@@ -109,7 +110,6 @@ pub fn generate_api_key() -> String {
 // Constant time compare
 //
 pub fn ct_eq<T: AsRef<[u8]>, U: AsRef<[u8]>>(a: T, b: U) -> bool {
-    use ring::constant_time::verify_slices_are_equal;
-    verify_slices_are_equal(a.as_ref(), b.as_ref()).is_ok()
+    use subtle::ConstantTimeEq;
+    a.as_ref().ct_eq(b.as_ref()).into()
 }
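The hunk above swaps `ring::constant_time::verify_slices_are_equal` for `subtle::ConstantTimeEq`; both avoid the timing leak of an early-return byte compare. As an illustrative sketch only (not the internals of either crate; `ct_eq_sketch` is a hypothetical name), the core idea is an XOR accumulator that always touches every byte:

```rust
// Illustrative sketch of a constant-time equality check: the XOR
// accumulator visits every byte, so the running time does not reveal
// the position of the first mismatch the way `a == b` with an early
// return would.

fn ct_eq_sketch(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is not secret here
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences without branching on data
    }
    diff == 0
}

fn main() {
    assert!(ct_eq_sketch(b"token-123", b"token-123"));
    assert!(!ct_eq_sketch(b"token-123", b"token-124"));
    println!("ct_eq sketch ok");
}
```

This matters for comparing secrets such as access codes and API keys, where a variable-time compare would let an attacker recover a token byte by byte from response timings.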


@@ -130,7 +130,7 @@ macro_rules! generate_connections {
 DbConnType::$name => {
     #[cfg($name)]
     {
-        paste::paste!{ [< $name _migrations >]::run_migrations()?; }
+        pastey::paste!{ [< $name _migrations >]::run_migrations()?; }
         let manager = ConnectionManager::new(&url);
         let pool = Pool::builder()
             .max_size(CONFIG.database_max_conns())
@@ -259,7 +259,7 @@ macro_rules! db_run {
 $($(
     #[cfg($db)]
     $crate::db::DbConnInner::$db($conn) => {
-        paste::paste! {
+        pastey::paste! {
             #[allow(unused)] use $crate::db::[<__ $db _schema>]::{self as schema, *};
             #[allow(unused)] use [<__ $db _model>]::*;
         }
@@ -280,7 +280,7 @@ macro_rules! db_run {
 $($(
     #[cfg($db)]
     $crate::db::DbConnInner::$db($conn) => {
-        paste::paste! {
+        pastey::paste! {
             #[allow(unused)] use $crate::db::[<__ $db _schema>]::{self as schema, *};
             // @ RAW: #[allow(unused)] use [<__ $db _model>]::*;
         }
@@ -337,7 +337,7 @@ macro_rules! db_object {
 };
 ( @db $db:ident | $( #[$attr:meta] )* | $name:ident | $( $( #[$field_attr:meta] )* $vis:vis $field:ident : $typ:ty),+) => {
-    paste::paste! {
+    pastey::paste! {
         #[allow(unused)] use super::*;
         #[allow(unused)] use diesel::prelude::*;
         #[allow(unused)] use $crate::db::[<__ $db _schema>]::*;


@@ -1,9 +1,12 @@
 use std::io::ErrorKind;
 use bigdecimal::{BigDecimal, ToPrimitive};
+use derive_more::{AsRef, Deref, Display};
 use serde_json::Value;
+use super::{CipherId, OrganizationId, UserId};
 use crate::CONFIG;
+use macros::IdFromParam;
 db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)] #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,8 +14,8 @@ db_object! {
 #[diesel(treat_none_as_null = true)]
 #[diesel(primary_key(id))]
 pub struct Attachment {
-    pub id: String,
-    pub cipher_uuid: String,
+    pub id: AttachmentId,
+    pub cipher_uuid: CipherId,
     pub file_name: String, // encrypted
     pub file_size: i64,
     pub akey: Option<String>,
@@ -21,7 +24,13 @@ db_object! {
 /// Local methods
 impl Attachment {
-    pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i64, akey: Option<String>) -> Self {
+    pub const fn new(
+        id: AttachmentId,
+        cipher_uuid: CipherId,
+        file_name: String,
+        file_size: i64,
+        akey: Option<String>,
+    ) -> Self {
         Self {
             id,
             cipher_uuid,
@@ -37,7 +46,7 @@ impl Attachment {
 pub fn get_url(&self, host: &str) -> String {
     let token = encode_jwt(&generate_file_download_claims(self.cipher_uuid.clone(), self.id.clone()));
-    format!("{}/attachments/{}/{}?token={}", host, self.cipher_uuid, self.id, token)
+    format!("{host}/attachments/{}/{}?token={token}", self.cipher_uuid, self.id)
 }
 pub fn to_json(&self, host: &str) -> Value {
@@ -108,7 +117,7 @@ impl Attachment {
 // upstream caller has already cleaned up the file as part of
 // its own error handling.
 Err(e) if e.kind() == ErrorKind::NotFound => {
-    debug!("File '{}' already deleted.", file_path);
+    debug!("File '{file_path}' already deleted.");
     Ok(())
 }
 Err(e) => Err(e.into()),
@@ -117,14 +126,14 @@ impl Attachment {
 }}
 }
-pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
     for attachment in Attachment::find_by_cipher(cipher_uuid, conn).await {
         attachment.delete(conn).await?;
     }
     Ok(())
 }
-pub async fn find_by_id(id: &str, conn: &mut DbConn) -> Option<Self> {
+pub async fn find_by_id(id: &AttachmentId, conn: &mut DbConn) -> Option<Self> {
     db_run! { conn: {
         attachments::table
             .filter(attachments::id.eq(id.to_lowercase()))
@@ -134,7 +143,7 @@ impl Attachment {
 }}
 }
-pub async fn find_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+pub async fn find_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> Vec<Self> {
     db_run! { conn: {
         attachments::table
             .filter(attachments::cipher_uuid.eq(cipher_uuid))
@@ -144,7 +153,7 @@ impl Attachment {
 }}
 }
-pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
+pub async fn size_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
     db_run! { conn: {
         let result: Option<BigDecimal> = attachments::table
             .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -161,7 +170,7 @@ impl Attachment {
 }}
 }
-pub async fn count_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
+pub async fn count_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
     db_run! { conn: {
         attachments::table
             .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -172,7 +181,7 @@ impl Attachment {
 }}
 }
-pub async fn size_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
+pub async fn size_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
     db_run! { conn: {
         let result: Option<BigDecimal> = attachments::table
             .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -189,7 +198,7 @@ impl Attachment {
 }}
 }
-pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
+pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
     db_run! { conn: {
         attachments::table
             .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -203,7 +212,11 @@ impl Attachment {
 // This will return all attachments linked to the user or org
 // There is no filtering done here if the user actually has access!
 // It is used to speed up the sync process, and the matching is done in a different part.
-pub async fn find_all_by_user_and_orgs(user_uuid: &str, org_uuids: &Vec<String>, conn: &mut DbConn) -> Vec<Self> {
+pub async fn find_all_by_user_and_orgs(
+    user_uuid: &UserId,
+    org_uuids: &Vec<OrganizationId>,
+    conn: &mut DbConn,
+) -> Vec<Self> {
     db_run! { conn: {
         attachments::table
             .left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -216,3 +229,20 @@ impl Attachment {
 }}
 }
 }
+#[derive(
+    Clone,
+    Debug,
+    AsRef,
+    Deref,
+    DieselNewType,
+    Display,
+    FromForm,
+    Hash,
+    PartialEq,
+    Eq,
+    Serialize,
+    Deserialize,
+    IdFromParam,
+)]
+pub struct AttachmentId(pub String);
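The `AttachmentId` newtype above relies on derive macros from `derive_more` and the repo's `macros` crate. A minimal, dependency-free sketch of the same pattern (hand-written `Display` and `Deref` standing in for those derives) shows why it is worth the ceremony: wrapping the `String` yields a distinct type, so an `AttachmentId` can no longer be passed where, say, a `CipherId` is expected, and the compiler catches swapped-argument bugs that plain `&str` IDs would let through.

```rust
use std::fmt;
use std::ops::Deref;

// Newtype wrapper: same runtime representation as String, distinct type.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct AttachmentId(pub String);

impl fmt::Display for AttachmentId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&self.0) // Display forwards to the inner String
    }
}

impl Deref for AttachmentId {
    type Target = str;
    fn deref(&self) -> &str {
        &self.0 // lets &str methods (len, to_lowercase, ...) work directly
    }
}

fn main() {
    let id = AttachmentId("a1b2c3d4e5".to_string());
    assert_eq!(id.to_string(), "a1b2c3d4e5");
    assert_eq!(id.len(), 10); // 10 hex chars = 80 bits, as in generate_attachment_id
    println!("newtype id ok");
}
```

In the real code, `DieselNewType` additionally maps the wrapper transparently to the underlying database column, and `IdFromParam`/`UuidFromParam` wire it into Rocket route parameters.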


@@ -1,5 +1,9 @@
-use crate::crypto::ct_eq;
+use super::{DeviceId, OrganizationId, UserId};
+use crate::{crypto::ct_eq, util::format_date};
 use chrono::{NaiveDateTime, Utc};
+use derive_more::{AsRef, Deref, Display, From};
+use macros::UuidFromParam;
+use serde_json::Value;
 db_object! {
     #[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset, Deserialize, Serialize)]
@@ -7,15 +11,15 @@ db_object! {
 #[diesel(treat_none_as_null = true)]
 #[diesel(primary_key(uuid))]
 pub struct AuthRequest {
-    pub uuid: String,
-    pub user_uuid: String,
-    pub organization_uuid: Option<String>,
-    pub request_device_identifier: String,
-    pub device_type: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
+    pub uuid: AuthRequestId,
+    pub user_uuid: UserId,
+    pub organization_uuid: Option<OrganizationId>,
+    pub request_device_identifier: DeviceId,
+    pub device_type: i32, // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/Enums/DeviceType.cs
     pub request_ip: String,
-    pub response_device_id: Option<String>,
+    pub response_device_id: Option<DeviceId>,
     pub access_code: String,
     pub public_key: String,
@@ -33,8 +37,8 @@ db_object! {
 impl AuthRequest {
     pub fn new(
-        user_uuid: String,
-        request_device_identifier: String,
+        user_uuid: UserId,
+        request_device_identifier: DeviceId,
         device_type: i32,
         request_ip: String,
         access_code: String,
@@ -43,7 +47,7 @@ impl AuthRequest {
 let now = Utc::now().naive_utc();
 Self {
-    uuid: crate::util::get_uuid(),
+    uuid: AuthRequestId(crate::util::get_uuid()),
     user_uuid,
     organization_uuid: None,
@@ -61,6 +65,13 @@ impl AuthRequest {
         authentication_date: None,
     }
 }
+pub fn to_json_for_pending_device(&self) -> Value {
+    json!({
+        "id": self.uuid,
+        "creationDate": format_date(&self.creation_date),
+    })
+}
 }
 use crate::db::DbConn;
@@ -101,7 +112,7 @@ impl AuthRequest {
 }
 }
-pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
+pub async fn find_by_uuid(uuid: &AuthRequestId, conn: &mut DbConn) -> Option<Self> {
     db_run! {conn: {
         auth_requests::table
             .filter(auth_requests::uuid.eq(uuid))
@@ -111,7 +122,7 @@ impl AuthRequest {
 }}
 }
-pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+pub async fn find_by_uuid_and_user(uuid: &AuthRequestId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
     db_run! {conn: {
         auth_requests::table
             .filter(auth_requests::uuid.eq(uuid))
@@ -122,7 +133,7 @@ impl AuthRequest {
 }}
 }
-pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
     db_run! {conn: {
         auth_requests::table
             .filter(auth_requests::user_uuid.eq(user_uuid))
@@ -130,6 +141,21 @@ impl AuthRequest {
 }}
 }
+pub async fn find_by_user_and_requested_device(
+    user_uuid: &UserId,
+    device_uuid: &DeviceId,
+    conn: &mut DbConn,
+) -> Option<Self> {
+    db_run! {conn: {
+        auth_requests::table
+            .filter(auth_requests::user_uuid.eq(user_uuid))
+            .filter(auth_requests::request_device_identifier.eq(device_uuid))
+            .filter(auth_requests::approved.is_null())
+            .order_by(auth_requests::creation_date.desc())
+            .first::<AuthRequestDb>(conn).ok().from_db()
+    }}
+}
 pub async fn find_created_before(dt: &NaiveDateTime, conn: &mut DbConn) -> Vec<Self> {
     db_run! {conn: {
         auth_requests::table
@@ -157,3 +183,21 @@ impl AuthRequest {
 }
 }
 }
+#[derive(
+    Clone,
+    Debug,
+    AsRef,
+    Deref,
+    DieselNewType,
+    Display,
+    From,
+    FromForm,
+    Hash,
+    PartialEq,
+    Eq,
+    Serialize,
+    Deserialize,
+    UuidFromParam,
+)]
+pub struct AuthRequestId(String);


@@ -1,13 +1,15 @@
 use crate::util::LowerCase;
 use crate::CONFIG;
 use chrono::{NaiveDateTime, TimeDelta, Utc};
+use derive_more::{AsRef, Deref, Display, From};
 use serde_json::Value;
 use super::{
-    Attachment, CollectionCipher, Favorite, FolderCipher, Group, User, UserOrgStatus, UserOrgType, UserOrganization,
+    Attachment, CollectionCipher, CollectionId, Favorite, FolderCipher, FolderId, Group, Membership, MembershipStatus,
+    MembershipType, OrganizationId, User, UserId,
 };
 use crate::api::core::{CipherData, CipherSyncData, CipherSyncType};
+use macros::UuidFromParam;
 use std::borrow::Cow;
@@ -17,12 +19,12 @@ db_object! {
 #[diesel(treat_none_as_null = true)]
 #[diesel(primary_key(uuid))]
 pub struct Cipher {
-    pub uuid: String,
+    pub uuid: CipherId,
     pub created_at: NaiveDateTime,
     pub updated_at: NaiveDateTime,
-    pub user_uuid: Option<String>,
-    pub organization_uuid: Option<String>,
+    pub user_uuid: Option<UserId>,
+    pub organization_uuid: Option<OrganizationId>,
     pub key: Option<String>,
@@ -57,7 +59,7 @@ impl Cipher {
 let now = Utc::now().naive_utc();
 Self {
-    uuid: crate::util::get_uuid(),
+    uuid: CipherId(crate::util::get_uuid()),
     created_at: now,
     updated_at: now,
@@ -83,7 +85,7 @@ impl Cipher {
 let mut validation_errors = serde_json::Map::new();
 let max_note_size = CONFIG._max_note_size();
 let max_note_size_msg =
-    format!("The field Notes exceeds the maximum encrypted value length of {} characters.", &max_note_size);
+    format!("The field Notes exceeds the maximum encrypted value length of {max_note_size} characters.");
 for (index, cipher) in cipher_data.iter().enumerate() {
     // Validate the note size and if it is exceeded return a warning
     if let Some(note) = &cipher.notes {
@@ -135,12 +137,12 @@ impl Cipher {
 pub async fn to_json(
     &self,
     host: &str,
-    user_uuid: &str,
+    user_uuid: &UserId,
     cipher_sync_data: Option<&CipherSyncData>,
     sync_type: CipherSyncType,
     conn: &mut DbConn,
 ) -> Value {
-    use crate::util::format_date;
+    use crate::util::{format_date, validate_and_format_date};
     let mut attachments_json: Value = Value::Null;
     if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(cipher_sync_data) = cipher_sync_data { if let Some(cipher_sync_data) = cipher_sync_data {
@@ -156,16 +158,16 @@ impl Cipher {
         // We don't need these values at all for Organizational syncs
         // Skip any other database calls if this is the case and just return false.
-        let (read_only, hide_passwords) = if sync_type == CipherSyncType::User {
+        let (read_only, hide_passwords, _) = if sync_type == CipherSyncType::User {
             match self.get_access_restrictions(user_uuid, cipher_sync_data, conn).await {
-                Some((ro, hp)) => (ro, hp),
+                Some((ro, hp, mn)) => (ro, hp, mn),
                 None => {
                     error!("Cipher ownership assertion failure");
-                    (true, true)
+                    (true, true, false)
                 }
             }
         } else {
-            (false, false)
+            (false, false, false)
         };
         let fields_json: Vec<_> = self
@@ -218,7 +220,7 @@ impl Cipher {
             })
             .map(|mut d| match d.get("lastUsedDate").and_then(|l| l.as_str()) {
                 Some(l) => {
-                    d["lastUsedDate"] = json!(crate::util::validate_and_format_date(l));
+                    d["lastUsedDate"] = json!(validate_and_format_date(l));
                     d
                 }
                 _ => {
@@ -241,12 +243,28 @@ impl Cipher {
         // NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
         // Set the first element of the Uris array as Uri, this is needed several (mobile) clients.
         if self.atype == 1 {
-            if type_data_json["uris"].is_array() {
-                let uri = type_data_json["uris"][0]["uri"].clone();
-                type_data_json["uri"] = uri;
-            } else {
-                // Upstream always has an Uri key/value
-                type_data_json["uri"] = Value::Null;
-            }
+            // Upstream always has an `uri` key/value
+            type_data_json["uri"] = Value::Null;
+            if let Some(uris) = type_data_json["uris"].as_array_mut() {
+                if !uris.is_empty() {
+                    // Fix uri match values first, they are only allowed to be a number or null
+                    // If it is a string, convert it to an int or null if that fails
+                    for uri in &mut *uris {
+                        if uri["match"].is_string() {
+                            let match_value = match uri["match"].as_str().unwrap_or_default().parse::<u8>() {
+                                Ok(n) => json!(n),
+                                _ => Value::Null,
+                            };
+                            uri["match"] = match_value;
+                        }
+                    }
+                    type_data_json["uri"] = uris[0]["uri"].clone();
+                }
+            }
+            // Check if `passwordRevisionDate` is a valid date, else convert it
+            if let Some(pw_revision) = type_data_json["passwordRevisionDate"].as_str() {
+                type_data_json["passwordRevisionDate"] = json!(validate_and_format_date(pw_revision));
+            }
         }
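The new code coerces a string `match` value to an integer, falling back to JSON null when parsing fails, since clients are only allowed to send a number or null here. Stripped of serde_json, the core conversion is just `str::parse::<u8>()` with `ok()`:

```rust
// Sketch of the uri "match" normalization: a string value is parsed to
// u8, and anything non-numeric collapses to None (serialized as null).
fn normalize_match(raw: &str) -> Option<u8> {
    raw.parse::<u8>().ok()
}

fn main() {
    assert_eq!(normalize_match("0"), Some(0));
    assert_eq!(normalize_match("3"), Some(3));
    assert_eq!(normalize_match("domain"), None); // not numeric -> null
    assert_eq!(normalize_match("-1"), None);     // u8 cannot be negative -> null
    println!("ok");
}
```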
@@ -261,6 +279,19 @@ impl Cipher {
             }
         }
+        // Fix invalid SSH Entries
+        // This breaks at least the native mobile client if invalid
+        // The only way to fix this is by setting type_data_json to `null`
+        // Opening this ssh-key in the mobile client will probably crash the client, but you can edit, save and afterwards delete it
+        if self.atype == 5
+            && (type_data_json["keyFingerprint"].as_str().is_none_or(|v| v.is_empty())
+                || type_data_json["privateKey"].as_str().is_none_or(|v| v.is_empty())
+                || type_data_json["publicKey"].as_str().is_none_or(|v| v.is_empty()))
+        {
+            warn!("Error parsing ssh-key, mandatory fields are invalid for {}", self.uuid);
+            type_data_json = Value::Null;
+        }
         // Clone the type_data and add some default value.
         let mut data_json = type_data_json.clone();
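The SSH-key check nulls out `type_data_json` whenever any of the three mandatory string fields is missing or empty. `Option::is_none_or` (stable since Rust 1.82) has the same truth table as `map_or(true, …)`, which the sketch below uses so it also compiles on older toolchains:

```rust
// A field is invalid when it is absent entirely, or present but empty;
// the real code applies this to keyFingerprint, privateKey and publicKey.
fn field_invalid(v: Option<&str>) -> bool {
    v.map_or(true, |s| s.is_empty()) // same as v.is_none_or(|s| s.is_empty())
}

fn main() {
    assert!(field_invalid(None));
    assert!(field_invalid(Some("")));
    assert!(!field_invalid(Some("SHA256:abcdef")));
    println!("ok");
}
```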
@@ -278,7 +309,7 @@ impl Cipher {
                 Cow::from(Vec::with_capacity(0))
             }
         } else {
-            Cow::from(self.get_admin_collections(user_uuid.to_string(), conn).await)
+            Cow::from(self.get_admin_collections(user_uuid.clone(), conn).await)
         };
         // There are three types of cipher response models in upstream
@@ -287,7 +318,7 @@ impl Cipher {
     // supports the "cipherDetails" type, though it seems like the
     // Bitwarden clients will ignore extra fields.
     //
-    // Ref: https://github.com/bitwarden/server/blob/master/src/Core/Models/Api/Response/CipherResponseModel.cs
+    // Ref: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/Vault/Models/Response/CipherResponseModel.cs#L14
     let mut json_object = json!({
         "object": "cipherDetails",
         "id": self.uuid,
@@ -327,7 +358,7 @@ impl Cipher {
         // Skip adding these fields in that case
         if sync_type == CipherSyncType::User {
             json_object["folderId"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
-                cipher_sync_data.cipher_folders.get(&self.uuid).map(|c| c.to_string())
+                cipher_sync_data.cipher_folders.get(&self.uuid).cloned()
             } else {
                 self.get_folder_uuid(user_uuid, conn).await
             });
@@ -356,7 +387,7 @@ impl Cipher {
         json_object
     }
-    pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
+    pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {
         let mut user_uuids = Vec::new();
         match self.user_uuid {
             Some(ref user_uuid) => {
@@ -367,17 +398,16 @@ impl Cipher {
             // Belongs to Organization, need to update affected users
             if let Some(ref org_uuid) = self.organization_uuid {
                 // users having access to the collection
-                let mut collection_users =
-                    UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
+                let mut collection_users = Membership::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
                 if CONFIG.org_groups_enabled() {
                     // members of a group having access to the collection
                     let group_users =
-                        UserOrganization::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
+                        Membership::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
                     collection_users.extend(group_users);
                 }
-                for user_org in collection_users {
-                    User::update_uuid_revision(&user_org.user_uuid, conn).await;
-                    user_uuids.push(user_org.user_uuid.clone())
+                for member in collection_users {
+                    User::update_uuid_revision(&member.user_uuid, conn).await;
+                    user_uuids.push(member.user_uuid.clone())
                 }
             }
         }
@@ -435,7 +465,7 @@ impl Cipher {
         }}
     }
-    pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
         // TODO: Optimize this by executing a DELETE directly on the database, instead of first fetching.
         for cipher in Self::find_by_org(org_uuid, conn).await {
             cipher.delete(conn).await?;
@@ -443,7 +473,7 @@ impl Cipher {
         Ok(())
     }
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         for cipher in Self::find_owned_by_user(user_uuid, conn).await {
             cipher.delete(conn).await?;
         }
@@ -461,52 +491,59 @@ impl Cipher {
         }
     }
-    pub async fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn move_to_folder(
+        &self,
+        folder_uuid: Option<FolderId>,
+        user_uuid: &UserId,
+        conn: &mut DbConn,
+    ) -> EmptyResult {
         User::update_uuid_revision(user_uuid, conn).await;
         match (self.get_folder_uuid(user_uuid, conn).await, folder_uuid) {
             // No changes
             (None, None) => Ok(()),
-            (Some(ref old), Some(ref new)) if old == new => Ok(()),
+            (Some(ref old_folder), Some(ref new_folder)) if old_folder == new_folder => Ok(()),
             // Add to folder
-            (None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(conn).await,
+            (None, Some(new_folder)) => FolderCipher::new(new_folder, self.uuid.clone()).save(conn).await,
             // Remove from folder
-            (Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
-                Some(old) => old.delete(conn).await,
-                None => err!("Couldn't move from previous folder"),
-            },
+            (Some(old_folder), None) => {
+                match FolderCipher::find_by_folder_and_cipher(&old_folder, &self.uuid, conn).await {
+                    Some(old_folder) => old_folder.delete(conn).await,
+                    None => err!("Couldn't move from previous folder"),
+                }
+            }
             // Move to another folder
-            (Some(old), Some(new)) => {
-                if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
-                    old.delete(conn).await?;
+            (Some(old_folder), Some(new_folder)) => {
+                if let Some(old_folder) = FolderCipher::find_by_folder_and_cipher(&old_folder, &self.uuid, conn).await {
+                    old_folder.delete(conn).await?;
                 }
-                FolderCipher::new(&new, &self.uuid).save(conn).await
+                FolderCipher::new(new_folder, self.uuid.clone()).save(conn).await
             }
         }
     }
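The rewritten `move_to_folder` is a four-way transition on (current folder, requested folder). Stripped of the database calls, its decision table can be modeled on plain `Option`s (the enum below is illustrative, not from the source):

```rust
// The four folder transitions move_to_folder distinguishes.
#[derive(Debug, PartialEq)]
enum Transition {
    NoChange,
    Add,
    Remove,
    Move,
}

fn plan(old: Option<&str>, new: Option<&str>) -> Transition {
    match (old, new) {
        (None, None) => Transition::NoChange,
        (Some(o), Some(n)) if o == n => Transition::NoChange, // same folder
        (None, Some(_)) => Transition::Add,
        (Some(_), None) => Transition::Remove,
        (Some(_), Some(_)) => Transition::Move,
    }
}

fn main() {
    assert_eq!(plan(None, None), Transition::NoChange);
    assert_eq!(plan(Some("a"), Some("a")), Transition::NoChange);
    assert_eq!(plan(None, Some("a")), Transition::Add);
    assert_eq!(plan(Some("a"), None), Transition::Remove);
    assert_eq!(plan(Some("a"), Some("b")), Transition::Move);
    println!("ok");
}
```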
     /// Returns whether this cipher is directly owned by the user.
-    pub fn is_owned_by_user(&self, user_uuid: &str) -> bool {
+    pub fn is_owned_by_user(&self, user_uuid: &UserId) -> bool {
         self.user_uuid.is_some() && self.user_uuid.as_ref().unwrap() == user_uuid
     }
     /// Returns whether this cipher is owned by an org in which the user has full access.
     async fn is_in_full_access_org(
         &self,
-        user_uuid: &str,
+        user_uuid: &UserId,
         cipher_sync_data: Option<&CipherSyncData>,
         conn: &mut DbConn,
     ) -> bool {
         if let Some(ref org_uuid) = self.organization_uuid {
             if let Some(cipher_sync_data) = cipher_sync_data {
-                if let Some(cached_user_org) = cipher_sync_data.user_organizations.get(org_uuid) {
-                    return cached_user_org.has_full_access();
+                if let Some(cached_member) = cipher_sync_data.members.get(org_uuid) {
+                    return cached_member.has_full_access();
                 }
-            } else if let Some(user_org) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
-                return user_org.has_full_access();
+            } else if let Some(member) = Membership::find_by_user_and_org(user_uuid, org_uuid, conn).await {
+                return member.has_full_access();
             }
         }
         false
@@ -515,7 +552,7 @@ impl Cipher {
     /// Returns whether this cipher is owned by an group in which the user has full access.
     async fn is_in_full_access_group(
         &self,
-        user_uuid: &str,
+        user_uuid: &UserId,
         cipher_sync_data: Option<&CipherSyncData>,
         conn: &mut DbConn,
     ) -> bool {
@@ -535,14 +572,14 @@ impl Cipher {
     /// Returns the user's access restrictions to this cipher. A return value
     /// of None means that this cipher does not belong to the user, and is
     /// not in any collection the user has access to. Otherwise, the user has
-    /// access to this cipher, and Some(read_only, hide_passwords) represents
+    /// access to this cipher, and Some(read_only, hide_passwords, manage) represents
     /// the access restrictions.
     pub async fn get_access_restrictions(
         &self,
-        user_uuid: &str,
+        user_uuid: &UserId,
         cipher_sync_data: Option<&CipherSyncData>,
         conn: &mut DbConn,
-    ) -> Option<(bool, bool)> {
+    ) -> Option<(bool, bool, bool)> {
         // Check whether this cipher is directly owned by the user, or is in
         // a collection that the user has full access to. If so, there are no
         // access restrictions.
@@ -550,21 +587,21 @@ impl Cipher {
             || self.is_in_full_access_org(user_uuid, cipher_sync_data, conn).await
             || self.is_in_full_access_group(user_uuid, cipher_sync_data, conn).await
         {
-            return Some((false, false));
+            return Some((false, false, true));
         }
         let rows = if let Some(cipher_sync_data) = cipher_sync_data {
-            let mut rows: Vec<(bool, bool)> = Vec::new();
+            let mut rows: Vec<(bool, bool, bool)> = Vec::new();
             if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
                 for collection in collections {
                     //User permissions
-                    if let Some(uc) = cipher_sync_data.user_collections.get(collection) {
-                        rows.push((uc.read_only, uc.hide_passwords));
+                    if let Some(cu) = cipher_sync_data.user_collections.get(collection) {
+                        rows.push((cu.read_only, cu.hide_passwords, cu.manage));
                     }
                     //Group permissions
                     if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
-                        rows.push((cg.read_only, cg.hide_passwords));
+                        rows.push((cg.read_only, cg.hide_passwords, cg.manage));
                     }
                 }
             }
@@ -591,15 +628,21 @@ impl Cipher {
         // booleans and this behavior isn't portable anyway.
         let mut read_only = true;
         let mut hide_passwords = true;
-        for (ro, hp) in rows.iter() {
+        let mut manage = false;
+        for (ro, hp, mn) in rows.iter() {
             read_only &= ro;
             hide_passwords &= hp;
+            manage &= mn;
         }
-        Some((read_only, hide_passwords))
+        Some((read_only, hide_passwords, manage))
     }
-    async fn get_user_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
+    async fn get_user_collections_access_flags(
+        &self,
+        user_uuid: &UserId,
+        conn: &mut DbConn,
+    ) -> Vec<(bool, bool, bool)> {
         db_run! {conn: {
             // Check whether this cipher is in any collections accessible to the
             // user. If so, retrieve the access flags for each collection.
@@ -610,13 +653,17 @@ impl Cipher {
             .inner_join(users_collections::table.on(
                 ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
                     .and(users_collections::user_uuid.eq(user_uuid))))
-            .select((users_collections::read_only, users_collections::hide_passwords))
-            .load::<(bool, bool)>(conn)
+            .select((users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
+            .load::<(bool, bool, bool)>(conn)
             .expect("Error getting user access restrictions")
         }}
     }
-    async fn get_group_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
+    async fn get_group_collections_access_flags(
+        &self,
+        user_uuid: &UserId,
+        conn: &mut DbConn,
+    ) -> Vec<(bool, bool, bool)> {
         if !CONFIG.org_groups_enabled() {
             return Vec::new();
         }
@@ -636,49 +683,49 @@ impl Cipher {
                 users_organizations::uuid.eq(groups_users::users_organizations_uuid)
             ))
             .filter(users_organizations::user_uuid.eq(user_uuid))
-            .select((collections_groups::read_only, collections_groups::hide_passwords))
-            .load::<(bool, bool)>(conn)
+            .select((collections_groups::read_only, collections_groups::hide_passwords, collections_groups::manage))
+            .load::<(bool, bool, bool)>(conn)
             .expect("Error getting group access restrictions")
         }}
     }
-    pub async fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_write_accessible_to_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
         match self.get_access_restrictions(user_uuid, None, conn).await {
-            Some((read_only, _hide_passwords)) => !read_only,
+            Some((read_only, _hide_passwords, manage)) => !read_only || manage,
             None => false,
         }
     }
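Previously write access required only `!read_only`; with this change a `manage` grant also confers write access even when the collection row is read-only. The new predicate, isolated from the database plumbing:

```rust
// The new write check: a member may write when the cipher is not
// read-only, or when they hold manage on a containing collection.
fn is_write_accessible(restrictions: Option<(bool, bool, bool)>) -> bool {
    match restrictions {
        Some((read_only, _hide_passwords, manage)) => !read_only || manage,
        None => false, // no access to the cipher at all
    }
}

fn main() {
    assert!(is_write_accessible(Some((false, false, false)))); // writable
    assert!(is_write_accessible(Some((true, true, true))));    // read-only, but manage
    assert!(!is_write_accessible(Some((true, true, false))));  // read-only
    assert!(!is_write_accessible(None));                       // not accessible
    println!("ok");
}
```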
-    pub async fn is_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_accessible_to_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
         self.get_access_restrictions(user_uuid, None, conn).await.is_some()
     }
     // Returns whether this cipher is a favorite of the specified user.
-    pub async fn is_favorite(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_favorite(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
         Favorite::is_favorite(&self.uuid, user_uuid, conn).await
     }
     // Sets whether this cipher is a favorite of the specified user.
-    pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         match favorite {
             None => Ok(()), // No change requested.
             Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn).await,
         }
     }
-    pub async fn get_folder_uuid(&self, user_uuid: &str, conn: &mut DbConn) -> Option<String> {
+    pub async fn get_folder_uuid(&self, user_uuid: &UserId, conn: &mut DbConn) -> Option<FolderId> {
         db_run! {conn: {
             folders_ciphers::table
                 .inner_join(folders::table)
                 .filter(folders::user_uuid.eq(&user_uuid))
                 .filter(folders_ciphers::cipher_uuid.eq(&self.uuid))
                 .select(folders_ciphers::folder_uuid)
-                .first::<String>(conn)
+                .first::<FolderId>(conn)
                 .ok()
         }}
     }
-    pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid(uuid: &CipherId, conn: &mut DbConn) -> Option<Self> {
         db_run! {conn: {
             ciphers::table
                 .filter(ciphers::uuid.eq(uuid))
@@ -688,7 +735,11 @@ impl Cipher {
         }}
     }
-    pub async fn find_by_uuid_and_org(cipher_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid_and_org(
+        cipher_uuid: &CipherId,
+        org_uuid: &OrganizationId,
+        conn: &mut DbConn,
+    ) -> Option<Self> {
         db_run! {conn: {
             ciphers::table
                 .filter(ciphers::uuid.eq(cipher_uuid))
@@ -711,7 +762,7 @@ impl Cipher {
     // true, then the non-interesting ciphers will not be returned. As a
     // result, those ciphers will not appear in "My Vault" for the org
     // owner/admin, but they can still be accessed via the org vault view.
-    pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user(user_uuid: &UserId, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
         if CONFIG.org_groups_enabled() {
             db_run! {conn: {
                 let mut query = ciphers::table
@@ -721,7 +772,7 @@ impl Cipher {
                     .left_join(users_organizations::table.on(
                         ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
                             .and(users_organizations::user_uuid.eq(user_uuid))
-                            .and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
+                            .and(users_organizations::status.eq(MembershipStatus::Confirmed as i32))
                     ))
                     .left_join(users_collections::table.on(
                         ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
@@ -748,7 +799,7 @@ impl Cipher {
                 if !visible_only {
                     query = query.or_filter(
-                        users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
+                        users_organizations::atype.le(MembershipType::Admin as i32) // Org admin/owner
                     );
                 }
@@ -766,7 +817,7 @@ impl Cipher {
                 .left_join(users_organizations::table.on(
                     ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
                         .and(users_organizations::user_uuid.eq(user_uuid))
-                        .and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
+                        .and(users_organizations::status.eq(MembershipStatus::Confirmed as i32))
                 ))
                 .left_join(users_collections::table.on(
                     ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
@@ -780,7 +831,7 @@ impl Cipher {
                 if !visible_only {
                     query = query.or_filter(
-                        users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
+                        users_organizations::atype.le(MembershipType::Admin as i32) // Org admin/owner
                     );
                 }
@@ -793,12 +844,12 @@ impl Cipher {
     }
     // Find all ciphers visible to the specified user.
-    pub async fn find_by_user_visible(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user_visible(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         Self::find_by_user(user_uuid, true, conn).await
     }
     // Find all ciphers directly owned by the specified user.
-    pub async fn find_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_owned_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! {conn: {
             ciphers::table
                 .filter(
@@ -809,7 +860,7 @@ impl Cipher {
         }}
     }
-    pub async fn count_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
+    pub async fn count_owned_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
         db_run! {conn: {
             ciphers::table
                 .filter(ciphers::user_uuid.eq(user_uuid))
@@ -820,7 +871,7 @@ impl Cipher {
         }}
     }
-    pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
         db_run! {conn: {
             ciphers::table
                 .filter(ciphers::organization_uuid.eq(org_uuid))
@@ -828,7 +879,7 @@ impl Cipher {
         }}
     }
-    pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
+    pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
         db_run! {conn: {
             ciphers::table
                 .filter(ciphers::organization_uuid.eq(org_uuid))
@@ -839,7 +890,7 @@ impl Cipher {
         }}
     }
-    pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> Vec<Self> {
         db_run! {conn: {
             folders_ciphers::table.inner_join(ciphers::table)
                 .filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -857,7 +908,7 @@ impl Cipher {
         }}
     }
-    pub async fn get_collections(&self, user_id: String, conn: &mut DbConn) -> Vec<String> {
+    pub async fn get_collections(&self, user_uuid: UserId, conn: &mut DbConn) -> Vec<CollectionId> {
         if CONFIG.org_groups_enabled() {
             db_run! {conn: {
                 ciphers_collections::table
@@ -867,11 +918,11 @@ impl Cipher {
                     ))
                     .left_join(users_organizations::table.on(
                         users_organizations::org_uuid.eq(collections::org_uuid)
-                            .and(users_organizations::user_uuid.eq(user_id.clone()))
+                            .and(users_organizations::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(users_collections::table.on(
                         users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
-                            .and(users_collections::user_uuid.eq(user_id.clone()))
+                            .and(users_collections::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(groups_users::table.on(
                         groups_users::users_organizations_uuid.eq(users_organizations::uuid)
@@ -882,14 +933,14 @@ impl Cipher {
                         .and(collections_groups::groups_uuid.eq(groups::uuid))
                     ))
                     .filter(users_organizations::access_all.eq(true) // User has access all
-                        .or(users_collections::user_uuid.eq(user_id) // User has access to collection
+                        .or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
                             .and(users_collections::read_only.eq(false)))
                         .or(groups::access_all.eq(true)) // Access via groups
                         .or(collections_groups::collections_uuid.is_not_null() // Access via groups
                             .and(collections_groups::read_only.eq(false)))
                     )
                     .select(ciphers_collections::collection_uuid)
-                    .load::<String>(conn).unwrap_or_default()
+                    .load::<CollectionId>(conn).unwrap_or_default()
             }}
         } else {
             db_run! {conn: {
@@ -900,23 +951,23 @@ impl Cipher {
                     ))
                     .inner_join(users_organizations::table.on(
                         users_organizations::org_uuid.eq(collections::org_uuid)
-                            .and(users_organizations::user_uuid.eq(user_id.clone()))
+                            .and(users_organizations::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(users_collections::table.on(
                         users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
-                            .and(users_collections::user_uuid.eq(user_id.clone()))
+                            .and(users_collections::user_uuid.eq(user_uuid.clone()))
                     ))
                     .filter(users_organizations::access_all.eq(true) // User has access all
-                        .or(users_collections::user_uuid.eq(user_id) // User has access to collection
+                        .or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
                             .and(users_collections::read_only.eq(false)))
                     )
                     .select(ciphers_collections::collection_uuid)
-                    .load::<String>(conn).unwrap_or_default()
+                    .load::<CollectionId>(conn).unwrap_or_default()
             }}
         }
     }
-    pub async fn get_admin_collections(&self, user_id: String, conn: &mut DbConn) -> Vec<String> {
+    pub async fn get_admin_collections(&self, user_uuid: UserId, conn: &mut DbConn) -> Vec<CollectionId> {
         if CONFIG.org_groups_enabled() {
             db_run! {conn: {
                 ciphers_collections::table
@@ -926,11 +977,11 @@ impl Cipher {
                     ))
                     .left_join(users_organizations::table.on(
                         users_organizations::org_uuid.eq(collections::org_uuid)
-                            .and(users_organizations::user_uuid.eq(user_id.clone()))
+                            .and(users_organizations::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(users_collections::table.on(
                         users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
-                            .and(users_collections::user_uuid.eq(user_id.clone()))
+                            .and(users_collections::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(groups_users::table.on(
                         groups_users::users_organizations_uuid.eq(users_organizations::uuid)
@@ -941,15 +992,15 @@ impl Cipher {
                         .and(collections_groups::groups_uuid.eq(groups::uuid))
                     ))
                     .filter(users_organizations::access_all.eq(true) // User has access all
-                        .or(users_collections::user_uuid.eq(user_id) // User has access to collection
+                        .or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
                             .and(users_collections::read_only.eq(false)))
                         .or(groups::access_all.eq(true)) // Access via groups
                         .or(collections_groups::collections_uuid.is_not_null() // Access via groups
                             .and(collections_groups::read_only.eq(false)))
-                        .or(users_organizations::atype.le(UserOrgType::Admin as i32)) // User is admin or owner
+                        .or(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
                     )
                     .select(ciphers_collections::collection_uuid)
-                    .load::<String>(conn).unwrap_or_default()
+                    .load::<CollectionId>(conn).unwrap_or_default()
             }}
         } else {
             db_run! {conn: {
@@ -960,26 +1011,29 @@ impl Cipher {
                     ))
                     .inner_join(users_organizations::table.on(
                         users_organizations::org_uuid.eq(collections::org_uuid)
-                            .and(users_organizations::user_uuid.eq(user_id.clone()))
+                            .and(users_organizations::user_uuid.eq(user_uuid.clone()))
                     ))
                     .left_join(users_collections::table.on(
                         users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
-                            .and(users_collections::user_uuid.eq(user_id.clone()))
+                            .and(users_collections::user_uuid.eq(user_uuid.clone()))
                     ))
                     .filter(users_organizations::access_all.eq(true) // User has access all
-                        .or(users_collections::user_uuid.eq(user_id) // User has access to collection
+                        .or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
                             .and(users_collections::read_only.eq(false)))
-                        .or(users_organizations::atype.le(UserOrgType::Admin as i32)) // User is admin or owner
+                        .or(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
                     )
                     .select(ciphers_collections::collection_uuid)
-                    .load::<String>(conn).unwrap_or_default()
+                    .load::<CollectionId>(conn).unwrap_or_default()
             }}
         }
     }
/// Return a Vec with (cipher_uuid, collection_uuid) /// Return a Vec with (cipher_uuid, collection_uuid)
/// This is used during a full sync so we only need one query for all collections accessible. /// This is used during a full sync so we only need one query for all collections accessible.
pub async fn get_collections_with_cipher_by_user(user_id: String, conn: &mut DbConn) -> Vec<(String, String)> { pub async fn get_collections_with_cipher_by_user(
user_uuid: UserId,
conn: &mut DbConn,
) -> Vec<(CipherId, CollectionId)> {
db_run! {conn: { db_run! {conn: {
ciphers_collections::table ciphers_collections::table
.inner_join(collections::table.on( .inner_join(collections::table.on(
@@ -987,12 +1041,12 @@ impl Cipher {
)) ))
.inner_join(users_organizations::table.on( .inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and( users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id.clone()) users_organizations::user_uuid.eq(user_uuid.clone())
) )
)) ))
.left_join(users_collections::table.on( .left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and( users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id.clone()) users_collections::user_uuid.eq(user_uuid.clone())
) )
)) ))
.left_join(groups_users::table.on( .left_join(groups_users::table.on(
@@ -1006,14 +1060,32 @@ impl Cipher {
collections_groups::groups_uuid.eq(groups::uuid) collections_groups::groups_uuid.eq(groups::uuid)
) )
)) ))
.or_filter(users_collections::user_uuid.eq(user_id)) // User has access to collection .or_filter(users_collections::user_uuid.eq(user_uuid)) // User has access to collection
.or_filter(users_organizations::access_all.eq(true)) // User has access all .or_filter(users_organizations::access_all.eq(true)) // User has access all
.or_filter(users_organizations::atype.le(UserOrgType::Admin as i32)) // User is admin or owner .or_filter(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
.or_filter(groups::access_all.eq(true)) //Access via group .or_filter(groups::access_all.eq(true)) //Access via group
.or_filter(collections_groups::collections_uuid.is_not_null()) //Access via group .or_filter(collections_groups::collections_uuid.is_not_null()) //Access via group
.select(ciphers_collections::all_columns) .select(ciphers_collections::all_columns)
.distinct() .distinct()
.load::<(String, String)>(conn).unwrap_or_default() .load::<(CipherId, CollectionId)>(conn).unwrap_or_default()
}} }}
} }
} }
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct CipherId(String);
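The `CipherId` struct above is a typed wrapper around a plain UUID string; most of its derives (`DieselNewType`, `FromForm`, `UuidFromParam`) come from the project's macros and the derive_more crate. As a minimal sketch of why the pattern is useful, the standalone version below keeps only the std-library parts (the `new` constructor is illustrative, not part of the codebase):

```rust
use std::fmt;

// A newtype over String: a CipherId can no longer be passed where a
// CollectionId or UserId is expected, so ID mix-ups fail at compile time.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct CipherId(String);

impl CipherId {
    pub fn new(uuid: String) -> Self {
        Self(uuid)
    }
}

impl fmt::Display for CipherId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&self.0)
    }
}

impl AsRef<str> for CipherId {
    fn as_ref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let id = CipherId::new("a1b2c3".into());
    assert_eq!(id.to_string(), "a1b2c3");
    println!("{id}");
}
```

In the real code the remaining derives restore the ergonomics of a plain `String` (Diesel serialization, Rocket parameter parsing) without giving up the type distinction.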


@@ -1,15 +1,21 @@
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{
CipherId, CollectionGroup, GroupUser, Membership, MembershipId, MembershipStatus, MembershipType, OrganizationId,
User, UserId,
};
use crate::CONFIG;
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = collections)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Collection {
pub uuid: CollectionId,
pub org_uuid: OrganizationId,
pub name: String,
pub external_id: Option<String>,
}
@@ -18,26 +24,27 @@ db_object! {
#[diesel(table_name = users_collections)]
#[diesel(primary_key(user_uuid, collection_uuid))]
pub struct CollectionUser {
pub user_uuid: UserId,
pub collection_uuid: CollectionId,
pub read_only: bool,
pub hide_passwords: bool,
pub manage: bool,
}
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = ciphers_collections)]
#[diesel(primary_key(cipher_uuid, collection_uuid))]
pub struct CollectionCipher {
pub cipher_uuid: CipherId,
pub collection_uuid: CollectionId,
}
}
/// Local methods
impl Collection {
pub fn new(org_uuid: OrganizationId, name: String, external_id: Option<String>) -> Self {
let mut new_model = Self {
uuid: CollectionId(crate::util::get_uuid()),
org_uuid,
name,
external_id: None,
@@ -74,22 +81,30 @@ impl Collection {
pub async fn to_json_details(
&self,
user_uuid: &UserId,
cipher_sync_data: Option<&crate::api::core::CipherSyncData>,
conn: &mut DbConn,
) -> Value {
let (read_only, hide_passwords, manage) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.members.get(&self.org_uuid) {
// Only for Manager types Bitwarden returns true for the manage option
// Owners and Admins always have true. Users are not able to have full access
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(m) => {
// Only let a manager manage collections when they have full read/write access
let is_manager = m.atype == MembershipType::Manager;
if let Some(cu) = cipher_sync_data.user_collections.get(&self.uuid) {
(
cu.read_only,
cu.hide_passwords,
cu.manage || (is_manager && !cu.read_only && !cu.hide_passwords),
)
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(&self.uuid) {
(
cg.read_only,
cg.hide_passwords,
cg.manage || (is_manager && !cg.read_only && !cg.hide_passwords),
)
} else {
(false, false, false)
}
@@ -97,19 +112,16 @@ impl Collection {
_ => (true, true, false),
}
} else {
match Membership::find_confirmed_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(_) if self.is_manageable_by_user(user_uuid, conn).await => (false, false, true),
Some(m) => {
let is_manager = m.atype == MembershipType::Manager;
let read_only = !self.is_writable_by_user(user_uuid, conn).await;
let hide_passwords = self.hide_passwords_for_user(user_uuid, conn).await;
(read_only, hide_passwords, is_manager && !read_only && !hide_passwords)
}
_ => (true, true, false),
}
};
@@ -117,17 +129,17 @@ impl Collection {
json_object["object"] = json!("collectionDetails");
json_object["readOnly"] = json!(read_only);
json_object["hidePasswords"] = json!(hide_passwords);
json_object["manage"] = json!(manage);
json_object
}
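The `(read_only, hide_passwords, manage)` triple that `to_json_details` derives from the cached sync data can be condensed into a pure function. The sketch below is an illustrative simplification, not code from the repository (the name `access_triple` and the boolean parameters are assumptions): full access wins first, then an explicit per-collection `manage` flag, and a plain Manager is only promoted to `manage` when the collection grants full read/write visibility.

```rust
// Simplified model of the manage-flag precedence in to_json_details.
fn access_triple(
    has_full_access: bool,
    is_at_least_manager: bool, // Manager, Admin or Owner
    is_manager: bool,          // exactly Manager
    collection: Option<(bool, bool, bool)>, // (read_only, hide_passwords, manage)
) -> (bool, bool, bool) {
    match collection {
        // Full-access members see everything; manage depends on their role.
        _ if has_full_access => (false, false, is_at_least_manager),
        // Direct or group access entry: the explicit manage flag wins,
        // and a Manager gets manage when nothing is restricted.
        Some((read_only, hide_passwords, manage)) => (
            read_only,
            hide_passwords,
            manage || (is_manager && !read_only && !hide_passwords),
        ),
        // No access entry recorded: mirrors the (false, false, false)
        // fallback in the source.
        None => (false, false, false),
    }
}

fn main() {
    // An Admin/Owner with full access can always manage.
    assert_eq!(access_triple(true, true, false, None), (false, false, true));
    // A Manager with unrestricted read/write gets manage implicitly.
    assert_eq!(access_triple(false, true, true, Some((false, false, false))), (false, false, true));
    // The explicit manage flag overrides a read-only assignment.
    assert_eq!(access_triple(false, false, false, Some((true, false, true))), (true, false, true));
}
```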
pub async fn can_access_collection(member: &Membership, col_id: &CollectionId, conn: &mut DbConn) -> bool {
member.has_status(MembershipStatus::Confirmed)
&& (member.has_full_access()
|| CollectionUser::has_access_to_collection_by_user(col_id, &member.user_uuid, conn).await
|| (CONFIG.org_groups_enabled()
&& (GroupUser::has_full_access_by_member(&member.org_uuid, &member.uuid, conn).await
|| GroupUser::has_access_to_collection_by_member(col_id, &member.uuid, conn).await)))
}
}
@@ -185,7 +197,7 @@ impl Collection {
}}
}
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
for collection in Self::find_by_organization(org_uuid, conn).await {
collection.delete(conn).await?;
}
@@ -193,12 +205,12 @@ impl Collection {
}
pub async fn update_users_revision(&self, conn: &mut DbConn) {
for member in Membership::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).await.iter() {
User::update_uuid_revision(&member.user_uuid, conn).await;
}
}
pub async fn find_by_uuid(uuid: &CollectionId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -208,7 +220,7 @@ impl Collection {
}}
}
pub async fn find_by_user_uuid(user_uuid: UserId, conn: &mut DbConn) -> Vec<Self> {
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
@@ -234,7 +246,7 @@ impl Collection {
)
))
.filter(
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
@@ -265,7 +277,7 @@ impl Collection {
)
))
.filter(
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
@@ -279,15 +291,19 @@ impl Collection {
}
}
pub async fn find_by_organization_and_user_uuid(
org_uuid: &OrganizationId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid.to_owned(), conn)
.await
.into_iter()
.filter(|c| &c.org_uuid == org_uuid)
.collect()
}
pub async fn find_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -297,7 +313,7 @@ impl Collection {
}}
}
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -308,7 +324,11 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_org(
uuid: &CollectionId,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -320,7 +340,7 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_user(uuid: &CollectionId, user_uuid: UserId, conn: &mut DbConn) -> Option<Self> {
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
@@ -349,7 +369,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
@@ -378,7 +398,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
))
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
@@ -387,7 +407,7 @@ impl Collection {
}
}
pub async fn is_writable_by_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
if CONFIG.org_groups_enabled() {
db_run! { conn: {
@@ -411,7 +431,7 @@ impl Collection {
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
.and(collections_groups::collections_uuid.eq(collections::uuid))
))
.filter(users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
.or(users_organizations::access_all.eq(true)) // access_all via membership
.or(users_collections::collection_uuid.eq(&self.uuid) // write access given to collection
.and(users_collections::read_only.eq(false)))
@@ -436,7 +456,7 @@ impl Collection {
users_collections::collection_uuid.eq(collections::uuid)
.and(users_collections::user_uuid.eq(user_uuid))
))
.filter(users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
.or(users_organizations::access_all.eq(true)) // access_all via membership
.or(users_collections::collection_uuid.eq(&self.uuid) // write access given to collection
.and(users_collections::read_only.eq(false)))
@@ -449,7 +469,7 @@ impl Collection {
}
}
pub async fn hide_passwords_for_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
db_run! { conn: {
collections::table
@@ -478,7 +498,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(&self.uuid).and(users_collections::hide_passwords.eq(true)).or(// Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
@@ -494,11 +514,61 @@ impl Collection {
.unwrap_or(0) != 0
}}
}
pub async fn is_manageable_by_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(&self.uuid))
.filter(
users_collections::collection_uuid.eq(&self.uuid).and(users_collections::manage.eq(true)).or(// Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null().and(
collections_groups::manage.eq(true))
)
)
)
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
}
}
/// Database methods
impl CollectionUser {
pub async fn find_by_organization_and_user_uuid(
org_uuid: &OrganizationId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -511,24 +581,30 @@ impl CollectionUser {
}}
}
pub async fn find_by_organization_swap_user_uuid_with_member_uuid(
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Vec<CollectionMembership> {
let col_users = db_run! { conn: {
users_collections::table
.inner_join(collections::table.on(collections::uuid.eq(users_collections::collection_uuid)))
.filter(collections::org_uuid.eq(org_uuid))
.inner_join(users_organizations::table.on(users_organizations::user_uuid.eq(users_collections::user_uuid)))
.filter(users_organizations::org_uuid.eq(org_uuid))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}};
col_users.into_iter().map(|c| c.into()).collect()
}
pub async fn save(
user_uuid: &UserId,
collection_uuid: &CollectionId,
read_only: bool,
hide_passwords: bool,
manage: bool,
conn: &mut DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
@@ -541,6 +617,7 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
{
@@ -555,6 +632,7 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
.map_res("Error adding user to collection")
@@ -569,12 +647,14 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.on_conflict((users_collections::user_uuid, users_collections::collection_uuid))
.do_update()
.set((
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
.map_res("Error adding user to collection")
@@ -596,7 +676,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -607,24 +687,27 @@ impl CollectionUser {
}}
}
pub async fn find_by_org_and_coll_swap_user_uuid_with_member_uuid(
org_uuid: &OrganizationId,
collection_uuid: &CollectionId,
conn: &mut DbConn,
) -> Vec<CollectionMembership> {
let col_users = db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.filter(users_organizations::org_uuid.eq(org_uuid))
.inner_join(users_organizations::table.on(users_organizations::user_uuid.eq(users_collections::user_uuid)))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}};
col_users.into_iter().map(|c| c.into()).collect()
}
pub async fn find_by_collection_and_user(
collection_uuid: &CollectionId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
@@ -638,7 +721,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -649,7 +732,7 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
for collection in CollectionUser::find_by_collection(collection_uuid, conn).await.iter() {
User::update_uuid_revision(&collection.user_uuid, conn).await;
}
@@ -661,7 +744,11 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_user_and_org(
user_uuid: &UserId,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn).await;
db_run! { conn: {
@@ -677,14 +764,18 @@ impl CollectionUser {
}}
}
pub async fn has_access_to_collection_by_user(
col_id: &CollectionId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> bool {
Self::find_by_collection_and_user(col_id, user_uuid, conn).await.is_some()
}
}
/// Database methods /// Database methods
impl CollectionCipher { impl CollectionCipher {
pub async fn save(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult { pub async fn save(cipher_uuid: &CipherId, collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await; Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn: db_run! { conn:
@@ -714,7 +805,7 @@ impl CollectionCipher {
} }
} }
pub async fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult { pub async fn delete(cipher_uuid: &CipherId, collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await; Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn: { db_run! { conn: {
@@ -728,7 +819,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -736,7 +827,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
@@ -744,9 +835,63 @@ impl CollectionCipher {
}}
}
pub async fn update_users_revision(collection_uuid: &CollectionId, conn: &mut DbConn) {
if let Some(collection) = Collection::find_by_uuid(collection_uuid, conn).await {
collection.update_users_revision(conn).await;
}
}
}
// Added in case we need the membership_uuid instead of the user_uuid
pub struct CollectionMembership {
pub membership_uuid: MembershipId,
pub collection_uuid: CollectionId,
pub read_only: bool,
pub hide_passwords: bool,
pub manage: bool,
}
impl CollectionMembership {
pub fn to_json_details_for_member(&self, membership_type: i32) -> Value {
json!({
"id": self.membership_uuid,
"readOnly": self.read_only,
"hidePasswords": self.hide_passwords,
"manage": membership_type >= MembershipType::Admin
|| self.manage
|| (membership_type == MembershipType::Manager
&& !self.read_only
&& !self.hide_passwords),
})
}
}
impl From<CollectionUser> for CollectionMembership {
fn from(c: CollectionUser) -> Self {
Self {
membership_uuid: c.user_uuid.to_string().into(),
collection_uuid: c.collection_uuid,
read_only: c.read_only,
hide_passwords: c.hide_passwords,
manage: c.manage,
}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct CollectionId(String);


@@ -1,7 +1,14 @@
use chrono::{NaiveDateTime, Utc};
use derive_more::{Display, From};
use serde_json::Value;
use super::{AuthRequest, UserId};
use crate::{
crypto,
util::{format_date, get_uuid},
CONFIG,
};
use macros::{IdFromParam, UuidFromParam};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -9,26 +16,25 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid, user_uuid))]
pub struct Device {
pub uuid: DeviceId,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: UserId,
pub name: String,
pub atype: i32, // https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/Enums/DeviceType.cs
pub push_uuid: Option<PushId>,
pub push_token: Option<String>,
pub refresh_token: String,
pub twofactor_remember: Option<String>,
}
}
/// Local methods
impl Device {
pub fn new(uuid: DeviceId, user_uuid: UserId, name: String, atype: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
@@ -40,13 +46,25 @@ impl Device {
name,
atype,
push_uuid: Some(PushId(get_uuid())),
push_token: None,
refresh_token: String::new(),
twofactor_remember: None,
}
}
pub fn to_json(&self) -> Value {
json!({
"id": self.uuid,
"name": self.name,
"type": self.atype,
"identifier": self.uuid,
"creationDate": format_date(&self.created_at),
"isTrusted": false,
"object":"device"
})
}
pub fn refresh_twofactor_remember(&mut self) -> String {
use data_encoding::BASE64;
let twofactor_remember = crypto::encode_random_bytes::<180>(BASE64);
@@ -59,7 +77,12 @@ impl Device {
self.twofactor_remember = None;
}
pub fn refresh_tokens(
&mut self,
user: &super::User,
scope: Vec<String>,
client_id: Option<String>,
) -> (String, i64) {
// If there is no refresh token, we create one
if self.refresh_token.is_empty() {
use data_encoding::BASE64URL;
@@ -70,17 +93,22 @@ impl Device {
let time_now = Utc::now();
self.updated_at = time_now.naive_utc();
// Generate a random push_uuid if the device doesn't already have one
if self.push_uuid.is_none() {
self.push_uuid = Some(PushId(get_uuid()));
}
// ---
// Disabled these keys to be added to the JWT since they could cause the JWT to get too large
// Also these key/value pairs are not used anywhere by either Vaultwarden or Bitwarden Clients
// Because these might get used in the future, and they are added by the Bitwarden Server, let's keep them, but commented out
// ---
// fn arg: members: Vec<super::Membership>,
// ---
// let orgowner: Vec<_> = members.iter().filter(|m| m.atype == 0).map(|m| m.org_uuid.clone()).collect();
// let orgadmin: Vec<_> = members.iter().filter(|m| m.atype == 1).map(|m| m.org_uuid.clone()).collect();
// let orguser: Vec<_> = members.iter().filter(|m| m.atype == 2).map(|m| m.org_uuid.clone()).collect();
// let orgmanager: Vec<_> = members.iter().filter(|m| m.atype == 3).map(|m| m.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJwtClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
@@ -107,6 +135,8 @@ impl Device {
// orgmanager,
sstamp: user.security_stamp.clone(),
device: self.uuid.clone(),
devicetype: DeviceType::from_i32(self.atype).to_string(),
client_id: client_id.unwrap_or("undefined".to_string()),
scope,
amr: vec!["Application".into()],
};
@@ -118,11 +148,43 @@ impl Device {
matches!(DeviceType::from_i32(self.atype), DeviceType::Android | DeviceType::Ios)
}
pub fn is_cli(&self) -> bool {
matches!(DeviceType::from_i32(self.atype), DeviceType::WindowsCLI | DeviceType::MacOsCLI | DeviceType::LinuxCLI)
}
}
pub struct DeviceWithAuthRequest {
pub device: Device,
pub pending_auth_request: Option<AuthRequest>,
}
impl DeviceWithAuthRequest {
pub fn to_json(&self) -> Value {
let auth_request = match &self.pending_auth_request {
Some(auth_request) => auth_request.to_json_for_pending_device(),
None => Value::Null,
};
json!({
"id": self.device.uuid,
"name": self.device.name,
"type": self.device.atype,
"identifier": self.device.uuid,
"creationDate": format_date(&self.device.created_at),
"devicePendingAuthRequest": auth_request,
"isTrusted": false,
"encryptedPublicKey": null,
"encryptedUserKey": null,
"object": "device",
})
}
pub fn from(c: Device, a: Option<AuthRequest>) -> Self {
Self {
device: c,
pending_auth_request: a,
}
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
@@ -150,7 +212,7 @@ impl Device {
}
}
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(devices::table.filter(devices::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -158,7 +220,7 @@ impl Device {
}}
}
pub async fn find_by_uuid_and_user(uuid: &DeviceId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
@@ -169,7 +231,17 @@ impl Device {
}}
}
pub async fn find_with_auth_request_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<DeviceWithAuthRequest> {
let devices = Self::find_by_user(user_uuid, conn).await;
let mut result = Vec::new();
for device in devices {
let auth_request = AuthRequest::find_by_user_and_requested_device(user_uuid, &device.uuid, conn).await;
result.push(DeviceWithAuthRequest::from(device, auth_request));
}
result
}
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -179,7 +251,7 @@ impl Device {
}}
}
pub async fn find_by_uuid(uuid: &DeviceId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
@@ -189,7 +261,7 @@ impl Device {
}}
}
pub async fn clear_push_token_by_uuid(uuid: &DeviceId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::update(devices::table)
.filter(devices::uuid.eq(uuid))
@@ -208,7 +280,7 @@ impl Device {
}}
}
pub async fn find_latest_active_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -219,7 +291,7 @@ impl Device {
}}
}
pub async fn find_push_devices_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -230,7 +302,7 @@ impl Device {
}}
}
pub async fn check_user_has_push_device(user_uuid: &UserId, conn: &mut DbConn) -> bool {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -243,68 +315,62 @@ impl Device {
}
}
#[derive(Display)]
pub enum DeviceType {
#[display("Android")]
Android = 0,
#[display("iOS")]
Ios = 1,
#[display("Chrome Extension")]
ChromeExtension = 2,
#[display("Firefox Extension")]
FirefoxExtension = 3,
#[display("Opera Extension")]
OperaExtension = 4,
#[display("Edge Extension")]
EdgeExtension = 5,
#[display("Windows")]
WindowsDesktop = 6,
#[display("macOS")]
MacOsDesktop = 7,
#[display("Linux")]
LinuxDesktop = 8,
#[display("Chrome")]
ChromeBrowser = 9,
#[display("Firefox")]
FirefoxBrowser = 10,
#[display("Opera")]
OperaBrowser = 11,
#[display("Edge")]
EdgeBrowser = 12,
#[display("Internet Explorer")]
IEBrowser = 13,
#[display("Unknown Browser")]
UnknownBrowser = 14,
#[display("Android")]
AndroidAmazon = 15,
#[display("UWP")]
Uwp = 16,
#[display("Safari")]
SafariBrowser = 17,
#[display("Vivaldi")]
VivaldiBrowser = 18,
#[display("Vivaldi Extension")]
VivaldiExtension = 19,
#[display("Safari Extension")]
SafariExtension = 20,
#[display("SDK")]
Sdk = 21,
#[display("Server")]
Server = 22,
#[display("Windows CLI")]
WindowsCLI = 23,
#[display("macOS CLI")]
MacOsCLI = 24,
#[display("Linux CLI")]
LinuxCLI = 25,
}
impl DeviceType {
pub fn from_i32(value: i32) -> DeviceType {
match value {
@@ -338,3 +404,11 @@ impl DeviceType {
}
}
}
#[derive(
Clone, Debug, DieselNewType, Display, From, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize, IdFromParam,
)]
pub struct DeviceId(String);
#[derive(Clone, Debug, DieselNewType, Display, From, FromForm, Serialize, Deserialize, UuidFromParam)]
pub struct PushId(pub String);


@@ -1,9 +1,10 @@
use chrono::{NaiveDateTime, Utc};
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{User, UserId};
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,9 +12,9 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct EmergencyAccess {
pub uuid: EmergencyAccessId,
pub grantor_uuid: UserId,
pub grantee_uuid: Option<UserId>,
pub email: Option<String>,
pub key_encrypted: Option<String>,
pub atype: i32, //EmergencyAccessType
@@ -29,11 +30,11 @@ db_object! {
// Local methods
impl EmergencyAccess {
pub fn new(grantor_uuid: UserId, email: String, status: i32, atype: i32, wait_time_days: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: EmergencyAccessId(crate::util::get_uuid()),
grantor_uuid,
grantee_uuid: None,
email: Some(email),
@@ -77,12 +78,13 @@ impl EmergencyAccess {
"grantorId": grantor_user.uuid,
"email": grantor_user.email,
"name": grantor_user.name,
"avatarColor": grantor_user.avatar_color,
"object": "emergencyAccessGrantorDetails",
})
}
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Option<Value> {
let grantee_user = if let Some(grantee_uuid) = &self.grantee_uuid {
User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found.")
} else if let Some(email) = self.email.as_deref() {
match User::find_by_mail(email, conn).await {
@@ -105,6 +107,7 @@ impl EmergencyAccess {
"granteeId": grantee_user.uuid,
"email": grantee_user.email,
"name": grantee_user.name,
"avatarColor": grantee_user.avatar_color,
"object": "emergencyAccessGranteeDetails",
}))
}
@@ -211,7 +214,7 @@ impl EmergencyAccess {
}}
}
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
for ea in Self::find_all_by_grantor_uuid(user_uuid, conn).await {
ea.delete(conn).await?;
}
@@ -239,8 +242,8 @@ impl EmergencyAccess {
}
pub async fn find_by_grantor_uuid_and_grantee_uuid_or_email(
grantor_uuid: &UserId,
grantee_uuid: &UserId,
email: &str,
conn: &mut DbConn,
) -> Option<Self> {
@@ -262,7 +265,11 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid_and_grantor_uuid(
uuid: &EmergencyAccessId,
grantor_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -272,7 +279,11 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid_and_grantee_uuid(
uuid: &EmergencyAccessId,
grantee_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -282,7 +293,11 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid_and_grantee_email(
uuid: &EmergencyAccessId,
grantee_email: &str,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -292,7 +307,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantee_uuid(grantee_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
@@ -319,7 +334,7 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
@@ -327,7 +342,12 @@ impl EmergencyAccess {
}}
}
pub async fn accept_invite(
&mut self,
grantee_uuid: &UserId,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
if self.email.is_none() || self.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
@@ -337,10 +357,28 @@ impl EmergencyAccess {
}
self.status = EmergencyAccessStatus::Accepted as i32;
self.grantee_uuid = Some(grantee_uuid.clone());
self.email = None;
self.save(conn).await
}
}
// endregion
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct EmergencyAccessId(String);


@@ -1,41 +1,42 @@
use chrono::{NaiveDateTime, TimeDelta, Utc};
//use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{CipherId, CollectionId, GroupId, MembershipId, OrgPolicyId, OrganizationId, UserId};
use crate::{api::EmptyResult, db::DbConn, error::MapResult, CONFIG};
// https://bitwarden.com/help/event-logs/
db_object! {
// Upstream: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Services/Implementations/EventService.cs
// Upstream: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Api/AdminConsole/Public/Models/Response/EventResponseModel.cs
// Upstream SQL: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Sql/dbo/Tables/Event.sql
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = event)]
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Event {
pub uuid: EventId,
pub event_type: i32, // EventType
pub user_uuid: Option<UserId>,
pub org_uuid: Option<OrganizationId>,
pub cipher_uuid: Option<CipherId>,
pub collection_uuid: Option<CollectionId>,
pub group_uuid: Option<GroupId>,
pub org_user_uuid: Option<MembershipId>,
pub act_user_uuid: Option<UserId>,
// Upstream enum: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/Enums/DeviceType.cs
pub device_type: Option<i32>,
pub ip_address: Option<String>,
pub event_date: NaiveDateTime,
pub policy_uuid: Option<OrgPolicyId>,
pub provider_uuid: Option<String>,
pub provider_user_uuid: Option<String>,
pub provider_org_uuid: Option<String>,
}
}
// Upstream enum: https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Enums/EventType.cs
#[derive(Debug, Copy, Clone)]
pub enum EventType {
// User
@@ -49,6 +50,8 @@ pub enum EventType {
UserClientExportedVault = 1007,
// UserUpdatedTempPassword = 1008, // Not supported
// UserMigratedKeyToKeyConnector = 1009, // Not supported
UserRequestedDeviceApproval = 1010,
// UserTdeOffboardingPasswordSet = 1011, // Not supported
// Cipher
CipherCreated = 1100,
@@ -84,7 +87,7 @@ pub enum EventType {
OrganizationUserInvited = 1500,
OrganizationUserConfirmed = 1501,
OrganizationUserUpdated = 1502,
OrganizationUserRemoved = 1503, // Organization user data was deleted
OrganizationUserUpdatedGroups = 1504,
// OrganizationUserUnlinkedSso = 1505, // Not supported
OrganizationUserResetPasswordEnroll = 1506,
@@ -94,6 +97,10 @@ pub enum EventType {
// OrganizationUserFirstSsoLogin = 1510, // Not supported
OrganizationUserRevoked = 1511,
OrganizationUserRestored = 1512,
OrganizationUserApprovedAuthRequest = 1513,
OrganizationUserRejectedAuthRequest = 1514,
OrganizationUserDeleted = 1515, // Both user and organization user data were deleted
OrganizationUserLeft = 1516, // User voluntarily left the organization
// Organization
OrganizationUpdated = 1600,
@@ -105,6 +112,7 @@ pub enum EventType {
// OrganizationEnabledKeyConnector = 1606, // Not supported
// OrganizationDisabledKeyConnector = 1607, // Not supported
// OrganizationSponsorshipsSynced = 1608, // Not supported
// OrganizationCollectionManagementUpdated = 1609, // Not supported
// Policy
PolicyUpdated = 1700,
@@ -117,6 +125,13 @@ pub enum EventType {
// ProviderOrganizationAdded = 1901, // Not supported
// ProviderOrganizationRemoved = 1902, // Not supported
// ProviderOrganizationVaultAccessed = 1903, // Not supported
// OrganizationDomainAdded = 2000, // Not supported
// OrganizationDomainRemoved = 2001, // Not supported
// OrganizationDomainVerified = 2002, // Not supported
// OrganizationDomainNotVerified = 2003, // Not supported
// SecretRetrieved = 2100, // Not supported
}
/// Local methods
@@ -128,7 +143,7 @@ impl Event {
};
Self {
uuid: EventId(crate::util::get_uuid()),
event_type,
user_uuid: None,
org_uuid: None,
@@ -172,7 +187,7 @@ impl Event {
}
/// Database methods
/// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Services/Implementations/EventService.cs
impl Event {
pub const PAGE_SIZE: i64 = 30;
@@ -246,7 +261,7 @@ impl Event {
/// ##############
/// Custom Queries
pub async fn find_by_organization_uuid(
org_uuid: &OrganizationId,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
@@ -263,7 +278,7 @@ impl Event {
}}
}
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
event::table
.filter(event::org_uuid.eq(org_uuid))
@@ -274,16 +289,16 @@ impl Event {
}}
}
pub async fn find_by_org_and_member(
org_uuid: &OrganizationId,
member_uuid: &MembershipId,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
event::table
.inner_join(users_organizations::table.on(users_organizations::uuid.eq(member_uuid)))
.filter(event::org_uuid.eq(org_uuid))
.filter(event::event_date.between(start, end))
.filter(event::user_uuid.eq(users_organizations::user_uuid.nullable()).or(event::act_user_uuid.eq(users_organizations::user_uuid.nullable())))
@@ -297,7 +312,7 @@ impl Event {
} }
pub async fn find_by_cipher_uuid( pub async fn find_by_cipher_uuid(
cipher_uuid: &str, cipher_uuid: &CipherId,
start: &NaiveDateTime, start: &NaiveDateTime,
end: &NaiveDateTime, end: &NaiveDateTime,
conn: &mut DbConn, conn: &mut DbConn,
@@ -327,3 +342,6 @@ impl Event {
} }
} }
} }
#[derive(Clone, Debug, DieselNewType, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize)]
pub struct EventId(String);
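The hunks above convert every plain `String` UUID into a dedicated newtype such as `EventId`. A minimal stand-alone sketch of the pattern, omitting the `DieselNewType`, `FromForm`, `UuidFromParam`, and serde derives (those need the diesel, rocket, and serde crates) and keeping only the compile-time guarantee that one kind of ID cannot be passed where another is expected:

```rust
// Sketch of the typed-ID newtype pattern introduced in this changeset.
// Field access via .0 works here because everything is in one module;
// the real code adds Deref/AsRef derives instead.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct EventId(String);

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UserId(String);

// Hypothetical lookup used only to show the type-level separation.
fn find_event(id: &EventId) -> String {
    format!("looking up event {}", id.0)
}

fn main() {
    let event_id = EventId("3fa85f64-5717-4562-b3fc-2c963f66afa6".into());
    let _user_id = UserId("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee".into());
    // find_event(&_user_id); // would not compile: expected &EventId, found &UserId
    assert!(find_event(&event_id).contains("3fa85f64"));
}
```

The mistake the pattern prevents (swapping two UUID arguments of the same `&str` type) previously compiled silently; with newtypes it is a type error.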

View File

@@ -1,12 +1,12 @@
-use super::User;
+use super::{CipherId, User, UserId};
 db_object! {
     #[derive(Identifiable, Queryable, Insertable)]
     #[diesel(table_name = favorites)]
     #[diesel(primary_key(user_uuid, cipher_uuid))]
     pub struct Favorite {
-        pub user_uuid: String,
-        pub cipher_uuid: String,
+        pub user_uuid: UserId,
+        pub cipher_uuid: CipherId,
     }
 }
@@ -17,7 +17,7 @@ use crate::error::MapResult;
 impl Favorite {
     // Returns whether the specified cipher is a favorite of the specified user.
-    pub async fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_favorite(cipher_uuid: &CipherId, user_uuid: &UserId, conn: &mut DbConn) -> bool {
         db_run! { conn: {
             let query = favorites::table
                 .filter(favorites::cipher_uuid.eq(cipher_uuid))
@@ -29,7 +29,12 @@ impl Favorite {
     }
     // Sets whether the specified cipher is a favorite of the specified user.
-    pub async fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn set_favorite(
+        favorite: bool,
+        cipher_uuid: &CipherId,
+        user_uuid: &UserId,
+        conn: &mut DbConn,
+    ) -> EmptyResult {
         let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn).await, favorite);
         match (old, new) {
             (false, true) => {
@@ -62,7 +67,7 @@ impl Favorite {
     }
     // Delete all favorite entries associated with the specified cipher.
-    pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(favorites::table.filter(favorites::cipher_uuid.eq(cipher_uuid)))
                 .execute(conn)
@@ -71,7 +76,7 @@ impl Favorite {
     }
     // Delete all favorite entries associated with the specified user.
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(favorites::table.filter(favorites::user_uuid.eq(user_uuid)))
                 .execute(conn)
@@ -81,12 +86,12 @@ impl Favorite {
     /// Return a vec with (cipher_uuid) this will only contain favorite flagged ciphers
     /// This is used during a full sync so we only need one query for all favorite cipher matches.
-    pub async fn get_all_cipher_uuid_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
+    pub async fn get_all_cipher_uuid_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<CipherId> {
         db_run! { conn: {
             favorites::table
                 .filter(favorites::user_uuid.eq(user_uuid))
                 .select(favorites::cipher_uuid)
-                .load::<String>(conn)
+                .load::<CipherId>(conn)
                 .unwrap_or_default()
         }}
     }
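`set_favorite` compares the stored state with the requested one and only touches the database on an actual transition; the `(false, false)` and `(true, true)` cases are no-ops. A dependency-free sketch of that `(old, new)` match (the `Action` enum and function name are illustrative, not from the code):

```rust
// Models the decision logic of Favorite::set_favorite: which of the four
// (old, new) combinations triggers an insert, a delete, or nothing.
#[derive(Debug, PartialEq)]
enum Action {
    Insert,  // (false, true): newly flagged as favorite
    Delete,  // (true, false): favorite flag removed
    Nothing, // state unchanged, skip the database round-trip
}

fn favorite_action(old: bool, new: bool) -> Action {
    match (old, new) {
        (false, true) => Action::Insert,
        (true, false) => Action::Delete,
        (false, false) | (true, true) => Action::Nothing,
    }
}

fn main() {
    assert_eq!(favorite_action(false, true), Action::Insert);
    assert_eq!(favorite_action(true, false), Action::Delete);
    assert_eq!(favorite_action(true, true), Action::Nothing);
    assert_eq!(favorite_action(false, false), Action::Nothing);
}
```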

View File

@@ -1,17 +1,19 @@
 use chrono::{NaiveDateTime, Utc};
+use derive_more::{AsRef, Deref, Display, From};
 use serde_json::Value;
-use super::User;
+use super::{CipherId, User, UserId};
+use macros::UuidFromParam;
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
     #[diesel(table_name = folders)]
     #[diesel(primary_key(uuid))]
     pub struct Folder {
-        pub uuid: String,
+        pub uuid: FolderId,
         pub created_at: NaiveDateTime,
         pub updated_at: NaiveDateTime,
-        pub user_uuid: String,
+        pub user_uuid: UserId,
         pub name: String,
     }
@@ -19,18 +21,18 @@ db_object! {
     #[diesel(table_name = folders_ciphers)]
     #[diesel(primary_key(cipher_uuid, folder_uuid))]
     pub struct FolderCipher {
-        pub cipher_uuid: String,
-        pub folder_uuid: String,
+        pub cipher_uuid: CipherId,
+        pub folder_uuid: FolderId,
     }
 }
 /// Local methods
 impl Folder {
-    pub fn new(user_uuid: String, name: String) -> Self {
+    pub fn new(user_uuid: UserId, name: String) -> Self {
         let now = Utc::now().naive_utc();
         Self {
-            uuid: crate::util::get_uuid(),
+            uuid: FolderId(crate::util::get_uuid()),
             created_at: now,
             updated_at: now,
@@ -52,10 +54,10 @@ impl Folder {
 }
 impl FolderCipher {
-    pub fn new(folder_uuid: &str, cipher_uuid: &str) -> Self {
+    pub fn new(folder_uuid: FolderId, cipher_uuid: CipherId) -> Self {
         Self {
-            folder_uuid: folder_uuid.to_string(),
-            cipher_uuid: cipher_uuid.to_string(),
+            folder_uuid,
+            cipher_uuid,
         }
     }
 }
@@ -113,14 +115,14 @@ impl Folder {
         }}
     }
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         for folder in Self::find_by_user(user_uuid, conn).await {
             folder.delete(conn).await?;
         }
         Ok(())
     }
-    pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid_and_user(uuid: &FolderId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
         db_run! { conn: {
             folders::table
                 .filter(folders::uuid.eq(uuid))
@@ -131,7 +133,7 @@ impl Folder {
         }}
     }
-    pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             folders::table
                 .filter(folders::user_uuid.eq(user_uuid))
@@ -177,7 +179,7 @@ impl FolderCipher {
         }}
     }
-    pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
                 .execute(conn)
@@ -185,7 +187,7 @@ impl FolderCipher {
         }}
     }
-    pub async fn delete_all_by_folder(folder_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
                 .execute(conn)
@@ -193,7 +195,11 @@ impl FolderCipher {
         }}
     }
-    pub async fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_folder_and_cipher(
+        folder_uuid: &FolderId,
+        cipher_uuid: &CipherId,
+        conn: &mut DbConn,
+    ) -> Option<Self> {
         db_run! { conn: {
             folders_ciphers::table
                 .filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -204,7 +210,7 @@ impl FolderCipher {
         }}
     }
-    pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             folders_ciphers::table
                 .filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -216,14 +222,32 @@ impl FolderCipher {
     /// Return a vec with (cipher_uuid, folder_uuid)
     /// This is used during a full sync so we only need one query for all folder matches.
-    pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<(String, String)> {
+    pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<(CipherId, FolderId)> {
         db_run! { conn: {
             folders_ciphers::table
                 .inner_join(folders::table)
                 .filter(folders::user_uuid.eq(user_uuid))
                 .select(folders_ciphers::all_columns)
-                .load::<(String, String)>(conn)
+                .load::<(CipherId, FolderId)>(conn)
                 .unwrap_or_default()
         }}
     }
 }
+
+#[derive(
+    Clone,
+    Debug,
+    AsRef,
+    Deref,
+    DieselNewType,
+    Display,
+    From,
+    FromForm,
+    Hash,
+    PartialEq,
+    Eq,
+    Serialize,
+    Deserialize,
+    UuidFromParam,
+)]
+pub struct FolderId(String);

View File

@@ -1,17 +1,20 @@
-use super::{User, UserOrganization};
+use super::{CollectionId, Membership, MembershipId, OrganizationId, User, UserId};
 use crate::api::EmptyResult;
 use crate::db::DbConn;
 use crate::error::MapResult;
 use chrono::{NaiveDateTime, Utc};
+use derive_more::{AsRef, Deref, Display, From};
+use macros::UuidFromParam;
 use serde_json::Value;
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
     #[diesel(table_name = groups)]
+    #[diesel(treat_none_as_null = true)]
     #[diesel(primary_key(uuid))]
     pub struct Group {
-        pub uuid: String,
-        pub organizations_uuid: String,
+        pub uuid: GroupId,
+        pub organizations_uuid: OrganizationId,
         pub name: String,
         pub access_all: bool,
         pub external_id: Option<String>,
@@ -23,28 +26,34 @@ db_object! {
     #[diesel(table_name = collections_groups)]
     #[diesel(primary_key(collections_uuid, groups_uuid))]
     pub struct CollectionGroup {
-        pub collections_uuid: String,
-        pub groups_uuid: String,
+        pub collections_uuid: CollectionId,
+        pub groups_uuid: GroupId,
         pub read_only: bool,
         pub hide_passwords: bool,
+        pub manage: bool,
     }
     #[derive(Identifiable, Queryable, Insertable)]
     #[diesel(table_name = groups_users)]
     #[diesel(primary_key(groups_uuid, users_organizations_uuid))]
     pub struct GroupUser {
-        pub groups_uuid: String,
-        pub users_organizations_uuid: String
+        pub groups_uuid: GroupId,
+        pub users_organizations_uuid: MembershipId
     }
 }
 /// Local methods
 impl Group {
-    pub fn new(organizations_uuid: String, name: String, access_all: bool, external_id: Option<String>) -> Self {
+    pub fn new(
+        organizations_uuid: OrganizationId,
+        name: String,
+        access_all: bool,
+        external_id: Option<String>,
+    ) -> Self {
         let now = Utc::now().naive_utc();
         let mut new_model = Self {
-            uuid: crate::util::get_uuid(),
+            uuid: GroupId(crate::util::get_uuid()),
             organizations_uuid,
             name,
             access_all,
@@ -59,21 +68,19 @@ impl Group {
     }
     pub fn to_json(&self) -> Value {
-        use crate::util::format_date;
         json!({
             "id": self.uuid,
             "organizationId": self.organizations_uuid,
             "name": self.name,
-            "accessAll": self.access_all,
             "externalId": self.external_id,
-            "creationDate": format_date(&self.creation_date),
-            "revisionDate": format_date(&self.revision_date),
             "object": "group"
         })
     }
     pub async fn to_json_details(&self, conn: &mut DbConn) -> Value {
-        // If both read_only and hide_passwords are false, then manage should be true
-        // You can't have an entry with read_only and manage, or hide_passwords and manage
-        // Or an entry with everything to false
         let collections_groups: Vec<Value> = CollectionGroup::find_by_group(&self.uuid, conn)
             .await
             .iter()
@@ -82,7 +89,7 @@ impl Group {
                     "id": entry.collections_uuid,
                     "readOnly": entry.read_only,
                     "hidePasswords": entry.hide_passwords,
-                    "manage": false
+                    "manage": entry.manage,
                 })
             })
             .collect();
@@ -108,18 +115,38 @@ impl Group {
 }
 impl CollectionGroup {
-    pub fn new(collections_uuid: String, groups_uuid: String, read_only: bool, hide_passwords: bool) -> Self {
+    pub fn new(
+        collections_uuid: CollectionId,
+        groups_uuid: GroupId,
+        read_only: bool,
+        hide_passwords: bool,
+        manage: bool,
+    ) -> Self {
         Self {
             collections_uuid,
             groups_uuid,
             read_only,
             hide_passwords,
+            manage,
         }
     }
+
+    pub fn to_json_details_for_group(&self) -> Value {
+        // If both read_only and hide_passwords are false, then manage should be true
+        // You can't have an entry with read_only and manage, or hide_passwords and manage
+        // Or an entry with everything set to false
+        // For backwards compatibility and migration purposes we keep checking read_only and hide_passwords
+        json!({
+            "id": self.groups_uuid,
+            "readOnly": self.read_only,
+            "hidePasswords": self.hide_passwords,
+            "manage": self.manage || (!self.read_only && !self.hide_passwords),
+        })
+    }
 }
 impl GroupUser {
-    pub fn new(groups_uuid: String, users_organizations_uuid: String) -> Self {
+    pub fn new(groups_uuid: GroupId, users_organizations_uuid: MembershipId) -> Self {
         Self {
             groups_uuid,
             users_organizations_uuid,
@@ -163,27 +190,27 @@ impl Group {
         }
     }
-    pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
         for group in Self::find_by_organization(org_uuid, conn).await {
             group.delete(conn).await?;
         }
         Ok(())
     }
-    pub async fn find_by_organization(organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             groups::table
-                .filter(groups::organizations_uuid.eq(organizations_uuid))
+                .filter(groups::organizations_uuid.eq(org_uuid))
                 .load::<GroupDb>(conn)
                 .expect("Error loading groups")
                 .from_db()
         }}
     }
-    pub async fn count_by_org(organizations_uuid: &str, conn: &mut DbConn) -> i64 {
+    pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
         db_run! { conn: {
             groups::table
-                .filter(groups::organizations_uuid.eq(organizations_uuid))
+                .filter(groups::organizations_uuid.eq(org_uuid))
                 .count()
                 .first::<i64>(conn)
                 .ok()
@@ -191,7 +218,7 @@ impl Group {
         }}
     }
-    pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid_and_org(uuid: &GroupId, org_uuid: &OrganizationId, conn: &mut DbConn) -> Option<Self> {
         db_run! { conn: {
             groups::table
                 .filter(groups::uuid.eq(uuid))
@@ -202,7 +229,11 @@ impl Group {
         }}
     }
-    pub async fn find_by_external_id_and_org(external_id: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_external_id_and_org(
+        external_id: &str,
+        org_uuid: &OrganizationId,
+        conn: &mut DbConn,
+    ) -> Option<Self> {
         db_run! { conn: {
             groups::table
                 .filter(groups::external_id.eq(external_id))
@@ -213,7 +244,7 @@ impl Group {
         }}
     }
     //Returns all organizations the user has full access to
-    pub async fn gather_user_organizations_full_access(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
+    pub async fn get_orgs_by_user_with_full_access(user_uuid: &UserId, conn: &mut DbConn) -> Vec<OrganizationId> {
         db_run! { conn: {
             groups_users::table
                 .inner_join(users_organizations::table.on(
@@ -226,12 +257,12 @@ impl Group {
                 .filter(groups::access_all.eq(true))
                 .select(groups::organizations_uuid)
                 .distinct()
-                .load::<String>(conn)
+                .load::<OrganizationId>(conn)
                 .expect("Error loading organization group full access information for user")
         }}
     }
-    pub async fn is_in_full_access_group(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_in_full_access_group(user_uuid: &UserId, org_uuid: &OrganizationId, conn: &mut DbConn) -> bool {
         db_run! { conn: {
             groups::table
                 .inner_join(groups_users::table.on(
@@ -260,13 +291,13 @@ impl Group {
         }}
     }
-    pub async fn update_revision(uuid: &str, conn: &mut DbConn) {
+    pub async fn update_revision(uuid: &GroupId, conn: &mut DbConn) {
         if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
-            warn!("Failed to update revision for {}: {:#?}", uuid, e);
+            warn!("Failed to update revision for {uuid}: {e:#?}");
         }
     }
-    async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
+    async fn _update_revision(uuid: &GroupId, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
         db_run! {conn: {
             crate::util::retry(|| {
                 diesel::update(groups::table.filter(groups::uuid.eq(uuid)))
@@ -293,6 +324,7 @@ impl CollectionGroup {
                 collections_groups::groups_uuid.eq(&self.groups_uuid),
                 collections_groups::read_only.eq(&self.read_only),
                 collections_groups::hide_passwords.eq(&self.hide_passwords),
+                collections_groups::manage.eq(&self.manage),
             ))
             .execute(conn)
             {
@@ -307,6 +339,7 @@ impl CollectionGroup {
                 collections_groups::groups_uuid.eq(&self.groups_uuid),
                 collections_groups::read_only.eq(&self.read_only),
                 collections_groups::hide_passwords.eq(&self.hide_passwords),
+                collections_groups::manage.eq(&self.manage),
             ))
             .execute(conn)
             .map_res("Error adding group to collection")
@@ -321,12 +354,14 @@ impl CollectionGroup {
                 collections_groups::groups_uuid.eq(&self.groups_uuid),
                 collections_groups::read_only.eq(self.read_only),
                 collections_groups::hide_passwords.eq(self.hide_passwords),
+                collections_groups::manage.eq(self.manage),
             ))
             .on_conflict((collections_groups::collections_uuid, collections_groups::groups_uuid))
             .do_update()
             .set((
                 collections_groups::read_only.eq(self.read_only),
                 collections_groups::hide_passwords.eq(self.hide_passwords),
+                collections_groups::manage.eq(self.manage),
             ))
             .execute(conn)
             .map_res("Error adding group to collection")
@@ -334,7 +369,7 @@ impl CollectionGroup {
         }
     }
-    pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             collections_groups::table
                 .filter(collections_groups::groups_uuid.eq(group_uuid))
@@ -344,7 +379,7 @@ impl CollectionGroup {
         }}
     }
-    pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             collections_groups::table
                 .inner_join(groups_users::table.on(
@@ -361,7 +396,7 @@ impl CollectionGroup {
         }}
     }
-    pub async fn find_by_collection(collection_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             collections_groups::table
                 .filter(collections_groups::collections_uuid.eq(collection_uuid))
@@ -387,7 +422,7 @@ impl CollectionGroup {
         }}
     }
-    pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> EmptyResult {
         let group_users = GroupUser::find_by_group(group_uuid, conn).await;
         for group_user in group_users {
             group_user.update_user_revision(conn).await;
@@ -401,7 +436,7 @@ impl CollectionGroup {
         }}
     }
-    pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
         let collection_assigned_to_groups = CollectionGroup::find_by_collection(collection_uuid, conn).await;
         for collection_assigned_to_group in collection_assigned_to_groups {
             let group_users = GroupUser::find_by_group(&collection_assigned_to_group.groups_uuid, conn).await;
@@ -466,7 +501,7 @@ impl GroupUser {
         }
     }
-    pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             groups_users::table
                 .filter(groups_users::groups_uuid.eq(group_uuid))
@@ -476,10 +511,10 @@ impl GroupUser {
         }}
     }
-    pub async fn find_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_member(member_uuid: &MembershipId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             groups_users::table
-                .filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
+                .filter(groups_users::users_organizations_uuid.eq(member_uuid))
                 .load::<GroupUserDb>(conn)
                 .expect("Error loading groups for user")
                 .from_db()
@@ -487,8 +522,8 @@ impl GroupUser {
     }
     pub async fn has_access_to_collection_by_member(
-        collection_uuid: &str,
-        member_uuid: &str,
+        collection_uuid: &CollectionId,
+        member_uuid: &MembershipId,
         conn: &mut DbConn,
     ) -> bool {
         db_run! { conn: {
@@ -504,7 +539,11 @@ impl GroupUser {
         }}
     }
-    pub async fn has_full_access_by_member(org_uuid: &str, member_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn has_full_access_by_member(
+        org_uuid: &OrganizationId,
+        member_uuid: &MembershipId,
+        conn: &mut DbConn,
+    ) -> bool {
         db_run! { conn: {
             groups_users::table
                 .inner_join(groups::table.on(
@@ -520,32 +559,32 @@ impl GroupUser {
     }
     pub async fn update_user_revision(&self, conn: &mut DbConn) {
-        match UserOrganization::find_by_uuid(&self.users_organizations_uuid, conn).await {
-            Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
-            None => warn!("User could not be found!"),
+        match Membership::find_by_uuid(&self.users_organizations_uuid, conn).await {
+            Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
+            None => warn!("Member could not be found!"),
         }
     }
-    pub async fn delete_by_group_id_and_user_id(
-        group_uuid: &str,
-        users_organizations_uuid: &str,
+    pub async fn delete_by_group_and_member(
+        group_uuid: &GroupId,
+        member_uuid: &MembershipId,
         conn: &mut DbConn,
     ) -> EmptyResult {
-        match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
-            Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
-            None => warn!("User could not be found!"),
+        match Membership::find_by_uuid(member_uuid, conn).await {
+            Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
+            None => warn!("Member could not be found!"),
         };
         db_run! { conn: {
             diesel::delete(groups_users::table)
                 .filter(groups_users::groups_uuid.eq(group_uuid))
-                .filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
+                .filter(groups_users::users_organizations_uuid.eq(member_uuid))
                 .execute(conn)
                 .map_res("Error deleting group users")
         }}
     }
-    pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> EmptyResult {
         let group_users = GroupUser::find_by_group(group_uuid, conn).await;
         for group_user in group_users {
             group_user.update_user_revision(conn).await;
@@ -559,17 +598,35 @@ impl GroupUser {
         }}
     }
-    pub async fn delete_all_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> EmptyResult {
-        match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
-            Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
-            None => warn!("User could not be found!"),
+    pub async fn delete_all_by_member(member_uuid: &MembershipId, conn: &mut DbConn) -> EmptyResult {
+        match Membership::find_by_uuid(member_uuid, conn).await {
+            Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
+            None => warn!("Member could not be found!"),
        }
         db_run! { conn: {
             diesel::delete(groups_users::table)
-                .filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
+                .filter(groups_users::users_organizations_uuid.eq(member_uuid))
                 .execute(conn)
                 .map_res("Error deleting user groups")
         }}
     }
 }
+
+#[derive(
+    Clone,
+    Debug,
+    AsRef,
+    Deref,
+    DieselNewType,
+    Display,
+    From,
+    FromForm,
+    Hash,
+    PartialEq,
+    Eq,
+    Serialize,
+    Deserialize,
+    UuidFromParam,
+)]
+pub struct GroupId(String);
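The new `to_json_details_for_group` keeps legacy rows working: a `collections_groups` entry created before the `manage` column existed has `read_only`, `hide_passwords`, and `manage` all false, and per the comment in the diff such an entry must be reported as manageable. A small sketch of that rule (the function name is illustrative, not from the code):

```rust
// Mirrors the expression in to_json_details_for_group:
// "manage": self.manage || (!self.read_only && !self.hide_passwords)
fn effective_manage(manage: bool, read_only: bool, hide_passwords: bool) -> bool {
    manage || (!read_only && !hide_passwords)
}

fn main() {
    // Legacy row with every flag false is treated as "manage".
    assert!(effective_manage(false, false, false));
    // Any restriction without an explicit manage flag stays restricted.
    assert!(!effective_manage(false, true, false));
    assert!(!effective_manage(false, false, true));
    // An explicit manage flag always wins.
    assert!(effective_manage(true, true, false));
}
```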

View File

@@ -16,20 +16,26 @@ mod two_factor_duo_context;
 mod two_factor_incomplete;
 mod user;
-pub use self::attachment::Attachment;
-pub use self::auth_request::AuthRequest;
-pub use self::cipher::{Cipher, RepromptType};
-pub use self::collection::{Collection, CollectionCipher, CollectionUser};
-pub use self::device::{Device, DeviceType};
-pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
+pub use self::attachment::{Attachment, AttachmentId};
+pub use self::auth_request::{AuthRequest, AuthRequestId};
+pub use self::cipher::{Cipher, CipherId, RepromptType};
+pub use self::collection::{Collection, CollectionCipher, CollectionId, CollectionUser};
+pub use self::device::{Device, DeviceId, DeviceType, PushId};
+pub use self::emergency_access::{EmergencyAccess, EmergencyAccessId, EmergencyAccessStatus, EmergencyAccessType};
 pub use self::event::{Event, EventType};
 pub use self::favorite::Favorite;
-pub use self::folder::{Folder, FolderCipher};
-pub use self::group::{CollectionGroup, Group, GroupUser};
-pub use self::org_policy::{OrgPolicy, OrgPolicyErr, OrgPolicyType};
-pub use self::organization::{Organization, OrganizationApiKey, UserOrgStatus, UserOrgType, UserOrganization};
-pub use self::send::{Send, SendType};
+pub use self::folder::{Folder, FolderCipher, FolderId};
+pub use self::group::{CollectionGroup, Group, GroupId, GroupUser};
+pub use self::org_policy::{OrgPolicy, OrgPolicyErr, OrgPolicyId, OrgPolicyType};
+pub use self::organization::{
+    Membership, MembershipId, MembershipStatus, MembershipType, OrgApiKeyId, Organization, OrganizationApiKey,
+    OrganizationId,
+};
+pub use self::send::{
+    id::{SendFileId, SendId},
+    Send, SendType,
+};
 pub use self::two_factor::{TwoFactor, TwoFactorType};
 pub use self::two_factor_duo_context::TwoFactorDuoContext;
 pub use self::two_factor_incomplete::TwoFactorIncomplete;
-pub use self::user::{Invitation, User, UserKdfType, UserStampException};
+pub use self::user::{Invitation, User, UserId, UserKdfType, UserStampException};


@@ -1,3 +1,4 @@
+use derive_more::{AsRef, From};
 use serde::Deserialize;
 use serde_json::Value;
@@ -5,22 +6,22 @@ use crate::api::EmptyResult;
 use crate::db::DbConn;
 use crate::error::MapResult;
-use super::{TwoFactor, UserOrgStatus, UserOrgType, UserOrganization};
+use super::{Membership, MembershipId, MembershipStatus, MembershipType, OrganizationId, TwoFactor, UserId};
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
     #[diesel(table_name = org_policies)]
     #[diesel(primary_key(uuid))]
     pub struct OrgPolicy {
-        pub uuid: String,
-        pub org_uuid: String,
+        pub uuid: OrgPolicyId,
+        pub org_uuid: OrganizationId,
         pub atype: i32,
         pub enabled: bool,
         pub data: String,
     }
 }
-// https://github.com/bitwarden/server/blob/b86a04cef9f1e1b82cf18e49fc94e017c641130c/src/Core/Enums/PolicyType.cs
+// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Enums/PolicyType.cs
 #[derive(Copy, Clone, Eq, PartialEq, num_derive::FromPrimitive)]
 pub enum OrgPolicyType {
     TwoFactorAuthentication = 0,
@@ -34,9 +35,13 @@ pub enum OrgPolicyType {
     ResetPassword = 8,
     // MaximumVaultTimeout = 9, // Not supported (Not AGPLv3 Licensed)
     // DisablePersonalVaultExport = 10, // Not supported (Not AGPLv3 Licensed)
+    // ActivateAutofill = 11,
+    // AutomaticAppLogIn = 12,
+    // FreeFamiliesSponsorshipPolicy = 13,
+    RemoveUnlockWithPin = 14,
 }
-// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/SendOptionsPolicyData.cs
+// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Models/Data/Organizations/Policies/SendOptionsPolicyData.cs#L5
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 pub struct SendOptionsPolicyData {
@@ -44,7 +49,7 @@ pub struct SendOptionsPolicyData {
     pub disable_hide_email: bool,
 }
-// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/ResetPasswordDataModel.cs
+// https://github.com/bitwarden/server/blob/9ebe16587175b1c0e9208f84397bb75d0d595510/src/Core/AdminConsole/Models/Data/Organizations/Policies/ResetPasswordDataModel.cs
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 pub struct ResetPasswordDataModel {
@@ -62,9 +67,9 @@ pub enum OrgPolicyErr {
 /// Local methods
 impl OrgPolicy {
-    pub fn new(org_uuid: String, atype: OrgPolicyType, data: String) -> Self {
+    pub fn new(org_uuid: OrganizationId, atype: OrgPolicyType, data: String) -> Self {
         Self {
-            uuid: crate::util::get_uuid(),
+            uuid: OrgPolicyId(crate::util::get_uuid()),
             org_uuid,
             atype: atype as i32,
             enabled: false,
@@ -78,14 +83,24 @@ impl OrgPolicy {
     pub fn to_json(&self) -> Value {
         let data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
-        json!({
+        let mut policy = json!({
             "id": self.uuid,
             "organizationId": self.org_uuid,
             "type": self.atype,
             "data": data_json,
             "enabled": self.enabled,
             "object": "policy",
-        })
+        });
+
+        // Upstream adds this key/value
+        // Allow enabling Single Org policy when the organization has claimed domains.
+        // See: (https://github.com/bitwarden/server/pull/5565)
+        // We return the same to prevent possible issues
+        if self.atype == 8i32 {
+            policy["canToggleState"] = json!(true);
+        }
+
+        policy
     }
 }
@@ -142,7 +157,7 @@ impl OrgPolicy {
         }}
     }
-    pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             org_policies::table
                 .filter(org_policies::org_uuid.eq(org_uuid))
@@ -152,7 +167,7 @@ impl OrgPolicy {
         }}
     }
-    pub async fn find_confirmed_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_confirmed_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             org_policies::table
                 .inner_join(
@@ -161,7 +176,7 @@ impl OrgPolicy {
                     .and(users_organizations::user_uuid.eq(user_uuid)))
             )
             .filter(
-                users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+                users_organizations::status.eq(MembershipStatus::Confirmed as i32)
             )
             .select(org_policies::all_columns)
             .load::<OrgPolicyDb>(conn)
@@ -170,7 +185,11 @@ impl OrgPolicy {
         }}
     }
-    pub async fn find_by_org_and_type(org_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_org_and_type(
+        org_uuid: &OrganizationId,
+        policy_type: OrgPolicyType,
+        conn: &mut DbConn,
+    ) -> Option<Self> {
         db_run! { conn: {
             org_policies::table
                 .filter(org_policies::org_uuid.eq(org_uuid))
@@ -181,7 +200,7 @@ impl OrgPolicy {
         }}
     }
-    pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
                 .execute(conn)
@@ -190,7 +209,7 @@ impl OrgPolicy {
     }
     pub async fn find_accepted_and_confirmed_by_user_and_active_policy(
-        user_uuid: &str,
+        user_uuid: &UserId,
         policy_type: OrgPolicyType,
         conn: &mut DbConn,
     ) -> Vec<Self> {
@@ -202,10 +221,10 @@ impl OrgPolicy {
                     .and(users_organizations::user_uuid.eq(user_uuid)))
             )
             .filter(
-                users_organizations::status.eq(UserOrgStatus::Accepted as i32)
+                users_organizations::status.eq(MembershipStatus::Accepted as i32)
             )
             .or_filter(
-                users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+                users_organizations::status.eq(MembershipStatus::Confirmed as i32)
             )
             .filter(org_policies::atype.eq(policy_type as i32))
             .filter(org_policies::enabled.eq(true))
@@ -217,7 +236,7 @@ impl OrgPolicy {
     }
     pub async fn find_confirmed_by_user_and_active_policy(
-        user_uuid: &str,
+        user_uuid: &UserId,
         policy_type: OrgPolicyType,
         conn: &mut DbConn,
     ) -> Vec<Self> {
@@ -229,7 +248,7 @@ impl OrgPolicy {
                     .and(users_organizations::user_uuid.eq(user_uuid)))
             )
             .filter(
-                users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
+                users_organizations::status.eq(MembershipStatus::Confirmed as i32)
             )
             .filter(org_policies::atype.eq(policy_type as i32))
             .filter(org_policies::enabled.eq(true))
@@ -244,21 +263,21 @@ impl OrgPolicy {
     /// and the user is not an owner or admin of that org. This is only useful for checking
     /// applicability of policy types that have these particular semantics.
     pub async fn is_applicable_to_user(
-        user_uuid: &str,
+        user_uuid: &UserId,
         policy_type: OrgPolicyType,
-        exclude_org_uuid: Option<&str>,
+        exclude_org_uuid: Option<&OrganizationId>,
         conn: &mut DbConn,
     ) -> bool {
         for policy in
             OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(user_uuid, policy_type, conn).await
         {
             // Check if we need to skip this organization.
-            if exclude_org_uuid.is_some() && exclude_org_uuid.unwrap() == policy.org_uuid {
+            if exclude_org_uuid.is_some() && *exclude_org_uuid.unwrap() == policy.org_uuid {
                 continue;
             }
-            if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
-                if user.atype < UserOrgType::Admin {
+            if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
+                if user.atype < MembershipType::Admin {
                     return true;
                 }
             }
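The skip check above still uses `is_some()` plus `unwrap()`; the same logic can be written as a direct `Option` comparison, which cannot panic. A minimal sketch (the `OrganizationId` here is a hypothetical stand-in for the real newtype, which carries more derives):

```rust
// Hypothetical stand-in for the OrganizationId newtype.
#[derive(PartialEq)]
struct OrganizationId(String);

// The form used in the diff: explicit is_some() + unwrap() + deref.
fn skip_unwrap(exclude: Option<&OrganizationId>, policy_org: &OrganizationId) -> bool {
    exclude.is_some() && *exclude.unwrap() == *policy_org
}

// An equivalent, panic-free form: compare the Options directly.
fn skip_option_eq(exclude: Option<&OrganizationId>, policy_org: &OrganizationId) -> bool {
    exclude == Some(policy_org)
}
```

Both return `true` only when an exclusion is present and matches the policy's organization.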
@@ -267,8 +286,8 @@ impl OrgPolicy {
     }
     pub async fn is_user_allowed(
-        user_uuid: &str,
-        org_uuid: &str,
+        user_uuid: &UserId,
+        org_uuid: &OrganizationId,
         exclude_current_org: bool,
         conn: &mut DbConn,
     ) -> OrgPolicyResult {
@@ -296,7 +315,7 @@ impl OrgPolicy {
         Ok(())
     }
-    pub async fn org_is_reset_password_auto_enroll(org_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn org_is_reset_password_auto_enroll(org_uuid: &OrganizationId, conn: &mut DbConn) -> bool {
         match OrgPolicy::find_by_org_and_type(org_uuid, OrgPolicyType::ResetPassword, conn).await {
             Some(policy) => match serde_json::from_str::<ResetPasswordDataModel>(&policy.data) {
                 Ok(opts) => {
@@ -312,12 +331,12 @@ impl OrgPolicy {
     /// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
     /// option of the `Send Options` policy, and the user is not an owner or admin of that org.
-    pub async fn is_hide_email_disabled(user_uuid: &str, conn: &mut DbConn) -> bool {
+    pub async fn is_hide_email_disabled(user_uuid: &UserId, conn: &mut DbConn) -> bool {
         for policy in
             OrgPolicy::find_confirmed_by_user_and_active_policy(user_uuid, OrgPolicyType::SendOptions, conn).await
         {
-            if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
-                if user.atype < UserOrgType::Admin {
+            if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
+                if user.atype < MembershipType::Admin {
                     match serde_json::from_str::<SendOptionsPolicyData>(&policy.data) {
                         Ok(opts) => {
                             if opts.disable_hide_email {
@@ -332,12 +351,19 @@ impl OrgPolicy {
         false
     }
-    pub async fn is_enabled_for_member(org_user_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> bool {
-        if let Some(membership) = UserOrganization::find_by_uuid(org_user_uuid, conn).await {
-            if let Some(policy) = OrgPolicy::find_by_org_and_type(&membership.org_uuid, policy_type, conn).await {
+    pub async fn is_enabled_for_member(
+        member_uuid: &MembershipId,
+        policy_type: OrgPolicyType,
+        conn: &mut DbConn,
+    ) -> bool {
+        if let Some(member) = Membership::find_by_uuid(member_uuid, conn).await {
+            if let Some(policy) = OrgPolicy::find_by_org_and_type(&member.org_uuid, policy_type, conn).await {
                 return policy.enabled;
             }
         }
         false
     }
 }
+
+#[derive(Clone, Debug, AsRef, DieselNewType, From, FromForm, PartialEq, Eq, Hash, Serialize, Deserialize)]
+pub struct OrgPolicyId(String);
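The `OrgPolicyId(String)` newtype above is the core of this PR's ID refactor: wrapping each kind of UUID in its own type turns "passed the wrong UUID" from a runtime bug into a compile error. A minimal sketch of the pattern, without the `DieselNewType`/`FromForm` derive machinery the real types use:

```rust
// Minimal sketch of the newtype-ID pattern (the real types also derive
// DieselNewType, FromForm, Serialize, etc. via macros).
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct OrgPolicyId(pub String);

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct OrganizationId(pub String);

// A function that takes an OrganizationId can no longer be called with a
// bare String or an OrgPolicyId; mixing them up fails to compile.
fn policy_key(org: &OrganizationId, policy: &OrgPolicyId) -> String {
    format!("{}/{}", org.0, policy.0)
}
```

With plain `&str` parameters (the old signatures), swapping the two arguments would compile silently; with the newtypes it cannot.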

File diff suppressed because it is too large.


@@ -3,7 +3,8 @@ use serde_json::Value;
 use crate::util::LowerCase;
-use super::User;
+use super::{OrganizationId, User, UserId};
+use id::SendId;
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,11 +12,10 @@ db_object! {
     #[diesel(treat_none_as_null = true)]
     #[diesel(primary_key(uuid))]
     pub struct Send {
-        pub uuid: String,
-        pub user_uuid: Option<String>,
-        pub organization_uuid: Option<String>,
+        pub uuid: SendId,
+        pub user_uuid: Option<UserId>,
+        pub organization_uuid: Option<OrganizationId>,
         pub name: String,
         pub notes: Option<String>,
@@ -51,7 +51,7 @@ impl Send {
         let now = Utc::now().naive_utc();
         Self {
-            uuid: crate::util::get_uuid(),
+            uuid: SendId::from(crate::util::get_uuid()),
             user_uuid: None,
             organization_uuid: None,
@@ -243,7 +243,7 @@ impl Send {
         }
     }
-    pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
+    pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {
         let mut user_uuids = Vec::new();
         match &self.user_uuid {
             Some(user_uuid) => {
@@ -257,7 +257,7 @@ impl Send {
         user_uuids
     }
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         for send in Self::find_by_user(user_uuid, conn).await {
             send.delete(conn).await?;
         }
@@ -273,14 +273,14 @@ impl Send {
         };
         let uuid = match Uuid::from_slice(&uuid_vec) {
-            Ok(u) => u.to_string(),
+            Ok(u) => SendId::from(u.to_string()),
             Err(_) => return None,
         };
         Self::find_by_uuid(&uuid, conn).await
     }
-    pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid(uuid: &SendId, conn: &mut DbConn) -> Option<Self> {
         db_run! {conn: {
             sends::table
                 .filter(sends::uuid.eq(uuid))
@@ -290,7 +290,7 @@ impl Send {
         }}
     }
-    pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid_and_user(uuid: &SendId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
         db_run! {conn: {
             sends::table
                 .filter(sends::uuid.eq(uuid))
@@ -301,7 +301,7 @@ impl Send {
         }}
     }
-    pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! {conn: {
             sends::table
                 .filter(sends::user_uuid.eq(user_uuid))
@@ -309,7 +309,7 @@ impl Send {
         }}
     }
-    pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<i64> {
+    pub async fn size_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Option<i64> {
         let sends = Self::find_by_user(user_uuid, conn).await;
         #[derive(serde::Deserialize)]
@@ -332,7 +332,7 @@ impl Send {
         Some(total)
     }
-    pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
         db_run! {conn: {
             sends::table
                 .filter(sends::organization_uuid.eq(org_uuid))
@@ -349,3 +349,48 @@ impl Send {
         }}
     }
 }
+
+// separate namespace to avoid name collision with std::marker::Send
+pub mod id {
+    use derive_more::{AsRef, Deref, Display, From};
+    use macros::{IdFromParam, UuidFromParam};
+    use std::marker::Send;
+    use std::path::Path;
+
+    #[derive(
+        Clone,
+        Debug,
+        AsRef,
+        Deref,
+        DieselNewType,
+        Display,
+        From,
+        FromForm,
+        Hash,
+        PartialEq,
+        Eq,
+        Serialize,
+        Deserialize,
+        UuidFromParam,
+    )]
+    pub struct SendId(String);
+
+    impl AsRef<Path> for SendId {
+        #[inline]
+        fn as_ref(&self) -> &Path {
+            Path::new(&self.0)
+        }
+    }
+
+    #[derive(
+        Clone, Debug, AsRef, Deref, Display, From, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize, IdFromParam,
+    )]
+    pub struct SendFileId(String);
+
+    impl AsRef<Path> for SendFileId {
+        #[inline]
+        fn as_ref(&self) -> &Path {
+            Path::new(&self.0)
+        }
+    }
+}
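The manual `AsRef<Path>` impls above let a `SendId` be handed straight to path APIs (for example when locating a send's file on disk) without first converting to `&str`. A std-only sketch of the same mechanism (the `send_dir` helper is illustrative, not part of the codebase):

```rust
use std::path::{Path, PathBuf};

// Sketch: an id newtype usable directly as a path segment,
// mirroring the AsRef<Path> impls in the id module above.
pub struct SendId(String);

impl AsRef<Path> for SendId {
    fn as_ref(&self) -> &Path {
        Path::new(&self.0)
    }
}

// Path::join accepts any AsRef<Path>, so the id can be appended as-is.
fn send_dir(base: &Path, id: &SendId) -> PathBuf {
    base.join(id)
}
```

Without the impl, every call site would need `base.join(&id.0)` and could just as easily join some other string.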


@@ -1,5 +1,6 @@
 use serde_json::Value;
+use super::UserId;
 use crate::{api::EmptyResult, db::DbConn, error::MapResult};
 db_object! {
@@ -7,8 +8,8 @@ db_object! {
     #[diesel(table_name = twofactor)]
     #[diesel(primary_key(uuid))]
     pub struct TwoFactor {
-        pub uuid: String,
-        pub user_uuid: String,
+        pub uuid: TwoFactorId,
+        pub user_uuid: UserId,
         pub atype: i32,
         pub enabled: bool,
         pub data: String,
@@ -41,9 +42,9 @@ pub enum TwoFactorType {
 /// Local methods
 impl TwoFactor {
-    pub fn new(user_uuid: String, atype: TwoFactorType, data: String) -> Self {
+    pub fn new(user_uuid: UserId, atype: TwoFactorType, data: String) -> Self {
         Self {
-            uuid: crate::util::get_uuid(),
+            uuid: TwoFactorId(crate::util::get_uuid()),
             user_uuid,
             atype: atype as i32,
             enabled: true,
@@ -118,7 +119,7 @@ impl TwoFactor {
         }}
     }
-    pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
+    pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
         db_run! { conn: {
             twofactor::table
                 .filter(twofactor::user_uuid.eq(user_uuid))
@@ -129,7 +130,7 @@ impl TwoFactor {
         }}
     }
-    pub async fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_user_and_type(user_uuid: &UserId, atype: i32, conn: &mut DbConn) -> Option<Self> {
         db_run! { conn: {
             twofactor::table
                 .filter(twofactor::user_uuid.eq(user_uuid))
@@ -140,7 +141,7 @@ impl TwoFactor {
         }}
     }
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
                 .execute(conn)
@@ -217,3 +218,6 @@ impl TwoFactor {
         Ok(())
     }
 }
+
+#[derive(Clone, Debug, DieselNewType, FromForm, PartialEq, Eq, Hash, Serialize, Deserialize)]
+pub struct TwoFactorId(String);


@@ -1,17 +1,26 @@
 use chrono::{NaiveDateTime, Utc};
-use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONFIG};
+use crate::{
+    api::EmptyResult,
+    auth::ClientIp,
+    db::{
+        models::{DeviceId, UserId},
+        DbConn,
+    },
+    error::MapResult,
+    CONFIG,
+};
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
     #[diesel(table_name = twofactor_incomplete)]
     #[diesel(primary_key(user_uuid, device_uuid))]
     pub struct TwoFactorIncomplete {
-        pub user_uuid: String,
+        pub user_uuid: UserId,
         // This device UUID is simply what's claimed by the device. It doesn't
         // necessarily correspond to any UUID in the devices table, since a device
         // must complete 2FA login before being added into the devices table.
-        pub device_uuid: String,
+        pub device_uuid: DeviceId,
         pub device_name: String,
         pub device_type: i32,
         pub login_time: NaiveDateTime,
@@ -21,8 +30,8 @@ db_object! {
 impl TwoFactorIncomplete {
     pub async fn mark_incomplete(
-        user_uuid: &str,
-        device_uuid: &str,
+        user_uuid: &UserId,
+        device_uuid: &DeviceId,
         device_name: &str,
         device_type: i32,
         ip: &ClientIp,
@@ -55,7 +64,7 @@ impl TwoFactorIncomplete {
         }}
     }
-    pub async fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn mark_complete(user_uuid: &UserId, device_uuid: &DeviceId, conn: &mut DbConn) -> EmptyResult {
         if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
             return Ok(());
         }
@@ -63,7 +72,11 @@ impl TwoFactorIncomplete {
         Self::delete_by_user_and_device(user_uuid, device_uuid, conn).await
     }
-    pub async fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_user_and_device(
+        user_uuid: &UserId,
+        device_uuid: &DeviceId,
+        conn: &mut DbConn,
+    ) -> Option<Self> {
         db_run! { conn: {
             twofactor_incomplete::table
                 .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -88,7 +101,11 @@ impl TwoFactorIncomplete {
         Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn).await
     }
-    pub async fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_by_user_and_device(
+        user_uuid: &UserId,
+        device_uuid: &DeviceId,
+        conn: &mut DbConn,
+    ) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(twofactor_incomplete::table
                 .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -98,7 +115,7 @@ impl TwoFactorIncomplete {
         }}
     }
-    pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
+    pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
         db_run! { conn: {
             diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
                 .execute(conn)


@@ -1,9 +1,19 @@
-use crate::util::{format_date, get_uuid, retry};
 use chrono::{NaiveDateTime, TimeDelta, Utc};
+use derive_more::{AsRef, Deref, Display, From};
 use serde_json::Value;
-use crate::crypto;
-use crate::CONFIG;
+use super::{
+    Cipher, Device, EmergencyAccess, Favorite, Folder, Membership, MembershipType, TwoFactor, TwoFactorIncomplete,
+};
+use crate::{
+    api::EmptyResult,
+    crypto,
+    db::DbConn,
+    error::MapResult,
+    util::{format_date, get_uuid, retry},
+    CONFIG,
+};
+use macros::UuidFromParam;
 db_object! {
     #[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,7 +21,7 @@ db_object! {
     #[diesel(treat_none_as_null = true)]
     #[diesel(primary_key(uuid))]
     pub struct User {
-        pub uuid: String,
+        pub uuid: UserId,
         pub enabled: bool,
         pub created_at: NaiveDateTime,
         pub updated_at: NaiveDateTime,
@@ -91,7 +101,7 @@ impl User {
         let email = email.to_lowercase();
         Self {
-            uuid: get_uuid(),
+            uuid: UserId(get_uuid()),
             enabled: true,
             created_at: now,
             updated_at: now,
@@ -163,8 +173,8 @@ impl User {
     /// * `password` - A str which contains a hashed version of the users master password.
     /// * `new_key` - A String which contains the new aKey value of the users master password.
     /// * `allow_next_route` - A Option<Vec<String>> with the function names of the next allowed (rocket) routes.
     ///   These routes are able to use the previous stamp id for the next 2 minutes.
     ///   After these 2 minutes this stamp will expire.
     ///
     pub fn set_password(
         &mut self,
@@ -196,8 +206,8 @@ impl User {
     ///
     /// # Arguments
     /// * `route_exception` - A Vec<String> with the function names of the next allowed (rocket) routes.
     ///   These routes are able to use the previous stamp id for the next 2 minutes.
     ///   After these 2 minutes this stamp will expire.
     ///
     pub fn set_stamp_exception(&mut self, route_exception: Vec<String>) {
         let stamp_exception = UserStampException {
@@ -214,20 +224,11 @@ impl User {
     }
 }
-use super::{
-    Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, TwoFactorIncomplete, UserOrgType,
-    UserOrganization,
-};
-use crate::db::DbConn;
-use crate::api::EmptyResult;
-use crate::error::MapResult;
 /// Database methods
 impl User {
     pub async fn to_json(&self, conn: &mut DbConn) -> Value {
         let mut orgs_json = Vec::new();
-        for c in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
+        for c in Membership::find_confirmed_by_user(&self.uuid, conn).await {
             orgs_json.push(c.to_json(conn).await);
         }
@@ -248,7 +249,6 @@ impl User {
             "emailVerified": !CONFIG.mail_enabled() || self.verified_at.is_some(),
             "premium": true,
             "premiumFromOrganization": false,
-            "masterPasswordHint": self.password_hint,
             "culture": "en-US",
             "twoFactorEnabled": twofactor_enabled,
             "key": self.akey,
@@ -266,8 +266,8 @@ impl User {
     }
     pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
-        if self.email.trim().is_empty() {
-            err!("User email can't be empty")
+        if !crate::util::is_valid_email(&self.email) {
+            err!(format!("User email {} is not a valid email address", self.email))
        }
         self.updated_at = Utc::now().naive_utc();
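`save` now rejects anything `crate::util::is_valid_email` refuses instead of only empty strings. As a hedged illustration of the kind of check involved, here is a minimal shape-validator; it is an assumption for demonstration, not the real `is_valid_email`, which may apply different rules:

```rust
// Hedged sketch: a minimal shape-check, NOT vaultwarden's actual
// crate::util::is_valid_email implementation.
fn is_valid_email(email: &str) -> bool {
    let email = email.trim();
    // Require a non-empty local part and a dotted domain around a single '@'.
    match email.split_once('@') {
        Some((local, domain)) => {
            !local.is_empty() && domain.contains('.') && !domain.starts_with('.') && !domain.ends_with('.')
        }
        None => false,
    }
}
```

The key behavioral change is that strings like `"no-at-sign"` or `"user@nodot"`, which passed the old `!is_empty()` check, are now rejected before the row is saved.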
@@ -304,19 +304,18 @@ impl User {
     }
     pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
-        for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
-            if user_org.atype == UserOrgType::Owner
-                && UserOrganization::count_confirmed_by_org_and_type(&user_org.org_uuid, UserOrgType::Owner, conn).await
-                    <= 1
+        for member in Membership::find_confirmed_by_user(&self.uuid, conn).await {
+            if member.atype == MembershipType::Owner
+                && Membership::count_confirmed_by_org_and_type(&member.org_uuid, MembershipType::Owner, conn).await <= 1
             {
                 err!("Can't delete last owner")
             }
         }
-        Send::delete_all_by_user(&self.uuid, conn).await?;
+        super::Send::delete_all_by_user(&self.uuid, conn).await?;
         EmergencyAccess::delete_all_by_user(&self.uuid, conn).await?;
         EmergencyAccess::delete_all_by_grantee_email(&self.email, conn).await?;
-        UserOrganization::delete_all_by_user(&self.uuid, conn).await?;
+        Membership::delete_all_by_user(&self.uuid, conn).await?;
         Cipher::delete_all_by_user(&self.uuid, conn).await?;
         Favorite::delete_all_by_user(&self.uuid, conn).await?;
         Folder::delete_all_by_user(&self.uuid, conn).await?;
@@ -332,9 +331,9 @@ impl User {
         }}
     }
-    pub async fn update_uuid_revision(uuid: &str, conn: &mut DbConn) {
+    pub async fn update_uuid_revision(uuid: &UserId, conn: &mut DbConn) {
         if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
-            warn!("Failed to update revision for {}: {:#?}", uuid, e);
+            warn!("Failed to update revision for {uuid}: {e:#?}");
        }
     }
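The `warn!` change above switches from positional arguments to inline captured identifiers, available since Rust 1.58. A small sketch showing the two forms are equivalent (`revision_warning` is an illustrative helper, not project code):

```rust
// Illustrative helper: the same message built both ways.
fn revision_warning(uuid: &str, e: &str) -> String {
    // Positional form: format!("Failed to update revision for {}: {}", uuid, e)
    // Inline-capture form (Rust 1.58+):
    format!("Failed to update revision for {uuid}: {e}")
}
```

Inline capture keeps the variable next to its placeholder, which is harder to get wrong when a format string has several arguments.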
@@ -357,7 +356,7 @@ impl User {
         Self::_update_revision(&self.uuid, &self.updated_at, conn).await
     }
-    async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
+    async fn _update_revision(uuid: &UserId, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
         db_run! {conn: {
             retry(|| {
                 diesel::update(users::table.filter(users::uuid.eq(uuid)))
@@ -379,7 +378,7 @@ impl User {
         }}
     }
-    pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
+    pub async fn find_by_uuid(uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
         db_run! {conn: {
             users::table.filter(users::uuid.eq(uuid)).first::<UserDb>(conn).ok().from_db()
         }}
@@ -408,8 +407,8 @@ impl Invitation {
     }
     pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
-        if self.email.trim().is_empty() {
-            err!("Invitation email can't be empty")
+        if !crate::util::is_valid_email(&self.email) {
+            err!(format!("Invitation email {} is not a valid email address", self.email))
         }
         db_run! {conn:
@@ -458,3 +457,23 @@ impl Invitation {
} }
} }
} }
#[derive(
Clone,
Debug,
DieselNewType,
FromForm,
PartialEq,
Eq,
Hash,
Serialize,
Deserialize,
AsRef,
Deref,
Display,
From,
UuidFromParam,
)]
#[deref(forward)]
#[from(forward)]
pub struct UserId(String);
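The `UserId` newtype above gets its trait implementations from derive macros (`DieselNewType`, `Deref`, `Display`, and so on). As a rough illustration of why changing `&str` parameters to `&UserId` helps, here is a std-only sketch of the same pattern with the derives written out by hand; the `From<&str>` impl and the uuid literal are illustrative, not from the codebase:

```rust
use std::fmt;
use std::ops::Deref;

// A newtype over String: the compiler now rejects passing a raw &str
// (or some other id type) where a &UserId is expected.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct UserId(String);

impl From<&str> for UserId {
    fn from(s: &str) -> Self {
        UserId(s.to_string())
    }
}

// Deref lets &UserId coerce to &str where needed, mirroring #[deref(forward)].
impl Deref for UserId {
    type Target = str;
    fn deref(&self) -> &str {
        &self.0
    }
}

impl fmt::Display for UserId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&self.0)
    }
}

fn main() {
    let id = UserId::from("3fa85f64-5717-4562-b3fc-2c963f66afa6");
    // Display and Deref both expose the inner string.
    assert_eq!(id.to_string(), "3fa85f64-5717-4562-b3fc-2c963f66afa6");
    assert_eq!(&*id, "3fa85f64-5717-4562-b3fc-2c963f66afa6");
    println!("user id: {id}");
}
```

The payoff is type safety at call sites: a `MembershipId` can no longer be passed where a `UserId` is expected, while `Deref`/`Display` keep the ergonomics of a plain string.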


@@ -226,6 +226,7 @@ table! {
         collection_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }
@@ -295,6 +296,7 @@ table! {
         groups_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }


@@ -226,6 +226,7 @@ table! {
         collection_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }
@@ -295,6 +296,7 @@ table! {
         groups_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }


@@ -226,6 +226,7 @@ table! {
         collection_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }
@@ -295,6 +296,7 @@ table! {
         groups_uuid -> Text,
         read_only -> Bool,
         hide_passwords -> Bool,
+        manage -> Bool,
     }
 }


@@ -59,6 +59,8 @@ use yubico::yubicoerror::YubicoError as YubiErr;
 #[derive(Serialize)]
 pub struct Empty {}
+pub struct Compact {}
 // Error struct
 // Contains a String error message, meant for the user and an enum variant, with an error of different types.
 //
@@ -69,6 +71,7 @@ make_error! {
     Empty(Empty): _no_source, _serialize,
     // Used to represent err! calls
     Simple(String): _no_source, _api_error,
+    Compact(Compact): _no_source, _api_error_small,
     // Used in our custom http client to handle non-global IPs and blocked domains
     CustomHttpClient(CustomHttpClientError): _has_source, _api_error,
@@ -132,6 +135,12 @@ impl Error {
         self
     }
+    #[must_use]
+    pub fn with_kind(mut self, kind: ErrorKind) -> Self {
+        self.error = kind;
+        self
+    }
     #[must_use]
     pub const fn with_code(mut self, code: u16) -> Self {
         self.error_code = code;
@@ -200,6 +209,18 @@ fn _api_error(_: &impl std::any::Any, msg: &str) -> String {
     _serialize(&json, "")
 }
+fn _api_error_small(_: &impl std::any::Any, msg: &str) -> String {
+    let json = json!({
+        "message": msg,
+        "validationErrors": null,
+        "exceptionMessage": null,
+        "exceptionStackTrace": null,
+        "innerExceptionMessage": null,
+        "object": "error"
+    });
+    _serialize(&json, "")
+}
 //
 // Rocket responder impl
 //
@@ -212,9 +233,8 @@ use rocket::response::{self, Responder, Response};
 impl Responder<'_, 'static> for Error {
     fn respond_to(self, _: &Request<'_>) -> response::Result<'static> {
         match self.error {
-            ErrorKind::Empty(_) => {} // Don't print the error in this situation
-            ErrorKind::Simple(_) => {} // Don't print the error in this situation
-            _ => error!(target: "error", "{:#?}", self),
+            ErrorKind::Empty(_) | ErrorKind::Simple(_) | ErrorKind::Compact(_) => {} // Don't print the error in this situation
+            _ => error!(target: "error", "{self:#?}"),
         };
         let code = Status::from_code(self.error_code).unwrap_or(Status::BadRequest);
@@ -228,6 +248,10 @@ impl Responder<'_, 'static> for Error {
 //
 #[macro_export]
 macro_rules! err {
+    ($kind:ident, $msg:expr) => {{
+        error!("{}", $msg);
+        return Err($crate::error::Error::new($msg, $msg).with_kind($crate::error::ErrorKind::$kind($crate::error::$kind {})));
+    }};
     ($msg:expr) => {{
         error!("{}", $msg);
         return Err($crate::error::Error::new($msg, $msg));
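The new `Compact` error kind serializes through `_api_error_small`, which emits a flat error object instead of the larger payload built by `_api_error`. A std-only sketch of the resulting JSON shape, hand-rolling the string instead of using `serde_json`'s `json!` macro; the helper name `api_error_small` and the sample message are illustrative:

```rust
// Build the compact error JSON by hand; the real code uses the json! macro
// and serde serialization.
fn api_error_small(msg: &str) -> String {
    format!(
        concat!(
            "{{\"message\":\"{}\",",
            "\"validationErrors\":null,",
            "\"exceptionMessage\":null,",
            "\"exceptionStackTrace\":null,",
            "\"innerExceptionMessage\":null,",
            "\"object\":\"error\"}}"
        ),
        msg.replace('"', "\\\"") // naive escaping, enough for a sketch
    )
}

fn main() {
    let body = api_error_small("MFA code is not correct");
    // The compact form is a single flat object tagged "object": "error".
    assert!(body.starts_with("{\"message\":\"MFA code is not correct\""));
    assert!(body.ends_with("\"object\":\"error\"}"));
    println!("{body}");
}
```

Newer clients expect this flat shape for some endpoints, which is presumably why the `err!($kind, $msg)` arm was added to route specific errors through `Compact`.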


@@ -6,7 +6,7 @@ use std::{
     time::Duration,
 };
-use hickory_resolver::{system_conf::read_system_conf, TokioAsyncResolver};
+use hickory_resolver::{name_server::TokioConnectionProvider, TokioResolver};
 use once_cell::sync::Lazy;
 use regex::Regex;
 use reqwest::{
@@ -173,7 +173,7 @@ impl std::error::Error for CustomHttpClientError {}
 #[derive(Debug, Clone)]
 enum CustomDnsResolver {
     Default(),
-    Hickory(Arc<TokioAsyncResolver>),
+    Hickory(Arc<TokioResolver>),
 }
 type BoxError = Box<dyn std::error::Error + Send + Sync>;
@@ -184,9 +184,9 @@ impl CustomDnsResolver {
     }
     fn new() -> Arc<Self> {
-        match read_system_conf() {
-            Ok((config, opts)) => {
-                let resolver = TokioAsyncResolver::tokio(config.clone(), opts.clone());
+        match TokioResolver::builder(TokioConnectionProvider::default()) {
+            Ok(builder) => {
+                let resolver = builder.build();
                 Arc::new(Self::Hickory(Arc::new(resolver)))
             }
             Err(e) => {


@@ -1,7 +1,6 @@
-use std::str::FromStr;
 use chrono::NaiveDateTime;
 use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
+use std::{env::consts::EXE_SUFFIX, str::FromStr};
 use lettre::{
     message::{Attachment, Body, Mailbox, Message, MultiPart, SinglePart},
@@ -17,7 +16,7 @@ use crate::{
         encode_jwt, generate_delete_claims, generate_emergency_access_invite_claims, generate_invite_claims,
         generate_verify_email_claims,
     },
-    db::models::{Device, DeviceType, User},
+    db::models::{Device, DeviceType, EmergencyAccessId, MembershipId, OrganizationId, User, UserId},
     error::Error,
     CONFIG,
 };
@@ -26,7 +25,7 @@ fn sendmail_transport() -> AsyncSendmailTransport<Tokio1Executor> {
     if let Some(command) = CONFIG.sendmail_command() {
         AsyncSendmailTransport::new_with_command(command)
     } else {
-        AsyncSendmailTransport::new()
+        AsyncSendmailTransport::new_with_command(format!("sendmail{EXE_SUFFIX}"))
     }
 }
@@ -86,7 +85,7 @@ fn smtp_transport() -> AsyncSmtpTransport<Tokio1Executor> {
         smtp_client.authentication(selected_mechanisms)
     } else {
         // Only show a warning, and return without setting an actual authentication mechanism
-        warn!("No valid SMTP Auth mechanism found for '{}', using default values", mechanism);
+        warn!("No valid SMTP Auth mechanism found for '{mechanism}', using default values");
         smtp_client
     }
 }
@@ -166,8 +165,8 @@ pub async fn send_password_hint(address: &str, hint: Option<String>) -> EmptyRes
     send_email(address, &subject, body_html, body_text).await
 }
-pub async fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
-    let claims = generate_delete_claims(uuid.to_string());
+pub async fn send_delete_account(address: &str, user_id: &UserId) -> EmptyResult {
+    let claims = generate_delete_claims(user_id.to_string());
     let delete_token = encode_jwt(&claims);
     let (subject, body_html, body_text) = get_text(
@@ -175,7 +174,7 @@ pub async fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
         json!({
             "url": CONFIG.domain(),
             "img_src": CONFIG._smtp_img_src(),
-            "user_id": uuid,
+            "user_id": user_id,
             "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
             "token": delete_token,
         }),
@@ -184,8 +183,8 @@ pub async fn send_delete_account(address: &str, uuid: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text).await
 }
-pub async fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
-    let claims = generate_verify_email_claims(uuid.to_string());
+pub async fn send_verify_email(address: &str, user_id: &UserId) -> EmptyResult {
+    let claims = generate_verify_email_claims(user_id.clone());
     let verify_email_token = encode_jwt(&claims);
     let (subject, body_html, body_text) = get_text(
@@ -193,7 +192,7 @@ pub async fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
         json!({
             "url": CONFIG.domain(),
             "img_src": CONFIG._smtp_img_src(),
-            "user_id": uuid,
+            "user_id": user_id,
             "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
             "token": verify_email_token,
         }),
@@ -202,6 +201,27 @@ pub async fn send_verify_email(address: &str, uuid: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text).await
 }
+pub async fn send_register_verify_email(email: &str, token: &str) -> EmptyResult {
+    let mut query = url::Url::parse("https://query.builder").unwrap();
+    query.query_pairs_mut().append_pair("email", email).append_pair("token", token);
+    let query_string = match query.query() {
+        None => err!("Failed to build verify URL query parameters"),
+        Some(query) => query,
+    };
+    let (subject, body_html, body_text) = get_text(
+        "email/register_verify_email",
+        json!({
+            // `url.Url` would place the anchor `#` after the query parameters
+            "url": format!("{}/#/finish-signup/?{query_string}", CONFIG.domain()),
+            "img_src": CONFIG._smtp_img_src(),
+            "email": email,
+        }),
+    )?;
+    send_email(email, &subject, body_html, body_text).await
+}
 pub async fn send_welcome(address: &str) -> EmptyResult {
     let (subject, body_html, body_text) = get_text(
         "email/welcome",
@@ -214,8 +234,8 @@ pub async fn send_welcome(address: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text).await
 }
-pub async fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult {
-    let claims = generate_verify_email_claims(uuid.to_string());
+pub async fn send_welcome_must_verify(address: &str, user_id: &UserId) -> EmptyResult {
+    let claims = generate_verify_email_claims(user_id.clone());
     let verify_email_token = encode_jwt(&claims);
     let (subject, body_html, body_text) = get_text(
@@ -223,7 +243,7 @@ pub async fn send_welcome_must_verify(address: &str, uuid: &str) -> EmptyResult
         json!({
             "url": CONFIG.domain(),
             "img_src": CONFIG._smtp_img_src(),
-            "user_id": uuid,
+            "user_id": user_id,
             "token": verify_email_token,
         }),
     )?;
@@ -259,8 +279,8 @@ pub async fn send_single_org_removed_from_org(address: &str, org_name: &str) ->
 pub async fn send_invite(
     user: &User,
-    org_id: Option<String>,
-    org_user_id: Option<String>,
+    org_id: OrganizationId,
+    member_id: MembershipId,
     org_name: &str,
     invited_by_email: Option<String>,
 ) -> EmptyResult {
@@ -268,7 +288,7 @@ pub async fn send_invite(
         user.uuid.clone(),
         user.email.clone(),
         org_id.clone(),
-        org_user_id.clone(),
+        member_id.clone(),
         invited_by_email,
     );
     let invite_token = encode_jwt(&claims);
@@ -278,8 +298,8 @@ pub async fn send_invite(
         query_params
             .append_pair("email", &user.email)
             .append_pair("organizationName", org_name)
-            .append_pair("organizationId", org_id.as_deref().unwrap_or("_"))
-            .append_pair("organizationUserId", org_user_id.as_deref().unwrap_or("_"))
+            .append_pair("organizationId", &org_id)
+            .append_pair("organizationUserId", &member_id)
             .append_pair("token", &invite_token);
         if user.private_key.is_some() {
             query_params.append_pair("orgUserHasExistingUser", "true");
@@ -294,7 +314,7 @@ pub async fn send_invite(
         "email/send_org_invite",
         json!({
             // `url.Url` would place the anchor `#` after the query parameters
-            "url": format!("{}/#/accept-organization/?{}", CONFIG.domain(), query_string),
+            "url": format!("{}/#/accept-organization/?{query_string}", CONFIG.domain()),
             "img_src": CONFIG._smtp_img_src(),
             "org_name": org_name,
         }),
@@ -305,15 +325,15 @@
 pub async fn send_emergency_access_invite(
     address: &str,
-    uuid: &str,
-    emer_id: &str,
+    user_id: UserId,
+    emer_id: EmergencyAccessId,
     grantor_name: &str,
     grantor_email: &str,
 ) -> EmptyResult {
     let claims = generate_emergency_access_invite_claims(
-        String::from(uuid),
+        user_id,
         String::from(address),
-        String::from(emer_id),
+        emer_id.clone(),
         String::from(grantor_name),
         String::from(grantor_email),
     );
@@ -323,7 +343,7 @@ pub async fn send_emergency_access_invite(
     {
         let mut query_params = query.query_pairs_mut();
         query_params
-            .append_pair("id", emer_id)
+            .append_pair("id", &emer_id.to_string())
            .append_pair("name", grantor_name)
            .append_pair("email", address)
            .append_pair("token", &encode_jwt(&claims));
@@ -550,6 +570,20 @@ pub async fn send_change_email(address: &str, token: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text).await
 }
+pub async fn send_change_email_existing(address: &str, acting_address: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/change_email_existing",
+        json!({
+            "url": CONFIG.domain(),
+            "img_src": CONFIG._smtp_img_src(),
+            "existing_address": address,
+            "acting_address": acting_address,
+        }),
+    )?;
+    send_email(address, &subject, body_html, body_text).await
+}
 pub async fn send_test(address: &str) -> EmptyResult {
     let (subject, body_html, body_text) = get_text(
         "email/smtp_test",
@@ -595,13 +629,13 @@ async fn send_with_selected_transport(email: Message) -> EmptyResult {
             // Match some common errors and make them more user friendly
             Err(e) => {
                 if e.is_client() {
-                    debug!("Sendmail client error: {:#?}", e);
+                    debug!("Sendmail client error: {e:?}");
                     err!(format!("Sendmail client error: {e}"));
                 } else if e.is_response() {
-                    debug!("Sendmail response error: {:#?}", e);
+                    debug!("Sendmail response error: {e:?}");
                     err!(format!("Sendmail response error: {e}"));
                 } else {
-                    debug!("Sendmail error: {:#?}", e);
+                    debug!("Sendmail error: {e:?}");
                     err!(format!("Sendmail error: {e}"));
                 }
             }
@@ -612,13 +646,13 @@ async fn send_with_selected_transport(email: Message) -> EmptyResult {
             // Match some common errors and make them more user friendly
             Err(e) => {
                 if e.is_client() {
-                    debug!("SMTP client error: {:#?}", e);
+                    debug!("SMTP client error: {e:#?}");
                     err!(format!("SMTP client error: {e}"));
                 } else if e.is_transient() {
-                    debug!("SMTP 4xx error: {:#?}", e);
+                    debug!("SMTP 4xx error: {e:#?}");
                     err!(format!("SMTP 4xx error: {e}"));
                 } else if e.is_permanent() {
-                    debug!("SMTP 5xx error: {:#?}", e);
+                    debug!("SMTP 5xx error: {e:#?}");
                     let mut msg = e.to_string();
                     // Add a special check for 535 to add a more descriptive message
                     if msg.contains("(535)") {
@@ -626,13 +660,13 @@ async fn send_with_selected_transport(email: Message) -> EmptyResult {
                     }
                     err!(format!("SMTP 5xx error: {msg}"));
                 } else if e.is_timeout() {
-                    debug!("SMTP timeout error: {:#?}", e);
+                    debug!("SMTP timeout error: {e:#?}");
                     err!(format!("SMTP timeout error: {e}"));
                 } else if e.is_tls() {
-                    debug!("SMTP encryption error: {:#?}", e);
+                    debug!("SMTP encryption error: {e:#?}");
                    err!(format!("SMTP encryption error: {e}"));
                 } else {
-                    debug!("SMTP error: {:#?}", e);
+                    debug!("SMTP error: {e:#?}");
                     err!(format!("SMTP error: {e}"));
                 }
             }
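Several of the mail functions above build their links by percent-encoding the address with `NON_ALPHANUMERIC` and appending query pairs via `url::Url`. A std-only sketch of that encoding step; the `percent_encode_na` helper and the example URL are illustrative, not part of the crate:

```rust
// Percent-encode every byte that is not ASCII alphanumeric,
// mirroring percent_encoding::NON_ALPHANUMERIC.
fn percent_encode_na(input: &str) -> String {
    let mut out = String::with_capacity(input.len() * 3);
    for b in input.bytes() {
        if b.is_ascii_alphanumeric() {
            out.push(b as char);
        } else {
            out.push_str(&format!("%{b:02X}"));
        }
    }
    out
}

fn main() {
    let encoded = percent_encode_na("user@example.com");
    // '@' and '.' are both encoded, so the address is safe in a query string.
    assert_eq!(encoded, "user%40example%2Ecom");
    let url = format!("https://vault.example/#/accept-organization/?email={encoded}");
    println!("{url}");
}
```

Encoding everything non-alphanumeric is stricter than the query-string minimum, which is why the code comments note that `url::Url` would misplace the `#` anchor and the fragment is formatted in by hand instead.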


@@ -24,6 +24,8 @@ extern crate log;
 extern crate diesel;
 #[macro_use]
 extern crate diesel_migrations;
+#[macro_use]
+extern crate diesel_derive_newtype;
 use std::{
     collections::HashMap,
@@ -428,10 +430,7 @@ fn init_logging() -> Result<log::LevelFilter, Error> {
         }
         None => error!(
             target: "panic",
-            "thread '{}' panicked at '{}'\n{:}",
-            thread,
-            msg,
-            backtrace
+            "thread '{thread}' panicked at '{msg}'\n{backtrace:}"
         ),
     }
 }));
@@ -451,7 +450,7 @@ fn chain_syslog(logger: fern::Dispatch) -> fern::Dispatch {
     match syslog::unix(syslog_fmt) {
         Ok(sl) => logger.chain(sl),
         Err(e) => {
-            error!("Unable to connect to syslog: {:?}", e);
+            error!("Unable to connect to syslog: {e:?}");
             logger
         }
     }
@@ -467,7 +466,7 @@ async fn check_data_folder() {
     let data_folder = &CONFIG.data_folder();
     let path = Path::new(data_folder);
     if !path.exists() {
-        error!("Data folder '{}' doesn't exist.", data_folder);
+        error!("Data folder '{data_folder}' doesn't exist.");
         if is_running_in_container() {
             error!("Verify that your data volume is mounted at the correct location.");
         } else {
@@ -476,7 +475,7 @@ async fn check_data_folder() {
         exit(1);
     }
     if !path.is_dir() {
-        error!("Data folder '{}' is not a directory.", data_folder);
+        error!("Data folder '{data_folder}' is not a directory.");
         exit(1);
     }
@@ -550,7 +549,7 @@ async fn create_db_pool() -> db::DbPool {
     match util::retry_db(db::DbPool::from_config, CONFIG.db_connection_retries()).await {
         Ok(p) => p,
         Err(e) => {
-            error!("Error creating database pool: {:?}", e);
+            error!("Error creating database pool: {e:?}");
             exit(1);
         }
     }


@@ -38,8 +38,8 @@ img {
     max-width: 130px;
 }
 #users-table .vw-actions, #orgs-table .vw-actions {
-    min-width: 135px;
-    max-width: 140px;
+    min-width: 155px;
+    max-width: 160px;
 }
 #users-table .vw-org-cell {
     max-height: 120px;


@@ -29,7 +29,7 @@ function isValidIp(ip) {
     return ipv4Regex.test(ip) || ipv6Regex.test(ip);
 }
-function checkVersions(platform, installed, latest, commit=null) {
+function checkVersions(platform, installed, latest, commit=null, pre_release=false) {
     if (installed === "-" || latest === "-") {
         document.getElementById(`${platform}-failed`).classList.remove("d-none");
         return;
@@ -37,10 +37,12 @@ function checkVersions(platform, installed, latest, commit=null) {
     // Only check basic versions, no commit revisions
     if (commit === null || installed.indexOf("-") === -1) {
-        if (installed !== latest) {
-            document.getElementById(`${platform}-warning`).classList.remove("d-none");
-        } else {
+        if (platform === "web" && pre_release === true) {
+            document.getElementById(`${platform}-prerelease`).classList.remove("d-none");
+        } else if (installed == latest) {
             document.getElementById(`${platform}-success`).classList.remove("d-none");
+        } else {
+            document.getElementById(`${platform}-warning`).classList.remove("d-none");
         }
     } else {
         // Check if this is a branched version.
@@ -86,7 +88,7 @@ async function generateSupportString(event, dj) {
     supportString += `* Running within a container: ${dj.running_within_container} (Base: ${dj.container_base_image})\n`;
     supportString += `* Database type: ${dj.db_type}\n`;
     supportString += `* Database version: ${dj.db_version}\n`;
-    supportString += `* Environment settings overridden!: ${dj.overrides !== ""}\n`;
+    supportString += `* Uses config.json: ${dj.overrides !== ""}\n`;
     supportString += `* Uses a reverse proxy: ${dj.ip_header_exists}\n`;
     if (dj.ip_header_exists) {
         supportString += `* IP Header check: ${dj.ip_header_match} (${dj.ip_header_name})\n`;
@@ -94,6 +96,9 @@ async function generateSupportString(event, dj) {
     supportString += `* Internet access: ${dj.has_http_access}\n`;
     supportString += `* Internet access via a proxy: ${dj.uses_proxy}\n`;
     supportString += `* DNS Check: ${dnsCheck}\n`;
+    if (dj.tz_env !== "") {
+        supportString += `* TZ environment: ${dj.tz_env}\n`;
+    }
     supportString += `* Browser/Server Time Check: ${timeCheck}\n`;
     supportString += `* Server/NTP Time Check: ${ntpTimeCheck}\n`;
     supportString += `* Domain Configuration Check: ${domainCheck}\n`;
@@ -206,7 +211,7 @@ function initVersionCheck(dj) {
     if (!dj.running_within_container) {
         const webInstalled = dj.web_vault_version;
         const webLatest = dj.latest_web_build;
-        checkVersions("web", webInstalled, webLatest);
+        checkVersions("web", webInstalled, webLatest, null, dj.web_vault_pre_release);
     }
 }
@@ -236,8 +241,11 @@ function checkSecurityHeaders(headers, omit) {
         "referrer-policy": ["same-origin"],
         "x-xss-protection": ["0"],
         "x-robots-tag": ["noindex", "nofollow"],
+        "cross-origin-resource-policy": ["same-origin"],
         "content-security-policy": [
-            "default-src 'self'",
+            "default-src 'none'",
+            "font-src 'self'",
+            "manifest-src 'self'",
             "base-uri 'self'",
             "form-action 'self'",
             "object-src 'self' blob:",


@@ -152,7 +152,7 @@ const ORG_TYPES = {
         "name": "User",
         "bg": "blue"
     },
-    "3": {
+    "4": {
         "name": "Manager",
         "bg": "green"
     },


@@ -1,6 +1,6 @@
 /*!
- * Bootstrap v5.3.3 (https://getbootstrap.com/)
- * Copyright 2011-2024 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
+ * Bootstrap v5.3.6 (https://getbootstrap.com/)
+ * Copyright 2011-2025 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
  * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
  */
 (function (global, factory) {
@@ -205,7 +205,7 @@
  * @param {HTMLElement} element
  * @return void
  *
- * @see https://www.charistheo.io/blog/2021/02/restart-a-css-animation-with-javascript/#restarting-a-css-animation
+ * @see https://www.harrytheo.com/blog/2021/02/restart-a-css-animation-with-javascript/#restarting-a-css-animation
  */
 const reflow = element => {
     element.offsetHeight; // eslint-disable-line no-unused-expressions
@@ -250,7 +250,7 @@
     });
 };
 const execute = (possibleCallback, args = [], defaultValue = possibleCallback) => {
-    return typeof possibleCallback === 'function' ? possibleCallback(...args) : defaultValue;
+    return typeof possibleCallback === 'function' ? possibleCallback.call(...args) : defaultValue;
 };
 const executeAfterTransition = (callback, transitionElement, waitForTransition = true) => {
     if (!waitForTransition) {
@@ -572,7 +572,7 @@
 const bsKeys = Object.keys(element.dataset).filter(key => key.startsWith('bs') && !key.startsWith('bsConfig'));
 for (const key of bsKeys) {
     let pureKey = key.replace(/^bs/, '');
-    pureKey = pureKey.charAt(0).toLowerCase() + pureKey.slice(1, pureKey.length);
+    pureKey = pureKey.charAt(0).toLowerCase() + pureKey.slice(1);
     attributes[pureKey] = normalizeData(element.dataset[key]);
 }
 return attributes;
@@ -647,7 +647,7 @@
  * Constants
  */
-const VERSION = '5.3.3';
+const VERSION = '5.3.6';
 /**
  * Class definition
@@ -673,6 +673,8 @@
         this[propertyName] = null;
     }
 }
+// Private
 _queueCallback(callback, element, isAnimated = true) {
     executeAfterTransition(callback, element, isAnimated);
 }
@@ -1604,11 +1606,11 @@
     this._element.style[dimension] = '';
     this._queueCallback(complete, this._element, true);
 }
+// Private
 _isShown(element = this._element) {
     return element.classList.contains(CLASS_NAME_SHOW$7);
 }
-// Private
 _configAfterMerge(config) {
     config.toggle = Boolean(config.toggle); // Coerce string values
     config.parent = getElement(config.parent);
@@ -2666,7 +2668,6 @@
 var popperOffsets = computeOffsets({
     reference: referenceClientRect,
     element: popperRect,
-    strategy: 'absolute',
     placement: placement
 });
 var popperClientRect = rectToClientRect(Object.assign({}, popperRect, popperOffsets));
@@ -2994,7 +2995,6 @@
 state.modifiersData[name] = computeOffsets({
     reference: state.rects.reference,
     element: state.rects.popper,
-    strategy: 'absolute',
     placement: state.placement
 });
 } // eslint-disable-next-line import/no-unused-modules
@@ -3690,6 +3690,9 @@
 this._element.setAttribute('aria-expanded', 'false');
 Manipulator.removeDataAttribute(this._menu, 'popper');
 EventHandler.trigger(this._element, EVENT_HIDDEN$5, relatedTarget);
+// Explicitly return focus to the trigger element
+this._element.focus();
 }
 _getConfig(config) {
     config = super._getConfig(config);
@@ -3701,7 +3704,7 @@
 }
 _createPopper() {
     if (typeof Popper === 'undefined') {
-        throw new TypeError('Bootstrap\'s dropdowns require Popper (https://popper.js.org)');
+        throw new TypeError('Bootstrap\'s dropdowns require Popper (https://popper.js.org/docs/v2/)');
     }
     let referenceElement = this._element;
     if (this._config.reference === 'parent') {
@@ -3780,7 +3783,7 @@
 }
 return {
     ...defaultBsPopperConfig,
-    ...execute(this._config.popperConfig, [defaultBsPopperConfig])
+    ...execute(this._config.popperConfig, [undefined, defaultBsPopperConfig])
 };
 }
 _selectMenuItem({
@@ -4967,7 +4970,7 @@
 return this._config.sanitize ? sanitizeHtml(arg, this._config.allowList, this._config.sanitizeFn) : arg;
 }
 _resolvePossibleFunction(arg) {
-    return execute(arg, [this]);
+    return execute(arg, [undefined, this]);
 }
 _putElementInTemplate(element, templateElement) {
     if (this._config.html) {
@@ -5066,7 +5069,7 @@
 class Tooltip extends BaseComponent {
     constructor(element, config) {
         if (typeof Popper === 'undefined') {
-            throw new TypeError('Bootstrap\'s tooltips require Popper (https://popper.js.org)');
+            throw new TypeError('Bootstrap\'s tooltips require Popper (https://popper.js.org/docs/v2/)');
         }
         super(element, config);
@@ -5112,7 +5115,6 @@
 if (!this._isEnabled) {
     return;
 }
-this._activeTrigger.click = !this._activeTrigger.click;
 if (this._isShown()) {
     this._leave();
     return;
@@ -5300,7 +5302,7 @@
 return offset;
 }
 _resolvePossibleFunction(arg) {
-    return execute(arg, [this._element]);
+    return execute(arg, [this._element, this._element]);
 }
 _getPopperConfig(attachment) {
     const defaultBsPopperConfig = {
@@ -5338,7 +5340,7 @@
 };
 return {
     ...defaultBsPopperConfig,
-    ...execute(this._config.popperConfig, [defaultBsPopperConfig])
+    ...execute(this._config.popperConfig, [undefined, defaultBsPopperConfig])
 };
 }
 _setListeners() {
@@ -6212,7 +6214,6 @@
 }
 // Private
 _maybeScheduleHide() {
     if (!this._config.autohide) {
         return;


@@ -1,7 +1,7 @@
@charset "UTF-8"; @charset "UTF-8";
/*! /*!
* Bootstrap v5.3.3 (https://getbootstrap.com/) * Bootstrap v5.3.6 (https://getbootstrap.com/)
* Copyright 2011-2024 The Bootstrap Authors * Copyright 2011-2025 The Bootstrap Authors
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/ */
:root, :root,
@@ -517,8 +517,8 @@ legend {
width: 100%; width: 100%;
padding: 0; padding: 0;
margin-bottom: 0.5rem; margin-bottom: 0.5rem;
font-size: calc(1.275rem + 0.3vw);
line-height: inherit; line-height: inherit;
font-size: calc(1.275rem + 0.3vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
legend { legend {
@@ -601,9 +601,9 @@ progress {
} }
.display-1 { .display-1 {
font-size: calc(1.625rem + 4.5vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.625rem + 4.5vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-1 { .display-1 {
@@ -612,9 +612,9 @@ progress {
} }
.display-2 { .display-2 {
font-size: calc(1.575rem + 3.9vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.575rem + 3.9vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-2 { .display-2 {
@@ -623,9 +623,9 @@ progress {
} }
.display-3 { .display-3 {
font-size: calc(1.525rem + 3.3vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.525rem + 3.3vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-3 { .display-3 {
@@ -634,9 +634,9 @@ progress {
} }
.display-4 { .display-4 {
font-size: calc(1.475rem + 2.7vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.475rem + 2.7vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-4 { .display-4 {
@@ -645,9 +645,9 @@ progress {
} }
.display-5 { .display-5 {
font-size: calc(1.425rem + 2.1vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.425rem + 2.1vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-5 { .display-5 {
@@ -656,9 +656,9 @@ progress {
} }
.display-6 { .display-6 {
font-size: calc(1.375rem + 1.5vw);
font-weight: 300; font-weight: 300;
line-height: 1.2; line-height: 1.2;
font-size: calc(1.375rem + 1.5vw);
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.display-6 { .display-6 {
@@ -803,7 +803,7 @@ progress {
} }
.col { .col {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-auto > * { .row-cols-auto > * {
@@ -1012,7 +1012,7 @@ progress {
@media (min-width: 576px) { @media (min-width: 576px) {
.col-sm { .col-sm {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-sm-auto > * { .row-cols-sm-auto > * {
flex: 0 0 auto; flex: 0 0 auto;
@@ -1181,7 +1181,7 @@ progress {
} }
@media (min-width: 768px) { @media (min-width: 768px) {
.col-md { .col-md {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-md-auto > * { .row-cols-md-auto > * {
flex: 0 0 auto; flex: 0 0 auto;
@@ -1350,7 +1350,7 @@ progress {
} }
@media (min-width: 992px) { @media (min-width: 992px) {
.col-lg { .col-lg {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-lg-auto > * { .row-cols-lg-auto > * {
flex: 0 0 auto; flex: 0 0 auto;
@@ -1519,7 +1519,7 @@ progress {
} }
@media (min-width: 1200px) { @media (min-width: 1200px) {
.col-xl { .col-xl {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-xl-auto > * { .row-cols-xl-auto > * {
flex: 0 0 auto; flex: 0 0 auto;
@@ -1688,7 +1688,7 @@ progress {
} }
@media (min-width: 1400px) { @media (min-width: 1400px) {
.col-xxl { .col-xxl {
flex: 1 0 0%; flex: 1 0 0;
} }
.row-cols-xxl-auto > * { .row-cols-xxl-auto > * {
flex: 0 0 auto; flex: 0 0 auto;
@@ -2156,10 +2156,6 @@ progress {
display: block; display: block;
padding: 0; padding: 0;
} }
.form-control::-moz-placeholder {
color: var(--bs-secondary-color);
opacity: 1;
}
.form-control::placeholder { .form-control::placeholder {
color: var(--bs-secondary-color); color: var(--bs-secondary-color);
opacity: 1; opacity: 1;
@@ -2607,9 +2603,11 @@ textarea.form-control-lg {
top: 0; top: 0;
left: 0; left: 0;
z-index: 2; z-index: 2;
max-width: 100%;
height: 100%; height: 100%;
padding: 1rem 0.75rem; padding: 1rem 0.75rem;
overflow: hidden; overflow: hidden;
color: rgba(var(--bs-body-color-rgb), 0.65);
text-align: start; text-align: start;
text-overflow: ellipsis; text-overflow: ellipsis;
white-space: nowrap; white-space: nowrap;
@@ -2627,17 +2625,10 @@ textarea.form-control-lg {
.form-floating > .form-control-plaintext { .form-floating > .form-control-plaintext {
padding: 1rem 0.75rem; padding: 1rem 0.75rem;
} }
.form-floating > .form-control::-moz-placeholder, .form-floating > .form-control-plaintext::-moz-placeholder {
color: transparent;
}
.form-floating > .form-control::placeholder, .form-floating > .form-control::placeholder,
.form-floating > .form-control-plaintext::placeholder { .form-floating > .form-control-plaintext::placeholder {
color: transparent; color: transparent;
} }
.form-floating > .form-control:not(:-moz-placeholder-shown), .form-floating > .form-control-plaintext:not(:-moz-placeholder-shown) {
padding-top: 1.625rem;
padding-bottom: 0.625rem;
}
.form-floating > .form-control:focus, .form-floating > .form-control:not(:placeholder-shown), .form-floating > .form-control:focus, .form-floating > .form-control:not(:placeholder-shown),
.form-floating > .form-control-plaintext:focus, .form-floating > .form-control-plaintext:focus,
.form-floating > .form-control-plaintext:not(:placeholder-shown) { .form-floating > .form-control-plaintext:not(:placeholder-shown) {
@@ -2652,43 +2643,30 @@ textarea.form-control-lg {
.form-floating > .form-select { .form-floating > .form-select {
padding-top: 1.625rem; padding-top: 1.625rem;
padding-bottom: 0.625rem; padding-bottom: 0.625rem;
} padding-left: 0.75rem;
.form-floating > .form-control:not(:-moz-placeholder-shown) ~ label {
color: rgba(var(--bs-body-color-rgb), 0.65);
transform: scale(0.85) translateY(-0.5rem) translateX(0.15rem);
} }
.form-floating > .form-control:focus ~ label, .form-floating > .form-control:focus ~ label,
.form-floating > .form-control:not(:placeholder-shown) ~ label, .form-floating > .form-control:not(:placeholder-shown) ~ label,
.form-floating > .form-control-plaintext ~ label, .form-floating > .form-control-plaintext ~ label,
.form-floating > .form-select ~ label { .form-floating > .form-select ~ label {
color: rgba(var(--bs-body-color-rgb), 0.65);
transform: scale(0.85) translateY(-0.5rem) translateX(0.15rem); transform: scale(0.85) translateY(-0.5rem) translateX(0.15rem);
} }
.form-floating > .form-control:not(:-moz-placeholder-shown) ~ label::after {
position: absolute;
inset: 1rem 0.375rem;
z-index: -1;
height: 1.5em;
content: "";
background-color: var(--bs-body-bg);
border-radius: var(--bs-border-radius);
}
.form-floating > .form-control:focus ~ label::after,
.form-floating > .form-control:not(:placeholder-shown) ~ label::after,
.form-floating > .form-control-plaintext ~ label::after,
.form-floating > .form-select ~ label::after {
position: absolute;
inset: 1rem 0.375rem;
z-index: -1;
height: 1.5em;
content: "";
background-color: var(--bs-body-bg);
border-radius: var(--bs-border-radius);
}
.form-floating > .form-control:-webkit-autofill ~ label { .form-floating > .form-control:-webkit-autofill ~ label {
color: rgba(var(--bs-body-color-rgb), 0.65);
transform: scale(0.85) translateY(-0.5rem) translateX(0.15rem); transform: scale(0.85) translateY(-0.5rem) translateX(0.15rem);
} }
.form-floating > textarea:focus ~ label::after,
.form-floating > textarea:not(:placeholder-shown) ~ label::after {
position: absolute;
inset: 1rem 0.375rem;
z-index: -1;
height: 1.5em;
content: "";
background-color: var(--bs-body-bg);
border-radius: var(--bs-border-radius);
}
.form-floating > textarea:disabled ~ label::after {
background-color: var(--bs-secondary-bg);
}
.form-floating > .form-control-plaintext ~ label { .form-floating > .form-control-plaintext ~ label {
border-width: var(--bs-border-width) 0; border-width: var(--bs-border-width) 0;
} }
@@ -2696,10 +2674,6 @@ textarea.form-control-lg {
.form-floating > .form-control:disabled ~ label { .form-floating > .form-control:disabled ~ label {
color: #6c757d; color: #6c757d;
} }
.form-floating > :disabled ~ label::after,
.form-floating > .form-control:disabled ~ label::after {
background-color: var(--bs-secondary-bg);
}
.input-group { .input-group {
position: relative; position: relative;
@@ -2782,7 +2756,7 @@ textarea.form-control-lg {
border-bottom-right-radius: 0; border-bottom-right-radius: 0;
} }
.input-group > :not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback) { .input-group > :not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback) {
margin-left: calc(var(--bs-border-width) * -1); margin-left: calc(-1 * var(--bs-border-width));
border-top-left-radius: 0; border-top-left-radius: 0;
border-bottom-left-radius: 0; border-bottom-left-radius: 0;
} }
@@ -2824,7 +2798,7 @@ textarea.form-control-lg {
.was-validated .form-control:valid, .form-control.is-valid { .was-validated .form-control:valid, .form-control.is-valid {
border-color: var(--bs-form-valid-border-color); border-color: var(--bs-form-valid-border-color);
padding-right: calc(1.5em + 0.75rem); padding-right: calc(1.5em + 0.75rem);
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e"); background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1'/%3e%3c/svg%3e");
background-repeat: no-repeat; background-repeat: no-repeat;
background-position: right calc(0.375em + 0.1875rem) center; background-position: right calc(0.375em + 0.1875rem) center;
background-size: calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); background-size: calc(0.75em + 0.375rem) calc(0.75em + 0.375rem);
@@ -2843,7 +2817,7 @@ textarea.form-control-lg {
border-color: var(--bs-form-valid-border-color); border-color: var(--bs-form-valid-border-color);
} }
.was-validated .form-select:valid:not([multiple]):not([size]), .was-validated .form-select:valid:not([multiple])[size="1"], .form-select.is-valid:not([multiple]):not([size]), .form-select.is-valid:not([multiple])[size="1"] { .was-validated .form-select:valid:not([multiple]):not([size]), .was-validated .form-select:valid:not([multiple])[size="1"], .form-select.is-valid:not([multiple]):not([size]), .form-select.is-valid:not([multiple])[size="1"] {
--bs-form-select-bg-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e"); --bs-form-select-bg-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1'/%3e%3c/svg%3e");
padding-right: 4.125rem; padding-right: 4.125rem;
background-position: right 0.75rem center, center right 2.25rem; background-position: right 0.75rem center, center right 2.25rem;
background-size: 16px 12px, calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); background-size: 16px 12px, calc(0.75em + 0.375rem) calc(0.75em + 0.375rem);
@@ -3755,7 +3729,7 @@ textarea.form-control-lg {
} }
.btn-group > :not(.btn-check:first-child) + .btn, .btn-group > :not(.btn-check:first-child) + .btn,
.btn-group > .btn-group:not(:first-child) { .btn-group > .btn-group:not(:first-child) {
margin-left: calc(var(--bs-border-width) * -1); margin-left: calc(-1 * var(--bs-border-width));
} }
.btn-group > .btn:not(:last-child):not(.dropdown-toggle), .btn-group > .btn:not(:last-child):not(.dropdown-toggle),
.btn-group > .btn.dropdown-toggle-split:first-child, .btn-group > .btn.dropdown-toggle-split:first-child,
@@ -3802,14 +3776,15 @@ textarea.form-control-lg {
} }
.btn-group-vertical > .btn:not(:first-child), .btn-group-vertical > .btn:not(:first-child),
.btn-group-vertical > .btn-group:not(:first-child) { .btn-group-vertical > .btn-group:not(:first-child) {
margin-top: calc(var(--bs-border-width) * -1); margin-top: calc(-1 * var(--bs-border-width));
} }
.btn-group-vertical > .btn:not(:last-child):not(.dropdown-toggle), .btn-group-vertical > .btn:not(:last-child):not(.dropdown-toggle),
.btn-group-vertical > .btn-group:not(:last-child) > .btn { .btn-group-vertical > .btn-group:not(:last-child) > .btn {
border-bottom-right-radius: 0; border-bottom-right-radius: 0;
border-bottom-left-radius: 0; border-bottom-left-radius: 0;
} }
.btn-group-vertical > .btn ~ .btn, .btn-group-vertical > .btn:nth-child(n+3),
.btn-group-vertical > :not(.btn-check) + .btn,
.btn-group-vertical > .btn-group:not(:first-child) > .btn { .btn-group-vertical > .btn-group:not(:first-child) > .btn {
border-top-left-radius: 0; border-top-left-radius: 0;
border-top-right-radius: 0; border-top-right-radius: 0;
@@ -3933,8 +3908,8 @@ textarea.form-control-lg {
.nav-justified > .nav-link, .nav-justified > .nav-link,
.nav-justified .nav-item { .nav-justified .nav-item {
flex-basis: 0;
flex-grow: 1; flex-grow: 1;
flex-basis: 0;
text-align: center; text-align: center;
} }
@@ -4035,8 +4010,8 @@ textarea.form-control-lg {
} }
.navbar-collapse { .navbar-collapse {
flex-basis: 100%;
flex-grow: 1; flex-grow: 1;
flex-basis: 100%;
align-items: center; align-items: center;
} }
@@ -4531,7 +4506,7 @@ textarea.form-control-lg {
flex-flow: row wrap; flex-flow: row wrap;
} }
.card-group > .card { .card-group > .card {
flex: 1 0 0%; flex: 1 0 0;
margin-bottom: 0; margin-bottom: 0;
} }
.card-group > .card + .card { .card-group > .card + .card {
@@ -4542,24 +4517,24 @@ textarea.form-control-lg {
border-top-right-radius: 0; border-top-right-radius: 0;
border-bottom-right-radius: 0; border-bottom-right-radius: 0;
} }
.card-group > .card:not(:last-child) .card-img-top, .card-group > .card:not(:last-child) > .card-img-top,
.card-group > .card:not(:last-child) .card-header { .card-group > .card:not(:last-child) > .card-header {
border-top-right-radius: 0; border-top-right-radius: 0;
} }
.card-group > .card:not(:last-child) .card-img-bottom, .card-group > .card:not(:last-child) > .card-img-bottom,
.card-group > .card:not(:last-child) .card-footer { .card-group > .card:not(:last-child) > .card-footer {
border-bottom-right-radius: 0; border-bottom-right-radius: 0;
} }
.card-group > .card:not(:first-child) { .card-group > .card:not(:first-child) {
border-top-left-radius: 0; border-top-left-radius: 0;
border-bottom-left-radius: 0; border-bottom-left-radius: 0;
} }
.card-group > .card:not(:first-child) .card-img-top, .card-group > .card:not(:first-child) > .card-img-top,
.card-group > .card:not(:first-child) .card-header { .card-group > .card:not(:first-child) > .card-header {
border-top-left-radius: 0; border-top-left-radius: 0;
} }
.card-group > .card:not(:first-child) .card-img-bottom, .card-group > .card:not(:first-child) > .card-img-bottom,
.card-group > .card:not(:first-child) .card-footer { .card-group > .card:not(:first-child) > .card-footer {
border-bottom-left-radius: 0; border-bottom-left-radius: 0;
} }
} }
@@ -4576,11 +4551,11 @@ textarea.form-control-lg {
--bs-accordion-btn-padding-y: 1rem; --bs-accordion-btn-padding-y: 1rem;
--bs-accordion-btn-color: var(--bs-body-color); --bs-accordion-btn-color: var(--bs-body-color);
--bs-accordion-btn-bg: var(--bs-accordion-bg); --bs-accordion-btn-bg: var(--bs-accordion-bg);
--bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='none' stroke='%23212529' stroke-linecap='round' stroke-linejoin='round'%3e%3cpath d='M2 5L8 11L14 5'/%3e%3c/svg%3e"); --bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='none' stroke='%23212529' stroke-linecap='round' stroke-linejoin='round'%3e%3cpath d='m2 5 6 6 6-6'/%3e%3c/svg%3e");
--bs-accordion-btn-icon-width: 1.25rem; --bs-accordion-btn-icon-width: 1.25rem;
--bs-accordion-btn-icon-transform: rotate(-180deg); --bs-accordion-btn-icon-transform: rotate(-180deg);
--bs-accordion-btn-icon-transition: transform 0.2s ease-in-out; --bs-accordion-btn-icon-transition: transform 0.2s ease-in-out;
--bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='none' stroke='%23052c65' stroke-linecap='round' stroke-linejoin='round'%3e%3cpath d='M2 5L8 11L14 5'/%3e%3c/svg%3e"); --bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='none' stroke='%23052c65' stroke-linecap='round' stroke-linejoin='round'%3e%3cpath d='m2 5 6 6 6-6'/%3e%3c/svg%3e");
--bs-accordion-btn-focus-box-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25); --bs-accordion-btn-focus-box-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25);
--bs-accordion-body-padding-x: 1.25rem; --bs-accordion-body-padding-x: 1.25rem;
--bs-accordion-body-padding-y: 1rem; --bs-accordion-body-padding-y: 1rem;
@@ -4690,16 +4665,15 @@ textarea.form-control-lg {
.accordion-flush > .accordion-item:last-child { .accordion-flush > .accordion-item:last-child {
border-bottom: 0; border-bottom: 0;
} }
.accordion-flush > .accordion-item > .accordion-header .accordion-button, .accordion-flush > .accordion-item > .accordion-header .accordion-button.collapsed { .accordion-flush > .accordion-item > .accordion-collapse,
border-radius: 0; .accordion-flush > .accordion-item > .accordion-header .accordion-button,
} .accordion-flush > .accordion-item > .accordion-header .accordion-button.collapsed {
.accordion-flush > .accordion-item > .accordion-collapse {
border-radius: 0; border-radius: 0;
} }
[data-bs-theme=dark] .accordion-button::after { [data-bs-theme=dark] .accordion-button::after {
--bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%236ea8fe'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); --bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%236ea8fe'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708'/%3e%3c/svg%3e");
--bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%236ea8fe'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); --bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%236ea8fe'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708'/%3e%3c/svg%3e");
} }
.breadcrumb { .breadcrumb {
@@ -4803,7 +4777,7 @@ textarea.form-control-lg {
} }
.page-item:not(:first-child) .page-link { .page-item:not(:first-child) .page-link {
margin-left: calc(var(--bs-border-width) * -1); margin-left: calc(-1 * var(--bs-border-width));
} }
.page-item:first-child .page-link { .page-item:first-child .page-link {
border-top-left-radius: var(--bs-pagination-border-radius); border-top-left-radius: var(--bs-pagination-border-radius);
@@ -4952,7 +4926,7 @@ textarea.form-control-lg {
@keyframes progress-bar-stripes { @keyframes progress-bar-stripes {
0% { 0% {
background-position-x: 1rem; background-position-x: var(--bs-progress-height);
} }
} }
.progress, .progress,
@@ -5046,22 +5020,6 @@ textarea.form-control-lg {
counter-increment: section; counter-increment: section;
} }
.list-group-item-action {
width: 100%;
color: var(--bs-list-group-action-color);
text-align: inherit;
}
.list-group-item-action:hover, .list-group-item-action:focus {
z-index: 1;
color: var(--bs-list-group-action-hover-color);
text-decoration: none;
background-color: var(--bs-list-group-action-hover-bg);
}
.list-group-item-action:active {
color: var(--bs-list-group-action-active-color);
background-color: var(--bs-list-group-action-active-bg);
}
.list-group-item { .list-group-item {
position: relative; position: relative;
display: block; display: block;
@@ -5098,6 +5056,22 @@ textarea.form-control-lg {
border-top-width: var(--bs-list-group-border-width); border-top-width: var(--bs-list-group-border-width);
} }
.list-group-item-action {
width: 100%;
color: var(--bs-list-group-action-color);
text-align: inherit;
}
.list-group-item-action:not(.active):hover, .list-group-item-action:not(.active):focus {
z-index: 1;
color: var(--bs-list-group-action-hover-color);
text-decoration: none;
background-color: var(--bs-list-group-action-hover-bg);
}
.list-group-item-action:not(.active):active {
color: var(--bs-list-group-action-active-color);
background-color: var(--bs-list-group-action-active-bg);
}
.list-group-horizontal { .list-group-horizontal {
flex-direction: row; flex-direction: row;
} }
@@ -5357,19 +5331,19 @@ textarea.form-control-lg {
.btn-close { .btn-close {
--bs-btn-close-color: #000; --bs-btn-close-color: #000;
--bs-btn-close-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 0 1 1.414 0L8 6.586 14.293.293a1 1 0 1 1 1.414 1.414L9.414 8l6.293 6.293a1 1 0 0 1-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 0 1-1.414-1.414L6.586 8 .293 1.707a1 1 0 0 1 0-1.414z'/%3e%3c/svg%3e"); --bs-btn-close-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 0 1 1.414 0L8 6.586 14.293.293a1 1 0 1 1 1.414 1.414L9.414 8l6.293 6.293a1 1 0 0 1-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 0 1-1.414-1.414L6.586 8 .293 1.707a1 1 0 0 1 0-1.414'/%3e%3c/svg%3e");
--bs-btn-close-opacity: 0.5; --bs-btn-close-opacity: 0.5;
--bs-btn-close-hover-opacity: 0.75; --bs-btn-close-hover-opacity: 0.75;
--bs-btn-close-focus-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25); --bs-btn-close-focus-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25);
--bs-btn-close-focus-opacity: 1; --bs-btn-close-focus-opacity: 1;
--bs-btn-close-disabled-opacity: 0.25; --bs-btn-close-disabled-opacity: 0.25;
--bs-btn-close-white-filter: invert(1) grayscale(100%) brightness(200%);
box-sizing: content-box; box-sizing: content-box;
width: 1em; width: 1em;
height: 1em; height: 1em;
padding: 0.25em 0.25em; padding: 0.25em 0.25em;
color: var(--bs-btn-close-color); color: var(--bs-btn-close-color);
background: transparent var(--bs-btn-close-bg) center/1em auto no-repeat; background: transparent var(--bs-btn-close-bg) center/1em auto no-repeat;
filter: var(--bs-btn-close-filter);
border: 0; border: 0;
border-radius: 0.375rem; border-radius: 0.375rem;
opacity: var(--bs-btn-close-opacity); opacity: var(--bs-btn-close-opacity);
@@ -5393,11 +5367,16 @@ textarea.form-control-lg {
} }
.btn-close-white { .btn-close-white {
filter: var(--bs-btn-close-white-filter); --bs-btn-close-filter: invert(1) grayscale(100%) brightness(200%);
} }
[data-bs-theme=dark] .btn-close { :root,
filter: var(--bs-btn-close-white-filter); [data-bs-theme=light] {
--bs-btn-close-filter: ;
}
[data-bs-theme=dark] {
--bs-btn-close-filter: invert(1) grayscale(100%) brightness(200%);
} }
.toast { .toast {
@@ -5474,7 +5453,7 @@ textarea.form-control-lg {
--bs-modal-width: 500px; --bs-modal-width: 500px;
--bs-modal-padding: 1rem; --bs-modal-padding: 1rem;
--bs-modal-margin: 0.5rem; --bs-modal-margin: 0.5rem;
--bs-modal-color: ; --bs-modal-color: var(--bs-body-color);
--bs-modal-bg: var(--bs-body-bg); --bs-modal-bg: var(--bs-body-bg);
--bs-modal-border-color: var(--bs-border-color-translucent); --bs-modal-border-color: var(--bs-border-color-translucent);
--bs-modal-border-width: var(--bs-border-width); --bs-modal-border-width: var(--bs-border-width);
@@ -5510,8 +5489,8 @@ textarea.form-control-lg {
pointer-events: none; pointer-events: none;
} }
.modal.fade .modal-dialog { .modal.fade .modal-dialog {
transition: transform 0.3s ease-out;
transform: translate(0, -50px); transform: translate(0, -50px);
transition: transform 0.3s ease-out;
} }
@media (prefers-reduced-motion: reduce) { @media (prefers-reduced-motion: reduce) {
.modal.fade .modal-dialog { .modal.fade .modal-dialog {
@@ -5586,7 +5565,10 @@ textarea.form-control-lg {
} }
.modal-header .btn-close { .modal-header .btn-close {
padding: calc(var(--bs-modal-header-padding-y) * 0.5) calc(var(--bs-modal-header-padding-x) * 0.5); padding: calc(var(--bs-modal-header-padding-y) * 0.5) calc(var(--bs-modal-header-padding-x) * 0.5);
margin: calc(-0.5 * var(--bs-modal-header-padding-y)) calc(-0.5 * var(--bs-modal-header-padding-x)) calc(-0.5 * var(--bs-modal-header-padding-y)) auto; margin-top: calc(-0.5 * var(--bs-modal-header-padding-y));
margin-right: calc(-0.5 * var(--bs-modal-header-padding-x));
margin-bottom: calc(-0.5 * var(--bs-modal-header-padding-y));
margin-left: auto;
} }
.modal-title { .modal-title {
@@ -6107,6 +6089,7 @@ textarea.form-control-lg {
color: #fff; color: #fff;
text-align: center; text-align: center;
background: none; background: none;
filter: var(--bs-carousel-control-icon-filter);
border: 0; border: 0;
opacity: 0.5; opacity: 0.5;
transition: opacity 0.15s ease; transition: opacity 0.15s ease;
@@ -6145,11 +6128,11 @@ textarea.form-control-lg {
} }
.carousel-control-prev-icon { .carousel-control-prev-icon {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e") /*rtl:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e")*/; background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0'/%3e%3c/svg%3e") /*rtl:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708'/%3e%3c/svg%3e")*/;
} }
.carousel-control-next-icon { .carousel-control-next-icon {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e") /*rtl:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e")*/; background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708'/%3e%3c/svg%3e") /*rtl:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0'/%3e%3c/svg%3e")*/;
} }
.carousel-indicators { .carousel-indicators {
@@ -6175,7 +6158,7 @@ textarea.form-control-lg {
margin-left: 3px; margin-left: 3px;
text-indent: -999px; text-indent: -999px;
cursor: pointer; cursor: pointer;
background-color: #fff; background-color: var(--bs-carousel-indicator-active-bg);
background-clip: padding-box; background-clip: padding-box;
border: 0; border: 0;
border-top: 10px solid transparent; border-top: 10px solid transparent;
@@ -6199,31 +6182,27 @@ textarea.form-control-lg {
left: 15%; left: 15%;
padding-top: 1.25rem; padding-top: 1.25rem;
padding-bottom: 1.25rem; padding-bottom: 1.25rem;
color: #fff; color: var(--bs-carousel-caption-color);
text-align: center; text-align: center;
} }
.carousel-dark .carousel-control-prev-icon, .carousel-dark {
.carousel-dark .carousel-control-next-icon { --bs-carousel-indicator-active-bg: #000;
filter: invert(1) grayscale(100); --bs-carousel-caption-color: #000;
} --bs-carousel-control-icon-filter: invert(1) grayscale(100);
.carousel-dark .carousel-indicators [data-bs-target] {
background-color: #000;
}
.carousel-dark .carousel-caption {
color: #000;
} }
[data-bs-theme=dark] .carousel .carousel-control-prev-icon, :root,
[data-bs-theme=dark] .carousel .carousel-control-next-icon, [data-bs-theme=dark].carousel .carousel-control-prev-icon, [data-bs-theme=light] {
[data-bs-theme=dark].carousel .carousel-control-next-icon { --bs-carousel-indicator-active-bg: #fff;
filter: invert(1) grayscale(100); --bs-carousel-caption-color: #fff;
--bs-carousel-control-icon-filter: ;
} }
[data-bs-theme=dark] .carousel .carousel-indicators [data-bs-target], [data-bs-theme=dark].carousel .carousel-indicators [data-bs-target] {
background-color: #000; [data-bs-theme=dark] {
} --bs-carousel-indicator-active-bg: #000;
[data-bs-theme=dark] .carousel .carousel-caption, [data-bs-theme=dark].carousel .carousel-caption { --bs-carousel-caption-color: #000;
color: #000; --bs-carousel-control-icon-filter: invert(1) grayscale(100);
} }
 .spinner-grow,
@@ -6773,7 +6752,10 @@ textarea.form-control-lg {
 }
 .offcanvas-header .btn-close {
   padding: calc(var(--bs-offcanvas-padding-y) * 0.5) calc(var(--bs-offcanvas-padding-x) * 0.5);
-  margin: calc(-0.5 * var(--bs-offcanvas-padding-y)) calc(-0.5 * var(--bs-offcanvas-padding-x)) calc(-0.5 * var(--bs-offcanvas-padding-y)) auto;
+  margin-top: calc(-0.5 * var(--bs-offcanvas-padding-y));
+  margin-right: calc(-0.5 * var(--bs-offcanvas-padding-x));
+  margin-bottom: calc(-0.5 * var(--bs-offcanvas-padding-y));
+  margin-left: auto;
 }
 .offcanvas-title {
@@ -7174,6 +7156,10 @@ textarea.form-control-lg {
 .visually-hidden-focusable:not(:focus):not(:focus-within):not(caption) {
   position: absolute !important;
 }
+.visually-hidden *,
+.visually-hidden-focusable:not(:focus):not(:focus-within) * {
+  overflow: hidden !important;
+}
 .stretched-link::after {
   position: absolute;


@@ -4,13 +4,12 @@
  *
  * To rebuild or modify this file with the latest versions of the included
  * software please visit:
- *   https://datatables.net/download/#bs5/dt-2.1.8
+ *   https://datatables.net/download/#bs5/dt-2.3.1
  *
  * Included libraries:
- *   DataTables 2.1.8
+ *   DataTables 2.3.1
  */
-@charset "UTF-8";
 :root {
   --dt-row-selected: 13, 110, 253;
   --dt-row-selected-text: 255, 255, 255;
@@ -43,6 +42,9 @@ table.dataTable tr.dt-hasChild td.dt-control:before {
   border-bottom: 0px solid transparent;
   border-right: 5px solid transparent;
 }
+table.dataTable tfoot:empty {
+  display: none;
+}
 html.dark table.dataTable td.dt-control:before,
 :root[data-bs-theme=dark] table.dataTable td.dt-control:before,
@@ -90,8 +92,8 @@ table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order:before {
   position: absolute;
   display: block;
   bottom: 50%;
-  content: "";
-  content: ""/"";
+  content: "\25B2";
+  content: "\25B2"/"";
 }
 table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:after,
 table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order:after,
@@ -99,27 +101,17 @@ table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order:after {
   position: absolute;
   display: block;
   top: 50%;
-  content: "";
-  content: ""/"";
+  content: "\25BC";
+  content: "\25BC"/"";
 }
-table.dataTable thead > tr > th.dt-orderable-asc, table.dataTable thead > tr > th.dt-orderable-desc, table.dataTable thead > tr > th.dt-ordering-asc, table.dataTable thead > tr > th.dt-ordering-desc,
-table.dataTable thead > tr > td.dt-orderable-asc,
-table.dataTable thead > tr > td.dt-orderable-desc,
-table.dataTable thead > tr > td.dt-ordering-asc,
-table.dataTable thead > tr > td.dt-ordering-desc {
-  position: relative;
-  padding-right: 30px;
-}
 table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order,
 table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order,
 table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order,
 table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order,
 table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order {
-  position: absolute;
-  right: 12px;
-  top: 0;
-  bottom: 0;
+  position: relative;
   width: 12px;
-  height: 20px;
 }
 table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order:after, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:before, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:after,
 table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order:before,
@@ -161,6 +153,40 @@ table.dataTable thead > tr > td:active {
   outline: none;
 }
+table.dataTable thead > tr > th div.dt-column-header,
+table.dataTable thead > tr > th div.dt-column-footer,
+table.dataTable thead > tr > td div.dt-column-header,
+table.dataTable thead > tr > td div.dt-column-footer,
+table.dataTable tfoot > tr > th div.dt-column-header,
+table.dataTable tfoot > tr > th div.dt-column-footer,
+table.dataTable tfoot > tr > td div.dt-column-header,
+table.dataTable tfoot > tr > td div.dt-column-footer {
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+  gap: 4px;
+}
+table.dataTable thead > tr > th div.dt-column-header span.dt-column-title,
+table.dataTable thead > tr > th div.dt-column-footer span.dt-column-title,
+table.dataTable thead > tr > td div.dt-column-header span.dt-column-title,
+table.dataTable thead > tr > td div.dt-column-footer span.dt-column-title,
+table.dataTable tfoot > tr > th div.dt-column-header span.dt-column-title,
+table.dataTable tfoot > tr > th div.dt-column-footer span.dt-column-title,
+table.dataTable tfoot > tr > td div.dt-column-header span.dt-column-title,
+table.dataTable tfoot > tr > td div.dt-column-footer span.dt-column-title {
+  flex-grow: 1;
+}
+table.dataTable thead > tr > th div.dt-column-header span.dt-column-title:empty,
+table.dataTable thead > tr > th div.dt-column-footer span.dt-column-title:empty,
+table.dataTable thead > tr > td div.dt-column-header span.dt-column-title:empty,
+table.dataTable thead > tr > td div.dt-column-footer span.dt-column-title:empty,
+table.dataTable tfoot > tr > th div.dt-column-header span.dt-column-title:empty,
+table.dataTable tfoot > tr > th div.dt-column-footer span.dt-column-title:empty,
+table.dataTable tfoot > tr > td div.dt-column-header span.dt-column-title:empty,
+table.dataTable tfoot > tr > td div.dt-column-footer span.dt-column-title:empty {
+  display: none;
+}
 div.dt-scroll-body > table.dataTable > thead > tr > th,
 div.dt-scroll-body > table.dataTable > thead > tr > td {
   overflow: hidden;
@@ -251,10 +277,30 @@ table.dataTable th,
 table.dataTable td {
   box-sizing: border-box;
 }
+table.dataTable th.dt-type-numeric, table.dataTable th.dt-type-date,
+table.dataTable td.dt-type-numeric,
+table.dataTable td.dt-type-date {
+  text-align: right;
+}
+table.dataTable th.dt-type-numeric div.dt-column-header,
+table.dataTable th.dt-type-numeric div.dt-column-footer, table.dataTable th.dt-type-date div.dt-column-header,
+table.dataTable th.dt-type-date div.dt-column-footer,
+table.dataTable td.dt-type-numeric div.dt-column-header,
+table.dataTable td.dt-type-numeric div.dt-column-footer,
+table.dataTable td.dt-type-date div.dt-column-header,
+table.dataTable td.dt-type-date div.dt-column-footer {
+  flex-direction: row-reverse;
+}
 table.dataTable th.dt-left,
 table.dataTable td.dt-left {
   text-align: left;
 }
+table.dataTable th.dt-left div.dt-column-header,
+table.dataTable th.dt-left div.dt-column-footer,
+table.dataTable td.dt-left div.dt-column-header,
+table.dataTable td.dt-left div.dt-column-footer {
+  flex-direction: row;
+}
 table.dataTable th.dt-center,
 table.dataTable td.dt-center {
   text-align: center;
@@ -263,10 +309,22 @@ table.dataTable th.dt-right,
 table.dataTable td.dt-right {
   text-align: right;
 }
+table.dataTable th.dt-right div.dt-column-header,
+table.dataTable th.dt-right div.dt-column-footer,
+table.dataTable td.dt-right div.dt-column-header,
+table.dataTable td.dt-right div.dt-column-footer {
+  flex-direction: row-reverse;
+}
 table.dataTable th.dt-justify,
 table.dataTable td.dt-justify {
   text-align: justify;
 }
+table.dataTable th.dt-justify div.dt-column-header,
+table.dataTable th.dt-justify div.dt-column-footer,
+table.dataTable td.dt-justify div.dt-column-header,
+table.dataTable td.dt-justify div.dt-column-footer {
+  flex-direction: row;
+}
 table.dataTable th.dt-nowrap,
 table.dataTable td.dt-nowrap {
   white-space: nowrap;
@@ -276,11 +334,6 @@ table.dataTable td.dt-empty {
   text-align: center;
   vertical-align: top;
 }
-table.dataTable th.dt-type-numeric, table.dataTable th.dt-type-date,
-table.dataTable td.dt-type-numeric,
-table.dataTable td.dt-type-date {
-  text-align: right;
-}
 table.dataTable thead th,
 table.dataTable thead td,
 table.dataTable tfoot th,
@@ -293,6 +346,16 @@ table.dataTable tfoot th.dt-head-left,
 table.dataTable tfoot td.dt-head-left {
   text-align: left;
 }
+table.dataTable thead th.dt-head-left div.dt-column-header,
+table.dataTable thead th.dt-head-left div.dt-column-footer,
+table.dataTable thead td.dt-head-left div.dt-column-header,
+table.dataTable thead td.dt-head-left div.dt-column-footer,
+table.dataTable tfoot th.dt-head-left div.dt-column-header,
+table.dataTable tfoot th.dt-head-left div.dt-column-footer,
+table.dataTable tfoot td.dt-head-left div.dt-column-header,
+table.dataTable tfoot td.dt-head-left div.dt-column-footer {
+  flex-direction: row;
+}
 table.dataTable thead th.dt-head-center,
 table.dataTable thead td.dt-head-center,
 table.dataTable tfoot th.dt-head-center,
@@ -305,12 +368,32 @@ table.dataTable tfoot th.dt-head-right,
 table.dataTable tfoot td.dt-head-right {
   text-align: right;
 }
+table.dataTable thead th.dt-head-right div.dt-column-header,
+table.dataTable thead th.dt-head-right div.dt-column-footer,
+table.dataTable thead td.dt-head-right div.dt-column-header,
+table.dataTable thead td.dt-head-right div.dt-column-footer,
+table.dataTable tfoot th.dt-head-right div.dt-column-header,
+table.dataTable tfoot th.dt-head-right div.dt-column-footer,
+table.dataTable tfoot td.dt-head-right div.dt-column-header,
+table.dataTable tfoot td.dt-head-right div.dt-column-footer {
+  flex-direction: row-reverse;
+}
 table.dataTable thead th.dt-head-justify,
 table.dataTable thead td.dt-head-justify,
 table.dataTable tfoot th.dt-head-justify,
 table.dataTable tfoot td.dt-head-justify {
   text-align: justify;
 }
+table.dataTable thead th.dt-head-justify div.dt-column-header,
+table.dataTable thead th.dt-head-justify div.dt-column-footer,
+table.dataTable thead td.dt-head-justify div.dt-column-header,
+table.dataTable thead td.dt-head-justify div.dt-column-footer,
+table.dataTable tfoot th.dt-head-justify div.dt-column-header,
+table.dataTable tfoot th.dt-head-justify div.dt-column-footer,
+table.dataTable tfoot td.dt-head-justify div.dt-column-header,
+table.dataTable tfoot td.dt-head-justify div.dt-column-footer {
+  flex-direction: row;
+}
 table.dataTable thead th.dt-head-nowrap,
 table.dataTable thead td.dt-head-nowrap,
 table.dataTable tfoot th.dt-head-nowrap,
@@ -408,6 +491,9 @@ div.dt-container div.dt-layout-table > div {
     margin-left: 0;
   }
 }
+div.dt-container {
+  position: relative;
+}
 div.dt-container div.dt-length label {
   font-weight: normal;
   text-align: left;
@@ -496,14 +582,19 @@ table.dataTable.table-sm > thead > tr td.dt-orderable-asc,
 table.dataTable.table-sm > thead > tr td.dt-orderable-desc,
 table.dataTable.table-sm > thead > tr td.dt-ordering-asc,
 table.dataTable.table-sm > thead > tr td.dt-ordering-desc {
-  padding-right: 20px;
+  padding-right: 0.25rem;
 }
 table.dataTable.table-sm > thead > tr th.dt-orderable-asc span.dt-column-order, table.dataTable.table-sm > thead > tr th.dt-orderable-desc span.dt-column-order, table.dataTable.table-sm > thead > tr th.dt-ordering-asc span.dt-column-order, table.dataTable.table-sm > thead > tr th.dt-ordering-desc span.dt-column-order,
 table.dataTable.table-sm > thead > tr td.dt-orderable-asc span.dt-column-order,
 table.dataTable.table-sm > thead > tr td.dt-orderable-desc span.dt-column-order,
 table.dataTable.table-sm > thead > tr td.dt-ordering-asc span.dt-column-order,
 table.dataTable.table-sm > thead > tr td.dt-ordering-desc span.dt-column-order {
-  right: 5px;
+  right: 0.25rem;
+}
+table.dataTable.table-sm > thead > tr th.dt-type-date span.dt-column-order, table.dataTable.table-sm > thead > tr th.dt-type-numeric span.dt-column-order,
+table.dataTable.table-sm > thead > tr td.dt-type-date span.dt-column-order,
+table.dataTable.table-sm > thead > tr td.dt-type-numeric span.dt-column-order {
+  left: 0.25rem;
 }
 div.dt-scroll-head table.table-bordered {

File diff suppressed because it is too large


@@ -24,6 +24,7 @@
 <dt class="col-sm-5">Web Installed
   <span class="badge bg-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
   <span class="badge bg-warning text-dark d-none" id="web-warning" title="There seems to be an update available.">Update</span>
+  <span class="badge bg-info text-dark d-none" id="web-prerelease" title="You seem to be using a pre-release version.">Pre-Release</span>
 </dt>
 <dd class="col-sm-7">
   <span id="web-installed">{{page_data.web_vault_version}}</span>
@@ -68,10 +69,14 @@
   <span class="d-block"><b>No</b></span>
   {{/unless}}
 </dd>
-<dt class="col-sm-5">Environment settings overridden</dt>
+<dt class="col-sm-5">Uses config.json
+  {{#if page_data.overrides}}
+  <span class="badge bg-info text-dark" title="Environment variables are overwritten by a config.json.">Note</span>
+  {{/if}}
+</dt>
 <dd class="col-sm-7">
   {{#if page_data.overrides}}
-  <span class="d-block" title="The following settings are overridden: {{page_data.overrides}}"><b>Yes</b></span>
+  <abbr class="d-block" title="The following settings are overridden: {{page_data.overrides}}"><b>Yes</b></abbr>
   {{/if}}
   {{#unless page_data.overrides}}
   <span class="d-block"><b>No</b></span>
@@ -154,7 +159,11 @@
 <dd class="col-sm-7">
   <span id="dns-resolved">{{page_data.dns_resolved}}</span>
 </dd>
-<dt class="col-sm-5">Date & Time (Local)</dt>
+<dt class="col-sm-5">Date & Time (Local)
+  {{#if page_data.tz_env}}
+  <span class="badge bg-success" title="Configured TZ environment variable">{{page_data.tz_env}}</span>
+  {{/if}}
+</dt>
 <dd class="col-sm-7">
   <span><b>Server:</b> {{page_data.server_time_local}}</span>
 </dd>


@@ -43,7 +43,7 @@
 <span class="d-block"><strong>Groups:</strong> {{group_count}}</span>
 <span class="d-block"><strong>Events:</strong> {{event_count}}</span>
 </td>
-<td class="text-end px-0 small">
+<td class="text-end px-1 small">
   <button type="button" class="btn btn-sm btn-link p-0 border-0 float-right" vw-delete-organization data-vw-org-uuid="{{id}}" data-vw-org-name="{{name}}" data-vw-billing-email="{{billingEmail}}">Delete Organization</button><br>
 </td>
 </tr>


@@ -60,7 +60,7 @@
   {{/each}}
   </div>
 </td>
-<td class="text-end px-0 small">
+<td class="text-end px-1 small">
   <span data-vw-user-uuid="{{id}}" data-vw-user-email="{{email}}">
   {{#if twoFactorEnabled}}
   <button type="button" class="btn btn-sm btn-link p-0 border-0 float-right" vw-remove2fa>Remove all 2FA</button><br>


@@ -0,0 +1,6 @@
Your Email Change
<!---------------->
A user ({{ acting_address }}) recently tried to change their account to use this email address ({{ existing_address }}). An account already exists with this email ({{ existing_address }}).
If you did not try to change an email address, contact your administrator.
{{> email/email_footer_text }}


@@ -0,0 +1,16 @@
Your Email Change
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
A user ({{ acting_address }}) recently tried to change their account to use this email address ({{ existing_address }}). An account already exists with this email ({{ existing_address }}).
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
If you did not try to change an email address, contact your administrator.
</td>
</tr>
</table>
{{> email/email_footer }}


@@ -0,0 +1,8 @@
Verify Your Email
<!---------------->
Verify this email address to finish creating your account by clicking the link below.
Verify Email Address Now: {{{url}}}
If you did not request to verify your account, you can safely ignore this email.
{{> email/email_footer_text }}


@@ -0,0 +1,24 @@
Verify Your Email
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
Verify this email address to finish creating your account by clicking the link below.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
<a href="{{{url}}}"
clicktracking=off target="_blank" style="color: #ffffff; text-decoration: none; text-align: center; cursor: pointer; display: inline-block; border-radius: 5px; background-color: #3c8dbc; border-color: #3c8dbc; border-style: solid; border-width: 10px 20px; margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
Verify Email Address Now
</a>
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
If you did not request to verify your account, you can safely ignore this email.
</td>
</tr>
</table>
{{> email/email_footer }}


@@ -1,7 +1,7 @@
-Removed from {{{org_name}}}
+Your access to {{{org_name}}} has been revoked.
 <!---------------->
-You have been removed from organization *{{org_name}}* because your account does not have Two-step Login enabled.
+Your user account has been removed from the *{{org_name}}* organization because you do not have two-step login configured.
+Before you can re-join this organization you need to set up two-step login on your user account.
-You can enable Two-step Login in your account settings.
+You can enable two-step login in your account settings.
 {{> email/email_footer_text }}


@@ -1,15 +1,16 @@
-Removed from {{{org_name}}}
+Your access to {{{org_name}}} has been revoked.
 <!---------------->
 {{> email/email_header }}
 <table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
 <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
 <td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
-You have been removed from organization <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{org_name}}</b> because your account does not have Two-step Login enabled.
+Your user account has been removed from the <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{org_name}}</b> organization because you do not have two-step login configured.<br>
+Before you can re-join this organization you need to set up two-step login on your user account.
 </td>
 </tr>
 <tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
 <td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
-You can enable Two-step Login in your account settings.
+You can enable two-step login in your account settings.
 </td>
 </tr>
 </table>


@@ -21,7 +21,15 @@ a[href$="/settings/sponsored-families"] {
 }
 /* Hide the `Enterprise Single Sign-On` button on the login page */
-a[routerlink="/sso"] {
+app-root form.ng-untouched button.\!tw-text-primary-600:nth-child(4) {
+  @extend %vw-hide;
+}
+/* Hide Log in with passkey on the login page */
+app-root form.ng-untouched a[routerlink="/login-with-passkey"] {
+  @extend %vw-hide;
+}
+/* Hide the or text followed by the two buttons hidden above */
+app-root form.ng-untouched > div:nth-child(1) > div:nth-child(3) > div:nth-child(2) {
   @extend %vw-hide;
 }
@@ -31,69 +39,88 @@ a[href$="/settings/two-factor"] {
@extend %vw-hide; @extend %vw-hide;
} }
/* Hide Business Owned checkbox */
app-org-info > form:nth-child(1) > div:nth-child(3) {
@extend %vw-hide;
}
/* Hide the `This account is owned by a business` checkbox and label */
#ownedBusiness,
label[for^="ownedBusiness"] {
@extend %vw-hide;
}
/* Hide the radio button and label for the `Custom` org user type */
#userTypeCustom,
label[for^="userTypeCustom"] {
@extend %vw-hide;
}
/* Hide Business Name */
app-org-account form div bit-form-field.tw-block:nth-child(3) {
@extend %vw-hide;
}
/* Hide organization plans */ /* Hide organization plans */
app-organization-plans > form > bit-section:nth-child(2) { app-organization-plans > form > bit-section:nth-child(2) {
@extend %vw-hide; @extend %vw-hide;
} }
/* Hide Collection Management Form */
app-org-account form.ng-untouched:nth-child(5) {
@extend %vw-hide;
}
/* Hide 'Member Access' Report Card from Org Reports */
app-org-reports-home > app-report-list > div.tw-inline-grid > div:nth-child(6) {
@extend %vw-hide;
}
/* Hide Device Verification form at the Two Step Login screen */ /* Hide Device Verification form at the Two Step Login screen */
app-security > app-two-factor-setup > form { app-security > app-two-factor-setup > form {
@extend %vw-hide; @extend %vw-hide;
} }
+/* Hide unsupported Custom Role options */
+bit-dialog div.tw-ml-4:has(bit-form-control input),
+bit-dialog div.tw-col-span-4:has(input[formcontrolname*="access"], input[formcontrolname*="manage"]) {
+  @extend %vw-hide;
+}
+/* Change collapsed menu icon to Vaultwarden */
+bit-nav-logo bit-nav-item a:before {
+  content: "";
+  background-image: url("../images/icon-white.svg");
+  background-repeat: no-repeat;
+  background-position: center center;
+  height: 32px;
+  display: block;
+}
+bit-nav-logo bit-nav-item .bwi-shield {
+  @extend %vw-hide;
+}
 /**** END Static Vaultwarden Changes ****/
 /**** START Dynamic Vaultwarden Changes ****/
 {{#if signup_disabled}}
+/* From web vault 2025.1.2 and onwards, the signup button is hidden
+when signups are disabled as the web vault checks the /api/config endpoint.
+Note that the clients tend to cache this endpoint for about 1 hour, so it might
+take a while for the change to take effect. To avoid the button appearing
+when it shouldn't, we'll keep this style in place for a couple of versions */
 /* Hide the register link on the login screen */
-app-frontend-layout > app-login > form > div > div > div > p {
+{{#if (webver "<2025.3.0")}}
+app-login form div + div + div + div + hr,
+app-login form div + div + div + div + hr + p {
+  @extend %vw-hide;
+}
+{{else}}
+app-root a[routerlink="/signup"] {
   @extend %vw-hide;
 }
 {{/if}}
+{{/if}}
-/* Hide `Email` 2FA if mail is not enabled */
 {{#unless mail_enabled}}
-app-two-factor-setup ul.list-group.list-group-2fa li.list-group-item:nth-child(5) {
+/* Hide `Email` 2FA if mail is not enabled */
+.providers-2fa-1 {
   @extend %vw-hide;
 }
 {{/unless}}
-/* Hide `YubiKey OTP security key` 2FA if it is not enabled */
 {{#unless yubico_enabled}}
-app-two-factor-setup ul.list-group.list-group-2fa li.list-group-item:nth-child(2) {
+/* Hide `YubiKey OTP security key` 2FA if it is not enabled */
+.providers-2fa-3 {
   @extend %vw-hide;
 }
 {{/unless}}
-/* Hide Emergency Access if not allowed */
 {{#unless emergency_access_allowed}}
+/* Hide Emergency Access if not allowed */
 bit-nav-item[route="settings/emergency-access"] {
   @extend %vw-hide;
 }
 {{/unless}}
-/* Hide Sends if not allowed */
 {{#unless sends_allowed}}
+/* Hide Sends if not allowed */
 bit-nav-item[route="sends"] {
   @extend %vw-hide;
 }


@@ -1,13 +1,12 @@
 //
 // Web Headers and caching
 //
-use std::{collections::HashMap, io::Cursor, ops::Deref, path::Path};
+use std::{collections::HashMap, io::Cursor, path::Path};
 use num_traits::ToPrimitive;
 use rocket::{
     fairing::{Fairing, Info, Kind},
     http::{ContentType, Header, HeaderMap, Method, Status},
-    request::FromParam,
     response::{self, Responder},
     Data, Orbit, Request, Response, Rocket,
 };
@@ -56,9 +55,18 @@ impl Fairing for AppHeaders {
         res.set_raw_header("Referrer-Policy", "same-origin");
         res.set_raw_header("X-Content-Type-Options", "nosniff");
         res.set_raw_header("X-Robots-Tag", "noindex, nofollow");
         // Obsolete in modern browsers, unsafe (XS-Leak), and largely replaced by CSP
         res.set_raw_header("X-XSS-Protection", "0");
+        // The `Cross-Origin-Resource-Policy` header should not be set on images or on the `icon_external` route.
+        // Otherwise some clients, like the Bitwarden Desktop, will fail to download the icons
+        if !(res.headers().get_one("Content-Type").is_some_and(|v| v.starts_with("image/"))
+            || req.route().is_some_and(|v| v.name.as_deref() == Some("icon_external")))
+        {
+            res.set_raw_header("Cross-Origin-Resource-Policy", "same-origin");
+        }
         // Do not send the Content-Security-Policy (CSP) Header and X-Frame-Options for the *-connector.html files.
         // This can cause issues when some MFA requests needs to open a popup or page within the clients like WebAuthn, or Duo.
         // This is the same behavior as upstream Bitwarden.
@@ -75,7 +83,9 @@ impl Fairing for AppHeaders {
     // # Mail Relay: https://bitwarden.com/blog/add-privacy-and-security-using-email-aliases-with-bitwarden/
     // app.simplelogin.io, app.addy.io, api.fastmail.com, quack.duckduckgo.com
     let csp = format!(
-        "default-src 'self'; \
+        "default-src 'none'; \
+        font-src 'self'; \
+        manifest-src 'self'; \
         base-uri 'self'; \
         form-action 'self'; \
         object-src 'self' blob:; \
@@ -223,42 +233,6 @@ impl<'r, R: 'r + Responder<'r, 'static> + Send> Responder<'r, 'static> for Cache
     }
 }
-pub struct SafeString(String);
-
-impl fmt::Display for SafeString {
-    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        self.0.fmt(f)
-    }
-}
-
-impl Deref for SafeString {
-    type Target = String;
-
-    fn deref(&self) -> &Self::Target {
-        &self.0
-    }
-}
-
-impl AsRef<Path> for SafeString {
-    #[inline]
-    fn as_ref(&self) -> &Path {
-        Path::new(&self.0)
-    }
-}
-
-impl<'r> FromParam<'r> for SafeString {
-    type Error = ();
-
-    #[inline(always)]
-    fn from_param(param: &'r str) -> Result<Self, Self::Error> {
-        if param.chars().all(|c| matches!(c, 'a'..='z' | 'A'..='Z' |'0'..='9' | '-')) {
-            Ok(SafeString(param.to_string()))
-        } else {
-            Err(())
-        }
-    }
-}
 // Log all the routes from the main paths list, and the attachments endpoint
 // Effectively ignores, any static file route, and the alive endpoint
 const LOGGED_ROUTES: [&str; 7] = ["/api", "/admin", "/identity", "/icons", "/attachments", "/events", "/notifications"];
@@ -294,8 +268,8 @@ impl Fairing for BetterLogging {
         } else {
             "http"
         };
-        let addr = format!("{}://{}:{}", &scheme, &config.address, &config.port);
-        info!(target: "start", "Rocket has launched from {}", addr);
+        let addr = format!("{scheme}://{}:{}", &config.address, &config.port);
+        info!(target: "start", "Rocket has launched from {addr}");
     }

     async fn on_request(&self, request: &mut Request<'_>, _data: &mut Data<'_>) {
@@ -309,8 +283,8 @@ impl Fairing for BetterLogging {
         let uri_subpath = uri_path_str.strip_prefix(&CONFIG.domain_path()).unwrap_or(&uri_path_str);
         if self.0 || LOGGED_ROUTES.iter().any(|r| uri_subpath.starts_with(r)) {
             match uri.query() {
-                Some(q) => info!(target: "request", "{} {}?{}", method, uri_path_str, &q[..q.len().min(30)]),
-                None => info!(target: "request", "{} {}", method, uri_path_str),
+                Some(q) => info!(target: "request", "{method} {uri_path_str}?{}", &q[..q.len().min(30)]),
+                None => info!(target: "request", "{method} {uri_path_str}"),
             };
         }
     }
@@ -325,9 +299,9 @@ impl Fairing for BetterLogging {
         if self.0 || LOGGED_ROUTES.iter().any(|r| uri_subpath.starts_with(r)) {
             let status = response.status();
             if let Some(ref route) = request.route() {
-                info!(target: "response", "{} => {}", route, status)
+                info!(target: "response", "{route} => {status}")
             } else {
-                info!(target: "response", "{}", status)
+                info!(target: "response", "{status}")
             }
         }
     }
@@ -352,7 +326,7 @@ pub fn get_display_size(size: i64) -> String {
         }
     }

-    format!("{:.2} {}", size, UNITS[unit_counter])
+    format!("{size:.2} {}", UNITS[unit_counter])
 }

 pub fn get_uuid() -> String {
@@ -498,6 +472,23 @@ pub fn parse_date(date: &str) -> NaiveDateTime {
     DateTime::parse_from_rfc3339(date).unwrap().naive_utc()
 }

+/// Returns true or false if an email address is valid or not
+///
+/// Some extra checks instead of only using email_address
+/// This prevents from weird email formats still excepted but in the end invalid
+pub fn is_valid_email(email: &str) -> bool {
+    let Ok(email) = email_address::EmailAddress::from_str(email) else {
+        return false;
+    };
+    let Ok(email_url) = url::Url::parse(&format!("https://{}", email.domain())) else {
+        return false;
+    };
+    if email_url.path().ne("/") || email_url.domain().is_none() || email_url.query().is_some() {
+        return false;
+    }
+    true
+}
 //
 // Deployment environment methods
 //
@@ -708,7 +699,7 @@ where
             return Err(e);
         }

-        warn!("Can't connect to database, retrying: {:?}", e);
+        warn!("Can't connect to database, retrying: {e:?}");

         sleep(Duration::from_millis(1_000)).await;
     }
@@ -761,9 +752,20 @@ pub fn convert_json_key_lcase_first(src_json: Value) -> Value {
 /// Parses the experimental client feature flags string into a HashMap.
 pub fn parse_experimental_client_feature_flags(experimental_client_feature_flags: &str) -> HashMap<String, bool> {
-    let feature_states = experimental_client_feature_flags.split(',').map(|f| (f.trim().to_owned(), true)).collect();
-
-    feature_states
+    // These flags could still be configured, but are deprecated and not used anymore
+    // To prevent old installations from starting filter these out and not error out
+    const DEPRECATED_FLAGS: &[&str] =
+        &["autofill-overlay", "autofill-v2", "browser-fileless-import", "extension-refresh", "fido2-vault-credentials"];
+    experimental_client_feature_flags
+        .split(',')
+        .filter_map(|f| {
+            let flag = f.trim();
+            if !flag.is_empty() && !DEPRECATED_FLAGS.contains(&flag) {
+                return Some((flag.to_owned(), true));
+            }
+            None
+        })
+        .collect()
 }

 /// TODO: This is extracted from IpAddr::is_global, which is unstable: