Compare commits


179 Commits

Author SHA1 Message Date
Stefan Melmuk
2903a3a13a only validate SMTP_FROM if necessary (#5442) 2025-01-25 05:46:43 +01:00
Mathijs van Veluw
952992c85b Org fixes (#5438)
* Security fixes for admin and sendmail

Because the Vaultwarden Admin Backend endpoints did not validate the Content-Type during a request, it was possible to update settings via CSRF. But this was only possible if there was no `ADMIN_TOKEN` set at all. To make sure these environments are also safe, I added the needed Content-Type checks to the functions.
This may require some users who have scripts which use cURL, for example, to adjust their commands to provide the correct headers.
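
For illustration, a minimal sketch of such a check (assuming Rocket's `ContentType` type; not the actual PR code):

```rust
use rocket::http::ContentType;

// Only accept the Content-Types a legitimate admin client would send.
// A cross-site HTML form cannot produce an application/json request,
// so requiring an explicit type blocks CSRF-style posts.
fn is_allowed_content_type(ct: Option<&ContentType>) -> bool {
    match ct {
        Some(ct) => ct.is_json() || ct.is_form(),
        None => false,
    }
}
```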

By using a crafted favicon and having access to the Admin Backend, an attacker could run custom commands on the host/container Vaultwarden is running on. The main issue here is that we allowed the sendmail binary name/path to be changed. To mitigate this, we removed this configuration item so that only `sendmail` can be used as the binary name.
This could cause some issues where the `sendmail` binary is not in the `$PATH` and thus cannot be started. In these cases the admins should make sure `$PATH` is set correctly, or create a custom shell script or symlink at a location which is in the `$PATH`.

Added an extra security header and made the CSP stricter by setting `default-src` to `none` and adding the missing specific policies.

Also created a general email validation function which does some more checking to catch invalid email addresses not caught by the email_address crate.
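
A minimal sketch of what such an extra check could look like (the rules below are assumptions, not the PR's exact ones):

```rust
// Reject addresses a parsing crate may accept but that are not useful
// in practice: empty local part, or a domain without a proper dot.
fn is_valid_email(addr: &str) -> bool {
    let Some((local, domain)) = addr.rsplit_once('@') else {
        return false;
    };
    !local.is_empty()
        && domain.contains('.')
        && !domain.starts_with('.')
        && !domain.ends_with('.')
}
```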

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix security issue with organizationId validation

Because of an invalid check/validation of the OrganizationId, which most of the time is located in the path but is sometimes provided as a URL parameter, the parameter overruled the path ID during the Guard checks.
This resulted in someone being able to execute commands as an Admin or Owner of the OrganizationId fetched from the parameter, while the API endpoints then used the OrganizationId located in the path instead.

This commit fixes the extraction of the OrganizationId in the Guard and also adds some extra validations of this OrgId in several functions.

Also added an extra `OrgMemberHeaders` guard which can be used to restrict organization endpoints to members of that org only.
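
The core of the fix can be sketched like this (illustrative only; the real Guard code differs):

```rust
// The authoritative OrganizationId is the one in the URL path. A query
// parameter may be present, but it must never override the path value.
fn resolve_org_id<'a>(
    path_org_id: &'a str,
    param_org_id: Option<&str>,
) -> Result<&'a str, &'static str> {
    match param_org_id {
        Some(param) if param != path_org_id => Err("OrganizationId mismatch"),
        _ => Ok(path_org_id),
    }
}
```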

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update server version in config endpoint

Updated the server version reported to the clients to `2025.1.0`.
This should make Vaultwarden future-proof for the newer clients released by Bitwarden.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix and adjust build workflow

The build workflow had an issue with some `if` checks.
For one, they had two `$` signs; also, it is not recommended to use `always()`, since canceling a workflow does not cancel those calls.
Using `!cancelled()` is the preferred way.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update crates

Signed-off-by: BlackDex <black.dex@gmail.com>

* Allow sendmail to be configurable

This reverts a previous change which removed the ability to configure sendmail.
We now set the config item to be read-only, and omit all read-only values from being stored during a save action from the admin interface.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add more org_id checks

Added more org_id checks to all functions which use the org_id in their path.

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-25 01:32:09 +01:00
Stefan Melmuk
c0be36a17f update web-vault to v2025.1.1 and add /api/devices (#5422)
* add /api/devices endpoints

* load pending device requests

* order pending authrequests by creation date

* update web-vault to v2025.1.1
2025-01-23 12:30:55 +01:00
Mathijs van Veluw
d1dee04615 Add manage role for collections and groups (#5386)
* Add manage role for collections and groups

This commit will add the manage role/column to collections and groups.
We need this to allow users who are part of a collection, either directly or via groups, to delete ciphers.
Without this, they are only able to edit or view them when using new clients, since these check the manage role.

Still trying to keep it compatible with previous versions, so it stays possible to revert to an older Vaultwarden version and keep the `access_all` feature of the older installations.
In a future version we should really check and fix these rights and create some kind of migration step to also remove the `access_all` feature and convert that to a `manage` option.
But this commit at least creates the base for this already.

This should resolve #5367

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix an issue with access_all

If owners or admins did not have the `access_all` flag set, for example because they do not want to see all collections in the password manager view, they did not see any collections at all anymore.

This fixes that: they are still able to view all the collections and have access to them.

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-21 23:33:41 +01:00
Stefan Melmuk
ef2695de0c improve admin invite (#5403)
* check for admin invite

* refactor the invitation logic

* cleanup check for undefined token

* prevent wrong user from accepting invitation
2025-01-20 20:21:44 +01:00
Daniel
29f2b433f0 Simplify container image attestation (#5387) 2025-01-13 19:16:10 +01:00
Mathijs van Veluw
07f80346b4 Fix version detection on bake (#5382) 2025-01-11 11:54:38 +01:00
Mathijs van Veluw
4f68eafa3e Add Attestations for containers and artifacts (#5378)
* Add Attestations for containers and artifacts

This commit will add attestation actions to sign the containers and binaries which can be verified via the gh cli.
https://cli.github.com/manual/gh_attestation_verify

The binaries from both Alpine and Debian based images are extracted and attested so that you can verify the binaries of all the containers.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust attest to use globbing

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-10 21:32:38 +01:00
Integral
327d369188 refactor: replace static with const for global constants (#5260)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-01-10 21:06:38 +01:00
Mathijs van Veluw
ca7483df85 Fix an issue with login with device (#5379)
During the refactoring done in #5320, a bug slipped through which changed a uuid.
This commit fixes this, and also makes some vars pass by reference.

Fixes #5377

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-10 20:37:23 +01:00
Helmut K. C. Tessarek
16b6d2a71e build: raise msrv (1.83.0) rust toolchain (1.84.0) (#5374)
* build: raise msrv (1.83.0) rust toolchain (1.84.0)

* build: also update docker images
2025-01-10 20:34:48 +01:00
Stefan Melmuk
871a3f214a rename membership and adopt newtype pattern (#5320)
* rename membership

rename UserOrganization to Membership to clarify the relation
and prevent confusion whether something refers to a member(ship) or user

* use newtype pattern

* implement custom derive macro IdFromParam

* add UuidFromParam macro for UUIDs

* add macros to Docker build

Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>

---------

Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>
2025-01-09 18:37:23 +01:00
Mathijs van Veluw
10d12676cf Allow building with Rust v1.84.0 or newer (#5371) 2025-01-09 12:33:02 +01:00
Mathijs van Veluw
dec3a9603a Update crates and web-vault to v2025.1.0 (#5368)
- Updated the web-vault to use v2025.1.0 (pre-release)
- Updated crates

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-08 18:14:08 +01:00
Mathijs van Veluw
86aaf27659 Prevent new users/members to be stored in db when invite fails (#5350)
* Prevent new users/members when invite fails

Currently, when a (new) user is invited as a member to an org and SMTP is enabled but sending the invite fails, the user is still created.
They just never receive a mail, and admins/owners need to re-invite the member.
Since the dialog window stays on top when this fails, it practically invites you to click try again, but that will then fail, mentioning the user is already a member.

To prevent this weird flow, this commit will delete the user, invite and member if sending the mail failed.
This allows the inviter to try again if there was a temporary hiccup, for example, or to contact the server admin, and it does not leave stray users/members around.

Fixes #5349

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust deleting records

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-08 18:13:45 +01:00
Stefan Melmuk
bc913d1156 fix manager role in admin users overview (#5359)
due to the hack the returned type has changed
2025-01-07 12:47:37 +01:00
Mathijs van Veluw
ef4bff09eb Fix issue with key-rotate (#5348)
The new web-vault seems to call an extra endpoint, which looks like it is only used when passkeys can be used for login.
Since we do not support this (yet), we can just return an empty data object.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 23:00:05 +01:00
Mathijs van Veluw
4816f77fd7 Add partial role support for manager only using web-vault v2024.12.0 (#5219)
* Add partial role support for manager only

- Add the custom role which replaces the manager role
- Added mini-details endpoint used by v2024.11.1

These changes try to add the custom role in such a way that it stays compatible with the older manager role.
It will convert a manager role into a custom role, and if a manager has `access-all` rights, it will enable the correct custom roles.
Upon saving it will convert these back to the old format.

This makes sure you are able to revert to an older version of Vaultwarden without issues.
This way we can support newer web-vaults and still be compatible with a previous Vaultwarden version if needed.

In the future this needs to be changed to full role support though.

Fixed the 2FA hide CSS since the order of options has changed

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide passkey login

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide create account

Signed-off-by: BlackDex <black.dex@gmail.com>

* Small changes for v2024.12.0

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix hide create account link

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add pre-release web-vault

Signed-off-by: BlackDex <black.dex@gmail.com>

* Rename function to mention swapping uuid's

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:31:59 +01:00
Mathijs van Veluw
dfd9e65396 Refactor the uri match fix and fix ssh-key sync (#5339)
* Refactor the uri match change

Refactored the uri match fix to also convert numbers within a string to an int.
If the conversion fails, the value will be null.
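
A minimal sketch of the conversion (using `serde_json`; the actual helper may differ):

```rust
use serde_json::Value;

// Normalize a uri `match` value: keep numbers, convert numeric strings,
// and turn everything else (including unparsable strings) into null.
fn normalize_uri_match(value: Value) -> Value {
    match value {
        Value::Number(_) => value,
        Value::String(s) => s
            .parse::<i32>()
            .map(|n| Value::Number(n.into()))
            .unwrap_or(Value::Null),
        _ => Value::Null,
    }
}
```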

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix ssh-key sync issues

If any of the mandatory ssh-key json data values is not a string or is an empty string, this will break the mobile clients.
This commit fixes this by checking whether any of the values are missing or invalid and, if so, converting the json data to `null`.
This ensures the clients can sync and show the vault.
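
A minimal sketch of the check (the field names are assumptions based on the Bitwarden ssh-key cipher data):

```rust
use serde_json::Value;

// If any mandatory ssh-key field is missing, not a string, or empty,
// replace the whole data object with null so clients can still sync.
fn sanitize_ssh_key(data: Value) -> Value {
    let ok = ["privateKey", "publicKey", "keyFingerprint"]
        .into_iter()
        .all(|k| matches!(data.get(k), Some(Value::String(s)) if !s.is_empty()));
    if ok {
        data
    } else {
        Value::Null
    }
}
```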

Fixes #5343
Fixes #5322

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:11:46 +01:00
Mathijs van Veluw
b1481c7c1a Update crates and GHA (#5346)
- Updated crates to the latest version
- Updated GitHub Actions to the latest version

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-01-04 19:02:15 +01:00
Stefan Melmuk
d9e0d68f20 fix group issue in send_invite (#5321) 2024-12-31 13:28:19 +01:00
Timshel
08183fc999 Add TOTP delete endpoint (#5327) 2024-12-30 16:57:52 +01:00
Mathijs van Veluw
d9b043d32c Fix issues when uri match is a string (#5332) 2024-12-29 21:26:03 +01:00
Ephemera42
ed4ad67e73 Add inline-menu-positioning-improvements feature flag (#5313) 2024-12-20 17:49:46 +01:00
Mathijs van Veluw
a523c82f5f Use updated fern instead of patch (#5298)
Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-15 23:13:29 +01:00
Mathijs van Veluw
4d6d3443ae Allow adding connect-src entries (#5293)
Bitwarden allows the use of self-hosted email forwarding services.
But for this to work you need to add custom URLs to the `connect-src` CSP entry.

This commit allows setting this, and checks that each URL starts with `https://`, else it will abort loading.
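
A minimal sketch of that validation (illustrative; the real config check may differ in shape):

```rust
// Every extra connect-src entry must be an https URL; anything else
// aborts loading with an error.
fn validate_connect_src(entries: &str) -> Result<(), String> {
    for url in entries.split_whitespace() {
        if !url.starts_with("https://") {
            return Err(format!("connect-src entry must start with https://: {url}"));
        }
    }
    Ok(())
}
```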

Fixes #5290

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-15 00:27:20 +01:00
Mathijs van Veluw
9cd400db6c Some refactoring and optimizations (#5291)
- Refactored several pieces of code to use more modern syntax
- Made some checks a bit more strict
- Updated crates

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-14 00:55:34 +01:00
Helmut K. C. Tessarek
fd51230044 feat: mask _smtp_img_src in support string (#5281) 2024-12-12 14:35:07 +01:00
Mathijs van Veluw
45e5f06b86 Some Backend Admin fixes and updates (#5272)
* Some Backend Admin fixes and updates

- Updated datatables
- Added an `X-Robots-Tags` header to prevent indexing
- Modified some layout settings
- Added Websocket check to diagnostics
- Added Security Header checks to diagnostics
- Added Error page response checks to diagnostics
- Modified support string layout a bit

Signed-off-by: BlackDex <black.dex@gmail.com>

* Some small fixes

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-10 21:52:12 +01:00
Daniel
620ad92331 Update crates (#5268)
- fixes CVE-2024-12224
2024-12-10 17:59:28 +01:00
Mathijs van Veluw
c9860af11c Fix another sync issue with native clients (#5259)
The `reprompt` value somehow sometimes has a value of `4`.
This isn't a valid value, and doesn't cause issues with other clients, but the native clients are more strict.

This commit fixes this by validating the value before storing and returning.
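
A minimal sketch of such a validation (0 and 1 are Bitwarden's known reprompt types: none and master-password re-prompt):

```rust
// Anything outside the known reprompt values, such as the stray 4,
// falls back to the default of 0 (no re-prompt).
fn normalize_reprompt(value: i32) -> i32 {
    if value == 0 || value == 1 {
        value
    } else {
        0
    }
}
```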

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-08 21:48:19 +01:00
Daniel
d7adce97df Update Alpine to version 3.21 (#5256) 2024-12-06 11:05:52 +01:00
Mathijs van Veluw
71b3d3c818 Update Rust and crates (#5248)
* Update Rust and crates

- Updated Rust to v1.83.0
- Updated MSRV to v1.82.0 (Needed for html5gum crate)
- Updated icon fetching code to match new html5gum version
- Updated workflows
- Enabled edition 2024 clippy lints
  Nightly reports some clippy hints, but that would be too much to change in this PR, I think.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Some additional updates

- Patch fern to allow syslog-7 feature
- Fixed diesel logger which was broken because of the sqlite backup feature
  Refactored the sqlite backup because of this
- Added a build workflow test to include the query_logger feature

Signed-off-by: BlackDex <black.dex@gmail.com>

* Also patch yubico-rs and latest updates

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-12-05 22:10:59 +01:00
chuangjinglu
da3701c0cf chore: fix some comments (#5224)
Signed-off-by: chuangjinglu <chuangjinglu@outlook.com>
2024-11-25 18:35:00 +01:00
Mathijs van Veluw
96813b1317 Fix editing members which have access-all rights (#5213)
With web-vault v2024.6.2 and lower, if a user has access-all rights, either as an org-member or via a group, individual collections shouldn't be returned.

This probably needs to be changed with newer versions which do not support the `access-all` feature anymore and work with manage.
But with the current version this should solve access right issues.

Fixes #5212

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-20 17:38:16 +01:00
Mathijs van Veluw
b0b953f348 Fix push not working (#5214)
The new native mobile clients seem to use PascalCase for the push payload.
Also the date/time could cause issues.

This PR fixes this by formatting the date/time correctly and using PascalCase for the payload keys.
I now receive cipher updates and login-with-device requests again.

Fixes #5182

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-20 17:32:44 +01:00
Mathijs van Veluw
cdfdc6ff4f Fix Org Import duplicate collections (#5200)
This fixes an issue with collections being duplicated, the same issue as previously with folders.
Also made some optimizations by using HashSet where possible and defining the Vec/Hash capacity up front.
And instead of passing objects, only the UUID is used, since that was the only value we needed.

Also found an issue with importing a personal export via the Org import where folders are used.
Since orgs do not use folders we needed to clear those out, the same as Bitwarden does.

Fixes #5193

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-17 21:33:23 +01:00
Daniel García
2393c3f3c0 Support SSH keys on desktop 2024.12 (#5187)
* Support SSH keys on desktop 2024.12

* Document flags in .env.template

* Validate key rotation contents
2024-11-15 18:38:16 +01:00
Daniel García
0d16b38a68 Some more authrequest changes (#5188) 2024-11-15 11:25:51 +01:00
Stefan Melmuk
ff33534c07 don't infer manage permission for groups (#5190)
the web-vault v2024.6.2 currently cannot deal with manage permission so
instead of relying on the org user type this should just default to false
2024-11-13 19:19:19 +01:00
Stefan Melmuk
adb21d5c1a fix password hint check (#5189)
* fix password hint check

don't show password hints if you have disabled the hints with
PASSWORD_HINTS_ALLOWED=false or if you have not configured mail and
have not opted into showing password hints

* update descriptions for pw hints options
2024-11-12 21:22:25 +01:00
Mathijs van Veluw
e927b8aa5e Remove auth-request deletion (#5184)
2FA is needed to log in even when using login-with-device.
If the user didn't save the 2FA token, they still need to provide it.
We deleted the auth-request after validating the request, but before 2FA was triggered.

This removes the deletion of this record at that point, as it will get cleaned up automatically anyway.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-12 15:48:39 +01:00
Mathijs van Veluw
ba48ca68fc fix hibp username encoding and pw hint check (#5180)
* fix hibp username encoding

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix password-hint check

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-12 11:09:28 +01:00
Mathijs van Veluw
294b429436 Add dynamic CSS support (#4940)
* Add dynamic CSS support

Together with https://github.com/dani-garcia/bw_web_builds/pull/180 this PR will add support for dynamic CSS changes.

For example, we could hide the register link if signups are not allowed.
In the future we could show or hide the SSO button depending on whether it is enabled or not.

There is also a special `user.vaultwarden.scss` file so that users can add custom CSS without needing to modify the default (static) styles.
This prevents future changes from not being applied while still keeping the custom user changes.

Also added a special redirect when someone goes directly to `/index.html` as that might cause issues with loading other scripts and files.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add versions and fallback to built-in

- Add both Vaultwarden and web-vault versions to the css_options.
- Fall back to the built-in templates if rendering or compiling the scss fails.
  This ensures the basics keep working even if someone breaks the templates.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Fix fallback code to actually work

The fallback now works by using an alternative `reg!` macro.
This adds an extra template register which prefixes the template with `fallback_`.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Updated the wiki link in the user template

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-11 20:14:04 +01:00
Daniel García
37c14c3c69 More authrequest fixes (#5176) 2024-11-11 20:13:02 +01:00
Mathijs van Veluw
d0581da638 Fix if logic error (#5171)
Fixing a logical error in an if statement where we used `&&` which should have been `||`.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-11 11:50:33 +01:00
Daniel García
38aad4f7be Limit HIBP to authed users 2024-11-10 23:59:06 +01:00
BlackDex
20d9e885bf Update crates and fix several issues
Signed-off-by: BlackDex <black.dex@gmail.com>
2024-11-10 23:56:19 +01:00
Mathijs van Veluw
2f20ad86f9 Update README (#5153)
Updating the Readme to be more modern and clearer.
Added and moved several shields/badges and changed some default colors to have a better contrast.
Added a Disclaimer section.

Closes #4901
Closes #4930
Closes #4931
Closes #5024

Co-authored-by: ipitio <21136719+ipitio@users.noreply.github.com>
Co-authored-by: Robert Schütz <github@dotlambda.de>
Co-authored-by: Yonas Yanfa <yonas.y@gmail.com>
Co-authored-by: KUSUMA RUSHIKESH <141169227+rushi-k12@users.noreply.github.com>
2024-11-02 22:20:10 +01:00
Mathijs van Veluw
33bae5fbe9 Update crates and fix Mail issue (#5125)
- Updated all the crates
  Included in this update is an update of lettre, which solves an issue with some specific SMTP mail providers.
2024-10-24 19:13:20 +02:00
Daniel
f60502a17e Add documentation for the extension-refresh feature flag (#5112) 2024-10-21 00:05:11 +02:00
Mathijs van Veluw
13f4b66e62 Hide user name on invite status (#5110)
Fixed a possible user disclosure when you invite a user who already has an account on the same instance into an organization.
This happened because we always returned the user's name.
To prevent this, this PR only returns the user's name if the status is accepted or higher; otherwise we return null.
This is the same as Bitwarden does.

Resolves a reported issue.

Also resolved a new nightly-reported clippy lint regarding a regex within a loop.
2024-10-19 18:22:21 +02:00
Daniel
c967d0ddc1 Add extension-refresh feature flag (#5106)
- in case people want to try out the new extension design
2024-10-19 18:21:00 +02:00
Mathijs van Veluw
ae6ed0ece8 Fix collection management and match some json output (#5095)
- Fixed collection management to be usable from the Password Manager UI
- Checked several json responses and brought them in sync with upstream
- Fixed a small issue with the `fields` response when it was empty

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-18 20:37:32 +02:00
Daniel
b7c254eb30 Update Rust to 1.82.0 (#5099)
- raise MSRV to 1.80.0
- also update the crates
2024-10-18 20:34:31 +02:00
Mathijs van Veluw
a47b484172 Fix org invite url being html encoded (#5100)
Ever since we changed to passing the full url as a template value, handlebars html-encodes it.
This causes issues with the text/plain mails, but it could potentially also cause issues with the text/html templates.

This PR encloses the template values in triple braces `{{{ }}}`, which prevents html-encoding.
Since the URL is generated via the `url` crate, the values are percent-encoded anyway.
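
A small demo of the difference, using the `handlebars` crate directly:

```rust
use handlebars::Handlebars;
use serde_json::json;

fn main() {
    let hb = Handlebars::new();
    let data = json!({ "url": "https://example.com/?a=1&b=2" });
    // Double braces HTML-escape the value...
    let escaped = hb.render_template("{{url}}", &data).unwrap();
    assert_eq!(escaped, "https://example.com/?a=1&amp;b=2");
    // ...triple braces emit it verbatim, which plain-text mails need.
    let raw = hb.render_template("{{{url}}}", &data).unwrap();
    assert_eq!(raw, "https://example.com/?a=1&b=2");
}
```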

Fixes #5097

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-18 20:34:11 +02:00
Mathijs van Veluw
65629a99f0 Fix field type to actually be hidden (#5082)
In an oversight, I forgot to set the type to the hidden type when converting to an int was not possible.
This fixes that.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-13 20:32:15 +02:00
Mathijs van Veluw
49c5dec9b6 Fix iOS sync by converting field types to int (#5081)
It seems the iOS clients are not able to handle the `type` key within the `fields` array when it is of type string.
All other clients seem to handle this just fine though.

This PR fixes this by validating that it is a number; if not, it tries to convert the string to a number, or returns the default of `1`.
`1` is used as this is the `hidden` type and should prevent accidental data disclosure.
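
A minimal sketch of the conversion (with `serde_json`; illustrative, not the PR's exact code):

```rust
use serde_json::Value;

// Force the custom field `type` to a number. Numeric strings are
// converted; anything else falls back to 1 (the hidden type) to avoid
// accidental data disclosure.
fn normalize_field_type(value: &Value) -> i64 {
    match value {
        Value::Number(n) => n.as_i64().unwrap_or(1),
        Value::String(s) => s.parse::<i64>().unwrap_or(1),
        _ => 1,
    }
}
```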

Fixes #5069

Possibly Fixes #5016
Possibly Fixes #5002

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-13 20:25:09 +02:00
Mathijs van Veluw
cd195ff243 Fix --version from failing without config (#5055)
* Fix `--version` from failing without config

Since we added the option to also show the web-vault version when running `--version`, the config was always validated.
While this is not very bad in general, it could cause the command to quit during the config validation and show errors instead of the version.
That is probably not very useful for this specific command, unlike `--backup` for example.

To fix this and prevent the config from being validated, I added an AtomicBool to check if we need to validate the config on first load.
This prevents errors, and will just show the Vaultwarden version, and if possible the web-vault version too.
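
A minimal sketch of the idea (names are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Global flag: when set, the config loader skips validation, so
// `--version` can print versions without aborting on config errors.
static SKIP_CONFIG_VALIDATION: AtomicBool = AtomicBool::new(false);

fn load_config() {
    if !SKIP_CONFIG_VALIDATION.load(Ordering::Relaxed) {
        // validate_config() would run here and may exit on errors.
    }
}

fn print_version() {
    SKIP_CONFIG_VALIDATION.store(true, Ordering::Relaxed);
    load_config(); // loads the config without validating it
}
```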

Fixes #5046

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjusted the code based upon review

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-10-11 18:58:25 +02:00
Mathijs van Veluw
e3541763fd Updates and collection management fixes (#5072)
* Fix collections not editable by managers

Since a newer version of the web-vault, managers were not able to create sub-collections anymore.
This was because of some missing details in the response of some json objects.

This commit fixes this by using `to_json_details` instead of `to_json`.

Fixes #5066
Fixes #5044

* Update crates and GitHub Actions

- Updated all the crates
- Updated all the GHA dependencies
- Configured the trivy workflow to only run on the main repo and not on forks
  Also selected a random new scheduled date so it will not run at the same time as all other forks.
  The two changes should help run this scan every day without failing, and also prevent the same for new or updated forks.
2024-10-11 18:42:40 +02:00
Mathijs van Veluw
f0efec7c96 Fix compiling for Windows targets (#5053)
The `unix::signal` was also included during Windows compilations.
This of course will not work. Fix this by only including it for `unix` targets.

Also changed all other conditional compilation options to use `cfg(unix)` instead of `cfg(not(windows))`.
The latter may also include `wasm` for example, or any other future target family.
This way we will only match `unix`.

Fixes #5052
2024-10-06 13:49:00 +02:00
Mathijs van Veluw
040e2a7bb0 Add extra linting (#4977)
* Add extra linting

Added extra linting for some code styles.
Also added the Rust Edition 2024 lints.

Closes #4974

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjusted according to comments

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-09-23 20:25:32 +02:00
Daniel García
d184c8f08c Fix keyword collision in Rust 2024 and add new api/config value (#4975)
* Avoid keyword collision with gen in Rust 2024

* Include new api/config setting to disable user registration, not yet used by clients

* Actually qualify CONFIG
2024-09-20 21:39:00 +02:00
Mathijs van Veluw
7d6dec6413 Fix encrypted lastUsedDate (#4972)
It appears that some password histories have an encrypted value for `lastUsedDate`.
Instead of only checking if it is a string, we also check if it is a valid RFC date/time string.
If not, it is also set to epoch 0.
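
A minimal sketch of the check (using `chrono`; RFC 3339 is assumed as the concrete date/time format):

```rust
use chrono::DateTime;
use serde_json::{json, Value};

// Keep `lastUsedDate` only if it parses as an RFC 3339 date/time;
// anything else (e.g. an encrypted blob) is reset to epoch 0.
fn normalize_last_used(value: &Value) -> Value {
    match value.as_str().map(DateTime::parse_from_rfc3339) {
        Some(Ok(_)) => value.clone(),
        _ => json!("1970-01-01T00:00:00.000000Z"),
    }
}
```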

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-09-20 20:38:42 +02:00
Mathijs van Veluw
de01111082 Fix Device Type column for 2FA migration (#4971) 2024-09-20 12:06:06 +02:00
Stefan Melmuk
0bd8f607cb remove backtics from postgresql migrations (#4968) 2024-09-19 19:18:09 +02:00
Daniel
21efc0800d Actually use Device Type for mails (#4916)
- match Bitwarden behaviour
- add a different segment in mails for Device Name
2024-09-18 19:03:15 +02:00
Stefan Melmuk
1031c2e286 fix 2fa policy check on registration (#4956) 2024-09-18 19:00:10 +02:00
Mathijs van Veluw
1bf85201e7 Fix Pw History null dates (#4966)
It seemed to have been possible to have `null` date values.
This PR fixes this by setting the epoch start date if either the date does not exist or is not a string.

This should solve sync issues with the new native mobile clients.

Fixes https://github.com/dani-garcia/vaultwarden/pull/4932#issuecomment-2357581292

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-09-18 18:57:08 +02:00
Stefan Melmuk
6ceed9284d fix invitation link via /admin (#4950) 2024-09-13 22:08:59 +02:00
Mathijs van Veluw
25d99e3506 Fix collection update from native client (#4937) 2024-09-10 21:33:59 +02:00
Mathijs van Veluw
dca14285fd Fix sync with new native clients (#4932) 2024-09-09 11:36:37 +02:00
Daniel
66baa5e7d8 Update Rust version & crates (#4928) 2024-09-07 10:39:29 +02:00
Timshel
248e561b3f Add orgUserHasExistingUser parameters to org invite (#4827) 2024-09-01 15:55:41 +02:00
Mathijs van Veluw
55623ad9c6 Update web-vault, crates and gha (#4909)
- Updated the web-vault to fix an issue with personal export.
  Thanks to @stefan0xC for patching this.
  Fixes #4875
- Updated crates to their latest version
- Updated the GitHub Actions
- Updated the xx image to the latest version

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-09-01 15:53:03 +02:00
Mathijs van Veluw
e9acd8bd3c Add a CLI feature to backup the SQLite DB (#4906)
* Add a CLI feature to backup the SQLite DB

Many users have requested adding the sqlite3 binary to the container image.
This isn't really ideal as that might bring in other dependencies and will only bloat the image.
Their main reason is to create a backup of the database.

While there already was a feature within the admin interface to do so (or by using the admin API call), this might not be easy.

This PR adds several ways to generate a backup.
1. By calling the Vaultwarden binary with the `backup` command like:
  - `/vaultwarden backup`
  - `docker exec -it vaultwarden /vaultwarden backup`
2. By sending the USR1 signal to the running process like:
  - `kill -s USR1 $(pidof vaultwarden)`
  - `killall -s USR1 vaultwarden`

This should help users to more easily create backups of their SQLite database.
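
A minimal sketch of the signal half (assuming tokio with the `signal` feature; `backup_sqlite` is a hypothetical placeholder):

```rust
use tokio::signal::unix::{signal, SignalKind};

// Listen for SIGUSR1 and run a backup each time the signal arrives.
async fn backup_on_usr1() -> std::io::Result<()> {
    let mut usr1 = signal(SignalKind::user_defined1())?;
    while usr1.recv().await.is_some() {
        backup_sqlite();
    }
    Ok(())
}

// Hypothetical placeholder for the actual backup routine, e.g. copying
// the database with SQLite's `VACUUM INTO` into a timestamped file.
fn backup_sqlite() {}
```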

Also added the Web-Vault version number when using `-v/--version` to the output.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Spelling and small adjustments

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-09-01 15:52:29 +02:00
Mathijs van Veluw
544b7229e8 Allow enforcing Single Org with pw reset policy (#4903)
* Allow enforcing Single Org with pw reset policy

Bitwarden only allows the Reset Password policy to be set when the Single Org policy is enabled already.
This PR adds a check so that this can be enforced when a config option is enabled.

Since Vaultwarden encouraged using multiple orgs when groups were not yet available, we should not enable this by default now.
This might be something to do in the future.

When enabled, it will prevent the Reset Password policy from being enabled if the Single Org policy is not enabled.
It will also prevent the Single Org policy from being disabled if the Reset Password policy is enabled.

Fixes #4855

Signed-off-by: BlackDex <black.dex@gmail.com>

* Removed some extra if checks

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-30 21:37:59 +02:00
Mathijs van Veluw
978f009293 Allow Org Master-Pw policy enforcement (#4899)
* Allow Org Master-Pw policy enforcement

We didn't return the master password policy for the user.
If the `Require existing members to change their passwords` check was enabled, this should trigger the login to show a change-password dialog.

During the login response, all the master password policies of the orgs where the user is an accepted member are merged into one, containing the max values and all `true` values set by the different orgs.
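
A minimal sketch of that merge (simplified; numeric fields keep the maximum, boolean fields are OR-ed):

```rust
use serde_json::Value;

// Merge the master password policies of all orgs the user is an
// accepted member of into a single policy object.
fn merge_policies(policies: Vec<Value>) -> Option<Value> {
    policies.into_iter().reduce(|mut acc, pol| {
        if let (Some(a), Some(p)) = (acc.as_object_mut(), pol.as_object()) {
            for (k, v) in p {
                let replace = match (a.get(k), v) {
                    (None, _) => true,
                    (Some(Value::Number(x)), Value::Number(y)) => y.as_i64() > x.as_i64(),
                    (_, Value::Bool(true)) => true,
                    _ => false,
                };
                if replace {
                    a.insert(k.clone(), v.clone());
                }
            }
        }
        acc
    })
}
```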

Fixes #4507

Signed-off-by: BlackDex <black.dex@gmail.com>

* Use .reduce instead of .fold

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-27 19:37:51 +02:00
Mathijs van Veluw
92f1530e96 Allow custom umask setting (#4896)
To provide a way to add more security regarding file/folder permissions,
this PR adds a way to allow setting a custom `UMASK` variable.

This allows people to set a more secure default, like only allowing the
owner of the process/container to read/write files and folders.

Examples:
 - `UMASK=022` File: 644 | Folder: 755 (Default of the containers)
   This means Owner read/write and group/world read-only
 - `UMASK=027` File: 640 | Folder: 750
   This means Owner read/write, group read-only, world no access
 - `UMASK=077` File: 600 | Folder: 700
   This means Owner read/write and group/world no access
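
A minimal sketch of applying the variable (unix-only; assumes the `libc` crate):

```rust
// Read UMASK from the environment, parse it as an octal number and
// apply it before any files or folders are created.
fn apply_umask() {
    if let Ok(raw) = std::env::var("UMASK") {
        if let Ok(mask) = u32::from_str_radix(&raw, 8) {
            // umask(2) always succeeds; it returns the previous mask.
            unsafe { libc::umask(mask as libc::mode_t) };
        }
    }
}
```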

resolves #4571

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-27 19:37:17 +02:00
Mathijs van Veluw
2b824e8096 Updated security readme (#4892)
Update the security readme with a new GPG security key and some small
other changes.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-24 18:59:27 +02:00
Mathijs van Veluw
059661be48 Update crates (GHSA-wq9x-qwcq-mmgf) (#4889)
- Updated crates
- Fixed MSRV to actually be N-2
- Changed some features to use the `dep:` prefix.
  This is needed for edition-2024 anyway, although it will be a while before we can use that.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-23 22:06:11 +02:00
Mathijs van Veluw
0f3f97cc76 Update issue template (#4882)
Updated the issue template a bit regarding some remarks on the previous PR.

Also made it so that collapsing all items will show the specific
item IDs instead of their types. Easy for editing :).

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-23 22:03:57 +02:00
philomathic_life
aa0fe7785a Remove version from server config info (#4885) 2024-08-22 21:42:30 +02:00
Timshel
65d11a9720 Switch to Whitelisting in .dockerignore (#4856) 2024-08-21 21:59:17 +02:00
Mathijs van Veluw
c722006385 Fix Login with device (#4878)
Fixed an issue with login with device for the new Bitwarden Beta clients.
They seem to not support ISO8601 milli date/time, only micro.

Also updated the device display names to match upstream and added the
CLI devices which were missing.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-21 21:57:52 +02:00
Stefan Melmuk
aaab7f9640 remove overzealous sanity check (#4879)
when cloning an item from an organization to the personal vault
the client sends the collection id of the cloned item
2024-08-21 21:54:13 +02:00
Mathijs van Veluw
cbdb5657f1 Update issue template (#4876)
Updated the issue template to use a form and guide users to provide all
information useful to troubleshoot issues.

Also updated links to prefer the usage of GitHub Discussions.

Signed-off-by: BlackDex <black.dex@gmail.com>
2024-08-20 21:25:28 +02:00
Mathijs van Veluw
669b9db758 Fix Vaultwarden Admin page error messages (#4869)
Since the change to camelCase variables, the error messages in the
Vaultwarden Admin were not shown correctly anymore.

This PR fixes this by changing the case of the json keys.
Also updated the save and delete of the config to provide a more
descriptive error instead of only `Io` or whichever other error might
occur.

Fixes #4834
2024-08-18 21:04:22 +02:00
Timshel
3466a8040e Remove unnecessary email normalization (#4840) 2024-08-17 22:48:59 +02:00
Daniel
7d47155d83 Update email footer padding values (#4838)
- looks better, the Github logo was too close to the bottom
- also fix a minor issue in the new device login HTML template
2024-08-17 22:48:10 +02:00
Mathijs van Veluw
9e26014b4d Fix manager in web-vault v2024.6.2 for collections (#4860)
The web-vault v2024.6.2 we use needs some extra information to allow
managers to actually be able to manage collections.

The v2024.6.2 web-vault has somewhat of a mixture of the newer roles and
older manager roles. To at least fix this for the web-vault we bundle,
these changes will make the manager able to manage.

For future web-vaults we would need a lot more changes to fix
this in a better way though.

Fixes #4844
2024-08-15 12:36:00 +02:00
Mathijs van Veluw
339612c917 Fix Duo Redirect not using path (#4862)
The URL crate treats `https://domain.tld/path` differently than
`https://domain.tld/path/`: the latter makes sure a `.join()` will
append the given path instead of replacing the last segment of the base.
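
A small demo of that behaviour with the `url` crate:

```rust
use url::Url;

fn main() {
    let no_slash = Url::parse("https://domain.tld/path").unwrap();
    let with_slash = Url::parse("https://domain.tld/path/").unwrap();
    // Without a trailing slash, join() replaces the last segment...
    assert_eq!(no_slash.join("cb").unwrap().as_str(), "https://domain.tld/cb");
    // ...with it, join() appends the new segment.
    assert_eq!(with_slash.join("cb").unwrap().as_str(), "https://domain.tld/path/cb");
}
```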

Fixes #4858
2024-08-15 12:29:51 +02:00
Mathijs van Veluw
9eebbf3b9f Update GitHub Action Workflows (#4849) 2024-08-13 12:52:07 +02:00
Mathijs van Veluw
b557c11724 Fix data disclosure on organization endpoints (#4837)
- All users were able to request organizational details from any org,
  even if they were not a member (anymore).
  Now it will check if that user is a member of the org or not.
- The `/organization/<uuid>/keys` endpoint also returned the private keys.
  This should not be the case. Also, according to the upstream server
  code the endpoint changed, but the clients do not seem to use it.
  I added it anyway just in case they will in the future.
- Also require a valid login before being able to retrieve those org
  keys. Upstream does not do this, but I see no reason why not.

Fixes: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-39925
2024-08-11 19:39:56 +02:00
Daniel
a1204cc935 Update Rust to 1.80.1 (#4831) 2024-08-09 11:52:56 +02:00
Mathijs van Veluw
1ea511cbfc Updated web-vault to v2024.6.2b (#4826) 2024-08-08 17:04:04 +02:00
Mathijs van Veluw
2e6a6fa39f Update crates, web-vault and fixes (#4823)
* Update crates, web-vault and fixes

- Updated crates
- Updated web-vault to v2024.6.2
  This version is currently the latest version compatible with our API implementation.
  For newer versions we need more code updates to make it compatible.
  Thanks to @stefan0xC this version fixes #4628
- Added a small fix to prevent errors in the Vaultwarden and Client logs.
  The v2024.6.2 web-vault calls an endpoint with invalid arguments.
  If this happens we ignore the call and just return an Ok.
- Added the bulk-collection endpoint (Though not yet available in v2024.6.2)

Fixes #4628

* Prevent bulk remove collections from working
2024-08-07 22:46:03 +02:00
Daniel
e7d5c17ff7 Fix mail::send_incomplete_2fa_login panic issue (#4792)
- fixes https://github.com/dani-garcia/vaultwarden/issues/4528
2024-08-07 22:45:41 +02:00
Daniel
a7be8fab9b Remove lowercase conversion for featureStates (#4820)
- needed to match Bitwarden, some of the feature flags might have uppercase characters (for example: ```PM-4154-bulk-encryption-service```)
2024-08-07 21:55:58 +02:00
Stefan Melmuk
39d4d31080 make access_all optional (#4812)
* make access_all optional

* use #[serde(default)] instead of unwrapping
2024-08-01 19:45:42 +02:00
Mathijs van Veluw
c28246cf34 Secure send file uploads (#4810)
Currently there are no checks done during the actual upload of the file of a send item.
This PR adds several checks to make sure it only accepts the correct uploads.
2024-07-31 15:24:15 +02:00
Daniel
d7df0ad79e Rewrite the Push Notifications section in the configuration template (#4805)
- also update the European Union related information for a working setup
- fixes https://github.com/dani-garcia/vaultwarden/issues/4609
2024-07-30 19:42:56 +02:00
Stefan Melmuk
7c8ba0c232 fix issue with adding ciphers to organizations on native ios app (#4800)
* add organizationID alias for native ios

* add reverse sanity check
2024-07-30 11:50:05 +02:00
Daniel
d335187172 Update rust-toolchain.toml to 1.80.0 (#4784) 2024-07-25 21:23:09 +02:00
Timshel
f858523d92 Duo: use the formatted db email (#4779) 2024-07-25 20:25:44 +02:00
Mathijs van Veluw
529c39c6c5 Update Rust, Crates and GHA (#4783)
- Update Rust to v1.80.0
- Updated GitHub Actions
- Updated crates
2024-07-25 20:24:27 +02:00
Mathijs van Veluw
b428481ac0 Allow to increase the note size to 100_000 (#4772)
This PR adds a config option to allow the note size to increase to 100_000, instead of the default 10_000.
Since this might cause issues with the clients (in the future), and will cause issues with importing into a Bitwarden server, I added warnings regarding this.

Closes #3168
2024-07-24 21:49:01 +02:00
0x0fbc
b4b2701905 Add support for MFA with Duo's Universal Prompt (#4637)
* Add initial working Duo Universal Prompt support.

* Add db schema and models for Duo 2FA state storage

* store duo states in the database and validate during authentication

* cleanup & comments

* bump state/nonce length

* replace stray use of TimeDelta

* more cleanup

* bind Duo oauth flow to device id, drop redundant device type handling

* drop redundant alphanum string generation code

* error handling cleanup

* directly use JWT_VALIDITY_SECS constant instead of copying it to DuoClient instances

* remove redundant explicit returns, rustfmt

* rearrange constants, update comments, error message

* override charset on duo state column to ascii for mysql

* Reduce twofactor_duo_ctx state/nonce column size in postgres and maria

* Add fixes suggested by clippy

* rustfmt

* Update to use the make_http_request

* Don't handle OrganizationDuo

* move Duo API endpoint fmt strings out of macros and into format! calls

* Add missing indentation

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>

* remove redundant expiry check when purging Duo contexts

---------

Co-authored-by: BlackDex <black.dex@gmail.com>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-07-24 16:50:35 +02:00
Timshel
de66e56b6c Allow to override log level for specific target (#4305) 2024-07-24 16:49:03 +02:00
Stefan Melmuk
ecfebaf3c7 allow re-invitations of existing users (#4768)
* allow re-invitations of existing users

* auto-accept existing user if mail is disabled

Apply suggestions from code review

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>

---------

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2024-07-24 00:32:46 +02:00
Daniel
0e53f58288 Fix Dockerfile linter warnings (#4763)
- they seem to have started appearing with buildx v0.16.0
- skip lint check for FromPlatformFlagConstDisallowed and RedundantTargetPlatform
2024-07-24 00:28:07 +02:00
Daniel
bc7ceb2ee3 Update crates & fix crate vulnerability (#4771)
- fixes GHSA-q445-7m23-qrmw by updating openssl to version 0.10.66
2024-07-24 00:26:39 +02:00
Mathijs van Veluw
b27e6e30c9 Fix Email 2FA login on native app (#4762) 2024-07-17 16:20:54 +02:00
Mathijs van Veluw
505b30eec2 Fix for RSA Keys which are read only (#4744)
* Fix for RSA Keys which are read only

Sometimes an RSA key file could be read-only.
We currently failed because we also wanted to write.
Added an extra check whether the file already exists and is not 0 in size.
If it already exists and is larger than 0, then open it in read-only
mode.

Fixes #4644

* Updated code to work atomically

- Changed the code to work atomically
- Also show the alert generated from `Io`

* Fix spelling
2024-07-17 12:59:22 +02:00
Mathijs van Veluw
54bfcb8bc3 Update admin interface (#4737)
- Updated datatables
- Set Cookie Secure flag if the connection is https
- Prevent possible XSS via Organization Name
  Converted all `innerHTML` and `innerText` to the Safe Sink version `textContent`
- Removed `jsesc` function as handlebars escapes all these chars already and more by default
2024-07-12 22:59:48 +02:00
Daniel García
035f694d2f Improved HTTP client (#4740)
* Improved HTTP client

* Change config compat to use auto, rename blacklist

* Fix wrong doc references
2024-07-12 22:33:11 +02:00
Coby Geralnik
a4ab014ade Fix bug where secureNotes is empty (#4730) 2024-07-10 22:13:55 +02:00
Calvin Li
6fedfceaa9 chore: Dockerfile to Remove port 3012 (#4725) 2024-07-10 18:40:29 +02:00
Stefan Melmuk
8e8483481f use a custom plan of enterprise tier to fix limits (#4726)
* use a custom plan of enterprise tier to fix limits

* set maxStorageGb limit to max signed int value
2024-07-10 17:25:41 +02:00
Mathijs van Veluw
d04b94b77d Some fixes for emergency access (#4715)
- Add missing `Headers` parameter for some functions
   Without it, these endpoints allowed any request without validating the user correctly.
 - Changed the functions to retrieve the emergency access record by
   using the UUID of the user calling the endpoint, instead of validating afterwards.
   This is more secure and prevents the need for an if check.
2024-07-08 23:39:22 +02:00
Mathijs van Veluw
247d0706ff Update crates and web-vault (#4714)
- Updated the crates
   Removed the patch for mimalloc
 - Updated the web-vault to v2024.5.1b

The reason for not updating to v2024.6.x is that there are several items
that do not work correctly or need some more research.
2024-07-08 23:27:48 +02:00
Daniel
0e8b410798 Switch registry cache compression algorithm to zstd (#4704)
- faster builds than with gzip (the default)
2024-07-08 23:27:39 +02:00
Stefan Melmuk
fda77afc2a add group support for Cipher::get_collections() (#4592)
* add group support for Cipher::get_collections()

join group infos assigned to a collection to check
whether user has been given access to all collections via any group
or they have access to a specific collection via any group membership

* fix Collection::is_writable_by_user()

prevent side effects if groups are disabled

* differentiate the /collection endpoints

* return cipherDetails on post_collections_update()

* add collections_v2 endpoint
2024-07-04 20:28:19 +02:00
Daniel
d9835f530c Remove duplicate registry step (#4703) 2024-07-04 19:57:49 +02:00
Mathijs van Veluw
bd91964170 Fix duplicate folder creations during import (#4702)
During import you are able to select an existing folder, or with
Bitwarden exports the file can already contain existing folders. In either
case it didn't matter: we always created new folders.

Bitwarden uses the same UUID of the selected or existing folders if they
are already there.

This PR fixes this by using the same behaviour.

Fixes #4700
2024-07-04 19:57:32 +02:00
Mathijs van Veluw
d42b264a93 Fix collections and native app issue (#4685)
Collections were not visible in the organization view.
This was because the `flexibleCollections` value was set to `true`.

Found an issue with loading some previously created Secure Notes which had `{}` or `{"type":null}` as their `data` value.
This isn't allowed. When detected, we replace it with `{"type":0}`.

Fixes #4682
Fixes #4590
2024-07-03 21:11:04 +02:00
Daniel García
a4c7fadbf4 Change some missing PascalCase keys (#4671) 2024-06-24 21:17:59 +02:00
Daniel
8e2a87fd79 Remove mimalloc workaround (#4606)
- libatomic linking for armv6 has been fixed in 992c9da4c5
2024-06-24 19:44:21 +02:00
Daniel García
4233dbf3db Fix cipher creation on new android app (#4670) 2024-06-24 19:44:06 +02:00
Daniel García
a2bf8def2a Change API and structs to camelCase (#4386)
* Change API inputs/outputs and structs to camelCase

* Fix fields and password history

* Use convert_json_key_lcase_first

* Make sends lowercase

* Update admin and templates

* Update org revoke

* Fix sends expecting size to be a string on mobile

* Convert two-factor providers to string
2024-06-23 21:31:02 +02:00
Daniel García
8f05a90b96 Fix some more nightly errors and remove lint that will become an error by default (#4661) 2024-06-20 20:25:40 +02:00
Daniel García
9082e7cebb Fix some nightly build errors (#4657) 2024-06-20 09:35:52 +02:00
Mathijs van Veluw
55fdee3bf8 Update crates, web-vault and GHA (#4648)
- Updated all crates including Diesel and the new mysqlclient-sys
- Updated the MSRV to v1.78 as that is what Diesel mandates
- Added the mimalloc crate as a patch for now to fix armv6 static builds
  This probably makes #4606 possible
- Updated web-vault to v2024.5.1
- Updated GitHub Actions
  Fixed an issue with the localhost images for extracting the musl binaries.
2024-06-19 13:06:58 +02:00
Daniel García
377969ea67 Update rust and remove unused header values (#4645)
* Update rust and remove unused header values

* Missed one unused var
2024-06-16 22:05:17 +02:00
Mathijs van Veluw
f05398a6b3 Update admin interface dependencies (#4581)
- Updated JS/CSS dependencies
- Fixed a small issue regarding DNS IP detection
  fixes #3946
  fixes #3947
2024-05-25 15:39:36 +02:00
Timshel
9555ac7bb8 Remove compatibility route (#4578) 2024-05-25 15:29:58 +02:00
Stefan Melmuk
f01ef40a8e differentiate external groups by organization id (#4586) 2024-05-25 15:20:36 +02:00
Daniel
8e7b27cc36 Update Alpine to version 3.20 (#4583)
- needed to add double quotes, otherwise it was parsed as 3.2 instead of 3.20
2024-05-25 15:19:53 +02:00
Daniel
d230ee087c Fix web-vault version in Docker(files/Settings) (#4575) 2024-05-25 15:18:59 +02:00
Mathijs van Veluw
f8f14727b9 Update crates (#4587)
- Update crates including rocket and rocket_ws
2024-05-25 15:14:19 +02:00
FDHoho007
753a9e0bae Fix public api for domains with path prefix (#4500) 2024-05-19 20:33:31 +02:00
Stefan Melmuk
f5fb69b64f also delete organization_api_key (#4557) 2024-05-19 20:33:00 +02:00
Daniel
3261534438 Optimize Dockerfiles (#4532)
Move some ARGs closer to the build stage (potentially improving caching)
Remove redundant COPY commands
Remove redundant RUN command
Move CARGO_HOME's "&&" operator to the first line (improves consistency)
2024-05-19 20:32:36 +02:00
Rich Purnell
46762d9fde Improve commentary aesthetics (#4549) 2024-05-19 20:30:57 +02:00
Mathijs van Veluw
6cadb2627a Update Rust, crates and web-vault (#4558)
* Update Rust and crates

- Updated Rust to v1.78.0
- Updated crates

* Update web-vault to v2024.5.0
2024-05-19 20:30:34 +02:00
Daniel García
0fe93edea6 Some fixes for the new mobile apps (#4526) 2024-04-27 23:24:04 +02:00
Stefan Melmuk
e9aa5a545e fix emergency access invites (#4337)
* fix emergency access invites with no mail

when mail is disabled instead of accepting emergency access for all
invited users automatically, we only accept if the user already exists

on registration of a new account any open emergency access invitations
will be accepted, if mail is disabled

also prevent invited emergency access contacts from registering if emergency
access is disabled (this is only relevant when mail is enabled; if
mail is disabled they should have an Invitation entry)

* delete emergency access invitations

if an invited user is deleted in the /admin panel their emergency
access invitation will remain in the database which causes
the to_json_grantee_details fn to panic

* improve missing emergency access grantees

instead of returning an empty emergency access contact the entry should
not be added to the list. also the error handling can be improved a bit.
2024-04-27 22:16:05 +02:00
Stefan Melmuk
9dcc738f85 improve access to collections via groups (#4441)
* refactor get_org_collections_details

* improve access to collection check

* fix get_org_collection_detail too
2024-04-27 22:09:00 +02:00
Kristof Mattei
84a7c7da5d Pass in collection ids to notifier when sharing cipher. (#4517) 2024-04-27 21:53:10 +02:00
Mathijs van Veluw
ca9234ed86 Add extra (unsupported) container build arch's (#4524)
There was a PR (#4370) to add i686/i386 support for Vaultwarden.
That specific PR was not a viable way of adding this.

This PR adds extra architectures for Debian based containers which we
will not support by default. Those images will not be built and pushed
to our container registries.

Added the following architectures:
 - linux/386
 - linux/ppc64le
 - linux/s390x

Again, there will be no major support for these architectures, but it
will allow people who use these architectures to build a Debian based
binary more easily.
2024-04-27 21:51:14 +02:00
Daniel García
27dc67fadd Implement custom DNS resolver (#3988) 2024-04-27 20:25:34 +02:00
Mathijs van Veluw
2ad33ec97f Update Crate and Rust (#4522)
* Update Crate and Rust

- Updated all crates
- Updated Rust to the latest patch version

* Updated GitHub Actions
2024-04-27 00:53:42 +02:00
Mathijs van Veluw
e1a8df96db Update Key Rotation web-vault v2024.3.x (#4446)
Key rotation has changed since 2024.1.x.
Multiple other items, like password-reset and emergency-access data, are now rotated as part of a single POST instead of multiple.

See: https://github.com/dani-garcia/bw_web_builds/pull/157
2024-04-06 14:42:53 +02:00
Mathijs van Veluw
e42a37c6c1 Update crates and some Clippy fixes (#4475)
- Updated all crates including reqwest
- Fixed some clippy lints reported by nightly Rust
2024-04-06 13:55:10 +02:00
Stefan Melmuk
129b835ac7 update web-vault to v2024.3.1 (new vertical layout) (#4468)
* update web-vault to v2024.3.0

* update web-vault to v2024.3.1
2024-04-06 11:45:25 +02:00
Daniel García
2d98aa3045 Use async verify for Yubikey (#4448) 2024-03-23 16:03:17 +01:00
Mathijs van Veluw
93636eb3c3 Update Rust and crates (#4445)
- Updated Rust to v1.77.0
- Updated several crates
  The `reqwest` update included `trust-dns` > `hickory-dns` changes.
  Also, `reqwest` v0.12 is not working correctly for us; that is something to investigate.
- Fixed a new clippy warning
2024-03-23 15:40:34 +01:00
Mathijs van Veluw
1e42755187 Update chrono and sqlite (#4436)
- Updated sqlite crate
- Updated chrono crate

The latter needed a lot of changes, mostly `Duration` to `TimeDelta`,
and some changes in how the Naive types are used.
2024-03-19 19:47:30 +01:00
guangwu
ce8efcc48f fix: typos (#4440)
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-03-19 19:47:14 +01:00
Stefan Melmuk
79ce5b49bc automatically use email address as 2fa provider (#4317) 2024-03-17 22:35:02 +01:00
Matlink
7c3cad197c Fix #3624: fix manager permission within groups (#3754)
* Fix #3624: fix manager permission within groups

* Query returns UUID only

* Fix issue when user is manager and in a group having access to all collections

* optimize condition check

* fix(groups): renaming and optimizations

* fix: wrong organization group membership detection

* Simplify group membership check

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>

* Remove unused statement

* improve check if the user has access via groups

instead of returning the two lists of member ids and later checking if
they contain the uuid of the current user, we really only care if
the current user has full access via a group or if they have
access to a given collection via a group

* improve comments for get_org_collections_details

* small refactor to make it easier to review

* fix(groups): query full access via group only when necessary

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>

* chore(fmt): apply rustfmt

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
Co-authored-by: Stefan Melmuk <stefan.melmuk@gmail.com>
Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2024-03-17 22:11:34 +01:00
gzfrozen
000c606029 Change timestamp data type. (#4355)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-03-17 22:04:37 +01:00
Jacques B
29144b2ce0 Small improvements around email change (#4415) 2024-03-17 19:55:03 +01:00
Helmut K. C. Tessarek
ea04b6f151 refactor: replace panic with a graceful exit (#4402)
* refactor: replace panic with a graceful exit

* fix: clippy errors

* fix: typo

* Update src/main.rs

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2024-03-17 19:53:41 +01:00
Mathijs van Veluw
3427217686 Remove custom WebSocket code (#4001)
* Remove custom WebSocket code

Remove our custom WebSocket code and only use the Rocket code.
Removed all options in regard to WebSockets.
Added a new option `WEBSOCKET_DISABLED` which defaults to `false`.
This can be used to disable WebSockets if you really do not want to use it.

* Addressed remarks given and some updates

- Addressed comments given during review
- Updated crates, including Rocket to the latest merged v0.5 changes
- Removed an extra header which should not be sent for websocket connections

* Updated suggestions and crates

- Addressed the suggestions
- Updated Rocket to latest rc4
  Also made the needed code changes
- Updated all other crates
  Pinned `openssl` and `openssl-sys`

---------

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-03-17 19:52:55 +01:00
Daniel García
a1fbd6d729 Improve JWT key initialization and avoid saving public key (#4085) 2024-03-17 15:11:20 +01:00
Krapp
2cbfe6fa5b Fix comment in events.rs (#4408)
I think
` // Collection events`
was repeated twice
2024-03-17 14:29:31 +01:00
one230six
d86c4f2c23 Signed-off-by: one230six <723682061@qq.com> (#4422)
Signed-off-by: one230six <723682061@qq.com>
2024-03-17 14:28:10 +01:00
Daniel García
6d73f30b4f Update crates 2024-03-17 14:25:49 +01:00
Calvin Li
d0c22b9fc9 fix: web API call for jquery 3.7.1 (#4400) 2024-03-02 19:09:36 +01:00
Mathijs van Veluw
d6b97090fa Update crates, GHA and a Python/JS scripts (#4357)
- Update all crates
- Update GHA
- Update Global Domains script to use main instead of master
  Also fixed some Python linting warnings
- Updated Admin JS and CSS libraries
2024-02-25 23:26:46 +01:00
seiuneko
94b077cb2d Fix env template to ensure compatibility with systemd's EnvironmentFile parsing (#4315)
* fix: update env template for systemd compatibility

Adjust env template to ensure compatibility with systemd's EnvironmentFile parsing, which only recognizes line-starting comment symbols.

* Refactor SMTP and Rocket settings in .env.template

- Simplify the SMTP_SECURITY and SMTP_PORT options by providing a list of choices and default values
- Clarify the ROCKET_PORT default value depending on the environment (Docker or not)
2024-02-19 16:29:53 +01:00
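A sketch of the parsing difference this fixes (values are illustrative): systemd's `EnvironmentFile=` only treats lines *starting* with `#` as comments, so an inline comment becomes part of the value.

```sh
# Fine for both shell sourcing and systemd's EnvironmentFile= parsing
SMTP_PORT=587

# Broken under systemd: everything after the value is kept as part of it,
# i.e. SMTP_PORT becomes `587 # submission port`
SMTP_PORT=587 # submission port
```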
Mathijs van Veluw
bb2412d033 Change the codegen-units for low resources (#4336)
It seems (as discussed in #4320) a single codegen unit still makes it
crash. This sets it to the default of 16 that Rust uses for the release profile.
2024-02-10 13:04:08 +01:00
Mathijs van Veluw
b9bdc9b8e2 Update Rust, crates and web-vault (#4328)
- Updated Rust to v1.76.0
- Updated crates
- Updated web-vault to v2024.1.2b
- Fixed some Clippy lints
- Moved lint check configuration to Cargo.toml
- Fixed issue with Reset Password Enrollment when logged-in via device
2024-02-08 22:16:29 +01:00
Mathijs van Veluw
897bdf8343 Update GHA Workflows (#4309)
- Update the workflow GH Actions.
- Configured the release workflow to always run on main/tag as discussed
  in #4226

Closes #4226
2024-02-03 16:41:25 +01:00
Mathijs van Veluw
569add453d Add Kubernetes environment detection (#4290)
Also check if we are running within a Kubernetes environment.
These do not always run using Docker or Podman of course.

Also renamed all the functions and variables to use `container` instead
of `docker`.
2024-02-02 21:44:19 +01:00
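One common heuristic for such a check (an assumption for illustration; the commit does not spell out the exact test): inside a cluster, the kubelet injects API-server environment variables and mounts a service-account directory.

```sh
# Heuristic Kubernetes detection (illustrative, not necessarily the exact check used)
if [ -n "${KUBERNETES_SERVICE_HOST}" ] || [ -d /var/run/secrets/kubernetes.io ]; then
    echo "running inside a Kubernetes container"
fi
```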
Mathijs van Veluw
77cd5b5954 Update crates to fix new builds (#4308)
Because handlebars yanked a version which had been available for a few days, we
need to downgrade this crate. In the process, all the other crates were updated.

Fixes #4307
2024-02-02 18:30:54 +01:00
Mathijs van Veluw
4438da39f9 Fix healthcheck when using .env file (#4299)
It seems Debian-based images see the `.env` file in the `pwd` path, but
sourcing it via `. .env` breaks. It does work if you provide the full
path `/.env`, so the default was changed to `/.env`.

Alpine does not have an issue with both ways.
2024-01-31 22:31:47 +01:00
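The likely underlying POSIX behavior: when the operand of `.` contains no slash, the shell searches `$PATH` for it rather than the current directory, which is why the relative form breaks under Debian's `dash`. A sketch:

```sh
. .env    # no slash: dash looks the file up in $PATH and fails to find it
. /.env   # absolute path: works on both the Debian and Alpine images (new default)
```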
Stefan Melmuk
0b2383ab56 fix push device registration (#4297)
don't try to register a push device when the device is new;
it will be registered when the push token is saved

fixes #4296
2024-01-31 22:31:22 +01:00
135 changed files with 18898 additions and 16500 deletions


@@ -1,40 +1,16 @@
# Local build artifacts
target
// Ignore everything
*
# Data folder
data
# Misc
.env
.env.template
.gitattributes
.gitignore
rustfmt.toml
# IDE files
.vscode
.idea
.editorconfig
*.iml
# Documentation
.github
*.md
*.txt
*.yml
*.yaml
# Docker
hooks
tools
Dockerfile
.dockerignore
docker/**
// Allow what is needed
!.git
!docker/healthcheck.sh
!docker/start.sh
!macros
!migrations
!src
# Web vault
web-vault
# Vaultwarden Resources
resources
!build.rs
!Cargo.lock
!Cargo.toml
!rustfmt.toml
!rust-toolchain.toml


@@ -84,27 +84,28 @@
### WebSocket ###
#################
## Enables websocket notifications
# WEBSOCKET_ENABLED=false
## Controls the WebSocket server address and port
# WEBSOCKET_ADDRESS=0.0.0.0
# WEBSOCKET_PORT=3012
## Enable websocket notifications
# ENABLE_WEBSOCKET=true
##########################
### Push notifications ###
##########################
## Enables push notifications (requires key and id from https://bitwarden.com/host)
## If you choose "European Union" Data Region, uncomment PUSH_RELAY_URI and PUSH_IDENTITY_URI then replace .com by .eu
## Details about mobile client push notification:
## - https://github.com/dani-garcia/vaultwarden/wiki/Enabling-Mobile-Client-push-notification
# PUSH_ENABLED=false
# PUSH_INSTALLATION_ID=CHANGEME
# PUSH_INSTALLATION_KEY=CHANGEME
## Don't change this unless you know what you're doing.
# WARNING: Do not modify the following settings unless you fully understand their implications!
# Default Push Relay and Identity URIs
# PUSH_RELAY_URI=https://push.bitwarden.com
# PUSH_IDENTITY_URI=https://identity.bitwarden.com
# European Union Data Region Settings
# If you have selected "European Union" as your data region, use the following URIs instead.
# PUSH_RELAY_URI=https://api.bitwarden.eu
# PUSH_IDENTITY_URI=https://identity.bitwarden.eu
#####################
### Schedule jobs ###
@@ -156,6 +157,10 @@
## Cron schedule of the job that cleans old auth requests from the database.
## Defaults to every minute. Set blank to disable this job.
# AUTH_REQUEST_PURGE_SCHEDULE="30 * * * * *"
##
## Cron schedule of the job that cleans expired Duo contexts from the database. Does nothing if Duo MFA is disabled or set to use the legacy iframe prompt.
## Defaults to every minute. Set blank to disable this job.
# DUO_CONTEXT_PURGE_SCHEDULE="30 * * * * *"
########################
### General settings ###
@@ -275,12 +280,13 @@
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=600000
## Controls whether users can set password hints. This setting applies globally to all users.
## Controls whether users can set or show password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
## Controls whether a password hint should be shown directly in the web page if
## SMTP service is not configured. Not recommended for publicly-accessible instances
## as this provides unauthenticated access to potentially sensitive data.
## SMTP service is not configured and password hints are allowed.
## Not recommended for publicly-accessible instances because this provides
## unauthenticated access to potentially sensitive data.
# SHOW_PASSWORD_HINT=false
#########################
@@ -324,15 +330,15 @@
## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10
## Icon blacklist Regex
## Any domains or IPs that match this regex won't be fetched by the icon service.
## Block HTTP domains/IPs by Regex
## Any domains or IPs that match this regex won't be fetched by the internal HTTP client.
## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex within single quotes!
# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
# HTTP_REQUEST_BLOCK_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Any IP which is not defined as a global IP will be blacklisted.
## Enabling this will cause the internal HTTP client to refuse to connect to any non global IP address.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# ICON_BLACKLIST_NON_GLOBAL_IPS=true
# HTTP_REQUEST_BLOCK_NON_GLOBAL_IPS=true
## Client Settings
## Enable experimental feature flags for clients.
@@ -342,7 +348,11 @@
## - "autofill-overlay": Add an overlay menu to form fields for quick access to credentials.
## - "autofill-v2": Use the new autofill implementation.
## - "browser-fileless-import": Directly import credentials from other providers without a file.
## - "extension-refresh": Temporarily enable the new extension design until general availability (should be used with the beta Chrome extension)
## - "fido2-vault-credentials": Enable the use of FIDO2 security keys as second factor.
## - "inline-menu-positioning-improvements": Enable the use of inline menu password generator and identity suggestions in the browser extension.
## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
## - "ssh-agent": Enable SSH agent support on Desktop. (Needs desktop >=2024.12.0)
# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
## Require new device emails. When a user logs in an email is required to be sent.
@@ -366,8 +376,9 @@
## Log level
## Change the verbosity of the log output
## Valid values are "trace", "debug", "info", "warn", "error" and "off"
## Setting it to "trace" or "debug" would also show logs for mounted
## routes and static file, websocket and alive requests
## Setting it to "trace" or "debug" would also show logs for mounted routes and static file, websocket and alive requests
## For a specific module append a comma separated `path::to::module=log_level`
## For example, to only see debug logs for icons use: LOG_LEVEL="info,vaultwarden::api::icons=debug"
# LOG_LEVEL=info
## Token for the admin interface, preferably an Argon2 PCH string
@@ -400,6 +411,14 @@
## Multiple values must be separated with a whitespace.
# ALLOWED_IFRAME_ANCESTORS=
## Allowed connect-src (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/connect-src
## Allows additional domains for URLs which can be loaded using script interfaces, like the Forwarded email alias feature
## This adds the configured value to the 'Content-Security-Policy' header's 'connect-src' value.
## Multiple values must be separated with a whitespace. And only HTTPS values are allowed.
## Example: "https://my-addy-io.domain.tld https://my-simplelogin.domain.tld"
# ALLOWED_CONNECT_SRC=""
## Number of seconds, on average, between login requests from the same IP address before rate limiting kicks in.
# LOGIN_RATELIMIT_SECONDS=60
## Allow a burst of requests of up to this size, while maintaining the average indicated by `LOGIN_RATELIMIT_SECONDS`.
@@ -413,6 +432,18 @@
## KNOW WHAT YOU ARE DOING!
# ORG_GROUPS_ENABLED=false
## Increase secure note size limit (Know the risks!)
## Sets the secure note size limit to 100_000 instead of the default 10_000.
## WARNING: This could cause issues with clients. Also exports will not work on Bitwarden servers!
## KNOW WHAT YOU ARE DOING!
# INCREASE_NOTE_SIZE_LIMIT=false
## Enforce Single Org with Reset Password Policy
## Enforce that the Single Org policy is enabled before setting the Reset Password policy
## Bitwarden enforces this by default. In Vaultwarden we encouraged using multiple organizations because groups were not available.
## Setting this to true will enforce the Single Org Policy to be enabled before you can enable the Reset Password policy.
# ENFORCE_SINGLE_ORG_WITH_RESET_PW_POLICY=false
########################
### MFA/2FA settings ###
########################
@@ -426,15 +457,21 @@
# YUBICO_SERVER=http://yourdomain.com/wsapi/2.0/verify
## Duo Settings
## You need to configure all options to enable global Duo support, otherwise users would need to configure it themselves
## You need to configure the DUO_IKEY, DUO_SKEY, and DUO_HOST options to enable global Duo support.
## Otherwise users will need to configure it themselves.
## Create an account and protect an application as mentioned in this link (only the first step, not the rest):
## https://help.bitwarden.com/article/setup-two-step-login-duo/#create-a-duo-security-account
## Then set the following options, based on the values obtained from the last step:
# DUO_IKEY=<Integration Key>
# DUO_SKEY=<Secret Key>
# DUO_IKEY=<Client ID>
# DUO_SKEY=<Client Secret>
# DUO_HOST=<API Hostname>
## After that, you should be able to follow the rest of the guide linked above,
## ignoring the fields that ask for the values that you already configured beforehand.
##
## If you want to attempt to use Duo's 'Traditional Prompt' (deprecated, iframe based) set DUO_USE_IFRAME to 'true'.
## Duo no longer supports this, but it still works for some integrations.
## If you aren't sure, leave this alone.
# DUO_USE_IFRAME=false
## Email 2FA settings
## Email token size
@@ -448,6 +485,11 @@
##
## Maximum attempts before an email token is reset and a new email will need to be sent.
# EMAIL_ATTEMPTS_LIMIT=3
##
## Setup email 2FA regardless of any organization policy
# EMAIL_2FA_ENFORCE_ON_VERIFIED_INVITE=false
## Automatically setup email 2FA as fallback provider when needed
# EMAIL_2FA_AUTO_FALLBACK=false
## Other MFA/2FA settings
## Disable 2FA remember
@@ -477,12 +519,19 @@
# SMTP_HOST=smtp.domain.tld
# SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Vaultwarden
# SMTP_SECURITY=starttls # ("starttls", "force_tls", "off") Enable a secure connection. Default is "starttls" (Explicit - ports 587 or 25), "force_tls" (Implicit - port 465) or "off", no encryption (port 25)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15
## Choose the type of secure connection for SMTP. The default is "starttls".
## The available options are:
## - "starttls": The default port is 587.
## - "force_tls": The default port is 465.
## - "off": The default port is 25.
## Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_SECURITY=starttls
# SMTP_PORT=587
# Whether to send mail via the `sendmail` command
# USE_SENDMAIL=false
# Which sendmail command to use. The one found in the $PATH is used if not specified.
@@ -517,14 +566,15 @@
## Only use this as a last resort if you are not able to use a valid certificate.
# SMTP_ACCEPT_INVALID_HOSTNAMES=false
##########################
#######################
### Rocket settings ###
##########################
#######################
## Rocket specific settings
## See https://rocket.rs/v0.5/guide/configuration/ for more details.
# ROCKET_ADDRESS=0.0.0.0
# ROCKET_PORT=80 # Defaults to 80 in the Docker images, or 8000 otherwise.
## The default port is 8000, unless running in a Docker container, in which case it is 80.
# ROCKET_PORT=8000
# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
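To summarize the SMTP pairing documented above (a sketch using only values from the template, not additional options):

```sh
SMTP_SECURITY=starttls   # explicit TLS, typically SMTP_PORT=587 (or 25)
SMTP_SECURITY=force_tls  # implicit TLS, typically SMTP_PORT=465
SMTP_SECURITY=off        # no encryption, typically SMTP_PORT=25
```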


@@ -1,66 +0,0 @@
---
name: Bug report
about: Use this ONLY for bugs in vaultwarden itself. Use the Discourse forum (link below) to request features or get help with usage/configuration. If in doubt, use the forum.
title: ''
labels: ''
assignees: ''
---
<!--
# ###
NOTE: Please update to the latest version of vaultwarden before reporting an issue!
This saves you and us a lot of time and troubleshooting.
See:
* https://github.com/dani-garcia/vaultwarden/issues/1180
* https://github.com/dani-garcia/vaultwarden/wiki/Updating-the-vaultwarden-image
# ###
-->
<!--
Please fill out the following template to make solving your problem easier and faster for us.
This is only a guideline. If you think that parts are unnecessary for your issue, feel free to remove them.
Remember to hide/redact personal or confidential information,
such as passwords, IP addresses, and DNS names as appropriate.
-->
### Subject of the issue
<!-- Describe your issue here. -->
### Deployment environment
<!--
=========================================================================================
Preferably, use the `Generate Support String` button on the admin page's Diagnostics tab.
That will auto-generate most of the info requested in this section.
=========================================================================================
-->
<!-- The version number, obtained from the logs (at startup) or the admin diagnostics page -->
<!-- This is NOT the version number shown on the web vault, which is versioned separately from vaultwarden -->
<!-- Remember to check if your issue exists on the latest version first! -->
* vaultwarden version:
<!-- How the server was installed: Docker image, OS package, built from source, etc. -->
* Install method:
* Clients used: <!-- web vault, desktop, Android, iOS, etc. (if applicable) -->
* Reverse proxy and version: <!-- if applicable -->
* MySQL/MariaDB or PostgreSQL version: <!-- if applicable -->
* Other relevant details:
### Steps to reproduce
<!-- Tell us how to reproduce this issue. What parameters did you set (differently from the defaults)
and how did you start vaultwarden? -->
### Expected behaviour
<!-- Tell us what you expected to happen -->
### Actual behaviour
<!-- Tell us what actually happened -->
### Troubleshooting data
<!-- Share any log files, screenshots, or other relevant troubleshooting data -->

.github/ISSUE_TEMPLATE/bug_report.yml (new file)

@@ -0,0 +1,167 @@
name: Bug Report
description: File a bug report
labels: ["bug"]
body:
#
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
Please *do not* submit feature requests or ask for help on how to configure Vaultwarden here.
The [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions/) has sections for Questions and Ideas.
Also, make sure you are running [![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest) of Vaultwarden!
And search for existing open or closed issues or discussions regarding your topic before posting.
Be sure to check and validate the Vaultwarden Admin Diagnostics (`/admin/diagnostics`) page for any errors!
See here [how to enable the admin page](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page).
#
- id: support-string
type: textarea
attributes:
label: Vaultwarden Support String
description: Output of the **Generate Support String** from the `/admin/diagnostics` page.
placeholder: |
1. Go to the Vaultwarden Admin of your instance https://example.domain.tld/admin/diagnostics
2. Click on `Generate Support String`
3. Click on `Copy To Clipboard`
4. Replace this text by pasting it into this textarea without any modifications
validations:
required: true
#
- id: version
type: input
attributes:
label: Vaultwarden Build Version
description: What version of Vaultwarden are you running?
placeholder: ex. v1.31.0 or v1.32.0-3466a804
validations:
required: true
#
- id: deployment
type: dropdown
attributes:
label: Deployment method
description: How did you deploy Vaultwarden?
multiple: false
options:
- Official Container Image
- Build from source
- OS Package (apt, yum/dnf, pacman, apk, nix, ...)
- Manually Extracted from Container Image
- Downloaded from GitHub Actions Release Workflow
- Other method
validations:
required: true
#
- id: deployment-other
type: textarea
attributes:
label: Custom deployment method
description: If you deployed Vaultwarden via any other method, please describe how.
#
- id: reverse-proxy
type: input
attributes:
label: Reverse Proxy
description: Are you using a reverse proxy, if so which and what version?
placeholder: ex. nginx 1.26.2, caddy 2.8.4, traefik 3.1.2, haproxy 3.0
validations:
required: true
#
- id: os
type: dropdown
attributes:
label: Host/Server Operating System
description: On what operating system are you running the Vaultwarden server?
multiple: false
options:
- Linux
- NAS/SAN
- Cloud
- Windows
- macOS
- Other
validations:
required: true
#
- id: os-version
type: input
attributes:
label: Operating System Version
description: What version of the operating system(s) are you seeing the problem on?
placeholder: ex. Arch Linux, Ubuntu 24.04, Kubernetes, Synology DSM 7.x, Windows 11
#
- id: clients
type: dropdown
attributes:
label: Clients
description: What client(s) are you seeing the problem on?
multiple: true
options:
- Web Vault
- Browser Extension
- CLI
- Desktop
- Android
- iOS
validations:
required: true
#
- id: client-version
type: input
attributes:
label: Client Version
description: What version(s) of the client(s) are you seeing the problem on?
placeholder: ex. CLI v2024.7.2, Firefox 130 - v2024.7.0
#
- id: reproduce
type: textarea
attributes:
label: Steps To Reproduce
description: How can we reproduce the behavior?
value: |
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. Click on '...'
5. Etc '...'
validations:
required: true
#
- id: expected
type: textarea
attributes:
label: Expected Result
description: A clear and concise description of what you expected to happen.
validations:
required: true
#
- id: actual
type: textarea
attributes:
label: Actual Result
description: A clear and concise description of what is happening.
validations:
required: true
#
- id: logs
type: textarea
attributes:
label: Logs
description: Provide the logs generated by Vaultwarden during the time this issue occurs.
render: text
#
- id: screenshots
type: textarea
attributes:
label: Screenshots or Videos
description: If applicable, add screenshots and/or a short video to help explain your problem.
#
- id: additional-context
type: textarea
attributes:
label: Additional Context
description: Add any other context about the problem here.


@@ -1,8 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Discourse forum for vaultwarden
url: https://vaultwarden.discourse.group/
about: Use this forum to request features or get help with usage/configuration.
- name: GitHub Discussions for vaultwarden
- name: GitHub Discussions for Vaultwarden
url: https://github.com/dani-garcia/vaultwarden/discussions
about: An alternative to the Discourse forum, if this is easier for you.
about: Use the discussions to request features or get help with usage/configuration.
- name: Discourse forum for Vaultwarden
url: https://vaultwarden.discourse.group/
about: An alternative to the GitHub Discussions, if this is easier for you.


@@ -28,6 +28,7 @@ on:
jobs:
build:
# We use Ubuntu 22.04 here because this matches the library versions used within the Debian docker containers
runs-on: ubuntu-22.04
timeout-minutes: 120
# Make warnings errors, this is to prevent warnings slipping through.
@@ -46,7 +47,7 @@ jobs:
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
# End Checkout the repo
@@ -74,7 +75,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203 # master @ Dec 14, 2024, 5:49 AM GMT+1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -84,7 +85,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203 # master @ Dec 14, 2024, 5:49 AM GMT+1
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -106,7 +107,8 @@ jobs:
# End Show environment
# Enable Rust Caching
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84 # v2.7.3
- name: Rust Caching
uses: Swatinem/rust-cache@f0deed1e0edfc6a9be95417288c0e1099b1eeec3 # v2.7.7
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.
@@ -116,33 +118,39 @@ jobs:
# Run cargo tests
# First test all features together, afterwards test them separately.
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc,query_logger"
id: test_sqlite_mysql_postgresql_mimalloc_logger
if: ${{ !cancelled() }}
run: |
cargo test --features sqlite,mysql,postgresql,enable_mimalloc,query_logger
- name: "test features: sqlite,mysql,postgresql,enable_mimalloc"
id: test_sqlite_mysql_postgresql_mimalloc
if: $${{ always() }}
if: ${{ !cancelled() }}
run: |
cargo test --features sqlite,mysql,postgresql,enable_mimalloc
- name: "test features: sqlite,mysql,postgresql"
id: test_sqlite_mysql_postgresql
if: $${{ always() }}
if: ${{ !cancelled() }}
run: |
cargo test --features sqlite,mysql,postgresql
- name: "test features: sqlite"
id: test_sqlite
if: $${{ always() }}
if: ${{ !cancelled() }}
run: |
cargo test --features sqlite
- name: "test features: mysql"
id: test_mysql
if: $${{ always() }}
if: ${{ !cancelled() }}
run: |
cargo test --features mysql
- name: "test features: postgresql"
id: test_postgresql
if: $${{ always() }}
if: ${{ !cancelled() }}
run: |
cargo test --features postgresql
# End Run cargo tests
@@ -151,7 +159,7 @@ jobs:
# Run cargo clippy, and fail on warnings
- name: "clippy features: sqlite,mysql,postgresql,enable_mimalloc"
id: clippy
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
if: ${{ !cancelled() && matrix.channel == 'rust-toolchain' }}
run: |
cargo clippy --features sqlite,mysql,postgresql,enable_mimalloc -- -D warnings
# End Run cargo clippy
@@ -160,7 +168,7 @@ jobs:
# Run cargo fmt (Only run on rust-toolchain defined version)
- name: "check formatting"
id: formatting
if: ${{ always() && matrix.channel == 'rust-toolchain' }}
if: ${{ !cancelled() && matrix.channel == 'rust-toolchain' }}
run: |
cargo fmt --all -- --check
# End Run cargo fmt
@@ -175,6 +183,7 @@ jobs:
echo "" >> $GITHUB_STEP_SUMMARY
echo "|Job|Status|" >> $GITHUB_STEP_SUMMARY
echo "|---|------|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc,query_logger)|${{ steps.test_sqlite_mysql_postgresql_mimalloc_logger.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql,enable_mimalloc)|${{ steps.test_sqlite_mysql_postgresql_mimalloc.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite,mysql,postgresql)|${{ steps.test_sqlite_mysql_postgresql.outcome }}|" >> $GITHUB_STEP_SUMMARY
echo "|test (sqlite)|${{ steps.test_sqlite.outcome }}|" >> $GITHUB_STEP_SUMMARY


@@ -8,14 +8,26 @@ on: [
jobs:
hadolint:
name: Validate Dockerfile syntax
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
# End Checkout the repo
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
buildkitd-config-inline: |
[worker.oci]
max-parallelism = 2
driver-opts: |
network=host
# Download hadolint - https://github.com/hadolint/hadolint/releases
- name: Download hadolint
shell: bash
@@ -26,8 +38,18 @@ jobs:
HADOLINT_VERSION: 2.12.0
# End Download hadolint
# Test Dockerfiles
# Test Dockerfiles with hadolint
- name: Run hadolint
shell: bash
run: hadolint docker/Dockerfile.{debian,alpine}
# End Test Dockerfiles
# End Test Dockerfiles with hadolint
# Test Dockerfiles with docker build checks
- name: Run docker build check
shell: bash
run: |
echo "Checking docker/Dockerfile.debian"
docker build --check . -f docker/Dockerfile.debian
echo "Checking docker/Dockerfile.alpine"
docker build --check . -f docker/Dockerfile.alpine
# End Test Dockerfiles with docker build checks


@@ -2,20 +2,10 @@ name: Release
on:
push:
paths:
- ".github/workflows/release.yml"
- "src/**"
- "migrations/**"
- "docker/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain.toml"
branches: # Only on paths above
branches:
- main
tags: # Always, regardless of paths above
tags:
- '*'
jobs:
@@ -23,7 +13,7 @@ jobs:
# Some checks to determine if we need to continue with building a new docker.
# We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
skip_check:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
@@ -37,11 +27,16 @@ jobs:
if: ${{ github.ref_type == 'branch' }}
docker-build:
runs-on: ubuntu-22.04
permissions:
packages: write
contents: read
attestations: write
id-token: write
runs-on: ubuntu-24.04
timeout-minutes: 120
needs: skip_check
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
# Start a local docker registry to extract the final Alpine static build binaries
# Start a local docker registry to extract the compiled binaries to upload as artifacts and attest them
services:
registry:
image: registry:2
@@ -68,22 +63,22 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
with:
fetch-depth: 0
- name: Initialize QEMU binfmt support
uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3 # v3.0.0
uses: docker/setup-qemu-action@53851d14592bedcffcf25ea515637cff71ef929a # v3.3.0
with:
platforms: "arm64,arm"
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
config-inline: |
buildkitd-config-inline: |
[worker.oci]
max-parallelism = 2
driver-opts: |
@@ -112,7 +107,7 @@ jobs:
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -126,7 +121,7 @@ jobs:
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -139,15 +134,9 @@ jobs:
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
- name: Add registry for ghcr.io
if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
@@ -167,7 +156,7 @@ jobs:
# Check if there is a GitHub Container Registry Login and use it for caching
if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
echo "BAKE_CACHE_FROM=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }}" | tee -a "${GITHUB_ENV}"
echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},mode=max" | tee -a "${GITHUB_ENV}"
echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},compression=zstd,mode=max" | tee -a "${GITHUB_ENV}"
else
echo "BAKE_CACHE_FROM="
echo "BAKE_CACHE_TO="
@@ -175,13 +164,13 @@ jobs:
#
- name: Add localhost registry
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}localhost:5000/vaultwarden/server" | tee -a "${GITHUB_ENV}"
- name: Bake ${{ matrix.base_image }} containers
uses: docker/bake-action@849707117b03d39aba7924c50a10376a69e88d7d # v4.1.0
id: bake_vw
uses: docker/bake-action@5ca506d06f70338a4968df87fd8bfee5cbfb84c7 # v6.0.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -191,16 +180,47 @@ jobs:
with:
pull: true
push: true
source: .
files: docker/docker-bake.hcl
targets: "${{ matrix.base_image }}-multi"
set: |
*.cache-from=${{ env.BAKE_CACHE_FROM }}
*.cache-to=${{ env.BAKE_CACHE_TO }}
- name: Extract digest SHA
shell: bash
run: |
GET_DIGEST_SHA="$(jq -r '.["${{ matrix.base_image }}-multi"]."containerimage.digest"' <<< '${{ steps.bake_vw.outputs.metadata }}')"
echo "DIGEST_SHA=${GET_DIGEST_SHA}" | tee -a "${GITHUB_ENV}"
# Attest container images
- name: Attest - docker.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@7668571508540a607bdfd90a87a560489fe372eb # v2.1.0
with:
subject-name: ${{ vars.DOCKERHUB_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
push-to-registry: true
- name: Attest - ghcr.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@7668571508540a607bdfd90a87a560489fe372eb # v2.1.0
with:
subject-name: ${{ vars.GHCR_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
push-to-registry: true
- name: Attest - quay.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@7668571508540a607bdfd90a87a560489fe372eb # v2.1.0
with:
subject-name: ${{ vars.QUAY_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
push-to-registry: true
# Extract the Alpine binaries from the containers
- name: Extract binaries
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
# Check which main tag we are going to build, determined by github.ref_type
@@ -210,59 +230,65 @@ jobs:
EXTRACT_TAG="testing"
fi
# Check which base_image was used and append -alpine if needed
if [[ "${{ matrix.base_image }}" == "alpine" ]]; then
EXTRACT_TAG="${EXTRACT_TAG}-alpine"
fi
# After each extraction the image is removed.
# This is needed because using different platforms doesn't trigger a new pull/download
# Extract amd64 binary
docker create --name amd64 --platform=linux/amd64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp amd64:/vaultwarden vaultwarden-amd64
docker create --name amd64 --platform=linux/amd64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp amd64:/vaultwarden vaultwarden-amd64-${{ matrix.base_image }}
docker rm --force amd64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract arm64 binary
docker create --name arm64 --platform=linux/arm64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp arm64:/vaultwarden vaultwarden-arm64
docker create --name arm64 --platform=linux/arm64 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp arm64:/vaultwarden vaultwarden-arm64-${{ matrix.base_image }}
docker rm --force arm64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv7 binary
docker create --name armv7 --platform=linux/arm/v7 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv7:/vaultwarden vaultwarden-armv7
docker create --name armv7 --platform=linux/arm/v7 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv7:/vaultwarden vaultwarden-armv7-${{ matrix.base_image }}
docker rm --force armv7
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Extract armv6 binary
docker create --name armv6 --platform=linux/arm/v6 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv6:/vaultwarden vaultwarden-armv6
docker create --name armv6 --platform=linux/arm/v6 "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
docker cp armv6:/vaultwarden vaultwarden-armv6-${{ matrix.base_image }}
docker rm --force armv6
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker rmi --force "localhost:5000/vaultwarden/server:${EXTRACT_TAG}"
# Upload artifacts to Github Actions
- name: "Upload amd64 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
# Upload artifacts to Github Actions and Attest the binaries
- name: "Upload amd64 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b #v4.5.0
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64
path: vaultwarden-amd64
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64-${{ matrix.base_image }}
path: vaultwarden-amd64-${{ matrix.base_image }}
- name: "Upload arm64 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
- name: "Upload arm64 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b #v4.5.0
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64
path: vaultwarden-arm64
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64-${{ matrix.base_image }}
path: vaultwarden-arm64-${{ matrix.base_image }}
- name: "Upload armv7 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
- name: "Upload armv7 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b #v4.5.0
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7
path: vaultwarden-armv7
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7-${{ matrix.base_image }}
path: vaultwarden-armv7-${{ matrix.base_image }}
- name: "Upload armv6 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
- name: "Upload armv6 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b #v4.5.0
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6
path: vaultwarden-armv6
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6-${{ matrix.base_image }}
path: vaultwarden-armv6-${{ matrix.base_image }}
- name: "Attest artifacts ${{ matrix.base_image }}"
uses: actions/attest-build-provenance@7668571508540a607bdfd90a87a560489fe372eb # v2.1.0
with:
subject-path: vaultwarden-*
# End Upload artifacts to Github Actions


@@ -13,11 +13,12 @@ name: Cleanup
jobs:
releasecache-cleanup:
name: Releasecache Cleanup
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
continue-on-error: true
timeout-minutes: 30
steps:
- name: Delete vaultwarden-buildcache containers
uses: actions/delete-package-versions@0d39a63126868f5eefaa47169615edd3c0f61e20 # v4.1.1
uses: actions/delete-package-versions@e5bc658cc4c965c472efe991f8beea3981499c55 # v5.0.0
with:
package-name: 'vaultwarden-buildcache'
package-type: 'container'


@@ -9,15 +9,18 @@ on:
pull_request:
branches: [ "main" ]
schedule:
- cron: '00 12 * * *'
- cron: '08 11 * * *'
permissions:
contents: read
jobs:
trivy-scan:
# Only run this in the master repo and not on forks
# When all forks run this at the same time, it causes `Too Many Requests` issues
if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
name: Check
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 30
permissions:
contents: read
@@ -25,10 +28,13 @@ jobs:
actions: read
steps:
- name: Checkout code
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@d43c1f16c00cfd3978dde6c07f4bbcf9eb6993ca # v0.16.1
uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0 # v0.29.0
env:
TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
with:
scan-type: repo
ignore-unfixed: true
@@ -37,6 +43,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@b7bf0a3ed3ecfa44160715d7c442788f65f0f923 # v3.23.2
uses: github/codeql-action/upload-sarif@86b04fb0e47484f7282357688f21d5d0e32175fe # v3.27.5
with:
sarif_file: 'trivy-results.sarif'


@@ -1,7 +1,7 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
rev: v4.6.0
hooks:
- id: check-yaml
- id: check-json

Cargo.lock (generated): file diff suppressed because it is too large.


@@ -1,9 +1,11 @@
workspace = { members = ["macros"] }
[package]
name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.73.0"
rust-version = "1.83.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -18,144 +20,149 @@ build = "build.rs"
enable_syslog = []
mysql = ["diesel/mysql", "diesel_migrations/mysql"]
postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "dep:libsqlite3-sys"]
# Enable to use a vendored and statically linked openssl
vendored_openssl = ["openssl/vendored"]
# Enable MiMalloc memory allocator to replace the default malloc
# This can improve performance for Alpine builds
enable_mimalloc = ["mimalloc"]
enable_mimalloc = ["dep:mimalloc"]
# This is a development dependency, and should only be used during development!
# It enables the usage of the diesel_logger crate, which is able to output the generated queries.
# You also need to set an env variable `QUERY_LOGGER=1` to fully activate this so you do not have to re-compile
# if you want to turn off the logging for a specific run.
query_logger = ["diesel_logger"]
query_logger = ["dep:diesel_logger"]
# Enable unstable features, requires nightly
# Currently only used to enable Rust's official IP support
unstable = []
[target."cfg(not(windows))".dependencies]
[target."cfg(unix)".dependencies]
# Logging
syslog = "6.1.0"
syslog = "7.0.0"
[dependencies]
macros = { path = "./macros" }
# Logging
log = "0.4.20"
fern = { version = "0.6.2", features = ["syslog-6", "reopen-1"] }
tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
log = "0.4.25"
fern = { version = "0.7.1", features = ["syslog-7", "reopen-1"] }
tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
# A `dotenv` implementation for Rust
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
once_cell = "1.19.0"
once_cell = "1.20.2"
# Numerical libraries
num-traits = "0.2.17"
num-derive = "0.4.1"
bigdecimal = "0.4.2"
num-traits = "0.2.19"
num-derive = "0.4.2"
bigdecimal = "0.4.7"
# Web framework
rocket = { version = "0.5.0", features = ["tls", "json"], default-features = false }
rocket_ws = { version ="0.1.0" }
rocket = { version = "0.5.1", features = ["tls", "json"], default-features = false }
rocket_ws = { version ="0.1.1" }
# WebSockets libraries
tokio-tungstenite = "0.20.1"
rmpv = "1.0.1" # MessagePack library
rmpv = "1.3.0" # MessagePack library
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.5.3"
dashmap = "6.1.0"
# Async futures
futures = "0.3.30"
tokio = { version = "1.35.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
futures = "0.3.31"
tokio = { version = "1.43.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.195", features = ["derive"] }
serde_json = "1.0.111"
serde = { version = "1.0.217", features = ["derive"] }
serde_json = "1.0.137"
# A safe, extensible ORM and Query builder
diesel = { version = "2.1.4", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.1.0"
diesel_logger = { version = "0.3.0", optional = true }
diesel = { version = "2.2.6", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0"
diesel_logger = { version = "0.4.0", optional = true }
derive_more = { version = "1.0.0", features = ["from", "into", "as_ref", "deref", "display"] }
diesel-derive-newtype = "2.1.2"
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.27.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.30.1", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.17.7"
ring = "0.17.8"
# UUID generation
uuid = { version = "1.7.0", features = ["v4"] }
uuid = { version = "1.12.1", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.33", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.5"
time = "0.3.31"
chrono = { version = "0.4.39", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.1"
time = "0.3.37"
# Job scheduler
job_scheduler_ng = "2.0.4"
job_scheduler_ng = "2.0.5"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.5.0"
data-encoding = "2.7.0"
# JWT library
jsonwebtoken = "9.2.0"
jsonwebtoken = "9.3.0"
# TOTP library
totp-lite = "2.0.1"
# Yubico Library
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
yubico = { version = "0.12.0", features = ["online-tokio"], default-features = false }
# WebAuthn libraries
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn and favicons
url = "2.5.0"
url = "2.5.4"
# Email libraries
lettre = { version = "0.11.3", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
lettre = { version = "0.11.11", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
email_address = "0.2.9"
# HTML Template library
handlebars = { version = "5.1.1", features = ["dir_source"] }
handlebars = { version = "6.3.0", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.23", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
reqwest = { version = "0.12.12", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.24.2"
# Favicon extraction libraries
html5gum = "0.5.7"
regex = { version = "1.10.3", features = ["std", "perf", "unicode-perl"], default-features = false }
html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.5.0"
bytes = "1.9.0"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.48.1", features = ["async"] }
cached = { version = "0.54.0", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.2"
cookie_store = "0.19.1"
cookie = "0.18.1"
cookie_store = "0.21.1"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.63"
openssl = "0.10.68"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.14"
governor = "0.6.0"
paste = "1.0.15"
governor = "0.8.0"
# Check client versions for specific features.
semver = "1.0.21"
semver = "1.0.25"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.39", features = ["secure"], default-features = false, optional = true }
which = "6.0.0"
mimalloc = { version = "0.1.43", features = ["secure"], default-features = false, optional = true }
which = "7.0.1"
# Argon2 library with support for the PHC format
argon2 = "0.5.3"
@@ -163,6 +170,12 @@ argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.3.1"
# Loading a dynamic CSS Stylesheet
grass_compiler = { version = "0.13.4", default-features = false }
[patch.crates-io]
# Patch yubico to remove duplicate crates of older versions
yubico = { git = "https://github.com/BlackDex/yubico-rs", rev = "00df14811f58155c0f02e3ab10f1570ed3e115c6" }
# Strip debuginfo from the release builds
# The symbols are there to provide better panic traces
@@ -172,7 +185,6 @@ strip = "debuginfo"
lto = "fat"
codegen-units = 1
# A little bit of a speedup
[profile.dev]
split-debuginfo = "unpacked"
@@ -190,3 +202,79 @@ strip = "symbols"
lto = "fat"
codegen-units = 1
panic = "abort"
# Profile for systems with low resources
# It will use fewer resources during the build
[profile.release-low]
inherits = "release"
strip = "symbols"
lto = "thin"
codegen-units = 16
# Linting config
# https://doc.rust-lang.org/rustc/lints/groups.html
[lints.rust]
# Forbid
unsafe_code = "forbid"
non_ascii_idents = "forbid"
# Deny
deprecated_in_future = "deny"
future_incompatible = { level = "deny", priority = -1 }
keyword_idents = { level = "deny", priority = -1 }
let_underscore = { level = "deny", priority = -1 }
noop_method_call = "deny"
refining_impl_trait = { level = "deny", priority = -1 }
rust_2018_idioms = { level = "deny", priority = -1 }
rust_2021_compatibility = { level = "deny", priority = -1 }
rust_2024_compatibility = { level = "deny", priority = -1 }
edition_2024_expr_fragment_specifier = "allow" # Once changed to Rust 2024 this should be removed and macros should be validated again
single_use_lifetimes = "deny"
trivial_casts = "deny"
trivial_numeric_casts = "deny"
unused = { level = "deny", priority = -1 }
unused_import_braces = "deny"
unused_lifetimes = "deny"
unused_qualifications = "deny"
variant_size_differences = "deny"
# Allow the following lints since these cause issues with Rust v1.84.0 or newer
# Building Vaultwarden with Rust v1.85.0 and edition 2024 also works without issues
if_let_rescope = "allow"
tail_expr_drop_order = "allow"
# https://rust-lang.github.io/rust-clippy/stable/index.html
[lints.clippy]
# Warn
dbg_macro = "warn"
todo = "warn"
# Deny
case_sensitive_file_extension_comparisons = "deny"
cast_lossless = "deny"
clone_on_ref_ptr = "deny"
equatable_if_let = "deny"
filter_map_next = "deny"
float_cmp_const = "deny"
inefficient_to_string = "deny"
iter_on_empty_collections = "deny"
iter_on_single_items = "deny"
linkedlist = "deny"
macro_use_imports = "deny"
manual_assert = "deny"
manual_instant_elapsed = "deny"
manual_string_new = "deny"
match_on_vec_items = "deny"
match_wildcard_for_single_variants = "deny"
mem_forget = "deny"
needless_continue = "deny"
needless_lifetimes = "deny"
option_option = "deny"
string_add_assign = "deny"
string_to_string = "deny"
unnecessary_join = "deny"
unnecessary_self_imports = "deny"
unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
verbose_file_reads = "deny"
zero_sized_map_values = "deny"
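The `release-low` profile added above can be selected at build time via Cargo's `--profile` flag; a sketch (the feature selection is illustrative):

```sh
# Build using the low-resource profile defined in Cargo.toml
cargo build --profile release-low --features sqlite
```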

README.md

@@ -1,102 +1,144 @@
### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
![Vaultwarden Logo](./resources/vaultwarden-logo-auto.svg)
📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
An alternative server implementation of the Bitwarden Client API, written in Rust and compatible with [official Bitwarden clients](https://bitwarden.com/download/) [[disclaimer](#disclaimer)], perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
---
[![Build](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml/badge.svg)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml)
[![ghcr.io](https://img.shields.io/badge/ghcr.io-download-blue)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/Quay.io-download-blue)](https://quay.io/repository/vaultwarden/server)
[![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt)
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org)
Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden).
[![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg?style=for-the-badge&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/releases/latest)
[![ghcr.io Pulls](https://img.shields.io/badge/dynamic/json?style=for-the-badge&logo=github&logoColor=fff&color=005AA4&url=https%3A%2F%2Fipitio.github.io%2Fbackage%2Fdani-garcia%2Fvaultwarden%2Fvaultwarden.json&query=%24.downloads&label=ghcr.io%20pulls&cacheSeconds=14400)](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden)
[![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg?style=for-the-badge&logo=docker&logoColor=fff&color=005AA4&label=docker.io%20pulls)](https://hub.docker.com/r/vaultwarden/server)
[![Quay.io](https://img.shields.io/badge/quay.io-download-005AA4?style=for-the-badge&logo=redhat&cacheSeconds=14400)](https://quay.io/repository/vaultwarden/server) <br>
[![Contributors](https://img.shields.io/github/contributors-anon/dani-garcia/vaultwarden.svg?style=flat-square&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)
[![Forks](https://img.shields.io/github/forks/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4)](https://github.com/dani-garcia/vaultwarden/network/members)
[![Stars](https://img.shields.io/github/stars/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4)](https://github.com/dani-garcia/vaultwarden/stargazers)
[![Issues Open](https://img.shields.io/github/issues/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/issues)
[![Issues Closed](https://img.shields.io/github/issues-closed/dani-garcia/vaultwarden.svg?style=flat-square&logo=github&logoColor=fff&color=005AA4&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/issues?q=is%3Aissue+is%3Aclosed)
[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg?style=flat-square&logo=vaultwarden&color=944000&cacheSeconds=14400)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt) <br>
[![Dependency Status](https://img.shields.io/badge/dynamic/xml?url=https%3A%2F%2Fdeps.rs%2Frepo%2Fgithub%2Fdani-garcia%2Fvaultwarden%2Fstatus.svg&query=%2F*%5Blocal-name()%3D'svg'%5D%2F*%5Blocal-name()%3D'g'%5D%5B2%5D%2F*%5Blocal-name()%3D'text'%5D%5B4%5D&style=flat-square&logo=rust&label=dependencies&color=005AA4)](https://deps.rs/repo/github/dani-garcia/vaultwarden)
[![GHA Release](https://img.shields.io/github/actions/workflow/status/dani-garcia/vaultwarden/release.yml?style=flat-square&logo=github&logoColor=fff&label=Release%20Workflow)](https://github.com/dani-garcia/vaultwarden/actions/workflows/release.yml)
[![GHA Build](https://img.shields.io/github/actions/workflow/status/dani-garcia/vaultwarden/build.yml?style=flat-square&logo=github&logoColor=fff&label=Build%20Workflow)](https://github.com/dani-garcia/vaultwarden/actions/workflows/build.yml) <br>
[![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?style=flat-square&logo=matrix&logoColor=fff&color=953B00&cacheSeconds=14400)](https://matrix.to/#/#vaultwarden:matrix.org)
[![GitHub Discussions](https://img.shields.io/github/discussions/dani-garcia/vaultwarden?style=flat-square&logo=github&logoColor=fff&color=953B00&cacheSeconds=300)](https://github.com/dani-garcia/vaultwarden/discussions)
[![Discourse Discussions](https://img.shields.io/discourse/topics?server=https%3A%2F%2Fvaultwarden.discourse.group%2F&style=flat-square&logo=discourse&color=953B00)](https://vaultwarden.discourse.group/)
**This project is not associated with the [Bitwarden](https://bitwarden.com/) project nor Bitwarden, Inc.**
> [!IMPORTANT]
> **When using this server, please report any bugs or suggestions directly to us (see [Get in touch](#get-in-touch)), regardless of whatever clients you are using (mobile, desktop, browser...). DO NOT use the official Bitwarden support channels.**
---
<br>
## Features
A nearly complete implementation of the Bitwarden Client API is provided, including:
* [Personal Vault](https://bitwarden.com/help/managing-items/)
* [Send](https://bitwarden.com/help/about-send/)
* [Attachments](https://bitwarden.com/help/attachments/)
* [Website icons](https://bitwarden.com/help/website-icons/)
* [Personal API Key](https://bitwarden.com/help/personal-api-key/)
* [Organizations](https://bitwarden.com/help/getting-started-organizations/)
- [Collections](https://bitwarden.com/help/about-collections/),
[Password Sharing](https://bitwarden.com/help/sharing/),
[Member Roles](https://bitwarden.com/help/user-types-access-control/),
[Groups](https://bitwarden.com/help/about-groups/),
[Event Logs](https://bitwarden.com/help/event-logs/),
[Admin Password Reset](https://bitwarden.com/help/admin-reset/),
[Directory Connector](https://bitwarden.com/help/directory-sync/),
[Policies](https://bitwarden.com/help/policies/)
* [Multi/Two Factor Authentication](https://bitwarden.com/help/bitwarden-field-guide-two-step-login/)
- [Authenticator](https://bitwarden.com/help/setup-two-step-login-authenticator/),
[Email](https://bitwarden.com/help/setup-two-step-login-email/),
[FIDO2 WebAuthn](https://bitwarden.com/help/setup-two-step-login-fido/),
[YubiKey](https://bitwarden.com/help/setup-two-step-login-yubikey/),
[Duo](https://bitwarden.com/help/setup-two-step-login-duo/)
* [Emergency Access](https://bitwarden.com/help/emergency-access/)
* [Vaultwarden Admin Backend](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page)
* [Modified Web Vault client](https://github.com/dani-garcia/bw_web_builds) (Bundled within our containers)
<br>
## Usage
See the [vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki) for more information on how to configure and run the vaultwarden server.
> [!IMPORTANT]
> Most modern web browsers disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault via HTTPS or localhost.
>
> This can be configured in [Vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)).
>
> If you have an available domain name, you can get HTTPS certificates with [Let's Encrypt](https://letsencrypt.org/), or you can generate self-signed certificates with utilities like [mkcert](https://github.com/FiloSottile/mkcert). Some proxies automatically do this step, like Caddy or Traefik (see examples linked above).
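For example, here is a minimal sketch of terminating TLS in the container itself with a certificate generated by mkcert, assuming the `ROCKET_TLS` option described in the wiki page linked above (the hostname and paths are placeholders):

```sh
# Generate a locally-trusted certificate for the hostname (requires mkcert).
mkcert -cert-file vw.crt -key-file vw.key vw.domain.tld

# Mount the certificate into the container and let Rocket terminate TLS.
docker run -d --name vaultwarden \
  -e ROCKET_TLS='{certs="/ssl/vw.crt",key="/ssl/vw.key"}' \
  -v "$(pwd)":/ssl/ \
  -v /vw-data/:/data/ \
  -p 443:80 \
  vaultwarden/server:latest
```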
> [!TIP]
> **For more detailed examples on how to install, use, and configure Vaultwarden, check our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).**
The main way to use Vaultwarden is via our container images which are published to [ghcr.io](https://github.com/dani-garcia/vaultwarden/pkgs/container/vaultwarden), [docker.io](https://hub.docker.com/r/vaultwarden/server) and [quay.io](https://quay.io/repository/vaultwarden/server).
There are also [community-driven packages](https://github.com/dani-garcia/vaultwarden/wiki/Third-party-packages) which can be used, but those might lag behind the latest version or deviate in the way Vaultwarden is configured, as described in our [Wiki](https://github.com/dani-garcia/vaultwarden/wiki).
### Docker/Podman CLI
Pull the container image and mount a volume from the host for persistent storage.<br>
You can replace `docker` with `podman` if you prefer to use Podman.
```shell
docker pull vaultwarden/server:latest
docker run --detach --name vaultwarden \
  --env DOMAIN="https://vw.domain.tld" \
  --volume /vw-data/:/data/ \
  --restart unless-stopped \
  --publish 80:80 \
  vaultwarden/server:latest
```
This will preserve any persistent data under `/vw-data/`; you can adapt the path to whatever suits you.
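Updating later follows the same pattern; since all state lives in the mounted volume, the container itself is disposable. A sketch:

```sh
docker pull vaultwarden/server:latest
docker stop vaultwarden && docker rm vaultwarden
# Re-run the same `docker run` command as above; /vw-data/ is preserved.
```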
### Docker Compose
To use Docker Compose, you need to create a `compose.yaml` file which will hold the configuration to run the Vaultwarden container.
```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      DOMAIN: "https://vw.domain.tld"
    volumes:
      - ./vw-data/:/data/
    ports:
      - 80:80
```
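With the file in place, the usual Compose workflow applies; for example:

```sh
docker compose up -d    # create and start the container in the background
docker compose pull     # later: fetch a newer image...
docker compose up -d    # ...and recreate the container with it
```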
<br>
## Get in touch
Have a question, suggestion or need help? Join our community on [Matrix](https://matrix.to/#/#vaultwarden:matrix.org), [GitHub Discussions](https://github.com/dani-garcia/vaultwarden/discussions) or [Discourse Forums](https://vaultwarden.discourse.group/).
If you prefer to chat, we're usually hanging around in the [#vaultwarden:matrix.org](https://matrix.to/#/#vaultwarden:matrix.org) room on Matrix. Feel free to join us!
Encountered a bug or crash? Please search our issue tracker and discussions to see if it's already been reported. If not, please [start a new discussion](https://github.com/dani-garcia/vaultwarden/discussions) or [create a new issue](https://github.com/dani-garcia/vaultwarden/issues/). Ensure you're using the latest version of Vaultwarden and there aren't any similar issues open or closed!
<br>
## Contributors
### Sponsors
Thanks for your contribution to the project!
<!--
<table>
<tr>
<td align="center">
<a href="https://github.com/username">
<img src="https://avatars.githubusercontent.com/u/725423?s=75&v=4" width="75px;" alt="username"/>
<br />
<sub><b>username</b></sub>
</a>
</td>
</tr>
</table>
[![Contributors Count](https://img.shields.io/github/contributors-anon/dani-garcia/vaultwarden?style=for-the-badge&logo=vaultwarden&color=005AA4)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)<br>
[![Contributors Avatars](https://contributors-img.web.app/image?repo=dani-garcia/vaultwarden)](https://github.com/dani-garcia/vaultwarden/graphs/contributors)
<br/>
-->
<br>
<table>
<tr>
<td align="center">
<a href="https://github.com/themightychris" style="width: 75px">
<sub><b>Chris Alfano</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/numberly" style="width: 75px">
<sub><b>Numberly</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/IQ333777" style="width: 75px">
<sub><b>IQ333777</b></sub>
</a>
</td>
</tr>
</table>
## Disclaimer
**This project is not associated with [Bitwarden](https://bitwarden.com/) or Bitwarden, Inc.**
However, one of the active maintainers for Vaultwarden is employed by Bitwarden and is allowed to contribute to the project on their own time. These contributions are independent of Bitwarden and are reviewed by other maintainers.
The maintainers work together to set the direction for the project, focusing on serving the self-hosting community, including individuals, families, and small organizations, while ensuring the project's sustainability.
**Please note:** We cannot be held liable for any data loss that may occur while using Vaultwarden. This includes passwords, attachments, and other information handled by the application. We highly recommend performing regular backups of your files and database. However, should you experience data loss, we encourage you to contact us immediately.
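As an illustration of such a backup with the default SQLite backend, the sketch below uses SQLite's online-backup command so the database stays consistent while the server keeps running (paths are placeholders; attachments and sends live next to the database):

```sh
# Snapshot the database without stopping Vaultwarden.
sqlite3 /vw-data/db.sqlite3 ".backup '/backups/db-$(date +%F).sqlite3'"
# Copy the folders holding attachments and sends as well.
cp -r /vw-data/attachments /vw-data/sends /backups/
```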
<br>
## Bitwarden_RS
This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues.<br>
Please see [#1642 - v1.21.0 release and project rename to Vaultwarden](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.


@@ -21,7 +21,7 @@ notify us. We welcome working with you to resolve the issue promptly. Thanks in
The following bug classes are out-of scope:
- Bugs that are already reported on Vaultwarden's issue tracker (https://github.com/dani-garcia/vaultwarden/issues)
- Bugs that are not part of Vaultwarden, like on the the web-vault or mobile and desktop clients. These issues need to be reported in the respective project issue tracker at https://github.com/bitwarden to which we are not associated
- Bugs that are not part of Vaultwarden, like on the web-vault or mobile and desktop clients. These issues need to be reported in the respective project issue tracker at https://github.com/bitwarden to which we are not associated
- Issues in an upstream software dependency (ex: Rust, or External Libraries) which are already reported to the upstream maintainer
- Attacks requiring physical access to a user's device
- Issues related to software or protocols not under Vaultwarden's control
@@ -39,7 +39,11 @@ Thank you for helping keep Vaultwarden and our users safe!
# How to contact us
- You can contact us on Matrix https://matrix.to/#/#vaultwarden:matrix.org (user: `@danig:matrix.org`)
- You can send an ![security-contact](/.github/security-contact.gif) to report a security issue.
- If you want to send an encrypted email you can use the following GPG key:<br>
https://keyserver.ubuntu.com/pks/lookup?search=0xB9B7A108373276BF3C0406F9FC8A7D14C3CD543A&fingerprint=on&op=index
- You can contact us on Matrix https://matrix.to/#/#vaultwarden:matrix.org (users: `@danig:matrix.org` and/or `@blackdex:matrix.org`)
- You can send an ![security-contact](/.github/security-contact.gif) to report a security issue.<br>
If you want to send an encrypted email you can use the following GPG key: 13BB3A34C9E380258CE43D595CB150B31F6426BC<br>
It can be found on several public GPG key servers.<br>
* https://keys.openpgp.org/search?q=security%40vaultwarden.org
* https://keys.mailvelope.com/pks/lookup?op=get&search=security%40vaultwarden.org
* https://pgpkeys.eu/pks/lookup?search=security%40vaultwarden.org&fingerprint=on&op=index
* https://keyserver.ubuntu.com/pks/lookup?search=security%40vaultwarden.org&fingerprint=on&op=index


@@ -17,6 +17,13 @@ fn main() {
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
// Use check-cfg to let cargo know which cfg's we define,
// and avoid warnings when they are used in the code.
println!("cargo::rustc-check-cfg=cfg(sqlite)");
println!("cargo::rustc-check-cfg=cfg(mysql)");
println!("cargo::rustc-check-cfg=cfg(postgresql)");
println!("cargo::rustc-check-cfg=cfg(query_logger)");
// Rerun when these paths are changed.
// Someone could have checked-out a tag or specific commit, but no other files changed.
println!("cargo:rerun-if-changed=.git");
@@ -49,11 +56,11 @@ fn run(args: &[&str]) -> Result<String, std::io::Error> {
/// This method reads info from Git, namely tags, branch, and revision
/// To access these values, use:
/// - env!("GIT_EXACT_TAG")
/// - env!("GIT_LAST_TAG")
/// - env!("GIT_BRANCH")
/// - env!("GIT_REV")
/// - env!("VW_VERSION")
/// - `env!("GIT_EXACT_TAG")`
/// - `env!("GIT_LAST_TAG")`
/// - `env!("GIT_BRANCH")`
/// - `env!("GIT_REV")`
/// - `env!("VW_VERSION")`
fn version_from_git_info() -> Result<String, std::io::Error> {
// The exact tag for the current commit, can be empty when
// the current commit doesn't have an associated tag


@@ -1,12 +1,13 @@
---
vault_version: "v2024.1.2"
vault_image_digest: "sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b"
# Cross Compile Docker Helper Scripts v1.3.0
vault_version: "v2025.1.1"
vault_image_digest: "sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918"
# Cross Compile Docker Helper Scripts v1.6.1
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
xx_image_digest: "sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc"
rust_version: 1.75.0 # Rust version to be used
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
rust_version: 1.84.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: 3.19 # Alpine version to be used
alpine_version: "3.21" # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch


@@ -1,4 +1,5 @@
# syntax=docker/dockerfile:1
# check=skip=FromPlatformFlagConstDisallowed,RedundantTargetPlatform
# This file was generated using a Jinja2 template.
# Please make your changes in `DockerSettings.yaml` or `Dockerfile.j2` and then `make`
@@ -18,27 +19,27 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.1.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.1.1
# [docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
# [docker.io/vaultwarden/web-vault:v2024.1.2]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918
# [docker.io/vaultwarden/web-vault:v2025.1.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918 AS vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.75.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.75.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.75.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.75.0 as build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.84.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.84.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.84.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.84.0 AS build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=linux/amd64 build_${TARGETARCH}${TARGETVARIANT} as build
FROM --platform=linux/amd64 build_${TARGETARCH}${TARGETVARIANT} AS build
ARG TARGETARCH
ARG TARGETVARIANT
ARG TARGETPLATFORM
@@ -58,33 +59,30 @@ ENV DEBIAN_FRONTEND=noninteractive \
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Shared variables across Debian and Alpine
# Environment variables for Cargo on Alpine based builds
RUN echo "export CARGO_TARGET=${RUST_MUSL_CROSS_TARGET}" >> /env-cargo && \
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
if [[ "${TARGETARCH}${TARGETVARIANT}" == "armv6" ]] ; then echo "export RUSTFLAGS='-Clink-arg=-latomic'" >> /env-cargo ; fi && \
# Output the current contents of the file
cat /env-cargo
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
COPY ./macros ./macros
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -97,6 +95,8 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
@@ -127,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.19
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.21
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@@ -144,14 +144,12 @@ RUN mkdir /data && \
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .


@@ -1,4 +1,5 @@
# syntax=docker/dockerfile:1
# check=skip=FromPlatformFlagConstDisallowed,RedundantTargetPlatform
# This file was generated using a Jinja2 template.
# Please make your changes in `DockerSettings.yaml` or `Dockerfile.j2` and then `make`
@@ -18,24 +19,24 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.1.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.1.1
# [docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
# [docker.io/vaultwarden/web-vault:v2024.1.2]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918
# [docker.io/vaultwarden/web-vault:v2025.1.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918 AS vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
## And these bash scripts do not have any significant difference if at all
FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc AS xx
FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894 AS xx
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.75.0-slim-bookworm as build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.84.0-slim-bookworm AS build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT
@@ -64,10 +65,7 @@ RUN apt-get update && \
"libc6-$(xx-info debian-arch)-cross" \
"libc6-dev-$(xx-info debian-arch)-cross" \
"linux-libc-dev-$(xx-info debian-arch)-cross" && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
RUN xx-apt-get install -y \
xx-apt-get install -y \
--no-install-recommends \
gcc \
libmariadb3 \
@@ -78,19 +76,29 @@ RUN xx-apt-get install -y \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
dpkg --force-all -i ./libmariadb-dev*.deb
dpkg --force-all -i ./libmariadb-dev*.deb && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Environment variables for cargo across Debian and Alpine
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
@@ -103,19 +111,17 @@ RUN source /env-cargo && \
# Output the current contents of the file
cat /env-cargo
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
COPY ./macros ./macros
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -128,6 +134,8 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
@@ -179,14 +187,12 @@ RUN mkdir /data && \
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .


@@ -1,4 +1,5 @@
# syntax=docker/dockerfile:1
# check=skip=FromPlatformFlagConstDisallowed,RedundantTargetPlatform
# This file was generated using a Jinja2 template.
# Please make your changes in `DockerSettings.yaml` or `Dockerfile.j2` and then `make`
@@ -26,7 +27,7 @@
# $ docker image inspect --format "{{ '{{' }}.RepoTags}}" docker.io/vaultwarden/web-vault@{{ vault_image_digest }}
# [docker.io/vaultwarden/web-vault:{{ vault_version }}]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@{{ vault_image_digest }} as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@{{ vault_image_digest }} AS vault
{% if base == "debian" %}
########################## Cross Compile Docker Helper Scripts ##########################
@@ -38,13 +39,13 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@{{ xx_image_digest }} AS xx
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
{% for arch in build_stage_image[base].arch_image %}
FROM --platform={{ build_stage_image[base].platform }} {{ build_stage_image[base].arch_image[arch] }} as build_{{ arch }}
FROM --platform={{ build_stage_image[base].platform }} {{ build_stage_image[base].arch_image[arch] }} AS build_{{ arch }}
{% endfor %}
{% endif %}
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform={{ build_stage_image[base].platform }} {{ build_stage_image[base].image }} as build
FROM --platform={{ build_stage_image[base].platform }} {{ build_stage_image[base].image }} AS build
{% if base == "debian" %}
COPY --from=xx / /
{% endif %}
@@ -82,10 +83,7 @@ RUN apt-get update && \
"libc6-$(xx-info debian-arch)-cross" \
"libc6-dev-$(xx-info debian-arch)-cross" \
"linux-libc-dev-$(xx-info debian-arch)-cross" && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
RUN xx-apt-get install -y \
xx-apt-get install -y \
--no-install-recommends \
gcc \
libmariadb3 \
@@ -96,21 +94,31 @@ RUN xx-apt-get install -y \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
dpkg --force-all -i ./libmariadb-dev*.deb
dpkg --force-all -i ./libmariadb-dev*.deb && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
{% endif %}
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
{% if base == "debian" %}
# Environment variables for cargo across Debian and Alpine
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
@@ -123,30 +131,29 @@ RUN source /env-cargo && \
# Output the current contents of the file
cat /env-cargo
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
{% elif base == "alpine" %}
# Shared variables across Debian and Alpine
# Environment variables for Cargo on Alpine based builds
RUN echo "export CARGO_TARGET=${RUST_MUSL_CROSS_TARGET}" >> /env-cargo && \
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
if [[ "${TARGETARCH}${TARGETVARIANT}" == "armv6" ]] ; then echo "export RUSTFLAGS='-Clink-arg=-latomic'" >> /env-cargo ; fi && \
# Output the current contents of the file
cat /env-cargo
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% endif %}
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
COPY ./macros ./macros
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
{% if base == "debian" %}
ARG DB=sqlite,mysql,postgresql
{% elif base == "alpine" %}
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
@@ -159,6 +166,8 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
@@ -222,14 +231,12 @@ RUN mkdir /data && \
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .


@@ -11,6 +11,11 @@ With just these two files we can build both Debian and Alpine images for the fol
- armv7 (linux/arm/v7)
- armv6 (linux/arm/v6)
Some unsupported platforms for Debian based images. These are not built and tested by default and are only provided to make it easier for users to build for these architectures.
- 386 (linux/386)
- ppc64le (linux/ppc64le)
- s390x (linux/s390x)
To build these containers you need to enable QEMU binfmt support to be able to run/emulate architectures which are different then your host.<br>
This ensures the container build process can run binaries from other architectures.<br>
@@ -41,7 +46,7 @@ There also is an option to use an other docker container to provide support for
```bash
# To install and activate
docker run --privileged --rm tonistiigi/binfmt --install arm64,arm
# To unistall
# To uninstall
docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
```


@@ -17,7 +17,7 @@ variable "SOURCE_REPOSITORY_URL" {
default = null
}
// The commit hash of of the current commit this build was triggered on
// The commit hash of the current commit this build was triggered on
variable "SOURCE_COMMIT" {
default = null
}
@@ -125,6 +125,40 @@ target "debian-armv6" {
tags = generate_tags("", "-armv6")
}
// ==== Start of unsupported Debian architecture targets ===
// These are provided just to help users build for these rare platforms
// They will not be built by default
target "debian-386" {
inherits = ["debian"]
platforms = ["linux/386"]
tags = generate_tags("", "-386")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/i386-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/i386-linux-gnu"
}
}
target "debian-ppc64le" {
inherits = ["debian"]
platforms = ["linux/ppc64le"]
tags = generate_tags("", "-ppc64le")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/powerpc64le-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/powerpc64le-linux-gnu"
}
}
target "debian-s390x" {
inherits = ["debian"]
platforms = ["linux/s390x"]
tags = generate_tags("", "-s390x")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/s390x-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/s390x-linux-gnu"
}
}
// ==== End of unsupported Debian architecture targets ===
// A Group to build all platforms individually for local testing
group "debian-all" {
targets = ["debian-amd64", "debian-arm64", "debian-armv7", "debian-armv6"]
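As a usage sketch, assuming this bake file lives at `docker/docker-bake.hcl` in the repository, one of the unsupported targets above could be built locally like this (QEMU binfmt support must be enabled first, as described in the build documentation):

```sh
# Build the s390x Debian image for local testing (needs QEMU emulation).
docker buildx bake --file docker/docker-bake.hcl debian-s390x
```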


@@ -1,17 +1,15 @@
#!/bin/sh
#!/usr/bin/env sh
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
: "${DATA_FOLDER:="data"}"
: "${DATA_FOLDER:="/data"}"
: "${ROCKET_PORT:="80"}"
: "${ENV_FILE:="/.env"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
# Check if there is a .env file configured
# Check if the $ENV_FILE file exist and is readable
# If that is the case, load it into the environment before running any check
if [ -z "${ENV_FILE}" ]; then
ENV_FILE=".env"
fi
if [ -r "${ENV_FILE}" ]; then
# shellcheck disable=SC1090
. "${ENV_FILE}"


@@ -1,5 +1,9 @@
#!/bin/sh
if [ -n "${UMASK}" ]; then
umask "${UMASK}"
fi
if [ -r /etc/vaultwarden.sh ]; then
. /etc/vaultwarden.sh
elif [ -r /etc/bitwarden_rs.sh ]; then
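The `UMASK` handling added at the top of this start script can be exercised by passing the variable at container start; a sketch (the value is an example):

```sh
# Files created by Vaultwarden get group read access but no world access.
docker run -d --name vaultwarden \
  -e UMASK=0027 \
  -v /vw-data/:/data/ \
  -p 80:80 \
  vaultwarden/server:latest
```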

macros/Cargo.toml (new file)

@@ -0,0 +1,13 @@
[package]
name = "macros"
version = "0.1.0"
edition = "2021"
[lib]
name = "macros"
path = "src/lib.rs"
proc-macro = true
[dependencies]
quote = "1.0.38"
syn = "2.0.96"

macros/src/lib.rs (new file)

@@ -0,0 +1,58 @@
extern crate proc_macro;
use proc_macro::TokenStream;
use quote::quote;
#[proc_macro_derive(UuidFromParam)]
pub fn derive_uuid_from_param(input: TokenStream) -> TokenStream {
let ast = syn::parse(input).unwrap();
impl_derive_uuid_macro(&ast)
}
fn impl_derive_uuid_macro(ast: &syn::DeriveInput) -> TokenStream {
let name = &ast.ident;
let gen = quote! {
#[automatically_derived]
impl<'r> rocket::request::FromParam<'r> for #name {
type Error = ();
#[inline(always)]
fn from_param(param: &'r str) -> Result<Self, Self::Error> {
if uuid::Uuid::parse_str(param).is_ok() {
Ok(Self(param.to_string()))
} else {
Err(())
}
}
}
};
gen.into()
}
#[proc_macro_derive(IdFromParam)]
pub fn derive_id_from_param(input: TokenStream) -> TokenStream {
let ast = syn::parse(input).unwrap();
impl_derive_safestring_macro(&ast)
}
fn impl_derive_safestring_macro(ast: &syn::DeriveInput) -> TokenStream {
let name = &ast.ident;
let gen = quote! {
#[automatically_derived]
impl<'r> rocket::request::FromParam<'r> for #name {
type Error = ();
#[inline(always)]
fn from_param(param: &'r str) -> Result<Self, Self::Error> {
if param.chars().all(|c| matches!(c, 'a'..='z' | 'A'..='Z' |'0'..='9' | '-')) {
Ok(Self(param.to_string()))
} else {
Err(())
}
}
}
};
gen.into()
}
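A usage sketch for these derives (the newtype and route below are illustrative, not taken from this patch): deriving `UuidFromParam` on an ID wrapper makes Rocket validate the raw path segment before the handler runs, so handlers only ever see well-formed UUIDs.

```rust
use macros::UuidFromParam;

// Hypothetical newtype; the derive generates a FromParam impl that accepts
// the segment only when `uuid::Uuid::parse_str` succeeds (the consuming
// crate therefore needs a `uuid` dependency), otherwise the route 404s.
#[derive(UuidFromParam)]
struct UserId(String);

#[rocket::get("/users/<user_id>")]
fn get_user(user_id: UserId) -> String {
    format!("user {}", user_id.0)
}
```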


@@ -0,0 +1 @@
ALTER TABLE twofactor MODIFY last_used BIGINT NOT NULL;


@@ -0,0 +1 @@
DROP TABLE twofactor_duo_ctx;


@@ -0,0 +1,8 @@
CREATE TABLE twofactor_duo_ctx (
state VARCHAR(64) NOT NULL,
user_email VARCHAR(255) NOT NULL,
nonce VARCHAR(64) NOT NULL,
exp BIGINT NOT NULL,
PRIMARY KEY (state)
);


@@ -0,0 +1 @@
ALTER TABLE `twofactor_incomplete` DROP COLUMN `device_type`;


@@ -0,0 +1 @@
ALTER TABLE twofactor_incomplete ADD COLUMN device_type INTEGER NOT NULL DEFAULT 14; -- 14 = Unknown Browser


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;


@@ -0,0 +1,3 @@
ALTER TABLE twofactor
ALTER COLUMN last_used TYPE BIGINT,
ALTER COLUMN last_used SET NOT NULL;


@@ -0,0 +1 @@
DROP TABLE twofactor_duo_ctx;


@@ -0,0 +1,8 @@
CREATE TABLE twofactor_duo_ctx (
state VARCHAR(64) NOT NULL,
user_email VARCHAR(255) NOT NULL,
nonce VARCHAR(64) NOT NULL,
exp BIGINT NOT NULL,
PRIMARY KEY (state)
);


@@ -0,0 +1 @@
ALTER TABLE twofactor_incomplete DROP COLUMN device_type;


@@ -0,0 +1 @@
ALTER TABLE twofactor_incomplete ADD COLUMN device_type INTEGER NOT NULL DEFAULT 14; -- 14 = Unknown Browser


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT FALSE;


@@ -0,0 +1 @@
-- Integer size in SQLite is already i64, so we don't need to do anything


@@ -0,0 +1 @@
DROP TABLE twofactor_duo_ctx;


@@ -0,0 +1,8 @@
CREATE TABLE twofactor_duo_ctx (
state TEXT NOT NULL,
user_email TEXT NOT NULL,
nonce TEXT NOT NULL,
exp INTEGER NOT NULL,
PRIMARY KEY (state)
);


@@ -0,0 +1 @@
ALTER TABLE `twofactor_incomplete` DROP COLUMN `device_type`;


@@ -0,0 +1 @@
ALTER TABLE twofactor_incomplete ADD COLUMN device_type INTEGER NOT NULL DEFAULT 14; -- 14 = Unknown Browser


@@ -0,0 +1,5 @@
ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT 0; -- FALSE
ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN NOT NULL DEFAULT 0; -- FALSE


@@ -0,0 +1,78 @@
<svg width="1365.8256" height="280.48944" version="1.1" viewBox="0 0 1365.8255 280.48944" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<style>
@media (prefers-color-scheme: dark) {
svg { -webkit-filter:invert(0.90); filter:invert(0.90); }
}</style>
<title>Vaultwarden Logo</title>
<defs>
<mask id="d">
<rect x="-60" y="-60" width="120" height="120" fill="#fff"/>
<circle id="b" cy="-40" r="3"/>
<use transform="rotate(72)" xlink:href="#b"/>
<use transform="rotate(144)" xlink:href="#b"/>
<use transform="rotate(216)" xlink:href="#b"/>
<use transform="rotate(-72)" xlink:href="#b"/>
</mask>
</defs>
<g transform="translate(-10.708266,-9.2965379)" aria-label="aultwarden">
<path d="m371.55338 223.43649-5.76172-14.84375h-0.78125q-7.51953 9.47266-15.52735 13.1836-7.91015 3.61328-20.70312 3.61328-15.72266 0-24.80469-8.98438-8.98437-8.98437-8.98437-25.58593 0-17.38282 12.10937-25.58594 12.20703-8.30078 36.71875-9.17969l18.94531-0.58594v-4.78515q0-16.60157-16.99218-16.60157-13.08594 0-30.76172 7.91016l-9.86328-20.11719q18.84765-9.86328 41.79687-9.86328 21.97266 0 33.69141 9.57031 11.71875 9.57032 11.71875 29.10157v72.7539zm-8.78907-50.58593-11.52343 0.39062q-12.98829 0.39063-19.33594 4.6875-6.34766 4.29688-6.34766 13.08594 0 12.59765 14.45313 12.59765 10.35156 0 16.5039-5.95703 6.25-5.95703 6.25-15.82031zm137.59766 50.58593-4.00391-13.96484h-1.5625q-4.78515 7.61719-13.57422 11.81641-8.78906 4.10156-20.01953 4.10156-19.23828 0-29.0039-10.25391-9.76563-10.35156-9.76563-29.6875v-71.1914h29.78516v63.76953q0 11.8164 4.19922 17.77343 4.19922 5.85938 13.3789 5.85938 12.5 0 18.06641-8.30078 5.56641-8.39844 5.56641-27.73438v-51.36718h29.78515v109.17968zm83.88672 0h-29.78516v-151.953122h29.78516zm77.24609-21.77734q7.8125 0 18.75-3.41797v22.16797q-11.13281 4.98047-27.34375 4.98047-17.87109 0-26.07422-8.98438-8.10547-9.08203-8.10547-27.14843v-52.63672h-14.25781v-12.59766l16.40625-9.96094 8.59375-23.046872h19.04297v23.242192h30.56641v22.36328h-30.56641v52.63672q0 6.34765 3.51563 9.375 3.61328 3.02734 9.47265 3.02734z"/>
<path d="m791.27994 223.43649-19.62891-62.79297q-1.85547-5.76171-6.93359-26.17187h-0.78125q-3.90625 17.08984-6.83594 26.36719l-20.21484 62.59765h-18.75l-29.19922-107.03125h16.99219q10.35156 40.33203 15.72265 61.42578 5.46875 21.09375 6.25 28.41797h0.78125q1.07422-5.5664 3.41797-14.35547 2.44141-8.88671 4.19922-14.0625l19.62891-61.42578h17.57812l19.14063 61.42578q5.46875 16.79688 7.42187 28.22266h0.78125q0.39063-3.51562 2.05078-10.83984 1.75781-7.32422 20.41016-78.8086h16.79687l-29.58984 107.03125zm133.98437 0-3.22265-15.23437h-0.78125q-8.00782 10.05859-16.01563 13.67187-7.91015 3.51563-19.82422 3.51563-15.91797 0-25-8.20313-8.98437-8.20312-8.98437-23.33984 0-32.42188 51.85547-33.98438l18.16406-0.58593v-6.64063q0-12.59765-5.46875-18.55469-5.37109-6.05468-17.28516-6.05468-13.3789 0-30.27343 8.20312l-4.98047-12.40234q7.91015-4.29688 17.28515-6.73828 9.47266-2.44141 18.94532-2.44141 19.14062 0 28.32031 8.49609 9.27734 8.4961 9.27734 27.2461v73.04687zm-36.62109-11.42578q15.13672 0 23.73047-8.30078 8.6914-8.30078 8.6914-23.24219v-9.66797l-16.21093 0.6836q-19.33594 0.68359-27.92969 6.05469-8.49609 5.27343-8.49609 16.5039 0 8.78906 5.27343 13.37891 5.3711 4.58984 14.94141 4.58984zm130.85938-97.55859q7.1289 0 12.793 1.17187l-2.2461 15.03907q-6.6407-1.46485-11.7188-1.46485-12.9883 0-22.26561 10.54688-9.17968 10.54687-9.17968 26.26953v57.42187h-16.21094v-107.03125h13.37891l1.85546 19.82422h0.78125q5.95704-10.44922 14.35551-16.11328 8.3984-5.66406 18.457-5.66406zm101.6602 94.6289h-0.879q-11.2304 16.3086-33.5937 16.3086-20.9961 0-32.7148-14.35547-11.6211-14.35547-11.6211-40.82031 0-26.46485 11.7187-41.11328 11.7188-14.64844 32.6172-14.64844 21.7773 0 33.3984 15.82031h1.2696l-0.6836-7.71484-0.3907-7.51953v-43.554692h16.211v151.953122h-13.1836zm-32.4219 2.73438q16.6015 0 24.0234-8.98438 7.5195-9.08203 7.5195-29.19921v-3.41797q0-22.75391-7.6171-32.42188-7.5196-9.76562-24.1211-9.76562-14.2578 0-21.875 11.13281-7.5196 11.03516-7.5196 31.25 0 20.50781 7.5196 30.95703 7.5195 10.44922 22.0703 10.44922zm127.3437 13.57422q-23.7304 0-37.5-14.45313-13.6718-14.45312-13.6718-40.13672 0-25.8789 12.6953-41.11328 12.7929-15.23437 34.2773-15.23437 20.1172 0 31.8359 13.28125 11.7188 13.18359 11.7188 34.86328v10.25391h-73.7305q0.4883 18.84765 9.4727 28.61328 9.082 9.76562 25.4883 9.76562 17.2851 0 34.1797-7.22656v14.45312q-8.5938 3.71094-16.3086 5.27344-7.6172 1.66016-18.4571 1.66016zm-4.3945-97.36328q-12.8906 0-20.6055 8.39843-7.6172 8.39844-8.9843 23.24219h55.957q0-15.33203-6.836-23.4375-6.8359-8.20312-19.5312-8.20312zm144.6289 95.41015v-69.23828q0-13.08594-5.957-19.53125-5.9571-6.44531-18.6524-6.44531-16.7968 0-24.6093 9.08203t-7.8125 29.98047v56.15234h-16.211v-107.03125h13.1836l2.6367 14.64844h0.7813q4.9804-7.91016 13.9648-12.20703 8.9844-4.39453 20.0196-4.39453 19.3359 0 29.1015 9.375 9.7656 9.27734 9.7656 29.78515v69.82422z"/>
</g>
<g transform="translate(-10.708266,-9.2965379)">
<g id="e" transform="matrix(2.6712834,0,0,2.6712834,150.95027,149.53854)">
<g id="f" mask="url(#d)">
<path d="m-31.1718-33.813208 26.496029 74.188883h9.3515399l26.49603-74.188883h-9.767164l-16.728866 47.588948q-1.662496 4.571864-2.805462 8.624198-1.142966 3.948427-1.870308 7.585137-.72734199-3.63671-1.8703079-7.689043-1.142966-4.052334-2.805462-8.728104l-16.624959-47.381136z" stroke="#000" stroke-width="4.51171"/>
<circle transform="scale(-1,1)" r="43" fill="none" stroke="#000" stroke-width="9"/>
<g id="g" transform="scale(-1,1)">
<polygon id="a" points="46 -3 46 3 51 0" stroke="#000" stroke-linejoin="round" stroke-width="3"/>
<use transform="rotate(11.25)" xlink:href="#a"/>
<use transform="rotate(22.5)" xlink:href="#a"/>
<use transform="rotate(33.75)" xlink:href="#a"/>
<use transform="rotate(45)" xlink:href="#a"/>
<use transform="rotate(56.25)" xlink:href="#a"/>
<use transform="rotate(67.5)" xlink:href="#a"/>
<use transform="rotate(78.75)" xlink:href="#a"/>
<use transform="rotate(90)" xlink:href="#a"/>
<use transform="rotate(101.25)" xlink:href="#a"/>
<use transform="rotate(112.5)" xlink:href="#a"/>
<use transform="rotate(123.75)" xlink:href="#a"/>
<use transform="rotate(135)" xlink:href="#a"/>
<use transform="rotate(146.25)" xlink:href="#a"/>
<use transform="rotate(157.5)" xlink:href="#a"/>
<use transform="rotate(168.75)" xlink:href="#a"/>
<use transform="scale(-1)" xlink:href="#a"/>
<use transform="rotate(191.25)" xlink:href="#a"/>
<use transform="rotate(202.5)" xlink:href="#a"/>
<use transform="rotate(213.75)" xlink:href="#a"/>
<use transform="rotate(225)" xlink:href="#a"/>
<use transform="rotate(236.25)" xlink:href="#a"/>
<use transform="rotate(247.5)" xlink:href="#a"/>
<use transform="rotate(258.75)" xlink:href="#a"/>
<use transform="rotate(-90)" xlink:href="#a"/>
<use transform="rotate(-78.75)" xlink:href="#a"/>
<use transform="rotate(-67.5)" xlink:href="#a"/>
<use transform="rotate(-56.25)" xlink:href="#a"/>
<use transform="rotate(-45)" xlink:href="#a"/>
<use transform="rotate(-33.75)" xlink:href="#a"/>
<use transform="rotate(-22.5)" xlink:href="#a"/>
<use transform="rotate(-11.25)" xlink:href="#a"/>
</g>
<g id="h" transform="scale(-1,1)">
<polygon id="c" points="7 -42 -7 -42 0 -35" stroke="#000" stroke-linejoin="round" stroke-width="6"/>
<use transform="rotate(72)" xlink:href="#c"/>
<use transform="rotate(144)" xlink:href="#c"/>
<use transform="rotate(216)" xlink:href="#c"/>
<use transform="rotate(-72)" xlink:href="#c"/>
</g>
</g>
<mask>
<rect x="-60" y="-60" width="120" height="120" fill="#fff"/>
<circle cy="-40" r="3"/>
<use transform="rotate(72)" xlink:href="#b"/>
<use transform="rotate(144)" xlink:href="#b"/>
<use transform="rotate(216)" xlink:href="#b"/>
<use transform="rotate(-72)" xlink:href="#b"/>
</mask>
</g>
</g>
</svg>



@@ -1,4 +1,4 @@
[toolchain]
channel = "1.75.0"
channel = "1.84.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"


@@ -1,4 +1,5 @@
use once_cell::sync::Lazy;
use reqwest::Method;
use serde::de::DeserializeOwned;
use serde_json::Value;
use std::env;
@@ -17,14 +18,15 @@ use crate::{
core::{log_event, two_factor},
unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify,
},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp, Secure},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
error::{Error, MapResult},
http_client::make_http_request,
mail,
util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
NumberOrString,
container_base_image, format_naive_datetime_local, get_display_size, get_web_vault_version,
is_running_in_container, NumberOrString,
},
CONFIG, VERSION,
};
@@ -48,7 +50,7 @@ pub fn routes() -> Vec<Route> {
disable_user,
enable_user,
remove_2fa,
update_user_org_type,
update_membership_type,
update_revision_users,
post_config,
delete_config,
@@ -60,6 +62,7 @@ pub fn routes() -> Vec<Route> {
diagnostics,
get_diagnostics_config,
resend_user_invite,
get_diagnostics_http,
]
}
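Several routes in this file gain explicit `format = "application/json"` guards below, so scripted calls to the admin API must now send a matching `Content-Type` header. An illustrative sketch (the cookie value and domain are placeholders):

```sh
curl -X POST "https://vw.domain.tld/admin/invite" \
  --cookie "<admin session cookie>" \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com"}'
```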
@@ -96,6 +99,7 @@ const DT_FMT: &str = "%Y-%m-%d %H:%M:%S %Z";
const BASE_TEMPLATE: &str = "admin/base";
const ACTING_ADMIN_USER: &str = "vaultwarden-admin-00000-000000000000";
pub const FAKE_ADMIN_UUID: &str = "00000000-0000-0000-0000-000000000000";
fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
@@ -167,8 +171,13 @@ struct LoginForm {
redirect: Option<String>,
}
#[post("/", data = "<data>")]
fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp) -> Result<Redirect, AdminResponse> {
#[post("/", format = "application/x-www-form-urlencoded", data = "<data>")]
fn post_admin_login(
data: Form<LoginForm>,
cookies: &CookieJar<'_>,
ip: ClientIp,
secure: Secure,
) -> Result<Redirect, AdminResponse> {
let data = data.into_inner();
let redirect = data.redirect;
@@ -190,9 +199,10 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
let cookie = Cookie::build((COOKIE_NAME, jwt))
.path(admin_path())
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.max_age(time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
.http_only(true);
.http_only(true)
.secure(secure.https);
cookies.add(cookie);
if let Some(redirect) = redirect {
@@ -265,21 +275,21 @@ fn admin_page_login() -> ApiResult<Html<String>> {
render_admin_login(None, None)
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct InviteData {
email: String,
}
async fn get_user_or_404(uuid: &str, conn: &mut DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(uuid, conn).await {
async fn get_user_or_404(user_id: &UserId, conn: &mut DbConn) -> ApiResult<User> {
if let Some(user) = User::find_by_uuid(user_id, conn).await {
Ok(user)
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[post("/invite", data = "<data>")]
#[post("/invite", format = "application/json", data = "<data>")]
async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let data: InviteData = data.into_inner();
if User::find_by_mail(&data.email, &mut conn).await.is_some() {
@@ -290,7 +300,9 @@ async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbCon
async fn _generate_invite(user: &User, conn: &mut DbConn) -> EmptyResult {
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
mail::send_invite(user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
} else {
let invitation = Invitation::new(&user.email);
invitation.save(conn).await
@@ -303,7 +315,7 @@ async fn invite_user(data: Json<InviteData>, _token: AdminToken, mut conn: DbCon
Ok(Json(user.to_json(&mut conn).await))
}
#[post("/test/smtp", data = "<data>")]
#[post("/test/smtp", format = "application/json", data = "<data>")]
async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
let data: InviteData = data.into_inner();
@@ -326,9 +338,9 @@ async fn get_users_json(_token: AdminToken, mut conn: DbConn) -> Json<Value> {
let mut users_json = Vec::with_capacity(users.len());
for u in users {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["LastActive"] = match u.last_active(&mut conn).await {
usr["userEnabled"] = json!(u.enabled);
usr["createdAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["lastActive"] = match u.last_active(&mut conn).await {
Some(dt) => json!(format_naive_datetime_local(&dt, DT_FMT)),
None => json!(None::<String>),
};
@@ -364,37 +376,37 @@ async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<
async fn get_user_by_mail_json(mail: &str, _token: AdminToken, mut conn: DbConn) -> JsonResult {
if let Some(u) = User::find_by_mail(mail, &mut conn).await {
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["userEnabled"] = json!(u.enabled);
usr["createdAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
} else {
err_code!("User doesn't exist", Status::NotFound.code);
}
}
#[get("/users/<uuid>")]
async fn get_user_json(uuid: &str, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(uuid, &mut conn).await?;
#[get("/users/<user_id>")]
async fn get_user_json(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> JsonResult {
let u = get_user_or_404(&user_id, &mut conn).await?;
let mut usr = u.to_json(&mut conn).await;
usr["UserEnabled"] = json!(u.enabled);
usr["CreatedAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["userEnabled"] = json!(u.enabled);
usr["createdAt"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
Ok(Json(usr))
}
#[post("/users/<uuid>/delete")]
async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let user = get_user_or_404(uuid, &mut conn).await?;
#[post("/users/<user_id>/delete", format = "application/json")]
async fn delete_user(user_id: UserId, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let user = get_user_or_404(&user_id, &mut conn).await?;
// Get the user_org records before deleting the actual user
let user_orgs = UserOrganization::find_any_state_by_user(uuid, &mut conn).await;
// Get the membership records before deleting the actual user
let memberships = Membership::find_any_state_by_user(&user_id, &mut conn).await;
let res = user.delete(&mut conn).await;
for user_org in user_orgs {
for membership in memberships {
log_event(
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
&user_org.org_uuid,
ACTING_ADMIN_USER,
&membership.uuid,
&membership.org_uuid,
&ACTING_ADMIN_USER.into(),
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@@ -405,9 +417,9 @@ async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyRe
res
}
#[post("/users/<uuid>/deauth")]
async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
#[post("/users/<user_id>/deauth", format = "application/json")]
async fn deauth_user(user_id: UserId, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(&user_id, &mut conn).await?;
nt.send_logout(&user, None).await;
@@ -426,9 +438,9 @@ async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notif
user.save(&mut conn).await
}
#[post("/users/<uuid>/disable")]
async fn disable_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
#[post("/users/<user_id>/disable", format = "application/json")]
async fn disable_user(user_id: UserId, _token: AdminToken, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let mut user = get_user_or_404(&user_id, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
user.enabled = false;
@@ -440,33 +452,35 @@ async fn disable_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Noti
save_result
}
#[post("/users/<uuid>/enable")]
async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
#[post("/users/<user_id>/enable", format = "application/json")]
async fn enable_user(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&user_id, &mut conn).await?;
user.enabled = true;
user.save(&mut conn).await
}
#[post("/users/<uuid>/remove-2fa")]
async fn remove_2fa(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
#[post("/users/<user_id>/remove-2fa", format = "application/json")]
async fn remove_2fa(user_id: UserId, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(&user_id, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
two_factor::enforce_2fa_policy(&user, ACTING_ADMIN_USER, 14, &token.ip.ip, &mut conn).await?;
two_factor::enforce_2fa_policy(&user, &ACTING_ADMIN_USER.into(), 14, &token.ip.ip, &mut conn).await?;
user.totp_recover = None;
user.save(&mut conn).await
}
#[post("/users/<uuid>/invite/resend")]
async fn resend_user_invite(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(uuid, &mut conn).await {
#[post("/users/<user_id>/invite/resend", format = "application/json")]
async fn resend_user_invite(user_id: UserId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(&user_id, &mut conn).await {
// TODO: replace this with a user.status check once it is available (PR#3397)
if !user.password_hash.is_empty() {
err_code!("User already accepted invitation", Status::BadRequest.code);
}
if CONFIG.mail_enabled() {
mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None).await
let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
mail::send_invite(&user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
} else {
Ok(())
}
@@ -475,42 +489,45 @@ async fn resend_user_invite(uuid: &str, _token: AdminToken, mut conn: DbConn) ->
}
}
#[derive(Deserialize, Debug)]
struct UserOrgTypeData {
#[derive(Debug, Deserialize)]
struct MembershipTypeData {
user_type: NumberOrString,
user_uuid: String,
org_uuid: String,
user_uuid: UserId,
org_uuid: OrganizationId,
}
#[post("/users/org_type", data = "<data>")]
async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let data: UserOrgTypeData = data.into_inner();
#[post("/users/org_type", format = "application/json", data = "<data>")]
async fn update_membership_type(data: Json<MembershipTypeData>, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let data: MembershipTypeData = data.into_inner();
let mut user_to_edit =
match UserOrganization::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await {
Some(user) => user,
None => err!("The specified user isn't member of the organization"),
};
let Some(mut member_to_edit) = Membership::find_by_user_and_org(&data.user_uuid, &data.org_uuid, &mut conn).await
else {
err!("The specified user isn't member of the organization")
};
let new_type = match UserOrgType::from_str(&data.user_type.into_string()) {
let new_type = match MembershipType::from_str(&data.user_type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid type"),
};
if user_to_edit.atype == UserOrgType::Owner && new_type != UserOrgType::Owner {
if member_to_edit.atype == MembershipType::Owner && new_type != MembershipType::Owner {
// Removing owner permission, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&data.org_uuid, UserOrgType::Owner, &mut conn).await <= 1 {
if Membership::count_confirmed_by_org_and_type(&data.org_uuid, MembershipType::Owner, &mut conn).await <= 1 {
err!("Can't change the type of the last owner")
}
}
// This check is also done at api::organizations::{accept_invite(), _confirm_invite, _activate_user(), edit_user()}, update_user_org_type
// This check is also done at api::organizations::{accept_invite, _confirm_invite, _activate_member, edit_member}, update_membership_type
// It returns different error messages per function.
if new_type < UserOrgType::Admin {
match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, &user_to_edit.org_uuid, true, &mut conn).await {
if new_type < MembershipType::Admin {
match OrgPolicy::is_user_allowed(&member_to_edit.user_uuid, &member_to_edit.org_uuid, true, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot modify this user to this type because it has no two-step login method activated");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::find_and_activate_email_2fa(&member_to_edit.user_uuid, &mut conn).await?;
} else {
err!("You cannot modify this user to this type because they have not setup 2FA");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot modify this user to this type because it is a member of an organization which forbids it");
@@ -520,20 +537,20 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
log_event(
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
&member_to_edit.uuid,
&data.org_uuid,
ACTING_ADMIN_USER,
&ACTING_ADMIN_USER.into(),
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
)
.await;
user_to_edit.atype = new_type;
user_to_edit.save(&mut conn).await
member_to_edit.atype = new_type;
member_to_edit.save(&mut conn).await
}
#[post("/users/update_revision")]
#[post("/users/update_revision", format = "application/json")]
async fn update_revision_users(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
User::update_all_revisions(&mut conn).await
}
@@ -544,7 +561,7 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
let mut organizations_json = Vec::with_capacity(organizations.len());
for o in organizations {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &mut conn).await);
org["user_count"] = json!(Membership::count_by_org(&o.uuid, &mut conn).await);
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &mut conn).await);
org["collection_count"] = json!(Collection::count_by_org(&o.uuid, &mut conn).await);
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
@@ -558,17 +575,12 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
Ok(Html(text))
}
#[post("/organizations/<uuid>/delete")]
async fn delete_organization(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(uuid, &mut conn).await.map_res("Organization doesn't exist")?;
#[post("/organizations/<org_id>/delete", format = "application/json")]
async fn delete_organization(org_id: OrganizationId, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
let org = Organization::find_by_uuid(&org_id, &mut conn).await.map_res("Organization doesn't exist")?;
org.delete(&mut conn).await
}
#[derive(Deserialize)]
struct WebVaultVersion {
version: String,
}
#[derive(Deserialize)]
struct GitRelease {
tag_name: String,
@@ -590,15 +602,14 @@ struct TimeApi {
}
async fn get_json_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
let json_api = get_reqwest_client();
Ok(json_api.get(url).send().await?.error_for_status()?.json::<T>().await?)
Ok(make_http_request(Method::GET, url)?.send().await?.error_for_status()?.json::<T>().await?)
}
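`get_json_api` keeps the same fetch-and-decode shape while routing through the new `make_http_request` helper. The generic pattern, sketched here with plain reqwest since the wrapper itself is project-specific:
use reqwest::Client;
use serde::de::DeserializeOwned;
// error_for_status() turns 4xx/5xx responses into Err before decoding,
// so the caller never tries to deserialize an HTML error page as T.
async fn get_json<T: DeserializeOwned>(client: &Client, url: &str) -> Result<T, reqwest::Error> {
    client.get(url).send().await?.error_for_status()?.json::<T>().await
}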
async fn has_http_access() -> bool {
let http_access = get_reqwest_client();
match http_access.head("https://github.com/dani-garcia/vaultwarden").send().await {
let Ok(req) = make_http_request(Method::HEAD, "https://github.com/dani-garcia/vaultwarden") else {
return false;
};
match req.send().await {
Ok(r) => r.status().is_success(),
_ => false,
}
@@ -608,7 +619,7 @@ use cached::proc_macro::cached;
/// Cache this function to avoid hitting the API rate limit. GitHub only allows 60 requests per hour, and we already use 3 here.
/// It will cache this function for 300 seconds (5 minutes), which should prevent exhaustion of the rate limit.
#[cached(time = 300, sync_writes = true)]
async fn get_release_info(has_http_access: bool, running_within_docker: bool) -> (String, String, String) {
async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) {
// If the HTTP check failed, do not even attempt to check for new versions, since we were not able to connect to github.com anyway.
if has_http_access {
(
@@ -625,9 +636,9 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
}
_ => "-".to_string(),
},
// Do not fetch the web-vault version when running within Docker.
// Do not fetch the web-vault version when running within a container.
// The web-vault version is embedded within the container itself and should not be updated manually.
if running_within_docker {
if running_within_container {
"-".to_string()
} else {
match get_json_api::<GitRelease>(
@@ -668,20 +679,8 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
use chrono::prelude::*;
use std::net::ToSocketAddrs;
// Get current running versions
let web_vault_version: WebVaultVersion =
match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "vw-version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => match std::fs::read_to_string(format!("{}/{}", CONFIG.web_vault_folder(), "version.json")) {
Ok(s) => serde_json::from_str(&s)?,
_ => WebVaultVersion {
version: String::from("Version file missing"),
},
},
};
// Execute some environment checks
let running_within_docker = is_running_in_docker();
let running_within_container = is_running_in_container();
let has_http_access = has_http_access().await;
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
@@ -695,12 +694,12 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
};
let (latest_release, latest_commit, latest_web_build) =
get_release_info(has_http_access, running_within_docker).await;
get_release_info(has_http_access, running_within_container).await;
let ip_header_name = match &ip_header.0 {
Some(h) => h,
_ => "",
};
let ip_header_name = &ip_header.0.unwrap_or_default();
// Get current running versions
let web_vault_version = get_web_vault_version();
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
@@ -708,22 +707,23 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
"latest_release": latest_release,
"latest_commit": latest_commit,
"web_vault_enabled": &CONFIG.web_vault_enabled(),
"web_vault_version": web_vault_version.version.trim_start_matches('v'),
"web_vault_version": web_vault_version,
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"docker_base_image": if running_within_docker { docker_base_image() } else { "Not applicable" },
"running_within_container": running_within_container,
"container_base_image": if running_within_container { container_base_image() } else { "Not applicable" },
"has_http_access": has_http_access,
"ip_header_exists": &ip_header.0.is_some(),
"ip_header_match": ip_header_name == CONFIG.ip_header(),
"ip_header_exists": !ip_header_name.is_empty(),
"ip_header_match": ip_header_name.eq(&CONFIG.ip_header()),
"ip_header_name": ip_header_name,
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
"enable_websocket": &CONFIG.enable_websocket(),
"db_type": *DB_TYPE,
"db_version": get_sql_server_version(&mut conn).await,
"admin_url": format!("{}/diagnostics", admin_url()),
"overrides": &CONFIG.get_overrides().join(", "),
"host_arch": std::env::consts::ARCH,
"host_os": std::env::consts::OS,
"host_arch": env::consts::ARCH,
"host_os": env::consts::OS,
"server_time_local": Local::now().format("%Y-%m-%d %H:%M:%S %Z").to_string(),
"server_time": Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(), // Run the server date/time check as late as possible to minimize the time difference
"ntp_time": get_ntp_time(has_http_access).await, // Run the ntp check as late as possible to minimize the time difference
@@ -733,27 +733,41 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
Ok(Html(text))
}
#[get("/diagnostics/config")]
#[get("/diagnostics/config", format = "application/json")]
fn get_diagnostics_config(_token: AdminToken) -> Json<Value> {
let support_json = CONFIG.get_support_json();
Json(support_json)
}
#[post("/config", data = "<data>")]
#[get("/diagnostics/http?<code>")]
fn get_diagnostics_http(code: u16, _token: AdminToken) -> EmptyResult {
err_code!(format!("Testing error {code} response"), code);
}
#[post("/config", format = "application/json", data = "<data>")]
fn post_config(data: Json<ConfigBuilder>, _token: AdminToken) -> EmptyResult {
let data: ConfigBuilder = data.into_inner();
CONFIG.update_config(data)
if let Err(e) = CONFIG.update_config(data, true) {
err!(format!("Unable to save config: {e:?}"))
}
Ok(())
}
#[post("/config/delete")]
#[post("/config/delete", format = "application/json")]
fn delete_config(_token: AdminToken) -> EmptyResult {
CONFIG.delete_user_config()
if let Err(e) = CONFIG.delete_user_config() {
err!(format!("Unable to delete config: {e:?}"))
}
Ok(())
}
#[post("/config/backup_db")]
async fn backup_db(_token: AdminToken, mut conn: DbConn) -> EmptyResult {
#[post("/config/backup_db", format = "application/json")]
async fn backup_db(_token: AdminToken, mut conn: DbConn) -> ApiResult<String> {
if *CAN_BACKUP {
backup_database(&mut conn).await
match backup_database(&mut conn).await {
Ok(f) => Ok(format!("Backup to '{f}' was successful")),
Err(e) => err!(format!("Backup was unsuccessful {e}")),
}
} else {
err!("Can't back up current DB (Only SQLite supports this feature)");
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,11 +1,11 @@
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use rocket::{serde::json::Json, Route};
use serde_json::Value;
use crate::{
api::{
core::{CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase,
EmptyResult, JsonResult,
},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
@@ -43,31 +43,33 @@ pub fn routes() -> Vec<Route> {
async fn get_contacts(headers: Headers, mut conn: DbConn) -> Json<Value> {
if !CONFIG.emergency_access_allowed() {
return Json(json!({
"Data": [{
"Id": "",
"Status": 2,
"Type": 0,
"WaitTimeDays": 0,
"GranteeId": "",
"Email": "",
"Name": "NOTE: Emergency Access is disabled!",
"Object": "emergencyAccessGranteeDetails",
"data": [{
"id": "",
"status": 2,
"type": 0,
"waitTimeDays": 0,
"granteeId": "",
"email": "",
"name": "NOTE: Emergency Access is disabled!",
"object": "emergencyAccessGranteeDetails",
}],
"Object": "list",
"ContinuationToken": null
"object": "list",
"continuationToken": null
}));
}
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
if let Some(grantee) = ea.to_json_grantee_details(&mut conn).await {
emergency_access_list_json.push(grantee)
}
}
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
"data": emergency_access_list_json,
"object": "list",
"continuationToken": null
}))
}
@@ -84,18 +86,20 @@ async fn get_grantees(headers: Headers, mut conn: DbConn) -> Json<Value> {
}
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
"data": emergency_access_list_json,
"object": "list",
"continuationToken": null
}))
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
async fn get_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
match EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await {
Some(emergency_access) => Ok(Json(
emergency_access.to_json_grantee_details(&mut conn).await.expect("Grantee user should exist but does not!"),
)),
None => err!("Emergency access not valid."),
}
}
@@ -105,42 +109,49 @@ async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
// region put/post
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct EmergencyAccessUpdateData {
Type: NumberOrString,
WaitTimeDays: i32,
KeyEncrypted: Option<String>,
r#type: NumberOrString,
wait_time_days: i32,
key_encrypted: Option<String>,
}
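The request structs drop the hand-written PascalCase fields in favor of `#[serde(rename_all = "camelCase")]`, so snake_case Rust fields line up with the camelCase keys the newer clients send; `r#type` is a raw identifier because `type` is a Rust keyword, and serde strips the `r#` prefix when matching keys. A minimal self-contained sketch:
use serde::Deserialize;
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateData {
    r#type: i32,                   // matches the JSON key "type"
    wait_time_days: i32,           // matches "waitTimeDays"
    key_encrypted: Option<String>, // matches "keyEncrypted"
}
fn main() {
    let json = r#"{"type": 1, "waitTimeDays": 7, "keyEncrypted": null}"#;
    let data: UpdateData = serde_json::from_str(json).unwrap();
    assert_eq!(data.wait_time_days, 7);
}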
#[put("/emergency-access/<emer_id>", data = "<data>")]
async fn put_emergency_access(emer_id: &str, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
post_emergency_access(emer_id, data, conn).await
async fn put_emergency_access(
emer_id: EmergencyAccessId,
data: Json<EmergencyAccessUpdateData>,
headers: Headers,
conn: DbConn,
) -> JsonResult {
post_emergency_access(emer_id, data, headers, conn).await
}
#[post("/emergency-access/<emer_id>", data = "<data>")]
async fn post_emergency_access(
emer_id: &str,
data: JsonUpcase<EmergencyAccessUpdateData>,
emer_id: EmergencyAccessId,
data: Json<EmergencyAccessUpdateData>,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_enabled()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
let data: EmergencyAccessUpdateData = data.into_inner();
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => emergency_access,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
let new_type = match EmergencyAccessType::from_str(&data.r#type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
emergency_access.atype = new_type;
emergency_access.wait_time_days = data.WaitTimeDays;
if data.KeyEncrypted.is_some() {
emergency_access.key_encrypted = data.KeyEncrypted;
emergency_access.wait_time_days = data.wait_time_days;
if data.key_encrypted.is_some() {
emergency_access.key_encrypted = data.key_encrypted;
}
emergency_access.save(&mut conn).await?;
@@ -152,26 +163,30 @@ async fn post_emergency_access(
// region delete
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn delete_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?;
let grantor_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => {
if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
err!("Emergency access not valid.")
}
emer
let emergency_access = match (
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await,
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &headers.user.uuid, &mut conn).await,
) {
(Some(grantor_emer), None) => {
info!("Grantor deleted emergency access {emer_id}");
grantor_emer
}
None => err!("Emergency access not valid."),
(None, Some(grantee_emer)) => {
info!("Grantee deleted emergency access {emer_id}");
grantee_emer
}
_ => err!("Emergency access not valid."),
};
emergency_access.delete(&mut conn).await?;
Ok(())
}
#[post("/emergency-access/<emer_id>/delete")]
async fn post_delete_emergency_access(emer_id: &str, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_delete_emergency_access(emer_id: EmergencyAccessId, headers: Headers, conn: DbConn) -> EmptyResult {
delete_emergency_access(emer_id, headers, conn).await
}
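Most lookups in this file move from a match on Option to let-else, which binds the success case and bails out early in a single construct. A minimal sketch of the pattern on its own:
// Bind on Some(..) or return early, without nesting the happy path in a match arm.
fn first_even(values: &[i32]) -> Result<i32, String> {
    let Some(&n) = values.iter().find(|v| *v % 2 == 0) else {
        return Err("no even value".into());
    };
    Ok(n)
}
fn main() {
    assert_eq!(first_even(&[1, 3, 4]), Ok(4));
    assert!(first_even(&[1, 3]).is_err());
}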
@@ -180,24 +195,24 @@ async fn post_delete_emergency_access(emer_id: &str, headers: Headers, conn: DbC
// region invite
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct EmergencyAccessInviteData {
Email: String,
Type: NumberOrString,
WaitTimeDays: i32,
email: String,
r#type: NumberOrString,
wait_time_days: i32,
}
#[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn send_invite(data: Json<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase();
let wait_time_days = data.WaitTimeDays;
let data: EmergencyAccessInviteData = data.into_inner();
let email = data.email.to_lowercase();
let wait_time_days = data.wait_time_days;
let emergency_access_status = EmergencyAccessStatus::Invited as i32;
let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
let new_type = match EmergencyAccessType::from_str(&data.r#type.into_string()) {
Some(new_type) => new_type as i32,
None => err!("Invalid emergency access type."),
};
@@ -209,7 +224,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
let (grantee_user, new_user) = match User::find_by_mail(&email, &mut conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", &email))
@@ -226,9 +241,10 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
let mut user = User::new(email.clone());
user.save(&mut conn).await?;
user
(user, true)
}
Some(user) => user,
Some(user) if user.password_hash.is_empty() => (user, true),
Some(user) => (user, false),
};
if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
@@ -250,51 +266,40 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&new_emergency_access.email.expect("Grantee email does not exist"),
&grantee_user.uuid,
&new_emergency_access.uuid,
grantee_user.uuid,
new_emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
// Automatically mark user as accepted if no email invites
match User::find_by_mail(&email, &mut conn).await {
Some(user) => match accept_invite_process(&user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
},
None => err!("Grantee user not found."),
}
} else if !new_user {
// If mail is not enabled, immediately accept the invitation for existing users
new_emergency_access.accept_invite(&grantee_user.uuid, &email, &mut conn).await?;
}
Ok(())
}
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn resend_invite(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.grantor_uuid != headers.user.uuid {
err!("Emergency access not valid.");
}
if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
err!("The grantee user is already accepted or confirmed to the organization");
}
let email = match emergency_access.email.clone() {
Some(email) => email,
None => err!("Email not valid."),
let Some(email) = emergency_access.email.clone() else {
err!("Email not valid.")
};
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_mail(&email, &mut conn).await else {
err!("Grantee user not found.")
};
let grantor_user = headers.user;
@@ -302,40 +307,40 @@ async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> Emp
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite(
&email,
&grantor_user.uuid,
&emergency_access.uuid,
grantor_user.uuid,
emergency_access.uuid,
&grantor_user.name,
&grantor_user.email,
)
.await?;
} else {
if Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
// Automatically mark user as accepted if no email invites
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
} else if !grantee_user.password_hash.is_empty() {
// Accept the invitation for an existing user
emergency_access.accept_invite(&grantee_user.uuid, &email, &mut conn).await?;
} else if CONFIG.invitations_allowed() && Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
Ok(())
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct AcceptData {
Token: String,
token: String,
}
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn accept_invite(
emer_id: EmergencyAccessId,
data: Json<AcceptData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_enabled()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
let data: AcceptData = data.into_inner();
let token = &data.token;
let claims = decode_emergency_access_invite(token)?;
// This can happen if the user who received the invite used a different email to sign up.
@@ -352,25 +357,24 @@ async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Hea
None => err!("Invited user not found"),
};
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
// We need to search for the uuid in combination with the email, since we do not yet store the uuid of the grantee in the database.
// The uuid of the grantee gets stored once accepted.
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_email(&emer_id, &headers.user.email, &mut conn).await
else {
err!("Emergency access not valid.")
};
// get grantor user to send Accepted email
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if emer_id == claims.emer_id
&& grantor_user.name == claims.grantor_name
&& grantor_user.email == claims.grantor_email
{
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
emergency_access.accept_invite(&grantee_user.uuid, &grantee_user.email, &mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email).await?;
@@ -382,48 +386,29 @@ async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Hea
}
}
async fn accept_invite_process(
grantee_uuid: &str,
emergency_access: &mut EmergencyAccess,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
if emergency_access.email.is_none() || emergency_access.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(String::from(grantee_uuid));
emergency_access.email = None;
emergency_access.save(conn).await
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct ConfirmData {
Key: String,
key: String,
}
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
async fn confirm_emergency_access(
emer_id: &str,
data: JsonUpcase<ConfirmData>,
emer_id: EmergencyAccessId,
data: Json<ConfirmData>,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_enabled()?;
let confirming_user = headers.user;
let data: ConfirmData = data.into_inner().data;
let key = data.Key;
let data: ConfirmData = data.into_inner();
let key = data.key;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &confirming_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::Accepted as i32
@@ -432,15 +417,13 @@ async fn confirm_emergency_access(
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&confirming_user.uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
@@ -463,24 +446,22 @@ async fn confirm_emergency_access(
// region access emergency access
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn initiate_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &initiating_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|| emergency_access.grantee_uuid != Some(initiating_user.uuid)
{
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32 {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let now = Utc::now().naive_utc();
@@ -503,29 +484,26 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
}
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn approve_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|| emergency_access.grantor_uuid != headers.user.uuid
{
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32 {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&headers.user.uuid, &mut conn).await else {
err!("Grantor user not found.")
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
@@ -541,37 +519,31 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
}
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn reject_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(mut emergency_access) =
EmergencyAccess::find_by_uuid_and_grantor_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|| emergency_access.grantor_uuid != headers.user.uuid
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32
{
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&headers.user.uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
};
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
let grantee_user = match User::find_by_uuid(grantee_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantee user not found."),
let Some(grantee_user) = User::find_by_uuid(grantee_uuid, &mut conn).await else {
err!("Grantee user not found.")
};
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
emergency_access.save(&mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name).await?;
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &headers.user.name).await?;
}
Ok(Json(emergency_access.to_json()))
} else {
@@ -584,12 +556,13 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
// region action
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn view_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &headers.user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &headers.user.uuid, EmergencyAccessType::View) {
@@ -614,89 +587,89 @@ async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn
}
Ok(Json(json!({
"Ciphers": ciphers_json,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessView",
"ciphers": ciphers_json,
"keyEncrypted": &emergency_access.key_encrypted,
"object": "emergencyAccessView",
})))
}
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn takeover_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_enabled()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let result = json!({
"Kdf": grantor_user.client_kdf_type,
"KdfIterations": grantor_user.client_kdf_iter,
"KdfMemory": grantor_user.client_kdf_memory,
"KdfParallelism": grantor_user.client_kdf_parallelism,
"KeyEncrypted": &emergency_access.key_encrypted,
"Object": "emergencyAccessTakeover",
"kdf": grantor_user.client_kdf_type,
"kdfIterations": grantor_user.client_kdf_iter,
"kdfMemory": grantor_user.client_kdf_memory,
"kdfParallelism": grantor_user.client_kdf_parallelism,
"keyEncrypted": &emergency_access.key_encrypted,
"object": "emergencyAccessTakeover",
});
Ok(Json(result))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct EmergencyAccessPasswordData {
NewMasterPasswordHash: String,
Key: String,
new_master_password_hash: String,
key: String,
}
#[post("/emergency-access/<emer_id>/password", data = "<data>")]
async fn password_emergency_access(
emer_id: &str,
data: JsonUpcase<EmergencyAccessPasswordData>,
emer_id: EmergencyAccessId,
data: Json<EmergencyAccessPasswordData>,
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_enabled()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
let data: EmergencyAccessPasswordData = data.into_inner();
let new_master_password_hash = &data.new_master_password_hash;
//let key = &data.Key;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(mut grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
// change grantor_user password
grantor_user.set_password(new_master_password_hash, Some(data.Key), true, None);
grantor_user.set_password(new_master_password_hash, Some(data.key), true, None);
grantor_user.save(&mut conn).await?;
// Disable TwoFactor providers since they will otherwise block logins
TwoFactor::delete_all_by_user(&grantor_user.uuid, &mut conn).await?;
// Remove grantor from all organizations unless Owner
for user_org in UserOrganization::find_any_state_by_user(&grantor_user.uuid, &mut conn).await {
if user_org.atype != UserOrgType::Owner as i32 {
user_org.delete(&mut conn).await?;
for member in Membership::find_any_state_by_user(&grantor_user.uuid, &mut conn).await {
if member.atype != MembershipType::Owner as i32 {
member.delete(&mut conn).await?;
}
}
Ok(())
@@ -705,39 +678,39 @@ async fn password_emergency_access(
// endregion
#[get("/emergency-access/<emer_id>/policies")]
async fn policies_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn policies_emergency_access(emer_id: EmergencyAccessId, headers: Headers, mut conn: DbConn) -> JsonResult {
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
None => err!("Emergency access not valid."),
let Some(emergency_access) =
EmergencyAccess::find_by_uuid_and_grantee_uuid(&emer_id, &requesting_user.uuid, &mut conn).await
else {
err!("Emergency access not valid.")
};
if !is_valid_request(&emergency_access, &requesting_user.uuid, EmergencyAccessType::Takeover) {
err!("Emergency access not valid.")
}
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await {
Some(user) => user,
None => err!("Grantor user not found."),
let Some(grantor_user) = User::find_by_uuid(&emergency_access.grantor_uuid, &mut conn).await else {
err!("Grantor user not found.")
};
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &mut conn);
let policies_json: Vec<Value> = policies.await.iter().map(OrgPolicy::to_json).collect();
Ok(Json(json!({
"Data": policies_json,
"Object": "list",
"ContinuationToken": null
"data": policies_json,
"object": "list",
"continuationToken": null
})))
}
fn is_valid_request(
emergency_access: &EmergencyAccess,
requesting_user_uuid: &str,
requesting_user_id: &UserId,
requested_access_type: EmergencyAccessType,
) -> bool {
emergency_access.grantee_uuid.is_some()
&& emergency_access.grantee_uuid.as_ref().unwrap() == requesting_user_uuid
&& emergency_access.grantee_uuid.as_ref().unwrap() == requesting_user_id
&& emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
&& emergency_access.atype == requested_access_type as i32
}
@@ -766,7 +739,7 @@ pub async fn emergency_request_timeout_job(pool: DbPool) {
for mut emer in emergency_access_list {
// The find_all_recoveries_initiated already checks if the recovery_initiated_at is not null (None)
let recovery_allowed_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days));
emer.recovery_initiated_at.unwrap() + TimeDelta::try_days(i64::from(emer.wait_time_days)).unwrap();
if recovery_allowed_at.le(&now) {
// Only update the access status
// Updating the whole record could cause issues when the emergency_notification_reminder_job is also active
@@ -822,10 +795,10 @@ pub async fn emergency_notification_reminder_job(pool: DbPool) {
// The find_all_recoveries_initiated already checks if the recovery_initiated_at is not null (None)
// Calculate the day before the recovery will become active
let final_recovery_reminder_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days - 1));
emer.recovery_initiated_at.unwrap() + TimeDelta::try_days(i64::from(emer.wait_time_days - 1)).unwrap();
// Calculate when the next reminder is due: a day after the previous notification, or now if none has been sent before
let next_recovery_reminder_at = if let Some(last_notification_at) = emer.last_notification_at {
last_notification_at + Duration::days(1)
last_notification_at + TimeDelta::try_days(1).unwrap()
} else {
now
};
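The switch from `Duration::days` to `TimeDelta::try_days` matters because the former panics on out-of-range values while the latter returns an Option the jobs can handle. A minimal sketch, assuming chrono 0.4:
use chrono::{TimeDelta, Utc};
fn main() {
    let wait_time_days: i64 = 7;
    // In-range values unwrap cleanly instead of risking a panic in the scheduler.
    let delta = TimeDelta::try_days(wait_time_days).expect("days in range");
    let recovery_allowed_at = Utc::now() + delta;
    println!("recovery allowed at {recovery_allowed_at}");
    // Out-of-range values surface as None rather than aborting the job.
    assert!(TimeDelta::try_days(i64::MAX).is_none());
}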


@@ -5,10 +5,10 @@ use rocket::{form::FromForm, serde::json::Json, Route};
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcaseVec},
api::{EmptyResult, JsonResult},
auth::{AdminHeaders, Headers},
db::{
models::{Cipher, Event, UserOrganization},
models::{Cipher, CipherId, Event, Membership, MembershipId, OrganizationId, UserId},
DbConn, DbPool,
},
util::parse_date,
@@ -22,7 +22,6 @@ pub fn routes() -> Vec<Route> {
}
#[derive(FromForm)]
#[allow(non_snake_case)]
struct EventRange {
start: String,
end: String,
@@ -32,7 +31,16 @@ struct EventRange {
// Upstream: https://github.com/bitwarden/server/blob/9ecf69d9cabce732cf2c57976dd9afa5728578fb/src/Api/Controllers/EventsController.cs#LL84C35-L84C41
#[get("/organizations/<org_id>/events?<data..>")]
async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders, mut conn: DbConn) -> JsonResult {
async fn get_org_events(
org_id: OrganizationId,
data: EventRange,
headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
@@ -45,7 +53,7 @@ async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders,
parse_date(&data.end)
};
Event::find_by_organization_uuid(org_id, &start_date, &end_date, &mut conn)
Event::find_by_organization_uuid(&org_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -53,21 +61,21 @@ async fn get_org_events(org_id: &str, data: EventRange, _headers: AdminHeaders,
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
"data": events_json,
"object": "list",
"continuationToken": get_continuation_token(&events_json),
})))
}
#[get("/ciphers/<cipher_id>/events?<data..>")]
async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn get_cipher_events(cipher_id: CipherId, data: EventRange, headers: Headers, mut conn: DbConn) -> JsonResult {
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
Vec::with_capacity(0)
} else {
let mut events_json = Vec::with_capacity(0);
if UserOrganization::user_has_ge_admin_access_to_cipher(&headers.user.uuid, cipher_id, &mut conn).await {
if Membership::user_has_ge_admin_access_to_cipher(&headers.user.uuid, &cipher_id, &mut conn).await {
let start_date = parse_date(&data.start);
let end_date = if let Some(before_date) = &data.continuation_token {
parse_date(before_date)
@@ -75,7 +83,7 @@ async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers,
parse_date(&data.end)
};
events_json = Event::find_by_cipher_uuid(cipher_id, &start_date, &end_date, &mut conn)
events_json = Event::find_by_cipher_uuid(&cipher_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -85,20 +93,23 @@ async fn get_cipher_events(cipher_id: &str, data: EventRange, headers: Headers,
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
"data": events_json,
"object": "list",
"continuationToken": get_continuation_token(&events_json),
})))
}
#[get("/organizations/<org_id>/users/<user_org_id>/events?<data..>")]
#[get("/organizations/<org_id>/users/<member_id>/events?<data..>")]
async fn get_user_events(
org_id: &str,
user_org_id: &str,
org_id: OrganizationId,
member_id: MembershipId,
data: EventRange,
_headers: AdminHeaders,
headers: AdminHeaders,
mut conn: DbConn,
) -> JsonResult {
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
// Return an empty vec when org events are disabled.
// This prevents client errors
let events_json: Vec<Value> = if !CONFIG.org_events_enabled() {
@@ -111,7 +122,7 @@ async fn get_user_events(
parse_date(&data.end)
};
Event::find_by_org_and_user_org(org_id, user_org_id, &start_date, &end_date, &mut conn)
Event::find_by_org_and_member(&org_id, &member_id, &start_date, &end_date, &mut conn)
.await
.iter()
.map(|e| e.to_json())
@@ -119,13 +130,13 @@ async fn get_user_events(
};
Ok(Json(json!({
"Data": events_json,
"Object": "list",
"ContinuationToken": get_continuation_token(&events_json),
"data": events_json,
"object": "list",
"continuationToken": get_continuation_token(&events_json),
})))
}
fn get_continuation_token(events_json: &Vec<Value>) -> Option<&str> {
fn get_continuation_token(events_json: &[Value]) -> Option<&str> {
// When the length of the vec equals the max page_size, there is probably more data
// When it is less, all events have been loaded
if events_json.len() as i64 == Event::PAGE_SIZE {
@@ -145,33 +156,33 @@ pub fn main_routes() -> Vec<Route> {
routes![post_events_collect,]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EventCollection {
// Mandatory
Type: i32,
Date: String,
r#type: i32,
date: String,
// Optional
CipherId: Option<String>,
OrganizationId: Option<String>,
cipher_id: Option<CipherId>,
organization_id: Option<OrganizationId>,
}
// Upstream:
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Events/Controllers/CollectController.cs
// https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Services/Implementations/EventService.cs
#[post("/collect", format = "application/json", data = "<data>")]
async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn post_events_collect(data: Json<Vec<EventCollection>>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.org_events_enabled() {
return Ok(());
}
for event in data.iter().map(|d| &d.data) {
let event_date = parse_date(&event.Date);
match event.Type {
for event in data.iter() {
let event_date = parse_date(&event.date);
match event.r#type {
1000..=1099 => {
_log_user_event(
event.Type,
event.r#type,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
@@ -181,11 +192,11 @@ async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Head
.await;
}
1600..=1699 => {
if let Some(org_uuid) = &event.OrganizationId {
if let Some(org_id) = &event.organization_id {
_log_event(
event.Type,
org_uuid,
org_uuid,
event.r#type,
org_id,
org_id,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
@@ -196,13 +207,13 @@ async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Head
}
}
_ => {
if let Some(cipher_uuid) = &event.CipherId {
if let Some(cipher_uuid) = &event.cipher_id {
if let Some(cipher) = Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
if let Some(org_uuid) = cipher.organization_uuid {
if let Some(org_id) = cipher.organization_uuid {
_log_event(
event.Type,
event.r#type,
cipher_uuid,
&org_uuid,
&org_id,
&headers.user.uuid,
headers.device.atype,
Some(event_date),
@@ -219,38 +230,38 @@ async fn post_events_collect(data: JsonUpcaseVec<EventCollection>, headers: Head
Ok(())
}
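`post_events_collect` dispatches on the numeric event type, relying on the Bitwarden convention that event codes come in blocks of one hundred (1000s for users, 1100s for ciphers, 1600s for organizations, and so on). A minimal sketch of that range-pattern dispatch:
fn classify(event_type: i32) -> &'static str {
    match event_type {
        1000..=1099 => "user event",
        1100..=1199 => "cipher event",
        1300..=1399 => "collection event",
        1600..=1699 => "organization event",
        _ => "other",
    }
}
fn main() {
    assert_eq!(classify(1605), "organization event");
}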
pub async fn log_user_event(event_type: i32, user_uuid: &str, device_type: i32, ip: &IpAddr, conn: &mut DbConn) {
pub async fn log_user_event(event_type: i32, user_id: &UserId, device_type: i32, ip: &IpAddr, conn: &mut DbConn) {
if !CONFIG.org_events_enabled() {
return;
}
_log_user_event(event_type, user_uuid, device_type, None, ip, conn).await;
_log_user_event(event_type, user_id, device_type, None, ip, conn).await;
}
async fn _log_user_event(
event_type: i32,
user_uuid: &str,
user_id: &UserId,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
conn: &mut DbConn,
) {
let orgs = UserOrganization::get_org_uuid_by_user(user_uuid, conn).await;
let orgs = Membership::get_orgs_by_user(user_id, conn).await;
let mut events: Vec<Event> = Vec::with_capacity(orgs.len() + 1); // We need an event per org and one without an org
// Upstream saves the event also without any org_uuid.
// Upstream also saves the event without any org_id.
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.act_user_uuid = Some(String::from(user_uuid));
event.user_uuid = Some(user_id.clone());
event.act_user_uuid = Some(user_id.clone());
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
// For each org the user is a member of, store these events per org
for org_uuid in orgs {
for org_id in orgs {
let mut event = Event::new(event_type, event_date);
event.user_uuid = Some(String::from(user_uuid));
event.org_uuid = Some(org_uuid);
event.act_user_uuid = Some(String::from(user_uuid));
event.user_uuid = Some(user_id.clone());
event.org_uuid = Some(org_id);
event.act_user_uuid = Some(user_id.clone());
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
events.push(event);
@@ -262,8 +273,8 @@ async fn _log_user_event(
pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: &str,
act_user_uuid: &str,
org_id: &OrganizationId,
act_user_id: &UserId,
device_type: i32,
ip: &IpAddr,
conn: &mut DbConn,
@@ -271,15 +282,15 @@ pub async fn log_event(
if !CONFIG.org_events_enabled() {
return;
}
_log_event(event_type, source_uuid, org_uuid, act_user_uuid, device_type, None, ip, conn).await;
_log_event(event_type, source_uuid, org_id, act_user_id, device_type, None, ip, conn).await;
}
#[allow(clippy::too_many_arguments)]
async fn _log_event(
event_type: i32,
source_uuid: &str,
org_uuid: &str,
act_user_uuid: &str,
org_id: &OrganizationId,
act_user_id: &UserId,
device_type: i32,
event_date: Option<NaiveDateTime>,
ip: &IpAddr,
@@ -289,33 +300,33 @@ async fn _log_event(
let mut event = Event::new(event_type, event_date);
match event_type {
// 1000..=1099 Are user events, they need to be logged via log_user_event()
// Collection Events
// Cipher Events
1100..=1199 => {
event.cipher_uuid = Some(String::from(source_uuid));
event.cipher_uuid = Some(source_uuid.to_string().into());
}
// Collection Events
1300..=1399 => {
event.collection_uuid = Some(String::from(source_uuid));
event.collection_uuid = Some(source_uuid.to_string().into());
}
// Group Events
1400..=1499 => {
event.group_uuid = Some(String::from(source_uuid));
event.group_uuid = Some(source_uuid.to_string().into());
}
// Org User Events
1500..=1599 => {
event.org_user_uuid = Some(String::from(source_uuid));
event.org_user_uuid = Some(source_uuid.to_string().into());
}
// 1600..=1699 Are organizational events, and they do not need the source_uuid
// Policy Events
1700..=1799 => {
event.policy_uuid = Some(String::from(source_uuid));
event.policy_uuid = Some(source_uuid.to_string().into());
}
// Ignore others
_ => {}
}
event.org_uuid = Some(String::from(org_uuid));
event.act_user_uuid = Some(String::from(act_user_uuid));
event.org_uuid = Some(org_id.clone());
event.act_user_uuid = Some(act_user_id.clone());
event.device_type = Some(device_type);
event.ip_address = Some(ip.to_string());
event.save(conn).await.unwrap_or(());


@@ -2,7 +2,7 @@ use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
api::{EmptyResult, JsonResult, Notify, UpdateType},
auth::Headers,
db::{models::*, DbConn},
};
@@ -17,37 +17,32 @@ async fn get_folders(headers: Headers, mut conn: DbConn) -> Json<Value> {
let folders_json: Vec<Value> = folders.iter().map(Folder::to_json).collect();
Json(json!({
"Data": folders_json,
"Object": "list",
"ContinuationToken": null,
"data": folders_json,
"object": "list",
"continuationToken": null,
}))
}
#[get("/folders/<uuid>")]
async fn get_folder(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
#[get("/folders/<folder_id>")]
async fn get_folder(folder_id: FolderId, headers: Headers, mut conn: DbConn) -> JsonResult {
match Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await {
Some(folder) => Ok(Json(folder.to_json())),
_ => err!("Invalid folder", "Folder does not exist or belongs to another user"),
}
Ok(Json(folder.to_json()))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct FolderData {
pub Name: String,
pub name: String,
pub id: Option<FolderId>,
}
#[post("/folders", data = "<data>")]
async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
let data: FolderData = data.into_inner().data;
async fn post_folders(data: Json<FolderData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
let data: FolderData = data.into_inner();
let mut folder = Folder::new(headers.user.uuid, data.Name);
let mut folder = Folder::new(headers.user.uuid, data.name);
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::SyncFolderCreate, &folder, &headers.device.uuid, &mut conn).await;
@@ -55,37 +50,32 @@ async fn post_folders(data: JsonUpcase<FolderData>, headers: Headers, mut conn:
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>", data = "<data>")]
#[post("/folders/<folder_id>", data = "<data>")]
async fn post_folder(
uuid: &str,
data: JsonUpcase<FolderData>,
folder_id: FolderId,
data: Json<FolderData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
put_folder(uuid, data, headers, conn, nt).await
put_folder(folder_id, data, headers, conn, nt).await
}
#[put("/folders/<uuid>", data = "<data>")]
#[put("/folders/<folder_id>", data = "<data>")]
async fn put_folder(
uuid: &str,
data: JsonUpcase<FolderData>,
folder_id: FolderId,
data: Json<FolderData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: FolderData = data.into_inner().data;
let data: FolderData = data.into_inner();
let mut folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
let Some(mut folder) = Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await else {
err!("Invalid folder", "Folder does not exist or belongs to another user")
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
}
folder.name = data.Name;
folder.name = data.name;
folder.save(&mut conn).await?;
nt.send_folder_update(UpdateType::SyncFolderUpdate, &folder, &headers.device.uuid, &mut conn).await;
@@ -93,22 +83,17 @@ async fn put_folder(
Ok(Json(folder.to_json()))
}
#[post("/folders/<uuid>/delete")]
async fn delete_folder_post(uuid: &str, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
delete_folder(uuid, headers, conn, nt).await
#[post("/folders/<folder_id>/delete")]
async fn delete_folder_post(folder_id: FolderId, headers: Headers, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
delete_folder(folder_id, headers, conn, nt).await
}
#[delete("/folders/<uuid>")]
async fn delete_folder(uuid: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let folder = match Folder::find_by_uuid(uuid, &mut conn).await {
Some(folder) => folder,
_ => err!("Invalid folder"),
#[delete("/folders/<folder_id>")]
async fn delete_folder(folder_id: FolderId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let Some(folder) = Folder::find_by_uuid_and_user(&folder_id, &headers.user.uuid, &mut conn).await else {
err!("Invalid folder", "Folder does not exist or belongs to another user")
};
if folder.user_uuid != headers.user.uuid {
err!("Folder belongs to another user")
}
// Delete the actual folder entry
folder.delete(&mut conn).await?;
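The same pattern repeats through this file: one find_by_uuid_and_user query replaces a fetch followed by a separate ownership check, and both failure modes collapse into a single error so folder existence is not leaked to other users. As a standalone sketch with simplified in-memory types:

struct Folder {
    uuid: String,
    user_uuid: String,
}

// Combined lookup: only yields a folder that exists AND belongs to the user.
fn find_by_uuid_and_user<'a>(all: &'a [Folder], id: &str, user: &str) -> Option<&'a Folder> {
    all.iter().find(|f| f.uuid == id && f.user_uuid == user)
}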


@@ -12,12 +12,13 @@ pub use accounts::purge_auth_requests;
pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncType};
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
use reqwest::Method;
pub use sends::purge_sends;
pub fn routes() -> Vec<Route> {
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
let mut hibp_routes = routes![hibp_breach];
let mut meta_routes = routes![alive, now, version, config];
let mut meta_routes = routes![alive, now, version, config, get_api_webauthn];
let mut routes = Vec::new();
routes.append(&mut accounts::routes());
@@ -49,19 +50,20 @@ pub fn events_routes() -> Vec<Route> {
use rocket::{serde::json::Json, serde::json::Value, Catcher, Route};
use crate::{
api::{JsonResult, JsonUpcase, Notify, UpdateType},
api::{JsonResult, Notify, UpdateType},
auth::Headers,
db::DbConn,
error::Error,
util::{get_reqwest_client, parse_experimental_client_feature_flags},
http_client::make_http_request,
util::parse_experimental_client_feature_flags,
};
#[derive(Serialize, Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GlobalDomain {
Type: i32,
Domains: Vec<String>,
Excluded: bool,
r#type: i32,
domains: Vec<String>,
excluded: bool,
}
const GLOBAL_DOMAINS: &str = include_str!("../../static/global_domains.json");
@@ -81,38 +83,38 @@ fn _get_eq_domains(headers: Headers, no_excluded: bool) -> Json<Value> {
let mut globals: Vec<GlobalDomain> = from_str(GLOBAL_DOMAINS).unwrap();
for global in &mut globals {
global.Excluded = excluded_globals.contains(&global.Type);
global.excluded = excluded_globals.contains(&global.r#type);
}
if no_excluded {
globals.retain(|g| !g.Excluded);
globals.retain(|g| !g.excluded);
}
Json(json!({
"EquivalentDomains": equivalent_domains,
"GlobalEquivalentDomains": globals,
"Object": "domains",
"equivalentDomains": equivalent_domains,
"globalEquivalentDomains": globals,
"object": "domains",
}))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EquivDomainData {
ExcludedGlobalEquivalentDomains: Option<Vec<i32>>,
EquivalentDomains: Option<Vec<Vec<String>>>,
excluded_global_equivalent_domains: Option<Vec<i32>>,
equivalent_domains: Option<Vec<Vec<String>>>,
}
#[post("/settings/domains", data = "<data>")]
async fn post_eq_domains(
data: JsonUpcase<EquivDomainData>,
data: Json<EquivDomainData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let data: EquivDomainData = data.into_inner().data;
let data: EquivDomainData = data.into_inner();
let excluded_globals = data.ExcludedGlobalEquivalentDomains.unwrap_or_default();
let equivalent_domains = data.EquivalentDomains.unwrap_or_default();
let excluded_globals = data.excluded_global_equivalent_domains.unwrap_or_default();
let equivalent_domains = data.equivalent_domains.unwrap_or_default();
let mut user = headers.user;
use serde_json::to_string;
@@ -128,25 +130,19 @@ async fn post_eq_domains(
}
#[put("/settings/domains", data = "<data>")]
async fn put_eq_domains(
data: JsonUpcase<EquivDomainData>,
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
async fn put_eq_domains(data: Json<EquivDomainData>, headers: Headers, conn: DbConn, nt: Notify<'_>) -> JsonResult {
post_eq_domains(data, headers, conn, nt).await
}
#[get("/hibp/breach?<username>")]
async fn hibp_breach(username: &str) -> JsonResult {
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
async fn hibp_breach(username: &str, _headers: Headers) -> JsonResult {
let username: String = url::form_urlencoded::byte_serialize(username.as_bytes()).collect();
if let Some(api_key) = crate::CONFIG.hibp_api_key() {
let hibp_client = get_reqwest_client();
let url = format!(
"https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
);
let res = hibp_client.get(&url).header("hibp-api-key", api_key).send().await?;
let res = make_http_request(Method::GET, &url)?.header("hibp-api-key", api_key).send().await?;
// If we get a 404, return a 404; it means there are no breached accounts
if res.status() == 404 {
@@ -157,15 +153,15 @@ async fn hibp_breach(username: &str) -> JsonResult {
Ok(Json(value))
} else {
Ok(Json(json!([{
"Name": "HaveIBeenPwned",
"Title": "Manual HIBP Check",
"Domain": "haveibeenpwned.com",
"BreachDate": "2019-08-18T00:00:00Z",
"AddedDate": "2019-08-18T00:00:00Z",
"Description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{username}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{username}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>"),
"LogoPath": "vw_static/hibp.png",
"PwnCount": 0,
"DataClasses": [
"name": "HaveIBeenPwned",
"title": "Manual HIBP Check",
"domain": "haveibeenpwned.com",
"breachDate": "2019-08-18T00:00:00Z",
"addedDate": "2019-08-18T00:00:00Z",
"description": format!("Go to: <a href=\"https://haveibeenpwned.com/account/{username}\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/account/{username}</a> for a manual check.<br/><br/>HaveIBeenPwned API key not set!<br/>Go to <a href=\"https://haveibeenpwned.com/API/Key\" target=\"_blank\" rel=\"noreferrer\">https://haveibeenpwned.com/API/Key</a> to purchase an API key from HaveIBeenPwned.<br/><br/>"),
"logoPath": "vw_static/hibp.png",
"pwnCount": 0,
"dataClasses": [
"Error - No API key set!"
]
}])))
@@ -188,22 +184,41 @@ fn version() -> Json<&'static str> {
Json(crate::VERSION.unwrap_or_default())
}
#[get("/webauthn")]
fn get_api_webauthn(_headers: Headers) -> Json<Value> {
// Prevent a 404 error, which also causes key-rotation issues
// It looks like this is used when logging in with passkeys is enabled, which Vaultwarden does not (yet) support
// An empty list/data also works fine
Json(json!({
"object": "list",
"data": [],
"continuationToken": null
}))
}
#[get("/config")]
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
let feature_states = parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
let mut feature_states =
parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
// Force the new key rotation feature
feature_states.insert("key-rotation-improvements".to_string(), true);
feature_states.insert("flexible-collections-v-1".to_string(), false);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
// This means they expect a version that closely matches the Bitwarden server version
// We should make sure that we keep this updated when we support the new server features
// Version history:
// - Individual cipher key encryption: 2023.9.1
"version": "2023.9.1",
// - Individual cipher key encryption: 2024.2.0
"version": "2025.1.0",
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden",
"version": crate::VERSION
"url": "https://github.com/dani-garcia/vaultwarden"
},
"settings": {
"disableUserRegistration": !crate::CONFIG.signups_allowed() && crate::CONFIG.signups_domains_whitelist().is_empty(),
},
"environment": {
"vault": domain,

File diff suppressed because it is too large


@@ -1,13 +1,14 @@
use chrono::Utc;
use rocket::{
request::{self, FromRequest, Outcome},
request::{FromRequest, Outcome},
serde::json::Json,
Request, Route,
};
use std::collections::HashSet;
use crate::{
api::{EmptyResult, JsonUpcase},
api::EmptyResult,
auth,
db::{models::*, DbConn},
mail, CONFIG,
@@ -18,103 +19,99 @@ pub fn routes() -> Vec<Route> {
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct OrgImportGroupData {
Name: String,
ExternalId: String,
MemberExternalIds: Vec<String>,
name: String,
external_id: String,
member_external_ids: Vec<String>,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct OrgImportUserData {
Email: String,
ExternalId: String,
Deleted: bool,
email: String,
external_id: String,
deleted: bool,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct OrgImportData {
Groups: Vec<OrgImportGroupData>,
Members: Vec<OrgImportUserData>,
OverwriteExisting: bool,
// LargeImport: bool, // For now this will not be used, upstream uses this to prevent syncs of more then 2000 users or groups without the flag set.
groups: Vec<OrgImportGroupData>,
members: Vec<OrgImportUserData>,
overwrite_existing: bool,
// largeImport: bool, // For now this is not used; upstream uses it to prevent syncs of more than 2000 users or groups without the flag set.
}
#[post("/public/organization/import", data = "<data>")]
async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut conn: DbConn) -> EmptyResult {
async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, mut conn: DbConn) -> EmptyResult {
// Most of the logic for this function can be found here
// https://github.com/bitwarden/server/blob/fd892b2ff4547648a276734fb2b14a8abae2c6f5/src/Core/Services/Implementations/OrganizationService.cs#L1797
let org_id = token.0;
let data = data.into_inner().data;
let data = data.into_inner();
for user_data in &data.Members {
if user_data.Deleted {
for user_data in &data.members {
let mut user_created: bool = false;
if user_data.deleted {
// If user is marked for deletion and it exists, revoke it
if let Some(mut user_org) =
UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &mut conn).await
{
if let Some(mut member) = Membership::find_by_email_and_org(&user_data.email, &org_id, &mut conn).await {
// Only revoke a user if it is not the last confirmed owner
let revoked = if user_org.atype == UserOrgType::Owner
&& user_org.status == UserOrgStatus::Confirmed as i32
let revoked = if member.atype == MembershipType::Owner
&& member.status == MembershipStatus::Confirmed as i32
{
if UserOrganization::count_confirmed_by_org_and_type(&org_id, UserOrgType::Owner, &mut conn).await
<= 1
if Membership::count_confirmed_by_org_and_type(&org_id, MembershipType::Owner, &mut conn).await <= 1
{
warn!("Can't revoke the last owner");
false
} else {
user_org.revoke()
member.revoke()
}
} else {
user_org.revoke()
member.revoke()
};
let ext_modified = user_org.set_external_id(Some(user_data.ExternalId.clone()));
let ext_modified = member.set_external_id(Some(user_data.external_id.clone()));
if revoked || ext_modified {
user_org.save(&mut conn).await?;
member.save(&mut conn).await?;
}
}
// If user is part of the organization, restore it
} else if let Some(mut user_org) =
UserOrganization::find_by_email_and_org(&user_data.Email, &org_id, &mut conn).await
{
let restored = user_org.restore();
let ext_modified = user_org.set_external_id(Some(user_data.ExternalId.clone()));
} else if let Some(mut member) = Membership::find_by_email_and_org(&user_data.email, &org_id, &mut conn).await {
let restored = member.restore();
let ext_modified = member.set_external_id(Some(user_data.external_id.clone()));
if restored || ext_modified {
user_org.save(&mut conn).await?;
member.save(&mut conn).await?;
}
} else {
// If user is not part of the organization
let user = match User::find_by_mail(&user_data.Email, &mut conn).await {
let user = match User::find_by_mail(&user_data.email, &mut conn).await {
Some(user) => user, // exists in vaultwarden
None => {
// User does not exist yet
let mut new_user = User::new(user_data.Email.clone());
let mut new_user = User::new(user_data.email.clone());
new_user.save(&mut conn).await?;
if !CONFIG.mail_enabled() {
let invitation = Invitation::new(&new_user.email);
invitation.save(&mut conn).await?;
Invitation::new(&new_user.email).save(&mut conn).await?;
}
user_created = true;
new_user
}
};
let user_org_status = if CONFIG.mail_enabled() || user.password_hash.is_empty() {
UserOrgStatus::Invited as i32
let member_status = if CONFIG.mail_enabled() || user.password_hash.is_empty() {
MembershipStatus::Invited as i32
} else {
UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
MembershipStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
};
let mut new_org_user = UserOrganization::new(user.uuid.clone(), org_id.clone());
new_org_user.set_external_id(Some(user_data.ExternalId.clone()));
new_org_user.access_all = false;
new_org_user.atype = UserOrgType::User as i32;
new_org_user.status = user_org_status;
let mut new_member = Membership::new(user.uuid.clone(), org_id.clone());
new_member.set_external_id(Some(user_data.external_id.clone()));
new_member.access_all = false;
new_member.atype = MembershipType::User as i32;
new_member.status = member_status;
new_org_user.save(&mut conn).await?;
new_member.save(&mut conn).await?;
if CONFIG.mail_enabled() {
let (org_name, org_email) = match Organization::find_by_uuid(&org_id, &mut conn).await {
@@ -122,26 +119,34 @@ async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut co
None => err!("Error looking up organization"),
};
mail::send_invite(
&user_data.Email,
&user.uuid,
Some(org_id.clone()),
Some(new_org_user.uuid),
&org_name,
Some(org_email),
)
.await?;
if let Err(e) =
mail::send_invite(&user, org_id.clone(), new_member.uuid.clone(), &org_name, Some(org_email)).await
{
// Upon error, delete the user, invite, and org member records as needed
if user_created {
user.delete(&mut conn).await?;
} else {
new_member.delete(&mut conn).await?;
}
err!(format!("Error sending invite: {e:?} "));
}
}
}
}
if CONFIG.org_groups_enabled() {
for group_data in &data.Groups {
let group_uuid = match Group::find_by_external_id(&group_data.ExternalId, &mut conn).await {
for group_data in &data.groups {
let group_uuid = match Group::find_by_external_id_and_org(&group_data.external_id, &org_id, &mut conn).await
{
Some(group) => group.uuid,
None => {
let mut group =
Group::new(org_id.clone(), group_data.Name.clone(), false, Some(group_data.ExternalId.clone()));
let mut group = Group::new(
org_id.clone(),
group_data.name.clone(),
false,
Some(group_data.external_id.clone()),
);
group.save(&mut conn).await?;
group.uuid
}
@@ -149,10 +154,9 @@ async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut co
GroupUser::delete_all_by_group(&group_uuid, &mut conn).await?;
for ext_id in &group_data.MemberExternalIds {
if let Some(user_org) = UserOrganization::find_by_external_id_and_org(ext_id, &org_id, &mut conn).await
{
let mut group_user = GroupUser::new(group_uuid.clone(), user_org.uuid.clone());
for ext_id in &group_data.member_external_ids {
if let Some(member) = Membership::find_by_external_id_and_org(ext_id, &org_id, &mut conn).await {
let mut group_user = GroupUser::new(group_uuid.clone(), member.uuid.clone());
group_user.save(&mut conn).await?;
}
}
@@ -162,23 +166,22 @@ async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut co
}
// If this flag is enabled, any user that isn't provided in the Users list will be removed (by default they will be kept unless they have Deleted == true)
if data.OverwriteExisting {
if data.overwrite_existing {
// Generate a HashSet to quickly verify if a member is listed or not.
let sync_members: HashSet<String> = data.Members.into_iter().map(|m| m.ExternalId).collect();
for user_org in UserOrganization::find_by_org(&org_id, &mut conn).await {
if let Some(ref user_external_id) = user_org.external_id {
let sync_members: HashSet<String> = data.members.into_iter().map(|m| m.external_id).collect();
for member in Membership::find_by_org(&org_id, &mut conn).await {
if let Some(ref user_external_id) = member.external_id {
if !sync_members.contains(user_external_id) {
if user_org.atype == UserOrgType::Owner && user_org.status == UserOrgStatus::Confirmed as i32 {
if member.atype == MembershipType::Owner && member.status == MembershipStatus::Confirmed as i32 {
// Removing owner, check that there is at least one other confirmed owner
if UserOrganization::count_confirmed_by_org_and_type(&org_id, UserOrgType::Owner, &mut conn)
.await
if Membership::count_confirmed_by_org_and_type(&org_id, MembershipType::Owner, &mut conn).await
<= 1
{
warn!("Can't delete the last owner");
continue;
}
}
user_org.delete(&mut conn).await?;
member.delete(&mut conn).await?;
}
}
}
@@ -187,13 +190,13 @@ async fn ldap_import(data: JsonUpcase<OrgImportData>, token: PublicToken, mut co
Ok(())
}
pub struct PublicToken(String);
pub struct PublicToken(OrganizationId);
#[rocket::async_trait]
impl<'r> FromRequest<'r> for PublicToken {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
// Get access_token
let access_token: &str = match headers.get_one("Authorization") {
@@ -204,24 +207,19 @@ impl<'r> FromRequest<'r> for PublicToken {
None => err_handler!("No access token provided"),
};
// Check JWT token is valid and get device and user from it
let claims = match auth::decode_api_org(access_token) {
Ok(claims) => claims,
Err(_) => err_handler!("Invalid claim"),
let Ok(claims) = auth::decode_api_org(access_token) else {
err_handler!("Invalid claim")
};
// Check if time is between claims.nbf and claims.exp
let time_now = Utc::now().naive_utc().timestamp();
let time_now = Utc::now().timestamp();
if time_now < claims.nbf {
err_handler!("Token issued in the future");
}
if time_now > claims.exp {
err_handler!("Token expired");
}
// Check if claims.iss is host|claims.scope[0]
let host = match auth::Host::from_request(request).await {
Outcome::Success(host) => host,
_ => err_handler!("Error getting Host"),
};
let complete_host = format!("{}|{}", host.host, claims.scope[0]);
// Check if claims.iss is domain|claims.scope[0]
let complete_host = format!("{}|{}", CONFIG.domain_origin(), claims.scope[0]);
if complete_host != claims.iss {
err_handler!("Token not issued by this server");
}
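A sketch of the issuer check above; the scope value is hypothetical, and the real code builds the left-hand side from CONFIG.domain_origin():

fn issuer_matches(domain_origin: &str, scope0: &str, iss: &str) -> bool {
    format!("{domain_origin}|{scope0}") == iss
}

fn main() {
    // Hypothetical values, for illustration only.
    assert!(issuer_matches(
        "https://vault.example.com",
        "api.organization",
        "https://vault.example.com|api.organization"
    ));
}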
@@ -232,13 +230,12 @@ impl<'r> FromRequest<'r> for PublicToken {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let org_uuid = match claims.client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err_handler!("Malformed client_id"),
let Some(org_id) = claims.client_id.strip_prefix("organization.") else {
err_handler!("Malformed client_id")
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, &conn).await {
Some(org_api_key) => org_api_key,
None => err_handler!("Invalid client_id"),
let org_id: OrganizationId = org_id.to_string().into();
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(&org_id, &conn).await else {
err_handler!("Invalid client_id")
};
if org_api_key.org_uuid != claims.client_sub {
err_handler!("Token not issued for this org");


@@ -1,6 +1,6 @@
use std::path::Path;
use chrono::{DateTime, Duration, Utc};
use chrono::{DateTime, TimeDelta, Utc};
use num_traits::ToPrimitive;
use rocket::form::Form;
use rocket::fs::NamedFile;
@@ -9,10 +9,10 @@ use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
api::{ApiResult, EmptyResult, JsonResult, Notify, UpdateType},
auth::{ClientIp, Headers, Host},
db::{models::*, DbConn, DbPool},
util::{NumberOrString, SafeString},
util::NumberOrString,
CONFIG,
};
@@ -48,23 +48,26 @@ pub async fn purge_sends(pool: DbPool) {
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct SendData {
Type: i32,
Key: String,
Password: Option<String>,
MaxAccessCount: Option<NumberOrString>,
ExpirationDate: Option<DateTime<Utc>>,
DeletionDate: DateTime<Utc>,
Disabled: bool,
HideEmail: Option<bool>,
#[serde(rename_all = "camelCase")]
pub struct SendData {
r#type: i32,
key: String,
password: Option<String>,
max_access_count: Option<NumberOrString>,
expiration_date: Option<DateTime<Utc>>,
deletion_date: DateTime<Utc>,
disabled: bool,
hide_email: Option<bool>,
// Data field
Name: String,
Notes: Option<String>,
Text: Option<Value>,
File: Option<Value>,
FileLength: Option<NumberOrString>,
name: String,
notes: Option<String>,
text: Option<Value>,
file: Option<Value>,
file_length: Option<NumberOrString>,
// Used for key rotations
pub id: Option<SendId>,
}
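max_access_count and file_length are NumberOrString values because clients may send either a JSON number or a string. A plausible minimal sketch of that shape (the real util::NumberOrString also provides the into_i32/into_i64 conversions used below):

use serde::Deserialize;

// Accepts either 5 or "5" in the incoming JSON.
#[derive(Deserialize)]
#[serde(untagged)]
enum NumberOrStringSketch {
    Number(i64),
    String(String),
}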
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -76,9 +79,9 @@ struct SendData {
/// There is also a Vaultwarden-specific `sends_allowed` config setting that
/// controls this policy globally.
async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let user_id = &headers.user.uuid;
if !CONFIG.sends_allowed()
|| OrgPolicy::is_applicable_to_user(user_uuid, OrgPolicyType::DisableSend, None, conn).await
|| OrgPolicy::is_applicable_to_user(user_id, OrgPolicyType::DisableSend, None, conn).await
{
err!("Due to an Enterprise Policy, you are only able to delete an existing Send.")
}
@@ -92,9 +95,9 @@ async fn enforce_disable_send_policy(headers: &Headers, conn: &mut DbConn) -> Em
///
/// Ref: https://bitwarden.com/help/article/policies/#send-options
async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, conn: &mut DbConn) -> EmptyResult {
let user_uuid = &headers.user.uuid;
let hide_email = data.HideEmail.unwrap_or(false);
if hide_email && OrgPolicy::is_hide_email_disabled(user_uuid, conn).await {
let user_id = &headers.user.uuid;
let hide_email = data.hide_email.unwrap_or(false);
if hide_email && OrgPolicy::is_hide_email_disabled(user_id, conn).await {
err!(
"Due to an Enterprise Policy, you are not allowed to hide your email address \
from recipients when creating or editing a Send."
@@ -103,41 +106,41 @@ async fn enforce_disable_hide_email_policy(data: &SendData, headers: &Headers, c
Ok(())
}
fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
let data_val = if data.Type == SendType::Text as i32 {
data.Text
} else if data.Type == SendType::File as i32 {
data.File
fn create_send(data: SendData, user_id: UserId) -> ApiResult<Send> {
let data_val = if data.r#type == SendType::Text as i32 {
data.text
} else if data.r#type == SendType::File as i32 {
data.file
} else {
err!("Invalid Send type")
};
let data_str = if let Some(mut d) = data_val {
d.as_object_mut().and_then(|o| o.remove("Response"));
d.as_object_mut().and_then(|o| o.remove("response"));
serde_json::to_string(&d)?
} else {
err!("Send data not provided");
};
if data.DeletionDate > Utc::now() + Duration::days(31) {
if data.deletion_date > Utc::now() + TimeDelta::try_days(31).unwrap() {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
let mut send = Send::new(data.Type, data.Name, data_str, data.Key, data.DeletionDate.naive_utc());
send.user_uuid = Some(user_uuid);
send.notes = data.Notes;
send.max_access_count = match data.MaxAccessCount {
let mut send = Send::new(data.r#type, data.name, data_str, data.key, data.deletion_date.naive_utc());
send.user_uuid = Some(user_id);
send.notes = data.notes;
send.max_access_count = match data.max_access_count {
Some(m) => Some(m.into_i32()?),
_ => None,
};
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.disabled = data.Disabled;
send.hide_email = data.HideEmail;
send.atype = data.Type;
send.expiration_date = data.expiration_date.map(|d| d.naive_utc());
send.disabled = data.disabled;
send.hide_email = data.hide_email;
send.atype = data.r#type;
send.set_password(data.Password.as_deref());
send.set_password(data.password.as_deref());
Ok(send)
}
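The deletion-date guard above, isolated; note the move from chrono's Duration::days to the fallible TimeDelta::try_days:

use chrono::{DateTime, TimeDelta, Utc};

// A Send may not be scheduled for deletion more than 31 days out.
fn deletion_date_ok(deletion_date: DateTime<Utc>) -> bool {
    deletion_date <= Utc::now() + TimeDelta::try_days(31).expect("31 days is in range")
}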
@@ -148,34 +151,28 @@ async fn get_sends(headers: Headers, mut conn: DbConn) -> Json<Value> {
let sends_json: Vec<Value> = sends.await.iter().map(|s| s.to_json()).collect();
Json(json!({
"Data": sends_json,
"Object": "list",
"ContinuationToken": null
"data": sends_json,
"object": "list",
"continuationToken": null
}))
}
#[get("/sends/<uuid>")]
async fn get_send(uuid: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
let send = match Send::find_by_uuid(uuid, &mut conn).await {
Some(send) => send,
None => err!("Send not found"),
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
#[get("/sends/<send_id>")]
async fn get_send(send_id: SendId, headers: Headers, mut conn: DbConn) -> JsonResult {
match Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await {
Some(send) => Ok(Json(send.to_json())),
None => err!("Send not found", "Invalid send uuid or does not belong to user"),
}
Ok(Json(send.to_json()))
}
#[post("/sends", data = "<data>")]
async fn post_send(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
async fn post_send(data: Json<SendData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
let data: SendData = data.into_inner();
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
if data.Type == SendType::File as i32 {
if data.r#type == SendType::File as i32 {
err!("File sends should use /api/sends/file")
}
@@ -195,7 +192,7 @@ async fn post_send(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbCon
#[derive(FromForm)]
struct UploadData<'f> {
model: Json<crate::util::UpCase<SendData>>,
model: Json<SendData>,
data: TempFile<'f>,
}
@@ -215,7 +212,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
model,
mut data,
} = data.into_inner();
let model = model.into_inner().data;
let model = model.into_inner();
let Some(size) = data.len().to_i64() else {
err!("Invalid send size");
@@ -252,7 +249,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
err!("Send content is not a file");
}
let file_id = crate::crypto::generate_send_id();
let file_id = crate::crypto::generate_send_file_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
@@ -263,9 +260,9 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id));
o.insert(String::from("Size"), Value::Number(size.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
o.insert(String::from("id"), Value::String(file_id));
o.insert(String::from("size"), Value::Number(size.into()));
o.insert(String::from("sizeName"), Value::String(crate::util::get_display_size(size)));
}
send.data = serde_json::to_string(&data_value)?;
@@ -285,18 +282,18 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
// Upstream: https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L190
#[post("/sends/file/v2", data = "<data>")]
async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn post_send_file_v2(data: Json<SendData>, headers: Headers, mut conn: DbConn) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data = data.into_inner().data;
let data = data.into_inner();
if data.Type != SendType::File as i32 {
if data.r#type != SendType::File as i32 {
err!("Send content is not a file");
}
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let file_length = match &data.FileLength {
let file_length = match &data.file_length {
Some(m) => m.into_i64()?,
_ => err!("Invalid send length"),
};
@@ -327,13 +324,13 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
let mut send = create_send(data, headers.user.uuid)?;
let file_id = crate::crypto::generate_send_id();
let file_id = crate::crypto::generate_send_file_id();
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id.clone()));
o.insert(String::from("Size"), Value::Number(file_length.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length)));
o.insert(String::from("id"), Value::String(file_id.clone()));
o.insert(String::from("size"), Value::Number(file_length.into()));
o.insert(String::from("sizeName"), Value::String(crate::util::get_display_size(file_length)));
}
send.data = serde_json::to_string(&data_value)?;
send.save(&mut conn).await?;
@@ -346,11 +343,19 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
})))
}
// https://github.com/bitwarden/server/blob/d0c793c95181dfb1b447eb450f85ba0bfd7ef643/src/Api/Controllers/SendsController.cs#L243
#[post("/sends/<send_uuid>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
#[derive(Deserialize)]
#[allow(non_snake_case)]
pub struct SendFileData {
id: SendFileId,
size: u64,
fileName: String,
}
// https://github.com/bitwarden/server/blob/66f95d1c443490b653e5a15d32977e2f5a3f9e32/src/Api/Tools/Controllers/SendsController.cs#L250
#[post("/sends/<send_id>/file/<file_id>", format = "multipart/form-data", data = "<data>")]
async fn post_send_file_v2_data(
send_uuid: &str,
file_id: &str,
send_id: SendId,
file_id: SendFileId,
data: Form<UploadDataV2<'_>>,
headers: Headers,
mut conn: DbConn,
@@ -360,19 +365,51 @@ async fn post_send_file_v2_data(
let mut data = data.into_inner();
let Some(send) = Send::find_by_uuid(send_uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.")
let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found. Unable to save the file.", "Invalid send uuid or does not belong to user.")
};
let Some(send_user_id) = &send.user_uuid else {
err!("Sends are only supported for users at the moment")
};
if send_user_id != &headers.user.uuid {
err!("Send doesn't belong to user");
if send.atype != SendType::File as i32 {
err!("Send is not a file type send.");
}
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_uuid);
let Ok(send_data) = serde_json::from_str::<SendFileData>(&send.data) else {
err!("Unable to decode send data as json.")
};
match data.data.raw_name() {
Some(raw_file_name) if raw_file_name.dangerous_unsafe_unsanitized_raw() == send_data.fileName => (),
Some(raw_file_name) => err!(
"Send file name does not match.",
format!(
"Expected file name '{}' got '{}'",
send_data.fileName,
raw_file_name.dangerous_unsafe_unsanitized_raw()
)
),
_ => err!("Send file name does not match or is not provided."),
}
if file_id != send_data.id {
err!("Send file does not match send data.", format!("Expected id {} got {file_id}", send_data.id));
}
let Some(size) = data.data.len().to_u64() else {
err!("Send file size overflow.");
};
if size != send_data.size {
err!("Send file size does not match.", format!("Expected a file size of {} got {size}", send_data.size));
}
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(send_id);
let file_path = folder_path.join(file_id);
// Check if the file already exists; if so, do not overwrite it
if tokio::fs::metadata(&file_path).await.is_ok() {
err!("Send file has already been uploaded.", format!("File {file_path:?} already exists"))
}
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
@@ -392,22 +429,21 @@ async fn post_send_file_v2_data(
}
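One of the new upload checks above, isolated: an upload must never overwrite a file that already exists on disk. A sketch using the same tokio call:

use std::path::Path;

// Errors out when the target file is already present, mirroring the guard above.
async fn ensure_not_uploaded(file_path: &Path) -> Result<(), String> {
    if tokio::fs::metadata(file_path).await.is_ok() {
        return Err(format!("File {file_path:?} already exists"));
    }
    Ok(())
}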
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct SendAccessData {
pub Password: Option<String>,
pub password: Option<String>,
}
#[post("/sends/access/<access_id>", data = "<data>")]
async fn post_access(
access_id: &str,
data: JsonUpcase<SendAccessData>,
data: Json<SendAccessData>,
mut conn: DbConn,
ip: ClientIp,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_access_id(access_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
let Some(mut send) = Send::find_by_access_id(access_id, &mut conn).await else {
err_code!(SEND_INACCESSIBLE_MSG, 404)
};
if let Some(max_access_count) = send.max_access_count {
@@ -431,7 +467,7 @@ async fn post_access(
}
if send.password_hash.is_some() {
match data.into_inner().data.Password {
match data.into_inner().password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password", format!("IP: {}.", ip.ip)),
None => err_code!("Password not provided", format!("IP: {}.", ip.ip), 401),
@@ -449,7 +485,7 @@ async fn post_access(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&String::from("00000000-0000-0000-0000-000000000000"),
&String::from("00000000-0000-0000-0000-000000000000").into(),
&mut conn,
)
.await;
@@ -459,16 +495,15 @@ async fn post_access(
#[post("/sends/<send_id>/access/file/<file_id>", data = "<data>")]
async fn post_access_file(
send_id: &str,
file_id: &str,
data: JsonUpcase<SendAccessData>,
send_id: SendId,
file_id: SendFileId,
data: Json<SendAccessData>,
host: Host,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
let mut send = match Send::find_by_uuid(send_id, &mut conn).await {
Some(s) => s,
None => err_code!(SEND_INACCESSIBLE_MSG, 404),
let Some(mut send) = Send::find_by_uuid(&send_id, &mut conn).await else {
err_code!(SEND_INACCESSIBLE_MSG, 404)
};
if let Some(max_access_count) = send.max_access_count {
@@ -492,7 +527,7 @@ async fn post_access_file(
}
if send.password_hash.is_some() {
match data.into_inner().data.Password {
match data.into_inner().password {
Some(ref p) if send.check_password(p) => { /* Nothing to do here */ }
Some(_) => err!("Invalid password."),
None => err_code!("Password not provided", 401),
@@ -507,22 +542,22 @@ async fn post_access_file(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&String::from("00000000-0000-0000-0000-000000000000"),
&String::from("00000000-0000-0000-0000-000000000000").into(),
&mut conn,
)
.await;
let token_claims = crate::auth::generate_send_claims(send_id, file_id);
let token_claims = crate::auth::generate_send_claims(&send_id, &file_id);
let token = crate::auth::encode_jwt(&token_claims);
Ok(Json(json!({
"Object": "send-fileDownload",
"Id": file_id,
"Url": format!("{}/api/sends/{}/{}?t={}", &host.host, send_id, file_id, token)
"object": "send-fileDownload",
"id": file_id,
"url": format!("{}/api/sends/{}/{}?t={}", &host.host, send_id, file_id, token)
})))
}
#[get("/sends/<send_id>/<file_id>?<t>")]
async fn download_send(send_id: SafeString, file_id: SafeString, t: &str) -> Option<NamedFile> {
async fn download_send(send_id: SendId, file_id: SendFileId, t: &str) -> Option<NamedFile> {
if let Ok(claims) = crate::auth::decode_send(t) {
if claims.sub == format!("{send_id}/{file_id}") {
return NamedFile::open(Path::new(&CONFIG.sends_folder()).join(send_id).join(file_id)).await.ok();
@@ -531,37 +566,55 @@ async fn download_send(send_id: SafeString, file_id: SafeString, t: &str) -> Opt
None
}
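The download route only serves a file when the short-lived send JWT's sub claim matches the requested path, i.e.:

// The token's `sub` must be exactly "<send_id>/<file_id>".
fn send_token_matches(sub: &str, send_id: &str, file_id: &str) -> bool {
    sub == format!("{send_id}/{file_id}")
}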
#[put("/sends/<id>", data = "<data>")]
#[put("/sends/<send_id>", data = "<data>")]
async fn put_send(
id: &str,
data: JsonUpcase<SendData>,
send_id: SendId,
data: Json<SendData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let data: SendData = data.into_inner().data;
let data: SendData = data.into_inner();
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
let Some(mut send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Send send_id is invalid or does not belong to user")
};
update_send_from_data(&mut send, data, &headers, &mut conn, &nt, UpdateType::SyncSendUpdate).await?;
Ok(Json(send.to_json()))
}
pub async fn update_send_from_data(
send: &mut Send,
data: SendData,
headers: &Headers,
conn: &mut DbConn,
nt: &Notify<'_>,
ut: UpdateType,
) -> EmptyResult {
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
if send.atype != data.Type {
if send.atype != data.r#type {
err!("Sends can't change type")
}
if data.deletion_date > Utc::now() + TimeDelta::try_days(31).unwrap() {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
// When updating a file Send, we receive nulls in the File field, as it's immutable,
// so we only need to update the data field in the Text case
if data.Type == SendType::Text as i32 {
let data_str = if let Some(mut d) = data.Text {
d.as_object_mut().and_then(|d| d.remove("Response"));
if data.r#type == SendType::Text as i32 {
let data_str = if let Some(mut d) = data.text {
d.as_object_mut().and_then(|d| d.remove("response"));
serde_json::to_string(&d)?
} else {
err!("Send data not provided");
@@ -569,52 +622,36 @@ async fn put_send(
send.data = data_str;
}
if data.DeletionDate > Utc::now() + Duration::days(31) {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
send.name = data.Name;
send.akey = data.Key;
send.deletion_date = data.DeletionDate.naive_utc();
send.notes = data.Notes;
send.max_access_count = match data.MaxAccessCount {
send.name = data.name;
send.akey = data.key;
send.deletion_date = data.deletion_date.naive_utc();
send.notes = data.notes;
send.max_access_count = match data.max_access_count {
Some(m) => Some(m.into_i32()?),
_ => None,
};
send.expiration_date = data.ExpirationDate.map(|d| d.naive_utc());
send.hide_email = data.HideEmail;
send.disabled = data.Disabled;
send.expiration_date = data.expiration_date.map(|d| d.naive_utc());
send.hide_email = data.hide_email;
send.disabled = data.disabled;
// Only change the value if it's present
if let Some(password) = data.Password {
if let Some(password) = data.password {
send.set_password(Some(&password));
}
send.save(&mut conn).await?;
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
send.save(conn).await?;
if ut != UpdateType::None {
nt.send_send_update(ut, send, &send.update_users_revision(conn).await, &headers.device.uuid, conn).await;
}
Ok(())
}
#[delete("/sends/<id>")]
async fn delete_send(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
#[delete("/sends/<send_id>")]
async fn delete_send(send_id: SendId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
let Some(send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user")
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.delete(&mut conn).await?;
nt.send_send_update(
UpdateType::SyncSendDelete,
@@ -628,19 +665,14 @@ async fn delete_send(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_
Ok(())
}
#[put("/sends/<id>/remove-password")]
async fn put_remove_password(id: &str, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
#[put("/sends/<send_id>/remove-password")]
async fn put_remove_password(send_id: SendId, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> JsonResult {
enforce_disable_send_policy(&headers, &mut conn).await?;
let mut send = match Send::find_by_uuid(id, &mut conn).await {
Some(s) => s,
None => err!("Send not found"),
let Some(mut send) = Send::find_by_uuid_and_user(&send_id, &headers.user.uuid, &mut conn).await else {
err!("Send not found", "Invalid send uuid, or does not belong to user")
};
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
send.set_password(None);
send.save(&mut conn).await?;
nt.send_send_update(


@@ -3,14 +3,11 @@ use rocket::serde::json::Json;
use rocket::Route;
use crate::{
api::{
core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase,
PasswordOrOtpData,
},
api::{core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, PasswordOrOtpData},
auth::{ClientIp, Headers},
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType, UserId},
DbConn,
},
util::NumberOrString,
@@ -19,12 +16,12 @@ use crate::{
pub use crate::config::CONFIG;
pub fn routes() -> Vec<Route> {
routes![generate_authenticator, activate_authenticator, activate_authenticator_put,]
routes![generate_authenticator, activate_authenticator, activate_authenticator_put, disable_authenticator]
}
#[post("/two-factor/get-authenticator", data = "<data>")]
async fn generate_authenticator(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
async fn generate_authenticator(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
@@ -38,36 +35,32 @@ async fn generate_authenticator(data: JsonUpcase<PasswordOrOtpData>, headers: He
};
Ok(Json(json!({
"Enabled": enabled,
"Key": key,
"Object": "twoFactorAuthenticator"
"enabled": enabled,
"key": key,
"object": "twoFactorAuthenticator"
})))
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EnableAuthenticatorData {
Key: String,
Token: NumberOrString,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
key: String,
token: NumberOrString,
master_password_hash: Option<String>,
otp: Option<String>,
}
#[post("/two-factor/authenticator", data = "<data>")]
async fn activate_authenticator(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
let key = data.Key;
let token = data.Token.into_string();
async fn activate_authenticator(data: Json<EnableAuthenticatorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner();
let key = data.key;
let token = data.token.into_string();
let mut user = headers.user;
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
master_password_hash: data.master_password_hash,
otp: data.otp,
}
.validate(&user, true, &mut conn)
.await?;
@@ -90,23 +83,19 @@ async fn activate_authenticator(
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
"Key": key,
"Object": "twoFactorAuthenticator"
"enabled": true,
"key": key,
"object": "twoFactorAuthenticator"
})))
}
#[put("/two-factor/authenticator", data = "<data>")]
async fn activate_authenticator_put(
data: JsonUpcase<EnableAuthenticatorData>,
headers: Headers,
conn: DbConn,
) -> JsonResult {
async fn activate_authenticator_put(data: Json<EnableAuthenticatorData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_authenticator(data, headers, conn).await
}
pub async fn validate_totp_code_str(
user_uuid: &str,
user_id: &UserId,
totp_code: &str,
secret: &str,
ip: &ClientIp,
@@ -116,11 +105,11 @@ pub async fn validate_totp_code_str(
err!("TOTP code is not a number");
}
validate_totp_code(user_uuid, totp_code, secret, ip, conn).await
validate_totp_code(user_id, totp_code, secret, ip, conn).await
}
pub async fn validate_totp_code(
user_uuid: &str,
user_id: &UserId,
totp_code: &str,
secret: &str,
ip: &ClientIp,
@@ -128,16 +117,15 @@ pub async fn validate_totp_code(
) -> EmptyResult {
use totp_lite::{totp_custom, Sha1};
let decoded_secret = match BASE32.decode(secret.as_bytes()) {
Ok(s) => s,
Err(_) => err!("Invalid TOTP secret"),
let Ok(decoded_secret) = BASE32.decode(secret.as_bytes()) else {
err!("Invalid TOTP secret")
};
let mut twofactor =
match TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Authenticator as i32, conn).await {
Some(tf) => tf,
_ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
};
let mut twofactor = match TwoFactor::find_by_user_and_type(user_id, TwoFactorType::Authenticator as i32, conn).await
{
Some(tf) => tf,
_ => TwoFactor::new(user_id.clone(), TwoFactorType::Authenticator, secret.to_string()),
};
// The number of steps to check back and forward in time
// Also check if we need to disable time-drifted TOTP codes.
@@ -156,8 +144,8 @@ pub async fn validate_totp_code(
let time = (current_timestamp + step * 30i64) as u64;
let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
// Check the the given code equals the generated and if the time_step is larger then the one last used.
if generated == totp_code && time_step > i64::from(twofactor.last_used) {
// Check that the given code equals the generated one and that the time_step is larger than the one last used.
if generated == totp_code && time_step > twofactor.last_used {
// If the step does not equal 0, the time has drifted on either the server or the client side.
if step != 0 {
warn!("TOTP Time drift detected. The step offset is {}", step);
@@ -165,10 +153,10 @@ pub async fn validate_totp_code(
// Save the last used time step so only TOTP time steps higher than this one are allowed.
// This will also save a newly created twofactor if the code is correct.
twofactor.last_used = time_step as i32;
twofactor.last_used = time_step;
twofactor.save(conn).await?;
return Ok(());
} else if generated == totp_code && time_step <= i64::from(twofactor.last_used) {
} else if generated == totp_code && time_step <= twofactor.last_used {
warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
@@ -187,3 +175,47 @@ pub async fn validate_totp_code(
}
);
}
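A plausible condensation of the step bookkeeping above; the exact derivation of time_step is not shown in this excerpt, so the division by the 30-second period is an assumption:

// A code is only accepted when its time step is newer than the last one used.
fn totp_step_is_fresh(current_timestamp: i64, step_offset: i64, last_used: i64) -> bool {
    let time_step = current_timestamp / 30 + step_offset; // assumed derivation
    time_step > last_used
}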
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DisableAuthenticatorData {
key: String,
master_password_hash: String,
r#type: NumberOrString,
}
#[delete("/two-factor/authenticator", data = "<data>")]
async fn disable_authenticator(data: Json<DisableAuthenticatorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let user = headers.user;
let type_ = data.r#type.into_i32()?;
if !user.check_valid_password(&data.master_password_hash) {
err!("Invalid password");
}
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
if twofactor.data == data.key {
twofactor.delete(&mut conn).await?;
log_user_event(
EventType::UserDisabled2fa as i32,
&user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
)
.await;
} else {
err!(format!("TOTP key for user {} does not match recorded value, cannot deactivate", &user.email));
}
}
if TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty() {
super::enforce_2fa_policy(&user, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await?;
}
Ok(Json(json!({
"enabled": false,
"keys": type_,
"object": "twoFactorProvider"
})))
}


@@ -5,17 +5,17 @@ use rocket::Route;
use crate::{
api::{
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult,
PasswordOrOtpData,
},
auth::Headers,
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType, User},
models::{EventType, TwoFactor, TwoFactorType, User, UserId},
DbConn,
},
error::MapResult,
util::get_reqwest_client,
http_client::make_http_request,
CONFIG,
};
@@ -92,8 +92,8 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";
#[post("/two-factor/get-duo", data = "<data>")]
async fn get_duo(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
async fn get_duo(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
@@ -109,16 +109,16 @@ async fn get_duo(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn
let json = if let Some(data) = data {
json!({
"Enabled": enabled,
"Host": data.host,
"SecretKey": data.sk,
"IntegrationKey": data.ik,
"Object": "twoFactorDuo"
"enabled": enabled,
"host": data.host,
"secretKey": data.sk,
"integrationKey": data.ik,
"object": "twoFactorDuo"
})
} else {
json!({
"Enabled": enabled,
"Object": "twoFactorDuo"
"enabled": enabled,
"object": "twoFactorDuo"
})
};
@@ -126,21 +126,21 @@ async fn get_duo(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn
}
#[derive(Deserialize)]
#[allow(non_snake_case, dead_code)]
#[serde(rename_all = "camelCase")]
struct EnableDuoData {
Host: String,
SecretKey: String,
IntegrationKey: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
host: String,
secret_key: String,
integration_key: String,
master_password_hash: Option<String>,
otp: Option<String>,
}
impl From<EnableDuoData> for DuoData {
fn from(d: EnableDuoData) -> Self {
Self {
host: d.Host,
ik: d.IntegrationKey,
sk: d.SecretKey,
host: d.host,
ik: d.integration_key,
sk: d.secret_key,
}
}
}
@@ -151,17 +151,17 @@ fn check_duo_fields_custom(data: &EnableDuoData) -> bool {
st.is_empty() || s == DISABLED_MESSAGE_DEFAULT
}
!empty_or_default(&data.Host) && !empty_or_default(&data.SecretKey) && !empty_or_default(&data.IntegrationKey)
!empty_or_default(&data.host) && !empty_or_default(&data.secret_key) && !empty_or_default(&data.integration_key)
}
#[post("/two-factor/duo", data = "<data>")]
async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableDuoData = data.into_inner().data;
async fn activate_duo(data: Json<EnableDuoData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableDuoData = data.into_inner();
let mut user = headers.user;
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash.clone(),
Otp: data.Otp.clone(),
master_password_hash: data.master_password_hash.clone(),
otp: data.otp.clone(),
}
.validate(&user, true, &mut conn)
.await?;
@@ -184,16 +184,16 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut con
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Enabled": true,
"Host": data.host,
"SecretKey": data.sk,
"IntegrationKey": data.ik,
"Object": "twoFactorDuo"
"enabled": true,
"host": data.host,
"secretKey": data.sk,
"integrationKey": data.ik,
"object": "twoFactorDuo"
})))
}
#[put("/two-factor/duo", data = "<data>")]
async fn activate_duo_put(data: JsonUpcase<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_duo_put(data: Json<EnableDuoData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_duo(data, headers, conn).await
}
@@ -210,10 +210,7 @@ async fn duo_api_request(method: &str, path: &str, params: &str, data: &DuoData)
let m = Method::from_str(method).unwrap_or_default();
let client = get_reqwest_client();
client
.request(m, &url)
make_http_request(m, &url)?
.basic_auth(username, Some(password))
.header(header::USER_AGENT, "vaultwarden:Duo/1.0 (Rust)")
.header(header::DATE, date)
@@ -231,13 +228,12 @@ const AUTH_PREFIX: &str = "AUTH";
const DUO_PREFIX: &str = "TX";
const APP_PREFIX: &str = "APP";
async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
async fn get_user_duo_data(user_id: &UserId, conn: &mut DbConn) -> DuoStatus {
let type_ = TwoFactorType::Duo as i32;
// If the user doesn't have an entry, Duo is disabled
let twofactor = match TwoFactor::find_by_user_and_type(uuid, type_, conn).await {
Some(t) => t,
None => return DuoStatus::Disabled(DuoData::global().is_some()),
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, type_, conn).await else {
return DuoStatus::Disabled(DuoData::global().is_some());
};
// If the user has the required values, we use those
@@ -255,7 +251,7 @@ async fn get_user_duo_data(uuid: &str, conn: &mut DbConn) -> DuoStatus {
}
// let (ik, sk, ak, host) = get_duo_keys();
async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiResult<(String, String, String, String)> {
pub(crate) async fn get_duo_keys_email(email: &str, conn: &mut DbConn) -> ApiResult<(String, String, String, String)> {
let data = match User::find_by_mail(email, conn).await {
Some(u) => get_user_duo_data(&u.uuid, conn).await.data(),
_ => DuoData::global(),
@@ -284,10 +280,6 @@ fn sign_duo_values(key: &str, email: &str, ikey: &str, prefix: &str, expire: i64
}
pub async fn validate_duo_login(email: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
// email is as entered by the user, so it needs to be normalized before
// comparison with auth_user below.
let email = &email.to_lowercase();
let split: Vec<&str> = response.split(':').collect();
if split.len() != 2 {
err!(
@@ -340,14 +332,12 @@ fn parse_duo_values(key: &str, val: &str, ikey: &str, prefix: &str, time: i64) -
err!("Prefixes don't match")
}
let cookie_vec = match BASE64.decode(u_b64.as_bytes()) {
Ok(c) => c,
Err(_) => err!("Invalid Duo cookie encoding"),
let Ok(cookie_vec) = BASE64.decode(u_b64.as_bytes()) else {
err!("Invalid Duo cookie encoding")
};
let cookie = match String::from_utf8(cookie_vec) {
Ok(c) => c,
Err(_) => err!("Invalid Duo cookie encoding"),
let Ok(cookie) = String::from_utf8(cookie_vec) else {
err!("Invalid Duo cookie encoding")
};
let cookie_split: Vec<&str> = cookie.split('|').collect();

View File

@@ -0,0 +1,495 @@
use chrono::Utc;
use data_encoding::HEXLOWER;
use jsonwebtoken::{Algorithm, DecodingKey, EncodingKey, Header, Validation};
use reqwest::{header, StatusCode};
use ring::digest::{digest, Digest, SHA512_256};
use serde::Serialize;
use std::collections::HashMap;
use crate::{
api::{core::two_factor::duo::get_duo_keys_email, EmptyResult},
crypto,
db::{
models::{DeviceId, EventType, TwoFactorDuoContext},
DbConn, DbPool,
},
error::Error,
http_client::make_http_request,
CONFIG,
};
use url::Url;
// The location on this service that Duo should redirect users to. For us, this is a bridge
// built into the Bitwarden clients.
// See: https://github.com/bitwarden/clients/blob/main/apps/web/src/connectors/duo-redirect.ts
const DUO_REDIRECT_LOCATION: &str = "duo-redirect-connector.html";
// Number of seconds that a JWT we generate for Duo should be valid for.
const JWT_VALIDITY_SECS: i64 = 300;
// Number of seconds that a Duo context stored in the database should be valid for.
const CTX_VALIDITY_SECS: i64 = 300;
// Expected algorithm used by Duo to sign JWTs.
const DUO_RESP_SIGNATURE_ALG: Algorithm = Algorithm::HS512;
// Signature algorithm we're using to sign JWTs for Duo. Must be either HS512 or HS256.
const JWT_SIGNATURE_ALG: Algorithm = Algorithm::HS512;
// Size of random strings for state and nonce. Must be at least 16 characters and at most 1024 characters.
// If increasing this above 64, also increase the size of the twofactor_duo_ctx.state and
// twofactor_duo_ctx.nonce database columns for postgres and mariadb.
const STATE_LENGTH: usize = 64;
// client_assertion payload for health checks and obtaining MFA results.
#[derive(Debug, Serialize, Deserialize)]
struct ClientAssertion {
pub iss: String,
pub sub: String,
pub aud: String,
pub exp: i64,
pub jti: String,
pub iat: i64,
}
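// A minimal sketch (all values hypothetical, not real Duo credentials) of the
// claim set Duo expects: iss and sub both carry the client id, and aud must be
// the exact endpoint URL the assertion is posted to.
fn example_client_assertion() -> ClientAssertion {
    let now = Utc::now().timestamp();
    ClientAssertion {
        iss: String::from("DIXXXXXXXXXXXXXXXXXX"), // hypothetical client id
        sub: String::from("DIXXXXXXXXXXXXXXXXXX"),
        aud: String::from("https://api-xxxxxxxx.duosecurity.com/oauth/v1/health_check"),
        exp: now + JWT_VALIDITY_SECS,
        jti: crypto::get_random_string_alphanum(STATE_LENGTH),
        iat: now,
    }
}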
// Authorization request payload sent along with the client redirect to Duo for MFA
#[derive(Debug, Serialize, Deserialize)]
struct AuthorizationRequest {
pub response_type: String,
pub scope: String,
pub exp: i64,
pub client_id: String,
pub redirect_uri: String,
pub state: String,
pub duo_uname: String,
pub iss: String,
pub aud: String,
pub nonce: String,
}
// Duo service health check responses
#[derive(Debug, Serialize, Deserialize)]
#[serde(untagged)]
enum HealthCheckResponse {
HealthOK {
stat: String,
},
HealthFail {
message: String,
message_detail: String,
},
}
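// Because the enum is untagged, serde tries each variant in order and picks
// the first whose fields match the JSON shape; a sketch with assumed bodies:
fn parse_health_check(body: &str) -> Result<HealthCheckResponse, serde_json::Error> {
    // {"stat":"OK"} becomes HealthOK, while a body carrying "message" and
    // "message_detail" becomes HealthFail.
    serde_json::from_str(body)
}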
// Outer structure of response when exchanging authz code for MFA results
#[derive(Debug, Serialize, Deserialize)]
struct IdTokenResponse {
id_token: String, // IdTokenClaims
access_token: String,
expires_in: i64,
token_type: String,
}
// Inner structure of IdTokenResponse.id_token
#[derive(Debug, Serialize, Deserialize)]
struct IdTokenClaims {
preferred_username: String,
nonce: String,
}
// Duo OIDC Authorization Client
// See https://duo.com/docs/oauthapi
struct DuoClient {
client_id: String, // Duo Client ID (DuoData.ik)
client_secret: String, // Duo Client Secret (DuoData.sk)
api_host: String, // Duo API hostname (DuoData.host)
redirect_uri: String, // URL in this application clients should call for MFA verification
}
impl DuoClient {
// Construct a new DuoClient
fn new(client_id: String, client_secret: String, api_host: String, redirect_uri: String) -> DuoClient {
DuoClient {
client_id,
client_secret,
api_host,
redirect_uri,
}
}
// Generate a client assertion for health checks and authorization code exchange.
fn new_client_assertion(&self, url: &str) -> ClientAssertion {
let now = Utc::now().timestamp();
let jwt_id = crypto::get_random_string_alphanum(STATE_LENGTH);
ClientAssertion {
iss: self.client_id.clone(),
sub: self.client_id.clone(),
aud: url.to_string(),
exp: now + JWT_VALIDITY_SECS,
jti: jwt_id,
iat: now,
}
}
// Given a serde-serializable struct, attempt to encode it as a JWT
fn encode_duo_jwt<T: Serialize>(&self, jwt_payload: T) -> Result<String, Error> {
match jsonwebtoken::encode(
&Header::new(JWT_SIGNATURE_ALG),
&jwt_payload,
&EncodingKey::from_secret(self.client_secret.as_bytes()),
) {
Ok(token) => Ok(token),
Err(e) => err!(format!("Error encoding Duo JWT: {e:?}")),
}
}
// "required" health check to verify the integration is configured and Duo's services
// are up.
// https://duo.com/docs/oauthapi#health-check
async fn health_check(&self) -> Result<(), Error> {
let health_check_url: String = format!("https://{}/oauth/v1/health_check", self.api_host);
let jwt_payload = self.new_client_assertion(&health_check_url);
let token = match self.encode_duo_jwt(jwt_payload) {
Ok(token) => token,
Err(e) => return Err(e),
};
let mut post_body = HashMap::new();
post_body.insert("client_assertion", token);
post_body.insert("client_id", self.client_id.clone());
let res = match make_http_request(reqwest::Method::POST, &health_check_url)?
.header(header::USER_AGENT, "vaultwarden:Duo/2.0 (Rust)")
.form(&post_body)
.send()
.await
{
Ok(r) => r,
Err(e) => err!(format!("Error requesting Duo health check: {e:?}")),
};
let response: HealthCheckResponse = match res.json::<HealthCheckResponse>().await {
Ok(r) => r,
Err(e) => err!(format!("Duo health check response decode error: {e:?}")),
};
let health_stat: String = match response {
HealthCheckResponse::HealthOK {
stat,
} => stat,
HealthCheckResponse::HealthFail {
message,
message_detail,
} => err!(format!("Duo health check FAIL response, msg: {}, detail: {}", message, message_detail)),
};
if health_stat != "OK" {
err!(format!("Duo health check failed, got OK-like body with stat {health_stat}"));
}
Ok(())
}
// Constructs the URL for the authorization request endpoint on Duo's service.
// Clients are sent here to continue authentication.
// https://duo.com/docs/oauthapi#authorization-request
fn make_authz_req_url(&self, duo_username: &str, state: String, nonce: String) -> Result<String, Error> {
let now = Utc::now().timestamp();
let jwt_payload = AuthorizationRequest {
response_type: String::from("code"),
scope: String::from("openid"),
exp: now + JWT_VALIDITY_SECS,
client_id: self.client_id.clone(),
redirect_uri: self.redirect_uri.clone(),
state,
duo_uname: String::from(duo_username),
iss: self.client_id.clone(),
aud: format!("https://{}", self.api_host),
nonce,
};
let token = self.encode_duo_jwt(jwt_payload)?;
let authz_endpoint = format!("https://{}/oauth/v1/authorize", self.api_host);
let mut auth_url = match Url::parse(authz_endpoint.as_str()) {
Ok(url) => url,
Err(e) => err!(format!("Error parsing Duo authorization URL: {e:?}")),
};
{
let mut query_params = auth_url.query_pairs_mut();
query_params.append_pair("response_type", "code");
query_params.append_pair("client_id", self.client_id.as_str());
query_params.append_pair("request", token.as_str());
}
let final_auth_url = auth_url.to_string();
Ok(final_auth_url)
}
// Exchange the authorization code returned by the user's client for the MFA
// result, then validate it.
// See: https://duo.com/docs/oauthapi#access-token (under Response Format)
async fn exchange_authz_code_for_result(
&self,
duo_code: &str,
duo_username: &str,
nonce: &str,
) -> Result<(), Error> {
if duo_code.is_empty() {
err!("Empty Duo authorization code")
}
let token_url = format!("https://{}/oauth/v1/token", self.api_host);
let jwt_payload = self.new_client_assertion(&token_url);
let token = match self.encode_duo_jwt(jwt_payload) {
Ok(token) => token,
Err(e) => return Err(e),
};
let mut post_body = HashMap::new();
post_body.insert("grant_type", String::from("authorization_code"));
post_body.insert("code", String::from(duo_code));
// Must be the same URL that was supplied in the authorization request for the supplied duo_code
post_body.insert("redirect_uri", self.redirect_uri.clone());
post_body
.insert("client_assertion_type", String::from("urn:ietf:params:oauth:client-assertion-type:jwt-bearer"));
post_body.insert("client_assertion", token);
let res = match make_http_request(reqwest::Method::POST, &token_url)?
.header(header::USER_AGENT, "vaultwarden:Duo/2.0 (Rust)")
.form(&post_body)
.send()
.await
{
Ok(r) => r,
Err(e) => err!(format!("Error exchanging Duo code: {e:?}")),
};
let status_code = res.status();
if status_code != StatusCode::OK {
err!(format!("Failure response from Duo: {}", status_code))
}
let response: IdTokenResponse = match res.json::<IdTokenResponse>().await {
Ok(r) => r,
Err(e) => err!(format!("Error decoding ID token response: {e:?}")),
};
let mut validation = Validation::new(DUO_RESP_SIGNATURE_ALG);
validation.set_required_spec_claims(&["exp", "aud", "iss"]);
validation.set_audience(&[&self.client_id]);
validation.set_issuer(&[token_url.as_str()]);
let token_data = match jsonwebtoken::decode::<IdTokenClaims>(
&response.id_token,
&DecodingKey::from_secret(self.client_secret.as_bytes()),
&validation,
) {
Ok(c) => c,
Err(e) => err!(format!("Failed to decode Duo token {e:?}")),
};
let matching_nonces = crypto::ct_eq(nonce, &token_data.claims.nonce);
let matching_usernames = crypto::ct_eq(duo_username, &token_data.claims.preferred_username);
if !(matching_nonces && matching_usernames) {
err!("Error validating Duo authorization, nonce or username mismatch.")
};
Ok(())
}
}
struct DuoAuthContext {
pub state: String,
pub user_email: String,
pub nonce: String,
pub exp: i64,
}
// Given a state string, retrieve the associated Duo auth context and
// delete the retrieved state from the database.
async fn extract_context(state: &str, conn: &mut DbConn) -> Option<DuoAuthContext> {
let ctx: TwoFactorDuoContext = match TwoFactorDuoContext::find_by_state(state, conn).await {
Some(c) => c,
None => return None,
};
if ctx.exp < Utc::now().timestamp() {
ctx.delete(conn).await.ok();
return None;
}
// Copy the context data, so that we can delete the context from
// the database before returning.
let ret_ctx = DuoAuthContext {
state: ctx.state.clone(),
user_email: ctx.user_email.clone(),
nonce: ctx.nonce.clone(),
exp: ctx.exp,
};
ctx.delete(conn).await.ok();
Some(ret_ctx)
}
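// The context is one-shot by design: a second call with the same state yields
// None, because the row is deleted on first retrieval. Sketch (assumed state):
//
//   let first = extract_context("state123", &mut conn).await;  // Some(ctx)
//   let second = extract_context("state123", &mut conn).await; // None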
// Task to clean up expired Duo authentication contexts that may have accumulated in the database.
pub async fn purge_duo_contexts(pool: DbPool) {
debug!("Purging Duo authentication contexts");
if let Ok(mut conn) = pool.get().await {
TwoFactorDuoContext::purge_expired_duo_contexts(&mut conn).await;
} else {
error!("Failed to get DB connection while purging expired Duo authentications")
}
}
// Construct the URL that Duo should redirect users to.
fn make_callback_url(client_name: &str) -> Result<String, Error> {
// Get the location of this application as defined in the config.
let base = match Url::parse(&format!("{}/", CONFIG.domain())) {
Ok(url) => url,
Err(e) => err!(format!("Error parsing configured domain URL (check your domain configuration): {e:?}")),
};
// Add the client redirect bridge location
let mut callback = match base.join(DUO_REDIRECT_LOCATION) {
Ok(url) => url,
Err(e) => err!(format!("Error constructing Duo redirect URL (check your domain configuration): {e:?}")),
};
// Add the 'client' string with the authenticating device type. The callback connector uses this
// information to figure out how it should handle certain clients.
{
let mut query_params = callback.query_pairs_mut();
query_params.append_pair("client", client_name);
}
Ok(callback.to_string())
}
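// A self-contained sketch of the construction above, assuming a hypothetical
// DOMAIN; the client name rides along as a query parameter:
fn example_callback_url() -> Result<String, url::ParseError> {
    let base = Url::parse("https://vault.example.com/")?;
    let mut callback = base.join(DUO_REDIRECT_LOCATION)?;
    callback.query_pairs_mut().append_pair("client", "web");
    // => "https://vault.example.com/duo-redirect-connector.html?client=web"
    Ok(callback.to_string())
}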
// Pre-redirect first stage of the Duo OIDC authentication flow.
// Returns the "AuthUrl" that should be returned to clients for MFA.
pub async fn get_duo_auth_url(
email: &str,
client_id: &str,
device_identifier: &DeviceId,
conn: &mut DbConn,
) -> Result<String, Error> {
let (ik, sk, _, host) = get_duo_keys_email(email, conn).await?;
let callback_url = match make_callback_url(client_id) {
Ok(url) => url,
Err(e) => return Err(e),
};
let client = DuoClient::new(ik, sk, host, callback_url);
match client.health_check().await {
Ok(()) => {}
Err(e) => return Err(e),
};
// Generate random OAuth2 state and OIDC Nonce
let state: String = crypto::get_random_string_alphanum(STATE_LENGTH);
let nonce: String = crypto::get_random_string_alphanum(STATE_LENGTH);
// Bind the nonce to the device that's currently authenticating by hashing the nonce and device id
// and sending the result as the OIDC nonce.
let d: Digest = digest(&SHA512_256, format!("{nonce}{device_identifier}").as_bytes());
let hash: String = HEXLOWER.encode(d.as_ref());
match TwoFactorDuoContext::save(state.as_str(), email, nonce.as_str(), CTX_VALIDITY_SECS, conn).await {
Ok(()) => client.make_authz_req_url(email, state, hash),
Err(e) => err!(format!("Error saving Duo authentication context: {e:?}")),
}
}
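// A sketch of the nonce/device binding used above: the OIDC nonce sent to Duo
// is SHA512_256(nonce || device_identifier), hex-encoded, so a stolen state
// can't be replayed from a different device (inputs hypothetical):
fn bind_nonce_to_device(nonce: &str, device_identifier: &str) -> String {
    let d: Digest = digest(&SHA512_256, format!("{nonce}{device_identifier}").as_bytes());
    HEXLOWER.encode(d.as_ref())
}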
// Post-redirect second stage of the Duo OIDC authentication flow.
// Exchanges an authorization code for the MFA result with Duo's API and validates the result.
pub async fn validate_duo_login(
email: &str,
two_factor_token: &str,
client_id: &str,
device_identifier: &DeviceId,
conn: &mut DbConn,
) -> EmptyResult {
// Result supplied to us by clients in the form "<authz code>|<state>"
let split: Vec<&str> = two_factor_token.split('|').collect();
if split.len() != 2 {
err!(
"Invalid response length",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
);
}
let code = split[0];
let state = split[1];
let (ik, sk, _, host) = get_duo_keys_email(email, conn).await?;
// Get the context by the state reported by the client. If we don't have one,
// it means the context is either missing or expired.
let ctx = match extract_context(state, conn).await {
Some(c) => c,
None => {
err!(
"Error validating duo authentication",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
};
// Context validation steps
let matching_usernames = crypto::ct_eq(email, &ctx.user_email);
// Probably redundant, but we're double-checking them anyway.
let matching_states = crypto::ct_eq(state, &ctx.state);
let unexpired_context = ctx.exp > Utc::now().timestamp();
if !(matching_usernames && matching_states && unexpired_context) {
err!(
"Error validating duo authentication",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
let callback_url = match make_callback_url(client_id) {
Ok(url) => url,
Err(e) => return Err(e),
};
let client = DuoClient::new(ik, sk, host, callback_url);
match client.health_check().await {
Ok(()) => {}
Err(e) => return Err(e),
};
let d: Digest = digest(&SHA512_256, format!("{}{}", ctx.nonce, device_identifier).as_bytes());
let hash: String = HEXLOWER.encode(d.as_ref());
match client.exchange_authz_code_for_result(code, email, hash.as_str()).await {
Ok(_) => Ok(()),
Err(_) => {
err!(
"Error validating duo authentication",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
)
}
}
}
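// A minimal sketch of the "<authz code>|<state>" split handled at the top of
// validate_duo_login (input value hypothetical):
fn split_duo_token(two_factor_token: &str) -> Option<(&str, &str)> {
    match two_factor_token.split('|').collect::<Vec<&str>>()[..] {
        [code, state] => Some((code, state)),
        _ => None,
    }
}
// split_duo_token("abc123|state456") => Some(("abc123", "state456"))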

View File

@@ -1,16 +1,16 @@
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{DateTime, TimeDelta, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
EmptyResult, JsonResult, PasswordOrOtpData,
},
auth::Headers,
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType, User, UserId},
DbConn,
},
error::{Error, MapResult},
@@ -22,28 +22,30 @@ pub fn routes() -> Vec<Route> {
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct SendEmailLoginData {
Email: String,
MasterPasswordHash: String,
// DeviceIdentifier: String, // Currently not used
#[serde(alias = "Email")]
email: String,
#[serde(alias = "MasterPasswordHash")]
master_password_hash: String,
}
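// Both casings deserialize thanks to rename_all = "camelCase" plus the
// per-field aliases; a sketch with hypothetical values:
fn example_accepts_both_casings() -> Result<(), serde_json::Error> {
    let _new: SendEmailLoginData =
        serde_json::from_str(r#"{"email":"a@example.com","masterPasswordHash":"h"}"#)?;
    let _legacy: SendEmailLoginData =
        serde_json::from_str(r#"{"Email":"a@example.com","MasterPasswordHash":"h"}"#)?;
    Ok(())
}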
/// User is trying to log in and wants to use email 2FA.
/// Does not require Bearer token
#[post("/two-factor/send-email-login", data = "<data>")] // JsonResult
async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, mut conn: DbConn) -> EmptyResult {
let data: SendEmailLoginData = data.into_inner().data;
async fn send_email_login(data: Json<SendEmailLoginData>, mut conn: DbConn) -> EmptyResult {
let data: SendEmailLoginData = data.into_inner();
use crate::db::models::User;
// Get the user
let user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
let Some(user) = User::find_by_mail(&data.email, &mut conn).await else {
err!("Username or password is incorrect. Try again.")
};
// Check password
if !user.check_valid_password(&data.MasterPasswordHash) {
if !user.check_valid_password(&data.master_password_hash) {
err!("Username or password is incorrect. Try again.")
}
@@ -57,10 +59,9 @@ async fn send_email_login(data: JsonUpcase<SendEmailLoginData>, mut conn: DbConn
}
/// Generate the token, save the data for later verification and send email to user
pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn send_token(user_id: &UserId, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::Email as i32;
let mut twofactor =
TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await.map_res("Two factor not found")?;
let mut twofactor = TwoFactor::find_by_user_and_type(user_id, type_, conn).await.map_res("Two factor not found")?;
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
@@ -76,8 +77,8 @@ pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
/// When the user clicks on Manage email 2FA, show them the related information
#[post("/two-factor/get-email", data = "<data>")]
async fn get_email(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
async fn get_email(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
@@ -92,30 +93,30 @@ async fn get_email(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut co
};
Ok(Json(json!({
"Email": mfa_email,
"Enabled": enabled,
"Object": "twoFactorEmail"
"email": mfa_email,
"enabled": enabled,
"object": "twoFactorEmail"
})))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct SendEmailData {
/// Email address where 2FA codes will be sent to; this can differ from the user's account email.
Email: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
email: String,
master_password_hash: Option<String>,
otp: Option<String>,
}
/// Send a verification email to the specified email address to check whether it exists and belongs to the user.
#[post("/two-factor/send-email", data = "<data>")]
async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: SendEmailData = data.into_inner().data;
async fn send_email(data: Json<SendEmailData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: SendEmailData = data.into_inner();
let user = headers.user;
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
master_password_hash: data.master_password_hash,
otp: data.otp,
}
.validate(&user, false, &mut conn)
.await?;
@@ -131,7 +132,7 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
}
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let twofactor_data = EmailTokenData::new(data.Email, generated_token);
let twofactor_data = EmailTokenData::new(data.email, generated_token);
// Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::EmailVerificationChallenge, twofactor_data.to_json());
@@ -143,24 +144,24 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
}
#[derive(Deserialize, Serialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct EmailData {
Email: String,
Token: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
email: String,
token: String,
master_password_hash: Option<String>,
otp: Option<String>,
}
/// Verify the email belongs to the user and can be used for 2FA email codes.
#[put("/two-factor/email", data = "<data>")]
async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EmailData = data.into_inner().data;
async fn email(data: Json<EmailData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EmailData = data.into_inner();
let mut user = headers.user;
// This is the last step in the verification process; delete the OTP directly afterwards
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
master_password_hash: data.master_password_hash,
otp: data.otp,
}
.validate(&user, true, &mut conn)
.await?;
@@ -171,12 +172,11 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn)
let mut email_data = EmailTokenData::from_json(&twofactor.data)?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!("No token available"),
let Some(issued_token) = &email_data.last_token else {
err!("No token available")
};
if !crypto::ct_eq(issued_token, data.Token) {
if !crypto::ct_eq(issued_token, data.token) {
err!("Token is invalid")
}
@@ -190,26 +190,25 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn)
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(Json(json!({
"Email": email_data.email,
"Enabled": "true",
"Object": "twoFactorEmail"
"email": email_data.email,
"enabled": "true",
"object": "twoFactorEmail"
})))
}
/// Validate the email code when used as a TwoFactor token mechanism
pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn validate_email_code_str(user_id: &UserId, token: &str, data: &str, conn: &mut DbConn) -> EmptyResult {
let mut email_data = EmailTokenData::from_json(data)?;
let mut twofactor = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::Email as i32, conn)
let mut twofactor = TwoFactor::find_by_user_and_type(user_id, TwoFactorType::Email as i32, conn)
.await
.map_res("Two factor not found")?;
let issued_token = match &email_data.last_token {
Some(t) => t,
_ => err!(
let Some(issued_token) = &email_data.last_token else {
err!(
"No token available",
ErrorEvent {
event: EventType::UserFailedLogIn2fa
}
),
)
};
if !crypto::ct_eq(issued_token, token) {
@@ -232,9 +231,9 @@ pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, c
twofactor.data = email_data.to_json();
twofactor.save(conn).await?;
let date = NaiveDateTime::from_timestamp_opt(email_data.token_sent, 0).expect("Email token timestamp invalid.");
let date = DateTime::from_timestamp(email_data.token_sent, 0).expect("Email token timestamp invalid.").naive_utc();
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
if date + TimeDelta::try_seconds(max_time).unwrap() < Utc::now().naive_utc() {
err!(
"Token has expired",
ErrorEvent {
@@ -265,14 +264,14 @@ impl EmailTokenData {
EmailTokenData {
email,
last_token: Some(token),
token_sent: Utc::now().naive_utc().timestamp(),
token_sent: Utc::now().timestamp(),
attempts: 0,
}
}
pub fn set_token(&mut self, token: String) {
self.last_token = Some(token);
self.token_sent = Utc::now().naive_utc().timestamp();
self.token_sent = Utc::now().timestamp();
}
pub fn reset_token(&mut self) {
@@ -289,7 +288,7 @@ impl EmailTokenData {
}
pub fn from_json(string: &str) -> Result<EmailTokenData, Error> {
let res: Result<EmailTokenData, crate::serde_json::Error> = serde_json::from_str(string);
let res: Result<EmailTokenData, serde_json::Error> = serde_json::from_str(string);
match res {
Ok(x) => Ok(x),
Err(_) => err!("Could not decode EmailTokenData from string"),
@@ -297,6 +296,15 @@ impl EmailTokenData {
}
}
pub async fn activate_email_2fa(user: &User, conn: &mut DbConn) -> EmptyResult {
if user.verified_at.is_none() {
err!("Auto-enabling of email 2FA failed because the users email address has not been verified!");
}
let twofactor_data = EmailTokenData::new(user.email.clone(), String::new());
let twofactor = TwoFactor::new(user.uuid.clone(), TwoFactorType::Email, twofactor_data.to_json());
twofactor.save(conn).await
}
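// A sketch of the expiry arithmetic validate_email_code_str relies on, with
// an assumed lifetime in seconds (mirrors the DateTime/TimeDelta calls above):
fn token_expired(token_sent: i64, max_secs: i64) -> bool {
    let sent = DateTime::from_timestamp(token_sent, 0)
        .expect("Email token timestamp invalid.")
        .naive_utc();
    sent + TimeDelta::try_seconds(max_secs).unwrap() < Utc::now().naive_utc()
}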
/// Takes an email address and obscures the local part with asterisks, leaving only two characters visible.
pub fn obscure_email(email: &str) -> String {
let split: Vec<&str> = email.rsplitn(2, '@').collect();
@@ -318,6 +326,14 @@ pub fn obscure_email(email: &str) -> String {
format!("{}@{}", new_name, &domain)
}
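// rsplitn(2, '@') walks from the end, so the split happens on the last '@'
// and index 0 is always the domain; a small illustration:
fn example_rsplit() {
    let split: Vec<&str> = "user@example.com".rsplitn(2, '@').collect();
    assert_eq!(split[0], "example.com");
    assert_eq!(split[1], "user");
}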
pub async fn find_and_activate_email_2fa(user_id: &UserId, conn: &mut DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(user_id, conn).await {
activate_email_2fa(&user, conn).await
} else {
err!("User not found!");
}
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -1,4 +1,4 @@
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
@@ -7,7 +7,7 @@ use serde_json::Value;
use crate::{
api::{
core::{log_event, log_user_event},
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
EmptyResult, JsonResult, PasswordOrOtpData,
},
auth::{ClientHeaders, Headers},
crypto,
@@ -19,6 +19,7 @@ use crate::{
pub mod authenticator;
pub mod duo;
pub mod duo_oidc;
pub mod email;
pub mod protected_actions;
pub mod webauthn;
@@ -50,52 +51,51 @@ async fn get_twofactor(headers: Headers, mut conn: DbConn) -> Json<Value> {
let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
Json(json!({
"Data": twofactors_json,
"Object": "list",
"ContinuationToken": null,
"data": twofactors_json,
"object": "list",
"continuationToken": null,
}))
}
#[post("/two-factor/get-recover", data = "<data>")]
async fn get_recover(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
async fn get_recover(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, true, &mut conn).await?;
Ok(Json(json!({
"Code": user.totp_recover,
"Object": "twoFactorRecover"
"code": user.totp_recover,
"object": "twoFactorRecover"
})))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct RecoverTwoFactor {
MasterPasswordHash: String,
Email: String,
RecoveryCode: String,
master_password_hash: String,
email: String,
recovery_code: String,
}
#[post("/two-factor/recover", data = "<data>")]
async fn recover(data: JsonUpcase<RecoverTwoFactor>, client_headers: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner().data;
async fn recover(data: Json<RecoverTwoFactor>, client_headers: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: RecoverTwoFactor = data.into_inner();
use crate::db::models::User;
// Get the user
let mut user = match User::find_by_mail(&data.Email, &mut conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again."),
let Some(mut user) = User::find_by_mail(&data.email, &mut conn).await else {
err!("Username or password is incorrect. Try again.")
};
// Check password
if !user.check_valid_password(&data.MasterPasswordHash) {
if !user.check_valid_password(&data.master_password_hash) {
err!("Username or password is incorrect. Try again.")
}
// Check if recovery code is correct
if !user.check_valid_recovery_code(&data.RecoveryCode) {
if !user.check_valid_recovery_code(&data.recovery_code) {
err!("Recovery code is incorrect. Try again.")
}
@@ -127,27 +127,27 @@ async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct DisableTwoFactorData {
MasterPasswordHash: Option<String>,
Otp: Option<String>,
Type: NumberOrString,
master_password_hash: Option<String>,
otp: Option<String>,
r#type: NumberOrString,
}
#[post("/two-factor/disable", data = "<data>")]
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
async fn disable_twofactor(data: Json<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner();
let user = headers.user;
// Delete directly after a valid token has been provided
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
master_password_hash: data.master_password_hash,
otp: data.otp,
}
.validate(&user, true, &mut conn)
.await?;
let type_ = data.Type.into_i32()?;
let type_ = data.r#type.into_i32()?;
if let Some(twofactor) = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await {
twofactor.delete(&mut conn).await?;
@@ -160,30 +160,29 @@ async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Head
}
Ok(Json(json!({
"Enabled": false,
"Type": type_,
"Object": "twoFactorProvider"
"enabled": false,
"type": type_,
"object": "twoFactorProvider"
})))
}
#[put("/two-factor/disable", data = "<data>")]
async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn disable_twofactor_put(data: Json<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
disable_twofactor(data, headers, conn).await
}
pub async fn enforce_2fa_policy(
user: &User,
act_uuid: &str,
act_user_id: &UserId,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
for member in UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn)
.await
.into_iter()
for member in
Membership::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn).await.into_iter()
{
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
if member.atype < UserOrgType::Admin {
if member.atype < MembershipType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
@@ -196,7 +195,7 @@ pub async fn enforce_2fa_policy(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
&member.org_uuid,
act_uuid,
act_user_id,
device_type,
ip,
conn,
@@ -209,16 +208,16 @@ pub async fn enforce_2fa_policy(
}
pub async fn enforce_2fa_policy_for_org(
org_uuid: &str,
act_uuid: &str,
org_id: &OrganizationId,
act_user_id: &UserId,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
let org = Organization::find_by_uuid(org_uuid, conn).await.unwrap();
for member in UserOrganization::find_confirmed_by_org(org_uuid, conn).await.into_iter() {
let org = Organization::find_by_uuid(org_id, conn).await.unwrap();
for member in Membership::find_confirmed_by_org(org_id, conn).await.into_iter() {
// Don't enforce the policy for Admins and Owners.
if member.atype < UserOrgType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
if member.atype < MembershipType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
if CONFIG.mail_enabled() {
let user = User::find_by_uuid(&member.user_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
@@ -230,8 +229,8 @@ pub async fn enforce_2fa_policy_for_org(
log_event(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
org_uuid,
act_uuid,
org_id,
act_user_id,
device_type,
ip,
conn,
@@ -259,7 +258,7 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
};
let now = Utc::now().naive_utc();
let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
let time_limit = TimeDelta::try_minutes(CONFIG.incomplete_2fa_time_limit()).unwrap();
let time_before = now - time_limit;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &mut conn).await;
for login in incomplete_logins {
@@ -268,10 +267,24 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
"User {} did not complete a 2FA login within the configured time limit. IP: {}",
user.email, login.ip_address
);
mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
.await
.expect("Error sending incomplete 2FA email");
login.delete(&mut conn).await.expect("Error deleting incomplete 2FA record");
match mail::send_incomplete_2fa_login(
&user.email,
&login.ip_address,
&login.login_time,
&login.device_name,
&DeviceType::from_i32(login.device_type).to_string(),
)
.await
{
Ok(_) => {
if let Err(e) = login.delete(&mut conn).await {
error!("Error deleting incomplete 2FA record: {e:#?}");
}
}
Err(e) => {
error!("Error sending incomplete 2FA email: {e:#?}");
}
}
}
}

View File

@@ -1,12 +1,12 @@
use chrono::{Duration, NaiveDateTime, Utc};
use rocket::Route;
use chrono::{DateTime, TimeDelta, Utc};
use rocket::{serde::json::Json, Route};
use crate::{
api::{EmptyResult, JsonUpcase},
api::EmptyResult,
auth::Headers,
crypto,
db::{
models::{TwoFactor, TwoFactorType},
models::{TwoFactor, TwoFactorType, UserId},
DbConn,
},
error::{Error, MapResult},
@@ -18,7 +18,7 @@ pub fn routes() -> Vec<Route> {
}
/// Data stored in the TwoFactor table in the db
#[derive(Serialize, Deserialize, Debug)]
#[derive(Debug, Serialize, Deserialize)]
pub struct ProtectedActionData {
/// Token issued to validate the protected action
pub token: String,
@@ -32,7 +32,7 @@ impl ProtectedActionData {
pub fn new(token: String) -> Self {
Self {
token,
token_sent: Utc::now().naive_utc().timestamp(),
token_sent: Utc::now().timestamp(),
attempts: 0,
}
}
@@ -42,7 +42,7 @@ impl ProtectedActionData {
}
pub fn from_json(string: &str) -> Result<Self, Error> {
let res: Result<Self, crate::serde_json::Error> = serde_json::from_str(string);
let res: Result<Self, serde_json::Error> = serde_json::from_str(string);
match res {
Ok(x) => Ok(x),
Err(_) => err!("Could not decode ProtectedActionData from string"),
@@ -82,32 +82,33 @@ async fn request_otp(headers: Headers, mut conn: DbConn) -> EmptyResult {
}
#[derive(Deserialize, Serialize, Debug)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct ProtectedActionVerify {
OTP: String,
#[serde(rename = "OTP", alias = "otp")]
otp: String,
}
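// Round-trip sketch (hypothetical value): deserialization accepts both "OTP"
// and "otp" thanks to the alias, while serialization always emits the renamed
// "OTP" key:
fn example_otp_casing() -> Result<(), serde_json::Error> {
    let v: ProtectedActionVerify = serde_json::from_str(r#"{"otp":"123456"}"#)?;
    assert_eq!(serde_json::to_string(&v)?, r#"{"OTP":"123456"}"#);
    Ok(())
}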
#[post("/accounts/verify-otp", data = "<data>")]
async fn verify_otp(data: JsonUpcase<ProtectedActionVerify>, headers: Headers, mut conn: DbConn) -> EmptyResult {
async fn verify_otp(data: Json<ProtectedActionVerify>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() {
err!("Email is disabled for this server. Either enable email or login using your master password instead of login via device.");
}
let user = headers.user;
let data: ProtectedActionVerify = data.into_inner().data;
let data: ProtectedActionVerify = data.into_inner();
// Delete the token after one validation attempt
// This endpoint only gets called for the vault export, and doesn't need a second attempt
validate_protected_action_otp(&data.OTP, &user.uuid, true, &mut conn).await
validate_protected_action_otp(&data.otp, &user.uuid, true, &mut conn).await
}
pub async fn validate_protected_action_otp(
otp: &str,
user_uuid: &str,
user_id: &UserId,
delete_if_valid: bool,
conn: &mut DbConn,
) -> EmptyResult {
let pa = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::ProtectedActions as i32, conn)
let pa = TwoFactor::find_by_user_and_type(user_id, TwoFactorType::ProtectedActions as i32, conn)
.await
.map_res("Protected action token not found, try sending the code again or restart the process")?;
let mut pa_data = ProtectedActionData::from_json(&pa.data)?;
@@ -122,9 +123,9 @@ pub async fn validate_protected_action_otp(
// Check if the token has expired (using the email 2FA expiration time)
let date =
NaiveDateTime::from_timestamp_opt(pa_data.token_sent, 0).expect("Protected Action token timestamp invalid.");
DateTime::from_timestamp(pa_data.token_sent, 0).expect("Protected Action token timestamp invalid.").naive_utc();
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
if date + TimeDelta::try_seconds(max_time).unwrap() < Utc::now().naive_utc() {
pa.delete(conn).await?;
err!("Token has expired")
}

View File

@@ -7,11 +7,11 @@ use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState,
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
EmptyResult, JsonResult, PasswordOrOtpData,
},
auth::Headers,
db::{
models::{EventType, TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType, UserId},
DbConn,
},
error::Error,
@@ -96,20 +96,20 @@ pub struct WebauthnRegistration {
impl WebauthnRegistration {
fn to_json(&self) -> Value {
json!({
"Id": self.id,
"Name": self.name,
"id": self.id,
"name": self.name,
"migrated": self.migrated,
})
}
}
#[post("/two-factor/get-webauthn", data = "<data>")]
async fn get_webauthn(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn get_webauthn(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
let data: PasswordOrOtpData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
@@ -118,19 +118,15 @@ async fn get_webauthn(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": enabled,
"Keys": registrations_json,
"Object": "twoFactorWebAuthn"
"enabled": enabled,
"keys": registrations_json,
"object": "twoFactorWebAuthn"
})))
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
async fn generate_webauthn_challenge(
data: JsonUpcase<PasswordOrOtpData>,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
async fn generate_webauthn_challenge(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
@@ -152,7 +148,7 @@ async fn generate_webauthn_challenge(
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
TwoFactor::new(user.uuid.clone(), type_, serde_json::to_string(&state)?).save(&mut conn).await?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
@@ -161,102 +157,94 @@ async fn generate_webauthn_challenge(
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct EnableWebauthnData {
Id: NumberOrString, // 1..5
Name: String,
DeviceResponse: RegisterPublicKeyCredentialCopy,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
id: NumberOrString, // 1..5
name: String,
device_response: RegisterPublicKeyCredentialCopy,
master_password_hash: Option<String>,
otp: Option<String>,
}
// This is copied from RegisterPublicKeyCredential to change the Response object's casing
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct RegisterPublicKeyCredentialCopy {
pub Id: String,
pub RawId: Base64UrlSafeData,
pub Response: AuthenticatorAttestationResponseRawCopy,
pub Type: String,
pub id: String,
pub raw_id: Base64UrlSafeData,
pub response: AuthenticatorAttestationResponseRawCopy,
pub r#type: String,
}
// This is copied from AuthenticatorAttestationResponseRaw to change clientDataJSON to clientDataJson
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct AuthenticatorAttestationResponseRawCopy {
pub AttestationObject: Base64UrlSafeData,
pub ClientDataJson: Base64UrlSafeData,
#[serde(rename = "AttestationObject", alias = "attestationObject")]
pub attestation_object: Base64UrlSafeData,
#[serde(rename = "clientDataJson", alias = "clientDataJSON")]
pub client_data_json: Base64UrlSafeData,
}
impl From<RegisterPublicKeyCredentialCopy> for RegisterPublicKeyCredential {
fn from(r: RegisterPublicKeyCredentialCopy) -> Self {
Self {
id: r.Id,
raw_id: r.RawId,
id: r.id,
raw_id: r.raw_id,
response: AuthenticatorAttestationResponseRaw {
attestation_object: r.Response.AttestationObject,
client_data_json: r.Response.ClientDataJson,
attestation_object: r.response.attestation_object,
client_data_json: r.response.client_data_json,
},
type_: r.Type,
type_: r.r#type,
}
}
}
// This is copied from PublicKeyCredential to change the Response object's casing
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct PublicKeyCredentialCopy {
pub Id: String,
pub RawId: Base64UrlSafeData,
pub Response: AuthenticatorAssertionResponseRawCopy,
pub Extensions: Option<AuthenticationExtensionsClientOutputsCopy>,
pub Type: String,
pub id: String,
pub raw_id: Base64UrlSafeData,
pub response: AuthenticatorAssertionResponseRawCopy,
pub extensions: Option<AuthenticationExtensionsClientOutputs>,
pub r#type: String,
}
// This is copied from AuthenticatorAssertionResponseRaw to change clientDataJSON to clientDataJson
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct AuthenticatorAssertionResponseRawCopy {
pub AuthenticatorData: Base64UrlSafeData,
pub ClientDataJson: Base64UrlSafeData,
pub Signature: Base64UrlSafeData,
pub UserHandle: Option<Base64UrlSafeData>,
}
#[derive(Debug, Deserialize)]
#[allow(non_snake_case)]
pub struct AuthenticationExtensionsClientOutputsCopy {
#[serde(default)]
pub Appid: bool,
pub authenticator_data: Base64UrlSafeData,
#[serde(rename = "clientDataJson", alias = "clientDataJSON")]
pub client_data_json: Base64UrlSafeData,
pub signature: Base64UrlSafeData,
pub user_handle: Option<Base64UrlSafeData>,
}
impl From<PublicKeyCredentialCopy> for PublicKeyCredential {
fn from(r: PublicKeyCredentialCopy) -> Self {
Self {
id: r.Id,
raw_id: r.RawId,
id: r.id,
raw_id: r.raw_id,
response: AuthenticatorAssertionResponseRaw {
authenticator_data: r.Response.AuthenticatorData,
client_data_json: r.Response.ClientDataJson,
signature: r.Response.Signature,
user_handle: r.Response.UserHandle,
authenticator_data: r.response.authenticator_data,
client_data_json: r.response.client_data_json,
signature: r.response.signature,
user_handle: r.response.user_handle,
},
extensions: r.Extensions.map(|e| AuthenticationExtensionsClientOutputs {
appid: e.Appid,
}),
type_: r.Type,
extensions: r.extensions,
type_: r.r#type,
}
}
}
#[post("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner().data;
async fn activate_webauthn(data: Json<EnableWebauthnData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableWebauthnData = data.into_inner();
let mut user = headers.user;
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
master_password_hash: data.master_password_hash,
otp: data.otp,
}
.validate(&user, true, &mut conn)
.await?;
@@ -274,13 +262,13 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
// Verify the credentials with the saved state
let (credential, _data) =
WebauthnConfig::load().register_credential(&data.DeviceResponse.into(), &state, |_| Ok(false))?;
WebauthnConfig::load().register_credential(&data.device_response.into(), &state, |_| Ok(false))?;
let mut registrations: Vec<_> = get_webauthn_registrations(&user.uuid, &mut conn).await?.1;
// TODO: Check for repeated IDs
registrations.push(WebauthnRegistration {
id: data.Id.into_i32()?,
name: data.Name,
id: data.id.into_i32()?,
name: data.name,
migrated: false,
credential,
@@ -296,42 +284,41 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
let keys_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
"enabled": true,
"keys": keys_json,
"object": "twoFactorU2f"
})))
}
#[put("/two-factor/webauthn", data = "<data>")]
async fn activate_webauthn_put(data: JsonUpcase<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_webauthn_put(data: Json<EnableWebauthnData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_webauthn(data, headers, conn).await
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DeleteU2FData {
Id: NumberOrString,
MasterPasswordHash: String,
id: NumberOrString,
master_password_hash: String,
}
#[delete("/two-factor/webauthn", data = "<data>")]
async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let id = data.data.Id.into_i32()?;
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
async fn delete_webauthn(data: Json<DeleteU2FData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let id = data.id.into_i32()?;
if !headers.user.check_valid_password(&data.master_password_hash) {
err!("Invalid password");
}
let mut tf =
match TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await {
Some(tf) => tf,
None => err!("Webauthn data not found!"),
};
let Some(mut tf) =
TwoFactor::find_by_user_and_type(&headers.user.uuid, TwoFactorType::Webauthn as i32, &mut conn).await
else {
err!("Webauthn data not found!")
};
let mut data: Vec<WebauthnRegistration> = serde_json::from_str(&tf.data)?;
let item_pos = match data.iter().position(|r| r.id == id) {
Some(p) => p,
None => err!("Webauthn entry not found"),
let Some(item_pos) = data.iter().position(|r| r.id == id) else {
err!("Webauthn entry not found")
};
let removed_item = data.remove(item_pos);
@@ -358,27 +345,27 @@ async fn delete_webauthn(data: JsonUpcase<DeleteU2FData>, headers: Headers, mut
let keys_json: Vec<Value> = data.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
"Enabled": true,
"Keys": keys_json,
"Object": "twoFactorU2f"
"enabled": true,
"keys": keys_json,
"object": "twoFactorU2f"
})))
}
pub async fn get_webauthn_registrations(
user_uuid: &str,
user_id: &UserId,
conn: &mut DbConn,
) -> Result<(bool, Vec<WebauthnRegistration>), Error> {
let type_ = TwoFactorType::Webauthn as i32;
match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
match TwoFactor::find_by_user_and_type(user_id, type_, conn).await {
Some(tf) => Ok((tf.enabled, serde_json::from_str(&tf.data)?)),
None => Ok((false, Vec::new())), // If no data, return empty list
}
}
pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> JsonResult {
pub async fn generate_webauthn_login(user_id: &UserId, conn: &mut DbConn) -> JsonResult {
// Load saved credentials
let creds: Vec<Credential> =
get_webauthn_registrations(user_uuid, conn).await?.1.into_iter().map(|r| r.credential).collect();
get_webauthn_registrations(user_id, conn).await?.1.into_iter().map(|r| r.credential).collect();
if creds.is_empty() {
err!("No Webauthn devices registered")
@@ -389,7 +376,7 @@ pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> Json
let (response, state) = WebauthnConfig::load().generate_challenge_authenticate_options(creds, Some(ext))?;
// Save the challenge state for later validation
TwoFactor::new(user_uuid.into(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
TwoFactor::new(user_id.clone(), TwoFactorType::WebauthnLoginChallenge, serde_json::to_string(&state)?)
.save(conn)
.await?;
@@ -397,9 +384,9 @@ pub async fn generate_webauthn_login(user_uuid: &str, conn: &mut DbConn) -> Json
Ok(Json(serde_json::to_value(response.public_key)?))
}
pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn validate_webauthn_login(user_id: &UserId, response: &str, conn: &mut DbConn) -> EmptyResult {
let type_ = TwoFactorType::WebauthnLoginChallenge as i32;
let state = match TwoFactor::find_by_user_and_type(user_uuid, type_, conn).await {
let state = match TwoFactor::find_by_user_and_type(user_id, type_, conn).await {
Some(tf) => {
let state: AuthenticationState = serde_json::from_str(&tf.data)?;
tf.delete(conn).await?;
@@ -413,10 +400,10 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut
),
};
let rsp: crate::util::UpCase<PublicKeyCredentialCopy> = serde_json::from_str(response)?;
let rsp: PublicKeyCredential = rsp.data.into();
let rsp: PublicKeyCredentialCopy = serde_json::from_str(response)?;
let rsp: PublicKeyCredential = rsp.into();
let mut registrations = get_webauthn_registrations(user_uuid, conn).await?.1;
let mut registrations = get_webauthn_registrations(user_id, conn).await?.1;
// If the credential we received is migrated from U2F, enable the U2F compatibility
//let use_u2f = registrations.iter().any(|r| r.migrated && r.credential.cred_id == rsp.raw_id.0);
@@ -426,7 +413,7 @@ pub async fn validate_webauthn_login(user_uuid: &str, response: &str, conn: &mut
if &reg.credential.cred_id == cred_id {
reg.credential.counter = auth_data.counter;
TwoFactor::new(user_uuid.to_string(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
TwoFactor::new(user_id.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
.save(conn)
.await?;
return Ok(());

View File

@@ -1,12 +1,12 @@
use rocket::serde::json::Json;
use rocket::Route;
use serde_json::Value;
use yubico::{config::Config, verify};
use yubico::{config::Config, verify_async};
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
EmptyResult, JsonResult, PasswordOrOtpData,
},
auth::Headers,
db::{
@@ -21,33 +21,35 @@ pub fn routes() -> Vec<Route> {
routes![generate_yubikey, activate_yubikey, activate_yubikey_put,]
}
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EnableYubikeyData {
Key1: Option<String>,
Key2: Option<String>,
Key3: Option<String>,
Key4: Option<String>,
Key5: Option<String>,
Nfc: bool,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
key1: Option<String>,
key2: Option<String>,
key3: Option<String>,
key4: Option<String>,
key5: Option<String>,
nfc: bool,
master_password_hash: Option<String>,
otp: Option<String>,
}
#[derive(Deserialize, Serialize, Debug)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct YubikeyMetadata {
Keys: Vec<String>,
pub Nfc: bool,
#[serde(rename = "keys", alias = "Keys")]
keys: Vec<String>,
#[serde(rename = "nfc", alias = "Nfc")]
pub nfc: bool,
}
fn parse_yubikeys(data: &EnableYubikeyData) -> Vec<String> {
let data_keys = [&data.Key1, &data.Key2, &data.Key3, &data.Key4, &data.Key5];
let data_keys = [&data.key1, &data.key2, &data.key3, &data.key4, &data.key5];
data_keys.iter().filter_map(|e| e.as_ref().cloned()).collect()
}
fn jsonify_yubikeys(yubikeys: Vec<String>) -> serde_json::Value {
fn jsonify_yubikeys(yubikeys: Vec<String>) -> Value {
let mut result = Value::Object(serde_json::Map::new());
for (i, key) in yubikeys.into_iter().enumerate() {
@@ -74,56 +76,53 @@ async fn verify_yubikey_otp(otp: String) -> EmptyResult {
let config = Config::default().set_client_id(yubico_id).set_key(yubico_secret);
match CONFIG.yubico_server() {
Some(server) => {
tokio::task::spawn_blocking(move || verify(otp, config.set_api_hosts(vec![server]))).await.unwrap()
}
None => tokio::task::spawn_blocking(move || verify(otp, config)).await.unwrap(),
Some(server) => verify_async(otp, config.set_api_hosts(vec![server])).await,
None => verify_async(otp, config).await,
}
.map_res("Failed to verify OTP")
.and(Ok(()))
}
#[post("/two-factor/get-yubikey", data = "<data>")]
async fn generate_yubikey(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: Json<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
// Make sure the credentials are set
get_yubico_credentials()?;
let data: PasswordOrOtpData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner();
let user = headers.user;
data.validate(&user, false, &mut conn).await?;
let user_uuid = &user.uuid;
let user_id = &user.uuid;
let yubikey_type = TwoFactorType::YubiKey as i32;
let r = TwoFactor::find_by_user_and_type(user_uuid, yubikey_type, &mut conn).await;
let r = TwoFactor::find_by_user_and_type(user_id, yubikey_type, &mut conn).await;
if let Some(r) = r {
let yubikey_metadata: YubikeyMetadata = serde_json::from_str(&r.data)?;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
let mut result = jsonify_yubikeys(yubikey_metadata.keys);
result["Enabled"] = Value::Bool(true);
result["Nfc"] = Value::Bool(yubikey_metadata.Nfc);
result["Object"] = Value::String("twoFactorU2f".to_owned());
result["enabled"] = Value::Bool(true);
result["nfc"] = Value::Bool(yubikey_metadata.nfc);
result["object"] = Value::String("twoFactorU2f".to_owned());
Ok(Json(result))
} else {
Ok(Json(json!({
"Enabled": false,
"Object": "twoFactorU2f",
"enabled": false,
"object": "twoFactorU2f",
})))
}
}
#[post("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableYubikeyData = data.into_inner().data;
async fn activate_yubikey(data: Json<EnableYubikeyData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: EnableYubikeyData = data.into_inner();
let mut user = headers.user;
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash.clone(),
Otp: data.Otp.clone(),
master_password_hash: data.master_password_hash.clone(),
otp: data.otp.clone(),
}
.validate(&user, true, &mut conn)
.await?;
@@ -139,8 +138,8 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
if yubikeys.is_empty() {
return Ok(Json(json!({
"Enabled": false,
"Object": "twoFactorU2f",
"enabled": false,
"object": "twoFactorU2f",
})));
}
@@ -157,8 +156,8 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
let yubikey_ids: Vec<String> = yubikeys.into_iter().map(|x| (x[..12]).to_owned()).collect();
let yubikey_metadata = YubikeyMetadata {
Keys: yubikey_ids,
Nfc: data.Nfc,
keys: yubikey_ids,
nfc: data.nfc,
};
yubikey_data.data = serde_json::to_string(&yubikey_metadata).unwrap();
@@ -168,17 +167,17 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
log_user_event(EventType::UserUpdated2fa as i32, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
let mut result = jsonify_yubikeys(yubikey_metadata.Keys);
let mut result = jsonify_yubikeys(yubikey_metadata.keys);
result["Enabled"] = Value::Bool(true);
result["Nfc"] = Value::Bool(yubikey_metadata.Nfc);
result["Object"] = Value::String("twoFactorU2f".to_owned());
result["enabled"] = Value::Bool(true);
result["nfc"] = Value::Bool(yubikey_metadata.nfc);
result["object"] = Value::String("twoFactorU2f".to_owned());
Ok(Json(result))
}
#[put("/two-factor/yubikey", data = "<data>")]
async fn activate_yubikey_put(data: JsonUpcase<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
async fn activate_yubikey_put(data: Json<EnableYubikeyData>, headers: Headers, conn: DbConn) -> JsonResult {
activate_yubikey(data, headers, conn).await
}
@@ -190,14 +189,10 @@ pub async fn validate_yubikey_login(response: &str, twofactor_data: &str) -> Emp
let yubikey_metadata: YubikeyMetadata = serde_json::from_str(twofactor_data).expect("Can't parse Yubikey Metadata");
let response_id = &response[..12];
if !yubikey_metadata.Keys.contains(&response_id.to_owned()) {
if !yubikey_metadata.keys.contains(&response_id.to_owned()) {
err!("Given Yubikey is not registered");
}
let result = verify_yubikey_otp(response.to_owned()).await;
match result {
Ok(_answer) => Ok(()),
Err(_e) => err!("Failed to verify Yubikey against OTP server"),
}
verify_yubikey_otp(response.to_owned()).await.map_res("Failed to verify Yubikey against OTP server")?;
Ok(())
}
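// A YubiKey OTP is 44 modhex characters, of which the first 12 are the key's
// stable public id; registration stores that prefix (x[..12]) and validation
// matches on it. Hypothetical OTP value:
fn example_public_id() {
    let otp = "cccccccfhcbelrhifnjrrddcvnbtdhbjdbldlunbhnkv";
    assert_eq!(&otp[..12], "cccccccfhcbe");
}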

View File

@@ -1,4 +1,5 @@
use std::{
collections::HashMap,
net::IpAddr,
sync::Arc,
time::{Duration, SystemTime},
@@ -16,14 +17,14 @@ use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
net::lookup_host,
};
use html5gum::{Emitter, HtmlString, InfallibleTokenizer, Readable, StringReader, Tokenizer};
use html5gum::{Emitter, HtmlString, Readable, StringReader, Tokenizer};
use crate::{
error::Error,
util::{get_reqwest_client_builder, Cached},
http_client::{get_reqwest_client_builder, should_block_address, CustomHttpClientError},
util::Cached,
CONFIG,
};
@@ -49,48 +50,32 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
let icon_download_timeout = Duration::from_secs(CONFIG.icon_download_timeout());
let pool_idle_timeout = Duration::from_secs(10);
// Reuse the client between requests
let client = get_reqwest_client_builder()
get_reqwest_client_builder()
.cookie_provider(Arc::clone(&cookie_store))
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(true)
.default_headers(default_headers.clone());
match client.build() {
Ok(client) => client,
Err(e) => {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder()
.cookie_provider(cookie_store)
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(false)
.default_headers(default_headers)
.build()
.expect("Failed to build client")
}
}
.default_headers(default_headers.clone())
.build()
.expect("Failed to build client")
});
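The simplified builder above constructs the shared client exactly once and drops the old trust-dns fallback path. A standalone sketch of the same reuse pattern with `once_cell` (the timeout and pool values are placeholders, not Vaultwarden's config):

use std::time::Duration;

use once_cell::sync::Lazy;
use reqwest::Client;

// Build the client once and reuse it; reqwest pools connections internally.
static CLIENT: Lazy<Client> = Lazy::new(|| {
    Client::builder()
        .timeout(Duration::from_secs(10)) // placeholder for the configured download timeout
        .pool_max_idle_per_host(5) // keep at most 5 idle connections per host
        .pool_idle_timeout(Duration::from_secs(10))
        .build()
        .expect("Failed to build client")
});

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let status = CLIENT.get("https://example.com").send().await?.status();
    println!("{status}");
    Ok(())
}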
// Build Regex only once since this takes a lot of time.
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user-defined Regex to speed up matching.
static ICON_BLACKLIST_REGEX: Lazy<dashmap::DashMap<String, Regex>> = Lazy::new(dashmap::DashMap::new);
async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
#[get("/<domain>/icon.png")]
fn icon_external(domain: &str) -> Option<Redirect> {
if !is_valid_domain(domain) {
warn!("Invalid domain: {}", domain);
return None;
}
if check_domain_blacklist_reason(domain).await.is_some() {
if should_block_address(domain) {
warn!("Blocked address: {}", domain);
return None;
}
let url = template.replace("{}", domain);
let url = CONFIG._icon_service_url().replace("{}", domain);
match CONFIG.icon_redirect_code() {
301 => Some(Redirect::moved(url)), // legacy permanent redirect
302 => Some(Redirect::found(url)), // legacy temporary redirect
@@ -103,11 +88,6 @@ async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
}
}
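Only the two legacy arms are visible in this hunk; presumably the elided arms cover 307 and 308, which Rocket's `Redirect` also supports. A sketch of the full mapping under that assumption (the fallback arm is illustrative, not the exact elided code):

use rocket::response::Redirect;

// Maps a configured redirect code to a Rocket response.
fn redirect_for(code: u32, url: String) -> Option<Redirect> {
    match code {
        301 => Some(Redirect::moved(url)),     // permanent (legacy)
        302 => Some(Redirect::found(url)),     // temporary (legacy)
        307 => Some(Redirect::temporary(url)), // temporary, method preserved
        308 => Some(Redirect::permanent(url)), // permanent, method preserved
        _ => None,
    }
}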
#[get("/<domain>/icon.png")]
async fn icon_external(domain: &str) -> Option<Redirect> {
icon_redirect(domain, &CONFIG._icon_service_url()).await
}
#[get("/<domain>/icon.png")]
async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
@@ -121,6 +101,15 @@ async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
);
}
if should_block_address(domain) {
warn!("Blocked address: {}", domain);
return Cached::ttl(
(ContentType::new("image", "png"), FALLBACK_ICON.to_vec()),
CONFIG.icon_cache_negttl(),
true,
);
}
match get_icon(domain).await {
Some((icon, icon_type)) => {
Cached::ttl((ContentType::new("image", icon_type), icon), CONFIG.icon_cache_ttl(), true)
@@ -166,155 +155,6 @@ fn is_valid_domain(domain: &str) -> bool {
true
}
/// TODO: This is extracted from IpAddr::is_global, which is unstable:
/// https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global
/// Remove once https://github.com/rust-lang/rust/issues/27709 is merged
#[allow(clippy::nonminimal_bool)]
#[cfg(not(feature = "unstable"))]
fn is_global(ip: IpAddr) -> bool {
match ip {
IpAddr::V4(ip) => {
// check if this address is 192.0.0.9 or 192.0.0.10. These addresses are the only two
// globally routable addresses in the 192.0.0.0/24 range.
if u32::from(ip) == 0xc0000009 || u32::from(ip) == 0xc000000a {
return true;
}
!ip.is_private()
&& !ip.is_loopback()
&& !ip.is_link_local()
&& !ip.is_broadcast()
&& !ip.is_documentation()
&& !(ip.octets()[0] == 100 && (ip.octets()[1] & 0b1100_0000 == 0b0100_0000))
&& !(ip.octets()[0] == 192 && ip.octets()[1] == 0 && ip.octets()[2] == 0)
&& !(ip.octets()[0] & 240 == 240 && !ip.is_broadcast())
&& !(ip.octets()[0] == 198 && (ip.octets()[1] & 0xfe) == 18)
// Make sure the address is not in 0.0.0.0/8
&& ip.octets()[0] != 0
}
IpAddr::V6(ip) => {
if ip.is_multicast() && ip.segments()[0] & 0x000f == 14 {
true
} else {
!ip.is_multicast()
&& !ip.is_loopback()
&& !((ip.segments()[0] & 0xffc0) == 0xfe80)
&& !((ip.segments()[0] & 0xfe00) == 0xfc00)
&& !ip.is_unspecified()
&& !((ip.segments()[0] == 0x2001) && (ip.segments()[1] == 0xdb8))
}
}
}
}
#[cfg(feature = "unstable")]
fn is_global(ip: IpAddr) -> bool {
ip.is_global()
}
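A few spot checks that follow directly from the exclusion list in the stable fallback; paste alongside the `is_global` above to run them:

fn main() {
    use std::net::IpAddr;
    let ip = |s: &str| s.parse::<IpAddr>().unwrap();

    assert!(!is_global(ip("10.0.0.1")));    // RFC 1918 private range
    assert!(!is_global(ip("127.0.0.1")));   // loopback
    assert!(!is_global(ip("100.64.0.1")));  // shared address space (100.64.0.0/10)
    assert!(is_global(ip("192.0.0.9")));    // explicit carve-out above
    assert!(is_global(ip("1.1.1.1")));      // ordinary public IPv4
    assert!(!is_global(ip("fe80::1")));     // IPv6 link-local
    assert!(is_global(ip("2606:4700::1"))); // ordinary public IPv6
}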
/// These are some tests to check that the implementations match
/// All IPv4 addresses can be checked in 5 mins or so and they are correct as of nightly 2020-07-11
/// The IPv6 space can't be checked in a reasonable time, so we check about ten billion random ones, all correct so far
/// Note that the is_global implementation is subject to change as new IP RFCs are created
///
/// To run while showing progress output:
/// cargo test --features sqlite,unstable -- --nocapture --ignored
#[cfg(test)]
#[cfg(feature = "unstable")]
mod tests {
use super::*;
#[test]
#[ignore]
fn test_ipv4_global() {
for a in 0..u8::MAX {
println!("Iter: {}/255", a);
for b in 0..u8::MAX {
for c in 0..u8::MAX {
for d in 0..u8::MAX {
let ip = IpAddr::V4(std::net::Ipv4Addr::new(a, b, c, d));
assert_eq!(ip.is_global(), is_global(ip))
}
}
}
}
}
#[test]
#[ignore]
fn test_ipv6_global() {
use ring::rand::{SecureRandom, SystemRandom};
let mut v = [0u8; 16];
let rand = SystemRandom::new();
for i in 0..1_000 {
println!("Iter: {}/1_000", i);
for _ in 0..10_000_000 {
rand.fill(&mut v).expect("Error generating random values");
let ip = IpAddr::V6(std::net::Ipv6Addr::new(
(v[14] as u16) << 8 | v[15] as u16,
(v[12] as u16) << 8 | v[13] as u16,
(v[10] as u16) << 8 | v[11] as u16,
(v[8] as u16) << 8 | v[9] as u16,
(v[6] as u16) << 8 | v[7] as u16,
(v[4] as u16) << 8 | v[5] as u16,
(v[2] as u16) << 8 | v[3] as u16,
(v[0] as u16) << 8 | v[1] as u16,
));
assert_eq!(ip.is_global(), is_global(ip))
}
}
}
}
#[derive(Clone)]
enum DomainBlacklistReason {
Regex,
IP,
}
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn check_domain_blacklist_reason(domain: &str) -> Option<DomainBlacklistReason> {
// First check whether the blacklist regex matches.
// This prevents the blocked domain(s) from being leaked via a DNS lookup.
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
// Use the pre-generated Regex stored in a Lazy HashMap if there's one, else generate it.
let is_match = if let Some(regex) = ICON_BLACKLIST_REGEX.get(&blacklist) {
regex.is_match(domain)
} else {
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone has changed it via the admin interface.
if ICON_BLACKLIST_REGEX.len() >= 1 {
ICON_BLACKLIST_REGEX.clear();
}
// Generate the regex to store into the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
let is_match = blacklist_regex.is_match(domain);
ICON_BLACKLIST_REGEX.insert(blacklist.clone(), blacklist_regex);
is_match
};
if is_match {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
return Some(DomainBlacklistReason::Regex);
}
}
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return Some(DomainBlacklistReason::IP);
}
}
}
}
None
}
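The removed helper caches the compiled blacklist regex in a `DashMap` so the hot path never recompiles it, and clears the map when the pattern changes. A standalone sketch of that caching pattern (names are illustrative):

use dashmap::DashMap;
use once_cell::sync::Lazy;
use regex::Regex;

// One entry per configured pattern; cleared on change so the map cannot grow
// after the pattern is edited via the admin interface.
static REGEX_CACHE: Lazy<DashMap<String, Regex>> = Lazy::new(DashMap::new);

fn is_match_cached(pattern: &str, haystack: &str) -> bool {
    if let Some(regex) = REGEX_CACHE.get(pattern) {
        return regex.is_match(haystack);
    }
    // First use, or the pattern changed: drop stale entries, compile, and cache.
    REGEX_CACHE.clear();
    let regex = Regex::new(pattern).expect("invalid blacklist regex");
    let matched = regex.is_match(haystack);
    REGEX_CACHE.insert(pattern.to_string(), regex);
    matched
}

fn main() {
    assert!(is_match_cached(r"^internal\.", "internal.example.com"));
    assert!(!is_match_cached(r"^internal\.", "example.com")); // second call hits the cache
}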
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);
@@ -342,6 +182,13 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
}
Err(e) => {
// If this error comes from the custom resolver, this means this is a blocked domain
// or non-global IP; don't save the miss file in this case to avoid leaking it
if let Some(error) = CustomHttpClientError::downcast_ref(&e) {
warn!("{error}");
return None;
}
warn!("Unable to download icon: {:?}", e);
let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]).await;
@@ -414,11 +261,7 @@ impl Icon {
}
}
fn get_favicons_node(
dom: InfallibleTokenizer<StringReader<'_>, FaviconEmitter>,
icons: &mut Vec<Icon>,
url: &url::Url,
) {
fn get_favicons_node(dom: Tokenizer<StringReader<'_>, FaviconEmitter>, icons: &mut Vec<Icon>, url: &url::Url) {
const TAG_LINK: &[u8] = b"link";
const TAG_BASE: &[u8] = b"base";
const TAG_HEAD: &[u8] = b"head";
@@ -427,7 +270,7 @@ fn get_favicons_node(
let mut base_url = url.clone();
let mut icon_tags: Vec<Tag> = Vec::new();
for token in dom {
for Ok(token) in dom {
let tag_name: &[u8] = &token.tag.name;
match tag_name {
TAG_LINK => {
@@ -448,9 +291,7 @@ fn get_favicons_node(
TAG_HEAD if token.closing => {
break;
}
_ => {
continue;
}
_ => {}
}
}
@@ -491,42 +332,48 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
let ssldomain = format!("https://{domain}");
let httpdomain = format!("http://{domain}");
// First check the domain as given during the request for both HTTPS and HTTP.
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)).await {
Ok(c) => Ok(c),
Err(e) => {
let mut sub_resp = Err(e);
// First check the domain as given during the request for HTTPS.
let resp = match get_page(&ssldomain).await {
Err(e) if CustomHttpClientError::downcast_ref(&e).is_none() => {
// If we get an error that is not caused by the blacklist, we retry with HTTP
match get_page(&httpdomain).await {
mut sub_resp @ Err(_) => {
// When the domain is not an IP, and has more than one dot, remove all subdomains.
let is_ip = domain.parse::<IpAddr>();
if is_ip.is_err() && domain.matches('.').count() > 1 {
let mut domain_parts = domain.split('.');
let base_domain = format!(
"{base}.{tld}",
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
// When the domain is not an IP, and has more than one dot, remove all subdomains.
let is_ip = domain.parse::<IpAddr>();
if is_ip.is_err() && domain.matches('.').count() > 1 {
let mut domain_parts = domain.split('.');
let base_domain = format!(
"{base}.{tld}",
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
// When the domain is not an IP, and has fewer than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
// When the domain is not an IP, and has fewer than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
}
}
sub_resp
}
res => res,
}
sub_resp
}
// If we get a result or a blacklist error, just continue
res => res,
};
// Create the iconlist
@@ -548,7 +395,7 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// 384KB should be more than enough for the HTML, as we only really need the HTML header.
let limited_reader = stream_to_bytes_limit(content, 384 * 1024).await?.to_vec();
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default()).infallible();
let dom = Tokenizer::new_with_emitter(limited_reader.to_reader(), FaviconEmitter::default());
get_favicons_node(dom, &mut iconlist, &url);
} else {
// Add the default favicon.ico to the list with just the given domain
@@ -573,21 +420,12 @@ async fn get_page(url: &str) -> Result<Response, Error> {
}
async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
match check_domain_blacklist_reason(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
Some(DomainBlacklistReason::Regex) => warn!("Favicon '{}' is from a blacklisted domain!", url),
Some(DomainBlacklistReason::IP) => warn!("Favicon '{}' is hosted on a non-global IP!", url),
None => (),
}
let mut client = CLIENT.get(url);
if !referer.is_empty() {
client = client.header("Referer", referer)
}
match client.send().await {
Ok(c) => c.error_for_status().map_err(Into::into),
Err(e) => err_silent!(format!("{e}")),
}
Ok(client.send().await?.error_for_status()?)
}
/// Returns an Integer with the priority of the icon type to prefer.
@@ -603,6 +441,9 @@ async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Err
/// priority2 = get_icon_priority("https://example.com/path/to/a/favicon.ico", "");
/// ```
fn get_icon_priority(href: &str, sizes: &str) -> u8 {
static PRIORITY_MAP: Lazy<HashMap<&'static str, u8>> =
Lazy::new(|| [(".png", 10), (".jpg", 20), (".jpeg", 20)].into_iter().collect());
// Check if there is a dimension set
let (width, height) = parse_sizes(sizes);
@@ -627,13 +468,9 @@ fn get_icon_priority(href: &str, sizes: &str) -> u8 {
200
}
} else {
// Change priority by file extension
if href.ends_with(".png") {
10
} else if href.ends_with(".jpg") || href.ends_with(".jpeg") {
20
} else {
30
match href.rsplit_once('.') {
Some((_, extension)) => PRIORITY_MAP.get(&*extension.to_ascii_lowercase()).copied().unwrap_or(30),
None => 30,
}
}
}
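With an empty `sizes` attribute the function falls through to the extension branch, so the rewritten lookup behaves as sketched below (expected values follow from the map above; assumes `get_icon_priority` is in scope):

fn main() {
    assert_eq!(get_icon_priority("https://example.com/favicon.png", ""), 10);
    assert_eq!(get_icon_priority("https://example.com/favicon.jpg", ""), 20);
    // Unknown extensions (and hrefs without a dot) fall back to 30.
    assert_eq!(get_icon_priority("https://example.com/favicon.ico", ""), 30);
}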
@@ -670,12 +507,6 @@ fn parse_sizes(sizes: &str) -> (u16, u16) {
}
async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
match check_domain_blacklist_reason(domain).await {
Some(DomainBlacklistReason::Regex) => err_silent!("Domain is blacklisted", domain),
Some(DomainBlacklistReason::IP) => err_silent!("Host resolves to a non-global IP", domain),
None => (),
}
let icon_result = get_icon_url(domain).await?;
let mut buffer = Bytes::new();
@@ -711,22 +542,19 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
match get_page_with_referer(&icon.href, &icon_result.referer).await {
Ok(res) => {
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
let res = get_page_with_referer(&icon.href, &icon_result.referer).await?;
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
continue;
}
info!("Downloaded icon from {}", icon.href);
break;
}
Err(e) => debug!("{:?}", e),
};
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
continue;
}
info!("Downloaded icon from {}", icon.href);
break;
}
}
@@ -789,7 +617,7 @@ use cookie_store::CookieStore;
pub struct Jar(std::sync::RwLock<CookieStore>);
impl reqwest::cookie::CookieStore for Jar {
fn set_cookies(&self, cookie_headers: &mut dyn Iterator<Item = &header::HeaderValue>, url: &url::Url) {
fn set_cookies(&self, cookie_headers: &mut dyn Iterator<Item = &HeaderValue>, url: &url::Url) {
use cookie::{Cookie as RawCookie, ParseError as RawCookieParseError};
use time::Duration;
@@ -808,7 +636,7 @@ impl reqwest::cookie::CookieStore for Jar {
cookie_store.store_response_cookies(cookies, url);
}
fn cookies(&self, url: &url::Url) -> Option<header::HeaderValue> {
fn cookies(&self, url: &url::Url) -> Option<HeaderValue> {
let cookie_store = self.0.read().unwrap();
let s = cookie_store
.get_request_values(url)
@@ -820,7 +648,7 @@ impl reqwest::cookie::CookieStore for Jar {
return None;
}
header::HeaderValue::from_maybe_shared(Bytes::from(s)).ok()
HeaderValue::from_maybe_shared(Bytes::from(s)).ok()
}
}
@@ -828,7 +656,7 @@ impl reqwest::cookie::CookieStore for Jar {
/// The FaviconEmitter is using an optimized version of the DefaultEmitter.
/// This prevents emitting tags like comments, doctype and also strings between the tags.
/// But it will also only emit the tags we need and only if they have the correct attributes
/// Therefor parsing the HTML content is faster.
/// Therefore parsing the HTML content is faster.
use std::collections::BTreeMap;
#[derive(Default)]


@@ -12,10 +12,10 @@ use crate::{
core::{
accounts::{PreloginData, RegisterData, _prelogin, _register},
log_user_event,
two_factor::{authenticator, duo, email, enforce_2fa_policy, webauthn, yubikey},
two_factor::{authenticator, duo, duo_oidc, email, enforce_2fa_policy, webauthn, yubikey},
},
push::register_push_device,
ApiResult, EmptyResult, JsonResult, JsonUpcase,
ApiResult, EmptyResult, JsonResult,
},
auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp},
db::{models::*, DbConn},
@@ -31,7 +31,7 @@ pub fn routes() -> Vec<Route> {
async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn: DbConn) -> JsonResult {
let data: ConnectData = data.into_inner();
let mut user_uuid: Option<String> = None;
let mut user_id: Option<UserId> = None;
let login_result = match data.grant_type.as_ref() {
"refresh_token" => {
@@ -48,7 +48,7 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_password_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
_password_login(data, &mut user_id, &mut conn, &client_header.ip).await
}
"client_credentials" => {
_check_is_some(&data.client_id, "client_id cannot be blank")?;
@@ -59,17 +59,17 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
_check_is_some(&data.device_name, "device_name cannot be blank")?;
_check_is_some(&data.device_type, "device_type cannot be blank")?;
_api_key_login(data, &mut user_uuid, &mut conn, &client_header.ip).await
_api_key_login(data, &mut user_id, &mut conn, &client_header.ip).await
}
t => err!("Invalid type", t),
};
if let Some(user_uuid) = user_uuid {
if let Some(user_id) = user_id {
match &login_result {
Ok(_) => {
log_user_event(
EventType::UserLoggedIn as i32,
&user_uuid,
&user_id,
client_header.device_type,
&client_header.ip.ip,
&mut conn,
@@ -80,7 +80,7 @@ async fn login(data: Form<ConnectData>, client_header: ClientHeaders, mut conn:
if let Some(ev) = e.get_event() {
log_user_event(
ev.event as i32,
&user_uuid,
&user_id,
client_header.device_type,
&client_header.ip.ip,
&mut conn,
@@ -111,7 +111,7 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
// let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
@@ -120,24 +120,28 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
"expires_in": expires_in,
"token_type": "Bearer",
"refresh_token": device.refresh_token,
"Key": user.akey,
"PrivateKey": user.private_key,
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": scope,
"unofficialServer": true,
});
Ok(Json(result))
}
#[derive(Default, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct MasterPasswordPolicy {
min_complexity: u8,
min_length: u32,
require_lower: bool,
require_upper: bool,
require_numbers: bool,
require_special: bool,
enforce_on_login: bool,
}
async fn _password_login(
data: ConnectData,
user_uuid: &mut Option<String>,
user_id: &mut Option<UserId>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
@@ -153,28 +157,29 @@ async fn _password_login(
// Get the user
let username = data.username.as_ref().unwrap().trim();
let mut user = match User::find_by_mail(username, conn).await {
Some(user) => user,
None => err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username)),
let Some(mut user) = User::find_by_mail(username, conn).await else {
err!("Username or password is incorrect. Try again", format!("IP: {}. Username: {}.", ip.ip, username))
};
// Set the user_uuid here to be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Set the user_id here to be passed back and used for event logging.
*user_id = Some(user.uuid.clone());
// Check password
let password = data.password.as_ref().unwrap();
if let Some(auth_request_uuid) = data.auth_request.clone() {
if let Some(auth_request) = AuthRequest::find_by_uuid(auth_request_uuid.as_str(), conn).await {
if !auth_request.check_access_code(password) {
err!(
"Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
// Check if the user is disabled
if !user.enabled {
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
} else {
)
}
let password = data.password.as_ref().unwrap();
// If we get an auth request, we don't check the user's password, but the access code of the auth request
if let Some(ref auth_request_id) = data.auth_request {
let Some(auth_request) = AuthRequest::find_by_uuid_and_user(auth_request_id, &user.uuid, conn).await else {
err!(
"Auth request not found. Try again.",
format!("IP: {}. Username: {}.", ip.ip, username),
@@ -182,6 +187,24 @@ async fn _password_login(
event: EventType::UserFailedLogIn,
}
)
};
let expiration_time = auth_request.creation_date + chrono::Duration::minutes(5);
let request_expired = Utc::now().naive_utc() >= expiration_time;
if auth_request.user_uuid != user.uuid
|| !auth_request.approved.unwrap_or(false)
|| request_expired
|| ip.ip.to_string() != auth_request.request_ip
|| !auth_request.check_access_code(password)
{
err!(
"Username or access code is incorrect. Try again",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn,
}
)
}
} else if !user.check_valid_password(password) {
err!(
@@ -193,8 +216,8 @@ async fn _password_login(
)
}
// Change the KDF Iterations
if user.password_iterations != CONFIG.password_iterations() {
// Change the KDF Iterations (only when not logging in with an auth request)
if data.auth_request.is_none() && user.password_iterations != CONFIG.password_iterations() {
user.password_iterations = CONFIG.password_iterations();
user.set_password(password, None, false, None);
@@ -203,17 +226,6 @@ async fn _password_login(
}
}
// Check if the user is disabled
if !user.enabled {
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username),
ErrorEvent {
event: EventType::UserFailedLogIn
}
)
}
let now = Utc::now().naive_utc();
if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
@@ -253,7 +265,7 @@ async fn _password_login(
let twofactor_token = twofactor_auth(&user, &data, &mut device, ip, conn).await?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
@@ -268,7 +280,9 @@ async fn _password_login(
}
// register push device
register_push_device(&mut device, conn).await?;
if !new_device {
register_push_device(&mut device, conn).await?;
}
// Common
// ---
@@ -276,10 +290,40 @@ async fn _password_login(
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
// let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
// Fetch all valid Master Password Policies and merge them into one with all trues and the largest numbers as one policy
let master_password_policies: Vec<MasterPasswordPolicy> =
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(
&user.uuid,
OrgPolicyType::MasterPassword,
conn,
)
.await
.into_iter()
.filter_map(|p| serde_json::from_str(&p.data).ok())
.collect();
let master_password_policy = if !master_password_policies.is_empty() {
let mut mpp_json = json!(master_password_policies.into_iter().reduce(|acc, policy| {
MasterPasswordPolicy {
min_complexity: acc.min_complexity.max(policy.min_complexity),
min_length: acc.min_length.max(policy.min_length),
require_lower: acc.require_lower || policy.require_lower,
require_upper: acc.require_upper || policy.require_upper,
require_numbers: acc.require_numbers || policy.require_numbers,
require_special: acc.require_special || policy.require_special,
enforce_on_login: acc.enforce_on_login || policy.enforce_on_login,
}
}));
mpp_json["object"] = json!("masterPasswordPolicy");
mpp_json
} else {
json!({"object": "masterPasswordPolicy"})
};
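The reduce above folds every accepted policy into a single strictest policy: numeric fields take the maximum, boolean fields OR together. A self-contained sketch of that merge, mirroring the struct defined earlier:

use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Default, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct MasterPasswordPolicy {
    min_complexity: u8,
    min_length: u32,
    require_lower: bool,
    require_upper: bool,
    require_numbers: bool,
    require_special: bool,
    enforce_on_login: bool,
}

fn main() {
    let policies = vec![
        MasterPasswordPolicy { min_length: 12, require_lower: true, ..Default::default() },
        MasterPasswordPolicy { min_length: 16, require_numbers: true, ..Default::default() },
    ];

    // Max for the numeric fields, OR for the boolean fields.
    let merged = policies.into_iter().reduce(|acc, p| MasterPasswordPolicy {
        min_complexity: acc.min_complexity.max(p.min_complexity),
        min_length: acc.min_length.max(p.min_length),
        require_lower: acc.require_lower || p.require_lower,
        require_upper: acc.require_upper || p.require_upper,
        require_numbers: acc.require_numbers || p.require_numbers,
        require_special: acc.require_special || p.require_special,
        enforce_on_login: acc.enforce_on_login || p.enforce_on_login,
    });

    let mut mpp = json!(merged.unwrap());
    mpp["object"] = json!("masterPasswordPolicy");
    println!("{mpp}"); // camelCase keys, minLength 16, requireLower and requireNumbers both true
}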
let mut result = json!({
"access_token": access_token,
"expires_in": expires_in,
@@ -293,9 +337,11 @@ async fn _password_login(
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false,// TODO: Same as above
"ResetMasterPassword": false, // TODO: Same as above
"ForcePasswordReset": false,
"MasterPasswordPolicy": master_password_policy,
"scope": scope,
"unofficialServer": true,
"UserDecryptionOptions": {
"HasMasterPassword": !user.password_hash.is_empty(),
"Object": "userDecryptionOptions"
@@ -312,7 +358,7 @@ async fn _password_login(
async fn _api_key_login(
data: ConnectData,
user_uuid: &mut Option<String>,
user_id: &mut Option<UserId>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
@@ -321,7 +367,7 @@ async fn _api_key_login(
// Validate scope
match data.scope.as_ref().unwrap().as_ref() {
"api" => _user_api_key_login(data, user_uuid, conn, ip).await,
"api" => _user_api_key_login(data, user_id, conn, ip).await,
"api.organization" => _organization_api_key_login(data, conn, ip).await,
_ => err!("Scope not supported"),
}
@@ -329,23 +375,22 @@ async fn _api_key_login(
async fn _user_api_key_login(
data: ConnectData,
user_uuid: &mut Option<String>,
user_id: &mut Option<UserId>,
conn: &mut DbConn,
ip: &ClientIp,
) -> JsonResult {
// Get the user via the client_id
let client_id = data.client_id.as_ref().unwrap();
let client_user_uuid = match client_id.strip_prefix("user.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
let Some(client_user_id) = client_id.strip_prefix("user.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip))
};
let user = match User::find_by_uuid(client_user_uuid, conn).await {
Some(user) => user,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
let client_user_id: UserId = client_user_id.into();
let Some(user) = User::find_by_uuid(&client_user_id, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip))
};
// Set the user_uuid here to be passed back and used for event logging.
*user_uuid = Some(user.uuid.clone());
// Set the user_id here to be passed back and used for event logging.
*user_id = Some(user.uuid.clone());
// Check if the user is disabled
if !user.enabled {
@@ -374,7 +419,7 @@ async fn _user_api_key_login(
if CONFIG.mail_enabled() && new_device {
let now = Utc::now().naive_utc();
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device).await {
error!("Error sending new device email: {:#?}", e);
if CONFIG.require_device_email() {
@@ -395,7 +440,7 @@ async fn _user_api_key_login(
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
// let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
@@ -414,9 +459,8 @@ async fn _user_api_key_login(
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false, // TODO: Same as above
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": "api",
"unofficialServer": true,
});
Ok(Json(result))
@@ -425,13 +469,12 @@ async fn _user_api_key_login(
async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &ClientIp) -> JsonResult {
// Get the org via the client_id
let client_id = data.client_id.as_ref().unwrap();
let org_uuid = match client_id.strip_prefix("organization.") {
Some(uuid) => uuid,
None => err!("Malformed client_id", format!("IP: {}.", ip.ip)),
let Some(org_id) = client_id.strip_prefix("organization.") else {
err!("Malformed client_id", format!("IP: {}.", ip.ip))
};
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_uuid, conn).await {
Some(org_api_key) => org_api_key,
None => err!("Invalid client_id", format!("IP: {}.", ip.ip)),
let org_id: OrganizationId = org_id.to_string().into();
let Some(org_api_key) = OrganizationApiKey::find_by_org_uuid(&org_id, conn).await else {
err!("Invalid client_id", format!("IP: {}.", ip.ip))
};
// Check API key.
@@ -448,7 +491,6 @@ async fn _organization_api_key_login(data: ConnectData, conn: &mut DbConn, ip: &
"expires_in": 3600,
"token_type": "Bearer",
"scope": "api.organization",
"unofficialServer": true,
})))
}
@@ -488,14 +530,16 @@ async fn twofactor_auth(
return Ok(None);
}
TwoFactorIncomplete::mark_incomplete(&user.uuid, &device.uuid, &device.name, ip, conn).await?;
TwoFactorIncomplete::mark_incomplete(&user.uuid, &device.uuid, &device.name, device.atype, ip, conn).await?;
let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one
let twofactor_code = match data.two_factor_token {
Some(ref code) => code,
None => err_json!(_json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?, "2FA token not provided"),
None => {
err_json!(_json_err_twofactor(&twofactor_ids, &user.uuid, data, conn).await?, "2FA token not provided")
}
};
let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);
@@ -512,7 +556,23 @@ async fn twofactor_auth(
Some(TwoFactorType::Webauthn) => webauthn::validate_webauthn_login(&user.uuid, twofactor_code, conn).await?,
Some(TwoFactorType::YubiKey) => yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
Some(TwoFactorType::Duo) => {
duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
match CONFIG.duo_use_iframe() {
true => {
// Legacy iframe prompt flow
duo::validate_duo_login(&user.email, twofactor_code, conn).await?
}
false => {
// OIDC based flow
duo_oidc::validate_duo_login(
&user.email,
twofactor_code,
data.client_id.as_ref().unwrap(),
data.device_identifier.as_ref().unwrap(),
conn,
)
.await?
}
}
}
Some(TwoFactorType::Email) => {
email::validate_email_code_str(&user.uuid, twofactor_code, &selected_data?, conn).await?
@@ -525,7 +585,7 @@ async fn twofactor_auth(
}
_ => {
err_json!(
_json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?,
_json_err_twofactor(&twofactor_ids, &user.uuid, data, conn).await?,
"2FA Remember token not provided"
)
}
@@ -553,12 +613,20 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
tf.map(|t| t.data).map_res("Two factor doesn't exist")
}
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbConn) -> ApiResult<Value> {
async fn _json_err_twofactor(
providers: &[i32],
user_id: &UserId,
data: &ConnectData,
conn: &mut DbConn,
) -> ApiResult<Value> {
let mut result = json!({
"error" : "invalid_grant",
"error_description" : "Two factor required.",
"TwoFactorProviders" : providers,
"TwoFactorProviders2" : {} // { "0" : null }
"TwoFactorProviders" : providers.iter().map(ToString::to_string).collect::<Vec<String>>(),
"TwoFactorProviders2" : {}, // { "0" : null }
"MasterPasswordPolicy": {
"Object": "masterPasswordPolicy"
}
});
for provider in providers {
@@ -568,46 +636,62 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
let request = webauthn::generate_webauthn_login(user_uuid, conn).await?;
let request = webauthn::generate_webauthn_login(user_id, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = request.0;
}
Some(TwoFactorType::Duo) => {
let email = match User::find_by_uuid(user_uuid, conn).await {
let email = match User::find_by_uuid(user_id, conn).await {
Some(u) => u.email,
None => err!("User does not exist"),
};
let (signature, host) = duo::generate_duo_signature(&email, conn).await?;
match CONFIG.duo_use_iframe() {
true => {
// Legacy iframe prompt flow
let (signature, host) = duo::generate_duo_signature(&email, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Host": host,
"Signature": signature,
})
}
false => {
// OIDC based flow
let auth_url = duo_oidc::get_duo_auth_url(
&email,
data.client_id.as_ref().unwrap(),
data.device_identifier.as_ref().unwrap(),
conn,
)
.await?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Host": host,
"Signature": signature,
});
result["TwoFactorProviders2"][provider.to_string()] = json!({
"AuthUrl": auth_url,
})
}
}
}
Some(tf_type @ TwoFactorType::YubiKey) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No YubiKey devices registered"),
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, tf_type as i32, conn).await else {
err!("No YubiKey devices registered")
};
let yubikey_metadata: yubikey::YubikeyMetadata = serde_json::from_str(&twofactor.data)?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Nfc": yubikey_metadata.Nfc,
"Nfc": yubikey_metadata.nfc,
})
}
Some(tf_type @ TwoFactorType::Email) => {
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No twofactor email registered"),
let Some(twofactor) = TwoFactor::find_by_user_and_type(user_id, tf_type as i32, conn).await else {
err!("No twofactor email registered")
};
// Send email immediately if email is the only 2FA option
if providers.len() == 1 {
email::send_token(user_uuid, conn).await?
email::send_token(user_id, conn).await?
}
let email_data = email::EmailTokenData::from_json(&twofactor.data)?;
@@ -624,19 +708,18 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
}
#[post("/accounts/prelogin", data = "<data>")]
async fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> Json<Value> {
async fn prelogin(data: Json<PreloginData>, conn: DbConn) -> Json<Value> {
_prelogin(data, conn).await
}
#[post("/accounts/register", data = "<data>")]
async fn identity_register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
async fn identity_register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
}
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
// https://github.com/bitwarden/mobile/blob/master/src/Core/Models/Request/TokenRequest.cs
#[derive(Debug, Clone, Default, FromForm)]
#[allow(non_snake_case)]
struct ConnectData {
#[field(name = uncased("grant_type"))]
#[field(name = uncased("granttype"))]
@@ -663,7 +746,7 @@ struct ConnectData {
#[field(name = uncased("device_identifier"))]
#[field(name = uncased("deviceidentifier"))]
device_identifier: Option<String>,
device_identifier: Option<DeviceId>,
#[field(name = uncased("device_name"))]
#[field(name = uncased("devicename"))]
device_name: Option<String>,
@@ -686,7 +769,7 @@ struct ConnectData {
#[field(name = uncased("twofactorremember"))]
two_factor_remember: Option<i32>,
#[field(name = uncased("authrequest"))]
auth_request: Option<String>,
auth_request: Option<AuthRequestId>,
}
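The `#[field(name = uncased(...))]` attributes let one struct field accept several spellings of a form key, which is how both `grant_type` and `granttype` reach the same field. A minimal Rocket 0.5 sketch of the technique (route and type names here are illustrative):

#[macro_use]
extern crate rocket;

use rocket::form::Form;

// Each field attribute adds another accepted, case-insensitive name.
#[derive(FromForm)]
struct TokenRequest {
    #[field(name = uncased("grant_type"))]
    #[field(name = uncased("granttype"))]
    grant_type: String,
}

#[post("/connect/token", data = "<data>")]
fn token(data: Form<TokenRequest>) -> String {
    format!("grant_type = {}", data.grant_type)
}

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![token])
}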
fn _check_is_some<T>(value: &Option<T>, msg: &str) -> EmptyResult {


@@ -23,7 +23,7 @@ pub use crate::api::{
icons::routes as icons_routes,
identity::routes as identity_routes,
notifications::routes as notifications_routes,
notifications::{start_notification_server, AnonymousNotify, Notify, UpdateType, WS_ANONYMOUS_SUBSCRIPTIONS},
notifications::{AnonymousNotify, Notify, UpdateType, WS_ANONYMOUS_SUBSCRIPTIONS, WS_USERS},
push::{
push_cipher_update, push_folder_update, push_logout, push_send_update, push_user_update, register_push_device,
unregister_push_device,
@@ -33,23 +33,18 @@ pub use crate::api::{
web::static_files,
};
use crate::db::{models::User, DbConn};
use crate::util;
// Type aliases for API methods results
type ApiResult<T> = Result<T, crate::error::Error>;
pub type JsonResult = ApiResult<Json<Value>>;
pub type EmptyResult = ApiResult<()>;
type JsonUpcase<T> = Json<util::UpCase<T>>;
type JsonUpcaseVec<T> = Json<Vec<util::UpCase<T>>>;
type JsonVec<T> = Json<Vec<T>>;
// Common structs representing JSON data received
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
struct PasswordOrOtpData {
MasterPasswordHash: Option<String>,
Otp: Option<String>,
master_password_hash: Option<String>,
otp: Option<String>,
}
impl PasswordOrOtpData {
@@ -59,7 +54,7 @@ impl PasswordOrOtpData {
pub async fn validate(&self, user: &User, delete_if_valid: bool, conn: &mut DbConn) -> EmptyResult {
use crate::api::core::two_factor::protected_actions::validate_protected_action_otp;
match (self.MasterPasswordHash.as_deref(), self.Otp.as_deref()) {
match (self.master_password_hash.as_deref(), self.otp.as_deref()) {
(Some(pw_hash), None) => {
if !user.check_valid_password(pw_hash) {
err!("Invalid password");


@@ -1,28 +1,16 @@
use std::{
net::{IpAddr, SocketAddr},
sync::Arc,
time::Duration,
};
use std::{net::IpAddr, sync::Arc, time::Duration};
use chrono::{NaiveDateTime, Utc};
use rmpv::Value;
use rocket::{
futures::{SinkExt, StreamExt},
Route,
};
use tokio::{
net::{TcpListener, TcpStream},
sync::mpsc::Sender,
};
use tokio_tungstenite::{
accept_hdr_async,
tungstenite::{handshake, Message},
};
use rocket::{futures::StreamExt, Route};
use tokio::sync::mpsc::Sender;
use rocket_ws::{Message, WebSocket};
use crate::{
auth::{ClientIp, WsAccessTokenHeader},
db::{
models::{Cipher, Folder, Send as DbSend, User},
models::{AuthRequestId, Cipher, CollectionId, DeviceId, Folder, Send as DbSend, User, UserId},
DbConn,
},
Error, CONFIG,
@@ -30,7 +18,7 @@ use crate::{
use once_cell::sync::Lazy;
static WS_USERS: Lazy<Arc<WebSocketUsers>> = Lazy::new(|| {
pub static WS_USERS: Lazy<Arc<WebSocketUsers>> = Lazy::new(|| {
Arc::new(WebSocketUsers {
map: Arc::new(dashmap::DashMap::new()),
})
@@ -47,8 +35,15 @@ use super::{
push_send_update, push_user_update,
};
static NOTIFICATIONS_DISABLED: Lazy<bool> = Lazy::new(|| !CONFIG.enable_websocket() && !CONFIG.push_enabled());
pub fn routes() -> Vec<Route> {
routes![websockets_hub, anonymous_websockets_hub]
if CONFIG.enable_websocket() {
routes![websockets_hub, anonymous_websockets_hub]
} else {
info!("WebSocket are disabled, realtime sync functionality will not work!");
routes![]
}
}
#[derive(FromForm, Debug)]
@@ -58,13 +53,13 @@ struct WsAccessToken {
struct WSEntryMapGuard {
users: Arc<WebSocketUsers>,
user_uuid: String,
user_uuid: UserId,
entry_uuid: uuid::Uuid,
addr: IpAddr,
}
impl WSEntryMapGuard {
fn new(users: Arc<WebSocketUsers>, user_uuid: String, entry_uuid: uuid::Uuid, addr: IpAddr) -> Self {
fn new(users: Arc<WebSocketUsers>, user_uuid: UserId, entry_uuid: uuid::Uuid, addr: IpAddr) -> Self {
Self {
users,
user_uuid,
@@ -77,7 +72,7 @@ impl WSEntryMapGuard {
impl Drop for WSEntryMapGuard {
fn drop(&mut self) {
info!("Closing WS connection from {}", self.addr);
if let Some(mut entry) = self.users.map.get_mut(&self.user_uuid) {
if let Some(mut entry) = self.users.map.get_mut(self.user_uuid.as_ref()) {
entry.retain(|(uuid, _)| uuid != &self.entry_uuid);
}
}
@@ -106,9 +101,10 @@ impl Drop for WSAnonymousEntryMapGuard {
}
}
#[allow(tail_expr_drop_order)]
#[get("/hub?<data..>")]
fn websockets_hub<'r>(
ws: rocket_ws::WebSocket,
ws: WebSocket,
data: WsAccessToken,
ip: ClientIp,
header_token: WsAccessTokenHeader,
@@ -134,7 +130,7 @@ fn websockets_hub<'r>(
// Add a channel to send messages to this client to the map
let entry_uuid = uuid::Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
users.map.entry(claims.sub.clone()).or_default().push((entry_uuid, tx));
users.map.entry(claims.sub.to_string()).or_default().push((entry_uuid, tx));
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSEntryMapGuard::new(users, claims.sub, entry_uuid, addr))
@@ -161,7 +157,6 @@ fn websockets_hub<'r>(
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
yield Message::binary(INITIAL_RESPONSE);
continue;
}
}
@@ -191,12 +186,9 @@ fn websockets_hub<'r>(
})
}
#[allow(tail_expr_drop_order)]
#[get("/anonymous-hub?<token..>")]
fn anonymous_websockets_hub<'r>(
ws: rocket_ws::WebSocket,
token: String,
ip: ClientIp,
) -> Result<rocket_ws::Stream!['r], Error> {
fn anonymous_websockets_hub<'r>(ws: WebSocket, token: String, ip: ClientIp) -> Result<rocket_ws::Stream!['r], Error> {
let addr = ip.ip;
info!("Accepting Anonymous Rocket WS connection from {addr}");
@@ -232,7 +224,6 @@ fn anonymous_websockets_hub<'r>(
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
yield Message::binary(INITIAL_RESPONSE);
continue;
}
}
@@ -297,9 +288,9 @@ fn serialize(val: Value) -> Vec<u8> {
}
fn serialize_date(date: NaiveDateTime) -> Value {
let seconds: i64 = date.timestamp();
let nanos: i64 = date.timestamp_subsec_nanos().into();
let timestamp = nanos << 34 | seconds;
let seconds: i64 = date.and_utc().timestamp();
let nanos: i64 = date.and_utc().timestamp_subsec_nanos().into();
let timestamp = (nanos << 34) | seconds;
let bs = timestamp.to_be_bytes();
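`(nanos << 34) | seconds` is the MessagePack timestamp-64 layout: the low 34 bits hold seconds, the high 30 bits hold nanoseconds. A small sketch making the packing explicit (valid while seconds fits in 34 bits):

// MessagePack timestamp 64: | nanoseconds (30 bits) | seconds (34 bits) |
fn pack_timestamp64(seconds: i64, nanos: i64) -> [u8; 8] {
    debug_assert!((0..1 << 34).contains(&seconds));
    debug_assert!((0..1_000_000_000).contains(&nanos));
    (((nanos << 34) | seconds) as u64).to_be_bytes()
}

fn main() {
    // 2023-11-14T22:13:20Z with half a second of nanoseconds.
    println!("{:02x?}", pack_timestamp64(1_700_000_000, 500_000_000));
}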
@@ -337,8 +328,8 @@ pub struct WebSocketUsers {
}
impl WebSocketUsers {
async fn send_update(&self, user_uuid: &str, data: &[u8]) {
if let Some(user) = self.map.get(user_uuid).map(|v| v.clone()) {
async fn send_update(&self, user_id: &UserId, data: &[u8]) {
if let Some(user) = self.map.get(user_id.as_ref()).map(|v| v.clone()) {
for (_, sender) in user.iter() {
if let Err(e) = sender.send(Message::binary(data)).await {
error!("Error sending WS update {e}");
@@ -349,30 +340,42 @@ impl WebSocketUsers {
// NOTE: The last modified date needs to be updated before calling these methods
pub async fn send_user_update(&self, ut: UpdateType, user: &User) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
vec![("UserId".into(), user.uuid.to_string().into()), ("Date".into(), serialize_date(user.updated_at))],
ut,
None,
);
self.send_update(&user.uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&user.uuid, &data).await;
}
if CONFIG.push_enabled() {
push_user_update(ut, user);
}
}
pub async fn send_logout(&self, user: &User, acting_device_uuid: Option<String>) {
pub async fn send_logout(&self, user: &User, acting_device_id: Option<DeviceId>) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
vec![("UserId".into(), user.uuid.to_string().into()), ("Date".into(), serialize_date(user.updated_at))],
UpdateType::LogOut,
acting_device_uuid.clone(),
acting_device_id.clone(),
);
self.send_update(&user.uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&user.uuid, &data).await;
}
if CONFIG.push_enabled() {
push_logout(user, acting_device_uuid);
push_logout(user, acting_device_id.clone());
}
}
@@ -380,23 +383,29 @@ impl WebSocketUsers {
&self,
ut: UpdateType,
folder: &Folder,
acting_device_uuid: &String,
acting_device_id: &DeviceId,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![
("Id".into(), folder.uuid.clone().into()),
("UserId".into(), folder.user_uuid.clone().into()),
("Id".into(), folder.uuid.to_string().into()),
("UserId".into(), folder.user_uuid.to_string().into()),
("RevisionDate".into(), serialize_date(folder.updated_at)),
],
ut,
Some(acting_device_uuid.into()),
Some(acting_device_id.clone()),
);
self.send_update(&folder.user_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&folder.user_uuid, &data).await;
}
if CONFIG.push_enabled() {
push_folder_update(ut, folder, acting_device_uuid, conn).await;
push_folder_update(ut, folder, acting_device_id, conn).await;
}
}
@@ -404,42 +413,48 @@ impl WebSocketUsers {
&self,
ut: UpdateType,
cipher: &Cipher,
user_uuids: &[String],
acting_device_uuid: &String,
collection_uuids: Option<Vec<String>>,
user_ids: &[UserId],
acting_device_id: &DeviceId,
collection_uuids: Option<Vec<CollectionId>>,
conn: &mut DbConn,
) {
let org_uuid = convert_option(cipher.organization_uuid.clone());
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let org_id = convert_option(cipher.organization_uuid.as_deref());
// Depending on whether collections are provided, we need different values for the following variables.
// The user_uuid should be `null`, and the revision date should be set to now, else the clients won't sync the collection change.
let (user_uuid, collection_uuids, revision_date) = if let Some(collection_uuids) = collection_uuids {
let (user_id, collection_uuids, revision_date) = if let Some(collection_uuids) = collection_uuids {
(
Value::Nil,
Value::Array(collection_uuids.into_iter().map(|v| v.into()).collect::<Vec<rmpv::Value>>()),
Value::Array(collection_uuids.into_iter().map(|v| v.to_string().into()).collect::<Vec<Value>>()),
serialize_date(Utc::now().naive_utc()),
)
} else {
(convert_option(cipher.user_uuid.clone()), Value::Nil, serialize_date(cipher.updated_at))
(convert_option(cipher.user_uuid.as_deref()), Value::Nil, serialize_date(cipher.updated_at))
};
let data = create_update(
vec![
("Id".into(), cipher.uuid.clone().into()),
("UserId".into(), user_uuid),
("OrganizationId".into(), org_uuid),
("Id".into(), cipher.uuid.to_string().into()),
("UserId".into(), user_id),
("OrganizationId".into(), org_id),
("CollectionIds".into(), collection_uuids),
("RevisionDate".into(), revision_date),
],
ut,
Some(acting_device_uuid.into()),
Some(acting_device_id.clone()),
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
if CONFIG.enable_websocket() {
for uuid in user_ids {
self.send_update(uuid, &data).await;
}
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
push_cipher_update(ut, cipher, acting_device_uuid, conn).await;
if CONFIG.push_enabled() && user_ids.len() == 1 {
push_cipher_update(ut, cipher, acting_device_id, conn).await;
}
}
@@ -447,66 +462,83 @@ impl WebSocketUsers {
&self,
ut: UpdateType,
send: &DbSend,
user_uuids: &[String],
acting_device_uuid: &String,
user_ids: &[UserId],
acting_device_id: &DeviceId,
conn: &mut DbConn,
) {
let user_uuid = convert_option(send.user_uuid.clone());
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let user_id = convert_option(send.user_uuid.as_deref());
let data = create_update(
vec![
("Id".into(), send.uuid.clone().into()),
("UserId".into(), user_uuid),
("Id".into(), send.uuid.to_string().into()),
("UserId".into(), user_id),
("RevisionDate".into(), serialize_date(send.revision_date)),
],
ut,
None,
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
if CONFIG.enable_websocket() {
for uuid in user_ids {
self.send_update(uuid, &data).await;
}
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
push_send_update(ut, send, acting_device_uuid, conn).await;
if CONFIG.push_enabled() && user_ids.len() == 1 {
push_send_update(ut, send, acting_device_id, conn).await;
}
}
pub async fn send_auth_request(
&self,
user_uuid: &String,
user_id: &UserId,
auth_request_uuid: &String,
acting_device_uuid: &String,
acting_device_id: &DeviceId,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_uuid.clone().into())],
vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_id.to_string().into())],
UpdateType::AuthRequest,
Some(acting_device_uuid.to_string()),
Some(acting_device_id.clone()),
);
self.send_update(user_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(user_id, &data).await;
}
if CONFIG.push_enabled() {
push_auth_request(user_uuid.to_string(), auth_request_uuid.to_string(), conn).await;
push_auth_request(user_id.clone(), auth_request_uuid.to_string(), conn).await;
}
}
pub async fn send_auth_response(
&self,
user_uuid: &String,
auth_response_uuid: &str,
approving_device_uuid: String,
user_id: &UserId,
auth_request_id: &AuthRequestId,
approving_device_id: &DeviceId,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
vec![("Id".into(), auth_request_id.to_string().into()), ("UserId".into(), user_id.to_string().into())],
UpdateType::AuthRequestResponse,
approving_device_uuid.clone().into(),
Some(approving_device_id.clone()),
);
self.send_update(auth_response_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(user_id, &data).await;
}
if CONFIG.push_enabled() {
push_auth_response(user_uuid.to_string(), auth_response_uuid.to_string(), approving_device_uuid, conn)
.await;
push_auth_response(user_id, auth_request_id, approving_device_id, conn).await;
}
}
}
@@ -525,13 +557,16 @@ impl AnonymousWebSocketSubscriptions {
}
}
pub async fn send_auth_response(&self, user_uuid: &String, auth_response_uuid: &str) {
pub async fn send_auth_response(&self, user_id: &UserId, auth_request_id: &AuthRequestId) {
if !CONFIG.enable_websocket() {
return;
}
let data = create_anonymous_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
vec![("Id".into(), auth_request_id.to_string().into()), ("UserId".into(), user_id.to_string().into())],
UpdateType::AuthRequestResponse,
user_uuid.to_string(),
user_id.clone(),
);
self.send_update(auth_response_uuid, &data).await;
self.send_update(auth_request_id, &data).await;
}
}
@@ -543,14 +578,14 @@ impl AnonymousWebSocketSubscriptions {
"ReceiveMessage", // Target
[ // Arguments
{
"ContextId": acting_device_uuid || Nil,
"ContextId": acting_device_id || Nil,
"Type": ut as i32,
"Payload": {}
}
]
]
*/
fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uuid: Option<String>) -> Vec<u8> {
fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_id: Option<DeviceId>) -> Vec<u8> {
use rmpv::Value as V;
let value = V::Array(vec![
@@ -559,7 +594,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uui
V::Nil,
"ReceiveMessage".into(),
V::Array(vec![V::Map(vec![
("ContextId".into(), acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| V::Nil)),
("ContextId".into(), acting_device_id.map(|v| v.to_string().into()).unwrap_or_else(|| V::Nil)),
("Type".into(), (ut as i32).into()),
("Payload".into(), payload.into()),
])]),
@@ -568,7 +603,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType, acting_device_uui
serialize(value)
}
fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id: String) -> Vec<u8> {
fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id: UserId) -> Vec<u8> {
use rmpv::Value as V;
let value = V::Array(vec![
@@ -579,7 +614,7 @@ fn create_anonymous_update(payload: Vec<(Value, Value)>, ut: UpdateType, user_id
V::Array(vec![V::Map(vec![
("Type".into(), (ut as i32).into()),
("Payload".into(), payload.into()),
("UserId".into(), user_id.into()),
("UserId".into(), user_id.to_string().into()),
])]),
]);
@@ -620,127 +655,3 @@ pub enum UpdateType {
pub type Notify<'a> = &'a rocket::State<Arc<WebSocketUsers>>;
pub type AnonymousNotify<'a> = &'a rocket::State<Arc<AnonymousWebSocketSubscriptions>>;
pub fn start_notification_server() -> Arc<WebSocketUsers> {
let users = Arc::clone(&WS_USERS);
if CONFIG.websocket_enabled() {
let users2 = Arc::<WebSocketUsers>::clone(&users);
tokio::spawn(async move {
let addr = (CONFIG.websocket_address(), CONFIG.websocket_port());
info!("Starting WebSockets server on {}:{}", addr.0, addr.1);
let listener = TcpListener::bind(addr).await.expect("Can't listen on websocket port");
let (shutdown_tx, mut shutdown_rx) = tokio::sync::oneshot::channel::<()>();
CONFIG.set_ws_shutdown_handle(shutdown_tx);
loop {
tokio::select! {
Ok((stream, addr)) = listener.accept() => {
tokio::spawn(handle_connection(stream, Arc::<WebSocketUsers>::clone(&users2), addr));
}
_ = &mut shutdown_rx => {
break;
}
}
}
info!("Shutting down WebSockets server!")
});
}
users
}
async fn handle_connection(stream: TcpStream, users: Arc<WebSocketUsers>, addr: SocketAddr) -> Result<(), Error> {
let mut user_uuid: Option<String> = None;
info!("Accepting WS connection from {addr}");
// Accept connection, do initial handshake, validate auth token and get the user ID
use handshake::server::{Request, Response};
let mut stream = accept_hdr_async(stream, |req: &Request, res: Response| {
if let Some(token) = get_request_token(req) {
if let Ok(claims) = crate::auth::decode_login(&token) {
user_uuid = Some(claims.sub);
return Ok(res);
}
}
Err(Response::builder().status(401).body(None).unwrap())
})
.await?;
let user_uuid = user_uuid.expect("User UUID should be set after the handshake");
let (mut rx, guard) = {
// Add a channel to send messages to this client to the map
let entry_uuid = uuid::Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
users.map.entry(user_uuid.clone()).or_default().push((entry_uuid, tx));
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSEntryMapGuard::new(users, user_uuid, entry_uuid, addr.ip()))
};
let _guard = guard;
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
tokio::select! {
res = stream.next() => {
match res {
Some(Ok(message)) => {
match message {
// Respond to any pings
Message::Ping(ping) => stream.send(Message::Pong(ping)).await?,
Message::Pong(_) => {/* Ignored */},
// We should receive an initial message with the protocol and version, and we will reply to it
Message::Text(ref message) => {
let msg = message.strip_suffix(RECORD_SEPARATOR as char).unwrap_or(message);
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
stream.send(Message::binary(INITIAL_RESPONSE)).await?;
continue;
}
}
// Just echo anything else the client sends
_ => stream.send(message).await?,
}
}
_ => break,
}
}
res = rx.recv() => {
match res {
Some(res) => stream.send(res).await?,
None => break,
}
}
_ = interval.tick() => stream.send(Message::Ping(create_ping())).await?
}
}
Ok(())
}
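For context, the "initial message" branch in the loop above implements the SignalR-style handshake Bitwarden clients perform: a JSON payload terminated by the 0x1e record separator, answered with an empty JSON object and the same separator. A minimal sketch of both frames, assuming the messagepack/version-1 handshake used by the official clients (the module's actual INITIAL_MESSAGE and INITIAL_RESPONSE constants are defined elsewhere in this file):

// Illustrative only: the handshake frames the select loop above expects.
const RECORD_SEPARATOR: u8 = 0x1e;

fn client_handshake_frame() -> Vec<u8> {
    // The client announces protocol and version as JSON, then 0x1e.
    let mut frame = br#"{"protocol":"messagepack","version":1}"#.to_vec();
    frame.push(RECORD_SEPARATOR);
    frame
}

fn server_handshake_ack() -> Vec<u8> {
    // An empty JSON object terminated by 0x1e acknowledges the handshake.
    let mut frame = b"{}".to_vec();
    frame.push(RECORD_SEPARATOR);
    frame
}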
fn get_request_token(req: &handshake::server::Request) -> Option<String> {
const ACCESS_TOKEN_KEY: &str = "access_token=";
if let Some(Ok(auth)) = req.headers().get("Authorization").map(|a| a.to_str()) {
if let Some(token_part) = auth.strip_prefix("Bearer ") {
return Some(token_part.to_owned());
}
}
if let Some(params) = req.uri().query() {
let params_iter = params.split('&').take(1);
for val in params_iter {
if let Some(stripped) = val.strip_prefix(ACCESS_TOKEN_KEY) {
return Some(stripped.to_owned());
}
}
}
None
}
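One subtlety worth calling out: because of the `take(1)`, only the first query parameter is examined, so `access_token` must be the leading key in the query string. A standalone restatement of the query branch (not the guard itself) that makes this visible:

// Mirrors the lookup above: later query parameters are ignored.
fn token_from_query(query: &str) -> Option<String> {
    query
        .split('&')
        .take(1) // only the first key=value pair is considered
        .find_map(|kv| kv.strip_prefix("access_token=").map(|s| s.to_string()))
}

fn main() {
    assert_eq!(token_from_query("access_token=abc"), Some("abc".to_string()));
    // A leading unrelated parameter hides the token:
    assert_eq!(token_from_query("foo=1&access_token=abc"), None);
}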


@@ -1,11 +1,15 @@
use reqwest::header::{ACCEPT, AUTHORIZATION, CONTENT_TYPE};
use reqwest::{
header::{ACCEPT, AUTHORIZATION, CONTENT_TYPE},
Method,
};
use serde_json::Value;
use tokio::sync::RwLock;
use crate::{
api::{ApiResult, EmptyResult, UpdateType},
db::models::{Cipher, Device, Folder, Send, User},
util::get_reqwest_client,
db::models::{AuthRequestId, Cipher, Device, DeviceId, Folder, Send, User, UserId},
http_client::make_http_request,
util::format_date,
CONFIG,
};
@@ -50,8 +54,7 @@ async fn get_auth_push_token() -> ApiResult<String> {
("client_secret", &client_secret),
];
let res = match get_reqwest_client()
.post(&format!("{}/connect/token", CONFIG.push_identity_uri()))
let res = match make_http_request(Method::POST, &format!("{}/connect/token", CONFIG.push_identity_uri()))?
.form(&params)
.send()
.await
@@ -104,8 +107,7 @@ pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbC
let auth_push_token = get_auth_push_token().await?;
let auth_header = format!("Bearer {}", &auth_push_token);
if let Err(e) = get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/register")
if let Err(e) = make_http_request(Method::POST, &(CONFIG.push_relay_uri() + "/push/register"))?
.header(CONTENT_TYPE, "application/json")
.header(ACCEPT, "application/json")
.header(AUTHORIZATION, auth_header)
@@ -114,26 +116,25 @@ pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbC
.await?
.error_for_status()
{
err!(format!("An error occured while proceeding registration of a device: {e}"));
err!(format!("An error occurred while proceeding with the registration of a device: {e}"));
}
if let Err(e) = device.save(conn).await {
err!(format!("An error occured while trying to save the (registered) device push uuid: {e}"));
err!(format!("An error occurred while trying to save the (registered) device push uuid: {e}"));
}
Ok(())
}
pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
if !CONFIG.push_enabled() || push_uuid.is_none() {
pub async fn unregister_push_device(push_id: Option<String>) -> EmptyResult {
if !CONFIG.push_enabled() || push_id.is_none() {
return Ok(());
}
let auth_push_token = get_auth_push_token().await?;
let auth_header = format!("Bearer {}", &auth_push_token);
match get_reqwest_client()
.delete(CONFIG.push_relay_uri() + "/push/" + &push_uuid.unwrap())
match make_http_request(Method::DELETE, &(CONFIG.push_relay_uri() + "/push/" + &push_id.unwrap()))?
.header(AUTHORIZATION, auth_header)
.send()
.await
@@ -147,51 +148,48 @@ pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
pub async fn push_cipher_update(
ut: UpdateType,
cipher: &Cipher,
acting_device_uuid: &String,
acting_device_id: &DeviceId,
conn: &mut crate::db::DbConn,
) {
// We shouldn't send a push notification on cipher update if the cipher belongs to an organization; this isn't implemented in the upstream server either.
if cipher.organization_uuid.is_some() {
return;
};
let user_uuid = match &cipher.user_uuid {
Some(c) => c,
None => {
debug!("Cipher has no uuid");
return;
}
let Some(user_id) = &cipher.user_uuid else {
debug!("Cipher has no uuid");
return;
};
if Device::check_user_has_push_device(user_uuid, conn).await {
if Device::check_user_has_push_device(user_id, conn).await {
send_to_push_relay(json!({
"userId": user_uuid,
"userId": user_id,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"deviceId": acting_device_id,
"identifier": acting_device_id,
"type": ut as i32,
"payload": {
"id": cipher.uuid,
"userId": cipher.user_uuid,
"organizationId": (),
"revisionDate": cipher.updated_at
"Id": cipher.uuid,
"UserId": cipher.user_uuid,
"OrganizationId": (),
"RevisionDate": format_date(&cipher.updated_at)
}
}))
.await;
}
}
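The payload keys switch to PascalCase and timestamps now pass through `format_date` rather than serializing the raw NaiveDateTime. Assuming the helper keeps its usual shape in util.rs, it renders an RFC 3339-style UTC timestamp with microsecond precision; a sketch:

use chrono::NaiveDateTime;

// Assumed shape of `util::format_date`: e.g. "2025-01-25T05:46:43.000000Z".
pub fn format_date(dt: &NaiveDateTime) -> String {
    dt.format("%Y-%m-%dT%H:%M:%S%.6fZ").to_string()
}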
pub fn push_logout(user: &User, acting_device_uuid: Option<String>) {
let acting_device_uuid: Value = acting_device_uuid.map(|v| v.into()).unwrap_or_else(|| Value::Null);
pub fn push_logout(user: &User, acting_device_id: Option<DeviceId>) {
let acting_device_id: Value = acting_device_id.map(|v| v.to_string().into()).unwrap_or_else(|| Value::Null);
tokio::task::spawn(send_to_push_relay(json!({
"userId": user.uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"deviceId": acting_device_id,
"identifier": acting_device_id,
"type": UpdateType::LogOut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
"UserId": user.uuid,
"Date": format_date(&user.updated_at)
}
})));
}
@@ -204,8 +202,8 @@ pub fn push_user_update(ut: UpdateType, user: &User) {
"identifier": (),
"type": ut as i32,
"payload": {
"userId": user.uuid,
"date": user.updated_at
"UserId": user.uuid,
"Date": format_date(&user.updated_at)
}
})));
}
@@ -213,38 +211,38 @@ pub fn push_user_update(ut: UpdateType, user: &User) {
pub async fn push_folder_update(
ut: UpdateType,
folder: &Folder,
acting_device_uuid: &String,
acting_device_id: &DeviceId,
conn: &mut crate::db::DbConn,
) {
if Device::check_user_has_push_device(&folder.user_uuid, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": folder.user_uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"deviceId": acting_device_id,
"identifier": acting_device_id,
"type": ut as i32,
"payload": {
"id": folder.uuid,
"userId": folder.user_uuid,
"revisionDate": folder.updated_at
"Id": folder.uuid,
"UserId": folder.user_uuid,
"RevisionDate": format_date(&folder.updated_at)
}
})));
}
}
pub async fn push_send_update(ut: UpdateType, send: &Send, acting_device_uuid: &String, conn: &mut crate::db::DbConn) {
pub async fn push_send_update(ut: UpdateType, send: &Send, acting_device_id: &DeviceId, conn: &mut crate::db::DbConn) {
if let Some(s) = &send.user_uuid {
if Device::check_user_has_push_device(s, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": send.user_uuid,
"organizationId": (),
"deviceId": acting_device_uuid,
"identifier": acting_device_uuid,
"deviceId": acting_device_id,
"identifier": acting_device_id,
"type": ut as i32,
"payload": {
"id": send.uuid,
"userId": send.user_uuid,
"revisionDate": send.revision_date
"Id": send.uuid,
"UserId": send.user_uuid,
"RevisionDate": format_date(&send.revision_date)
}
})));
}
@@ -266,8 +264,15 @@ async fn send_to_push_relay(notification_data: Value) {
let auth_header = format!("Bearer {}", &auth_push_token);
if let Err(e) = get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/send")
let req = match make_http_request(Method::POST, &(CONFIG.push_relay_uri() + "/push/send")) {
Ok(r) => r,
Err(e) => {
error!("An error occurred while sending a send update to the push relay: {}", e);
return;
}
};
if let Err(e) = req
.header(ACCEPT, "application/json")
.header(CONTENT_TYPE, "application/json")
.header(AUTHORIZATION, &auth_header)
@@ -279,38 +284,38 @@ async fn send_to_push_relay(notification_data: Value) {
};
}
pub async fn push_auth_request(user_uuid: String, auth_request_uuid: String, conn: &mut crate::db::DbConn) {
if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
pub async fn push_auth_request(user_id: UserId, auth_request_id: String, conn: &mut crate::db::DbConn) {
if Device::check_user_has_push_device(&user_id, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user_uuid,
"userId": user_id,
"organizationId": (),
"deviceId": null,
"identifier": null,
"type": UpdateType::AuthRequest as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
"Id": auth_request_id,
"UserId": user_id,
}
})));
}
}
pub async fn push_auth_response(
user_uuid: String,
auth_request_uuid: String,
approving_device_uuid: String,
user_id: &UserId,
auth_request_id: &AuthRequestId,
approving_device_id: &DeviceId,
conn: &mut crate::db::DbConn,
) {
if Device::check_user_has_push_device(user_uuid.as_str(), conn).await {
if Device::check_user_has_push_device(user_id, conn).await {
tokio::task::spawn(send_to_push_relay(json!({
"userId": user_uuid,
"userId": user_id,
"organizationId": (),
"deviceId": approving_device_uuid,
"identifier": approving_device_uuid,
"deviceId": approving_device_id,
"identifier": approving_device_id,
"type": UpdateType::AuthRequestResponse as i32,
"payload": {
"id": auth_request_uuid,
"userId": user_uuid,
"Id": auth_request_id,
"UserId": user_id,
}
})));
}


@@ -1,13 +1,20 @@
use std::path::{Path, PathBuf};
use rocket::{fs::NamedFile, http::ContentType, response::content::RawHtml as Html, serde::json::Json, Catcher, Route};
use rocket::{
fs::NamedFile,
http::ContentType,
response::{content::RawCss as Css, content::RawHtml as Html, Redirect},
serde::json::Json,
Catcher, Route,
};
use serde_json::Value;
use crate::{
api::{core::now, ApiResult, EmptyResult},
auth::decode_file_download,
db::models::{AttachmentId, CipherId},
error::Error,
util::{Cached, SafeString},
util::Cached,
CONFIG,
};
@@ -16,7 +23,7 @@ pub fn routes() -> Vec<Route> {
// crate::utils::LOGGED_ROUTES to make sure they appear in the log
let mut routes = routes![attachments, alive, alive_head, static_files];
if CONFIG.web_vault_enabled() {
routes.append(&mut routes![web_index, web_index_head, app_id, web_files]);
routes.append(&mut routes![web_index, web_index_direct, web_index_head, app_id, web_files, vaultwarden_css]);
}
#[cfg(debug_assertions)]
@@ -45,11 +52,65 @@ fn not_found() -> ApiResult<Html<String>> {
Ok(Html(text))
}
#[get("/css/vaultwarden.css")]
fn vaultwarden_css() -> Cached<Css<String>> {
let css_options = json!({
"signup_disabled": !CONFIG.signups_allowed() && CONFIG.signups_domains_whitelist().is_empty(),
"mail_enabled": CONFIG.mail_enabled(),
"yubico_enabled": CONFIG._enable_yubico() && (CONFIG.yubico_client_id().is_some() == CONFIG.yubico_secret_key().is_some()),
"emergency_access_allowed": CONFIG.emergency_access_allowed(),
"sends_allowed": CONFIG.sends_allowed(),
"load_user_scss": true,
});
let scss = match CONFIG.render_template("scss/vaultwarden.scss", &css_options) {
Ok(t) => t,
Err(e) => {
// Something went wrong loading the template. Use the fallback
warn!("Loading scss/vaultwarden.scss.hbs or scss/user.vaultwarden.scss.hbs failed. {e}");
CONFIG
.render_fallback_template("scss/vaultwarden.scss", &css_options)
.expect("Fallback scss/vaultwarden.scss.hbs to render")
}
};
let css = match grass_compiler::from_string(
scss,
&grass_compiler::Options::default().style(grass_compiler::OutputStyle::Compressed),
) {
Ok(css) => css,
Err(e) => {
// Something went wrong compiling the scss. Use the fallback
warn!("Compiling the Vaultwarden SCSS styles failed. {e}");
let mut css_options = css_options;
css_options["load_user_scss"] = json!(false);
let scss = CONFIG
.render_fallback_template("scss/vaultwarden.scss", &css_options)
.expect("Fallback scss/vaultwarden.scss.hbs to render");
grass_compiler::from_string(
scss,
&grass_compiler::Options::default().style(grass_compiler::OutputStyle::Compressed),
)
.expect("SCSS to compile")
}
};
// Cache for one day should be enough and not too much
Cached::ttl(Css(css), 86_400, false)
}
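The same `grass_compiler` call drives both the primary and fallback paths; as a standalone sketch of just the compile step (same API as above, example input invented):

fn compile(scss: String) -> Option<String> {
    grass_compiler::from_string(
        scss,
        &grass_compiler::Options::default().style(grass_compiler::OutputStyle::Compressed),
    )
    .ok()
}

fn main() {
    let css = compile(".a { .b { color: red; } }".to_string()).unwrap();
    assert!(css.contains("color:red")); // nested rules flattened, output minified
}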
#[get("/")]
async fn web_index() -> Cached<Option<NamedFile>> {
Cached::short(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join("index.html")).await.ok(), false)
}
// Make sure that `/index.html` redirects to the actual domain path.
// Otherwise, this might cause issues with the web-vault
#[get("/index.html")]
fn web_index_direct() -> Redirect {
Redirect::to(format!("{}/", CONFIG.domain_path()))
}
#[head("/")]
fn web_index_head() -> EmptyResult {
// Add an explicit HEAD route to prevent uptime monitoring services from
@@ -98,16 +159,16 @@ async fn web_files(p: PathBuf) -> Cached<Option<NamedFile>> {
Cached::long(NamedFile::open(Path::new(&CONFIG.web_vault_folder()).join(p)).await.ok(), true)
}
#[get("/attachments/<uuid>/<file_id>?<token>")]
async fn attachments(uuid: SafeString, file_id: SafeString, token: String) -> Option<NamedFile> {
#[get("/attachments/<cipher_id>/<file_id>?<token>")]
async fn attachments(cipher_id: CipherId, file_id: AttachmentId, token: String) -> Option<NamedFile> {
let Ok(claims) = decode_file_download(&token) else {
return None;
};
if claims.sub != *uuid || claims.file_id != *file_id {
if claims.sub != cipher_id || claims.file_id != file_id {
return None;
}
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).await.ok()
NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(cipher_id.as_ref()).join(file_id.as_ref())).await.ok()
}
// We use DbConn here to let the alive healthcheck also verify the database connection.
@@ -170,11 +231,11 @@ pub fn static_files(filename: &str) -> Result<(ContentType, &'static [u8]), Erro
}
"bootstrap.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap.bundle.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap.bundle.js"))),
"jdenticon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon.js"))),
"jdenticon-3.3.0.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon-3.3.0.js"))),
"datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.7.0.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.7.0.slim.js")))
"jquery-3.7.1.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.7.1.slim.js")))
}
_ => err!(format!("Static file not found: {filename}")),
}


@@ -1,18 +1,28 @@
// JWT Handling
//
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use jsonwebtoken::{errors::ErrorKind, Algorithm, DecodingKey, EncodingKey, Header};
use num_traits::FromPrimitive;
use once_cell::sync::Lazy;
use jsonwebtoken::{self, errors::ErrorKind, Algorithm, DecodingKey, EncodingKey, Header};
use once_cell::sync::{Lazy, OnceCell};
use openssl::rsa::Rsa;
use serde::de::DeserializeOwned;
use serde::ser::Serialize;
use std::{
env,
fs::File,
io::{Read, Write},
net::IpAddr,
};
use crate::db::models::{
AttachmentId, CipherId, CollectionId, DeviceId, EmergencyAccessId, MembershipId, OrgApiKeyId, OrganizationId,
SendFileId, SendId, UserId,
};
use crate::{error::Error, CONFIG};
const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
pub static DEFAULT_VALIDITY: Lazy<Duration> = Lazy::new(|| Duration::hours(2));
pub static DEFAULT_VALIDITY: Lazy<TimeDelta> = Lazy::new(|| TimeDelta::try_hours(2).unwrap());
static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
@@ -26,23 +36,55 @@ static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.do
static JWT_ORG_API_KEY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|api.organization", CONFIG.domain_origin()));
static JWT_FILE_DOWNLOAD_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|file_download", CONFIG.domain_origin()));
static PRIVATE_RSA_KEY: Lazy<EncodingKey> = Lazy::new(|| {
let key =
std::fs::read(CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key. \n{e}"));
EncodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{e}"))
});
static PUBLIC_RSA_KEY: Lazy<DecodingKey> = Lazy::new(|| {
let key = std::fs::read(CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key. \n{e}"));
DecodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{e}"))
});
static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();
pub fn load_keys() {
Lazy::force(&PRIVATE_RSA_KEY);
Lazy::force(&PUBLIC_RSA_KEY);
pub fn initialize_keys() -> Result<(), Error> {
fn read_key(create_if_missing: bool) -> Result<(Rsa<openssl::pkey::Private>, Vec<u8>), Error> {
let mut priv_key_buffer = Vec::with_capacity(2048);
let mut priv_key_file = File::options()
.create(create_if_missing)
.truncate(false)
.read(true)
.write(create_if_missing)
.open(CONFIG.private_rsa_key())?;
#[allow(clippy::verbose_file_reads)]
let bytes_read = priv_key_file.read_to_end(&mut priv_key_buffer)?;
let rsa_key = if bytes_read > 0 {
Rsa::private_key_from_pem(&priv_key_buffer[..bytes_read])?
} else if create_if_missing {
// Only create the key if the file doesn't exist or is empty
let rsa_key = Rsa::generate(2048)?;
priv_key_buffer = rsa_key.private_key_to_pem()?;
priv_key_file.write_all(&priv_key_buffer)?;
info!("Private key '{}' created correctly", CONFIG.private_rsa_key());
rsa_key
} else {
err!("Private key does not exist or invalid format", CONFIG.private_rsa_key());
};
Ok((rsa_key, priv_key_buffer))
}
let (priv_key, priv_key_buffer) = read_key(true).or_else(|_| read_key(false))?;
let pub_key_buffer = priv_key.public_key_to_pem()?;
let enc = EncodingKey::from_rsa_pem(&priv_key_buffer)?;
let dec: DecodingKey = DecodingKey::from_rsa_pem(&pub_key_buffer)?;
if PRIVATE_RSA_KEY.set(enc).is_err() {
err!("PRIVATE_RSA_KEY must only be initialized once")
}
if PUBLIC_RSA_KEY.set(dec).is_err() {
err!("PUBLIC_RSA_KEY must only be initialized once")
}
Ok(())
}
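The keys move from lazily-initialized statics to OnceCells, so `initialize_keys()` must run during startup before any JWT is encoded or decoded; `wait()` then blocks until that has happened. A toy illustration of that contract (names invented):

use once_cell::sync::OnceCell;

static KEY: OnceCell<String> = OnceCell::new();

fn initialize() -> Result<(), &'static str> {
    // `set` succeeds exactly once; a second call leaves the value intact.
    KEY.set("secret".to_string()).map_err(|_| "already initialized")
}

fn use_key() -> usize {
    // `wait()` blocks the calling thread until `set` has happened.
    KEY.wait().len()
}

fn main() {
    initialize().unwrap();
    assert_eq!(use_key(), 6);
}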
pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
match jsonwebtoken::encode(&JWT_HEADER, claims, &PRIVATE_RSA_KEY) {
match jsonwebtoken::encode(&JWT_HEADER, claims, PRIVATE_RSA_KEY.wait()) {
Ok(token) => token,
Err(e) => panic!("Error encoding jwt {e}"),
}
@@ -56,7 +98,7 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
validation.set_issuer(&[issuer]);
let token = token.replace(char::is_whitespace, "");
match jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation) {
match jsonwebtoken::decode(&token, PUBLIC_RSA_KEY.wait(), &validation) {
Ok(d) => Ok(d.claims),
Err(err) => match *err.kind() {
ErrorKind::InvalidToken => err!("Token is invalid"),
@@ -112,7 +154,7 @@ pub struct LoginJwtClaims {
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub sub: UserId,
pub premium: bool,
pub name: String,
@@ -133,7 +175,7 @@ pub struct LoginJwtClaims {
// user security_stamp
pub sstamp: String,
// device uuid
pub device: String,
pub device: DeviceId,
// [ "api", "offline_access" ]
pub scope: Vec<String>,
// [ "Application" ]
@@ -149,31 +191,31 @@ pub struct InviteJwtClaims {
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub sub: UserId,
pub email: String,
pub org_id: Option<String>,
pub user_org_id: Option<String>,
pub org_id: OrganizationId,
pub member_id: MembershipId,
pub invited_by_email: Option<String>,
}
pub fn generate_invite_claims(
uuid: String,
user_id: UserId,
email: String,
org_id: Option<String>,
user_org_id: Option<String>,
org_id: OrganizationId,
member_id: MembershipId,
invited_by_email: Option<String>,
) -> InviteJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
InviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_INVITE_ISSUER.to_string(),
sub: uuid,
sub: user_id,
email,
org_id,
user_org_id,
member_id,
invited_by_email,
}
}
@@ -187,28 +229,28 @@ pub struct EmergencyAccessInviteJwtClaims {
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub sub: UserId,
pub email: String,
pub emer_id: String,
pub emer_id: EmergencyAccessId,
pub grantor_name: String,
pub grantor_email: String,
}
pub fn generate_emergency_access_invite_claims(
uuid: String,
user_id: UserId,
email: String,
emer_id: String,
emer_id: EmergencyAccessId,
grantor_name: String,
grantor_email: String,
) -> EmergencyAccessInviteJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
EmergencyAccessInviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
sub: uuid,
sub: user_id,
email,
emer_id,
grantor_name,
@@ -225,21 +267,24 @@ pub struct OrgApiKeyLoginJwtClaims {
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub sub: OrgApiKeyId,
pub client_id: String,
pub client_sub: String,
pub client_sub: OrganizationId,
pub scope: Vec<String>,
}
pub fn generate_organization_api_key_login_claims(uuid: String, org_id: String) -> OrgApiKeyLoginJwtClaims {
let time_now = Utc::now().naive_utc();
pub fn generate_organization_api_key_login_claims(
org_api_key_uuid: OrgApiKeyId,
org_id: OrganizationId,
) -> OrgApiKeyLoginJwtClaims {
let time_now = Utc::now();
OrgApiKeyLoginJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(1)).timestamp(),
exp: (time_now + TimeDelta::try_hours(1).unwrap()).timestamp(),
iss: JWT_ORG_API_KEY_ISSUER.to_string(),
sub: uuid,
client_id: format!("organization.{org_id}"),
sub: org_api_key_uuid,
client_id: format!("organization.{}", org_id),
client_sub: org_id,
scope: vec!["api.organization".into()],
}
@@ -254,18 +299,18 @@ pub struct FileDownloadClaims {
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub sub: CipherId,
pub file_id: String,
pub file_id: AttachmentId,
}
pub fn generate_file_download_claims(uuid: String, file_id: String) -> FileDownloadClaims {
let time_now = Utc::now().naive_utc();
pub fn generate_file_download_claims(cipher_id: CipherId, file_id: AttachmentId) -> FileDownloadClaims {
let time_now = Utc::now();
FileDownloadClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(5)).timestamp(),
exp: (time_now + TimeDelta::try_minutes(5).unwrap()).timestamp(),
iss: JWT_FILE_DOWNLOAD_ISSUER.to_string(),
sub: uuid,
sub: cipher_id,
file_id,
}
}
@@ -283,42 +328,42 @@ pub struct BasicJwtClaims {
}
pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_DELETE_ISSUER.to_string(),
sub: uuid,
}
}
pub fn generate_verify_email_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
pub fn generate_verify_email_claims(user_id: UserId) -> BasicJwtClaims {
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
sub: uuid,
sub: user_id.to_string(),
}
}
pub fn generate_admin_claims() -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(CONFIG.admin_session_lifetime())).timestamp(),
exp: (time_now + TimeDelta::try_minutes(CONFIG.admin_session_lifetime()).unwrap()).timestamp(),
iss: JWT_ADMIN_ISSUER.to_string(),
sub: "admin_panel".to_string(),
}
}
pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
pub fn generate_send_claims(send_id: &SendId, file_id: &SendFileId) -> BasicJwtClaims {
let time_now = Utc::now();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(2)).timestamp(),
exp: (time_now + TimeDelta::try_minutes(2).unwrap()).timestamp(),
iss: JWT_SEND_ISSUER.to_string(),
sub: format!("{send_id}/{file_id}"),
}
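The chrono migration from `Duration::minutes(..)` to `TimeDelta::try_minutes(..).unwrap()` is mechanical: the `try_*` constructors return None only on i64 overflow, so unwrapping small constants is safe. A standalone illustration:

use chrono::{TimeDelta, Utc};

fn main() {
    let now = Utc::now();
    // Same pattern as the claims above: small constants cannot overflow.
    let exp = (now + TimeDelta::try_minutes(2).unwrap()).timestamp();
    assert!(exp > now.timestamp());
    println!("nbf={} exp={}", now.timestamp(), exp);
}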
@@ -333,7 +378,7 @@ use rocket::{
};
use crate::db::{
models::{Collection, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
models::{Collection, Device, Membership, MembershipStatus, MembershipType, User, UserStampException},
DbConn,
};
@@ -355,8 +400,6 @@ impl<'r> FromRequest<'r> for Host {
referer.to_string()
} else {
// Try to guess from the headers
use std::env;
let protocol = if let Some(proto) = headers.get_one("X-Forwarded-Proto") {
proto
} else if env::var("ROCKET_TLS").is_ok() {
@@ -367,10 +410,8 @@ impl<'r> FromRequest<'r> for Host {
let host = if let Some(host) = headers.get_one("X-Forwarded-Host") {
host
} else if let Some(host) = headers.get_one("Host") {
host
} else {
""
headers.get_one("Host").unwrap_or_default()
};
format!("{protocol}://{host}")
@@ -383,7 +424,6 @@ impl<'r> FromRequest<'r> for Host {
}
pub struct ClientHeaders {
pub host: String,
pub device_type: i32,
pub ip: ClientIp,
}
@@ -393,7 +433,6 @@ impl<'r> FromRequest<'r> for ClientHeaders {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let host = try_outcome!(Host::from_request(request).await).host;
let ip = match ClientIp::from_request(request).await {
Outcome::Success(ip) => ip,
_ => err_handler!("Error getting Client IP"),
@@ -403,7 +442,6 @@ impl<'r> FromRequest<'r> for ClientHeaders {
request.headers().get_one("device-type").map(|d| d.parse().unwrap_or(14)).unwrap_or_else(|| 14);
Outcome::Success(ClientHeaders {
host,
device_type,
ip,
})
@@ -440,42 +478,38 @@ impl<'r> FromRequest<'r> for Headers {
};
// Check JWT token is valid and get device and user from it
let claims = match decode_login(access_token) {
Ok(claims) => claims,
Err(_) => err_handler!("Invalid claim"),
let Ok(claims) = decode_login(access_token) else {
err_handler!("Invalid claim")
};
let device_uuid = claims.device;
let user_uuid = claims.sub;
let device_id = claims.device;
let user_id = claims.sub;
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let device = match Device::find_by_uuid_and_user(&device_uuid, &user_uuid, &mut conn).await {
Some(device) => device,
None => err_handler!("Invalid device id"),
let Some(device) = Device::find_by_uuid_and_user(&device_id, &user_id, &mut conn).await else {
err_handler!("Invalid device id")
};
let user = match User::find_by_uuid(&user_uuid, &mut conn).await {
Some(user) => user,
None => err_handler!("Device has no user associated"),
let Some(user) = User::find_by_uuid(&user_id, &mut conn).await else {
err_handler!("Device has no user associated")
};
if user.security_stamp != claims.sstamp {
if let Some(stamp_exception) =
user.stamp_exception.as_deref().and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
{
let current_route = match request.route().and_then(|r| r.name.as_deref()) {
Some(name) => name,
_ => err_handler!("Error getting current route for stamp exception"),
let Some(current_route) = request.route().and_then(|r| r.name.as_deref()) else {
err_handler!("Error getting current route for stamp exception")
};
// Check if the stamp exception has expired first.
// Then, check if the current route matches any of the allowed routes.
// After that check the stamp in exception matches the one in the claims.
if Utc::now().naive_utc().timestamp() > stamp_exception.expire {
if Utc::now().timestamp() > stamp_exception.expire {
// If the stamp exception has been expired remove it from the database.
// This prevents checking this stamp exception for new requests.
let mut user = user;
@@ -507,9 +541,8 @@ pub struct OrgHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub org_user: UserOrganization,
pub org_id: String,
pub membership_type: MembershipType,
pub membership: Membership,
pub ip: ClientIp,
}
@@ -523,35 +556,28 @@ impl<'r> FromRequest<'r> for OrgHeaders {
// org_id is usually the second path param ("/organizations/<org_id>"),
// but there are cases where it is a query value.
// First check the path, if this is not a valid uuid, try the query values.
let url_org_id: Option<&str> = {
let mut url_org_id = None;
if let Some(Ok(org_id)) = request.param::<&str>(1) {
if uuid::Uuid::parse_str(org_id).is_ok() {
url_org_id = Some(org_id);
}
let url_org_id: Option<OrganizationId> = {
if let Some(Ok(org_id)) = request.param::<OrganizationId>(1) {
Some(org_id.clone())
} else if let Some(Ok(org_id)) = request.query_value::<OrganizationId>("organizationId") {
Some(org_id.clone())
} else {
None
}
if let Some(Ok(org_id)) = request.query_value::<&str>("organizationId") {
if uuid::Uuid::parse_str(org_id).is_ok() {
url_org_id = Some(org_id);
}
}
url_org_id
};
match url_org_id {
Some(org_id) => {
Some(org_id) if uuid::Uuid::parse_str(&org_id).is_ok() => {
let mut conn = match DbConn::from_request(request).await {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
let user = headers.user;
let org_user = match UserOrganization::find_by_user_and_org(&user.uuid, org_id, &mut conn).await {
Some(user) => {
if user.status == UserOrgStatus::Confirmed as i32 {
user
let membership = match Membership::find_by_user_and_org(&user.uuid, &org_id, &mut conn).await {
Some(member) => {
if member.status == MembershipStatus::Confirmed as i32 {
member
} else {
err_handler!("The current user isn't confirmed member of the organization")
}
@@ -563,16 +589,15 @@ impl<'r> FromRequest<'r> for OrgHeaders {
host: headers.host,
device: headers.device,
user,
org_user_type: {
if let Some(org_usr_type) = UserOrgType::from_i32(org_user.atype) {
membership_type: {
if let Some(org_usr_type) = MembershipType::from_i32(membership.atype) {
org_usr_type
} else {
// This should only happen if the DB is corrupted
err_handler!("Unknown user type in the database")
}
},
org_user,
org_id: String::from(org_id),
membership,
ip: headers.ip,
})
}
@@ -585,9 +610,9 @@ pub struct AdminHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub client_version: Option<String>,
pub membership_type: MembershipType,
pub ip: ClientIp,
pub org_id: OrganizationId,
}
#[rocket::async_trait]
@@ -596,15 +621,14 @@ impl<'r> FromRequest<'r> for AdminHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
let client_version = request.headers().get_one("Bitwarden-Client-Version").map(String::from);
if headers.org_user_type >= UserOrgType::Admin {
if headers.membership_type >= MembershipType::Admin {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
client_version,
membership_type: headers.membership_type,
ip: headers.ip,
org_id: headers.membership.org_uuid,
})
} else {
err_handler!("You need to be Admin or Owner to call this endpoint")
@@ -626,16 +650,16 @@ impl From<AdminHeaders> for Headers {
// col_id is usually the fourth path param ("/organizations/<org_id>/collections/<col_id>"),
// but there could be cases where it is a query value.
// First check the path, if this is not a valid uuid, try the query values.
fn get_col_id(request: &Request<'_>) -> Option<String> {
fn get_col_id(request: &Request<'_>) -> Option<CollectionId> {
if let Some(Ok(col_id)) = request.param::<String>(3) {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
return Some(col_id.into());
}
}
if let Some(Ok(col_id)) = request.query_value::<String>("collectionId") {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
return Some(col_id.into());
}
}
@@ -649,8 +673,8 @@ pub struct ManagerHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub ip: ClientIp,
pub org_id: OrganizationId,
}
#[rocket::async_trait]
@@ -659,7 +683,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type >= UserOrgType::Manager {
if headers.membership_type >= MembershipType::Manager {
match get_col_id(request) {
Some(col_id) => {
let mut conn = match DbConn::from_request(request).await {
@@ -667,7 +691,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
_ => err_handler!("Error getting DB"),
};
if !can_access_collection(&headers.org_user, &col_id, &mut conn).await {
if !Collection::can_access_collection(&headers.membership, &col_id, &mut conn).await {
err_handler!("The current user isn't a manager for this collection")
}
}
@@ -678,8 +702,8 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
ip: headers.ip,
org_id: headers.membership.org_uuid,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
@@ -704,8 +728,7 @@ pub struct ManagerHeadersLoose {
pub host: String,
pub device: Device,
pub user: User,
pub org_user: UserOrganization,
pub org_user_type: UserOrgType,
pub membership: Membership,
pub ip: ClientIp,
}
@@ -715,13 +738,12 @@ impl<'r> FromRequest<'r> for ManagerHeadersLoose {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type >= UserOrgType::Manager {
if headers.membership_type >= MembershipType::Manager {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user: headers.org_user,
org_user_type: headers.org_user_type,
membership: headers.membership,
ip: headers.ip,
})
} else {
@@ -740,22 +762,18 @@ impl From<ManagerHeadersLoose> for Headers {
}
}
}
async fn can_access_collection(org_user: &UserOrganization, col_id: &str, conn: &mut DbConn) -> bool {
org_user.has_full_access()
|| Collection::has_access_by_collection_and_user_uuid(col_id, &org_user.user_uuid, conn).await
}
impl ManagerHeaders {
pub async fn from_loose(
h: ManagerHeadersLoose,
collections: &Vec<String>,
collections: &Vec<CollectionId>,
conn: &mut DbConn,
) -> Result<ManagerHeaders, Error> {
for col_id in collections {
if uuid::Uuid::parse_str(col_id).is_err() {
if uuid::Uuid::parse_str(col_id.as_ref()).is_err() {
err!("Collection Id is malformed!");
}
if !can_access_collection(&h.org_user, col_id, conn).await {
if !Collection::can_access_collection(&h.membership, col_id, conn).await {
err!("You don't have access to all collections!");
}
}
@@ -764,17 +782,17 @@ impl ManagerHeaders {
host: h.host,
device: h.device,
user: h.user,
org_user_type: h.org_user_type,
ip: h.ip,
org_id: h.membership.org_uuid,
})
}
}
pub struct OwnerHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub ip: ClientIp,
pub org_id: OrganizationId,
}
#[rocket::async_trait]
@@ -783,12 +801,12 @@ impl<'r> FromRequest<'r> for OwnerHeaders {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.org_user_type == UserOrgType::Owner {
if headers.membership_type == MembershipType::Owner {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
ip: headers.ip,
org_id: headers.membership.org_uuid,
})
} else {
err_handler!("You need to be Owner to call this endpoint")
@@ -796,10 +814,33 @@ impl<'r> FromRequest<'r> for OwnerHeaders {
}
}
pub struct OrgMemberHeaders {
pub host: String,
pub user: User,
pub org_id: OrganizationId,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for OrgMemberHeaders {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = try_outcome!(OrgHeaders::from_request(request).await);
if headers.membership_type >= MembershipType::User {
Outcome::Success(Self {
host: headers.host,
user: headers.user,
org_id: headers.membership.org_uuid,
})
} else {
err_handler!("You need to be a Member of the Organization to call this endpoint")
}
}
}
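A hypothetical route showing how the new guard is meant to be consumed (path and handler name invented, usual module imports assumed): only a confirmed member of the organization in the path reaches the handler body, and because OrgHeaders now prefers the path parameter over the `organizationId` query value, a crafted query string can no longer point the guard at a different organization:

// Hypothetical endpoint, for illustration only.
#[get("/organizations/<org_id>/ping")]
fn org_ping(org_id: OrganizationId, headers: OrgMemberHeaders) -> EmptyResult {
    // `headers.org_id` was resolved from this same path parameter and the
    // caller's membership was verified before the handler body runs.
    let _ = (org_id, headers);
    Ok(())
}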
//
// Client IP address detection
//
use std::net::IpAddr;
pub struct ClientIp {
pub ip: IpAddr,
@@ -832,6 +873,35 @@ impl<'r> FromRequest<'r> for ClientIp {
}
}
pub struct Secure {
pub https: bool,
}
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Secure {
type Error = ();
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
// Try to guess from the headers
let protocol = match headers.get_one("X-Forwarded-Proto") {
Some(proto) => proto,
None => {
if env::var("ROCKET_TLS").is_ok() {
"https"
} else {
"http"
}
}
};
Outcome::Success(Secure {
https: protocol == "https",
})
}
}
pub struct WsAccessTokenHeader {
pub access_token: Option<String>,
}
@@ -854,3 +924,24 @@ impl<'r> FromRequest<'r> for WsAccessTokenHeader {
})
}
}
pub struct ClientVersion(pub semver::Version);
#[rocket::async_trait]
impl<'r> FromRequest<'r> for ClientVersion {
type Error = &'static str;
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
let headers = request.headers();
let Some(version) = headers.get_one("Bitwarden-Client-Version") else {
err_handler!("No Bitwarden-Client-Version header provided")
};
let Ok(version) = semver::Version::parse(version) else {
err_handler!("Invalid Bitwarden-Client-Version header provided")
};
Outcome::Success(ClientVersion(version))
}
}
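ClientVersion makes version-gated behavior straightforward; for example, a hypothetical cutoff check (the version value is invented for illustration):

// Illustration only: gate a response shape on the client's version.
fn needs_legacy_payload(v: &ClientVersion) -> bool {
    v.0 < semver::Version::new(2024, 6, 0)
}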


@@ -1,6 +1,11 @@
use std::env::consts::EXE_SUFFIX;
use std::process::exit;
use std::sync::RwLock;
use std::{
env::consts::EXE_SUFFIX,
process::exit,
sync::{
atomic::{AtomicBool, Ordering},
RwLock,
},
};
use job_scheduler_ng::Schedule;
use once_cell::sync::Lazy;
@@ -9,7 +14,7 @@ use reqwest::Url;
use crate::{
db::DbConnType,
error::Error,
util::{get_env, get_env_bool, parse_experimental_client_feature_flags},
util::{get_env, get_env_bool, get_web_vault_version, is_valid_email, parse_experimental_client_feature_flags},
};
static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
@@ -17,6 +22,8 @@ static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
get_env("CONFIG_FILE").unwrap_or_else(|| format!("{data_folder}/config.json"))
});
pub static SKIP_CONFIG_VALIDATION: AtomicBool = AtomicBool::new(false);
pub static CONFIG: Lazy<Config> = Lazy::new(|| {
Config::load().unwrap_or_else(|e| {
println!("Error loading config:\n {e:?}\n");
@@ -39,7 +46,6 @@ macro_rules! make_config {
struct Inner {
rocket_shutdown_handle: Option<rocket::Shutdown>,
ws_shutdown_handle: Option<tokio::sync::oneshot::Sender<()>>,
templates: Handlebars<'static>,
config: ConfigItems,
@@ -110,6 +116,14 @@ macro_rules! make_config {
serde_json::from_str(&config_str).map_err(Into::into)
}
fn clear_non_editable(&mut self) {
$($(
if !$editable {
self.$name = None;
}
)+)+
}
/// Merges the values of both builders into a new builder.
/// If both have the same element, `other` wins.
fn merge(&self, other: &Self, show_overrides: bool, overrides: &mut Vec<String>) -> Self {
@@ -147,6 +161,12 @@ macro_rules! make_config {
config.signups_domains_whitelist = config.signups_domains_whitelist.trim().to_lowercase();
config.org_creation_users = config.org_creation_users.trim().to_lowercase();
// Copy the values from the deprecated flags to the new ones
if config.http_request_block_regex.is_none() {
config.http_request_block_regex = config.icon_blacklist_regex.clone();
}
config
}
}
@@ -228,6 +248,7 @@ macro_rules! make_config {
// Besides Pass, only String types will be masked via _privacy_mask.
const PRIVACY_CONFIG: &[&str] = &[
"allowed_iframe_ancestors",
"allowed_connect_src",
"database_url",
"domain_origin",
"domain_path",
@@ -238,6 +259,7 @@ macro_rules! make_config {
"smtp_from",
"smtp_host",
"smtp_username",
"_smtp_img_src",
];
let cfg = {
@@ -326,7 +348,7 @@ macro_rules! make_config {
}
}
}};
( @build $value:expr, $config:expr, gen, $default_fn:expr ) => {{
( @build $value:expr, $config:expr, generated, $default_fn:expr ) => {{
let f: &dyn Fn(&ConfigItems) -> _ = &$default_fn;
f($config)
}};
@@ -344,10 +366,10 @@ macro_rules! make_config {
// }
//
// Where action applied when the value wasn't provided and can be:
// def: Use a default value
// auto: Value is auto generated based on other values
// option: Value is optional
// gen: Value is always autogenerated and it's original value ignored
// def: Use a default value
// auto: Value is auto generated based on other values
// option: Value is optional
// generated: Value is always autogenerated and it's original value ignored
make_config! {
folders {
/// Data folder |> Main data folder
@@ -361,7 +383,7 @@ make_config! {
/// Sends folder
sends_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "sends");
/// Temp folder |> Used for storing temporary file uploads
tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
/// Templates folder
templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates");
/// Session JWT key
@@ -371,11 +393,7 @@ make_config! {
},
ws {
/// Enable websocket notifications
websocket_enabled: bool, false, def, false;
/// Websocket address
websocket_address: String, false, def, "0.0.0.0".to_string();
/// Websocket port
websocket_port: u16, false, def, 3012;
enable_websocket: bool, false, def, true;
},
push {
/// Enable push notifications
@@ -414,7 +432,9 @@ make_config! {
/// Auth Request cleanup schedule |> Cron schedule of the job that cleans old auth requests from the auth request.
/// Defaults to every minute. Set blank to disable this job.
auth_request_purge_schedule: String, false, def, "30 * * * * *".to_string();
/// Duo Auth context cleanup schedule |> Cron schedule of the job that cleans expired Duo contexts from the database. Does nothing if Duo MFA is disabled or set to use the legacy iframe prompt.
/// Defaults to once every minute. Set blank to disable this job.
duo_context_purge_schedule: String, false, def, "30 * * * * *".to_string();
},
/// General settings
@@ -489,11 +509,11 @@ make_config! {
/// Password iterations |> Number of server-side passwords hashing iterations for the password hash.
/// The default for new users. If changed, it will be updated during login for existing users.
password_iterations: i32, true, def, 600_000;
/// Allow password hints |> Controls whether users can set password hints. This setting applies globally to all users.
/// Allow password hints |> Controls whether users can set or show password hints. This setting applies globally to all users.
password_hints_allowed: bool, true, def, true;
/// Show password hint |> Controls whether a password hint should be shown directly in the web page
/// if SMTP service is not configured. Not recommended for publicly-accessible instances as this
/// provides unauthenticated access to potentially sensitive data.
/// Show password hint (Know the risks!) |> Controls whether a password hint should be shown directly in the web page
/// if SMTP service is not configured and password hints are allowed. Not recommended for publicly-accessible instances
/// because this provides unauthenticated access to potentially sensitive data.
show_password_hint: bool, true, def, false;
/// Admin token/Argon2 PHC |> The plain text token or Argon2 PHC string used to authenticate in this very same page. Changing it here will not deauthorize the current session!
@@ -512,7 +532,7 @@ make_config! {
/// Set to the string "none" (without quotes), to disable any headers and just use the remote IP
ip_header: String, true, def, "X-Real-IP".to_string();
/// Internal IP header property, used to avoid recomputing each time
_ip_header_enabled: bool, false, gen, |c| &c.ip_header.trim().to_lowercase() != "none";
_ip_header_enabled: bool, false, generated, |c| &c.ip_header.trim().to_lowercase() != "none";
/// Icon service |> The predefined icon services are: internal, bitwarden, duckduckgo, google.
/// To specify a custom icon service, set a URL template with exactly one instance of `{}`,
/// which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
@@ -521,9 +541,9 @@ make_config! {
/// corresponding icon at the external service.
icon_service: String, false, def, "internal".to_string();
/// _icon_service_url
_icon_service_url: String, false, gen, |c| generate_icon_service_url(&c.icon_service);
_icon_service_url: String, false, generated, |c| generate_icon_service_url(&c.icon_service);
/// _icon_service_csp
_icon_service_csp: String, false, gen, |c| generate_icon_service_csp(&c.icon_service, &c._icon_service_url);
_icon_service_csp: String, false, generated, |c| generate_icon_service_csp(&c.icon_service, &c._icon_service_url);
/// Icon redirect code |> The HTTP status code to use for redirects to an external icon service.
/// The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
/// Temporary redirects are useful while testing different icon services, but once a service
@@ -536,12 +556,18 @@ make_config! {
icon_cache_negttl: u64, true, def, 259_200;
/// Icon download timeout |> Number of seconds when to stop attempting to download an icon.
icon_download_timeout: u64, true, def, 10;
/// Icon blacklist Regex |> Any domains or IPs that match this regex won't be fetched by the icon service.
/// [Deprecated] Icon blacklist Regex |> Use `http_request_block_regex` instead
icon_blacklist_regex: String, false, option;
/// [Deprecated] Icon blacklist non global IPs |> Use `http_request_block_non_global_ips` instead
icon_blacklist_non_global_ips: bool, false, def, true;
/// Block HTTP domains/IPs by Regex |> Any domains or IPs that match this regex won't be fetched by the internal HTTP client.
/// Useful to hide other servers in the local network. Check the WIKI for more details
icon_blacklist_regex: String, true, option;
/// Icon blacklist non global IPs |> Any IP which is not defined as a global IP will be blacklisted.
http_request_block_regex: String, true, option;
/// Block non global IPs |> Enabling this will cause the internal HTTP client to refuse to connect to any non global IP address.
/// Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
icon_blacklist_non_global_ips: bool, true, def, true;
http_request_block_non_global_ips: bool, true, auto, |c| c.icon_blacklist_non_global_ips;
/// Disable Two-Factor remember |> Enabling this would force the users to use a second factor to login every time.
/// Note that the checkbox would still be present, but ignored.
@@ -569,8 +595,9 @@ make_config! {
use_syslog: bool, false, def, false;
/// Log file path
log_file: String, false, option;
/// Log level
log_level: String, false, def, "Info".to_string();
/// Log level |> Valid values are "trace", "debug", "info", "warn", "error" and "off"
/// For a specific module append it as a comma separated value "info,path::to::module=debug"
log_level: String, false, def, "info".to_string();
/// Enable DB WAL |> Turning this off might lead to worse performance, but might help if using vaultwarden on some exotic filesystems,
/// that do not support WAL. Please make sure you read project wiki on the topic before changing this setting.
@@ -594,6 +621,9 @@ make_config! {
/// Allowed iframe ancestors (Know the risks!) |> Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
allowed_iframe_ancestors: String, true, def, String::new();
/// Allowed connect-src (Know the risks!) |> Allows other domains to URLs which can be loaded using script interfaces like the Forwarded email alias feature
allowed_connect_src: String, true, def, String::new();
/// Seconds between login requests |> Number of seconds, on average, between login and 2FA requests from the same IP address before rate limiting kicks in
login_ratelimit_seconds: u64, false, def, 60;
/// Max burst size for login requests |> Allow a burst of requests of up to this size, while maintaining the average indicated by `login_ratelimit_seconds`. Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2
@@ -608,7 +638,18 @@ make_config! {
admin_session_lifetime: i64, true, def, 20;
/// Enable groups (BETA!) (Know the risks!) |> Enables groups support for organizations (Currently contains known issues!).
org_groups_enabled: bool, false, def, false;
org_groups_enabled: bool, false, def, false;
/// Increase note size limit (Know the risks!) |> Sets the secure note size limit to 100_000 instead of the default 10_000.
/// WARNING: This could cause issues with clients. Also exports will not work on Bitwarden servers!
increase_note_size_limit: bool, true, def, false;
/// Generated max_note_size value to prevent if..else matching during every check
_max_note_size: usize, false, generated, |c| if c.increase_note_size_limit {100_000} else {10_000};
/// Enforce Single Org with Reset Password Policy |> Enforce that the Single Org policy is enabled before setting the Reset Password policy
/// Bitwarden enforces this by default. In Vaultwarden we encouraged the use of multiple organizations because groups were not available.
/// Setting this to true will enforce the Single Org Policy to be enabled before you can enable the Reset Password policy.
enforce_single_org_with_reset_pw_policy: bool, false, def, false;
},
/// Yubikey settings
@@ -627,6 +668,8 @@ make_config! {
duo: _enable_duo {
/// Enabled
_enable_duo: bool, true, def, true;
/// Attempt to use deprecated iframe-based Traditional Prompt (Duo WebSDK 2)
duo_use_iframe: bool, false, def, false;
/// Integration Key
duo_ikey: String, true, option;
/// Secret Key
@@ -644,7 +687,7 @@ make_config! {
/// Use Sendmail |> Whether to send mail via the `sendmail` command
use_sendmail: bool, true, def, false;
/// Sendmail Command |> Which sendmail command to use. The one found in the $PATH is used if not specified.
sendmail_command: String, true, option;
sendmail_command: String, false, option;
/// Host
smtp_host: String, true, option;
/// DEPRECATED smtp_ssl |> DEPRECATED - Please use SMTP_SECURITY
@@ -672,7 +715,7 @@ make_config! {
/// Embed images as email attachments.
smtp_embed_images: bool, true, def, true;
/// _smtp_img_src
_smtp_img_src: String, false, gen, |c| generate_smtp_img_src(c.smtp_embed_images, &c.domain);
_smtp_img_src: String, false, generated, |c| generate_smtp_img_src(c.smtp_embed_images, &c.domain);
/// Enable SMTP debugging (Know the risks!) |> DANGEROUS: Enabling this will output very detailed SMTP messages. This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
smtp_debug: bool, false, def, false;
/// Accept Invalid Certs (Know the risks!) |> DANGEROUS: Allow invalid certificates. This option introduces significant vulnerabilities to man-in-the-middle attacks!
@@ -691,6 +734,10 @@ make_config! {
email_expiration_time: u64, true, def, 600;
/// Maximum attempts |> Maximum attempts before an email token is reset and a new email will need to be sent
email_attempts_limit: u64, true, def, 3;
/// Automatically enforce at login |> Set up the email 2FA provider regardless of any organization policy
email_2fa_enforce_on_verified_invite: bool, true, def, false;
/// Auto-enable 2FA (Know the risks!) |> Automatically set up email 2FA as a fallback provider when needed
email_2fa_auto_fallback: bool, true, def, false;
},
}
@@ -728,6 +775,13 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
);
}
let connect_src = cfg.allowed_connect_src.to_lowercase();
for url in connect_src.split_whitespace() {
if !url.starts_with("https://") || Url::parse(url).is_err() {
err!("ALLOWED_CONNECT_SRC variable contains one or more invalid URLs. Only FQDNs starting with https are allowed");
}
}
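In other words, each whitespace-separated entry must be a parseable https:// URL. A standalone restatement of the check with illustrative values (the domains below are examples only):

use reqwest::Url;

fn connect_src_is_valid(value: &str) -> bool {
    value
        .to_lowercase()
        .split_whitespace()
        .all(|u| u.starts_with("https://") && Url::parse(u).is_ok())
}

fn main() {
    assert!(connect_src_is_valid("https://api.simplelogin.io https://app.addy.io"));
    assert!(!connect_src_is_valid("http://insecure.example.com"));
}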
let whitelist = &cfg.signups_domains_whitelist;
if !whitelist.is_empty() && whitelist.split(',').any(|d| d.trim().is_empty()) {
err!("`SIGNUPS_DOMAINS_WHITELIST` contains empty tokens");
@@ -779,8 +833,16 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
// TODO: deal with deprecated flags so they can be removed from this list, cf. #4263
const KNOWN_FLAGS: &[&str] =
&["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];
const KNOWN_FLAGS: &[&str] = &[
"autofill-overlay",
"autofill-v2",
"browser-fileless-import",
"extension-refresh",
"fido2-vault-credentials",
"inline-menu-positioning-improvements",
"ssh-key-vault-item",
"ssh-agent",
];
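The flags arrive through the experimental_client_feature_flags setting as a comma-separated list; a sketch of the assumed parsing (the real `parse_experimental_client_feature_flags` helper lives in util.rs):

use std::collections::HashMap;

// Assumption: each listed flag simply maps to `true`.
fn parse_flags(input: &str) -> HashMap<String, bool> {
    input
        .split(',')
        .map(|f| (f.trim().to_string(), true))
        .filter(|(f, _)| !f.is_empty())
        .collect()
}

fn main() {
    let flags = parse_flags("ssh-key-vault-item,ssh-agent");
    assert!(flags.contains_key("ssh-agent"));
}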
let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
if !invalid_flags.is_empty() {
@@ -841,12 +903,12 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
let command = cfg.sendmail_command.clone().unwrap_or_else(|| format!("sendmail{EXE_SUFFIX}"));
let mut path = std::path::PathBuf::from(&command);
// Check if we can find the sendmail command to execute when no absolute path is given
if !path.is_absolute() {
match which::which(&command) {
Ok(result) => path = result,
Err(_) => err!(format!("sendmail command {command:?} not found in $PATH")),
}
let Ok(which_path) = which::which(&command) else {
err!(format!("sendmail command {command} not found in $PATH"))
};
path = which_path;
}
match path.metadata() {
@@ -880,8 +942,8 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
}
if (cfg.smtp_host.is_some() || cfg.use_sendmail) && !cfg.smtp_from.contains('@') {
err!("SMTP_FROM does not contain a mandatory @ sign")
if (cfg.smtp_host.is_some() || cfg.use_sendmail) && !is_valid_email(&cfg.smtp_from) {
err!(format!("SMTP_FROM '{}' is not a valid email address", cfg.smtp_from))
}
if cfg._enable_email_2fa && cfg.email_token_size < 6 {
@@ -893,12 +955,19 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
err!("To enable email 2FA, a mail transport must be configured")
}
// Check if the icon blacklist regex is valid
if let Some(ref r) = cfg.icon_blacklist_regex {
if !cfg._enable_email_2fa && cfg.email_2fa_enforce_on_verified_invite {
err!("To enforce email 2FA on verified invitations, email 2FA has to be enabled!");
}
if !cfg._enable_email_2fa && cfg.email_2fa_auto_fallback {
err!("To use email 2FA as automatic fallback, email 2FA has to be enabled!");
}
// Check if the HTTP request block regex is valid
if let Some(ref r) = cfg.http_request_block_regex {
let validate_regex = regex::Regex::new(r);
match validate_regex {
Ok(_) => (),
Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {e:#?}")),
Err(e) => err!(format!("`HTTP_REQUEST_BLOCK_REGEX` is invalid: {e:#?}")),
}
}
@@ -978,6 +1047,11 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
_ => {}
}
}
if cfg.increase_note_size_limit {
println!("[WARNING] Secure Note size limit is increased to 100_000!");
println!("[WARNING] This could cause issues with clients. Also exports will not work on Bitwarden servers!");
}
Ok(())
}
@@ -1066,12 +1140,13 @@ impl Config {
// Fill any missing with defaults
let config = builder.build();
validate_config(&config)?;
if !SKIP_CONFIG_VALIDATION.load(Ordering::Relaxed) {
validate_config(&config)?;
}
Ok(Config {
inner: RwLock::new(Inner {
rocket_shutdown_handle: None,
ws_shutdown_handle: None,
templates: load_templates(&config.templates_folder),
config,
_env,
@@ -1081,12 +1156,17 @@ impl Config {
})
}
pub fn update_config(&self, other: ConfigBuilder) -> Result<(), Error> {
pub fn update_config(&self, other: ConfigBuilder, ignore_non_editable: bool) -> Result<(), Error> {
// Remove default values
//let builder = other.remove(&self.inner.read().unwrap()._env);
// TODO: Remove values that are defaults, above only checks those set by env and not the defaults
let builder = other;
let mut builder = other;
// Remove values that are not editable
if ignore_non_editable {
builder.clear_non_editable();
}
// Serialize now before we consume the builder
let config_str = serde_json::to_string_pretty(&builder)?;
@@ -1121,7 +1201,7 @@ impl Config {
let mut _overrides = Vec::new();
usr.merge(&other, false, &mut _overrides)
};
self.update_config(builder)
self.update_config(builder, false)
}
/// Tests whether an email's domain is allowed. A domain is allowed if it
@@ -1164,7 +1244,7 @@ impl Config {
}
pub fn delete_user_config(&self) -> Result<(), Error> {
crate::util::delete_file(&CONFIG_FILE)?;
std::fs::remove_file(&*CONFIG_FILE)?;
// Empty user config
let usr = ConfigBuilder::default();
@@ -1187,10 +1267,7 @@ impl Config {
}
pub fn private_rsa_key(&self) -> String {
format!("{}.pem", CONFIG.rsa_key_filename())
}
pub fn public_rsa_key(&self) -> String {
format!("{}.pub.pem", CONFIG.rsa_key_filename())
format!("{}.pem", self.rsa_key_filename())
}
pub fn mail_enabled(&self) -> bool {
let inner = &self.inner.read().unwrap().config;
@@ -1221,35 +1298,28 @@ impl Config {
token.is_some() && !token.unwrap().trim().is_empty()
}
pub fn render_template<T: serde::ser::Serialize>(
&self,
name: &str,
data: &T,
) -> Result<String, crate::error::Error> {
if CONFIG.reload_templates() {
pub fn render_template<T: serde::ser::Serialize>(&self, name: &str, data: &T) -> Result<String, Error> {
if self.reload_templates() {
warn!("RELOADING TEMPLATES");
let hb = load_templates(CONFIG.templates_folder());
hb.render(name, data).map_err(Into::into)
} else {
let hb = &CONFIG.inner.read().unwrap().templates;
let hb = &self.inner.read().unwrap().templates;
hb.render(name, data).map_err(Into::into)
}
}
pub fn render_fallback_template<T: serde::ser::Serialize>(&self, name: &str, data: &T) -> Result<String, Error> {
let hb = &self.inner.read().unwrap().templates;
hb.render(&format!("fallback_{name}"), data).map_err(Into::into)
}
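The `fallback_` prefix registered by the template loader lets code reach the bundled default even after a user template with the same name has overridden it. A hypothetical call site (assuming `serde_json` for the empty data object):

// Hypothetical call site: render the bundled default regardless of any
// user-provided override registered under the plain name.
let css = CONFIG.render_fallback_template("scss/vaultwarden.scss", &serde_json::json!({}))?;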
pub fn set_rocket_shutdown_handle(&self, handle: rocket::Shutdown) {
self.inner.write().unwrap().rocket_shutdown_handle = Some(handle);
}
pub fn set_ws_shutdown_handle(&self, handle: tokio::sync::oneshot::Sender<()>) {
self.inner.write().unwrap().ws_shutdown_handle = Some(handle);
}
pub fn shutdown(&self) {
if let Ok(mut c) = self.inner.write() {
if let Some(handle) = c.ws_shutdown_handle.take() {
handle.send(()).ok();
}
if let Some(handle) = c.rocket_shutdown_handle.take() {
handle.notify();
}
@@ -1271,8 +1341,9 @@ where
hb.set_strict_mode(true);
// Register helpers
hb.register_helper("case", Box::new(case_helper));
hb.register_helper("jsesc", Box::new(js_escape_helper));
hb.register_helper("to_json", Box::new(to_json));
hb.register_helper("webver", Box::new(webver));
hb.register_helper("vwver", Box::new(vwver));
macro_rules! reg {
($name:expr) => {{
@@ -1283,6 +1354,11 @@ where
reg!($name);
reg!(concat!($name, $ext));
}};
(@withfallback $name:expr) => {{
let template = include_str!(concat!("static/templates/", $name, ".hbs"));
hb.register_template_string($name, template).unwrap();
hb.register_template_string(concat!("fallback_", $name), template).unwrap();
}};
}
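For illustration, the `@withfallback` arm registers the same bundled template under two names; `reg!(@withfallback "scss/vaultwarden.scss")` expands to roughly:

let template = include_str!("static/templates/scss/vaultwarden.scss.hbs");
hb.register_template_string("scss/vaultwarden.scss", template).unwrap();
hb.register_template_string("fallback_scss/vaultwarden.scss", template).unwrap();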
// First register default templates here
@@ -1326,17 +1402,13 @@ where
reg!("404");
reg!(@withfallback "scss/vaultwarden.scss");
reg!("scss/user.vaultwarden.scss");
// And then load user templates to overwrite the defaults
// Use .hbs extension for the files
// Templates get registered with their relative name
hb.register_templates_directory(
path,
DirectorySourceOptions {
tpl_extension: ".hbs".to_owned(),
..Default::default()
},
)
.unwrap();
hb.register_templates_directory(path, DirectorySourceOptions::default()).unwrap();
hb
}
@@ -1359,32 +1431,6 @@ fn case_helper<'reg, 'rc>(
}
}
fn js_escape_helper<'reg, 'rc>(
h: &Helper<'rc>,
_r: &'reg Handlebars<'_>,
_ctx: &'rc Context,
_rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,
) -> HelperResult {
let param =
h.param(0).ok_or_else(|| RenderErrorReason::Other(String::from("Param not found for helper \"jsesc\"")))?;
let no_quote = h.param(1).is_some();
let value = param
.value()
.as_str()
.ok_or_else(|| RenderErrorReason::Other(String::from("Param for helper \"jsesc\" is not a String")))?;
let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
if !no_quote {
escaped_value = format!("&quot;{escaped_value}&quot;");
}
out.write(&escaped_value)?;
Ok(())
}
fn to_json<'reg, 'rc>(
h: &Helper<'rc>,
_r: &'reg Handlebars<'_>,
@@ -1401,3 +1447,42 @@ fn to_json<'reg, 'rc>(
out.write(&json)?;
Ok(())
}
// Parse the web-vault version into a semver::Version so it can be compared as smaller or greater than.
// The default is the version at which this feature was added.
static WEB_VAULT_VERSION: Lazy<semver::Version> = Lazy::new(|| {
let vault_version = get_web_vault_version();
// Use a single regex capture to extract version components
let re = regex::Regex::new(r"(\d{4})\.(\d{1,2})\.(\d{1,2})").unwrap();
re.captures(&vault_version)
.and_then(|c| {
(c.len() == 4).then(|| {
format!("{}.{}.{}", c.get(1).unwrap().as_str(), c.get(2).unwrap().as_str(), c.get(3).unwrap().as_str())
})
})
.and_then(|v| semver::Version::parse(&v).ok())
.unwrap_or_else(|| semver::Version::parse("2024.6.2").unwrap())
});
// Parse the Vaultwarden version into a semver::Version so it can be compared as smaller or greater than.
// The default is the version at which this feature was added.
static VW_VERSION: Lazy<semver::Version> = Lazy::new(|| {
let vw_version = crate::VERSION.unwrap_or("1.32.5");
// Use a single regex capture to extract version components
let re = regex::Regex::new(r"(\d{1})\.(\d{1,2})\.(\d{1,2})").unwrap();
re.captures(vw_version)
.and_then(|c| {
(c.len() == 4).then(|| {
format!("{}.{}.{}", c.get(1).unwrap().as_str(), c.get(2).unwrap().as_str(), c.get(3).unwrap().as_str())
})
})
.and_then(|v| semver::Version::parse(&v).ok())
.unwrap_or_else(|| semver::Version::parse("1.32.5").unwrap())
});
handlebars::handlebars_helper!(webver: | web_vault_version: String |
semver::VersionReq::parse(&web_vault_version).expect("Invalid web-vault version compare string").matches(&WEB_VAULT_VERSION)
);
handlebars::handlebars_helper!(vwver: | vw_version: String |
semver::VersionReq::parse(&vw_version).expect("Invalid Vaultwarden version compare string").matches(&VW_VERSION)
);
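These helpers let a template branch on the detected versions, e.g. `{{#if (webver ">=2024.6.2")}}` in a handlebars file. What a helper evaluates at render time boils down to the `semver` crate's requirement matching; a short sketch with an illustrative version number:

// What `webver ">=2024.6.2"` reduces to at render time.
let req = semver::VersionReq::parse(">=2024.6.2").unwrap();
assert!(req.matches(&semver::Version::parse("2025.1.1").unwrap()));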

View File

@@ -6,7 +6,7 @@ use std::num::NonZeroU32;
use data_encoding::{Encoding, HEXLOWER};
use ring::{digest, hmac, pbkdf2};
static DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
const DIGEST_ALG: pbkdf2::Algorithm = pbkdf2::PBKDF2_HMAC_SHA256;
const OUTPUT_LEN: usize = digest::SHA256_OUTPUT_LEN;
pub fn hash_password(secret: &[u8], salt: &[u8], iterations: u32) -> Vec<u8> {
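The body of `hash_password` is elided by this hunk; with `ring`, a PBKDF2-HMAC-SHA256 derivation using the constants above typically looks like the following sketch (not necessarily the exact implementation):

use std::num::NonZeroU32;

// Sketch only: derive OUTPUT_LEN bytes using the DIGEST_ALG declared above.
fn derive_sketch(secret: &[u8], salt: &[u8], iterations: u32) -> Vec<u8> {
    let mut out = vec![0u8; OUTPUT_LEN];
    let iterations = NonZeroU32::new(iterations).expect("iterations must be non-zero");
    ring::pbkdf2::derive(DIGEST_ALG, iterations, salt, secret, &mut out);
    out
}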
@@ -84,14 +84,15 @@ pub fn generate_id<const N: usize>() -> String {
encode_random_bytes::<N>(HEXLOWER)
}
pub fn generate_send_id() -> String {
// Send IDs are globally scoped, so make them longer to avoid collisions.
pub fn generate_send_file_id() -> String {
// Send File IDs are globally scoped, so make them longer to avoid collisions.
generate_id::<32>() // 256 bits
}
pub fn generate_attachment_id() -> String {
use crate::db::models::AttachmentId;
pub fn generate_attachment_id() -> AttachmentId {
// Attachment IDs are scoped to a cipher, so they can be smaller.
generate_id::<10>() // 80 bits
AttachmentId(generate_id::<10>()) // 80 bits
}
/// Generates a numeric token for email-based verifications.

View File

@@ -300,19 +300,17 @@ pub trait FromDb {
impl<T: FromDb> FromDb for Vec<T> {
type Output = Vec<T::Output>;
#[allow(clippy::wrong_self_convention)]
#[inline(always)]
fn from_db(self) -> Self::Output {
self.into_iter().map(crate::db::FromDb::from_db).collect()
self.into_iter().map(FromDb::from_db).collect()
}
}
impl<T: FromDb> FromDb for Option<T> {
type Output = Option<T::Output>;
#[allow(clippy::wrong_self_convention)]
#[inline(always)]
fn from_db(self) -> Self::Output {
self.map(crate::db::FromDb::from_db)
self.map(FromDb::from_db)
}
}
@@ -368,19 +366,21 @@ pub mod models;
/// Creates a back-up of the sqlite database
/// MySQL/MariaDB and PostgreSQL are not supported.
pub async fn backup_database(conn: &mut DbConn) -> Result<(), Error> {
pub async fn backup_database(conn: &mut DbConn) -> Result<String, Error> {
db_run! {@raw conn:
postgresql, mysql {
let _ = conn;
err!("PostgreSQL and MySQL/MariaDB do not support this backup feature");
}
sqlite {
use std::path::Path;
let db_url = CONFIG.database_url();
let db_path = Path::new(&db_url).parent().unwrap().to_string_lossy();
let file_date = chrono::Utc::now().format("%Y%m%d_%H%M%S").to_string();
diesel::sql_query(format!("VACUUM INTO '{db_path}/db_{file_date}.sqlite3'")).execute(conn)?;
Ok(())
let db_path = std::path::Path::new(&db_url).parent().unwrap();
let backup_file = db_path
.join(format!("db_{}.sqlite3", chrono::Utc::now().format("%Y%m%d_%H%M%S")))
.to_string_lossy()
.into_owned();
diesel::sql_query(format!("VACUUM INTO '{backup_file}'")).execute(conn)?;
Ok(backup_file)
}
}
}
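Returning the generated path instead of `()` lets callers report where the `VACUUM INTO` file landed. A hypothetical caller:

// Hypothetical caller using the new return value.
match backup_database(&mut conn).await {
    Ok(backup_file) => info!("SQLite backup written to {backup_file}"),
    Err(e) => warn!("Backup failed: {e:?}"),
}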
@@ -389,13 +389,13 @@ pub async fn backup_database(conn: &mut DbConn) -> Result<(), Error> {
pub async fn get_sql_server_version(conn: &mut DbConn) -> String {
db_run! {@raw conn:
postgresql, mysql {
sql_function!{
define_sql_function!{
fn version() -> diesel::sql_types::Text;
}
diesel::select(version()).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())
}
sqlite {
sql_function!{
define_sql_function!{
fn sqlite_version() -> diesel::sql_types::Text;
}
diesel::select(sqlite_version()).get_result::<String>(conn).unwrap_or_else(|_| "Unknown".to_string())

View File

@@ -1,9 +1,12 @@
use std::io::ErrorKind;
use bigdecimal::{BigDecimal, ToPrimitive};
use derive_more::{AsRef, Deref, Display};
use serde_json::Value;
use super::{CipherId, OrganizationId, UserId};
use crate::CONFIG;
use macros::IdFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,8 +14,8 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(id))]
pub struct Attachment {
pub id: String,
pub cipher_uuid: String,
pub id: AttachmentId,
pub cipher_uuid: CipherId,
pub file_name: String, // encrypted
pub file_size: i64,
pub akey: Option<String>,
@@ -21,7 +24,13 @@ db_object! {
/// Local methods
impl Attachment {
pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i64, akey: Option<String>) -> Self {
pub const fn new(
id: AttachmentId,
cipher_uuid: CipherId,
file_name: String,
file_size: i64,
akey: Option<String>,
) -> Self {
Self {
id,
cipher_uuid,
@@ -42,13 +51,13 @@ impl Attachment {
pub fn to_json(&self, host: &str) -> Value {
json!({
"Id": self.id,
"Url": self.get_url(host),
"FileName": self.file_name,
"Size": self.file_size.to_string(),
"SizeName": crate::util::get_display_size(self.file_size),
"Key": self.akey,
"Object": "attachment"
"id": self.id,
"url": self.get_url(host),
"fileName": self.file_name,
"size": self.file_size.to_string(),
"sizeName": crate::util::get_display_size(self.file_size),
"key": self.akey,
"object": "attachment"
})
}
}
@@ -95,7 +104,7 @@ impl Attachment {
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
crate::util::retry(
let _: () = crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
10,
)
@@ -103,7 +112,7 @@ impl Attachment {
let file_path = &self.get_file_path();
match crate::util::delete_file(file_path) {
match std::fs::remove_file(file_path) {
// Ignore "file not found" errors. This can happen when the
// upstream caller has already cleaned up the file as part of
// its own error handling.
@@ -117,14 +126,14 @@ impl Attachment {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
for attachment in Attachment::find_by_cipher(cipher_uuid, conn).await {
attachment.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_id(id: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_id(id: &AttachmentId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::id.eq(id.to_lowercase()))
@@ -134,7 +143,7 @@ impl Attachment {
}}
}
pub async fn find_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq(cipher_uuid))
@@ -144,7 +153,7 @@ impl Attachment {
}}
}
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn size_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<BigDecimal> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -161,7 +170,7 @@ impl Attachment {
}}
}
pub async fn count_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -172,7 +181,7 @@ impl Attachment {
}}
}
pub async fn size_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn size_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<BigDecimal> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -189,7 +198,7 @@ impl Attachment {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -203,7 +212,11 @@ impl Attachment {
// This will return all attachments linked to the user or org.
// No filtering is done here on whether the user actually has access!
// It is only used to speed up the sync process; the matching is done in a different part.
pub async fn find_all_by_user_and_orgs(user_uuid: &str, org_uuids: &Vec<String>, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_all_by_user_and_orgs(
user_uuid: &UserId,
org_uuids: &Vec<OrganizationId>,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
@@ -216,3 +229,20 @@ impl Attachment {
}}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
IdFromParam,
)]
pub struct AttachmentId(pub String);
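The derive-heavy newtype is what turns the former stringly-typed IDs into compile-time-checked ones: `Deref`/`AsRef` keep existing call sites working, while distinct types stop identifiers from being mixed up. A minimal sketch of the payoff (hypothetical function):

// Hypothetical: a function that only accepts attachment IDs.
fn load_attachment(id: &AttachmentId) { /* ... */ }

// Passing a CipherId here now fails to compile instead of silently querying
// with the wrong identifier:
// load_attachment(&cipher_id); // error[E0308]: expected `&AttachmentId`, found `&CipherId`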

View File

@@ -1,5 +1,9 @@
use crate::crypto::ct_eq;
use super::{DeviceId, OrganizationId, UserId};
use crate::{crypto::ct_eq, util::format_date};
use chrono::{NaiveDateTime, Utc};
use derive_more::{AsRef, Deref, Display, From};
use macros::UuidFromParam;
use serde_json::Value;
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset, Deserialize, Serialize)]
@@ -7,15 +11,15 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct AuthRequest {
pub uuid: String,
pub user_uuid: String,
pub organization_uuid: Option<String>,
pub uuid: AuthRequestId,
pub user_uuid: UserId,
pub organization_uuid: Option<OrganizationId>,
pub request_device_identifier: String,
pub request_device_identifier: DeviceId,
pub device_type: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
pub request_ip: String,
pub response_device_id: Option<String>,
pub response_device_id: Option<DeviceId>,
pub access_code: String,
pub public_key: String,
@@ -33,8 +37,8 @@ db_object! {
impl AuthRequest {
pub fn new(
user_uuid: String,
request_device_identifier: String,
user_uuid: UserId,
request_device_identifier: DeviceId,
device_type: i32,
request_ip: String,
access_code: String,
@@ -43,7 +47,7 @@ impl AuthRequest {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
uuid: AuthRequestId(crate::util::get_uuid()),
user_uuid,
organization_uuid: None,
@@ -61,6 +65,13 @@ impl AuthRequest {
authentication_date: None,
}
}
pub fn to_json_for_pending_device(&self) -> Value {
json!({
"id": self.uuid,
"creationDate": format_date(&self.creation_date),
})
}
}
use crate::db::DbConn;
@@ -101,7 +112,7 @@ impl AuthRequest {
}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &AuthRequestId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
auth_requests::table
.filter(auth_requests::uuid.eq(uuid))
@@ -111,7 +122,18 @@ impl AuthRequest {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_uuid_and_user(uuid: &AuthRequestId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
auth_requests::table
.filter(auth_requests::uuid.eq(uuid))
.filter(auth_requests::user_uuid.eq(user_uuid))
.first::<AuthRequestDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
auth_requests::table
.filter(auth_requests::user_uuid.eq(user_uuid))
@@ -119,6 +141,20 @@ impl AuthRequest {
}}
}
pub async fn find_by_user_and_requested_device(
user_uuid: &UserId,
device_uuid: &DeviceId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! {conn: {
auth_requests::table
.filter(auth_requests::user_uuid.eq(user_uuid))
.filter(auth_requests::request_device_identifier.eq(device_uuid))
.order_by(auth_requests::creation_date.desc())
.first::<AuthRequestDb>(conn).ok().from_db()
}}
}
pub async fn find_created_before(dt: &NaiveDateTime, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
auth_requests::table
@@ -140,9 +176,27 @@ impl AuthRequest {
}
pub async fn purge_expired_auth_requests(conn: &mut DbConn) {
let expiry_time = Utc::now().naive_utc() - chrono::Duration::minutes(5); //after 5 minutes, clients reject the request
let expiry_time = Utc::now().naive_utc() - chrono::TimeDelta::try_minutes(5).unwrap(); //after 5 minutes, clients reject the request
for auth_request in Self::find_created_before(&expiry_time, conn).await {
auth_request.delete(conn).await.ok();
}
}
}
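chrono deprecated the panicking `Duration::minutes`-style constructors; the `TimeDelta::try_*` family returns an `Option` so out-of-range values are handled explicitly. A short sketch:

use chrono::TimeDelta;

// try_minutes returns None on overflow; 5 minutes is always in range.
let expiry_window = TimeDelta::try_minutes(5).expect("5 minutes is in range");
assert_eq!(expiry_window.num_seconds(), 300);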
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct AuthRequestId(String);

View File

@@ -1,12 +1,15 @@
use crate::util::LowerCase;
use crate::CONFIG;
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{
Attachment, CollectionCipher, Favorite, FolderCipher, Group, User, UserOrgStatus, UserOrgType, UserOrganization,
Attachment, CollectionCipher, CollectionId, Favorite, FolderCipher, FolderId, Group, Membership, MembershipStatus,
MembershipType, OrganizationId, User, UserId,
};
use crate::api::core::{CipherData, CipherSyncData, CipherSyncType};
use macros::UuidFromParam;
use std::borrow::Cow;
@@ -16,12 +19,12 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Cipher {
pub uuid: String,
pub uuid: CipherId,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: Option<String>,
pub organization_uuid: Option<String>,
pub user_uuid: Option<UserId>,
pub organization_uuid: Option<OrganizationId>,
pub key: Option<String>,
@@ -29,7 +32,8 @@ db_object! {
Login = 1,
SecureNote = 2,
Card = 3,
Identity = 4
Identity = 4,
SshKey = 5
*/
pub atype: i32,
pub name: String,
@@ -44,10 +48,9 @@ db_object! {
}
}
#[allow(dead_code)]
pub enum RepromptType {
None = 0,
Password = 1, // not currently used in server
Password = 1,
}
/// Local methods
@@ -56,7 +59,7 @@ impl Cipher {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
uuid: CipherId(crate::util::get_uuid()),
created_at: now,
updated_at: now,
@@ -78,21 +81,39 @@ impl Cipher {
}
}
pub fn validate_notes(cipher_data: &[CipherData]) -> EmptyResult {
pub fn validate_cipher_data(cipher_data: &[CipherData]) -> EmptyResult {
let mut validation_errors = serde_json::Map::new();
let max_note_size = CONFIG._max_note_size();
let max_note_size_msg =
format!("The field Notes exceeds the maximum encrypted value length of {} characters.", &max_note_size);
for (index, cipher) in cipher_data.iter().enumerate() {
if let Some(note) = &cipher.Notes {
if note.len() > 10_000 {
validation_errors.insert(
format!("Ciphers[{index}].Notes"),
serde_json::to_value([
"The field Notes exceeds the maximum encrypted value length of 10000 characters.",
])
.unwrap(),
);
// Validate the note size; if it is exceeded, return a warning
if let Some(note) = &cipher.notes {
if note.len() > max_note_size {
validation_errors
.insert(format!("Ciphers[{index}].Notes"), serde_json::to_value([&max_note_size_msg]).unwrap());
}
}
// Check the password history for `null` values and, if any are found, return a warning
if let Some(Value::Array(password_history)) = &cipher.password_history {
for pwh in password_history {
if let Value::Object(pwo) = pwh {
if pwo.get("password").is_some_and(|p| !p.is_string()) {
validation_errors.insert(
format!("Ciphers[{index}].Notes"),
serde_json::to_value([
"The password history contains a `null` value. Only strings are allowed.",
])
.unwrap(),
);
break;
}
}
}
}
}
if !validation_errors.is_empty() {
let err_json = json!({
"message": "The model state is invalid.",
@@ -116,7 +137,7 @@ impl Cipher {
pub async fn to_json(
&self,
host: &str,
user_uuid: &str,
user_uuid: &UserId,
cipher_sync_data: Option<&CipherSyncData>,
sync_type: CipherSyncType,
conn: &mut DbConn,
@@ -135,50 +156,146 @@ impl Cipher {
}
}
let fields_json = self.fields.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let password_history_json =
self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
// We don't need these values at all for Organizational syncs
// Skip any other database calls if this is the case and just return false.
let (read_only, hide_passwords) = if sync_type == CipherSyncType::User {
let (read_only, hide_passwords, _) = if sync_type == CipherSyncType::User {
match self.get_access_restrictions(user_uuid, cipher_sync_data, conn).await {
Some((ro, hp)) => (ro, hp),
Some((ro, hp, mn)) => (ro, hp, mn),
None => {
error!("Cipher ownership assertion failure");
(true, true)
(true, true, false)
}
}
} else {
(false, false)
(false, false, false)
};
let fields_json: Vec<_> = self
.fields
.as_ref()
.and_then(|s| {
serde_json::from_str::<Vec<LowerCase<Value>>>(s)
.inspect_err(|e| warn!("Error parsing fields {e:?} for {}", self.uuid))
.ok()
})
.map(|d| {
d.into_iter()
.map(|mut f| {
// Check if the `type` key is a number, since strings break some clients.
// If it is a string, try to convert it to a number, falling back to `1`.
// If it is neither a number nor a string, fall back to `1` as well.
// The fallback is the hidden type `1`; this should prevent accidental data disclosure.
match f.data.get("type") {
Some(t) if t.is_number() => {}
Some(t) if t.is_string() => {
let type_num = &t.as_str().unwrap_or("1").parse::<u8>().unwrap_or(1);
f.data["type"] = json!(type_num);
}
_ => {
f.data["type"] = json!(1);
}
}
f.data
})
.collect()
})
.unwrap_or_default();
let password_history_json: Vec<_> = self
.password_history
.as_ref()
.and_then(|s| {
serde_json::from_str::<Vec<LowerCase<Value>>>(s)
.inspect_err(|e| warn!("Error parsing password history {e:?} for {}", self.uuid))
.ok()
})
.map(|d| {
// Check every password history item for validity and return only the valid ones.
// If a password field has the type `null`, skip it; it breaks newer Bitwarden clients.
// A second check verifies that lastUsedDate exists and is a valid DateTime string; if not, the epoch start time is used.
d.into_iter()
.filter_map(|d| match d.data.get("password") {
Some(p) if p.is_string() => Some(d.data),
_ => None,
})
.map(|mut d| match d.get("lastUsedDate").and_then(|l| l.as_str()) {
Some(l) => {
d["lastUsedDate"] = json!(crate::util::validate_and_format_date(l));
d
}
_ => {
d["lastUsedDate"] = json!("1970-01-01T00:00:00.000000Z");
d
}
})
.collect()
})
.unwrap_or_default();
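For illustration, a standalone version of the null-password filter above, with hypothetical data:

use serde_json::{json, Value};

// Entries whose password is not a string are dropped before serialization.
let history = json!([{ "password": null }, { "password": "old-pw" }]);
let sanitized: Vec<Value> = history
    .as_array()
    .unwrap()
    .iter()
    .filter(|e| e.get("password").is_some_and(|p| p.is_string()))
    .cloned()
    .collect();
assert_eq!(sanitized.len(), 1);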
// Get the type_data or a default to an empty json object '{}'.
// If not passing an empty object, mobile clients will crash.
let mut type_data_json: Value =
serde_json::from_str(&self.data).unwrap_or_else(|_| Value::Object(serde_json::Map::new()));
let mut type_data_json =
serde_json::from_str::<LowerCase<Value>>(&self.data).map(|d| d.data).unwrap_or_else(|_| {
warn!("Error parsing data field for {}", self.uuid);
Value::Object(serde_json::Map::new())
});
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// Set the first element of the Uris array as Uri; this is needed by several (mobile) clients.
if self.atype == 1 {
if type_data_json["Uris"].is_array() {
let uri = type_data_json["Uris"][0]["Uri"].clone();
type_data_json["Uri"] = uri;
} else {
// Upstream always has an Uri key/value
type_data_json["Uri"] = Value::Null;
// Upstream always has an `uri` key/value
type_data_json["uri"] = Value::Null;
if let Some(uris) = type_data_json["uris"].as_array_mut() {
if !uris.is_empty() {
// Fix uri match values first; they are only allowed to be a number or null.
// If the value is a string, convert it to an int, or to null if that fails.
for uri in &mut *uris {
if uri["match"].is_string() {
let match_value = match uri["match"].as_str().unwrap_or_default().parse::<u8>() {
Ok(n) => json!(n),
_ => Value::Null,
};
uri["match"] = match_value;
}
}
type_data_json["uri"] = uris[0]["uri"].clone();
}
}
}
// Fix secure note issues when data is invalid
// This breaks at least the native mobile clients
if self.atype == 2 {
match type_data_json {
Value::Object(ref t) if t.get("type").is_some_and(|t| t.is_number()) => {}
_ => {
type_data_json = json!({"type": 0});
}
}
}
// Fix invalid SSH entries
// If invalid, these break at least the native mobile clients
// The only way to fix this is by setting type_data_json to `null`
// Opening such an ssh-key in the mobile client will probably crash it, but you can edit, save, and afterwards delete it
if self.atype == 5
&& (type_data_json["keyFingerprint"].as_str().is_none_or(|v| v.is_empty())
|| type_data_json["privateKey"].as_str().is_none_or(|v| v.is_empty())
|| type_data_json["publicKey"].as_str().is_none_or(|v| v.is_empty()))
{
warn!("Error parsing ssh-key, mandatory fields are invalid for {}", self.uuid);
type_data_json = Value::Null;
}
// Clone the type_data and add some default value.
let mut data_json = type_data_json.clone();
// NOTE: This was marked as *Backwards Compatibility Code*, but as of January 2021 this is still being used by upstream
// data_json should always contain the following keys with every atype
data_json["Fields"] = fields_json.clone();
data_json["Name"] = json!(self.name);
data_json["Notes"] = json!(self.notes);
data_json["PasswordHistory"] = password_history_json.clone();
data_json["fields"] = json!(fields_json);
data_json["name"] = json!(self.name);
data_json["notes"] = json!(self.notes);
data_json["passwordHistory"] = Value::Array(password_history_json.clone());
let collection_ids = if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(cipher_collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
@@ -187,7 +304,7 @@ impl Cipher {
Cow::from(Vec::with_capacity(0))
}
} else {
Cow::from(self.get_collections(user_uuid.to_string(), conn).await)
Cow::from(self.get_admin_collections(user_uuid.clone(), conn).await)
};
// There are three types of cipher response models in upstream
@@ -198,48 +315,49 @@ impl Cipher {
//
// Ref: https://github.com/bitwarden/server/blob/master/src/Core/Models/Api/Response/CipherResponseModel.cs
let mut json_object = json!({
"Object": "cipherDetails",
"Id": self.uuid,
"Type": self.atype,
"CreationDate": format_date(&self.created_at),
"RevisionDate": format_date(&self.updated_at),
"DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"Reprompt": self.reprompt.unwrap_or(RepromptType::None as i32),
"OrganizationId": self.organization_uuid,
"Key": self.key,
"Attachments": attachments_json,
"object": "cipherDetails",
"id": self.uuid,
"type": self.atype,
"creationDate": format_date(&self.created_at),
"revisionDate": format_date(&self.updated_at),
"deletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"reprompt": self.reprompt.filter(|r| *r == RepromptType::None as i32 || *r == RepromptType::Password as i32).unwrap_or(RepromptType::None as i32),
"organizationId": self.organization_uuid,
"key": self.key,
"attachments": attachments_json,
// We have UseTotp set to true by default within the Organization model.
// This variable together with UsersGetPremium is used to show or hide the TOTP counter.
"OrganizationUseTotp": true,
"organizationUseTotp": true,
// This field is specific to the cipherDetails type.
"CollectionIds": collection_ids,
"collectionIds": collection_ids,
"Name": self.name,
"Notes": self.notes,
"Fields": fields_json,
"name": self.name,
"notes": self.notes,
"fields": fields_json,
"Data": data_json,
"data": data_json,
"PasswordHistory": password_history_json,
"passwordHistory": password_history_json,
// All Cipher types are included by default as null, but only the matching one will be populated
"Login": null,
"SecureNote": null,
"Card": null,
"Identity": null,
"login": null,
"secureNote": null,
"card": null,
"identity": null,
"sshKey": null,
});
// These values are only needed for user/default syncs
// Not during an organizational sync like `get_org_details`
// Skip adding these fields in that case
if sync_type == CipherSyncType::User {
json_object["FolderId"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
cipher_sync_data.cipher_folders.get(&self.uuid).map(|c| c.to_string())
json_object["folderId"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
cipher_sync_data.cipher_folders.get(&self.uuid).cloned()
} else {
self.get_folder_uuid(user_uuid, conn).await
});
json_object["Favorite"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
json_object["favorite"] = json!(if let Some(cipher_sync_data) = cipher_sync_data {
cipher_sync_data.cipher_favorites.contains(&self.uuid)
} else {
self.is_favorite(user_uuid, conn).await
@@ -247,15 +365,16 @@ impl Cipher {
// These values are true by default, but can be false if the
// cipher belongs to a collection or group where the org owner has enabled
// the "Read Only" or "Hide Passwords" restrictions for the user.
json_object["Edit"] = json!(!read_only);
json_object["ViewPassword"] = json!(!hide_passwords);
json_object["edit"] = json!(!read_only);
json_object["viewPassword"] = json!(!hide_passwords);
}
let key = match self.atype {
1 => "Login",
2 => "SecureNote",
3 => "Card",
4 => "Identity",
1 => "login",
2 => "secureNote",
3 => "card",
4 => "identity",
5 => "sshKey",
_ => panic!("Wrong type"),
};
@@ -263,7 +382,7 @@ impl Cipher {
json_object
}
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {
let mut user_uuids = Vec::new();
match self.user_uuid {
Some(ref user_uuid) => {
@@ -274,17 +393,16 @@ impl Cipher {
// Belongs to Organization, need to update affected users
if let Some(ref org_uuid) = self.organization_uuid {
// users having access to the collection
let mut collection_users =
UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
let mut collection_users = Membership::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
if CONFIG.org_groups_enabled() {
// members of a group having access to the collection
let group_users =
UserOrganization::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
Membership::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
collection_users.extend(group_users);
}
for user_org in collection_users {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
user_uuids.push(user_org.user_uuid.clone())
for member in collection_users {
User::update_uuid_revision(&member.user_uuid, conn).await;
user_uuids.push(member.user_uuid.clone())
}
}
}
@@ -342,7 +460,7 @@ impl Cipher {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
// TODO: Optimize this by executing a DELETE directly on the database, instead of first fetching.
for cipher in Self::find_by_org(org_uuid, conn).await {
cipher.delete(conn).await?;
@@ -350,7 +468,7 @@ impl Cipher {
Ok(())
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
for cipher in Self::find_owned_by_user(user_uuid, conn).await {
cipher.delete(conn).await?;
}
@@ -361,59 +479,66 @@ impl Cipher {
pub async fn purge_trash(conn: &mut DbConn) {
if let Some(auto_delete_days) = CONFIG.trash_auto_delete_days() {
let now = Utc::now().naive_utc();
let dt = now - Duration::days(auto_delete_days);
let dt = now - TimeDelta::try_days(auto_delete_days).unwrap();
for cipher in Self::find_deleted_before(&dt, conn).await {
cipher.delete(conn).await.ok();
}
}
}
pub async fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn move_to_folder(
&self,
folder_uuid: Option<FolderId>,
user_uuid: &UserId,
conn: &mut DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
match (self.get_folder_uuid(user_uuid, conn).await, folder_uuid) {
// No changes
(None, None) => Ok(()),
(Some(ref old), Some(ref new)) if old == new => Ok(()),
(Some(ref old_folder), Some(ref new_folder)) if old_folder == new_folder => Ok(()),
// Add to folder
(None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(conn).await,
(None, Some(new_folder)) => FolderCipher::new(new_folder, self.uuid.clone()).save(conn).await,
// Remove from folder
(Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
Some(old) => old.delete(conn).await,
None => err!("Couldn't move from previous folder"),
},
(Some(old_folder), None) => {
match FolderCipher::find_by_folder_and_cipher(&old_folder, &self.uuid, conn).await {
Some(old_folder) => old_folder.delete(conn).await,
None => err!("Couldn't move from previous folder"),
}
}
// Move to another folder
(Some(old), Some(new)) => {
if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn).await {
old.delete(conn).await?;
(Some(old_folder), Some(new_folder)) => {
if let Some(old_folder) = FolderCipher::find_by_folder_and_cipher(&old_folder, &self.uuid, conn).await {
old_folder.delete(conn).await?;
}
FolderCipher::new(&new, &self.uuid).save(conn).await
FolderCipher::new(new_folder, self.uuid.clone()).save(conn).await
}
}
}
/// Returns whether this cipher is directly owned by the user.
pub fn is_owned_by_user(&self, user_uuid: &str) -> bool {
pub fn is_owned_by_user(&self, user_uuid: &UserId) -> bool {
self.user_uuid.is_some() && self.user_uuid.as_ref().unwrap() == user_uuid
}
/// Returns whether this cipher is owned by an org in which the user has full access.
async fn is_in_full_access_org(
&self,
user_uuid: &str,
user_uuid: &UserId,
cipher_sync_data: Option<&CipherSyncData>,
conn: &mut DbConn,
) -> bool {
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(cipher_sync_data) = cipher_sync_data {
if let Some(cached_user_org) = cipher_sync_data.user_organizations.get(org_uuid) {
return cached_user_org.has_full_access();
if let Some(cached_member) = cipher_sync_data.members.get(org_uuid) {
return cached_member.has_full_access();
}
} else if let Some(user_org) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn).await {
return user_org.has_full_access();
} else if let Some(member) = Membership::find_by_user_and_org(user_uuid, org_uuid, conn).await {
return member.has_full_access();
}
}
false
@@ -422,7 +547,7 @@ impl Cipher {
/// Returns whether this cipher is owned by a group in which the user has full access.
async fn is_in_full_access_group(
&self,
user_uuid: &str,
user_uuid: &UserId,
cipher_sync_data: Option<&CipherSyncData>,
conn: &mut DbConn,
) -> bool {
@@ -431,7 +556,7 @@ impl Cipher {
}
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(cipher_sync_data) = cipher_sync_data {
return cipher_sync_data.user_group_full_access_for_organizations.get(org_uuid).is_some();
return cipher_sync_data.user_group_full_access_for_organizations.contains(org_uuid);
} else {
return Group::is_in_full_access_group(user_uuid, org_uuid, conn).await;
}
@@ -442,14 +567,14 @@ impl Cipher {
/// Returns the user's access restrictions to this cipher. A return value
/// of None means that this cipher does not belong to the user, and is
/// not in any collection the user has access to. Otherwise, the user has
/// access to this cipher, and Some(read_only, hide_passwords) represents
/// access to this cipher, and Some(read_only, hide_passwords, manage) represents
/// the access restrictions.
pub async fn get_access_restrictions(
&self,
user_uuid: &str,
user_uuid: &UserId,
cipher_sync_data: Option<&CipherSyncData>,
conn: &mut DbConn,
) -> Option<(bool, bool)> {
) -> Option<(bool, bool, bool)> {
// Check whether this cipher is directly owned by the user, or is in
// a collection that the user has full access to. If so, there are no
// access restrictions.
@@ -457,21 +582,21 @@ impl Cipher {
|| self.is_in_full_access_org(user_uuid, cipher_sync_data, conn).await
|| self.is_in_full_access_group(user_uuid, cipher_sync_data, conn).await
{
return Some((false, false));
return Some((false, false, true));
}
let rows = if let Some(cipher_sync_data) = cipher_sync_data {
let mut rows: Vec<(bool, bool)> = Vec::new();
let mut rows: Vec<(bool, bool, bool)> = Vec::new();
if let Some(collections) = cipher_sync_data.cipher_collections.get(&self.uuid) {
for collection in collections {
//User permissions
if let Some(uc) = cipher_sync_data.user_collections.get(collection) {
rows.push((uc.read_only, uc.hide_passwords));
if let Some(cu) = cipher_sync_data.user_collections.get(collection) {
rows.push((cu.read_only, cu.hide_passwords, cu.manage));
}
//Group permissions
if let Some(cg) = cipher_sync_data.user_collections_groups.get(collection) {
rows.push((cg.read_only, cg.hide_passwords));
rows.push((cg.read_only, cg.hide_passwords, cg.manage));
}
}
}
@@ -498,15 +623,21 @@ impl Cipher {
// booleans and this behavior isn't portable anyway.
let mut read_only = true;
let mut hide_passwords = true;
for (ro, hp) in rows.iter() {
let mut manage = false;
for (ro, hp, mn) in rows.iter() {
read_only &= ro;
hide_passwords &= hp;
manage &= mn;
}
Some((read_only, hide_passwords))
Some((read_only, hide_passwords, manage))
}
async fn get_user_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
async fn get_user_collections_access_flags(
&self,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Vec<(bool, bool, bool)> {
db_run! {conn: {
// Check whether this cipher is in any collections accessible to the
// user. If so, retrieve the access flags for each collection.
@@ -517,13 +648,17 @@ impl Cipher {
.inner_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid))))
.select((users_collections::read_only, users_collections::hide_passwords))
.load::<(bool, bool)>(conn)
.select((users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
.load::<(bool, bool, bool)>(conn)
.expect("Error getting user access restrictions")
}}
}
async fn get_group_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
async fn get_group_collections_access_flags(
&self,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Vec<(bool, bool, bool)> {
if !CONFIG.org_groups_enabled() {
return Vec::new();
}
@@ -543,49 +678,49 @@ impl Cipher {
users_organizations::uuid.eq(groups_users::users_organizations_uuid)
))
.filter(users_organizations::user_uuid.eq(user_uuid))
.select((collections_groups::read_only, collections_groups::hide_passwords))
.load::<(bool, bool)>(conn)
.select((collections_groups::read_only, collections_groups::hide_passwords, collections_groups::manage))
.load::<(bool, bool, bool)>(conn)
.expect("Error getting group access restrictions")
}}
}
pub async fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_write_accessible_to_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
match self.get_access_restrictions(user_uuid, None, conn).await {
Some((read_only, _hide_passwords)) => !read_only,
Some((read_only, _hide_passwords, manage)) => !read_only || manage,
None => false,
}
}
pub async fn is_accessible_to_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_accessible_to_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
self.get_access_restrictions(user_uuid, None, conn).await.is_some()
}
// Returns whether this cipher is a favorite of the specified user.
pub async fn is_favorite(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_favorite(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
Favorite::is_favorite(&self.uuid, user_uuid, conn).await
}
// Sets whether this cipher is a favorite of the specified user.
pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn set_favorite(&self, favorite: Option<bool>, user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
match favorite {
None => Ok(()), // No change requested.
Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn).await,
}
}
pub async fn get_folder_uuid(&self, user_uuid: &str, conn: &mut DbConn) -> Option<String> {
pub async fn get_folder_uuid(&self, user_uuid: &UserId, conn: &mut DbConn) -> Option<FolderId> {
db_run! {conn: {
folders_ciphers::table
.inner_join(folders::table)
.filter(folders::user_uuid.eq(&user_uuid))
.filter(folders_ciphers::cipher_uuid.eq(&self.uuid))
.select(folders_ciphers::folder_uuid)
.first::<String>(conn)
.first::<FolderId>(conn)
.ok()
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &CipherId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(uuid))
@@ -595,6 +730,21 @@ impl Cipher {
}}
}
pub async fn find_by_uuid_and_org(
cipher_uuid: &CipherId,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(cipher_uuid))
.filter(ciphers::organization_uuid.eq(org_uuid))
.first::<CipherDb>(conn)
.ok()
.from_db()
}}
}
// Find all ciphers accessible or visible to the specified user.
//
// "Accessible" means the user has read access to the cipher, either via
@@ -607,7 +757,7 @@ impl Cipher {
// true, then the non-interesting ciphers will not be returned. As a
// result, those ciphers will not appear in "My Vault" for the org
// owner/admin, but they can still be accessed via the org vault view.
pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &UserId, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
if CONFIG.org_groups_enabled() {
db_run! {conn: {
let mut query = ciphers::table
@@ -617,7 +767,7 @@ impl Cipher {
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.and(users_organizations::status.eq(MembershipStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
@@ -644,7 +794,7 @@ impl Cipher {
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin/owner
);
}
@@ -662,7 +812,7 @@ impl Cipher {
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.and(users_organizations::status.eq(MembershipStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
@@ -676,7 +826,7 @@ impl Cipher {
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin/owner
);
}
@@ -689,12 +839,12 @@ impl Cipher {
}
// Find all ciphers visible to the specified user.
pub async fn find_by_user_visible(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user_visible(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
Self::find_by_user(user_uuid, true, conn).await
}
// Find all ciphers directly owned by the specified user.
pub async fn find_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_owned_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(
@@ -705,7 +855,7 @@ impl Cipher {
}}
}
pub async fn count_owned_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_owned_by_user(user_uuid: &UserId, conn: &mut DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
@@ -716,7 +866,7 @@ impl Cipher {
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -724,7 +874,7 @@ impl Cipher {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
@@ -735,7 +885,7 @@ impl Cipher {
}}
}
pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -753,36 +903,132 @@ impl Cipher {
}}
}
pub async fn get_collections(&self, user_id: String, conn: &mut DbConn) -> Vec<String> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id.clone())
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id.clone())
)
))
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(user_id).or( // User has access to collection
users_organizations::access_all.eq(true).or( // User has access all
users_organizations::atype.le(UserOrgType::Admin as i32) // User is admin or owner
)
))
.select(ciphers_collections::collection_uuid)
.load::<String>(conn).unwrap_or_default()
}}
pub async fn get_collections(&self, user_uuid: UserId, conn: &mut DbConn) -> Vec<CollectionId> {
if CONFIG.org_groups_enabled() {
db_run! {conn: {
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.left_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid.clone()))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
.and(collections_groups::groups_uuid.eq(groups::uuid))
))
.filter(users_organizations::access_all.eq(true) // User has access all
.or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
.and(users_collections::read_only.eq(false)))
.or(groups::access_all.eq(true)) // Access via groups
.or(collections_groups::collections_uuid.is_not_null() // Access via groups
.and(collections_groups::read_only.eq(false)))
)
.select(ciphers_collections::collection_uuid)
.load::<CollectionId>(conn).unwrap_or_default()
}}
} else {
db_run! {conn: {
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid.clone()))
))
.filter(users_organizations::access_all.eq(true) // User has access all
.or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
.and(users_collections::read_only.eq(false)))
)
.select(ciphers_collections::collection_uuid)
.load::<CollectionId>(conn).unwrap_or_default()
}}
}
}
pub async fn get_admin_collections(&self, user_uuid: UserId, conn: &mut DbConn) -> Vec<CollectionId> {
if CONFIG.org_groups_enabled() {
db_run! {conn: {
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.left_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid.clone()))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
.and(collections_groups::groups_uuid.eq(groups::uuid))
))
.filter(users_organizations::access_all.eq(true) // User has access all
.or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
.and(users_collections::read_only.eq(false)))
.or(groups::access_all.eq(true)) // Access via groups
.or(collections_groups::collections_uuid.is_not_null() // Access via groups
.and(collections_groups::read_only.eq(false)))
.or(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
)
.select(ciphers_collections::collection_uuid)
.load::<CollectionId>(conn).unwrap_or_default()
}}
} else {
db_run! {conn: {
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid.clone()))
))
.filter(users_organizations::access_all.eq(true) // User has access all
.or(users_collections::user_uuid.eq(user_uuid) // User has access to collection
.and(users_collections::read_only.eq(false)))
.or(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
)
.select(ciphers_collections::collection_uuid)
.load::<CollectionId>(conn).unwrap_or_default()
}}
}
}
/// Return a Vec with (cipher_uuid, collection_uuid)
/// This is used during a full sync so we only need one query for all accessible collections.
pub async fn get_collections_with_cipher_by_user(user_id: String, conn: &mut DbConn) -> Vec<(String, String)> {
pub async fn get_collections_with_cipher_by_user(
user_uuid: UserId,
conn: &mut DbConn,
) -> Vec<(CipherId, CollectionId)> {
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
@@ -790,12 +1036,12 @@ impl Cipher {
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id.clone())
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id.clone())
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(groups_users::table.on(
@@ -809,14 +1055,32 @@ impl Cipher {
collections_groups::groups_uuid.eq(groups::uuid)
)
))
.or_filter(users_collections::user_uuid.eq(user_id)) // User has access to collection
.or_filter(users_collections::user_uuid.eq(user_uuid)) // User has access to collection
.or_filter(users_organizations::access_all.eq(true)) // User has access all
.or_filter(users_organizations::atype.le(UserOrgType::Admin as i32)) // User is admin or owner
.or_filter(users_organizations::atype.le(MembershipType::Admin as i32)) // User is admin or owner
.or_filter(groups::access_all.eq(true)) //Access via group
.or_filter(collections_groups::collections_uuid.is_not_null()) //Access via group
.select(ciphers_collections::all_columns)
.distinct()
.load::<(String, String)>(conn).unwrap_or_default()
.load::<(CipherId, CollectionId)>(conn).unwrap_or_default()
}}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct CipherId(String);

View File

@@ -1,15 +1,20 @@
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{CollectionGroup, User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{
CipherId, CollectionGroup, GroupUser, Membership, MembershipId, MembershipStatus, MembershipType, OrganizationId,
User, UserId,
};
use crate::CONFIG;
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = collections)]
#[diesel(primary_key(uuid))]
pub struct Collection {
pub uuid: String,
pub org_uuid: String,
pub uuid: CollectionId,
pub org_uuid: OrganizationId,
pub name: String,
pub external_id: Option<String>,
}
@@ -18,26 +23,27 @@ db_object! {
#[diesel(table_name = users_collections)]
#[diesel(primary_key(user_uuid, collection_uuid))]
pub struct CollectionUser {
pub user_uuid: String,
pub collection_uuid: String,
pub user_uuid: UserId,
pub collection_uuid: CollectionId,
pub read_only: bool,
pub hide_passwords: bool,
pub manage: bool,
}
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = ciphers_collections)]
#[diesel(primary_key(cipher_uuid, collection_uuid))]
pub struct CollectionCipher {
pub cipher_uuid: String,
pub collection_uuid: String,
pub cipher_uuid: CipherId,
pub collection_uuid: CollectionId,
}
}
/// Local methods
impl Collection {
pub fn new(org_uuid: String, name: String, external_id: Option<String>) -> Self {
pub fn new(org_uuid: OrganizationId, name: String, external_id: Option<String>) -> Self {
let mut new_model = Self {
uuid: crate::util::get_uuid(),
uuid: CollectionId(crate::util::get_uuid()),
org_uuid,
name,
external_id: None,
@@ -49,11 +55,11 @@ impl Collection {
pub fn to_json(&self) -> Value {
json!({
"ExternalId": self.external_id,
"Id": self.uuid,
"OrganizationId": self.org_uuid,
"Name": self.name,
"Object": "collection",
"externalId": self.external_id,
"id": self.uuid,
"organizationId": self.org_uuid,
"name": self.name,
"object": "collection",
})
}
@@ -74,34 +80,66 @@ impl Collection {
pub async fn to_json_details(
&self,
user_uuid: &str,
user_uuid: &UserId,
cipher_sync_data: Option<&crate::api::core::CipherSyncData>,
conn: &mut DbConn,
) -> Value {
let (read_only, hide_passwords) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.user_organizations.get(&self.org_uuid) {
Some(uo) if uo.has_full_access() => (false, false),
Some(_) => {
if let Some(uc) = cipher_sync_data.user_collections.get(&self.uuid) {
(uc.read_only, uc.hide_passwords)
let (read_only, hide_passwords, manage) = if let Some(cipher_sync_data) = cipher_sync_data {
match cipher_sync_data.members.get(&self.org_uuid) {
// Bitwarden returns true for the manage option only for Manager types
// Owners and Admins always have it set to true. Users are not able to have full access
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(m) => {
// Only let a manager manage collections when they have full read/write access
let is_manager = m.atype == MembershipType::Manager;
if let Some(cu) = cipher_sync_data.user_collections.get(&self.uuid) {
(
cu.read_only,
cu.hide_passwords,
cu.manage || (is_manager && !cu.read_only && !cu.hide_passwords),
)
} else if let Some(cg) = cipher_sync_data.user_collections_groups.get(&self.uuid) {
(cg.read_only, cg.hide_passwords)
(
cg.read_only,
cg.hide_passwords,
cg.manage || (is_manager && !cg.read_only && !cg.hide_passwords),
)
} else {
(false, false)
(false, false, false)
}
}
_ => (true, true),
_ => (true, true, false),
}
} else {
(!self.is_writable_by_user(user_uuid, conn).await, self.hide_passwords_for_user(user_uuid, conn).await)
match Membership::find_confirmed_by_user_and_org(user_uuid, &self.org_uuid, conn).await {
Some(m) if m.has_full_access() => (false, false, m.atype >= MembershipType::Manager),
Some(_) if self.is_manageable_by_user(user_uuid, conn).await => (false, false, true),
Some(m) => {
let is_manager = m.atype == MembershipType::Manager;
let read_only = !self.is_writable_by_user(user_uuid, conn).await;
let hide_passwords = self.hide_passwords_for_user(user_uuid, conn).await;
(read_only, hide_passwords, is_manager && !read_only && !hide_passwords)
}
_ => (true, true, false),
}
};
let mut json_object = self.to_json();
json_object["Object"] = json!("collectionDetails");
json_object["ReadOnly"] = json!(read_only);
json_object["HidePasswords"] = json!(hide_passwords);
json_object["object"] = json!("collectionDetails");
json_object["readOnly"] = json!(read_only);
json_object["hidePasswords"] = json!(hide_passwords);
json_object["manage"] = json!(manage);
json_object
}
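The match above may be easier to see as a pure function. This is a hedged sketch, not the repo's API: the enum ordering is illustrative (the real `MembershipType` orders by privilege via a custom `Ord`, not by its wire values), and the full-access shortcut is simplified to the cases visible in this diff.

// Sketch: how (read_only, hide_passwords, manage) fall out of the
// membership type and the per-collection flags in to_json_details.
#[derive(PartialEq, PartialOrd)]
enum MembershipType { User = 2, Manager = 3, Admin = 4, Owner = 5 } // illustrative ordering

fn access_flags(
    atype: MembershipType,
    full_access: bool,
    entry: Option<(bool, bool, bool)>, // (read_only, hide_passwords, manage)
) -> (bool, bool, bool) {
    if full_access {
        // Full access sees everything; only Manager-or-above manages.
        return (false, false, atype >= MembershipType::Manager);
    }
    let is_manager = atype == MembershipType::Manager;
    match entry {
        // A manager with unrestricted read/write on the entry may manage it.
        Some((ro, hp, manage)) => (ro, hp, manage || (is_manager && !ro && !hp)),
        // No direct or group entry: readable/writable, but not manageable.
        None => (false, false, false),
    }
}

fn main() {
    assert_eq!(access_flags(MembershipType::Owner, true, None), (false, false, true));
    assert_eq!(access_flags(MembershipType::Admin, true, None), (false, false, true));
    assert_eq!(access_flags(MembershipType::Manager, false, Some((false, false, false))), (false, false, true));
    assert_eq!(access_flags(MembershipType::User, false, None), (false, false, false));
}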
pub async fn can_access_collection(member: &Membership, col_id: &CollectionId, conn: &mut DbConn) -> bool {
member.has_status(MembershipStatus::Confirmed)
&& (member.has_full_access()
|| CollectionUser::has_access_to_collection_by_user(col_id, &member.user_uuid, conn).await
|| (CONFIG.org_groups_enabled()
&& (GroupUser::has_full_access_by_member(&member.org_uuid, &member.uuid, conn).await
|| GroupUser::has_access_to_collection_by_member(col_id, &member.uuid, conn).await)))
}
}
use crate::db::DbConn;
@@ -158,7 +196,7 @@ impl Collection {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
for collection in Self::find_by_organization(org_uuid, conn).await {
collection.delete(conn).await?;
}
@@ -166,12 +204,12 @@ impl Collection {
}
pub async fn update_users_revision(&self, conn: &mut DbConn) {
for user_org in UserOrganization::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).await.iter() {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
for member in Membership::find_by_collection_and_org(&self.uuid, &self.org_uuid, conn).await.iter() {
User::update_uuid_revision(&member.user_uuid, conn).await;
}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &CollectionId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -181,7 +219,7 @@ impl Collection {
}}
}
pub async fn find_by_user_uuid(user_uuid: String, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user_uuid(user_uuid: UserId, conn: &mut DbConn) -> Vec<Self> {
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
@@ -207,7 +245,7 @@ impl Collection {
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
@@ -238,7 +276,7 @@ impl Collection {
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
@@ -252,26 +290,19 @@ impl Collection {
}
}
// Check if a user has access to a specific collection
// FIXME: This needs to be reviewed. The query used by `find_by_user_uuid` could be adjusted to filter when needed.
// For now this is a good solution without making too many changes.
pub async fn has_access_by_collection_and_user_uuid(
collection_uuid: &str,
user_uuid: &str,
pub async fn find_by_organization_and_user_uuid(
org_uuid: &OrganizationId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> bool {
Self::find_by_user_uuid(user_uuid.to_owned(), conn).await.into_iter().any(|c| c.uuid == collection_uuid)
}
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
) -> Vec<Self> {
Self::find_by_user_uuid(user_uuid.to_owned(), conn)
.await
.into_iter()
.filter(|c| c.org_uuid == org_uuid)
.filter(|c| &c.org_uuid == org_uuid)
.collect()
}
pub async fn find_by_organization(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -281,7 +312,7 @@ impl Collection {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
@@ -292,7 +323,11 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(
uuid: &CollectionId,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
@@ -304,7 +339,7 @@ impl Collection {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: String, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &CollectionId, user_uuid: UserId, conn: &mut DbConn) -> Option<Self> {
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
@@ -333,7 +368,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
@@ -362,7 +397,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
))
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
@@ -371,53 +406,69 @@ impl Collection {
}
}
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_writable_by_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(&self.uuid))
.filter(
users_collections::collection_uuid.eq(&self.uuid).and(users_collections::read_only.eq(false)).or(// Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null().and(
collections_groups::read_only.eq(false))
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(&self.uuid))
.inner_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid)
.and(users_collections::user_uuid.eq(user_uuid))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
.and(collections_groups::collections_uuid.eq(collections::uuid))
))
.filter(users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
.or(users_organizations::access_all.eq(true)) // access_all via membership
.or(users_collections::collection_uuid.eq(&self.uuid) // write access given to collection
.and(users_collections::read_only.eq(false)))
.or(groups::access_all.eq(true)) // access_all via group
.or(collections_groups::collections_uuid.is_not_null() // write access given via group
.and(collections_groups::read_only.eq(false)))
)
)
)
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
} else {
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(&self.uuid))
.inner_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid.clone()))
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid)
.and(users_collections::user_uuid.eq(user_uuid))
))
.filter(users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
.or(users_organizations::access_all.eq(true)) // access_all via membership
.or(users_collections::collection_uuid.eq(&self.uuid) // write access given to collection
.and(users_collections::read_only.eq(false)))
)
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
}
}
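Both branches of `is_writable_by_user` use the `.count() ... != 0` idiom to test row existence. Under the assumption that the same Diesel schema is in scope, an equivalent formulation (hypothetical, untested against this repo, compiles against diesel with the sqlite feature) uses a boolean EXISTS, which lets the database stop at the first matching row:

// Hedged sketch: an EXISTS-based membership probe. The table! block is
// a stand-in for the real schema; column names follow this diff.
use diesel::dsl::exists;
use diesel::prelude::*;

diesel::table! {
    users_organizations (uuid) {
        uuid -> Text,
        user_uuid -> Text,
        org_uuid -> Text,
        atype -> Integer,
        access_all -> Bool,
    }
}

fn is_admin_or_has_access_all(
    conn: &mut SqliteConnection,
    org: &str,
    user: &str,
) -> QueryResult<bool> {
    diesel::select(exists(
        users_organizations::table
            .filter(users_organizations::org_uuid.eq(org))
            .filter(users_organizations::user_uuid.eq(user))
            .filter(
                users_organizations::atype
                    .le(1) // admin-or-owner in this repo's numeric ordering
                    .or(users_organizations::access_all.eq(true)),
            ),
    ))
    .get_result::<bool>(conn)
}

This is not a drop-in replacement for the joins above, only an illustration of EXISTS versus COUNT for the existence checks this file repeats.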
pub async fn hide_passwords_for_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn hide_passwords_for_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
db_run! { conn: {
collections::table
@@ -446,7 +497,7 @@ impl Collection {
.filter(
users_collections::collection_uuid.eq(&self.uuid).and(users_collections::hide_passwords.eq(true)).or(// Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
@@ -462,11 +513,61 @@ impl Collection {
.unwrap_or(0) != 0
}}
}
pub async fn is_manageable_by_user(&self, user_uuid: &UserId, conn: &mut DbConn) -> bool {
let user_uuid = user_uuid.to_string();
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(&self.uuid))
.filter(
users_collections::collection_uuid.eq(&self.uuid).and(users_collections::manage.eq(true)).or(// Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(MembershipType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null().and(
collections_groups::manage.eq(true))
)
)
)
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
}
}
/// Database methods
impl CollectionUser {
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_organization_and_user_uuid(
org_uuid: &OrganizationId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -479,24 +580,29 @@ impl CollectionUser {
}}
}
pub async fn find_by_organization(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
pub async fn find_by_organization_swap_user_uuid_with_member_uuid(
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Vec<CollectionMembership> {
let col_users = db_run! { conn: {
users_collections::table
.inner_join(collections::table.on(collections::uuid.eq(users_collections::collection_uuid)))
.filter(collections::org_uuid.eq(org_uuid))
.inner_join(users_organizations::table.on(users_organizations::user_uuid.eq(users_collections::user_uuid)))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}}
}};
col_users.into_iter().map(|c| c.into()).collect()
}
pub async fn save(
user_uuid: &str,
collection_uuid: &str,
user_uuid: &UserId,
collection_uuid: &CollectionId,
read_only: bool,
hide_passwords: bool,
manage: bool,
conn: &mut DbConn,
) -> EmptyResult {
User::update_uuid_revision(user_uuid, conn).await;
@@ -509,6 +615,7 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
{
@@ -523,6 +630,7 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
.map_res("Error adding user to collection")
@@ -537,12 +645,14 @@ impl CollectionUser {
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.on_conflict((users_collections::user_uuid, users_collections::collection_uuid))
.do_update()
.set((
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
users_collections::manage.eq(manage),
))
.execute(conn)
.map_res("Error adding user to collection")
@@ -564,7 +674,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_collection(collection_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
@@ -575,24 +685,25 @@ impl CollectionUser {
}}
}
pub async fn find_by_collection_swap_user_uuid_with_org_user_uuid(
collection_uuid: &str,
pub async fn find_by_collection_swap_user_uuid_with_member_uuid(
collection_uuid: &CollectionId,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
) -> Vec<CollectionMembership> {
let col_users = db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.inner_join(users_organizations::table.on(users_organizations::user_uuid.eq(users_collections::user_uuid)))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords))
.select((users_organizations::uuid, users_collections::collection_uuid, users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}}
}};
col_users.into_iter().map(|c| c.into()).collect()
}
pub async fn find_by_collection_and_user(
collection_uuid: &str,
user_uuid: &str,
collection_uuid: &CollectionId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
@@ -606,7 +717,7 @@ impl CollectionUser {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
@@ -617,7 +728,7 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
for collection in CollectionUser::find_by_collection(collection_uuid, conn).await.iter() {
User::update_uuid_revision(&collection.user_uuid, conn).await;
}
@@ -629,12 +740,16 @@ impl CollectionUser {
}}
}
pub async fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user_and_org(
user_uuid: &UserId,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn).await;
db_run! { conn: {
for user in collectionusers {
diesel::delete(users_collections::table.filter(
let _: () = diesel::delete(users_collections::table.filter(
users_collections::user_uuid.eq(user_uuid)
.and(users_collections::collection_uuid.eq(user.collection_uuid))
))
@@ -644,11 +759,19 @@ impl CollectionUser {
Ok(())
}}
}
pub async fn has_access_to_collection_by_user(
col_id: &CollectionId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> bool {
Self::find_by_collection_and_user(col_id, user_uuid, conn).await.is_some()
}
}
/// Database methods
impl CollectionCipher {
pub async fn save(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn save(cipher_uuid: &CipherId, collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn:
@@ -678,7 +801,7 @@ impl CollectionCipher {
}
}
pub async fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete(cipher_uuid: &CipherId, collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
Self::update_users_revision(collection_uuid, conn).await;
db_run! { conn: {
@@ -692,7 +815,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -700,7 +823,7 @@ impl CollectionCipher {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
@@ -708,9 +831,63 @@ impl CollectionCipher {
}}
}
pub async fn update_users_revision(collection_uuid: &str, conn: &mut DbConn) {
pub async fn update_users_revision(collection_uuid: &CollectionId, conn: &mut DbConn) {
if let Some(collection) = Collection::find_by_uuid(collection_uuid, conn).await {
collection.update_users_revision(conn).await;
}
}
}
// Added in case we need the membership_uuid instead of the user_uuid
pub struct CollectionMembership {
pub membership_uuid: MembershipId,
pub collection_uuid: CollectionId,
pub read_only: bool,
pub hide_passwords: bool,
pub manage: bool,
}
impl CollectionMembership {
pub fn to_json_details_for_member(&self, membership_type: i32) -> Value {
json!({
"id": self.membership_uuid,
"readOnly": self.read_only,
"hidePasswords": self.hide_passwords,
"manage": membership_type >= MembershipType::Admin
|| self.manage
|| (membership_type == MembershipType::Manager
&& !self.read_only
&& !self.hide_passwords),
})
}
}
impl From<CollectionUser> for CollectionMembership {
fn from(c: CollectionUser) -> Self {
Self {
membership_uuid: c.user_uuid.to_string().into(),
collection_uuid: c.collection_uuid,
read_only: c.read_only,
hide_passwords: c.hide_passwords,
manage: c.manage,
}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct CollectionId(String);

View File

@@ -1,7 +1,10 @@
use chrono::{NaiveDateTime, Utc};
use derive_more::{Display, From};
use serde_json::Value;
use crate::{crypto, CONFIG};
use core::fmt;
use super::{AuthRequest, UserId};
use crate::{crypto, util::format_date, CONFIG};
use macros::IdFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -9,26 +12,25 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid, user_uuid))]
pub struct Device {
pub uuid: String,
pub uuid: DeviceId,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: String,
pub user_uuid: UserId,
pub name: String,
pub atype: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
pub atype: i32, // https://github.com/bitwarden/server/blob/dcc199bcce4aa2d5621f6fab80f1b49d8b143418/src/Core/Enums/DeviceType.cs
pub push_uuid: Option<String>,
pub push_token: Option<String>,
pub refresh_token: String,
pub twofactor_remember: Option<String>,
}
}
/// Local methods
impl Device {
pub fn new(uuid: String, user_uuid: String, name: String, atype: i32) -> Self {
pub fn new(uuid: DeviceId, user_uuid: UserId, name: String, atype: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
@@ -47,6 +49,18 @@ impl Device {
}
}
pub fn to_json(&self) -> Value {
json!({
"id": self.uuid,
"name": self.name,
"type": self.atype,
"identifier": self.push_uuid,
"creationDate": format_date(&self.created_at),
"isTrusted": false,
"object":"device"
})
}
pub fn refresh_twofactor_remember(&mut self) -> String {
use data_encoding::BASE64;
let twofactor_remember = crypto::encode_random_bytes::<180>(BASE64);
@@ -67,20 +81,20 @@ impl Device {
}
// Update the expiration of the device and the last update date
let time_now = Utc::now().naive_utc();
self.updated_at = time_now;
let time_now = Utc::now();
self.updated_at = time_now.naive_utc();
// ---
// Disabled adding these keys to the JWT since they could cause it to get too large
// Also, these key/value pairs are not used anywhere by either Vaultwarden or Bitwarden clients
// Because these might get used in the future, and they are added by the Bitwarden server, let's keep them, but commented out
// ---
// fn arg: orgs: Vec<super::UserOrganization>,
// fn arg: members: Vec<super::Membership>,
// ---
// let orgowner: Vec<_> = orgs.iter().filter(|o| o.atype == 0).map(|o| o.org_uuid.clone()).collect();
// let orgadmin: Vec<_> = orgs.iter().filter(|o| o.atype == 1).map(|o| o.org_uuid.clone()).collect();
// let orguser: Vec<_> = orgs.iter().filter(|o| o.atype == 2).map(|o| o.org_uuid.clone()).collect();
// let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// let orgowner: Vec<_> = members.iter().filter(|m| m.atype == 0).map(|o| o.org_uuid.clone()).collect();
// let orgadmin: Vec<_> = members.iter().filter(|m| m.atype == 1).map(|o| o.org_uuid.clone()).collect();
// let orguser: Vec<_> = members.iter().filter(|m| m.atype == 2).map(|o| o.org_uuid.clone()).collect();
// let orgmanager: Vec<_> = members.iter().filter(|m| m.atype == 3).map(|o| o.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJwtClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
@@ -123,6 +137,36 @@ impl Device {
}
}
pub struct DeviceWithAuthRequest {
pub device: Device,
pub pending_auth_request: Option<AuthRequest>,
}
impl DeviceWithAuthRequest {
pub fn to_json(&self) -> Value {
let auth_request = match &self.pending_auth_request {
Some(auth_request) => auth_request.to_json_for_pending_device(),
None => Value::Null,
};
json!({
"id": self.device.uuid,
"name": self.device.name,
"type": self.device.atype,
"identifier": self.device.push_uuid,
"creationDate": format_date(&self.device.created_at),
"devicePendingAuthRequest": auth_request,
"isTrusted": false,
"object": "device",
})
}
pub fn from(c: Device, a: Option<AuthRequest>) -> Self {
Self {
device: c,
pending_auth_request: a,
}
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
@@ -150,7 +194,7 @@ impl Device {
}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(devices::table.filter(devices::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -158,7 +202,7 @@ impl Device {
}}
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &DeviceId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
@@ -169,7 +213,17 @@ impl Device {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_with_auth_request_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<DeviceWithAuthRequest> {
let devices = Self::find_by_user(user_uuid, conn).await;
let mut result = Vec::new();
for device in devices {
let auth_request = AuthRequest::find_by_user_and_requested_device(user_uuid, &device.uuid, conn).await;
result.push(DeviceWithAuthRequest::from(device, auth_request));
}
result
}
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -179,7 +233,7 @@ impl Device {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &DeviceId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
@@ -189,7 +243,7 @@ impl Device {
}}
}
pub async fn clear_push_token_by_uuid(uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn clear_push_token_by_uuid(uuid: &DeviceId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::update(devices::table)
.filter(devices::uuid.eq(uuid))
@@ -208,7 +262,7 @@ impl Device {
}}
}
pub async fn find_latest_active_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_latest_active_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -219,7 +273,7 @@ impl Device {
}}
}
pub async fn find_push_devices_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_push_devices_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -230,7 +284,7 @@ impl Device {
}}
}
pub async fn check_user_has_push_device(user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn check_user_has_push_device(user_uuid: &UserId, conn: &mut DbConn) -> bool {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
@@ -243,60 +297,60 @@ impl Device {
}
}
#[derive(Display)]
pub enum DeviceType {
#[display("Android")]
Android = 0,
#[display("iOS")]
Ios = 1,
#[display("Chrome Extension")]
ChromeExtension = 2,
#[display("Firefox Extension")]
FirefoxExtension = 3,
#[display("Opera Extension")]
OperaExtension = 4,
#[display("Edge Extension")]
EdgeExtension = 5,
#[display("Windows")]
WindowsDesktop = 6,
#[display("macOS")]
MacOsDesktop = 7,
#[display("Linux")]
LinuxDesktop = 8,
#[display("Chrome")]
ChromeBrowser = 9,
#[display("Firefox")]
FirefoxBrowser = 10,
#[display("Opera")]
OperaBrowser = 11,
#[display("Edge")]
EdgeBrowser = 12,
#[display("Internet Explorer")]
IEBrowser = 13,
#[display("Unknown Browser")]
UnknownBrowser = 14,
#[display("Android")]
AndroidAmazon = 15,
#[display("UWP")]
Uwp = 16,
#[display("Safari")]
SafariBrowser = 17,
#[display("Vivaldi")]
VivaldiBrowser = 18,
#[display("Vivaldi Extension")]
VivaldiExtension = 19,
#[display("Safari Extension")]
SafariExtension = 20,
#[display("SDK")]
Sdk = 21,
#[display("Server")]
Server = 22,
}
impl fmt::Display for DeviceType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DeviceType::Android => write!(f, "Android"),
DeviceType::Ios => write!(f, "iOS"),
DeviceType::ChromeExtension => write!(f, "Chrome Extension"),
DeviceType::FirefoxExtension => write!(f, "Firefox Extension"),
DeviceType::OperaExtension => write!(f, "Opera Extension"),
DeviceType::EdgeExtension => write!(f, "Edge Extension"),
DeviceType::WindowsDesktop => write!(f, "Windows Desktop"),
DeviceType::MacOsDesktop => write!(f, "MacOS Desktop"),
DeviceType::LinuxDesktop => write!(f, "Linux Desktop"),
DeviceType::ChromeBrowser => write!(f, "Chrome Browser"),
DeviceType::FirefoxBrowser => write!(f, "Firefox Browser"),
DeviceType::OperaBrowser => write!(f, "Opera Browser"),
DeviceType::EdgeBrowser => write!(f, "Edge Browser"),
DeviceType::IEBrowser => write!(f, "Internet Explorer"),
DeviceType::UnknownBrowser => write!(f, "Unknown Browser"),
DeviceType::AndroidAmazon => write!(f, "Android Amazon"),
DeviceType::Uwp => write!(f, "UWP"),
DeviceType::SafariBrowser => write!(f, "Safari Browser"),
DeviceType::VivaldiBrowser => write!(f, "Vivaldi Browser"),
DeviceType::VivaldiExtension => write!(f, "Vivaldi Extension"),
DeviceType::SafariExtension => write!(f, "Safari Extension"),
DeviceType::Sdk => write!(f, "SDK"),
DeviceType::Server => write!(f, "Server"),
}
}
#[display("Windows CLI")]
WindowsCLI = 23,
#[display("macOS CLI")]
MacOsCLI = 24,
#[display("Linux CLI")]
LinuxCLI = 25,
}
impl DeviceType {
@@ -325,7 +379,15 @@ impl DeviceType {
20 => DeviceType::SafariExtension,
21 => DeviceType::Sdk,
22 => DeviceType::Server,
23 => DeviceType::WindowsCLI,
24 => DeviceType::MacOsCLI,
25 => DeviceType::LinuxCLI,
_ => DeviceType::UnknownBrowser,
}
}
}
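With the manual `fmt::Display` impl replaced by `derive_more::Display` attributes, the label for each variant now lives on the variant itself. A hedged usage sketch (trimmed to two variants; assumes derive_more 1.x with the display feature, matching the attribute syntax in this diff; the fallback arm is a stand-in, the real code falls back to UnknownBrowser):

use derive_more::Display;

// Trimmed copy of the enum above, just enough to show the derive.
#[derive(Display)]
pub enum DeviceType {
    #[display("Android")]
    Android = 0,
    #[display("macOS CLI")]
    MacOsCLI = 24,
}

impl DeviceType {
    // Mirrors the from_i32-style mapping in the diff.
    fn from_i32(value: i32) -> Self {
        match value {
            24 => DeviceType::MacOsCLI,
            _ => DeviceType::Android, // stand-in fallback
        }
    }
}

fn main() {
    assert_eq!(DeviceType::from_i32(24).to_string(), "macOS CLI");
}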
#[derive(
Clone, Debug, DieselNewType, Display, From, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize, IdFromParam,
)]
pub struct DeviceId(String);

View File

@@ -1,9 +1,10 @@
use chrono::{NaiveDateTime, Utc};
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::{User, UserId};
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
use super::User;
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -11,9 +12,9 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct EmergencyAccess {
pub uuid: String,
pub grantor_uuid: String,
pub grantee_uuid: Option<String>,
pub uuid: EmergencyAccessId,
pub grantor_uuid: UserId,
pub grantee_uuid: Option<UserId>,
pub email: Option<String>,
pub key_encrypted: Option<String>,
pub atype: i32, //EmergencyAccessType
@@ -26,14 +27,14 @@ db_object! {
}
}
/// Local methods
// Local methods
impl EmergencyAccess {
pub fn new(grantor_uuid: String, email: String, status: i32, atype: i32, wait_time_days: i32) -> Self {
pub fn new(grantor_uuid: UserId, email: String, status: i32, atype: i32, wait_time_days: i32) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
uuid: EmergencyAccessId(crate::util::get_uuid()),
grantor_uuid,
grantee_uuid: None,
email: Some(email),
@@ -58,11 +59,11 @@ impl EmergencyAccess {
pub fn to_json(&self) -> Value {
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"Object": "emergencyAccess",
"id": self.uuid,
"status": self.status,
"type": self.atype,
"waitTimeDays": self.wait_time_days,
"object": "emergencyAccess",
})
}
@@ -70,36 +71,43 @@ impl EmergencyAccess {
let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).await.expect("Grantor user not found.");
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GrantorId": grantor_user.uuid,
"Email": grantor_user.email,
"Name": grantor_user.name,
"Object": "emergencyAccessGrantorDetails",
"id": self.uuid,
"status": self.status,
"type": self.atype,
"waitTimeDays": self.wait_time_days,
"grantorId": grantor_user.uuid,
"email": grantor_user.email,
"name": grantor_user.name,
"object": "emergencyAccessGrantorDetails",
})
}
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Value {
let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
Some(User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found."))
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Option<Value> {
let grantee_user = if let Some(grantee_uuid) = &self.grantee_uuid {
User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found.")
} else if let Some(email) = self.email.as_deref() {
Some(User::find_by_mail(email, conn).await.expect("Grantee user not found."))
match User::find_by_mail(email, conn).await {
Some(user) => user,
None => {
// remove outstanding invitations which should not exist
Self::delete_all_by_grantee_email(email, conn).await.ok();
return None;
}
}
} else {
None
return None;
};
json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
"Email": grantee_user.as_ref().map_or("", |u| &u.email),
"Name": grantee_user.as_ref().map_or("", |u| &u.name),
"Object": "emergencyAccessGranteeDetails",
})
Some(json!({
"id": self.uuid,
"status": self.status,
"type": self.atype,
"waitTimeDays": self.wait_time_days,
"granteeId": grantee_user.uuid,
"email": grantee_user.email,
"name": grantee_user.name,
"object": "emergencyAccessGranteeDetails",
}))
}
}
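Since `to_json_grantee_details` now returns `Option<Value>` (and may delete a stale invitation as a side effect), callers have to skip the `None` entries rather than render empty grantee fields. A hedged, synchronous sketch of the calling pattern, with plain stand-ins for the async DB types:

// Sketch with stand-ins: a grant either resolves to a JSON-ish entry
// or is skipped (the real method also deletes the stale invite).
struct EmergencyAccess {
    grantee_email: Option<String>,
}

impl EmergencyAccess {
    fn to_json_grantee_details(&self) -> Option<String> {
        self.grantee_email.as_ref().map(|e| format!("{{\"email\":\"{e}\"}}"))
    }
}

fn main() {
    let grants = vec![
        EmergencyAccess { grantee_email: Some("a@example.com".into()) },
        EmergencyAccess { grantee_email: None }, // stale invite: dropped
    ];
    // filter_map drops grants whose grantee no longer exists.
    let entries: Vec<String> =
        grants.iter().filter_map(|ea| ea.to_json_grantee_details()).collect();
    assert_eq!(entries.len(), 1);
}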
@@ -174,7 +182,7 @@ impl EmergencyAccess {
// Update the grantee so that it will refresh its status.
User::update_uuid_revision(self.grantee_uuid.as_ref().expect("Error getting grantee"), conn).await;
self.status = status;
self.updated_at = date.to_owned();
date.clone_into(&mut self.updated_at);
db_run! {conn: {
crate::util::retry(|| {
@@ -192,7 +200,7 @@ impl EmergencyAccess {
conn: &mut DbConn,
) -> EmptyResult {
self.last_notification_at = Some(date.to_owned());
self.updated_at = date.to_owned();
date.clone_into(&mut self.updated_at);
db_run! {conn: {
crate::util::retry(|| {
@@ -204,7 +212,7 @@ impl EmergencyAccess {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
for ea in Self::find_all_by_grantor_uuid(user_uuid, conn).await {
ea.delete(conn).await?;
}
@@ -214,6 +222,13 @@ impl EmergencyAccess {
Ok(())
}
pub async fn delete_all_by_grantee_email(grantee_email: &str, conn: &mut DbConn) -> EmptyResult {
for ea in Self::find_all_invited_by_grantee_email(grantee_email, conn).await {
ea.delete(conn).await?;
}
Ok(())
}
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
@@ -224,18 +239,9 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_by_grantor_uuid_and_grantee_uuid_or_email(
grantor_uuid: &str,
grantee_uuid: &str,
grantor_uuid: &UserId,
grantee_uuid: &UserId,
email: &str,
conn: &mut DbConn,
) -> Option<Self> {
@@ -257,7 +263,11 @@ impl EmergencyAccess {
}}
}
pub async fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_grantor_uuid(
uuid: &EmergencyAccessId,
grantor_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
@@ -267,7 +277,35 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_uuid_and_grantee_uuid(
uuid: &EmergencyAccessId,
grantee_uuid: &UserId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_by_uuid_and_grantee_email(
uuid: &EmergencyAccessId,
grantee_email: &str,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::uuid.eq(uuid))
.filter(emergency_access::email.eq(grantee_email))
.first::<EmergencyAccessDb>(conn)
.ok().from_db()
}}
}
pub async fn find_all_by_grantee_uuid(grantee_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantee_uuid.eq(grantee_uuid))
@@ -285,13 +323,60 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_all_invited_by_grantee_email(grantee_email: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::email.eq(grantee_email))
.filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::grantor_uuid.eq(grantor_uuid))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn accept_invite(
&mut self,
grantee_uuid: &UserId,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
if self.email.is_none() || self.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
if self.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
self.status = EmergencyAccessStatus::Accepted as i32;
self.grantee_uuid = Some(grantee_uuid.clone());
self.email = None;
self.save(conn).await
}
}
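The new `accept_invite` centralizes the invite checks (the email must match, and an invite cannot be accepted twice) that handlers previously had to inline. A hypothetical restatement of the state transition, with plain types standing in for UserId and EmergencyAccessStatus (the status values here are illustrative):

// Hedged sketch of the accept_invite state transition.
const INVITED: i32 = 0; // illustrative values
const ACCEPTED: i32 = 1;

struct Invite {
    status: i32,
    email: Option<String>,
    grantee: Option<String>,
}

fn accept(invite: &mut Invite, grantee_uuid: &str, grantee_email: &str) -> Result<(), &'static str> {
    if invite.email.as_deref() != Some(grantee_email) {
        return Err("User email does not match invite.");
    }
    if invite.status == ACCEPTED {
        return Err("Emergency contact already accepted.");
    }
    invite.status = ACCEPTED;
    invite.grantee = Some(grantee_uuid.to_string());
    invite.email = None; // the invite email is cleared once a grantee is bound
    Ok(())
}

fn main() {
    let mut inv = Invite { status: INVITED, email: Some("a@b.c".into()), grantee: None };
    assert!(accept(&mut inv, "uuid-1", "a@b.c").is_ok());
    assert!(accept(&mut inv, "uuid-1", "a@b.c").is_err()); // second accept fails
}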
// endregion
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct EmergencyAccessId(String);

View File

@@ -1,9 +1,9 @@
use crate::db::DbConn;
use chrono::{NaiveDateTime, TimeDelta, Utc};
//use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use crate::{api::EmptyResult, error::MapResult, CONFIG};
use chrono::{Duration, NaiveDateTime, Utc};
use super::{CipherId, CollectionId, GroupId, MembershipId, OrgPolicyId, OrganizationId, UserId};
use crate::{api::EmptyResult, db::DbConn, error::MapResult, CONFIG};
// https://bitwarden.com/help/event-logs/
@@ -15,20 +15,20 @@ db_object! {
#[diesel(table_name = event)]
#[diesel(primary_key(uuid))]
pub struct Event {
pub uuid: String,
pub uuid: EventId,
pub event_type: i32, // EventType
pub user_uuid: Option<String>,
pub org_uuid: Option<String>,
pub cipher_uuid: Option<String>,
pub collection_uuid: Option<String>,
pub group_uuid: Option<String>,
pub org_user_uuid: Option<String>,
pub act_user_uuid: Option<String>,
pub user_uuid: Option<UserId>,
pub org_uuid: Option<OrganizationId>,
pub cipher_uuid: Option<CipherId>,
pub collection_uuid: Option<CollectionId>,
pub group_uuid: Option<GroupId>,
pub org_user_uuid: Option<MembershipId>,
pub act_user_uuid: Option<UserId>,
// Upstream enum: https://github.com/bitwarden/server/blob/8a22c0479e987e756ce7412c48a732f9002f0a2d/src/Core/Enums/DeviceType.cs
pub device_type: Option<i32>,
pub ip_address: Option<String>,
pub event_date: NaiveDateTime,
pub policy_uuid: Option<String>,
pub policy_uuid: Option<OrgPolicyId>,
pub provider_uuid: Option<String>,
pub provider_user_uuid: Option<String>,
pub provider_org_uuid: Option<String>,
@@ -128,7 +128,7 @@ impl Event {
};
Self {
uuid: crate::util::get_uuid(),
uuid: EventId(crate::util::get_uuid()),
event_type,
user_uuid: None,
org_uuid: None,
@@ -246,7 +246,7 @@ impl Event {
/// ##############
/// Custom Queries
pub async fn find_by_organization_uuid(
org_uuid: &str,
org_uuid: &OrganizationId,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
@@ -263,7 +263,7 @@ impl Event {
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
event::table
.filter(event::org_uuid.eq(org_uuid))
@@ -274,16 +274,16 @@ impl Event {
}}
}
pub async fn find_by_org_and_user_org(
org_uuid: &str,
user_org_uuid: &str,
pub async fn find_by_org_and_member(
org_uuid: &OrganizationId,
member_uuid: &MembershipId,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
) -> Vec<Self> {
db_run! { conn: {
event::table
.inner_join(users_organizations::table.on(users_organizations::uuid.eq(user_org_uuid)))
.inner_join(users_organizations::table.on(users_organizations::uuid.eq(member_uuid)))
.filter(event::org_uuid.eq(org_uuid))
.filter(event::event_date.between(start, end))
.filter(event::user_uuid.eq(users_organizations::user_uuid.nullable()).or(event::act_user_uuid.eq(users_organizations::user_uuid.nullable())))
@@ -297,7 +297,7 @@ impl Event {
}
pub async fn find_by_cipher_uuid(
cipher_uuid: &str,
cipher_uuid: &CipherId,
start: &NaiveDateTime,
end: &NaiveDateTime,
conn: &mut DbConn,
@@ -316,7 +316,7 @@ impl Event {
pub async fn clean_events(conn: &mut DbConn) -> EmptyResult {
if let Some(days_to_retain) = CONFIG.events_days_retain() {
let dt = Utc::now().naive_utc() - Duration::days(days_to_retain);
let dt = Utc::now().naive_utc() - TimeDelta::try_days(days_to_retain).unwrap();
db_run! { conn: {
diesel::delete(event::table.filter(event::event_date.lt(dt)))
.execute(conn)
@@ -327,3 +327,6 @@ impl Event {
}
}
}
#[derive(Clone, Debug, DieselNewType, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize)]
pub struct EventId(String);
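The retention cleanup in `clean_events` now builds its cutoff with `TimeDelta::try_days` instead of `Duration::days`: the latter panics on out-of-range values, while `try_days` surfaces the overflow as `None`, so the `unwrap()` in the diff is a deliberate "can't happen" assertion for sane config values. A small standalone sketch (chrono 0.4.x):

use chrono::{TimeDelta, Utc};

fn main() {
    let days_to_retain: i64 = 30;
    // try_days returns None if the value overflows the TimeDelta range.
    let delta = TimeDelta::try_days(days_to_retain).expect("retention period out of range");
    let cutoff = Utc::now().naive_utc() - delta;
    println!("deleting events older than {cutoff}");
}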

View File

@@ -1,12 +1,12 @@
use super::User;
use super::{CipherId, User, UserId};
db_object! {
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = favorites)]
#[diesel(primary_key(user_uuid, cipher_uuid))]
pub struct Favorite {
pub user_uuid: String,
pub cipher_uuid: String,
pub user_uuid: UserId,
pub cipher_uuid: CipherId,
}
}
@@ -17,7 +17,7 @@ use crate::error::MapResult;
impl Favorite {
// Returns whether the specified cipher is a favorite of the specified user.
pub async fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_favorite(cipher_uuid: &CipherId, user_uuid: &UserId, conn: &mut DbConn) -> bool {
db_run! { conn: {
let query = favorites::table
.filter(favorites::cipher_uuid.eq(cipher_uuid))
@@ -29,7 +29,12 @@ impl Favorite {
}
// Sets whether the specified cipher is a favorite of the specified user.
pub async fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn set_favorite(
favorite: bool,
cipher_uuid: &CipherId,
user_uuid: &UserId,
conn: &mut DbConn,
) -> EmptyResult {
let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, conn).await, favorite);
match (old, new) {
(false, true) => {
@@ -62,7 +67,7 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified cipher.
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -71,7 +76,7 @@ impl Favorite {
}
// Delete all favorite entries associated with the specified user.
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -81,12 +86,12 @@ impl Favorite {
/// Return a vec with (cipher_uuid); this will only contain favorite-flagged ciphers
/// This is used during a full sync so we only need one query for all favorite cipher matches.
pub async fn get_all_cipher_uuid_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
pub async fn get_all_cipher_uuid_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<CipherId> {
db_run! { conn: {
favorites::table
.filter(favorites::user_uuid.eq(user_uuid))
.select(favorites::cipher_uuid)
.load::<String>(conn)
.load::<CipherId>(conn)
.unwrap_or_default()
}}
}
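The `(old, new)` match in `set_favorite` earlier in this file is a compact way to express the four toggle cases; only two of them touch the database. A hedged restatement with an in-memory set standing in for the favorites table:

use std::collections::HashSet;

// Sketch: the same four-case toggle as set_favorite, against a HashSet
// instead of the favorites table.
fn set_favorite(favs: &mut HashSet<(String, String)>, fav: bool, cipher: &str, user: &str) {
    let key = (user.to_string(), cipher.to_string());
    let old = favs.contains(&key);
    match (old, fav) {
        (false, true) => { favs.insert(key); }  // add the favorite row
        (true, false) => { favs.remove(&key); } // delete the favorite row
        (false, false) | (true, true) => {}     // no change, no query
    }
}

fn main() {
    let mut favs = HashSet::new();
    set_favorite(&mut favs, true, "cipher-1", "user-1");
    assert!(favs.contains(&("user-1".to_string(), "cipher-1".to_string())));
    set_favorite(&mut favs, true, "cipher-1", "user-1"); // idempotent
    assert_eq!(favs.len(), 1);
}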

View File

@@ -1,17 +1,19 @@
use chrono::{NaiveDateTime, Utc};
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use super::User;
use super::{CipherId, User, UserId};
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = folders)]
#[diesel(primary_key(uuid))]
pub struct Folder {
pub uuid: String,
pub uuid: FolderId,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: String,
pub user_uuid: UserId,
pub name: String,
}
@@ -19,18 +21,18 @@ db_object! {
#[diesel(table_name = folders_ciphers)]
#[diesel(primary_key(cipher_uuid, folder_uuid))]
pub struct FolderCipher {
pub cipher_uuid: String,
pub folder_uuid: String,
pub cipher_uuid: CipherId,
pub folder_uuid: FolderId,
}
}
/// Local methods
impl Folder {
pub fn new(user_uuid: String, name: String) -> Self {
pub fn new(user_uuid: UserId, name: String) -> Self {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
uuid: FolderId(crate::util::get_uuid()),
created_at: now,
updated_at: now,
@@ -43,19 +45,19 @@ impl Folder {
use crate::util::format_date;
json!({
"Id": self.uuid,
"RevisionDate": format_date(&self.updated_at),
"Name": self.name,
"Object": "folder",
"id": self.uuid,
"revisionDate": format_date(&self.updated_at),
"name": self.name,
"object": "folder",
})
}
}
impl FolderCipher {
pub fn new(folder_uuid: &str, cipher_uuid: &str) -> Self {
pub fn new(folder_uuid: FolderId, cipher_uuid: CipherId) -> Self {
Self {
folder_uuid: folder_uuid.to_string(),
cipher_uuid: cipher_uuid.to_string(),
folder_uuid,
cipher_uuid,
}
}
}
@@ -113,24 +115,25 @@ impl Folder {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
for folder in Self::find_by_user(user_uuid, conn).await {
folder.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_user(uuid: &FolderId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
folders::table
.filter(folders::uuid.eq(uuid))
.filter(folders::user_uuid.eq(user_uuid))
.first::<FolderDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
folders::table
.filter(folders::user_uuid.eq(user_uuid))
@@ -176,7 +179,7 @@ impl FolderCipher {
}}
}
pub async fn delete_all_by_cipher(cipher_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_cipher(cipher_uuid: &CipherId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
@@ -184,7 +187,7 @@ impl FolderCipher {
}}
}
pub async fn delete_all_by_folder(folder_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
.execute(conn)
@@ -192,7 +195,11 @@ impl FolderCipher {
}}
}
pub async fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_folder_and_cipher(
folder_uuid: &FolderId,
cipher_uuid: &CipherId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -203,7 +210,7 @@ impl FolderCipher {
}}
}
pub async fn find_by_folder(folder_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_folder(folder_uuid: &FolderId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
@@ -215,14 +222,32 @@ impl FolderCipher {
/// Return a vec with (cipher_uuid, folder_uuid)
/// This is used during a full sync so we only need one query for all folder matches.
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<(String, String)> {
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<(CipherId, FolderId)> {
db_run! { conn: {
folders_ciphers::table
.inner_join(folders::table)
.filter(folders::user_uuid.eq(user_uuid))
.select(folders_ciphers::all_columns)
.load::<(String, String)>(conn)
.load::<(CipherId, FolderId)>(conn)
.unwrap_or_default()
}}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct FolderId(String);

View File

@@ -1,4 +1,10 @@
use super::{CollectionId, Membership, MembershipId, OrganizationId, User, UserId};
use crate::api::EmptyResult;
use crate::db::DbConn;
use crate::error::MapResult;
use chrono::{NaiveDateTime, Utc};
use derive_more::{AsRef, Deref, Display, From};
use macros::UuidFromParam;
use serde_json::Value;
db_object! {
@@ -6,8 +12,8 @@ db_object! {
#[diesel(table_name = groups)]
#[diesel(primary_key(uuid))]
pub struct Group {
pub uuid: String,
pub organizations_uuid: String,
pub uuid: GroupId,
pub organizations_uuid: OrganizationId,
pub name: String,
pub access_all: bool,
pub external_id: Option<String>,
@@ -19,28 +25,34 @@ db_object! {
#[diesel(table_name = collections_groups)]
#[diesel(primary_key(collections_uuid, groups_uuid))]
pub struct CollectionGroup {
pub collections_uuid: String,
pub groups_uuid: String,
pub collections_uuid: CollectionId,
pub groups_uuid: GroupId,
pub read_only: bool,
pub hide_passwords: bool,
pub manage: bool,
}
#[derive(Identifiable, Queryable, Insertable)]
#[diesel(table_name = groups_users)]
#[diesel(primary_key(groups_uuid, users_organizations_uuid))]
pub struct GroupUser {
pub groups_uuid: String,
pub users_organizations_uuid: String
pub groups_uuid: GroupId,
pub users_organizations_uuid: MembershipId
}
}
/// Local methods
impl Group {
pub fn new(organizations_uuid: String, name: String, access_all: bool, external_id: Option<String>) -> Self {
pub fn new(
organizations_uuid: OrganizationId,
name: String,
access_all: bool,
external_id: Option<String>,
) -> Self {
let now = Utc::now().naive_utc();
let mut new_model = Self {
uuid: crate::util::get_uuid(),
uuid: GroupId(crate::util::get_uuid()),
organizations_uuid,
name,
access_all,
@@ -58,38 +70,42 @@ impl Group {
use crate::util::format_date;
json!({
"Id": self.uuid,
"OrganizationId": self.organizations_uuid,
"Name": self.name,
"AccessAll": self.access_all,
"ExternalId": self.external_id,
"CreationDate": format_date(&self.creation_date),
"RevisionDate": format_date(&self.revision_date),
"Object": "group"
"id": self.uuid,
"organizationId": self.organizations_uuid,
"name": self.name,
"accessAll": self.access_all,
"externalId": self.external_id,
"creationDate": format_date(&self.creation_date),
"revisionDate": format_date(&self.revision_date),
"object": "group"
})
}
pub async fn to_json_details(&self, conn: &mut DbConn) -> Value {
// If both read_only and hide_passwords are false, then manage should be true
// You can't have an entry with both read_only and manage, or hide_passwords and manage,
// or an entry with everything set to false
let collections_groups: Vec<Value> = CollectionGroup::find_by_group(&self.uuid, conn)
.await
.iter()
.map(|entry| {
json!({
"Id": entry.collections_uuid,
"ReadOnly": entry.read_only,
"HidePasswords": entry.hide_passwords
"id": entry.collections_uuid,
"readOnly": entry.read_only,
"hidePasswords": entry.hide_passwords,
"manage": entry.manage,
})
})
.collect();
json!({
"Id": self.uuid,
"OrganizationId": self.organizations_uuid,
"Name": self.name,
"AccessAll": self.access_all,
"ExternalId": self.external_id,
"Collections": collections_groups,
"Object": "groupDetails"
"id": self.uuid,
"organizationId": self.organizations_uuid,
"name": self.name,
"accessAll": self.access_all,
"externalId": self.external_id,
"collections": collections_groups,
"object": "groupDetails"
})
}
@@ -103,18 +119,38 @@ impl Group {
}
impl CollectionGroup {
pub fn new(collections_uuid: String, groups_uuid: String, read_only: bool, hide_passwords: bool) -> Self {
pub fn new(
collections_uuid: CollectionId,
groups_uuid: GroupId,
read_only: bool,
hide_passwords: bool,
manage: bool,
) -> Self {
Self {
collections_uuid,
groups_uuid,
read_only,
hide_passwords,
manage,
}
}
pub fn to_json_details_for_group(&self) -> Value {
// If both read_only and hide_passwords are false, then manage should be true
// You can't have an entry with both read_only and manage, or hide_passwords and manage,
// or an entry with everything set to false
// For backwards compatibility and migration purposes we keep checking read_only and hide_passwords
json!({
"id": self.groups_uuid,
"readOnly": self.read_only,
"hidePasswords": self.hide_passwords,
"manage": self.manage || (!self.read_only && !self.hide_passwords),
})
}
}
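The value sent for a group entry is therefore `self.manage || (!read_only && !hide_passwords)`: rows written before the migration never stored a manage flag, so full read/write is treated as manage. A quick check of the rule (hedged sketch, rule copied from to_json_details_for_group above):

// Sketch: the backwards-compat manage rule.
fn effective_manage(manage: bool, read_only: bool, hide_passwords: bool) -> bool {
    manage || (!read_only && !hide_passwords)
}

fn main() {
    assert!(effective_manage(false, false, false)); // legacy full-access row
    assert!(!effective_manage(false, true, false)); // read-only row
    assert!(!effective_manage(false, false, true)); // hidden-passwords row
    assert!(effective_manage(true, false, false));  // explicit manage flag passes through
}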
impl GroupUser {
pub fn new(groups_uuid: String, users_organizations_uuid: String) -> Self {
pub fn new(groups_uuid: GroupId, users_organizations_uuid: MembershipId) -> Self {
Self {
groups_uuid,
users_organizations_uuid,
@@ -122,13 +158,6 @@ impl GroupUser {
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use super::{User, UserOrganization};
/// Database methods
impl Group {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
@@ -165,27 +194,27 @@ impl Group {
}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
for group in Self::find_by_organization(org_uuid, conn).await {
group.delete(conn).await?;
}
Ok(())
}
pub async fn find_by_organization(organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups::table
.filter(groups::organizations_uuid.eq(organizations_uuid))
.filter(groups::organizations_uuid.eq(org_uuid))
.load::<GroupDb>(conn)
.expect("Error loading groups")
.from_db()
}}
}
pub async fn count_by_org(organizations_uuid: &str, conn: &mut DbConn) -> i64 {
pub async fn count_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> i64 {
db_run! { conn: {
groups::table
.filter(groups::organizations_uuid.eq(organizations_uuid))
.filter(groups::organizations_uuid.eq(org_uuid))
.count()
.first::<i64>(conn)
.ok()
@@ -193,27 +222,33 @@ impl Group {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid_and_org(uuid: &GroupId, org_uuid: &OrganizationId, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
groups::table
.filter(groups::uuid.eq(uuid))
.filter(groups::organizations_uuid.eq(org_uuid))
.first::<GroupDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_external_id(id: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_external_id_and_org(
external_id: &str,
org_uuid: &OrganizationId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
groups::table
.filter(groups::external_id.eq(id))
.filter(groups::external_id.eq(external_id))
.filter(groups::organizations_uuid.eq(org_uuid))
.first::<GroupDb>(conn)
.ok()
.from_db()
}}
}
// Returns all organizations the user has full access to
pub async fn gather_user_organizations_full_access(user_uuid: &str, conn: &mut DbConn) -> Vec<String> {
pub async fn get_orgs_by_user_with_full_access(user_uuid: &UserId, conn: &mut DbConn) -> Vec<OrganizationId> {
db_run! { conn: {
groups_users::table
.inner_join(users_organizations::table.on(
@@ -226,12 +261,12 @@ impl Group {
.filter(groups::access_all.eq(true))
.select(groups::organizations_uuid)
.distinct()
.load::<String>(conn)
.load::<OrganizationId>(conn)
.expect("Error loading organization group full access information for user")
}}
}
pub async fn is_in_full_access_group(user_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_in_full_access_group(user_uuid: &UserId, org_uuid: &OrganizationId, conn: &mut DbConn) -> bool {
db_run! { conn: {
groups::table
.inner_join(groups_users::table.on(
@@ -260,13 +295,13 @@ impl Group {
}}
}
pub async fn update_revision(uuid: &str, conn: &mut DbConn) {
pub async fn update_revision(uuid: &GroupId, conn: &mut DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
warn!("Failed to update revision for {}: {:#?}", uuid, e);
}
}
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
async fn _update_revision(uuid: &GroupId, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(groups::table.filter(groups::uuid.eq(uuid)))
@@ -293,6 +328,7 @@ impl CollectionGroup {
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(&self.read_only),
collections_groups::hide_passwords.eq(&self.hide_passwords),
collections_groups::manage.eq(&self.manage),
))
.execute(conn)
{
@@ -307,6 +343,7 @@ impl CollectionGroup {
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(&self.read_only),
collections_groups::hide_passwords.eq(&self.hide_passwords),
collections_groups::manage.eq(&self.manage),
))
.execute(conn)
.map_res("Error adding group to collection")
@@ -321,12 +358,14 @@ impl CollectionGroup {
collections_groups::groups_uuid.eq(&self.groups_uuid),
collections_groups::read_only.eq(self.read_only),
collections_groups::hide_passwords.eq(self.hide_passwords),
collections_groups::manage.eq(self.manage),
))
.on_conflict((collections_groups::collections_uuid, collections_groups::groups_uuid))
.do_update()
.set((
collections_groups::read_only.eq(self.read_only),
collections_groups::hide_passwords.eq(self.hide_passwords),
collections_groups::manage.eq(self.manage),
))
.execute(conn)
.map_res("Error adding group to collection")
@@ -334,7 +373,7 @@ impl CollectionGroup {
}
}
pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.filter(collections_groups::groups_uuid.eq(group_uuid))
@@ -344,7 +383,7 @@ impl CollectionGroup {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.inner_join(groups_users::table.on(
@@ -361,7 +400,7 @@ impl CollectionGroup {
}}
}
pub async fn find_by_collection(collection_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections_groups::table
.filter(collections_groups::collections_uuid.eq(collection_uuid))
@@ -387,7 +426,7 @@ impl CollectionGroup {
}}
}
pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(group_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
@@ -401,7 +440,7 @@ impl CollectionGroup {
}}
}
pub async fn delete_all_by_collection(collection_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &mut DbConn) -> EmptyResult {
let collection_assigned_to_groups = CollectionGroup::find_by_collection(collection_uuid, conn).await;
for collection_assigned_to_group in collection_assigned_to_groups {
let group_users = GroupUser::find_by_group(&collection_assigned_to_group.groups_uuid, conn).await;
@@ -466,7 +505,7 @@ impl GroupUser {
}
}
pub async fn find_by_group(group_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups_users::table
.filter(groups_users::groups_uuid.eq(group_uuid))
@@ -476,43 +515,80 @@ impl GroupUser {
}}
}
pub async fn find_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_member(member_uuid: &MembershipId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
groups_users::table
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.load::<GroupUserDb>(conn)
.expect("Error loading groups for user")
.from_db()
}}
}
pub async fn has_access_to_collection_by_member(
collection_uuid: &CollectionId,
member_uuid: &MembershipId,
conn: &mut DbConn,
) -> bool {
db_run! { conn: {
groups_users::table
.inner_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
))
.filter(collections_groups::collections_uuid.eq(collection_uuid))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.count()
.first::<i64>(conn)
.unwrap_or(0) != 0
}}
}
pub async fn has_full_access_by_member(
org_uuid: &OrganizationId,
member_uuid: &MembershipId,
conn: &mut DbConn,
) -> bool {
db_run! { conn: {
groups_users::table
.inner_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.filter(groups::organizations_uuid.eq(org_uuid))
.filter(groups::access_all.eq(true))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.count()
.first::<i64>(conn)
.unwrap_or(0) != 0
}}
}
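A hypothetical call site for the two helpers above (the surrounding variable names are assumptions; this is a sketch, not compilable in isolation):

// Allow access when the member reaches the collection through any of their
// groups, or belongs to an access_all group of the organization.
let can_view = GroupUser::has_access_to_collection_by_member(&collection_id, &member_id, &mut conn).await
    || GroupUser::has_full_access_by_member(&org_id, &member_id, &mut conn).await;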
pub async fn update_user_revision(&self, conn: &mut DbConn) {
match UserOrganization::find_by_uuid(&self.users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
match Membership::find_by_uuid(&self.users_organizations_uuid, conn).await {
Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
None => warn!("Member could not be found!"),
}
}
pub async fn delete_by_group_id_and_user_id(
group_uuid: &str,
users_organizations_uuid: &str,
pub async fn delete_by_group_and_member(
group_uuid: &GroupId,
member_uuid: &MembershipId,
conn: &mut DbConn,
) -> EmptyResult {
match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
match Membership::find_by_uuid(member_uuid, conn).await {
Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
None => warn!("Member could not be found!"),
};
db_run! { conn: {
diesel::delete(groups_users::table)
.filter(groups_users::groups_uuid.eq(group_uuid))
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.execute(conn)
.map_res("Error deleting group users")
}}
}
pub async fn delete_all_by_group(group_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &mut DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(group_uuid, conn).await;
for group_user in group_users {
group_user.update_user_revision(conn).await;
@@ -526,17 +602,35 @@ impl GroupUser {
}}
}
pub async fn delete_all_by_user(users_organizations_uuid: &str, conn: &mut DbConn) -> EmptyResult {
match UserOrganization::find_by_uuid(users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,
None => warn!("User could not be found!"),
pub async fn delete_all_by_member(member_uuid: &MembershipId, conn: &mut DbConn) -> EmptyResult {
match Membership::find_by_uuid(member_uuid, conn).await {
Some(member) => User::update_uuid_revision(&member.user_uuid, conn).await,
None => warn!("Member could not be found!"),
}
db_run! { conn: {
diesel::delete(groups_users::table)
.filter(groups_users::users_organizations_uuid.eq(users_organizations_uuid))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.execute(conn)
.map_res("Error deleting user groups")
}}
}
}
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct GroupId(String);
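These `DieselNewType` wrappers stay transparent to Diesel queries, while `UuidFromParam` lets the ID be parsed and UUID-validated straight from a route segment. A small sketch of what the typed IDs buy (the wrapper function is hypothetical; `find_by_uuid_and_org` is the method added in this diff):

// Swapping the two ID arguments below is now a compile-time type error
// rather than a silently wrong query against raw String uuids.
async fn load(group_id: &GroupId, org_id: &OrganizationId, conn: &mut DbConn) -> Option<Group> {
    Group::find_by_uuid_and_org(group_id, org_id, conn).await
}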

View File

@@ -12,22 +12,30 @@ mod org_policy;
mod organization;
mod send;
mod two_factor;
mod two_factor_duo_context;
mod two_factor_incomplete;
mod user;
pub use self::attachment::Attachment;
pub use self::auth_request::AuthRequest;
pub use self::cipher::Cipher;
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::{Device, DeviceType};
pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
pub use self::attachment::{Attachment, AttachmentId};
pub use self::auth_request::{AuthRequest, AuthRequestId};
pub use self::cipher::{Cipher, CipherId, RepromptType};
pub use self::collection::{Collection, CollectionCipher, CollectionId, CollectionUser};
pub use self::device::{Device, DeviceId, DeviceType};
pub use self::emergency_access::{EmergencyAccess, EmergencyAccessId, EmergencyAccessStatus, EmergencyAccessType};
pub use self::event::{Event, EventType};
pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher};
pub use self::group::{CollectionGroup, Group, GroupUser};
pub use self::org_policy::{OrgPolicy, OrgPolicyErr, OrgPolicyType};
pub use self::organization::{Organization, OrganizationApiKey, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::send::{Send, SendType};
pub use self::folder::{Folder, FolderCipher, FolderId};
pub use self::group::{CollectionGroup, Group, GroupId, GroupUser};
pub use self::org_policy::{OrgPolicy, OrgPolicyErr, OrgPolicyId, OrgPolicyType};
pub use self::organization::{
Membership, MembershipId, MembershipStatus, MembershipType, OrgApiKeyId, Organization, OrganizationApiKey,
OrganizationId,
};
pub use self::send::{
id::{SendFileId, SendId},
Send, SendType,
};
pub use self::two_factor::{TwoFactor, TwoFactorType};
pub use self::two_factor_duo_context::TwoFactorDuoContext;
pub use self::two_factor_incomplete::TwoFactorIncomplete;
pub use self::user::{Invitation, User, UserKdfType, UserStampException};
pub use self::user::{Invitation, User, UserId, UserKdfType, UserStampException};

View File

@@ -1,20 +1,20 @@
use derive_more::{AsRef, From};
use serde::Deserialize;
use serde_json::Value;
use crate::api::EmptyResult;
use crate::db::DbConn;
use crate::error::MapResult;
use crate::util::UpCase;
use super::{TwoFactor, UserOrgStatus, UserOrgType, UserOrganization};
use super::{Membership, MembershipId, MembershipStatus, MembershipType, OrganizationId, TwoFactor, UserId};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = org_policies)]
#[diesel(primary_key(uuid))]
pub struct OrgPolicy {
pub uuid: String,
pub org_uuid: String,
pub uuid: OrgPolicyId,
pub org_uuid: OrganizationId,
pub atype: i32,
pub enabled: bool,
pub data: String,
@@ -39,16 +39,18 @@ pub enum OrgPolicyType {
// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/SendOptionsPolicyData.cs
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct SendOptionsPolicyData {
pub DisableHideEmail: bool,
#[serde(rename = "disableHideEmail", alias = "DisableHideEmail")]
pub disable_hide_email: bool,
}
// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/ResetPasswordDataModel.cs
#[derive(Deserialize)]
#[allow(non_snake_case)]
#[serde(rename_all = "camelCase")]
pub struct ResetPasswordDataModel {
pub AutoEnrollEnabled: bool,
#[serde(rename = "autoEnrollEnabled", alias = "AutoEnrollEnabled")]
pub auto_enroll_enabled: bool,
}
pub type OrgPolicyResult = Result<(), OrgPolicyErr>;
@@ -61,9 +63,9 @@ pub enum OrgPolicyErr {
/// Local methods
impl OrgPolicy {
pub fn new(org_uuid: String, atype: OrgPolicyType, data: String) -> Self {
pub fn new(org_uuid: OrganizationId, atype: OrgPolicyType, data: String) -> Self {
Self {
uuid: crate::util::get_uuid(),
uuid: OrgPolicyId(crate::util::get_uuid()),
org_uuid,
atype: atype as i32,
enabled: false,
@@ -78,12 +80,12 @@ impl OrgPolicy {
pub fn to_json(&self) -> Value {
let data_json: Value = serde_json::from_str(&self.data).unwrap_or(Value::Null);
json!({
"Id": self.uuid,
"OrganizationId": self.org_uuid,
"Type": self.atype,
"Data": data_json,
"Enabled": self.enabled,
"Object": "policy",
"id": self.uuid,
"organizationId": self.org_uuid,
"type": self.atype,
"data": data_json,
"enabled": self.enabled,
"object": "policy",
})
}
}
@@ -114,7 +116,7 @@ impl OrgPolicy {
// We need to make sure we're not going to violate the unique constraint on org_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(
let _: () = diesel::delete(
org_policies::table
.filter(org_policies::org_uuid.eq(&self.org_uuid))
.filter(org_policies::atype.eq(&self.atype)),
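The comment above explains the PostgreSQL branch: since ON CONFLICT can target only a single constraint, the save is emulated as delete-then-insert. A condensed sketch of that flow, under the same assumption (not compilable in isolation):

// Clear any row sharing the (org_uuid, atype) unique pair ...
diesel::delete(
    org_policies::table
        .filter(org_policies::org_uuid.eq(&self.org_uuid))
        .filter(org_policies::atype.eq(&self.atype)),
)
.execute(conn)
.map_res("Error deleting policy for insert")?;
// ... then insert the fresh row; no conflict is possible anymore.
// replace_into() performs both steps implicitly on the SQLite/MySQL backends.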
@@ -141,17 +143,7 @@ impl OrgPolicy {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
.first::<OrgPolicyDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
@@ -161,7 +153,7 @@ impl OrgPolicy {
}}
}
pub async fn find_confirmed_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_confirmed_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
org_policies::table
.inner_join(
@@ -170,7 +162,7 @@ impl OrgPolicy {
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.select(org_policies::all_columns)
.load::<OrgPolicyDb>(conn)
@@ -179,7 +171,11 @@ impl OrgPolicy {
}}
}
pub async fn find_by_org_and_type(org_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_org_and_type(
org_uuid: &OrganizationId,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
@@ -190,7 +186,7 @@ impl OrgPolicy {
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(conn)
@@ -199,7 +195,7 @@ impl OrgPolicy {
}
pub async fn find_accepted_and_confirmed_by_user_and_active_policy(
user_uuid: &str,
user_uuid: &UserId,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> Vec<Self> {
@@ -211,10 +207,10 @@ impl OrgPolicy {
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Accepted as i32)
users_organizations::status.eq(MembershipStatus::Accepted as i32)
)
.or_filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(org_policies::atype.eq(policy_type as i32))
.filter(org_policies::enabled.eq(true))
@@ -226,7 +222,7 @@ impl OrgPolicy {
}
pub async fn find_confirmed_by_user_and_active_policy(
user_uuid: &str,
user_uuid: &UserId,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> Vec<Self> {
@@ -238,7 +234,7 @@ impl OrgPolicy {
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
users_organizations::status.eq(MembershipStatus::Confirmed as i32)
)
.filter(org_policies::atype.eq(policy_type as i32))
.filter(org_policies::enabled.eq(true))
@@ -253,21 +249,21 @@ impl OrgPolicy {
/// and the user is not an owner or admin of that org. This is only useful for checking
/// applicability of policy types that have these particular semantics.
pub async fn is_applicable_to_user(
user_uuid: &str,
user_uuid: &UserId,
policy_type: OrgPolicyType,
exclude_org_uuid: Option<&str>,
exclude_org_uuid: Option<&OrganizationId>,
conn: &mut DbConn,
) -> bool {
for policy in
OrgPolicy::find_accepted_and_confirmed_by_user_and_active_policy(user_uuid, policy_type, conn).await
{
// Check if we need to skip this organization.
if exclude_org_uuid.is_some() && exclude_org_uuid.unwrap() == policy.org_uuid {
if exclude_org_uuid.is_some() && *exclude_org_uuid.unwrap() == policy.org_uuid {
continue;
}
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < MembershipType::Admin {
return true;
}
}
@@ -276,8 +272,8 @@ impl OrgPolicy {
}
pub async fn is_user_allowed(
user_uuid: &str,
org_uuid: &str,
user_uuid: &UserId,
org_uuid: &OrganizationId,
exclude_current_org: bool,
conn: &mut DbConn,
) -> OrgPolicyResult {
@@ -305,11 +301,11 @@ impl OrgPolicy {
Ok(())
}
pub async fn org_is_reset_password_auto_enroll(org_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn org_is_reset_password_auto_enroll(org_uuid: &OrganizationId, conn: &mut DbConn) -> bool {
match OrgPolicy::find_by_org_and_type(org_uuid, OrgPolicyType::ResetPassword, conn).await {
Some(policy) => match serde_json::from_str::<UpCase<ResetPasswordDataModel>>(&policy.data) {
Some(policy) => match serde_json::from_str::<ResetPasswordDataModel>(&policy.data) {
Ok(opts) => {
return policy.enabled && opts.data.AutoEnrollEnabled;
return policy.enabled && opts.auto_enroll_enabled;
}
_ => error!("Failed to deserialize ResetPasswordDataModel: {}", policy.data),
},
@@ -321,15 +317,15 @@ impl OrgPolicy {
/// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
/// option of the `Send Options` policy, and the user is not an owner or admin of that org.
pub async fn is_hide_email_disabled(user_uuid: &str, conn: &mut DbConn) -> bool {
pub async fn is_hide_email_disabled(user_uuid: &UserId, conn: &mut DbConn) -> bool {
for policy in
OrgPolicy::find_confirmed_by_user_and_active_policy(user_uuid, OrgPolicyType::SendOptions, conn).await
{
if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < UserOrgType::Admin {
match serde_json::from_str::<UpCase<SendOptionsPolicyData>>(&policy.data) {
if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
if user.atype < MembershipType::Admin {
match serde_json::from_str::<SendOptionsPolicyData>(&policy.data) {
Ok(opts) => {
if opts.data.DisableHideEmail {
if opts.disable_hide_email {
return true;
}
}
@@ -340,4 +336,20 @@ impl OrgPolicy {
}
false
}
pub async fn is_enabled_for_member(
member_uuid: &MembershipId,
policy_type: OrgPolicyType,
conn: &mut DbConn,
) -> bool {
if let Some(member) = Membership::find_by_uuid(member_uuid, conn).await {
if let Some(policy) = OrgPolicy::find_by_org_and_type(&member.org_uuid, policy_type, conn).await {
return policy.enabled;
}
}
false
}
}
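A hypothetical guard built on the new helper (the policy variant and error text are only examples):

// Reject an action when, say, the SingleOrg policy is active for the
// member's organization; a sketch, not compilable in isolation.
if OrgPolicy::is_enabled_for_member(&member_id, OrgPolicyType::SingleOrg, &mut conn).await {
    err!("This action is blocked by an organization policy")
}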
#[derive(Clone, Debug, AsRef, DieselNewType, From, FromForm, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct OrgPolicyId(String);

File diff suppressed because it is too large

View File

@@ -1,7 +1,10 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::User;
use crate::util::LowerCase;
use super::{OrganizationId, User, UserId};
use id::SendId;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -9,11 +12,10 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct Send {
pub uuid: String,
pub user_uuid: Option<String>,
pub organization_uuid: Option<String>,
pub uuid: SendId,
pub user_uuid: Option<UserId>,
pub organization_uuid: Option<OrganizationId>,
pub name: String,
pub notes: Option<String>,
@@ -49,7 +51,7 @@ impl Send {
let now = Utc::now().naive_utc();
Self {
uuid: crate::util::get_uuid(),
uuid: SendId::from(crate::util::get_uuid()),
user_uuid: None,
organization_uuid: None,
@@ -122,48 +124,58 @@ impl Send {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
let mut data = serde_json::from_str::<LowerCase<Value>>(&self.data).map(|d| d.data).unwrap_or_default();
// Mobile clients expect size to be a string instead of a number
if let Some(size) = data.get("size").and_then(|v| v.as_i64()) {
data["size"] = Value::String(size.to_string());
}
json!({
"Id": self.uuid,
"AccessId": BASE64URL_NOPAD.encode(Uuid::parse_str(&self.uuid).unwrap_or_default().as_bytes()),
"Type": self.atype,
"id": self.uuid,
"accessId": BASE64URL_NOPAD.encode(Uuid::parse_str(&self.uuid).unwrap_or_default().as_bytes()),
"type": self.atype,
"Name": self.name,
"Notes": self.notes,
"Text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"name": self.name,
"notes": self.notes,
"text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"file": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"Key": self.akey,
"MaxAccessCount": self.max_access_count,
"AccessCount": self.access_count,
"Password": self.password_hash.as_deref().map(|h| BASE64URL_NOPAD.encode(h)),
"Disabled": self.disabled,
"HideEmail": self.hide_email,
"key": self.akey,
"maxAccessCount": self.max_access_count,
"accessCount": self.access_count,
"password": self.password_hash.as_deref().map(|h| BASE64URL_NOPAD.encode(h)),
"disabled": self.disabled,
"hideEmail": self.hide_email,
"RevisionDate": format_date(&self.revision_date),
"ExpirationDate": self.expiration_date.as_ref().map(format_date),
"DeletionDate": format_date(&self.deletion_date),
"Object": "send",
"revisionDate": format_date(&self.revision_date),
"expirationDate": self.expiration_date.as_ref().map(format_date),
"deletionDate": format_date(&self.deletion_date),
"object": "send",
})
}
pub async fn to_json_access(&self, conn: &mut DbConn) -> Value {
use crate::util::format_date;
let data: Value = serde_json::from_str(&self.data).unwrap_or_default();
let mut data = serde_json::from_str::<LowerCase<Value>>(&self.data).map(|d| d.data).unwrap_or_default();
// Mobile clients expect size to be a string instead of a number
if let Some(size) = data.get("size").and_then(|v| v.as_i64()) {
data["size"] = Value::String(size.to_string());
}
json!({
"Id": self.uuid,
"Type": self.atype,
"id": self.uuid,
"type": self.atype,
"Name": self.name,
"Text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"File": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"name": self.name,
"text": if self.atype == SendType::Text as i32 { Some(&data) } else { None },
"file": if self.atype == SendType::File as i32 { Some(&data) } else { None },
"ExpirationDate": self.expiration_date.as_ref().map(format_date),
"CreatorIdentifier": self.creator_identifier(conn).await,
"Object": "send-access",
"expirationDate": self.expiration_date.as_ref().map(format_date),
"creatorIdentifier": self.creator_identifier(conn).await,
"object": "send-access",
})
}
}
@@ -231,7 +243,7 @@ impl Send {
}
}
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<String> {
pub async fn update_users_revision(&self, conn: &mut DbConn) -> Vec<UserId> {
let mut user_uuids = Vec::new();
match &self.user_uuid {
Some(user_uuid) => {
@@ -245,7 +257,7 @@ impl Send {
user_uuids
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
for send in Self::find_by_user(user_uuid, conn).await {
send.delete(conn).await?;
}
@@ -256,20 +268,19 @@ impl Send {
use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;
let uuid_vec = match BASE64URL_NOPAD.decode(access_id.as_bytes()) {
Ok(v) => v,
Err(_) => return None,
let Ok(uuid_vec) = BASE64URL_NOPAD.decode(access_id.as_bytes()) else {
return None;
};
let uuid = match Uuid::from_slice(&uuid_vec) {
Ok(u) => u.to_string(),
Ok(u) => SendId::from(u.to_string()),
Err(_) => return None,
};
Self::find_by_uuid(&uuid, conn).await
}
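For orientation, the access-id scheme decoded above round-trips like this (a standalone sketch assuming the `uuid` and `data-encoding` crates):

use data_encoding::BASE64URL_NOPAD;
use uuid::Uuid;

fn main() {
    // An access id is the raw 16 UUID bytes, base64url-encoded without padding.
    let send_uuid = Uuid::new_v4();
    let access_id = BASE64URL_NOPAD.encode(send_uuid.as_bytes());
    assert_eq!(access_id.len(), 22); // 16 bytes -> 22 base64url chars

    // Decoding reverses it, which is exactly what to_send() does above.
    let raw = BASE64URL_NOPAD.decode(access_id.as_bytes()).unwrap();
    assert_eq!(Uuid::from_slice(&raw).unwrap(), send_uuid);
}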
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &SendId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
@@ -279,7 +290,18 @@ impl Send {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_uuid_and_user(uuid: &SendId, user_uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
sends::table
.filter(sends::uuid.eq(uuid))
.filter(sends::user_uuid.eq(user_uuid))
.first::<SendDb>(conn)
.ok()
.from_db()
}}
}
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::user_uuid.eq(user_uuid))
@@ -287,28 +309,21 @@ impl Send {
}}
}
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<i64> {
pub async fn size_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Option<i64> {
let sends = Self::find_by_user(user_uuid, conn).await;
#[allow(non_snake_case)]
#[derive(serde::Deserialize, Default)]
#[derive(serde::Deserialize)]
struct FileData {
Size: Option<NumberOrString>,
size: Option<NumberOrString>,
#[serde(rename = "size", alias = "Size")]
size: NumberOrString,
}
let mut total: i64 = 0;
for send in sends {
if send.atype == SendType::File as i32 {
let data: FileData = serde_json::from_str(&send.data).unwrap_or_default();
let size = match (data.size, data.Size) {
(Some(s), _) => s.into_i64(),
(_, Some(s)) => s.into_i64(),
(None, None) => continue,
};
if let Ok(size) = size {
if let Ok(size) =
serde_json::from_str::<FileData>(&send.data).map_err(Into::into).and_then(|d| d.size.into_i64())
{
total = total.checked_add(size)?;
};
}
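The rename-plus-alias attribute above is what keeps old PascalCase payloads deserializing; a standalone sketch of the same pattern (struct reproduced in miniature, with a String stand-in for NumberOrString):

use serde::Deserialize;

#[derive(Deserialize)]
struct FileData {
    // "size" is the canonical key; "Size" is accepted as a legacy alias.
    #[serde(rename = "size", alias = "Size")]
    size: String,
}

fn main() {
    let new: FileData = serde_json::from_str(r#"{"size":"100"}"#).unwrap();
    let old: FileData = serde_json::from_str(r#"{"Size":"100"}"#).unwrap();
    assert_eq!(new.size, old.size);
}

The surrounding loop also switches to checked_add, so an overflowing total propagates as None instead of wrapping.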
@@ -317,7 +332,7 @@ impl Send {
Some(total)
}
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_org(org_uuid: &OrganizationId, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table
.filter(sends::organization_uuid.eq(org_uuid))
@@ -334,3 +349,48 @@ impl Send {
}}
}
}
// separate namespace to avoid name collision with std::marker::Send
pub mod id {
use derive_more::{AsRef, Deref, Display, From};
use macros::{IdFromParam, UuidFromParam};
use std::marker::Send;
use std::path::Path;
#[derive(
Clone,
Debug,
AsRef,
Deref,
DieselNewType,
Display,
From,
FromForm,
Hash,
PartialEq,
Eq,
Serialize,
Deserialize,
UuidFromParam,
)]
pub struct SendId(String);
impl AsRef<Path> for SendId {
#[inline]
fn as_ref(&self) -> &Path {
Path::new(&self.0)
}
}
#[derive(
Clone, Debug, AsRef, Deref, Display, From, FromForm, Hash, PartialEq, Eq, Serialize, Deserialize, IdFromParam,
)]
pub struct SendFileId(String);
impl AsRef<Path> for SendFileId {
#[inline]
fn as_ref(&self) -> &Path {
Path::new(&self.0)
}
}
}

View File

@@ -1,5 +1,6 @@
use serde_json::Value;
use super::UserId;
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
db_object! {
@@ -7,12 +8,12 @@ db_object! {
#[diesel(table_name = twofactor)]
#[diesel(primary_key(uuid))]
pub struct TwoFactor {
pub uuid: String,
pub user_uuid: String,
pub uuid: TwoFactorId,
pub user_uuid: UserId,
pub atype: i32,
pub enabled: bool,
pub data: String,
pub last_used: i32,
pub last_used: i64,
}
}
@@ -41,9 +42,9 @@ pub enum TwoFactorType {
/// Local methods
impl TwoFactor {
pub fn new(user_uuid: String, atype: TwoFactorType, data: String) -> Self {
pub fn new(user_uuid: UserId, atype: TwoFactorType, data: String) -> Self {
Self {
uuid: crate::util::get_uuid(),
uuid: TwoFactorId(crate::util::get_uuid()),
user_uuid,
atype: atype as i32,
enabled: true,
@@ -54,17 +55,17 @@ impl TwoFactor {
pub fn to_json(&self) -> Value {
json!({
"Enabled": self.enabled,
"Key": "", // This key and value vary
"Object": "twoFactorAuthenticator" // This value varies
"enabled": self.enabled,
"key": "", // This key and value vary
"Oobject": "twoFactorAuthenticator" // This value varies
})
}
pub fn to_json_provider(&self) -> Value {
json!({
"Enabled": self.enabled,
"Type": self.atype,
"Object": "twoFactorProvider"
"enabled": self.enabled,
"type": self.atype,
"object": "twoFactorProvider"
})
}
}
@@ -95,7 +96,7 @@ impl TwoFactor {
// We need to make sure we're not going to violate the unique constraint on user_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(&self.user_uuid)).filter(twofactor::atype.eq(&self.atype)))
let _: () = diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(&self.user_uuid)).filter(twofactor::atype.eq(&self.atype)))
.execute(conn)
.map_res("Error deleting twofactor for insert")?;
@@ -118,7 +119,7 @@ impl TwoFactor {
}}
}
pub async fn find_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
pub async fn find_by_user(user_uuid: &UserId, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -129,7 +130,7 @@ impl TwoFactor {
}}
}
pub async fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_user_and_type(user_uuid: &UserId, atype: i32, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
@@ -140,7 +141,7 @@ impl TwoFactor {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
@@ -217,3 +218,6 @@ impl TwoFactor {
Ok(())
}
}
#[derive(Clone, Debug, DieselNewType, FromForm, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct TwoFactorId(String);

View File

@@ -0,0 +1,84 @@
use chrono::Utc;
use crate::{api::EmptyResult, db::DbConn, error::MapResult};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = twofactor_duo_ctx)]
#[diesel(primary_key(state))]
pub struct TwoFactorDuoContext {
pub state: String,
pub user_email: String,
pub nonce: String,
pub exp: i64,
}
}
impl TwoFactorDuoContext {
pub async fn find_by_state(state: &str, conn: &mut DbConn) -> Option<Self> {
db_run! {
conn: {
twofactor_duo_ctx::table
.filter(twofactor_duo_ctx::state.eq(state))
.first::<TwoFactorDuoContextDb>(conn)
.ok()
.from_db()
}
}
}
pub async fn save(state: &str, user_email: &str, nonce: &str, ttl: i64, conn: &mut DbConn) -> EmptyResult {
// A saved context should never be changed, only created or deleted.
let exists = Self::find_by_state(state, conn).await;
if exists.is_some() {
return Ok(());
};
let exp = Utc::now().timestamp() + ttl;
db_run! {
conn: {
diesel::insert_into(twofactor_duo_ctx::table)
.values((
twofactor_duo_ctx::state.eq(state),
twofactor_duo_ctx::user_email.eq(user_email),
twofactor_duo_ctx::nonce.eq(nonce),
twofactor_duo_ctx::exp.eq(exp)
))
.execute(conn)
.map_res("Error saving context to twofactor_duo_ctx")
}
}
}
pub async fn find_expired(conn: &mut DbConn) -> Vec<Self> {
let now = Utc::now().timestamp();
db_run! {
conn: {
twofactor_duo_ctx::table
.filter(twofactor_duo_ctx::exp.lt(now))
.load::<TwoFactorDuoContextDb>(conn)
.expect("Error finding expired contexts in twofactor_duo_ctx")
.from_db()
}
}
}
pub async fn delete(&self, conn: &mut DbConn) -> EmptyResult {
db_run! {
conn: {
diesel::delete(
twofactor_duo_ctx::table
.filter(twofactor_duo_ctx::state.eq(&self.state)))
.execute(conn)
.map_res("Error deleting from twofactor_duo_ctx")
}
}
}
pub async fn purge_expired_duo_contexts(conn: &mut DbConn) {
for context in Self::find_expired(conn).await {
context.delete(conn).await.ok();
}
}
}
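A hypothetical lifecycle for these contexts (the TTL value is illustrative; a sketch, not compilable in isolation):

// At the start of a Duo login, persist the state/nonce pair with a TTL;
// saving an already-stored state is a no-op by design, never an update.
TwoFactorDuoContext::save(&state, &email, &nonce, 300, &mut conn).await?;
// A periodic task can then reap anything whose exp timestamp has passed.
TwoFactorDuoContext::purge_expired_duo_contexts(&mut conn).await;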

View File

@@ -1,18 +1,28 @@
use chrono::{NaiveDateTime, Utc};
use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONFIG};
use crate::{
api::EmptyResult,
auth::ClientIp,
db::{
models::{DeviceId, UserId},
DbConn,
},
error::MapResult,
CONFIG,
};
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
#[diesel(table_name = twofactor_incomplete)]
#[diesel(primary_key(user_uuid, device_uuid))]
pub struct TwoFactorIncomplete {
pub user_uuid: String,
pub user_uuid: UserId,
// This device UUID is simply what's claimed by the device. It doesn't
// necessarily correspond to any UUID in the devices table, since a device
// must complete 2FA login before being added into the devices table.
pub device_uuid: String,
pub device_uuid: DeviceId,
pub device_name: String,
pub device_type: i32,
pub login_time: NaiveDateTime,
pub ip_address: String,
}
@@ -20,9 +30,10 @@ db_object! {
impl TwoFactorIncomplete {
pub async fn mark_incomplete(
user_uuid: &str,
device_uuid: &str,
user_uuid: &UserId,
device_uuid: &DeviceId,
device_name: &str,
device_type: i32,
ip: &ClientIp,
conn: &mut DbConn,
) -> EmptyResult {
@@ -44,6 +55,7 @@ impl TwoFactorIncomplete {
twofactor_incomplete::user_uuid.eq(user_uuid),
twofactor_incomplete::device_uuid.eq(device_uuid),
twofactor_incomplete::device_name.eq(device_name),
twofactor_incomplete::device_type.eq(device_type),
twofactor_incomplete::login_time.eq(Utc::now().naive_utc()),
twofactor_incomplete::ip_address.eq(ip.ip.to_string()),
))
@@ -52,7 +64,7 @@ impl TwoFactorIncomplete {
}}
}
pub async fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn mark_complete(user_uuid: &UserId, device_uuid: &DeviceId, conn: &mut DbConn) -> EmptyResult {
if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
return Ok(());
}
@@ -60,7 +72,11 @@ impl TwoFactorIncomplete {
Self::delete_by_user_and_device(user_uuid, device_uuid, conn).await
}
pub async fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_user_and_device(
user_uuid: &UserId,
device_uuid: &DeviceId,
conn: &mut DbConn,
) -> Option<Self> {
db_run! { conn: {
twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -85,7 +101,11 @@ impl TwoFactorIncomplete {
Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn).await
}
pub async fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_by_user_and_device(
user_uuid: &UserId,
device_uuid: &DeviceId,
conn: &mut DbConn,
) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table
.filter(twofactor_incomplete::user_uuid.eq(user_uuid))
@@ -95,7 +115,7 @@ impl TwoFactorIncomplete {
}}
}
pub async fn delete_all_by_user(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
pub async fn delete_all_by_user(user_uuid: &UserId, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
.execute(conn)

View File

@@ -1,8 +1,19 @@
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use derive_more::{AsRef, Deref, Display, From};
use serde_json::Value;
use crate::crypto;
use crate::CONFIG;
use super::{
Cipher, Device, EmergencyAccess, Favorite, Folder, Membership, MembershipType, TwoFactor, TwoFactorIncomplete,
};
use crate::{
api::EmptyResult,
crypto,
db::DbConn,
error::MapResult,
util::{format_date, get_uuid, retry},
CONFIG,
};
use macros::UuidFromParam;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@@ -10,7 +21,7 @@ db_object! {
#[diesel(treat_none_as_null = true)]
#[diesel(primary_key(uuid))]
pub struct User {
pub uuid: String,
pub uuid: UserId,
pub enabled: bool,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
@@ -90,7 +101,7 @@ impl User {
let email = email.to_lowercase();
Self {
uuid: crate::util::get_uuid(),
uuid: UserId(get_uuid()),
enabled: true,
created_at: now,
updated_at: now,
@@ -107,7 +118,7 @@ impl User {
salt: crypto::get_random_bytes::<64>().to_vec(),
password_iterations: CONFIG.password_iterations(),
security_stamp: crate::util::get_uuid(),
security_stamp: get_uuid(),
stamp_exception: None,
password_hint: None,
@@ -144,14 +155,14 @@ impl User {
pub fn check_valid_recovery_code(&self, recovery_code: &str) -> bool {
if let Some(ref totp_recover) = self.totp_recover {
crate::crypto::ct_eq(recovery_code, totp_recover.to_lowercase())
crypto::ct_eq(recovery_code, totp_recover.to_lowercase())
} else {
false
}
}
pub fn check_valid_api_key(&self, key: &str) -> bool {
matches!(self.api_key, Some(ref api_key) if crate::crypto::ct_eq(api_key, key))
matches!(self.api_key, Some(ref api_key) if crypto::ct_eq(api_key, key))
}
/// Set the password hash generated
@@ -188,7 +199,7 @@ impl User {
}
pub fn reset_security_stamp(&mut self) {
self.security_stamp = crate::util::get_uuid();
self.security_stamp = get_uuid();
}
/// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
@@ -202,7 +213,7 @@ impl User {
let stamp_exception = UserStampException {
routes: route_exception,
security_stamp: self.security_stamp.clone(),
expire: (Utc::now().naive_utc() + Duration::minutes(2)).timestamp(),
expire: (Utc::now() + TimeDelta::try_minutes(2).unwrap()).timestamp(),
};
self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
}
@@ -213,20 +224,11 @@ impl User {
}
}
use super::{
Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, TwoFactorIncomplete, UserOrgType,
UserOrganization,
};
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl User {
pub async fn to_json(&self, conn: &mut DbConn) -> Value {
let mut orgs_json = Vec::new();
for c in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
for c in Membership::find_confirmed_by_user(&self.uuid, conn).await {
orgs_json.push(c.to_json(conn).await);
}
@@ -240,30 +242,33 @@ impl User {
};
json!({
"_Status": status as i32,
"Id": self.uuid,
"Name": self.name,
"Email": self.email,
"EmailVerified": !CONFIG.mail_enabled() || self.verified_at.is_some(),
"Premium": true,
"MasterPasswordHint": self.password_hint,
"Culture": "en-US",
"TwoFactorEnabled": twofactor_enabled,
"Key": self.akey,
"PrivateKey": self.private_key,
"SecurityStamp": self.security_stamp,
"Organizations": orgs_json,
"Providers": [],
"ProviderOrganizations": [],
"ForcePasswordReset": false,
"AvatarColor": self.avatar_color,
"Object": "profile",
"_status": status as i32,
"id": self.uuid,
"name": self.name,
"email": self.email,
"emailVerified": !CONFIG.mail_enabled() || self.verified_at.is_some(),
"premium": true,
"premiumFromOrganization": false,
"masterPasswordHint": self.password_hint,
"culture": "en-US",
"twoFactorEnabled": twofactor_enabled,
"key": self.akey,
"privateKey": self.private_key,
"securityStamp": self.security_stamp,
"organizations": orgs_json,
"providers": [],
"providerOrganizations": [],
"forcePasswordReset": false,
"avatarColor": self.avatar_color,
"usesKeyConnector": false,
"creationDate": format_date(&self.created_at),
"object": "profile",
})
}
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("User email can't be empty")
if !crate::util::is_valid_email(&self.email) {
err!(format!("User email {} is not a valid email address", self.email))
}
self.updated_at = Utc::now().naive_utc();
@@ -300,18 +305,18 @@ impl User {
}
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn).await {
if user_org.atype == UserOrgType::Owner
&& UserOrganization::count_confirmed_by_org_and_type(&user_org.org_uuid, UserOrgType::Owner, conn).await
<= 1
for member in Membership::find_confirmed_by_user(&self.uuid, conn).await {
if member.atype == MembershipType::Owner
&& Membership::count_confirmed_by_org_and_type(&member.org_uuid, MembershipType::Owner, conn).await <= 1
{
err!("Can't delete last owner")
}
}
Send::delete_all_by_user(&self.uuid, conn).await?;
super::Send::delete_all_by_user(&self.uuid, conn).await?;
EmergencyAccess::delete_all_by_user(&self.uuid, conn).await?;
UserOrganization::delete_all_by_user(&self.uuid, conn).await?;
EmergencyAccess::delete_all_by_grantee_email(&self.email, conn).await?;
Membership::delete_all_by_user(&self.uuid, conn).await?;
Cipher::delete_all_by_user(&self.uuid, conn).await?;
Favorite::delete_all_by_user(&self.uuid, conn).await?;
Folder::delete_all_by_user(&self.uuid, conn).await?;
@@ -327,7 +332,7 @@ impl User {
}}
}
pub async fn update_uuid_revision(uuid: &str, conn: &mut DbConn) {
pub async fn update_uuid_revision(uuid: &UserId, conn: &mut DbConn) {
if let Err(e) = Self::_update_revision(uuid, &Utc::now().naive_utc(), conn).await {
warn!("Failed to update revision for {}: {:#?}", uuid, e);
}
@@ -337,7 +342,7 @@ impl User {
let updated_at = Utc::now().naive_utc();
db_run! {conn: {
crate::util::retry(|| {
retry(|| {
diesel::update(users::table)
.set(users::updated_at.eq(updated_at))
.execute(conn)
@@ -352,9 +357,9 @@ impl User {
Self::_update_revision(&self.uuid, &self.updated_at, conn).await
}
async fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
async fn _update_revision(uuid: &UserId, date: &NaiveDateTime, conn: &mut DbConn) -> EmptyResult {
db_run! {conn: {
crate::util::retry(|| {
retry(|| {
diesel::update(users::table.filter(users::uuid.eq(uuid)))
.set(users::updated_at.eq(date))
.execute(conn)
@@ -374,7 +379,7 @@ impl User {
}}
}
pub async fn find_by_uuid(uuid: &str, conn: &mut DbConn) -> Option<Self> {
pub async fn find_by_uuid(uuid: &UserId, conn: &mut DbConn) -> Option<Self> {
db_run! {conn: {
users::table.filter(users::uuid.eq(uuid)).first::<UserDb>(conn).ok().from_db()
}}
@@ -403,8 +408,8 @@ impl Invitation {
}
pub async fn save(&self, conn: &mut DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("Invitation email can't be empty")
if !crate::util::is_valid_email(&self.email) {
err!(format!("Invitation email {} is not a valid email address", self.email))
}
db_run! {conn:
@@ -453,3 +458,23 @@ impl Invitation {
}
}
}
#[derive(
Clone,
Debug,
DieselNewType,
FromForm,
PartialEq,
Eq,
Hash,
Serialize,
Deserialize,
AsRef,
Deref,
Display,
From,
UuidFromParam,
)]
#[deref(forward)]
#[from(forward)]
pub struct UserId(String);
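The `#[deref(forward)]` and `#[from(forward)]` attributes come from `derive_more`; a standalone sketch of what the forwarding buys (UserId reproduced in miniature, assuming the derive_more crate with its `deref` and `from` features):

use derive_more::{Deref, From};

#[derive(Deref, From)]
#[deref(forward)]
#[from(forward)]
struct UserId(String);

fn main() {
    // from(forward): anything convertible into String converts into UserId.
    let id: UserId = "3fa85f64-5717-4562-b3fc-2c963f66afa6".into();
    // deref(forward): UserId derefs through String all the way down to str.
    let s: &str = &id;
    assert_eq!(s.len(), 36);
}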

View File

@@ -160,7 +160,7 @@ table! {
atype -> Integer,
enabled -> Bool,
data -> Text,
last_used -> Integer,
last_used -> BigInt,
}
}
@@ -169,11 +169,21 @@ table! {
user_uuid -> Text,
device_uuid -> Text,
device_name -> Text,
device_type -> Integer,
login_time -> Timestamp,
ip_address -> Text,
}
}
table! {
twofactor_duo_ctx (state) {
state -> Text,
user_email -> Text,
nonce -> Text,
exp -> BigInt,
}
}
table! {
users (uuid) {
uuid -> Text,
@@ -216,6 +226,7 @@ table! {
collection_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
manage -> Bool,
}
}
@@ -285,6 +296,7 @@ table! {
groups_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
manage -> Bool,
}
}

View File

@@ -160,7 +160,7 @@ table! {
atype -> Integer,
enabled -> Bool,
data -> Text,
last_used -> Integer,
last_used -> BigInt,
}
}
@@ -169,11 +169,21 @@ table! {
user_uuid -> Text,
device_uuid -> Text,
device_name -> Text,
device_type -> Integer,
login_time -> Timestamp,
ip_address -> Text,
}
}
table! {
twofactor_duo_ctx (state) {
state -> Text,
user_email -> Text,
nonce -> Text,
exp -> BigInt,
}
}
table! {
users (uuid) {
uuid -> Text,
@@ -216,6 +226,7 @@ table! {
collection_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
manage -> Bool,
}
}
@@ -285,6 +296,7 @@ table! {
groups_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
manage -> Bool,
}
}

Some files were not shown because too many files have changed in this diff