Compare commits


36 Commits

Author SHA1 Message Date
Mathijs van Veluw b04ed75f9f Update Rust, Crates, GHA and fix a DNS issue (#7108)
* Update Rust, Crates and GHA

- Updated Rust to v1.95.0
- Updated all the crates
- Update GitHub Actions

With the crate updates, hickory-resolver was updated, which needed some changes.
During testing I found a bug in the fallback resolving via Tokio: the resolver doesn't work if it receives only a `&str` host, it needs a `port` too.
This fixes the resolving when Hickory fails to load.

Also, Hickory switched the resolving to prefer IPv6. While this is nice, it could break or slow down resolving in IPv4-only environments.
Since we already have a flag to prefer IPv6, we check if it is set, and otherwise resolve IPv4 first and IPv6 afterwards.

Also, we returned just one `IpAddr` record and ignored the rest. This could mean a failed connection attempt if the first IP endpoint has issues.
Likewise, if the first record is IPv6 but the server doesn't support it, a possibly returned IPv4 address was never tried.

We now return the full list of resolved records, unless one of the records matches a filtered address, in which case the whole lookup is rejected, as was previously the case.
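The ordering-and-filtering behaviour described above can be sketched in Python (a pure-logic illustration, not Vaultwarden's Rust code; the blocklist and function names are hypothetical):

```python
# Sketch: return every resolved address, ordered by the configured
# IP-family preference, unless any record matches a filtered network,
# in which case the whole lookup is rejected.
import ipaddress
from typing import List

# Example filter list; the real filters are configurable.
BLOCKED_NETS = [ipaddress.ip_network("169.254.0.0/16")]

def order_and_filter(records: List[str], prefer_ipv6: bool = False) -> List[str]:
    addrs = [ipaddress.ip_address(r) for r in records]
    # Reject the entire result set if any single record is filtered.
    for a in addrs:
        if any(a in net for net in BLOCKED_NETS):
            return []
    # Stable sort: the preferred family first, original order otherwise.
    addrs.sort(key=lambda a: a.version != (6 if prefer_ipv6 else 4))
    return [str(a) for a in addrs]
```

Returning the whole ordered list (instead of one record) lets the caller fall back to the next address when the first one is unreachable.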

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust resolver builder path

Changed the way the resolver is constructed.
This way the default is always selected no matter which part of the Hickory builder fails.

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-18 15:03:41 +02:00
Mathijs van Veluw 0ed8ab68f7 Fix invalid refresh token response (#7105)
If the refresh token is invalid or expired we need to return a specific JSON body and HTTP status, else the clients will not log out.
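A hedged sketch of that behaviour (the field names follow the common OAuth2 `invalid_grant` convention; the exact body Vaultwarden returns may differ):

```python
# Sketch: an invalid or expired refresh token must yield HTTP 400 with
# a JSON error the clients recognize as "session ended, log out", not a
# generic failure status. Names here are illustrative.
def refresh_session(token, known_tokens):
    if token not in known_tokens:
        # Clients only clear their local session on this specific shape.
        return 400, {"error": "invalid_grant",
                     "error_description": "The refresh token is expired or invalid."}
    return 200, {"access_token": known_tokens[token]}
```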

Fixes #7060
Closes #7080

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-16 18:42:13 +02:00
Mathijs van Veluw dfebee57ec Fix recovery-code not working (#7102)
This commit fixes an issue where the recovery code wasn't working anymore.

Fixes #7096

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-15 20:49:58 +02:00
Timshel bfe420a018 Dummy org Master password policy auth fix (#7097)
Co-authored-by: Timshel <timshel@users.noreply.github.com>
2026-04-15 20:44:55 +02:00
Mathijs van Veluw e7e4b9a86d Fix 2FA for Android (#7093)
The `RecoveryCode` type should not be sent as a valid type that can be used.
Fixes #7092

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-13 21:47:20 +02:00
Mathijs van Veluw bb549986e6 Fix MFA Remember (#7085)
Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-12 21:04:32 +02:00
Mathijs van Veluw 39954af96a Crate and GHA updates (#7081)
Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-11 20:27:07 +02:00
idontneedonetho a6b43651ca Fix windows build issues (#7065)
The signal handling needs to be gated to Unix only so we can build on Windows.
2026-04-08 15:35:18 +02:00
qaz741wsd856 3f28b583db Fix logout push identifiers and send logout before clearing devices (#7047)
* Fix logout push identifiers and send logout before clearing devices

* Refactor logout function parameters

* Fix parameters in logout notification functions
2026-04-05 22:43:58 +02:00
Hex d4f67429d6 Do not display unavailable 2FA options (#7013)
* do not display unavailable 2FA options

* use existing function to check webauthn support

* clarity in 2fa skip code
2026-04-05 22:43:06 +02:00
Hex fc43737868 Handle SIGTERM and SIGQUIT shutdown signals. (#7008)
* handle more shutdown signals

* disable Rocket's built-in signal handlers
2026-04-05 22:41:14 +02:00
Aaron Brager 43df0fb7f4 Change SQLite backup to use VACUUM INTO query (#6989)
* Refactor SQLite backup to use VACUUM INTO query

Replaced manual file creation for SQLite backup with a VACUUM INTO query.

* Fix VACUUM INTO query error handling
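The approach above can be sketched with Python's built-in `sqlite3` module (the commit itself changes Vaultwarden's Rust code; the paths and table here are illustrative). `VACUUM INTO` writes a consistent, compacted snapshot of the live database into a new file in one statement:

```python
# Sketch: back up a SQLite database with VACUUM INTO (SQLite >= 3.27)
# instead of manually copying the database file.
import os
import sqlite3

def backup(db_path: str, backup_path: str) -> None:
    con = sqlite3.connect(db_path)
    try:
        # VACUUM INTO fails if the target already exists,
        # so remove any stale backup first.
        if os.path.exists(backup_path):
            os.remove(backup_path)
        con.execute("VACUUM INTO ?", (backup_path,))
    finally:
        con.close()
```

Unlike a raw file copy, this produces a valid snapshot even while the source database is in use.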
2026-04-05 22:40:00 +02:00
Stefan Melmuk d29cd29f55 prevent managers from creating collections (#6890)
managers without the access_all flag should not be able to create
collections. the manage all collections permission actually consists of
three separate custom permissions that have not been implemented yet for
more fine-grain access control.
2026-04-05 22:39:33 +02:00
Mathijs van Veluw 2811df2953 Fix Send icons (#7051)
Send uses icons to show whether or not it is protected by a password.
Bitwarden has added a feature to use email with an OTP in newer versions.
Vaultwarden does not yet support this, but this commit adds an enum with all three options.

The email option currently needs a feature-flag and newer web-vault/clients.

For now, this will at least fix the display of icons.

Fixes #6976

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-05 22:35:21 +02:00
Daniel 8f0e99b875 Disable deployments for release env (#7033)
According to the docs:
https://docs.github.com/en/actions/how-tos/deploy/configure-and-manage-deployments/control-deployments#using-environments-without-deployments

This is useful when you want to use environments for:

- Organizing secrets: group related secrets under an environment name without creating deployment records.
- Access control: restrict which branches can use certain secrets via environment branch policies, without deployment tracking.
- CI and testing jobs: reference an environment for its configuration without adding noise to the deployment history.
2026-04-01 23:04:34 +02:00
Mathijs van Veluw f07a91141a Fix empty string FolderId (#7048)
In newer versions of the Bitwarden clients, the `folder_id` will be an empty string instead of `null`.
This commit adds a special `deserialize_with` function so the code keeps working the same way.
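The idea can be shown in Python (the actual commit adds a serde `deserialize_with` helper in Rust; the JSON key name here is illustrative): treat an empty-string folder id the same as a missing or `null` one, so the rest of the code still sees "no folder" as `None`.

```python
# Sketch: normalize an empty-string folder id to None while parsing.
import json

def none_if_empty(value):
    return None if value == "" else value

def parse_cipher(raw: str) -> dict:
    data = json.loads(raw)
    data["folderId"] = none_if_empty(data.get("folderId"))
    return data
```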

Fixes #6962

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-04-01 23:04:10 +02:00
Mathijs van Veluw 787822854c Misc org fixes (#7032)
* Split vault org/personal purge endpoints

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust several other call-sites

Signed-off-by: BlackDex <black.dex@gmail.com>

* Several other misc fixes

Signed-off-by: BlackDex <black.dex@gmail.com>

* Add some more validation for groups, collections and memberships

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-29 23:15:48 +02:00
Mathijs van Veluw f62a7a66c8 Rotate refresh-tokens on sstamp reset (#7031)
When a security stamp gets reset/rotated we should also rotate all device refresh tokens to invalidate them.
Otherwise clients are still able to use old refresh tokens.
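An illustrative sketch of the invalidation idea (not Vaultwarden's actual code): if each refresh token embeds the user's current security stamp, rotating the stamp rejects every previously minted token in one step.

```python
# Sketch: refresh tokens carry the security stamp they were minted with;
# validation compares it against the user's current stamp.
def mint_refresh_token(user):
    return {"user": user["id"], "stamp": user["stamp"]}

def is_refresh_token_valid(token, user):
    # Tokens minted before a stamp rotation no longer match.
    return token["user"] == user["id"] and token["stamp"] == user["stamp"]
```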

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-29 22:43:36 +02:00
Daniel 3a1378f469 Switch to attest action (#7017)
From the `attest-build-provenance` changelog:
> As of version 4, actions/attest-build-provenance is simply a wrapper on top of actions/attest.

> Existing applications may continue to use the attest-build-provenance action, but new implementations should use actions/attest instead. Please see the actions/attest repository for usage information.
2026-03-29 22:22:27 +02:00
Mathijs van Veluw dde63e209e Misc Updates (#7027)
- Update Rust to v1.94.1
- Updated all crates
- Update GHA
- Update global domains and ensure a newline is always present

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-29 22:21:39 +02:00
Mathijs van Veluw 235cf88231 Fix 2FA Remember to actually be 30 days (#6929)
Currently we always regenerate the 2FA remember token and always send it back to the client.
This is not correct, and it causes the remember token to never expire.

While this might be convenient, it is not really safe.
This commit changes the 2FA Remember Tokens from a random string to a JWT.
This JWT has a lifetime of 30 days and is validated per device & user combination.

This does mean that once this commit is merged, and users are using this version, all their remember tokens will be invalidated.
From my point of view this isn't a bad thing, since those tokens should have expired already.

Only users who checked the remember checkbox within the last 30 days will have to log in again, but I think that is a minor inconvenience.
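The scheme above can be sketched with a stdlib HMAC-signed JWT (a rough illustration, not Vaultwarden's implementation; the secret and claim names are placeholders): the token embeds the user/device pair plus a 30-day expiry, so the server can validate it statelessly instead of storing an ever-refreshed random string.

```python
# Sketch: mint and validate a 30-day 2FA remember token as an HS256 JWT,
# bound to a specific user and device.
import base64, hashlib, hmac, json, time

SECRET = b"server-secret"  # placeholder signing key

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_remember_token(user_id, device_id, now=None):
    now = int(time.time()) if now is None else now
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64(json.dumps({"sub": user_id, "device": device_id,
                             "exp": now + 30 * 24 * 3600}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def validate(token, user_id, device_id, now=None):
    now = int(time.time()) if now is None else now
    header, claims_b64, sig = token.split(".")
    signing_input = f"{header}.{claims_b64}".encode()
    expected = b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    pad = "=" * (-len(claims_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(claims_b64 + pad))
    return claims["sub"] == user_id and claims["device"] == device_id and claims["exp"] > now
```

Because validity is checked from the signed claims, old tokens expire on their own and nothing needs regenerating on every login.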

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-23 23:12:07 +01:00
Daniel García c0a78dd55a Use protected CI environment (#7004) 2026-03-23 22:25:03 +01:00
Mathijs van Veluw 711bb53d3d Update crates and GHA (#6980)
Updated all crates where possible.

Updated all GitHub Actions to their latest version.
There was a supply-chain attack on the Trivy action, to which we were not exposed since we were using pinned SHA hashes.
The latest version, v0.35.0, is not vulnerable, and that version is used as of this commit.

Also removed `dtolnay/rust-toolchain` as suggested by zizmor and adjusted the way the correct toolchain is installed.
Since this GitHub Action did not use any version tagging, it was also cumbersome to update.

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-23 21:26:11 +01:00
Mathijs van Veluw 650defac75 Update Feature Flags (#6981)
* Update Feature Flags

Added new feature flags which can be supported without issues.
Removed all deprecated feature flags and only match supported flags.
Invalid flags no longer cause an error during load, but still do on config save via the admin interface.
During load a `WARNING` is printed instead; this prevents breaking setups where removed flags are still configured.

There are currently no feature flags that need to be set by default, so the defaults have been removed.

Signed-off-by: BlackDex <black.dex@gmail.com>

* Adjust code a bit and add Diagnostics check

Signed-off-by: BlackDex <black.dex@gmail.com>

* Update .env template

Signed-off-by: BlackDex <black.dex@gmail.com>

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-23 21:21:21 +01:00
Mathijs van Veluw 2b3736802d Fix email header base64 padding (#6961)
Newer versions of the Bitwarden client use Base64 with padding.
Since this is not a streamed string but one of defined length, we can just strip the `=` chars.
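The idea can be shown with Python's `base64` module (an illustration of the padding handling, not the actual Rust code): because the value has a known length, padding carries no information, so stripping it and re-padding canonically lets both padded and unpadded input decode to the same bytes.

```python
# Sketch: decode a Base64 header value whether or not it carries '=' padding.
import base64

def decode_lenient(value: str) -> bytes:
    v = value.rstrip("=")                              # drop any padding
    return base64.b64decode(v + "=" * (-len(v) % 4))   # re-pad canonically
```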

Fixes #6960

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-17 17:01:32 +01:00
Mathijs van Veluw 9c7df6412c Fix apikey login (#6922)
The API key login needs some extra JSON return keys, same as the password login.
Fixes #6912

Signed-off-by: BlackDex <black.dex@gmail.com>
2026-03-09 21:13:27 +01:00
Daniel 065c1f2cd5 Fix checkout action version (#6921)
- wasn't getting picked up when updating the action, due to being formatted as `#v6.0.0` instead of `# v6.0.0`
2026-03-09 19:35:14 +01:00
Daniel García 1a1d7f578a Support new desktop origin on CORS (#6920) 2026-03-09 19:14:28 +01:00
Mathijs van Veluw 2b16a05e54 Misc updates and fixes (#6910)
* Fix collection details response

Signed-off-by: BlackDex <black.dex@gmail.com>

* Misc updates and fixes

- Some clippy fixes
- Crate updates
- Updated Rust to v1.94.0
- Updated all GitHub Actions
- Updated web-vault v2026.2.0

Signed-off-by: BlackDex <black.dex@gmail.com>

* Remove commented out code

---------

Signed-off-by: BlackDex <black.dex@gmail.com>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2026-03-09 18:38:22 +01:00
phoeagon c6e9948984 Add cxp-import-mobile and cxp-export-mobile: feature flags on mobile (#6853)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2026-03-09 18:21:23 +01:00
Timshel ecdb18fcde Add 30s cache to SSO exchange_refresh_token (#6866)
Co-authored-by: Timshel <timshel@users.noreply.github.com>
2026-03-09 18:10:06 +01:00
pasarenicu df25d316d6 Add Webauthn related origins flag to known flags. (#6900)
support pm-30529-webauthn-related-origins flag
2026-03-09 18:06:41 +01:00
Chris Kruger 747286dccd fix: add ForcePasswordReset to api key login (#6904) 2026-03-09 18:04:17 +01:00
DerPlayer2001 e60105411b Feat(config): add feature flag for Safari account switching (#6891)
This enables the use of the Feature from this PR https://github.com/bitwarden/clients/pull/18339
2026-03-09 17:57:10 +01:00
Ken Watanabe 937857a0bc Merge commit from fork
* Fix WebAuthn backup flag update before signature verification

* fix format rust file

* Add test for migration update

* Remove webauthn test
2026-03-09 17:50:21 +01:00
Stefan Melmuk ba55191676 apply policies only to confirmed members (#6892) 2026-03-04 06:58:39 +01:00
54 changed files with 1566 additions and 1124 deletions
+16 -10
@@ -372,16 +372,22 @@
 ## Note that clients cache the /api/config endpoint for about 1 hour and it could take some time before they are enabled or disabled!
 ##
 ## The following flags are available:
-## - "inline-menu-positioning-improvements": Enable the use of inline menu password generator and identity suggestions in the browser extension.
-## - "inline-menu-totp": Enable the use of inline menu TOTP codes in the browser extension.
-## - "ssh-agent": Enable SSH agent support on Desktop. (Needs desktop >=2024.12.0)
-## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
-## - "pm-25373-windows-biometrics-v2": Enable the new implementation of biometrics on Windows. (Needs desktop >= 2025.11.0)
-## - "export-attachments": Enable support for exporting attachments (Clients >=2025.4.0)
-## - "anon-addy-self-host-alias": Enable configuring self-hosted Anon Addy alias generator. (Needs Android >=2025.3.0, iOS >=2025.4.0)
-## - "simple-login-self-host-alias": Enable configuring self-hosted Simple Login alias generator. (Needs Android >=2025.3.0, iOS >=2025.4.0)
-## - "mutual-tls": Enable the use of mutual TLS on Android (Client >= 2025.2.0)
-# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
+## - "pm-5594-safari-account-switching": Enable account switching in Safari. (Safari >= 2026.2.0)
+## - "ssh-agent": Enable SSH agent support on Desktop. (Desktop >= 2024.12.0)
+## - "ssh-agent-v2": Enable newer SSH agent support. (Desktop >= 2026.2.1)
+## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Clients >= 2024.12.0)
+## - "pm-25373-windows-biometrics-v2": Enable the new implementation of biometrics on Windows. (Desktop >= 2025.11.0)
+## - "anon-addy-self-host-alias": Enable configuring self-hosted Anon Addy alias generator. (Android >= 2025.3.0, iOS >= 2025.4.0)
+## - "simple-login-self-host-alias": Enable configuring self-hosted Simple Login alias generator. (Android >= 2025.3.0, iOS >= 2025.4.0)
+## - "mutual-tls": Enable the use of mutual TLS on Android (Clients >= 2025.2.0)
+## - "cxp-import-mobile": Enable the import via CXP on iOS (Clients >= 2025.9.2)
+## - "cxp-export-mobile": Enable the export via CXP on iOS (Clients >= 2025.9.2)
+## - "pm-30529-webauthn-related-origins":
+## - "desktop-ui-migration-milestone-1": Special feature flag for desktop UI (Desktop >= 2026.2.0)
+## - "desktop-ui-migration-milestone-2": Special feature flag for desktop UI (Desktop >= 2026.2.0)
+## - "desktop-ui-migration-milestone-3": Special feature flag for desktop UI (Desktop >= 2026.2.0)
+## - "desktop-ui-migration-milestone-4": Special feature flag for desktop UI (Desktop >= 2026.2.0)
+# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=
 ## Require new device emails. When a user logs in an email is required to be sent.
 ## If sending the email fails the login attempt will fail!!
-1
@@ -1,3 +1,2 @@
 # Ignore vendored scripts in GitHub stats
 src/static/scripts/* linguist-vendored
+14 -23
@@ -62,7 +62,7 @@ jobs:
 # Checkout the repo
 - name: "Checkout"
-  uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+  uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
   with:
     persist-credentials: false
     fetch-depth: 0
@@ -85,32 +85,23 @@ jobs:
 # End Determine rust-toolchain version
-# Only install the clippy and rustfmt components on the default rust-toolchain
-- name: "Install rust-toolchain version"
-  uses: dtolnay/rust-toolchain@efa25f7f19611383d5b0ccf2d1c8914531636bf9 # master @ Feb 13, 2026, 3:46 AM GMT+1
-  if: ${{ matrix.channel == 'rust-toolchain' }}
-  with:
-    toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
-    components: clippy, rustfmt
-# End Uses the rust-toolchain file to determine version
-# Install the any other channel to be used for which we do not execute clippy and rustfmt
-- name: "Install MSRV version"
-  uses: dtolnay/rust-toolchain@efa25f7f19611383d5b0ccf2d1c8914531636bf9 # master @ Feb 13, 2026, 3:46 AM GMT+1
-  if: ${{ matrix.channel != 'rust-toolchain' }}
-  with:
-    toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
-# End Install the MSRV channel to be used
-# Set the current matrix toolchain version as default
-- name: "Set toolchain ${{steps.toolchain.outputs.RUST_TOOLCHAIN}} as default"
+- name: "Install toolchain ${{steps.toolchain.outputs.RUST_TOOLCHAIN}} as default"
   env:
+    CHANNEL: ${{ matrix.channel }}
     RUST_TOOLCHAIN: ${{steps.toolchain.outputs.RUST_TOOLCHAIN}}
   run: |
     # Remove the rust-toolchain.toml
     rm rust-toolchain.toml
-    # Set the default
+    # Install the correct toolchain version
+    rustup toolchain install "${RUST_TOOLCHAIN}" --profile minimal --no-self-update
+    # If this matrix is the `rust-toolchain` flow, also install rustfmt and clippy
+    if [[ "${CHANNEL}" == 'rust-toolchain' ]]; then
+      rustup component add --toolchain "${RUST_TOOLCHAIN}" rustfmt clippy
+    fi
+    # Set as the default toolchain
     rustup default "${RUST_TOOLCHAIN}"
 # Show environment
@@ -122,7 +113,7 @@ jobs:
 # Enable Rust Caching
 - name: Rust Caching
-  uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
+  uses: Swatinem/rust-cache@c19371144df3bb44fab255c43d04cbc2ab54d1c4 # v2.9.1
   with:
     # Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
     # Like changing the build host from Ubuntu 20.04 to 22.04 for example.
+1 -1
@@ -20,7 +20,7 @@ jobs:
 steps:
 # Checkout the repo
 - name: "Checkout"
-  uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+  uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
   with:
     persist-credentials: false
 # End Checkout the repo
+2 -2
@@ -20,7 +20,7 @@ jobs:
 steps:
 # Start Docker Buildx
 - name: Setup Docker Buildx
-  uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
+  uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
   # https://github.com/moby/buildkit/issues/3969
   # Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
   with:
@@ -40,7 +40,7 @@ jobs:
 # End Download hadolint
 # Checkout the repo
 - name: Checkout
-  uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+  uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
   with:
     persist-credentials: false
 # End Checkout the repo
+52 -45
@@ -20,23 +20,27 @@ defaults:
 run:
   shell: bash
-env:
-  # The *_REPO variables need to be configured as repository variables
-  # Append `/settings/variables/actions` to your repo url
-  # DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
-  # Check for Docker hub credentials in secrets
-  HAVE_DOCKERHUB_LOGIN: ${{ vars.DOCKERHUB_REPO != '' && secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
-  # GHCR_REPO needs to be 'ghcr.io/<user>/<repo>'
-  # Check for Github credentials in secrets
-  HAVE_GHCR_LOGIN: ${{ vars.GHCR_REPO != '' && github.repository_owner != '' && secrets.GITHUB_TOKEN != '' }}
-  # QUAY_REPO needs to be 'quay.io/<user>/<repo>'
-  # Check for Quay.io credentials in secrets
-  HAVE_QUAY_LOGIN: ${{ vars.QUAY_REPO != '' && secrets.QUAY_USERNAME != '' && secrets.QUAY_TOKEN != '' }}
+# A "release" environment must be created in the repository settings
+# (Settings > Environments > New environment) with the following
+# variables and secrets configured as needed.
+#
+# Variables (only set the ones for registries you want to push to):
+# DOCKERHUB_REPO: 'index.docker.io/<user>/<repo>'
+# QUAY_REPO: 'quay.io/<user>/<repo>'
+# GHCR_REPO: 'ghcr.io/<user>/<repo>'
+#
+# Secrets (only required when the corresponding *_REPO variable is set):
+# DOCKERHUB_REPO => DOCKERHUB_USERNAME, DOCKERHUB_TOKEN
+# QUAY_REPO => QUAY_USERNAME, QUAY_TOKEN
+# GITHUB_TOKEN is provided automatically
 jobs:
   docker-build:
     name: Build Vaultwarden containers
     if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
+    environment:
+      name: release
+      deployment: false
     permissions:
       packages: write # Needed to upload packages and artifacts
       contents: read
@@ -54,13 +58,13 @@ jobs:
 steps:
 - name: Initialize QEMU binfmt support
-  uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0
+  uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
   with:
     platforms: "arm64,arm"
 # Start Docker Buildx
 - name: Setup Docker Buildx
-  uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
+  uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
   # https://github.com/moby/buildkit/issues/3969
   # Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
   with:
@@ -73,7 +77,7 @@ jobs:
 # Checkout the repo
 - name: Checkout
-  uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+  uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
   # We need fetch-depth of 0 so we also get all the tag metadata
   with:
     persist-credentials: false
@@ -102,14 +106,14 @@ jobs:
 # Login to Docker Hub
 - name: Login to Docker Hub
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
-  if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
+  if: ${{ vars.DOCKERHUB_REPO != '' }}
 - name: Add registry for DockerHub
-  if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
+  if: ${{ vars.DOCKERHUB_REPO != '' }}
   env:
     DOCKERHUB_REPO: ${{ vars.DOCKERHUB_REPO }}
   run: |
@@ -117,15 +121,15 @@ jobs:
 # Login to GitHub Container Registry
 - name: Login to GitHub Container Registry
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     registry: ghcr.io
     username: ${{ github.repository_owner }}
     password: ${{ secrets.GITHUB_TOKEN }}
-  if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
+  if: ${{ vars.GHCR_REPO != '' }}
 - name: Add registry for ghcr.io
-  if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
+  if: ${{ vars.GHCR_REPO != '' }}
   env:
     GHCR_REPO: ${{ vars.GHCR_REPO }}
   run: |
@@ -133,15 +137,15 @@ jobs:
 # Login to Quay.io
 - name: Login to Quay.io
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     registry: quay.io
     username: ${{ secrets.QUAY_USERNAME }}
     password: ${{ secrets.QUAY_TOKEN }}
-  if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
+  if: ${{ vars.QUAY_REPO != '' }}
 - name: Add registry for Quay.io
-  if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
+  if: ${{ vars.QUAY_REPO != '' }}
   env:
     QUAY_REPO: ${{ vars.QUAY_REPO }}
   run: |
@@ -155,7 +159,7 @@ jobs:
 run: |
   #
   # Check if there is a GitHub Container Registry Login and use it for caching
-  if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
+  if [[ -n "${GHCR_REPO}" ]]; then
     echo "BAKE_CACHE_FROM=type=registry,ref=${GHCR_REPO}-buildcache:${BASE_IMAGE}-${NORMALIZED_ARCH}" | tee -a "${GITHUB_ENV}"
     echo "BAKE_CACHE_TO=type=registry,ref=${GHCR_REPO}-buildcache:${BASE_IMAGE}-${NORMALIZED_ARCH},compression=zstd,mode=max" | tee -a "${GITHUB_ENV}"
   else
@@ -181,7 +185,7 @@ jobs:
 - name: Bake ${{ matrix.base_image }} containers
   id: bake_vw
-  uses: docker/bake-action@5be5f02ff8819ecd3092ea6b2e6261c31774f2b4 # v6.10.0
+  uses: docker/bake-action@a66e1c87e2eca0503c343edf1d208c716d54b8a8 # v7.1.0
   env:
     BASE_TAGS: "${{ steps.determine-version.outputs.BASE_TAGS }}"
     SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -218,7 +222,7 @@ jobs:
   touch "${RUNNER_TEMP}/digests/${digest#sha256:}"
 - name: Upload digest
-  uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+  uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
   with:
     name: digests-${{ env.NORMALIZED_ARCH }}-${{ matrix.base_image }}
     path: ${{ runner.temp }}/digests/*
@@ -233,12 +237,12 @@ jobs:
 # Upload artifacts to Github Actions and Attest the binaries
 - name: Attest binaries
-  uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
+  uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
   with:
     subject-path: vaultwarden-${{ env.NORMALIZED_ARCH }}
 - name: Upload binaries as artifacts
-  uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+  uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
   with:
     name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-${{ env.NORMALIZED_ARCH }}-${{ matrix.base_image }}
     path: vaultwarden-${{ env.NORMALIZED_ARCH }}
@@ -247,6 +251,9 @@ jobs:
 name: Merge manifests
 runs-on: ubuntu-latest
 needs: docker-build
+environment:
+  name: release
+  deployment: false
 permissions:
   packages: write # Needed to upload packages and artifacts
   attestations: write # Needed to generate an artifact attestation for a build
@@ -257,7 +264,7 @@ jobs:
 steps:
 - name: Download digests
-  uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+  uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
   with:
     path: ${{ runner.temp }}/digests
     pattern: digests-*-${{ matrix.base_image }}
@@ -265,14 +272,14 @@ jobs:
 # Login to Docker Hub
 - name: Login to Docker Hub
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     username: ${{ secrets.DOCKERHUB_USERNAME }}
     password: ${{ secrets.DOCKERHUB_TOKEN }}
-  if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
+  if: ${{ vars.DOCKERHUB_REPO != '' }}
 - name: Add registry for DockerHub
-  if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' }}
+  if: ${{ vars.DOCKERHUB_REPO != '' }}
   env:
     DOCKERHUB_REPO: ${{ vars.DOCKERHUB_REPO }}
   run: |
@@ -280,15 +287,15 @@ jobs:
 # Login to GitHub Container Registry
 - name: Login to GitHub Container Registry
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     registry: ghcr.io
     username: ${{ github.repository_owner }}
     password: ${{ secrets.GITHUB_TOKEN }}
-  if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
+  if: ${{ vars.GHCR_REPO != '' }}
 - name: Add registry for ghcr.io
-  if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
+  if: ${{ vars.GHCR_REPO != '' }}
   env:
     GHCR_REPO: ${{ vars.GHCR_REPO }}
   run: |
@@ -296,15 +303,15 @@ jobs:
 # Login to Quay.io
 - name: Login to Quay.io
-  uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
+  uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
   with:
     registry: quay.io
     username: ${{ secrets.QUAY_USERNAME }}
     password: ${{ secrets.QUAY_TOKEN }}
-  if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
+  if: ${{ vars.QUAY_REPO != '' }}
 - name: Add registry for Quay.io
-  if: ${{ env.HAVE_QUAY_LOGIN == 'true' }}
+  if: ${{ vars.QUAY_REPO != '' }}
   env:
     QUAY_REPO: ${{ vars.QUAY_REPO }}
   run: |
@@ -357,24 +364,24 @@ jobs:
 # Attest container images
 - name: Attest - docker.io - ${{ matrix.base_image }}
-  if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && env.DIGEST_SHA != ''}}
-  uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
+  if: ${{ vars.DOCKERHUB_REPO != '' && env.DIGEST_SHA != ''}}
+  uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
   with:
     subject-name: ${{ vars.DOCKERHUB_REPO }}
     subject-digest: ${{ env.DIGEST_SHA }}
     push-to-registry: true
 - name: Attest - ghcr.io - ${{ matrix.base_image }}
-  if: ${{ env.HAVE_GHCR_LOGIN == 'true' && env.DIGEST_SHA != ''}}
-  uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
+  if: ${{ vars.GHCR_REPO != '' && env.DIGEST_SHA != ''}}
+  uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
   with:
     subject-name: ${{ vars.GHCR_REPO }}
     subject-digest: ${{ env.DIGEST_SHA }}
     push-to-registry: true
 - name: Attest - quay.io - ${{ matrix.base_image }}
-  if: ${{ env.HAVE_QUAY_LOGIN == 'true' && env.DIGEST_SHA != ''}}
-  uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
+  if: ${{ vars.QUAY_REPO != '' && env.DIGEST_SHA != ''}}
+  uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
   with:
     subject-name: ${{ vars.QUAY_REPO }}
     subject-digest: ${{ env.DIGEST_SHA }}
+3 -3
@@ -33,12 +33,12 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           persist-credentials: false
       - name: Run Trivy vulnerability scanner
-        uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # 0.34.0
+        uses: aquasecurity/trivy-action@57a97c7e7821a5776cebc9bb87c984fa69cba8f1 # v0.35.0
         env:
           TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
           TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
@@ -50,6 +50,6 @@ jobs:
           severity: CRITICAL,HIGH
       - name: Upload Trivy scan results to GitHub Security tab
-        uses: github/codeql-action/upload-sarif@9e907b5e64f6b83e7804b09294d44122997950d6 # v4.32.3
+        uses: github/codeql-action/upload-sarif@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
         with:
           sarif_file: 'trivy-results.sarif'
+2 -2
@@ -16,11 +16,11 @@ jobs:
     steps:
       # Checkout the repo
       - name: Checkout
-        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           persist-credentials: false
       # End Checkout the repo
       # When this version is updated, do not forget to update this in `.pre-commit-config.yaml` too
       - name: Spell Check Repo
-        uses: crate-ci/typos@57b11c6b7e54c402ccd9cda953f1072ec4f78e33 # v1.43.5
+        uses: crate-ci/typos@cf5f1c29a8ac336af8568821ec41919923b05a83 # v1.45.1
+2 -2
@@ -19,12 +19,12 @@ jobs:
       security-events: write # To write the security report
     steps:
       - name: Checkout repository
-        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 #v6.0.0
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           persist-credentials: false
       - name: Run zizmor
-        uses: zizmorcore/zizmor-action@0dce2577a4760a2749d8cfb7a84b7d5585ebcb7d # v0.5.0
+        uses: zizmorcore/zizmor-action@b1d7e1fb5de872772f31590499237e7cce841e8e # v0.5.3
         with:
           # intentionally not scanning the entire repository,
           # since it contains integration tests.
+15 -13
@@ -1,13 +1,13 @@
 ---
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: 3e8a8703264a2f4a69428a0aa4dcb512790b2c8c # v6.0.0
     hooks:
       - id: check-yaml
       - id: check-json
       - id: check-toml
       - id: mixed-line-ending
-        args: ["--fix=no"]
+        args: [ "--fix=no" ]
       - id: end-of-file-fixer
         exclude: "(.*js$|.*css$)"
       - id: check-case-conflict
@@ -15,7 +15,14 @@ repos:
       - id: detect-private-key
       - id: check-symlinks
       - id: forbid-submodules
-  - repo: local
+  # When this version is updated, do not forget to update this in `.github/workflows/typos.yaml` too
+  - repo: https://github.com/crate-ci/typos
+    rev: cf5f1c29a8ac336af8568821ec41919923b05a83 # v1.45.1
+    hooks:
+      - id: typos
+  - repo: local
     hooks:
       - id: fmt
         name: fmt
@@ -24,14 +31,14 @@ repos:
         language: system
         always_run: true
         pass_filenames: false
-        args: ["--", "--check"]
+        args: [ "--", "--check" ]
       - id: cargo-test
         name: cargo test
         description: Test the package for errors.
         entry: cargo test
         language: system
-        args: ["--features", "sqlite,mysql,postgresql", "--"]
-        types_or: [rust, file]
+        args: [ "--features", "sqlite,mysql,postgresql", "--" ]
+        types_or: [ rust, file ]
         files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
         pass_filenames: false
       - id: cargo-clippy
@@ -39,8 +46,8 @@ repos:
         description: Lint Rust sources
         entry: cargo clippy
         language: system
-        args: ["--features", "sqlite,mysql,postgresql", "--", "-D", "warnings"]
-        types_or: [rust, file]
+        args: [ "--features", "sqlite,mysql,postgresql", "--", "-D", "warnings" ]
+        types_or: [ rust, file ]
         files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
         pass_filenames: false
       - id: check-docker-templates
@@ -51,8 +58,3 @@ repos:
         args:
           - "-c"
           - "cd docker && make"
-  # When this version is updated, do not forget to update this in `.github/workflows/typos.yaml` too
-  - repo: https://github.com/crate-ci/typos
-    rev: 57b11c6b7e54c402ccd9cda953f1072ec4f78e33 # v1.43.5
-    hooks:
-      - id: typos
Generated
+507 -531
File diff suppressed because it is too large
+18 -18
@@ -1,6 +1,6 @@
 [workspace.package]
 edition = "2021"
-rust-version = "1.91.0"
+rust-version = "1.93.0"
 license = "AGPL-3.0-only"
 repository = "https://github.com/dani-garcia/vaultwarden"
 publish = false
@@ -79,7 +79,7 @@ dashmap = "6.1.0"
 # Async futures
 futures = "0.3.32"
-tokio = { version = "1.49.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
+tokio = { version = "1.52.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
 tokio-util = { version = "0.7.18", features = ["compat"]}
 # A generic serialization/deserialization framework
@@ -88,25 +88,25 @@ serde_json = "1.0.149"
 # A safe, extensible ORM and Query builder
 # Currently pinned diesel to v2.3.3 as newer version break MySQL/MariaDB compatibility
-diesel = { version = "2.3.6", features = ["chrono", "r2d2", "numeric"] }
+diesel = { version = "2.3.7", features = ["chrono", "r2d2", "numeric"] }
 diesel_migrations = "2.3.1"
 derive_more = { version = "2.1.1", features = ["from", "into", "as_ref", "deref", "display"] }
 diesel-derive-newtype = "2.1.2"
 # Bundled/Static SQLite
-libsqlite3-sys = { version = "0.35.0", features = ["bundled"], optional = true }
+libsqlite3-sys = { version = "0.36.0", features = ["bundled"], optional = true }
 # Crypto-related libraries
-rand = "0.10.0"
+rand = "0.10.1"
 ring = "0.17.14"
 subtle = "2.6.1"
 # UUID generation
-uuid = { version = "1.21.0", features = ["v4"] }
+uuid = { version = "1.23.1", features = ["v4"] }
 # Date and time libraries
-chrono = { version = "0.4.43", features = ["clock", "serde"], default-features = false }
+chrono = { version = "0.4.44", features = ["clock", "serde"], default-features = false }
 chrono-tz = "0.10.4"
 time = "0.3.47"
@@ -136,7 +136,7 @@ webauthn-rs-core = "0.5.4"
 url = "2.5.8"
 # Email libraries
-lettre = { version = "0.11.19", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "hostname", "tracing", "tokio1-rustls", "ring", "rustls-native-certs"], default-features = false }
+lettre = { version = "0.11.21", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "hostname", "tracing", "tokio1-rustls", "ring", "rustls-native-certs"], default-features = false }
 percent-encoding = "2.3.2" # URL encoding library used for URL's in the emails
 email_address = "0.2.9"
@@ -145,7 +145,7 @@ handlebars = { version = "6.4.0", features = ["dir_source"] }
 # HTTP client (Used for favicons, version check, DUO and HIBP API)
 reqwest = { version = "0.12.28", features = ["rustls-tls", "rustls-tls-native-roots", "stream", "json", "deflate", "gzip", "brotli", "zstd", "socks", "cookies", "charset", "http2", "system-proxy"], default-features = false}
-hickory-resolver = "0.25.2"
+hickory-resolver = "0.26.0"
 # Favicon extraction libraries
 html5gum = "0.8.3"
@@ -155,14 +155,14 @@ bytes = "1.11.1"
 svg-hush = "0.9.6"
 # Cache function results (Used for version check and favicon fetching)
-cached = { version = "0.56.0", features = ["async"] }
+cached = { version = "0.59.0", features = ["async"] }
 # Used for custom short lived cookie jar during favicon extraction
 cookie = "0.18.1"
 cookie_store = "0.22.1"
 # Used by U2F, JWT and PostgreSQL
-openssl = "0.10.75"
+openssl = "0.10.77"
 # CLI argument parsing
 pico-args = "0.5.0"
@@ -173,16 +173,16 @@ governor = "0.10.4"
 # OIDC for SSO
 openidconnect = { version = "4.0.1", features = ["reqwest", "rustls-tls"] }
-mini-moka = "0.10.3"
+moka = { version = "0.12.15", features = ["future"] }
 # Check client versions for specific features.
-semver = "1.0.27"
+semver = "1.0.28"
 # Allow overriding the default memory allocator
 # Mainly used for the musl builds, since the default musl malloc is very slow
 mimalloc = { version = "0.1.48", features = ["secure"], default-features = false, optional = true }
-which = "8.0.0"
+which = "8.0.2"
 # Argon2 library with support for the PHC format
 argon2 = "0.5.3"
@@ -197,10 +197,10 @@ grass_compiler = { version = "0.13.4", default-features = false }
 opendal = { version = "0.55.0", features = ["services-fs"], default-features = false }
 # For retrieving AWS credentials, including temporary SSO credentials
-anyhow = { version = "1.0.101", optional = true }
-aws-config = { version = "1.8.14", features = ["behavior-version-latest", "rt-tokio", "credentials-process", "sso"], default-features = false, optional = true }
-aws-credential-types = { version = "1.2.13", optional = true }
-aws-smithy-runtime-api = { version = "1.11.5", optional = true }
+anyhow = { version = "1.0.102", optional = true }
+aws-config = { version = "1.8.15", features = ["behavior-version-latest", "rt-tokio", "credentials-process", "sso"], default-features = false, optional = true }
+aws-credential-types = { version = "1.2.14", optional = true }
+aws-smithy-runtime-api = { version = "1.12.0", optional = true }
 http = { version = "1.4.0", optional = true }
 reqsign = { version = "0.16.5", optional = true }
+3 -3
@@ -1,11 +1,11 @@
 ---
-vault_version: "v2026.1.1"
-vault_image_digest: "sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7"
+vault_version: "v2026.2.0"
+vault_image_digest: "sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447"
 # Cross Compile Docker Helper Scripts v1.9.0
 # We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
 # https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
 xx_image_digest: "sha256:c64defb9ed5a91eacb37f96ccc3d4cd72521c4bd18d5442905b95e2226b0e707"
-rust_version: 1.93.1 # Rust version to be used
+rust_version: 1.95.0 # Rust version to be used
 debian_version: trixie # Debian release name to be used
 alpine_version: "3.23" # Alpine version to be used
 # For which platforms/architectures will we try to build images
+10 -10
@@ -19,23 +19,23 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull docker.io/vaultwarden/web-vault:v2026.1.1
-#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.1.1
-#     [docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7]
+#     $ docker pull docker.io/vaultwarden/web-vault:v2026.2.0
+#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.2.0
+#     [docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7
-#     [docker.io/vaultwarden/web-vault:v2026.1.1]
+#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447
+#     [docker.io/vaultwarden/web-vault:v2026.2.0]
 #
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7 AS vault
+FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447 AS vault
 ########################## ALPINE BUILD IMAGES ##########################
 ## NOTE: The Alpine Base Images do not support other platforms then linux/amd64 and linux/arm64
 ## And for Alpine we define all build images here, they will only be loaded when actually used
-FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.93.1 AS build_amd64
-FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.93.1 AS build_arm64
-FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.93.1 AS build_armv7
-FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.93.1 AS build_armv6
+FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.95.0 AS build_amd64
+FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.95.0 AS build_arm64
+FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.95.0 AS build_armv7
+FROM --platform=$BUILDPLATFORM ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.95.0 AS build_armv6
 ########################## BUILD IMAGE ##########################
 # hadolint ignore=DL3006
+7 -7
@@ -19,15 +19,15 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull docker.io/vaultwarden/web-vault:v2026.1.1
-#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.1.1
-#     [docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7]
+#     $ docker pull docker.io/vaultwarden/web-vault:v2026.2.0
+#     $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2026.2.0
+#     [docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7
-#     [docker.io/vaultwarden/web-vault:v2026.1.1]
+#     $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447
+#     [docker.io/vaultwarden/web-vault:v2026.2.0]
 #
-FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:062fcf0d5dc37247dae61b0ee1ba5d20f9296e290d7ad1f6114ea5888f5738a7 AS vault
+FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:37c8661fa59dcdfbd3baa8366b6e950ef292b15adfeff1f57812b075c1fd3447 AS vault
 ########################## Cross Compile Docker Helper Scripts ##########################
 ## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:c64defb9ed5a91eacb37f
 ########################## BUILD IMAGE ##########################
 # hadolint ignore=DL3006
-FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.93.1-slim-trixie AS build
+FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.95.0-slim-trixie AS build
 COPY --from=xx / /
 ARG TARGETARCH
 ARG TARGETVARIANT
+2 -2
@@ -13,8 +13,8 @@ path = "src/lib.rs"
 proc-macro = true
 [dependencies]
-quote = "1.0.44"
-syn = "2.0.114"
+quote = "1.0.45"
+syn = "2.0.117"
 [lints]
 workspace = true
+1 -1
@@ -1,4 +1,4 @@
 [toolchain]
-channel = "1.93.1"
+channel = "1.95.0"
 components = [ "rustfmt", "clippy" ]
 profile = "minimal"
+24 -7
@@ -30,9 +30,10 @@ use crate::{
     error::{Error, MapResult},
     http_client::make_http_request,
     mail,
+    sso::FAKE_SSO_IDENTIFIER,
     util::{
         container_base_image, format_naive_datetime_local, get_active_web_release, get_display_size,
-        is_running_in_container, NumberOrString,
+        is_running_in_container, parse_experimental_client_feature_flags, FeatureFlagFilter, NumberOrString,
     },
     CONFIG, VERSION,
 };
@@ -315,7 +316,11 @@ async fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -
 async fn _generate_invite(user: &User, conn: &DbConn) -> EmptyResult {
     if CONFIG.mail_enabled() {
-        let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
+        let org_id: OrganizationId = if CONFIG.sso_enabled() {
+            FAKE_SSO_IDENTIFIER.into()
+        } else {
+            FAKE_ADMIN_UUID.into()
+        };
         let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
         mail::send_invite(user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
     } else {
@@ -472,7 +477,7 @@ async fn deauth_user(user_id: UserId, _token: AdminToken, conn: DbConn, nt: Noti
     }
     Device::delete_all_by_user(&user.uuid, &conn).await?;
-    user.reset_security_stamp();
+    user.reset_security_stamp(&conn).await?;
     user.save(&conn).await
 }
@@ -480,14 +485,15 @@ async fn deauth_user(user_id: UserId, _token: AdminToken, conn: DbConn, nt: Noti
 #[post("/users/<user_id>/disable", format = "application/json")]
 async fn disable_user(user_id: UserId, _token: AdminToken, conn: DbConn, nt: Notify<'_>) -> EmptyResult {
     let mut user = get_user_or_404(&user_id, &conn).await?;
-    Device::delete_all_by_user(&user.uuid, &conn).await?;
-    user.reset_security_stamp();
+    user.reset_security_stamp(&conn).await?;
     user.enabled = false;
     let save_result = user.save(&conn).await;
     nt.send_logout(&user, None, &conn).await;
+    Device::delete_all_by_user(&user.uuid, &conn).await?;
     save_result
 }
@@ -517,7 +523,11 @@ async fn resend_user_invite(user_id: UserId, _token: AdminToken, conn: DbConn) -
     }
     if CONFIG.mail_enabled() {
-        let org_id: OrganizationId = FAKE_ADMIN_UUID.to_string().into();
+        let org_id: OrganizationId = if CONFIG.sso_enabled() {
+            FAKE_SSO_IDENTIFIER.into()
+        } else {
+            FAKE_ADMIN_UUID.into()
+        };
         let member_id: MembershipId = FAKE_ADMIN_UUID.to_string().into();
         mail::send_invite(&user, org_id, member_id, &CONFIG.invitation_org_name(), None).await
     } else {
@@ -637,7 +647,6 @@ use cached::proc_macro::cached;
 /// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already
 /// It will cache this function for 600 seconds (10 minutes) which should prevent the exhaustion of the rate limit
 /// Any cache will be lost if Vaultwarden is restarted
-use std::time::Duration; // Needed for cached
 #[cached(time = 600, sync_writes = "default")]
 async fn get_release_info(has_http_access: bool) -> (String, String, String) {
     // If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
@@ -734,6 +743,13 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> A
     let ip_header_name = &ip_header.0.unwrap_or_default();
+    let invalid_feature_flags: Vec<String> = parse_experimental_client_feature_flags(
+        &CONFIG.experimental_client_feature_flags(),
+        FeatureFlagFilter::InvalidOnly,
+    )
+    .into_keys()
+    .collect();
     let diagnostics_json = json!({
         "dns_resolved": dns_resolved,
         "current_release": VERSION,
@@ -756,6 +772,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> A
         "db_version": get_sql_server_version(&conn).await,
         "admin_url": format!("{}/diagnostics", admin_url()),
         "overrides": &CONFIG.get_overrides().join(", "),
+        "invalid_feature_flags": invalid_feature_flags,
         "host_arch": env::consts::ARCH,
         "host_os": env::consts::OS,
         "tz_env": env::var("TZ").unwrap_or_default(),
+35 -24
@@ -22,7 +22,7 @@ use crate::{
         DbConn,
     },
     mail,
-    util::{format_date, NumberOrString},
+    util::{deser_opt_nonempty_str, format_date, NumberOrString},
     CONFIG,
 };
@@ -106,7 +106,6 @@ pub struct RegisterData {
     name: Option<String>,
-    #[allow(dead_code)]
     organization_user_id: Option<MembershipId>,
     // Used only from the register/finish endpoint
@@ -296,7 +295,7 @@ pub async fn _register(data: Json<RegisterData>, email_verification: bool, conn:
     set_kdf_data(&mut user, &data.kdf)?;
-    user.set_password(&data.master_password_hash, Some(data.key), true, None);
+    user.set_password(&data.master_password_hash, Some(data.key), true, None, &conn).await?;
     user.password_hint = password_hint;
     // Add extra fields if present
@@ -364,7 +363,9 @@ async fn post_set_password(data: Json<SetPasswordData>, headers: Headers, conn:
         Some(data.key),
         false,
         Some(vec![String::from("revision_date")]), // We need to allow revision-date to use the old security_timestamp
-    );
+        &conn,
+    )
+    .await?;
     user.password_hint = password_hint;
     if let Some(keys) = data.keys {
@@ -373,15 +374,13 @@ async fn post_set_password(data: Json<SetPasswordData>, headers: Headers, conn:
     }
     if let Some(identifier) = data.org_identifier {
-        if identifier != crate::sso::FAKE_IDENTIFIER && identifier != crate::api::admin::FAKE_ADMIN_UUID {
+        if identifier != crate::sso::FAKE_SSO_IDENTIFIER && identifier != crate::api::admin::FAKE_ADMIN_UUID {
-            let org = match Organization::find_by_uuid(&identifier.into(), &conn).await {
-                None => err!("Failed to retrieve the associated organization"),
-                Some(org) => org,
+            let Some(org) = Organization::find_by_uuid(&identifier.into(), &conn).await else {
+                err!("Failed to retrieve the associated organization")
             };
-            let membership = match Membership::find_by_user_and_org(&user.uuid, &org.uuid, &conn).await {
-                None => err!("Failed to retrieve the invitation"),
-                Some(org) => org,
+            let Some(membership) = Membership::find_by_user_and_org(&user.uuid, &org.uuid, &conn).await else {
+                err!("Failed to retrieve the invitation")
             };
             accept_org_invite(&user, membership, None, &conn).await?;
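The hunk above is a mechanical `match`-to-`let … else` refactor. A minimal standalone sketch of the same idiom, with a hypothetical `lookup` standing in for `Organization::find_by_uuid` and a plain early `return Err(...)` in place of vaultwarden's diverging `err!` macro:

```rust
// Hypothetical stand-in for Organization::find_by_uuid.
fn lookup(id: &str) -> Option<String> {
    (id == "known").then(|| format!("org-{id}"))
}

// Before: a match whose only purpose is unwrapping the Some arm.
fn resolve_old(id: &str) -> Result<String, String> {
    let org = match lookup(id) {
        None => return Err("Failed to retrieve the associated organization".into()),
        Some(org) => org,
    };
    Ok(org)
}

// After: let-else binds the Some value and diverges on None,
// keeping the happy path at the outer indentation level.
fn resolve_new(id: &str) -> Result<String, String> {
    let Some(org) = lookup(id) else {
        return Err("Failed to retrieve the associated organization".into());
    };
    Ok(org)
}

fn main() {
    // Both forms are behaviorally identical.
    assert_eq!(resolve_old("known"), resolve_new("known"));
    assert_eq!(resolve_old("missing"), resolve_new("missing"));
}
```

`let … else` has been stable since Rust 1.65, so this refactor needs no MSRV consideration at the toolchain versions in this changeset.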
@@ -532,14 +531,16 @@ async fn post_password(data: Json<ChangePassData>, headers: Headers, conn: DbCon
             String::from("get_public_keys"),
             String::from("get_api_webauthn"),
         ]),
-    );
+        &conn,
+    )
+    .await?;
     let save_result = user.save(&conn).await;
     // Prevent logging out the client where the user requested this endpoint from.
     // If you do logout the user it will causes issues at the client side.
     // Adding the device uuid will prevent this.
-    nt.send_logout(&user, Some(headers.device.uuid.clone()), &conn).await;
+    nt.send_logout(&user, Some(&headers.device), &conn).await;
     save_result
 }
@@ -579,7 +580,6 @@ fn set_kdf_data(user: &mut User, data: &KDFData) -> EmptyResult {
     Ok(())
 }
-#[allow(dead_code)]
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct AuthenticationData {
@@ -588,7 +588,6 @@ struct AuthenticationData {
     master_password_authentication_hash: String,
 }
-#[allow(dead_code)]
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct UnlockData {
@@ -597,11 +596,12 @@ struct UnlockData {
master_key_wrapped_user_key: String, master_key_wrapped_user_key: String,
} }
#[allow(dead_code)]
#[derive(Deserialize)] #[derive(Deserialize)]
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
struct ChangeKdfData { struct ChangeKdfData {
#[allow(dead_code)]
new_master_password_hash: String, new_master_password_hash: String,
#[allow(dead_code)]
key: String, key: String,
authentication_data: AuthenticationData, authentication_data: AuthenticationData,
unlock_data: UnlockData, unlock_data: UnlockData,
@@ -633,10 +633,12 @@ async fn post_kdf(data: Json<ChangeKdfData>, headers: Headers, conn: DbConn, nt:
         Some(data.unlock_data.master_key_wrapped_user_key),
         true,
         None,
-    );
+        &conn,
+    )
+    .await?;
     let save_result = user.save(&conn).await;
-    nt.send_logout(&user, Some(headers.device.uuid.clone()), &conn).await;
+    nt.send_logout(&user, Some(&headers.device), &conn).await;
     save_result
 }
@@ -647,6 +649,7 @@ struct UpdateFolderData {
     // There is a bug in 2024.3.x which adds a `null` item.
     // To bypass this we allow a Option here, but skip it during the updates
     // See: https://github.com/bitwarden/clients/issues/8453
+    #[serde(default, deserialize_with = "deser_opt_nonempty_str")]
     id: Option<FolderId>,
     name: String,
 }
@@ -900,14 +903,16 @@ async fn post_rotatekey(data: Json<KeyData>, headers: Headers, conn: DbConn, nt:
         Some(data.account_unlock_data.master_password_unlock_data.master_key_encrypted_user_key),
         true,
         None,
-    );
+        &conn,
+    )
+    .await?;
     let save_result = user.save(&conn).await;
     // Prevent logging out the client where the user requested this endpoint from.
     // If you do logout the user it will causes issues at the client side.
     // Adding the device uuid will prevent this.
-    nt.send_logout(&user, Some(headers.device.uuid.clone()), &conn).await;
+    nt.send_logout(&user, Some(&headers.device), &conn).await;
     save_result
 }
@@ -919,12 +924,13 @@ async fn post_sstamp(data: Json<PasswordOrOtpData>, headers: Headers, conn: DbCo
     data.validate(&user, true, &conn).await?;
-    Device::delete_all_by_user(&user.uuid, &conn).await?;
-    user.reset_security_stamp();
+    user.reset_security_stamp(&conn).await?;
     let save_result = user.save(&conn).await;
     nt.send_logout(&user, None, &conn).await;
+    Device::delete_all_by_user(&user.uuid, &conn).await?;
     save_result
 }
@@ -1042,7 +1048,7 @@ async fn post_email(data: Json<ChangeEmailData>, headers: Headers, conn: DbConn,
     user.email_new = None;
     user.email_new_token = None;
-    user.set_password(&data.new_master_password_hash, Some(data.key), true, None);
+    user.set_password(&data.new_master_password_hash, Some(data.key), true, None, &conn).await?;
     let save_result = user.save(&conn).await;
@@ -1254,7 +1260,7 @@ struct SecretVerificationRequest {
 pub async fn kdf_upgrade(user: &mut User, pwd_hash: &str, conn: &DbConn) -> ApiResult<()> {
     if user.password_iterations < CONFIG.password_iterations() {
         user.password_iterations = CONFIG.password_iterations();
-        user.set_password(pwd_hash, None, false, None);
+        user.set_password(pwd_hash, None, false, None, conn).await?;
         if let Err(e) = user.save(conn).await {
             error!("Error updating user: {e:#?}");
@@ -1328,6 +1334,11 @@ impl<'r> FromRequest<'r> for KnownDevice {
     async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
         let email = if let Some(email_b64) = req.headers().get_one("X-Request-Email") {
+            // Bitwarden seems to send padded Base64 strings since 2026.2.1
+            // Since these values are not streamed and Headers are always split by newlines
+            // we can safely ignore padding here and remove any '=' appended.
+            let email_b64 = email_b64.trim_end_matches('=');
             let Ok(email_bytes) = data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) else {
                 return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
             };
+57 -42
@@ -11,10 +11,10 @@ use rocket::{
 use serde_json::Value;
 use crate::auth::ClientVersion;
-use crate::util::{save_temp_file, NumberOrString};
+use crate::util::{deser_opt_nonempty_str, save_temp_file, NumberOrString};
 use crate::{
     api::{self, core::log_event, EmptyResult, JsonResult, Notify, PasswordOrOtpData, UpdateType},
-    auth::Headers,
+    auth::{Headers, OrgIdGuard, OwnerHeaders},
     config::PathType,
     crypto,
     db::{
@@ -86,7 +86,8 @@ pub fn routes() -> Vec<Route> {
         restore_cipher_put_admin,
         restore_cipher_selected,
         restore_cipher_selected_admin,
-        delete_all,
+        purge_org_vault,
+        purge_personal_vault,
         move_cipher_selected,
         move_cipher_selected_put,
         put_collections2_update,
@@ -247,6 +248,7 @@ pub struct CipherData {
     // Id is optional as it is included only in bulk share
     pub id: Option<CipherId>,
     // Folder id is not included in import
+    #[serde(default, deserialize_with = "deser_opt_nonempty_str")]
     pub folder_id: Option<FolderId>,
     // TODO: Some of these might appear all the time, no need for Option
     #[serde(alias = "organizationID")]
@@ -296,6 +298,7 @@ pub struct CipherData {
 #[derive(Debug, Deserialize)]
 #[serde(rename_all = "camelCase")]
 pub struct PartialCipherData {
+    #[serde(default, deserialize_with = "deser_opt_nonempty_str")]
     folder_id: Option<FolderId>,
     favorite: bool,
 }
@@ -425,7 +428,7 @@ pub async fn update_cipher_from_data(
     let transfer_cipher = cipher.organization_uuid.is_none() && data.organization_id.is_some();
     if let Some(org_id) = data.organization_id {
-        match Membership::find_by_user_and_org(&headers.user.uuid, &org_id, conn).await {
+        match Membership::find_confirmed_by_user_and_org(&headers.user.uuid, &org_id, conn).await {
             None => err!("You don't have permission to add item to organization"),
             Some(member) => {
                 if shared_to_collections.is_some()
@@ -1568,6 +1571,7 @@ async fn restore_cipher_selected(
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct MoveCipherData {
+    #[serde(default, deserialize_with = "deser_opt_nonempty_str")]
     folder_id: Option<FolderId>,
     ids: Vec<CipherId>,
 }
@@ -1642,9 +1646,51 @@ struct OrganizationIdData {
     org_id: OrganizationId,
 }
+// Use the OrgIdGuard here, to ensure there an organization id present.
+// If there is no organization id present, it should be forwarded to purge_personal_vault.
+// This guard needs to be the first argument, else OwnerHeaders will be triggered which will logout the user.
 #[post("/ciphers/purge?<organization..>", data = "<data>")]
-async fn delete_all(
-    organization: Option<OrganizationIdData>,
+async fn purge_org_vault(
+    _org_id_guard: OrgIdGuard,
+    organization: OrganizationIdData,
+    data: Json<PasswordOrOtpData>,
+    headers: OwnerHeaders,
+    conn: DbConn,
+    nt: Notify<'_>,
+) -> EmptyResult {
+    if organization.org_id != headers.org_id {
+        err!("Organization not found", "Organization id's do not match");
+    }
+    let data: PasswordOrOtpData = data.into_inner();
+    let user = headers.user;
+    data.validate(&user, true, &conn).await?;
+    match Membership::find_confirmed_by_user_and_org(&user.uuid, &organization.org_id, &conn).await {
+        Some(member) if member.atype == MembershipType::Owner => {
+            Cipher::delete_all_by_organization(&organization.org_id, &conn).await?;
+            nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
+            log_event(
+                EventType::OrganizationPurgedVault as i32,
+                &organization.org_id,
+                &organization.org_id,
+                &user.uuid,
+                headers.device.atype,
+                &headers.ip.ip,
+                &conn,
+            )
+            .await;
+            Ok(())
+        }
+        _ => err!("You don't have permission to purge the organization vault"),
+    }
+}
+#[post("/ciphers/purge", data = "<data>")]
+async fn purge_personal_vault(
     data: Json<PasswordOrOtpData>,
     headers: Headers,
     conn: DbConn,
@@ -1655,42 +1701,10 @@ async fn delete_all(
     data.validate(&user, true, &conn).await?;
-    match organization {
-        Some(org_data) => {
-            // Organization ID in query params, purging organization vault
-            match Membership::find_by_user_and_org(&user.uuid, &org_data.org_id, &conn).await {
-                None => err!("You don't have permission to purge the organization vault"),
-                Some(member) => {
-                    if member.atype == MembershipType::Owner {
-                        Cipher::delete_all_by_organization(&org_data.org_id, &conn).await?;
-                        nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
-                        log_event(
-                            EventType::OrganizationPurgedVault as i32,
-                            &org_data.org_id,
-                            &org_data.org_id,
-                            &user.uuid,
-                            headers.device.atype,
-                            &headers.ip.ip,
-                            &conn,
-                        )
-                        .await;
-                        Ok(())
-                    } else {
-                        err!("You don't have permission to purge the organization vault");
-                    }
-                }
-            }
-        }
-        None => {
-            // No organization ID in query params, purging user vault
-            // Delete ciphers and their attachments
     for cipher in Cipher::find_owned_by_user(&user.uuid, &conn).await {
         cipher.delete(&conn).await?;
     }
-            // Delete folders
     for f in Folder::find_by_user(&user.uuid, &conn).await {
         f.delete(&conn).await?;
     }
@@ -1699,8 +1713,6 @@ async fn delete_all(
     nt.send_user_update(UpdateType::SyncVault, &user, &headers.device.push_uuid, &conn).await;
     Ok(())
-        }
-    }
 }
 #[derive(PartialEq)]
@@ -1980,8 +1992,11 @@ impl CipherSyncData {
     }
     // Generate a HashMap with the Organization UUID as key and the Membership record
-    let members: HashMap<OrganizationId, Membership> =
-        Membership::find_by_user(user_id, conn).await.into_iter().map(|m| (m.org_uuid.clone(), m)).collect();
+    let members: HashMap<OrganizationId, Membership> = Membership::find_confirmed_by_user(user_id, conn)
+        .await
+        .into_iter()
+        .map(|m| (m.org_uuid.clone(), m))
+        .collect();
     // Generate a HashMap with the User_Collections UUID as key and the CollectionUser record
     let user_collections: HashMap<CollectionId, CollectionUser> = CollectionUser::find_by_user(user_id, conn)
+1 -1
@@ -653,7 +653,7 @@ async fn password_emergency_access(
     };
     // change grantor_user password
-    grantor_user.set_password(new_master_password_hash, Some(data.key), true, None);
+    grantor_user.set_password(new_master_password_hash, Some(data.key), true, None, &conn).await?;
     grantor_user.save(&conn).await?;
     // Disable TwoFactor providers since they will otherwise block logins
+1 -1
@@ -240,7 +240,7 @@ async fn _log_user_event(
     ip: &IpAddr,
     conn: &DbConn,
 ) {
-    let memberships = Membership::find_by_user(user_id, conn).await;
+    let memberships = Membership::find_confirmed_by_user(user_id, conn).await;
     let mut events: Vec<Event> = Vec::with_capacity(memberships.len() + 1); // We need an event per org and one without an org
     // Upstream saves the event also without any org_id.
+2
@@ -8,6 +8,7 @@ use crate::{
         models::{Folder, FolderId},
         DbConn,
     },
+    util::deser_opt_nonempty_str,
 };
 pub fn routes() -> Vec<rocket::Route> {
@@ -38,6 +39,7 @@ async fn get_folder(folder_id: FolderId, headers: Headers, conn: DbConn) -> Json
 #[serde(rename_all = "camelCase")]
 pub struct FolderData {
     pub name: String,
+    #[serde(default, deserialize_with = "deser_opt_nonempty_str")]
     pub id: Option<FolderId>,
 }
+15 -16
@@ -59,7 +59,8 @@ use crate::{
     error::Error,
     http_client::make_http_request,
     mail,
-    util::parse_experimental_client_feature_flags,
+    util::{parse_experimental_client_feature_flags, FeatureFlagFilter},
+    CONFIG,
 };
 #[derive(Debug, Serialize, Deserialize)]
@@ -136,7 +137,7 @@ async fn put_eq_domains(data: Json<EquivDomainData>, headers: Headers, conn: DbC
 #[get("/hibp/breach?<username>")]
 async fn hibp_breach(username: &str, _headers: Headers) -> JsonResult {
     let username: String = url::form_urlencoded::byte_serialize(username.as_bytes()).collect();
-    if let Some(api_key) = crate::CONFIG.hibp_api_key() {
+    if let Some(api_key) = CONFIG.hibp_api_key() {
         let url = format!(
             "https://haveibeenpwned.com/api/v3/breachedaccount/{username}?truncateResponse=false&includeUnverified=false"
         );
@@ -197,19 +198,17 @@ fn get_api_webauthn(_headers: Headers) -> Json<Value> {
 #[get("/config")]
 fn config() -> Json<Value> {
-    let domain = crate::CONFIG.domain();
+    let domain = CONFIG.domain();
     // Official available feature flags can be found here:
-    // Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
-    // Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
-    // Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
-    // iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
+    // Server (v2026.2.1): https://github.com/bitwarden/server/blob/0e42725d0837bd1c0dabd864ff621a579959744b/src/Core/Constants.cs#L135
+    // Client (v2026.2.1): https://github.com/bitwarden/clients/blob/f96380c3138291a028bdd2c7a5fee540d5c98ba5/libs/common/src/enums/feature-flag.enum.ts#L12
+    // Android (v2026.2.1): https://github.com/bitwarden/android/blob/6902c19c0093fa476bbf74ccaa70c9f14afbb82f/core/src/main/kotlin/com/bitwarden/core/data/manager/model/FlagKey.kt#L31
+    // iOS (v2026.2.1): https://github.com/bitwarden/ios/blob/cdd9ba1770ca2ffc098d02d12cc3208e3a830454/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
-    let mut feature_states =
-        parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
-    feature_states.insert("duo-redirect".to_string(), true);
-    feature_states.insert("email-verification".to_string(), true);
-    feature_states.insert("unauth-ui-refresh".to_string(), true);
-    feature_states.insert("enable-pm-flight-recorder".to_string(), true);
-    feature_states.insert("mobile-error-reporting".to_string(), true);
+    let feature_states = parse_experimental_client_feature_flags(
+        &CONFIG.experimental_client_feature_flags(),
+        FeatureFlagFilter::ValidOnly,
+    );
+    // Add default feature_states here if needed, currently no features are needed by default.
     Json(json!({
         // Note: The clients use this version to handle backwards compatibility concerns
@@ -225,7 +224,7 @@ fn config() -> Json<Value> {
             "url": "https://github.com/dani-garcia/vaultwarden"
         },
         "settings": {
-            "disableUserRegistration": crate::CONFIG.is_signup_disabled()
+            "disableUserRegistration": CONFIG.is_signup_disabled()
         },
         "environment": {
             "vault": domain,
@@ -278,7 +277,7 @@ async fn accept_org_invite(
     member.save(conn).await?;
-    if crate::CONFIG.mail_enabled() {
+    if CONFIG.mail_enabled() {
         let org = match Organization::find_by_uuid(&member.org_uuid, conn).await {
             Some(org) => org,
             None => err!("Organization not found."),
+147 -59
@@ -20,7 +20,8 @@ use crate::{
         DbConn,
     },
     mail,
-    util::{convert_json_key_lcase_first, get_uuid, NumberOrString},
+    sso::FAKE_SSO_IDENTIFIER,
+    util::{convert_json_key_lcase_first, NumberOrString},
     CONFIG,
 };
@@ -64,6 +65,7 @@ pub fn routes() -> Vec<Route> {
         post_org_import,
         list_policies,
         list_policies_token,
+        get_dummy_master_password_policy,
         get_master_password_policy,
         get_policy,
         put_policy,
@@ -131,6 +133,24 @@ struct FullCollectionData {
     external_id: Option<String>,
 }
+impl FullCollectionData {
+    pub async fn validate(&self, org_id: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let org_groups = Group::find_by_organization(org_id, conn).await;
+        let org_group_ids: HashSet<&GroupId> = org_groups.iter().map(|c| &c.uuid).collect();
+        if let Some(e) = self.groups.iter().find(|g| !org_group_ids.contains(&g.id)) {
+            err!("Invalid group", format!("Group {} does not belong to organization {}!", e.id, org_id))
+        }
+        let org_memberships = Membership::find_by_org(org_id, conn).await;
+        let org_membership_ids: HashSet<&MembershipId> = org_memberships.iter().map(|m| &m.uuid).collect();
+        if let Some(e) = self.users.iter().find(|m| !org_membership_ids.contains(&m.id)) {
+            err!("Invalid member", format!("Member {} does not belong to organization {}!", e.id, org_id))
+        }
+        Ok(())
+    }
+}
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct CollectionGroupData {
@@ -233,11 +253,13 @@ async fn post_delete_organization(
 }
 #[post("/organizations/<org_id>/leave")]
-async fn leave_organization(org_id: OrganizationId, headers: Headers, conn: DbConn) -> EmptyResult {
-    match Membership::find_by_user_and_org(&headers.user.uuid, &org_id, &conn).await {
-        None => err!("User not part of organization"),
-        Some(member) => {
-            if member.atype == MembershipType::Owner
+async fn leave_organization(org_id: OrganizationId, headers: OrgMemberHeaders, conn: DbConn) -> EmptyResult {
+    if headers.membership.status != MembershipStatus::Confirmed as i32 {
+        err!("You need to be a Member of the Organization to call this endpoint")
+    }
+    let membership = headers.membership;
+    if membership.atype == MembershipType::Owner
         && Membership::count_confirmed_by_org_and_type(&org_id, MembershipType::Owner, &conn).await <= 1
     {
         err!("The last owner can't leave")
@@ -245,7 +267,7 @@ async fn leave_organization(org_id: OrganizationId, headers: Headers, conn: DbCo
     log_event(
         EventType::OrganizationUserLeft as i32,
-        &member.uuid,
+        &membership.uuid,
         &org_id,
         &headers.user.uuid,
         headers.device.atype,
@@ -254,9 +276,7 @@ async fn leave_organization(org_id: OrganizationId, headers: Headers, conn: DbCo
     )
     .await;
-    member.delete(&conn).await
-        }
-    }
+    membership.delete(&conn).await
 }
 #[get("/organizations/<org_id>")]
@@ -335,7 +355,7 @@ async fn get_user_collections(headers: Headers, conn: DbConn) -> Json<Value> {
 // The returned `Id` will then be passed to `get_master_password_policy` which will mainly ignore it
 #[get("/organizations/<identifier>/auto-enroll-status")]
 async fn get_auto_enroll_status(identifier: &str, headers: Headers, conn: DbConn) -> JsonResult {
-    let org = if identifier == crate::sso::FAKE_IDENTIFIER {
+    let org = if identifier == FAKE_SSO_IDENTIFIER {
         match Membership::find_main_user_org(&headers.user.uuid, &conn).await {
             Some(member) => Organization::find_by_uuid(&member.org_uuid, &conn).await,
             None => None,
@@ -345,7 +365,7 @@ async fn get_auto_enroll_status(identifier: &str, headers: Headers, conn: DbConn
     };
     let (id, identifier, rp_auto_enroll) = match org {
-        None => (get_uuid(), identifier.to_string(), false),
+        None => (identifier.to_string(), identifier.to_string(), false),
         Some(org) => (
             org.uuid.to_string(),
             org.uuid.to_string(),
@@ -395,7 +415,7 @@ async fn get_org_collections_details(org_id: OrganizationId, headers: ManagerHea
         Membership::find_confirmed_by_org(&org_id, &conn).await.into_iter().map(|m| (m.uuid, m.atype)).collect();
     // check if current user has full access to the organization (either directly or via any group)
-    let has_full_access_to_org = member.access_all
+    let has_full_access_to_org = member.has_full_access()
         || (CONFIG.org_groups_enabled() && GroupUser::has_full_access_by_member(&org_id, &member.uuid, &conn).await);
     // Get all admins, owners and managers who can manage/access all
@@ -421,6 +441,11 @@ async fn get_org_collections_details(org_id: OrganizationId, headers: ManagerHea
             || (CONFIG.org_groups_enabled()
                 && GroupUser::has_access_to_collection_by_member(&col.uuid, &member.uuid, &conn).await);
+        // If the user is a manager, and is not assigned to this collection, skip this and continue with the next collection
+        if !assigned {
+            continue;
+        }
         // get the users assigned directly to the given collection
         let mut users: Vec<Value> = col_users
             .iter()
@@ -475,12 +500,13 @@ async fn post_organization_collections(
         err!("Organization not found", "Organization id's do not match");
     }
     let data: FullCollectionData = data.into_inner();
+    data.validate(&org_id, &conn).await?;
-    let Some(org) = Organization::find_by_uuid(&org_id, &conn).await else {
-        err!("Can't find organization details")
-    };
-    let collection = Collection::new(org.uuid, data.name, data.external_id);
+    if headers.membership.atype == MembershipType::Manager && !headers.membership.access_all {
+        err!("You don't have permission to create collections")
+    }
+    let collection = Collection::new(org_id.clone(), data.name, data.external_id);
     collection.save(&conn).await?;
     log_event(
@@ -496,7 +522,7 @@ async fn post_organization_collections(
     for group in data.groups {
         CollectionGroup::new(collection.uuid.clone(), group.id, group.read_only, group.hide_passwords, group.manage)
-            .save(&conn)
+            .save(&org_id, &conn)
             .await?;
     }
@@ -520,10 +546,6 @@ async fn post_organization_collections(
         .await?;
     }
-    if headers.membership.atype == MembershipType::Manager && !headers.membership.access_all {
-        CollectionUser::save(&headers.membership.user_uuid, &collection.uuid, false, false, false, &conn).await?;
-    }
     Ok(Json(collection.to_json_details(&headers.membership.user_uuid, None, &conn).await))
 }
@@ -574,10 +596,10 @@ async fn post_bulk_access_collections(
     )
     .await;
-    CollectionGroup::delete_all_by_collection(&col_id, &conn).await?;
+    CollectionGroup::delete_all_by_collection(&col_id, &org_id, &conn).await?;
     for group in &data.groups {
         CollectionGroup::new(col_id.clone(), group.id.clone(), group.read_only, group.hide_passwords, group.manage)
-            .save(&conn)
+            .save(&org_id, &conn)
             .await?;
     }
@@ -622,6 +644,7 @@ async fn post_organization_collection_update(
         err!("Organization not found", "Organization id's do not match");
     }
     let data: FullCollectionData = data.into_inner();
+    data.validate(&org_id, &conn).await?;
     if Organization::find_by_uuid(&org_id, &conn).await.is_none() {
         err!("Can't find organization details")
@@ -650,11 +673,11 @@ async fn post_organization_collection_update(
     )
     .await;
-    CollectionGroup::delete_all_by_collection(&col_id, &conn).await?;
+    CollectionGroup::delete_all_by_collection(&col_id, &org_id, &conn).await?;
     for group in data.groups {
         CollectionGroup::new(col_id.clone(), group.id, group.read_only, group.hide_passwords, group.manage)
-            .save(&conn)
+            .save(&org_id, &conn)
             .await?;
     }
@@ -903,7 +926,7 @@ async fn get_org_domain_sso_verified(data: Json<OrgDomainDetails>, conn: DbConn)
         .collect::<Vec<(String, String)>>()
     {
         v if !v.is_empty() => v,
-        _ => vec![(crate::sso::FAKE_IDENTIFIER.to_string(), crate::sso::FAKE_IDENTIFIER.to_string())],
+        _ => vec![(FAKE_SSO_IDENTIFIER.to_string(), FAKE_SSO_IDENTIFIER.to_string())],
     };
     Ok(Json(json!({
@@ -998,6 +1021,24 @@ struct InviteData {
     permissions: HashMap<String, Value>,
 }
+impl InviteData {
+    async fn validate(&self, org_id: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let org_collections = Collection::find_by_organization(org_id, conn).await;
+        let org_collection_ids: HashSet<&CollectionId> = org_collections.iter().map(|c| &c.uuid).collect();
+        if let Some(e) = self.collections.iter().flatten().find(|c| !org_collection_ids.contains(&c.id)) {
+            err!("Invalid collection", format!("Collection {} does not belong to organization {}!", e.id, org_id))
+        }
+        let org_groups = Group::find_by_organization(org_id, conn).await;
+        let org_group_ids: HashSet<&GroupId> = org_groups.iter().map(|c| &c.uuid).collect();
+        if let Some(e) = self.groups.iter().find(|g| !org_group_ids.contains(g)) {
+            err!("Invalid group", format!("Group {} does not belong to organization {}!", e, org_id))
+        }
+        Ok(())
+    }
+}
 #[post("/organizations/<org_id>/users/invite", data = "<data>")]
 async fn send_invite(
     org_id: OrganizationId,
@@ -1009,6 +1050,7 @@ async fn send_invite(
         err!("Organization not found", "Organization id's do not match");
     }
     let data: InviteData = data.into_inner();
+    data.validate(&org_id, &conn).await?;
     // HACK: We need the raw user-type to be sure custom role is selected to determine the access_all permission
     // The from_str() will convert the custom role type into a manager role type
@@ -1268,20 +1310,20 @@ async fn accept_invite(
     // skip invitation logic when we were invited via the /admin panel
     if **member_id != FAKE_ADMIN_UUID {
-        let Some(mut member) = Membership::find_by_uuid_and_org(member_id, &claims.org_id, &conn).await else {
+        let Some(mut membership) = Membership::find_by_uuid_and_org(member_id, &claims.org_id, &conn).await else {
             err!("Error accepting the invitation")
         };
-        let reset_password_key = match OrgPolicy::org_is_reset_password_auto_enroll(&member.org_uuid, &conn).await {
+        let reset_password_key = match OrgPolicy::org_is_reset_password_auto_enroll(&membership.org_uuid, &conn).await {
             true if data.reset_password_key.is_none() => err!("Reset password key is required, but not provided."),
             true => data.reset_password_key,
             false => None,
         };
         // In case the user was invited before the mail was saved in db.
-        member.invited_by_email = member.invited_by_email.or(claims.invited_by_email);
-        accept_org_invite(&headers.user, member, reset_password_key, &conn).await?;
+        membership.invited_by_email = membership.invited_by_email.or(claims.invited_by_email);
+        accept_org_invite(&headers.user, membership, reset_password_key, &conn).await?;
     } else if CONFIG.mail_enabled() {
         // User was invited from /admin, so they are automatically confirmed
         let org_name = CONFIG.invitation_org_name();
@@ -1515,9 +1557,8 @@ async fn edit_member(
         && data.permissions.get("deleteAnyCollection") == Some(&json!(true))
         && data.permissions.get("createNewCollections") == Some(&json!(true)));

-    let mut member_to_edit = match Membership::find_by_uuid_and_org(&member_id, &org_id, &conn).await {
-        Some(member) => member,
-        None => err!("The specified user isn't member of the organization"),
+    let Some(mut member_to_edit) = Membership::find_by_uuid_and_org(&member_id, &org_id, &conn).await else {
+        err!("The specified user isn't member of the organization")
     };

     if new_type != member_to_edit.atype
@@ -1834,7 +1875,6 @@ async fn post_org_import(
 #[derive(Deserialize)]
 #[serde(rename_all = "camelCase")]
-#[allow(dead_code)]
 struct BulkCollectionsData {
     organization_id: OrganizationId,
     cipher_ids: Vec<CipherId>,
@@ -1848,6 +1888,10 @@ struct BulkCollectionsData {
 async fn post_bulk_collections(data: Json<BulkCollectionsData>, headers: Headers, conn: DbConn) -> EmptyResult {
     let data: BulkCollectionsData = data.into_inner();
+    if Membership::find_confirmed_by_user_and_org(&headers.user.uuid, &data.organization_id, &conn).await.is_none() {
+        err!("You need to be a Member of the Organization to call this endpoint")
+    }

     // Get all the collection available to the user in one query
     // Also filter based upon the provided collections
     let user_collections: HashMap<CollectionId, Collection> =
@@ -1863,7 +1907,7 @@ async fn post_bulk_collections(data: Json<BulkCollectionsData>, headers: Headers
             })
             .collect();

-    // Verify if all the collections requested exists and are writeable for the user, else abort
+    // Verify if all the collections requested exists and are writable for the user, else abort
     for collection_uuid in &data.collection_ids {
         match user_collections.get(collection_uuid) {
             Some(collection) if collection.is_writable_by_user(&headers.user.uuid, &conn).await => (),
@@ -1933,10 +1977,20 @@ async fn list_policies_token(org_id: OrganizationId, token: &str, conn: DbConn)
     })))
 }

-// Called during the SSO enrollment.
-// Return the org policy if it exists, otherwise use the default one.
-#[get("/organizations/<org_id>/policies/master-password", rank = 1)]
-async fn get_master_password_policy(org_id: OrganizationId, _headers: Headers, conn: DbConn) -> JsonResult {
+// Called during the SSO enrollment return the default policy
+#[get("/organizations/vaultwarden-dummy-oidc-identifier/policies/master-password", rank = 1)]
+fn get_dummy_master_password_policy() -> JsonResult {
+    let (enabled, data) = match CONFIG.sso_master_password_policy_value() {
+        Some(policy) if CONFIG.sso_enabled() => (true, policy.to_string()),
+        _ => (false, "null".to_string()),
+    };
+    let policy = OrgPolicy::new(FAKE_SSO_IDENTIFIER.into(), OrgPolicyType::MasterPassword, enabled, data);
+    Ok(Json(policy.to_json()))
+}
+
+// Called during the SSO enrollment return the org policy if it exists
+#[get("/organizations/<org_id>/policies/master-password", rank = 2)]
+async fn get_master_password_policy(org_id: OrganizationId, _headers: OrgMemberHeaders, conn: DbConn) -> JsonResult {
     let policy =
         OrgPolicy::find_by_org_and_type(&org_id, OrgPolicyType::MasterPassword, &conn).await.unwrap_or_else(|| {
             let (enabled, data) = match CONFIG.sso_master_password_policy_value() {
@@ -1950,7 +2004,7 @@ async fn get_master_password_policy(org_id: OrganizationId, _headers: Headers, c
     Ok(Json(policy.to_json()))
 }

-#[get("/organizations/<org_id>/policies/<pol_type>", rank = 2)]
+#[get("/organizations/<org_id>/policies/<pol_type>", rank = 3)]
 async fn get_policy(org_id: OrganizationId, pol_type: i32, headers: AdminHeaders, conn: DbConn) -> JsonResult {
     if org_id != headers.org_id {
         err!("Organization not found", "Organization id's do not match");
@@ -2144,13 +2198,13 @@ fn get_plans() -> Json<Value> {
 }

 #[get("/organizations/<_org_id>/billing/metadata")]
-fn get_billing_metadata(_org_id: OrganizationId, _headers: Headers) -> Json<Value> {
+fn get_billing_metadata(_org_id: OrganizationId, _headers: OrgMemberHeaders) -> Json<Value> {
     // Prevent a 404 error, which also causes Javascript errors.
     Json(_empty_data_json())
 }

 #[get("/organizations/<_org_id>/billing/vnext/warnings")]
-fn get_billing_warnings(_org_id: OrganizationId, _headers: Headers) -> Json<Value> {
+fn get_billing_warnings(_org_id: OrganizationId, _headers: OrgMemberHeaders) -> Json<Value> {
     Json(json!({
         "freeTrial":null,
         "inactiveSubscription":null,
@@ -2422,6 +2476,23 @@ impl GroupRequest {
         group
     }

+    /// Validate if all the collections and members belong to the provided organization
+    pub async fn validate(&self, org_id: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let org_collections = Collection::find_by_organization(org_id, conn).await;
+        let org_collection_ids: HashSet<&CollectionId> = org_collections.iter().map(|c| &c.uuid).collect();
+        if let Some(e) = self.collections.iter().find(|c| !org_collection_ids.contains(&c.id)) {
+            err!("Invalid collection", format!("Collection {} does not belong to organization {}!", e.id, org_id))
+        }
+        let org_memberships = Membership::find_by_org(org_id, conn).await;
+        let org_membership_ids: HashSet<&MembershipId> = org_memberships.iter().map(|m| &m.uuid).collect();
+        if let Some(e) = self.users.iter().find(|m| !org_membership_ids.contains(m)) {
+            err!("Invalid member", format!("Member {} does not belong to organization {}!", e, org_id))
+        }
+        Ok(())
+    }
 }

 #[derive(Deserialize, Serialize)]
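The `validate` helpers added in this hunk all follow the same pattern: collect the organization's known ids into a `HashSet` and `find` the first requested id that is not in it. A minimal standalone sketch of that pattern (the `MemberId` alias and `find_foreign_id` helper are illustrative stand-ins, not Vaultwarden's actual types):

```rust
use std::collections::HashSet;

// Hypothetical stand-in for Vaultwarden's CollectionId / MembershipId newtypes.
type MemberId = String;

/// Return the first id in `requested` that is not part of `known`, if any.
/// Mirrors the `iter().find(|m| !set.contains(m))` check used by `validate`.
fn find_foreign_id<'a>(requested: &'a [MemberId], known: &[MemberId]) -> Option<&'a MemberId> {
    // Build the lookup set once, then scan the requested ids against it.
    let known_set: HashSet<&MemberId> = known.iter().collect();
    requested.iter().find(|id| !known_set.contains(id))
}
```

The real code turns a `Some(e)` result into an `err!(...)` early return, so a single foreign id aborts the whole request.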
@@ -2465,6 +2536,8 @@ async fn post_groups(
     }

     let group_request = data.into_inner();
+    group_request.validate(&org_id, &conn).await?;
     let group = group_request.to_group(&org_id);

     log_event(
@@ -2501,10 +2574,12 @@ async fn put_group(
     };

     let group_request = data.into_inner();
+    group_request.validate(&org_id, &conn).await?;
     let updated_group = group_request.update_group(group);

-    CollectionGroup::delete_all_by_group(&group_id, &conn).await?;
-    GroupUser::delete_all_by_group(&group_id, &conn).await?;
+    CollectionGroup::delete_all_by_group(&group_id, &org_id, &conn).await?;
+    GroupUser::delete_all_by_group(&group_id, &org_id, &conn).await?;

     log_event(
         EventType::GroupUpdated as i32,
@@ -2532,7 +2607,7 @@ async fn add_update_group(
     for col_selection in collections {
         let mut collection_group = col_selection.to_collection_group(group.uuid.clone());
-        collection_group.save(conn).await?;
+        collection_group.save(&org_id, conn).await?;
     }

     for assigned_member in members {
@@ -2625,7 +2700,7 @@ async fn _delete_group(
     )
     .await;

-    group.delete(conn).await
+    group.delete(org_id, conn).await
 }

 #[delete("/organizations/<org_id>/groups", data = "<data>")]
@@ -2684,7 +2759,7 @@ async fn get_group_members(
         err!("Group could not be found!", "Group uuid is invalid or does not belong to the organization")
     };

-    let group_members: Vec<MembershipId> = GroupUser::find_by_group(&group_id, &conn)
+    let group_members: Vec<MembershipId> = GroupUser::find_by_group(&group_id, &org_id, &conn)
         .await
         .iter()
         .map(|entry| entry.users_organizations_uuid.clone())
@@ -2712,9 +2787,15 @@ async fn put_group_members(
         err!("Group could not be found!", "Group uuid is invalid or does not belong to the organization")
     };

-    GroupUser::delete_all_by_group(&group_id, &conn).await?;
     let assigned_members = data.into_inner();
+    let org_memberships = Membership::find_by_org(&org_id, &conn).await;
+    let org_membership_ids: HashSet<&MembershipId> = org_memberships.iter().map(|m| &m.uuid).collect();
+    if let Some(e) = assigned_members.iter().find(|m| !org_membership_ids.contains(m)) {
+        err!("Invalid member", format!("Member {} does not belong to organization {}!", e, org_id))
+    }
+    GroupUser::delete_all_by_group(&group_id, &org_id, &conn).await?;

     for assigned_member in assigned_members {
         let mut user_entry = GroupUser::new(group_id.clone(), assigned_member.clone());
         user_entry.save(&conn).await?;
@@ -2853,7 +2934,8 @@ async fn put_reset_password(
     let reset_request = data.into_inner();

     let mut user = user;
-    user.set_password(reset_request.new_master_password_hash.as_str(), Some(reset_request.key), true, None);
+    user.set_password(reset_request.new_master_password_hash.as_str(), Some(reset_request.key), true, None, &conn)
+        .await?;
     user.save(&conn).await?;

     nt.send_logout(&user, None, &conn).await;
@@ -2945,15 +3027,20 @@ async fn check_reset_password_applicable(org_id: &OrganizationId, conn: &DbConn)
     Ok(())
 }

-#[put("/organizations/<org_id>/users/<member_id>/reset-password-enrollment", data = "<data>")]
+#[put("/organizations/<org_id>/users/<user_id>/reset-password-enrollment", data = "<data>")]
 async fn put_reset_password_enrollment(
     org_id: OrganizationId,
-    member_id: MembershipId,
-    headers: Headers,
+    user_id: UserId,
+    headers: OrgMemberHeaders,
     data: Json<OrganizationUserResetPasswordEnrollmentRequest>,
     conn: DbConn,
 ) -> EmptyResult {
-    let Some(mut member) = Membership::find_by_user_and_org(&headers.user.uuid, &org_id, &conn).await else {
+    if user_id != headers.user.uuid {
+        err!("User to enroll isn't member of required organization", "The user_id and acting user do not match");
+    }
+    let Some(mut membership) = Membership::find_confirmed_by_user_and_org(&headers.user.uuid, &org_id, &conn).await
+    else {
         err!("User to enroll isn't member of required organization")
     };
@@ -2980,16 +3067,17 @@ async fn put_reset_password_enrollment(
         .await?;
     }

-    member.reset_password_key = reset_password_key;
-    member.save(&conn).await?;
+    membership.reset_password_key = reset_password_key;
+    membership.save(&conn).await?;

-    let log_id = if member.reset_password_key.is_some() {
+    let event_type = if membership.reset_password_key.is_some() {
         EventType::OrganizationUserResetPasswordEnroll as i32
     } else {
         EventType::OrganizationUserResetPasswordWithdraw as i32
     };

-    log_event(log_id, &member_id, &org_id, &headers.user.uuid, headers.device.atype, &headers.ip.ip, &conn).await;
+    log_event(event_type, &membership.uuid, &org_id, &headers.user.uuid, headers.device.atype, &headers.ip.ip, &conn)
+        .await;

     Ok(())
 }
@@ -156,7 +156,7 @@ async fn ldap_import(data: Json<OrgImportData>, token: PublicToken, conn: DbConn
         }
     };

-    GroupUser::delete_all_by_group(&group_uuid, &conn).await?;
+    GroupUser::delete_all_by_group(&group_uuid, &org_id, &conn).await?;

     for ext_id in &group_data.member_external_ids {
         if let Some(member) = Membership::find_by_external_id_and_org(ext_id, &org_id, &conn).await {
@@ -1,7 +1,9 @@
 use chrono::{TimeDelta, Utc};
 use data_encoding::BASE32;
+use num_traits::FromPrimitive;
 use rocket::serde::json::Json;
 use rocket::Route;
+use serde::Deserialize;
 use serde_json::Value;

 use crate::{
@@ -14,7 +16,7 @@ use crate::{
     db::{
         models::{
             DeviceType, EventType, Membership, MembershipType, OrgPolicyType, Organization, OrganizationId, TwoFactor,
-            TwoFactorIncomplete, User, UserId,
+            TwoFactorIncomplete, TwoFactorType, User, UserId,
         },
         DbConn, DbPool,
     },
@@ -31,6 +33,43 @@ pub mod protected_actions;
 pub mod webauthn;
 pub mod yubikey;

+fn has_global_duo_credentials() -> bool {
+    CONFIG._enable_duo() && CONFIG.duo_host().is_some() && CONFIG.duo_ikey().is_some() && CONFIG.duo_skey().is_some()
+}
+
+pub fn is_twofactor_provider_usable(provider_type: TwoFactorType, provider_data: Option<&str>) -> bool {
+    #[derive(Deserialize)]
+    struct DuoProviderData {
+        host: String,
+        ik: String,
+        sk: String,
+    }
+
+    match provider_type {
+        TwoFactorType::Authenticator => true,
+        TwoFactorType::Email => CONFIG._enable_email_2fa(),
+        TwoFactorType::Duo | TwoFactorType::OrganizationDuo => {
+            provider_data
+                .and_then(|raw| serde_json::from_str::<DuoProviderData>(raw).ok())
+                .is_some_and(|duo| !duo.host.is_empty() && !duo.ik.is_empty() && !duo.sk.is_empty())
+                || has_global_duo_credentials()
+        }
+        TwoFactorType::YubiKey => {
+            CONFIG._enable_yubico() && CONFIG.yubico_client_id().is_some() && CONFIG.yubico_secret_key().is_some()
+        }
+        TwoFactorType::Webauthn => CONFIG.is_webauthn_2fa_supported(),
+        TwoFactorType::Remember => !CONFIG.disable_2fa_remember(),
+        TwoFactorType::RecoveryCode => true,
+        TwoFactorType::U2f
+        | TwoFactorType::U2fRegisterChallenge
+        | TwoFactorType::U2fLoginChallenge
+        | TwoFactorType::EmailVerificationChallenge
+        | TwoFactorType::WebauthnRegisterChallenge
+        | TwoFactorType::WebauthnLoginChallenge
+        | TwoFactorType::ProtectedActions => false,
+    }
+}
+
 pub fn routes() -> Vec<Route> {
     let mut routes = routes![
         get_twofactor,
@@ -53,7 +92,13 @@ pub fn routes() -> Vec<Route> {
 #[get("/two-factor")]
 async fn get_twofactor(headers: Headers, conn: DbConn) -> Json<Value> {
     let twofactors = TwoFactor::find_by_user(&headers.user.uuid, &conn).await;
-    let twofactors_json: Vec<Value> = twofactors.iter().map(TwoFactor::to_json_provider).collect();
+    let twofactors_json: Vec<Value> = twofactors
+        .iter()
+        .filter_map(|tf| {
+            let provider_type = TwoFactorType::from_i32(tf.atype)?;
+            is_twofactor_provider_usable(provider_type, Some(&tf.data)).then(|| TwoFactor::to_json_provider(tf))
+        })
+        .collect();

     Json(json!({
         "data": twofactors_json,
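The idea behind `is_twofactor_provider_usable` is that a stored 2FA record only counts if its backing configuration still exists. The shape of the check can be pictured with a simplified enum; the `Provider` type, the two boolean flags, and both helpers below are illustrative stand-ins, not the real `CONFIG`-driven plumbing:

```rust
// Simplified stand-in for TwoFactorType; the real check consults CONFIG
// (mail settings, Duo/Yubico credentials, DOMAIN compatibility, ...).
#[derive(Clone, Copy)]
enum Provider {
    Authenticator,
    Email,
    Duo,
}

// Hypothetical usability rules: TOTP always works, the others need configuration.
fn provider_usable(p: Provider, email_enabled: bool, duo_configured: bool) -> bool {
    match p {
        Provider::Authenticator => true,
        Provider::Email => email_enabled,
        Provider::Duo => duo_configured,
    }
}

/// Keep only the providers that are actually usable,
/// like the `filter_map` in `get_twofactor` above.
fn usable_providers(all: &[Provider], email_enabled: bool, duo_configured: bool) -> Vec<Provider> {
    all.iter().copied().filter(|p| provider_usable(*p, email_enabled, duo_configured)).collect()
}
```

Filtering at read time means a provider whose configuration was removed silently disappears from the client's 2FA list instead of producing a dead entry.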
@@ -108,8 +108,8 @@ impl WebauthnRegistration {
 #[post("/two-factor/get-webauthn", data = "<data>")]
 async fn get_webauthn(data: Json<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
-    if !CONFIG.domain_set() {
-        err!("`DOMAIN` environment variable is not set. Webauthn disabled")
+    if !CONFIG.is_webauthn_2fa_supported() {
+        err!("Configured `DOMAIN` is not compatible with Webauthn")
     }
     let data: PasswordOrOtpData = data.into_inner();
@@ -438,7 +438,7 @@ pub async fn validate_webauthn_login(user_id: &UserId, response: &str, conn: &Db
     // We need to check for and update the backup_eligible flag when needed.
     // Vaultwarden did not have knowledge of this flag prior to migrating to webauthn-rs v0.5.x
     // Because of this we check the flag at runtime and update the registrations and state when needed
-    check_and_update_backup_eligible(user_id, &rsp, &mut registrations, &mut state, conn).await?;
+    let backup_flags_updated = check_and_update_backup_eligible(&rsp, &mut registrations, &mut state)?;

     let authentication_result = WEBAUTHN.finish_passkey_authentication(&rsp, &state)?;
@@ -446,7 +446,8 @@ pub async fn validate_webauthn_login(user_id: &UserId, response: &str, conn: &Db
         if ct_eq(reg.credential.cred_id(), authentication_result.cred_id()) {
             // If the cred id matches and the credential is updated, Some(true) is returned
             // In those cases, update the record, else leave it alone
-            if reg.credential.update_credential(&authentication_result) == Some(true) {
+            let credential_updated = reg.credential.update_credential(&authentication_result) == Some(true);
+            if credential_updated || backup_flags_updated {
                 TwoFactor::new(user_id.clone(), TwoFactorType::Webauthn, serde_json::to_string(&registrations)?)
                     .save(conn)
                     .await?;
@@ -463,13 +464,11 @@ pub async fn validate_webauthn_login(user_id: &UserId, response: &str, conn: &Db
     )
 }

-async fn check_and_update_backup_eligible(
-    user_id: &UserId,
+fn check_and_update_backup_eligible(
     rsp: &PublicKeyCredential,
     registrations: &mut Vec<WebauthnRegistration>,
     state: &mut PasskeyAuthentication,
-    conn: &DbConn,
-) -> EmptyResult {
+) -> Result<bool, Error> {
     // The feature flags from the response
     // For details see: https://www.w3.org/TR/webauthn-3/#sctn-authenticator-data
     const FLAG_BACKUP_ELIGIBLE: u8 = 0b0000_1000;
@@ -486,16 +485,7 @@ async fn check_and_update_backup_eligible(
     let rsp_id = rsp.raw_id.as_slice();
     for reg in &mut *registrations {
         if ct_eq(reg.credential.cred_id().as_slice(), rsp_id) {
-            // Try to update the key, and if needed also update the database, before the actual state check is done
             if reg.set_backup_eligible(backup_eligible, backup_state) {
-                TwoFactor::new(
-                    user_id.clone(),
-                    TwoFactorType::Webauthn,
-                    serde_json::to_string(&registrations)?,
-                )
-                .save(conn)
-                .await?;
                 // We also need to adjust the current state which holds the challenge used to start the authentication verification
                 // Because Vaultwarden supports multiple keys, we need to loop through the deserialized state and check which key to update
                 let mut raw_state = serde_json::to_value(&state)?;
@@ -517,11 +507,12 @@ async fn check_and_update_backup_eligible(
                 }
                 *state = serde_json::from_value(raw_state)?;
+                return Ok(true);
             }
             break;
         }
     }

-    Ok(())
+    Ok(false)
 }
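The backup flags examined above come from the flags byte of the WebAuthn authenticator data. A small sketch of the bit test, reusing the `FLAG_BACKUP_ELIGIBLE` constant from the hunk (the `FLAG_BACKUP_STATE` value and the `backup_flags` helper are assumptions based on the W3C spec, not copied from the source):

```rust
// Flag bits from the WebAuthn authenticator data flags byte.
// See https://www.w3.org/TR/webauthn-3/#sctn-authenticator-data
const FLAG_BACKUP_ELIGIBLE: u8 = 0b0000_1000; // BE: credential may be backed up
const FLAG_BACKUP_STATE: u8 = 0b0001_0000; // BS: credential is currently backed up

/// Extract (backup_eligible, backup_state) from the raw flags byte.
fn backup_flags(flags: u8) -> (bool, bool) {
    (flags & FLAG_BACKUP_ELIGIBLE != 0, flags & FLAG_BACKUP_STATE != 0)
}
```

When the flags disagree with what was stored before the webauthn-rs v0.5.x migration, the function above updates both the registration record and the in-flight authentication state, and now reports that via its `bool` return instead of writing to the database itself.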
@@ -513,13 +513,11 @@ fn parse_sizes(sizes: &str) -> (u16, u16) {
     if !sizes.is_empty() {
         match ICON_SIZE_REGEX.captures(sizes.trim()) {
-            None => {}
-            Some(dimensions) => {
-                if dimensions.len() >= 3 {
+            Some(dimensions) if dimensions.len() >= 3 => {
                 width = dimensions[1].parse::<u16>().unwrap_or_default();
                 height = dimensions[2].parse::<u16>().unwrap_or_default();
             }
-            }
+            _ => {}
         }
     }
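`parse_sizes` pulls a width and height out of an icon's `sizes` attribute (values like `"32x32"`). A simplified regex-free sketch of the same behavior, splitting on the `x` separator instead of using `ICON_SIZE_REGEX` (this standalone helper is an illustration, not the project's implementation):

```rust
/// Simplified take on `parse_sizes`: parse strings like "32x32" into (width, height),
/// falling back to 0 for any part that is missing or malformed,
/// matching the `unwrap_or_default()` behavior of the regex version.
fn parse_sizes(sizes: &str) -> (u16, u16) {
    match sizes.trim().to_ascii_lowercase().split_once('x') {
        Some((w, h)) => (w.parse().unwrap_or_default(), h.parse().unwrap_or_default()),
        None => (0, 0),
    }
}
```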
@@ -2,7 +2,6 @@ use chrono::Utc;
 use num_traits::FromPrimitive;
 use rocket::{
     form::{Form, FromForm},
-    http::Status,
     response::Redirect,
     serde::json::Json,
     Route,
@@ -12,9 +11,12 @@ use serde_json::Value;
 use crate::{
     api::{
         core::{
-            accounts::{PreloginData, RegisterData, _prelogin, _register, kdf_upgrade},
+            accounts::{_prelogin, _register, kdf_upgrade, PreloginData, RegisterData},
             log_user_event,
-            two_factor::{authenticator, duo, duo_oidc, email, enforce_2fa_policy, webauthn, yubikey},
+            two_factor::{
+                authenticator, duo, duo_oidc, email, enforce_2fa_policy, is_twofactor_provider_usable, webauthn,
+                yubikey,
+            },
         },
         master_password_policy,
         push::register_push_device,
@@ -128,12 +130,14 @@ async fn login(
     login_result
 }

-// Return Status::Unauthorized to trigger logout
 async fn _refresh_login(data: ConnectData, conn: &DbConn, ip: &ClientIp) -> JsonResult {
-    // Extract token
-    let refresh_token = match data.refresh_token {
-        Some(token) => token,
-        None => err_code!("Missing refresh_token", Status::Unauthorized.code),
+    // When a refresh token is invalid or missing we need to respond with an HTTP BadRequest (400)
+    // It also needs to return a json which holds at least a key `error` with the value `invalid_grant`
+    // See the link below for details
+    // https://github.com/bitwarden/clients/blob/2ee158e720a5e7dbe3641caf80b569e97a1dd91b/libs/common/src/services/api.service.ts#L1786-L1797
+    let Some(refresh_token) = data.refresh_token else {
+        err_json!(json!({"error": "invalid_grant"}), "Missing refresh_token")
     };
// --- // ---
@@ -144,7 +148,10 @@ async fn _refresh_login(data: ConnectData, conn: &DbConn, ip: &ClientIp) -> Json
     // let members = Membership::find_confirmed_by_user(&user.uuid, conn).await;
     match auth::refresh_tokens(ip, &refresh_token, data.client_id, conn).await {
         Err(err) => {
-            err_code!(format!("Unable to refresh login credentials: {}", err.message()), Status::Unauthorized.code)
+            err_json!(
+                json!({"error": "invalid_grant"}),
+                format!("Unable to refresh login credentials: {}", err.message())
+            )
         }
         Ok((mut device, auth_tokens)) => {
             // Save to update `device.updated_at` to track usage and toggle new status
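The fix above matches the OAuth2 token-endpoint convention (RFC 6749 §5.2): a bad or expired refresh token must produce HTTP 400 with a JSON body whose `error` key is `invalid_grant`, which is what the Bitwarden clients look for before forcing a logout. A minimal sketch of such a body, built with plain string formatting to stay dependency-free (the `oauth_error_body` helper is hypothetical, not part of Vaultwarden):

```rust
/// Build a minimal OAuth2 token-endpoint error body, e.g.
/// {"error": "invalid_grant", "error_description": "Missing refresh_token"}.
/// Assumes `error` and `description` contain no characters needing JSON escaping.
fn oauth_error_body(error: &str, description: &str) -> String {
    format!(r#"{{"error": "{error}", "error_description": "{description}"}}"#)
}
```

Returning 401 here (as the old `err_code!` did) made clients treat the failure as a transient API error instead of an expired session, which is what #7060 reported.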
@@ -633,6 +640,19 @@ async fn _user_api_key_login(
         Value::Null
     };

+    let account_keys = if user.private_key.is_some() {
+        json!({
+            "publicKeyEncryptionKeyPair": {
+                "wrappedPrivateKey": user.private_key,
+                "publicKey": user.public_key,
+                "Object": "publicKeyEncryptionKeyPair"
+            },
+            "Object": "privateKeys"
+        })
+    } else {
+        Value::Null
+    };
+
     // Note: No refresh_token is returned. The CLI just repeats the
     // client_credentials login flow when the existing token expires.
     let result = json!({
@@ -647,7 +667,9 @@ async fn _user_api_key_login(
         "KdfMemory": user.client_kdf_memory,
         "KdfParallelism": user.client_kdf_parallelism,
         "ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
+        "ForcePasswordReset": false,
         "scope": AuthMethod::UserApiKey.scope(),
+        "AccountKeys": account_keys,
         "UserDecryptionOptions": {
             "HasMasterPassword": has_master_password,
             "MasterPasswordUnlock": master_password_unlock,
@@ -724,8 +746,27 @@ async fn twofactor_auth(
     TwoFactorIncomplete::mark_incomplete(&user.uuid, &device.uuid, &device.name, device.atype, ip, conn).await?;

-    let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
+    let twofactor_ids: Vec<_> = twofactors
+        .iter()
+        .filter_map(|tf| {
+            let provider_type = TwoFactorType::from_i32(tf.atype)?;
+            (tf.enabled && is_twofactor_provider_usable(provider_type, Some(&tf.data))).then_some(tf.atype)
+        })
+        .collect();
+    if twofactor_ids.is_empty() {
+        err!("No enabled and usable two factor providers are available for this account")
+    }
     let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one
+
+    // Ignore Remember and RecoveryCode Types during this check, these are special
+    if ![TwoFactorType::Remember as i32, TwoFactorType::RecoveryCode as i32].contains(&selected_id)
+        && !twofactor_ids.contains(&selected_id)
+    {
+        err_json!(
+            _json_err_twofactor(&twofactor_ids, &user.uuid, data, client_version, conn).await?,
+            "Invalid two factor provider"
+        )
+    }

     let twofactor_code = match data.two_factor_token {
         Some(ref code) => code,
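The selection rule above (which also restores recovery-code logins) boils down to: default to the first usable provider, and accept an explicitly requested one only if it is usable or one of the special Remember/RecoveryCode types. A simplified sketch over plain `i32` ids (the helper and the two constant values are illustrative, not Vaultwarden's actual enum discriminants):

```rust
// Illustrative ids for the special provider types that bypass the usability
// check, mirroring TwoFactorType::Remember and TwoFactorType::RecoveryCode.
const REMEMBER: i32 = 5;
const RECOVERY_CODE: i32 = 6;

/// Pick the 2FA provider: the requested one if it is special or usable,
/// the first usable one when nothing was requested, None otherwise.
/// An empty `usable` list always yields None, like the new err!() guard above.
fn select_provider(usable: &[i32], requested: Option<i32>) -> Option<i32> {
    let selected = requested.unwrap_or(*usable.first()?);
    if [REMEMBER, RECOVERY_CODE].contains(&selected) || usable.contains(&selected) {
        Some(selected)
    } else {
        None
    }
}
```

Exempting the special types is what lets a recovery code through even though it never appears in the enabled-provider list.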
@@ -742,7 +783,6 @@ async fn twofactor_auth(
     use crate::crypto::ct_eq;

     let selected_data = _selected_data(selected_twofactor);
-    let mut remember = data.two_factor_remember.unwrap_or(0);

     match TwoFactorType::from_i32(selected_id) {
         Some(TwoFactorType::Authenticator) => {
@@ -774,13 +814,23 @@ async fn twofactor_auth(
         }
         Some(TwoFactorType::Remember) => {
             match device.twofactor_remember {
-                Some(ref code) if !CONFIG.disable_2fa_remember() && ct_eq(code, twofactor_code) => {
-                    remember = 1; // Make sure we also return the token here, otherwise it will only remember the first time
-                }
+                // When a 2FA Remember token is used, check and validate this JWT token, if it is valid, just continue
+                // If it is invalid we need to trigger the 2FA Login prompt
+                Some(ref token)
+                    if !CONFIG.disable_2fa_remember()
+                        && (ct_eq(token, twofactor_code)
+                            && auth::decode_2fa_remember(twofactor_code)
+                                .is_ok_and(|t| t.sub == device.uuid && t.user_uuid == user.uuid)) => {}
                 _ => {
+                    // Always delete the current twofactor remember token here if it exists
+                    if device.twofactor_remember.is_some() {
+                        device.delete_twofactor_remember();
+                        // We need to save here, since we send a err_json!() which prevents saving `device` at a later stage
+                        device.save(true, conn).await?;
+                    }
                     err_json!(
                         _json_err_twofactor(&twofactor_ids, &user.uuid, data, client_version, conn).await?,
-                        "2FA Remember token not provided"
+                        "2FA Remember token not provided or expired"
                     )
                 }
             }
@@ -811,10 +861,10 @@ async fn twofactor_auth(
     TwoFactorIncomplete::mark_complete(&user.uuid, &device.uuid, conn).await?;

+    let remember = data.two_factor_remember.unwrap_or(0);
     let two_factor = if !CONFIG.disable_2fa_remember() && remember == 1 {
         Some(device.refresh_twofactor_remember())
     } else {
-        device.delete_twofactor_remember();
         None
     };
     Ok(two_factor)
@@ -847,7 +897,7 @@ async fn _json_err_twofactor(
     match TwoFactorType::from_i32(*provider) {
         Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
-        Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
+        Some(TwoFactorType::Webauthn) if CONFIG.is_webauthn_2fa_supported() => {
             let request = webauthn::generate_webauthn_login(user_id, conn).await?;
             result["TwoFactorProviders2"][provider.to_string()] = request.0;
         }
@@ -358,15 +358,16 @@ impl WebSocketUsers {
         }
     }

-    pub async fn send_logout(&self, user: &User, acting_device_id: Option<DeviceId>, conn: &DbConn) {
+    pub async fn send_logout(&self, user: &User, acting_device: Option<&Device>, conn: &DbConn) {
         // Skip any processing if both WebSockets and Push are not active
         if *NOTIFICATIONS_DISABLED {
             return;
         }
+        let acting_device_id = acting_device.map(|d| d.uuid.clone());
         let data = create_update(
             vec![("UserId".into(), user.uuid.to_string().into()), ("Date".into(), serialize_date(user.updated_at))],
             UpdateType::LogOut,
-            acting_device_id.clone(),
+            acting_device_id,
         );

         if CONFIG.enable_websocket() {
@@ -374,7 +375,7 @@ impl WebSocketUsers {
} }
if CONFIG.push_enabled() { if CONFIG.push_enabled() {
push_logout(user, acting_device_id.clone(), conn).await; push_logout(user, acting_device, conn).await;
} }
} }
@@ -13,7 +13,7 @@ use tokio::sync::RwLock;
 use crate::{
     api::{ApiResult, EmptyResult, UpdateType},
     db::{
-        models::{AuthRequestId, Cipher, Device, DeviceId, Folder, PushId, Send, User, UserId},
+        models::{AuthRequestId, Cipher, Device, Folder, PushId, Send, User, UserId},
         DbConn,
     },
     http_client::make_http_request,
@@ -188,15 +188,13 @@ pub async fn push_cipher_update(ut: UpdateType, cipher: &Cipher, device: &Device
     }
 }

-pub async fn push_logout(user: &User, acting_device_id: Option<DeviceId>, conn: &DbConn) {
-    let acting_device_id: Value = acting_device_id.map(|v| v.to_string().into()).unwrap_or_else(|| Value::Null);
-
+pub async fn push_logout(user: &User, acting_device: Option<&Device>, conn: &DbConn) {
     if Device::check_user_has_push_device(&user.uuid, conn).await {
         tokio::task::spawn(send_to_push_relay(json!({
             "userId": user.uuid,
             "organizationId": (),
-            "deviceId": acting_device_id,
-            "identifier": acting_device_id,
+            "deviceId": acting_device.and_then(|d| d.push_uuid.as_ref()),
+            "identifier": acting_device.map(|d| &d.uuid),
             "type": UpdateType::LogOut as i32,
             "payload": {
                 "userId": user.uuid,
@@ -46,6 +46,7 @@ static JWT_FILE_DOWNLOAD_ISSUER: LazyLock<String> =
     LazyLock::new(|| format!("{}|file_download", CONFIG.domain_origin()));
 static JWT_REGISTER_VERIFY_ISSUER: LazyLock<String> =
     LazyLock::new(|| format!("{}|register_verify", CONFIG.domain_origin()));
+static JWT_2FA_REMEMBER_ISSUER: LazyLock<String> = LazyLock::new(|| format!("{}|2faremember", CONFIG.domain_origin()));

 static PRIVATE_RSA_KEY: OnceLock<EncodingKey> = OnceLock::new();
 static PUBLIC_RSA_KEY: OnceLock<DecodingKey> = OnceLock::new();
@@ -160,6 +161,10 @@ pub fn decode_register_verify(token: &str) -> Result<RegisterVerifyClaims, Error
     decode_jwt(token, JWT_REGISTER_VERIFY_ISSUER.to_string())
 }

+pub fn decode_2fa_remember(token: &str) -> Result<TwoFactorRememberClaims, Error> {
+    decode_jwt(token, JWT_2FA_REMEMBER_ISSUER.to_string())
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct LoginJwtClaims {
     // Not before
@@ -440,6 +445,31 @@ pub fn generate_register_verify_claims(email: String, name: Option<String>, veri
     }
 }

+#[derive(Serialize, Deserialize)]
+pub struct TwoFactorRememberClaims {
+    // Not before
+    pub nbf: i64,
+    // Expiration time
+    pub exp: i64,
+    // Issuer
+    pub iss: String,
+    // Subject
+    pub sub: DeviceId,
+    // UserId
+    pub user_uuid: UserId,
+}
+
+pub fn generate_2fa_remember_claims(device_uuid: DeviceId, user_uuid: UserId) -> TwoFactorRememberClaims {
+    let time_now = Utc::now();
+    TwoFactorRememberClaims {
+        nbf: time_now.timestamp(),
+        exp: (time_now + TimeDelta::try_days(30).unwrap()).timestamp(),
+        iss: JWT_2FA_REMEMBER_ISSUER.to_string(),
+        sub: device_uuid,
+        user_uuid,
+    }
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct BasicJwtClaims {
     // Not before
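The 2FA remember token above is now a signed JWT, so its lifetime is enforced by the `nbf`/`exp` claims rather than by a server-stored random blob. A minimal std-only sketch of that time-window idea follows; the struct and helper names here are illustrative stand-ins, not Vaultwarden's actual `decode_jwt`, which additionally verifies the signature and issuer.

```rust
// Illustrative stand-in for the TwoFactorRememberClaims time window.
// Real validation happens inside decode_jwt (signature + issuer + expiry).
struct RememberClaims {
    nbf: i64, // not valid before (unix seconds)
    exp: i64, // expires at (unix seconds)
}

// Mirrors generate_2fa_remember_claims: valid from "now" for `valid_days` days.
fn claims_for(now: i64, valid_days: i64) -> RememberClaims {
    RememberClaims { nbf: now, exp: now + valid_days * 24 * 60 * 60 }
}

// A token is usable only inside its [nbf, exp) window.
fn is_valid_at(c: &RememberClaims, now: i64) -> bool {
    now >= c.nbf && now < c.exp
}

fn main() {
    let issued_at = 1_700_000_000;
    let c = claims_for(issued_at, 30);
    assert!(is_valid_at(&c, issued_at + 1));
    // 31 days later the remember token has expired, so full 2FA is required again.
    assert!(!is_valid_at(&c, issued_at + 31 * 24 * 60 * 60));
    println!("ok");
}
```

An expired token then falls through to the "2FA Remember token not provided or expired" error path shown above.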
@@ -674,10 +704,9 @@ pub struct OrgHeaders {
 impl OrgHeaders {
     fn is_member(&self) -> bool {
-        // NOTE: we don't care about MembershipStatus at the moment because this is only used
-        // where an invited, accepted or confirmed user is expected if this ever changes or
-        // if from_i32 is changed to return Some(Revoked) this check needs to be changed accordingly
-        self.membership_type >= MembershipType::User
+        // Only allow not revoked members, we can not use the Confirmed status here
+        // as some endpoints can be triggered by invited users during joining
+        self.membership_status != MembershipStatus::Revoked && self.membership_type >= MembershipType::User
     }
     fn is_confirmed_and_admin(&self) -> bool {
         self.membership_status == MembershipStatus::Confirmed && self.membership_type >= MembershipType::Admin
@@ -690,17 +719,10 @@ impl OrgHeaders {
     }
 }

-#[rocket::async_trait]
-impl<'r> FromRequest<'r> for OrgHeaders {
-    type Error = &'static str;
-
-    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
-        let headers = try_outcome!(Headers::from_request(request).await);
-        // org_id is usually the second path param ("/organizations/<org_id>"),
-        // but there are cases where it is a query value.
-        // First check the path, if this is not a valid uuid, try the query values.
-        let url_org_id: Option<OrganizationId> = {
+// org_id is usually the second path param ("/organizations/<org_id>"),
+// but there are cases where it is a query value.
+// First check the path, if this is not a valid uuid, try the query values.
+fn get_org_id(request: &Request<'_>) -> Option<OrganizationId> {
     if let Some(Ok(org_id)) = request.param::<OrganizationId>(1) {
         Some(org_id)
     } else if let Some(Ok(org_id)) = request.query_value::<OrganizationId>("organizationId") {
@@ -708,7 +730,34 @@ impl<'r> FromRequest<'r> for OrgHeaders {
     } else {
         None
     }
-    };
+}
+
+// Special Guard to ensure that there is an organization id present
+// If there is no org id trigger the Outcome::Forward.
+// This is useful for endpoints which work for both organization and personal vaults, like purge.
+pub struct OrgIdGuard;
+
+#[rocket::async_trait]
+impl<'r> FromRequest<'r> for OrgIdGuard {
+    type Error = &'static str;
+
+    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
+        match get_org_id(request) {
+            Some(_) => Outcome::Success(OrgIdGuard),
+            None => Outcome::Forward(rocket::http::Status::NotFound),
+        }
+    }
+}
+
+#[rocket::async_trait]
+impl<'r> FromRequest<'r> for OrgHeaders {
+    type Error = &'static str;
+
+    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
+        let headers = try_outcome!(Headers::from_request(request).await);
+        // Extract the org_id from the request
+        let url_org_id = get_org_id(request);

         match url_org_id {
             Some(org_id) if uuid::Uuid::parse_str(&org_id).is_ok() => {
@@ -14,7 +14,10 @@ use serde::de::{self, Deserialize, Deserializer, MapAccess, Visitor};
 use crate::{
     error::Error,
-    util::{get_active_web_release, get_env, get_env_bool, is_valid_email, parse_experimental_client_feature_flags},
+    util::{
+        get_active_web_release, get_env, get_env_bool, is_valid_email, parse_experimental_client_feature_flags,
+        FeatureFlagFilter,
+    },
 };

 static CONFIG_FILE: LazyLock<String> = LazyLock::new(|| {
@@ -920,7 +923,7 @@ make_config! {
     },
 }

-fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
+fn validate_config(cfg: &ConfigItems, on_update: bool) -> Result<(), Error> {
     // Validate connection URL is valid and DB feature is enabled
     #[cfg(sqlite)]
     {
@@ -1026,33 +1029,17 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
         }
     }

-    // Server (v2025.6.2): https://github.com/bitwarden/server/blob/d094be3267f2030bd0dc62106bc6871cf82682f5/src/Core/Constants.cs#L103
-    // Client (web-v2025.6.1): https://github.com/bitwarden/clients/blob/747c2fd6a1c348a57a76e4a7de8128466ffd3c01/libs/common/src/enums/feature-flag.enum.ts#L12
-    // Android (v2025.6.0): https://github.com/bitwarden/android/blob/b5b022caaad33390c31b3021b2c1205925b0e1a2/app/src/main/kotlin/com/x8bit/bitwarden/data/platform/manager/model/FlagKey.kt#L22
-    // iOS (v2025.6.0): https://github.com/bitwarden/ios/blob/ff06d9c6cc8da89f78f37f376495800201d7261a/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
-    //
-    // NOTE: Move deprecated flags to the utils::parse_experimental_client_feature_flags() DEPRECATED_FLAGS const!
-    const KNOWN_FLAGS: &[&str] = &[
-        // Autofill Team
-        "inline-menu-positioning-improvements",
-        "inline-menu-totp",
-        "ssh-agent",
-        // Key Management Team
-        "ssh-key-vault-item",
-        "pm-25373-windows-biometrics-v2",
-        // Tools
-        "export-attachments",
-        // Mobile Team
-        "anon-addy-self-host-alias",
-        "simple-login-self-host-alias",
-        "mutual-tls",
-    ];
-    let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
-    let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
+    let invalid_flags =
+        parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags, FeatureFlagFilter::InvalidOnly);
     if !invalid_flags.is_empty() {
-        err!(format!("Unrecognized experimental client feature flags: {invalid_flags:?}.\n\n\
-            Please ensure all feature flags are spelled correctly and that they are supported in this version.\n\
-            Supported flags: {KNOWN_FLAGS:?}"));
+        let feature_flags_error = format!("Unrecognized experimental client feature flags: {:?}.\n\
+            Please ensure all feature flags are spelled correctly and that they are supported in this version.\n\
+            Supported flags: {:?}\n", invalid_flags, SUPPORTED_FEATURE_FLAGS);
+        if on_update {
+            err!(feature_flags_error);
+        } else {
+            println!("[WARNING] {feature_flags_error}");
+        }
     }

     const MAX_FILESIZE_KB: i64 = i64::MAX >> 10;
@@ -1471,6 +1458,35 @@ pub enum PathType {
     RsaKey,
 }

+// Official available feature flags can be found here:
+// Server (v2026.2.1): https://github.com/bitwarden/server/blob/0e42725d0837bd1c0dabd864ff621a579959744b/src/Core/Constants.cs#L135
+// Client (v2026.2.1): https://github.com/bitwarden/clients/blob/f96380c3138291a028bdd2c7a5fee540d5c98ba5/libs/common/src/enums/feature-flag.enum.ts#L12
+// Android (v2026.2.1): https://github.com/bitwarden/android/blob/6902c19c0093fa476bbf74ccaa70c9f14afbb82f/core/src/main/kotlin/com/bitwarden/core/data/manager/model/FlagKey.kt#L31
+// iOS (v2026.2.1): https://github.com/bitwarden/ios/blob/cdd9ba1770ca2ffc098d02d12cc3208e3a830454/BitwardenShared/Core/Platform/Models/Enum/FeatureFlag.swift#L7
+pub const SUPPORTED_FEATURE_FLAGS: &[&str] = &[
+    // Architecture
+    "desktop-ui-migration-milestone-1",
+    "desktop-ui-migration-milestone-2",
+    "desktop-ui-migration-milestone-3",
+    "desktop-ui-migration-milestone-4",
+    // Auth Team
+    "pm-5594-safari-account-switching",
+    // Autofill Team
+    "ssh-agent",
+    "ssh-agent-v2",
+    // Key Management Team
+    "ssh-key-vault-item",
+    "pm-25373-windows-biometrics-v2",
+    // Mobile Team
+    "anon-addy-self-host-alias",
+    "simple-login-self-host-alias",
+    "mutual-tls",
+    "cxp-import-mobile",
+    "cxp-export-mobile",
+    // Platform Team
+    "pm-30529-webauthn-related-origins",
+];
+
 impl Config {
     pub async fn load() -> Result<Self, Error> {
         // Loading from env and file
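With the `on_update` flag, an unknown feature flag only warns at startup but hard-fails when the config is saved from the admin page. The internals of `parse_experimental_client_feature_flags` with `FeatureFlagFilter::InvalidOnly` are not shown in this diff; the std-only sketch below mirrors just the filtering idea (function name and comma-separated input format are assumptions):

```rust
// Sketch of the "invalid only" filtering behind the new
// parse_experimental_client_feature_flags(.., FeatureFlagFilter::InvalidOnly).
// Returns every configured flag that is not in the supported list.
fn invalid_flags<'a>(configured: &'a str, supported: &[&str]) -> Vec<&'a str> {
    configured
        .split(',')
        .map(str::trim)
        .filter(|f| !f.is_empty() && !supported.contains(f))
        .collect()
}

fn main() {
    let supported = ["ssh-agent", "ssh-key-vault-item", "mutual-tls"];
    let bad = invalid_flags("ssh-agent, ssh-agnet, mutual-tls", &supported);
    // Only the typo'd flag is reported: a warning on load, a hard error on update.
    assert_eq!(bad, vec!["ssh-agnet"]);
    println!("{bad:?}");
}
```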
@@ -1484,7 +1500,7 @@ impl Config {
         // Fill any missing with defaults
         let config = builder.build();

         if !SKIP_CONFIG_VALIDATION.load(Ordering::Relaxed) {
-            validate_config(&config)?;
+            validate_config(&config, false)?;
         }

         Ok(Config {
@@ -1520,7 +1536,7 @@ impl Config {
             let env = &self.inner.read().unwrap()._env;
             env.merge(&builder, false, &mut overrides).build()
         };
-        validate_config(&config)?;
+        validate_config(&config, true)?;

         // Save both the user and the combined config
         {
@@ -387,7 +387,6 @@ pub mod models;
 #[cfg(sqlite)]
 pub fn backup_sqlite() -> Result<String, Error> {
     use diesel::Connection;
-    use std::{fs::File, io::Write};

     let db_url = CONFIG.database_url();
     if DbConnType::from_url(&CONFIG.database_url()).map(|t| t == DbConnType::Sqlite).unwrap_or(false) {
@@ -401,16 +400,13 @@ pub fn backup_sqlite() -> Result<String, Error> {
             .to_string_lossy()
             .into_owned();

-        match File::create(backup_file.clone()) {
-            Ok(mut f) => {
-                let serialized_db = conn.serialize_database_to_buffer();
-                f.write_all(serialized_db.as_slice()).expect("Error writing SQLite backup");
-                Ok(backup_file)
-            }
-            Err(e) => {
-                err_silent!(format!("Unable to save SQLite backup: {e:?}"))
-            }
-        }
+        diesel::sql_query("VACUUM INTO ?")
+            .bind::<diesel::sql_types::Text, _>(&backup_file)
+            .execute(&mut conn)
+            .map(|_| ())
+            .map_res("VACUUM INTO failed")?;
+        Ok(backup_file)
     } else {
         err_silent!("The database type is not SQLite. Backups only works for SQLite databases")
     }
@@ -559,7 +559,7 @@ impl Cipher {
             if let Some(cached_member) = cipher_sync_data.members.get(org_uuid) {
                 return cached_member.has_full_access();
             }
-        } else if let Some(member) = Membership::find_by_user_and_org(user_uuid, org_uuid, conn).await {
+        } else if let Some(member) = Membership::find_confirmed_by_user_and_org(user_uuid, org_uuid, conn).await {
             return member.has_full_access();
         }
     }
@@ -668,10 +668,12 @@ impl Cipher {
             ciphers::table
                 .filter(ciphers::uuid.eq(&self.uuid))
                 .inner_join(ciphers_collections::table.on(
-                    ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
+                    ciphers::uuid.eq(ciphers_collections::cipher_uuid)
+                ))
                 .inner_join(users_collections::table.on(
                     ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
-                        .and(users_collections::user_uuid.eq(user_uuid))))
+                        .and(users_collections::user_uuid.eq(user_uuid))
+                ))
                 .select((users_collections::read_only, users_collections::hide_passwords, users_collections::manage))
                 .load::<(bool, bool, bool)>(conn)
                 .expect("Error getting user access restrictions")
@@ -697,6 +699,9 @@ impl Cipher {
                 .inner_join(users_organizations::table.on(
                     users_organizations::uuid.eq(groups_users::users_organizations_uuid)
                 ))
+                .inner_join(groups::table.on(groups::uuid.eq(collections_groups::groups_uuid)
+                    .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
+                ))
                 .filter(users_organizations::user_uuid.eq(user_uuid))
                 .select((collections_groups::read_only, collections_groups::hide_passwords, collections_groups::manage))
                 .load::<(bool, bool, bool)>(conn)
@@ -809,13 +814,13 @@ impl Cipher {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                // Ensure that group and membership belong to the same org
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
-                collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
-                    collections_groups::groups_uuid.eq(groups::uuid)
-                )
+                collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
+                    .and(collections_groups::groups_uuid.eq(groups::uuid))
             ))
             .filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
             .or_filter(users_organizations::access_all.eq(true)) // access_all in org
@@ -986,7 +991,9 @@ impl Cipher {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
+            ))
             .left_join(collections_groups::table.on(
                 collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
                     .and(collections_groups::groups_uuid.eq(groups::uuid))
             ))
@@ -1047,7 +1054,9 @@ impl Cipher {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
+            ))
             .left_join(collections_groups::table.on(
                 collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid)
                     .and(collections_groups::groups_uuid.eq(groups::uuid))
             ))
@@ -1115,8 +1124,8 @@ impl Cipher {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
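The recurring change in these join clauses adds one predicate: the joined group must belong to the same organization as the membership row. A std-only sketch of the in-memory equivalent of that added `.and(groups::organizations_uuid.eq(users_organizations::org_uuid))` condition (the struct names here are illustrative, not the Diesel schema):

```rust
// Illustrative stand-in for the org-scoping predicate the diff adds to every groups join.
struct Group {
    uuid: &'static str,
    org_uuid: &'static str,
}

struct MembershipLink {
    group_uuid: &'static str,
    member_org_uuid: &'static str,
}

// A group matches only when both the uuid AND the organization line up,
// so a membership in org A can never pull in a group owned by org B.
fn join_scoped<'a>(groups: &'a [Group], links: &[MembershipLink]) -> Vec<&'a Group> {
    groups
        .iter()
        .filter(|g| links.iter().any(|l| l.group_uuid == g.uuid && l.member_org_uuid == g.org_uuid))
        .collect()
}

fn main() {
    let groups = [
        Group { uuid: "g1", org_uuid: "orgA" },
        Group { uuid: "g2", org_uuid: "orgB" },
    ];
    // A stale link in orgA pointing at orgB's group g2:
    let links = [
        MembershipLink { group_uuid: "g1", member_org_uuid: "orgA" },
        MembershipLink { group_uuid: "g2", member_org_uuid: "orgA" },
    ];
    let scoped = join_scoped(&groups, &links);
    // Without the org predicate both groups would match; with it only g1 does.
    assert_eq!(scoped.len(), 1);
    assert_eq!(scoped[0].uuid, "g1");
    println!("{}", scoped[0].uuid);
}
```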
@@ -191,7 +191,7 @@ impl Collection {
         self.update_users_revision(conn).await;
         CollectionCipher::delete_all_by_collection(&self.uuid, conn).await?;
         CollectionUser::delete_all_by_collection(&self.uuid, conn).await?;
-        CollectionGroup::delete_all_by_collection(&self.uuid, conn).await?;
+        CollectionGroup::delete_all_by_collection(&self.uuid, &self.org_uuid, conn).await?;

         db_run! { conn: {
             diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
@@ -239,8 +239,8 @@ impl Collection {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
@@ -355,8 +355,8 @@ impl Collection {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
@@ -422,8 +422,8 @@ impl Collection {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid)
@@ -484,8 +484,8 @@ impl Collection {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
@@ -531,8 +531,8 @@ impl Collection {
             .left_join(groups_users::table.on(
                 groups_users::users_organizations_uuid.eq(users_organizations::uuid)
             ))
-            .left_join(groups::table.on(
-                groups::uuid.eq(groups_users::groups_uuid)
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
             ))
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
@@ -1,6 +1,6 @@
 use chrono::{NaiveDateTime, Utc};
-use data_encoding::{BASE64, BASE64URL};
+use data_encoding::BASE64URL;
 use derive_more::{Display, From};
 use serde_json::Value;
@@ -49,11 +49,16 @@ impl Device {
             push_uuid: Some(PushId(get_uuid())),
             push_token: None,
-            refresh_token: crypto::encode_random_bytes::<64>(&BASE64URL),
+            refresh_token: Device::generate_refresh_token(),
             twofactor_remember: None,
         }
     }

+    #[inline(always)]
+    pub fn generate_refresh_token() -> String {
+        crypto::encode_random_bytes::<64>(&BASE64URL)
+    }
+
     pub fn to_json(&self) -> Value {
         json!({
             "id": self.uuid,
@@ -67,10 +72,13 @@ impl Device {
     }

     pub fn refresh_twofactor_remember(&mut self) -> String {
-        let twofactor_remember = crypto::encode_random_bytes::<180>(&BASE64);
-        self.twofactor_remember = Some(twofactor_remember.clone());
-        twofactor_remember
+        use crate::auth::{encode_jwt, generate_2fa_remember_claims};
+
+        let two_factor_remember_claim = generate_2fa_remember_claims(self.uuid.clone(), self.user_uuid.clone());
+        let two_factor_remember_string = encode_jwt(&two_factor_remember_claim);
+        self.twofactor_remember = Some(two_factor_remember_string.clone());
+        two_factor_remember_string
     }

     pub fn delete_twofactor_remember(&mut self) {
@@ -257,6 +265,17 @@ impl Device {
             .unwrap_or(0) != 0
         }}
     }

+    pub async fn rotate_refresh_tokens_by_user(user_uuid: &UserId, conn: &DbConn) -> EmptyResult {
+        // Generate a new token per device.
+        // We cannot do a single UPDATE with one value because each device needs a unique token.
+        let devices = Self::find_by_user(user_uuid, conn).await;
+        for mut device in devices {
+            device.refresh_token = Device::generate_refresh_token();
+            device.save(false, conn).await?;
+        }
+        Ok(())
+    }
 }
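As the comment in `rotate_refresh_tokens_by_user` notes, a single `UPDATE` with one bound value would give every device the same token, so the rotation loops and saves each row. A std-only sketch of that per-device loop (the `Device` struct and the counter-based token generator below are deterministic stand-ins for the real row type and `crypto::encode_random_bytes::<64>(&BASE64URL)`):

```rust
use std::collections::HashSet;

// Stand-in Device: only the fields the rotation touches.
struct Device {
    uuid: String,
    refresh_token: String,
}

// Mirrors rotate_refresh_tokens_by_user: loop per device so every row
// ends up with a *distinct* token, which one shared UPDATE value cannot do.
fn rotate(devices: &mut [Device], next_token: &mut impl FnMut() -> String) {
    for d in devices.iter_mut() {
        d.refresh_token = next_token();
    }
}

fn main() {
    // Deterministic stand-in for the random BASE64URL token generator.
    let mut n = 0u32;
    let mut next = move || {
        n += 1;
        format!("token-{n:04}")
    };
    let mut devices = vec![
        Device { uuid: "a".into(), refresh_token: "old".into() },
        Device { uuid: "b".into(), refresh_token: "old".into() },
    ];
    rotate(&mut devices, &mut next);
    let unique: HashSet<_> = devices.iter().map(|d| d.refresh_token.as_str()).collect();
    assert_eq!(unique.len(), devices.len()); // every device got its own token
    println!("rotated {}", devices.len());
}
```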
#[derive(Display)] #[derive(Display)]
@@ -1,6 +1,6 @@
use super::{CollectionId, Membership, MembershipId, OrganizationId, User, UserId}; use super::{CollectionId, Membership, MembershipId, OrganizationId, User, UserId};
use crate::api::EmptyResult; use crate::api::EmptyResult;
use crate::db::schema::{collections_groups, groups, groups_users, users_organizations}; use crate::db::schema::{collections, collections_groups, groups, groups_users, users_organizations};
use crate::db::DbConn; use crate::db::DbConn;
use crate::error::MapResult; use crate::error::MapResult;
use chrono::{NaiveDateTime, Utc}; use chrono::{NaiveDateTime, Utc};
@@ -81,7 +81,7 @@ impl Group {
// If both read_only and hide_passwords are false, then manage should be true // If both read_only and hide_passwords are false, then manage should be true
// You can't have an entry with read_only and manage, or hide_passwords and manage // You can't have an entry with read_only and manage, or hide_passwords and manage
// Or an entry with everything to false // Or an entry with everything to false
let collections_groups: Vec<Value> = CollectionGroup::find_by_group(&self.uuid, conn) let collections_groups: Vec<Value> = CollectionGroup::find_by_group(&self.uuid, &self.organizations_uuid, conn)
.await .await
.iter() .iter()
.map(|entry| { .map(|entry| {
@@ -191,7 +191,7 @@ impl Group {
pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult { pub async fn delete_all_by_organization(org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
for group in Self::find_by_organization(org_uuid, conn).await { for group in Self::find_by_organization(org_uuid, conn).await {
group.delete(conn).await?; group.delete(org_uuid, conn).await?;
} }
Ok(()) Ok(())
} }
@@ -246,8 +246,8 @@ impl Group {
.inner_join(users_organizations::table.on( .inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid) users_organizations::uuid.eq(groups_users::users_organizations_uuid)
)) ))
.inner_join(groups::table.on( .inner_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
groups::uuid.eq(groups_users::groups_uuid) .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
)) ))
.filter(users_organizations::user_uuid.eq(user_uuid)) .filter(users_organizations::user_uuid.eq(user_uuid))
.filter(groups::access_all.eq(true)) .filter(groups::access_all.eq(true))
@@ -276,9 +276,9 @@ impl Group {
}} }}
} }
pub async fn delete(&self, conn: &DbConn) -> EmptyResult { pub async fn delete(&self, org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
CollectionGroup::delete_all_by_group(&self.uuid, conn).await?; CollectionGroup::delete_all_by_group(&self.uuid, org_uuid, conn).await?;
GroupUser::delete_all_by_group(&self.uuid, conn).await?; GroupUser::delete_all_by_group(&self.uuid, org_uuid, conn).await?;
db_run! { conn: { db_run! { conn: {
diesel::delete(groups::table.filter(groups::uuid.eq(&self.uuid))) diesel::delete(groups::table.filter(groups::uuid.eq(&self.uuid)))
@@ -306,8 +306,8 @@ impl Group {
} }
impl CollectionGroup { impl CollectionGroup {
pub async fn save(&mut self, conn: &DbConn) -> EmptyResult { pub async fn save(&mut self, org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
let group_users = GroupUser::find_by_group(&self.groups_uuid, conn).await; let group_users = GroupUser::find_by_group(&self.groups_uuid, org_uuid, conn).await;
for group_user in group_users { for group_user in group_users {
group_user.update_user_revision(conn).await; group_user.update_user_revision(conn).await;
} }
@@ -365,10 +365,19 @@ impl CollectionGroup {
} }
} }
pub async fn find_by_group(group_uuid: &GroupId, conn: &DbConn) -> Vec<Self> { pub async fn find_by_group(group_uuid: &GroupId, org_uuid: &OrganizationId, conn: &DbConn) -> Vec<Self> {
db_run! { conn: { db_run! { conn: {
collections_groups::table collections_groups::table
.inner_join(groups::table.on(
groups::uuid.eq(collections_groups::groups_uuid)
))
.inner_join(collections::table.on(
collections::uuid.eq(collections_groups::collections_uuid)
.and(collections::org_uuid.eq(groups::organizations_uuid))
))
.filter(collections_groups::groups_uuid.eq(group_uuid)) .filter(collections_groups::groups_uuid.eq(group_uuid))
.filter(collections::org_uuid.eq(org_uuid))
.select(collections_groups::all_columns)
.load::<Self>(conn) .load::<Self>(conn)
.expect("Error loading collection groups") .expect("Error loading collection groups")
}} }}
@@ -383,6 +392,13 @@ impl CollectionGroup {
.inner_join(users_organizations::table.on( .inner_join(users_organizations::table.on(
users_organizations::uuid.eq(groups_users::users_organizations_uuid) users_organizations::uuid.eq(groups_users::users_organizations_uuid)
)) ))
.inner_join(groups::table.on(groups::uuid.eq(collections_groups::groups_uuid)
.and(groups::organizations_uuid.eq(users_organizations::org_uuid))
))
.inner_join(collections::table.on(
collections::uuid.eq(collections_groups::collections_uuid)
.and(collections::org_uuid.eq(groups::organizations_uuid))
))
.filter(users_organizations::user_uuid.eq(user_uuid)) .filter(users_organizations::user_uuid.eq(user_uuid))
.select(collections_groups::all_columns) .select(collections_groups::all_columns)
.load::<Self>(conn) .load::<Self>(conn)
@@ -394,14 +410,20 @@ impl CollectionGroup {
         db_run! { conn: {
             collections_groups::table
                 .filter(collections_groups::collections_uuid.eq(collection_uuid))
+                .inner_join(collections::table.on(
+                    collections::uuid.eq(collections_groups::collections_uuid)
+                ))
+                .inner_join(groups::table.on(groups::uuid.eq(collections_groups::groups_uuid)
+                    .and(groups::organizations_uuid.eq(collections::org_uuid))
+                ))
                 .select(collections_groups::all_columns)
                 .load::<Self>(conn)
                 .expect("Error loading collection groups")
         }}
     }

-    pub async fn delete(&self, conn: &DbConn) -> EmptyResult {
-        let group_users = GroupUser::find_by_group(&self.groups_uuid, conn).await;
+    pub async fn delete(&self, org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let group_users = GroupUser::find_by_group(&self.groups_uuid, org_uuid, conn).await;
         for group_user in group_users {
             group_user.update_user_revision(conn).await;
         }
@@ -415,8 +437,8 @@ impl CollectionGroup {
         }}
     }

-    pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &DbConn) -> EmptyResult {
-        let group_users = GroupUser::find_by_group(group_uuid, conn).await;
+    pub async fn delete_all_by_group(group_uuid: &GroupId, org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let group_users = GroupUser::find_by_group(group_uuid, org_uuid, conn).await;
         for group_user in group_users {
             group_user.update_user_revision(conn).await;
         }
@@ -429,10 +451,14 @@ impl CollectionGroup {
         }}
     }

-    pub async fn delete_all_by_collection(collection_uuid: &CollectionId, conn: &DbConn) -> EmptyResult {
+    pub async fn delete_all_by_collection(
+        collection_uuid: &CollectionId,
+        org_uuid: &OrganizationId,
+        conn: &DbConn,
+    ) -> EmptyResult {
         let collection_assigned_to_groups = CollectionGroup::find_by_collection(collection_uuid, conn).await;
         for collection_assigned_to_group in collection_assigned_to_groups {
-            let group_users = GroupUser::find_by_group(&collection_assigned_to_group.groups_uuid, conn).await;
+            let group_users = GroupUser::find_by_group(&collection_assigned_to_group.groups_uuid, org_uuid, conn).await;
             for group_user in group_users {
                 group_user.update_user_revision(conn).await;
             }
@@ -494,10 +520,19 @@ impl GroupUser {
         }
     }

-    pub async fn find_by_group(group_uuid: &GroupId, conn: &DbConn) -> Vec<Self> {
+    pub async fn find_by_group(group_uuid: &GroupId, org_uuid: &OrganizationId, conn: &DbConn) -> Vec<Self> {
         db_run! { conn: {
             groups_users::table
+                .inner_join(groups::table.on(
+                    groups::uuid.eq(groups_users::groups_uuid)
+                ))
+                .inner_join(users_organizations::table.on(
+                    users_organizations::uuid.eq(groups_users::users_organizations_uuid)
+                        .and(users_organizations::org_uuid.eq(groups::organizations_uuid))
+                ))
                 .filter(groups_users::groups_uuid.eq(group_uuid))
+                .filter(groups::organizations_uuid.eq(org_uuid))
+                .select(groups_users::all_columns)
                 .load::<Self>(conn)
                 .expect("Error loading group users")
         }}
@@ -522,6 +557,13 @@ impl GroupUser {
             .inner_join(users_organizations::table.on(
                 users_organizations::uuid.eq(groups_users::users_organizations_uuid)
             ))
+            .inner_join(groups::table.on(
+                groups::uuid.eq(groups_users::groups_uuid)
+            ))
+            .inner_join(collections::table.on(
+                collections::uuid.eq(collections_groups::collections_uuid)
+                    .and(collections::org_uuid.eq(groups::organizations_uuid))
+            ))
             .filter(collections_groups::collections_uuid.eq(collection_uuid))
             .filter(groups_users::users_organizations_uuid.eq(member_uuid))
             .count()
@@ -575,8 +617,8 @@ impl GroupUser {
         }}
     }

-    pub async fn delete_all_by_group(group_uuid: &GroupId, conn: &DbConn) -> EmptyResult {
-        let group_users = GroupUser::find_by_group(group_uuid, conn).await;
+    pub async fn delete_all_by_group(group_uuid: &GroupId, org_uuid: &OrganizationId, conn: &DbConn) -> EmptyResult {
+        let group_users = GroupUser::find_by_group(group_uuid, org_uuid, conn).await;
         for group_user in group_users {
             group_user.update_user_revision(conn).await;
         }
+2 -2
@@ -269,7 +269,7 @@ impl OrgPolicy {
                 continue;
             }

-            if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
+            if let Some(user) = Membership::find_confirmed_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
                 if user.atype < MembershipType::Admin {
                     return true;
                 }
@@ -332,7 +332,7 @@ impl OrgPolicy {
         for policy in
             OrgPolicy::find_confirmed_by_user_and_active_policy(user_uuid, OrgPolicyType::SendOptions, conn).await
         {
-            if let Some(user) = Membership::find_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
+            if let Some(user) = Membership::find_confirmed_by_user_and_org(user_uuid, &policy.org_uuid, conn).await {
                 if user.atype < MembershipType::Admin {
                     match serde_json::from_str::<SendOptionsPolicyData>(&policy.data) {
                         Ok(opts) => {
+5 -2
@@ -514,7 +514,8 @@ impl Membership {
            "familySponsorshipValidUntil": null,
            "familySponsorshipToDelete": null,
            "accessSecretsManager": false,
-           "limitCollectionCreation": self.atype < MembershipType::Manager, // If less then a manager return true, to limit collection creations
+           // limit collection creation to managers with access_all permission to prevent issues
+           "limitCollectionCreation": self.atype < MembershipType::Manager || !self.access_all,
            "limitCollectionDeletion": true,
            "limitItemDeletion": false,
            "allowAdminAccessToAllCollectionItems": true,
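The hunk above tightens who may create collections. A minimal std-only sketch of the new predicate (the enum and helper here are hypothetical stand-ins; Vaultwarden implements a custom ordering on its membership type, which this declaration order mimics):

```rust
// Hypothetical stand-in for Vaultwarden's membership ordering:
// User < Manager < Admin < Owner.
#[allow(dead_code)]
#[derive(PartialEq, PartialOrd)]
enum MembershipType {
    User,
    Manager,
    Admin,
    Owner,
}

// New rule: anyone below Manager is limited, and now a Manager or above
// without the access_all permission is limited as well.
fn limit_collection_creation(atype: &MembershipType, access_all: bool) -> bool {
    *atype < MembershipType::Manager || !access_all
}

fn main() {
    assert!(limit_collection_creation(&MembershipType::User, true));
    assert!(limit_collection_creation(&MembershipType::Manager, false));
    assert!(!limit_collection_creation(&MembershipType::Owner, true));
    println!("ok");
}
```

The previous rule only checked `atype < Manager`; the `|| !access_all` term is what changes behavior for managers without full collection access.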
@@ -1073,7 +1074,9 @@ impl Membership {
             .left_join(collections_groups::table.on(
                 collections_groups::groups_uuid.eq(groups_users::groups_uuid)
             ))
-            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
+            .left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)
+                .and(groups::organizations_uuid.eq(users_organizations::org_uuid))
+            ))
             .left_join(ciphers_collections::table.on(
                 ciphers_collections::collection_uuid.eq(collections_groups::collections_uuid).and(ciphers_collections::cipher_uuid.eq(&cipher_uuid))
+11
@@ -46,6 +46,16 @@ pub enum SendType {
     File = 1,
 }

+enum SendAuthType {
+    #[allow(dead_code)]
+    // Send requires email OTP verification
+    Email = 0, // Not yet supported by Vaultwarden
+    // Send requires a password
+    Password = 1,
+    // Send requires no auth
+    None = 2,
+}
+
 impl Send {
     pub fn new(atype: i32, name: String, data: String, akey: String, deletion_date: NaiveDateTime) -> Self {
         let now = Utc::now().naive_utc();
@@ -145,6 +155,7 @@ impl Send {
            "maxAccessCount": self.max_access_count,
            "accessCount": self.access_count,
            "password": self.password_hash.as_deref().map(|h| BASE64URL_NOPAD.encode(h)),
+           "authType": if self.password_hash.is_some() { SendAuthType::Password as i32 } else { SendAuthType::None as i32 },
            "disabled": self.disabled,
            "hideEmail": self.hide_email,
-1
@@ -20,7 +20,6 @@ pub struct TwoFactor {
     pub last_used: i64,
 }

-#[allow(dead_code)]
 #[derive(num_derive::FromPrimitive)]
 pub enum TwoFactorType {
     Authenticator = 0,
+8 -4
@@ -185,13 +185,14 @@ impl User {
    /// These routes are able to use the previous stamp id for the next 2 minutes.
    /// After these 2 minutes this stamp will expire.
    ///
-    pub fn set_password(
+    pub async fn set_password(
         &mut self,
         password: &str,
         new_key: Option<String>,
         reset_security_stamp: bool,
         allow_next_route: Option<Vec<String>>,
-    ) {
+        conn: &DbConn,
+    ) -> EmptyResult {
         self.password_hash = crypto::hash_password(password.as_bytes(), &self.salt, self.password_iterations as u32);

         if let Some(route) = allow_next_route {
@@ -203,12 +204,15 @@ impl User {
         }

         if reset_security_stamp {
-            self.reset_security_stamp()
+            self.reset_security_stamp(conn).await?;
         }
+        Ok(())
     }

-    pub fn reset_security_stamp(&mut self) {
+    pub async fn reset_security_stamp(&mut self, conn: &DbConn) -> EmptyResult {
         self.security_stamp = get_uuid();
+        Device::rotate_refresh_tokens_by_user(&self.uuid, conn).await?;
+        Ok(())
     }

     /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp.
+24 -21
@@ -6,7 +6,7 @@ use std::{
     time::Duration,
 };

-use hickory_resolver::{name_server::TokioConnectionProvider, TokioResolver};
+use hickory_resolver::{net::runtime::TokioRuntimeProvider, TokioResolver};
 use regex::Regex;
 use reqwest::{
     dns::{Name, Resolve, Resolving},
@@ -184,35 +184,35 @@ impl CustomDnsResolver {
     }

     fn new() -> Arc<Self> {
-        match TokioResolver::builder(TokioConnectionProvider::default()) {
-            Ok(mut builder) => {
-                if CONFIG.dns_prefer_ipv6() {
-                    builder.options_mut().ip_strategy = hickory_resolver::config::LookupIpStrategy::Ipv6thenIpv4;
-                }
-                let resolver = builder.build();
-                Arc::new(Self::Hickory(Arc::new(resolver)))
-            }
-            Err(e) => {
-                warn!("Error creating Hickory resolver, falling back to default: {e:?}");
-                Arc::new(Self::Default())
-            }
-        }
+        TokioResolver::builder(TokioRuntimeProvider::default())
+            .and_then(|mut builder| {
+                // Hickory's default since v0.26 is `Ipv6AndIpv4`, which sorts IPv6 first
+                // This might cause issues on IPv4 only systems or containers
+                // Unless someone enabled DNS_PREFER_IPV6, use Ipv4AndIpv6, which returns IPv4 first which was our previous default
+                if !CONFIG.dns_prefer_ipv6() {
+                    builder.options_mut().ip_strategy = hickory_resolver::config::LookupIpStrategy::Ipv4AndIpv6;
+                }
+                builder.build()
+            })
+            .inspect_err(|e| warn!("Error creating Hickory resolver, falling back to default: {e:?}"))
+            .map(|resolver| Arc::new(Self::Hickory(Arc::new(resolver))))
+            .unwrap_or_else(|_| Arc::new(Self::Default()))
     }

-    // Note that we get an iterator of addresses, but we only grab the first one for convenience
-    async fn resolve_domain(&self, name: &str) -> Result<Option<SocketAddr>, BoxError> {
+    async fn resolve_domain(&self, name: &str) -> Result<Vec<SocketAddr>, BoxError> {
         pre_resolve(name)?;

-        let result = match self {
-            Self::Default() => tokio::net::lookup_host(name).await?.next(),
-            Self::Hickory(r) => r.lookup_ip(name).await?.iter().next().map(|a| SocketAddr::new(a, 0)),
+        let results: Vec<SocketAddr> = match self {
+            Self::Default() => tokio::net::lookup_host((name, 0)).await?.collect(),
+            Self::Hickory(r) => r.lookup_ip(name).await?.iter().map(|i| SocketAddr::new(i, 0)).collect(),
         };

-        if let Some(addr) = &result {
+        for addr in &results {
             post_resolve(name, addr.ip())?;
         }

-        Ok(result)
+        Ok(results)
     }
 }
@@ -242,8 +242,11 @@ impl Resolve for CustomDnsResolver {
         let this = self.clone();
         Box::pin(async move {
             let name = name.as_str();
-            let result = this.resolve_domain(name).await?;
-            Ok::<reqwest::dns::Addrs, _>(Box::new(result.into_iter()))
+            let results = this.resolve_domain(name).await?;
+            if results.is_empty() {
+                warn!("Unable to resolve {name} to any valid IP address");
+            }
+            Ok::<reqwest::dns::Addrs, _>(Box::new(results.into_iter()))
         })
     }
 }
+36 -5
@@ -558,6 +558,12 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error> {
     let basepath = &CONFIG.domain_path();

     let mut config = rocket::Config::from(rocket::Config::figment());
+    // We install our own signal handlers below; disable Rocket's built-in handlers
+    config.shutdown.ctrlc = false;
+    #[cfg(unix)]
+    config.shutdown.signals.clear();
+
     config.temp_dir = canonicalize(CONFIG.tmp_folder()).unwrap().into();
     config.cli_colors = false; // Make sure Rocket does not color any values for logging.
     config.limits = Limits::new()
@@ -589,11 +595,7 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error> {
     CONFIG.set_rocket_shutdown_handle(instance.shutdown());

-    tokio::spawn(async move {
-        tokio::signal::ctrl_c().await.expect("Error setting Ctrl-C handler");
-        info!("Exiting Vaultwarden!");
-        CONFIG.shutdown();
-    });
+    spawn_shutdown_signal_handler();

     #[cfg(all(unix, sqlite))]
     {
@@ -621,6 +623,35 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error> {
     Ok(())
 }

+#[cfg(unix)]
+fn spawn_shutdown_signal_handler() {
+    tokio::spawn(async move {
+        use tokio::signal::unix::signal;
+        let mut sigint = signal(SignalKind::interrupt()).expect("Error setting SIGINT handler");
+        let mut sigterm = signal(SignalKind::terminate()).expect("Error setting SIGTERM handler");
+        let mut sigquit = signal(SignalKind::quit()).expect("Error setting SIGQUIT handler");
+
+        let signal_name = tokio::select! {
+            _ = sigint.recv() => "SIGINT",
+            _ = sigterm.recv() => "SIGTERM",
+            _ = sigquit.recv() => "SIGQUIT",
+        };
+
+        info!("Received {signal_name}, initiating graceful shutdown");
+        CONFIG.shutdown();
+    });
+}
+
+#[cfg(not(unix))]
+fn spawn_shutdown_signal_handler() {
+    tokio::spawn(async move {
+        tokio::signal::ctrl_c().await.expect("Error setting Ctrl-C handler");
+        info!("Received Ctrl-C, initiating graceful shutdown");
+        CONFIG.shutdown();
+    });
+}
+
 fn schedule_jobs(pool: db::DbPool) {
     if CONFIG.job_poll_interval_ms() == 0 {
         info!("Job scheduler disabled.");
+1 -1
@@ -17,7 +17,7 @@ use crate::{
     CONFIG,
 };

-pub static FAKE_IDENTIFIER: &str = "VW_DUMMY_IDENTIFIER_FOR_OIDC";
+pub static FAKE_SSO_IDENTIFIER: &str = "vaultwarden-dummy-oidc-identifier";

 static SSO_JWT_ISSUER: LazyLock<String> = LazyLock::new(|| format!("{}|sso", CONFIG.domain_origin()));
+27 -15
@@ -1,6 +1,5 @@
 use std::{borrow::Cow, sync::LazyLock, time::Duration};

-use mini_moka::sync::Cache;
 use openidconnect::{core::*, reqwest, *};
 use regex::Regex;
 use url::Url;
@@ -13,9 +12,14 @@ use crate::{
 };

 static CLIENT_CACHE_KEY: LazyLock<String> = LazyLock::new(|| "sso-client".to_string());
-static CLIENT_CACHE: LazyLock<Cache<String, Client>> = LazyLock::new(|| {
-    Cache::builder().max_capacity(1).time_to_live(Duration::from_secs(CONFIG.sso_client_cache_expiration())).build()
+static CLIENT_CACHE: LazyLock<moka::sync::Cache<String, Client>> = LazyLock::new(|| {
+    moka::sync::Cache::builder()
+        .max_capacity(1)
+        .time_to_live(Duration::from_secs(CONFIG.sso_client_cache_expiration()))
+        .build()
 });

+static REFRESH_CACHE: LazyLock<moka::future::Cache<String, Result<RefreshTokenResponse, String>>> =
+    LazyLock::new(|| moka::future::Cache::builder().max_capacity(1000).time_to_live(Duration::from_secs(30)).build());
+
 /// OpenID Connect Core client.
 pub type CustomClient = openidconnect::Client<
@@ -38,6 +42,8 @@ pub type CustomClient = openidconnect::Client<
     EndpointSet,
 >;

+pub type RefreshTokenResponse = (Option<String>, String, Option<Duration>);
+
 #[derive(Clone)]
 pub struct Client {
     pub http_client: reqwest::Client,
@@ -231,23 +237,29 @@ impl Client {
         verifier
     }

-    pub async fn exchange_refresh_token(
-        refresh_token: String,
-    ) -> ApiResult<(Option<String>, String, Option<Duration>)> {
+    pub async fn exchange_refresh_token(refresh_token: String) -> ApiResult<RefreshTokenResponse> {
+        let client = Client::cached().await?;
+        REFRESH_CACHE
+            .get_with(refresh_token.clone(), async move { client._exchange_refresh_token(refresh_token).await })
+            .await
+            .map_err(Into::into)
+    }
+
+    async fn _exchange_refresh_token(&self, refresh_token: String) -> Result<RefreshTokenResponse, String> {
         let rt = RefreshToken::new(refresh_token);
-        let client = Client::cached().await?;
-        let token_response =
-            match client.core_client.exchange_refresh_token(&rt).request_async(&client.http_client).await {
-                Err(err) => err!(format!("Request to exchange_refresh_token endpoint failed: {:?}", err)),
-                Ok(token_response) => token_response,
-            };
-        Ok((
-            token_response.refresh_token().map(|token| token.secret().clone()),
-            token_response.access_token().secret().clone(),
-            token_response.expires_in(),
-        ))
+        match self.core_client.exchange_refresh_token(&rt).request_async(&self.http_client).await {
+            Err(err) => {
+                error!("Request to exchange_refresh_token endpoint failed: {err}");
+                Err(format!("Request to exchange_refresh_token endpoint failed: {err}"))
+            }
+            Ok(token_response) => Ok((
+                token_response.refresh_token().map(|token| token.secret().clone()),
+                token_response.access_token().secret().clone(),
+                token_response.expires_in(),
+            )),
+        }
     }
 }
+10 -1
@@ -111,7 +111,8 @@
       "microsoftstore.com",
       "xbox.com",
       "azure.com",
-      "windowsazure.com"
+      "windowsazure.com",
+      "cloud.microsoft"
     ],
     "excluded": false
   },
@@ -971,5 +972,13 @@
       "pinterest.se"
     ],
     "excluded": false
+  },
+  {
+    "type": 91,
+    "domains": [
+      "twitter.com",
+      "x.com"
+    ],
+    "excluded": false
   }
 ]
+7
@@ -109,6 +109,9 @@ async function generateSupportString(event, dj) {
         supportString += "* Websocket Check: disabled\n";
     }
     supportString += `* HTTP Response Checks: ${httpResponseCheck}\n`;
+    if (dj.invalid_feature_flags != "") {
+        supportString += `* Invalid feature flags: true\n`;
+    }

     const jsonResponse = await fetch(`${BASE_URL}/admin/diagnostics/config`, {
         "headers": { "Accept": "application/json" }
@@ -128,6 +131,10 @@ async function generateSupportString(event, dj) {
         supportString += `\n**Environment settings which are overridden:** ${dj.overrides}\n`;
     }

+    if (dj.invalid_feature_flags != "") {
+        supportString += `\n**Invalid feature flags:** ${dj.invalid_feature_flags}\n`;
+    }
+
     // Add http response check messages if they exists
     if (httpResponseCheck === false) {
         supportString += "\n**Failed HTTP Checks:**\n";
@@ -194,6 +194,14 @@
                 <dd class="col-sm-7">
                     <span id="http-response-errors" class="d-block"></span>
                 </dd>
+                {{#if page_data.invalid_feature_flags}}
+                <dt class="col-sm-5">Invalid Feature Flags
+                    <span class="badge bg-warning text-dark abbr-badge" id="feature-flag-warning" title="Some feature flags are invalid or outdated!">Warning</span>
+                </dt>
+                <dd class="col-sm-7">
+                    <span id="feature-flags" class="d-block"><b>Flags:</b> <span id="feature-flags-string">{{page_data.invalid_feature_flags}}</span></span>
+                </dd>
+                {{/if}}
             </dl>
         </div>
     </div>
+39 -12
@@ -16,7 +16,10 @@ use tokio::{
     time::{sleep, Duration},
 };

-use crate::{config::PathType, CONFIG};
+use crate::{
+    config::{PathType, SUPPORTED_FEATURE_FLAGS},
+    CONFIG,
+};

 pub struct AppHeaders();
@@ -153,9 +156,11 @@ impl Cors {
     fn get_allowed_origin(headers: &HeaderMap<'_>) -> Option<String> {
         let origin = Cors::get_header(headers, "Origin");
         let safari_extension_origin = "file://";
+        let desktop_custom_file_origin = "bw-desktop-file://bundle";
         if origin == CONFIG.domain_origin()
             || origin == safari_extension_origin
+            || origin == desktop_custom_file_origin
             || (CONFIG.sso_enabled() && origin == CONFIG.sso_authority())
         {
             Some(origin)
@@ -629,6 +634,21 @@ fn _process_key(key: &str) -> String {
     }
 }

+pub fn deser_opt_nonempty_str<'de, D, T>(deserializer: D) -> Result<Option<T>, D::Error>
+where
+    D: Deserializer<'de>,
+    T: From<String>,
+{
+    use serde::Deserialize;
+    Ok(Option::<String>::deserialize(deserializer)?.and_then(|s| {
+        if s.is_empty() {
+            None
+        } else {
+            Some(T::from(s))
+        }
+    }))
+}
+
 #[derive(Clone, Debug, Deserialize)]
 #[serde(untagged)]
 pub enum NumberOrString {
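Stripped of the serde plumbing, the helper above just normalizes an empty string to `None`. A std-only sketch of that core logic (`nonempty` is a hypothetical name):

```rust
// Plain-function version of what deser_opt_nonempty_str does during
// deserialization: an empty string becomes None, anything else is kept.
fn nonempty(value: Option<String>) -> Option<String> {
    value.and_then(|s| if s.is_empty() { None } else { Some(s) })
}

fn main() {
    assert_eq!(nonempty(None), None);
    assert_eq!(nonempty(Some(String::new())), None);
    assert_eq!(nonempty(Some("org-uuid".into())), Some("org-uuid".to_string()));
    println!("ok");
}
```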
@@ -763,21 +783,28 @@ pub fn convert_json_key_lcase_first(src_json: Value) -> Value {
     }
 }

+pub enum FeatureFlagFilter {
+    #[allow(dead_code)]
+    Unfiltered,
+    ValidOnly,
+    InvalidOnly,
+}
+
 /// Parses the experimental client feature flags string into a HashMap.
-pub fn parse_experimental_client_feature_flags(experimental_client_feature_flags: &str) -> HashMap<String, bool> {
-    // These flags could still be configured, but are deprecated and not used anymore
-    // To prevent old installations from starting filter these out and not error out
-    const DEPRECATED_FLAGS: &[&str] =
-        &["autofill-overlay", "autofill-v2", "browser-fileless-import", "extension-refresh", "fido2-vault-credentials"];
+pub fn parse_experimental_client_feature_flags(
+    experimental_client_feature_flags: &str,
+    filter_mode: FeatureFlagFilter,
+) -> HashMap<String, bool> {
     experimental_client_feature_flags
         .split(',')
-        .filter_map(|f| {
-            let flag = f.trim();
-            if !flag.is_empty() && !DEPRECATED_FLAGS.contains(&flag) {
-                return Some((flag.to_owned(), true));
-            }
-            None
-        })
+        .map(str::trim)
+        .filter(|flag| !flag.is_empty())
+        .filter(|flag| match filter_mode {
+            FeatureFlagFilter::Unfiltered => true,
+            FeatureFlagFilter::ValidOnly => SUPPORTED_FEATURE_FLAGS.contains(flag),
+            FeatureFlagFilter::InvalidOnly => !SUPPORTED_FEATURE_FLAGS.contains(flag),
+        })
+        .map(|flag| (flag.to_owned(), true))
         .collect()
 }
+1
@@ -79,3 +79,4 @@ for name, domain_list in domain_lists.items():
 # Write out the global domains JSON file.
 with open(file=OUTPUT_FILE, mode='w', encoding='utf-8') as f:
     json.dump(global_domains, f, indent=2)
+    f.write("\n")